Why I Quit Nvidia at the Start of the AI Boom
You’re unlikely to know Jacopo Pantaleoni’s name, but you definitely know his work. As a computer scientist, Pantaleoni helped develop the graphics systems for James Cameron’s Avatar and write programs for Mental Images, one of the technology companies behind The Matrix. Pantaleoni, whose specialty is turning data into images, went on to spend 15 years at Nvidia developing its flagship product, the graphics processing unit. His work has helped shape the ultrarealistic digital images that make up everything from video games to movies to bioinformatics, a field crucial to the study of DNA sequencing. The GPU is also essential to artificial intelligence, serving as the backbone of OpenAI and its competitors.
But Pantaleoni, who lives in Germany, quit Nvidia last July, just as the artificial-intelligence boom was catapulting the company to the tech world’s upper echelons. He has since become an outspoken critic not only of AI technology but of Nvidia’s role as the engine of that boom. To Pantaleoni, the most worrying consequence of AI isn’t the kind of apocalyptic, Matrix-like scenario sometimes floated by Silicon Valley magnates. He worries instead about something more familiar: that a small number of AI companies will hoard an unhealthy amount of power, influence, and information. He has written a book, The Quickest Revolution: An Insider’s Guide to Sweeping Technological Change, and Its Largest Threats, and is now advising government regulators on how to rein in the technology. I spoke with Pantaleoni in late May about why he’s left Big Tech behind.
Tell me what spurred you to leave Nvidia.
It was the realization that the scope of my contributions to society, whether direct or indirect, was much larger than I ever thought it would be, and that I needed time for reflection and critical thought about that. The bulk of my indirect contributions to society were not necessarily positive. Right now, computing technology is posing several risks to society that are still underestimated — and I’m not talking about existential risks from superintelligent AI or anything like that.
You’re not?
No. Right now, that’s a well-known risk, one being put forward by personalities like Sam Altman to distract from more urgent issues; [people like Altman] often have very vested interests. The main risk right now is the concentration of power and, consequently, the displacement of jobs. One of the other major things, which I explore in my book, is the increasing cognitive weakening that computing technology is imposing on society at scale.
We’re giving the bulk of our attention away to algorithms specifically designed to grab and hold it, and that mostly appeal to our visual cortex — which is what I have mostly contributed to in my lifetime. I did mostly visual computing. And the power of visual computing is actually what allowed companies like Nvidia to succeed.
Tell me about the work you did and how it contributed to the issues you’re so concerned about today.
My background is that of an expert in rendering technologies, which produce pictures that look as realistic as possible from three-dimensional models of things that do not exist. I started this in the ’90s for the visual-effects industry. I wrote one of the main rendering engines for Avatar. Before that, I was a visual scientist for Mental Images, which was the company that created the software for The Matrix movies — that was done before I joined. My career focused mostly on high-performance, high-quality, high-fidelity rendering of computer graphics.
That same technology transferred to computer games. That’s how companies like Nvidia were born in the late ’90s and 2000s — they started to produce hardware to bring that kind of rendering to the masses for computer games. I worked for Nvidia Research to develop visual computing technologies with a five-year horizon. Back in 2010, we — I mean Jensen Huang and people like me who do research — realized that the kind of computing power needed for computer graphics was exactly the kind needed for tasks like machine learning and artificial intelligence. That was the genius of Jensen Huang. People like me contributed by making our hardware more and more programmable and more and more accessible to the masses.
At what point did you start to have reservations about it?
I don’t know if reservation is the right word, but I started having a strange feeling about the impact of my work around 2014 or 2015. I saw a drastic shift. Companies like Google and Amazon were becoming much, much bigger customers for that same kind of machinery, deploying massive parallel computing power for totally different uses. They were starting to employ machine learning at scale, essentially for advertising and for driving the attention economy.
How did that actually work? What were they doing that was so different?
I think that’s actually the secret to the success of Google. They were essentially machine-learning companies, the first companies to profit enormously from the scalability of computing equipment. And they were among the first to overcome the so-called diminishing returns that afflicted the manufacturing companies of previous eras. Their returns were actually accelerating.
How did they do that?
Well, if you have a regular manufacturing company and you want to expand, you have to pay more salaries to more human workers. You have to get more goods, more raw materials. In the case of companies like Google, the raw material was totally free: it was just information. And they could process this raw material simply by throwing more and more computing power at it.
Was it a surprise to you that the technology you were creating was being used this way?
It was a surprise. I had dedicated my life to building technology that turned mathematics into art and to turning computing into an instrument for scientific research — for example, DNA sequencing. So to see this technology used essentially for advertising and the attention economy was a bit of a disappointment.
Was there anything that you could do about that over at Nvidia?
I don’t think so. Nvidia is not itself part of the attention economy, but it is the engine that powers it. A company like Nvidia cannot say, “We refuse to power this market.” My thought when I left was that it is fundamentally impossible to change the outcomes these companies produce from the inside. The only way to fundamentally alter their effects on society is through regulation.
You mentioned before that the real risk is the concentration of power. How do we get from a chatbot telling people to put glue on their pizza to the problems you’re concerned with?
When you build a tool that everybody on earth ends up using, you are already concentrating power in your hands. These tools automate more and more tasks.
A lot of news organizations, including New York Magazine’s parent company, are partnering with OpenAI. Do you think that will have the same kind of effect on how the news industry operates?
When a single company, or a few companies, own the tools that essentially produce the news worldwide, that is political power. So these things have to be regulated. There’s no way around it. We are putting ourselves and our democracies in a dangerous position.
Do you still own shares in Nvidia?
I do.
Has the price of Nvidia shares affected your life?
It did. Even before this latest increase, the skyrocketing of Nvidia’s shares had an effect on my economic life and on the lives of many colleagues of mine who stayed.
That was another aspect that rang some alarm bells in my head. I noticed that I was going from being a well-paid graphics specialist whose work was still niche to being in a new superstar league, populated by scientists from companies like Google, Nvidia, and Amazon, who are now among the one percent in terms of wealth.
Do you feel conflicted about that?
I definitely do. I felt there was an imbalance between the surging wealth of computer scientists working at these firms and their actual contribution to the world, especially since I felt the net effect was not entirely positive.
You’re saying that the financial reward you got from your work at Nvidia has outweighed your contribution to society?
That’s a good framing. I would say that the impact of my work on society was certainly large, but it was not necessarily as positive as I would have wished.
Do you plan to hold on to your remaining Nvidia shares?
Not necessarily. Right now, I just see them as part of my savings for my family, and they’re not something I necessarily think I will hold on to.
Are you planning on giving money away?
I don’t have so much wealth that I can actually give it away. I didn’t get rich because of Nvidia. But I can definitely enjoy a year without working.
What do you see yourself doing now?
I’ve pivoted toward becoming a public expert. I’ve provided consulting for regulators. I’ve been invited to speak at conferences. I don’t know if I want to work as an expert in technology again — possibly, but I will have to find something that really sits well with my desire to have a positive impact on society.
What’s a tech project that would have a positive impact on society?
Anything related to medicine, for example, that tries to use technology in a positive way to solve medical issues and not in an exploitative way. Or to develop calmer technology — technology that does not try to grab your attention but that is more peripheral, so to speak. Or any kind of technology that would do the opposite of what is being done today with these AI assistants. For example, technology that would inspire children to be more social, or more geared toward learning, to use their intellect to solve problems.
Do you think people are generally aware of how they’re affected by technology?
Right now, there is a great divide. But that’s not based on studies; it’s just my own perception of social media and the kind of people who use it. Many influencers, for example, see social media as their big chance in life. And they end up believing that having hundreds of thousands of followers is actually meaningful. In fact, those followers are strangers who will not affect their lives in any way, other than through advertisements.
Do you think that AI technology is going to accelerate that?
I think so. It’s being used to maximize attention-grabbing at an even bigger scale, exploiting every possible piece of information about users and their preferences. And on the other side, it allows users to generate more content — to populate the web with even more distracting information. It makes it harder to distinguish fact from fiction.
This interview has been edited for length and clarity.