The Race to Stop AI’s Threats to Democracy

OpenAI became the world’s most valuable private company last week after a stock deal pushed the value of the artificial intelligence developer to $500 billion. The company and its remarkable chatbot ChatGPT have single-handedly accelerated AI’s boom and threatened to upend much of how we work, create, learn, and communicate in the process.

But when OpenAI was founded a decade ago, the company’s approach to artificial intelligence wasn’t taken seriously in Silicon Valley. Tech journalist Karen Hao has been covering OpenAI’s astounding rise for years and recently wrote a book about the company, Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. She says that while many in Silicon Valley warn of AI’s sci-fi-like threats, the real risks are already here.

“We are allowing the tech industry to consolidate this extraordinary degree of resources unlike anything ever before,” she tells More To The Story host Al Letson. “We thought that they were already powerful during the social media era. In the AI era, the amount of resources and the amount of influence and domination that they now have is of a fundamentally different degree.”

The Center for Investigative Reporting, which produces Mother Jones, Reveal, and More To The Story, is currently suing OpenAI and Microsoft for copyright infringement.

On this week’s More To The Story, Hao sounds the alarm about the risks to the planet from AI’s growth, examines the Trump administration’s efforts to deregulate the industry, and explains why the version of AI being developed by Silicon Valley could destabilize democracy.

The following interview was edited for length and clarity. More To The Story transcripts are produced by a third-party transcription service and may contain errors.

Al Letson: I’m hoping that you can walk me back from the ledge, because I am so worried about my namesake, AI. So many times, someone will write AI, and I think they’re saying Al, and, “No, dummy. It’s AI.” But no, I tend to use AI often. I’m dyslexic. So I tend to use OpenAI or ChatGPT to proofread, especially if I’m sending out emails or whatever. I just want to make sure that all the dyslexia is taken out. And because I use it for those purposes, I see what it can do. And to be honest, it scares the hell out of me. So for me, when I work with ChatGPT, I am wondering, for lack of better terms, if we are close to the singularity, where everything changes and the world is no longer recognizable from what it used to be. And so, I think the question that I’m thinking about a lot is, are we nearing the singularity where everything is going to shift because of OpenAI?

Karen Hao: It’s such a good question, because there are so many different ways of answering it. One of the challenges of the AI discipline and the AI industry is that there’s a lack of definitional clarity about what the milestones are for AI progress. And part of that is because the idea of artificial intelligence is to recreate human intelligence in computers, yet we don’t have any scientific consensus around what human intelligence is, and we don’t really have any scientific consensus around what it would mean if we actually did accomplish this goal of fundamentally simulating it in digital technologies.

And so, on one hand, if we were to define the singularity as that kind of moment, then there’s a raging debate within the AI research world about whether AI can actually even ever get there, and 75% of scientists who study this would actually say that current AI models and the existing techniques for advancing AI are probably not going to get us there, if AI can ever do that at all.

But if you were to define the singularity as just the technology having a kind of fundamental transformation on the way that we work and live and everything, I think that’s already happened. Everyone is already grappling with the impacts of AI in various different ways on the multifaceted landscape of their lives.

You said something earlier that really struck me. There’s no scientific consensus around human intelligence, which I would argue given the world we are in today, yeah. I don’t know if human intelligence is such a thing anymore, but if we can’t really judge human intelligence, if scientists, people who have more intelligence than me, cannot say specifically what human intelligence is, then I imagine it’s near impossible to say what computer intelligence or artificial intelligence is.

There’s a really dark history around attempts to quantify human intelligence. There’s basically never been any endeavor to quantify or rank human intelligence without some kind of insidious motivation behind it. So in general, yeah, this entire idea of recreating human intelligence is actually quite fraught. And also, one of the challenges that we’re facing now is, the AI industry has become so resource-rich that most of the AI researchers in the world now are bankrolled by the companies that are ultimately trying to just sell us their technologies.

And there has become this distortion in the fundamental science that is coming out of these researchers in terms of understanding the capabilities and limitations of AI today in the same way that you would imagine climate science would be deeply distorted if most climate scientists were bankrolled by the fossil fuel industry. You would just not get an accurate picture on the actual climate crisis.

And so, we are not actually getting an accurate picture on the capabilities of these systems and all of the different ways that they break down, because a lot of these companies now censor that kind of research or don’t even allow that research to be resourced. So there’s never any investigation along those lines.

You’ve been an AI insider for a while now. You’ve got a degree in mechanical engineering from MIT. When did you start reporting on AI, and are you surprised at how quickly the field has grown?

Yeah. I started reporting on AI in 2017, 2018. And in 2018, I took a job at MIT Technology Review to cover the fundamental AI research that was happening in academia and within corporate labs. And at that time, the AI industry really wasn’t… It didn’t really exist yet. I mean, there were certainly efforts to commercialize the technology, but it was primarily back-office tasks. It was things like Google improving its search engine or Facebook improving its recommendation algorithm, but not actually consumer AI products where you could talk with the AI model or type directly into this chat for it to generate images.

And I have been really surprised. I mean, it’s so interesting, because OpenAI, I was the first journalist to profile OpenAI back in… I embedded within their office for three days in 2019, and then my profile published in early 2020. And at that time, it’s hard to sort of explain to people today, OpenAI was sort of the laughingstock of the AI field and of the tech industry.

People did not take that company seriously, in part because the approach that they said that they were going to take to AI development, which is ultimately what has come to pass, was they were just going to take existing AI techniques and technologies and throw more data on it and use larger supercomputers to train it. And people didn’t think that that would lead to advancements that we see today.

They thought, “This is an intellectually lazy approach. That’s not real research. We’re not actually investing in real breakthroughs here.” But in the end, it turns out that there are a lot of interesting things that happen when you do that. And because they took that approach, they were able to do it very, very quickly, far faster than anyone else could have imagined. So absolutely, I would have never predicted that we would be where we are.

I want to talk a little bit specifically about OpenAI. So it started off as an altruistic organization with a unique structure created by Sam Altman. Can you talk me through that a little bit? How did it start? What happened with the rise and fall and rise again of Sam Altman? That whole story.

Yeah. So OpenAI started as a nonprofit at the end of 2015, and it was co-founded by Elon Musk and Sam Altman. And the origin story for how the two of them came together is that Musk was deeply, deeply concerned about the fact that Google was starting to develop a monopoly on AI talent at the time. Google had acquired some of the top AI researchers in the world by buying out the three researchers who basically started the first AI industry revolution, as well as through the acquisition of DeepMind, the London-based AI lab.

And Musk thought if Google is going to have a controlling influence on this technology, and Google is a for-profit company, that could potentially lead to AI development going very sideways. And what Musk meant by it going very sideways is not that it would then be unsafe for consumers, that it might have bias and discrimination, that it might have huge environmental impacts. What he specifically meant was AI might develop consciousness and then turn against humans and destroy everyone, which is a very sci-fi premise that has taken hold in a lot of parts of the AI industry.

Altman, he was the president of Y Combinator at the time, and he is a very strategic person. He essentially starts cultivating this relationship with Musk, telling him that he agrees with all of these concerns. He also is worried about rogue AI. He’s also worried about Google having a control on this technology, and he essentially proposes to Musk after a bit of a courtship, “Well, why don’t we actually create an organization that is antithetical to everything Google stands for? It’ll be a nonprofit instead of a for-profit. It’ll be highly transparent instead of secretive. It’ll be very collaborative, work together with all of these different entities to ensure ultimately that this technology goes well for humanity.” And so, they create this organization. But key to this origin story, which I sort of didn’t quite articulate until years later, is there is a very egotistical element to that origin. Right? Musk and Altman were basically saying, “We’re the good guys. We want to be the ones that create AI in our image so that it goes well for everyone in humanity.”

And so, it sort of was a natural consequence that, one-and-a-half years into the organization, when they were thinking about, “How do we actually make sure that we, not Google, dominate?” they came to the conclusion that they needed to advance AI faster than Google, faster than anyone else, and the way to guarantee that was to take these existing techniques, throw extraordinary amounts of data and computational resources at them, and that meant that they just had to build larger supercomputers than had ever been built in history. And then suddenly, the bottleneck was cash. A nonprofit structure didn’t work anymore. They needed some kind of for-profit to raise that level of cash, and OpenAI started its transition away from nonprofit to what is today the most capitalistic organization, nearing a $500 billion valuation. And basically, since then, Altman has led the organization, and Musk left.

And there have been these allegations that have continued to follow Altman through his time at OpenAI as well as through the rest of his career that he has this slipperiness about him where he tells people what they want to hear, the same way that he told Musk what he wanted to hear, to sort of extract what he needs out of them, but then he might shift course at any moment to just continue doing what he ultimately wants to do, and no one can quite ascertain what it is that is his actual endgame.

But that basically then led to this very dramatic moment of reckoning where a lot of people suddenly lost trust in his leadership as the head of OpenAI. The board decides to fire him. But then because Altman is just so good at fundraising and so good at amassing the kinds of resources that OpenAI foresaw themselves needing to continue perpetuating what they want to do, that then employees rallied around bringing him back again, because they were concerned that without him, they wouldn’t get access to those resources. And now, he’s, you could argue, stronger than ever.

Yeah. And he and Elon seem to have beef with each other. I’ve seen both of them making snide remarks about the other. Is that because of the breakup with OpenAI?

It is. So at the time that Musk decided to part, it was amicable. But as OpenAI continued to succeed more and more and more, Musk became more and more frustrated that he was not part of that success, and he then started to feel that he had really been tricked by Altman, because it was originally Musk’s reputation and Musk’s money that allowed OpenAI to establish a strong foundation for its later success. And so, that is why they’re kind of at each other’s throats these days.

I want to talk to you about your reporting that focused on Africa and South America. So early on in the development of ChatGPT, the company hired people there to work on data annotation. Tell me about the work they were doing, and do you think they were exploited?

Absolutely they were exploited. So yeah, everything that an AI system can do, it’s not because the AI system learned how to do it completely on its own. It is because there were human beings that taught the system how to do that. And that means that in order for the entire AI industry enterprise to function, they need to hire huge teams of workers to teach ChatGPT how to chat. The fact that it can even chat was a design decision, and there were workers that had to show ChatGPT, “This is what dialogue looks like. This is how humans converse. One person says one thing. Another person responds with related information.”

And they also have to do content moderation to make sure that the chatbot won’t spew crazy, racist, hateful, or other abusive content. Most of that labor initially came from the Global South, because the AI industry was looking for the cheapest possible labor in the world. And one of the things that I talk about in my book is that, initially, they thought, “Oh, we want to go to English-speaking countries.” So they went to places like Kenya. They went to places like the Philippines, countries that have a history of colonialism, and therefore speak English and have an understanding of American culture.

And now, in the generative AI era, when we see the need now for content moderation on these chatbots, I mean, we are repeating all of the same types of exploitation of content moderators in the social media era. So I went and met workers in Kenya that OpenAI contracted there to build their content moderation filter, and those workers ended up with extreme PTSD. Their personalities completely changed. They went from being extroverted or loving individuals to highly socially anxious, highly isolated. And one man-

What was it that made them that way? What in the work shifted their personalities like that?

They were wading through reams and reams of text that represented the worst content on the internet, because these chatbots… OpenAI decided at some point that they were going to train these chatbots on the entirety of the English-language internet. So they’re scraping all of this stuff willy-nilly, and the datasets have grown so large that they don’t even know what’s in there.
It would take too long for them to manually audit what’s in there. So there’s just a bunch of incredibly awful material that’s in there that is never taken out, which means that when you train an AI model, that AI model is then at risk of regurgitating all of that awful stuff, which would then make their consumer product highly untenable.

And so, they took thousands and thousands of different examples of just the awfulness of that, both examples that they were finding on the internet and AI-generated examples, where OpenAI was prompting its own models to say, “Imagine the most awful thing that you could imagine.” And then they gave those to the workers, who had to read them and then categorize, in a detailed taxonomy, exactly what the badness of that content was.

Was this violent content? Was this sexual content? Was it sexual abuse content? Was it child sexual abuse content? And reading that kind of stuff for eight hours a day, every day of the week, just completely deteriorated their mental health, and not just their mental health, but these people belong to communities. And when they break down, the people who depend on them also break down.

So I just want to talk a little bit about the cost of AI, because I think it’s really easy to forget when you’re on your computer, you’ve got a screen up, and you’re asking ChatGPT, or any other AI model, some questions that maybe five years ago, we would have popped it into Google and not gotten a great answer or gotten in the area of what we were looking for. And now, you can pop it into ChatGPT and really kind of fine-tune what you’re looking for. That being said, that is a very easy thing to do. Then recently, I found out the cost of that. Can you kind of walk me through what the cost of that is, both financially, but really I’m thinking about the environmental footprint that’s left behind by AI?

Overall, because of the amount of energy that is used to first develop these systems and then to deploy them at scale, McKinsey recently had a report projecting that, based on the current pace of data center and supercomputer development to support this, we would need to add two to six times the amount of energy consumed by California onto the global grid in five years, by 2030. And most of that will be fossil fuels, because these data centers have to run around the clock. They cannot just pause when they’re serving up these models or training these models. Take xAI with Grok, for example.

There have been a lot of phenomenal journalistic investigations on the supercomputer that they’ve been using in Memphis, Tennessee, which is just being run on 35 unlicensed methane gas turbines that are pumping thousands of pounds of toxins into working-class communities in Memphis that have a long history of environmental injustice and are unable to access the fundamental right to clean air.

And then that’s just the energy and air pollution side. There’s also the freshwater side of things, where most of these data centers are cooled by water. You can also cool them with just energy, basically running massive air-conditioning units, but that uses way more energy. So companies usually opt to cool with water because it’s more energy-efficient, but the water has to be fresh water, because any other type of water leads to corrosion of the equipment or bacterial growth.

And so, Bloomberg recently had an investigation showing that two-thirds of these data centers are actually going into areas that already don’t have enough freshwater resources for the human population. So there are people around the world that are actively competing with computers for their life-sustaining resources. And one of the communities that I reported on in my book was facing this crisis in Montevideo, Uruguay, where residents were facing a historic level of drought to the point where the Montevideo city government was actively mixing salt water into the public drinking water supply just so people could have something come out of their taps.

And for people who were too poor to buy bottled water, that is what they were drinking, and women were having higher rates of miscarriages. People with chronic illnesses were having exacerbated symptoms, and it was in the middle of that that Google proposed to build a data center that would be cooled with their fresh water. And I point to the Global South and these communities there, but this is actually also happening in the U.S. There are plenty of communities that are now struggling and trying to figure out how to essentially prevent their freshwater resources from being taken by Silicon Valley’s infrastructure.

When you say that, it feels like the scale of the problem is so big, and yet we’re not really talking about it when we talk about AI. When we talk about AI, we tend to talk about the benefits and the questions of whether it’s going to turn into Skynet and we’ll have a Terminator monitoring our streets next week. But really, the insidious part of it is that it is sucking up natural resources that human beings need and giving it to a machine.

Right. Yeah. A lot of the discourse around AI risks and dangers is ultimately a distraction from the real risks and dangers. We do point to these sci-fi-like scenarios, in part because Silicon Valley keeps trumpeting that as the scenario, and it is a very convenient one for them to trumpet, because then if people are worried about existential risk and Skynet appearing, they’re not going to worry about the climate.

But to me, the real existential risk is that we are literally leading to the overconsumption of our planet. We are leading to the enormous exploitation of labor in the production of these technologies as well as the application of these technologies and the economic fallout that it could have when it starts to automate away a lot of people’s jobs, and we are allowing the tech industry to consolidate this extraordinary degree of resources unlike anything ever before.
I mean, we thought that they were already powerful during the social media era. In the AI era, the amount of resources and the amount of influence and domination that they now have is of a fundamentally different degree. We are not actually getting innovation that is in the public interest, and we need to hold those companies and the people at the top accountable in order to actually get to a point where we do get technology in the public interest.

I guess looking at the world today, specifically looking at the United States today and where we are politically and also socially, do you think that’s possible? The idea that the government can come in and regulate and really hold these companies accountable.

I think it is possible, but it would not be from the government. So I used to say, when the government was more functional, that that was the endgame, that we really wanted to have governments do top-down governance and implement legislation and regulation and so forth. Now, I very much believe that we need to shift to bottom-up governance when there is a crisis of leadership at the top.

The beautiful thing about democracy is that you can still have leadership from the bottom. We have seen artists and writers suing these companies, saying, “You can’t just take our intellectual property and just decide not to credit or compensate us.” And they’re using now litigation as a form of trying to create new mechanisms of governance around data use, around copyright law. We’ve seen so many communities around the U.S. and around the world that are pushing back against unfettered data center development to support the development and deployment of AI.

And actually, just recently, there was a huge victory in Tucson, where residents successfully blocked what was reportedly going to be a hyperscale data center project from Amazon. It’s not that they said, “We just don’t want data centers at all, supercomputers at all.” Specifically, they said, “We cannot accept a project like this that is going to consume a lot of energy, that will potentially consume a lot of our freshwater resources, and that has absolutely no transparency around who is building this infrastructure, who is using it, what kind of energy and fresh water it might be using, how it might hike up our utility bills, and how it might pollute our air quality if these facilities are going to be run on natural gas or other types of fossil fuels.”

And they were basically demanding, like, “Yeah. It can’t just be top-down… We have no say. It has to be a democratic process of engaging with the community. What are the terms under which you would want this data center to be placed?” We’re also seeing students and teachers starting to have discussions within classrooms and within universities about a more nuanced AI governance policy that’s in between “everyone use it” and “no one use it at all.” And all of these different types of discussions, protest, and pushback, I see as different forms of democratic contestation along the sites of the AI supply chain that are really actively pushing the tech industry to start to respond to the fact that they can’t actually just do whatever they want without any resistance.

You’ve mentioned all these concerning aspects of AI. I’m curious how worried you are personally about the growth of this technology and where it’s going.

I really want to emphasize that the thing I’m most worried about is the unfettered expansion of Silicon Valley’s model of AI development, their approach to creating these large-scale, extremely consumptive AI models. But there are so many other AI technologies that actually do not have any of the problems that we talked about, do not have the need for content moderation, do not have the huge environmental and freshwater costs, and those are smaller, task-specific models that are meant to tackle a very specific challenge that actually lends itself to the computational strengths of AI.

So one example of a system like this is DeepMind’s AlphaFold, which was a system that was able to predict with high accuracy how an amino acid sequence would fold into a protein, which is a very, very important first step for then accelerating drug discovery and for understanding different diseases, and it ultimately won the Nobel Prize in Chemistry last year.

That system is far-flung from ChatGPT. It was not trained on the internet. It was trained on just amino acid sequence and protein folding data, and it did not need massive supercomputers. It just needed a few computer chips to create that type of AI technology. And so, I am extremely pessimistic about what would happen if we allowed Silicon Valley to keep building the technology the way that they want to.

I think that, ultimately, they would consolidate so many resources, so much power that it would be the greatest threat that we’ve seen to democracy to date. At the same time, I am extremely optimistic about the other types of AI technologies that are available to us, that if we are to invest more in those other AI technologies, we really can get to a place where AI is actually serving our needs, serving society rather than us being served up to the tech industry.

Find More To The Story on Apple Podcasts, Spotify, iHeartRadio, Pandora, or your favorite podcast app, and don’t forget to subscribe.


