The People Behind the Machines: What We Learn from the Architects of AI
In the second episode of Creative Intelligence, Kartik Hosanagar is joined by Cade Metz, New York Times tech journalist and author of Genius Makers, to discuss the people, power dynamics, and paradoxes behind the rise of modern AI.
Here are some of the most compelling insights from their conversation:
1. You Can’t Understand AI Without Understanding Its Creators
One of the central themes Cade Metz emphasizes — and what makes his book Genius Makers so compelling — is that AI is not a neutral force. The systems we use today were shaped by a small group of researchers, such as Demis Hassabis and Geoffrey Hinton, who believed in neural networks long before they were mainstream. Their personalities, rivalries, and philosophies continue to influence how AI develops and how it’s deployed.
“If you want to understand where AI is going, you need to understand who built it, what they believed in, and what they were trying to solve.” – Cade Metz
2. Geoffrey Hinton’s $44M Auction Changed Everything
Long mocked for his loyalty to neural networks, Hinton stunned academia and industry in 2012 when he and two of his students built an image recognition model that shattered benchmarks and triggered a secret auction among tech giants. Hinton stopped the bidding at $44 million, ultimately preferring to work with Google rather than its competitors.
“[Hinton] is not a businessman…but he saw the moment, and seized it,” said Cade.
3. Power and Paradox in AI’s Most Vocal Critics
Many of today’s loudest voices warning about AI’s dangers, from Elon Musk to the founders of OpenAI, are also its most aggressive builders. Cade Metz calls this a “brain-stretching paradox”: they race to develop potentially world-altering technologies while claiming only they can do so safely. These leaders justify their pace by arguing that if someone’s going to build powerful AI, it had better be them. But as Cade notes, beneath that logic lies something simpler and more human: ambition, ego, and the desire for influence.
“They believe they alone can build AI safely. And that gives them reason to race ahead.” – Cade Metz
4. Google Had the Lead — But Didn’t Ship
Despite inventing the transformer architecture (the foundation of modern large language models), employing Geoffrey Hinton, and acquiring DeepMind, Google still got leapfrogged. Why? According to Cade Metz, it came down to culture, caution, and complexity. Google had the talent, data, and compute, but its teams were fragmented across overlapping projects, and leadership feared reputational risk. They knew these systems hallucinate, reflect bias, and are unpredictable, and they didn't want to release something that could backfire. Meanwhile, OpenAI, a much smaller player, concentrated its resources on a single bet and released ChatGPT, despite its flaws. The public response was overwhelming, forcing Google to scramble.
“It’s the classic thing we talk about in business school: the incumbent becomes so risk-averse because they have a reputation to defend.” – Kartik Hosanagar
5. AI’s Real Risks Are Already Here
While headlines often dwell on sci-fi scenarios like sentient machines, killer robots, and runaway superintelligence, Cade argues that the most urgent risks of AI are already happening, quietly and at scale. These include hallucination (making things up), embedded bias, and disinformation. As Cade puts it, we're entering a world where everything can be faked, and therefore everything can be doubted. This so-called liar's dividend makes it easier to dismiss real evidence as fake and harder for the public to agree on basic facts.
“These systems don’t need to destroy the world. They just need to make it harder to know what’s real,” he notes.
For the latest episodes and updates on Creative Intelligence, be sure to subscribe on YouTube, Spotify, or Apple Podcasts.