According to 1M AI News monitoring, excerpts published by The Atlantic from a new book by financial historian Sebastian Mallaby, The Infinity Machine, reconstruct the evolution of AI safety thinking by Demis Hassabis, co-founder of Google DeepMind, from Google's 2014 acquisition of DeepMind onward, drawing on interviews conducted from 2023 to 2026.
Hassabis’ early vision was that a single scientific team could build superintelligence in a way that safely benefits all of humanity; he even planned that, at a critical moment, top researchers would be pulled into a secret bunker, something akin to the Manhattan Project. A researcher who joined not long after recalls that, in the closing moments of his interview, Hassabis warned him to be mentally prepared to “fly at any time to some secret location in Morocco.”
When Google acquired DeepMind in 2014, Hassabis set unusual conditions: an independent external oversight committee, a ban on military applications, and a guarantee that Google could not fully control deployment of the technology. Google agreed. The following year, he convened a secret meeting at Elon Musk’s headquarters in Hawthorne, California, trying to bring potential rivals into a unified front, but the effort backfired: Musk later teamed up with Sam Altman to found OpenAI.
After that, Hassabis launched a covert operation codenamed “Project Mario” to wrest governance autonomy back from Google: he assembled a legal team, secured a $1 billion funding commitment from LinkedIn founder Reid Hoffman, and even considered spinning DeepMind out of Google as an independent entity. The struggle lasted three years and ultimately failed; co-founder Mustafa Suleyman was also forced out of the company in 2019.
After the release of ChatGPT in late 2022, Hassabis abandoned the high-minded approach of doing only beneficial science. He told Mallaby, “This is wartime,” and threw himself fully into the race between Gemini and ChatGPT. Trillions of dollars poured into the AI arms race, and neither national regulation nor corporate governance structures could stop the competition. Hassabis’ safety philosophy also shifted fundamentally: “Safety isn’t about the governance architecture. Even if there’s a governance committee, at the critical moment, it likely won’t do the right thing.” His new strategy was to ensure he had “a seat at the decision-making table, so that when safety issues arise, he can participate in deciding on the solution.” Meanwhile, Google had already been actively pitching AI to U.S. defense systems, a stark reversal of the military-use ban Hassabis had set as a condition of the acquisition.