Google DeepMind CEO Demis Hassabis says global coordination on AI rules is urgent — but admits politics are getting in the way.
Speaking at the SXSW festival, the AI pioneer called for international cooperation to tackle the ethical, legal, and societal risks of artificial intelligence, but warned it won’t be easy.
Calls for Unity Amid Global Tensions
Demis Hassabis is no stranger to complex challenges. The co-founder and CEO of Google DeepMind has spent years pushing the boundaries of what machines can do. But when it comes to keeping that power in check, he says the real challenge isn’t technical — it’s political.
“The most important thing is it’s got to be some form of international cooperation,” Hassabis said during a panel at SXSW in Austin. “Because the technology is across all borders.”
He’s right. AI isn’t a local phenomenon. It doesn’t stop at customs or care about who’s president. But the rules around it? That’s a different story.
And getting 190-plus countries to agree on anything, let alone something as disruptive as AI, might be wishful thinking.
Ethics Front and Center, But No Easy Answers
The speed of AI’s rollout has shocked even its creators. Just a few years ago, large language models were experimental toys. Now, they’re doing everything from automating customer service to writing court documents.
But with that power comes a growing list of ethical concerns — and not everyone is convinced we’re ready.
Hassabis didn’t sugarcoat the dangers. From deepfakes to disinformation, algorithmic bias to job displacement, AI is already changing society in ways that feel too fast, too soon.
Then there’s the big one: Artificial General Intelligence, or AGI. A system that can reason, plan, and perform like a human. Or better.
AGI isn’t here yet. But it’s not science fiction either.
Tech giants are pouring billions into it. OpenAI, Anthropic, Meta, and yes, Google — they’re all in a race to crack it. And when it lands, it won’t just affect one country. It’ll affect everyone.
So who sets the rules?
Countries Are Talking — But Not Listening
It’s not like no one’s trying. In February, leaders from 58 countries met in Paris for a global AI summit.
There were bold statements, urgent speeches, and pledges for stronger oversight.
• The European Union backed the idea of a global framework.
• China and India said AI governance was “essential.”
• The African Union Commission signed on to calls for ethical standards.
But not everyone was on board.
The U.S., facing heavy lobbying from Big Tech, pushed back. Vice President JD Vance warned against “excessive regulation” that could stifle innovation. The UK didn’t sign the final agreement either.
That’s the dilemma. Everyone agrees AI is serious. Few agree on what to do about it.
Geopolitics Are Getting in the Way
The timing couldn’t be worse. International trust is at a low point. Between trade wars, espionage fears, and regional conflicts, global collaboration has taken a back seat.
Hassabis knows it.
He’s spent years in boardrooms and briefing rooms across the world. And while there’s genuine concern from leaders, there’s also fear — about falling behind, losing tech supremacy, or giving up control.
No one wants to be the country that slowed down innovation. But no one wants to be the one caught unprepared either.
The Billion-Dollar AGI Race Isn’t Waiting
Meanwhile, back in the labs, the race for AGI is heating up.
In 2024 alone, OpenAI reportedly raised over $13 billion. Google DeepMind, the lab Hassabis co-founded and leads, is drawing on Alphabet’s resources to double down on AI infrastructure. Microsoft and Amazon are in it too.
Here’s a rough breakdown of recent AI funding by major firms:
Company | Estimated 2024 AI Investment | Focus Area
--- | --- | ---
OpenAI | $13 billion+ | AGI, LLMs, safety research
Google DeepMind | $10 billion+ | AGI, foundational models, ethics
Microsoft | $15 billion+ (via OpenAI) | Infrastructure, integration
Meta | $8 billion+ | Open-source AI, AGI
Amazon | $7 billion+ | Cloud-based AI tools
This isn’t a science fair. It’s a global arms race — but with software.
And while companies say safety is a priority, the incentives to go faster — and beat the competition — are overwhelming.
So What Happens Next?
There’s no blueprint. No AI Geneva Convention. Just a handful of summits, some draft legislation, and a lot of hopes riding on goodwill.
Still, Hassabis remains cautiously optimistic.
He believes scientists, engineers, and policymakers can work together. But it’ll take compromise — and courage.
And probably a few more tense summits before anyone agrees on anything meaningful.
Until then, the machines keep getting smarter. And the humans? We’re still figuring it out.