
The future of intelligence | Demis Hassabis (Co-founder and CEO of DeepMind)

In our final episode of the season, Professor Hannah Fry sits down with Google DeepMind Co-founder and CEO Demis Hassabis for their annual check-in. Together, they look beyond the product launches to the scientific and technological questions that will define the next decade. Demis shares his vision for the path to AGI - from solving "root node" problems in fusion energy and material science to the rise of world models and simulations. They also explore what's beyond the frontier and the importance of balancing scientific rigor amid the competitive dynamics of AI advancement. Thanks for joining us this year! 🔔 Subscribe to stay updated on our return in 2026, and revisit our episode library to catch up on everything from driverless cars to drug discovery: https://www.youtube.com/playlist?list=PLqYmG7hTraZBiUr6_Qf8YTS2Oqy3OGZEj

Timecodes:
01:42 2025 progress
05:14 Jagged intelligence
07:32 Mathematical version of AlphaGo?
09:30 Science vs commercialization
12:42 Scaling
17:43 Genie and simulation
25:47 Evolution in simulation
28:26 AI bubble
31:56 Building ethical AI
34:31 AGI
44:44 Turing machines
49:06 How it feels to lead

Thanks to everyone who made this possible, including but not limited to:
Presenter: Professor Hannah Fry
Series Producer: Dan Hardoon
Editor: Rami Tzabar
Commissioner & Producer: Emma Yousif
Music composition: Eleni Shaw
Audio engineer: Richard Courtice
Studio Manager: Nicholas Duke
Video Director: Bernardo Resende
Video Editor: Anthony Le
Audio Engineer: Perry Rogantin
Production Coordination: Zoey Roberts
Visual Identity and Design: Rob Ashley
Commissioned by Google DeepMind

Subscribe to our channel: https://www.youtube.com/@googledeepmind
Find us on X: https://twitter.com/GoogleDeepMind
Follow us on Instagram: https://instagram.com/googledeepmind
Add us on LinkedIn: https://www.linkedin.com/company/deepmind/
Hosts: Demis Hassabis, Professor Hannah Fry
📅December 16, 2025
⏱️00:56:07
🌐English

Disclaimer: The transcript on this page is for the YouTube video titled "The future of intelligence | Demis Hassabis (Co-founder and CEO of DeepMind)". All rights to the original content belong to their respective owners. This transcript is provided for educational, research, and informational purposes only. This website is not affiliated with or endorsed by the original content creators or platforms.

Watch the original video here: https://www.youtube.com/watch?v=PqVbypvxDto

00:00:00 Demis Hassabis

Effectively, you can think of it as 50% of our effort is on scaling and 50% of it is on innovation. My betting is you're going to need both to get to AGI. I've always felt this: that if we build AGI and then use that as a simulation of the mind, and then compare that to the real mind, we will then see what the differences are and potentially what's special and remaining about the human mind, right? Maybe that's creativity, maybe it's emotions, maybe it's dreaming. There are a lot of consciousness hypotheses out there about what may or may not be computable. And this comes back to the Turing machine question of, like, what is the limit of a Turing machine?

00:00:39 Professor Hannah Fry

So there's nothing that cannot be done within the sort of computational...

00:00:43 Demis Hassabis

Well, no one's put it this way. Nobody's found anything in the universe that's non-computable so far.

00:00:48 Professor Hannah Fry

So far.

00:00:49 Demis Hassabis

So far.

00:00:53 Professor Hannah Fry

Welcome to Google DeepMind: The Podcast with me, Professor Hannah Fry. It has been an extraordinary year for AI. We have seen the center of gravity shift from large language models to agentic AI. We've seen AI accelerate drug discovery and multimodal models integrated into robotics and driverless cars. Now, these are all topics that we've explored in detail on this podcast. But for the final episode of this year, we wanted to take a broader view, something beyond the headlines and product launches, to consider a much bigger question. Where is all this heading, really? What are the scientific and technological questions that will define the next phase? And someone who spends quite a lot of their time thinking about that is Demis, CEO and co-founder of Google DeepMind. Welcome back to the podcast, Demis.

00:01:41 Demis Hassabis

Great to be back.

00:01:41 Professor Hannah Fry

I mean, quite a lot's happened in the last year.

00:01:43 Demis Hassabis

Yes.

00:01:45 Professor Hannah Fry

What [laughter]... what's the biggest shift, do you think?

00:01:47 Demis Hassabis

Oh wow. I mean, so much has happened, as you said. It feels like we've packed 10 years into one. Certainly for us, the progress of the models: we've just released Gemini 3, which we're really happy with. The multimodal capabilities, all of those things, have just advanced really well. And then probably the thing over the summer that I'm very excited about is world models being advanced. I'm sure we're going to talk about that.

00:02:14 Professor Hannah Fry

Yeah, absolutely. We will get on to all of that stuff in a bit more detail in a moment. I remember the very first time that I interviewed you for this podcast and you were talking about the "root node" problems. About this idea that you can use AI to kind of unlock these downstream benefits. And you've made pretty good on your promise, I have to say. Do you want to give us an update on where we are with those? What are the things that are just around the corner and the things that you've sort of solved or near solved?

00:02:38 Demis Hassabis

Yeah. Well, of course, obviously the big proof point was AlphaFold. It's sort of crazy to think we're coming up to like the 5-year anniversary of AlphaFold being announced to the world—AlphaFold 2 at least. So that was the proof, I guess, that it was possible to do these root node type of problems. And we're exploring all the other ones now. I think material science... I'd love to do a room temperature superconductor and, you know, better batteries, these kinds of things. I think that's on the cards—better materials of all sorts. We're also working on fusion.

00:03:11 Professor Hannah Fry

Because there's a new partnership that's been announced [in] fusion.

00:03:13 Demis Hassabis

Yeah, we've just announced a partnership. We already were collaborating with them, but it's a much deeper one now with Commonwealth Fusion who, you know, I think are probably the best startup working on at least traditional Tokamak reactors. So they're probably closest to having something viable and we want to help accelerate that, helping them contain the plasma in the magnets and maybe even some material design there as well. So that's exciting. And then we're collaborating also with our quantum colleagues—they're doing amazing work at the quantum AI team at Google—and we're helping them with error correction codes where we're using our machine learning to help them, and then maybe one day they'll help us.

00:03:53 Professor Hannah Fry

[Laughter] That's perfect. Exactly. The fusion one is particularly... I mean the difference that that would make to the world, that would be unlocked by that, is gigantic.

00:04:01 Demis Hassabis

Yeah. I mean fusion's always been the holy grail. Of course, I think solar is very promising too, right? Effectively using the fusion reactor in the clouds in the sky. But I think if we could have modular fusion reactors, you know, this promise of almost unlimited renewable clean energy would obviously transform everything. And that's the holy grail. Of course, that's one of the ways we could help with climate.

00:04:29 Professor Hannah Fry

Does make a lot of our existing problems sort of disappear if we can...

00:04:33 Demis Hassabis

Definitely. It opens up many things; this is why we think of it as a root node. Of course, it helps directly with energy and pollution and so on, and helps with the climate crisis. But also, if energy really were renewable and clean and super cheap, or almost free, then many other things would become viable. Like water access, because we could have desalination plants pretty much everywhere. Even making rocket fuel: there's lots of seawater, which contains hydrogen and oxygen. That's basically rocket fuel, but it takes a lot of energy to split it into hydrogen and oxygen. If energy is cheap and renewable and clean, then why not do that? You could have that producing 24/7.

00:05:14 Professor Hannah Fry

You're also seeing a lot of change in the AI that is applying itself to mathematics, right? You know, winning medals in the International Math Olympiad, and yet at the same time these models can make quite basic mistakes in high school math. Why is there that paradox?

00:05:29 Demis Hassabis

Yeah, I think it's fascinating actually. It's one of the most fascinating things, and probably one of the key things that needs to be fixed while we're not at AGI yet. As you said, we, along with other groups, have had a lot of success getting gold medals at the International Math Olympiad. You look at those questions and they're super hard questions that only the top students in the world can do. And on the other hand, if you pose a question in a certain way, as we've all seen experimenting with chatbots in our daily lives, it can make some fairly trivial mistakes on logic problems. They can't really play decent games of chess yet, which is surprising.

00:07:32 Professor Hannah Fry

I also wonder about that story of AlphaGo and then AlphaZero, where you sort of took away all of the human experience and found that the model actually improved. Is there a scientific or a maths version of that in the models that you're creating?

00:07:46 Demis Hassabis

I think what we're trying to build today is more like AlphaGo. Effectively, these large language models, these foundation models, are starting with all of human knowledge (what we put on the internet, which is pretty much everything these days) and compressing that into some useful artifact, right, which they can look up and generalize from. But I do think we're still in the early days of having this search or thinking on top, like AlphaGo had, to use that model to produce useful reasoning traces and useful planning ideas, and then come up with the best solution to whatever the problem is at that point in time.

00:09:29 Professor Hannah Fry

In terms of all of those missing pieces... I mean, I know that there's this big race at the moment to release commercial products, but I also know that Google DeepMind's roots really lie in that idea of scientific research. And I found a quote from you where you recently said, "If I had had my way, we would have left AI in the lab for longer and done more things like AlphaFold, maybe cured cancer or something like that." Do you think that we lost something by not taking that slower route?

00:09:55 Demis Hassabis

Um, I think we lost and gained something. I feel like that would have been the more pure scientific approach. At least that was my original plan, say, 15 or 20 years ago, when almost no one was working on AI and we were just about to start DeepMind. People thought it was a crazy thing to work on, but we believed in it. And the idea was that as we made progress we would continue to incrementally build towards AGI, be very careful about what each step was and the safety aspects of it, and analyze what the system was doing.

00:12:41 Professor Hannah Fry

The thing that's strange is that this time last year, I think, there was a lot of talk about scaling eventually hitting a wall, about us running out of data. And yet, as we're recording now, Gemini 3 has just been released and it's leading on a whole range of different benchmarks. How has that been possible? Wasn't there supposed to be a problem with scaling hitting a wall?

00:13:02 Demis Hassabis

I think a lot of people thought that, especially as other companies have had slower progress, should we say. But I think we've never really seen any wall as such. What I would say is maybe there are diminishing returns. And when I say that, people think "oh, so there are no returns," as if it's zero or one: either it's exponential or it's asymptotic. No, actually there's a lot of room between those two regimes, and I think we're in between them. So it's not like you're going to double the performance on all the benchmarks every time you release a new iteration. Maybe that's what was happening in the very early days, three or four years ago. But you are getting significant improvements, like we've seen with Gemini 3, that are well worth the investment.

00:15:31 Professor Hannah Fry

I mean, one thing that we are still seeing even in Gemini 3, which is an exceptional model, is hallucinations. I think there was one metric showing it can still give an answer when actually it should decline. Could you build a system where Gemini gives a confidence score in the same way that AlphaFold does?

00:15:52 Demis Hassabis

Yeah, I think so. And I think we need that actually; it's one of the missing things. I think we're getting close. The better the models get, the more they know about what they know, if that makes sense. And so the more reliably we can count on them to actually introspect in some way, or do more thinking, and realize for themselves that there's uncertainty over an answer. And then we've got to work out how to train it in a way where it can output that as a reasonable answer.

00:16:50 Professor Hannah Fry

Cuz presumably behind the scenes there is some sort of measure of probability of whatever the next token might be.

00:16:55 Demis Hassabis

Yes, there is for the next token. That's how it all works. But that doesn't tell you the overarching piece: how confident are you about this entire fact or this entire statement? And I think that's why you'll need this. I think we'll need to use the thinking steps and the planning steps to go back over what you just output. At the moment, it's a little bit like talking to a person who, on a bad day, is just literally telling you the first thing that comes to their mind. Most of the time that would be okay. But sometimes, when it's a very difficult thing, you'd want to stop, pause for a moment, and maybe go over what you were about to say and adjust it. Perhaps that's happening less and less in the world these days, but it's still the better way of having a discourse. So I think you can think of it like that. These models need to do that better.
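As a rough illustration of the gap being described here, this is a minimal sketch (the helper names and threshold are invented for illustration, not any Gemini or AlphaFold API) of turning per-token log-probabilities into a single statement-level confidence score and declining when it is too low:

```python
import math

def sequence_confidence(token_logprobs):
    """Aggregate per-token log-probabilities into one rough confidence
    score for a whole statement: the geometric mean of the per-token
    probabilities, so longer answers aren't automatically penalized."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def answer_or_decline(token_logprobs, threshold=0.5):
    """Decline when token-level uncertainty is too high; a toy stand-in
    for the 'go back over what you just output' step discussed above."""
    conf = sequence_confidence(token_logprobs)
    return ("answer", conf) if conf >= threshold else ("decline", conf)
```

The caveat is exactly the point Demis makes: average token probability is a crude proxy that says nothing about whether an entire fact is right, which is why a separate thinking or verification pass over the output is needed.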

00:17:43 Professor Hannah Fry

I also really want to talk to you about the simulated worlds and putting agents in them because we got to talk to your Genie team earlier today. Tell me why you care about simulation. What can a world model do that a language model can't?

00:17:57 Demis Hassabis

Well, look, world models and simulations have probably been my longest-standing passion, in addition to AI, and of course it's all coming together in our most recent work like Genie. And I think language models are able to understand a lot about the world, actually more than we expected, more than I expected, because language is probably richer than we thought. It contains more about the world than maybe even linguists imagined, and that's proven now with these new systems.

00:20:26 Professor Hannah Fry

All of this.

00:20:27 Demis Hassabis

Yeah. All of the time. [Laughter] Exactly.

00:20:29 Professor Hannah Fry

What about science too though? Could you use it in that domain?

00:20:31 Demis Hassabis

Yes, you could. So for science, again, I think building models of scientifically complex domains, whether that's materials at the atomic level, biology, but also physical things like weather. One way to understand those systems is to build simulations, to learn simulations of those systems from the raw data. So you have a bunch of raw data, let's say it's about the weather (and obviously we have some amazing weather projects going on), and then you have a model that learns those dynamics and can recreate those dynamics more efficiently than doing it by brute force. So I think there's huge potential for simulations and world models, maybe specialized ones for aspects of science and mathematics.
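As a toy version of "learning a simulation from the raw data" (the dynamics and numbers here are invented for illustration, and bear no relation to the weather projects mentioned): generate trajectories from a simple damped oscillator, fit a next-state predictor by least squares, and the fitted model itself becomes the cheap simulator.

```python
import random

# Ground-truth dynamics we pretend not to know (a damped oscillator).
def true_step(x, v):
    return x + v, 0.7 * v - 0.2 * x

# 1. Collect "raw data": (state, next state) pairs from random starts.
rng = random.Random(0)
states, next_states = [], []
for _ in range(200):
    x, v = rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0)
    for _ in range(20):
        nx, nv = true_step(x, v)
        states.append((x, v))
        next_states.append((nx, nv))
        x, v = nx, nv

# 2. Learn the dynamics: least-squares fit of t ≈ a*x + b*v,
#    solved directly via the 2x2 normal equations.
def fit(inputs, targets):
    sxx = sum(x * x for x, v in inputs)
    sxv = sum(x * v for x, v in inputs)
    svv = sum(v * v for x, v in inputs)
    tx = sum(x * t for (x, v), t in zip(inputs, targets))
    tv = sum(v * t for (x, v), t in zip(inputs, targets))
    det = sxx * svv - sxv * sxv
    return (tx * svv - tv * sxv) / det, (sxx * tv - sxv * tx) / det

ax, bx = fit(states, [nx for nx, nv in next_states])  # recovers (1.0, 1.0)
av, bv = fit(states, [nv for nx, nv in next_states])  # recovers (-0.2, 0.7)

# 3. The learned model can now be rolled forward as a simulator.
def learned_step(x, v):
    return ax * x + bx * v, av * x + bv * v
```

Real systems are nonlinear and noisy, so the "model" would be a neural network rather than a linear fit, but the shape of the idea is the same: raw trajectories in, a rollable dynamics model out.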

00:21:22 Professor Hannah Fry

But then also I mean you can drop an agent into that simulated world too, right? Your Genie 3 team, they had this really lovely quote which was: "almost no prerequisite to any major invention was made with that invention in mind." And they were talking about dropping agents into these simulated environments and allowing them to explore with sort of curiosity being their main motivator, right?

00:21:45 Demis Hassabis

Right. So that's another really exciting use of these world models. We have another project called SIMA; we just released SIMA 2. Simulated agents, where you have an avatar or an agent and you put it down into a virtual world. It can be an actual commercial game, a very complex one like No Man's Sky, the open-world space game. And then you can instruct it; because it's got Gemini under the hood, you can just talk to the agent and give it tasks.

00:23:21 Professor Hannah Fry

Yeah. The end of boring NPCs basically.

00:23:23 Demis Hassabis

[Laughter] Exactly. It's going to be amazing for these games. Yeah.

00:23:26 Professor Hannah Fry

Those worlds that you're creating though, how do you make sure that they really are realistic? I mean, how do you ensure that you don't end up with physics that looks plausible but is actually wrong?

00:23:35 Demis Hassabis

Yeah, that's a great question, and it can be an issue. It's basically hallucinations again. Some hallucinations are good, because it means you might create something interesting and new. So in fact, if you're trying to create creative things, trying to get your system to create new, novel things, a bit of hallucination might be good, but you want it to be intentional, right? So you kind of switch on the hallucinations, or the creative exploration.

00:25:46 Professor Hannah Fry

I know you've been thinking about these simulated worlds for a really long time, and I went back to the transcript of our first interview. In it you said that you really like the theory that consciousness was a consequence of evolution: that at some point in our evolutionary past there was an advantage to understanding the internal state of another, and then we sort of turned it in on ourselves. Does that make you curious about running AI agents through evolution inside a simulation?

00:26:10 Demis Hassabis

Sure. [Laughter] I mean, I'd love to run that experiment at some point: rerun evolution, rerun almost social dynamics as well. The Santa Fe Institute used to run lots of cool experiments on little grid worlds. I used to love some of these. They were mostly economists, and they were trying to run little artificial societies, and they found that all sorts of interesting things got invented if you let agents run around for long enough with the right incentive structures: markets and banks and all sorts of crazy things.
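In the spirit of those grid-world economies, here is a minimal sketch (the rules, goods, and numbers are all invented for illustration): agents wander a toroidal grid, harvest one of two goods depending on the cell they land on, and barter surplus whenever they share a cell. Exchange "emerges" purely from local incentives, with no global market coded in.

```python
import random

GRID = 10     # 10x10 toroidal grid
AGENTS = 50
STEPS = 500

class Agent:
    def __init__(self, rng):
        self.x, self.y = rng.randrange(GRID), rng.randrange(GRID)
        self.goods = {"grain": 0, "cloth": 0}

def step(agents, rng):
    """One tick: everyone moves, harvests, then barters locally."""
    for a in agents:
        a.x = (a.x + rng.choice([-1, 0, 1])) % GRID
        a.y = (a.y + rng.choice([-1, 0, 1])) % GRID
        # Each cell yields one unit of a good, fixed by the cell's parity.
        good = "grain" if (a.x + a.y) % 2 == 0 else "cloth"
        a.goods[good] += 1
    # Barter: co-located agents with opposite surpluses swap one-for-one.
    by_cell = {}
    for a in agents:
        by_cell.setdefault((a.x, a.y), []).append(a)
    trades = 0
    for cell in by_cell.values():
        for a in cell:
            for b in cell:
                if (a is not b
                        and a.goods["grain"] > a.goods["cloth"]
                        and b.goods["cloth"] > b.goods["grain"]):
                    a.goods["grain"] -= 1; a.goods["cloth"] += 1
                    b.goods["cloth"] -= 1; b.goods["grain"] += 1
                    trades += 1
    return trades

rng = random.Random(0)
agents = [Agent(rng) for _ in range(AGENTS)]
total_trades = sum(step(agents, rng) for _ in range(STEPS))
```

Nothing in the rules mentions a market, yet a steady pattern of exchange appears. Scaled-up versions of that dynamic, with far richer agents, are what make the emergent behavior in these simulations both interesting and worth monitoring.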

00:27:32 Professor Hannah Fry

Given what we've discovered about sort of emergent properties of these models, right—having sort of conceptual understanding that we weren't expecting—do you also have to be quite careful about running this sort of simulation?

00:27:42 Demis Hassabis

I think you would have to be, yes, but that's the other nice thing about simulations. You can run them in pretty safe sandboxes. Maybe eventually you want to air gap them. And you can of course monitor what's happening in the simulation 24/7 and you have access to all the data. So we may need AI tools to help us monitor the simulations because they'll be so complex and there'll be so much going on in them. If you imagine loads of AIs running around in a simulation, it will be hard for any human scientist to keep up with it on [their own], but we could probably use other AI systems to help us analyze and flag anything interesting or worrying in those simulations automatically.

00:28:26 Professor Hannah Fry

I guess we're still talking sort of medium to long term in terms of this stuff. So just going back to the trajectory that we're on at the moment, I also want to talk to you about the impact that AI and AGI are going to have on wider society. And last time we spoke you said that you thought AI was overhyped in the short term but underhyped in the long term. And I know that this year there's been a lot of chatter about an AI bubble. What happens if there is a bubble and it bursts? What happens?

00:28:54 Demis Hassabis

Well, look, I still subscribe to the idea that it's overhyped in the short term and still underappreciated in the medium to long term, in how transformative it's going to be. Yeah, there is a lot of talk right now about AI bubbles. In my view, it's not one binary thing, "are we or aren't we?" I think there are parts of the AI ecosystem that are probably in bubbles. One example would be seed rounds for startups that basically haven't even got going yet, and they're raising at valuations in the tens of billions of dollars right out of the gate. It's sort of interesting: how can that be sustainable? My guess is probably not, at least not in general.

00:31:56 Professor Hannah Fry

In terms of the AI that people have access to at the moment, I know you said recently how important it is not to build AI to maximize user engagement, so we don't repeat the mistakes of social media. But I also wonder whether we are already seeing this in a way. I mean, people spending so much time talking to their chatbots that they end up kind of spiraling into self-radicalization. How do you stop that? How do you build AI that puts users at the center of their own universe, which is sort of the point of this in a lot of ways, but without creating echo chambers of one?

00:32:28 Demis Hassabis

Yeah, it's a very careful balance, and I think it's one of the most important things that we as an industry have got to get right. We've seen what happens with some systems that were overly sycophantic: you get these echo chamber reinforcements that are really bad for the person. So part of it, and actually what we want to build with Gemini (I'm really pleased with the Gemini 3 persona, which a great team worked on and which I helped with personally), is almost a scientific personality: warm, helpful, light, but succinct, to the point, and it will push back in a friendly way on things that don't make sense. Rather than reinforcing you when you say the earth is flat and replying "wonderful idea." I don't think that would be good for society in general.

00:34:31 Professor Hannah Fry

We got to talk to Shane Legg a couple weeks ago about AGI in particular. Across everything that's happening in AI at the moment—the language models, the world models, and so on—what's closest to your vision of AGI?

00:34:43 Demis Hassabis

I think actually the combination of Gemini 3, obviously, which I think is very capable, and the Imagen 3 system we also launched last week, which is an advanced version of our image creation tool. What's really amazing about that is it also has Gemini under the hood, so it can understand not just images: it understands what's going on semantically in those images. People have only been playing with it for a week now, but I've seen so much cool stuff on social media about what people are using it for. For example, you can give it a picture of a complex plane or something like that, and it can label the diagrams of all the different parts of the plane and even visualize it in a form with all the different parts exposed.

00:36:19 Professor Hannah Fry

I know you've been reading quite a lot about the Industrial Revolution recently. Are there things that we can learn from what happened there to try and mitigate against some of the disruption that we can expect [when] AGI comes?

00:36:32 Demis Hassabis

I think there's a lot we can learn. It's something you study in school, at least in Britain, but on a very superficial level. It was really interesting for me to look into how it all happened, what it started with, and the economic reasons behind it, which was the textile industry. The first computers were really the sewing machines, right? And then they became punch cards for the early Fortran computers, the mainframes. And for a while it was very successful: Britain became like the center of the textile world because it could make these amazingly high-quality things very cheaply because of the automated systems. And then obviously the steam engines and all of those things came in.

00:38:38 Professor Hannah Fry

One of the things that Shane told us was that the kind of current economic system where you exchange your labor for resources effectively, it just won't function the same way in a post-AGI society. Do you have a vision of how society should be reconfigured or might be reconfigured in a way that works?

00:38:57 Demis Hassabis

Yeah, I'm spending more time thinking about this now, and Shane's actually leading an effort here to think about what a post-AGI world might look like and what we need to prepare for. But I think society in general needs to spend more time thinking about that: economists and social scientists and governments. Because, as with the Industrial Revolution, the whole working world and working week and everything got changed from pre-Industrial Revolution agriculture. And at least that level of change is going to happen again. So I would not be surprised if we needed new economic systems, new economic models, to help with that transformation and to make sure, for example, that the benefits are widely distributed.

00:41:17 Professor Hannah Fry

Do you worry that people don't seem to be paying attention, sort of, or moving as quickly as you'd like to see? What would it take for people to sort of recognize that we need international collaboration on this?

00:41:28 Demis Hassabis

I am worried about that. And I wish, in an ideal world, that there would have been a lot more collaboration already, international collaboration specifically, and a lot more research and exploration and discussion going on about these topics. I'm actually pretty surprised there isn't more of that being discussed, given that even our timelines (there are some very short timelines out there, but even ours are 5 to 10 years) are not long for institutions and things like that to be built to handle this.

00:42:58 Professor Hannah Fry

Do you think it will take a moment, an incident, for everyone to sort of sit up and pay attention?

00:43:01 Demis Hassabis

I don't know. I mean, I hope not. Most of the main labs are pretty responsible. We try to be as responsible as possible. You know, that's always something we've—as you know if you followed us over the years—that's been at the heart of everything we do. Doesn't mean we'll get everything right, but we try to be as thoughtful and as scientific in our approach as possible. I think most of the major labs are trying to be responsible. Also, there's good commercial pressure actually to be responsible. If you think about agents, and you're renting an agent to another company let's say to do something, that other company is going to want to know what the limits are and the boundaries are and the guardrails are on those agents, in terms of what they might do and not just mess up the data and all of this stuff. So I think that's good because the more kind of cowboy operations, they won't get the business because the enterprises won't choose them.

00:44:45 Professor Hannah Fry

In the long term, so beyond AGI and towards ASI, right, artificial superintelligence, do you think that there are some things that humans can do that machines won't ever be able to manage?

00:44:54 Demis Hassabis

Well, I think that's the big question. And I feel like this is related to one of my favorite topics, as you know: Turing machines. I've always felt this: that if we build AGI and then use that as a simulation of the mind, and then compare that to the real mind, we will then see what the differences are and potentially what's special and remaining about the human mind, right? Maybe that's creativity, maybe it's emotions, maybe it's dreaming. There are a lot of consciousness hypotheses out there about what may or may not be computable. And this comes back to the Turing machine question of, like, what is the limit of a Turing machine? And I think that's the central question of my life, really, ever since I found out about Turing and Turing machines.

00:46:57 Professor Hannah Fry

So there's nothing that cannot be done within the sort of computational...

00:47:01 Demis Hassabis

Well, no one's put it this way. Nobody's found anything in the universe that's non-computable so far.

00:47:07 Professor Hannah Fry

So far.

00:47:08 Demis Hassabis

Right? And I think we've already shown you can go way beyond the usual complexity theorist's P vs. NP view of what a classical computer could do today, with things like protein folding and Go and so on. So I don't think anyone knows what that limit is. And that's really, if you boil down what we're doing at DeepMind and Google and what I'm trying to do: find that limit.

00:47:32Professor Hannah Fry

But in the limit of that idea, right, you know, we're sitting here... there's the warmth of the lights on our faces. We can hear the whir of the machine in the background. There's the feel of the desk under our hands. All of that could be replicable by a classical computer?

00:47:47Demis Hassabis

Yes. Well, my view on this in the end (and this is why I love Kant, one of my favorite philosophers) is that reality is a construct of the mind. I think that's true. And so, yes, all of those things you mentioned, they're coming into our sensory apparatus and they feel different, right? The warmth of the light, the touch of the table. But in the end it's all information. And we're information processing systems. And I think that's what biology is. This is what we're trying to do with Isomorphic [Labs]. That's how I think we'll end up curing all diseases: by thinking about biology as an information processing system. And in my spare time, my two minutes of spare time, I'm working on physics theories about things like information being the most fundamental unit, should we say, of the universe: not energy, not matter, but information. So it may be that these are all interchangeable in the end, right? We just sense it, feel it, in a different way. But as far as we know, all these amazing sensors that we have, they're still computable by a Turing machine.

00:48:53Professor Hannah Fry

But this is why your simulated world is so important, right?

00:48:56Demis Hassabis

Yes, exactly. Because that would be one of the ways to get at it. What are the limits of what we can simulate? Because if you can simulate it, then in some sense, you've understood it.

00:49:05Professor Hannah Fry

I wanted to finish with some personal reflections on what it's like to be at the forefront of this. I mean, does the emotional weight of this ever sort of weigh you down? Does it ever feel quite isolating?

00:49:18Demis Hassabis

Yes. Um, look, I don't sleep very much, partly because there's too much work, but also I have trouble sleeping. There are very complex emotions to deal with, because it's unbelievably exciting. You know, I'm basically doing everything I ever dreamed of. And we're at the absolute frontier of science in so many ways, applied science as well as machine learning. And that's exhilarating, as all scientists know, that feeling of being at the frontier and discovering something for the first time. And that's happening almost on a monthly basis for us, which is amazing.

00:50:48Professor Hannah Fry

Are there parts of it that have hit you harder than you expected though?

00:50:52Demis Hassabis

Uh, yes, for sure. Along the way, I mean, even the AlphaGo match, right? Just seeing how we managed to crack Go. Go was this beautiful mystery, and cracking it changed that. And so that was interesting and kind of bittersweet. I think even the more recent things, like language and then imaging, and what it means for creativity... I have huge respect and passion for the creative arts, having done game design myself, and I talk to film directors, and it's an interesting dual moment for them too. On one hand, they've got these amazing tools that speed up prototyping ideas by 10x, but on the other hand, is it replacing certain creative skills?

00:52:19Professor Hannah Fry

When you and the other AI leaders are in a room together, is there a sort of sense of solidarity between you, that this is a group of people who all know the stakes, who all really understand these things? Or does the competition kind of keep you apart from one another?

00:52:32Demis Hassabis

Well, we all know each other. I get on with pretty much all of them. Some of the others don't get on with each other. And there is... it's hard, because we're also in probably the most ferocious capitalist competition there's ever been. You know, investor friends of mine and VC friends of mine who were around in the dot-com era say this is like 10x more ferocious and intense than that was. In many ways, I love that. I mean, I live for competition. You know, I've always loved that since my chess days. But stepping back, I understand, and I hope everyone understands, that there's a much bigger thing at stake than just company successes and that type of thing.

00:53:13Professor Hannah Fry

When you think about the next decade, are there big moments coming up that you're personally most apprehensive about?

00:53:21Demis Hassabis

I think right now the systems are... I call them passive systems. You put the energy in as the user, you know, the question or the task, and then these systems provide you with some summary or some answer. So very much it's human-directed: human energy going in and human ideas going in. The next stage is agent-based systems, which I think we're going to start seeing. We're seeing them now, but they're pretty primitive. In the next couple of years, I think we'll start seeing some really impressive, reliable ones. And I think those will be incredibly useful and capable, if you think about them as an assistant or something like that, but they'll also be more autonomous. So I think the risks go up as well with those types of systems. So I'm quite worried about what those sorts of systems will be able to do in maybe two or three years' time. So we're working on cyber defense in preparation for a world like that, where maybe there are millions of agents roaming around on the internet.

00:54:22Professor Hannah Fry

And what about what you're most looking forward to? I mean, is there a day when you'll be able to retire sort of knowing that your work is done, or is there more than a lifetime's worth of work left to do?

00:54:33Demis Hassabis

Yeah, well, I could definitely do with a sabbatical [snorts]. And I would spend it doing stuff. Yeah, a week off, or even a day, would be good. Um, but look, my mission has always been to help the world steward AGI safely over the line for all of humanity. So I think when we get to that point... of course then there's superintelligence, and there's post-AGI, and there's all the economic stuff we were discussing, and societal stuff, and maybe I can help in some way there. But I think that core part of my mission, my life mission, will be done. I mean, it's only a small job, you know, just to get that over the line, or help the world get that over the line. I think it's going to require collaboration, like we talked about earlier. And I'm quite a collaborative person. So I hope I can help with that from the position that I have.

00:55:23Professor Hannah Fry

And then you get to have a holiday.

00:55:25Demis Hassabis

And then I'll get... yeah, exactly. A well-earned sabbatical.

00:55:29Professor Hannah Fry

Yeah, absolutely. Demis, thank you so much. Helpful as always.

00:55:33Professor Hannah Fry

Well, that is it for this season of Google DeepMind: The Podcast with me, Professor Hannah Fry. But be sure to subscribe so you'll be among the first to hear about our return in 2026. And in the meantime, why not revisit our vast episode library, because we have covered so much this year, from driverless cars to robotics, world models to drug discovery. Plenty to keep you occupied. See you soon.
