Stanford CS230 | Autumn 2025 | Lecture 9: Career Advice in AI
Watch the original video here: https://www.youtube.com/watch?v=AuZoDsNmG_s
What I want to do today is chat with you about career advice in AI. In previous years, I used to do most of this lecture myself, but what I thought I'd do today is share just a few thoughts and then hand it over to my good friend Lawrence Moroney. I invited him to speak, and he kindly agreed to come all the way to San Francisco—he lives in Seattle—to share a broad view of what he's seeing in the job market, as well as tips for growing a career in AI.
But there are just two slides and then one more thought I want to share with you before I hand it over to Lawrence. It really feels like the best time ever to be building with AI and to be building a career in AI. A few months ago, in social media and traditional media, there were a few questions about, you know, is AI slowing down? People were saying, "Well, is GPT-5 that good?" I think it's actually pretty good, but there were questions about whether AI progress is slowing down. I think part of the reason the question was even raised is that if a benchmark tops out at 100%, meaning perfect answers, then as you make rapid progress, at some point you cannot get above 100% accuracy.
But one of the studies that most influenced my thinking was work done by the organization [METR] that studied, as time passes, how complex are the tasks that AI can do, measured by how long it takes a human to do that task. So, a few years ago, maybe GPT-2 could do tasks that a human could do in a couple of seconds. Then models could do tasks that took a human 4 seconds, then 8 seconds, then a minute, 2 minutes, 4 minutes, and so on. The study estimates that the length of tasks AI can do is doubling every seven months.
On this metric, I feel optimistic that AI will continue making progress, meaning the complexity of tasks as measured by how long a human takes to do something is doubling rapidly. The same study, with a smaller dataset, argued that for AI coding, the doubling time is even shorter—maybe like 70 days. So this code that used to take me 10 minutes to write, then 20 minutes to write, 40 minutes to write... AI could do more and more of that.
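As a back-of-the-envelope illustration of what a seven-month doubling implies (the starting horizon, the start year, and the projection itself are my illustrative assumptions, not numbers from the study):

```python
# Back-of-the-envelope projection of AI task horizons, assuming the
# ~7-month doubling time cited above. The starting horizon (1 hour of
# human work) and the start year are illustrative assumptions, not
# numbers from the study.

doubling_months = 7
start_year = 2025
horizon_minutes = 60.0  # assume AI handles ~1-hour human tasks at the start

for years_out in range(5):
    doublings = (12 * years_out) / doubling_months
    projected = horizon_minutes * (2 ** doublings)
    print(f"{start_year + years_out}: ~{projected:,.0f} human-minutes per task")
```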
The reasons I think this is a golden age to be building—the best time we've ever seen—is maybe two themes: more powerful and faster. So we can all—all of you in this room—can now write software that is more powerful than what anyone on the planet could have built a year ago by using AI building blocks. AI building blocks include Large Language Models, RAG [Retrieval-Augmented Generation], agentic workflows, Voice AI, and of course Deep Learning. It turns out that a lot of LLMs have a decent, at least basic, understanding of deep learning. So if you ever prompt one of the frontier models to implement a cutting-edge neural network for you—try prompting it to implement a Transformer network for you—it's actually not bad at helping you use these building blocks to build software quickly.
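For example, if you prompt a frontier model for a Transformer block, you might get back something roughly along the lines of this minimal PyTorch sketch (the dimensions and layer choices here are illustrative, not any particular model's output):

```python
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    """A minimal pre-norm Transformer encoder block: self-attention + MLP."""
    def __init__(self, d_model=512, n_heads=8, d_ff=2048, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout, batch_first=True)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, d_ff),
            nn.GELU(),
            nn.Linear(d_ff, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, attn_mask=None):
        # Self-attention with a residual connection
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + self.drop(attn_out)
        # Position-wise feed-forward with a residual connection
        x = x + self.drop(self.mlp(self.norm2(x)))
        return x

# Usage: a batch of 4 sequences, 16 tokens each, embedding size 512
block = TransformerBlock()
tokens = torch.randn(4, 16, 512)
out = block(tokens)  # shape (4, 16, 512)
```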
We have very powerful building blocks that were very difficult or did not exist a year or two ago. You can now build software that does things that no one else on the planet—even the most advanced users on the planet—could have done. And then also with AI coding, the speed with which you can get software written is much faster than ever before.
I personally found it important to stay on the frontier of tools because the tools for AI coding change really rapidly. Several months ago, my personal number one favorite tool became Claude, moving on from some earlier generations. And then since the release of GPT-5, I think OpenAI Codex has actually made tremendous progress. And this morning Gemini 3 was released, which I haven't had time to play with yet—just this morning, right?—but it seems like another huge leap forward.
I feel like if you ask me every three months what my personal favorite coding tool is, it actually probably changes. Definitely every six months, but quite possibly every three months. I find that being half a generation behind in these tools means being, frankly, quite a bit less productive. I know everyone says AI is moving so fast, but AI coding tools—of all the sectors in AI—is one sector where I see the pace of progress is tremendous. Staying at the latest generation of tools rather than a half-generation behind makes you more productive.
With our ability to build more powerful software and build it much faster than ever before, I think one piece of advice that I give much more strongly now than even a year or two years ago is: just go and build stuff. Take classes from Stanford, take online courses, but additionally, your opportunity to build things—and I think Lawrence is going to talk about showing them to others—is greater than ever before.
But there's one weird implication of this that more and more people are appreciating, though it is still not widely known, which is the product management bottleneck. When it is increasingly easy to go from a clearly written software spec to a piece of code, then the bottleneck increasingly becomes deciding what to build, or writing that clear spec for what you actually want to build.
When I'm building software, I often think of going through a loop where we'll write some software, write some code, show it to users to get user feedback—I think of this as PM or product management work—and then based on the user feedback, I'll revise my view on what users like, what they don't like. "This UI is too difficult," "they want this feature," "they don't want that feature," and change my conception of what to build. Then go around this loop many times to hopefully iterate toward a product that users love. Because of AI coding, the process of building software has become much cheaper and much faster than before, but that ironically shifts the bottleneck to deciding what to build.
Some weird trends I'm seeing in Silicon Valley and in many tech companies: people have often talked about an Engineer-to-Product Manager (PM) ratio. You take these ratios with a grain of salt because they're all over the place, but you hear companies talk about a ratio of like 4:1 or 7:1 or 8:1. This idea that one product manager writing product specs can keep four to eight engineers busy. But because engineering is speeding up, whereas product management is not sped up as much by AI as engineering, I'm seeing the Engineer-to-PM ratio trending downward. Maybe even 2:1 or 1:1. Some teams I work with, the proposed headcount was like one PM to one engineer, which is a ratio unlike almost all traditional Silicon Valley companies.
The other thing I'm seeing is that engineers that can also shape product can move really fast. Where you go one step further: take the engineer, take the PM, and collapse them into a single human. I find that there are definitely engineers that like doing engineering work that don't enjoy talking to users and having that more human, empathetic side of work. But I'm finding increasingly that the subset of engineers that learn to talk to users, get feedback, and develop deep empathy for users so that they can make decisions about what to build—those engineers are also the fastest-moving people that I'm seeing in Silicon Valley today.
Early in my career, one thing I regretted for years was that in one of the roles I had, I went and tried to convince a bunch of engineers to do more product work. And I actually made a bunch of really good engineers feel bad for not being good product managers. That was a mistake I made. I regretted it for years; I just shouldn't have done that. Part of me feels like I'm now going back to repeat that exact same mistake. Having said that, I find that being able to write code but also talk to users to shape what to do lets me—and the engineers that can do this—go much faster. So I think maybe it's worth taking another look at whether engineers can do a bit more of this work. If you're not waiting for someone else to take the product to customers, and you just write code, have a gut for what to do next, and iterate, that velocity of execution is much faster.
Before I hand it to Lawrence, just one last thing I want to share. In terms of navigating your career, I think one of the strongest predictors of your speed of learning and your level of success is the people you surround yourself with. We're all social creatures. We all learn from people around us. It turns out that there are studies in sociology showing that if your five closest friends are smokers, the odds of you being a smoker are pretty high. Please don't smoke—that's just an example. I don't know of any studies showing that if your five or ten closest friends are really hardworking, determined people, learning quickly, trying to make the world a better place with AI, then you are more likely to do that too. But it's one of those things that I think is almost certainly true.
All of us are inspired by the people around us. If you're able to find a good group of people to work with, that helps drive you forward. In fact, here at Stanford, I feel very fortunate. Fantastic student body, fantastic group of faculty. The other thing that I think we're fortunate to have at Stanford is a connective tissue. Candidly, a lot of the people working at the cutting-edge AI labs, the frontier labs, were former students of a lot of different Stanford faculty. And so that rich connective tissue means that at Stanford we often find out about a lot of stuff that's not widely known because of the relationships, the friendships. When some company does something, one of my friends in the faculty will call up someone at the company and say, "Hey, that's weird. Does this really work?"
That rich connective tissue means that just as we try to pull our friends forward, our friends also pull us forward with the knowledge and the know-how of bleeding-edge AI, which unfortunately is not all published on the internet at this moment in time. So while you're at Stanford, make those friends, form that rich connective tissue. There have been a lot of times just for myself where frankly I was thinking of going on some technical direction, I'd have one or two phone calls with someone really close to research—either a Stanford researcher or someone in the frontier lab—and they would share something with me that I didn't know before. And that changes the way I choose the technical architecture of a project.
I find that group of friends you surround yourself with—those little pieces of information: "Try this," "Don't do that," "That's just hype," "Ignore the PR," "Don't actually try that thing"—those things make a big difference in your ability to steer the direction of your projects. So while you're at Stanford, take advantage of that. The connective tissue that Stanford has is actually really unique. I really think there's no university in the world that is as privileged as Stanford at this moment in time in terms of the richness of the connectivity to all of the leading AI groups.
But to me, that also means we're lucky here to have a wonderful community of people to work with and learn from. And for you too: if you apply for jobs and join a company, the thing that is much more important for your career success is the people you work with day-to-day.
Here's one story that I've told in previous classes I'm going to repeat. There was a Stanford student that I knew many years ago. They did really good work at Stanford—I thought they were a high-flyer—and they applied for a job at a company and got a job offer from one of the companies with a hot AI brand. This company refused to tell him which team he would join. They said, "Oh, come sign up for a job. There's a rotation system, matching system, blah blah blah. Sign on the dotted line first, then we'll figure out what's a good project for you."
Partly because it's a good company—his parents are proud of him for getting a job at this company—this student joined hoping to work on an exciting AI project. After he signed on the dotted line, he was assigned to work on the backend Java payment processing system of the company. Nothing against anyone that wants to do Java backend payment processing systems—I think they're great—but this was an AI student that did not get matched to an AI project. For about a year, he was really frustrated, and he actually left this company after about a year.
The unfortunate thing is I told this story in CS230 some years back, and then after I was already telling the story in this class, a couple of years later, another student in CS230 went through the same experience with the same company. Not Java backend payment processing, but a different project. I think this effect of trying to figure out who you'll be actually working with day-to-day and making sure you're surrounded by people that inspire you and work on exciting projects is important.
To be completely candid, if a company refuses to tell you what team you'll be assigned to, that does raise a question in my mind. Instead of working for the company with the hottest brand, sometimes if you find a really good team with really hardworking, knowledgeable, smart people trying to do good with AI, but the company logo just isn't as hot, I think that often means you actually learn faster and progress your career better. You don't learn from the excitement of the company logo when you walk through the door. You learn from the people you deal with day-to-day. So I just urge you to use that as a huge criterion in your selection process.
My number one piece of advice is: it has become much easier than ever before to build powerful software, and to build it faster. What that means is, do be responsible. Don't build software that hurts others. At the same time, there are so many things that each of you can build. I find the number of ideas out in the world is much greater than the number of people with the skill to build them. I know that finding jobs has gotten tougher for fresh college grads. At the same time, a lot of teams just can't find enough skilled people. There are a lot of projects in the world that, if you don't build them, no one else will build either. So long as you don't harm others—be responsible—there are a lot of things for which you don't need to wait for permission. You don't need to wait for someone else to do it first. The cost of failure is much lower than before: you waste a weekend but learn something. That seems fine to me.
So going out, trying things, and building lots of things is the number one most important thing I think will help your careers. I'm going to say one last thing that is considered not politically correct in some circles, but I'll say it anyway: in some circles, it has become not politically correct to encourage others to work hard. I'm going to encourage you to work hard.
Now, I think the reason some people don't like that is because there are some people in a phase of life where they're not in a position to work hard. So right after my children were born, I was not working hard for a short period of time. And there are people, because of an injury, disability, whatever very valid reasons, they're not in a position to work hard at that moment in time. We should respect them, support them, make sure they're well taken care of, even though they're not working hard.
Having said that, for all of my, say, PhD students that became very successful, I saw every single one of them work incredibly hard. I mean, the 2:00 AM sitting up doing hyperparameter tuning—been there, done that. Still doing it some days. If you are fortunate enough to be in a position in life where you can work really hard, there are so many opportunities to do things right now. If you get excited, as I do, spending evenings and weekends coding and building stuff and getting user feedback—if you lean in and do those things, it will increase your odds of being really successful. People that work hard get a lot more done. We should also respect people that don't, and people that aren't in a position to do so. But between watching some dumb TV show versus firing up your agentic coder on a weekend to try something... I'm going to choose the latter almost every time. Unless I'm watching a show with my kids.
All right. So those were the main things I wanted to say. What I want to do now is hand the stage over to my good friend Lawrence Moroney, who will share a lot more career advice in AI. Just a quick intro: I've known Lawrence for a long time. He's done a lot of online education work, sometimes with me and my teams. Taught a lot of people TensorFlow, taught a lot of people PyTorch. He was lead AI advocate at Google for many years, and now runs a group at Arm. I've also enjoyed quite a few of his books; this is one of them. He recently also published a new book on PyTorch—an excellent introduction to PyTorch. He is a very sought-after speaker in many circles. So I was very grateful when he agreed to come speak to us. So...
The pleasure's all mine. I just want to reinforce something that Andrew was talking about earlier on, about how choosing the people that you work with is very important. But I also want to show it from the other way around: the company, when they're interviewing you, is also choosing you. And the good companies really want to choose the people that they work with, too.
I've been doing a lot of mentoring of young people over the last 18 months who are hunting for careers for themselves. I want to tell the story of one young man. This guy: very well educated, great experience, super elite coder. He could do every challenge that was in front of him. And he got laid off from his job in April. He worked in medical software and the medical software business has been changing drastically. Funding has been cut by the federal government in a number of areas and he got laid off from his job. With his experience, with his ability, with his skills, he thought that it would be very easy for him to find another job.
The poor young guy had a really terrible April. He got laid off from his job in April. Immediately before that, his girlfriend had broken up with him, and then a couple of weeks later, his dog died. So he was not in a good place. So I sat down with him after a couple of months and took a look. He had a spreadsheet of jobs that he was applying to, and he had over 300 jobs that he was tracking in the spreadsheet. In a number of these jobs, he actually got into the interview process and he went very deep in the interview process with companies like Meta, Microsoft, and one of the other large tech companies where you do lots and lots of interview loops.
Every time towards the end of the loop, he knew he did a great loop. He solved all the coding. He had great conversations with the people—or at least he thought he had. And then every time within a day, the recruiter would call him and say, "No, you didn't get the job." And it was heartbreaking. Like I said, 300-plus jobs he had been tracking.
So I started working with him to do some mock interviews and fine-tuning. Oh, it was Jeff Bezos' company, not Amazon—that was one of the other big tech companies that he'd interviewed with. I started working through with him and doing some test interviews. Terrific candidate. Couldn't figure out what was going wrong until I decided to try and do a different sort of interview where I gave him a really tough interview. I gave him some tough LeetCode. I gave him some really obscure corner cases in his coding and I saw how he reacted.
How he reacted followed the advice that was given to him in the recruiting pamphlets. A lot of these recruiting pamphlets will say things like: "You're going to have an opportunity to share an opinion, and you've got to stand your ground. You've got to have a backbone. Don't bend." His interpretation of that was to be really, really tough. So I would pick holes in his code, pick corner cases where things might not work, and give him a sort of crisis test. And this advice that he'd been given to stand his ground ended up making him kind of hostile in these interview environments.
I was looking at this then from the point of view of what Andrew was just talking about, where it's a case of: hey, good people, good teams, people that you can work together with. From the interviewer perspective, if I'm managing this team, this person is that cliché 10x engineer, but I don't want him anywhere near my team because of this attitude. We worked on that. We fine-tuned it. And the strange part is he's a really, really nice guy. It's just this was the advice he was given and he followed that advice and he failed so many interviews as a result.
The next job that he was interviewing for was at a company where teamwork is very, very highly valued. And the good news is he got the job at that company. He's now working there. He doubled his salary from the job he was laid off from. Now he looks back on it—he had six months of unemployment—but at the time, when he was going through all of that, it was a very difficult period for him. The flip side of it: if you're looking at a company, looking at the people you'll be working with is very important. But also realize they are looking at you in the same way. So if you've gone to tech interview coaching and they gave you that advice to stand your ground and have a backbone, it's good to do that, but don't be a jerk while you're doing so.
Can you see my slides? Okay. So I'm Lawrence. I've been working in tech for more decades than ChatGPT—or Strawberry—thinks there are. So I've worked in many of the big tech companies. I spent many years at Microsoft, spent many years at Google, and also worked in places like Reuters. I've done a lot of work in startups, both in this country and abroad.
What I really want to talk about today is: think about what the career landscape looks like today, particularly in AI. First of all, what Andrew said about being in Stanford: you've got the ability to make use of the networks that you have in Stanford, make use of the prestige that you have. I say use every weapon you have because unfortunately the landscape right now is not ideal. We've gone through some very difficult times. All you have to do is look at the news and you can see massive tech layoffs, slowing hiring in tech, and lots of stuff like that. But it's not necessarily a bad thing if you do it the right way.
So let's have a quick job market reality check. Actually, out of interest... Are you juniors? Are you graduating this year or next year? Third year of four? Third year of three. So you're going to be graduating this coming summer. How many people are already looking for jobs? Okay, quite a few of you. How many people have had success? Nobody. Oh, one. Okay, sort of. Okay, that's good.
So you're probably seeing some of these things, the signals out there. Junior hiring slowing significantly. When I say junior, I mean graduate level. High-profile layoffs are dominating the headlines. I was at Google a couple of years ago when they had the biggest layoff they'd ever had. We're seeing layoffs at the likes of Amazon, Microsoft, other companies like that. It feels that entry-level positions are scarce—and I'm underlining the word "feels" there, and I want to get into that in a little bit more detail later. And also competition is fierce.
But my question is: should you worry? And I say no. Because if you can approach things in the right way, if you can approach the job hunting thing in the right way—particularly understanding how rapidly the AI landscape is changing—then I think people with the right mindset will thrive.
So what do I mean by that? As Andrew had mentioned, the AI hiring landscape is changing because the AI industry is changing. I actually first got involved in AI way back in 1992. I worked in it for a little while just before the AI winter. Everything failed drastically. But I got bitten by the AI bug and then in 2015, when Google was launching TensorFlow, I got pulled right back into it. Became part of the whole AI boom, launching TensorFlow, advocating TensorFlow to millions of people and seeing the changes that happened.
But around 2021, 2022, we had a global pandemic. The global pandemic caused a massive industrial slowdown. This massive industrial slowdown meant that companies had to start pivoting towards things that drove revenue, and directly drove revenue. At Google, TensorFlow was an open source product. It didn't directly drive revenue. We began to scale back. Every company in the world also scaled back on hiring at this time.
Then we get to about 2022, 2023. What happens? We begin to come out of the global pandemic. We begin to realize all industries have this massive logjam of non-hiring that they had done. And we're also entering a time where AI was exploding on the scene thanks to the work of people like Andrew. The world was pivoting and changing to be AI-first in just about everything. And every company needed to hire like crazy.
Every company hiring like crazy in 2022, 2023 meant that most companies ended up over-hiring. And what that generally meant was that people who were not qualified for higher positions got those higher positions anyway, because you had to enter into a bidding war just to be able to get talent. You ended up having talent grabs, and you ended up having stories like the one Andrew told, where it's a case of: here's a person with AI talent, let's grab them, let's throw money at them, let's have them come work for us, and then we'll figure out what we want to do. So as a result, in 2022, 2023, all of this massive over-hiring happens because of AI and because of the COVID logjam.
And then 2024, 2025 is the great wakeup. Where a lot of companies realize this over-hiring that they had done. They have ended up with a lot of people who are underqualified for the job that they were doing. A lot of people ended up getting hired just because they had AI on their resume. And there's a big adjustment going on.
Oh, you're not seeing my slides. Okay. I think it's 'cause my power... I'm not plugged into the mains. And in light of this big adjustment... There we go. In light of this big adjustment, what has happened is that a lot of companies are now much more cautious about the AI skills they're hiring for. And if you're coming into that with the right mindset and understanding of that, realize opportunity is still there, and it is there massively if you approach it strategically. So what I want to talk through today is how you can do exactly that.
I see three pillars of success in the business world, and particularly in the AI business world. Nowadays you can't just have AI on your resume and get over-hired. Nowadays, not only do you have to be able to tell people that you have the mindset of these three pillars of success, you also have to be able to show it. And to be able to show it, there has actually never been a better time. As Andrew demonstrated earlier on, the ability to vibe code things into existence—he doesn't like the term vibe code, and I kind of agree with him—the ability to prompt things into existence allows you to show better than ever before.
He was talking earlier on about product managers, and about the time he got engineers to be product managers and those engineers ended up being really bad product managers. I actually interviewed at Google twice and failed twice, despite being very successful at Microsoft, having authored 20-plus books, and having taught college courses. I interviewed at Google twice and failed twice because I was interviewing to be a product manager. Then, when I interviewed to be an engineer, they hired me and they were like, "Why didn't you try to join us years ago?"
So a lot of it is just being a good engineer—you've got the ability to do that and show that nowadays. And with that ratio of engineer to product manager changing, engineering skills are also far more valuable than ever.
So, the three pillars of success. Number one: Understanding in Depth. I mean this in two different ways. The first is academic: having an in-depth understanding of machine learning and of particular model architectures, being able to read papers, understand what's in them, and in particular understand how to take that stuff and put it to work. The second part of understanding in depth is really having your finger on the pulse of particular trends, and knowing where the signal-to-noise ratio favors signal in those trends.
Secondly, and also very importantly, is Business Focus. Andrew said something politically incorrect earlier on. I'm going to also say a similar politically incorrect thing regarding hard work. Hard work is such a nebulous term that I would say: think about hard work in terms of "you are what you measure." There is the whole trend out there—I'm trying to remember, was it 996 or is it 669? 996, right? 9:00 a.m. to 9:00 p.m., 6 days a week is a metric of hard work. It's not. That's not a metric of hard work; that's a metric of time spent.
So I would encourage everybody, in the same way Andrew did, to think about hard work. But what counts as hard work comes down to how you measure it. You can work eight hours a day and be incredibly productive. You can work six hours a day and be incredibly productive. I personally measure it by output: the things that I have created in the time that I spent.
I joke a lot, but it's true that I've written a lot of books. Andrew held up one that he helped me write a little bit. I actually wrote that book in about two months. People say, "How do you have time with your jobs? You must work 16 hours a day." But the key to me being able to write books is baseball. Any baseball fans here? If you sit down and try to watch baseball on TV, a match can take like three and a half or four hours. So all of my writing I tend to do in baseball season. If I'm going to sit down—I like the Mariners (I'm from Seattle) and I like the Dodgers... nobody booed, okay good—usually one of those is going to be playing at 7:00 at night. So instead of sitting in front of the TV just watching baseball mindlessly, I'll actually be writing a book while baseball's on in the background. It's a very slow-moving game. That's the hard work in this case.
I would encourage you to try to find areas where you can work hard and produce output. And that's the second pillar here: Business Focus. The output that you produce—align that output with the business focus that you want to have and with the work that you want to do. There's an old saying: "Don't dress for the job you have, dress for the one you want." I would say a new angle on that saying would be: "Don't let your output be for the job you have, let your output be for the job you want."
If I go back to when I spoke about how I failed twice at Google... to get in the third time, when I did get in, I had actually decided to approach it in a different way. I was interviewing at the time for their cloud team. They were just really launching cloud, and I had just written a book on Java. So I decided to see what I could do with Java in their cloud. I ended up writing a Java application that ran in their cloud for predicting stock prices using technical analytics and all that kind of stuff. When I got to the interview, instead of them asking me stupid questions like "How many golf balls can fit in a bus?", they saw this code. I was producing output for the job I wanted. I put this code on my resume, and my entire interview loop was them asking me about my code. It put the power on me. It gave me the power to communicate about things that I knew, as opposed to going in blind to somebody asking me random questions.
It's the same thing I would say in the AI world. The business focus, the ability for you now to prompt code into existence, to prompt products into existence. If you can build those products and line them up with the thing that it is that you want to do—be it at Google or Meta or a startup—and have that in-depth understanding not just of your code, but how it aligns to that business. This is a pillar of success in this time and age.
And even though the signals make it look like there aren't a lot of jobs out there, there are. What there isn't a lot of is a good match between the jobs and the people to fill them. And then of course, the third pillar: Bias Towards Delivery. Ideas are cheap. Execution is everything. I've interviewed many people who came in with very fluffy ideas and no way to ground them. I've interviewed people who came in with half-baked ideas that they grounded very, very well. Guess which ones got the job?
So, a quick pivot. What's it actually like working in AI right now? As recently as two or three years ago, working in AI was: if you can do a thing, you're great. If you can build an image classifier, you're golden. We'll throw six-figure salaries and massive stock benefits at you. Unfortunately, that's not the case anymore. A lot of what you'll see today is the P-word: Production. What can you do for production? Whether it's building new models, optimizing models, or understanding users—UX is really important—everything is geared towards production. Everything is biased towards production.
The history that I told you about going from the pandemic into the over-hiring phase... businesses have pulled back and are optimized towards the bottom line. I have an old saying that "the bottom line is that the bottom line is the bottom line." And this is the environment that we're in today. If you can come in with that mindset when you're talking with companies, that's one of the keys to open the door.
One of the things I've seen is the field maturing from "it used to be really nice that we could do cool things and build cool things" to "now, really build useful things." Those useful things can be cool too, by the way. And the results of them can be cool. It's not just coolness for coolness' sake; really focus on delivery, focus on being able to provide value, and then the coolness will follow.
So, four realities. Number one: unfortunately nowadays Business Focus is Non-Negotiable. Let me be a little bit politically incorrect here again for a moment. I've been working for most of the last 35 years in tech. I would say for most of the last 10 years a lot of large companies, particularly in Silicon Valley, have really focused on developing their people above everything. Part of developing their people was bringing their "entire self" to work. Part of bringing their entire self to work was bringing the things that they care about outside of work. And that led to a lot of activism within companies.
Now please let me underline this: there is nothing wrong with activism. There is nothing wrong with wanting to support causes of justice. There is absolutely nothing wrong with that. But the over-indexing on that has, in my experience, led to a lot of companies getting trapped by having to support activism above business. You've probably seen an example about two years ago where activists in Google broke into the Google Cloud head's office because they were protesting a country that Google Cloud was doing business with. They broke into his office, had a sit-in, and they used the bathroom all over his desk. This is where activism got out of hand.
As a result, the unfortunate truth is the good signals in that activism are now being lost because of those actions. People are being laid off. People are losing jobs. Activism is being stifled. Business focus has become non-negotiable. There's a bit of a pendulum swing going on. The pendulum that had swung too far towards allowing people to bring their full selves to work is now swinging back in the other direction. It is that ongoing pendulum. You have to realize going into companies now that business focus is absolutely non-negotiable.
Secondly, Risk Mitigation is Part of the Job. I think coming into AI with a focus and a mindset around understanding the risks of transforming a particular business process into an AI-oriented one, and helping to mitigate those risks, is really powerful. I would argue that in an interview environment, that's the number one skill to have: the mindset of "You are doing a business transformation from heuristic computing to intelligent computing. Here are the risks, here's how you mitigate those risks, and here's the mindset behind that."
The third part is Responsibility is Evolving. Now responsibility in AI has again changed from a very fluffy definition of "let's make sure that the AI works for everybody" to a definition of "let's make sure that the AI works, let's make sure that it drives the business, and then let's make sure that it works for everybody." Often that has been inverted over the last few years and that has led to some famous documented disasters. Let me share one with you.
Everybody knows text-to-image generation, right? I want to share something that happened a couple of years ago with Gemini. I was doing some testing around this, and I was working heavily on responsible AI. Part of responsible AI is that you want to be representative of people. When you're building something, like if you're Google and you're indexing information, you really want to make sure that you don't reinforce negative biases. And if you're generating images, it's very easy to reinforce negative biases. So for example, if I say, "Give me an image of a doctor," and the training set primarily has men as doctors, it's more likely to give me a man. If I say, "Give me an image of a nurse," and the training set is more likely to have women as nurses, it's more likely to give me an image of a woman. That reinforces a negative stereotype.
So I wanted to do a test of how Google were trying to overcome that. So I said, "Give me a young Asian woman in a cornfield wearing a summer dress and a straw hat looking intently at her iPhone." And it gave me these beautiful images. It did a really nice job. I said, "This is a virtual actress I've been working with." Then I asked it to give me an Indian one. "A young Indian woman." Same prompt. And it gave me beautiful images of a young Indian woman.
Then I was like, "Okay, what if I want her to be Black?" For some reason, it only gave me three. I'm not sure why, but it still adhered to the prompt. So the responsibility was looking really, really good. Then I asked it to give me a Latina. It gave me four. Yep, she looks pretty Latina—maybe the one on the bottom left looks a little bit like Hermione Granger. But on the whole looks pretty good.
Then I asked it to give me a Caucasian. What do you think happened? "While I understand your request, I am unable to generate images of people as this could potentially lead to harmful stereotypes and biases." Right. This was a very poorly implemented safety filter where the safety filter in this case was looking for the word "Caucasian" or looking for the word "white" and refusing. I was like, "Okay, let me test the filter a little bit." I said, "Instead of Caucasian, let me try white." And: "While I am able to fulfill your request, I'm not currently generating images of people." It lied to my face, right? Because it had just generated images of people.
Anybody know the hack that I used to get it to work? This is a funny one. I asked it to generate an Irish woman. What do you think it did? Right, it gave me this image of an Irish woman, no problem, in a summer dress, straw hat looking intently at her phone. But what do you notice about this image? She's got red hair in every image, right? I grew up in Ireland. Ireland does have the highest proportion of redheads in the world—it's about 8%. But if you're going to draw an image of a person and associate a particular ethnicity with a color of hair, you can begin to see this is massively problematic. There are areas, I believe, in China where the description of a demon is a red-headed person.
So what ended up happening here from the responsible AI perspective was one very narrow view of the world of what is responsible and what is not responsible ended up taking over the model, ended up damaging the reputation of the model and damaging the reputation of the company as a result. In this case, it's borderline offensive to draw all Irish people as having red hair. But that never even entered into the mindset of those that were building the safety filters.
So when I talk about "responsibility is evolving," that's the direction that I want you to think about. Responsible AI has moved out of very fluffy social issues and into more hardline things that are associated with the business and prevent damaging the reputation of the business. There's a lot of great research out there around responsible AI, and that's the stuff that's being rolled into products. And then of course, like I just showed with Gemini, learning from mistakes.
Yes, I've also heard that.
[Question regarding mix of races and ethnicities with historical context.]
Yeah. So the question was about cases where races and ethnicities were mixed in historical contexts; it was the same problem. So, for example, if you had a prompt that said, "Draw me a samurai," the engine that changed the prompt to make sure the output was fair would end up saying, "Give me a mixture of samurai of diverse backgrounds." And then you'd have male and female samurai, samurai of different races. It was the same prompt rewriting that ended up causing the damage I just demonstrated. The idea was to intercept your prompts so that the output of the model would be more fair when it comes to diverse representation.
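To make that concrete, here is a deliberately naive sketch of the kind of keyword blocking plus prompt rewriting being described. This is hypothetical illustration code, not Google's actual implementation:

```python
# A deliberately naive sketch of the kind of prompt interception described
# above: crude keyword blocking plus diversity rewriting. Hypothetical
# illustration code only, NOT the actual Gemini implementation.

from typing import Optional

BLOCKED_TERMS = {"caucasian", "white"}  # keyword matching is the core flaw

def naive_safety_rewrite(prompt: str) -> Optional[str]:
    """Return a rewritten prompt, or None to refuse the request entirely."""
    lowered = prompt.lower()

    # 1. Refuse if any blocked keyword appears, regardless of context.
    if any(term in lowered for term in BLOCKED_TERMS):
        return None  # "I am unable to generate images of people..."

    # 2. Otherwise, inject diversity instructions into person-related prompts.
    if any(word in lowered for word in ("person", "woman", "man", "samurai")):
        return prompt + " (depict people of diverse backgrounds)"

    return prompt

# The inconsistency falls out immediately:
print(naive_safety_rewrite("a young Irish woman in a cornfield"))      # rewritten, allowed
print(naive_safety_rewrite("a young Caucasian woman in a cornfield"))  # None: refused
```

Nothing in a filter like this knows anything about context, which is exactly why it would refuse "Caucasian" while happily rewriting "Irish."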
So it was a very naive solution that ended up being rolled in. That was a few years ago, and they've massively improved it since then. But that's what I mean when I talk about how responsibility is evolving if you're working in the AI space nowadays. You can't just get away with that stuff anymore. That Gemini example is a good lesson. The mindset of "you will make mistakes" involves constant, ongoing learning. And going back to the people point that Andrew made earlier on: the people around you will make mistakes too. So having the ability to give them grace when they make mistakes, and to work through those mistakes and move on, is really, really important and is a reality of AI at work.
So now let's talk about Vibe Coding. Let's talk about the whole idea of generating code. Now the meme is out there that it makes engineers less useful by the fact that somebody can just prompt code into existence. There is no smoke without fire of course, but I would say don't let that meme get you down because when you start peeling into these things, that is ultimately not the truth. The more skilled you are as an engineer, the better you become at using this type of prompt-to-coding.
I always like to think about this and to try and put you into the role of being a trusted advisor for the people that you speak with. Whether you're interviewing with somebody, get yourself into the mindset of being a trusted advisor of the company. When you want to get into the idea of being a trusted advisor, then you really need to understand the implications of generated code. And nobody can understand the implications of generated code better than an engineer.
The metric that I always like to use around that is Technical Debt. Quick question: are you familiar with the phrase technical debt? Nobody? Okay. Andrew and I were doing a conference in New York on Friday, and I used the phrase and saw a lot of blank faces. I hadn't realized that people don't understand what technical debt is, so let me take a moment to explain it.
Think about debt the way you normally would. Say you borrow half a million dollars to buy a house on a 30-year mortgage. With all the interest that you pay, the total comes to about double. So you have 30 years of home ownership at a cost of about $1 million in debt. That is probably a good debt to take on, because the value of the house will increase, you're not paying rent, and you're getting more than a million dollars' worth of value out of it. A bad debt would be an impulse purchase on a high-interest credit card—that pair of shoes, $200; by the time I've paid them off, it's $500. You're not getting $500 worth of benefit out of those shoes.
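As a quick sanity check on that arithmetic, here is a minimal sketch; the 6% annual rate is my assumption, chosen to show how the total paid can roughly double the amount borrowed:

```python
# Quick check of the mortgage arithmetic above. The 6% annual rate is an
# assumption chosen for illustration, not a real quote.

principal = 500_000      # amount borrowed
annual_rate = 0.06       # assumed fixed interest rate
n_payments = 30 * 12     # 30-year term, paid monthly

r = annual_rate / 12
monthly_payment = principal * r / (1 - (1 + r) ** -n_payments)
total_paid = monthly_payment * n_payments

print(f"Monthly payment: ${monthly_payment:,.0f}")        # about $3,000
print(f"Total paid over 30 years: ${total_paid:,.0f}")    # about $1.08M, roughly double
```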
Approaching software development with the same mindset is the right way to go. Every time you build something, you take on debt. It doesn't matter how good it is. There are always going to be bugs, support, new requirements coming in, the need to market it, feedback. All of these things are debt. The only way to avoid debt is to do nothing. So your mindset should be: when you are creating a thing—whether you're coding it yourself or whether you're vibe coding it—you are increasing your amount of technical debt. So the question becomes: is it worth the technical debt that you're taking on?
What does technical debt generally look like? Bugs that you need to fix, people that you need to convince to help you maintain the code, documentation, features that you need to add. Think about those as extra work that you need to do beyond your current work. That's the debt that you're taking on.
To me, that would be the number one piece of advice that I give, and it's the one that I give every time I work with companies around vibe coding. A lot of companies that I consult with, particularly startups, just want to get straight into opening Gemini or GPT or Anthropic and start churning code out. Let's get to a prototype phase very quickly, let's go to investors. It's great. But debt, debt, debt is always going to be there. How do you manage your debt? A good financial person manages their debt and they become rich. A good coder manages their technical debt and they become rich also.
So how do you get the good technical debt? Well, number one is your objectives. What are they? Are they clear? And have you met them? You knew what you needed to build; you didn't just fire up ChatGPT and start spinning code out. Think about how you build it, and let AI help you build it faster. I'm kind of working on my own little startup at the moment in the movie-making space, and I've been using code generation almost completely for that. But what I've ended up doing for my "clear objectives met" box here is that I've built this application, tested it, and thrown it away. Started again, tested it, thrown it away. Each time, my requirements have been improving.
Is there business value delivered? I've seen people vibe coding for hours on things like Replit to build a really cool website and then the answer was: "So what? How's this helping the business?" It's really cool, Mr. VP, I know you've never written a line of code in your life and it's really cool that you built a website now, but so what?
And then of course, the most understated part of this is Human Understanding. The worst technical debt that you can take on is delivering code that nobody understands. Only you understand it, and then you quit and get a better job, and the company is left dependent on that code. Making sure that your code is understandable, through documentation, through clear algorithms, and through spending some time poring over it so that even simple things like variable names make sense, is a really important way to avoid bad technical debt.
Bad technical debt: my favorite one is the classic "solution looking for a problem." If the only tool you have is a hammer, every problem looks like a nail. Spaghetti code, of course, poorly structured stuff—particularly when you prompt and prompt and prompt again. My favorite one at the moment, which I'm really struggling with, is that I'm building a macOS application. Anybody ever build in SwiftUI on macOS? SwiftUI is the default UI framework that Apple uses for building for macOS as well as iPhone. But when you look at the training sets used to train these models, the vast majority of the code is iPhone code, not macOS code. Even though I'm in Xcode and I've created a macOS app and I'm talking to it in Xcode, it still gives me iOS code. And then if I try to change it using prompting, you end up spiraling into spaghetti code, and you have to change a lot of this stuff manually.
So, the Hype Cycle. Hype is the most amazing force. The two fields that I work in that are super hot at the moment and full of hype are AI and Crypto. You should see my Twitter feed—the amount of nonsense that's out there is incredible.
One of the things that I would say about the anatomy of hype that you really need to think about is if you are consuming news via social media, the currency of social media is engagement. Accuracy is not the currency of social media. Even LinkedIn is absolutely overwhelmed with influencers posting things that they've used Gemini or GPT to write so that they can get engagement and likes. The engine itself is engineered to reward those types of posts.
If you are the kind of person who can filter the signal from the noise, and who can then encourage others around the signal and not the noise, that puts you at a huge advantage. It's not as quickly and easily tangible as likes and engagement on social media. But when you're in a one-to-one environment like a job interview, or if you are in a job and you are bringing that signal to the table instead of the noise, that makes you immensely valuable.
I want to tell one story. Last year, when agents started becoming the keyword and everybody was saying "in 2025, agents will be the word of the year," a company in Europe asked me to help them implement an agent. So let me ask you a question. If a company came up to you and said, "Please help me implement an agent," what's the correct first question to ask them?
What is an agent for you?
Okay, that's good. "What is an agent for you?" I'd actually have a more fundamental question. Yep?
What do you want to do?
Okay, even more fundamental. My question was: Why?
Why? Peel that apart. I spoke with the CEO and he was like, "Oh, yeah, you know, everybody's telling me that I'm going to save business costs and I'm going to be able to do these amazing things." And I'm like, "Well, who told you that?" "Oh, yeah. I read this thing on LinkedIn and I saw this thing on Twitter." We ended up having that conversation and I had to keep peeling apart until we really got to the essence of what he wanted to do.
What he really wanted to do, when we took all domain knowledge about AI aside, was that he wanted to make his salespeople more efficient. I was like, "Okay, you want to make your salespeople more efficient. Nowhere in that sentence do I hear the word AI and nowhere in that sentence do I hear the word agent. So now as a trusted advisor, let me see what I can do to help your salespeople become more efficient."
One of the things you realize a good salesperson has to do is their homework. Before you have a sales call, you need to check the person's background, check the company, check the needs of the company. I asked the salespeople, "What do you hate most about your job?" and they were like, "Well, I hate the fact that I have to waste all my time going to visit these company websites, going to look up people on LinkedIn, and every website is structured differently." They were spending about 80% of their time researching and about 20% of their time selling. So we set a goal to make the salespeople 20% more efficient, and then we could start rolling out the ideas of AI and agentic AI.
Quick question, what's the difference between AI and Agentic AI?
Excellent. Yeah. So Agentic AI is really about breaking the problem down into steps. With agentic AI in particular, I find there's a set pattern of steps that, if you follow them, you end up with the whole idea of an agent.
The first of these steps is to Understand Intent. LLMs are really good at understanding. So if the first step is to understand intent—"I want to meet Bob Smith and sell widgets to Bob Smith"—you can use an LLM to do that.
The second part is Planning. You declare to an agent what tools are available to it: browsing the web, searching the web, etc. An LLM is very good at breaking that down into the steps that it needs to execute a plan.
Once it's figured out that plan, it uses the tools to get to a result. And then once it has the result, the fourth and final step is to Reflect on that result. Did we meet the intent? Yes or no? If we didn't, then go back through the loop. Every agent really breaks down into those things. And if you think about breaking any problem down into those four steps, that's when you start building an agent.
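Sketched as code, that understand, plan, act, reflect loop might look roughly like this. The `call_llm` function and the toy tools are hypothetical stand-ins, not any real framework's API:

```python
# A sketch of the understand -> plan -> act -> reflect loop described above.
# `call_llm` is whatever LLM function you plug in; the tools are toy stand-ins.

from typing import Callable, Dict

def run_agent(
    user_request: str,
    call_llm: Callable[[str], str],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 3,
) -> str:
    # 1. Understand intent: restate the user's goal.
    intent = call_llm(f"Restate the user's goal in one sentence: {user_request}")

    for _ in range(max_iterations):
        # 2. Plan: ask the LLM to break the goal into tool calls.
        plan = call_llm(
            f"Goal: {intent}\n"
            f"Available tools: {list(tools)}\n"
            "List the tool calls needed, one per line, as 'tool: argument'."
        )

        # 3. Act: execute each planned tool call and collect the results.
        results = []
        for line in plan.splitlines():
            name, _, arg = line.partition(":")
            if name.strip() in tools:
                results.append(tools[name.strip()](arg.strip()))

        # 4. Reflect: did the results satisfy the intent? If not, loop again.
        verdict = call_llm(f"Goal: {intent}\nResults: {results}\nReply DONE or RETRY.")
        if verdict.strip().upper().startswith("DONE"):
            return call_llm(f"Summarize these findings for the user: {results}")

    return "Could not complete the request within the iteration budget."

# Example wiring with toy tools (a real agent would use an actual LLM API):
toy_tools = {
    "search_web": lambda q: f"(search results for {q!r})",
    "browse": lambda url: f"(contents of {url})",
}
```

The point isn't the specific prompts; it's that each of the four steps maps onto something you can reason about, test, and scope separately.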
So we broke it down into those steps. We built a pilot for the salespeople of this company, and they ended up saving between 10 and 15% of their wasted time. The doctrine of unintended consequences hit after this, though. The unintended consequence was that the salespeople were much happier, because the average salesperson was making more sales in a given week, earning more money, and their job became a little bit less miserable. But if you go in being hype-led—"Oh, build an agent for the thing"—without really peeling apart the business requirements, the why, the what, the how, this company would just have been lost in hype.
You've probably seen reports recently—I think McKinsey put one out last week—showing that about 85% of AI projects at companies fail. Part of the main reason for that is that they're not well-scoped. People are jumping on the hype bandwagon.
Other recent hype examples you've probably seen: "Software engineering is dead." My personal favorite: "Hollywood is dead" or "AGI by year-end."
I was in Saudi Arabia this time last year at a thing called FII, at a dinner there. I sat beside the CEO of a company that I'm not going to name, but this was the CEO of a generative AI company. At that time, he was showing everybody around the table this text-to-video thing that he'd done: he could put in a text prompt and get about six seconds' worth of video out of it. Two years ago, that was hot stuff. Nowadays, obviously, it's quite passé; anybody can do it.
But he made a comment at that table, and there were a lot of media executives at that table. He was like, "By this time next year, from a single prompt, we'll be able to do 90 minutes of video. And uh, so bye-bye Hollywood." You know, so the whole "Hollywood is dead" meme, I think, came out of that. First of all, we can't do 90 minutes, even two years later, from a prompt. And even if you did, what kind of prompt would be able to tell you a full story of a movie?
So this type of hype leads to engagement. This type of hype leads to attention. But my encouragement to you is to peel that apart. Look for the signal. Ask the why question. Ask the what question and move on from there.
So, becoming that trusted adviser... The world is drowning in hype. How do you do it? Look at the trends, evaluate them objectively. Look at the genuine opportunities that are out there. There are fashionable distractions—I don't know what the next one is going to be—but there are these distractions that are out there that will get you lots of engagement on social media. Ignore them and ignore the people that are leaning into them.
And then really lean into your skills at explaining technical reality to leadership. One skill that somebody coached me in once, which I thought was really interesting because it sounded wrong but ended up being right, was: whenever you see something like this, try to figure out how to make it as mundane as possible. When you can figure out how to make it as mundane as possible, then you really begin to build the grounding for being able to explain it in detail, in ways that people need to understand.
You know, I think Gemini 3 was released today, but there were leaks earlier this week. One person kind of leaked, "I built a Minecraft clone in a prompt," you know, that kind of stuff. This is the opposite of mundane, right? This was massively hyping the thing, massively showing off—and of course they didn't. They built a flashy demo; they didn't really build a Minecraft clone. But the idea here is: can you peel that apart and think about what the mundane things are that are happening here?
The one I've been working with a lot recently is video. So for text-to-video prompts, as I've mentioned, instead of the magical "you can do whatever you want, all nice and fluffy, Hollywood is dead," what is the mundane element of text-to-video? The mundane element is that when you train a model to create video from a text prompt, what it is doing is creating a number of successive frames, and each successive frame is slightly different from the frame before. You've trained the model on video so that if in frame one the person's hand is here and in frame two it's there, then, given a matching prompt, it can predict which way the hand moves.
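To make that mundane framing concrete, here is a toy sketch of video generation as successive-frame prediction conditioned on a prompt. The embed() and predict_next_frame() functions are hypothetical stand-ins for a trained model, not any real text-to-video system; only the shape of the loop is the point.

```python
# Toy sketch: "video" as a sequence of frames, each predicted from the previous
# frame plus a prompt embedding. embed() and predict_next_frame() are stand-ins
# for learned components, not a real model.
import numpy as np

rng = np.random.default_rng(0)
H, W, N_FRAMES = 64, 64, 24

def embed(prompt: str) -> np.ndarray:
    """Stand-in text encoder: hash the prompt into a fixed-size vector."""
    rng_p = np.random.default_rng(abs(hash(prompt)) % (2**32))
    return rng_p.normal(size=16)

def predict_next_frame(prev: np.ndarray, prompt_vec: np.ndarray) -> np.ndarray:
    """Stand-in for the learned model: nudge the previous frame slightly,
    conditioned on the prompt vector. A real model learns this from video."""
    drift = 0.01 * prompt_vec.mean()
    return np.clip(prev + drift + rng.normal(scale=0.005, size=prev.shape), 0, 1)

prompt_vec = embed("a hockey player takes a slapshot and scores")
frames = [rng.random((H, W, 3))]          # start from noise (or a seed image)
for _ in range(N_FRAMES - 1):
    frames.append(predict_next_frame(frames[-1], prompt_vec))

video = np.stack(frames)                  # (24, 64, 64, 3): each frame differs slightly
print(video.shape)
```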
And suddenly it's become a little bit more mundane, but suddenly your audience begins to understand it. And then the people who are experts in that specific field, not in the technical side of it, are the ones who will actually be able to step up and do brilliant things with it.
So that hype navigation strategy: filter actively, go deep on the fundamentals, get your slides to work, and then of course, keep your finger on the pulse. The hardest part of that, I think, is really keeping your finger on the pulse. That's when you have to wade into those cesspits of people just farming engagement and try to separate the signal from the noise. But I think it's really important to be able to do that, to be connected; reading papers is all very good, and the signal-to-noise ratio in papers is a lot better, but you also need to understand the landscape the people you are advising are in. They are the ones wading in the cesspools of X/Twitter and LinkedIn. There's nothing wrong with those platforms in and of themselves; the problem is the stuff that gets posted on them.
So the overall landscape is ripe with opportunity. Absolutely ripe with opportunity. I would encourage you, as Andrew did, to continue learning, to continue digging into what you can do, and to continue building. But there are risks ahead.
Does anybody remember the movie Titanic? Remember the famous phrase in it: "Iceberg, right ahead." Immediately before that, there's a scene in Titanic (if we weren't being filmed, I would show it, but I can't for copyright reasons) where the two guys up in the crow's nest are freezing and talking. The crow's nest at the top of the ship is where the spotters would be, to spot any icebergs ahead. Go back and watch the movie again; you'll see that all these two guys talk about is how cold they are. Then it cuts away to the crew of the ship, who are like, "Wait, aren't they supposed to have binoculars?" And the crew realize, "Oh, we left the binoculars behind in port."
That framing, the whole idea, was that they were so arrogant about moving forward that they didn't want to look out for any particular risks. And even though they had people whose job it was to look out for risks, they didn't properly equip or train them. That, to me, is a really good metaphor for where the AI industry is today. There are risks in front of us. Those risks, the B-word, the bubble word you're probably reading about in the news, are there.
To me, though, here is the opportunity and the way to think about a bubble. Most of you probably don't remember the Dotcom bubble of the early 2000s, but that was the biggest bubble in history. It burst, but we're still here. And the people who did dot com right not only survived, they thrived. Amazon, Google, you know, they did it right. They understood the fundamentals of what it was to build a dot com, and of what it was to build a business on it. And when the bubble of hype burst, they didn't go with it.
There was one website, I believe it was Pets.com, that had the mindset of "if you build it, they will come." They ran Super Bowl commercials for Pets.com, and they couldn't handle the traffic they got. That was the kind of site that just evaporated when the bubble burst. So that bubble in AI is likely coming; there is always a bubble. The companies that are doing AI right are the ones, like I said, that won't just survive the bubble; they will actually thrive post-bubble. And the people who are doing AI right, the folks in this room who are thinking about AI, how you bring it to your company, the advice you give your company, and leaning into that in the right way, will be the ones who not only avoid getting laid off if the bubble bursts but who thrive through and after it.
So, the anatomy of any bubble, and what I'm seeing in the AI one in particular, is this kind of pyramid. At the top is the hype I've been talking about. Just below that is massive VC investment. I'll be frank: I'm already seeing that drying up. Once upon a time, you could go out with anything that had AI written on it and get VC investment. Then you could go out and do anything with an LLM and get VC investment. Now they're far, far more cautious. I've been advising a lot of startups; the amounts being invested are being scaled back, and the things being invested in are changing.
So this second layer down, massive VC investment, is already beginning to vanish. Unrealistic valuations—companies that aren't making money being valued massively high. We all know who they are. We're beginning to see those unrealistic valuations being fed off of that hype. "Me too" products, where somebody does something and it's successful and everybody jumps on the bandwagon; we're also seeing them everywhere. We saw them throughout the Dotcom bubble.
And then right at the bottom is that real value. I probably shouldn't have done a triangle like this; it should be more an upside-down triangle, right? Because the real value here is small. I vibe-coded these slides into existence, so this is one of the technical debts I took on. But that kernel of real value is there, and the ones that build for that will be the ones that survive.
So the direction I see the AI industry going in over the next five years, and the direction I encourage you to start thinking about your skills in, is a bifurcation. I'm just going to be ornery and describe the two sides as "big" and "small."
Big AI will be what we see today: large language models getting bigger in the drive towards AGI. The Geminis, the Claudes, the OpenAIs of the world are going to continue driving bigger, with a "bigger is better" mindset aimed at achieving AGI or at achieving better business value. That's one side of the branch.
The other side of the branch I'm going to call "small." We've all seen open source models; I hate the term open source, so let me call them "open weights," or "self-hostable." These models are exploding onto the landscape. I read an article recently claiming that 80% of the companies in Y Combinator were using small models, Chinese ones in particular. The Chinese models are doing really well, probably because the overall landscape there isn't leaning into the large models the same way the West is. I see that bifurcation happening. China, I think, has a head start on the smaller models. That may last, it may not, I don't know. But the point is we're heading in that direction. Instead of big and small, let me now call them models that are hosted on your behalf by somebody else, like a GPT or a Gemini or a Claude, versus models that you can host yourself for your own needs.
The hosted side is where a bubble may burst sooner. The self-hosted side is underserved right now, and its bubble, if it comes, will come later. And the major skill I can see developers needing over the next two to three years on that side of the fence is fine-tuning.
So: the ability to take an open-weights model and fine-tune it for particular downstream tasks. Let me give one concrete example that I've personally experienced. I work a lot in Hollywood, and I've worked a lot with studios making movies. One studio in particular I was lucky enough to sell a movie to. It's still in pre-production; it'll probably be in pre-production forever. But one of the things I learned as part of that process is that IP in studios is so protected it's not even funny. Go and Google James Cameron, who created Avatar, and the lawsuits he's involved in: someone who apparently sent him a story many years ago about blue aliens is now suing him for billions of dollars because, obviously, there were blue aliens in Avatar. The level of IP protection in Hollywood is insane.
The opportunity with large language models is equally insane. A lot of the focus is on using them for creation, for storytelling, for rendering and all that. But actually, the major opportunity is analysis: taking synopses of movies and finding out what works and what doesn't. Why was this movie a hit and that one wasn't? What time of year was this one released that made it successful while that one wasn't? With margins on movies being razor thin, that kind of analysis is huge.
But to do that kind of analysis, you need to share the details of your movie with a large language model, and studios absolutely will not do that with a GPT or a Gemini or whatever, because they would be sharing their IP with a third party. Enter small models, which they can self-host, and which are getting smarter and smarter. The 7B model of today is as smart as the 50B model of yesterday, and a year from now the 7B model will be as smart as the 300B model of yesteryear. So studios are moving in the direction of building with small, self-hosted models, which they can then fine-tune on downstream tasks. It's similar anywhere privacy matters: law offices, medical offices, all of those kinds of things. So those types of skills are fundamentally important going forward.
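As a minimal sketch of what that fine-tuning skill can look like in practice, here is a LoRA-style run assuming the Hugging Face transformers, peft, and datasets libraries, an illustrative open-weights base model, and a hypothetical local file of synopses; the model name, file name, and hyperparameters are placeholders, not recommendations.

```python
# Minimal LoRA fine-tuning sketch for a self-hosted open-weights model.
# BASE and "synopses.jsonl" are illustrative placeholders; the private data
# never leaves your own machines, which is the whole point of self-hosting.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

BASE = "mistralai/Mistral-7B-v0.1"          # illustrative open-weights base model
tok = AutoTokenizer.from_pretrained(BASE)
tok.pad_token = tok.eos_token

model = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = get_peft_model(model, LoraConfig(
    r=16, lora_alpha=32, task_type="CAUSAL_LM",
    target_modules=["q_proj", "v_proj"],     # attention projections in Llama/Mistral-style models
))

# Hypothetical local JSONL file with a "text" field per example.
data = load_dataset("json", data_files="synopses.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=1024),
                remove_columns=data.column_names)

Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-synopsis-model",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=8,
                           num_train_epochs=1, learning_rate=2e-4,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```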
So that's the bifurcation that I'm seeing happening in AI. The sooner bubble, I think, is in the bigger, non-self-hosted. The later bubble is in the smaller, self-hosted. But either way, for you, for your career to avoid the impact of any bubble bursting, focus on the fundamentals, build those real solutions, understand the business side, and most of all, diversify your skills. Don't be that one-trick pony who only knows how to do one thing. I've worked with brilliant people who are fantastic at coding a particular API or particular framework, and then the industry moved on and they got left behind.
Okay. So yeah, when bubbles burst, the overall fallout is what I've spoken about a little already: funding evaporates, hiring freezes become layoffs, projects get cancelled, and talent floods the market.
Quick question. I have heard a lot about how [Nvidia hires], and they're very specific: they want people who are focused on one particular problem they have, so they require people to basically be put on that one [task]. So how do you think about that? How important is it to diversify skills versus actually focusing on, for example, LLMs versus computer vision, something very specific?
Right. So the question was that Nvidia in particular are hiring for a very specific, very narrow scenario, so how important is it to become an expert in a narrow scenario versus diversifying your skills? I would always argue it's still better to diversify your skills, because that one narrow scenario is only that one narrow scenario, and you're putting all your eggs into one basket.
Nvidia would be a fantastic company to work for; nothing against them in any way. But if you put all of your eggs into that basket and you don't get it, then what? If you are passionate about a thing, being very deep in that thing is very, very good. But being able to do only that thing... I would always encourage you to be diversified.
And when I say diversified (you mentioned LLMs or computer vision or things like that), that's one part of it, but knowledge of models and how to use them is, to me, a single skill. Diversification means breaking outside of that to also be able to think: okay, what about building applications on top of these? What does scaling an application look like? What does software engineering look like here? What about user experience? Because it's all very well to build a beautiful application, but if nobody can use it (I'm looking at you, Microsoft Office), that's a problem. That's what I really mean about diversifying beyond the models. So even in that narrow Nvidia example, being able to break out of that one particular area and show skills in other areas of value is, I think, really important.
Okay, as we're running a little bit [late]... So yeah, I've kind of gone into it a little bit already, but I'm a massive advocate for small AI. I really do believe small AI is the next big thing, because (and this is part of the job I do at ARM) we're moving into a world of AI everywhere, all at once.
It's interesting you just brought up Nvidia, because there's a traditional conception that when it comes to AI, compute platforms are CPU plus GPU: CPU for general purpose, GPU for the specialist work. But that's also changing. In the mobile space, for example, there's massive innovation being done with a technology called SME, the Scalable Matrix Extension, which is all about letting you bring AI workloads onto the CPU. The front runners here are a couple of Chinese phone vendors, Vivo and OPPO, who've recently released phones with SME-enabled chips.
What's magical about these is that, A, you don't need a separate external chip drawing extra power and taking up extra footprint just to run AI workloads, and B, because the CPU is a low-power part, being able to run AI workloads on it has enabled interesting new scenarios. Let me talk about one in particular, from a company called Alipay. Alipay had an application (we've all seen these apps) where you can go through your photographs and search for a particular thing, you know, "places I ate sushi" or something along those lines, and use that to create a slideshow.
All of those normally require a back-end service: your photographs are hosted on Google Photos or Apple Photos or something like that, and that back-end service runs the model you search against and assembles the results. What Alipay wanted to address were three problems with this. Problem number one: privacy. You have to share your photos with a third party. Problem number two: latency. You have to upload the photos, have the back end run the model, and then download the results. And number three: building and standing up that cloud service costs time and money.
So the idea was to move all of this onto the device itself: run a model on the device that searches the photos on the device. You don't share the photos, you don't have the latency, and from a business perspective they're saving the money of standing up that service. They now have AI running on the CPU to do that.
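Here is a minimal sketch of that kind of on-device photo search, assuming a small CLIP-style embedding model (sentence-transformers' clip-ViT-B-32, as an illustrative choice) running locally on the CPU; the photo folder and the query are hypothetical, and the point is that neither the photos nor the query leave the device.

```python
# On-device photo search sketch: embed local photos and a text query with a
# small CLIP-style model on the CPU, then rank by cosine similarity.
from pathlib import Path

import numpy as np
from PIL import Image
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("clip-ViT-B-32", device="cpu")   # small enough for CPU inference

photos = sorted(Path("Photos").glob("*.jpg"))                # hypothetical local photo folder
photo_emb = model.encode([Image.open(p) for p in photos], normalize_embeddings=True)

query_emb = model.encode(["places I ate sushi"], normalize_embeddings=True)
scores = photo_emb @ query_emb.T                             # cosine similarity (embeddings are normalized)

for idx in np.argsort(-scores[:, 0])[:5]:                    # top five matches for the slideshow
    print(photos[idx], float(scores[idx, 0]))
```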
Apple have also invested heavily in the Scalable Matrix Extension. If you've ever watched WWDC, whenever they talk about the new A-series and M-series chips and the neural cores and those kinds of things in them, that's part of the same idea. So breaking the habit we've gotten into, that you need a GPU to be able to do AI, is part of the trend the world is heading in. Apple are probably one of the leaders in that, and I'm very, very bullish on Apple and Apple Intelligence as a result, from the AI perspective.
And seeing that trend and following that vector to its logical conclusion: as models are getting smaller, embedded intelligence getting everywhere isn't a pipe dream. It isn't sci-fi anymore. It's going to be a reality that we'll be seeing very, very shortly. So that idea of that convergence of AI because of the ability of smaller models getting smarter and lower powered devices being able to run them—we see that convergence hitting and I see massive opportunity there.
So one last part, and just going back to agents for a moment. I think, you know, the one thing that I always say is a hidden part of artificial intelligence is really what I like to call "artificial understanding." And when you can start using models to understand things on your behalf, and when they understand them on your behalf to be able to craft from that understanding new things, you can actually develop superpowers where you're far more effective than ever before. Be that creating code or creating other things.
And I'm going to give one quick demo just so we can wrap up. I was talking earlier about generating video. So this picture is... whoops, sorry, the connection here is not very good. I lost it. So, here we go. This picture here is actually of my son playing ice hockey. I took this picture and thought, "Okay, I think I'm very good at prompting," and I wrote a nice prompt for this picture (he's in the middle of taking a slapshot, with some beautiful flex on his stick) asking for a video of him scoring a goal. What do you think happened? Should we watch? Let's see if it works.
Ah, this was the wrong video, but it still shows the same idea. Because of poor prompting, or poor understanding of my intent if I talk about it in agentic terms: the arena he was in is a practice arena and doesn't have any people in it... Sorry, pause it. If I just rewind to here: if we look up in this top right-hand corner, this is basically where they store all the garbage. But the AI didn't know that, had no idea of it, so it assumed it was a full arena and started painting people in. And even though he shot a mile wide, everybody cheers, somehow he has two sticks in his hands instead of one, and they forgot his name. Right?
So, I did not go through an agentic workflow to do this. I did not go through the steps of: A) understand intent; B) once you understand my intent, understand the tools that are available to you—in this case it's Veo; and C) understand the intricacies of using Veo. Make a plan of how to use them, make a plan of how to build a prompt for them, and then use them, and then reflect.
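Here is a sketch of those steps as a simple loop. The llm() and generate_video() functions are toy stand-ins, not the real Gemini or Veo APIs, and the tool notes are illustrative; what matters is the shape of the workflow: understand intent, know the tool, plan, generate, reflect, and retry within a budget.

```python
# Agentic-workflow sketch with toy stand-ins. llm() and generate_video() are
# hypothetical placeholders so the loop runs end to end; swap in real calls.

TOOL_NOTES = {
    "veo": ("Clips are roughly 8 seconds; good at slow camera pulls and emotion, "
            "weak at high-action shots, so each prompt must fit the wider story."),
}

def llm(prompt: str) -> str:
    """Toy LLM stand-in: answers PASS to critiques, otherwise 'expands' the prompt."""
    return "PASS" if "Does this match" in prompt else f"[expanded shot description for: {prompt[:60]}...]"

def generate_video(prompt: str) -> tuple[str, str]:
    """Toy video-engine stand-in: returns a clip reference plus some metadata to critique."""
    return "clip_001.mp4", "slow camera pull, two characters on a bench, soft lighting"

def agentic_clip(scene: str, full_story: str, tool: str = "veo", max_tries: int = 3) -> str:
    # A) Understand intent: expand a terse scene note using the whole story and the tool's quirks.
    expanded = llm(f"Story:\n{full_story}\n\nScene:\n{scene}\n\nTool notes:\n{TOOL_NOTES[tool]}\n\n"
                   "Write a detailed 8-second video prompt: shot, angle, lighting, emotion.")
    clip = ""
    # B/C) Use the tool, reflect, and retry, but cap retries because every render costs money.
    for _ in range(max_tries):
        clip, metadata = generate_video(expanded)
        verdict = llm(f"Scene intent:\n{scene}\n\nResult:\n{metadata}\n\nDoes this match the intent?")
        if verdict.startswith("PASS"):
            return clip
        expanded = llm(f"Revise the prompt given this critique:\n{verdict}\n\n{expanded}")
    return clip  # best effort once the retry budget is spent

print(agentic_clip("Girl on the bench, upset; a friend tries to comfort her.", "Full screenplay text..."))
```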
So I've been advising a startup that is working on movie creation using AI, and I want to show you a little sample of a movie that I've been working on with them. The whole idea is that if you want performances out of virtual actors and actresses, you need emotion, right? You need to be able to convey that emotion, and you also need to put that emotion in the context of the entire story, because when you create a video from a prompt, you're creating an 8-second snippet, and that 8-second snippet needs to know what's going on in the rest of the story. So, if I show this one for a moment: it's a little wooden at the moment, it's not working perfectly, and I have friends who are professional actors advising me on this who laughed at the performances. But try to view it in terms of the difference between the un-agentic prompt with the hockey player and this one. Hopefully we can hear it.
So again, here's the idea: thinking in agentic terms, as I was saying earlier, and breaking it into those steps. That allowed me to use exactly the same engine that failed in what I showed you earlier to produce something that works and can do things like portray the emotion I just spoke about. I know we're a little bit over time, so sorry about that. I can take any questions if anybody has any; I see Andrew is here as well, at the back. And I just really want to say thank you so much for your attention. I really appreciate it.
[Inaudible Question about Agentic Workflow Improvement vs Data]
It's a great question. Just to repeat for the video: how much of the improvement is from the use of an agentic workflow versus just a lack of hockey material in the training set for the failed one? I'm not comparing like with like here, so this is just my gut.
When I broke this down into the workflow, it went like this. When I created scenes like this one directly myself, with no basis, no agentic step, no artificial understanding, they were awful. And when I broke it down into steps (okay, in this scene the girl is sitting on the bench and she's upset, and the person talking to her wants to comfort her) and fed that to a large language model along with the entire story and the constraints I had (the shot has to be 8 seconds long, clear dialogue, all of those kinds of things), letting it understand my intent from that, the LLM ended up expressing a prompt that was far more loquacious and descriptive than I ever would have written. The LLM understood what makes a good shot, a good angle, good emotion, far better than I would have. I could have spent hours trying to describe that.
So that first step in the agentic flow, it doing that for me and understanding my intent, was huge. The second step is the tools it's going to use. I explicitly said which video engine I was going to be using. I was using Gemini as the LLM, and hopefully Gemini is familiar with Veo, that kind of thing, so it understands the idiosyncrasies of working with Veo. What I learned, for example, is that Veo is very bad at high-action scenes but very good at slow camera pulls for emotion, as you saw in this case. The LLM knew that from me declaring I was using it as a tool. And then it built a prompt, and further refined the prompt from there.
And then the third part: actually using the tool to generate it for me. Generating video with something like Veo costs, I think, between two and three dollars in credits for about four videos. So the last thing I want to do is generate lots and lots of videos and throw good money after bad. But all of the token spend I did up front to understand my intent and make the plan was repaid at the back end, where it got it right. Maybe not right the first time, but it would very rarely take more than two or three tries to get something that was really, really nice. So, without comparing like with like, I do think that plan of action, going through a workflow like that, worked very, very well.
Any other questions, thoughts, comments? Yep. Up at the back.
What has surprised me the most about the AI industry over the years?
Oh, that's a good one. What has surprised me the most, and it probably shouldn't have surprised me, is how much hype took over. I honestly thought a lot of people in important decision-making roles would be able to see the signal better than they did. And the other part is that the desire to make immediate profits as opposed to long-term gains also surprised me a lot.
Let me share one story in that space. After Andrew and I taught the TensorFlow specializations on Coursera, Google launched a professional certificate. The idea was that we gave a rigorous exam, and if you passed it and got the certificate, it was a high-prestige credential that would help you find work, particularly at the time, when TensorFlow was a very highly demanded skill.
Running that program cost Google $100,000 a year. Okay, a drop in the bucket, not a lot of money. The goodwill that came out of it was immense. I'll tell one story very quickly. There was a young man (he went public about this in some advertising with Google) who lived in Syria, and we all know there was a huge civil war in Syria over the last few years. He got the TensorFlow certificate, one of the first in Syria to get it, and it lifted him out of poverty: he was able to move to Germany and get work at a major German firm. I met him at an event in Amsterdam where he told me his story. Now, because of the job he has at this German firm, he's able to support his family back home and move them out of the war-torn zone into a peaceful one, all because he got this AI certificate. And there were countless stories like that, very inspirational, very beautiful stories.
But the thing that surprised me was the lack of investment in that, because no revenue was being generated for the company out of it. We deliberately kept it revenue-neutral so that the price of the exams could go down; we wanted it to self-sustain. It ended up not being revenue-neutral. It ended up costing the company about $100,000 to $150,000 a year. So they canned it. And it's a shame, because of all the potential goodwill that can come out of something like that. But I think those would be the two things that immediately jump to mind as having surprised me the most.
And then I guess the one other part I would add is that the people who've been able to be very successful with AI, people you wouldn't think would be, have always been inspirational to me. So allow me one more story. I showed ice hockey a moment ago; I have a good friend who is a former professional ice hockey player. Any ice hockey fans here? Okay. It's kind of a brutal sport, right? You see a lot of fighting and a lot of stuff on the ice. He dropped out of school when he was 13 years old to focus on skating, and he will always tell everybody that he's the dumbest person alive because he's uneducated. He and I are complete opposites; that's why we get on so well.
He retired from ice hockey because of concussion issues, and he now runs a nonprofit that operates ice rinks. About three years ago, we were having a beer and he said, "So tell me about AI, and tell me about this ChatGPT thing. Is it any good?" And I was just sharing the whole thing: yes, it's good, and all that kind of stuff. It was obviously a loaded question, and I didn't know why.
Part of his job at the nonprofit is that every quarter he has to present the results of operations to the board of directors so that they can be funded properly, because even though they're a nonprofit, they still need money to operate. He was spending upwards of $150,000 a year to bring in consultants to pull together the data from all of the different sources: there are machines in what's called the pump room, which has the compressor that cools the ice, and there were spreadsheets and accounts and all that kind of stuff. He's not tech savvy in any way, but he needed to process all this data.
So he did an experiment where he got ChatGPT to do it, which was the loaded question behind asking me whether it was any good. We talked through it a little, and then he told me why he was asking. I took a look at the results: he was uploading spreadsheets, uploading PDFs and that kind of thing, and getting it to assemble them into a report. It now takes him about two hours to do the report himself with ChatGPT, and it worked. It worked brilliantly. And that $150,000 a year he's saving on consulting now goes to underprivileged kids, right? For hockey equipment, for ice-skating equipment, for lessons and all of that kind of thing.
So that money was taken out of the hands of an expensive consulting company and put into the hands of the people who need it, because of this guy. He says he's the dumbest person alive (I hope he's not watching this video), but I told him afterwards, "Congratulations, you're now a developer." He didn't like that. But it's surprises like that: the superpowers handed to somebody like him, who isn't technical in any way, yet was able to effectively build a solution that saved his nonprofit $100,000 to $150,000 a year. Things like that are always surprising me in a very pleasant way.
Yep. Sorry, I'll get to you next. Sorry. Yeah.
I think [for engineers] like us, it's easy. It's easier to [navigate] because we can understand what signal is from [noise]. [But how do we share] this knowledge like...
So just to repeat the question for the video: for engineers like us sometimes it's easy to navigate the hype to see the signal from the noise, but what about people you know who don't have the same training as us? I think that's our opportunity to be trusted advisers for them. And to really help them through that to understand it.
I think the biggest part of the hype story right now is just understanding the reward mechanism: everything rewards engagement rather than actual substance. To me, step one is seeing through that. Like the story I just told about my friend: he'd seen all this stuff, but he wasn't willing to bet his career on it, and he needed that kind of advice around it, to start peeling apart what he had done, what he did right and what he did wrong.
So positioning ourselves as trusted advisers, by not leaning into the same mistakes the untrained people may be leaning into, is the key to that. And understand that the average person is generally very intelligent, even if they're not an expert in a specific domain: key in on that intelligence, help them foster and grow it, navigate them through the parts where they'll have difficulty, and let them shine in what they're very, very good at.
All right. Over here, there was one.
I have a question for scientific research. [I wanted] to get your perspective on where you think it is a good idea and...
So, AI and machine learning for scientific research: where is it a good idea and where should you be cautious? My initial gut check is that I think it's always a good idea. There's no harm in using the tools you have available to you, but always double-check your results and check your expectations against grounded reality.
I've always been a fan of using automation in research as much as possible. My undergraduate degree was in Physics, many years ago, and I was actually very successful in the lab because I usually automated, through a computer, things that other people did with pen and paper, so I could move quickly. So I know I'm biased in that regard, but I would say for most research, for the most part, use the most powerful tools you have available, but check your expectations.
A little story on that side. Trivia question: the poorest country in Western Europe, anybody know? It's Wales. I actually did my undergraduate degree in Wales, and I went back to give some lectures at the university there. I met a researcher who was doing research into brain cancer using various types of computer imagery, and I asked him what the biggest problem, the biggest blocker, for his research was (this was about eight years ago), and his answer was access to a GPU.
For him to train and run his models, he needed access to a GPU, and the department he was in had one GPU between 10 researchers, which meant everybody got it for half a day, Monday through Friday. His half-day was Tuesday afternoon. So he would spend all the time that wasn't Tuesday afternoon preparing everything for his model run or his model training, and then on Tuesday afternoon, once he had access to the GPU, he would do the training and hope that in that time he would train his model and get the results he wanted. Otherwise, he'd have to wait a week to get access to the GPU again.
And then I showed him Google Colab. Has anybody here ever used Google Colab? You can have a GPU in the cloud for free with it. The poor guy's brain melted, because I took out my phone and showed him a notebook running in Google Colab, training right there on my phone, and it changed everything for him research-wise. And this was with free Colab; he already had much more than he had with his shared GPU. So for someone like him, machine learning was an important part of his research, but he was so gated on compute that widening access to it ended up really, really advancing his research. I don't know where it ended up or what he has done since; it has been a few years. But that story came to mind when you asked the question.
Any more questions? Feel free to ask me anything. Oh yeah, at the front here.
[Inaudible question: Can AI be a force of social equality or inequality?]
So can AI be a force of social equality or social inequality? I think the answer is yes. It can be both, and it can be neither. Ultimately, in my opinion, any tool can be used for any ends, so the important thing is to educate and inspire people to use it for the right ends. There's only so much governance that can be applied, and sometimes governance causes more problems than it solves.
So I always try to live my life by assuming good intent but preparing for bad intent. In the case of AI, I don't think there's any difference: everything I would do and everything I would advise assumes good intent, that people will use it for good things, while being prepared for it to be misused. The bad examples I showed earlier were, I think, good intent rather than bad intent, and most mistakes I see like that are good intent applied mistakenly rather than bad intent. So the only mantra, the only advice I can give on this, is to always assume good intent but prepare for bad intent. The AI itself has no choice; it's how people use it.
Andrew, did you want closing comments or...?
I think we're running [out of time]. All right. Thank you.