
GPUs, TPUs, & The Economics of AI Explained

Invest Like The Best
In this episode of Invest Like The Best, Patrick O'Shaughnessy sits down with investor Gavin Baker to explore the rapidly evolving AI landscape. They dive deep into the infrastructure war between Nvidia and Google, discuss the implications of Gemini 3 and scaling laws, and examine how the transition from Hopper to Blackwell chips is reshaping the industry. Baker shares his insights on frontier AI models and the economics of token production. The conversation also covers data centers in space, the future of robotics, and why traditional SaaS companies are making critical mistakes with AI adoption. A comprehensive look at the technical, economic, and strategic forces driving the AI revolution.

Timestamps:
0:00 Intro
5:03 The Blackwell Transition
23:15 The Prisoner's Dilemma
27:12 The Bear Case: Edge AI
37:19 Meta, Open Source, and Model Depreciation
43:08 Geopolitics and Rare Earths
50:42 Data Centers in Space
56:06 Power Constraints as a Governor
1:11:31 The SaaS Mistake
1:16:17 Nuclear and Quantum
1:22:25 Gavin's Investing Origins

#AI #ArtificialIntelligence #Investing #Technology #Nvidia #Google #Blackwell #Gemini #OpenAI #DataCenters #Semiconductors #ScalingLaws #Tech #Innovation #VentureCapital

Presented by Ramp: https://ramp.com/invest
Sponsored by AlphaSense and Ridgeline: https://www.alpha-sense.com/invest/ https://www.ridgelineapps.com/

Patrick O'Shaughnessy is the CEO of Positive Sum. All opinions expressed by Patrick and podcast guests are solely their own and do not reflect the opinion of Positive Sum. This podcast is for informational purposes only and should not be relied upon as a basis for investment decisions. Clients of Positive Sum may maintain positions in the securities discussed in this podcast. To learn more, visit psum.vc
Hosts: Gavin Baker, Patrick O'Shaughnessy, Grok (AI Response)
📅December 09, 2025
⏱️01:28:25
🌐English

Disclaimer: The transcript on this page is for the YouTube video titled "GPUs, TPUs, & The Economics of AI Explained" from "Invest Like The Best". All rights to the original content belong to their respective owners. This transcript is provided for educational, research, and informational purposes only. This website is not affiliated with or endorsed by the original content creators or platforms.

Watch the original video here: https://www.youtube.com/watch?v=cmUo4841KQw

00:00:00Patrick O'Shaughnessy

I will never forget when I first met Gavin Baker. It was the early days of the podcast, and he was one of the first people I talked to about markets outside of my area of expertise, which at the time was quantitative investing, and about the incredibly passionate experience he has had investing in technology across his career. I find his interest in markets and his curiosity about the world to be about as infectious as any investor's I've ever come across. He is encyclopedic on what is going on in the world of technology today, and I've had the good fortune to host him every year or two since that first meeting on this podcast.

00:00:33Patrick O'Shaughnessy

In this latest conversation, we talk about everything that interests Gavin. We talk about Nvidia, Google and its TPUs, the changing AI landscape, and the changing math and business models around AI companies, a life-or-death decision that essentially everyone except Microsoft is failing at. We even discuss the crazy idea of data centers in space, which he communicates with his usual passion and logic.

00:00:58Gavin Baker

In every way, data centers in space from a first principles perspective are superior to data centers on earth.

00:01:06Patrick O'Shaughnessy

Because Gavin is one of the most passionate thinkers and investors that I know, these conversations are always among my favorites. I hope you enjoy this latest in a series of discussions with Gavin Baker.

00:01:19Patrick O'Shaughnessy

I would love to talk about how you process, in the nitty-gritty, new things that come out in this whole AI world, because it's happening so constantly. I'm extremely interested in it and I find it very hard to keep up. You know, I have a couple of blogs that I go read and friends that I call, but maybe let's take Gemini 3 as a recent example. When that comes out, what literally—like, take me into your office—what are you doing? How do you and your team process an update like that given how often these things are happening?

00:01:48Gavin Baker

I mean, I think the first thing is you have to use it yourself.

00:01:50Patrick O'Shaughnessy

And I would just say I'm amazed at how many famous and august investors are reaching really definitive conclusions about AI...

00:01:58Gavin Baker

Well, no, based on the free tier.

00:02:00Patrick O'Shaughnessy

Based on the free tier.

00:02:01Gavin Baker

The free tier is like you're dealing with a 10-year-old, and you're making conclusions about the 10-year-old's capabilities as an adult. And you could just pay, and I do think you actually need to pay for the highest tier, whether it's Gemini Ultra, SuperGrok, whatever it is. You have to pay the $200-per-month tier, where those are like a fully fledged 30- or 35-year-old. It's really hard to extrapolate from an eight- or 10-year-old to the 35-year-old, and yet a lot of people are doing that.

00:02:28Gavin Baker

And the second thing is, there was an Insider post about OpenAI that said, to a large degree, OpenAI runs on Twitter vibes. And I just think AI happens on X. You know, there have been some really memorable moments, like the giant fight between the PyTorch team at Meta and the JAX team at Google on X, where the leaders of each lab had to step in and publicly say, "No one from my lab is allowed to say bad things about the other lab, I respect them, and that is the end of that."

00:03:02Patrick O'Shaughnessy

Yeah.

00:03:03Gavin Baker

The companies are all commenting on each other's posts. You know, the research papers come out. If on planet Earth there are 500 to 1,000 people who really, really understand this and are at the cutting edge of it—and a good number of them live in China—I just think you have to follow those people closely. And I think there is incredible signal there for me. Everything in AI is just downstream of those people.

00:03:29Patrick O'Shaughnessy

Yeah. Everything Andrej Karpathy writes, you have to read it three times.

00:03:33Gavin Baker

Minimum. Yeah. He's incredible. And then I would say, anytime somebody from one of the four labs that matter—OpenAI, Gemini, Anthropic, and xAI, which are clearly the four leading labs—goes on a podcast, I just think it's so important to listen.

00:03:52Gavin Baker

And then for me, one of the best use cases of AI is to keep up with all of this. You know, just like listen to a podcast and then if there are parts that I thought were interesting, just talk about it with AI. And I think it's really important to have as little friction as possible. I'll bring it up. You know, I have it like... I can either press this button and pull up Grok or I have this.

00:04:15Patrick O'Shaughnessy

Oh, wow. I don't touch that. That just brings it right up.

00:04:18Gavin Baker

Yeah, it brings it right up. [Speaking to AI] What do you think of Patrick O'Shaughnessy?

00:04:21Grok (AI Response)

Oh man, Patrick O'Shaughnessy is one of my favorite voices in investing. His Invest Like the Best podcast is straight fire. Does deep dives with folks like Bill Gurley or...

00:04:32Gavin Baker

Gurley? Yes.

00:04:34Patrick O'Shaughnessy

It's so... Can you believe we have this?

00:04:37Gavin Baker

I know. It's like we have... Yeah. I think somebody said on X, you know, like we imbued these rocks with crazy spells and now we can summon super intelligent genies on our phones over the air. You know, it's crazy.

00:04:55Patrick O'Shaughnessy

Crazy. So something like Gemini 3 comes out, you know, the public interpretation was, "Oh, this is interesting. It seems to say something about scaling laws and the pre-training stuff." What is your frame on the state of general progress in frontier models in general? Like what are you watching most closely?

00:05:12Gavin Baker

Yeah. Well, I do think Gemini 3 was very important because it showed us that scaling laws for pre-training are intact. They stated that unequivocally and that's important because no one on planet Earth knows how or why scaling laws for pre-training work. It's actually not a law. It's an empirical observation, and it's an empirical observation that we've measured extremely precisely and has held for a long time.
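
The "extremely precise measurement" Baker describes can be sketched with the parametric form fitted in the Chinchilla paper (Hoffmann et al., 2022), where loss falls smoothly and predictably as parameters and data grow. The constants below approximate that published fit and are purely illustrative:

```python
# Chinchilla-style parametric scaling law (Hoffmann et al. 2022):
# loss = irreducible entropy + parameter term + data term.
# Constants approximate the published fit; treat them as illustrative.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    E, A, B = 1.69, 406.4, 410.7
    alpha, beta = 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling both parameters and data keeps lowering loss, smoothly and
# predictably -- the empirical observation that keeps getting re-confirmed.
print(predicted_loss(1e9, 2e10))    # ~2.6 (1B params, 20B tokens)
print(predicted_loss(1e11, 2e12))   # ~1.9 (100B params, 2T tokens)
```

The fit says nothing about *why* the curve holds, which is exactly Baker's point.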

00:05:34Gavin Baker

But our understanding of scaling laws for pre-training—and maybe this is a little bit controversial with 20% of researchers, but probably not more than that—is kind of like the ancient British people's understanding of the sun, or the ancient Egyptians' understanding of the sun. They could measure it so precisely that the east-west axes of the Great Pyramids are perfectly aligned with the equinoxes, and so are the axes of Stonehenge. Perfect measurement. But they didn't understand orbital mechanics. They had no idea how or why it rose in the east, set in the west, and moved across the horizon.

00:06:10Patrick O'Shaughnessy

The aliens.

00:06:10Gavin Baker

Yeah, our God in a chariot. And so it's really important every time we get a confirmation of that. Um, so Gemini 3 was very important in that way. But I'd say I think there's been a big misunderstanding, maybe in the public equity investing community or the broader more generalist community. Based on the scaling laws of pre-training, there really should have been no progress in 2024 and 2025.

00:06:34Gavin Baker

And the reason for that is, you know, after xAI figured out how to get 200,000 Hoppers coherent, you had to wait for the next generation of chips, because you really can't get more than 200,000 Hoppers coherent. And coherent just means, you could think of it as, each GPU knows what every other GPU is thinking. They're effectively sharing memory, they're connected over scale-up and scale-out networks, and they have to be coherent during the pre-training process.
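
A rough sketch of what coherence requires: in data-parallel pre-training, every GPU must end each step holding the same averaged gradients. Real clusters do this with NCCL all-reduce collectives over NVLink and InfiniBand; the toy below just simulates the result in plain Python:

```python
# Toy model of "coherent": in data-parallel training, every GPU must
# finish each step holding identical averaged gradients. Real clusters
# do this with NCCL all-reduce; this simulates the outcome.

def all_reduce_mean(per_gpu_grads: list[list[float]]) -> list[float]:
    """What all-reduce computes: the elementwise mean across workers."""
    n_gpus = len(per_gpu_grads)
    n_params = len(per_gpu_grads[0])
    return [sum(g[i] for g in per_gpu_grads) / n_gpus for i in range(n_params)]

grads = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 "GPUs", 2 params each
synced = all_reduce_mean(grads)
print(synced)  # [3.0, 4.0] -- every GPU now "knows what the others know"
```

The pain is that this exchange has to complete across the entire cluster every step, which is why cluster size hits a practical ceiling.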

00:07:01Gavin Baker

There's a lot of misunderstanding about Gemini 3 that I think is really important. Everything in AI right now is a struggle between Google and Nvidia. Google has the TPU and Nvidia has their GPUs. I mean, Google only has the TPU, and they use a bunch of other chips for networking; Nvidia has the full stack.

00:07:18Gavin Baker

Blackwell was delayed. Blackwell was Nvidia's next-generation chip, and the first iteration of that was the Blackwell 200. A lot of different SKUs were cancelled, and the reason for that is it was by far the most complex product transition we've ever gone through in technology. Going from Hopper to Blackwell, first you go from air-cooled to liquid-cooled. The rack goes from weighing, round numbers, 1,000 pounds to 3,000 pounds. It goes from, round numbers, 30 kilowatts, which is 30 American homes, to 130 kilowatts, which is 130 American homes.

00:07:53Gavin Baker

So I analogize it to: imagine if to get a new iPhone you had to change all the outlets in your house to 220 volt, put in a Tesla Powerwall, put in a generator, put in solar panels—that's the power—put in a whole home humidification system and then reinforce the floor because the floor can't handle this. So it was a huge product transition and then just the rack was so dense it was really hard for them to get the heat out. So Blackwells have only really started to be deployed and really scaled deployments over the last 3 or 4 months.
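
The rack arithmetic above is easy to check. These are the round numbers quoted, with roughly 1 kW taken as one American home's average draw (an assumption implied by the comparison):

```python
# Rack-power arithmetic with the round numbers quoted above,
# taking ~1 kW as one American home's average draw (an assumption).

HOPPER_RACK_KW = 30.0      # air-cooled Hopper-era rack
BLACKWELL_RACK_KW = 130.0  # liquid-cooled Blackwell-era rack
KW_PER_HOME = 1.0          # assumed average household draw

print(f"Hopper rack    ~ {HOPPER_RACK_KW / KW_PER_HOME:.0f} homes")
print(f"Blackwell rack ~ {BLACKWELL_RACK_KW / KW_PER_HOME:.0f} homes")
print(f"density jump   ~ {BLACKWELL_RACK_KW / HOPPER_RACK_KW:.1f}x")
```

A better-than-4x jump in per-rack power density is what forces the liquid cooling, reinforced floors, and new electrical plant he describes.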

00:08:32Gavin Baker

Had reasoning not come along, there would have been no AI progress from mid-2024 through essentially Gemini 3. There would have been none. Everything would have stalled and can you imagine what that would have meant to the markets? Like, for sure we would have lived in a very different environment. So reasoning kind of bridged this like 18-month gap. Reasoning kind of saved AI because it let AI make progress without Blackwell or the next generation of TPU which were necessary for the scaling laws for pre-training to continue.

00:09:09Gavin Baker

The reason we've had all this progress—maybe we could show like the ARC AGI slide where you went from 0 to 8 over four years, 0 to 8% intelligence, and then you went from 8% to 95% in 3 months when the first reasoning model came out from OpenAI—is we have these two new scaling laws of post-training, which is just reinforcement learning with verified rewards. "Verified" is such an important concept in AI. One of Karpathy's great things was: with software, anything you can specify, you can automate. With AI, anything you can verify, you can automate. It's such an important concept and I think an important distinction.
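
A minimal sketch of "anything you can verify, you can automate": reinforcement learning with verified rewards only needs a checker that scores right or wrong. The arithmetic task and the 70%-accurate stand-in policy below are invented purely for illustration:

```python
# Sketch of reinforcement learning with verified rewards: all you need
# is a verifier with a ground-truth check. Toy arithmetic task; the
# "policy" is a stand-in that is right 70% of the time by construction.
import random

def verified_reward(problem: tuple[int, int], answer: int) -> float:
    a, b = problem
    return 1.0 if answer == a + b else 0.0  # right/wrong, nothing fuzzy

def stand_in_policy(problem: tuple[int, int]) -> int:
    a, b = problem
    return a + b if random.random() < 0.7 else a + b + 1  # 70% accurate

random.seed(0)
problems = [(random.randint(0, 9), random.randint(0, 9)) for _ in range(1000)]
mean_reward = sum(verified_reward(p, stand_in_policy(p)) for p in problems) / 1000
print(mean_reward)  # close to 0.7 -- a clean training signal to optimize
```

The verifier is the whole trick: no human grading, no fuzzy judgment, so the reward signal scales as far as compute does.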

00:09:47Gavin Baker

And then test-time compute. And so all the progress we've had—and we've had immense progress from October '24 through today—was based entirely on these two new scaling laws. And Gemini 3 was arguably the first test of the scaling law for pre-training since Hopper came out, and it held. And that's great because all these scaling laws are multiplicative. So now we're going to apply these two new ones—reinforcement learning with verified rewards and test-time compute—to much better base models.

00:10:20Gavin Baker

Google came out with the TPU v6 in 2024 and the TPU v7 in 2025. And in semiconductor time, it's almost like: imagine Hopper is a World War II-era airplane. And it was by far the best World War II-era airplane: the P-51 Mustang with the Merlin engine. Two years later in semiconductor time, that's like an F-4 Phantom. Because Blackwell was such a complicated product and so hard to ramp, Google was training Gemini 3 on '24- and '25-era TPUs, which are like F-4 Phantoms, while Blackwell is like an F-35. It just took a really long time to get it going.

00:11:06Gavin Baker

So, I think Google for sure has this temporary advantage right now from a pre-training perspective. I think it's also important that they've been the lowest cost producer of tokens. And this is really important because AI is the first time in my career as a tech investor that being the low-cost producer has ever mattered. Apple is not worth trillions because they're the low-cost producer of phones. Microsoft is not worth trillions because they're the low-cost producer of software. Nvidia is not worth trillions because they're the low-cost producer of AI accelerators. It's never mattered.

00:11:39Gavin Baker

And this is really important because what Google has been doing as the low-cost producer is they have been, I would say, sucking the economic oxygen out of the AI ecosystem, which is an extremely rational strategy for them and for anyone who's a low-cost producer. Let's make life really hard for our competitors. So what happens now? I think this has pretty profound implications. One, we will see the first models trained on Blackwell in early 2026.

00:12:11Patrick O'Shaughnessy

Why?

00:12:11Gavin Baker

I think the first Blackwell model will come from xAI. And the reason for that is just, according to Jensen, no one builds data centers faster than Elon. Yes, Jensen has said this on the record. Even once you have the Blackwells, it takes 6 to 9 months to get them performing at the level of Hopper, because Hopper is finely tuned. Everybody knows how to use it, the software is perfect for it, engineers know all its quirks. Everybody knows how to architect a Hopper data center at this point. And by the way, when Hopper came out, it took 6 to 12 months for it to really outperform Ampere, which was the generation before.

00:12:47Gavin Baker

So, if you're Jensen or Nvidia, you need to get as many GPUs deployed in one data center as fast as possible in a coherent cluster so you can work out the bugs. And so this is what xAI effectively does for Nvidia because they build the data centers the fastest. They can deploy Blackwells at scale the fastest and they can help work with Nvidia to work out the bugs for everyone else. So because they're the fastest, they will have the first Blackwell model.

00:13:15Gavin Baker

We know that scaling laws for pre-training are intact, and this means the Blackwell models are going to be amazing. Blackwell is... I mean, it's not an F-35 versus an F-4 Phantom, but from my perspective it is a better chip. Maybe it's like an F-35 versus a Rafale. And so now that we know pre-training scaling is holding, we know that these Blackwell models are going to be really good. And based on the raw specs, they should probably be better.

00:13:41Gavin Baker

Then something even more important happens. So the GB200 was really hard to get going. The GB300 is a great chip. It is drop-in compatible in every way with those GB200 racks.

00:14:00Patrick O'Shaughnessy

No new Powerwalls.

00:14:01Gavin Baker

Yeah. Any data center that can handle those, you can slot in the GB300s. And now everybody's good at making those racks, and you know how to get the heat out, you know how to cool them. You're going to put those GB300s in, and then the companies that use the GB300s are going to be the low-cost producer of tokens. Particularly if you're vertically integrated. If you're paying a margin to someone else to make those tokens, you're probably not going to be.

00:14:23Gavin Baker

I think this has pretty profound implications because it has to change Google's strategic calculus. If you have a decisive cost advantage and you're Google and you have search and all these other businesses, why not run AI at a negative 30% margin? It is by far the rational decision. You take the economic oxygen out of the environment. You eventually make it hard for your competitors who need funding, unlike you, to raise the capital they need. And then on the other side of that, maybe have an extremely dominant share position.
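
The pricing squeeze Baker describes can be made concrete. The per-million-token costs below are invented, but the mechanics hold for any cost gap:

```python
# Hypothetical per-token costs: how a low-cost producer pricing at a
# negative 30% margin squeezes a higher-cost rival. Numbers invented.

def price_for_margin(cost: float, margin: float) -> float:
    """Price yielding a given gross margin m, from m = (p - c) / p."""
    return cost / (1.0 - margin)

def margin_at(price: float, cost: float) -> float:
    return (price - cost) / price

leader_cost = 0.70  # $ per million tokens (assumed)
rival_cost = 1.30   # $ per million tokens (assumed)

price = price_for_margin(leader_cost, -0.30)  # leader prices at -30%
print(f"price set by leader: ${price:.2f}/M tokens")
print(f"rival margin at that price: {margin_at(price, rival_cost):+.0%}")
```

A tolerable negative-30% margin for the low-cost player translates into a ruinous triple-digit negative margin for anyone with materially higher costs, which is the "economic oxygen" point.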

00:14:59Gavin Baker

Well, all that calculus changes once Google is no longer the low-cost producer, which I think will be the case. The Blackwells are now being used for training. And then when that model is trained, you start shifting Blackwell clusters over to inference, and then all these cost calculations and dynamics change. And I do think the strategic and economic calculations between the players are very interesting. I've never seen anything like it. Everyone understands their position on the board, what the prize is, what play their opponents are running.

00:15:39Gavin Baker

So, I just think if Google changes its behavior—because it's going to be really painful for them as a higher-cost producer to run that negative 30% margin—it might start to impact their stock. That has pretty profound implications for the economics of AI. And then when Rubin comes out, the gap is going to expand significantly versus TPUs and all other ASICs. Now, I think Trainium 3 is probably going to be pretty good, and Trainium 4 is going to be good.

00:16:04Patrick O'Shaughnessy

Why is that the case? Why won't TPU v8, v9 be every bit as good?

00:16:09Gavin Baker

A couple of things. So one, for whatever reason, Google made more conservative design decisions. I think part of that is... round numbers, let's say the TPU. Google is... So there's front end and back end of semiconductor design and then dealing with Taiwan Semi. You can make an ASIC in a lot of ways. What Google does is they do mostly the front end for the TPU and then Broadcom does the back end and manages Taiwan Semi.

00:16:42Gavin Baker

It's a crude analogy, but the front end is like the architect of a house: they design the house. The back end is the person who builds the house, and managing Taiwan Semi is like stamping out that house, like Lennar or D.R. Horton. And for doing those latter two parts, Broadcom makes a 50 to 55% gross margin; we don't know what it is on TPUs specifically. Okay, let's say in 2027...

00:17:06Patrick O'Shaughnessy

Estimates maybe somewhere around 30 billion? Again, who knows...

00:17:12Gavin Baker

Yeah, yeah, but 30 billion I think is a reasonable estimate. At 50 to 55% gross margins, Google is paying Broadcom $15 billion. That's a lot of money. And at a certain point it makes sense to bring a semiconductor program entirely in-house. In other words, Apple does not have an ASIC partner for their chips. They do the front end themselves, the back end, and they manage Taiwan Semi. And the reason is they don't want to pay that 50% margin. So at a certain point it becomes rational to renegotiate this.

00:17:45Gavin Baker

And just as perspective, the entire opex of Broadcom's semiconductor division is round numbers $5 billion. So it would be economically rational now that Google's paying—if it's 30 billion we're paying them 15—Google can go to every person who works in Broadcom Semi, double their comp, and make an extra 5 billion. You know, in 2028, let's just say it does 50 billion. Now it's 25 billion. You could triple their comp. And by the way, you don't need them all.
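
Baker's in-housing arithmetic, using his round numbers (all figures are his stated hypotheticals, not actuals):

```python
# The in-housing math, using the round numbers above (hypotheticals).

tpu_revenue = 30e9         # assumed 2027 TPU-related Broadcom revenue
gross_margin = 0.50        # low end of the stated 50-55%
broadcom_semi_opex = 5e9   # stated round number for the whole division

paid_to_broadcom = tpu_revenue * gross_margin   # $15B of gross profit paid
double_all_comp = 2 * broadcom_semi_opex        # $10B to double everyone's comp
savings = paid_to_broadcom - double_all_comp

print(f"in-housing saves ~${savings / 1e9:.0f}B/yr")  # ~$5B
```

At $50B of revenue the same arithmetic gives $25B paid out, so even tripling comp ($15B) would leave $10B, which is his 2028 version of the point.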

00:18:11Gavin Baker

And of course, they're not going to do that because of competitive concerns. But with TPU v8 and v9, all of this is beginning to have an impact, because Google is bringing in MediaTek. This is maybe the first way you send a warning shot to Broadcom: "We're really not happy about all this money we're paying." They did bring MediaTek in, and the Taiwanese ASIC companies have much lower gross margins. So this is kind of the first shot across the bow.

00:18:36Gavin Baker

And then there's all this stuff people say, "Oh but Broadcom has the best SerDes." Broadcom has really good SerDes, and SerDes is like an extremely foundational technology because it's how the chips communicate with each other—you have to serialize and deserialize. But there are other good SerDes providers in the world. A really good SerDes is not... Maybe it's worth 10 or 15 billion a year, but it's probably not worth 25 billion a year.

00:19:02Gavin Baker

So because of that friction, and I think conservative design choices on the part of Google (and maybe they made those conservative design choices because they were going to a bifurcated supply chain), you know, the TPU is slowing down. I would say GPUs are accelerating. This is the first competitive response of Lisa and Jensen to everybody saying "We're gonna have our own ASIC": "Hey, we're just going to accelerate. We're going to do a GPU every year and you cannot keep up with us."

00:19:33Gavin Baker

And then I think what everybody is learning is like, "Oh wow, that's so cool. You made your own accelerator as an ASIC. Wow, what's the NIC going to be? What's the CPU going to be? What's the scale-up switch going to be? What's the scale-up protocol? What's the scale-out switch? What kind of optics are you going to use? What's the software that's going to make all this work together?" And then it's like, "Oh, I made this tiny little chip and, you know, whether it's admitted or not... oops, what did I do? I thought this was easy."

00:20:14Gavin Baker

And it also takes at least three generations to make a good chip. Take the TPU v1: I mean, it was an achievement that they made it at all. It was really not till TPU v3 or v4 that the TPU started to become even vaguely competitive.

00:20:31Patrick O'Shaughnessy

Is that just a classic learning-by-doing thing?

00:20:32Gavin Baker

100%. And even if you've made like the first... From my perspective, the best ASIC team at any semiconductor company is actually the Amazon ASIC team. You know, they were the first to make the Graviton CPU. They have Nitro, which was the first of what's now called a SuperNIC. They've been extremely innovative, really clever. And like Trainium and Inferentia 1, maybe they're a little better than the TPU v1, but only a little. Trainium 2, you get a little better. Trainium 3, I think, is the first time it's like "okay," and then I think Trainium 4 will probably be good.

00:21:07Gavin Baker

I will be surprised if there are a lot of ASICs other than Trainium and TPU. And by the way, Trainium and TPU will both run on customer-owned tooling at some point. We can debate when that will happen but the economics of success that I just described mean it's inevitable. Like no matter what the companies say, just the economics make it—and reasoning from first principles make it—absolutely inevitable.

00:21:36Patrick O'Shaughnessy

If I were to zoom all the way out on this stuff—because sometimes I find these details unbelievably interesting and it's like the grandest game that's ever been...

00:21:44Gavin Baker

That's what I mean. It's crazy.

00:21:45Patrick O'Shaughnessy

It's so crazy and so fun to follow. Sometimes I forget to zoom out and say, "Well, so what?" Like, okay, so project this forward three generations past Rubin or whatever. What is the global human dividend of all this crazy development? We keep making the loss lower on these pre-training scaling models, but who cares? It's been a while since I've asked this thing something where I wasn't kind of blown away by the answer, for me personally. What are the next couple of things that all this crazy infrastructure war allows us to unlock, because they're so successful?

00:22:21Gavin Baker

If I were to posit like an event path, I think the Blackwell models are going to be amazing. The dramatic reduction in per token cost enabled by the GB300 and probably more the MI450 than the MI355 will lead to these models being allowed to think for much longer, which means they're going to be able to do new things.

00:22:40Gavin Baker

Like I was very impressed Gemini 3 made me a restaurant reservation. It's the first time it's done something for me, other than like go research something and teach me stuff. But if you can make a restaurant reservation, you're not that far from being able to make a hotel reservation and an airplane reservation and order me an Uber, and all of a sudden you got an assistant.

00:23:01Gavin Baker

Everybody talks about that, but you can just imagine it's on your phone. I think that's pretty near-term. But at some big companies that are very tech-forward, 50%-plus of customer support is already done by AI, and that's a $400 billion industry. And what AI is great at is persuasion—that's sales and customer support. And so the functions of a company, if you think about them, are to make stuff, sell stuff, and then support the customers. So by late '26, AI is going to be pretty good at two of them.

00:23:35Gavin Baker

I do think it's going to have a big impact on media. I think robotics, as we talked about last time, is going to finally start to be real. You know, there's an explosion of exciting robotics startups. I do still think that the main battle is going to be between Tesla's Optimus and the Chinese, because it's easy to make prototypes and hard to mass-produce them. But then it goes back to what Andrej Karpathy said: AI can automate anything that can be verified. So any function where there's a right or wrong answer, a right or wrong outcome, you can apply reinforcement learning and make the AI really good at that.

00:24:08Patrick O'Shaughnessy

What are your favorite examples of that so far or theoretically?

00:24:11Gavin Baker

Does the model balance? They'll be really good at making models. Do all the books globally reconcile? They'll be really good at accounting, because of double-entry bookkeeping: it has to balance. It's verifiable, you got it right or wrong. Support or sales: did you make the sale or not? That's very clear. I mean, that's just like AlphaGo: did you win or did you lose? Did the guy convert or not? Did the customer ask for an escalation during customer support or not? The most important functions are addressable precisely because they can be verified.
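
Double-entry bookkeeping is a clean example of a verifiable task: the books either balance or they don't, which is exactly the binary signal a verified-reward setup needs. The ledger format below is invented for illustration:

```python
# Double-entry books as a verifiable task: debits must equal credits.
# Toy ledger format invented for illustration.

def books_balance(entries: list[tuple[str, float, float]]) -> bool:
    """Each entry is (description, debit, credit); books must net to zero."""
    debits = sum(d for _, d, _ in entries)
    credits = sum(c for _, _, c in entries)
    return abs(debits - credits) < 1e-9

good = [("revenue", 0.0, 500.0), ("cash", 500.0, 0.0)]
bad = [("revenue", 0.0, 500.0), ("cash", 450.0, 0.0)]

print(books_balance(good))  # True  -> reward 1.0
print(books_balance(bad))   # False -> reward 0.0
```

Any task that reduces to a check like this can, in principle, be turned into a reinforcement-learning target.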

00:24:49Gavin Baker

So I think if all of this starts to happen, and starts to happen in '26, there'll be an ROI on Blackwell and then all this will continue. And then we'll have Rubin, and that'll be another big quantum of spend: Rubin and the MI450 and the TPU v9.

00:25:07Gavin Baker

And then I do think the most interesting question is: what are the economic returns to artificial superintelligence? Because all of these companies in this great game have been in a prisoner's dilemma. They're terrified that if they slow down and their competitors don't, they're just gone forever. It's an existential risk. And you know, Microsoft blinked for like six weeks earlier this year, and I think they would say they regret that. But with Blackwell, and for sure with Rubin, economics are going to dominate the prisoner's dilemma from a decision-making and spending perspective, just because the numbers are so big.

00:25:44Gavin Baker

And this goes to kind of the ROI-on-AI question. And the ROI on AI has empirically, factually, unambiguously been positive. I just always find it strange that there's any debate about this, because the largest bidders on GPUs are public companies. They report something called audited quarterly financials. And you can use those to calculate something called a return on invested capital. And if you do that calculation, the ROIC of the big public spenders on GPUs is higher than it was before they ramped spending.
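
The calculation Baker describes is simple: ROIC is after-tax operating profit (NOPAT) over invested capital. The before/after figures below are invented placeholders, not any company's actuals:

```python
# ROIC from reported financials: after-tax operating profit (NOPAT)
# divided by invested capital. Figures below are invented placeholders.

def roic(nopat: float, invested_capital: float) -> float:
    return nopat / invested_capital

before_ramp = roic(nopat=40e9, invested_capital=200e9)  # 20.0%
after_ramp = roic(nopat=70e9, invested_capital=320e9)   # ~21.9%

print(f"before: {before_ramp:.1%}, after: {after_ramp:.1%}")
```

His claim is that when you run this on the actual audited numbers, the "after" figure comes out higher despite the capex, as in this illustrative case where profit growth outpaces the capital added.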

00:26:15Gavin Baker

And you could say, well, part of that is opex savings. Well, at some level that is part of what you expect the ROI from AI to be. And then you say, well, a lot of it is actually just applying GPUs: moving the big recommender systems that power the advertising and the recommendation systems from CPUs to GPUs. And you've had massive efficiency gains, and that's why revenue growth at all these companies has accelerated. But, so what? The ROI has been there.

00:26:39Gavin Baker

Um, and it is interesting: at every big internet company, the people who are responsible for the revenue are intensely annoyed at the amount of GPUs that are being given to the researchers. It's a very linear equation: if you give me more GPUs, I will drive more revenue. Give me those GPUs, we'll have more revenue, more gross profit, and then we can spend money. So it's this constant fight at every company. One of the factors in the prisoner's dilemma is everybody has this almost religious belief that we're going to get to ASI, and at the end of the day, what do they all want? Almost all of them want to live forever. And they think that ASI is going to help them do that.

00:27:16Patrick O'Shaughnessy

That's a good return.

00:27:18Gavin Baker

That's a good return. But we don't know.

00:27:18Patrick O'Shaughnessy

If as humans we have pushed the boundaries of physics, biology, and chemistry, the natural laws that govern the universe... I'm very curious about your favorite sort of "throw cold water on this stuff" type takes, the ones you think about sometimes. I'm curious what you think could cause this demand for compute to change, or even the trajectory of it to change.

00:27:40Gavin Baker

There's one really obvious bear case and it is just Edge AI, and it's connected to the economic returns to ASI. In three years, on a bigger and bulkier phone (to fit the amount of DRAM necessary, and the battery probably won't last as long), you will probably be able to run a pruned-down version of something like Gemini 5 or Grok 4.1 or ChatGPT at, I don't know, 30 to 60 tokens per second. And then that's free. And this is clearly Apple's strategy: we're going to be a distributor of AI, we're going to make it privacy-safe and run on the phone, and then you can call one of the big models, the god models in the cloud, whenever you have a question.

00:28:28Gavin Baker

And if that happens, if like 30 to 60 tokens a second at a 115 IQ is good enough, I think that's a bear case other than just the scaling laws break. But in terms of if we assume scaling laws continue—and we now know they're going to continue for pre-training for at least one more generation and we're very early in the two new scaling laws (post-training/mid-training RLVR and test time compute at inference)—we're so early in those and we're getting so much better at helping the models hold more and more context in their minds as they do this test time compute.

00:29:12Gavin Baker

And that's really powerful because everybody's like, "Well, how's the model going to know this?" Well, eventually if you can hold enough context, you can just hold every Slack message and Outlook message and company manual in a company in your context. And then you can compute the new task and compare it with your knowledge of the world, what you think, what the model thinks, all this context. And you know, it may be that really long context windows are the solution to a lot of the current limitations. And that's enabled by all these cool tricks like KV cache offload and stuff. But I do think like other than scaling laws slowing down, other than there being low economic returns to ASI, Edge AI is to me by far the most plausible and scariest bear case.
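The memory pressure behind long context, and why tricks like KV cache offload matter, falls out of simple arithmetic. A rough sketch, using illustrative model dimensions rather than any real model's architecture:

```python
# Rough KV-cache sizing for long context windows. The model dimensions below
# are illustrative assumptions, not any specific model's actual architecture.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem=2):
    # 2x for the separate key and value tensors, per layer, per token.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# A hypothetical large model: 80 layers, 8 KV heads (grouped-query attention),
# head dimension 128, fp16/bf16 (2-byte) elements.
for tokens in (8_000, 128_000, 1_000_000):
    gib = kv_cache_bytes(80, 8, 128, tokens) / 2**30
    print(f"{tokens:>9,} tokens -> {gib:,.1f} GiB of KV cache")
```

At roughly 300 GiB for a million-token context under these assumptions, the cache alone outgrows a single GPU's HBM, which is why offloading it to host memory or storage becomes one of the "cool tricks" for holding more context.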

00:30:02Patrick O'Shaughnessy

I like to visualize different S-curves. I invested through the iPhone, and I love the visual of the iPhone models as it went from this clunky, bricky thing up to what we have now. If you picture something similar for the frontier models themselves, does it feel like a certain part of that natural technology-paradigm progression to you?

00:30:26Gavin Baker

If you're paying for Gemini Ultra or Super Grok and you're getting the good AI, it's hard to see differences. Like, I have to go really deep on something like, "Do you think PCI Express or Ethernet is a better protocol for scale-up networking and why? Show me the scientific papers." And if you shift between models and you ask a question like that where you know it really deeply, then you see differences.

00:30:58Gavin Baker

I do play fantasy football—winnings are donated to charity—but it is like, these new models are quite a bit better at helping with things like who I should play. And they think in much more sophisticated ways. And by the way, if you're a historically good fantasy football player and you're having a bad season, this is why: because you're not using it. And I think we'll see that in more and more domains.

00:31:26Gavin Baker

But I do think they are already at a level where, unless you are a true expert or just have an intellect that is beyond mine, it's hard to see the progress. And that's why I do think we need to shift from getting more intelligent to more useful, unless more intelligence starts leading to these massive scientific breakthroughs and we're curing cancer in '26 and '27. I don't know that we're going to be curing cancer, but I do think from an ROI perspective—almost an ROI S-curve—we need to kind of hand off from intelligence to usefulness. And then usefulness will have to hand off to scientific breakthroughs that create whole new industries.

00:32:11Patrick O'Shaughnessy

What are the building blocks of usefulness in your mind?

00:32:14Gavin Baker

Just being able to do things consistently and reliably. And a lot of that is keeping all the context. Like there's a lot of context if someone wants to plan a trip for me. Like, I've acquired these strange preferences. Like I follow that guy Andrew Huberman, so I like to have an east-facing balcony so I can get morning sun. The AI has to remember being on a plane with Starlink is important to me. Okay, here are the resorts I've historically liked, here are the kinds of areas I've liked, here are the rooms that I would really like at each. That's a lot of context. And to keep all of that and kind of weight those, it's a hard problem. So I think context windows are a big part of it.

00:32:55Gavin Baker

You know, there's this METR task-length evaluation thing, like how long it can work for. And you could think of that as being related in some way to context, although not precisely. But that task length needs to keep expanding, because booking a restaurant is economically useful but it's not that economically useful. But booking me an entire vacation and knowing the preferences of my parents, my sister, my niece, and my nephew—that's a much harder problem. And that's something that a human might spend three or four hours optimizing.

00:33:39Gavin Baker

I do think we're going to see an acceleration in the awesomeness of various products just because engineers are using AI to make products better and faster. We both invested in Forell, the hearing aid company, which is just absolutely remarkable.

00:34:01Patrick O'Shaughnessy

And we're going to see I think something like that in every vertical...

00:34:03Gavin Baker

That's AI being used for the most core function of any company which is designing the product. And then it will be AI being used to help manufacture the product and distribute it more efficiently, whether it's optimizing a supply chain or having a vision system watch a production line. So I think a lot of stuff is happening.

00:34:30Gavin Baker

The other thing I think is really interesting in this whole ROI part is Fortune 500 companies are always the last to adopt a new technology. They're conservative, they have lots of regulations, lots of lawyers. Startups are always the first. So let's think about the cloud, which was the last really truly transformative new technology for enterprises. Being able to have all of your compute in the cloud and use SaaS. So it's always upgraded, it's always great, you can get it on every device. I mean, those were dark days before the cloud. The first AWS re:Invent, I think it was in 2013, every startup on planet Earth ran on the cloud. The idea that you would buy your own server and storage box and router was ridiculous.

00:35:16Gavin Baker

And then the first big Fortune 500 companies started to standardize on it maybe 5 years later. You see that with AI. I'm sure you've seen this in your startups, and I think one reason VCs are more broadly bullish on AI than public market investors is VCs see very real productivity gains. There are all these charts showing that, for a given level of revenue, a company today has significantly fewer employees than a company of two years ago. And the reason is AI is doing a lot of the sales, the support, and helping to make the product. And I mean, Iconiq has some charts, a16z... by the way, David George is a good friend, great guy... you know, he has his model busters thing. So there's very clear data that this is happening.

00:36:00Gavin Baker

So people who have a lens into the world of venture see this. And I do think it was very important in the third quarter: this is the first quarter where we had Fortune 500 companies outside of the tech industry give specific quantitative examples of AI-driven uplift. So C.H. Robinson went up something like 20% on earnings. And let's just say a truck goes from Chicago to Denver. And then the trucker lives in Chicago, so it's going to go back from Denver to Chicago. There's an empty load. And C.H. Robinson has all these relationships with these truckers and trucking companies, and they match shippers' demand with that empty-load supply to make the trucking more efficient.

00:36:46Gavin Baker

One of the most important things they do is they quote price and availability. So, somebody calls them up and says, "Hey, I urgently need three 18-wheelers from Chicago to Denver." In the past they said it would take them 15 to 45 minutes and they only quoted 60% of inbound requests. With AI, they're quoting 100% and doing it in seconds. And so they printed a great quarter and the stock went up 20% and it was because of AI-driven productivity that's impacting the revenue line, the cost line, everything.

00:37:22Gavin Baker

And so I actually think that's pretty important because I was actually very worried about the idea that we might have this Blackwell ROI air gap because we're spending so much money on Blackwell. Those Blackwells are being used for training and there's no ROI on training. Training is you're making the model. The ROI comes from inference. So I was really worried that we were going to have maybe this three-quarter period where the capex is unimaginably high, those Blackwells are only being used for training, the R staying flat, the I going up.

00:37:50Patrick O'Shaughnessy

Yeah. Yeah. Exactly.

00:37:52Gavin Baker

And so ROIC goes down. And you can see like Meta... Meta printed a quarter where ROIC declined because Meta has not been able to make a frontier model, and that was not good for the stock. So I was really worried about this. I do think that those data points are important in terms of suggesting that maybe we'll be able to navigate this potential air gap in ROIC.

00:38:12Patrick O'Shaughnessy

Yeah, it makes me wonder about, in this market... I'm like everybody else. It's the 10 companies at the top that are most of the market cap and more than all of the attention. There are 490 other companies in the 500. You studied those too. What do you think about that group? What is interesting to you about the group that now nobody seems to talk about and no one really seems to care about because they haven't driven returns?

00:38:35Gavin Baker

Well, I think that people are going to start to care if you have more and more companies print these C.H. Robinson-like quarters—companies that have historically been really well-run. The reason they have a long track record of success is you cannot succeed without using technology well. And so if you have a kind of internal culture of experimentation and innovation, I think you will do well with AI. I would bet on the best investment banks to be earlier and better adopters of AI than maybe some of the trailing banks.

00:39:15Gavin Baker

One thing that I have a strong opinion on: all these VCs are setting up these holding companies saying "we're going to use AI to make traditional businesses better," and, you know, they're really smart VCs and they have great track records. But that's what private equity has been doing for 50 years. You're just not going to beat private equity at their own game.

00:39:34Patrick O'Shaughnessy

What Vista did in the early days, right?

00:39:35Gavin Baker

Yeah. Private equity's maybe had a little bit of a tough run. You know, just multiples have gone up. Now, private assets are more expensive, the cost of financing has gone up. It's tough to take a company public because the public valuation is 30% lower than the private valuation. So PE's had a tough run. I actually think these private equity firms are going to be pretty good at systematically applying AI.

00:39:58Patrick O'Shaughnessy

We haven't spent much time talking about Meta, Anthropic or OpenAI. And I'd love just like your impression on everything that's going on in this infrastructure side that we talked about. These are three really important players in this grand game. How does all of this development that we've discussed so far impact those players specifically do you think?

00:40:16Gavin Baker

The first thing, let me just say something about frontier models broadly. In 2023 and 2024 I was fond of quoting Eric Vishria... our friend, brilliant man... and Eric would always say foundation models are the fastest depreciating assets in history. And I would say he was 90% right. I modified the statement. I said foundation models without unique data and internet-scale distribution are the fastest depreciating assets in history. But reasoning fundamentally changed that in a really profound way.

00:40:46Gavin Baker

There was a loop, a flywheel to quote Jeff Bezos, that was at the heart of every great internet company and it was: you made a good product, you got users, those users using the product generated data that could be fed back into the product to make it better. And that flywheel has been spinning at Netflix, at Amazon, at Meta, at Google for over a decade. And that's an incredibly powerful flywheel. And it's why those internet businesses were so tough to compete with. It's why they were increasing returns to scale. Everybody talks about network effects much more and you know network effects were important for social networks. I don't know to what extent Meta is a social network anymore. It's more like a content distribution. But they just had increasing returns to scale because of that flywheel.

00:41:33Gavin Baker

And that dynamic was not present in the pre-reasoning world of AI. You pre-trained a model, you let it out in the world, and it was what it was. And it was actually pretty hard. They would do RLHF, reinforcement learning from human feedback, to try and make the base model better. With reasoning, it's early, but that flywheel has started to spin and that is really profound for these frontier labs.

00:42:07Gavin Baker

So one, reasoning fundamentally changed the industry dynamics of Frontier Labs because if a lot of people are asking a similar question and they're consistently either liking or not liking the answer, then you can kind of use that as a verifiable reward. That's a good outcome. And then you can kind of feed those good answers back into the model. And we're very early at this flywheel spinning—like it's hard to do now—but you can see it beginning to spin. So, this is important fact number one for all of those dynamics.
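One way to picture that flywheel is a toy aggregator that promotes consistently liked answers into training signal. Everything below (the thresholds, the clustering of "similar questions" into IDs) is invented for illustration, not how any lab actually does it:

```python
# Toy sketch of the feedback flywheel described above: if many users ask a
# similar question and consistently rate one answer well, treat that as a
# weak verifiable reward and keep it as training data. Purely illustrative.
from collections import defaultdict

feedback = [  # (question_cluster_id, answer_id, thumbs_up)
    ("q1", "a", True), ("q1", "a", True), ("q1", "a", True), ("q1", "b", False),
    ("q2", "c", True), ("q2", "c", False),
]

votes = defaultdict(lambda: [0, 0])  # (cluster, answer) -> [ups, total]
for cluster, answer, up in feedback:
    votes[(cluster, answer)][0] += int(up)
    votes[(cluster, answer)][1] += 1

# Keep answers with enough votes and a high approval rate as reward examples.
kept = [k for k, (ups, total) in votes.items() if total >= 3 and ups / total >= 0.8]
print(kept)  # [('q1', 'a')]
```

The real systems have to solve hard sub-problems this sketch waves away, like deciding when two free-form questions are "similar" and filtering out noisy or adversarial feedback.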

00:42:52Gavin Baker

Second, I think it's really important that Meta... you know, Mark Zuckerberg at the beginning of this year in January said, "I anticipate... I'm highly confident..." I'm going to get the quote wrong, "...that at some point in 2025, we're going to have the best and most performant AI." I don't know if he's in the top hundred. Okay. So he was as wrong as it was possible to be. And I think that is a really important fact because it suggests that what these four companies have done is really hard to do, because Meta threw a lot of money at it and they failed. Yann LeCun had to leave... well, they had to make the famous billion-dollar offers to AI researchers.

00:43:28Gavin Baker

And by the way, Microsoft also failed. They did not make such an unequivocal prediction but they bought Inflection AI and there were a lot of comments from them that "we anticipate our internal models quickly getting better and we're going to run more and more of our AI on our internal models." Amazon, they bought a company called Adept AI. They have their models called Nova. No, I don't think they're in the top 20.

00:43:55Gavin Baker

So clearly it's much harder to do than people thought a year ago and there's many, many reasons for that. Like it's actually really hard to keep a big cluster of GPUs coherent. A lot of these companies were used to running their infrastructure to optimize for cost instead of complexity and performance. Keeping the GPUs running at high utilization rate in a big cluster is actually really hard and there are wild variations in how well companies run GPUs. If you have 30% uptime on that cluster and you're competing with somebody who has 90% uptime, you're not even competing.

00:44:40Gavin Baker

Two, then I think there is... you know, these AI researchers, they like to talk about "taste". I find it very funny. "Oh why do you make so much money?" "I have very good taste." What taste means is you have a good intuitive sense for the experiments to perform. And this is why you pay people a lot of money because it actually turns out that as these models get bigger, you can no longer run an experiment on a thousand GPU cluster and replicate it on 100,000 GPUs. You need to run that experiment on 50,000 GPUs and maybe it takes days. So there's a very high opportunity cost. And so you have to have a really good team that can make the right decisions about which experiments to run on this. And then you need to do all the reinforcement learning during post-training well and the test time compute well. It's really hard to do.

00:45:37Gavin Baker

I used to have this saying like, hey, I was a retail analyst long ago. Pick any vertical in America. If you can just run a thousand stores and have them clean, well lit, stocked with relevant goods at good prices and staffed by friendly employees who are not stealing from you, you're going to be a $20 billion company, a $30 billion company. Like 15 companies have been able to do that. It's really hard. And it's the same thing. Doing all of these things well is really hard. And then reasoning with this flywheel... this is beginning to create more separation.

00:46:16Gavin Baker

And what's even more important, every one of those labs—xAI, Gemini, OpenAI, and Anthropic—has a more advanced checkpoint of the model internally. A checkpoint is just: you're kind of continuously working on these models and then you release a checkpoint. The one they're using internally is better, and they're using that model to train the next model. And if you do not have that latest checkpoint, you're behind. It's getting really hard to catch up.

00:46:47Gavin Baker

Chinese open source is a gift from God to Meta because you can use Chinese open source to try and... that can be your checkpoint and you can use that as a way to kind of bootstrap this, and that's what I'm sure they're trying to do, and everybody else too. Um, the big problem and the big giant swing factor: I think China has made a terrible mistake with this rare earths thing. China, because you know they have the Huawei Ascend and it's a decent chip versus something like the deprecated Hopper... it looks okay. So they're trying to force Chinese open source to use their Chinese chips, their domestically designed chips.

00:47:26Gavin Baker

The problem is Blackwell is going to come out now and the gap between these American frontier labs and Chinese open source is going to blow out because of Blackwell. And actually DeepSeek in their most recent technical paper v3.2 said like one of the reasons we struggle to compete with the American Frontier Labs is we don't have enough compute. That was their very politically correct, still a little bit risky way of saying, "Guys, that might be a big mistake."

00:47:57Gavin Baker

So, if you just kind of play this out, these four American labs are going to start to widen their gap versus Chinese open source, which then makes it harder for anyone else to catch up because that gap is growing. So, you can't use Chinese open source to bootstrap. And then geopolitically, China thought they had the leverage. They're going to realize, "Oh, whoopsy daisy. We do need the Blackwells." And unfortunately, they'll probably realize that in late '26.

00:48:22Gavin Baker

And at that point, there's an enormous effort underway. DARPA and DoD have programs to incentivize really clever technological solutions for rare earths. I think rare earths are going to be solved way faster than anyone thinks. They're obviously not that rare, they're just misnamed; they're rare because they're really messy to refine. And so geopolitically, I actually think Blackwell is pretty significant.

00:49:03Gavin Baker

And then in the context of all of that, going back to the dynamics between these companies, xAI will be out with the first Blackwell model and then they'll be the first ones probably using Blackwell for inference at scale. And I think that's an important moment for them. And by the way, it is funny, if you go on OpenRouter you can just look: xAI has dominant share now. OpenRouter is whatever it is, it's 1% of API tokens, but it's an indication. They processed 1.35 trillion tokens. Google did like 800 or 900 billion. Anthropic was at 700 billion. xAI is doing really, really well and the model is fantastic.

00:49:46Gavin Baker

But you'll see xAI come out with this. OpenAI will come out faster. OpenAI's issue, which they're trying to solve with Stargate, is that they pay a margin to other people for compute, and maybe the people who run their compute are not the best at running GPUs. They are a high-cost producer of tokens.

00:50:05Patrick O'Shaughnessy

And I think this kind of explains a lot of their code red recently.

00:50:06Gavin Baker

Yeah. Well, just the $1.4 trillion in spending commitments. And I think that was just like, hey, they know they're going to need to raise a lot of money. Um, particularly if Google keeps its current strategy of sucking the economic oxygen out of the room. You go from $1.4 trillion rough vibes to code red pretty fast. And the reason they have a code red is because of all these dynamics. So then they'll come out with a model, but they will not have fixed their per-token cost disadvantage yet relative to both xAI and Google, and almost Anthropic, at that point.

00:50:38Gavin Baker

Anthropic is a good company. You know they're burning dramatically less cash than OpenAI and growing faster. So I think you have to give Anthropic a lot of credit and a lot of that is their relationship with Google and Amazon for the TPUs and the Trainiums. So Anthropic has been able to benefit from the same dynamics that Google has.

00:50:56Gavin Baker

I think it is very indicative in this great game of chess... You know, Dario and Jensen maybe have taken a few... there have been a few public comments that were made between them.

00:51:07Patrick O'Shaughnessy

Jousting.

00:51:07Gavin Baker

A little bit of jousting. Well, Anthropic just signed the $5 billion deal with Nvidia. That is because Dario is a smart man and he understands these dynamics about Blackwell and Rubin relative to TPU. And so Nvidia now goes from having two of the fighters, xAI and OpenAI, to three fighters. So that helps in this Nvidia versus Google battle. And then if Meta can catch up, that's really important. And so I am sure Nvidia is doing whatever they can to help Meta: "You're running those GPUs this way... maybe we should twist the screw this way or turn the dial that way." And then also, if Blackwell comes back to China, which it seems like it probably will, that will also be very good because then Chinese open source will be back.

00:51:59Patrick O'Shaughnessy

I'm always so curious about the poles of things. One pole would be the other breakthroughs that you have your mind on, things in the data center that aren't chips that we've talked about before as one example.

00:52:10Gavin Baker

I think the most important thing that's going to happen in this world in the next 3 to 4 years is data centers in space. And this has really profound implications for everyone building a power plant or a data center on planet earth. And there is a giant gold rush into this.

00:52:29Patrick O'Shaughnessy

I haven't heard anything about this so please.

00:52:30Gavin Baker

Yeah. You know, it's like everybody thinks, hey, AI is risky, but you know what, I'm going to build a data center, I'm going to build a power plant. We will need that. But if you think about it from first principles, data centers should be in space. Okay. What are the fundamental inputs to running a data center? There's power, there's cooling, and then there are the chips. That's basically the total if you think about it from a total cost perspective.

00:53:00Gavin Baker

So in space you can keep a satellite in the sun 24 hours a day and the sun is 30% more intense. You can have the satellite always kind of catching the light. This results in six times more irradiance in outer space than on planet earth. So you're getting a lot of solar energy. Point number one. Point number two, because you're in the sun 24 hours a day, you don't need a battery. And this is a giant percentage of the cost. So the lowest cost energy available in our solar system is solar energy in space.
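The arithmetic behind that "six times" figure is roughly this. The capacity factor below is an assumed average for a good terrestrial solar site, so the exact multiple varies with location:

```python
# Rough check of the ~6x solar claim: continuous sun at full space irradiance
# vs. a ground panel that loses to atmosphere, night, and weather.

SPACE_IRRADIANCE = 1361.0      # W/m^2, the solar constant above the atmosphere
GROUND_PEAK = 1000.0           # W/m^2, typical clear-sky peak at the surface
GROUND_CAPACITY_FACTOR = 0.22  # assumed 24h average for a good terrestrial site

space_avg = SPACE_IRRADIANCE                       # sunlit 24h/day in the right orbit
ground_avg = GROUND_PEAK * GROUND_CAPACITY_FACTOR  # ~220 W/m^2 averaged over a day

print(f"space / ground average ~ {space_avg / ground_avg:.1f}x")
```

The ~30% intensity figure is the 1361 vs. ~1000 W/m^2 gap; the rest of the multiple comes from never being in shadow or bad weather, which is also why no battery is needed.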

00:53:43Gavin Baker

Okay. Second, for cooling. In one of these racks, a majority of the mass and the weight is cooling. And the cooling in these data centers is incredibly complicated. You know, the HVAC, the CDUs, the liquid cooling. In space, cooling is free. You just put a radiator on the dark side of the satellite. It's cold. And it's as close to absolute zero as you can get. So all of that goes away, and that is a vast amount of cost.
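"Free" here means no chillers or CDUs: the panel rejects heat by radiation alone, governed by the Stefan-Boltzmann law. A sizing sketch with illustrative numbers (a real radiator runs at whatever temperature the coolant loop delivers, and real panels radiate from both faces):

```python
# How much heat a space radiator can reject, via the Stefan-Boltzmann law:
# P = emissivity * sigma * A * T^4. Numbers are illustrative assumptions.

SIGMA = 5.670e-8  # W / (m^2 K^4), Stefan-Boltzmann constant

def radiated_watts(area_m2, temp_k, emissivity=0.9):
    # Back-radiation from the ~3 K deep-space background is negligible.
    return emissivity * SIGMA * area_m2 * temp_k**4

# One square meter of radiator surface held at ~300 K facing deep space:
per_m2 = radiated_watts(1.0, 300.0)
print(f"~{per_m2:.0f} W rejected per m^2 at 300 K")

# So a rack dissipating 120 kW needs very roughly this much radiating surface:
print(f"~{120_000 / per_m2:.0f} m^2 of surface for a 120 kW rack")
```

The catch the sketch makes visible is area: at a few hundred watts per square meter, cooling is free in energy terms but not in launched mass.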

00:54:16Gavin Baker

Okay, let's think about how these satellites connect. Maybe each satellite is kind of a rack. How are you going to connect those racks? Well, it's funny. In the data center, the racks are over a certain distance connected with fiber optics. And that just means a laser going through a cable. The only thing faster than a laser going through a fiber optic cable is a laser going through absolute vacuum. So, if you can link these satellites in space together using lasers, you actually have a faster and more coherent network than in a data center on Earth.
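The speed difference is real but modest: light in silica fiber travels at about c/1.47, so a vacuum laser link is roughly 1.5x faster per kilometer of path (the refractive index is a typical value, and this ignores switching and routing delays):

```python
# Per-kilometer propagation latency: laser in fiber vs. laser in vacuum.
C = 299_792_458     # m/s, speed of light in vacuum
FIBER_INDEX = 1.47  # typical refractive index of silica fiber

fiber_us = 1000 * FIBER_INDEX / C * 1e6   # microseconds per km in fiber
vacuum_us = 1000 / C * 1e6                # microseconds per km in vacuum

print(f"fiber:  {fiber_us:.2f} us/km")
print(f"vacuum: {vacuum_us:.2f} us/km ({fiber_us / vacuum_us:.2f}x faster)")
```

Over rack-to-rack distances the absolute difference is tiny; the argument matters more for long inter-satellite links, where fiber paths on Earth also take indirect routes.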

00:54:55Gavin Baker

Okay, for training that's going to take a long time just because it's so big. But for inference... let's think about the user experience when I asked Grok about you. A radio wave traveled from my cell phone to a cell tower. Then it hit the base station, went into a fiber optic cable, went to some sort of metro aggregation facility in New York, probably within like 10 blocks of here. There's a small little metro router that routed those packets to a big xAI data center somewhere. And then the computation was done and it came back over the same path.

00:55:41Gavin Baker

If the satellites can communicate directly with the phone—and Starlink has demonstrated direct-to-cell capability—you just go boom boom. It's a much better lower cost user experience. So in every way data centers in space from a first principles perspective are superior to data centers on earth.

00:56:03Patrick O'Shaughnessy

So if we could teleport that into existence, I understand that portion. What are the frictions to that? Why will that not happen? And is it launch cost? Is it launch availability?

00:56:13Gavin Baker

I mean, we need a lot of the Starships. Starship is the only vehicle that can economically make that happen. We need a lot of those Starships. Um, you know, maybe China or Russia will be able to land a rocket. Blue Origin just landed a booster. It's an entirely new and different way to think about SpaceX.

00:56:30Gavin Baker

And it is interesting that Elon posted yesterday or said in an interview that Tesla, SpaceX and xAI are kind of converging, and they really are. So xAI will be the intelligence module for Optimus made by Tesla, with Tesla vision as its perception system. And then SpaceX will have the data centers in space that will power a lot of the AI presumably for xAI and Tesla and the Optimuses and a lot of other companies. And it's just interesting the way that they're converging and each one is kind of creating competitive advantage for the other.

00:57:53Patrick O'Shaughnessy

If I go to the other end of the spectrum and I think about something that seems to have been historically endemic to the human economic experience—that shortages are always followed by gluts in capital cycles. What if in this case the shortage is compute? Mark Chen now is on the record as saying they would consume 10x as much compute if you gave it to them in like a couple weeks. So like there seems to still be a massive shortage of compute. But there also just seems to be this like iron law of history that gluts follow shortages. What do you think about like that concept as it relates to this?

00:58:28Gavin Baker

Will there be a glut? You know, AI is fundamentally different from software just in that every time you use AI it takes compute in a way that traditional software just did not. I mean, it is true, I think every one of these companies could consume 10x more compute. What would happen is just the $200 tier would get a lot better. The free tier would get like the $200 tier. Google has started to monetize AI with ads and I think that will give everyone else permission to introduce ads into the free mode, and then that is going to be an important source of ROI.

00:59:13Gavin Baker

You know, here are your three vacations, would you like me to book one? And then they're for sure going to collect a commission. There are many ways you can make money. I think we went into great detail on maybe a prior podcast about how just inventory dynamics made these inventory cycles inevitable in semis. Um, and the iron law of semis is just that customer buffer inventories have to equal lead times. And that's why you got these inventory cycles historically.

00:59:32Gavin Baker

We haven't seen a true capacity cycle in semis arguably since the late 90s. And that's because Taiwan Semi has been so good at aggregating and smoothing supply. And a big problem in the world right now is that Taiwan Semi is not expanding capacity as fast as their customers want. I actually think they're in the process of making a mistake, because you do have Intel with these fabs... and they're not as good and it's really hard to work with their PDK... but now you have this guy Lip-Bu, who's a really good executive. By the way, Pat Gelsinger I think was also a good executive, and he put Intel on the only strategy that could result in success, and I actually think it's shameful that the Intel board fired him when they did. But Lip-Bu is a good executive, and now he's reaping the benefits of Pat's strategy, and Intel has all these empty fabs. And eventually, given the shortages we have of compute, those fabs are going to be filled.

01:00:31Gavin Baker

So I think Taiwan Semi is in the process of making a mistake, but they're just so paranoid about an overbuild. And they're so skeptical. You know, they're the guys who met with Sam Altman and laughed and said, "He's a podcast bro. He has no idea what he's talking about." They're terrified of an overbuild. So it may be that Taiwan Semi singlehandedly puts the brakes on the bubble... is the governor.

01:01:00Gavin Baker

I think governors are good. It's good that power is a governor; it's good that Taiwan Semi is a governor. If Taiwan Semi opens up capacity at the same time that data centers in space relieve all power constraints—but it's like five, six years away that data centers in space are a majority of deployed megawatts—yeah, I think you get an overbuild really fast. But we have these two really powerful natural governors. Smoother and longer is good.

01:01:28Patrick O'Shaughnessy

We haven't talked about power other than alluding to it through the space thing. Power was like the most uninteresting topic because nothing really changed for a really, really long time. All of a sudden we're trying to figure out how to get gigawatts here, there, and everywhere. How do you think about it? Are you interested in power?

01:01:45Gavin Baker

I'm very interested. I do feel lucky that in a prior life I was the sector leader for the telecom and utilities team. So one, having watts as a constraint is really good for the most advanced compute players, because if watts are the constraint, the price you pay for compute is irrelevant. The TCO of your compute is absolutely irrelevant, because if you can get 3x or 4x or 5x more tokens per watt, that is literally 3x or 4x or 5x more revenue.
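The arithmetic behind that claim can be sketched in a few lines of Python. This is a hypothetical back-of-the-envelope, not figures from the conversation: it just shows that when the power budget is fixed, revenue scales with tokens per watt regardless of what the chips cost.

```python
# Back-of-the-envelope: when power is the binding constraint,
# revenue scales with tokens produced per watt, not with chip price.
# All numbers below are illustrative assumptions.

def annual_revenue(power_budget_mw, tokens_per_watt_hour, price_per_million_tokens):
    """Annual revenue for a fixed power budget, assuming every token produced is sold."""
    hours_per_year = 24 * 365
    watts = power_budget_mw * 1_000_000
    tokens = watts * tokens_per_watt_hour * hours_per_year
    return tokens / 1_000_000 * price_per_million_tokens

base = annual_revenue(100, 1_000, 0.50)    # baseline chip at 100 MW
better = annual_revenue(100, 5_000, 0.50)  # 5x more tokens per watt, same 100 MW

print(better / base)  # -> 5.0: 5x the tokens per watt is 5x the revenue
```

At a fixed wattage, the ratio of revenues equals the ratio of tokens per watt, which is why, as long as power is the governor, efficiency rather than purchase price drives the economics.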

01:02:17Gavin Baker

So as long as power is a governor, the best products are going to win irrespective of price and have crazy pricing power. I think that's the first implication that's really important to me. Second, on the solutions to this: we just can't build nuclear fast enough in America. As much as we would love to build nuclear quickly, we just can't. NEPA, all these rules... it's just too hard. A rare ant, one that we could move, and it could be in a better environment, can totally delay the construction of a nuclear power plant. One ant. It's crazy, actually. Humans need to come first.

01:03:32Gavin Baker

But the solutions are natural gas and solar. And the great thing about these AI data centers is that, apart from the ones you're going to do inference on, you can locate them anywhere. This is why you're seeing all this activity in Abilene: it's in the middle of a big natural gas basin, and we have a lot of natural gas in America because of fracking. I think this is going to be solved. You're going to have power plants fed by gas or solar. All these turbine manufacturers were reluctant to expand capacity. Caterpillar just said, "We're going to increase capacity by 75% over the next few years." So the system on the power side is beginning to respond.

01:04:19Patrick O'Shaughnessy

One of the reasons I so love talking to you is that you do as much work on the top 10 companies in the world as you do looking at brand new companies with, you know, entrepreneurs that are 25 years old trying to do something amazing. If I think about that second category of young, enterprising technologists, who are kind of the first generation of AI-native entrepreneurs, what are you seeing in that group that's notable or surprising or interesting?

01:04:47Gavin Baker

These young CEOs, they're just so impressive in all ways, and they get more polished faster. And I think the reason is they're talking to the AI. "How should I deal with pitching this investor? I'm meeting with Patrick O'Shaughnessy. What do you think the best ways to pitch him are?" And it works. "Hey, I have this difficult HR situation. How would you handle it?" And it's good at that. "We're struggling to sell our product. What changes would you make?" It's really good at all of that today.

01:05:23Gavin Baker

And that goes to why these VCs are seeing massive AI productivity in all their companies: it's because their companies are full of these, you know, 23-, 24-year-old or even younger AI natives. I've been so impressed with young investment talent, and your podcast is part of that. Knowledge, very specific knowledge, has become so accessible through podcasts and the internet. Impressive young people come in and I feel like they're where I was as an investor in my early 30s, and they're 22, and I'm like, "Oh my god, I have to run so fast to keep up." These kids who are growing up native in AI are just proficient with it in a way that I am trying really hard to become.

01:06:17Patrick O'Shaughnessy

Can we talk about semi VC specifically and like what is interesting in that universe?

01:06:20Gavin Baker

Oh, the one thing I think is so cool about it and so underappreciated is that your average semiconductor venture founder is like 50 years old. Okay? And Jensen, and what's happened with Nvidia and the market cap of Nvidia, has singlehandedly ignited semiconductor venture. But it's ignited in an awesome way that's really good for Nvidia and Google and everyone.

01:06:44Gavin Baker

Let's just say you were the best DSP architect in the world. For the last 20 years, every two years... because that's what you have to do in semiconductors, every two years you have to run a race. And if you won the last race, you start like a foot ahead. And over time those compound. But maybe that person and his team... maybe he's the head of networking at a big public company, he's making a lot of money, and he has a good life. And then, because he sees these outcomes and the size of the markets in the data center, he's like, "Wow, why don't I just go start my own company?"

01:07:22Gavin Baker

But the reason that's important is that there are thousands of parts in a Blackwell rack. And in the Blackwell rack, maybe Nvidia makes 200 or 300 of those parts. Same thing in an AMD rack. And they need all of those other parts to accelerate with them. So they couldn't go to this one-year cadence if everything else was not keeping up with them. The fact that semiconductor venture has come back with a vengeance... my little firm maybe has done more semiconductor deals in the last seven years than the top 10 VCs combined... is really, really important, because now you have an ecosystem of companies who can keep up.

01:08:12Gavin Baker

And that ecosystem of venture companies is putting pressure on the public companies that also need to be part of this if we're going to go to this annual cadence, which is just so hard. Because not even Nvidia can do it alone. AMD can't do it alone. Google can't do it alone. You need the people who make the transceivers. You need the people who make the wires, who make the backplanes, who make the lasers. They all have to accelerate with you. And one thing that I think is very cool about AI as an investor is that it's the first time where, at every level of the stack that I look at, the most important competitors are both public and private.

01:09:08Gavin Baker

You know, Nvidia has very important private competitors. Broadcom, important private competitors. Marvell, important private competitors. Lumentum, Coherent... all these companies. There's even a wave of innovation in memory, which is really exciting to see, because memory is such a gating factor. By the way, something that could slow all this down and be a natural governor is if we get our first true DRAM cycle since the late 90s.

01:09:31Patrick O'Shaughnessy

Say more what that means.

01:09:32Gavin Baker

Imagine a DRAM wafer being valued like a 5-carat diamond. In the '90s, when you had these true capacity cycles, before Taiwan Semi kind of smoothed everything out and DRAM became more of an oligopoly, you would have these crazy shortages where the price would just go 10x. Relative to the last 25 years, where a good DRAM cycle is the price stops going down, and an epic cycle is maybe it goes up 30, 40, 50%. But if it starts to go up by X's instead of percentages, that's a whole different game. By the way, we should talk about SaaS.

01:10:09Patrick O'Shaughnessy

Yeah, let's talk about it. What do you think's going to happen?

01:10:12Gavin Baker

Application SaaS companies are making the exact same mistake that brick-and-mortar retailers did with e-commerce. Brick-and-mortar retailers, particularly after the telecom bubble crashed, looked at Amazon and said, "Oh, it's losing money. E-commerce is going to be a low-margin business. Just from first principles, how can it ever be more efficient as a business? Right now, our customers pay to transport themselves to the store and then they pay to transport the goods home. How could it ever be more efficient if we're sending shipments out to individual customers?" And Amazon's vision, of course, was that eventually we're just going to go down a street and drop off a package at every house.

01:10:57Gavin Baker

And so they did not invest in e-commerce. They clearly saw customer demand for it, but they did not like the margin structure of e-commerce. That is the fundamental reason essentially every brick-and-mortar retailer was really slow to invest in e-commerce. And now here we are, and Amazon has higher margins in its North American retail business than a lot of mass-market retailers. So margins can change, and if there's a fundamentally transformative new technology that customers are demanding, it's always a mistake not to embrace it.

01:11:30Gavin Baker

And that's exactly what the SaaS companies are doing. They have their 70, 80, 90% gross margins, and they are reluctant to accept AI gross margins. The very nature of AI is... you know, with software, you write it once, it's written very efficiently, and then you can distribute it broadly at very low cost, and that's why it was a great business. AI is the exact opposite: you have to recompute the answer every time, and so a good AI company might have gross margins of 40%.

01:12:00Gavin Baker

Now, the crazy thing is, because of those efficiency gains, they're generating cash way earlier than SaaS companies did historically. But they're generating cash earlier not because they have high gross margins, but because they have very few human employees. And it's just tragic to watch all of these companies... you want to have an agent? It's never going to succeed if you're not willing to run it at a sub-35% gross margin, because that's what the AI natives are running it at. Maybe they're running it at 40. So if you are trying to preserve an 80% gross margin structure, you are guaranteeing that you will not succeed at AI.

01:12:43Gavin Baker

Absolute guarantee. And this is so crazy to me, because we have an existence proof of software investors being willing to tolerate gross margin pressure as long as gross profit dollars are okay. It's called the cloud. People don't remember, but when Adobe converted from on-premise to a SaaS model, not only did their margins implode, their actual revenues imploded too, because you went from charging up front to charging over a period of years. With Microsoft it was less dramatic, but Microsoft was a tough stock in the early days of the cloud transition, because investors were like, "Oh my god, you're an 80% gross margin business, and the cloud is in the 50s." And they were like, "Well, it's going to be gross profit dollar accretive."
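The "gross profit dollar accretive" point can be made concrete with a toy calculation in Python. All revenue and margin figures here are hypothetical, chosen only to show that adding a lower-margin product line compresses the blended margin while still growing absolute gross profit.

```python
# Toy illustration: margin compression can still be gross-profit-dollar accretive
# if the lower-margin product grows revenue enough. All numbers are hypothetical.

def gross_profit(revenue_m, gross_margin):
    """Gross profit in $M for a product line."""
    return revenue_m * gross_margin

legacy_gp = gross_profit(100, 0.80)  # $100M of legacy 80%-margin software
ai_gp = gross_profit(150, 0.40)      # $150M of new 40%-margin AI revenue

blended_margin = (legacy_gp + ai_gp) / (100 + 150)

# Blended margin falls from 80% to 56%, but gross profit dollars rise 75%.
print(legacy_gp, legacy_gp + ai_gp, round(blended_margin, 2))  # 80.0 140.0 0.56
```

This is the cloud-transition pattern described above in miniature: the headline margin percentage deteriorates even as the business produces more gross profit in absolute terms.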

01:13:33Gavin Baker

Microsoft bought GitHub, and they use GitHub as a distribution channel for Copilot. Copilot for coding has become a giant business now. For sure it runs at much lower gross margins. But there are so many SaaS companies... I can't think of a single application SaaS company that could not be running a successful agent strategy. They have a giant advantage over these AI natives in that they have a cash-generative business.

01:14:07Gavin Baker

I think there is room for someone to be a new kind of activist or constructivist and just go to SaaS companies and say, "Stop being so dumb. All you have to do is say here are my AI revenues and here are my AI gross margins. And you know it's real AI because it's low gross margins. I'm going to show you that. And here's a venture competitor over here that's losing a lot of money. So maybe I'll actually take my gross margins to zero for a while but I have this business that the venture-funded company doesn't have." And this is just such an obvious playbook that you can run. Salesforce, ServiceNow, HubSpot, GitLab, Atlassian, all of them could run this.

01:14:51Patrick O'Shaughnessy

And the way those companies could or should think about using agents is just to ask the question: okay, what are the core functions we do for the customer now, and how can we further automate that with agents effectively? Or is it some other...

01:15:03Gavin Baker

100%. Like, if you're in CRM: what do our customers do? They talk to their customers. We're customer relationship management software, and we do some customer support, too. So make an agent that can do that. Sell that at 10 to 20%, and let that agent access all the data you have. Because what's happening right now is another agent made by someone else is accessing your systems, pulling the data into their system, and then you will eventually be turned off.

01:15:32Gavin Baker

And it's just crazy. And it's just because, "Oh, but we want to preserve our 80% gross margins." This is a life-or-death decision, and essentially everyone except Microsoft is failing it. To quote that memo from the Nokia executive long ago: their platforms are burning. Burning platform. There's a really nice platform right over there, and you can just hop to it, and then you can put out the fire in your platform that's on fire. And now you've got two platforms, and it's great.

01:16:05Patrick O'Shaughnessy

Your data centers in space thing makes me wonder if there are other less-discussed, off-the-wall things you're thinking about in the markets in general that we haven't talked about.

01:16:16Gavin Baker

It does feel like 2020 kicked off, and 2022 punctured, this kind of series of rolling bubbles. In 2020 there was a bubble in EV startup companies that were not Tesla. That was for sure a bubble, and they all went down 99%. And there was kind of a bubble in more speculative stocks; then we had the meme stocks, GameStop. And now it feels like the rolling bubble is in nuclear and quantum.

01:16:51Gavin Baker

And these are, you know, fusion and SMRs. They would be transformative technologies. It's amazing. But sadly, from my perspective, none of the public ways you can invest in this are really good expressions of the theme, or likely to succeed, or have any real fundamental support. And same thing with quantum. I've been looking at quantum for 10 years. We have a really good understanding of quantum, and the public quantum companies, again, are not the leaders. From my perspective, the leaders in quantum would be Google, IBM, and then Honeywell's quantum business. So the public ways you can invest in this theme, which probably is exciting, are not the best. So you have two really clear bubbles.

01:17:34Gavin Baker

I also think quantum supremacy is very misunderstood. People hear it and think it means that quantum computers are going to be better than classical computers at everything. With quantum you can do some calculations that classical computers cannot do. That's it. That's going to be really useful and exciting and awesome, but it doesn't mean that quantum takes over the world.

01:17:58Gavin Baker

The thought that I have had—this is maybe less related to markets than to AI—is that for the last two years, whatever AI needs to keep growing and advancing, it gets. Have you ever seen public opinion in the United States change so fast on any issue as on nuclear power? It just happened like that. And why did that happen right when AI needed it to happen? Now we're running up against the boundaries of power on Earth... all of a sudden, data centers in space. It's just a little strange to me that whenever there is a bottleneck that might slow it down, everything accelerates. Rubin is going to be such an easy, seamless transition relative to Blackwell, and Rubin's a great chip. And then you have AMD getting into the game with the MI450. It's just: whatever AI needs, it gets.

01:19:11Patrick O'Shaughnessy

You're a deep reader of sci-fi, so uh...

01:19:13Gavin Baker

Yeah, exactly.

01:19:15Patrick O'Shaughnessy

You're making me think of Kevin Kelly's great book, What Technology Wants. He calls it the Technium: the overall mass of technology that is just supplied by humans so it can grow more powerful.

01:19:25Gavin Baker

Absolutely. Yes. It just wants to grow more and more powerful. And now we're going into an end state.

01:19:30Patrick O'Shaughnessy

I have a selfish closing question. Speaking of young people, so my kids who are 12 and 10, but especially my son who's older is developing an interest in what I do, which I think is quite natural. And I'm going to try to start asking my friends who are the most passionate about entrepreneurship and investing why they are so passionate about it and what about it is so interesting and life-giving to them. How would you pitch what you've done, the career you built, this part of the world to a young person that's interested in this?

01:20:01Gavin Baker

I do believe at some level kind of investing is the search for truth. And if you find truth first, and you're right about it being a truth, that's how you generate alpha. And it has to be a truth that other people have not yet seen. You're searching for hidden truths.

01:20:18Gavin Baker

The earliest thing I can remember is being interested in history. You know, looking at books with pictures of the Phoenicians and the Egyptians and the Greeks and the Romans and pyramids. I loved history. I vividly remember, like in the second grade, as my dad drove me to school every day, we went through the whole history of World War II in one year, and I loved that. And then that translated into a real interest in current events very early. So, as a pretty young person, I was reading the New York Times and the Washington Post, and I would get so excited when the mail came because it meant that maybe there was an Economist or a Newsweek or a Time or US News. And I was really into current events, you know, because current events is kind of like applied history: watching history happen and thinking about what might happen next.

01:21:17Gavin Baker

And you know, I didn't know anything about investing. My parents were both attorneys. Anytime I won an argument, I was super rewarded. If I could make a reasonable argument why I should stay up late, my parents would be so proud and they'd let me stay up late, but I had to beat them.

01:21:34Gavin Baker

You know, I was just kind of going through life. I really love to ski and I love rock climbing. And I go to college and rock climbing is by far the most important thing in my life. I dedicated myself to it completely. I did all my homework at the gym. I got to the rock climbing gym like at 7 am, would skip a lot of classes to stay in the gym. I'd do my homework on like a big bouldering mat. Like every weekend I went and climbed somewhere with the Dartmouth Mountaineering Club.

01:22:03Gavin Baker

My plan, after two or three years of college, was to leave. I was going to be a ski bum in the winters, work on a river in the summers, and that was how I was going to support myself. And then I was going to climb in the shoulder seasons, try to be a wildlife photographer, and write the next great American novel.

01:23:11Patrick O'Shaughnessy

I can't believe I never knew this.

01:23:13Gavin Baker

That was my plan. This was my plan of record. I was really lucky; my parents were very supportive of everything I wanted to do. They had very strict parents, so of course they were extremely permissive with me. They were lawyers and had done reasonably well, but they both grew up in very economically disadvantaged circumstances. You know, my dad talks about how he remembers every person who bought him a beer, just because he couldn't afford a beer. So they were super on board with this plan.

01:23:57Gavin Baker

They said, "You know, Gavin, we think this plan... sounds like a great plan, but you know, we've never asked you for anything. We haven't encouraged you to study anything. We've supported you in everything you've wanted to do. Will you please get one professional internship, just one, and we don't care what it is." The only internship I could get, this was at the end of my sophomore summer at Dartmouth, was an internship with Donaldson, Lufkin & Jenrette (DLJ).

01:24:27Gavin Baker

My job was, every time DLJ published a research report... this was in the private wealth management division... to go through and look at which of the advisor's clients owned that stock, and then mail it to those clients. So one day we'd write on General Electric, and I'd need to mail the GE report to these 30 people. And then I started reading the reports, and I was like, "Oh my god, this is the most interesting thing imaginable."

01:25:04Gavin Baker

Investing. I conceptualized it as a game of skill and chance, something like poker. There is chance that is irreducible, but there's skill, too. So that really appealed to me. And the way you got an edge in this, the greatest game of skill and chance imaginable, was to have the most thorough knowledge possible of history, and to intersect that with the most accurate understanding of current events in the world, to form a differentiated opinion on what was going to happen next in this game of skill and chance: which stock is mispriced in the parimutuel system that is the stock market.

01:25:55Gavin Baker

And that was like day three. I went to the bookstore and bought the books they had, which were Peter Lynch's books. I read those in like two days. And then I read all these books about Warren Buffett. Then I read Market Wizards. Then I read Warren Buffett's letters to shareholders. Then I taught myself accounting; there's this great book, Why Stocks Go Up and Down. Then I went back to school, changed my majors from English and History to History and Economics, and never looked back.

01:26:30Gavin Baker

And it consumed me... though I continued to really focus on climbing. I would be in the gym and I would print out everything that the people on the Motley Fool wrote. They were early to talking about return on invested capital, and incremental ROIC is a really important indicator. I would read it and underline it, and I'd read books, and then I'd read the Wall Street Journal, and eventually there was a computer terminal finally set up near the gym, and I'd go to that terminal and just read news about stocks. It was the most important thing in my life, and I barely kept my grades up. And yeah, that's how I got into it, man. History, current events, skill and chance.

01:27:09Gavin Baker

And I am a competitive person and I've actually never been good at anything else. Okay, I got picked last for every sports team. Like I love to ski, I've literally spent a small fortune on private skiing lessons, I'm not that good of a skier. I like to play ping pong, all my friends could beat me. Um, I tried to get really good at chess and my goal was to beat one of the people in the park. Never beat one of them. Never been good at anything. I thought I would be good at this.

01:27:50Gavin Baker

And the idea of being good at something other than taking a test that was competitive was very appealing to me. And so I think that's been a really important thing, too. And to this day, this is the only thing I've been vaguely competitive at. I'd love to be good at something else. I'm just not.

01:28:10Patrick O'Shaughnessy

I think I'm going to start asking this question of everybody. The ongoing education of Pearson... amazing place to close. I love talking about everything so much.

01:28:17Gavin Baker

This is great, man. Thank you. Thank you. Thank you.
