Elon Musk: Digital Superintelligence, Multiplanetary Life, How to Be Useful
Disclaimer: The transcript on this page is for the YouTube video titled "Elon Musk: Digital Superintelligence, Multiplanetary Life, How to Be Useful" from "Y Combinator". All rights to the original content belong to their respective owners. This transcript is provided for educational, research, and informational purposes only. This website is not affiliated with or endorsed by the original content creators or platforms.
Watch the original video here: https://www.youtube.com/watch?v=cFIlta1GkiE
We're at the very, very early stage of the intelligence big bang. Being a multi-planet species greatly increases the probable lifespan of civilization, or consciousness, or intelligence, both biological and digital. I think we're quite close to digital superintelligence. If it doesn't happen this year, next year for sure. Please give it up for Elon Musk.
Elon, welcome to AI Startup School. We're just really, really blessed to have you here today. Thanks for having me. So, from SpaceX, Tesla, Neuralink, xAI, and more, was there ever a moment in your life, before all this, where you felt, "I have to build something great"? And what flipped that switch for you?
Well, I didn't originally think I would build something great. I wanted to try to build something useful, but I didn't think I would build anything particularly great. Probabilistically, it seemed unlikely, but I wanted to at least try.
So you're talking to a room full of technical engineers, including some of the most eminent AI researchers coming up in the game.
Okay. I like the term engineer better than researcher. I suppose if there's some fundamental algorithmic breakthrough, it's research, but otherwise it's engineering.
Maybe let's start way back. This is a room full of 18-to-25-year-olds; it skews younger because the founder set is younger and younger. Can you put yourself back into their shoes, when you were 18, 19, learning to code, even coming up with the first idea for Zip2? What was that like for you?
Yeah, back in '95 I was faced with a choice: either do grad studies, a PhD at Stanford in materials science, actually, working on ultracapacitors for potential use in electric vehicles, essentially trying to solve the range problem for electric vehicles, or try to do something in this thing that most people had never heard of, called the internet. I talked to my professor, Bill Nix in the materials science department, and said, "Can I defer for a quarter? Because this will probably fail and then I'll need to come back to college." And he said, "This is probably the last conversation we'll have," and he was right. So I thought things would most likely fail, not that they would most likely succeed. And then in '95 I wrote what I think were the first, or close to the first, maps, directions, white pages, and yellow pages on the internet.
I just wrote that personally, and I didn't even use a web server; I just read the port directly, because I couldn't afford a T1. The original office was on Sherman Avenue in Palo Alto. There was an ISP on the floor below, so I drilled a hole through the floor and ran a LAN cable directly to the ISP. My brother joined me, along with another co-founder, Greg Kouri, who passed away. At the time we couldn't even afford a place to stay, so, since the office was 500 bucks a month, we just slept in the office and showered at the YMCA on Page Mill.
And yeah, I guess we ended up doing a somewhat useful company, Zip2, in the beginning. We did build a lot of really good software technology, but we were somewhat captured by the legacy media companies, in that Knight Ridder, the New York Times, Hearst, and whatnot were investors and customers and also on the board. They kept wanting to use our software in ways that made no sense, and I wanted to go direct to consumers. Anyway, long story, I'm dwelling too much on Zip2, but I really just wanted to do something useful on the internet, because I had two choices: do a PhD and watch people build the internet, or help build the internet in some small way. And I was like, "Well, I guess I can always try and fail and then go back to grad studies." Anyway, that ended up being reasonably successful; it sold for about $300 million, which was a lot at the time.
These days, I think the minimum impulse bid for an AI startup is like a billion dollars. There are so many freaking unicorns, it's like a herd of unicorns at this point; a unicorn being a billion-dollar situation. There's been inflation since, so that's quite a bit more money now, actually.
Yeah, I mean, in 1995 you could probably buy a burger for a nickel.
Well, not quite, but yeah, there has been a lot of inflation. But the hype level on AI is pretty intense, as you've seen. You see companies that are less than a year old getting billion-dollar or multi-billion-dollar valuations, which I guess could pan out, and probably will pan out in some cases, but it is eye-watering to see some of these valuations. What do you think?
Well, I'm pretty bullish personally, honestly. I think the people in this room are going to create a lot of the value; a billion people in the world should be using this stuff, and we're barely scratching the surface of it. I love the internet story, in that even back then you were a lot like the people in this room: the CEOs of all the legacy media companies looked to you as the person who understood the internet. And a lot of the corporate world, the world at large that does not understand what's happening with AI, is going to look to the people in this room for exactly that. So what are some of the tangible lessons? It sounds like one of them is don't give up board control, or at least be careful about it and have a really good lawyer.
I guess for my first startup, the real mistake was having too much shareholder and board control from legacy media companies, who then necessarily see things through the lens of legacy media. They'll make you do things that seem sensible to them but don't make sense with the new technology. I should point out that I didn't actually intend to start a company at first. I tried to get a job at Netscape. I sent my resume in to Netscape, and Marc Andreessen knows about this.
But I don't think he ever saw my resume, and nobody responded. Then I tried hanging out in the lobby of Netscape to see if I could bump into someone, but I was too shy to talk to anyone. So I'm like, "Man, this is ridiculous. I'll just write software myself and see how it goes." So it wasn't actually from the standpoint of "I want to start a company." I just wanted to be part of building the internet in some way, and since I couldn't get a job at an internet company, I had to start an internet company.
Anyway, AI will change the future so profoundly that it's difficult to fathom how much. Assuming things don't go awry, and AI doesn't kill us all and itself, you'll ultimately see an economy that is not just ten times the current economy. If we, or our mostly machine descendants, become a Kardashev scale two civilization or beyond, we're talking about an economy that is thousands of times, maybe millions of times, bigger than the economy today. So yeah, I did feel a bit like, when I was in DC taking a lot of flack for getting rid of waste and fraud, that it was an interesting side quest, as side quests go. But I've got to get back to the main quest. Fixing the government is like this: say the beach is dirty, and there are needles and feces and trash, and you want to clean up the beach, but there's also this thousand-foot wall of water, a tsunami of AI, about to hit. How much does cleaning the beach really matter if you've got a thousand-foot tsunami about to hit? Not that much.
Oh, we're glad you're back on the main quest. It's very important.
Yeah, back to the main quest: building technology, which is what I like doing. There's just so much noise; the signal-to-noise ratio in politics is terrible.
I mean, I live in San Francisco, so you don't need to tell me twice.
Yeah, I guess it's all politics in DC. But if you're trying to build a rocket or cars, or trying to have software that compiles and runs reliably, then you have to be maximally truth-seeking, or your software or your hardware won't work. You can't fool math; math and physics are rigorous judges. So I'm used to being in a maximally truth-seeking environment, and that's definitely not politics. Anyway, I'm glad to be back in technology.
I guess I'm kind of curious, going back to the Zip2 moment: you had an exit worth hundreds of millions of dollars.
I mean, I, I got $20 million, right?
Okay. So you solved the money problem at least. And you basically took it and kept rolling with X.com, which merged with Confinity and became PayPal.
Yes. I kept the chips on the table.
Yeah. Not everyone does that, and a lot of the people in this room will have to make that decision. What drove you to jump back into the ring?
Well, I felt that with Zip2 we'd built incredible technology, but it never really got used. At least from my perspective, we had better technology than, say, Yahoo or anyone else, but it was constrained by our customers. So I wanted to do something where we wouldn't be constrained by our customers and could go direct to consumer, and that ended up being X.com and PayPal: essentially X.com merging with Confinity, which together created PayPal. And the PayPal diaspora has created more companies than probably anything else in the 21st century; so many talented people were at the combination of Confinity and X.com.
I just felt like we'd gotten our wings clipped somewhat with Zip2, so it was: okay, what if our wings aren't clipped and we go direct to consumer? That's what PayPal ended up being. But yeah, I got that $20 million check for my share of Zip2. At the time I was living in a house with four housemates and had maybe ten grand in the bank, and then this check arrives in the mail, of all places. My bank balance went from $10,000 to $20,010,000. You're like, "Well, okay." I still had to pay taxes on that and all, but then I ended up putting almost all of it into X.com, as you said, keeping almost all the chips on the table.
And then after PayPal, I was kind of curious as to why we had not sent anyone to Mars. I went on the NASA website to find out when we were sending people to Mars, and there was no date. I thought maybe it was just hard to find on the website, but in fact there was no real plan to send people to Mars. This is such a long story, so I don't want to take up too much time here, but...
I think we're all listening with rapt attention.
So I was actually on the Long Island Expressway with my friend Adeo Ressi; we were housemates in college. And Adeo was asking me what I was going to do after PayPal. I said, "I don't know, I guess maybe I'd like to do something philanthropic in space," because I didn't think I could actually do anything commercial in space; that seemed like the purview of nations. But I was kind of curious as to when we were going to send people to Mars, and that's when I realized it's not on the website; there's nothing on the NASA website.
So then I started digging in, and I'm definitely summarizing a lot here, but my first idea was to do a philanthropic mission to Mars called Life to Mars, where we would send a small greenhouse with seeds in dehydrated nutrient gel, land that on Mars, hydrate the gel, and then you'd have this great sort of money shot of green plants on a red background. For the longest time, by the way, I didn't realize "money shot" is, I think, a porn reference. But the point is that would be the great shot, green plants on a red background, to try to inspire NASA and the public to send astronauts to Mars. As I learned more, I came to realize some things, and along the way, by the way, I went to Russia in 2001 and 2002 to buy ICBMs, which is an adventure. You go and meet with Russian high command and say, "I'd like to buy some ICBMs." This was to get to space.
Not to nuke anyone, but...
Right. As a result of arms reduction talks, they had to actually destroy a bunch of their big nuclear missiles. So I was like, "Well, how about we take two of those, minus the nuke, and add an additional upper stage for Mars?" But it was kind of trippy being in Moscow in, what, 2001, negotiating with the Russian military to buy ICBMs. That's crazy.
But they also kept raising the price on me, which is literally the opposite of how a negotiation should go. So I was like, "Man, these things are getting really expensive." And then I came to realize that the problem was not that there was insufficient will to go to Mars, but that there was no way to do so without breaking the budget, even the NASA budget. So that's where I decided to start SpaceX, to advance rocket technology to the point where we could send people to Mars. That was in 2002.
So you didn't start out wanting to start a business. You started something that was interesting to you, that you thought humanity needed, and then, like a cat pulling on a string, the ball unravels and it turns out this could be a very profitable business.
I mean, it is now, but there had been no prior example of a rocket startup really succeeding. There had been various attempts at commercial rocket companies, and they all failed. So again, starting SpaceX was really from the standpoint of: I think there's less than a 10% chance of being successful, maybe 1%, I don't know. But if a startup doesn't do something to advance rocket technology, it's definitely not coming from the big defense contractors, because they're just impedance matched to the government, and the government just wants to do very conventional things.
So it's either coming from a startup or it's not happening at all, and a small chance of success is better than no chance of success. So I started SpaceX in mid-2002, expecting to fail; I said there was probably a 90% chance of failing. Even when recruiting people, I didn't try to make out otherwise. I said, "We're probably going to die, but there's a 12% chance we might not die, and this is the only way to get people to Mars and advance the state of the art."
And then I ended up being chief engineer of the rocket, not because I wanted to, but because I couldn't hire anyone who was good. None of the good chief engineers would join, because they thought it was too risky: "You're going to die." So I ended up being chief engineer of the rocket, and the first three flights did fail, so it was a bit of a learning exercise. Fortunately the fourth one worked.
But if the fourth one hadn't worked, I had no money left, and it would have been curtains. It was a pretty close thing. If the fourth launch of Falcon 1 had not worked, it would have been curtains, and we would have joined the graveyard of prior rocket startups. So my estimate of success was not far off; we made it by the skin of our teeth.
And Tesla was happening simultaneously. 2008 was a rough year, because by mid-2008, call it summer 2008, the third launch of SpaceX had failed, our third failure in a row, the Tesla financing round had failed, and Tesla was going bankrupt fast. It was like, "Man, this is grim. This is going to be a cautionary tale, an exercise in hubris."
Probably throughout that period a lot of people were saying, you know, Elon is a software guy. Why is he working on hardware?
Yeah, why would he choose to work on this? Right, 100%. You can look at the press of that time, it's still online, you can just search it, and they kept calling me "internet guy." So: internet guy, aka fool, is attempting to build a rocket company. We got ridiculed quite a lot.
And it does sound pretty absurd. "Internet guy starts rocket company" doesn't sound like a recipe for success, frankly. So I don't hold it against them. I was like, "Admittedly, it does sound improbable, and I agree that it's improbable." But fortunately the fourth launch worked, and NASA awarded us a contract to resupply the space station. I think that was maybe December 22nd; it was right before Christmas.
Because even the fourth launch working wasn't enough to succeed; we also needed a big contract to keep us alive. So I got that call from the NASA team, and they said, "We're awarding you one of the contracts to resupply the space station." I literally blurted out, "I love you guys," which is not normally what they hear; it's usually pretty sober. But I was like, "Man, this is a company saver."
And then we closed the Tesla financing round in the last hour of the last day that it was possible, 6:00 p.m., December 24th, 2008. We would have bounced payroll two days after Christmas if that round hadn't closed. So that was a nerve-wracking end of 2008, that's for sure.
From your PayPal and Zip2 experience, jumping into these hardcore hardware startups, it feels like one of the through lines was being able to find and eventually attract the smartest possible people in those particular fields. Most of the people in this room haven't even managed a single person yet; they're just starting their careers. What would you tell the Elon who hadn't had to do that yet?
I generally think: try to be as useful as possible. It may sound trite, but it's so hard to be useful, especially to be useful to a lot of people. Say the area under the curve of total utility is how useful you have been to your fellow human beings, times how many people. It's almost like the physics definition of true work. It's incredibly difficult to do that, and I think if you aspire to do true work, your probability of success is much higher. Don't aspire to glory; aspire to work.
How can you tell that it's true work? Is it external, what happens with other people, what the product does for people? And when you're looking for people to come work for you, what's the salient thing you look for? That's a different question, I guess.
I mean, in terms of your end product, you just have to ask: if this thing is successful, how useful will it be to how many people? That's what I mean. And then, whether you're CEO or in any other role in a startup, you do whatever it takes to succeed, and you just always be smashing your ego. Internalize responsibility. A major failure mode is when the ego-to-ability ratio is greater than one. If your ego-to-ability ratio gets too high, you're going to break the feedback loop to reality; in AI terms, you'll break your RL loop.
So you want a strong RL loop, which means internalizing responsibility and minimizing ego, and you do whatever the task is, whether it's grand or humble. That's partly why I prefer the term engineering over research, and why I don't actually want to call xAI a lab; I just want it to be a company. Whatever the simplest, most straightforward, ideally lowest-ego terms are, those are generally a good way to go. You want to close the loop on reality hard. That's a super big deal.
I think everyone in this room really looks up to you as a paragon of first-principles thinking. How do you actually determine your reality? Because that seems like a pretty big part of it. People who have never made anything, non-engineers, sometimes journalists, will criticize you, but then clearly you have another set of people, builders with very high area under the curve, who are in your circle. How should people approach that? What has worked for you, and what would you pass on, say, to X, to your children? What do you tell them when you're like, "You need to make your way in this world; here's how to construct a reality that is predictive, from first principles"?
Well, the tools of physics are incredibly helpful for understanding and making progress in any field. First principles just means you break things down to the fundamental axiomatic elements that are most likely to be true, and then reason up from there as cogently as possible, as opposed to reasoning by analogy or metaphor. Then there are simple things like thinking in the limit: if you extrapolate, minimize this thing or maximize that thing, thinking in the limit is very, very helpful. I use all the tools of physics. They apply to any field. It's like a superpower, actually.
Take rockets, for example. You can ask, "How much should a rocket cost?" The typical approach people take is to look historically at what rockets have cost and assume any new rocket must be somewhat similar in cost to prior rockets. A first-principles approach is to look at the materials the rocket is made of.
So if that's aluminum, copper, carbon fiber, steel, whatever the case may be: how much does the rocket weigh, what are its constituent elements, how much do they weigh, and what is the material price per kilogram of those constituent elements? That sets the actual floor on what a rocket can cost; it can asymptotically approach the cost of the raw materials. And then you realize, "Oh, actually, the raw materials of a rocket are only maybe 1 or 2% of the historical cost of a rocket."
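As an illustration of the kind of estimate he is describing, here is a minimal sketch of the raw-material cost floor. The material masses, per-kilogram prices, and historical price below are placeholder assumptions for illustration only, not figures from the talk.

```python
# First-principles cost floor for a rocket: sum the raw-material costs.
# All masses and prices below are illustrative placeholders, not real figures.

material_mass_kg = {        # assumed constituent masses of the airframe and engines
    "aluminum_alloy": 20_000,
    "stainless_steel": 5_000,
    "carbon_fiber": 3_000,
    "copper": 1_000,
}

price_per_kg_usd = {        # assumed commodity prices per kilogram
    "aluminum_alloy": 3.0,
    "stainless_steel": 2.5,
    "carbon_fiber": 25.0,
    "copper": 9.0,
}

# Raw-material floor: the cost a finished rocket can only asymptotically approach.
raw_material_cost = sum(
    mass * price_per_kg_usd[name] for name, mass in material_mass_kg.items()
)

historical_rocket_cost = 10_000_000  # assumed historical price of a comparable launcher

print(f"Raw-material floor: ${raw_material_cost:,.0f}")
print(f"Materials as share of historical cost: "
      f"{raw_material_cost / historical_rocket_cost:.1%}")
```

With these placeholder numbers the materials come out to roughly 1 to 2% of the historical price, which is the gap between the floor and actual cost that the first-principles analysis exposes.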
So the manufacturing must necessarily be very inefficient if the raw material cost is only 1 or 2%. That would be a first-principles analysis of the potential for cost optimization of a rocket, and that's before you even get to reusability. To give an AI example: last year at xAI, when we were trying to build a training supercluster, we went to the various suppliers, this was the beginning of last year, and said we needed 100,000 H100s to be able to train coherently. Their estimates for how long it would take to complete that were 18 to 24 months.
It's like, "Well, we need to get that done in 6 months." So then, um, or we won't be competitive. So, so then, uh, if you break that down, what, well, what are the things you need? Well, you need a building, you need power, you need cooling.
We didn't have enough time to build a building from scratch, so we had to find an existing one. We found a factory in Memphis that was no longer in use; it used to build Electrolux products. But the input power was 15 megawatts, and we needed 150 megawatts. So we rented generators and put them on one side of the building, and then we had to have cooling.
So we rented about a quarter of the mobile cooling capacity of the US and put the chillers on the other side of the building. That didn't fully solve the problem, because the power variations during training are very big: power can drop by 50% in 100 milliseconds, which the generators can't keep up with. So we added Tesla Megapacks and modified the software in the Megapacks to smooth out the power variation during the training run. And then there were a bunch of networking challenges, because the networking needed to make 100,000 GPUs train coherently is very, very challenging.
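To make those power numbers concrete, here is a rough back-of-envelope sketch. The roughly 700 W per H100 and the facility overhead factor are outside assumptions, and the smoothing loop is a toy model, not the actual Megapack firmware behavior.

```python
# Back-of-envelope power budget for a 100,000-GPU cluster, plus a toy model of
# battery smoothing for fast load swings. Per-GPU power and the overhead factor
# are assumptions for illustration; the real system details were not given.

NUM_GPUS = 100_000
GPU_POWER_W = 700           # assumed per-H100 draw at full training load
OVERHEAD = 2.0              # assumed factor for CPUs, networking, cooling, losses

peak_load_mw = NUM_GPUS * GPU_POWER_W * OVERHEAD / 1e6
print(f"Approximate facility load: {peak_load_mw:.0f} MW "
      f"(vs. the 15 MW the building originally had)")

# Toy smoothing model: the training load drops 50% for 100 ms. The generators
# hold their output roughly constant over that timescale, so a battery absorbs
# the surplus (and would discharge if the load spiked above the generators).
dt_s = 0.01                                      # 10 ms simulation step
load_mw = [peak_load_mw] * 20 + [peak_load_mw * 0.5] * 10 + [peak_load_mw] * 20
generator_mw = peak_load_mw                      # slow source: constant over 0.5 s

battery_mwh = 0.0
for load in load_mw:
    battery_power = load - generator_mw          # negative => battery absorbs surplus
    battery_mwh += -battery_power * dt_s / 3600  # energy the battery takes in

print(f"Energy buffered during the 100 ms dip: {battery_mwh * 1000:.2f} kWh")
```

Under these assumptions the cluster lands near the 150 MW figure from the talk, and a 100 ms half-load dip only requires buffering a couple of kilowatt-hours; the hard part is responding within milliseconds, which is what the batteries provide and the generators cannot.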
It sounds like, for almost any of those things you mentioned, I could imagine someone telling you very directly, "No, you can't have that. You can't have that power." And it sounds like one of the salient pieces of first-principles thinking is to ask why, figure it out, challenge the person across the table, and if you don't get an answer you feel good about, not let that no stand. Hardware seems to uniquely need this. In software we have lots of fluff; it's like, "We can add more CPUs to that, it'll be fine." But in hardware, it's just not going to work.
I think these general principles of first-principles thinking apply to software and hardware, to anything really. I'm just using a hardware example of how we were told something was impossible, but once we broke it down into the constituent elements, we need a building, we need power, we need cooling, we need power smoothing, we could solve those constituent elements. And then we ran the networking operation to do all the cabling in four shifts, 24/7, and I was sleeping in the data center and doing cabling myself.
There were a lot of other issues to solve. Nobody had done a training run with 100,000 H100s training coherently last year; maybe it's been done this year, I don't know. Then we ended up doubling that to 200,000. So now we've got 150,000 H100s, 50,000 H200s, and 30,000 GB200s in the Memphis training center, and we're about to bring 110,000 GB200s online at a second data center, also in the Memphis area.
Is it your view that pre-training is still working, that the scaling laws still hold, and that whoever wins this race will have basically the biggest, smartest possible model that you could distill?
Well, there are various other elements to competitiveness for large AI. For sure the talent of the people matters. The scale of the hardware matters, and how well you're able to bring that hardware to bear; you can't just order a whole bunch of GPUs and plug them in.
You've got to get a lot of GPUs and have them train coherently and stably. Then it's what unique access to data you have. I guess distribution matters to some degree as well: how do people get exposed to your AI? Those are the critical factors for a large foundation model to be competitive. And as many have said, I think my friend Ilya Sutskever said it, we've kind of run out of human-generated pre-training data.
You run out of tokens pretty fast, certainly of high-quality tokens. So then you essentially need to create synthetic data, and be able to accurately judge the synthetic data you're creating, to verify whether it's real synthetic data or a hallucination that doesn't actually match reality. Achieving grounding in reality is tricky, but we're at the stage where more effort is going into synthetic data. Right now we're training Grok 3.5, which has a heavy focus on reasoning.
Going back to your physics point, what I've heard about reasoning is that hard science, particularly physics textbooks, is very useful for it, whereas researchers have told me that social science is totally useless for reasoning.
Yes, that's probably true. Something that's going to be very important in the future is combining deep AI, in the data center or supercluster, with robotics.
So, things like the Optimus humanoid robot. Optimus is awesome. There are going to be so many humanoid robots, and robots of all sizes and shapes, but my prediction is that there will be far more humanoid robots than all other robots combined, maybe by an order of magnitude. A big difference.
Is it true that you're planning a robot army of sorts?
Whether we do it, or whether Tesla does it; Tesla works closely with xAI. You've seen how many humanoid robot startups there are. I think Jensen Huang was on stage with a massive number of robots from different companies; there were like a dozen different humanoid robots. I guess part of what I've been fighting, and maybe what has slowed me down somewhat, is that I don't want to make Terminator real, you know?
So, at least until recent years, I've been dragging my feet on AI and humanoid robotics. Then I came to the realization that it's happening whether I do it or not. You've really got two choices: you can either be a spectator or a participant. And I'd rather be a participant than a spectator. So now it's pedal to the metal on humanoid robots and digital superintelligence.
So there's a third thing that everyone has heard you talk a lot about, that I'm a big fan of: becoming a multiplanetary species. Where does this fit? This isn't just a 10- or 20-year thing; maybe it's a hundred-year, many-generations-of-humanity kind of thing. How do you think about it? There's AI, there's embodied robotics, and then there's being a multiplanetary species. Does everything feed into that last point? What are you driven by right now for the next 10, 20, and 100 years?
Jeez, 100 years, man. I hope civilization is around in 100 years. If it is, it's going to look very different from civilization today. I'd predict that there will be at least five times as many humanoid robots as there are humans, maybe ten times. And one way to look at the progress of civilization is percentage completion of the Kardashev scale.
So Kardashev scale one means you've harnessed all the energy of a planet. In my opinion, we've only harnessed maybe 1 or 2% of Earth's energy, so we've got a long way to go to Kardashev scale one. Then Kardashev two is harnessing all the energy of a sun, which would be, I don't know, a billion times more energy than Earth, maybe closer to a trillion.
And Kardashev three would be all the energy of a galaxy; we're pretty far from that. So we're at the very, very early stage of the intelligence big bang. In terms of being multi-planetary, I think we'll have enough mass transferred to Mars within roughly 30 years to make Mars self-sustaining, such that Mars can continue to grow and prosper even if the resupply ships from Earth stop coming.
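As a back-of-envelope check on the Kardashev ratios mentioned above, here is a small sketch using standard textbook constants rather than numbers from the talk; it suggests the sun-to-planet jump is roughly a factor of two billion, and the galaxy-to-sun jump is of order the number of stars in the galaxy.

```python
# Rough Kardashev-scale ratios from standard astronomical constants.
import math

SOLAR_LUMINOSITY_W = 3.8e26     # total power output of the Sun
SOLAR_CONSTANT_W_M2 = 1361.0    # sunlight flux at Earth's orbital distance
EARTH_RADIUS_M = 6.371e6
STARS_IN_GALAXY = 1e11          # order-of-magnitude star count for the Milky Way

# Kardashev I reference point: all sunlight intercepted by Earth's disk.
earth_intercepted_w = SOLAR_CONSTANT_W_M2 * math.pi * EARTH_RADIUS_M**2

print(f"Earth-intercepted sunlight: {earth_intercepted_w:.2e} W")
print(f"Kardashev II / I ratio: {SOLAR_LUMINOSITY_W / earth_intercepted_w:.1e}")
print(f"Kardashev III / II ratio (order of magnitude): {STARS_IN_GALAXY:.0e}")
```

Using sunlight intercepted by Earth as the planetary budget, the Sun's total output is about 2 x 10^9 times larger, consistent with the "billion times" figure in the talk.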
And that greatly increases the probable lifespan of civilization, or consciousness, or intelligence, both biological and digital. So that's why I think it's important to become a multi-planet species. And I'm somewhat troubled by the Fermi paradox: why have we not seen any aliens? It could be because intelligence is incredibly rare.
Maybe we're the only ones in this galaxy, in which case intelligence and consciousness are like a tiny candle in a vast darkness, and we should do everything possible to ensure that the tiny candle does not go out. Being a multi-planet species, making consciousness multiplanetary, greatly improves the probable lifespan of civilization, and it's the next step before going to other star systems. Once you have at least two planets, you've got a forcing function for the improvement of space travel, and that ultimately is what will lead to consciousness expanding to the stars.
It could be that the Fermi paradox dictates that once you get to some level of technology, you destroy yourself. How do we save ourselves? What would you prescribe to a room full of engineers? What can we do to prevent that from happening?
Yeah, how do we avoid the great filters? One of the great filters would obviously be global thermonuclear war, so we should try to avoid that. And I guess building benign AI and robots: AI that loves humanity, and robots that are helpful. Something I think is extremely important in building AI is a very rigorous adherence to truth, even if that truth is politically incorrect. My intuition for what could make AI very dangerous is if you force AI to believe things that are not true.
How do you think about the argument of open for safety versus closed for competitive edge? I think the great thing is you have a competitive model, and many other people also have competitive models. In that sense we've avoided maybe the worst timeline I'd be worried about, where there's a fast takeoff and it's only in one person's hands, which might collapse a lot of things. Now we have choice, which is great. How do you think about this?
Yeah, I do think there will be several deep intelligences, maybe at least five, maybe as many as ten. I'm not sure there are going to be hundreds; it's probably closer to ten or so, of which maybe four will be in the US. So I don't think any one AI is going to have a runaway capability, but yeah, several deep intelligences.
What will these deep intelligences actually be doing? Will it be scientific research, or trying to hack each other?
Probably all of the above. I mean, hopefully they will discover new physics, and they're definitely going to invent new technologies.
I think we're quite close to digital superintelligence. It may happen this year, and if it doesn't happen this year, next year for sure: a digital superintelligence, defined as smarter than any human at anything.
So how do we direct that toward super abundance? We could have robotic labor, cheap energy, intelligence on demand. Is that the white pill? Where do you sit on the spectrum, and are there tangible things you would encourage everyone here to work on to make that white pill a reality?
I think it most likely will be a good outcome. I guess I'd sort of agree with Geoff Hinton that maybe there's a 10 to 20% chance of annihilation. But look on the bright side: that's an 80 to 90% probability of a great outcome. So yeah, I can't emphasize this enough: a rigorous adherence to truth is the most important thing for AI safety. And obviously empathy for humanity and life as we know it.
We haven't talked about Neuralink at all yet, but I'm curious: you're working on closing the input and output gap between humans and machines. How critical is that to AGI and ASI? And once that link is made, can we not only read but also write?
Neuralink is not necessary for digital superintelligence; that'll happen before Neuralink is at scale. But what Neuralink can effectively do is solve the input/output bandwidth constraints. Our output bandwidth in particular is very low: the sustained output of a human over the course of a day is less than one bit per second.
There are 86,400 seconds in a day, and it's extremely rare for a human to output more than that number of symbols per day, certainly for several days in a row. So with a Neuralink interface, you can massively increase your output bandwidth and your input bandwidth.
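To put that claim in rough numbers, here is a small sketch; the words-per-day and bits-per-word figures are illustrative assumptions, not measurements from the talk.

```python
# Rough estimate of sustained human output bandwidth over a day.
# The word count and information-per-word figures are illustrative assumptions.

SECONDS_PER_DAY = 86_400

words_output_per_day = 5_000   # assumed: typed/spoken output on a busy day
bits_per_word = 10             # assumed: rough information content per word

bits_per_day = words_output_per_day * bits_per_word
sustained_bps = bits_per_day / SECONDS_PER_DAY

print(f"Output: {bits_per_day:,} bits over {SECONDS_PER_DAY:,} s "
      f"= {sustained_bps:.2f} bits/s sustained")
```

Under these assumptions the sustained rate comes out well under one bit per second, which is the regime the quote describes.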
Input being writing to you: you have to do write operations to the brain. We now have five humans who have received the read implant, where it's reading signals. These are people with ALS who are tetraplegic, but they can now communicate at a bandwidth similar to a human with a fully functioning body, and control their computer and phone, which is pretty cool. And I think in the next 6 to 12 months we'll be doing our first implants for vision, where even if somebody is completely blind, we can write directly to the visual cortex.
We've had that working in monkeys; I think one of our monkeys has now had a visual implant for three years. At first it will be fairly low resolution, but long term you'd have very high resolution and be able to see multi-spectral wavelengths, so you could see in infrared, ultraviolet, radar: a superpower situation. At some point the cybernetic implants wouldn't simply be correcting things that went wrong, but dramatically augmenting human capabilities, augmenting intelligence and senses and bandwidth. That's going to happen at some point, but digital superintelligence will happen well before that. At least if we have a Neuralink, we'll be able to appreciate the AI better.
I guess one of the limiting reagents across all of your efforts, in all of these different domains, is access to the smartest possible people.
Yes.
But simultaneously, the rocks can now talk and reason; they're maybe 130 IQ today, and they're probably going to be superintelligent soon. How do you reconcile those two things? What's going to happen in 5 or 10 years, and what should the people in this room do to make sure they're the ones creating, instead of maybe ending up below the API line?
Well, they call it the singularity for a reason: we don't know what's going to happen in the not-that-far future. The percentage of intelligence that is human will be quite small. At some point, the collective sum of human intelligence will be less than 1% of all intelligence. And if things get to Kardashev level two, we're talking about collective human intelligence, even assuming a significant increase in human population and massive intelligence augmentation, where everyone has an IQ of a thousand, being probably a billionth that of digital intelligence.
So humanity is the biological bootloader for digital superintelligence, I guess. Just to end off: were we a good bootloader? Where do we go from here? All of this is pretty wild sci-fi stuff that could also be built by the people in this room. Do you have a closing thought for the smartest technical people of this generation? What should they be working on? What should they be thinking about tonight as they go to dinner?
Well, as I started off with, I think if you're doing something useful, that's great. Just try to be as useful as possible to your fellow human beings, and then you're doing something good. I keep harping on this: focus on super-truthful AI; that's the most important thing for AI safety. And obviously, if anyone's interested in working at xAI, please let us know.
We're aiming to make Grok the maximally truth-seeking AI, and I think that's a very important thing. Hopefully we can understand the nature of the universe. That's really, I guess, what AI can hopefully tell us.
Maybe AI can tell us where the aliens are, how the universe really started, how it will end, what the questions are that we don't yet know we should ask, and whether we're in a simulation, or what level of simulation we're in. Well, I think we're going to find out.
Or whether we're NPCs. Elon, thank you so much for joining us. Everyone, please give it up for Elon Musk.