WTF is happening at xAI | Sulaiman Ghori
Disclaimer: The transcript on this page is for the YouTube video titled "WTF is happening at xAI | Sulaiman Ghori" from the channel "Relentless". All rights to the original content belong to their respective owners. This transcript is provided for educational, research, and informational purposes only. This website is not affiliated with or endorsed by the original content creators or platforms.
Watch the original video here: https://www.youtube.com/watch?v=8jN60eJr4Ps
Tyler took this bet with Elon, like, "Get a Cybertruck tonight if you can get a training run on these GPUs in 24 hours." And we were training that night.
Did he get the Cybertruck?
Yeah, he got the Cybertruck. My first day, they just gave me a laptop and a badge and I was like, "Okay, now what?" I don't even have a team. I've not been told what to do. Grok was spinning up at the time, our integrations with X. They're like, "Can you help?" And I was like, "Yes."
What's the most fun thing about working there?
No one tells me no. If I have a good idea, I can usually go and implement it that same day, show it to Elon or whoever, and get an answer. We did the math recently—I think we're at about $2.5 million per commit to the main repo, and I did five today.
So you added like 12 and a half million of value.
The levers are extremely strong.
Today I have the pleasure of sitting down with Sulaiman Ghori, one of the engineers at xAI. I've been kind of fascinated by xAI since 2023, when Elon first started it. I think it's one of the fastest growing companies of all time. Can you just talk about what the [__] is happening at xAI?
Yeah. Um, we don't really have due dates. It's always yesterday. There are no blockers for anything—at least nothing artificial. The whole Elon thing about going down to the root, the fundamental, whatever the physical thing is—we get there as quick as we can. Which is funny in software; physics isn't really something you think about much. But we do try quite a bit, and we're not really fully a software company given all the infrastructure buildout.
It's kind of hardware at this point.
Yeah. It's like hardware constrained. Probably our biggest edge is the hardware because nobody else is even close on the deployment there. Although the talent density on software is like incredible. I've never been anywhere like that. It's really cool.
For Elon, he is very good at figuring out what the bottlenecks will be even a couple months or even years in the future and then trying to work backwards from that and make sure that he's in a really good position. How does that work day-to-day with just normal people at xAI and adopting that kind of mental framework?
Usually when we spin something up new very quickly, either one of us or he comes up with this metric that's usually very core to either the financial or the physical return, or both sometimes. And so everything is just focused on driving that metric. There's never like a fundamental limitation to it—or like whatever the fundamental limitation is, it better be rooted deep down and not something artificial.
And there are a lot of perceived limitations, especially in the software world, especially coming out of the last 10 years of web dev and all these kinds of things. People just assume or accept certain limitations, especially when it comes to speed and latency, and a lot of them aren't true. You can get rid of a lot of overhead. There's a lot of stupid stuff in the stack, and if you can knock out a lot of that, you can usually 2 to 8x almost anything—at least anything invented relatively recently. Some stuff not so much, but yeah.
When was the last time that you experienced this where there's some conventional wisdom that says this is the timeline and then you guys just were able to completely shred that?
Um, most recently it's our model iterations on Macrohard. We're working on some novel architectures—multiple at the same time, actually—and we're coming out with new iterations daily, sometimes multiple times a day, in some cases starting from pre-training. Which is not something you ordinarily see.
But it comes from, well, A, we have a pretty great supercompute team, and they've knocked out a lot of the typical barriers to training a lot of this stuff, even with how variable our hardware is. Within a day of standing up a rack you can usually be training—sometimes the same day, even within a few hours in some cases.
And this is like not normal, like normally the timelines are like days or weeks.
It takes a lot. Well, in most cases at least—for the last 10 years you'd abstract this away and let Amazon or Google take care of it. And so whatever their capacity is, is what your capacity is. But you can't have that be the case and win in AI now. So the only solution is to die or build it yourself.
Can you tell me about what your experience was like joining, why you joined, and then kind of what the onboarding process was for the first like couple weeks?
Yeah. So, I was working on my own startup when I moved to the Bay. And actually during that time, Greg Yang, one of the co-founders of xAI, had reached out. He's great at recruiting as it turns out.
What did he say?
Uh, so I got an email and I thought it was spam, because I was getting a lot of these emails to founders at the time—like, "Hey, you want to chat," or "I like what you're doing, you want to chat," whatever. I was going to mark it as spam and delete it. And then I saw the domain x.ai. I was like, oh, wait a second, I know these guys. I think they were probably eight months in at that point. And so I was like, okay, yeah, let's chat.
And so we chatted a bunch of times. Then I wanted an acqui-hire, but I think we were too early at the time, and that company kind of went away—mostly because it was fairly obvious that you can't build Macrohard with like a million dollars. But the idea was sound. So I spent the next six, seven months wasting all my money building aerospace projects and working on an asteroid-mining concept. That also I realized probably wouldn't work, but it was worth a try.
And so I emailed Greg again, like, "Hey, you want to chat again?" He's like, "Yeah, sure, you want to interview tomorrow?" And I apparently did well, and I moved on Monday and started then. It was really great. No one told me what to do. My first day they just gave me a laptop and a badge, and I was like, "Okay, now what?" So I went to go find Greg, because I didn't even have a team—I hadn't been told what to do. Greg just brought me on because I think he liked what I was doing previously, and it was related to what the long term was for Macrohard—which wasn't really even a project at the time.
And I actually ended up working on Grok, which was spinning up at the time—our integrations with X. They're like, "Can you help?" and I was like, "Yes, I can help." And so my first week was working with the one guy there. I found out very quickly that for everything we'd built, I could stand up from my desk—I didn't even have a desk assigned to me, I just sat at the desks of random people who weren't there that day—and point to whoever at xAI built that thing. It was very, very cool.
And there were like almost no people working there at this point. Just like a couple hundred, right?
Uh, yeah, about a hundred or so on the engineering staff. And I don't know what the infra buildout team looked like at that time. It's kind of hard to tell, because some people move up the ladder from the actual building and construction crew onto our payroll. But it was pretty small at the time—like an order of magnitude smaller than the other labs. And we had just done Grok 3, which was pretty cool.
One of the things that I kind of love is how fast xAI went from being founded... I remember Elon initially saying they weren't even sure it could be a success, with other labs having a multi-year head start. And then you guys got the first Colossus data center done in like 122 days, which was just unheard of—Jensen's out here singing the praises of xAI and Elon. What kind of culture did that allow to be formed?
It definitely enabled us on model and product to kind of assume we would have the resources to do what we needed to do. And that's definitely the case. Like we're not super duper resource constrained. Like we've still found a way to push up against that wall, but that's just because we have 20 different things going at the same time. Like many more things than that. There's an absurd amount of runs and training and all that stuff going on at the same time in parallel, usually by like a handful of people. Which is how we're able to iterate very quickly on model and product side.
And utilization has definitely been very high. The speed definitely allows us to, I guess, think more long term. So I think Grok 4—or 5, really—was already planned out and designed, in terms of size and what we expected, way early, like before I joined. I joined around Grok 3.
So it's like thinking at least a year in advance?
You can, yeah, you can think much more in advance and assume that those estimates will be hit, just because everyone's like pretty great and reliable. Which frees you up a lot in terms of what your limitations are I guess. So for us for example, the assumed minimum latency was about three times higher than it actually needed to be and the buildout allowed for that basically.
What do you mean by that?
So one of the novel architectures we're working on is not really possible unless you scale up your experiment rate, because it's not building on any existing body of work. You need a new pre-training body and you also need a new data set, but that's not really constrained by the resources—the physical infrastructure resources, mostly. Although there's the Tesla computer thing, which maybe we'll get into, maybe not.
But um, so actually this one's public. So one thing that we're thinking about is: okay, we're building this human emulator with Macrohard. How do we deploy it? Because if we want to deploy 1 million human emulators, we need 1 million computers. How do we do that? And the answer showed up two days later in the form of a Tesla computer, because those things are actually very capital efficient as it turns out.
And we can run potentially our model and the like full computer that a human would otherwise work at on the Tesla computer for much cheaper than you would on a VM on AWS or Oracle or whatever, or even just buying hardware from Nvidia. That car computer is actually much more capital efficient and so it enables us to assume that we can deploy much, much faster at a much higher scale. And so we've adjusted our expectations for that basically.
Are you basically able to just bootstrap off of the car network?
So that's one of the potential solutions, basically. Yeah. So, okay, we want 1 million VMs. There are like 4 million Tesla cars in North America alone. Let's say two-thirds or half of them have Hardware 4, and somewhere between 70-80% of the time they're sitting there idle, probably charging. We can potentially just pay owners to lease time off their car and let us run a human emulator—Digital Optimus—right on it. They get their lease paid for, and we get a full human emulator we can put to work. And that requires no buildout at all—it's purely a software implementation.
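The back-of-envelope fleet math he walks through can be sketched like this (all figures are the speaker's rough estimates, taking the low end of each range—none of these are official numbers):

```python
# Rough fleet-capacity estimate from the speaker's numbers (not official figures).
fleet_na = 4_000_000   # Tesla cars in North America (speaker's estimate)
hw4_share = 0.5        # "two-thirds or half" have Hardware 4 -- take the low end
idle_share = 0.70      # idle "70-80% of the time" -- take the low end

# Expected number of cars available to host a human emulator at any moment
available = fleet_na * hw4_share * idle_share
print(f"{available:,.0f} idle Hardware 4 cars")  # → 1,400,000 idle Hardware 4 cars
```

Even with the most conservative ends of his ranges, the idle fleet comfortably exceeds the 1 million VM target he mentions.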
The asset is sitting there and you can just go and use it.
Yeah. Amazing.
For the human emulators in Macrohard, what is the purpose of scaling up you know millions of mini humans?
I mean, the basic concept is very simple, right? With Optimus, you're taking any physical task a human can do and letting a robot do it automatically, at a fraction of the cost, with 24/7 uptime. We're doing the same with anything a human does digitally. So anything where they input keyboard and mouse commands (which is usually what humans do), look back at a screen, and make decisions—we just emulate what the human is doing directly. So no adoption from any software is required at all. We can potentially deploy in any situation a human is currently in.
Interesting. What is that actually going to look like for rolling it out?
Um, I don't think we've detailed our plans publicly yet, specifically on how we'll roll out. It'll be slowly at first and then very quickly, basically. The difference for us—given that the infrastructure buildout has already happened, or we can go on the Tesla network, or we can actually build out our own data centers of Tesla computers—the difference in going from 1,000 human emulators to a million is actually not very big. It's not the biggest part of the challenge.
Elon, I know one of the things that he does best is he basically just goes from fire to fire on whatever the company is and just kind of puts it out and unfucks whatever problem exists. What has that been like? When have you seen some problem exist and just had it unfucked very rapidly due to this kind of process?
Um, definitely on the infra build out, this is the biggest. On model side, we've had hiccups but it's more or less been smooth. But on infra side, there's a lot of very specific operations that each of these basically ASICs—these GPUs—are built for. And when we roll out new products, like when we pick up new products from Nvidia or whoever, not everything works.
So in some of the meetings we had with him early last year, he would hear these issues, he would make a phone call, and the software team would deliver a patch the next day, and we would work side by side until it was resolved. And then we could run a model or a training run on the hardware very, very quickly, where otherwise it would have taken weeks of back and forth. So those kinds of blockers are usually resolved very quickly with one phone call, or just us bringing it up to him, or him offering. Frequently when a meeting is ending, or there's a lull in the conversation, he'll be like, "Okay, how can I help? How can I make this faster?" And someone will come up with an answer.
I know you guys are doing many different products in parallel and I get that it's kind of like you have to do that, but also it's sometimes in most organizations very difficult to stay focused on a single thing and like a single objective. How does that kind of work for just executing on multiple different fronts at the same time?
Very frequently—and this is increasing with scale—we don't have a full picture until the all-hands, or until we just chat with people about what everyone is doing and how far along everyone is on these different projects. For example, when we did our voice model and our voice deployment, a lot of the work for extremely low latency end-to-end—like the packets to be sent to the client—was already built out. And it was a matter of flipping the right switches and the right configs to cut our latency pretty significantly, like 2-3x end-to-end.
This is actually the case a lot of the time: there is a stupid thing that exists somewhere in the software or the hardware and someone has come up with a solution, and you find it when you go to look for it in our codebase somewhere, or you ask around and someone's like, "Oh yeah, this XYZ person has done this, you should talk to them," and they will hook you up.
There's not a lot of time spent syncing up with anyone or asking for permission or waiting for anyone at all. Like usually when you propose something, the answer is either "No, that's dumb" or "Why isn't it done already?" [Laughter] And then you go and do it and then it's done.
With Elon companies, you can kind of just ask for responsibility and then you basically just live by the sword, die by the sword. And if you get things done, then you can just ask for more responsibility and you can keep on doing that or you're just like out. What's been your experience like with that?
Very much so. Yeah, I've jumped around a lot of different projects, mostly just because someone asked for my help and I kept helping, and then I ended up owning some of the stack, or a lot of the stack. And this is the case for everyone—this is just how it is. If you have any particular experience, or can iterate on something very quickly within days, you own that component. There's no formal anything. I think officially on our HR software I'm on Voice and iOS or something, and our security software thinks I still work on our X integration.
Which never updated.
Yeah. No one ever updates this stuff, it's kind of ridiculous.
And has your journey at the company kind of been: you show up, there's not exactly a clear direction of what you're going to work on, and then you just start working on stuff and then you just kind of like hop from project to project by whoever asks for your help?
Yeah, there's quite a bit of overlap and flowing. So after onboarding I'm usually on two or three projects at once. Whichever one is most pressing, or that I can help the most on, ends up taking the majority of my time, and then that kind of overlaps and flows in like a waterfall way.
What's been the journey from the starting to now? Like what projects have you worked on?
Yeah. So specifically, I first worked on Grok and our integration there, and I worked with our backend team a bit on reliability and scaling up, because we were scaling up a lot at that time. Then after that I took on solo-building our desktop suite and took that to internal completion. And then I got asked for help on our Imagine rollout and iOS, which—yeah, our iOS team is small for how many people use it. Like, it's ridiculous. You won't guess the number.
Like five people?
It was three. And I was the third person at the time when we were rolling that out. It was ridiculous and everyone's really, really good. Yeah, this is the first place where I've had to work very hard to keep up really.
With like the speed and the talent. What was the first experience that you had where you thought to yourself like you're actually being kind of used to your full, you know, potential?
I think the Imagine rollout was definitely a really good push, because we had this 24-hour iteration cycle. We would push out a build every night, and in the morning we would have all the feedback on whatever we were doing. We would immediately knock out all the bugs and implement the new stuff people were asking for—whatever the model side had come up with, we implemented that too. It was a very, very fast cycle, and it was, I think, the longest continuous stretch of me being in the office every day.
What was that like at the time?
It was like two or three months.
Two or three months? Yeah. Okay.
Um yeah, like there weren't weekends for a while, which was good to know that I could do that and I was pretty happy doing that. And after that I got pulled onto Macrohard product which was just one other person at the time. So it was the two of us for a while and I've been on that since that project kickoff basically.
I don't know how much you know about this, but the Colossus build and all the ridiculous stuff that the early xAI team had to do to turn on Colossus and like get power and all the necessary inputs to making that work... And even today, I think like it's just bottlenecks across the entire thing. You just want more chips and GPUs and all the stuff working. What was that like?
There's a lot of war stories and a lot of bets. So I think Tyler took this bet with Elon... We were setting up new racks—I forget which GPUs we were rolling out at that time—we took a bet. Elon's like, "Okay, you get a Cybertruck tonight if you can get a training run on these GPUs in 24 hours." And we were training that night.
Did he get the Cybertruck?
Yeah, he got the Cybertruck. I can see it from the cafeteria window at lunch. Yeah, he's cool.
Um, for power we actually have to collaborate very tightly with the municipal and state power companies, because when load goes high on their end, we have to shut off and go fully onto the like 80 mobile generators we brought in on trucks, just so we don't impact power anywhere. And we have to do that seamlessly, without interrupting anyone's extremely volatile training runs on extremely volatile GPUs and hardware that scales up and down by megawatts in milliseconds. It's a lot.
Is that also part of the logic of basically putting massive battery packs right next to the data centers? Cause then you can kind of go up and down much faster.
Batteries can scale up and down and balance that load a lot faster. With a generator, you're literally asking a physical thing—a spinning physical thing—to speed up or slow down, which is obviously going to take a certain amount of time. The batteries can react to the load much, much faster. From the physical standpoint, I think it goes: the local capacitors, the data-hall-side capacitors, the batteries, then the generators, and then the public municipal grid. Although we might have changed that infrastructure at this point—things change very quickly, especially on the cooling side.
Do you have any other really good like war stories that are just like things that shouldn't have been possible that became possible?
Uh, so the lease for the land itself was actually technically temporary. It was the fastest way to get the permitting through and actually start building things. I assume it'll be made permanent at some point, but yeah, I think it's technically a very short-term lease at the moment for all the data centers. Fastest way to get things done.
And how do they do that?
Um, I think there's basically a special exception within the local and state government that says, "Okay, if you want to just modify this ground temporarily..." I think it's for like carnivals and stuff, you can...
xAI is actually just a carnival company currently. [Laughter]
And so that was the way to get done quickly. I mean it was done, yeah, 122 days.
For like internal planning, I know things are just going to keep on scaling up like crazy and Elon's talked about energy being the biggest bottleneck and then just being able to get chips. How do you guys plan when it's very difficult to predict 12 to 24 months in the future exactly what projects you're going to be working on or what their resource requirements are going to be?
We try very hard to work backwards from like what's the highest leveraged thing we can be doing and then we determine the physical requirements later. So like if we want to get to 10 or a hundred billion in revenue by this date, what are the highest leverage things we can do from an economic perspective? How can we actually build systems to do that? And then what does it take on the physical and software side to roll that out and get it done? Just roll backwards the whole way. So we don't usually start with the physical requirement. That's usually actually at the end.
Is there like a SpaceX-esque algorithm for making things happen?
As in like the usual "Delete"?
Yeah.
Yeah. I mean that's the case all the time. Um and we do do the thing where yeah, we delete something and then add it back later.
What was the last time that you did that?
Today.
Today? [Laughter]
Um, today, yeah. So with Macrohard we deploy on a lot of physical hardware that changes, and the testing harness for that is hard. So we try to minimize how many special cases live downstream of where they need to be. For example, with display scaling, we need to be able to support displays that are, you know, 30 years old, as well as the latest 5K Apple displays, and that has to happen on the same stack.
Turns out not all the systems are happy with that at all times—you have to fiddle with the encoders at a certain level. Video encoders were the specific thing. I didn't know this, but as it turns out, there are limits to the maximum number of pixels certain encoders can take. So I had removed the special case for multiple encoders, and then we found a problem at 5K-plus resolution, so we added it back.
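As a concrete illustration of the kind of per-encoder ceiling he's describing (the transcript doesn't say which encoder xAI uses; the H.264 Level 5.2 figure below is a real limit from that spec, used here purely as an example, and `needs_tiling` is a hypothetical helper):

```python
# H.264 Level 5.2 caps a frame at 36,864 macroblocks of 16x16 pixels,
# i.e. 9,437,184 luma samples -- one example of a hard per-encoder limit.
H264_L52_MAX_PIXELS = 36_864 * 16 * 16

def needs_tiling(width: int, height: int,
                 max_pixels: int = H264_L52_MAX_PIXELS) -> bool:
    """Return True if a frame exceeds the encoder's pixel budget and must be
    split across multiple encoder sessions (the 'special case' added back)."""
    return width * height > max_pixels

print(needs_tiling(1920, 1080))   # 1080p fits in one encoder -> False
print(needs_tiling(5120, 2880))   # 5K exceeds the cap -> True
```

This is why a "remove the multi-encoder special case" simplification can look safe on ordinary displays and only break once a 5K-plus screen shows up.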
What are the most interesting things about xAI itself that you think would be really good stuff to talk about?
There's a lot of characters that work there, and we're also doing hiring in interesting ways, I guess. Things that I thought would be stupid get okayed and we just try them. Like, we'll do a hackathon, and if we get five people in as a result, it's worth it, because their expected return on the company's revenue or valuation is higher than the cost of running the hackathon for 500 people. The expected value is actually very high, which is funny. We did the math earlier this week: right now we're at about $2.5 million per commit to the main repo. And I did five today. So...
You added like 12 and a half million of value.
Exact—light days.
Light days.
Exactly. It was a good day. [Laughter] It's funny things like that. Like the levers are extremely strong. Like you can get a lot done with a lot less effort and time than you used to be able to, for sure, just because of who you work with, the internal tooling that we built up, and my boss.
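The commit arithmetic in this exchange is straightforward (the $2.5 million figure is the speaker's own back-of-envelope estimate, roughly company value divided by commits to the main repo—not an official metric):

```python
# Speaker's estimate: ~$2.5M of company value per commit to the main repo.
value_per_commit = 2_500_000
commits_today = 5
print(f"${value_per_commit * commits_today:,}")  # → $12,500,000
```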
What's like an example of the type of person that wants to work here? Cause I know when you're talking about it, you kind of show up and the first day you're just like, "I want to work on the weekends. I want to work on, you know, during the night," all this stuff. Go all in on this. What kind of special characters are working there?
People are definitely very enthusiastic when they come in. Like very, very enthusiastic. Just like mission oriented. There's I guess different types of ambition for sure. Some people want to move up like the leadership ladder and own more in terms of a managerial—like how many people report to me—sense. Some people want to own huge parts of the technical stack. So like right now we're doing a big rebuild of like our core production APIs. It's being done by one person with like 20 agents. And they're very good and they're capable of doing it and it's working well. So you can own huge chunks of the code base, no problem.
It's kind of like X, where after the acquisition they had much fewer people—except you just never had a lot of people in the first place, so there's one person owning a huge part of the product.
Absolutely.
For hiring, what unusual practices outside of just hackathons does xAI do?
Uh, so we're pushing very hard on Macrohard. For two or three weeks I was doing upwards of 20 interviews a week—some of them quick 15-minute chats, some of them full one-hour technicals. So a lot of my time is dedicated to bringing in new people, and a lot of people are very good, so it's actually very hard to judge them.
How do you...?
I have a very specific problem that I have solved. I'm not going to reveal it, because then people will use it. But I solved a very specific computer vision problem a few years ago for one of my startups, and I give people half an hour to try to implement the solution. It's actually very, very simple—a deceptively simple solution. People always overthink it. And this is something I like to index for, on my team especially: can you not overthink it and come up with a simple solution?
It helps a lot because we're deploying on such a wide variety of hardware, as a result of the wide variety of customers—literally 30, 40 years of different hardware, different operating systems, everything like that. You have to come up with simple solutions or you're going to have a 10-million-line code base next week. So this is very important, especially now that we rely more and more on agents and AI for writing code. An AI will happily churn out 200 lines when a 10-line solution will do, and probably do better. So you have to look for that. I want people—and I actively hire for people—who can find the 10-line solution first. We're totally fine with people using AI to code things. You should use that as a force multiplier, but for now we're smarter. We'll see next year.
What other like force multipliers do you kind of look for?
I like people who will challenge requirements and challenge me. I actually got this from Chester Zai—he told me this and I thought it was great. He usually throws an incorrect requirement, a misleading question, or an impossible line into his challenges for people when he's hiring—like coding challenges—and he expects people to come back and say, "Hey, this is wrong, this is not possible, you made a mistake." And if they don't, he doesn't hire them. Same thing for me—I picked that up. It's a great idea.
The pace is insanely fast and like you said you kind of have worked on a number of different things. How do you kind of come up to speed on something as quickly as possible when you're on a new task or project?
It depends on what thing it is. If there's a lot of code to read, read the code by hand. Um like GD—Go To Definition—over and over again and you'll find things out very quickly. Actually, it's not that hard. For most things, the implementation is like less lines of code than you would otherwise see, which is nice. Not all the time, but in most cases.
If it's something that's in very active development, this is not the case. There's going to be 20 different versions of it going at the same time, and it's not obvious what is the current path. So, you just got to talk to people, and people are very open. Like, this is actually one of the things I was pleasantly surprised by when I joined. I thought people would be super smart and stuck up, but no, people are just super smart and very nice and helpful. Like everyone's on the same team, everyone's rooting for each other, people are willing to help you out and answer your questions.
Which is good, because we don't write a lot of docs—we do things too fast to write docs, really. Um, actually, we're trying to figure out some systems on my team to automatically generate docs as we build stuff. And it's cool that we have unlimited access to a very smart AI in Grok, because then we can try a bunch of stupid things and see if they work—which otherwise, at a startup, would cost you maybe $100K or a million dollars in credits or whatever. We do it for free. So you can fail on a lot more things than you otherwise would, and as a result more experiments are tried and more succeed.
On the experimentation side, how are you guys kind of trying to maximize for the number of experiments or good shots on goal that you can do?
There's often a time constraint. We will frequently launch multiple experiments, especially on the model side, at the same time. And in some cases it's not even because of a time constraint in the sense of "I need to try X amount of things in Y time." It's "In two weeks this prerequisite will be ready—either in the hardware or in the training data or something—but in the meantime I need to deploy something today. What can I do?" So you run two or three experiments, you find out what you can deploy today to bring in revenue or a customer result or whatever it is, and then in two weeks you switch over. That's something we do all the time, especially at Macrohard.
Have you seen anything where a timeline should have been much longer on like a project that you were working on and somehow you guys were able to kind of bring that in by you know weeks or months?
All the time. Every time. Every time we come away from like an EL [Elon] meeting or something internal where someone pushes hard to get something done, or someone external who isn't responsible for the thing asks for a requirement, asks for something to be done in what we originally think is an unreasonable amount of time... you know we spend two minutes like thinking about it, complaining maybe a bit, and then the rest of the time is dedicated to getting it done in that time.
Frequently the estimated time to get something done is based on some set of assumptions. And then once you get this timeline that's like half or one-tenth of what you would have otherwise done, you look at the assumptions, say "Okay, proportionally how much is this impacting my timeline?" And then you knock it out or you change it and then suddenly you get a 2x improvement in your timeline. You do that a few times, you can meet whatever requirement you really want. Yeah, at a certain point you get to the physical limitations, but you're never there from the start.
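The compounding he describes can be sketched numerically. This is a toy illustration, not anything from xAI; the function name and the 2x-per-assumption figures are made up for the example:

```python
def compressed_timeline(original_days: float, assumption_speedups: list[float]) -> float:
    """Each challenged assumption divides the remaining timeline by its speedup factor.

    Speedups compound multiplicatively, so a few 2x wins quickly approach
    order-of-magnitude compression.
    """
    days = original_days
    for speedup in assumption_speedups:
        days /= speedup
    return days

# Hypothetical: a 90-day estimate, three assumptions each worth a 2x improvement.
print(compressed_timeline(90, [2, 2, 2]))  # 11.25
```

Three compounding 2x wins turn a quarter-long estimate into under two weeks, which matches his point that you hit physical limits only after several such rounds.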
So I know for Full Self-Driving, and same thing with the rockets at SpaceX, the actual timeline was significantly longer than the "Elon timeline." Like, "Elon time" might be a quarter or half of what it eventually takes, but then it also, you know, happens four times faster because of the aggressive initial timeline. Is it more or less like that at xAI? Because it's more, I mean I guess more on the software side now, but even on the data center side things seem to be happening just way, way, way faster. And they also seem to be happening on roughly the timeline he's saying. He's like, "This is going to happen roughly, you know, this number of months in the future," and then it actually does.
I think he himself has recalibrated his timelines now that he's deployed such a wide variety of hardware at scale. So his own estimates for things are definitely a lot better. I think he also updates his timelines faster now too, like sometimes daily. He's talking with us and figures out what the update on the timeline should be based on various parameters. And sometimes they come from him too, right, especially on the infrastructure side. If a deal comes through, or we can be put into a batch for the production of a certain chip, well, we can save a month or two, maybe even more than that depending on what the deployment is specifically.
And then on the software side it's the same. He always says like you can always attempt to do something, you know, in one month that would otherwise take a year and you'll probably get it done maybe in two. Still a lot faster.
I remember in the early days of SpaceX there was this internal—I think Elon would say internally like "Every day that we delay is like 10 million in lost revenue." And I have no idea what it would be like for xAI, like things are moving so fast. It's like, is there kind of an internal thing in your head of "Every day that we don't push hard or make something happen, we're losing out on X amount of value that could be created?"
Yeah, for Macrohard specifically, we do have a few pretty specific revenue targets. I can't share the number specifically, but in my head, whenever something gets delayed or accelerated, I can pretty quickly calculate how much money we just made or lost.
Just wild swings.
Yeah, I mean the numbers are huge. [Laughter] Just because the expected return is so huge and the timeline is so fast. So a few days is actually proportionately fairly large compared to how much you would otherwise expect the revenue to be.
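The back-of-envelope he's describing is simple division. Since he can't share the real targets, the numbers below are entirely hypothetical, just to show why a few days is proportionately large against a fast timeline:

```python
def cost_of_delay(revenue_target: float, horizon_days: int, delay_days: int) -> float:
    """Rough expected value lost by slipping `delay_days` against a revenue target.

    Assumes revenue accrues roughly linearly over the horizon, which is the
    crude mental-math version, not a real financial model.
    """
    return revenue_target / horizon_days * delay_days

# Hypothetical: a $500M target over a one-year horizon, three-day slip.
print(f"${cost_of_delay(500e6, 365, 3):,.0f}")
```

The shorter the horizon, the larger each lost day looms, which is his point about the swings being huge.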
Elon's like famous for making really, really big bets pretty quickly. Like what's the biggest decision that's been made in a single meeting where like huge, huge amounts of capital or time or commitment were done?
Um, I think one of them was certainly the decision to go with a model that would be at least 1.5 times faster than a human for Macrohard. Now it's looking significantly faster than that, 8x maybe, maybe more. For other human-emulator-type attempts at the other labs, the approach has been "Let's do more reasoning and build a bigger model." That decision put us on totally the opposite track from what everyone else is doing. And pretty much everything that we're doing is downstream of that.
It impacted everything, and it was decided very early on. It was also sort of expected that this was the move, especially given the analogy to Full Self-Driving. No one's going to wait around 10 minutes for the computer to do something I could have done in five, but if it can be done in 10 seconds, well, I'd be happy to pay whatever amount of money for that. It's just obvious, really. Normally us engineers would push back and say, "Oh, you know, here's the 20 different reasons it needs to be this way." But if a decision is made and you work backwards, then life finds a way.
I remember Elon saying, I think it was at Y Combinator, he was doing a Q&A with Garry Tan, and Garry asked about AI researchers and he was like, "No, they're just all AI engineers now."
Yeah, someone said that in one of the meetings we did with him early last year, talking about recruiting. Like, "Here's the job descriptions" or something like that. And for 10 minutes he just goes on: "Engineers. Just engineers. Doesn't matter. Good engineers. Engineers. Just someone who's fundamentally a problem solver. Doesn't matter if they did, like, this XY thing in this infrastructure or this particular architecture or whatever. Engineers."
Why is that definition so important?
It keeps things broad. It means that people can come to us from an extremely wide variety of places. And this has been the case. I mean, there's less of it in the AI world I think, but there are a lot of SpaceX stories where people came in from strange walks of life you wouldn't have expected and then ended up doing huge things at SpaceX in the engineering world as a result. So, keeping it broad means those people can have a path to us and help us accelerate.
For you personally, what's the most fun thing about working there day-to-day?
No one tells me no. Yeah, if I have a good idea, I can usually go and implement it that same day and show it off, and we'll see if it makes sense. We'll run whatever eval or show it to a customer or show it to Elon or whoever, and we'll get an answer, usually that same day, as to whether or not that was the right move. There's no deliberation. There's no waiting for any bureaucracy. I like that a lot. I was expecting to sacrifice some amount of this coming from extremely small startups to a larger company. I guess joining at 100 people, to me that was like a 10x leap from anywhere else I've been. But relative to Elon companies it's pretty small, and it does feel very small. There's not a lot of overhead in anything.
Did you have any other big assumptions going in that proved like completely wrong?
I thought there would be more top-down. And there's some, but not really that much. There are basically only three layers of management: there's ICs [Individual Contributors], there's the co-founders and some of the newer managers, and then Elon. And that's it. And because there are so many reports per manager, nothing really comes from them top-down. We'll usually come up with a solution; if they're okay and Elon's okay, we're good. If there's feedback, then we update. But it's a lot more bottom-up than I expected.
Like it's designed so that everyone is building things, and there are fewer manager-managers and more just builders.
Uh, yeah, when I joined I think every manager also wrote code, and I think largely today they still do. Not as much, now that some of them have, you know, 100-plus people reporting to them. But everyone's an engineer. I remember actually in my first week I sat down for dinner and this guy sits next to me and I asked, "Hey, what team are you on? How are you doing? I just joined." And he tells me, "Oh, I'm on sales and enterprise deals." And I was like, "Oh, I don't want to talk to this guy. He's a sales guy." And then he starts telling me about this model he's training. He's an engineer, too. The sales team are all engineers. Everyone is an engineer. I think at the time there were probably fewer than eight people at the company who were not engineers in some capacity. And even then, yeah, it was really cool. Everyone contributes to the machine.
Is it a little bit more like you have a single person working on some project, and if you're the engineer working on the thing, you can have a much closer relationship with the customer, understand their problem, and then rapidly implement solutions?
Yeah. The fewer layers you have, the less information is lost. There's less compression, basically, because you have to communicate fewer times, and language is lossy compared to what's going on in your brain. So if you have to go from the customer's brain to words, to the salesperson's brain, to words, to a manager... every layer, you're losing a huge amount of information.
Yeah.
And so if you can cut as many layers as possible, then you've only got one compression step of the customer telling you what to do or what they want or what their experience is, and you as the engineer can solve it directly.
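His lossy-layers argument compounds multiplicatively, the same way repeated re-encoding degrades a signal. A toy model, where the 0.8 retention rate per handoff is purely an assumed illustrative figure:

```python
def info_retained(hops: int, retention_per_hop: float = 0.8) -> float:
    """Fraction of the customer's original intent surviving `hops` brain-to-words
    handoffs, assuming each hop retains the same fraction (a toy assumption)."""
    return retention_per_hop ** hops

# Customer -> engineer directly: one handoff.
direct = info_retained(1)   # 0.8
# Customer -> salesperson -> manager -> engineer: three handoffs.
layered = info_retained(3)  # 0.8 ** 3 = 0.512
print(f"direct: {direct:.2f}, layered: {layered:.3f}")
```

Even with generous 80% retention per hop, three hops preserve barely half the original intent, which is the case for cutting layers.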
Is there anything specific that you've never heard of or seen at any other company that xAI does, that allows things to just happen way faster?
The fuzziness between teams over what everyone is responsible for is definitely not what I expected, and I don't think it exists nearly as much at any large company, or even a remotely similarly sized company. Like, if I need to fix something in our VM infrastructure, I will do it. I'll show it to the guy who owns it, they'll be like "Okay," and it's merged and deployed immediately. There's not a lot of strict regimen. Everyone is allowed to update everything, and there are some checks for dangerous things, but largely you're trusted to do the right thing and do it right. Which is really cool.
I remember when Elon was still really working on DOGE, at one point I think they deleted like Ebola prevention or something and then rapidly put it back. But things had been deleted because of this rapid process of trying to figure out, you know, what doesn't need to be done, and then re-implemented.
There's very rarely anything irreversibly destructive. I'm actually not really aware of anything where something was irreversibly destroyed, but like I said, yeah, frequently something will be deleted or removed and someone will be like, "Hey, I needed that," you know, an hour or two later. And then you go and roll back. Or, you know, sometimes it can be months where someone's building a project and depending on, I don't know, some piece of infrastructure... and it turns out we rebuilt that thing three times by the time they go to deploy and need it, and so you update and go that way.
Do you think it's helpful to have like so few people working there on the engineering team?
Yeah, definitely. I'd definitely say a job for one person done by two will take twice as long. And that applies for every skill, I think. And especially now that you don't need to physically write as much code as you did previously, you can be more of the decision maker and the architect. Everyone can be an architect. You just don't need as many hands, so one brain can do a lot more.
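His one-person-versus-two claim echoes the classic communication-overhead argument behind Brooks's law: coordination channels grow quadratically with headcount. A minimal sketch (the framing, not anything xAI uses):

```python
def comm_channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people: n*(n-1)/2."""
    return n * (n - 1) // 2

# Channels grow quadratically while hands grow linearly.
for n in (1, 2, 5, 10):
    print(f"{n:>2} people -> {comm_channels(n):>2} channels")
```

One person has zero coordination channels; ten people have 45, which is why adding hands to a job one brain can do often slows it down.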
You tried starting multiple companies and you were doing a whole bunch of different projects prior to this. What about working here and what about the mission, the culture resonated? Why did you decide to work on this?
Uh, I've definitely always been very Elon-pilled. He's been a big personal hero of mine, especially growing up, you know, seeing the Falcon landings, the first ones. And I went out to Flight 5 of Starship, which was so worth it. It was the first one where they caught the booster. It was definitely the coolest thing I've ever seen. So being part of anything even remotely related to that sounds awesome to me.
Is there a reason why you chose this company instead of SpaceX or Tesla?
Yeah, I'm definitely an entrepreneur at heart, and xAI is definitely the smallest company, the newest of all of them. My assumption, and this has largely proven true I think, is that it's where you can have the most leverage and impact as an individual person, because proportionally you're a much larger percentage of the company than you would be at the other companies. Not to say they're not doing cool things or everyone there isn't as important, but the path from idea to implementation to seeing the results is very quick.
And um, another assumption I had that turned out to be wrong was that I would be faster on my own, you know, to build XYZ thing or try XYZ experiment. I'm actually usually faster at xAI, just because I have a groundwork and a team who've probably already done a lot of the steps I would otherwise have to do by hand. And there's, yeah, no one saying no.
You mentioned like it's kind of a fuzzy blurred line between people working on different things. Has there been any ability for you to kind of go to other people in the organization and just ask for help?
All the time.
What does that look like?
Um, I walk up to their desk and I say, "Hey, here's my question. What are you working on right now? Can I support any of that? And can you help me with this?" That's it. Everyone's in the same building. So, uh, yeah, actually, funny enough, we started testing some of our human emulators internally within the company, as employees. And we didn't really tell anyone about this. So in some cases there'll be someone doing some work and they're like, "Hey, can you help me with this thing?" or "Can you do this thing?" And the virtual employee is like, "Yeah, sure. Come to my desk." And they go there and there's nothing there.
It's like the Claude situation, when they first rolled out their vending machine and it was like, "I'm going to see you tomorrow." And then, you know, it's obviously just a piece of code.
Yeah, exactly. And so, uh, multiple times I've gotten a ping saying like, "Hey, this guy on the org chart reports to you. Is he like not in today or something?" [Laughter]
It's just like an emulation.
And it's an AI. Uh, a virtual employee. But yeah, generally we all expect to be in the same building and reachable to each other. So, yeah, I can always ask for help, and people ask me for help all the time.
What have been the biggest like blunders that have happened?
Hm. So, um, with the human emulators, with the customers we're working with, we always try to understand what the job they're doing is and all the facets of it. Frequently we'll, you know, talk to them, we'll interview them, we'll even watch them. Well, actually, we do the watching as the last step. So we'll talk to them, we'll interview them, and they'll either give us a write-up or we'll just meet with them and write notes on how they do their job.
And then, like a week later, we'll look at the mistakes the virtual employee is making and realize, well, it's always making mistakes in these specific cases. What's going on? And we go watch the human doing the same thing, and there are like 20 different steps missing that they just totally left out. And we go to them and they're like, "Oh yeah, we do that, I forgot to tell you. My bad." It happens all the time. A lot of things people just assume automatically; it's all taken for granted in their head, totally on autopilot. The same way you can be driving for an hour and not remember a single second of it, not be paying attention, be totally in your own world. It's the same for everything a human does repeatedly. And that's what we're trying to solve, basically: all the dumb stuff humans do repetitively right now that they don't need to.
How do you decide which thing to go after? What's in your head when you're thinking about that? What are the biggest things, outside of driving, that humans do all the time that they just don't need to do?
Um, anything repetitive on a computer. So customer support is a big one: taking in free-form input from an arbitrary customer in an arbitrary form factor and translating that into a standard workflow that is purpose-built for an AI to take care of, so that human can go do something more creative and use their brain in a more effective way. It's a total parallel to what happened in the coding world: okay, I don't need to write the same implementation 20 different times anymore. I can describe it in like three words and it's done. It's a huge compression step. And this is the same thing, basically, but for arbitrary digital workflows.
On the human emulator side you run into this problem of humans not existing and then like someone says come to my desk and the person doesn't exist. Is there any other thing that's been kind of surprising on rolling that out internally?
Surprisingly, we've been able to generalize to more cases than we thought. We test and we're pleasantly surprised a lot of the time. Like, just today we gave Elon a few cases where we did not train on the task at all, but it did it flawlessly, like perfectly, way better than we would have expected. Yeah, the generalization is better than we expected, for sure. And we're still at a very early stage, so it's only going to get better. And again, it's the same parallel to Full Self-Driving, where there's stuff not in the training data that the car does react to perfectly, due to the generalization of an otherwise very, very small model. It's basically a matter of weight efficiency.
For the Elon meetings that you've been in, like what does that actually look like?
Um, they're pretty simple, honestly. Um, and I've been lucky that most of the ones I've been in have gone mostly pretty smoothly.
What does smooth look like?
Smooth is limited feedback or thumbs up. That means like, "Okay, you're going in the right direction. Keep going. Uh, I'll hear updates next week." Uh, or whatever it is. If there's feedback or a total reversal of direction as a request then we messed up somewhere. Then the question is where? So that's usually... we don't even have time to identify that. That's something you just build up implicitly as a muscle as you go on. And sometimes assumptions also change based on new information. That always happens in every case.
So when it comes from the top down, it's a little chaotic. I know with SpaceX, the cost of parts and building things is super, super important, because, you know, everything basically costs a [__] ton of money and time to do, right? For this sort of thing, I imagine it's a little less focused on that; he's not necessarily drilling down on whether you understand every part of every process. What does it look like when he's giving feedback?
Um, usually it's either at a very high level or at a very low level. It's not really often in between. So either on the high level it's like a product direction or customer sense: "Focus on this segment exclusively" or "Don't do this thing at all" or whatever. Um and then at a low level, especially when it comes to compute efficiency or latency, he'll always have a specific suggestion or "Let's try this."
And he's open to being like proven wrong, but it has to be proof. It has to be like "Let's try it and see what the results are." It can't be just someone's opinion. There has to be an experiment done. Um which has led to some surprising results sometimes and we go with it.
What have been those?
Um, so the compute efficiency of going with the small model has led to a lot of improvements we wouldn't have otherwise thought of. Some of them are secondary, some of them are primary. The obvious one is, well, being able to go much, much faster than a human. But also, as a result, and Tesla found this too with Full Self-Driving going with the smaller model, you're able to iterate much, much faster. So not only does the model react to situations faster and tolerate tighter time frames, you can also just deploy iterations much faster. If it was four weeks before, maybe it's one week now. So that actually goes back to the experimentation: why we can have 20 different experiments going in parallel is a result of that particular decision early in the chain.
And was the initial idea like go just do big large models and then...?
Sort of. We definitely wanted to go faster than everyone else, but the question was how much faster, and the answer to that got amplified, basically: multiply by a lot.
There's like a lot of bias and stuff in Wikipedia, and Elon has been focused on kind of creating an alternate version that's, you know, more truthful in effect. How do you go about basically cleaning up the internet in that way, to figure out what is truth?
It's a really hard problem. Yeah, it's very hard, especially because the internet is not usually the ground truth for whatever thing it is. So wherever we can, we try to drill down to the fundamentals, which is very hard. Like, I don't know what the fundamentals, in physics, of the Constitution are. That's not really a question I think I, or anyone, could faithfully answer very well. But you try to do something like that: drill down as close as you can and then build up from there. Which is hard too, because there's not actually a big body of writing that does that.
Um, one of the few examples is probably James Burke with his Connections series, where he'll take two totally seemingly unrelated concepts and then connect them through physics and inventions. It's really cool, and we're trying to do the same, but it's fairly novel.
How do you find better data?
Uh data is not the only thing that goes into the results.
Yeah.
Like, how you actually train on that data, and I know that's a pretty broad term, but it is true: how you actually evaluate against that data and train against it, and your different methods for updating the weights, do matter a lot. You can try to faithfully recreate the input or the output given any arbitrary input, and, well, you can create basically a horrible copy-paste mechanism if you want, which is a classic problem in ML. There's a bit of an art to avoiding that problem.
But I guess we're a few steps removed from that at this point. We're not measuring fitness to any particular data set; we're trying to measure against an arbitrary output. So it matters a lot how you construct your evals. Which is really hard for truthfulness, because then you need to know the truth, which isn't always... well, I mean, that's really the problem we're trying to solve, right? So it's kind of chicken-and-egg. Yeah, there are a lot of different approaches and a bunch of smart people working on it. If anyone has suggestions, please send them through. There are a lot of different ways to look at it.
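The "horrible copy-paste mechanism" he mentions is the classic memorization/overfitting failure. One common guard, sketched here with hypothetical names (this is not xAI's actual pipeline), is to evaluate only on held-out data and to flag outputs that exactly reproduce training examples:

```python
import random

def split_holdout(examples, holdout_frac=0.2, seed=0):
    """Shuffle and split data so evals never score fitness to the training set."""
    rng = random.Random(seed)
    shuffled = list(examples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - holdout_frac))
    return shuffled[:cut], shuffled[cut:]

def memorization_rate(model_outputs, train_targets):
    """Fraction of model outputs that exactly reproduce a training target:
    a crude signal that the model is copy-pasting rather than generalizing."""
    train_set = set(train_targets)
    if not model_outputs:
        return 0.0
    return sum(out in train_set for out in model_outputs) / len(model_outputs)

train, heldout = split_holdout(list(range(100)))
print(len(train), len(heldout))  # 80 20
```

Exact-match checks only catch verbatim copying; in practice near-duplicate detection is needed too, but the hold-out principle is the same.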
So, there's been like moments in time where I've seen Elon on X and someone has said like "This is obviously not right" and it's like some Grok output and he's like "We're going to fix this" and then you know 12 hours later, 24 hours later he's like "All right it's fixed." When that happens like what happens internally?
Uh, he shows us what went wrong, and then quickly whoever is awake at the time will start a thread to go and solve it, usually individually, pulling in a few others if need be, and then give a postmortem on what happened, so everyone understands what went wrong and how to avoid it in the future. Generally, making a mistake once is okay, but making the same mistake twice is a big problem.
Throughout SpaceX's history, and same thing with Tesla, there have been a bunch of these surges where randomly Elon will come in at midnight, send out a companywide email, and say, "Everyone that can come in, come in, we need to be working." That sort of thing. Has there been anything like that?
That happens more for the big models than anything. Um, for Macrohard specifically, we've been operating in a war room for four months. [Laughter] So we've kind of always been on that push.
Do you guys have like a sign on the door that says war room?
Yeah.
Amazing.
Actually, well, yeah, we outgrew the original war room. So we moved everything out, and I'm told [Elon] walks into the war room and it's totally empty, and he's like, "Where is everyone? What?" And he walks over to where we are now, which is just the gym, which we cleared out and put everyone in, and then conducts his impromptu questions about what's going on. That was a long night. [Laughter]
What is it like on one of those nights where a lot of things kind of get shaken up and moved forward or like there's one of these surges. What does that feel like?
Um, I think I actually saw one of the co-founders of xAI post about this recently. Igor, who was great to work with. I liked him a lot. It was actually really cool to work with him, side tangent, because his work on StarCraft AI way back, like almost 10 years ago now, was one of the first cool pieces of ML work I tried to replicate myself in high school. Which was very hard. So it was really cool to work with him; I totally never thought I would get the chance to.
Um, but anyway, uh, I saw him post this thing a few days ago where he's like, "Okay, there are some, uh, you know, months where only a few days go by and then there's some nights where months happen." And that was like one of them for sure. Um, months might be an exaggeration. I think we would have gotten to the technical result we would have in a few weeks anyway, but doing it in one night was a huge push and it was a long night.
Has there been any moments where the company just didn't leave the office for like 5 days or like a week?
Yeah, the surges for the models usually result in a lot of people staying overnight.
And you mentioned there's like five or six pods that people can sleep in and they like toggle out.
Yeah. Yeah. There's some sleeping pods and we have some bunk beds now, too. Um, which are less nice, but they exist. And then when the tent picture came out, everyone kept sending that to me and I was like, honestly, yeah, we have tents, but I've never seen that many out at once. [Laughter]
I know you worked on a bunch of different projects as a kid. I don't know if it was your first one, but there was the fidget spinner business, making fidget spinners. I don't think it was in your garage; maybe it was in your room.
Yeah.
What kind of stuff like that tinkering mindset? How much of that have you kind of taken to this?
Uh, quite a bit. Quite a bit. Yeah. So I learned programming when I was quite young. My dad got me a book when I was like 11, and I liked it a lot. Well, I liked it a bit, but I really started to like it when I realized you can make money from it. So I met some people online who were basically writing scripts for games, as hacks, and would sell them online for small amounts of money. But, you know, making a couple hundred bucks online was huge for me.
I think the first time that you like have someone give you money, it's the strangest feeling.
Crazy. Yeah. I remember having to ask my dad for like a PayPal custody account or whatever, and getting the money in was like the coolest thing of all time for me. Yeah, it was really big. So I did that for a couple months and saved up enough money. At the time I was really interested in additive manufacturing, like 3D printers. RepRap was the big thing then; that was kind of what kicked off the modern 3D printing revolution. RepRap was like this...
Built your own, right?
Yeah, you had to. That was the only option. RepRap was literally just a bunch of university students, basically, who said, "Let's see if we can build a machine that can build almost all the components for itself." Which is why it was called RepRap. They basically built, at a variety of universities, these rooms where you start with one printer, and then it prints the parts for the next printer, and you go all the way up and scale. And there are lots of problems, as it turns out, and that's what they were solving, and it eventually kicked off the modern 3D printing revolution.
Um, but I was very obsessed with it, so I took one of their parts lists, bought everything from Alibaba, and a month later things came in and I assembled it all in one night. Which went poorly, actually, when I was unbundling the copper cable for the power supply, which was a very sketchy power supply and did catch fire in the end. All the copper windings came loose and frayed everywhere, and one went like two inches into my thumb.
Did you just... did your thumb just not work, or did you go to the hospital or something?
No. So, it was a school night and it was like 3:00 a.m., because I wasn't very good at building things at 13. And I spent like an hour in the bathroom trying to pull it out with tweezers, and it just wasn't... it was bad. So I just cut it off and was like, eh. [Laughter] And bit by bit over the next few weeks it came out, and I would snip it off in the mornings. It was fun.
Um, yeah. But I got the printer assembled. And around that time, yeah, the fidget spinner craze was taking off. So I bought 1,000 skateboard bearings from China and basically set up a little factory in my bedroom, where every two hours at night I would wake up, clear the print bed, and start a new print of fidget spinners, and I would sell them online. And then before school I had a little assembly line in my garage where I would put in the bearings, spray paint, dry, and then run around to all the bus stops of the other schools and sell to my distributors, which were just kids at other schools, sell all day at school, come back, collect from my distributors, and then sell online and ship.
Built a healthy little business, and after two months it got shut down by the county. Their official quoted reason was that the companies that sell the school food technically have an exclusive license to sell anything on school property. But I think they just didn't like that I was distracting everyone and making money doing it. It taught me a good, healthy disrespect for authority.
I think that has kind of been a constant theme. How has that materialized in your life, the healthy disrespect for authority? And you even mentioned institutions, like you don't necessarily trust institutions. How did you come to that, and what does that look like?
Um, I've always known from very young that I want an unconventional outcome. And going through a conventional path would pretty much necessarily not lead you to an unconventional outcome. So I grew opposed to any form of convention, and institutions necessarily enforce convention. I think creativity and interesting outcomes come mostly from free-spirited individuals, in almost every case if not all of them. I guess it's a bit of a high-minded way of saying it, but individuals are the most creative you can get. And so staying true to that is the way to go.
I do love John Collison's idea that everything is so hard to build and so hard to make, especially, you know, to put into the real world, that if you look around, the world is basically just filled with, you know, people's passion projects.
Yeah. It's a total miracle. Um, there's a story behind every little thing, way more than you would think. I remember reading about, I think it was, YKK zippers. Apparently there are only two or three companies in the world that make good zippers, which are actually pretty little miracles: they're very cheap but also mechanically relatively complicated for how much you pay for them. And there are only a few companies that are capable of building them, or have set up to build them. And it's basically this one Japanese guy's passion project over 40 years, figuring out how to do this properly. Um, and this is the case for pretty much everything. Anything very specific and at scale is probably only done by a few companies or a few people in the world. So yeah, you hear about it every so often, right? Some arbitrary company in Germany shuts down and Volkswagen has to halt all their lines, or something like that. Happens all the time.
Right before we met you had made a liquid fuel I think rocket engine.
Um it was like a very small thing. I saw it upstairs.
But we were talking before this, and you said you did it in like 24 hours, just on a whim. Um, how did that happen?
Yeah. Um, so it was a project over roughly four weeks. And I started by literally just buying a bunch of textbooks and trying to figure out, like, what are the design principles behind a rocket engine, like how do I design it? It's totally different from learning software, where you can just go on GitHub and download people's code and modify it. There's no file for a rocket engine. You have to learn what the material properties are, what the chemical properties are, how you actually machine it, um, how you design the parameters and know what to expect in terms of thrust output, and how you keep from over-pressuring the engine, all these kinds of things. And how do you design the injector, which was very hard. That was probably 50% of the time.
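That "know what to expect in terms of thrust output" step can be sketched with the standard ideal thrust equation, F = ṁ·v_e + (p_e − p_a)·A_e. Every number below is an illustrative assumption, not a spec of the engine described in the interview:

```python
# Back-of-the-envelope thrust estimate for a small liquid engine,
# using the standard ideal thrust equation:
#   F = mdot * v_e + (p_e - p_a) * A_e
# All values here are illustrative assumptions, not measured data.

mdot = 0.05       # propellant mass flow rate, kg/s (assumed)
v_e = 1800.0      # effective exhaust velocity, m/s (assumed; modest for ethanol)
p_e = 101_325.0   # nozzle exit pressure, Pa (assumed perfectly expanded)
p_a = 101_325.0   # ambient pressure at sea level, Pa
a_e = 5e-4        # nozzle exit area, m^2 (assumed)

# Momentum thrust plus pressure thrust.
thrust = mdot * v_e + (p_e - p_a) * a_e
print(f"Estimated thrust: {thrust:.1f} N")
```

With these assumed numbers the momentum term dominates (0.05 kg/s × 1800 m/s = 90 N) and the pressure term vanishes because the nozzle is assumed perfectly expanded. An over-pressure event like the one he describes later is the failure mode where chamber pressure exceeds what the injector and chamber were designed for, expelling unburnt propellant.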
Was that the hardest thing?
Yeah, the injector was very hard, and it was like the biggest flaw in the end. Um, so yeah, I spent like 3-4 weeks doing this, and expedited a bunch of parts from China, like CNC and all that stuff. And it was right before Thanksgiving. I was going to fly back to the East Coast and visit my family, and I was like, "Okay, either I build it and fire it tonight (it was all just a bunch of parts at that time), or I do this in two weeks." And I'm like, I'm not going to do this in two weeks. I'm going to do this right now. So I drank a lot of coffee in the morning and then spent the whole day hacking away at aluminum extrusions, built out the test frame and then the engine itself, and lit it off that night. Um, yeah, which had a lot of, we'll say, concessions made to make it happen that night. [Laughter]
I did find it absolutely hilarious that, like you said, you were like a couple feet away?
Yeah. So, I designed it like I wasn't stupid. I designed it so that I could remotely fire it, but the power supply hadn't come in yet to remotely power the computer that was on board. So I had to use a USB cable from my laptop to power the onboard computer, and I didn't have a long enough USB cable. The longest one I had was like 6 feet. So I had to stand right next to it and light it up. And I was like, "There's like a 30% chance that this thing explodes or launches fire everywhere." And actually, um, I don't know if it shows in the video (I think it does show in the video) but my jacket did catch fire, because I wasn't that great at designing the injector and it did create a lot of over-pressure events, which meant there was a lot of basically unburnt fuel spewing out, which was ethanol. And that's a liquid, and some landed on my jacket and caught fire. Um, so yeah, that's still a trophy, the burnt jacket.