Nov 16, 2024
There’s been a tranche of emails released as part of the Musk vs Altman stuff around OpenAI and it makes for some interesting reading.
One of the big things that jumps out is how much focus there is on crafting the narrative and mission for OpenAI.
They’re obsessed with getting the best talent (cheaply it seems), using the mission as the motivator:
Sam Altman to Elon Musk - Jun 24, 2015
The mission would be to create the first general AI and use it for individual empowerment—ie, the distributed version of the future that seems the safest. More generally, safety should be a first-class requirement.
Elon Musk to Sam Altman - Dec 8, 2015
It is super important to get the opening summary section right. This will be what everyone reads and what the press mostly quotes. The whole point of this release is to attract top talent. Not sure Greg totally gets that.
Sam Altman to Elon Musk - Dec 8, 2015
how is this?
__
OpenAI is a non-profit artificial intelligence research company with the goal of advancing digital intelligence in the way that is most likely to benefit humanity as a whole, unencumbered by an obligation to generate financial returns.
Because we don’t have any financial obligations, we can focus on the maximal positive human impact and disseminating AI technology as broadly as possible. We believe AI should be an extension of individual human wills and, in the spirit of liberty, not be concentrated in the hands of the few.
The outcome of this venture is uncertain and the pay is low compared to what others will offer, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.
I think this is one of the things that Elon Musk does very, very well: setting out the vision and then using it to capture the best talent.
Sometimes I think he almost inverts the problem of finding a mission. He knows that to win he needs the best talent, so he casts about for the problems that would attract the best talent.
He stands in the future, identifies an incredibly motivating mission, and then finds the problem whose solution delivers that mission. He wants to get to Mars to prolong humanity’s existence and make us a multiplanetary species, or he wants to slow the rate of global warming by accelerating the electrification of cars.
Anyway, if you’re interested, there’s quite a bit in there that reveals the intentions of each of the major players and probably provides some insight into recent moves. I’ll leave that, though, as that drama belongs elsewhere. Instead I’ll just call out one email from Andrej Karpathy:
OpenAI Email Archives (from Musk v. Altman)
Andrej Karpathy to Elon Musk - Jan 31, 2018
Working at the cutting edge of AI is unfortunately expensive. For example, DeepMind’s operating expenses in 2016 were at around $250M USD (does not include compute). With their growing team today it might be ~0.5B/yr. But then Alphabet in 2016 reported ~20B net income so it’s still fairly cheap even if DeepMind had no revenue of its own. In addition to DeepMind, Google also has Google Brain, Research, and Cloud. And TensorFlow, TPUs, and they own about a third of all research (in fact, they hold their own AI conferences).
I also strongly suspect that compute horsepower will be necessary (and possibly even sufficient) to reach AGI. If historical trends are any indication, progress in AI is primarily driven by systems - compute, data, infrastructure. The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary.
It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can’t seriously compete but continue to do research in open, you might in fact be making things worse and helping them out “for free”, because any advances are fairly easy for them to copy and immediately incorporate, at scale.
A for-profit pivot might create a more sustainable revenue stream over time and would, with the current team, likely bring in a lot of investment. However, building out a product from scratch would steal focus from AI research, it would take a long time and it’s unclear if a company could “catch up” to Google scale, and the investors might exert too much pressure in the wrong directions.
The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the “first stage” of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The “second stage” would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla’s market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.
I cannot see anything else that has the potential to reach sustainable Google-scale capital within a decade.
-Andrej