
Altman’s Folly or Too Big to Fail?

The following statement was issued by Sam Altman on X on November 6, 2025:

I would like to clarify a few things.

First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or otherwise lose in the market. If one company fails, other companies will do good work. What we do think might make sense is governments building (and owning) their own AI infrastructure, but then the upside of that should flow to the government as well. We can imagine a world where governments decide to offtake a lot of computing power and get to decide how to use it, and it may make sense to provide lower cost of capital to do so. Building a strategic national reserve of computing power makes a lot of sense. But this should be for the government’s benefit, not the benefit of private companies.

The one area where we have discussed loan guarantees is as part of supporting the buildout of semiconductor fabs in the US, where we and other companies have responded to the government’s call and where we would be happy to help (though we did not formally apply). The basic idea there has been ensuring that the sourcing of the chip supply chain is as American as possible in order to bring jobs and industrialization back to the US, and to enhance the strategic position of the US with an independent supply chain, for the benefit of all American companies. This is of course different from governments guaranteeing private-benefit datacenter buildouts. There are at least 3 “questions behind the question” here that are understandably causing concern.

First, “How is OpenAI going to pay for all this infrastructure it is signing up for?” We expect to end this year above $20 billion in annualized revenue run rate and grow to hundreds of billion by 2030. We are looking at commitments of about $1.4 trillion over the next 8 years. Obviously this requires continued revenue growth, and each doubling is a lot of work! But we are feeling good about our prospects there; we are quite excited about our upcoming enterprise offering for example, and there are categories like new consumer devices and robotics that we also expect to be very significant. But there are also new categories we have a hard time putting specifics on like AI that can do scientific discovery, which we will touch on later. We are also looking at ways to more directly sell compute capacity to other companies (and people); we are pretty sure the world is going to need a lot of “AI cloud”, and we are excited to offer this. We may also raise more equity or debt capital in the future. But everything we currently see suggests that the world is going to need a great deal more computing power than what we are already planning for.

Second, “Is OpenAI trying to become too big to fail, and should the government pick winners and losers?” Our answer on this is an unequivocal no. If we screw up and can’t fix it, we should fail, and other companies will continue on doing good work and servicing customers. That’s how capitalism works and the ecosystem and economy would be fine. We plan to be a wildly successful company, but if we get it wrong, that’s on us. Our CFO talked about government financing yesterday, and then later clarified her point underscoring that she could have phrased things more clearly. As mentioned above, we think that the US government should have a national strategy for its own AI infrastructure. Tyler Cowen asked me a few weeks ago about the federal government becoming the insurer of last resort for AI, in the sense of risks (like nuclear power) not about overbuild. I said “I do think the government ends up as the insurer of last resort, but I think I mean that in a different way than you mean that, and I don’t expect them to actually be writing the policies in the way that maybe they do for nuclear”. Again, this was in a totally different context than datacenter buildout, and not about bailing out a company. What we were talking about is something going catastrophically wrong—say, a rogue actor using an AI to coordinate a large-scale cyberattack that disrupts critical infrastructure—and how intentional misuse of AI could cause harm at a scale that only the government could deal with. I do not think the government should be writing insurance policies for AI companies.

Third, “Why do you need to spend so much now, instead of growing more slowly?”. We are trying to build the infrastructure for a future economy powered by AI, and given everything we see on the horizon in our research program, this is the time to invest to be really scaling up our technology. Massive infrastructure projects take quite awhile to build, so we have to start now. Based on the trends we are seeing of how people are using AI and how much of it they would like to use, we believe the risk to OpenAI of not having enough computing power is more significant and more likely than the risk of having too much. Even today, we and others have to rate limit our products and not offer new features and models because we face such a severe compute constraint. In a world where AI can make important scientific breakthroughs but at the cost of tremendous amounts of computing power, we want to be ready to meet that moment. And we no longer think it’s in the distant future. Our mission requires us to do what we can to not wait many more years to apply AI to hard problems, like contributing to curing deadly diseases, and to bring the benefits of AGI to people as soon as possible. Also, we want a world of abundant and cheap AI. We expect massive demand for this technology, and for it to improve people’s lives in many ways. It is a great privilege to get to be in the arena, and to have the conviction to take a run at building infrastructure at such scale for something so important. This is the bet we are making, and given our vantage point, we feel good about it. But we of course could be wrong, and the market—not the government—will deal with it if we are.

To be honest, this author is in the camp that this smells like one gigantic Pets.com-style fraud. But hey, I didn’t ride Netscape to zero (or whatever it finally sold for), nor did I think that America would be foolish enough to repeat the dot-com bust.

Let’s ignore all the fluffernuttery and feel-good filler in the statement above, which was designed to cover for this remark by the CFO, reported yesterday in the Wall Street Journal and other financial papers; this version is via The Business Times:

OpenAI seeks government backing to boost AI investments

The key excerpt:

Speaking at a Wall Street Journal business conference, OpenAI CFO Sarah Friar explained that government backing could help attract the enormous investment needed for AI computing and infrastructure, given the uncertain lifespan of AI data centres.

“This is where we’re looking for an ecosystem of banks, private equity, maybe even governmental,” Friar said.

Federal loan guarantees would “really drop the cost of the financing,” she explained, enabling OpenAI and its investors to borrow more money at lower rates to meet the company’s ambitious targets.

The proposal – unusual for a Silicon Valley tech giant – would theoretically reduce OpenAI’s borrowing costs since the government would absorb losses if the company defaulted.

Wow. So once again a supposedly non-profit company that wants to issue a one-trillion-dollar-plus IPO suddenly thinks that taxpayers should eat its losses and missteps for the “greater good” or whatever.

No wonder Sam came out with such a fierce statement trying to cover his company’s/non-profit’s/grifting enterprise’s ass.

I only called it all three to cover the bases.

Meanwhile, at an AI conference in London, the CEO of Nvidia, who desperately needs the “panic” building of AI data centers in the United States to justify the outrageous valuations placed on his company, made this statement during an interview with the Financial Times:

Between Altman and Huang, it almost sounds as if they doth protest too much, or that they know they will be bailed out and just want shareholders to prepare for the inevitable.

The Real Big Buyer of Data Centers and AI

Before I conclude this portion of the article, which will create even more disillusionment and disgust among my fellow dying breed, the laissez-faire capitalists, I must warn that some of this might seem tongue in cheek and some of it tin foil, but in reality all of it is probable.

The largest buyer of Artificial Intelligence products for the last fifteen years has not been Meta, Google, or any of the other large companies fronting as the “leaders” in AI. The truth is that the US taxpayer has always been the largest purchaser of data centers and the storage needed behind them.

How much so? Let’s take a look at what is now technically an antiquated data center by modern standards.

From Forbes, July 26, 2013:

NSA’s Huge Utah Datacenter: How Much Of Your Data Will It Store? Experts Disagree…

But, according to NetApp’s Larry Freeman, even that’s an over-estimate, assuming it’s not all storage:

The most telling statistic is the 65 Megawatt substation, which will limit the amount of racks. … Assuming that 40% of the 25,000 sq ft floor space in each of the 4 data halls would be used to house storage, 2,500 storage racks could be housed on a single floor (with accommodations for front and rear service areas). Each rack could contain about 450 high capacity 4TB HDDs which would mean that 1,125,000 disk drives could be housed on a single data center floor, with 4.5 Exabytes of raw storage capacity.

So let’s take the lower-end estimate in this article, from 2013, of 4.5 exabytes. To provide perspective, that’s 4,718,592 terabytes (TB), or the equivalent of 4.7 million one-terabyte solid-state drives, pretty much the standard in today’s modern laptop computers.
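For anyone who wants to check that arithmetic, here is a minimal sketch in Python (the language choice and variable names are mine, not the article’s), assuming binary prefixes, which is where the 4,718,592 figure comes from:

```python
# Minimal sanity check of the conversion above. Assumption: binary prefixes
# (1 exabyte = 1024 * 1024 terabytes), which yields the 4,718,592 figure;
# decimal prefixes are shown for comparison.
RAW_EB = 4.5

binary_tb = RAW_EB * 1024 ** 2   # 4,718,592 TB, the figure quoted above
decimal_tb = RAW_EB * 1000 ** 2  # 4,500,000 TB with decimal prefixes

print(f"{binary_tb:,.0f} TB (binary) vs {decimal_tb:,.0f} TB (decimal)")
```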

Except that estimate is from 2013, so one has no clue what updated hardware, servers, and other equipment might now be in the data center the NSA and DARPA control in Utah.

Now let’s analyze what the private American industry titans of this era are proposing and think about the worst case scenario outcome.

The Future Build Out is Now

From Data Center Magazine:

According to Exploding Topics, around 147 zettabytes of data will be generated in 2024, which will rise to 181 zettabytes in 2025. Currently, videos account for more than half of internet traffic. With these facts in mind, it is essential for the data centre industry to find new ways to innovate to meet rising customer demand for data, whilst keeping emissions down.

The next question is obviously, what is a zettabyte?

Let’s keep this discussion simple, like your humble author, and do the math, something that is difficult for most investors and Americans alike.

1 Zettabyte = 1 billion Terabytes

Think about that in the context of the discussion in the previous section. Just twelve years ago, the NSA constructed a data center with an estimated 4,718,592 TB of raw storage, which against the 181 zettabytes of data projected to be generated worldwide in 2025 amounts to a mere 0.0045 zettabytes.
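A rough sketch of that comparison, assuming decimal prefixes throughout (1 zettabyte = 1,000 exabytes); the slight mismatch with the terabyte figure earlier is simply binary versus decimal prefixes:

```python
# Rough check of the comparison above. Assumption: decimal prefixes throughout
# (1 zettabyte = 1,000 exabytes = 1 billion terabytes).
NSA_EB = 4.5            # estimated raw capacity of the Utah facility (2013)
GLOBAL_ZB_2025 = 181    # projected data generated worldwide in 2025

nsa_zb = NSA_EB / 1000              # ~0.0045 ZB
share = nsa_zb / GLOBAL_ZB_2025     # ~0.0025% of a single year's data
print(f"NSA Utah: ~{nsa_zb:.4f} ZB, about {share:.6%} of 181 ZB")
```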

Using Moore’s Law (the observation that the number of transistors on computer chips doubles approximately every two years) as a base theory, this means that unless the National Security Agency has updated its racks upon racks of computing power every two years, the NSA data center in Utah is now six full generations behind.
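The “six generations” figure is simple arithmetic; here is a back-of-the-envelope sketch under the stated assumptions (a 2013 build and one doubling roughly every two years):

```python
# Back-of-the-envelope Moore's Law math behind the "six generations" point.
# Assumptions: facility built in 2013, one doubling roughly every two years.
BUILT_YEAR = 2013
CURRENT_YEAR = 2025
YEARS_PER_DOUBLING = 2

generations = (CURRENT_YEAR - BUILT_YEAR) // YEARS_PER_DOUBLING  # 6 doublings
density_gap = 2 ** generations                                   # about 64x
print(f"{generations} generations behind, roughly a {density_gap}x transistor-density gap")
```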

Obviously classified updates have occurred, but what if the NSA and a snooping, all-seeing government wanted their computing power to reign supreme not only over the entire world, but also to control the US population?

The Buyer of Last Resort

Perhaps this is why Sam Altman, Jensen Huang, and many other speculators within the AI community might express concern about private investment and speculation fleeing the massive artificial intelligence expansion, yet do not seem all that worried should the speculative bubble actually burst.

The United States Department of Defense and the National Security Agency would buy up the data and storage centers (even if leased) and the additional computational power to supplement the government’s capabilities in a heartbeat.

But why would the government be all that interested in a Skynet-like command-and-control system for such a massive network if the threat of a truly global conflict is so low at this time?

Maybe, just maybe, in their eyes the threat is from within, Constitution be damned.
