Open CloseAI
OpenAI is this company that always puzzles me. The more we know about it, the more puzzling it becomes. I am not rooting against it. I am just doing the math.
I have dedicated several articles to this company already, mainly pointing out that the math doesn't add up, that the developer ecosystem isn't ready, or that the copyright lawsuits revealed its structural opacity and immature legal thinking, all from a safe distance.
This company has been the darling of the capital markets for the past couple of years, and is expected to pursue a lavish IPO later this year. Maybe many of the puzzles will be solved when the S-1 is filed. But until then, I have this feeling that the more we know about it, the more puzzling it becomes.
Let me tell you why I think this way.
But keep in mind that I am only speculating for fun. It is your responsibility to distinguish reliable sources from not-so-reliable ones, and sound logic from crappy logic.
Part 1: The Architecture Problem Nobody Solved
Now, if you are like me, you don't pay much attention to the news these days. It's a messy world to start with, and that messiness is multi-dimensional. So you listen to classical music, watch Project Hail Mary, and write position papers that need an endorsement to upload to arXiv. Yet the universe will still notify you that Musk vs. OpenAI is ongoing in Oakland, even if you shout back "so what"!
Everybody in the solar system knows OpenAI started as a nonprofit. Then it became a capped-profit company. It has been trying to become a fully for-profit company. The transition has dragged on for years, it is still not done, and there are consequences if they can't finish it yesterday. It is a structural problem with compounding interest.
In May 2023, when the terms of the Microsoft deal became visible, I wrote the following on Weibo: "OpenAI accepting Microsoft's conditions revealed OpenAI's extreme fragility. Either they were worried about the future, or they were desperately short of money, or both. Rumor: lost $540 million last year. Conclusion unchanged: don't casually touch large models. Bottomless pit. Period."
Every round of funding since has attached itself to this unresolved architecture. Until SoftBank's $40 billion came with a condition: complete the for-profit restructuring by year end, or the check gets cut to $20 billion.
Well, there was the acquisition offer from Elon. OpenAI rejected it, obviously. But here is what people missed: to reject it, Altman needed to show investors a better outcome. A better outcome means a higher valuation. A higher valuation makes the nonprofit-to-for-profit conversion more legally complicated, because the gap between the nonprofit's original assets and the for-profit's current valuation keeps growing, and somebody has to account for that gap.


Elon's offer was not really an offer. It was a pressure move. OpenAI had to reject Elon, and in doing so it had to justify a higher number. The higher the number, the harder the architecture problem becomes.
OpenAI has been raising money faster than it has been solving its own governance. That is a race with a bad finish line. The finish line is currently a federal courthouse in Oakland, where Sam's old friend Elon is reminding everyone that this structural issue is not yet solved.
Two men who emailed each other about converting a nonprofit to for-profit are now in a courtroom arguing who cares more about humanity.
Part 2: The Developer Problem
OpenAI came out of a research lab. That is its greatest strength and its most persistent blind spot.
The models are impressive and the research is real. But the developer experience has been, for lack of a more technical term, a recurring headache.
I once watched someone demo a third-party health app that everyone was excited about. I rebuilt the core functionality in five minutes using Apple Shortcuts and the pre-installed ChatGPT app on the phone, without writing a line of code. The issue goes beyond burning tokens unnecessarily: the person demoing the health app had no idea how many compliance requirements their "simple" app would trigger in a real enterprise environment.
They had no idea because ChatGPT only gave them coding capability. An enterprise-grade or serious consumer app requires much more than the code.
That gap, between what the API can do and what it takes to actually deploy it in production, is where OpenAI's developer community problem lives. Claude Code, Cursor, and Gemini have all been eating into this space. Not because their underlying models are necessarily better in every benchmark, but because the developer experience is more considered.
When ChatGPT started calling users by name during reasoning, including users who had never given it their name, the reaction was chilling. It was extremely creepy. That is a product-judgment problem, the kind that a strong developer community would have caught in beta.
The Oracle that doesn't know what time it is. The chatbot that knows your name before you told it. These are product decisions that someone made, or failed to make.
Part 3: The Big Sucker In The Room
Microsoft invested $13 billion into OpenAI. In return, they got Azure as the exclusive cloud provider, deep integration into Copilot, and the right to be notified about major deals.
Notified. Not consulted, not approved, only notified... One would think that $13B demands more respect than that.
When Sam was fired for the first time (hey, I didn't say there would be a second time), Microsoft was notified.
When NVIDIA invested in OpenAI, Microsoft found out one day before the announcement. One day. For a $10 billion deal, from their primary AI partner.
Microsoft handled everything gracefully. They did not say or do anything dramatic. But they quietly began preparing. Microsoft added OpenAI to its list of competitors in its 2024 and 2025 regulatory filings, as ChatGPT began competing directly with Bing and Microsoft products. It also sent an internal memo claiming 'AI is no longer optional'.
Then Microsoft announced a partnership with Anthropic.
I called Microsoft a 大冤种 when the terms of the deal were made known. 大冤种 means something like "the big sucker in the room." Because there was this one particular clause in the contract that allows OpenAI to define AGI, declare it has reached AGI, and then break away from the contract. It is something I've never seen or heard of in the valley, ever. I even predicted that OpenAI would play this card, soon. It is simply too delicious to let go.

But then, also because of the contract, I was quite sure they were far away from AGI. The logic is straightforward. On January 11, 2023, I posted this:
The Louisiana Purchase. America bought a massive piece of land from France for $15 million. Jefferson only wanted New Orleans. Sent a kid out for soy sauce, came back with a cow.
The Americans were thrilled. But the real question is: why did France sell? If they had seen how valuable it would be, would they have?
Fast forward to today. Everyone has seen a rough outline of the OpenAI-Microsoft terms. Same question. Why did Sam sell? If AGI is within reach, $29 billion is nothing.
Just speculation. No practical use. So, that's that.
So, to a certain extent, I can understand why MSFT let that clause slip in. They must have known AGI was far away. Still, from a legal-negotiation perspective, that was a bummer.
Though I'd say they are a sucker who is starting to figure out the game. That clause was removed from the renewed contract.
Part 4: The Scaling Problem, the GPT-5 Problem, and the Debt They Are Carrying
For years, the answer to every question about AI progress was scaling. More compute, more data, more parameters, better results. The line went up. The investment followed the line.
But the line is destined to stop going up. It might already have.
- GPT-5 was underwhelming, and in this context the disappointment matters enormously. GPT-5 was positioned as the dawn of AGI, if you remember; then it fell well short of AGI. OpenAI's entire financial story depends on the line continuing to go up, or on being the first to reach AGI.
- Sora, their video generation model, arrived with significant fanfare and has been discontinued. Image generation with ChatGPT Images 2.0 has been genuinely good, but does it have the potential of AGI? That's the question.
- Meanwhile, the API is priced to lose money. The price point on GPT-5 API access does not make sense as a profitable product. It makes sense as a land-grab, buying market share before the market consolidates. That is a legitimate strategy, only it will burn through money like a wildfire burns through the California hills.
HSBC estimated that to sustain operations through 2030, OpenAI needs to raise an additional $207 billion. Now, that's a lot of zeros.
It is burning cash. Its API is priced below cost. The flagship model disappointed. The video model is gone. And it needs $207 billion.
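To get a rough sense of how a number like that arises, here is a back-of-envelope sketch in Python. The only figure taken from this article is HSBC's $207 billion through 2030; the revenue, cost, and growth inputs below are hypothetical assumptions of mine, chosen purely to illustrate how an annual shortfall compounds when costs grow from a much larger base than revenue.

```python
# Back-of-envelope: how a yearly gap between cost and revenue compounds
# into a multi-hundred-billion-dollar funding need.
# All inputs except HSBC_FUNDING_NEED are hypothetical, for illustration only.

HSBC_FUNDING_NEED = 207e9  # HSBC's estimate through 2030 (from the article)

def cumulative_shortfall(revenue, cost, growth, years):
    """Sum the yearly cost-minus-revenue gap, with both sides growing."""
    total = 0.0
    for _ in range(years):
        total += max(cost - revenue, 0.0)
        revenue *= 1 + growth["revenue"]
        cost *= 1 + growth["cost"]
    return total

# Hypothetical inputs: $13B revenue vs. $40B cost in year one,
# revenue growing 60%/yr, cost growing 30%/yr, five years out.
gap = cumulative_shortfall(
    revenue=13e9,
    cost=40e9,
    growth={"revenue": 0.60, "cost": 0.30},
    years=5,
)
print(f"cumulative shortfall: ${gap / 1e9:.0f}B")   # ~$156B under these guesses
print(f"HSBC estimate:        ${HSBC_FUNDING_NEED / 1e9:.0f}B")
```

The exact output depends entirely on the made-up inputs; the point is that even with revenue growing twice as fast as costs, the shortfall lands in the same hundreds-of-billions neighborhood as HSBC's figure, because the cost base starts so much larger.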
This is what an on-paper valuation looks like when you subtract the story.
Part 5: Where Is the Next Story Coming From?
Sam Altman is genuinely good at one thing above all others: raising the next round before the current round becomes uncomfortable. When things get awkward, a larger number appears and the awkwardness gets buried under the announcement.
The $13 billion from Microsoft buried the profit questions. The SoftBank round delayed the reckoning with Musk. Each new number was bigger than the last, and each new number reset the news cycle.
But the next number has a problem.
The Middle East sovereign wealth funds, which have been a significant source of capital for the AI industry, are facing their own pressures. Geopolitical uncertainty, oil price volatility, regional instability. The money that was flowing freely is flowing more carefully now.
The SoftBank deal did require a successful for-profit conversion, which hasn't happened yet. And everything is under examination in Oakland.

I once said that if OpenAI signs a deal with AWS, it would indicate it's in trouble. Not because AWS is bad, but because AWS signing it means Microsoft is no longer enough. A new anchor relationship is needed to tell the story. A new partner means the old partner is watching from a different seat.
Now, that deal just happened.
SoftBank wants restructuring. Microsoft wants relevance. AWS wants its own story. Nvidia's investment can have another interpretation. Musk was in Oakland. The architecture is still unresolved.
Where is the next story coming from? I genuinely do not know. And I am not sure OpenAI does either.
Coda: The Name
OpenAI chose that name in 2015. The idea was that artificial general intelligence should benefit all of humanity, that research should be open, that the technology should not be captured by any single entity.
Eleven years later:
- The research is largely closed.
- The governance is unresolved.
- The investors are circling with conditions attached.
- The developer community is going elsewhere.
- The flagship model disappointed.
- The next $207 billion has no clear source.
The name is still Open. The behavior has been something else for a while now.
Maybe the most accurate version is somewhere in the middle. Not fully open, not fully closed. Just a company trying to hold together a very complicated set of promises to a very complicated set of stakeholders, while the technology underneath them keeps doing what it does, which is not always what the press release said it would do.
I have been watching this for years. I am not rooting against them. I am just doing the math.
The math is getting harder to make work.