The Sidebar is where I share the tangents. Reflections on what I’m watching, what I’m reading, or what I’m working through. This time, it’s a piece of investigative journalism that landed this week and keeps pulling me back to a 19th-century railroad financier.
In 1869, Jay Gould cornered the gold market by telling President Grant’s brother-in-law one thing and his own trading partners another. When the scheme collapsed on what became known as Black Friday, Gould didn’t apologize. He reframed. The broken promises weren’t deception, he argued. They were reasonable adaptations to a rapidly changing environment.
I thought about Gould this morning while reading Ronan Farrow and Andrew Marantz’s investigation of Sam Altman in The New Yorker. The details are different. The mechanics are not. The investigation is built on interviews with more than a hundred insiders and two previously undisclosed document sets. The first is a 70-page memo compiled by former Chief Scientist Ilya Sutskever. The second is over 200 pages of internal notes written by Dario Amodei during his time at OpenAI. Both paint a picture of a leader who tells different stories to different rooms and treats the gap between them as operational friction rather than a character problem.
When asked to acknowledge a pattern of deception after being fired, Altman reportedly told the board, “I can’t change my nature.” One director interpreted that as an admission, not a defense. Multiple interviewees independently used the phrase “antisocial personality.” A Microsoft executive reportedly said there’s a real, if small, chance Altman will be remembered alongside Bernie Madoff and Sam Bankman-Fried.
That comparison is probably wrong, but it’s wrong in an interesting way. The Farrow piece doesn’t document fraud. It documents something more structurally dangerous.
The Cycle, in Fast-Forward
Tim Wu’s The Master Switch argues that every major communications technology follows the same arc. Open invention, commercial exploitation, consolidation under a single corporate entity, then control. It happened with telephony under AT&T, with radio under RCA, with television under the networks. The pattern is so consistent that the only variable is how long it takes and whether anyone intervenes before the window closes.
OpenAI is running Wu’s cycle at startup speed. Founded as a nonprofit to prevent AI from being monopolized. Converted to a capped-profit entity. Then uncapped. Then lobbying against the very regulations its CEO publicly endorsed before Congress. The name itself, “Open” AI, is the residue of a structural promise that has been quietly dissolved at every stage.
The stakes here dwarf those of the earlier turns of this pattern. Theodore Vail consolidated telephony but delivered universal service in the process. David Sarnoff locked up radio and built the country a broadcast infrastructure along the way. Those were real tradeoffs with real outcomes on both sides of the ledger. Altman hasn’t delivered AGI, and the economics of the attempt are increasingly shaky. The Information reported this week that CFO Sarah Friar has told colleagues she doesn’t believe the company is ready for a public offering. When the person managing the books isn’t sure the numbers work, the trust question stops being abstract. It becomes a capital structure question.
The Factory Owner Who Says He’s a Worker
Brian Merchant’s Blood in the Machine draws a line from the English factory owners of 1811 to today’s tech moguls. The move is always the same. Present the adoption of automated machinery as an inevitability driven by progress rather than a choice driven by the desire to capture value. The factory owners told their workers that the machines were coming regardless, and that it was better to have reasonable men in charge of the transition. Altman tells the public the same thing at civilizational scale.
The Farrow investigation adds a layer that Merchant would recognize immediately: safety theater. The alignment team, publicly promised 20% of OpenAI’s compute, was apparently operating on roughly 1–2% of it, mostly on the company’s oldest hardware. When journalists asked to speak with someone working on existential safety, an OpenAI spokesperson responded with a question of their own. “What do you mean by ‘existential safety’? That’s not a thing.”
A company navigating genuine tension between progress and safety doesn’t say that. A company that made the choice a long time ago and is now managing the optics does.
Why This Matters to Infrastructure
The temptation is to treat the Altman story as personality-driven noise on top of a compute buildout that proceeds regardless of who’s running it. The pipes don’t care about character flaws. The models get trained. The data centers get built. The inference endpoints come online.
But that framing misses the structural point. Concentrated control over a communications platform creates a single point of failure for the entire ecosystem built on top of it. Every content company, every distribution platform, every media workflow that touches AI is building dependency on a layer whose governance is, according to the people closest to it, unstable. Wu’s cycle matters precisely because the monopolist’s personality is irrelevant. The structure is the problem.
The investigation also surfaced a detail that should land differently with this audience. Around 2018, OpenAI’s leadership seriously discussed something they called the “National Plan.” The idea was to have major governments, including China and Russia, bid competitively for access to AI technology. The internal framing was blunt. The goal was to create a prisoner’s dilemma where every country would have to fund OpenAI. The plan was shelved only because employees threatened to quit.
That reveals an organization whose leadership views geopolitical leverage as a business strategy, held in check not by governance or principle but by the willingness of individual employees to walk away.
The Window
Gould’s contemporaries understood what he was doing. Newspapers wrote about it. Competitors complained. Regulators investigated. None of it mattered until the system itself broke badly enough to force structural change. The investigations were documents of a window closing, not interventions that kept it open.
The Farrow piece reads the same way. It is meticulous, sourced to an extraordinary degree, and likely to change nothing in the short term. Altman will remain CEO. The IPO preparations will continue. The compute buildout will proceed. The piece will be discussed for a week and then absorbed into the background noise of an industry that has already decided the technology matters more than the person controlling it.
And Wu’s history tells us that the moments when that felt most obviously true were exactly the moments when a person’s control became permanent.
The window closes when nobody’s watching. Right now, almost everybody is watching the technology instead.
Further reading: Gary Marcus’s response to the investigation. Peter Kafka’s analysis at Business Insider.