This Boom Is Not The Dot-Com Rerun
Chips, content, standards, and the workflows that outlive the hype
I am going to leave the Netflix–WBD–Paramount knife fight alone for a minute and stay focused on the stack. Last week was about the control layer starting to show its face in interconnects, permissions, and logs. This week the lens pulls back a notch. The stories are about the boom itself and who gets to shape it: trillion dollar incumbents, trade policy, regulators, standards bodies, and the production stacks brands are actually using.
The pattern is simple. This is not a garage startup gold rush. It is a capital intensive, infrastructure heavy boom where control sits with whoever owns the rails: chips, training data, identity, and workflow. The risk is not that everything crashes to zero. The risk is that the stack hardens around a very small set of defaults while everyone is still arguing about whether we are in a bubble.
So this week looks at four layers of the same machine. The macro story about why this boom does not map cleanly to the dot-com era. The chip story about H200 exports and how compute policy now works like a tariff on capability. The rights story about who really owns the training set and how that anxiety shows up in creator tools. And the agent story about standards and identity, where the rules for what these systems are allowed to touch are starting to move out of slideware and into code.
This boom is not the dot-com rerun
Source: The New York Times, December 9, 2025
The piece walks through the obvious comparison everyone keeps making: AI in the mid-2020s versus the internet in the late 1990s. The core argument is that the biggest difference is who is holding the bag. The dot-com era was full of thinly capitalized startups whose entire existence depended on public markets. The current AI run-up is dominated by cash-rich platforms like Microsoft, Google, Meta, Amazon, and Nvidia, which can pour tens of billions into data centers without threatening their core businesses.
Why it matters
This is a boom built on durable infrastructure spend, not on banner ads and sock puppet IPOs. The likely outcome is not “everything crashes to zero.” It is more subtle: a small number of integrated players own the hardware, the platforms, and the default interfaces, while everyone else rides on top of their capex curve. If you are in media or creative work, the risk is not that AI disappears. It is that it hardens into a set of defaults you did not design, priced around their cost of capital rather than your margin.
Trump turns export controls into a revenue share
Source: The Wall Street Journal, December 9, 2025
The administration is letting Nvidia sell its H200 AI chips into China again, in exchange for a 25 percent cut of the revenue and a vetting regime for “approved customers.” The newer Blackwell parts stay off the table, but the key is the precedent. Export controls that were framed as hard national security lines are being reworked as a negotiated revenue share on slightly older hardware. At the same time, DOJ is bragging about “Operation Gatekeeper,” a smuggling case that shows exactly how far buyers were already going to get those chips anyway.
Why it matters
This is industrial policy by way of platform terms. The US is trying to keep a performance gap at the very top while still taxing Chinese demand and feeding Nvidia’s growth story. Beijing is simultaneously signaling it may limit H200 imports to force local alternatives. For everyone else, the takeaway is that “where the compute lives” is now explicitly a geopolitical variable. If your roadmap assumes cheap, unrestricted access to top end GPUs in any region, that is no longer a neutral assumption.
Europe asks who really owns the training set
Source: TechCrunch, December 9, 2025
Brussels has opened a fresh antitrust investigation into whether Google has been using publisher content and YouTube videos to train Gemini and power AI search summaries on terms that distort competition. Regulators are worried about “Google Zero” scenarios where AI answers replace outbound traffic, while Google keeps privileged training access to the very content its summaries are undercutting. Creators and publishers, meanwhile, have no symmetrical way to opt out, negotiate, or access the same data for their own models.
Why it matters
This is not just about one company’s AI feature. It is about whether platforms can convert the open web into proprietary training fuel and answer surfaces while everyone else gets rate-limited and paywalled. For news, entertainment, and UGC platforms, the question is whether AI summaries become a new carriage system with no carriage fees. If you care about where audiences come from and who controls their path to your work, this is no longer a theoretical policy fight. It is the next distribution deal, negotiated at the level of models and training sets.
Standards for agents, and an ID card to match
Source: The Wall Street Journal, December 9, 2025
OpenAI, Anthropic, Block, and others are co-founding the Agentic AI Foundation under the Linux Foundation, donating standards like Model Context Protocol, goose, and AGENTS.md into a neutral home for agent infrastructure. In parallel, Saviynt just raised 700 million dollars at a reported 3 billion dollar valuation to manage “human and nonhuman” identities and access across enterprise systems, including AI agents.
Why it matters
The agent story is moving from speculative demos to plumbing. MCP and its cousins are an attempt to make "how agents talk to tools and each other" a shared standard instead of a proprietary black box. Saviynt is a bet that every one of those agents will eventually need a first-class identity entry in the same way a new employee does. Put them together and you can see the outline of an agent stack where standards decide how agents move and identity platforms decide what they are allowed to touch. If you are building media or data products, you will either plug into that stack or compete with it. There will not be a middle ground.
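To make the "shared rails, centralized gates" idea concrete: MCP rides on JSON-RPC 2.0, and an agent invokes a tool with a `tools/call` request naming the tool and its arguments. Here is a minimal Python sketch of that request shape plus a toy allow-list check standing in for the identity layer. The tool name, agent IDs, and permission table are hypothetical illustrations, not Saviynt's API or any production policy engine.

```python
import json

# MCP speaks JSON-RPC 2.0: an agent asks a server to run a tool via a
# "tools/call" request. The envelope shape follows the MCP spec; the
# tool name and arguments below are made up for illustration.
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_archive",
        "arguments": {"query": "Q4 footage rights"},
    },
}

# The identity layer's job, in miniature: before the call executes,
# check the agent's identity against the tools it is entitled to touch.
# This toy table stands in for a real identity-governance platform.
AGENT_PERMISSIONS = {
    "research-agent": {"search_archive"},
    "billing-agent": {"create_invoice"},
}

def is_allowed(agent_id: str, request: dict) -> bool:
    """Return True if this agent identity may call the requested tool."""
    tool = request["params"]["name"]
    return tool in AGENT_PERMISSIONS.get(agent_id, set())

print(json.dumps(call_request))                     # the shared rail
print(is_allowed("research-agent", call_request))   # the gate opens
print(is_allowed("billing-agent", call_request))    # the gate stays shut
```

The split is the point: the request format is the open standard everyone shares, while the allow-list is the proprietary gate someone operates. Whoever runs the second half controls what the first half can actually do.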
Creative workflows catch up to the infrastructure story
Source: Business Insider / Artlist, December 8, 2025
Artlist’s AI Trend Report 2026, summarized in a sponsored Business Insider piece, reads like a field report from inside the boom. Of surveyed creators, 87 percent are already using AI tools for video. Production is no longer a straight line from brief to edit; it is a circular loop of prompting, generating, and refining. The bottleneck is not render time, it is decision time. At the same time, creators are shifting from “can I make this” to “am I legally allowed to use this,” with 63 percent prioritizing clear rights and commercial safety over raw production quality.
Why it matters
This is the practical edge of everything above. When Nvidia, Google, and Microsoft fund the boom, and Brussels fights over training data, the outcome shows up as tools that let small teams produce at enterprise volume. The constraint becomes taste, concept, and legal clarity, not access to cameras or edit bays. For brands and media companies, the risk is assuming this is about side projects and social clips. In reality, AI video stacks like Artlist’s are quietly becoming the default for mid-tier work. The gap will open up between organizations that treat this as a core production system and those still writing one-off “AI innovation” briefs.
Closing note: watching the boom at system level
All of these stories sit on the same axis. A boom financed by incumbents, moderated by export deals, probed by Brussels, written into open standards, and absorbed into creator tooling. The question is not whether the chart is going up too fast. It is which parts of the stack will still be there after the line flattens and who quietly set the defaults while everyone was arguing about bubbles.
At the hardware layer, the H200 decision shows how easily export controls can turn into a price schedule. At the data layer, the EU probe makes it clear that “training on the open web” is no longer an unexamined assumption. At the agent layer, the new foundation and the identity money tell you there will be shared rails for how agents move and centralized gates for what they are allowed to touch. At the workflow edge, the Artlist report is a reminder that most teams will meet all of this through tools that promise “rights safe” production rather than through policy documents.
If you work in media or brands, the risk is not that AI disappears in a crash. The risk is that the infrastructure hardens around a small number of clouds, models, and agent standards, while your teams are still treating each new tool as an experiment. Over the next year, the practical questions look simple. Where does your compute actually live, and how exposed is that to policy swings. Which platforms own the training data and logs you rely on. Which agent protocols and identity systems are you implicitly adopting when you plug into “open” standards. And is your everyday production stack giving you leverage, or quietly handing it away.
This boom will end like every other boom: some valuations will deflate, some narratives will fade, some investors will move on. The chips will still be in racks. The models will still be serving traffic. The agent standards and identity graphs will still be stitched into everything. The work now is to pay attention to those layers while they are still technically in play, rather than waking up in a few cycles and realizing the real decisions were made at the level of rails, not apps.