NAB 2026 was a fight about whether you tear the supply chain down or abstract over it. Three answers are competing, and the most honest one isn’t on the show floor.
I spent the week at NAB 2026 expecting to hear operators admit they’re tearing down what they built. What I heard was more interesting. Most of them aren’t tearing anything down. They’re abstracting over it, hoping the layer above will do the work the layer below was supposed to.
The diagnosis is widely shared. An operator I sat with on Saturday, who’s been inside the supply-chain rebuild conversation across multiple tier-one media companies, walked me through it without flinching. The top media companies spent the last five to eight years building internal supply chains that worked when the underlying technology was stable. AI made the target volatile. The original engineers have moved on. What’s left is Frankensteined. He’d just come from a meeting where a sports league said the quiet part out loud: we’ve got to tear it down, it doesn’t work at all.
The deeper claim he made stuck with me. Best-of-breed is dying not for snobbish reasons but because AI metabolizes glue. If your components can’t share metadata throughout the pipeline, AI has nothing to act on. A welded-together stack is the only way to get actionable intelligence end-to-end. He pointed at Akta, the Google spinout now powering CBS, Nexstar, Fox TV Stations, Discovery Latam, and TelevisaUnivision’s ViX, as the example of what that looks like when it ships.
The vendors told the same story in public language. AWS used its NAB platform to push agentic AI as the bedrock of modern media operations, with multi-step workflows the existing supply chain was never designed to feed. The chain was built for human consumers. AI exposed the seams. Avid and Google Cloud announced a multi-year partnership during the show to embed Vertex and Gemini directly into Media Composer with a unified data layer underneath. Adobe is moving in the same direction. The bet from the AI-native camp is that the seams are now structural and the only honest fix is a tightly coupled stack where metadata flows end-to-end because it never had to leave the building.
The most architecturally serious answer is the one getting the least press. The EBU, AMWA, BBC, and Linux Foundation are quietly building the open-standards equivalent: MXL for in-memory exchange, DMF for facility orchestration, TAMS for time-addressable storage. Frame-as-URL, but as a public utility rather than a product. It’s running in production at the BBC. Almost nobody on the floor was talking about it.
The proprietary version of the same idea showed up in a fundraising meeting on Sunday morning. Peter Bruggink and the Media-Anywhere team demoed AirFrame, scrubbing through ProRes media stored in S3 in Ohio from a laptop on conference Wi-Fi, with frame-level caching, forensic watermarks burned in per logged-in user, and no proxy. Premiere thinks the files are local. The player asks for thirty frames at a time over async HTTP/2 and the server reads them in parallel from object storage. They’ve already signed a top-three US sports broadcaster. The most interesting customer in their pipeline is a financial institution, because the bank’s compliance posture demands zero-trust media handling that the media industry has never enforced on itself. Their architecture aligns with eight of the ten MovieLabs 2030 principles, the studios’ own published roadmap for where the production stack needs to go. The shift they’re betting on is the right one, and it’s the same one TAMS is making in the open-source direction. The file is no longer the unit of work. The recipe is: an immutable description of what plays, when, from where, and how, one that humans and AI can both author and that every output port consumes in its native format. The recipe travels for free.
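The fetch pattern described above is easy to sketch. What follows is a hedged illustration only, not AirFrame’s actual API: the names (`fetch_frame`, `fetch_window`, `FRAME_WINDOW`) and the toy in-memory bucket are my own inventions, standing in for async HTTP/2 GETs against object storage.

```python
import asyncio

FRAME_WINDOW = 30  # frames requested per round trip, per the demo description

# Stand-in for an S3-style object store: key -> frame bytes. A real system
# would hold ProRes frame data; a few bytes per key is enough to show the shape.
bucket = {f"clip/frame_{i:06d}": bytes([i % 256]) * 4 for i in range(120)}

async def fetch_frame(key: str) -> bytes:
    # In production this would be an async HTTP/2 GET against object storage;
    # here a tiny sleep simulates per-object latency.
    await asyncio.sleep(0.001)
    return bucket[key]

async def fetch_window(start: int) -> list[bytes]:
    # Issue all thirty requests at once. Total wall time is roughly one
    # object's latency rather than thirty, which is the whole point of
    # reading the window in parallel.
    keys = [f"clip/frame_{i:06d}" for i in range(start, start + FRAME_WINDOW)]
    return await asyncio.gather(*(fetch_frame(k) for k in keys))

frames = asyncio.run(fetch_window(0))
print(len(frames))  # 30
```

The design choice worth noticing is that the unit of request is the frame, not the file: the player never needs the whole asset local, only the window under the playhead.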
The third answer is what most operators are actually doing, and it’s the least romantic. Backlight’s iconik data shows their customer deployments holding steady at 65% cloud and 35% on-prem. Net Insight is now selling Nimbra Live Intelligence with the explicit promise that it works without rip-and-replace. Wowza and Cloudflare demoed Media over QUIC framed as a way to apply intelligence to streams without rearchitecting the pipeline. The market is rewarding the abstraction story.
Sam Phillips at AMC said it on stage at the AWS Theater. Spending three to six months evaluating vendors and choosing one, only to watch the methods go stale before the contract gets signed, isn’t a place his company can afford to be. His answer was PRISM, an overlay on AWS Media Lake plus TwelveLabs Marengo plus Bedrock, abstracted enough that he didn’t have to wait for the rest of the supply chain to get organized. That sentence will be quoted by every CTO with a budget meeting next quarter.
What nobody at NAB wanted to call a re-architecture is the one actually happening to live. I spent a long evening with a technologist who’s worked inside more than one of the major streamers and broadcasters over the last several years. His diagnosis: live has been bolted onto VOD-shaped organizations, and it doesn’t work, because live is operationally and culturally different. He gave me a small example that I keep coming back to. At one company, you’d call a colleague, say you needed to jump on a video call, and they’d send you a calendar invite. At another, you’d call and they’d just join. The first is a VOD mentality. The second is a live mentality. The difference shows up in the operations bridge during an event. He had specific stories about how it shows up, like vendor scoring practices that exist on the VOD side but not on the live side, or regional rebalancing for a major event that used VOD viewing data as the proxy for live demand, with predictably uneven results. You can build the world’s best CDN backbone and still ship a bad show, because the cultural muscle for live is something you grow, not something you provision.
Netflix’s own Tech Blog has been making the same argument out loud. The April 2026 post described how engineers who built Netflix’s live pipeline were also the ones operating it, with no operational layer to absorb the load, and how Netflix had to stand up a Live Command Center, dedicated Site Reliability and Ops teams, and a Technical Live Management group to run the shows. It’s the most candid disclosure any major streamer has made about what live actually requires, and almost no one in the trade press has picked it up. The cultural rebuild is real. It just isn’t being announced.
The new generative AI studios are running the same play in their corner. Secret Level, where I’m part of the team, is explicitly refusing SaaS in favor of co-production. The economics only pencil out if the studio owns its tooling and its IP. The capital problem is real. We aren’t a tech company, we aren’t a film fund, we aren’t an agency, and the institutional VCs that should fund us have theses that can’t accommodate the hybrid shape. So we’re inventing new financing structures (celebrity syndicates, strategic anchor checks, SAFEs instead of priced rounds) that look more like 1920s Hollywood financing than a 2020s software round. The moat isn’t the model. It’s the production pipeline, the rights chain, the governance layer that lets a studio defend reps and warranties on generative output, and the relationships with brand clients who are pulling us up the value chain because the agencies can’t help with narrative content. Autodesk’s 2024 acquisition of Wonder Dynamics was the same pattern from the other direction. Best-of-breed AI tools are collapsing into platform suites because horizontal tools don’t survive contact with production reality.
The thread that connects all of this isn’t AI. It’s the gap between the people closest to the operational reality and the public conversation about what AI is. The operators have already accepted that this is a re-architecture. They’re choosing their answer (rebuild, open standards, or abstract) based on what their balance sheet will allow. The public conversation is still treating AI as a feature you bolt onto a stack you already own. The longer that gap persists, the more expensive the eventual reconciliation gets, because the cost of getting it wrong compounds in metadata you didn’t capture, contracts you didn’t write, and rights chains you can’t reconstruct.
I called the show floor a lagging indicator before I came out here. A week in, I’d stand by it harder. Every conversation that mattered in this piece happened after the badges came off. The Saturday operator. The Sunday morning demo. The long evening with the technologist. The studio founder’s table. The quietest rooms generated the loudest signal, and I think that’s true every year. It’s just easier to notice when the gap between what gets announced on stage and what gets decided in private is this wide. The vendors who paid for the booths weren’t wrong about what they were showing. They were just one cycle behind the people I was drinking with.
The other thing the night does, which doesn’t show up in any thesis, is remind you why this work is still worth doing. My friend Kreig comes home from NAB every year having forgotten, in the months between, how much it recharges him. Thirty years in and the show still does it for me, not because of what’s on the floor, but because of who’s in the rooms after. People who’ve been solving the same impossible problems for long enough that they’ve stopped expecting solutions and started enjoying the company. Walking out of one of those late conversations and realizing you’re still curious, still useful, still part of something. That’s the part of NAB the show floor cannot manufacture and cannot replace. It’s also the part that makes the architectural fight worth caring about in the first place. The infrastructure matters because the people building it matter, and the only way to keep both honest is to keep showing up to the rooms where the badges don’t.
The most consequential architectural decisions of the next five years aren’t being made on the NAB show floor. They’re being made in sandboxes stood up to validate emerging ideas. They’re being made in tier-three sports automation pilots, where the economics force automation of what tier-one budgets still pay humans to handle and operators run ten matches at a time. They’re being made in closed-door operator forums whose attendance lists the show floor never sees. And they’re being made in hotel bars at one in the morning by people who already know each other’s tells.
Three answers are competing. The companies betting on one will be wrong about which one wins. The companies betting on none will be gone. But the people in those rooms will still be in those rooms next year, and the year after that, working on the same problems from new angles. That’s the actual continuity in this industry. Not the platforms. Not the architectures. The people who keep showing up.
AI isn’t a feature. It’s a re-architecture. The companies pretending otherwise aren’t saving money. They’re financing the next teardown. The people who saw this coming were already in the room.