HPA Wasn’t About AI Tools. It Was About Operating Systems.
What the 2026 Tech Retreat revealed about budgets, roadmaps, and the infrastructure layer.
Note: Due to a platform glitch on Friday, this post was not published as planned. This time we are posting it for free for all to enjoy! Apologies if you already got it in your inbox.
I had the pleasure of attending the 2026 HPA Tech Retreat in Rancho Mirage in February. Same desert. Same single-track format. Same mix of studio executives, VFX supervisors, production technologists, cloud vendors, and the standards people who still care about the plumbing.
What changed wasn’t the setting. It was the layer of the stack everyone was focused on.
As this publishes, Netflix has walked away from its bid for Warner Bros. Discovery while Paramount Skydance moves forward. I’m still formulating my thoughts on what that means for consolidation, leverage, and long-term control of IP. I’ll write about that separately. But the timing is not incidental.
Capital is reorganizing at the top of the industry. Production pipelines are being re-architected underneath it. HPA sat squarely in the seam between those two forces.
This wasn’t a week about AI demos. It was a week about operating systems.
HPA is one of the few places where the entire stack sits in the same room and listens to the same thing. No breakouts. No safe silos. If something shifts there, you can feel it.
I didn’t attend consistently during my Microsoft years. There was always another internal priority that took precedence, usually framed as more urgent than a week in the desert talking about standards, pipelines, and long-term architecture. In hindsight, that tells you something about how large organizations rank signal. Being back has been clarifying.
Distance from enterprise gravity changes how you evaluate rooms like this. You start to notice which conversations compound over time and which ones just generate slide decks. HPA still feels like one of the former. This year, the shift was obvious.
No one was seriously debating whether AI works. That conversation is long over. The tools are real. They are embedded. They are already changing team size, iteration cycles, and what gets greenlit. What changed was the tone. The questions moved from “Can it do this?” to “How do we run this?”
I wrote a more disciplined version of that for SMPTE: When AI Becomes an Operating Model. It focuses on the structural shift from experimentation to integration. From novelty to dependency. From demo to operating layer. If you have not read it, please go read it there. SMPTE is one of the few institutions still thinking about interoperability, lineage, authorship tracking, and the unglamorous mechanics that keep creative systems coherent. That work matters right now.
But the SMPTE piece is the field note. This is the longer cut. The published version had to stay tight and operational, and that was the right decision. It focused on integration, governance, and the shift from experimentation to operating model because that is the structural layer SMPTE exists to address. But the thinking did not stop when the piece went live. If anything, it accelerated.
This essay continues living inside that shift. It leans into the parts that are still forming, the parts that feel less varnished because they are still being tested against real budgets and real org charts. It traces the threads that only become visible once the event ends and you start mapping what was said against how the industry actually behaves.
This is not a recap of panels. It is a continuation of the signal beneath them, the pattern that becomes clearer once the applause fades and the work resumes.
The Retreat Wasn’t Debating AI. It Was Normalizing It.
If you showed up in Rancho Mirage expecting a philosophical cage match about whether AI belongs in media, you were a year late. No one was debating its legitimacy. The tools are already embedding across previs, concept development, editorial assist, roto, clean-up, synthetic elements, even dialogue drafting. The conversation has moved on. What I felt in the room was normalization.
Ed Ulbrich framed it in a way that cut through the noise: AI is probabilistic. Great storytelling is not. That distinction matters because it sets the boundary. Models predict patterns. Storytelling depends on judgment, taste, and the willingness to make choices that are not statistically safe. There was no real resistance to that framing. If anything, it clarified the responsibility. The machine generates options. The human decides what is good.
That tone tells you the industry is past the demo phase. The impressive clips were still there, of course. But the energy was different. The real questions were about placement and ownership. Where does this sit in the pipeline? Who signs off when the output is plausible but wrong? How do you log what was generated, what was modified, and what was ultimately approved? Those are deployment questions.
Hybrid production is already the default. AI is not replacing the set or the edit bay; it is threading through them. Synthetic plates feed live action. AI assists compositing. Writers experiment with generated alt lines and then rewrite them. Smaller teams move faster. None of this feels radical anymore. It feels practical.
That shift from “Look what this can do” to “How do we operate this responsibly” is the real maturity signal. When something becomes an operational dependency, you design around it differently. You think about governance, versioning, liability, insurance, cost structure, and vendor interoperability. You start asking how it integrates into an ecosystem rather than how flashy the output looks in isolation.
That is where the Retreat landed this year. Not in existential anxiety. Not in breathless enthusiasm. In operational thinking. And that, to me, is far more significant than any individual demo.
Media Is Behaving Like Software
Once you accept that AI is no longer a demo but a dependency, something else becomes visible. The production cycle itself is changing shape. At HPA, I heard less about “greenlight moments” and more about iteration loops. Concepts are moving faster from sketch to previs to rough cut. Teams are testing earlier. Revising continuously. Shipping versions internally. Learning then adjusting. That cadence feels much closer to software development than traditional film or television production.
And when cadence changes, structure follows. You start to see living standards instead of static documents. GitHub repos instead of PDFs. Version histories that matter. Toolchains that need to interoperate because no single vendor owns the entire stack. Automation is not targeting the headline creative decisions; it is targeting the glue work between systems. The ingest. The tagging. The reconciliation. The logging. The boring parts that used to live in human muscle memory.
That glue is where leverage hides. Conversations around metadata were not academic. They were practical. If assets are moving through generative systems, compositing tools, editorial environments, and cloud pipelines, the semantic layer has to hold. Lineage has to travel and consent has to be traceable. Credits and compensation cannot get lost in translation. Whether you call it Open Media Cataloging, a shared ontology, or just disciplined metadata hygiene, the backbone matters more when the surface accelerates.
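To make that tangible, here is a minimal sketch of what "lineage that travels" implies as a data structure, assuming nothing beyond the standard library. Every name in it (LineageEvent, AssetRecord, the tool strings) is my own illustration, not Open Media Cataloging or any published schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LineageEvent:
    """One step in an asset's history: who or what acted on it, and how."""
    tool: str       # e.g. a generative model, a compositing app
    action: str     # "generated", "modified", "approved"
    actor: str      # the human or system responsible for the step
    timestamp: str

@dataclass
class AssetRecord:
    """An asset plus the context that has to travel with it."""
    asset_id: str
    source_consent: list[str] = field(default_factory=list)  # rights/consent refs
    credits: list[str] = field(default_factory=list)
    lineage: list[LineageEvent] = field(default_factory=list)

    def record(self, tool: str, action: str, actor: str) -> None:
        """Append a lineage step instead of overwriting history."""
        self.lineage.append(LineageEvent(
            tool=tool, action=action, actor=actor,
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))

# A generated plate moves from a model into compositing and gets approved,
# and the record carries that history with it across systems.
plate = AssetRecord("plate_0042", source_consent=["license:stock-pack-7"])
plate.record(tool="gen-model-x", action="generated", actor="previs-team")
plate.record(tool="comp-suite", action="modified", actor="compositor")
plate.record(tool="review-app", action="approved", actor="vfx-supervisor")
```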
This is the structural signal I think the room was sending. When media starts to behave like software, iteration replaces milestones. Deployment replaces premiere. Toolchains matter as much as talent. And standards stop being back-office concerns and start becoming coordination mechanisms.
If media is now behaving like software, the implications are not creative; they are structural. The rest of this piece goes there.
AI Roadmaps Are Not Tool Lists
One thing I kept noticing at Rancho Mirage was how casually people talked about “adding AI” to their stack. As if it were another plug-in. Another vendor line item. That framing is already outdated.
The interesting conversations were not about which model had better output. They were about how work flows once the model is embedded. Who touches what. Who approves what. Where lineage lives. What breaks when a generated asset moves from previs into editorial and then into finishing. That is not a tooling question. It is an organizational one.
Let’s make this concrete.
Hybrid Is Permanent
There is a fantasy version of this transition where a production goes “all AI” and the old pipeline fades away. The magical one-touch button that makes a feature-length film from your prompt. That is not what I saw.
Every serious workflow is hybrid. Synthetic elements feeding live action. AI-assisted ideation feeding human direction. Automated roto cleaned by compositors. Dialogue drafts rewritten by writers. The machine accelerates. The human curates. That blend of work is the new baseline.
Which means your roadmap cannot just be about model access. It has to be about how hybrid systems coexist. How assets move between environments. How metadata travels. How approvals are logged. How you reconstruct decisions six months later when legal or insurance comes knocking. Infrastructure starts to matter more than model selection.
Models will improve and vendors will churn. What will not change is the need to move assets across systems without losing data and therefore context. If your roadmap is built around whichever vendor just had the best demo, you will look innovative in Q2 and brittle by Q4.
The deeper implication is uncomfortable. If your roadmap is vendor-driven instead of pipeline-driven, you are not early. You are under-architected. That does not hurt in the lab. It hurts when the scale hits.
The Efficiency Trap Is Budgetary, Not Technical
The other signal hiding in plain sight is speed. Everyone is faster. Iteration cycles are shorter. Tasks that used to bottleneck are now automated or assisted. On paper, that looks like pure gain. But speed rarely translates into slack. It translates into expectation.
If previs takes half the time, do you protect that time for creative refinement? Or do you double the shot count? If roto is faster, do you reduce hours? Or do you expand scope? If editorial gets AI assist, does the team breathe? Or does the delivery calendar tighten?
The trap is not technical. The tech works. The trap is budgetary.
Without explicit governance, compression shifts leverage upstream. The organization absorbs the speed and resets the baseline. Smaller teams do not necessarily mean lighter workloads. They often mean more throughput under the same headcount.
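A toy calculation makes the trap visible. The numbers are invented, but the mechanism is the one above: unallocated acceleration becomes throughput.

```python
# Hypothetical numbers: a previs pass that took 10 days now takes 5.
baseline_days_per_shot = 10
accelerated_days_per_shot = 5
team_capacity_days = 100  # same headcount, same schedule

shots_before = team_capacity_days // baseline_days_per_shot      # 10 shots
shots_after = team_capacity_days // accelerated_days_per_shot    # 20 shots

# Option 1: hold scope constant and bank the slack for refinement.
refinement_days = team_capacity_days - shots_before * accelerated_days_per_shot  # 50 days

# Option 2: let the baseline reset. Same people, double the expectation.
print(f"scope held: {shots_before} shots, {refinement_days} days for refinement")
print(f"baseline reset: {shots_after} shots, 0 days of slack")
```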
AI creates optionality. But optionality is neutral. It will land wherever the incentive structure pushes it. If you do not decide where the savings and the acceleration go, someone else will decide for you. That is the part we are not saying out loud enough. Roadmaps are not about which model to adopt. They are about who benefits when the system gets faster.
Metadata and Discoverability Are Now Revenue Functions
One of the quieter threads running through HPA, and honestly through most of the serious infrastructure conversations right now, is that discovery is no longer primarily a front-end design problem. It is becoming a semantic systems problem.
When audiences search through conversational interfaces instead of scrolling through carousels, the mechanics change. A user is no longer clicking into a neatly curated genre shelf. They are asking a question in natural language. “Show me political thrillers with morally ambiguous leads.” “Find me something like X, but shorter.” “Pull the scene where…”
That query has to resolve against structured meaning, not just tags. Semantic retrieval systems do not care about how beautiful your key art is. They care about how well your content maps into a knowledge graph. They care whether character relationships are encoded. Whether themes are normalized. Whether rights windows are machine-readable. Whether alternate versions are distinguishable at the data layer.
If that mapping is thin or inconsistent, the system cannot confidently retrieve you. And when conversational agents start synthesizing answers instead of presenting ten blue links, that gap compounds.
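Here is a minimal sketch of what resolving against structured meaning implies. The catalog and field names are invented, and a real system would use embeddings and a knowledge graph rather than exact theme matching, but the gating logic is the point: if the rights window or the themes are not machine-readable, the title never qualifies.

```python
from datetime import date

# Toy catalog records. In a real system these would come from a semantic
# layer built on a shared ontology; here they are hand-rolled.
catalog = [
    {
        "title": "The Quiet Ledger",
        "themes": ["political-thriller", "moral-ambiguity"],
        "runtime_min": 96,
        "license_window": (date(2025, 1, 1), date(2027, 1, 1)),
    },
    {
        "title": "Harbor Lights",
        "themes": ["romance"],
        "runtime_min": 118,
        "license_window": None,  # rights not machine-readable
    },
]

def resolve(themes: set[str], max_runtime: int, on: date) -> list[str]:
    """Return titles matching normalized themes AND verifiable rights."""
    results = []
    for item in catalog:
        window = item["license_window"]
        if window is None or not (window[0] <= on <= window[1]):
            continue  # no machine-readable window, no confident retrieval
        if themes & set(item["themes"]) and item["runtime_min"] <= max_runtime:
            results.append(item["title"])
    return results

# "Show me political thrillers with morally ambiguous leads, under two hours."
print(resolve({"political-thriller", "moral-ambiguity"}, 120, date(2026, 2, 1)))
```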
This is where efforts like MovieLabs’ Ontology for Media Creation start to feel less theoretical. A shared semantic backbone is not about tidying up taxonomies for archivists. It is about making sure assets can be understood across systems. Across platforms. Across models. Without every company inventing its own dialect.
Layer provenance on top of that. C2PA is often discussed as a trust initiative, and it is. But provenance metadata is also indexable and actionable. If downstream systems are asked to surface licensed, verifiable, commercially safe content, they will favor assets with clean lineage. That preference is not ideological. It is operational. Systems optimize for clarity.
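As a sketch of that operational preference, with invented weights and a stand-in for a real manifest check such as a C2PA chain validation:

```python
def verify_provenance(asset: dict) -> bool:
    """Stand-in for a real manifest validation, e.g. a C2PA chain check."""
    return asset.get("provenance_chain_valid", False)

def rank(assets: list[dict]) -> list[dict]:
    # Relevance drives the ranking, but clean lineage nudges upward:
    # a system asked for verifiable content favors what it can prove.
    return sorted(
        assets,
        key=lambda a: a["relevance"] + (0.2 if verify_provenance(a) else 0.0),
        reverse=True,
    )

assets = [
    {"id": "clip_a", "relevance": 0.90, "provenance_chain_valid": False},
    {"id": "clip_b", "relevance": 0.85, "provenance_chain_valid": True},
]
print([a["id"] for a in rank(assets)])  # clip_b surfaces first
```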
When I say metadata is now a revenue function, I do not mean it in a metaphorical sense. I mean that discoverability in conversational and agentic environments is directly correlated with how well your semantic layer is maintained. If your content cannot be resolved into meaning that machines understand, it will be under-represented in the interfaces that are replacing traditional browsing.
Historically, metadata lived in the archive team. Or in distribution ops. It was treated as compliance hygiene or library management. Now it sits much closer to growth and monetization. Because if retrieval is semantic and ranking is probabilistic, alignment at the knowledge layer determines who gets surfaced and who does not.
This is not glamorous work. It is taxonomy governance. It is cross-team alignment. It is agreeing on controlled vocabularies and resisting the urge to let every department define “genre” differently. It is investing in people who understand both content and data modeling.
But if you zoom out, the pattern is consistent with everything else we saw at HPA. As the surface layer accelerates and AI normalizes, the structural layers decide outcomes. And in a world where audiences increasingly talk to systems instead of navigating menus, being understood by machines is no longer optional. It is existential in a very practical, revenue-linked way.
Archive Is No Longer a Compliance Issue. It’s a Competitive Advantage.
There was an undercurrent at HPA that had nothing to do with the flashiest demos.
A few side conversations drifted toward preservation. Aging formats. Hardware that is harder to source. Facilities quietly decommissioning equipment that once felt permanent. Engineers who know how to recover certain media nearing retirement. None of this is new. We’ve been talking about format decay for years. What feels different now is how that conversation intersects with AI.
When archives were primarily about compliance or historical stewardship, it was easy to treat them as a downstream responsibility. Something to handle after delivery. Something to clean up later. If a title needed a remaster or a new territory version, you went back and pulled what you could. But AI systems do not work that way. They do not benefit from “we can probably find it.” They benefit from structured, digitized, semantically described assets that can be embedded, indexed, and retrieved.
If your back catalog lives on tape that has not been ingested, or in storage without coherent metadata, it cannot participate in embedding workflows. It cannot power internal retrieval systems. It cannot feed semantic search layers. It sits there legally owned but operationally inert.
That is where the competitive gap starts to show. Organizations that invested early in digitization and structured metadata now have libraries that can be queried in natural language, surfaced contextually, and recombined in AI-assisted workflows. Their archive becomes computationally accessible. It compounds.
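A sketch of why "operationally inert" is the right phrase. The embed function here is a placeholder for a real embedding model; the gate is what matters.

```python
def embed(text: str) -> list[float]:
    """Placeholder for a real embedding model call."""
    return [float(len(text))]

def build_index(library: list[dict]) -> dict[str, list[float]]:
    """Only digitized assets with structured descriptions can participate."""
    index = {}
    for asset in library:
        if not asset["digitized"]:
            continue  # still on tape: legally owned, operationally inert
        if not asset.get("description"):
            continue  # no coherent metadata: nothing to embed
        index[asset["id"]] = embed(asset["description"])
    return index

library = [
    {"id": "t1", "digitized": True, "description": "1994 noir, restored 4K scan"},
    {"id": "t2", "digitized": False, "description": "LTO-4 tape, never ingested"},
    {"id": "t3", "digitized": True, "description": ""},
]
print(list(build_index(library)))  # only t1 is retrievable
```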
Organizations that deferred that work are discovering that ownership is not the same thing as leverage. And the window is not open indefinitely. Hardware disappears. Vendors sunset support. The cost of digitization goes up when a format becomes rarer. At some point the question shifts from “When should we do this?” to “Can we still do this?”
What struck me at HPA was how few roadmap conversations explicitly connect archive strategy to AI strategy. They are often handled by different teams, under different budgets, with different timelines. Yet the ability to extract value from a catalog in an AI-mediated environment is directly tied to how accessible and well-structured that catalog is.
If you want retrieval systems that understand your history, you need clean source material. If you want embeddings that actually reflect your IP, you need digitized assets with preserved context. If you want to experiment with derivative experiences, you need provenance and structure baked in from the start. Which is why archive capture cannot remain a “post” conversation.
The moment of production is when context is richest. That is when decisions are fresh, rights are clear, and relationships between assets are obvious. If that information is not captured and structured then, it rarely gets reconstructed later with the same fidelity.
The industry has treated archive as the end of the line for decades. In an AI-shaped landscape, it starts to look more like the beginning of the next cycle. And the companies that understand that early will not just preserve history. They will operationalize it.
Standards Participation Is Now a Strategic Signal
One of the things that becomes obvious after a few days at HPA is that the most consequential conversations are rarely on stage. They happen in working groups, side meetings, and follow-ups that trace back to standards bodies and open frameworks most executives never read.
For years, standards participation felt procedural. If you were a vendor, you showed up to protect compatibility. If you were a studio, you waited until something stabilized and then implemented it. It was slow, deliberate, occasionally political, but not usually strategic in a competitive sense.
That posture does not map cleanly onto the current environment. The model itself is shifting. Instead of static PDFs published every few years, you see living standards evolving in public repositories. Iteration happens in the open. Definitions are debated while products are being built. By the time something is “final,” most of the architectural gravity has already formed around it.
Take the momentum around efforts like MovieLabs’ Ontology for Media Creation. A shared semantic backbone for content is not just about cleaning up genre tags. It shapes how discovery works across platforms and AI systems. It affects how rights, relationships, versions, and derivative works are understood computationally. The ontology decisions made in those rooms influence how content is surfaced, recombined, and monetized.
Control-plane efforts sit in a similar category. As workflows become more distributed and more AI-assisted, orchestration layers become critical. Systems need to communicate state, context, and intent across vendors. Initiatives like SMPTE’s Catena, alongside other control-plane and workflow-coordination efforts, are attempts to formalize that signaling layer. The structure of that layer determines how flexible a pipeline remains. If those protocols are shaped without your input, you inherit constraints that may not align with your long-term strategy.
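To illustrate the category of problem (and only that; this is an invented envelope, not the Catena specification), here is what state, context, and intent might look like as a message passed between two vendors’ systems:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class WorkflowSignal:
    """An invented message envelope, NOT the SMPTE Catena schema.
    The point is what has to travel between systems: state, context, intent."""
    task_id: str
    state: str     # e.g. "auto-roto-complete"
    context: dict  # lineage refs, color pipeline, version pointers
    intent: str    # what the sender expects the receiver to do next

signal = WorkflowSignal(
    task_id="shot-104-roto",
    state="auto-roto-complete",
    context={"source": "plate_0042_v3", "colorspace": "ACEScg"},
    intent="queue-for-human-cleanup",
)

# Serialized so a different vendor's system can act on it without guessing.
print(json.dumps(asdict(signal), indent=2))
```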
Even color management, through ACES and OCIO integration, now intersects with generative workflows. When AI systems are introduced into imaging pipelines, assumptions about transforms, metadata, and rendering consistency carry downstream implications. What used to be considered high-end workflow hygiene now bleeds into model training consistency and automation reliability.
What all of this suggests is that participation in standards bodies has become a form of strategic positioning. Being in those rooms means you see where definitions are heading before they harden. You understand trade-offs while they are still negotiable. You can align internal roadmaps with emerging frameworks instead of retrofitting after the fact.
If you are not present while those conversations iterate, you are not neutral. You are simply downstream. By the time a specification lands in implementation, the architectural choices have already been shaped by whoever showed up. Standards work used to be something you did to remain compliant. In an AI-shaped media stack, it increasingly looks like part of competitive architecture. And that changes who should care.
Headcount, Skill Mix, and Role Design
If media workflows are starting to behave more like software systems, then the talent model has to shift with them. That was implicit in a lot of the HPA conversations, even when people weren’t naming it directly. Smaller teams with faster cycles. AI threaded through multiple stages of production. More vendors in the stack. More automation touching glue work. That changes what kinds of roles create leverage.
The first tension I keep seeing is generalists versus specialists. Specialists are still essential. High-end compositors, colorists, editors, writers, production designers, none of that goes away. But when AI is embedded across stages, you start needing people who understand how those stages connect. Not just creatively, but operationally. People who can see the pipeline as a system.
That is where the “bridge builder” shows up. Not a prompt engineer in isolation. Someone who understands how a generative tool feeds previs, how that output travels into asset management, how metadata is preserved, how approvals are logged, how deliverables are versioned. Someone who can translate between creative intent and technical constraints without defaulting to either.
As pipelines become more modular and distributed, orchestration becomes a real discipline. Who owns the flow of work across AI systems and traditional tools? Who decides when automation is introduced and when it is overridden? Who ensures that speed gains in one department do not create downstream fragility in another?
Historically, those responsibilities were diffuse. Production managers handled schedules. Post supervisors handled finishing. IT handled infrastructure. In an AI-augmented environment, the seams between those domains get thinner. Orchestration fluency becomes valuable because misalignment compounds quickly when iteration cycles are tight.
Governance roles are also changing shape. When AI is embedded in production, someone has to own model usage policy, data boundaries, vendor compliance, and auditability. Someone has to understand how provenance frameworks intersect with internal workflows. Someone has to ensure that consent and rights metadata travel with assets. That is not a side project. It becomes a defined function.
Then there is the overlap between model ops and metadata ops. If you are training or fine-tuning internal systems, retrieval quality depends on how well your assets are structured. If you are deploying generative tools at scale, performance depends on how cleanly your data is organized and labeled. The people who understand model behavior need to be in conversation with the people who govern taxonomy and metadata hygiene. Otherwise you end up with powerful tools running on messy foundations.
And then there is creative leadership. Creative directors who grew up in longer greenlight cycles are now operating in compressed iteration loops. Understanding how acceleration affects budget, scope, and expectation becomes part of the job. It is not just about taste. It is about deciding when to stop iterating, when to reinvest savings, and when to protect creative time from being absorbed into throughput.
The instinct in many organizations is to respond to AI by hiring prompt engineers. That may solve for experimentation. It does not solve for structure.
If media workflows are behaving more like software systems, you need people who can architect pipelines, not just operate tools. You need owners of governance, not just users of models. You need fluency in orchestration across departments. And you need creative leadership that understands the economics of iteration, not just the aesthetics of output.
Those roles are not theoretical. They are already emerging. The question is whether they are being designed intentionally or forming accidentally around whoever raises their hand first.
The Decision Framework
If you strip all of this down (the normalization of AI, the shift toward hybrid pipelines, the semantic layer, the archive urgency, the standards participation, the changing skill mix), what you are left with is not a technology problem. You are left with a decision problem.
Most organizations are still treating AI adoption as a sequence of experiments. Pilot a tool. Test a workflow. See what sticks. That is reasonable in early phases. It becomes risky once the tools move from optional to embedded. At that point, you need a simple internal framework. Not to sound smart. To avoid drifting into structural decisions by accident.
Here are the three questions I would like to put on the table.
First: Are we building infrastructure, or are we buying tools? It is easy to confuse access with capability. A license to a leading model does not equal operational readiness. The harder question is whether your storage, metadata layer, identity controls, approval flows, and interoperability standards can support hybrid AI workflows without becoming brittle. If the foundation is thin, every new tool increases fragility. If the foundation is deliberate, tool choice becomes more flexible.
Second: Do we know where AI-generated savings land? Acceleration is real. But acceleration does not automatically translate into creative freedom. It often translates into expanded scope or compressed schedules. If iteration cycles are shorter, someone benefits. The question is who.
Are you intentionally reinvesting savings into higher-quality output, longer development time, or new formats? Or are those gains being absorbed into margin targets and throughput expectations? If that decision is not explicit, it is still being made — just not by design.
Third: Are we participating in the governance layer, or waiting for it to be imposed? Standards, provenance frameworks, semantic models, control-plane protocols: these are being shaped right now. If you are not in those rooms, you inherit the architecture later. That may be fine. But it is not neutral. It means aligning to definitions set by others.
Taken together, these questions shift the conversation from “Which AI tool should we try next?” to “What kind of system are we becoming?”
That is the deeper signal from HPA this year. AI is no longer the novelty at the edge of the pipeline. It is part of the structure. And once it becomes structural, leadership choices about architecture, governance, and economics determine who actually benefits from the change.
You can approach that deliberately. Or you can discover the shape of your system after it hardens. Those are very different futures.
Judgment First Is a Budget Position
When Ed Ulbrich said that AI is probabilistic and great storytelling is not, it would have been easy to file that away as a creative rallying cry. A reminder that humans still matter. A comforting line to end a keynote. But the more I have sat with it, the less it sounds like a creative slogan and the more it sounds like a structural decision.
“Judgment first” isn’t about defending craft. It’s about deciding what the system optimizes for. Left alone, speed compounds. Iteration expands. Throughput becomes the metric. If you want something else to win (coherence, authorship, restraint), you have to design for it. That design shows up in budgets, headcount, and who gets to say no. Judgment isn’t a virtue. It’s a constraint you either fund or surrender.
That decision appears in org charts, capital allocation, and review authority. It shows up in how savings are treated: reinvested into development or absorbed into throughput. It shows up in whether metadata capture is embedded in production or deferred to the end of the line. It shows up in whether you shape emerging standards or accept their constraints. Judgment is not a creative preference. It is encoded in the system itself.
AI will amplify whatever structure it is placed inside. If the operating model rewards throughput above all else, it will increase throughput. If it protects context, authorship, and coherence, it will amplify those instead. The models themselves are indifferent. The architecture is not.
That design work is already underway across the industry, whether organizations are naming it or not. Roadmaps are being drafted. Roles are being redefined. Governance layers are being negotiated. Standards are iterating in public. Archives are being digitized or deferred. Budgets are being recalibrated around speed. The question is not whether AI will reshape media. It already is.
The question is whether judgment remains upstream of the model, or becomes a downstream correction after the fact. That is not a philosophical debate. It is an operating decision.
Many of these shifts point toward something deeper. Media infrastructure is beginning to behave less like a pipeline and more like a system that remembers its own history. I’ll return to that idea next week when the Inheritance series continues.
Cocktail Note: The Desert Mirage
Palm Springs has always marketed reinvention. Clean lines. Modernism. The idea that next season will be sharper than the last. But it is still the desert. Heat exposes things. Light is unforgiving. Shadows stretch longer than you expect.
That felt like the right backdrop for this Deep Cut.
This week’s drink is a Desert Mirage.
Ingredients
1.5 oz mezcal
0.75 oz dry curaçao
0.5 oz fresh lime
0.25 oz honey syrup
Expressed grapefruit peel
Shake hard. Serve over a single large cube.
At first sip it reads bright and citrus-forward. The smoke arrives a second later. The honey is not there to sweeten the room; it just keeps the structure from collapsing. Nothing overwhelms anything else. It holds.



