AI Didn’t Break PBS—It Exposed What the Industry Isn’t Facing
Four uncomfortable but essential truths about data quality, governance, human judgment, and trust in an AI-driven media ecosystem.
This essay was contributed by Rebecca Avery, founder and principal of Integration Therapy, a boutique advisory firm that guides media companies in optimizing streaming operations and aligning strategy with financial performance.
One company that rarely gets enough credit for its technical ambition is PBS. In 2024, when AI was still finding its place on media roadmaps, PBS was already testing how automation could strengthen metadata, improve discovery, and modernize the digital experience for viewers. There weren't enough shared AI lessons at the time to establish deep patterns across the industry (there still aren't), and what PBS did was genuinely bold, because almost no one in our industry is this transparent.
PBS shared what worked, what didn’t, and what they learned. For data and technology leaders, their openness is a rare gift. It reveals what most companies hide: the structural gaps, operational failures, and foundational issues that surface when automation enters the picture.
If you're trying to modernize your operations, you should read the original Current.org piece. It's one of the clearest windows into the real work behind AI adoption that we've seen in media and entertainment: the ambitions, the strain points, the unexpected hurdles, and the pieces that genuinely advanced their mission.
PBS’s experience reveals four truths about AI that remain as relevant today as they were then. If you want real leverage from AI, these lessons are essential.
1. AI Forces a Reckoning With Data Quality, and Most Companies Aren't Ready
PBS is explicit that the first principle of AI work is that good results start with curated, high-quality, and well-protected data. Their metadata automation experiments demonstrated exactly why. AI can move through enormous libraries with speed, but it can’t provide meaning. If your taxonomy is inconsistent, overloaded, or poorly aligned with audience behavior, AI will amplify those issues.
Preparing a library for AI requires coordinated movement across data modeling, governance, operations, and sanitization. Without a documented taxonomy, clean metadata, clear processes, and validated inputs, automation introduces unnecessary risk. Add high-speed AI to an unprepared system, and structural breaks multiply as fast as the data does, with no automated way to clean them up. The result: downstream trauma in personalization, analytics, and workflows throughout your supply chain.
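To make that concrete, here is a minimal sketch of the kind of pre-flight validation this implies. The record shape, field names, and approved-genre list are all assumptions for illustration, not PBS's actual schema or taxonomy:

```python
from dataclasses import dataclass, field

# Hypothetical controlled vocabulary; PBS's actual taxonomy is not public.
APPROVED_GENRES = {"documentary", "drama", "news", "kids", "science"}

# Fields that downstream search and personalization depend on.
REQUIRED_FIELDS = ("title", "genre", "description", "air_date")

@dataclass
class ValidationReport:
    asset_id: str
    errors: list = field(default_factory=list)

def validate_asset(record: dict) -> ValidationReport:
    """Flag metadata problems before any AI enrichment touches the record."""
    report = ValidationReport(asset_id=record.get("asset_id", "<unknown>"))

    for name in REQUIRED_FIELDS:
        if not record.get(name):
            report.errors.append(f"missing required field: {name}")

    # Out-of-vocabulary genres are exactly the inconsistency AI would amplify.
    genre = str(record.get("genre", "")).strip().lower()
    if genre and genre not in APPROVED_GENRES:
        report.errors.append(f"genre not in approved taxonomy: {genre!r}")

    return report

def partition(records: list) -> tuple:
    """Split a library into enrichment-ready records and ones needing cleanup."""
    clean, dirty = [], []
    for record in records:
        (dirty if validate_asset(record).errors else clean).append(record)
    return clean, dirty
```

The point is the gate, not the particular checks: automation runs only on records that already validate, so it cannot multiply inconsistencies it inherits.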
PBS’s transparency is a reminder to the entire industry: AI doesn’t fix foundational data problems. It exposes them.
2. Governance Is a Missing Ingredient in Most AI Programs
PBS has done the work of defining how AI should be used across editorial, production, marketing, design, and internal operations. Their guidelines are thoughtful, clear, and closely tied to their mission and legal obligations. They outline when AI can create versus validate, when outputs must be manually reviewed, when additional labeling is required, and where generative media is off-limits due to compliance, privacy, or other sensitive issues.
Effective governance is a dynamic set of policies woven into strategy, internal communication, data workflows, and technology projects. If editorial, legal, product, engineering, finance, and content operations aren’t aligned on the purpose of a given AI initiative, or if leadership hasn’t clearly defined what “compliant” looks like, governance becomes theoretical instead of operational.
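One way to keep governance operational rather than theoretical is to encode the rules where the workflows run. The sketch below is a simplified, hypothetical policy table in that spirit; the domains, rules, and field names are illustrative assumptions, not PBS's actual guidelines:

```python
from enum import Enum, auto

class AIUse(Enum):
    CREATE = auto()    # AI generates new material
    VALIDATE = auto()  # AI checks or enriches existing material

# Hypothetical policy table mirroring the kinds of rules PBS describes:
# per domain, which uses are allowed and what obligations attach to them.
POLICY = {
    "editorial": {"allowed": {AIUse.VALIDATE}, "human_review": True, "label_output": True},
    "marketing": {"allowed": {AIUse.CREATE, AIUse.VALIDATE}, "human_review": True, "label_output": True},
    "metadata":  {"allowed": {AIUse.CREATE, AIUse.VALIDATE}, "human_review": False, "label_output": False},
    "likeness":  {"allowed": set(), "human_review": True, "label_output": True},  # generative media off-limits
}

def check_request(domain: str, use: AIUse) -> dict:
    """Return the obligations attached to an AI task, or refuse it outright."""
    rule = POLICY.get(domain)
    if rule is None or use not in rule["allowed"]:
        return {"permitted": False, "reason": f"{use.name} not allowed for {domain!r}"}
    return {
        "permitted": True,
        "requires_human_review": rule["human_review"],
        "requires_ai_label": rule["label_output"],
    }
```

Whether the rules live in code, a workflow tool, or a checklist matters less than that every team resolves an AI request against the same table.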
PBS’s approach shows what governance looks like when strategy is clear, communication is consistent, and data and technology teams are working from the same blueprint.
3. Where AI Reaches Its Limits, Human Expertise Becomes the Asset
PBS's metadata experiments revealed something every media company needs to internalize: AI can scale work, but it can't create meaning. Models can surface patterns across huge libraries, but they cannot understand why a piece of content matters, who it is for, or what role it plays in a viewer's journey. Today's more advanced models may be better at semantics, but they run into the same boundary: meaning requires context, and context is a human capability.
PBS captured this directly: crunching a lot of data does not translate into knowledge.
And that line exposes two essential truths about the limits of AI and the beginning of real human oversight:
First, more data (or faster data) does not make decisions easier. It doesn’t clarify intent, reconcile conflicting signals, or determine what “good” looks like for the business. You still need people with data expertise, domain expertise, and leadership judgment to interpret what the model produces and connect it back to strategic outcomes. AI can expand the volume of information available; only humans can decide what that information means and how it should guide the company.
Second, AI and automation are not magic pills that let you fully automate a supply chain. They don’t remove the need for oversight, quality control, editorial review, or operational judgment. The goal is not efficiency through full automation. The goal is to run operations that support your strategy. AI is a tool in that pursuit, not the destination.
PBS’s openness around these limits is instructive. Automation can accelerate workflows and reduce repetitive tasks, but it does not replace the work of interpretation, context, and decision-making. When teams are strong, AI becomes a force multiplier. When teams lack clarity or development, AI reveals that quickly.
4. Ethical and Transparent Use Is Emerging as a Competitive Advantage
PBS’s guidelines and willingness to share their findings reflect their role as a public media institution that values trust. They require clear labeling when AI is involved in visuals, careful expert review, strong protections around likeness and voice, and strict boundaries for content that could mislead or violate editorial standards.
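As a loose illustration of what machine-readable disclosure could look like, the sketch below stamps AI involvement directly onto an asset's metadata so every downstream surface can label it consistently. The field names and structure are assumptions for this essay, not PBS's schema:

```python
from datetime import datetime, timezone

def stamp_ai_provenance(asset: dict, model_name: str, role: str) -> dict:
    """Attach an AI-involvement label so downstream surfaces can disclose it."""
    asset.setdefault("ai_provenance", []).append({
        "model": model_name,       # which system touched the asset
        "role": role,              # e.g. "generated_thumbnail" or "summarized_description"
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "human_reviewed": False,   # flipped to True after expert review
    })
    return asset
```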
Across the broader technology landscape, companies now find that retaining trust requires greater transparency with users about how data is captured, processed, and used. Audiences are increasingly aware of how algorithms shape their viewing experience. They notice when personalization helps them and when it doesn't. When automation is aligned with the viewer's interests, they're more forgiving, but transparency is still the anchor that holds the relationship.
PBS’s posture sets a bar that the private sector should pay attention to. Their clarity strengthens trust, and in a media environment shaped by AI across every layer, trust is currency.
A Takeaway for the Industry, and Some Gratitude
PBS didn’t present their AI work as a finished system. They presented it as a real-world look at the complexity, the promise, and the institutional courage of bringing automation into a high-stakes media environment. They showed what worked, what required more structure, and where the tools themselves are still maturing.
AI is no longer a question of “if.” It is a question of readiness. Data readiness. Governance readiness. Team readiness. Ethical readiness. PBS showed the industry what responsible experimentation looks like. They gave us a view into the work that usually happens behind closed doors. And for those of us building, advising, or modernizing media operations, that kind of transparency is rare, and deeply valuable.
We’re better off because they shared it.
Sources:
“How PBS is using AI to improve processes and systems for stations, staff and viewers.” Current.org. October 2024.
“PBS Deploys Amazon Bedrock’s Generative AI for New Viewers Search Engine.” Digital Media World. October 2024.