Stop Arguing About “AI”
Language, Accountability, and Misplaced Arguments
This is not a defense of AI. It’s a request for precision. We are using the word “AI” as a stand-in for explanation, and it’s degrading the quality of our thinking. What started as marketing shorthand has become a crutch. The more broadly we apply it, the less accurately we describe what systems are actually doing.
That imprecision has consequences. When everything is “AI,” critique drifts upward into abstraction. We end up arguing about intelligence, intent, or autonomy while the real mechanics operate underneath: defaults get set, thresholds get tuned, and objectives get locked in without us fully understanding the what or the how. Those decisions rarely resurface once the system is live.
The result is familiar. People argue passionately about “AI” while missing the actual levers of control.
The Core Problem
“AI” is not a single thing. It is a label applied to a wide range of technical functions that behave very differently, carry different risks, and deserve different scrutiny. Treating them as one collapses meaningful distinctions and weakens accountability.
This shows up most clearly in how we assign responsibility. The moment we say “the AI decided,” ownership becomes diffuse. Systems do not decide. Pipelines execute tasks. Policies get enforced. Someone chose a metric. Someone approved the rollout. Someone accepted the tradeoffs.
Language that implies agency where none exists makes power harder to locate.
Most of This Is Automation
Much of what is marketed as AI today is automation combined with statistical modeling and feedback loops. That does not make it trivial. It makes it understandable.
Automation is consequential. It scales decisions. It reshapes labor. It determines who gets visibility and who gets filtered out. But those effects come from execution at scale, not cognition. Calling automation what it is sharpens critique rather than dulling it.
Inflated language creates inflated fears and misplaced defenses. Accurate language brings the conversation back to incentives and outcomes.
Normalize the Functional Layers
Instead of defaulting to the top-level term, we should normalize the functions underneath it and name systems by what they actually do.
At a practical level, most AI-related systems in media and technology today fall into a small set of functions:
Automation executes predefined actions once conditions are met.
Classification sorts inputs into categories.
Prediction estimates likelihoods or probabilities.
Optimization tunes systems toward a defined objective.
Generation produces new text, images, audio, or code.
Retrieval finds and assembles existing information.
Orchestration coordinates tools, steps, or workflows.
Enforcement applies rules and policy at scale.
This is not about building a perfect taxonomy. It is about restoring legibility. Once the function is named directly instead of hidden under the “AI” umbrella, the right questions follow naturally. What are we optimizing for? What data shaped the behavior? What happens when the system is wrong? Who can override it?
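To make that concrete, here is a toy sketch in Python. The function names and questions are mine, not an established schema; the point is only that once a function is named, the accountability questions attach to it directly.

```python
# A toy checklist, not an established schema: naming the function
# tells you which questions to ask about it.
QUESTIONS_BY_FUNCTION = {
    "optimization": ["What metric is the objective?", "What gets suppressed to serve it?"],
    "classification": ["What are the error rates?", "Who reviews the borderline cases?"],
    "enforcement": ["Who wrote the policy?", "What is the appeal path?"],
    "generation": ["Whose prior work shaped the output?", "Who owns final authorship?"],
}

def questions_for(function_name: str) -> list[str]:
    # A system labeled only "AI" gets no useful scrutiny.
    return QUESTIONS_BY_FUNCTION.get(function_name, ["Name the function first."])
```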
Attribute, Don’t Abstract
Saying “AI is changing media” is technically true and practically useless. If we want to understand what is happening and argue about it productively, we need to be able to name the function, locate it in the workflow, and describe why it matters. It’s the same system, described in sharper language. And as a side bonus, you’ll sound pretty damn smart when you describe things.
Where This Shows Up in Media Systems
Content recommendations are the most familiar case. What people usually say is that “AI decides what you watch.” That framing implies judgment and intent. What is actually happening is prediction estimating the likelihood that you will click or finish, optimization ranking options to maximize watch time or retention, and automation continuously reordering feeds without direct human review. The real debate here is not about intelligence. It is about objectives. Which metric is being optimized, and what gets suppressed as a result?
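As a rough sketch of what that pipeline reduces to, consider the Python below. The item fields, the constant standing in for a trained model, and the objective are all illustrative, not any platform’s actual code; the point is that the “decision” is a scoring function and a sort, and the objective is a line someone wrote.

```python
# A minimal, hypothetical recommendation loop with each functional layer labeled.

def predict_completion(user, item):
    # Prediction: estimate the likelihood this user finishes this item.
    # In practice this is a trained model; a stored rate stands in here.
    return item.get("historical_completion_rate", 0.5)

def rank_feed(user, candidates, objective="watch_time"):
    # Optimization: order candidates to maximize a chosen metric.
    # The objective is a human decision, not something the system "wants."
    def score(item):
        p_finish = predict_completion(user, item)
        return p_finish * item["duration_sec"] if objective == "watch_time" else p_finish
    return sorted(candidates, key=score, reverse=True)

def refresh_feed(user, candidates):
    # Automation: re-rank continuously, with no human reviewing each ordering.
    return rank_feed(user, candidates)[:20]
```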
The same abstraction shows up in content moderation. The shorthand version is “AI is censoring speech.” In practice, these systems rely on classification to label content as allowed, restricted, or removed, enforcement to apply policy automatically at scale, and orchestration to route edge cases to human reviewers. If the outcome feels wrong, the pressure points are policy design, thresholds, appeal mechanisms, and acceptable error rates. Whether the system is “too smart” is beside the point.
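In code, most of that debate lives in a few numbers. A minimal sketch, with thresholds invented for illustration:

```python
# Hypothetical thresholds; in real systems these are policy decisions, not constants of nature.
REMOVE_THRESHOLD = 0.95   # enforcement: act automatically above this score
REVIEW_THRESHOLD = 0.70   # orchestration: route uncertain cases to humans

def moderate(violation_score: float) -> str:
    # Classification has already produced violation_score (0.0 to 1.0).
    if violation_score >= REMOVE_THRESHOLD:
        return "removed"             # enforcement at scale
    if violation_score >= REVIEW_THRESHOLD:
        return "queued_for_review"   # orchestration to human reviewers
    return "allowed"
```

Whether those numbers are right, who chose them, and how a wrong call gets appealed are the questions worth fighting over.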
The same pattern appears in advertising and measurement. What people say is that “AI manipulates audiences.” What is actually happening is prediction estimating likelihood to click or convert, optimization allocating budget across audiences, and automation executing bidding and placement in real time. This reduces manual planning, trafficking, and optimization roles. Again, not a sentience problem. An incentives problem.
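The mechanics are similarly mundane. A toy sketch, with made-up numbers, of what real-time bidding reduces to:

```python
# Hypothetical values; the point is that the bid is arithmetic over a predicted
# probability and a business value that someone chose.

def bid_for_impression(p_conversion: float, value_per_conversion: float, margin: float = 0.8) -> float:
    # Prediction: p_conversion comes from a model.
    # Optimization: bid the expected value, scaled toward a target margin.
    return p_conversion * value_per_conversion * margin

# Automation: this runs per impression, in real time, with no planner in the loop.
print(bid_for_impression(p_conversion=0.02, value_per_conversion=40.0))  # ~0.64
```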
Finally, consider rights management and takedowns. The shorthand claim is that “AI is stealing or policing content.” In reality, these systems rely on classification through fingerprint matching and similarity detection, enforcement via automated claims and takedowns, and orchestration to manage disputes and escalations. Accuracy, transparency, and appeal integrity matter far more here than the label attached to the system.
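Here too, the machinery is closer to arithmetic than judgment. A toy sketch using bit-string fingerprints and a made-up threshold (real systems fingerprint audio and video far more robustly, but the shape is the same):

```python
# Hypothetical fingerprint matcher; the threshold is the policy decision.
MATCH_THRESHOLD = 0.90

def similarity(fp_a: str, fp_b: str) -> float:
    # Classification: how closely does the upload match a registered work?
    matches = sum(a == b for a, b in zip(fp_a, fp_b))
    return matches / max(len(fp_a), len(fp_b))

def evaluate_upload(upload_fp: str, registered_fps: dict) -> tuple:
    for work_id, fp in registered_fps.items():
        if similarity(upload_fp, fp) >= MATCH_THRESHOLD:
            return ("claimed", work_id)   # enforcement: automated claim or takedown
    return ("cleared", None)              # disputes flow to orchestration and appeals
```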
And I would be doing you and myself a great disservice if I didn’t talk about generative AI and content creation. Generative tools in production expose the tension most clearly. The common claim is that “AI is replacing creatives.” What is actually happening is narrower in function but explicit in consequence. These systems are being used for generation of drafts, concepts, and variations, retrieval of reference material or stylistic context, and automation of iteration and exploration rather than final authorship. The practical effect is that fewer people are required to deliver work that meets professional quality thresholds, particularly in early and mid-stage production.
You see this most clearly in previs, storyboards, temp audio, and marketing variants, where tasks that once required multiple specialists or extended handoffs can now be completed by smaller teams in tighter cycles. That reduction is real. It is already reshaping team size, role composition, and budgets. The system does not eliminate creativity, but it compresses labor around it. Fewer seats are needed to reach “good enough,” and that changes who gets hired, how long they stay engaged, and which roles remain economically viable.
The throughline is consistent. The tools do not change when we change the language. The outcomes do not disappear when we acknowledge labor compression. But precision lets us argue about the right things.
Once you name the function, you can support or oppose it with specificity. Blanket fear becomes targeted critique. Vague anxiety turns into leverage. That is the point of the taxonomy. Not to defend systems, but to make them legible.
A Challenge to the Critics
This applies most strongly to those who are loudest in their distrust of “AI.” You know who you are: perhaps you loudly claim no AI usage, or perhaps you deride AI as the dumbing down of society. Blanket opposition is easy. Precision is harder, but precision is what creates leverage.
If you oppose automated enforcement, say that. If you reject optimization toward engagement at all costs, say that. If you support generative tools for ideation but not final output, say that.
Where creatives often diverge is not just on labor compression, but on compensation and attribution. Many are less concerned with the use of generative tools for ideation or finished work than with how the underlying work that informed those systems is valued, credited, or paid for. Discomfort with training practices, licensing terms, and the absence of durable compensation models is a legitimate critique, separate from questions of workflow efficiency.
If you object to how prior work is compensated or not compensated in the creation of generative systems, say that. Those are all concrete positions. They point to contracts, rights frameworks, and economic structures that can actually be challenged and redesigned.
We move further together when support and opposition are both specific. Vague fear rarely slows systems down. Targeted critique sometimes does.
Closing
When language collapses, agency disappears with it. Responsibility diffuses across abstractions, and power retreats behind a word broad enough to absorb or deflect blame without ever owning outcomes. Systems will continue to operate, metrics will be optimized, and labor will be compressed, but the points of control become harder to see and harder to challenge.
Clear language does not solve these problems on its own, but it makes them more addressable. It keeps objectives visible, surfaces tradeoffs, and forces ownership back into the conversation. It allows us to argue about incentives rather than intent, structure rather than mythology, and consequences rather than metaphors.
If the goal is accountability, precision is not optional. If you cannot name the function being deployed, you are not really critiquing the system that is shaping the outcome.