Embracing the Interventionist Mindset
Shaping the Future of AI and Media with Ethical Engagement
Recently, danah boyd, whose sharp insights into technology's human impact I have admired since stumbling across her 2002 master's thesis, "Faceted Id/entity," published a thought-provoking article titled "We Need an Interventionist Mindset." Her timing could not be better: the piece lands amid intense debates over AI ethics sparked by generative AI controversies, privacy breaches, misinformation crises, and algorithmic bias scandals. boyd challenges the fashionable belief that technology alone holds all the answers, urging us instead toward consistent, thoughtful engagement and deliberate intervention in AI's evolution.
This resonates deeply with me because it echoes themes I keep returning to: technology isn't a neutral force; it carries our values, biases, and blind spots. Inspired by boyd's powerful critique, I'd like to make the case for why shifting toward an interventionist mindset might fundamentally reshape how we develop AI, and how media plays a central role in that story.
The Trap of Solutionism
At the heart of boyd’s argument is her critique of solutionism—the seductive idea that tech alone can neatly solve messy social problems. Take, for instance, how social platforms initially tried solving misinformation purely through automated moderation, only to create new problems—silencing legitimate voices or letting harmful content slip through.
boyd is spot on when she says, "When engineers build systems without considering consequences, they build risk." I've highlighted similar concerns in my previous writings on AI-driven personalization: quick-fix tech solutions often sideline deeper ethical considerations, resulting in invasions of privacy, bias amplification, or diminished personal autonomy. These aren't theoretical threats—we've already seen biased facial recognition systems and problematic social media algorithms wreak real-world havoc.
Advocating for an Interventionist Mindset
In my earlier piece, "From Vaudeville to Viral Videos: Cultural Determinism in the Age of AI," I explored how culture influences technological adoption, rejecting the notion that technology autonomously dictates societal change. boyd's interventionist mindset aligns beautifully here and reminds us that we don't just react passively to technology; we actively shape it through continuous dialogue, ethical scrutiny, and purposeful action.
For me, boyd's key insight is deceptively simple: "Observing a system unfold without action is also an intervention." Passivity shapes outcomes just as actively, and typically for the worse (we might call this a neglectful intervention). Consider OTT (over-the-top) streaming platforms like Netflix, Disney+, and HBO Max. Their proliferation has fragmented audiences, steadily inflated costs for users, and complicated content discovery, reinforcing the need for intentional intervention to guard against monopolies, cultural silos, and exploitation. As AI increasingly personalizes content, deliberate engagement with these problems becomes even more essential.
If you're with me so far, the next part goes deeper—into how media shapes public understanding of AI, what current narratives get wrong, and how real-world case studies reveal what happens when we intervene—or don’t. It’s dense, but worth it.
And if you make it all the way through? I’ve paired this one with a cocktail worth arguing with. Bitter, bold, and built for the ethically engaged. 🍸
Subscribe to dive in!