It starts quietly.
A thumbnail swaps overnight. A caption shortens itself. The feed rearranges what it thinks you’ll want next. No one approves the change, but it performs better, so it stays.
Artificial intelligence didn’t begin with imagination. It began with accounting.
The first computers split logic from memory, turning thought into something that could be measured, stored, and replayed. From there, we spent eighty years teaching machines not only to calculate, but to notice what worked.
Now the lesson has turned inward. Models train on their own outputs. Search engines mine the behavior they helped create. Each cycle pulls the loop tighter between what we do and what the system expects. What started as a tool for thinking has become the environment where thinking happens.

This feedback loop doesn't just shape what we see. It shapes what we believe. The same systems that learn to optimize a feed can also learn to mimic truth itself, producing artifacts that feel real because they've been trained on what performs as real. The risk isn't fabrication alone; it's amplification without interpretation.
This Deep Cut looks at how that shift took shape: how training and inference, once separate, merged into the same current. How learning became the infrastructure everything else runs on.
Credit where it’s due: Kreig DuBose did the deep digging. His paper, “A History of AI: Inference and Training,” lays out the technical timeline I’m drawing from. Go read his Substack, Cuss-n-Discuss. It’s the engineer’s view of how we got here.
What follows is the system’s view. A story about how the machines we built to learn started learning us instead. The question is whether we’re still the ones doing the teaching.