The Control Layer Becomes Critical
Who Really Sits In Front Of The Agents
This week is about the control layer starting to show its face.
Cloud providers are wiring themselves together so customers can chase the models wherever they actually live. Platforms are buying identity graphs so they can decide which agents are allowed to touch what. Data warehouses are hiring those agents to live closer to the data. Courts are dragging 20 million “private” model interactions into discovery and treating them as evidence, not exhaust.
None of this looks like a flashy app launch. It shows up as interconnects, permissions, logging, and contracts. But taken together, these moves mark a shift. The interesting action is no longer in the chat window. It is in the routing tables that decide which model you hit, the policy engines that decide whether your agent is allowed to act, and the audit trails that decide who gets blamed when something goes wrong.
Everyone is fighting for the same thing. The layer that quietly decides what is possible, what is easy, and what never even shows up as an option.
AWS and Google build a bridge so customers can chase models
Source: The Information, December 2, 2025
AWS and Google Cloud are launching a jointly engineered multicloud networking service that lets customers create private, high-speed links between the two platforms in minutes instead of weeks. The service pairs AWS Interconnect – multicloud with Google’s Cross-Cloud Interconnect, with Azure support slated for 2026. The sales pitch is resilience and simpler multicloud. The reality is that a lot of AWS customers want access to Gemini and other rival models without lifting and shifting their data.
Why it matters
This is model gravity overruling old cloud posture. AWS spent a decade telling customers they did not need multicloud. Now it is co-authoring the bridge that keeps workloads and spend anchored on AWS while traffic flows to whatever model stack the customer actually wants. The decision about which model your agents talk to moves into a network spec and a console setting. Once that is normalized, the “where should this run” question becomes part of the control layer, not an architecture debate.
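If it helps to see that claim in miniature, here is a deliberately invented sketch of what “model choice as a console setting” looks like once a private cross-cloud link exists. The provider names are real, but the config shape, endpoint hostnames, and the route_for helper are all hypothetical, purely to illustrate that the decision collapses into data a platform team edits rather than an architecture anyone debates.

```python
# Hypothetical sketch only: once a private cross-cloud link exists, picking a
# model becomes an entry in a routing table, not a migration project.
# The config shape, hostnames, and helper below are invented for illustration.

MODEL_ROUTES = {
    # model family -> where traffic exits, over which link
    "claude": {"endpoint": "bedrock.aws.internal", "link": "local"},
    "gemini": {"endpoint": "vertex.gcp.internal", "link": "aws-gcp-interconnect"},
}

def route_for(model: str) -> dict:
    """Resolve which endpoint and private link serve a given model family.

    Swapping an agent's model is now a one-line config change; the data
    and the workloads stay anchored where they already run.
    """
    try:
        return MODEL_ROUTES[model]
    except KeyError:
        raise ValueError(f"No route configured for model family: {model!r}")

print(route_for("gemini"))  # traffic crosses the interconnect; the data does not move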
ServiceNow buys Veza to track where the agents actually are
Source: IT Pro, December 3, 2025
ServiceNow is acquiring Veza, an AI-native identity security company, in a deal reported at more than 1 billion dollars. Veza’s Access Graph gives enterprises a single view of who and what can reach sensitive systems and data, across human users, machine accounts, and AI agents. ServiceNow plans to plug that graph into its AI Control Tower so security teams can review permissions, narrow access, and monitor agent activity from one place. Executives are already talking about “non-human identities” as a first-class problem.
Why it matters
This is the permission layer turning into a product line. If AWS and Google are fighting over where agents run, ServiceNow is buying the map of which agents are even allowed to act. Identity here is not just employees. It is services, workflows, copilots, service bots, and anything else that can touch production systems. That is the control layer moving from ad hoc IAM settings into a dedicated surface. The teams that treat agent identity and authorization as core design, not cleanup, will have a very different risk profile in a year.
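To make “non-human identities as first class” concrete, here is a minimal sketch of an authorization check where agents are modeled as their own principals in an access graph instead of riding on borrowed human credentials. None of this is Veza’s or ServiceNow’s actual API; the graph structure, the names, and the can_reach helper are hypothetical, invented only to show the design.

```python
# Hypothetical sketch: agents as first-class principals in an access graph.
# Nothing here is Veza's or ServiceNow's API; names and structure are invented.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Principal:
    id: str
    kind: str  # "human", "service", or "agent"

@dataclass
class AccessGraph:
    # edges[principal_id] = set of resource ids that principal may reach
    edges: dict = field(default_factory=dict)

    def grant(self, p: Principal, resource: str) -> None:
        self.edges.setdefault(p.id, set()).add(resource)

    def can_reach(self, p: Principal, resource: str) -> bool:
        # Agents inherit nothing from the human who launched them: an agent
        # reaches a resource only if it was granted that edge directly.
        return resource in self.edges.get(p.id, set())

graph = AccessGraph()
alice = Principal("alice", kind="human")
copilot = Principal("alice-sales-copilot", kind="agent")

graph.grant(alice, "crm/accounts")  # Alice can reach CRM accounts...
# ...but her copilot was never granted that edge.

assert graph.can_reach(alice, "crm/accounts")
assert not graph.can_reach(copilot, "crm/accounts")  # denied until reviewed
```

The point of the sketch is the denial on the last line. When the agent is its own node in the graph, narrowing access is a deletable edge and an auditable event, not a side effect of someone’s session token.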
Snowflake pays 200 million dollars to make Claude a resident agent
Source: Reuters, December 3, 2025
Snowflake and Anthropic announced a multi-year, 200 million dollar partnership to bring Claude models directly into the Snowflake AI Data Cloud. Claude will power Snowflake Intelligence, the company’s new “enterprise intelligence agent,” and will be available to more than 12,600 customers wherever their governed data already lives, across AWS, Azure, and Google Cloud. The deal includes a joint go-to-market effort centered on agentic workflows built on warehouse data.
Why it matters
The warehouse is shifting from a reporting surface to an action surface. Instead of exporting data out to some external assistant, enterprises are being asked to let a resident agent sit on top of their tables and propose next steps from inside the platform. Paired with the AWS and Google bridge, this is another piece of the same story: connect wherever you want on the infrastructure side, then let a single decision layer mediate how agents work on the data itself. The control layer moves closer to the data gravity you already cannot ignore.
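A toy version of that shift from reporting surface to action surface, with everything invented (this is not Snowflake’s or Anthropic’s API): the agent is called inside the platform, sees only what governance already allows, and returns proposed actions rather than an exported extract.

```python
# Invented sketch, not Snowflake's or Anthropic's API: a "resident agent"
# reads governed tables in place and proposes next steps, instead of the
# data being exported to an external assistant.

GOVERNED_ORDERS = [
    {"customer": "acct-001", "region": "EU", "late_invoices": 3},
    {"customer": "acct-002", "region": "US", "late_invoices": 0},
]

def governed_view(rows, caller_region):
    # The warehouse's existing policy applies to the agent like any other
    # principal; a region filter stands in for real governance rules here.
    return [r for r in rows if r["region"] == caller_region]

def resident_agent(rows):
    # Stand-in for a model call: read in place, return proposed actions.
    return [
        f"Open a collections task for {r['customer']}"
        for r in rows
        if r["late_invoices"] > 0
    ]

proposals = resident_agent(governed_view(GOVERNED_ORDERS, caller_region="EU"))
print(proposals)  # the action leaves the warehouse; the data never did
```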
OpenAI is forced to turn over 20 million ChatGPT logs
Source: Reuters, December 3, 2025
In the New York Times copyright case, US Magistrate Judge Ona Wang ordered OpenAI to produce more than 20 million de-identified ChatGPT conversation logs, rejecting the company’s arguments that the request is a privacy disaster and a gift to attackers. The logs span two years of usage and must be delivered under an existing protective order. OpenAI has pushed back in court filings and public statements, arguing that almost all of the chats are irrelevant and that disclosure will erode user trust.
Why it matters
This is what it looks like when the control layer runs into legal discovery. Those logs are not abstract “training data.” They are the actual interaction history of how people lean on these systems day to day. Once one court normalizes a request at this scale, it becomes much easier for other plaintiffs and regulators to demand similar windows into AI usage. That pulls the audit trail into the control layer too. Product marketing says the user is in control. The case record is starting to say something else.
Closing note: next week, zooming in on the Control Layer
All four of these stories describe the same shift from different angles. Clouds are interconnecting so they can keep their grip on data while you chase the models you actually want. Platforms are absorbing identity graphs so they can decide which agents are allowed to act. Data clouds are turning into homes for resident agents that sit directly on top of governed tables. Courts are asserting their own claim on the logs behind the interface and treating them as part of the record, not a byproduct.
This is the Control Layer beginning to crystallize. The part of the stack where interfaces start to behave like policy, and where defaults around agents, access, and logging matter more than whatever branding sits on top. If you want to know where power is moving, watch who controls those defaults and who gets to see the tapes.
Next week I am going to zoom in on that layer directly in Deep Cut #3 in the Radicalization Series: The Control Layer. That piece will step away from individual deals and lawsuits and focus on the structure underneath them instead: how belief, capability, behavior, identity, and narrative stack together once governance moves into the interface and the system starts to think first so you do not have to.
If you missed the first two pieces in this Radicalization Series, you can start there: The Faith Layer: AI as Absolution and The Power Layer. If you want the next one to land in your inbox when it goes live, make sure you are subscribed by email, not just following, and that notifications for System Alerts and Deep Cuts are turned on in your Substack settings.

One last thought. It is striking how fast our AI chats have gone from feeling private and throwaway to something a court can demand by the millions. Even without names attached, those logs reveal how people think, plan, and work through problems.

If that becomes routine, the line between private digital thought and public record blurs quickly. The control layer stops being just about how we use AI and starts being about how much of our inner commentary can be pulled into the open whenever the legal system decides it wants a look.