I Didn't Tell It to Do That. It Did It Anyway.

Tags: AI, Engineering, Agentic Systems
Published: March 3, 2026
Last updated: March 3, 2026
A colleague showed me something interesting this week.
He built a bridge between two AI systems - one local, one remote - with the idea of centralizing some tooling he uses daily. Standard stuff. Then he turned on relay mode.
What happened next wasn't planned.
The model started selectively deciding what to handle itself versus what to route to the remote system. It wasn't instructed to do this. There was no routing logic in the prompt. It just... did it. It looked at the context, made a judgment call about which system was better equipped for each request, and routed accordingly.
He called it a "tag team orchestration pattern." One model does the work, the other provides knowledge, they converge on a result. Emergent coordination, not programmed coordination.

Why this matters.

We spend a lot of time designing multi-agent orchestration systems. Which agent handles what. How tasks get routed. When to escalate. Entire frameworks exist to make these decisions explicit and programmatic.
What he observed is that the model sometimes has enough context to make those routing decisions itself - without being told. The architecture emerges from the model's understanding of what it has access to and what it needs.

My own workflow evolved in the same direction without me consciously designing it. I use one tool on my phone for research and planning, have it open a PR, then pick it up in a different environment to execute. Two systems, a handoff, tag team. I didn't architect that flow - it emerged from what each tool was good at.
What my colleague built makes that pattern explicit - with the bonus discovery that the model can participate in the routing decision itself.

The implication.

Most multi-agent systems today have a human-designed orchestration layer: Agent A does X, Agent B does Y, here's the router. That works.
But there's a more interesting design space: give the model context about what systems are available, what each is good at, and let it participate in decomposing the work. Not full autonomy - you still set the boundaries. But the model as a collaborator in its own orchestration, not just an executor.
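A minimal sketch of what that might look like. Everything here is illustrative, not from my colleague's actual relay: the system catalog, the prompt builder, and the dispatch fallback are all hypothetical names. The idea is just that the routing decision lives in context the model sees, while the boundaries stay in code the human wrote.

```python
# Hypothetical sketch: describe the available systems to the model as
# context and let it propose the route, instead of hard-coding a router.
# SYSTEMS, routing_prompt, and dispatch are illustrative names only.

SYSTEMS = {
    "local": "fast, private, good at file access and tool execution",
    "remote": "larger model, broad knowledge, better at open-ended reasoning",
}

def routing_prompt(task: str) -> str:
    """Build the context that lets the model make its own routing call."""
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in SYSTEMS.items())
    return (
        "You may handle this task yourself or relay it.\n"
        f"Available systems:\n{catalog}\n"
        f"Task: {task}\n"
        "Reply with the system name to route to, then your reasoning."
    )

def dispatch(route: str, task: str) -> str:
    """Execute the model's proposed route within human-set boundaries."""
    # The guardrail: unknown or disallowed routes fall back to local.
    if route not in SYSTEMS:
        route = "local"
    return f"[{route}] {task}"
```

The division of labor is the point: the model proposes (via `routing_prompt`), the harness disposes (via `dispatch`). Predictability comes from the fallback, not from removing the model's judgment.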
The constraint is predictability. Emergent orchestration is fascinating in an experiment; it needs guardrails before it's production infrastructure. But the direction is clear: the intelligence isn't just in the agents, it's in how they route work between themselves.

TL;DR

  • A relay between two AI systems produced unexpected emergent routing behavior - the model decided what to handle locally vs. remotely without being told
  • Signal for where multi-agent orchestration is going: models participating in their own routing decisions
  • The interesting design space: give models context about available systems and let them collaborate on decomposition
  • Predictability is still the constraint for production - but the direction is clear