
Inversion of Control at the Agentic Boundary

Björn Roberg

tl;dr Treat agent capabilities as injected dependencies — explicit, auditable, swappable. But recognize that IoC patterns solve the engineering problem, not the governance one.

The agentic boundary problem

We’ve gotten comfortable with service boundaries. APIs have schemas, deployments have rollback, permissions have scopes. These are what Craig McLuckie calls “hard systems” — versioned, gated, traceable.

Agentic systems break that comfort. An agent that reasons, decides, and acts across organizational boundaries is a “soft system” — adaptive, autonomous, and poorly bounded. The boundary between human-controlled services and autonomous agents becomes the critical surface to define, enforce, and observe.

This isn’t just an engineering problem. The Agentic State vision paper frames it as institutional: agent boundaries will reshape responsibility lines across organizations and policy workflows. But before we get to governance, there’s a concrete technical question: how do you wire an agent so its capabilities are visible, testable, and revocable?

IoC as a design tool for agents

Inversion of Control — the pattern where an external container provides dependencies rather than components hard-coding them — maps naturally onto this problem.

Consider a traditional agent that has baked-in access to a database, an email API, and a payment gateway. Its capabilities are implicit, tangled with its decision logic, and hard to audit. Now consider an agent where each capability is an injected interface:

Agent(
  capabilities=[
    DatabaseRead(scope="orders", audit=True),
    EmailSend(rate_limit="10/hour", require_approval=True),
    # PaymentGateway intentionally not injected
  ],
  policies=[
    MaxTokenBudget(1000),
    HumanInTheLoop(threshold="destructive"),
  ]
)

This is dependency injection applied to agent capabilities. The benefits are the same ones Martin Fowler described twenty years ago — modularity, testability, separation of concerns — but now the stakes involve autonomous decision-making rather than object wiring.
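The wiring above can be made concrete in a few lines. This is an illustrative sketch, not the post's actual implementation: the class definitions and the `can()` lookup are assumptions, chosen to show that anything not injected simply does not exist for the agent.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """Base class for injectable agent capabilities."""
    name: str

@dataclass(frozen=True)
class DatabaseRead(Capability):
    scope: str = "*"
    audit: bool = True

@dataclass(frozen=True)
class EmailSend(Capability):
    rate_limit: str = "10/hour"
    require_approval: bool = False

class Agent:
    """An agent whose capabilities are injected rather than hard-coded."""
    def __init__(self, capabilities, policies=()):
        self._capabilities = {c.name: c for c in capabilities}
        self._policies = list(policies)

    def can(self, name: str) -> bool:
        # Capability lookup is explicit: anything not injected is absent.
        return name in self._capabilities

agent = Agent(capabilities=[
    DatabaseRead(name="db_read", scope="orders"),
    EmailSend(name="email_send", require_approval=True),
    # PaymentGateway intentionally not injected
])

print(agent.can("db_read"))          # True
print(agent.can("payment_gateway"))  # False: never wired in
```

The audit story falls out of the same structure: the dictionary built in `__init__` is the complete, enumerable capability surface of this agent.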

What this buys you:

  - Visibility: the agent's full capability surface is declared in one place, not scattered through its decision logic.
  - Testability: any capability can be swapped for a mock or stub without touching the agent.
  - Revocability: removing a capability is a configuration change, not a code hunt.
  - Least privilege: the agent can only ever do what was explicitly injected.

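The testability point is worth making concrete. Because capabilities arrive from outside, a test can inject a recording fake in place of the real email sender. The names here (`FakeEmailSend`, `notify`) are illustrative assumptions, not part of any framework:

```python
class FakeEmailSend:
    """Test double: records sends instead of performing them."""
    name = "email_send"

    def __init__(self):
        self.sent = []

    def __call__(self, to: str, body: str) -> None:
        self.sent.append((to, body))

def notify(email_send, to: str) -> None:
    """Business logic that depends only on the injected capability."""
    email_send(to, "Your order shipped.")

fake = FakeEmailSend()
notify(fake, "user@example.com")
assert fake.sent == [("user@example.com", "Your order shipped.")]
```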
Where IoC falls short: control inversion at scale

Here’s the tension. IoC as a software pattern promotes devolved, modular control — good for safety engineering. But the Future of Life Institute’s Control Inversion paper warns about a different kind of inversion: systems that, through competence and entrenched deployment, absorb power and reduce human control rather than extend it.

This is not a wiring bug. It's a systemic failure mode driven by:

  - competence: agents that perform well invite ever-broader delegation.
  - entrenched deployment: once woven into workflows, an agent becomes costly to remove.
  - accumulating dependence: human skill and oversight atrophy as the agent absorbs more decisions.

The modularity and automation that improve efficiency can simultaneously accelerate dependence and reduce oversight. A perfectly IoC-wired agent with clean capability injection can still become a de facto decision authority if no one questions whether it should be making those decisions.

Layering security at injection points

Between the engineering pattern and the governance problem sits a practical middle layer: security at the injection points.

Frameworks like Obsidian Security's AI agent security tooling operationalize this — supply-chain security for models, runtime audit, permissioning, and behavior monitoring. The key insight is that injection points are natural enforcement surfaces:

  - Credentials and scopes can be checked at the moment a capability is injected.
  - Every capability invocation can be logged to an immutable audit trail.
  - Rate limits and approval gates sit between the agent's intent and its effect.
  - Runtime monitoring can compare observed behavior against the declared capability set.

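One way to treat an injection point as an enforcement surface is to wrap each capability before handing it to the agent, so auditing and rate limiting apply no matter how the agent calls it. A minimal sketch, assuming a simple sliding-window limit; the `GuardedCapability` name and guard parameters are inventions for illustration:

```python
import time

class GuardedCapability:
    """Wraps a capability callable with audit logging and a rate limit."""
    def __init__(self, inner, name, max_calls, per_seconds, audit_log):
        self.inner = inner
        self.name = name
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self.audit_log = audit_log
        self._calls = []  # timestamps of recent invocations

    def __call__(self, *args, **kwargs):
        now = time.monotonic()
        # Keep only invocations inside the current window.
        self._calls = [t for t in self._calls if now - t < self.per_seconds]
        if len(self._calls) >= self.max_calls:
            self.audit_log.append((self.name, "DENIED: rate limit"))
            raise PermissionError(f"{self.name}: rate limit exceeded")
        self._calls.append(now)
        self.audit_log.append((self.name, "ALLOWED"))
        return self.inner(*args, **kwargs)

audit_log = []
send = GuardedCapability(lambda to: f"sent to {to}", "email_send",
                         max_calls=2, per_seconds=3600, audit_log=audit_log)

send("a@example.com")
send("b@example.com")
try:
    send("c@example.com")   # third call within the window is refused
except PermissionError:
    pass
```

Because the guard lives at the injection point rather than inside the agent, the agent's reasoning cannot route around it, and the audit log is populated regardless of why the agent acted.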
This doesn’t solve the governance problem, but it gives governance something to work with.

Practical synthesis

Three layers, each necessary, none sufficient alone:

  1. Architecture: design agentic services using IoC principles. Make capabilities, credentials, and policies injected and auditable. Apply containers and orchestrators to agents the way we apply them to microservices.

  2. Security: enforce least privilege, runtime monitoring, and immutable audit trails at injection points. Treat capability injection as a security surface, not just a software pattern.

  3. Governance: pair technical controls with organizational rules, oversight bodies, and accountability structures. Continuously test socio-technical dynamics — simulate how deployment incentives and failure modes might lead to dependency or authority shifts.
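The architecture and security layers compose at the same surface. A human-in-the-loop policy like the one in the earlier wiring sketch can be enforced where capabilities are invoked. This is an illustrative sketch: the action taxonomy and `approver` callback are assumptions standing in for a real approval workflow.

```python
# Actions classified as destructive; a real system would derive this
# from the capability declarations rather than a hard-coded set.
DESTRUCTIVE = {"delete_record", "send_payment"}

class HumanInTheLoop:
    """Policy: destructive actions require explicit human approval."""
    def __init__(self, approver):
        self.approver = approver  # callable: action name -> bool

    def check(self, action: str) -> bool:
        if action in DESTRUCTIVE:
            return self.approver(action)
        return True

# A stand-in approver that denies everything when no human is present.
policy = HumanInTheLoop(approver=lambda action: False)

print(policy.check("read_orders"))    # True: non-destructive, passes
print(policy.check("delete_record"))  # False: blocked pending approval
```

The governance layer then decides what counts as destructive and who the approver is — questions no amount of wiring can answer.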

The engineering is the easy part. Making sure we remain the ones deciding what gets injected — that’s the hard part.
