
AI Agents Are Shipping Code. Who’s Checking?

AI agents open PRs, update docs, and modify config faster than teams can review intent. The gap between observation and enforcement is where drift lives.

AI agents now open pull requests, update documentation, modify CI configuration, and influence release decisions. This is not a prediction — it is the default workflow at any team using Copilot, Claude Code, or Cursor in a real codebase.

Most teams can observe this activity. Almost none can enforce rules on it consistently.

The gap between "we can see what agents do" and "we can control what agents ship" is where drift lives. And drift compounds.

What drift looks like

A team runs three repos. They use Claude Code for feature work and Cursor for docs. Over six weeks, small things happen:

  • An agent modifies a CI config to skip a flaky test gate. The reviewer approves because the PR is 400 lines and this change is on line 380.
  • Another agent rewrites a policy doc. The original constraints are softened. No one notices because the diff looks like a formatting cleanup.
  • A third agent updates a dependency and removes a pinned version. The build passes. The security posture changes.

Each change is individually reasonable. Each passes code review. But six weeks later, the three repos have different CI configurations, different policy documents, and different dependency constraints. Nobody can explain when this happened or why.

This is not a failure of code review. It is a structural problem. Agents generate changes faster than humans can verify intent — and "approve" is not the same as "govern."

What governance should look like

Not more process. Not another dashboard to check on Fridays. Not a quarterly audit that finds problems three months late.

Governance for AI-assisted delivery should work the same way linting works: same rules, same enforcement points, every push. The checks run where the work already happens — in the CLI, in CI, in the editor. They block merges that violate the team's rules and produce a log of what passed and what drifted.

If you have to remember to run governance, it is not governance. It is a suggestion.
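The "runs where the work already happens" idea can be sketched as a git pre-push hook. This is a hypothetical illustration, not part of any tool described here: `validate_repo` is a stand-in for whatever validator the team uses, stubbed out so the sketch is self-contained.

```shell
#!/bin/sh
# Hypothetical pre-push hook sketch: run the team's validator and
# refuse the push on failure, the same way a lint gate would.
# `validate_repo` is a placeholder -- swap in the real check.
validate_repo() {
  # stand-in for the real validator; always passes in this sketch
  return 0
}

if validate_repo; then
  echo "governance check: PASS"
else
  echo "push blocked: governance check failed" >&2
  exit 1
fi
```

The point of the hook shape: the developer never opts in. The check fires on every push, and a failure is a blocked push, not a Friday dashboard entry.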

What we shipped

Morphism is a CLI that validates repos against your team's rules.

$ morphism init
  [+] .morphism/ created
  [+] AGENTS.md generated

$ morphism validate
  governance_docs    15/15
  ci_coverage        15/15
  ssot_atoms         15/15
  security_gates     15/15
  total              60/60

  Governance validation: PASS

morphism init creates the config. morphism validate checks the repo against it. CI runs the same checks on every push. The MCP server gives editors and agents the same rules.

One config file. Same enforcement everywhere. Full audit trail.
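On the CI side, the same command becomes the merge gate. A minimal sketch, assuming a GitHub Actions setup: the job name, triggers, and install step are assumptions; only `morphism validate` comes from the post.

```yaml
# Hypothetical workflow sketch -- structure and install step are
# assumptions; consult the Morphism docs for the real setup.
name: governance
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # install step omitted; use whatever install path the docs describe
      - run: morphism validate   # same check developers run locally
```

Because the workflow runs the identical command as the CLI, there is no separate "CI ruleset" to drift out of sync with what developers see locally.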

Where this is going

This is the first public release. The CLI works in a single repo today. Shared controls, team workspaces, and MCP integration for editors and agents are live for pilot teams.

If your team is shipping AI-assisted changes and wants to start governing them before it becomes an emergency — install the CLI and run your first validation in under five minutes. No account needed.


This is the first post in Field Notes, a journal about building governance for AI-assisted software delivery. Next: why governance systems should converge, not drift.
