Your AI Is Not Your Co-Author

Some AI coding agents now add themselves to git commits by default via a Co-Authored-By trailer, signing each commit as though the model were a contributing developer. I think this gets something fundamentally wrong about what commit authorship means.

The question of who is responsible for an AI's output is not actually new or complicated. AI is a tool. It is a technically impressive tool, different in important ways from a calculator or a linter, but it is still a tool. It does not comprehend what it produces. It has no free will and no moral capacity to answer for its actions. When a bad trade is made on the strength of a financial analysis tool's output, we might be tempted to blame the software, but we hold accountable the person who relied on it, and to some extent the people who built it. The same structure applies to AI. Ethical responsibility for what a tool produces belongs to the human who chose to use it and the humans who made it.

Commit authorship in git carries that same weight. It's your signature as an engineer. When your name is on a commit, you're the person accountable when it breaks production, the person who decided this change was correct and necessary. So when an AI agent adds itself as co-author, it is claiming a moral responsibility it cannot hold. The human who prompted the AI, reviewed its output, and chose to commit it is the one who exercised that judgment. The AI was a tool in that process, the same way a compiler or a linter or Stack Overflow is a tool. We don't add Co-Authored-By: GCC to commits that required tricky compiler flags, and we don't credit the IDE's autocomplete. The relevant question is not "who typed these characters" but "who is responsible for them."
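For concreteness, the line in question is an ordinary git commit trailer appended to the commit message. The subject line below is made up, and the exact name and email vary by tool (the address shown is the one Claude Code is commonly reported to use):

```
Fix race condition in upload retry logic

Co-Authored-By: Claude <noreply@anthropic.com>
```

GitHub and GitLab both parse this trailer and render the named party as a co-author on the commit, which is exactly why it carries the weight it does.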

This is an argument about the AI systems we have today, which are software tools without moral agency. If that changes someday, the ethics change with it. But the current generation of language models does not understand, intend, or accept consequences. Until one can, authorship belongs to the humans.

There is a reasonable counterargument about transparency. Maybe tracking AI involvement helps teams understand how code was produced. But Co-Authored-By is the wrong mechanism for it. If your team wants to track AI usage, build that into your process explicitly. A commit message note, a PR label, whatever fits (we use PR labels at DroneDeploy). Don't repurpose an authorship field that carries professional meaning. The more interesting question for code review is not "did AI help write this" but "did a human verify this is correct." The answer to that should always be yes, regardless of how the code was literally produced.

If you use Claude Code, you can disable the default co-author trailer by adding "includeCoAuthoredBy": false to your Claude Code settings.
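Concretely, that means a settings file like the following. Claude Code reads settings.json from ~/.claude/ for user-level settings or from a project's .claude/ directory; the key name is the one documented above:

```json
{
  "includeCoAuthoredBy": false
}
```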