Journey into Coding with AI [3/4]: Decision-Bound Programming

AI
coding
software engineering
AI engineering
journey series
AI accelerates code generation but moves the bottleneck to interpretation, comparison, and judgment. The next generation of programming tools should be decision support systems.
Author

synesis

Published

March 15, 2026

Illustration from the original post.

AI is shifting programming from execution-bound work to decision-bound work.

Generation Is Cheap, Evaluation Is Not

Many developers describe AI coding tools as both powerful and exhausting. Recent empirical work offers a plausible explanation. In a randomized controlled experiment, METR asked experienced developers to solve issues in familiar repositories, with and without AI tools [1]. On average, developers took about 19% longer when AI assistance was allowed, even though they believed they were faster. Reuters’ coverage notes that much of the extra time went into prompting, reviewing generated code, and correcting partially correct outputs [2]. These findings suggest that AI lowers the cost of producing candidate code while increasing the effort required to evaluate it.

Once generation becomes cheap, the structure of development changes. Developers can quickly produce implementations, refactorings, or architectural variations. Not every developer will explore every branch, but the number of plausible alternatives grows. Each candidate must then be interpreted, validated, and compared before it can be trusted. Programming becomes less about executing a plan and more about evaluating possibilities.
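To make the evaluation burden concrete, here is a minimal, hypothetical sketch (the function names and test cases are invented for illustration, not taken from any study): even when two candidate implementations are free to generate, each one still has to be run against the same checks before it can be trusted.

```python
# Hypothetical sketch: screening several AI-generated candidates for one task.
# Generation is cheap; every candidate still has to be evaluated.

def candidate_a(xs):
    # Plausible but wrong: crashes on empty input.
    return sum(xs) / len(xs)

def candidate_b(xs):
    # Handles the edge case explicitly.
    return sum(xs) / len(xs) if xs else 0.0

def screen(candidates, cases):
    """Run every candidate against every test case; report the survivors."""
    survivors = []
    for fn in candidates:
        try:
            if all(fn(args) == expected for args, expected in cases):
                survivors.append(fn.__name__)
        except Exception:
            pass  # a crash also rules the candidate out
    return survivors

cases = [([1, 2, 3], 2.0), ([], 0.0)]
print(screen([candidate_a, candidate_b], cases))  # ['candidate_b']
```

The generation step here is trivial; all of the work is in `screen`, which is exactly the shift the METR result points to.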

Heuristics and Context Switching

The theory of bounded rationality [3] suggests that as the number of alternatives grows, people can no longer evaluate every option fully and instead rely on heuristics to reach satisfactory decisions. AI-assisted coding can increase the number of candidate solutions that must be screened and compared, helping explain why developers may perceive higher mental effort even when code is generated faster.

A second burden is context switching. Research on programming interruptions has long shown that rebuilding context is expensive [4]. More recent AI-specific work strengthens this point. The 2026 EditFlow paper reports that 68.81% of code-edit recommendations disrupted developers’ mental flow [5]. A separate five-day field study found that proactive AI suggestions worked better after commits than mid-task [6].

Recent studies also report higher perceived cognitive load during AI-assisted development [7]. Related work on user mental models in AI-driven code completion found that developers want better timing, display, granularity, and explanation [8].

Toward Decision Support

Taken together, the evidence suggests a broader pattern: AI accelerates code generation, but it can also increase interpretation, comparison, judgment, and context management during development. The bottleneck does not disappear. It moves. This means programming environments should evolve beyond code generation toward decision support. They should reduce unnecessary branching, summarize differences between alternatives, surface trade-offs, highlight risks, and make the implications of choices explicit.
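As one illustrative sketch of "summarizing differences between alternatives" (the candidate snippets and helper below are hypothetical, not a real tool), Python's standard `difflib` can reduce two candidate patches to only the lines that differ, so the developer judges the trade-off instead of re-reading both versions in full:

```python
import difflib

# Two hypothetical candidate implementations proposed for the same change.
candidate_a = """\
def retry(op, attempts=3):
    for _ in range(attempts):
        try:
            return op()
        except Exception:
            pass
"""

candidate_b = """\
def retry(op, attempts=3, backoff=0.1):
    import time
    for i in range(attempts):
        try:
            return op()
        except Exception:
            time.sleep(backoff * (2 ** i))
"""

def summarize(a, b):
    """Return only the differing lines: raw material for a trade-off summary."""
    diff = difflib.unified_diff(a.splitlines(), b.splitlines(), lineterm="")
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

for line in summarize(candidate_a, candidate_b):
    print(line)
```

A real decision-support layer would go further, annotating the diff with trade-offs (here, added latency from backoff versus fewer thundering retries), but even this reduction changes the task from reading code to comparing choices.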

In other words, the next generation of programming tools should be decision support systems, not just code generators.



Earlier in the series: ← Part 1: Running Back to Code · ← Part 2: Shifting Gears


References

[1] METR. “Measuring the Impact of Early-2025 AI on Experienced Open-Source Developer Productivity.” 2025. https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/

[2] “AI slows down some experienced software developers, study finds.” Reuters, 2025. https://www.reuters.com/business/ai-slows-down-some-experienced-software-developers-study-finds-2025-07-10/

[3] Simon, Herbert A. Models of Man. 1957. (See also Klahr, 2004: https://www.cmu.edu/dietrich/psychology/pdf/klahr/PDFs/klahr%202004.pdf)

[4] Parnin, Chris, and Spencer Rugaber. “Resumption Strategies for Interrupted Programming Tasks.” Software Quality Journal, 2011. https://chrisparnin.me/pdf/parnin-sqj11.pdf

[5] Liu, et al. “EditFlow.” arXiv, 2026. https://arxiv.org/abs/2602.21697

[6] Kuo, et al. “Proactive AI Field Study.” arXiv, 2026. https://arxiv.org/abs/2601.10253

[7] Brandebusemeyer, et al. “GenAI Mixed-Methods Field Study.” arXiv, 2025. https://arxiv.org/abs/2512.19926

[8] Desolda, et al. “Mental Models in AI Code Completion.” arXiv, 2025. https://arxiv.org/abs/2502.02194

Originally posted on LinkedIn.