AI is useful. I use it a lot. But I don't think it has made software noticeably cheaper yet. The most common narrative is simple: AI writes code, so software should cost less. What I keep seeing in practice is different: cost is not removed, it is redistributed. Some costs shrink, but new ones appear and often hide inside process overhead.
1) AI reduces drafting cost, not system cost
Before heavy AI use, effort was concentrated in writing, reviewing, debugging, and maintenance. After heavy AI use, first drafts got faster, but surrounding work expanded: validation loops, prompt iteration, output triage, integration cleanup, and tooling spend. The bottleneck moved from "generate code" to "trust and integrate code."
2) The leverage is conditional
AI is strongest when the operator can do three things:
- define constraints clearly,
- detect subtle wrongness quickly,
- and steer the model toward useful tradeoffs.
Without that, teams accumulate plausible output that fails later in expensive ways. This is why AI often amplifies existing engineering quality: teams with strong feedback loops accelerate, while teams with weak ones amplify noise.
3) Cutting engineers to fund AI is usually the wrong swap
The people most often removed are mid-level engineers who hold operational context: why a system is shaped the way it is, where it is brittle, and what breaks first. Those are exactly the signals you need when model output volume increases. Removing that layer while increasing generated output creates a governance gap, not an efficiency gain.
4) The "cheap phase" is probably temporary
Current costs still feel manageable for many teams because usage patterns are early. As AI gets embedded across more workflow steps, overhead compounds: larger contexts, more requests, deeper retries, and more control surfaces around the models. At that point, "cheap generation" is only one line item in a wider operating cost profile.
5) I do not expect a crash; I expect normalization
I do not expect AI use to collapse. I expect selection pressure: teams quietly cut low-yield usage, keep high-yield workflows, and become stricter about where models actually create net value. Useful patterns remain; hype-heavy patterns fade.
What feels true after using it heavily
AI makes code generation easier. It does not make system understanding easier. It can speed implementation. It cannot replace product and engineering judgment.
My baseline assumption now is straightforward: AI is a force multiplier for good systems thinking, not a substitute for it.
