Beware of hidden costs: when software is thought of only in technical terms

Saving on domain expertise in software ultimately costs more. On the most expensive decision in software projects that hardly anyone recognizes as one.

Banknotes in the shredder (Image: Brams.Photography / Shutterstock.com)

By Golo Roden

There is a pattern that runs through many areas of life: those who save in the short term usually pay in the long term. Those who skip the car inspection will eventually face a major engine failure. Those who forgo advice from a professional make mistakes that are pricier than the fee saved. Or, as my father-in-law used to say, “He who buys cheap tools buys twice.” This principle is as old as economics itself, and most people would probably agree with it immediately.

In software development, there is a variation of this pattern that is surprisingly rarely recognized as such: skipping the development of a sound understanding of the domain at the beginning of a project. In previous blog posts, I have already written about why so many software projects fail, why the real bottleneck was never coding, and how a deliberately slowed-down process ultimately leads to the goal faster. Today, we will address a question that has been neglected so far: What does it actually cost when domain expertise is skipped? And why is this calculation getting worse, not better, in the age of AI?

At the beginning of a software project, there is usually an understandable impatience. The budget is approved, the team is ready, expectations are high. The argument I hear repeatedly in such situations sounds something like this:

“Development is expensive and takes a long time. Every day that is not spent programming is a lost day. So we have to start as quickly as possible.”

What sounds like economic sense at first glance is the opposite upon closer inspection.

Because “starting quickly” in this context almost always means skipping the phase of figuring out what the software should actually do. Time is not invested in building up domain expertise, understanding the processes, or questioning the requirements. Instead, technical implementation begins immediately because it looks like progress. Code is created, commits fill the repository, tickets are processed. Everything seems productive. But productivity and progress are not the same thing.

In this logic, domain expertise is not understood as an investment but as a brake: as something that delays implementation and can be picked up along the way during the project. I have observed this error in countless projects, and it is one of the most expensive mistakes a company can make. Because, as I have argued elsewhere, the real bottleneck in software development has never been the speed at which code is produced. It has always been the speed at which understanding is produced. Those who skip understanding to build faster do not build faster. They just build the wrong thing earlier.

If domain expertise is not systematically developed, no vacuum is created. Something worse arises: a collection of implicit assumptions that no one recognizes as such. Every developer has an idea of what the software should do. Every product manager has a picture in their mind. Every stakeholder has expectations. The problem is that these ideas, pictures, and expectations are rarely congruent.

At the beginning, this is not noticeable. Everyone uses more or less the same terms and nods in agreement regularly. But beneath the surface lie different interpretations that only become visible when code exists and someone realizes that the result does not match what they had imagined.

An example from practice: A team is developing software for order processing. Everyone talks about “orders,” but no one clarifies what an order actually is in domain terms. For one department, an order is a binding purchase contract; for another, it is a non-binding wish list that only becomes an order upon approval. The development team implements a third variant that corresponds to neither of these perspectives. Only months, or in the worst case years, later, during the first test with real users, does the whole thing collapse. The architecture cannot handle the actual complexity because it was built on an understanding that no one verified.
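The conflict above can be made concrete in a few lines of code. The following is a minimal, hypothetical sketch, not taken from any real project: the class and status names are my own, and they only illustrate how two departments' unspoken definitions of an "order" produce contradictory answers to the same question.

```python
from dataclasses import dataclass, field
from enum import Enum, auto

# Hypothetical models of what an "order" is, one per department.
# For sales, an order is binding the moment it is placed; for planning,
# it is a non-binding wish list until someone approves it.

class SalesStatus(Enum):
    PLACED = auto()
    FULFILLED = auto()

class PlanningStatus(Enum):
    DRAFT = auto()      # non-binding wish list
    APPROVED = auto()   # becomes binding only here
    FULFILLED = auto()

@dataclass
class SalesOrder:
    items: list = field(default_factory=list)
    status: SalesStatus = SalesStatus.PLACED

    def is_binding(self) -> bool:
        # Binding from the moment it exists.
        return True

@dataclass
class PlannedOrder:
    items: list = field(default_factory=list)
    status: PlanningStatus = PlanningStatus.DRAFT

    def is_binding(self) -> bool:
        # Binding only after approval.
        return self.status is not PlanningStatus.DRAFT

# The same freshly created "order" answers the same question differently:
print(SalesOrder().is_binding())    # True
print(PlannedOrder().is_binding())  # False
```

If the development team picks either model, or invents a third, without the departments ever comparing definitions, the mismatch stays invisible until real data and real users hit the system.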

Then the surprise is great, and the search for the culprit begins. But there is no culprit. There is only the lack of a common understanding.

These implicit assumptions are cast into code and thus cemented. Every architectural decision, every data structure, every interface reflects the understanding that existed at the time of implementation. If that understanding was incomplete or wrong, so is the software. And the more code builds upon a faulty foundation, the harder it becomes to correct the course. I described this phenomenon in detail in a blog post about how 75 percent of all software projects fail. The cause is almost never technical. It lies in the gap between what was built and what is needed.

This is where it gets economically interesting. Because the costs incurred by a lack of domain expertise are largely invisible. What companies see are budget overruns and delays. What they don't see is incomparably more expensive.

First, there is the software that misses the mark. It works; it does something, but it doesn't solve the problem it was built for. It maps processes that don't exist in reality, or it maps them in such a way that users have to awkwardly integrate them into their daily lives instead of being supported by them. The economic damage caused by this does not appear in any project balance sheet. It is spread over years in the form of inefficiency, workarounds, and missed opportunities.

Then there is the rework that is not recognized as rework. When a team revises features that were based on false assumptions, it is listed in the backlog as further development, not as a correction of an oversight. The hours go into bug fixes, adjustments, those countless small changes that become necessary because the original concept was not sound. No one aggregates these hours and asks:

“How much of this could have been avoided if we had invested two weeks in the domain at the beginning?”

And then there are the architectural decisions based on a wrong understanding of the domain. A data structure that bypasses the actual business process. A separation of modules that belong together in the domain. A simplification that ignores a special case that later turns out to be the rule. Such decisions can either be corrected only with enormous effort or, more often, not at all. They become part of the software, and everything built upon them inherits their distortion.

In Slow is smooth and smooth is fast, I described how a deliberately slowed-down approach avoids precisely these correction loops. In most projects, that rework is not avoided, and it does not simply disappear. It comes at a price that accumulates over the entire lifespan of the software.

The real tragedy is that these hidden costs are rarely tallied by anyone. There is no line in the budget called “costs due to lack of domain understanding.” Instead, they are spread across hundreds of tickets, dozens of meetings, countless hours in which smart people solve problems that would never have arisen with a better foundation. The bill is never presented, so it is never paid. At least not consciously.

Into this situation comes the promise of AI-powered software development. The promise is enticing: fewer developers, faster implementation, lower costs. AI tools generate code in seconds that used to take hours. Entire prototypes are created overnight. The message that many decision-makers derive from this is: software development is becoming cheaper and faster. So we can achieve more with less effort.

This is true, but only under one condition: you must know what you want to build. And that is precisely the problem. AI accelerates implementation, but it does not replace understanding. An AI can produce code impressively quickly, but it cannot know whether that code solves the right problem. It can implement an interface but not assess whether that interface meaningfully maps the actual business process. It can generate tests but not decide which business scenarios are truly critical.

Those who don't know what is needed will only produce code with AI that misses the mark faster. You get more software in less time that doesn't do what it's supposed to do. This is not a productivity increase. It is an acceleration of value destruction.

The pattern is not new. The same promise existed with low-code and no-code platforms, with RAD systems, with CASE tools: Each generation had its own technology that was supposed to make programming so easy and fast that the bottleneck would disappear. None of these technologies solved the fundamental problem because none of them could automate understanding. AI is more powerful than anything that came before it, but it too cannot know what a company needs if the company itself doesn't know.

But there is a second effect, perhaps even more dangerous: the apparent cost savings through AI lower the threshold for starting without any understanding of the domain. If code costs almost nothing, why spend time on conceptual work? If a prototype can be created in an afternoon, why spend weeks delving into the domain beforehand? The argument sounds compelling, but it overlooks that the costs of software lie only partially in writing the code. The far greater portion goes to everything that comes afterward: maintenance, adaptation, correction, integration, training, operation.

So AI changes the cost structure of code generation but not the cost structure of software development as a whole. And it shifts the focus even further towards technology and further away from the domain. This is the opposite of what would be necessary. The hidden costs do not decrease; they increase. It's just not noticeable because the visible side of the bill looks so much cheaper.

the next big thing – Golo Roden

Golo Roden is the founder and CTO of the native web GmbH. He works on the design and development of web and cloud applications and APIs, with a focus on event-driven and service-based distributed architectures. His guiding principle is that software development is not an end in itself, but must always follow an underlying domain need.

There is a parallel that illustrates this: Imagine someone building houses and suddenly getting a tool that allows them to erect walls ten times faster. Great, if the blueprint is correct. But if the blueprint is wrong, houses that miss the needs of the residents are simply built faster. Demolition and rebuilding won't be cheaper just because the wall construction was faster. On the contrary, because building seems so cheap, people often start without a plan, and the total costs increase.

My argument can be distilled into a simple formula: Domain expertise at the beginning of a project is not a cost factor. It is an investment with a measurable, albeit rarely measured, return. Every day invested in understanding the domain, questioning assumptions, and building a common understanding ultimately saves many times that amount in corrections, rework, and misguided software.

This is not an abstract assertion. It is the quintessence of countless projects I have experienced and accompanied over more than two decades. The projects that invested in domain expertise at the beginning reached the goal faster, cost less, and delivered better results. The projects that started immediately were faster to launch but slower to reach the goal. Often, they didn't reach the goal at all, at least not the one originally intended.

The paradox is that the investment in domain expertise doesn't have to be large. Often, a few days of intensive discussion with the right people are enough to uncover fundamental misunderstandings before they land in the code. Often, it's enough to ask the one crucial question that no one has asked so far. The effort involved is negligible compared to what a company pays for corrections when this question only comes up months into development. It is, to return to the opening image, the difference between an inspection costing a few hundred euros and an engine failure costing thousands.

The real bottleneck in software development has never been the speed of implementation. It was and is the understanding of what is to be built in the first place. As long as this understanding is lacking, faster technology does not make anything better. It just makes the wrong thing faster. AI is a striking example of this, but by no means the first.

The hidden costs of lacking domain expertise only become visible when you start to honestly tally the bill. When you ask: How much of our development time goes into correcting misunderstandings? How much of our software actually solves the problem it was intended for? How many of our architectural decisions are based on a sound understanding of the domain, and how many on guesswork?

Those who ask themselves these questions will find that the answers are uncomfortable. But they are the first step to avoiding the most expensive wrong decision in software development: the omission of the foundation upon which everything else rests. (rme)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.