One of my greatest curiosities over the last thirty-five years of building complex software is why construction projects for skyscrapers, which are highly complex beasts, seem to go so much more smoothly than software projects, most of which are far less complex.
When I was young, legend had it that at least 50% of all software projects failed to produce working code. On top of that, most of what did eventually work was of fairly low quality. People used to talk a lot about the huge risks of development.
Since then, there have been endless waves of ‘magic’ with bold marketing claims, intended to change that, but our modern success rate seems roughly the same. Of the projects that I have seen, interacted with, or discussed with fellow developers, the success rate is poor. It actually seems to have gotten worse. Although outright failures are a little less common, the overall quality is far lower. Balls of mud are commonplace.
At some level, it doesn't make sense. People can come together to build extremely sophisticated things reliably in all sorts of other industries. So why is software special?
Some people claim it is because it is purely “digital”. That makes it abstract and highly malleable, somewhat like jello. But somewhere, the rubber always meets the road; software sits on top of hardware, which is entirely physical.
Computer systems are physical objects. They may seem easier to change in theory, but due to their weight, they more often get frozen in place. Sure, at the last layer on top, close to the users, the coders can make an infinite number of arbitrary changes, but even when it is actual “code”, it is usually just configuration or small algorithmic tweaks. All the other crud sinks quickly to the bottom and becomes untouchable.
You can fiddle close to the top in any other medium. Fiddle with the last layers of wood, concrete, rubber, plastic, or metal. Renovate the inside or outside of a building. So that doesn’t seem to be the real issue.
Another possibility is the iceberg theory. Most of the people involved with building software only see the top 10% that sticks out of the water. The rest is hidden away below the waves. We do oddly say that the last 10% of any project is 90% of the effort, which seems to line up.
I used to think that was just non-technical people looking at GUIs, but it even seems true these days for backend developers blindly integrating components that they don’t really understand. All other forms of engineering seem to specialize and insist on real depth of knowledge. The people entrusted to build specific parts have specific knowledge.
Software, because it has grown so fast, has a bad habit of avoiding that. Specialties come and go, but generalists are more likely to survive for their full careers. And generalists love to ignore the state of the art and go rogue. They prefer to code something easy for themselves, rather than code things correctly. They’ll often go right back to first principles and attempt to quickly reinvent decades of effort. These bad habits are reinforced by an industry that enshrines them as cool or popular, so it can sell pretend cures to fix them. Why do things properly when you can just buy another product to hide the flaws?
I tend to think that impatience is the strongest factor. I was taught to estimate pretty accurately when I was young, but most of the time, the people controlling the effort would be horrified to learn of those estimates. They are way too long. They want something in days, weeks, or months, but getting the implicit quality that they don’t realize they actually need would sometimes take years. Their expectations are way, way off. But if you tell them the truth, they will just go to someone else who will tell them what they want to hear. The industry tells them that everything is quick and easy now, so you can’t contradict that.
That doesn’t happen with big things like skyscrapers simply because there are regulators and inspectors involved at every step of the way to prevent it. Without them, skyscrapers would collapse quite regularly, which would kill an awful lot of people. Most societies can’t accept that, so they work hard to prevent it. Software failures, on the other hand, are extremely common these days, but death from them isn’t, so the industry is held to a lesser standard. That shows.
I do miss the days, long gone, when we actually had the time to craft stuff properly. Libraries and frameworks were scarce; we usually had to roll everything ourselves except for persistence. That was a mixed blessing. Good devs would get strong custom work that enhanced their codebases, but you’d see some exceptionally awful stuff too.
Gaining access to massive piles of half-baked dependencies didn’t change that. Sure, it is easier for mediocre devs to hook up lots of stuff, but it is still possible for them to hook it up badly, and a lot of the parts are defective. So, lots more complexity that still doesn’t work. Same difference, really. Just fewer real success stories.
All in all, it is a pretty weird industry. We learn stuff, then forget it, so that we have fun redoing it in a cruder way. That keeps the money rolling for everyone, but has clearly made our modern world extremely fragile.
Whacking together something fast isn’t better if the shelf life is far shorter. What comes together quickly falls apart quickly, too. Overall, it costs way more, adds more friction, and threatens stability. But since software is invisible and it's in a lot of people's best interests that it stays a mess, it seems that it will keep going this way forever.
Rumors of its impending demise from AI are premature. Hallucinating LLMs fit in nicely in an industry that doesn’t want quality. Coders will be able to churn out 10x or possibly even 1000x more crap, so it is just a more personal Stack Overflow and an infinite number of customized, flaky dependencies. Far more code that is just as poor as today’s. It just opens up the door for more magic products that claim to solve that problem. And more grunts to churn code. One more loop through an endless cycle of insanity.
Software is a static list of instructions, which we are constantly changing.
Friday, June 13, 2025
Brute Force Style
The most common type of programming that I have encountered over the years is brute force.
It’s a style crystallized out of the fear of getting trapped in debugging hell. You code every instruction explicitly and redundantly. If you need to do similar things in 4 different places, you duplicate the code 4 times.
It is the easiest style of code to debug. You recreate the bug, see what it should have been for that exact functionality, and then you go right there and modify it with the difference.
To make it more straightforward and literal, the usage of functions is minimized, and hardcoded constants are everywhere.
Minimal functions make reuse difficult, but the intent is not to reuse older code. The hardcoded constants are easy to understand and easy to change.
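To make the style concrete, here is a tiny hypothetical sketch in Python; the invoice/quote/receipt names, the amounts, and the 13% tax rate are all invented purely for illustration:

# Brute force: the same calculation is written out inline in three places,
# with the tax rate hardcoded each time.

invoice_amount = 100.00
invoice_total = invoice_amount + invoice_amount * 0.13   # tax rate hardcoded
print("Invoice total: $" + str(round(invoice_total, 2)))

quote_amount = 250.00
quote_total = quote_amount + quote_amount * 0.13         # retyped, not shared
print("Quote total: $" + str(round(quote_total, 2)))

receipt_amount = 42.50
receipt_total = receipt_amount + receipt_amount * 0.13   # a third copy to keep in sync
print("Receipt total: $" + str(round(receipt_total, 2)))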
Brute force code is fragile, since it is a hardcoded, static set of instructions tied to just one limited piece of functionality. Generally, better programs offer far more flexibility; you can customize them to your needs. Brute force code is wired directly to the programmer’s understanding, not the user’s needs.
Brute force code generally performs badly. It doesn’t have to, but because it takes way longer to code, gathering up all of the instructions tends to be crude, so lots of extra, unnecessary stuff accumulates. As well, the author often doesn’t understand the deeper behaviours of the instructions, so many are misapplied, then patched over on top.
Brute force code always has lots of bugs. Any non-trivial system has lots of similar functionality. Because each piece is implemented separately, they are rarely in sync, and they drift as the code gets hacked. That drift is a bug multiplier. You might see the same bug in 8 places, for instance, but only 2 get noticed and fixed the first time. The second time it emerges, maybe 4 get fixed, but 2 are still waiting. So, one underlying bug might manifest as 3 or 4 different reported bugs, or worse.
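Continuing the same invented sketch, the drift looks something like this: the rate changes, one copy gets patched, and the other copies silently keep the stale value, so the same underlying defect resurfaces later as “new” bugs:

# The tax rate changed; one copy got the fix, the others were missed.
invoice_amount, quote_amount, receipt_amount = 100.00, 250.00, 42.50

invoice_total = invoice_amount + invoice_amount * 0.15    # patched copy
quote_total = quote_amount + quote_amount * 0.13          # missed, still the old rate
receipt_total = receipt_amount + receipt_amount * 0.13    # missed, still the old rate

print(round(invoice_total, 2), round(quote_total, 2), round(receipt_total, 2))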
Brute force is way more work. You don’t ever leverage any of the earlier efforts, so you end up writing the same things over and over again. On the plus side, you get faster at writing redundant stuff, but that and more bugs quickly eat away at any gains.
Brute force code has poor readability. It doesn’t have to, but because it is so long and tedious, even if the first programmer worked diligently to make it readable, all of the following fixes, and there will be a lot of them, will take the easy path and sacrifice readability for bug-fix speed. The code quickly converges on noise. And since there is just so much of it, any attempt to refactor is too big or painful, so it also tends to get frozen into place and act like a death anchor.
Brute force is seen as low risk. It is a reasonable style for quick demos or proofs of concept, but it does not scale at all. It would take hundreds, if not thousands, of additional efforts to get it up into the large category of software projects. So, at best, it generally gets trapped at medium size, then dies a slow, loud, painful death as people keep getting caught up in the initial sunk costs. Somewhere in the middle of its lifetime, it becomes a hazard or a road bump, and quite likely a generator of artificial complexity side effects.
Evolving brute force code into better styles is possible, but slow and painful. Rewriting it, because it is a sink of specific details and knowledge, is also slow and painful. Once you are trapped by this type of medium or even large codebase, getting out of the trap is difficult. It’s best to avoid the trap in the first place.
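As a rough sketch of where that evolution heads, the duplicated copies from the invented example above collapse into one shared routine with a named constant, so a future change to the rate happens in exactly one place:

# Refactored: one shared routine replaces the three inline copies.
# (Still just the invented example from above, not a prescription.)

TAX_RATE = 0.15

def total_with_tax(amount):
    """Return the amount plus tax, rounded to cents."""
    return round(amount + amount * TAX_RATE, 2)

print(total_with_tax(100.00))   # invoice
print(total_with_tax(250.00))   # quote
print(total_with_tax(42.50))    # receipt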
Other styles include:
https://theprogrammersparadox.blogspot.com/2023/04/waterloo-style.html
https://theprogrammersparadox.blogspot.com/2025/05/house-of-cards-style.html