Friday, July 11, 2025

Assumptions

You’re asked to build a big system that solves a complex business domain problem.

But you don’t know anything about the business domain, or the actual process of handling it, and there are some gaping holes in your technology knowledge for the stack that you need to make it all work properly. What do you do?

Your biggest problem is far too many unknowns: known unknowns and unknown unknowns. A big difficulty with software development is that we often deal with this by diving in anyway, instead of addressing it proactively.

So we make a lot of assumptions. Tonnes of them.

We usually work with a vague understanding of the technologies. Either we ignore the business domain, or our understanding is so grossly over-simplified that it is dangerous. This is why there is so little empathy in our fragile creations.

It would be nice if this changed, but it does not, and has only gotten worse with time.

So instead, we need to deal with it.

The first step is to assume that almost everything is an assumption. The second is to build enough flexibility into the work so that only minimal parts of it are lost if your assumptions turn out to be wrong.

For technical issues, you can occasionally guess blindly and be correct. More often, it is safer to just follow the trends -- in a GUI, for example, do whatever everybody else is doing -- since that is less likely to change. It’s a mixed bag, though, in that some super popular trends are actually really bad ideas, so it’s good to be a little sceptical. Hedge your bets and avoid things that are too new to have proven any staying power.

Business stuff is harder; the reality is usually far away from what you imagine it to be, and it is never easy. The obvious fix is to go learn how it actually works, or to get an expert and trust them fully. Often, neither is an option.

The other side is to be objective about it. Is it actually something that could be handled in multiple different ways? And how many possible variations in handling it can you imagine?

Valuations and pricing are good examples where people are usually very surprised at how different the actual reality is from what they might have guessed, mostly because the most obvious ways of dealing with them are not practical, and a lot of history has flowed under the bridge already. If you have zero real exposure and you guess, you will almost certainly be wrong.

The key is that if you do not know for certain, the code itself should not be static. That is, the code should mirror your certainty about your assumptions: static if you are absolutely, 1000% certain; dynamic if you are not.

If you think there might be ten ways to do something, then you implement the one you guessed is likely and make it polymorphic. As others pop up, it is easy to add them too. It takes a little more effort to make something encapsulated and polymorphic, but if you are right about being wrong, you just saved yourself some big trouble and a few bad days.
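As a minimal sketch of that hedge -- in Python, with a hypothetical pricing calculation whose names and markup rule are purely illustrative -- it might look like:

from abc import ABC, abstractmethod

class PricingStrategy(ABC):
    """Encapsulates one guess at how pricing actually works."""

    @abstractmethod
    def price(self, base: float) -> float:
        ...

class FlatMarkup(PricingStrategy):
    """The variation we guessed was most likely."""

    def __init__(self, markup: float) -> None:
        self.markup = markup

    def price(self, base: float) -> float:
        return base * (1.0 + self.markup)

def quote(strategy: PricingStrategy, base: float) -> float:
    # Callers depend only on the interface, so when the assumption
    # turns out to be wrong, a new subclass gets added without
    # touching any of the existing call sites.
    return strategy.price(base)

print(quote(FlatMarkup(0.2), 100.0))  # 120.0

If a tiered or negotiated variation surfaces later, it becomes another subclass rather than a rewrite of every call site.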

Flipping that around, scope creep often isn’t really scope creep, but rather assumption convergence. People assumed that a simple, trivial feature would do the trick, but at some point they realized that was incorrect, so now the code has to do far more than they initially believed it should. Knowledge was gained; the design and implementation should be updated to reflect that, and what already exists should be properly refactored.

In development projects where the coders don’t want to know anything about the underlying business problems, they get angry at the domain experts for not having known this sooner. In projects where the coders care about the outcomes, they are keen to resolve this properly. The difference is whether you see the job as churning specifications into code or as solving people's problems with code.

A while back, there was a lot of resistance to what was termed speculative generalization. If you could save yourself a few days by not making something encapsulated or polymorphic, it was argued that you should save those days. The problem was that, when paired with a lack of caring about what the code was supposed to do, stubbornly insisting that nothing should change just generated a lot of drama. And that drama, and all of the communication around it, eats up a tremendous amount of time. The politics flows fast and furious, draining the life out of everything else in the project. Everybody’s miserable, and it ends up eating far more time than just making the change would have. People used to blame this on the waterfall process, but it is just as ugly and messy in lightweight methodologies.

With that in mind, a little extra time spent to avoid that larger and more difficult path is a lot of time saved. The caveat is that you should not try to forecast where the code base will grow, but instead just work to hedge your own lack of certainty.

It’s a shifting goal, though. As you build more similar things, you learn more and assume less. You know what can be static and what should likely be dynamic. Other developers will disagree with you, since their experiences and knowledge are different. That makes it hard to get all of the developers on the same page, but development goes far more smoothly when they are, and they can interchange and back each other up. That is why small, highly synced “tiger” teams can outcode much bigger ones.

When something is counterintuitive, it can be hard to convince others that it is what it is. That is a trust and communication issue between the developers themselves. Their collective certainty changes the way they need to code. If the approach is mandated from above or externally, it usually assumes total uncertainty, so everything is dynamic and thus overengineered. That worst-case scenario is why such mandates aggravate people.

The key, though, is always being objective about what you actually know for certain. If you can step back and not take it personally, you can make good choices in how to hedge your implementation and thus avoid all sorts of negative outcomes. If you get it nearly right, and you’ve focused on readability, defensive coding, and all the other proactive techniques, then releasing the code will be as smooth as you can get it.

If you go the other way and churn a ball of mud, the explosion from releasing it will be bigger than the work to create it. Just as you think it is going out the door and is over, it all blows up in your face, which is not pleasant. They’ll eventually forgive you for being late if it was smooth, but most other negative variations are often fatal.

Thus, the adage “assumptions kill”. But in an industry that is built around and addicted to assumptions, you are already dead; you just don’t know it yet.

Friday, July 4, 2025

Industrial Strength

Software that continues to correctly solve a problem, no matter what chaos surrounds it, is industrial strength. It just works, it always works.

It is about reliability and expectations. The code is there when you need it and it behaves in exactly the way you knew it would.

You can be certain that if you build on top of it, it won’t let you down. There will always be problems, but the industrial-strength stuff is rarely one of them.

If code isn’t industrial strength, it is a toy. It may do something cute; it may be clever. The results could be fascinating and highly entertaining. But it is still a toy.

You don’t want to build something serious on top of toys. They break when least expected, and they’ll do strange things periodically. They inject instability into whatever you are doing and whatever you have built on top.

Only toys can be built on other toys; they’ll never be industrial strength. Something fragile that breaks often can’t be counterbalanced properly. Contained perhaps, but the toy nature remains.

Lots of people market toys as if they were industrial strength. They are not, and are highly unlikely to ever be. Toys don’t “mature” into industrial strength; they just get more flaky as time goes on.

Industrial strength is a quality that has to be baked into every facet of the design right from day one. It is not accidental. You don’t discover it or blunder into it. You take the state-of-the-art deep knowledge, careful requirements, and desired properties, then you make difficult tradeoffs to balance out the competing concerns. Industrial strength is always intentional and only ever by people who really understand what it means.

There is nothing wrong with toy software; it can be instructive, fun, or entertaining. But for the things that we really depend on, we absolutely do not want toy software involved. Some problems in life are serious and need to be treated seriously.

Friday, June 27, 2025

Building Things

One of my greatest curiosities over the last thirty-five years of building complex software is why construction projects for skyscrapers, which are highly complex beasts, seem to go so much more smoothly than software projects, most of which are far less complex.

When I was young, legend has it that at least 50% of all software projects would fail to produce working code. On top of that, most of what did eventually work was of fairly low quality. People used to talk a lot about the huge risks of development.

Since then, there have been endless waves of ‘magic’ with bold marketing claims intended to change that, but our modern success rate seems roughly similar. Of the projects that I have seen, interacted with, or discussed with fellow developers, the success rate is poor. It actually seems to have gotten worse: although outright failures are a little less common, the overall quality is far lower. Balls of mud are commonplace.

At some level, it doesn't make sense. People can come together to build extremely sophisticated things reliably in all sorts of other industries. So why is software special?

Some people claim it is because it is purely “digital”. That makes it abstract and highly malleable, somewhat like jello. But somewhere, the rubber always meets the road; software sits on top of hardware, which is entirely physical.

Computer systems are physical objects. They may seem easier to change in theory, but due to their weight, they more often get frozen in place. Sure, at the last layer on top, close to the users, the coders can make an infinite number of arbitrary changes, but even when it is actual “code”, it is usually just configuration-based or small algorithmic tweaks. All the other crud sinks quickly to the bottom and becomes untouchable.

You can fiddle close to the top in any other medium. Fiddle with the last layers of wood, concrete, rubber, plastic, or metal. Renovate the inside or outside of a building. So that doesn’t seem to be the real issue.

Another possibility is the iceberg theory. Most of the people involved with building software only see the top 10% that sticks out of the water. The rest is hidden away below the waves. We do oddly say that the last 10% of any project is 90% of the effort, which seems to line up.

I used to think that was just non-technical people looking at GUIs, but these days it even seems true for backend developers blindly integrating components that they don’t really understand. All other forms of engineering seem to specialize and insist on real depth of knowledge. The people entrusted to build specific parts have specific knowledge.

Software, because it has grown so fast, has a bad habit of avoiding that. Specialties come and go, but generalists are more likely to survive for their full careers. And generalists love to ignore the state of the art and go rogue. They prefer to code something easy for themselves, rather than code things correctly. They’ll often go right back to first principles and attempt to quickly reinvent decades of effort. These bad habits are reinforced by an industry that enshrines them as cool or popular, so that it can sell pretend cures for them. Why do things properly when you can just buy another product to hide the flaws?

I tend to think that impatience is the strongest factor. I was taught to estimate pretty accurately when I was young, but most of the time, the people controlling the effort would be horrified to learn of those estimates. They are way too long. They want something in days, weeks, or months, but getting the implicit quality that they don’t realize they actually need would sometimes take years. Their expectations are way, way off. But if you tell them the truth, they will just go to someone else who will tell them what they want to hear. The industry has told them that everything is quick and easy now, so you can’t contradict that.

That doesn’t happen with big things like skyscrapers simply because there are regulators and inspectors involved at every step of the way to prevent it. Without them, skyscrapers would collapse quite regularly, which would kill an awful lot of people. Most societies can’t accept that, so they work hard to prevent it. Software failures, on the other hand, are extremely common these days, but death from them isn’t, so the industry is held to a lesser standard. That shows.

I do miss the days, long gone, when we actually had the time to craft stuff properly. Libraries and frameworks were scarce; we usually had to roll everything ourselves except for persistence. That was a mixed blessing: good devs would produce strong custom work that enhanced their codebases, but you’d see some exceptionally awful stuff too.

Getting tonnes of half-baked dependencies didn’t change that. Sure, it is easier for mediocre devs to hook up lots of stuff, but it is still possible for them to hook it up badly, and a lot of the parts are defective. So, lots more complexity that still doesn’t work. Same difference, really, just fewer real success stories.

All in all, it is a pretty weird industry. We learn stuff, then forget it, so that we have fun redoing it in a cruder way. That keeps the money rolling for everyone, but has clearly made our modern world extremely fragile.

Whacking together something fast isn’t better if the shelf life is far less. What comes together quickly falls apart quickly, too. Overall, it costs way more, adds more friction, and threatens stability. But since software is invisible and it’s in a lot of people’s best interests that it stays a mess, it seems that it will keep going this way forever. Rumors of its impending demise from AI are premature. Hallucinating LLMs fit in nicely in an industry that doesn’t want quality. Coders will be able to churn out 10x or possibly even 1000x more crap, so it is just a more personalized Stack Overflow and an infinite supply of customized, flaky dependencies. Far more code that is just as poor as today’s. It just opens up the door for more magic products that claim to solve that problem. And more grunts to churn code. One more loop through an endless cycle of insanity.

Friday, June 13, 2025

Brute Force Style

The most common type of programming that I have encountered over the years is brute force.

It’s a style crystallized out of the fear of getting trapped in debugging hell. You code every instruction explicitly and redundantly. If you need to do similar things in 4 different places, you duplicate the code 4 times.

It is the easiest style of code to debug. You recreate the bug, see what it should have been for that exact functionality, and then you go right there and modify it with the difference.

To make it more straightforward and literal, the usage of functions is minimized, and hardcoded constants are everywhere.

Minimal functions make reuse difficult, but the intent is not to reuse older code. The hardcoded constants are easy to understand and easy to change.
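As a tiny, hypothetical sketch of the style -- the report names and the tax rate are made up for illustration -- it tends to look like this:

# The same calculation is retyped at each call site with hardcoded
# constants, so every copy has to be found and fixed independently
# whenever anything changes.

def print_invoice_total(amount: float) -> None:
    total = amount + amount * 0.13  # tax rate hardcoded here
    print(f"Invoice total: {total:.2f}")

def print_receipt_total(amount: float) -> None:
    total = amount + amount * 0.13  # ...and hardcoded again here
    print(f"Receipt total: {total:.2f}")

def print_refund_total(amount: float) -> None:
    total = amount + amount * 0.12  # a copy that has already drifted
    print(f"Refund total: {total:.2f}")

print_invoice_total(100.0)
print_receipt_total(100.0)
print_refund_total(100.0)

Each copy works on its own, but the moment one drifts, as the refund version has here, the same underlying defect starts surfacing as several seemingly unrelated bugs.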

Brute force code is fragile, since it is a hardcoded, static set of instructions tied to just one limited piece of functionality. Generally, better programs offer far more flexibility; you can customize them to your needs. Brute force code is wired directly to the programmer’s understanding, not the user’s needs.

Brute force code generally performs badly. It doesn’t have to, but since the style takes way longer to write, gathering up all of the instructions tends to be crude, so lots of extra, unnecessary stuff accumulates. As well, the author often doesn’t understand the deeper behaviours of the instructions, so many are misapplied, then patched on top.

Brute force code always has lots of bugs. Any non-trivial system has lots of similar functionality, and because each instance is implemented separately, they are rarely in sync and drift apart as the code gets hacked. That drift is a bug multiplier. You might see the same bug in 8 places, for instance, but only 2 get noticed and fixed the first time. The second time it emerges, maybe 4 more get fixed, but 2 are still waiting. So one underlying bug might manifest as 3 or 4 separately reported bugs, or worse.

Brute force is way more work. You don’t ever leverage any of the earlier efforts, so you end up writing the same things over and over again. On the plus side, you get faster at writing redundant stuff, but that and more bugs quickly eat away at any gains.

Brute force code has poor readability. It doesn’t have to, but because it is so long and tedious, even if the first programmer worked diligently to make it readable, all of the following fixes, and there will be a lot of them, will take the easy path and sacrifice readability for bug fix speed. The code quickly converges on noise. And since there is just so much of it, any attempts to refactor are too significant or painful, so it also tends to get frozen into place and act like a death anchor.

Brute force is seen as low risk. It is a reasonable style for quick demos or proofs of concept, but it does not scale at all. Ultimately, it would take hundreds, if not thousands, of extra efforts to get it up into the large category of software projects. So, at best, it generally gets trapped at medium size, then dies a slow, loud, painful death as people keep getting caught up in the initial sunk costs. Mostly, in the middle of its lifetime, it becomes a hazard or a road bump, and quite likely a generator of artificial complexity side effects.

Evolving brute force code into better styles is possible, but slow and painful. Rewriting it, because it is a sink of specific details and knowledge, is also slow and painful. Once you are trapped by a medium or even large codebase of this type, getting out of the trap is difficult. It’s best to avoid the trap in the first place.

Other styles include:

https://theprogrammersparadox.blogspot.com/2023/04/waterloo-style.html

https://theprogrammersparadox.blogspot.com/2025/05/house-of-cards-style.html

Friday, May 23, 2025

House of Cards Style

When you need software to come together super quickly, but don’t need it to stay together for long, or care whether it withstands much, you can use the light, low-density house of cards style to build it.

You restrict the development to pure glue. If there is any computation that isn’t glue, you search for some existing component, download it, and glue it into the whole. The primary development activity is just gluing in as many pieces as you can, as fast as you can.

You configure as little as possible; you change as little as possible. The most important part of it is to avoid any deep or serious thinking. Don’t get curious. If getting some new piece into the code requires understanding how it works, you toss it and move on quickly to the next piece. Fast, low-thought hacking. Try not to understand the components, don’t consider anything other than all too obvious errors, and don’t worry about any long-term issues. Just throw it all together.
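As a caricature of the style, here is a Python sketch: the JSON-to-CSV chore is hypothetical, the URL is just a placeholder, and the requests library stands in for whatever component a quick search turned up.

import csv
import io

import requests  # a downloaded piece, glued in rather than written

def report(url: str) -> str:
    rows = requests.get(url, timeout=10).json()  # someone else's fetching
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)  # someone else's formatting
    return buf.getvalue()

# No validation, no error handling, no understanding of the parts.
print(report("https://example.com/data.json"))

It stands up instantly and demos well, but every unexamined piece is another card waiting to slip.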

House of Cards only works if the code remains small, so it is suitable for PoCs, demos, or tiny, short-term projects. But these things literally collapse like a house of cards if you try to make them any larger.

House of Cards is pure throwaway work. It is far more expensive to evolve or refactor these types of efforts into real systems than to just throw them away and write it all again. Generally, the components don’t fit well or work well with each other, the interface is haphazard -- either really simple or really ugly -- and while it has an incredible number of features, any reasonable user workflow within it is too convoluted to be used regularly.

The work isn’t really wasted if you throw it away, as you did learn a few things from throwing it together, and you get a better sense of the solution space, but the code itself is totally disposable.

The point is to spin something up really quickly to see if it is viable, much like a little clay model of a car design or a little prototype of a machine, before committing to the long and expensive work of doing it for real. The output is a toy and should never be relied on for any critical efforts. Systems need to be engineered as ‘systems’ that hold together and are resilient, which is a totally different set of skills than just gluing things to each other.

Another example of programming style is:

https://theprogrammersparadox.blogspot.com/2023/04/waterloo-style.html