Friday, June 13, 2025

Brute Force Style

The most common type of programming that I have encountered over the years is brute force.

It’s a style crystallized out of the fear of getting trapped in debugging hell. You code every instruction explicitly and redundantly. If you need to do similar things in 4 different places, you duplicate the code 4 times.

It is the easiest style of code to debug. You recreate the bug, see what the code should have done for that exact functionality, and then you go right to that spot and patch in the difference.

To keep the code straightforward and literal, functions are used sparingly, and hardcoded constants are everywhere.

Minimal functions make reuse difficult, but the intent is not to reuse older code. The hardcoded constants are easy to understand and easy to change.

Brute force code is fragile, since it is a hardcoded, static set of instructions tied to just one limited piece of functionality. Better programs generally offer far more flexibility; you can customize them to your needs. Brute force code is wired directly to the programmer’s understanding, not the user’s needs.

Brute force code generally performs badly. It doesn’t have to, but because the style takes so much longer to write, the instructions tend to be gathered up crudely, and lots of extra, unnecessary work accumulates. As well, the author often doesn’t understand the deeper behaviour of the instructions, so many are misapplied and then patched over.

Brute force code always has lots of bugs. Any non-trivial system has lots of similar functionality, and because each instance is implemented separately, the copies are rarely in sync and drift apart as the code gets hacked. That drift is a bug multiplier. The same bug might exist in 8 places, but only 2 get noticed and fixed the first time. The next time it surfaces, maybe 4 more get fixed, but 2 are still waiting. So one underlying bug can manifest as 3 or 4 different bugs, or worse.
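To make the drift concrete, here is a tiny hypothetical sketch: one discount rule duplicated brute-force style into three places, with a later fix applied to only one of the copies.

```python
# Hypothetical sketch: the same discount rule, duplicated brute-force style.
# A bug fix (cap the discount at 50%) was applied to only one copy, so the
# copies have drifted out of sync.

def invoice_total(amount):
    # Copy 1: fixed -- the discount is now capped at 50% of the amount.
    discount = min(amount * 0.1 + 20.0, amount * 0.5)
    return amount - discount

def quote_total(amount):
    # Copy 2: still the old, buggy rule -- no cap.
    discount = amount * 0.1 + 20.0
    return amount - discount

def receipt_total(amount):
    # Copy 3: also unfixed, and a later hack quietly changed the constant.
    discount = amount * 0.1 + 25.0
    return amount - discount

# For small amounts the three copies disagree, so one underlying bug
# now surfaces as several different-looking bugs.
print(invoice_total(30.0), quote_total(30.0), receipt_total(30.0))
```

Had the rule lived in one shared function, the fix would have landed everywhere at once.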

Brute force is way more work. You never leverage any of the earlier effort, so you end up writing the same things over and over again. On the plus side, you get faster at writing redundant code, but the extra volume and the extra bugs quickly eat away any gains.

Brute force code has poor readability. It doesn’t have to, but because it is so long and tedious, even if the first programmer worked diligently to make it readable, the many fixes that follow will take the easy path and sacrifice readability for bug-fixing speed. The code quickly converges on noise. And since there is so much of it, any attempt to refactor is too large and painful, so the code tends to get frozen into place and act like a death anchor.

Brute force is seen as low risk. It is a reasonable style for quick demos or proofs of concept, but it does not scale at all. It would take hundreds, if not thousands, of added efforts to push it up into the large category of software projects. So, at best, it generally gets trapped at medium size, then dies a slow, loud, painful death as people keep getting caught up in the initial sunk costs. By the middle of its lifetime, it becomes a hazard or a road bump and, more than likely, a generator of artificial complexity side effects.

Evolving brute force code into better styles is possible, but slow and painful. Rewriting it is also slow and painful, because it is a sink of specific details and knowledge. Once you are trapped by a medium or even large codebase of this type, getting out of the trap is difficult. It’s best to avoid the trap instead.

Other styles include:

https://theprogrammersparadox.blogspot.com/2023/04/waterloo-style.html

https://theprogrammersparadox.blogspot.com/2025/05/house-of-cards-style.html

Friday, May 23, 2025

House of Cards Style

When you need software to come together super quickly, but you don’t need it to remain together for a long time, or care if it withstands a lot, you can use the light, low-density house of cards style to build it.

You restrict the development to pure glue. If there is any computation that isn’t glue, you search for some existing component, download it, and glue it into the overall effort. The primary development activity is just gluing in as many pieces as you can, as fast as you can.

You configure as little as possible; you change as little as possible. The most important part is to avoid any deep or serious thinking. Don’t get curious. If getting some new piece into the code requires understanding how it works, you toss it and move on quickly to the next piece. Fast, low-thought hacking. Try not to understand the components, don’t consider anything other than all-too-obvious errors, and don’t worry about any long-term issues. Just throw it all together.
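As a rough illustration of glue-only development (the data and components here are made up, with stdlib modules standing in for downloaded pieces), the style looks something like this:

```python
# Hypothetical sketch of house-of-cards glue: grab off-the-shelf pieces
# and wire them together with as little original code (and as little
# thought) as possible.
import csv
import io
import json
import statistics

raw = "name,score\nalice,3\nbob,5\ncarol,4\n"   # stand-in for fetched data
rows = list(csv.DictReader(io.StringIO(raw)))    # component: CSV parser
scores = [int(r["score"]) for r in rows]         # the only "real" line
summary = {"mean": statistics.mean(scores),      # component: statistics
           "high": max(scores)}
print(json.dumps(summary))                       # component: serializer
```

Almost every line delegates to something prebuilt; the author never has to understand how any of the pieces work internally, which is exactly the point of the style.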

House of Cards only works if the code remains small, so it is suitable for PoCs, demos, or tiny, short-term projects. But these projects literally collapse like a house of cards if you try to make them any larger.

House of Cards is pure throwaway work. It is far more expensive to evolve or refactor these types of efforts into real systems than to just throw them away and write it all again. Generally, the components don’t fit well or work well with each other, the interface is haphazard -- either really simple or really ugly -- and while it has an incredible number of features, any reasonable user workflow within it is too convoluted to be used regularly.

The work isn’t really wasted if you throw it away, as you did learn a few things from throwing it together, and you get a better sense of the solution space, but the code itself is totally disposable.

The point is to spin something up really quickly to see if it is viable, much like a little clay model of a car design or a little prototype of a machine, before committing to the long and expensive work of doing it for real. The output is a toy and should never be relied on for any critical efforts. Systems need to be engineered as ‘systems’ that hold together and are resilient, which requires a totally different set of skills than just gluing things to each other.

Another example of programming style is:

https://theprogrammersparadox.blogspot.com/2023/04/waterloo-style.html

Friday, May 16, 2025

Intrinsic Complexity

You can’t ignore complexity. Ignoring it doesn’t make it go away; it just makes it worse.

All the futile attempts to force an oversimplification instead of dealing with it are why things have not worked well. If people could just accept the complexity, then they could minimize any extra artificial complexity that is also included. That would not make it disappear, but rather just keep it as small as possible.

There was a time when people would accept that something was complicated and deal with all of that complication. But gradually, that changed.

Dealing with a lot of complexity is hard and time-consuming. Everyone is too impatient now, so they just want to skip straight to the outcome and make positive claims.

You can constrain the context around the complexity. But that is only viable if it stays within those boundaries. If, as is often the case, it wanders out of those boundaries, that deviation will pile on lots of artificial complexity.

The fundamental rule is that you cannot make intrinsic complexity go away; you can only shift it elsewhere. It is as complex as it is intrinsically; there is no way to change that.

One of the ways to control complexity, though, is to encapsulate it. The encapsulation itself is a small chunk of artificial complexity, but if it is done well, it independently partitions the internal complexity away from the external complexity. So, at a small cost, you haven’t gotten rid of it, but you have broken it into two pieces that you can correctly deal with, each on its own. But if you do it too much, the artificial complexity from all the extra encapsulation will eventually make the whole thing impossible to deal with.
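A sketch of such encapsulation, using a made-up tiered-pricing rule as the internal complexity:

```python
# Sketch: encapsulating intrinsic complexity behind a small interface.
# The messy details (a hypothetical tiered-pricing rule) live inside the
# class; callers see one method, so the internal and external complexity
# can each be dealt with on its own.

class PriceBook:
    """All of the tiered-pricing complexity is hidden in here."""

    def __init__(self):
        # Internal complexity: tier limits and per-unit rates (made-up).
        self._tiers = [(100, 1.00), (500, 0.90), (float("inf"), 0.75)]

    def cost(self, units):
        # The single, simple external view of the internal mess.
        total, start = 0.0, 0
        for limit, rate in self._tiers:
            in_tier = max(0, min(units, limit) - start)
            total += in_tier * rate
            start = limit
            if units <= limit:
                break
        return total

book = PriceBook()
print(book.cost(50), book.cost(600))
```

Callers only ever deal with `cost`; the tier table can be rearranged, fixed, or replaced without the external complexity changing at all.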

Most shortcuts get their speed improvements through some form of oversimplification. In a sense, it is ignoring the dominant context and choosing another, far simpler one. So you do what you need in the simpler one, but when you take that back up to the original context, it spins off a heap of really bad side-effect problems. Some people mistakenly believe that this is similar to partitioning, but the key difference is that the simplified version and the side problems fully interact with each other, whereas properly partitioned parts are independent. Trying to patch those interactions spins off more problems, and so the artificial complexity explodes.

I suspect that there are better ways to actually reduce complexity, but except for rare examples scattered throughout history, it does not seem to be knowledge we possess. It’s likely, though, that you could take an inventory, then normalize it in a way that is still relatable to human intelligence, and use that as an update. That would obviously work if the complexity was mostly artificial, but it seems as if there are ways that it might also work for some forms of intrinsic complexity. Maybe.

The easiest thing to do is to handwave away complexity. Simply choose to ignore it, then cover up the disastrous effects of doing so. We see this far too often in history and politics. People with little understanding have trouble accepting that some things are actually complex, so the bad actors sell to their nature, then run a lot of interference to claim the bad side effects aren’t related, when they are quite obviously related. This is a rather gaping hole in our social organizations.

We go through periods of mass delusions doing this. Society ignores reality. Lots of people head off in a new direction, but then later, the side effects counteract all of the positive attributes.

We also seem to have a very hard time admitting that this was the truth. We’ll go to crazy complex lengths to try and hide the problems, which is in and of itself a bad side effect that creates more artificial complexity. Our histories and sciences are full of such episodes. All of that wobbling through time is what has led us to having the whack load of complexity we are faced with today, and all of that wobbling is also why we tend to deny it. The answer is that we don’t have an answer, and every time we think we have an answer, it just makes everything worse, not better. But not trying to answer it is also worse, so it’s a paradox.

Acceptance is the key. If we finally accept that things are crazy complex and that we can’t go on this way, then we can start investigating the massive amount of details that are fueling the problem. It is glacially slow and demands extraordinary patience, but if we do it and rely on collective intelligence to understand it, then we might gradually undo the mess we’ve gotten ourselves into. That obviously is going to take a crazy long time, but you can’t just snap your fingers and solve thousands of years of buildup. Maybe along the way, we’ll actually figure out how to keep ourselves organized and not do it again.

Friday, May 9, 2025

Habits

I think this article is worth reading carefully:

https://spectrum.ieee.org/lean-software-development

I have been dealing with a bunch of infrastructure software lately, and I have been horrified by how awful it has become. I have decided to call it Franken-ware.

It’s a cobbled-together mess of everything. It has a crazy huge number of half-baked features that are poorly glued into place. It reminds me of the awful mainframe software of the 80s that we were going to replace (but never did). That stuff had fewer features overall but was just as convoluted.

I see this as a problem of degrading habits. Too many people believe that whacking out more features quickly, even when they don’t make sense, is more valuable than quality and engineering. They value speed over understanding.

Code is always a manifestation of its author’s knowledge. Q&A sites or vibe coding don’t change that; they just delay the inevitable. If the code is a mess, the system is a mess. If the data is garbage, the output is garbage. These things never change (but are often ignored or forgotten).

Good Habits:
  • Format your code very carefully; pay close attention to readability
  • Refine your code, over and over again. It is never ‘done’. Editing is more important than typing it in.
  • Dig into how things work below your code. What you don’t understand will always come back to bite you.
  • Anticipate all possible errors, however unlikely. They like to occur just when it is inconvenient.
  • Make the code easier on the users and harder on yourself. The opposite is cheating.
  • Never blindly throw in a block of code. If you don’t fully understand it, you can’t use it. If you check in the code, it implies that you understand how the code works.
  • Read a lot. Manuals, tutorials, textbooks, articles, blogs, etc. The most important skill you can learn is to be able to absorb tonnes of knowledge quickly and correctly.
  • Control everything you can control. Otherwise, it is out of control.
  • Start with everything locked down tightly. Only loosen the locks when you know what you are doing.
  • Pay close attention to your assumptions. Knowing what they are will tell you where the most likely problems will occur.
  • Spend time to make any interfaces look good. They are how you are presenting your work to the world. If they are ugly ...
  • If you don’t know, ask questions. Ask lots of them, even if they are stupid. Write down the answers, so you can try not to ask the same ones over and over again; people find that annoying.
  • Beware of shortcuts. Nothing is easy in development, so anyone promising that is up to something. Nothing is fast in development; what gets done quickly will fall apart quickly.
  • Always follow suit. Make your code fit into the codebase, your data fit into the database, your interface fit into the software around it. Be as uncreative as possible; save creativity for actual problem solving, not implementation details.
  • Get to know your users. Have empathy for them, talk to them, meet them, etc. The more you understand about the problems they are facing, the better your solutions will be for them. It’s all about them.
  • Don’t cave under pressure. Keep an open mind, but once you decide that some specific work needs to be done, and how to do it, do it to the best of your abilities, no matter how much noise surrounds you. Later, reflect on whether it was actually a good choice, since that will help you grow.
  • Mastery comes from being able to precisely estimate the work involved and get it done as predicted. If you are experimenting, you have not mastered it yet. It’s software, it is okay to have not mastered it, just don’t fool yourself into thinking that you have; that blinds you to opportunities to improve.

Once you get going, improving your habits is usually far better than lightly learning new technologies. Habits are long-term abilities that reduce the friction in your career.
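As a small sketch of the ‘anticipate all possible errors’ and ‘start locked down’ habits working together (the config format, function name, and default here are hypothetical):

```python
# Sketch: a config reader that anticipates every failure it can foresee,
# instead of hoping for the best, and rejects anything it doesn't trust.
import json

def load_limit(path, default=10):
    """Read an integer 'limit' from a JSON config file, surviving each
    anticipated failure: missing file, unreadable file, corrupt JSON,
    missing key, wrong type, out-of-range value."""
    try:
        with open(path) as f:
            data = json.load(f)
    except FileNotFoundError:
        return default          # no config yet is a normal, expected case
    except (OSError, json.JSONDecodeError):
        return default          # unreadable or corrupt: fall back safely
    value = data.get("limit", default)
    if not isinstance(value, int) or isinstance(value, bool) or value < 1:
        return default          # locked down: reject anything suspicious
    return value

print(load_limit("no-such-file.json"))  # -> 10
```

Each branch exists because the error it handles will eventually occur, usually at the least convenient moment.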

Friday, April 25, 2025

Two Passes

I have been reminded quite recently of a very common software operations problem.

You have to do a big upgrade to an existing system.

For this, you should have an incredibly thought-out plan, down to the littlest details, and the ability to roll back if there is trouble.

That is, you are 142% certain that it will work, and if it doesn’t, you are 220% certain you can abort the attempt safely.

“We’re only eighty percent sure, and it is too much work to plan it all out.”

The thing you should never, ever do is just go ahead anyway.

First, you’re less than a hundred percent certain, which is bad. Very bad.

But the key problem is that if you actually knew what you were doing, the creation of the plan would be trivial. That you think it would take a long time is the prime indicator that you don’t really understand what you are doing. The two are the same.

If you don’t know, admit it. It is way less painful.

And then do a 2-pass.

Do it once to make sure it works, and if that goes “perfectly,” do it again for real.

Odds are that you’ll screw it up and have to repeat that first pass a bunch of times. But understand that each time you repeat it, you just saved a crazy amount of both time and pain. You just avoided a disaster.

Yes, it is longer. Yes, it is more work. But to avoid it, you have to be able to produce an incredibly accurate plan and its rollback. You didn’t, you can’t, so you’re pretty much in the 100% fail territory right now, unless you get ‘win millions in a lottery’ lucky. If you’ve got that type of horseshoe, there are more effective things for you to do with your time than upgrades.

It’s really just a variation of ‘measure twice, cut once’.

You walk through all of it, in full, as if for real, but then don’t do the very last thing to commit it. Then with that knowledge, you can obviously draft a plan, so now you can move forward.
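The walkthrough-without-committing idea can be sketched in code; the step names and gating flag here are hypothetical, and in practice the first pass runs against a copy of the system rather than the real one:

```python
# Sketch of a two-pass upgrade: every step runs in full on both passes,
# but the single irreversible act is gated behind a flag that is only
# set on the second, real pass.

def run_upgrade(steps, commit=False):
    """Walk through every step; only perform the final commit when asked."""
    log = []
    for name, action in steps:
        log.append(name)
        action()                 # do the real work of the step
    if commit:
        log.append("COMMIT")     # the one irreversible act, done last
    return log

state = {"migrated": False}
steps = [("backup", lambda: None),
         ("migrate-schema", lambda: state.update(migrated=True))]

first_pass = run_upgrade(steps, commit=False)   # the rehearsal
second_pass = run_upgrade(steps, commit=True)   # the real thing
print(first_pass, second_pass)
```

The rehearsal exercises every step for real, so by the time the commit flag is set, the plan has already been proven once.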

Note that if the first pass in a 2-pass works, you probably can’t just flip the switch right then and there. Usually, you have to let other people around you know in advance, and unless you misled them about what you were doing the first time, that notification only applies to the second, real pass. But that is fine. Doing it twice is a necessity, not a waste of time.