Friday, August 15, 2025

Bad Engineering

Lots of people believe that if you just decompose a big problem into enough little ones, it is solved.

A lot of the time, though, the decomposition isn't into complete sub-problems, just partial ones. The person identifies a sub-problem, bites off a little part of it, and pushes the rest back out.

A good example is when some data needs processing, so someone builds a middleware solution to help, but makes it configurable so that the actual processing can be injected. It effectively wraps the processing, but it doesn't include any actual processing itself, or only minimal versions of it.
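A minimal sketch of the pattern, in hypothetical Python (the class and method names here are made up for illustration): the pipeline below is configurable and looks helpful, but the actual transformation still has to be injected by whoever uses it, so the hard part remains unsolved.

    # A hypothetical "splinter" middleware: it wraps processing but contains none.
    class ProcessingPipeline:
        def __init__(self):
            self._steps = []

        def register(self, step):
            # The caller must inject the actual processing logic as a callable.
            self._steps.append(step)
            return self

        def run(self, records):
            # All the pipeline really does is loop; the real work was pushed back out.
            for step in self._steps:
                records = [step(record) for record in records]
            return records

    # The user still has to write the real processing themselves.
    pipeline = ProcessingPipeline()
    pipeline.register(lambda r: {**r, "total": r["price"] * r["qty"]})
    print(pipeline.run([{"price": 2.5, "qty": 4}]))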

Then someone comes along and needs that processing. They learn this new tech, but later realize that it doesn't really solve their problem, and now they have a lot more fragments that still need to be dealt with.

Really, it just splintered into sub-problems; it didn’t solve anything. It’s pure fragmentation, not encapsulation. It’s not really a black box if it’s just a shell to hold the actual boxes …

If you do this a lot as the basis for a system, the sheer number of moving parts will make the system extraordinarily fragile. One tiny, unfortunate change in any of the fragments and it all goes horribly wrong. Worse, that bad change is brutal to find, as it could be anywhere in any fragment. If the fragments aren't well organized and repo'd, then it is a needle in a haystack.

On top of that, each fragment's injection is different from all of the other fragments' injections. There is a lot of personality in each component configuration. So instead of having to understand the problem, you now have to understand all of these fragmented variations and how they should come together, which is often far more complex than the original problem. You think you've solved it, but really you've just made it worse.
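A rough, hypothetical sketch of how those variations pile up: two made-up components that wrap the same kind of processing, each with its own injection style, so the integrator ends up learning both and writing glue just to get them to do the same simple thing.

    # Two hypothetical fragments, each wrapping processing with its own "personality".
    class HookRunner:
        def __init__(self, on_record):        # injection via a constructor callback
            self.on_record = on_record

        def process(self, records):
            return [self.on_record(r) for r in records]

    class PluginEngine:
        def __init__(self):
            self.plugins = {}

        def add_plugin(self, name, fn):        # injection via a named plugin registry
            self.plugins[name] = fn

        def execute(self, name, records):
            return [self.plugins[name](r) for r in records]

    # The integrator must understand both configurations to get the same result.
    def clean(record):
        return record.strip().lower()

    print(HookRunner(clean).process(["  Alpha ", " Beta"]))
    engine = PluginEngine()
    engine.add_plugin("clean", clean)
    print(engine.execute("clean", ["  Alpha ", " Beta"]))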

If you look at many popular tech stacks, you see a huge amount of splinter tech dumped there.

They become popular because people think they are a shortcut around having to understand the problems; those people only realize too late that they have taken the long, treacherous road instead.

Companies like to build splinter tech because it is fast and relatively easy to get to market. You can make great marketing claims, and by the time the grunts figure it out, it is too late to toss it, so it is sticky.

Splinter tech is bad engineering. It is both bloat and obfuscation. Fragments are a big complexity multiplier. A little of it might be necessary, but it stacks up quickly. Once it is out of control, there is no easy way back.

It hurts programmers because they end up learning all these ‘component du jour’ oddities, then the industry moves on and that knowledge becomes useless. Some other group of splinter tech hackers will find a completely different and weird way of doing similar things later. So it's temporary knowledge with little intrinsic value. Most of this tech has a lifespan of ten years or less. Here today, gone tomorrow. Eventually, people wake up and realize they were duped.

If you build on tech with a short lifespan, it will cripple your work's lifespan too. The idea is not to grind out code, but to solve problems in ways that stay solved. If it decays rapidly, it is a demo, not a system. There is a huge difference between those two.

If you build on top of bad engineering, then that will define your work. It is bad by construction. You cannot usually un-bad it if you’re just a layer of light work or glue on top. Its badness percolates upwards. Your stuff only works as well as the components it was built on.
