Thursday, November 27, 2025

Software Failures

Way back, maybe 15ish years ago, when I wrote that software projects were failing just as often as they had back in the Waterfall era, lots and lots of people said I was wrong.

They insisted that the newer lightweight reactive methodologies had “fixed” all of the issues. But if you understood what was actually going wrong with those projects, you’d have known that was impossible.

So it’s nice to see a modern perspective confirm what I said then:

https://spectrum.ieee.org/it-management-software-failures

The only part of that article that I could disagree with is that it was a bit too positive towards Agile and DevOps. It did mitigate itself at the end of the paragraph, but the writing still has an overly positive marketing vibe. “Proved successfully” should have been toned down to “claimed successfully”, which is a bit different and a lot more realistic.

If you lumped in all of the software that doesn’t even pay for itself, you’d see that the situation is much worse in most enterprises. Millions of lines of fragile code that do just barely enough, while convoluting everything around them. It’s pretty ugly out there. A big digital data blender.

From my perspective, the chief problem of our industry is expectations. When non-technical people moved in to take control of development projects, they misprioritized things so badly that the work now commonly spins out of control.

If a development project is out of control, the best case is that it will produce a lame system; the worst case is that it will be a total outright failure. It is hard to come back from either consequence.

If we want to fix this, we have to change the way we are approaching the work.

First, we have to accept that software development is extremely, extremely slow. There are no silver bullets, no shortcuts, no vibing, no easy or cheap ways out. It is a lot of work, it is tedious work, and it needs to be done carefully and slowly.

Over the 35 years of my career, the pace just keeps getting faster, but with every jump up, the quality keeps getting worse. Since you need a minimal level of quality for the result not to be lame or a failure, you need a minimal amount of time to get there. Try to skimp on that, and it falls apart.

Hacking might be a performance art, but programming is never that. It is a slow, intense slog bordering on engineering. It takes time.

Time to design, time to ramp up, time to train, time to learn, time to code, time to test. Time, time, time.

If the problem is that you are trying to race through the work in order to keep the budget under control, the problem is that you are racing through the work. So, slow it down. Simple fix.

For any serious system, it takes years and years to reach maturity. Trying to slam part of it out in six months, then, is more than a bit crazy. Libraries and frameworks don’t save you. SAAS products don’t save you. Being overly reactive and loose with lightweight methodologies doesn’t save you either, and can actually fuel the problems, making things worse, not better.

If you want your software to work properly, you have to put in the effort to make it work properly. That takes time.

The other big issue is that the group of people you assemble to build a big system matters a whole lot. A huge amount.

Programmers are not easily replaceable cogs. An all-junior team may be far less expensive, but it is barely functional and a massive risk. The resulting system is already in big trouble before any code is written.

The people you put in charge of the development work really matter. They need to be skilled, experienced, and understand how to navigate some pretty complicated and difficult tradeoffs. Without that type of background, the work gets lost and then spins out of control.

It’s very common to see too much focus dumped on trivial visible interface issues while the underlying mechanics are hopelessly broken. It’s like worrying about the cup holder in your car when the engine block is cracked. You need someone who knows this, has lived this, and can avoid this.

As well, enough of the team needs significant experience. Experience is what keeps us from making a mess, and a mess is the easiest thing to create with software. So, a gifted team of developers, a mix of juniors and seniors, led by experience, is pretty much a prerequisite for keeping the risks under control.

Software development has always been about people, what they know, and what they can build. They are the most important resource in building and running large software systems. If you don’t have enough skilled people, nothing else matters. No methodology, process, or paperwork can save you. You lack the talent to get it done. Simple.

That’s mostly the roots of our problems. Not enough time, and not taking the staffing issues seriously enough. Fix those two, and most development gets back on the rails. Nurture them, and most things built by a strong shop are pretty good. From there, you can decide how high the quality should be, or how to streamline the work, or strategize about direction, but without that concrete foundation, you are lucky if any of it runs at all, and if it does, lucky if the crashes aren’t too epic.

Thursday, November 20, 2025

Integrations

There are two primary ways to integrate independent software components; we’ll call them ‘on-top’ and ‘underneath’.

On top means that the output from the first piece of software goes in from above to trigger the functionality of the second piece of software.

This works really well if there is a generalized third piece of software that acts as a medium.

This is the strength of the Unix philosophy. There is a ‘shell’ on top, which is used to call a lot of smaller, well-refined commands underneath. The output of any one command goes up to the shell, and then down again to any other command. The format is unstructured or at least semi-structured text. Each command CLI takes its input from stdin and command line arguments, then puts its output to stdout, and splits off any errors into stderr. The ‘integration’ between these commands is ‘on top’ of the CLI in the shell. They can all pipe data to each other.

This proved to be an extremely powerful and relatively consistent way of integrating all of these small parts together in a flexible way.
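
To make that concrete, here is a rough Python sketch of the same idea, with the script standing in for the shell as the mediating third party. The two commands (ls and grep, picked just as examples) know nothing about each other; plain text is the shared medium:

import subprocess

# The mediator: output from the first command goes 'up' to this
# script, then back 'down' into the second command as plain text.
producer = subprocess.run(
    ["ls", "-l"], capture_output=True, text=True, check=True
)
consumer = subprocess.run(
    ["grep", "txt"], input=producer.stdout,
    capture_output=True, text=True
)
print(consumer.stdout, end="")

It is the moral equivalent of ls -l | grep txt; the commands stay independent, and the mediator owns all of the wiring.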

Underneath integrations are the opposite. The first piece of software keeps its own configuration data for the second piece and calls it directly. There may be no third party, although some implementations of this ironically spin up a shell underneath, which then spins up the other command. Sockets are also commonly used to communicate, but they depend on the second command already being up and running and listening on the necessary port, so they are less deterministic.

A lot of modern software prefers the second type of integration, mostly because it is easier for programmers to implement it. They just keep an arbitrary collection of data in the configuration, and then start or call the other software with that configuration.
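
A rough sketch of that shape, using a hypothetical report-tool binary with made-up flags, might look like:

import subprocess

# An 'underneath' integration: this program hardwires the knowledge
# needed to call the second program directly. The binary path and
# flags here are hypothetical.
REPORT_TOOL = "/usr/local/bin/report-tool"
REPORT_FLAGS = ["--format", "pdf", "--quiet"]

def generate_report(input_path: str) -> None:
    # The caller must know the tool's location, flags, and calling
    # conventions; that knowledge is buried inside this program.
    subprocess.run([REPORT_TOOL, *REPORT_FLAGS, input_path], check=True)

There is no neutral medium in the middle; every call site like this carries its own private copy of the wiring.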

The problem is that even if the configuration itself is flexible, this is still a ‘hardwired’ integration. The first software must include enough specific code to call the second one. The second one might have a generic API or CLI, but if the first needs the output, it has to parse whatever it gets back.

If the interaction is bi-directional and a long-running protocol, this makes a lot of sense. Two programs can establish a connection, get agreement on the specifics, and then communicate back and forth as needed. The downside is that both programs need to be modified, and they need to stay in sync. Communication protocols can be a little tricky to write, but are very well understood.
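
As a bare-bones illustration, here is a toy bi-directional exchange over a Python socket; the greeting, port, and messages are all made up, and real protocols also have to deal with framing and partial reads, which is exactly where the tricky parts live:

import socket, threading

# Bind and listen first, so the client below cannot race ahead.
srv = socket.create_server(("127.0.0.1", 5555))

def serve():
    conn, _ = srv.accept()
    with conn:
        conn.sendall(b"HELLO v1\n")    # agree on the specifics
        data = conn.recv(1024)         # then talk back and forth
        conn.sendall(b"ACK " + data)

threading.Thread(target=serve, daemon=True).start()

with socket.create_connection(("127.0.0.1", 5555)) as c:
    print(c.recv(1024).decode().strip())   # HELLO v1
    c.sendall(b"ping")
    print(c.recv(1024).decode().strip())   # ACK ping

Both ends have to be written, deployed, and kept in sync, which is the cost of going underneath.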

But this makes a lot less sense if the first program just needs to occasionally trigger some ‘functionality’ in the second one. It’s a lot of work for an infrequent and often time-insensitive handoff. It is better to get the results out of the program and back up into some other medium, where they can be viewed and tracked.

The top-down approach is considerably more flexible, and, depending on the third party, it is far easier to diagnose problems. You can get a high-level log of the interaction, instead of having to stitch together parts of a bunch of scattered logs. Identifying where a problem originated in a pile of underneath integrations is a real nightmare.

Messaging backbones act as third parties as well. If they are transaction-oriented and bi-directional, then they are a powerful medium for different software to integrate. They usually define a standard format for communication and data. Unfortunately, they are often vendor-specific, very expensive, locked in, and can have short lifespans.

On-top integrations can be a little more expensive in resources. They are slower, use more CPU, and it costs more to format everything into a common representation and parse it back out than to hand over the specifics directly. So they are not preferred for large-scale, high-performance systems. But they are better for small or infrequent interactions.

However, on-top integrations also require a lot more cognitive effort and pre-planning. You have to carefully craft the mechanics to fit well into the medium. You essentially need a ‘philosophy’, then a bunch of implementations. You don’t just randomly evolve your way into them.

Underneath integrations can be quite fragile. When there are a lot of them, they are heavily fragmented; the configurations are scattered all over the place. If there are more than 3 of them chained together, it can get quite hairy to set them up and keep them running. Without some intensive tracking, unnoticed tiny changes in one place can manifest later as larger mysterious issues. It is also quite a bit harder to reason about how the entire thing works, which causes unhappy surprises. Equally problematic is that each integration is very different, and all of these inconsistencies and different idioms increase the likelihood of bugs.

As an industry, we should generally prefer on-top integrations. They proved powerful and reliable on Unix systems for decades. We just need to put more effort into finding expressive, generalized data-passing mechanisms. Most of the existing data formats are far too optimized for limited sub-cases or are too awkward to implement correctly. There are hundreds of failed attempts. If we are going to continue to build tiny, independent, distributed pieces, we have to work really hard to avoid fragmentation if we want them to be reliable. Otherwise, they are just complexity bombs waiting to go off.

We’ll still need underneath integrations too, but really only for bi-directional, extensive, high-speed, or very specific communication. These should be the exception -- an optimization -- rather than the rule. They are easier to implement, but they are also less effective and a dangerous complexity multiplier.

Thursday, November 13, 2025

Unknown unknowns

If I decided to build a house all on my own, I am pretty sure I would face lots of unexpected problems.

I am comfortable building something like a fence or a deck, but those skills and the knowledge I gained using them are nowhere close to what it takes to build a house.

What does it take to build a house? I have no clue. I can look at already built houses, and I can watch videos of people doing some of the work, but that isn’t even close to enough information to empower me to just go off and do it on my own.

If I tried, I would surely mess up.

That might be fine if I were building a little shed out back to store gardening tools. Whatever mess I created would probably not result in injuries to people. It’s a very slim possibility.

But knowing that there are a huge number of unknown unknowns out there, I would be more than foolish to start advertising myself as a house builder, and even sillier to take contracts to build houses.

If a building came tumbling down late one night, it could very likely kill its occupants. That is a lot of unnecessary death and mayhem.

Fortunately, for my part of the world, there are plenty of regulators out there with building codes that would prevent me from making such a dangerous mistake.

The building codes are usually specific about how to do things, but they were initially derived from real issues that explain why they are necessary.

If I were to carefully go through the codes, I am sure that their existence -- if I pondered hard enough -- would shed light on some of those unknown unknowns that I am missing.

There might be something specific about roof construction that was driven by roofs needing to withstand a crazy amount of weight from snow. The code mandates the fix, but the reason for seemingly going overboard on the tolerances could be inferred from the existence of the code itself. “Sometimes there is a lot of extra weight on roofs”.

Rain, wind, snow, earthquakes, tsunamis, etc. There are a series of low-frequency events that need to be factored into any construction. They don’t occur often, but the roof needs to survive if and when they manifest themselves.

Obviously, it took a long time and a lot of effort over decades, if not centuries, to build up these building codes. But their existence is important. In a sense, they separate out the novices from the experts.

If I tried to build a house without reading or understanding them, it would be obvious to anyone with a deeper understanding that I was just not paying attention to the right areas of work. The foundations are floppy or missing, the walls can’t hold up even a basic roof, and the roof will cave in under the lightest of loads. The nails are too thin; they’ll snap when the building is sheared. It would be endless, really, and since I don’t know how to build a house, I certainly don’t know all of the forces and situations that would cause my work to fail.

I’ve always thought that it was pretty obvious that software needs building codes as well.

I can’t count the number of times that I dipped into some existing software project only to find that problems I consider very obvious, given my experience, were completely and totally ignored. And then, once the impending disasters manifested themselves, everybody around me just said, “Hey, that is unexpected”, when it was totally expected. I’ve been around the block; I knew it was coming.

Worse is that whenever I tried to forewarn them, they usually didn’t want to listen. They treated me as some old paranoid dude, and went happily right over the cliff.

It gets so boring having to say “I told you so” that at some point in my career, I just stopped doing it. I stuck with “you can lead a horse to water, but you cannot make it drink” instead.

And that is where building codes for software come in. As a new developer on an existing project, I often don’t carry much weight, but if there were an official reference of building codes that covered the exact same thing, the disaster would be easy to prevent. “You’ve violated code 4.3.2; it will cause a severe outage one day” is better than me trying to explain why the novice blog posts they read that said it was a good idea are so horribly wrong.

Software development is choked with so many myths and inaccuracies that wherever you turn, you bump into something false; it’s like trying to run quickly through a papier-mâché maze without destroying it.

We kinda did this in the past with “best practices”, but it was informal and often got co-opted by dubious people with questionable agendas. I think we need to try again. This time, it is a bunch of “specific building codes” that are tightly versioned. They start by listing out strict ‘must’ rules, then maybe some situationally optional ones, and an appendix with the justifications.

Oddly, these are very hard to write, and harder to keep stack, vendor, and paradigm neutral. We should probably start by being very specific, then gradually consolidate those codes into broader, more general ones.

It would look kind of like:

1.1.1 All variable names must be self-describing and synchronized with any relevant outside domain or technical terminology. They must not include any cryptic or encoded components.

1.2.1 All function names must be self-describing and must clearly indicate the intent and usability of the code that they encapsulate. They must not include any cryptic or encoded components, unless mandated by the language or usage paradigm.

That way, if you came across a function called FooHandler12Ptr, you could easily say it was an obvious violation of 1.2.1 and file it as a bug or a code review fix.
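
Part of the appeal is that codes written this objectively can be checked mechanically. A crude Python sketch of enforcing 1.2.1 -- assuming a simple heuristic that flags trailing digits and type-encoded suffixes, which a real checker would extend with the project’s domain terminology -- could be:

import re

# Flag names ending in digits or in common type-encoded suffixes.
# The pattern is a stand-in; a real checker would be much broader.
CRYPTIC_SUFFIX = re.compile(r"(\d+|Ptr|Tmp|Obj)$")

def check_function_name(name: str) -> list[str]:
    violations = []
    if CRYPTIC_SUFFIX.search(name):
        violations.append(
            f"1.2.1 violation: '{name}' has a cryptic or encoded component"
        )
    return violations

print(check_function_name("FooHandler12Ptr"))   # flagged
print(check_function_name("generate_report"))   # clean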

In the past, I have worked for a few organizations that tried to do this. Some were successful; some failed miserably. But I think that in all cases, there was too much personality and opinion buried in their efforts. So, the key part here is that each and every code must be truly objective. Almost in a mathematical sense, they should all be ‘obviously true’ and not need to be broken down any further.

I do know that, given the nature of humanity, there is at least one programmer out there in this wide world who currently believes that ‘FooHandler12Ptr’ isn’t just a good name, it should also be considered best practice. For each code, there needs to be an appendix, and that is where the arguments and justifications should rest. It is for those people adventurous enough to want to pursue arguing against the rules. There are plenty of romanticized opinions and variations on goals; our technical discussions quickly get lost in very non-objective rationales. That should be expected, and the remedy is for people with esoteric views to simply produce their own esoteric building codes. The more, the merrier.

Of course, if we do this, it will eat up some time, both to write up the codes and to enforce them. The one ever-present truth of most programming is that there isn’t even close to enough time to spare, and most managements are chronically impatient. So, we sell adherence to the codes as a ‘plus’, particularly for commercial products or services. “We are XXX 3.2.1 compliant” becomes a means of really asserting that the software is actually good enough for its intended usage. In an age where most software isn’t good enough, at some point this will become a competitive advantage, and a bit later a necessity. We just need a few products to go there first, and the rest will have to follow.

Thursday, November 6, 2025

Intent

Recently, I was reading some AI-generated code.

At a high level, it looked like what I would expect for the work it was trying to do, but once I dug into the details, it was kind of bizarre.

It was code that one might expect from a novice programmer who was struggling with programming. Its intent was muddled. Its author was clearly confused.

It does help, though, for a discussion of readability.

Really good code just shows you what it is going to do, really easily. It makes it obvious. You don’t need to think too hard or work through the specifics. The code says it is going to do X, and the code does that X, as you would expect. Straightforward, and totally boring, as all good code should be.

In thinking about that, it all comes down to intentions. “What did the programmer intend the code to do?” Is it some application code that moves data from the database and back to the GUI again? Does it calculate some domain metric? Is it trying to span some computation over a large number of resources?

To make it readable, the programmer has to make their intent clear.

The most obvious thing is that if there is a function called GetDataFromFile, which gets data from the database, you know the stated intent is wrong, or obscured, or messed up. Shouldn’t the data come from a file? Why is it going to a database underneath? Did they set it up one way and duct-tape it later, without bothering to properly update the name?

If the code is lying about what it intends to do, it is not readable. That’s an easy point, in that if you have to expend some cognitive effort to remember all of the places where the code is lying to you, that is just unnecessary friction getting in your way. Get enough misdirection in the code, and it is totally useless, even if it compiles. Classic spaghetti.

Programming is construction, but it is also, unfortunately, a performance art. It isn’t enough to just get the code in place; you also have to keep it going, release after release.

Intent also clarifies single responsibility issues.

If the intent of a single function is to “get data from the db”, “... AND to reformat it”, “... AND to check it for bad data”, “... AND to ....” then it’s clear that the function is not doing just one thing, it is doing a bunch of them.

You’d need a higher function that calls all of the steps: “get”, “reformat”, “validate”, etc., as functions. Its name would indicate that it is both grabbing the data and applying some cleanup and/or verification to it. Raw work and derived work are very different beasts.
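
A tiny Python sketch of that layering, with hypothetical names, might be:

# Each step does exactly one thing; the top function's name admits
# that it both gets the data AND cleans it up.
def get_rows(path: str) -> list[str]:
    with open(path) as f:
        return f.readlines()

def reformat_rows(rows: list[str]) -> list[str]:
    return [row.strip().lower() for row in rows]

def validate_rows(rows: list[str]) -> list[str]:
    return [row for row in rows if row]    # drop empty rows

def load_clean_rows(path: str) -> list[str]:
    return validate_rows(reformat_rows(get_rows(path)))

Each piece can be read, tested, and fixed on its own, and the top function reads like a table of contents.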

Programmers hate layering these days, but muddling a bunch of different things into one giant function not only increases the likelihood of bugs, it also makes it hard for anyone else to understand what is happening. Nobody ever means to write DoHalfOfTheWorkInSomeBizarreInterlacedOrder, but that should really be a far more common function name in a lot of codebases out there. The intent of the coder was to avoid typing out functions so as to avoid having to name them. What the code itself was doing was forgotten.

If you decompose the code nicely into decent bite-sized chunks and give each chunk a rational, descriptive name, it is pretty easy to follow what the code is trying to do. If it is layered, then you only need to descend into the depths if there is an underlying bug or problem. You can quickly assert that GenerateProgressReport does the 5 steps that you’d expect, and move on. That is readable, easy to understand, and you probably don’t need to go any deeper. You now know what those 5 steps are. You need that capability in huge systems; there is often more code there than you could read in a lifetime. If you always have to see it with all of the high and low steps intertwined, it cripples your ability to write complex or correct code.

In OO languages, you can get it even nicer: progress.report.generate() is nearly self-documenting. The nouns are stacked in the way you’d expect them, even though “progress” is really a nounified verb. If the system were running overnight batches, and the users occasionally wanted to check in on how it was going, that is where you’d expect to find the steps involved. So, if there was a glitch in the progress report, you pretty much know where that has to be located.
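
A toy Python version of that stacking, with hypothetical classes, shows how the structure itself narrows the search:

class ProgressReport:
    def generate(self) -> str:
        return "overnight batch: 42 of 100 jobs complete"

class Progress:
    def __init__(self):
        self.report = ProgressReport()

progress = Progress()
print(progress.report.generate())

If the report is wrong, the problem pretty much has to live under ProgressReport.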

A long, long time ago, in the Precambrian days of OO, I remember watching one extremely gifted coder in action. He was working with very complicated graphic visualizations. He had structured his code so well that, as he’d see a problem on the screen while testing, he pretty much knew exactly which line of it was wrong. That kind of code is super readable, and the readability has an extra property, making it highly debuggable as well. The bug nicely tells you exactly where in the code you have a problem. That is a very desirable property.

His intent was to render this complicated stuff; his code was written in a way that made it easy to know whether it was doing the right thing or not. This let him quickly move forward with his work. If his code had been badly named spaghetti, it would have taken several lifetimes to knock out the bugs. Anybody who does not think those readability and debuggability properties are necessary doesn’t realize how much more work they’ve turned it into, how much time they are wasting.

If the intent of the code is obscured or muddled, it limits the value of the code. That’s why we have comments: for adding the extra commentary that the code itself cannot express. Run-once code, even if it works, is too expensive in time to ever pay for itself. You don’t want to keep writing nearly identical pieces of code each time, when you could just solve it once in a slightly generalized fashion and then move on to other, larger pieces of work.

It takes a bit of skill to let intent shine through properly. It isn’t something intuitive. The more code from other people you read, the more you learn which mistakes hurt the readability. It’s not always obvious, and it certainly changes a bit as you get more and more experience.

You might, for example, use a clear idiom that you have often seen in the past, but it could confuse less experienced readers. That implies that you have to stick to the idioms and conventions that match the type of code you are writing. Simple application idioms for the application’s code, and more intricate systems programming idioms for the complex or low-level stuff. If you shove some obscure functional programming idiom into basic application code, it will be hard for other people to read. There is a more ‘application coding’ way of expressing it.

It is a lot like writing. You have to know who you are coding for and consider your audience. That gives you the most readable code for the work you are doing. It is necessary these days because most reasonably sized coding projects are a team sport now.

It’s worth noting that deliberately hiding your intent and the functionality of the code in an effort to get ‘job security’ is almost always unethical. Maybe if it’s some code that you and a small group of people will absolutely maintain for its entire lifetime, then it might make sense, but that is never actually the case. More often, we write stuff, then move on to writing other stuff.

Friday, October 31, 2025

The Structure of Data

A single point of data -- one ‘variable’ -- isn’t really valuable.

If you have the same variable vibrating over time, then it might give you an indication of its future behavior. We like to call these ‘timeseries’.

You can clump together a bunch of similar variables into a ‘composite variable’. You can mix and match the types; it makes a nexus point within a bunch of arbitrary dimensions.

If you have a group of different types of variables, such as some identifying traits about a specific person, then you can zero in on uniquely identifying one instance of that group and track it over time. You have a ‘key’ to follow it, you know where it has been. You can have multiple different types of keys for the same thing, so long as they are not ambiguous.

You might want to build up a ‘set’ of different things that you are following. There is no real way to order them, but you’d like them to stay together, always. The more data you can bring together, the more value you have collected.

If you can differentiate on at least one given dimension, you can keep them all in an ordered ‘list’. Then you can pick out the first or last ones over the others.

Sometimes things pile up, with one thing above a few others. A few layers of that and we get a ‘tree’. It tends to be how we arrange ourselves socially, but it also works for breaking down categories into subcategories or combining them back up again.

Once in a while, it is a messy tree. The underlying subcategories don’t fit uniquely in one place. That is a ‘directed acyclic graph’ (DAG), which also lends itself to some optimizing forms of memoization.
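
A quick Python sketch, with a made-up category DAG, shows why memoization falls out naturally; the shared subtree only ever gets computed once:

from functools import cache

categories = {
    "electronics": ["computers", "phones"],
    "office": ["computers", "furniture"],   # shared child: a DAG, not a tree
    "computers": ["laptops", "desktops"],
}

@cache
def count_leaves(node: str) -> int:
    children = categories.get(node, [])
    return 1 if not children else sum(count_leaves(c) for c in children)

print(count_leaves("electronics"), count_leaves("office"))   # 'computers' counted once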

When there is no hierarchical order to the whole thing, it is just a ‘graph’. It’s a great way to collect things, but the flexibility means it can be dangerously expensive sometimes.

You can impose some flow, making the binary edges into directional ones. It’s a form of embedding traits into the structure itself.

But the limits of an edge joining exactly two entries may be too imposing, so you could allow edges that connect more than two, which is called a ‘hypergraph’. These are rare, but very powerful.

We sometimes use the term ‘entity’ to refer to our main composite variables. They relate to each other within the confines of these other structures, although we look at them slightly differently in terms of, say, 1-to-N relationships, where both sides are effectively wrapped in sets or lists. It forms some expressive composite structures.
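
Pulling a few of those pieces together in Python, with a made-up ‘person’ entity:

from dataclasses import dataclass

@dataclass(frozen=True)
class Person:            # a composite variable, our entity
    tax_id: str          # one possible unique key
    name: str
    city: str

people: dict[str, Person] = {}              # entities indexed by key
p = Person("123-456-789", "Ada", "London")
people[p.tax_id] = p                        # follow it wherever it goes

weights: list[tuple[str, float]] = [        # a timeseries for one variable
    ("2025-01-01", 81.2),
    ("2025-06-01", 80.7),
]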

You can loosely or tightly structure data as you collect it. Loose works if you are unsure about what you are collecting; it is flexible, but costly. Tight tends to be considerably more defensive, with fewer bugs and better error handling.

It’s important not to collect garbage; it has no inherent value, and it causes painful ‘noise’ that makes it harder to understand the real data.

The first thing to do when writing any code is to figure out all of the entities needed and to make sure their structures are well understood. Know your data, or suffer greatly from the code spiraling out of control. Structure tends to get frozen far too quickly; just duct-taping over mistakes leads to massive friction and considerable wasted effort. If you misunderstood the structure, admit it and fix the structure first, then the code on top.