Thursday, April 9, 2026

The Quality Bars

For any given software development effort, there are a number of ‘bars’ that you have to clear, all of which relate to quality.

At the bottom, there is a minimum quality bar. If the code behaves worse than this, the project will be immediately cancelled. Someone who is watching the money will write the whole thing off as incompetence. To survive, you need to do better.

A little higher is the acceptable quality bar. That is where both management and the users may not be happy about the quality, but the project will definitely keep going. It may face increased scrutiny, and there will probably be lots of drama.

Above that is the reasonable quality bar. The code does what it needs to, in the way people expect it to behave. There are bugs, of course, but none of them are particularly embarrassing. Most of them exist for a short time, then are corrected. The number of known long-term outstanding ones is only one or two digits. There are probably several places in the code where people think “we should have ...”

Then we get into the good quality bar. Bugs are rare; there are very few regrets. People like using the code; it will stay around for a long, long time. Its weakness isn’t what’s already there; it is making sure future changes don’t negate that value.

There is a great quality bar too. The code is solid, dependable, and can be used as a backbone for all sorts of other stuff. It’s crafted with a level of sophistication that keeps making it useful even for surprising circumstances. People can browse the code and get an immediate sense of what it does and why it works so well.

Above that, there is an excellent quality bar, where the code literally has no known defects. It was meticulously crafted for a very clear purpose and is nearly guaranteed to do exactly that, and nothing but that. It’s the type of code that lives can depend on.

There is a theoretical ‘perfect’ quality bar, too, but it is unreachable. It’s asymptotic.

Getting to the next bar is usually at least 10x more work than getting to the one below it; the scale is clearly exponential. If it costs 1 just to get to minimum, then it’s 10 for acceptable, and 100 for reasonable. Roughly. This occurs because the higher bars need people to continually revisit and review the effort and aggressively refactor it, over and over again. Code that you splat out in a few hours is usually just minimum quality. Maybe if you’ve written the same thing a few times already, you can start at a higher bar, but that’s unreliable. To get up to those really high bars means having more than one author; it has to be a group of people with an intense focus, all working in sync with up-to-date collective knowledge. Excellent code comes with a guarantee that it will not deviate from expectations, one that you can rely on, so it’s far more than just a few lines of code.
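As a rough back-of-the-envelope sketch of that scale (the 10x multiplier is only a loose rule of thumb, not a measured constant), the relative costs look something like this:

# Rough relative effort to reach each quality bar, assuming the
# hand-wavy "each bar is ~10x the one below it" rule of thumb.
BARS = ["minimum", "acceptable", "reasonable", "good", "great", "excellent"]

for level, bar in enumerate(BARS):
    print(f"{bar:>10}: roughly {10 ** level}x the cost of minimum")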

A great deal of the code out there in libraries and frameworks falls far short of being reasonable. You might not be affected by that, as it’s often code that is sitting idle in little-used features. Still, you have to see it as a landmine waiting to go off when someone tries to push the boundaries of its usage. Code that has been battle tested for decades can generally get near the good bar, but there is always a chance that some future version will fall way, way back.

The overall quality of a codebase is really its lowest bar. So if someone splats some junk into an excellent project, and that junk is ever triggered, it can pull everything else down below acceptable. This is the Achilles heel of plugins, as a few poor ones getting popular can cause a lot of damage to perceived quality.

Thursday, April 2, 2026

Outlines

A software system is a finite resource.

For some people, this might be a surprising statement. They might feel that, since it can store a massive amount of data and talk to any other system in the world, it is a lot closer to infinite.

At any given time, there is a specific quantity of hardware, wires, and electricity. If more of these resources are available than are currently being consumed, they are still finite. It’s space to grow, but still limited.

Anything that operates within this finite boundary is in itself finite. Sure, it is always changing, usually growing, but despite its massive and somewhat unimaginable size, it is still finite.

If, even in its immense size and complexity, all software is finite, then any one given system within this is also finite.

A software system has fixed boundaries. It does exactly one set of things. Parts of the code may be dynamic, so they have huge expressive capabilities, but there are still very fixed boundaries on exactly how large those are. The permutations may outnumber all of the particles in the known universe, but there is still a limit.

Time might appear to change that, but the period of time for which any given piece of software will be able to run is also finite. Someday it will come to an end. The hardware will disintegrate, the sun may supernova, or humanity may just blow itself up. More likely, though, it will just get upgraded and essentially become something else.

Given all of this, any given software system in existence, or as imagined, has a very sharp boundary that defines it. In that sense, since it is composed of code and configurations, those precisely dictate what it can and cannot do.

You can go outside of this boundary and write tests that confirm 100% of these lines. It’s just that, given the ability to have dynamic code, it may take far, far longer to precisely test those behaviours than the lifetime of the system itself. Still, even though it is vague, due to time and complexity, the tests form an encapsulating boundary on the outside of the system.

The same is true for specifications. You could precisely specify any and all behaviours within the software system, but to get it entirely precise, the specifications would have to have a direct one-to-one correspondence with the lines of code and configurations. That would effectively make the specifications an Nth generation language that is directly compilable into a 3rd-generation one, or even lower. Because of this, some people equate precise specifications to the code itself, seeing the code as just a specification for the runtime instances of the software.

An exact specification is almost a proof of correctness. I suspect that proofs need a bit more in that they are driven by the larger context of the expectations. They are also generally only applied to algorithms, not the system as a whole.

So, all of this gives us a bunch of different ways to draw the outlines of a system.

On top of this, there are plenty of vague ways to draw or imagine them as well. Requirements and user stories are popular means. One could also perceive the system through its dual, which is the arrangement and flow of its data. You can more easily describe an inventory system, for example, by the data it holds, the way that data is interacted with, and how it flows from and to other systems. While the dual in this case is more abstract, it is also considerably simpler than specifying the functionality or the code.
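As a minimal sketch of that data-centric outline (the names and fields are hypothetical, purely to illustrate the idea), an inventory system could be described by little more than the data it holds, how it is touched, and where it flows:

# A hypothetical, data-first outline of an inventory system: what it holds,
# how that data is touched, and where it flows. The names are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class Item:
    sku: str            # what we track
    description: str
    unit: str           # e.g. "each", "kg"

@dataclass
class StockLevel:
    sku: str
    warehouse: str
    quantity: int       # derived from movements, never edited directly

@dataclass
class Movement:
    sku: str
    warehouse: str
    delta: int          # +receipt, -shipment, +/- adjustment
    occurred_on: date
    source: str         # which upstream system sent it (purchasing, sales, ...)

# Interactions: receive, ship, adjust, count.
# Flows: purchase orders arrive from procurement; shipments flow to fulfilment;
# levels are reported out to finance. That is most of the outline.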

Another way is to see the system by how people interact with it, essentially as a set of features that people use to solve their problems. If those features are effectively normalized, it too is a simpler representation, but if they have instead been arbitrarily evolved over years of releases, they probably have become convoluted.

One key point with all of these different outline types is that some are much better at describing certain parts of the expectations for the behaviour than others. You might need a proof of correctness for some tiny critical parts, but a rather vague outline is suitable for everything else.

No one representation fits everything perfectly, which even applies to the code. If the code was constructed over a long period of time, by people without strong habits for keeping it organized, it has likely degenerated into spaghetti and isn’t easily relatable to the other outlines. People may have changed their expectations and grown accustomed to the behaviours, but that still doesn’t make it the best possible representation for what they needed.

In practice, the best choice is often to half-do a bunch of different types of outlines, then set it all running and repair the obvious deficiencies. While this obviously doesn’t ensure high quality or rigorous mapping to the expectations, it is likely the cheapest form of creating complex software systems. It’s just that it is also extremely high-risk and prone to failure.

Thursday, March 26, 2026

Cogtastic

Since I started programming decades ago, there has been one seriously annoying trend that just does not seem to want to go away.

If you work for an enterprise, banging away at their internal systems, the management above you really, really, really wants you to just sit there, do your job, and not cause any problems. They want you to be an obedient little cog. Just a part of the machine that they direct to satisfy their goals.

The utter failure with that is that you are in the trenches. And to build even halfway decent stuff, you have to understand a huge amount about the technology and the domain. If you mix in some empathy for the actual users, then the results are usually pretty good. You’ve solved real problems for real people, and they are usually thankful.

But the managers sitting way up high on the hill are disconnected from all of that. They don’t care about users or technology; they care about budgets, politics, and promotions. That’s not a surprise; that is the world they are forced to live in.

But it is a fly in the ointment, since their games are entirely disconnected from the users' lives. And you’re caught in the middle.

In the best circumstances, management enables you to find an appropriate balance between all sides and still keep mostly to some crazy artificial schedule. They trust you, and they listen to your concerns. You’re not a cog, you’re a critical part of the construction process.

But there are very few higher-up managers who actually subscribe to that perspective. Most, instead, believe that they were anointed as the boss and that their will supersedes all other concerns. These are the people who want you to be a quiet, obedient cog. “Just shut up and do your job.”

From my direct experience, it has always been a disaster. It was what was failing in the 70s, 80s, 90s, and the early turn of the century with software development. It wasn’t “Waterfall” that steered the work away from where it needed to be; it was the people in charge who mindlessly went off in the wrong direction. If they had been paying attention, they would have been course correcting as needed, following whatever chaotic changes tumbled out of the fray. In those days when things failed, they failed at the top, but somehow it was the weight of the process that got blamed.

From where I have often sat, the “root problem” has and will always be a lack of understanding. The people driving and working on the development project get disconnected from the people who will ultimately use the output. If you don’t know what the user’s problem really is, how can you build any type of solution that will help them? You have to dig into that; it’s not optional.

The desire for the development teams to just be mindless cogs comes directly from that cogtastic viewpoint. There is some type of made-up schedule, but the programmers keep surfacing ignored or forgotten issues. If management indulges them, then the schedule gets blown. Instead of blaming the analysis or a lack of understanding, it is just easier to suppress the feedback and keep on going as planned, hoping that some good luck happens somewhere along the way.

I’m sure there are lots of examples out there of out-of-control projects getting saved at the last minute by some Wesley who manages to Save the Ship and thus avoid impending disaster. That’s great, but highly unreliable, and the people who do this never get any of the credit they deserve for avoiding the original doomed fate. It’s only when they are gone that people realize that they were quietly avoiding the cliffs, while staying nearly on course.

So, you get this situation where someone is “in charge,” and they want their will to be manifested as and how they decide, but they do not truly have the right objectives to achieve success.

This takes us back to AI. Instead of being some cool new tool to lift the quality of the work we're doing, it seeks to turn the developers themselves into those disconnected managers. So they generate whackloads of code which they don’t understand, and don’t care about, because they are trying to impress the higher-ups with their “velocity”. What is actually happening gets ignored, and what is really needed gets ignored, too. Now, instead of them being the cog, it is some AI agent who fills that role, and the whole cycle repeats.

The act of engineering some large complex system involves both understanding how it works and why it is necessary. Nothing can escape that. In the Waterfall days, people blamed the heavy processes, but making them super light and hugely reactive did nothing to fix the problems. Now, we’ll see the same effect play out again. The developers will just be agent managers, and the auto-cogs below them will pile up more of the wrong stuff. You still have the wrong stuff even if you get it faster, and there is a lot more of it to add to the mudball, which is worse.

The moral of the story is that people just don’t seem to be learning the same lesson that keeps rearing its ugly head over and over again. A vague notion of what you want is not enough to build it. The real work is in taking that loose idea and understanding it at a very deep level so that you can then actuate it into the large number of parts you need that all work together as expected.

The people in the middle who pull off this feat are not and will never be cogs. What’s in their heads, and what they know, is the essence of being able to pull this thing out of the ether and into reality. Their job is to understand it all and then find a way to turn that understanding into physical bits. They are good at it if what is produced matches everyone’s expectations: the users, management, technologists, etc. There is this codebase that, when built and deployed, becomes a real solution to all of these problems. That codebase, and its boundaries as code, a specification, tests, or proofs, is just a manifestation of the software developers' actual understanding. If, in the middle of doing that, it clicks in that part of the ask is too ambiguous, way off the mark, impossible, or just crazy, it’s the developer's understanding that is the mission-critical part of success. Address that, and it probably works. Ignore that, and it definitely fails.

Thursday, March 19, 2026

Interviews

I was crafted by the Waterloo co-op program in the late eighties. Part of that experience was a crazy large number of interviews, so I got pretty good at them. Since then, I’ve worked for over a dozen companies.

For one interview (for my all-time favourite job), I was really young, so I got asked rapid tech questions and had to bring printouts of my older code with me. That was fine; it was a systems programming job and really competitive, so lots of people wanted it.

For another interview, at the turn of the century, I just went for dinner and drinks. Definitely my favourite interview.

I’ve had a couple of interviews in coffee shops; they were usually successful. The casual atmosphere really helps to connect.

Once I got ganged up on. I think it was six of them crammed into a little office, but the questions were product, process, and feature-related, not coding, so it was fine.

Often, I was at the interview because a friend I had worked with in the past was trying to pull me in. That generally made them go pretty easy on me.

I’ve had some bad interviews, though. Usually, when applying for advertised jobs.

Once, I showed up and they put me in a big room with a bunch of other people and gave us a written test. I took it, sat down, and just signed my name. Then I got up, handed it in, and left. No way I was going to work for them.

Another time, I was grilled on tech that I told them I hadn’t used for twenty years. The kid interviewing me was annoyed that I couldn’t remember some esoterica. Seriously?

One time, they gave me an online coding test. An editor embedded in an online chat. The first question was okay, but for the second question, they wanted me to correctly code something huge. I explained the actual theory behind it and why it wasn’t trivial, but they didn’t understand and told me just to grind it out in a little bit of ‘approximate’ code. I hemmed and hawed for a while, then said I wasn’t going to finish. They told me to try anyway, so I sat there quietly until the interview timed out. I was hoping awkward silence made my point.

After one interview, one of the executives proudly told me that I would have to look after his “hobby” system. I turned that down without a second thought.

For another, it was going well, but then I started making jokes about live locks. Turns out the interviewer did his master's thesis on that topic. Oops.

One time, they said my take-home coding test had too many functions. I just laughed.

Another time, the interviewer started in with tricky little puzzle questions. I had seen them all before, so I could have answered, but I was already having a bad day, so I blew up. I got really angry, and the interviewer tried to calm me down. We agreed to meet in person, which went really well. I was sent across the country for a round of second interviews, but I was told I had a bit too much personality for them. It was still fun, and they paid my expenses.

One time, one of my co-workers showed me the tests he was going to use for interviews. He said to find the one problem; I pointed out a whole bunch of them, and he got mad at me. Lol.

I often interviewed candidates, too. If there were multiple interviewers, I’d get the others to ask the tech questions, and I’d focus on personality. I like to see that people are curious and keen to learn. If they had that and I could get them talking about something that excited them, I generally accepted them. That had a pretty good track record of finding good people.

I usually expected a long ramp-up time and the need for training, so I was rarely looking for prefab employees. I saw them as longer-term bets, which tended to pay off better.

For one company, since the code was brutal, I used a technical question that I was pretty sure the candidates couldn't answer. I just wanted them to try to work through the problem. I would help, but not give it all away. If they were stumped and started pitching ideas, it was perfect. I had some pretty good hires from that.

For that round of interviews, one of the senior candidates got insulted and said he wouldn’t take the test. I obviously sympathized, but for that type of work, it wasn’t optional; it was our daily grind.

Overall, my hiring track record is iffy. Some great hires, but also some duds. Usually, though, the duds were caused by scarcity and/or my not trusting my own instincts. Sometimes the candidates are too limited; there is not much you can do about it.

I really hate the big, long, stupid interviews, particularly when the questions are way out of whack with the actual work. Seems like an ego problem if they’ll test you on stuff that you’d never have to do. They’re trying too hard to be cool, and I really hate the taste of Kool-Aid. If the interview makes you uncomfortable, the actual job is probably worse. I never felt bad when I just walked away, but I usually wasn't desperate either. That helps.

Only once did I really have to take a job that I didn’t want. I stayed for a while, but some days were hard. The irony was that many of my later jobs were with people I had met at that early one. The job wasn’t great, but the contacts turned out to be awesome. That's why it's important to keep a positive attitude even in a negative situation.

I’ve always figured that if you got the world’s ten greatest programmers all together on a project, they would spend their days fighting with each other and nothing practical would get built. A less impressive team that works together really well is always better. It’s too bad modern interview practices don’t reflect that.

If you’re interviewing these days, be patient, stay strong. It’s a numbers game. Win a few, lose a lot.

Thursday, March 12, 2026

Functions

Over the decades, I’ve seen the common practices around creating functions change quite a bit.

When I first started coding, functions had come out of the procedural paradigm. I guess, long ago, in maybe assembler, a program was just one giant list of instructions. That would be a little crippling, so one of the early attempts to help was to break it up into smaller functions. An added benefit was that you could reuse them.

By the time I started coding, the better practice was to break up the code along the lines of similarity. Code that is similar is clumped together.

As the data structures and object-oriented paradigms started taking hold, the practices switched to being targeted. For instance, you’d write a lot of little ‘atomic’ primitive functions for each action you did against the structure, like create, add, or traverse. Indirectly, that gave rise to the notion of a function just having a single responsibility.
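A small sketch of that style, using a hypothetical bounded queue, with each function doing exactly one thing against the structure:

# Hypothetical example: small 'atomic' functions, each a single action
# against one data structure (a bounded queue held in a plain dict).

def create_queue(capacity):
    """Create an empty bounded queue."""
    return {"items": [], "capacity": capacity}

def add_item(queue, item):
    """Append one item; reject it if the queue is full."""
    if len(queue["items"]) >= queue["capacity"]:
        raise OverflowError("queue is full")
    queue["items"].append(item)

def remove_item(queue):
    """Take the oldest item off the front."""
    if not queue["items"]:
        raise IndexError("queue is empty")
    return queue["items"].pop(0)

def traverse(queue, visit):
    """Apply a function to every item, oldest first."""
    for item in queue["items"]:
        visit(item)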

In data structures, you might end up coding up a whole bunch of structures, then stacking them one on top of the other, mostly trying to get to one rather giant data structure for the whole program. That was excellent in building up sophistication from reusable parts, but a lot of programmers just saw it as excessive layering, not one big interactive structure. People kept wanting to decompose, without ever reassembling.

Object-oriented followed suit, but seemed to get lost on that notion of building up application objects. There were often dozens at the top. It also renamed functions to ‘methods’, but I’ll skip that. It was initially a very successful paradigm change, but later people started objecting to the feeling of layering, and to the idea that the entry points were somehow a ‘god’ object.

The very early Smalltalk object-oriented code had lots of functions, many of which were just one-liners. My first encounter surprised me. There were so many functions...

I guess, as a later reaction to the success of those earlier styles, the common practices moved back towards procedural. Almost no layers and very huge functions. Giant ones. This reopened the door for more brute force practices, which had been pushed away by those earlier paradigms.

I’ve always known that huge functions were a nightmare. Too much stuff all tangled together; it is unreadable and hard to follow. But the earlier attempts to limit function size were too restrictive and too fragile. You can’t just say that all functions must be less than 10 lines of code, for example. The attempts to categorize it as a single responsibility were pretty good, but because of the intense dislike of layers, they didn’t get across the notion that it is one thing at just one level. So, it would be coded as one thing, but with all of the raw instructions below that, as far down as the coder could go, all intertwined together. If you have messy low-level stuff to do, it should be hidden below in more functions, effectively a layer, but not really. For example, string fiddling in the middle of business logic is distracting, quickly killing off the readability.
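A tiny illustration of that last point (the invoice domain here is made up): the business rule reads at a single level, and the character-level fiddling is pushed down into its own named function:

# Hypothetical example: keep the business logic at one level and hide the
# low-level string fiddling in a separately named function below it.
from datetime import date

def is_overdue(invoice, today):
    """Business rule, readable at one level of detail."""
    due = parse_iso_date(invoice["due_date"])
    return invoice["status"] == "open" and today > due

def parse_iso_date(text):
    """The messy string fiddling, kept out of the business logic."""
    year, month, day = (int(part) for part in text.strip().split("-"))
    return date(year, month, day)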

Layers, as they first came out, were really architectural lines, not stacked data structures. For example, you cut a hard line between code that messes with persistence and code that computes derived objects, so you don’t mix and match. The derived stuff then sits in a layer above the persistence.

The point of those types of lines is to make the code super easy to debug later. If you know it is a derived calculation problem, you can skip the other code.
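A minimal sketch of that kind of line (the schema and names are hypothetical): the only code that touches storage sits on one side, the derived calculations sit on the other, so a wrong total never sends you digging through the persistence code:

# Hypothetical layering example: a hard line between code that touches
# persistence and code that computes derived values from what was loaded.
import sqlite3

# --- persistence layer: the only code allowed to touch storage ---
def load_order_lines(conn, order_id):
    rows = conn.execute(
        "SELECT quantity, unit_price FROM order_lines WHERE order_id = ?",
        (order_id,),
    ).fetchall()
    return [{"quantity": quantity, "unit_price": unit_price} for quantity, unit_price in rows]

# --- derived layer: pure calculation, no storage access at all ---
def order_total(lines, tax_rate):
    subtotal = sum(line["quantity"] * line["unit_price"] for line in lines)
    return round(subtotal * (1 + tax_rate), 2)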

Overall, I’d say that there can never be too many functions. Each one is a chance to attach a self-describing name to a chunk of code. Think of them as concise, language-embedded comments. If you were tight about coding with some of the older paradigms, then the data structures or even objects are pretty much all named with nouns, and the functions are all verbs. Using that, you can implement the code with the same vocabulary that you might use to verbally describe it to a friend or another programmer. The closer that implementation is to descriptive paragraphs of what it does, the more you will be able to verify its behaviour on sight. That doesn’t absolve you of testing, or even, for some code, of creating a real formal proof of correctness, but it does cut down a lot of the work later when catching bugs.

If you also put a hard split between the domain logic and the technical necessities, you can usually just jump right to the incorrect block of code. When someone describes the bug, you know immediately where it is located. Since diagnostics and debugging eat way more time than coding, any sort of practice to reduce friction for them will really help with scheduling and reducing stress. Code that you can fix effortlessly is worth far more than code that you can write quickly.

For me, I think we should return to that notion of stacking up data structures and objects in order to build up sophistication. The best code I’ve seen does this, and has crazy long shelf lives. Its strength is that it encapsulates really well and makes it easy for reuse. It is also quite defensive, and it helps to zero in quickly on bugs. Realistically, it isn’t layering; it is a form of stacking, and those two should not be confused with each other. A layer is a line in the architecture you should have; stacking is just depth in the call chain. If the stacking is really encapsulated, programmers don’t have to go down a rabbit hole to understand what is happening in the higher levels. Entangling that all together is worse.

You can always get a sense of the code quality by quickly looking at the functions. Big, bloated functions with ambiguous, convoluted, or vague names are just nurseries for bugs. If you can skim the code and mostly know what it should do, then it is readable. If you have to pick over it line by line, it is a cognitive nightmare. If the function says DoX, and the code looks like it might actually do X, then it is pretty good.

Thursday, March 5, 2026

Stress

Being a software developer is difficult and stressful.

In the early days, there is an uncontrollable fear that you cannot build what you were asked to build.

The industry is awash with too many unknown unknowns, and few programmers receive adequate training. Newbies are often just pushed into the deep end with a brick tied to their ankle and expected to figure out how to swim.

Worse, the industry discourse is erratic. Some people claim one thing works correctly, while lots of people contradict them. Everyone argues, so there is usually never a consensus. It’s super trendy and plagued with myths and misunderstandings. Over the decades, this has gotten far worse. It’s a turbulent sea of opinions and amnesia.

At some point, if you survive long enough, you figure out how to build the things they ask you to build.

Well, almost. Each time around, the thing they ask for is larger and far more complicated. That seems to never end. Most programmers believe, as a result of this, that fundamentally everything you do is new, but oddly, most things you do will have been done before by hundreds, if not millions, of other people. Real greenfield software is a rarity, even if it’s a newly evolving domain. The basics that underlie the development have been around for a very long time, and haven’t really changed all that much over the decades, even if the dependencies and stacks are different.

After you’ve sort of managed to get your feet solidly on the ground -- for instance, you can build most common types of applications -- your problems get worse.

It is inevitable that as you outgrow the work of coding, you find yourself entangled in all sorts of other industry issues, like management, planning, usability, architecture, design, domain knowledge, etc. Once you are no longer an inexpensive kid, you find that you need to dip your toes into these other issues in order to justify your higher salary. You probably don’t want to, which is why you focused on coding instead. Still, you quickly learn that the more you bring to the table, the more people will be willing to put up with your demands.

That is when software development gets really hard.

The more knowledge you acquire, and the more experiences you survive, the more likely you will find yourself in a situation where you can anticipate a big problem, know absolutely how to avoid it, but are not taken seriously enough to be allowed to do that. So, you’ve shed the creation stress only to be pummeled by tactical or strategic stress. You’re expected to code, but are not supposed to control things. “They” just want you to be a cog in their machine. That is often the low point.

That is the time that you have endless discussions with people about how too little time will grind the quality far below usable, or how throwing in extra bodies will only slow things down, not speed them up. That’s when someone recommends a technology that you know is completely unreliable, or they push a change that is inherently destructive, even if it seems to work on their machine. You end up sitting through meetings about design, where the most popular options are awful and wasteful, and practices that you know will work have been deemed to be too old school.

The biggest skill you end up learning is to pick your battles carefully. Maybe the code is too messy, but the interface is better suited to what the users actually need. Maybe you rush through a throw-away feature in order to get enough time to do some mission-critical core work. As you get more and more experience, you find yourself higher up in the ranks, but if your fingers are still in the code, it is hard to be taken seriously. That irony, where the work is mostly controlled by people who are the most clueless about the nature of the work, starts to haunt you.

Some people give in at this point; others push through the pain. If you push through, you find yourself staring at yet another development effort that is one tiny step away from being a death march, and there is a huge wind trying to push it off the cliff. Sometimes you just have to shrug it off and walk away.

That’s kinda when you change. At first, you thought the priorities were technical engineering. That the code should be as good as the code can get. Then you switched to understanding that helping users through their problems is more important, even if the code gets dinged because of it. Now, though, you wake up and realize that building stuff is stupidly expensive, and what really matters is managing all of those strings tied to the money that you need to continue.

If you’re feeling metaphysical, you’ve moved from being concerned about the code to being concerned about the data and the code. Then you were concerned about the users and whether they were happy or not. Then you’re concerned about the development shop itself. Is it functioning properly?

You get to a point where you're no longer trying to build software; now you are trying to build organizations that can build good software.

And if you wander past that, then you are concerned about creating organizations that can collect enough funds to be able to set up a development shop that can create software worth using.

Basically, the horizons of what you are trying to build just keep expanding farther and farther afield.

The irony is that the stresses of the early days look somewhat pedestrian at that point. You miss just being obsessed with creating good code; it all seemed so much more innocent in those days.

For most developers, the pattern is that they start out stressed, and as they conquer those stresses, they are replaced by even bigger, less manageable ones. Just interacting with a computer was fun; interacting with people, politics, strings, and agendas is not. But if you want to keep on building bigger and more sophisticated things, you have to keep getting broader in your focus.

On the other side of the coin, if you pick a place where you eventually reach a point of little or no stress, then you’ll start to be stressed by your own impending obsolescence. That is, any path to avoid the stress will lead you to the stress of being too expensive, too far behind, or easily replaceable.

Stress, it seems, in programming, is unavoidable. At best, you can try to pick the types you are willing to put up with.

Thursday, February 26, 2026

When the Bubble Bursts

I’ve been deep into software since the mid-eighties, obsessively following the industry while I slog through its muddy trenches.

The benefit of having survived so long is that you get the repeated pleasure of seeing the next annoying hype cycle explode.

The pattern is always the same. Something almost newish comes along. It’s okay, but not that big of a deal. Still, it gets exposed to way more people than before. That fuels the adrenaline, which twists into a hype machine detached from reality. As it grows, its growth adds more fuel, until it has been so watered down that it is far beyond irrational. Eventually reality hits, and it goes *pop*.

AI, which started in the sixties, almost hit that point in the eighties. But now it’s returned with a vengeance, this time reaching stratospheric heights and causing untold damage to the world.

To be clear, it is cute. LLMs will survive, and eventually be relegated to the same bucket as full-text search or command line completion. Something that is useful for some people, but not significant and definitely not monetizable. A throwaway feature used by a few people, but not vital.

Not good enough to make profits and definitely not good enough to replace employees. If the world were sane, we would have barely noticed it and just shoved it into the ‘not worth the resources it consumes’ category.

But that’s not what happened. Instead, some tech bros are making suicidal bets on profits, while executroids foolishly believe it will liberate them from payroll woes. Neither will happen, but a lot of people will burn because of these delusions. Again.

The Web was similar. Yes, it survived the dotcom bomb, and gradually ate the world, but the initial gold rush turned out to mostly mine huge chunks of pyrite.

Technology takes a long time to mature. If you rely on it too early, it will bite you. Nothing ever changes that. Not well-written books, management theories, nor aggressive marketing. Immature technology might be fun to play with, but it is not yet industrial strength. It will collapse under any sort of weight.

LLMs play a clever trick with finding paths of tokens through a huge tensor space. That’s all they do. Nothing else. If you anthropomorphize those paths as being anything other than a random ant trail through intertwined data, you are being fooled. Sure, it looks pretty good sometimes. But “sometimes” isn’t even close to good enough.

You wouldn’t replace your employees with Furbies; LLMs are only marginally better. They are no threat to intelligence, even if the lack of it has been triggered by them.

But that isn’t even the real problem.

The technology sets resources on fire. It is an all-consuming flame of computation. So stupidly expensive that even our fabulous modern hardware can barely keep up. So stupidly expensive that its value is not even close to its costs.

Someday in the future, when our computers are thousands of times more powerful than today and have finally been optimized to use minimal electricity, that value may be there. But not today. Not next week, next year, and probably not for at least a decade.

Nothing short of scientific simulations or extreme mathematics eats that amount. Burning that much on a massive scale isn’t viable. And any sort of value is clearly not worth it. There are no profits to be made here, at this point in time.

As an added benefit, the technology obliterates security and opens the door for outlandish surveillance. Since it is too expensive and too flaky to run locally, people have leaped in to help. You’re literally sending all of your IP and process knowledge to these unvetted third parties in the hopes they won’t betray you.

What’s consistent about the 21st Century is that eventually that information will become valuable enough for them to seek profits. And there is absolutely nothing out there to stop them. So, as we have seen over and over again, they’ll go whole hog into monetizing your secrets. Their impending financial crisis will be so large that they won’t even have a choice. There will be data buffets springing up on every corner, hawking your appetizers.

I’m old enough that I don’t even need to predict the burst. It will happen; it always does. And someday in the future, at most interactive text bars, you’ll be able to get stale gobbledygook generated locally from a decrepit model that hasn’t been retrained for years. It won’t be as good as now, but it won’t be that much worse either.

As for programmers and the panic setting into the industry, don’t worry. You get paid to know things; code is just what you do with that knowledge. You won’t be replaced by a mechanical procedure that doesn't actually understand anything. Bounce that noise between a thousand models, and it will still fail eventually. And when it does, unless it's been constantly retrained on its own slop, it will be clueless and unable to save the day. Sooner or later, management will wake up to the fact that they are exfiltrating their own information in an epic breach and put a stop to it. If some of that generated code is nearly usable today, when the resource excesses stop, the quality will plummet past hopelessness. Any development that isn’t entirely local is far too dangerous to be allowed to continue. This too shall pass.

Thursday, February 19, 2026

Data Collection

One of the strongest abilities of any software is data collection. Computers are stupid, but they can remember things that are useful.

It’s not enough to just have some widgets display it on a screen. To collect data means that it has to be persisted for the long term. The data survives the program being run and rerun, over and over again.

But it’s more than that. If you collect data that you don’t need, it is a waste of resources. If you don’t collect the data that you need, it is a bug. If you keep multiple copies of the same data, it is a mistake. The software is most useful when it always collects just what it needs to collect.

And it matters how you represent that data. Each individual piece of data needs to be properly decomposed. That is, if it is two different pieces of information, it needs to be collected into two separate data fields. You don’t want to over-decompose, and you don’t want to clump a bunch of things together.

Decomposition is key because it allows the data to be properly typed. You don’t want to collect an integer as a string; it could be misinterpreted. You don’t want a bunch of fields clumped together as unstructured text. Data in the wrong format opens up the door for it to be misinterpreted as information, causing bugs. You don’t want mystery data; each datum should have a self-describing label that is unambiguous. If you collect data that you cannot interpret correctly, then you have not collected that information.

If you have the data format correct, then you can discard invalid junk as you are collecting it. Filling a database with junk is collecting data you don’t need, and if you did that instead of getting the data you did need, it is also a bug.
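A minimal sketch of that idea (the record and field names are made up): decompose the data, type it properly, and reject the junk at the point of collection rather than storing a clump of unstructured text:

# Hypothetical example: collect a properly decomposed, typed record and
# reject invalid junk at collection time, instead of storing one big string.
from dataclasses import dataclass
from datetime import date

@dataclass
class PaymentRecord:
    account_id: str     # two separate pieces of information stay
    amount_cents: int   # in two separate, correctly typed fields
    received_on: date

def collect_payment(raw):
    """Turn raw input into a typed record, or refuse to collect it."""
    amount = int(raw["amount_cents"])   # not a string pretending to be a number
    if amount <= 0:
        raise ValueError("junk payment amount, not collected")
    return PaymentRecord(
        account_id=raw["account_id"].strip(),
        amount_cents=amount,
        received_on=date.fromisoformat(raw["received_on"]),
    )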

A datum is never independent. You need to collect data, and that data has a structure that binds all of the underlying datums together correctly. If you downgrade that structure, you have lost the information about it. If you put the data into a broader structure, you have opened up the possibility of it getting filled with junk data. For example, if the relationship between the data is a hierarchical tree, then the data needs to be collected as a tree; neither a list nor a graph is a valid collection.
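For example (the departments here are hypothetical), if the relationship really is a hierarchy, then the collected structure should be a tree, not a flat list of rows or an arbitrary graph:

# Hypothetical example: the relationship is a hierarchy, so the collected
# structure is a tree; a flat list or an arbitrary graph would lose or
# distort that information.
from dataclasses import dataclass, field

@dataclass
class Department:
    name: str
    children: list = field(default_factory=list)   # child Department objects

company = Department("Head Office", [
    Department("Operations", [
        Department("Warehouse"),
        Department("Logistics"),
    ]),
    Department("Finance"),
])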

In most software, most of the data is intertwined with other values. If you started with one specific piece of data, you should be able to quickly navigate to any of the others. That means that you have collected all of the structures and interconnections properly, and you have not lost any of them. There should only be one way to navigate, or you have collected redundant connections.

As such, if you have collected all of the data you need, then you can validate it. There won’t be data that is missing, and there won’t be data that is junk. You can write simple validations that will ensure that the software is working properly, as expected. If the validations are difficult, then there is a problem with the data collection.
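The validations themselves can stay simple, along these lines (the payment fields are hypothetical): walk the collected data and confirm nothing is missing, junk, or duplicated:

# Hypothetical example: simple validations over the collected data.
def validate_payments(records):
    """Return a list of problems; an empty list means the collection is clean."""
    problems = []
    seen = set()
    for record in records:
        if not record.get("account_id"):
            problems.append("payment with a missing account id")
        if record.get("amount_cents", 0) <= 0:
            problems.append(f"junk amount for account {record.get('account_id')}")
        key = (record.get("account_id"), record.get("received_on"), record.get("amount_cents"))
        if key in seen:
            problems.append(f"duplicate payment for account {record.get('account_id')}")
        seen.add(key)
    return problems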

If you collect all of the data you need for the software correctly, then writing the code on top of it is way simpler and far easier to properly structure. The core software gets the data from persistence, then passes it out to some form of display. It may come back with some edits, which need to be updated in the persistence. There may be some data that you did not collect, but the data you did collect is enough to be able to derive it from a computation. There may be tricky technical issues that are necessary to support scaling, but those are independent from the collection and flow of data.

Collecting data is the foundation of almost all software. If you get it right, you will be able to grow the software to gradually cover larger parts of the problem domain. If you make a mess out of it, the code will get really ugly, and the software will be unreliable.

Thursday, February 12, 2026

Blockers

Some days the coding goes really smoothly. You know what you need; you lay out a draft version, which happens nicely. It kinda works. You pass over it a bunch of times to bang it properly into position. A quick last pass to enhance its readability for later, and then out the door it goes.

Sometimes, there is ‘friction’. You start coding, but you have to keep waiting on other things. So, it’s code a bit, set it aside, code a bit, etc. The delays can be small, but they add up and interfere with the concentration and sense of accomplishment.

Some friction comes from missing analysis. There was something you should have known, but it fell through the cracks. Some comes from interactions with others. You need something from your teammates, or you need it from some other external group.

For some issues with external groups, it can take a lot of time to escalate them, arrange introductory meetings, get to the actual issue, and then finally come to a resolution. You can kinda fake the code a little in the meantime, but that is usually throw-away work, so you’d prefer to minimize it. If you are patient, it will eventually get done.

Occasionally, though, there is a ‘blocker’. It is impassable. You started to work on something, but it was shut down. You are no longer able to work on it. It’s a dead end.

One type of blocker is that someone else is doing the same work. You were going to write something, but it turns out they got there first or have some type of priority. In some cases, that is fine, but sometimes you feel that you could have done a much better job at the effort, which is frustrating. Their code is limiting.

Another type is knowledge-based. You need something, but it is far too complex or time-consuming for others to let you write it.

Some code is straightforward. But some code requires buckets of very specific knowledge first, or the code will become a time sink. People might stop you from writing systems programming components like persistence, or domain-specific languages, or synchronization, for example. Often, that morphs into a buy-versus-build decision. So something similar exists; you feel you could do it yourself, but they purchase it instead, and the effort to integrate it is ugly. If you don’t already have that knowledge, you dodged a bullet, but if you do have it, it can be very frustrating to watch a lesser component get added into the mix when it could have been avoided with just a bit of time.

There are fear-based blockers as well. People get worried that doing something a particular way may just be another time sink, so they stop it quickly. That is often the justification for brute force style coding, for example. They’d rather run hard and pound it all out as a mess than step back and work through it in a smart way. In some shops, the only allowable code is glue, since they are terrified of turnover.

In that sense, blockers are usually about code. You need it; where is it going to come from? Are you allowed to write it yourself or not? With knowledge, you can usually do the work to figure it out, or at least approximate it. There could be some secret knowledge that you really need to move forward but are fully blocked from getting, although that is extremely rare.

If you flip that around, when you're building a medium-sized or larger system, the big issue is where is the code for it going to come from? In that sense, building software is the work of getting all of the code you need together in one organized place. Some of it exists already, some of it you have to create yourself.

In the past, the biggest concern about pre-existing code was always ‘support’. You don’t want to build on some complex component only to have it crumble on you, and there is nothing you can do about it. That is an expensive mistake. So, if you aren’t going to write it yourself, then who is going to support it, and how good is that support?

If you follow that, then you generally come to understand that as you build up all of this code, support is crucial. It’s not optional, and it is foolish to assume the code is bug-free and will always work as expected.

It’s why old programmers like to pound out a lot of stuff themselves; they know that when they do, they can support their own code, and that doesn’t waver until they leave the project. The support issue is resolved.

It’s also why most wise programmers don’t just add in any old library. They’ve had issues with little dodgy libraries that were poorly supported in the past, so they have learned to avoid them. Big, necessary components are unavoidable, but the little odd ones are not. If you can’t find a legitimate version of something, doing it yourself is a much better choice.

Which brings us all of the way around to vibe coding. If you’ve been around a while, then nothing seems like a worse idea than having the ability to dynamically generate unsupported code. Tonnes of it.

Particularly if it is complex and somewhat unlimited in depth.

A whack load of boilerplate might be okay; at least you can read and modify it, although a debugger would still likely be necessary to highlight the problem, so it can mean a lot of work recreating the issue. So, it might only be a short-term time saver, but a nasty landmine waiting for later. Supportable, but costly.

But it would be heartbreaking to generate 100K in code, which is almost usable but entirely unsupportable. If you did it in a week, you’d probably just have to live with the flaws or spend years trying to pound out the bugs.

Not surprisingly, people tried this often in the past. They built sophisticated generators, hit the button and got full, ready-to-go applications. You don’t see any of these around anymore, since the support black holes they formed consumed them and everything else around them, so they essentially eradicated the evidence of their existence. It was tried, and it failed miserably.

But even more interesting was that those older application generators were at least deterministic. You could run them ten times, and mostly get back the same code. With vibe coding, each run is a random turkey shoot. You’ll get something different. So, extra unsupportable, and extra crazy.

If you are going to build a big system to solve a complex problem, then you need to avoid any and all blockers that get in your way. Friction can slow you down, but a blocker is often fatal.

These days, you’re not really ‘writing’ the system, so much as you are ‘assembling it’. If you do that from too many unsupportable subparts, then the whole will obviously be unsupportable. Inevitably, if you put something into a production environment, you either have to be prepared to support it somehow or move on to the next gig. But if too much unsupportable crud gets out there, that next gig may be even worse than the one that you tried to flee from.

Thursday, February 5, 2026

Systems Thinking

There are two main schools of thought in software development about how to build really big, complicated stuff.

The most prevalent one, these days, is that you gradually evolve the complexity over time. You start small and keep adding to it.

The other school is that you lay out a huge specification that would fully work through all of the complexity in advance, then build it.

In a sense, it is the difference between the way an entrepreneur might approach doing a startup versus how we build modern skyscrapers. Evolution versus Engineering.

I was working in a large company a while ago, and I stumbled on the fact that they had well over 3000 active systems covering dozens of lines of business and all of the internal departments. It had evolved this way over fifty years and included lots of different tech stacks, as well as countless vendors. Viewed as ‘one’ thing, it was a pretty shaky house of cards.

It’s not hard to see that if they had a few really big systems, then a great number of their problems would disappear. The inconsistencies between data, security, operations, quality, and access were huge across all of those disconnected projects. Some systems were up-to-date, some were ancient. Some worked well, some were barely functional. With way fewer systems, a lot of these self-inflicted problems would just go away.

It’s not that you could cut the combined complexity in half, but more likely that you could bring it down to one-tenth of what it is today, if not even less. It would function better, be more reliable, and would be far more resilient to change. It would likely cost far less and require fewer employees as well. All sorts of ugly problems that they have now would just not exist.

The core difference between the different schools really centers around how to deal with dependencies.

If you had thousands of little blobs of complexity that were all entirely independent, then getting finished is just a matter of banging out each one by itself until they are all completed. That’s the dream.

But in practice, very few things in a big ecosystem are actually independent. That’s the problem.

If you are going to evolve a system, then you ignore these dependencies. Sort them out afterwards, as the complexity grows. It’s faster, and you can get started right away.

If you were going to design a big system, then these dependencies dictate that design. You have to go through each one and understand them all right away. They change everything from the architecture all the way down to the idioms and style in the code.

But that means that all of the people working to build up this big system have to interact with each other. Coordinate and communicate. That is a lot of friction that management and the programmers don’t want. They tend to feel like it would all get done faster if they could just go off on their own. And it will, in the short-term.

If you ignore a dependency and try to fix it later, it will be more expensive. More time, more effort, more thinking. And it will require the same level of coordination that you tried to avoid initially. Slightly worse, in that the time pressures of doing it correctly generally give way to just getting it done quickly, which pumps up the overall artificial complexity. The more hacks you throw at it, the more hacks you will need to hold it together. It spirals out of control. You lose big in the long-term.

One of the big speed bumps preventing big up-front designs is a general lack of knowledge. Since the foundations like tech stacks, frameworks, and libraries are always changing rapidly these days, there are few accepted best practices, and most issues are incorrectly believed to be subjective. They’re not, of course, but it takes a lot of repeated experience to see that.

The career path of most application programmers is fairly short. In most enterprises, the majority have five years or less of real in-depth experience, and battle-scarred twenty-year+ vets are rare. Mostly, these novices are struggling through early career experiences, not yet ready to deal with the unbounded, massive complexity present in a big design.

Also, the other side of it is that evolutionary projects are just more fun. I’ve preferred them. You’re not loaded down with all those messy dependencies. Way fewer meetings, so you can just get into the work and see how it goes. Endlessly arguing about fiddly details in a giant spec is draining, made worse if the experience around you is weak.

Evolutionary projects go very badly sometimes. The larger they grow, the more likely they will derail. And the fun gives way to really bad stress. That severe last-minute panic that comes from knowing that the code doesn't really work as it should, and probably never will. And the longer-term dissatisfaction of having done all that work to ultimately just contribute to the problem, not actually fix it.

Big up-front designs are often better from a stress perspective. A little slow to start and sometimes slow in the middle, they mostly smooth out the overall development process. You’ve got a lot of work to do, but you’ve also got enough time to do it correctly. So you grind through it, piece by piece, being as attentive to the details as possible. Along the way, you actively look for smarter approaches to compress the work. Reuse, for instance, can shave a ton of code off the table, cut down on testing, and provide stronger certainty that the code will do the right thing in production.

The fear that big projects will end up producing the wrong thing is often overstated. It’s true for a startup, but entirely untrue for some large business application for a market that’s been around forever. You don’t need to burn a lot of extra time, breaking the work up into tiny fragments, unless you really don’t have a clue what you are building. If you're replacing some other existing system, not only do you have a clue, you usually have a really solid long-term roadmap. Replace the original work and fix its deficiencies.

There should be some balanced path in the middle somewhere, but I haven’t stumbled across a formal version of it after all these decades.

We could go first to the dependencies, then come up with reasons why they can be temporarily ignored. You can evolve the next release, but still have a vague big design as a long-term plan. You can refactor the design as you come across new, unexpected dependencies. Change your mind, over and over again, to try to get the evolving work to converge on a solid grand design. Start fast, slow right down, speed up, slow down again, and so forth. The goal is one big giant system to rule them all, but it may just take a while to get there.

The other point is that the size of the iterations matters, a whole lot. If they are tiny, it is because you are blindly stumbling forward. If you are not blindly stumbling forward, they should be longer, as it is more effective. They don’t have to all be the same size. And you really should stop and take stock after each iteration. The faster people code, the more cleanup that is required. The longer you avoid cleaning it up, the worse it gets, on basically an exponential scale. If you run forward like crazy and never stop, the working environment will be such a swamp that it will all grind to an abrupt stop. This is true in building anything, or even cooking in a restaurant. Speed is a tradeoff.

Evolution is the way to avoid getting bogged down in engineering, but engineering is the way to ensure that the thing you build really does what it is supposed to do. Engineering is slow, but spinning way out of control is a heck of a lot slower. Evolution is obviously more dynamic, but it is also more chaotic, and you have to continually accept that you’ve gone down a bad path and need to backtrack. That is hard to admit sometimes. For most systems, there are parts that really need to be engineered, and parts that can just be allowed to evolve. The more random the evolutionary path, the more stuff you need to throw away and redo. Wobbling is always expensive. Nature gets away with this by having millions of species, but we really only have one development project, so it isn’t particularly convenient.

Thursday, January 29, 2026

Reap What You Sow

When I first started programming, some thirty-five years ago, it was a somewhat quiet, if not shy, profession. It had already been around for a while, but wasn’t visible to the public. Most people had never even seen a serious computer, just the overly expensive toys sold at department stores.

Back then, to get to an intermediate position took about 5 years. That would enable someone to build components on their own. They’d get to senior at around 10 years, where they might be expected to create a medium-sized system by themselves, from scratch. Twenty years would open up lead developer positions for large or huge projects, but only if they had the actual experience to back it up. Even then, a single programmer might spend years to get their code size up to medium, so building larger systems required a team.

Not only did the dot-com era rip programming out of the shadows, the number of jobs also exploded. Suddenly, everyone used computers, everyone needed programmers, and they needed lots of them. That disrupted the old order, but never really replaced it with anything sane.

So, we’d see odd things like someone getting promoted to a senior position after just 2 or 3 years of working; small teams of newbie programmers getting tasked with building big, complex systems with zero guidance; various people with no significant coding experience hypothesising about how to properly build stuff. It’s been a real mess.

For most computer systems, to build them from scratch takes an unreasonably large amount of knowledge and skill, not only about the technical aspects, but also the domain problems and the operational setups, too.

The fastest and best way to gain that knowledge is from mentoring. Courses are good for the basics, but the practice of keeping a big system moving forward is often quite non-intuitive, and once you mix in politics and bureaucracy, it is downright crazy.

If you spend years in the trenches with people who really get what they are doing, you come up to speed a whole lot faster.

We have a lot of stuff documented in books and articles, and some loose notions of best practices, but that knowledge is often polluted with trendy advice, bent by people trying to monetize it.

There has always been a big difference between what people say should be done and what they actually do successfully.

Not surprisingly, after a couple of decades of struggling to put stuff together, the process knowledge is nearly muscle memory and somewhat ugly. It’s hard to communicate, but you’ve learned what has really worked versus what just sounds good. That knowledge is passed by word of mouth, and it’s that knowledge that you really want to learn from a mentor. It ain’t pretty, but you need it.

As a consequence, it is no surprise that the strongest software development shops have always had a good mix of experience. It is important. Kids and hardened vets, with lots of people in the middle. It builds up a good environment and follows loosely from the notion that ‘birds of a feather flock together’. That type of experience diversity is critical, and when it comes together, the shop can smoothly build any type of software it needs to build. Talent attracts talent.

That’s why, when we see it crack and there is a staffing landslide, where a bunch of experienced devs all leave at the same time, it often takes years or decades to recover. Without a strong culture of learning and engineering, it’s hard to attract and keep good people; the shop stays understaffed, and the turnover is crazy high.

There are always more programming jobs than qualified programmers; it seems that never really changes.

Given that this has been an ongoing problem in the industry for half a century, we can see how AI may make it far worse. If companies stop hiring juniors because their intermediates are using AI to whack out that junior-level code, that handoff of knowledge will die. As the older generations leave without passing on any process knowledge, it will eventually be the same as only hiring a bunch of kids with no guidance. AI won’t help prevent that, and its output will be degraded from training on those fast-declining standards.

We’ve seen that before. One of the effects of the dot-com era was that the lifespan of code shrank noticeably. The older code was meant to run for decades; the new stuff is often just replaced within a few years after it was written. That’s part of why we suddenly needed more programmers, but also why the cost of programming got worse. It was offset somewhat by having more libraries and frameworks available, but because they kept changing so fast, they also helped shorten the lifespan. Coding went from being slowly engineered to far more of a performance art. The costs went way up; the quality declined.

If we were sane, we’d actually see the industry go the other way.

If we assume that AI is here to stay for coders, then the most rational thing to do would be to hire way more juniors, and let them spend lots of time experimenting and building up good ways to utilize these new AI tools, while also getting a chance to learn and integrate that messy process knowledge from the other generations. So instead of junior positions shrinking, we’d see an explosion of new junior positions. And we’d see vets get even more expensive.

That we are not seeing this indicates either a myopic management effect or that AI itself really isn’t that useful right now. What seems to be happening is that management is cutting back on payroll long before the intermediates have successfully discovered how to reliably leverage this new toolset. They are jumping the gun, so to speak, and wiping out their own dev shops as an accidental consequence. It will be a while before they notice.

This has happened before; software development often has a serious case of amnesia and tends to forget its own checkered history. If it follows the older patterns, there will be a few years of decreasing jobs and lower salaries, followed by an explosion of jobs and huge salary increases. They’ll be desperate to undo the damage.

People incorrectly tend to think of software development as one-off projects instead of continually running IT shops. They’ll do all sorts of short-term damage to squeeze value, while losing big overall.

Having lived through the end of programming as we know it a few dozen times already, I am usually very wary of any of these hype cycles. AI will eventually find its usefulness in a few limited areas of development, but it won’t happen until it has become far more deterministic. Essentially, random tools are useless tools. Software development is never a one-off project, even if that delusion keeps persisting. If you can’t reliably move forward, then you are not moving forward. At some point, the ground just drops out below you, sending you back to square one.

The important point at the high level is that you set up and run a shop to produce and maintain the software you need to support your organization’s goals. The health of that shop is vital, since if it is broken, you can’t really keep anything working properly. When the toolset changes, it would be good if the shop can leverage it, but it is up to the people working there to figure it out, not management.

Thursday, January 22, 2026

Dirty Little Secret

At the beginning of this century, an incredibly successful software executive turned to me and said, “The dirty little secret of the software industry is that none of this stuff really works.”

I was horrified.

"Sure, some of that really ancient stuff didn’t work very well, but that is why we are going to replace it all with all of our flashy new technologies. Our new stuff will definitely fix that and work properly," I replied.

I’ve revisited this conversation in my head dozens of times over the decades. That new flashy stuff of my youth is now the ancient crusty stuff for the kids. It didn’t work either.

Well, to be fair, some parts of it worked just enough, and a lot of it stayed around hidden deep below. But it's weird and eclectic, and people complain about it often, try hard to avoid it, and still dream of replacing it.

History, it seems, did a great job of proving that the executive was correct. Each new generation piles another mess on top of all of the previous messes, with the dream of getting it better, this time.

It’s compounded by the fact that people rush through the work so much faster now.

In those days, we worked on something for ‘months’; now the expectation is ‘weeks’.

Libraries and frameworks were few and far between, but that actually afforded us a chance to gain more knowledge and be more attentive to the little details. The practice of coding software keeps degrading as the breadth of technologies explodes.

The bigger problem is that even though the hardware has advanced by orders and orders of magnitude, the effectiveness of the software has not. It was screens full of awkward widgets back then; it is still the same. Modern GUIs have more graphics, but behave worse than before. You can do more with computers these days, but it is far more cognitively demanding to get it done now. We didn’t improve the technology; we just started churning it out faster.

Another dirty little secret is that there is probably a much better way to code things, one that was more commonly known when it was first discovered in the 70s or 80s. Most programmers prefer to learn from scratch, forcing them to re-solve the same problems people had decades ago. If we keep reinventing the same crude mechanics, it is no surprise that we haven’t advanced at all. We keep writing the same things over and over again while telling ourselves that this time it is really different.

I keep thinking back to all of those times, in so many meetings, where someone was enthusiastically expounding the virtues of some brand new, super trendy, uber cool technology, essentially claiming “this time, we got it right”, while knowing that if I wait another five years, the tides will turn and a new generation will be claiming that the old stuff doesn’t work.

“None of this stuff really works” got stuck in my head way back then, and it keeps proving itself correct.

Thursday, January 15, 2026

This Week's Turbo Encabulator

Sometimes, the software industry tries to sell products that people don’t actually need and that won’t solve any real problems. They are very profitable and need minimal support.

The classic example is the set of products that falls loosely under the “data warehouse” category.

The problem some people think they have is that if they did not collect data when they needed it, they don’t have access to it now. In a lot of cases, you can’t simply go back and reconstruct it or get it from another source; it is lost forever.

So, people started coming up with a long series of products that would let you capture massive amounts of unstructured data, then later, when you need it, you could apply some structure on top and use it.

That makes sense, sort of. Collect absolutely everything and dump it into a giant warehouse, then use it later.

The first flaw, though, is that collecting data and keeping it around for a long time is expensive. You need some storage devices, but you also need a way to tell when the data has expired. Consumer data from thirty years ago may be of interest to a historian, but is probably useless for most businesses. So, you have all of this unstructured data: when do you get rid of it? If you don’t know what the data really is, then sadly, the answer is ‘never’, which is insanely expensive.

The other obvious problem is that some of the data you have captured is ‘raw’, and some of it is ‘derived’. It is a waste of money to persist that derived stuff, since you can reconstruct it. But again, if you don’t know what you have captured, you cannot distinguish between the two.
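To make that raw-versus-derived split concrete, here is a minimal sketch in Python; the trade records, field names, and the daily-total calculation are made up for illustration, not taken from any real system. The raw events are the only thing worth keeping, because the summary can always be rebuilt from them:

    from collections import defaultdict

    # Hypothetical raw events: each trade is a fact you cannot reconstruct later.
    raw_trades = [
        {"day": "2026-01-05", "symbol": "ABC", "qty": 100, "price_cents": 1000},
        {"day": "2026-01-05", "symbol": "ABC", "qty": 50,  "price_cents": 1020},
        {"day": "2026-01-06", "symbol": "ABC", "qty": 25,  "price_cents": 990},
    ]

    def daily_notional(trades):
        # Derived data: total traded value per day, rebuilt on demand from the
        # raw events. Persisting this alongside the trades just doubles the storage.
        totals = defaultdict(int)
        for t in trades:
            totals[t["day"]] += t["qty"] * t["price_cents"]
        return dict(totals)

    print(daily_notional(raw_trades))
    # {'2026-01-05': 151000, '2026-01-06': 24750}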

The bigger problem, though, is that you now have this sea of unstructured data, and thanks to various changes over time, it does not fit together nicely. So that act of putting a structure on top is non-trivial. In fact, it is orders of magnitude more complicated now than if you had just sorted out carefully what you needed first.

Changes make it hard to stitch together, and bugs and glitches pad it out with lots and lots of garbage. The noise overwhelms the signal.

It’s so difficult to meticulously pick through it and turn it into usable information that it is highly unlikely that anyone will ever bother doing that. Or if they did it for a while, eventually they’d stop doing it. If they don’t have the time and patience to do it earlier, then why would that change later?

So, you’re burning all of these resources to prevent a problem you really shouldn’t have, in the unlikely case that someone may go through a Herculean effort to get value out of it later.

If there is a line of business, then there are fundamental core data entities that underpin it. Mostly, they change slowly, but in all cases, you will always have ongoing work to keep up with any changes. You can’t escape that. If you did reasonable analysis and modelled the data correctly, then you could set up partitioned entry points for all of the data and keep teams around to stay synced to any gradual changes. In that case, you know what data is out there, you know how it is structured and what it means, and you have the appropriate systems in place to capture it. Your IT department is organized and effective.

The derived variations of this core data may go through all sorts of weird gyrations, but the fundamentals are easy enough to understand and capture. So, if you are organized and functioning correctly, you wouldn’t need an insurance technology to double up the capture ‘just in case’.

Flipped around, if you think you “need” a data warehouse in order to capture stuff that you worried you might have missed, your actual problem is that your IT department is a disorganized disaster area. You still don’t need a warehouse; you need to fix your IT department. So someone selling you a warehouse as a solution to your IT problems is selling you snake oil. It ain’t going to help.

Now it is possible that there is a lag between ‘changes’ and the ability to update the collection of the data. So, you think that to solve that, you need a warehouse, but the same argument applies.

The changes aren’t frequent enough and really aren’t surprises, so the lag is caused by other internal issues. If you have an important line of business, and it changes every so often, then it would make sense (and is cheaper) if you just have a team waiting to jump into action to keep up with those changes. If they are not fully occupied in between changes, that is not a flaw or waste of money; they are just firefighters and need to be on standby. Sometimes you need firefighters, or a little spare capacity, or some insurance. That is reasonable resource management. Don’t place and build your house on the beach based on low tides; do it based on at least king tides or even tsunamis.

There are plenty of other products sold to enterprises that are similar. If you look at what they do, and you ask reasonable questions about why they exist, you’ll often find that the answers don’t make any real sense. The industry prefers to solve these easy problems on a rotating basis.

There will be wave after wave of questionable solutions to secondary problems that ultimately just compound the whole mess. They make it worse. Then, as people realize that they don’t work very well, a whole new wave of uselessness will hit. So long as everyone is distracted and chasing that latest wave, they will be too busy to question the sanity of what they are implementing.

Thursday, January 8, 2026

Against the Grain

When I was really young and first started coding, I hated relational databases.

They didn’t teach us much about them in university, but they were entirely dominant in enterprise development through the 80s and 90s. If you needed persistence, only an RDBMS could be considered. Any other choice, like rolling it yourself, flat files, lighter databases, or caches, was considered inappropriate. People would get mad if you used them.

My first experiences with an RDBMS were somewhat painful. The notion of declarative programming felt a bit foreign to me, and those earlier databases were cruder in their syntax.

But eventually I figured them out, and even came to appreciate all of the primary and secondary capabilities. You don’t just get reliable queries; you can use them for dynamic behaviour and for distributed locking as well. Optimizations can be a little tiring, and you have to stay tight to normal forms to avoid piling up severe technical debt, but with practice, they are a good, strong, solid foundation. You just have to use them correctly to get the most out of them.
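As a small illustration of that secondary use, here is a minimal sketch of leaning on the database itself for the lock. It uses SQLite as a stand-in for a shared server RDBMS, and the table and job names are hypothetical; the point is only that a write transaction gives you mutual exclusion without inventing your own locking machinery:

    import sqlite3

    # SQLite stands in for a shared RDBMS here; on a server database the usual
    # variation is a lock row plus SELECT ... FOR UPDATE, but the idea is the same.
    conn = sqlite3.connect("jobs.db", timeout=30, isolation_level=None)  # manage transactions ourselves
    conn.execute("CREATE TABLE IF NOT EXISTS job_lock (name TEXT PRIMARY KEY, owner TEXT)")

    def run_exclusively(owner):
        # BEGIN IMMEDIATE takes the write lock, so a second process calling this
        # blocks (up to the timeout) instead of running the job concurrently.
        conn.execute("BEGIN IMMEDIATE")
        try:
            conn.execute("INSERT OR REPLACE INTO job_lock (name, owner) VALUES (?, ?)",
                         ("nightly-report", owner))
            # ... the work that must never run twice at the same time goes here ...
            conn.execute("COMMIT")   # releases the lock
        except Exception:
            conn.execute("ROLLBACK")
            raise

    run_exclusively("worker-1")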

If you need reliable persistence (and you do) and the computations aren’t exotic (and they mostly aren’t), then relying on a good RDBMS is generally the best choice. You just have to learn a lot about the technology and use it properly, but then it works as expected.

If you try to use it incorrectly, you are doing what I like to call “going against the grain”. It’s an ancient woodworking expression, but it is highly fitting for technology. With the grain, you are using it as the originators intended, and against the grain, you are trying to get it to do something clever, awkward, or funny.

Sometimes people think they are clever by trying to force technology to do something unexpected. But that is always a recipe for failure. Even if you could get it to work with minimal side effects, the technology evolves, so the tricks will turn ugly.

Once you’ve matured as a programmer, you realize that clever is just asking for trouble, and usually for no good reason. Most code, most of the time, is pretty basic. At least 90% of it. Usually, it only needs to do the same things that people have been doing for at least half a century. The problem isn’t coming up with some crazy, clever new approach, but rather finding a very reasonable one in an overly short period of time, in a way that you can keep moving it forward over a long series of upgrades.

We build things, but they are not art; they are industrial-strength machinery. They need to be clean, strong, and withstand the ravages of the future. That is quality code; anything else is just a distraction.

Now, if you are pushing the state of the art for some reason, then you would have to go outside of the standard components and their usages. So, I wasn’t surprised that NoSQL came into existence, and I have had a few occasions where I both needed it and really appreciated it. ORMs are similar.

It’s just that I would not have been able to leverage these technologies properly if I didn’t already understand how to get the most out of an RDBMS. I needed to hit the limits first to gain an understanding.

So, when I saw a lot of people using NoSQL to skip learning about RDBMSes, I knew right away that it was a tragic mistake. They failed to understand that their usage was rather stock and just wanted to add cool new technologies to their resumes. That is the absolute worst reason to use any technology, ever. Or as I like to say, experiment on your own time, take your day job seriously.

In that sense, using an RDBMS for something weird is going against the grain, but skipping it for some other eclectic technology is also going against the grain. Two variations on the same problem. If you need to build something that is reliable, then you have to learn what reliable means and use that to make stronger decisions about which components to pile into the foundations. Maybe the best choice is old, and not great for your resume, but that is fine. Doing a good job is always more important.

This applies, of course, to all technologies, not just RDBMSes. My first instinct is to minimize using any external components, but if I have to, then I am looking for the good, reliable, industrial-strength options. Some super-cool, trendy, new component automatically makes me suspicious. Old, crusty, battle-scarred stuff may not look as sweet, but in most cases, it is usually a lot more reliable. And the main quality that I am looking for is reliability.

But even after you decide on the tech, you still have to find the grain and go with it. If you pick some reasonable library but then try to make it jump around in unreasonable ways, it will not end well. In the worst case, you incorrectly convince yourself that it is doing something you need, but it isn’t. Swapping out a big component at the last minute before a release is always a huge failure and tends to result in really painful circumstances. A hole that big could take years to recover from.

So, it plays back to the minimization. If we have to use a component, then we have to know how to use it properly, so it isn’t that much of a time saving, unless it is doing something sophisticated enough that learning all of that from scratch is way out of the time budget. If you just toss in components for a tiny fraction of their functionality, the code degenerates into a huge chaotic mess. You lose that connection to knowing what it will really do, and that is always fatal. Mystery code is not something you ever want to support; it will just burn time, and time is always in short supply.

In general, if you have to add a component, then you want to try to use all of its core features in a way that the authors expected you to use them. And you never want to have two contradictory components in the same system; that is really bad. Use it fully, use it properly, and get the most out of the effort it took to integrate it fully. That will keep things sane. Overall, beware of any components you rely on; they will not save you time; they may just minimize some of the learning you should have done, but they are never free.

Thursday, January 1, 2026

New Year

I was addicted from the moment I bought my first computer: a heavily used Apple ][+ clone. Computers hadn’t significantly altered our world yet, but I saw immense potential in that machine.

None of us, back in those days, could have predicted how much these machines would damage our world. We only saw the good.

And there has been lots of good; I can’t live without GPS in the car, and online shopping is often handy. I can communicate with all sorts of people I would not have been able to meet before. 

But there have also been a massive number of negative effects, from surveillance to endless lies and social divisions.

Tools are inherently neutral, so they have both good and bad uses; that is their nature. We have these incredibly powerful machines, but what have we done with them? The world is far more chaotic, way less fair, and highly polluted now. We could have used the machines to lift ourselves up, but instead we’ve let a dodgy minority use them to squeeze more money out of us. Stupid.

I’m hoping that we can turn a corner for 2026 and get back to leveraging these machines to make the world a better place. That we ignore those ambitious weasels who only care about monetizing everything and instead start to use software to really solve our rapidly growing set of nasty problems. Sure, it is not profitable, but who cares anymore? Having a lot of money while living on a burning planet isn’t really great. Less money on a happy planet is a big improvement.

The worst problem for the software industry is always trying to rush through the work, only solving redundant, trivial problems. We need to switch focus. Go slow, build up knowledge and sophistication, and ignore those people shouting at us to go faster. Good programming is slow. Slow is good. Take your time, concentrate on getting better code, and pay close attention to all of the little details. Programming is pedantic; we seem to have forgotten this.

The other thing is that we need to be far more careful about what software we write. Just say no to writing sleazy code. Not all code should exist. If they find someone else to write it, that is not your problem; doing questionable work just because someone else might is a sad excuse. As more of us refuse, it will get a lot harder for them to achieve their goals. We can’t stop them, but at least we can slow them down a little.

The final thing to do is to forget about the twisted, messed-up history of software, at least for a moment. Think big, think grand. We have these powerhouse intellectual tools; we should be using them to lift humanity, not just for lame data entry. We need to build up stable, strong complexity that we can leverage to solve larger and larger problems. A rewrite of some crude approach with another crude approach just isn’t leveraging any of the capabilities of software. Rearranging the screens and using different widgets is only going sideways, not up. Software can remember the full context and help us make way better decisions. That is its power; we just need to start building things that truly leverage it.

Given the declining state of the world these days, it makes sense that we use this moment to shift focus. If computers got us into this mess, then they can also get us out of it. It’s been clear for quite a while that things are not going very well, but many of the people leveraging that sentiment are only doing so in order to make things worse. It's time we change that.