Thursday, May 7, 2026

Goodness

It really feels like the world has sunk to its lowest point in my lifetime. And it does not seem likely to improve anytime soon. We’re sliding downhill.

When I first started playing with computers, way back in the 80s, I felt like they had huge potential to help humanity. To lift us up. But it seems like they did the opposite: first they trapped us; now they are forcing us to regress.

It’s not the computers themselves, but the types of monsters that latch onto them in order to make money, grab power, and manipulate people. Sadly, they find it too easy to use software to do this.

It seems like software developers made a rather tragic mistake in not preventing this earlier. We were just too eager to build stuff; we didn’t ask enough questions.

But the good news is that we can still do something about it now. We can build all-new stuff that is empathy-driven and meant to really help people, not just pick their pockets or coerce their behaviour.

The trick is not to get hyper-focused on the technology itself; it doesn’t really matter. Instead, we put ourselves into the shoes of the users. Software without empathy is just a weapon waiting to be exploited.

The problem has always been that empathy-driven software is extraordinarily hard to write. It’s not just getting the technology to dance, or flooding it with domain data; you have to integrate all of that very carefully into the full context of a user's life in order to shave off the hard spikes. It’s not just code and data; it is code and data that deliberately help people. It all emanates from the users’ perspective, not from the builders’ or the operators’.

To get us going, I’d suggest that everyone just start throwing any “non-monetizable” ideas they have out there. Pick a problem you know, write up a dream solution, and publish it. It doesn’t have to be practical; it doesn’t even have to be possible. It’s not about technology, but about seeing the world from the user’s perspective and making suggestions about how to really improve it. Too often, we first focus on technology, and then we try to shove it back into the solution space. That doesn’t work very well.

Once we have ideas, we can figure out how to implement these as solutions in ways that can’t be subverted by monsters. That, of course, is the difficult part. Serious software is still very expensive to build and run, and getting it funded comes with a lot of painful strings attached, strings that we have seen, over and over again, get pulled to drag the efforts off in very bad directions.

If we can figure that out, then we just need to find a way to swap these technologies in for the mess we’ve got right now.

I’ve occasionally dumped out some raw ideas:

https://theprogrammersparadox.blogspot.com/2024/05/identitynet.html

https://theprogrammersparadox.blogspot.com/2015/08/digital-discussions.html

https://theprogrammersparadox.blogspot.com/2009/04/end-of-coding-as-we-know-it.html

They were mostly unfundable, and since I needed to pay the bills, they were beyond my ability to take further. But I’ve always thought it would be cool if someone were inspired to do something similar, so I wrote them up.

Other areas that desperately need our attention:

Source of Truth. I appreciate and admire Wikipedia, but I really want something more structured, like an ontology built on graphs or even hypergraphs, that contains all of human knowledge, or at least as much of it as we can capture right now. It would assign a probability to every piece of “knowledge”. For instance, a known mathematical proof would be 100% correct, but most other things we think are true are at best 99ish. And the myths and falsehoods would be really low, maybe even 0. If there were multiple competing opinions, they would all exist in the data, each with some percentage of likely truth (as of today). It would be worldwide and not controllable by any country or dictator. Untouchable by monsters. A perfect use case for decentralization.
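As a minimal sketch, and purely my own assumptions rather than any existing system’s schema, one node of such a probabilistic knowledge graph might look like this:

```python
# A hypothetical 'claim' node for a probabilistic knowledge graph.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    confidence: float                # 1.0 = proven, near 0.0 = myth
    sources: list[str] = field(default_factory=list)
    supports: list["Claim"] = field(default_factory=list)   # edges to claims this one backs
    conflicts: list["Claim"] = field(default_factory=list)  # competing opinions stay in the data

proof = Claim("a^2 + b^2 = c^2 for right triangles", confidence=1.0)
likely = Claim("Regular exercise lowers heart disease risk", confidence=0.95)
myth = Claim("The Earth is flat", confidence=0.0)
myth.conflicts.append(Claim("The Earth is roughly spherical", confidence=1.0))
```

The data model itself can stay that simple; the hard engineering is decentralizing and replicating the graph so that no single party can quietly rewrite the confidences.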

Privacy. We need to protect any facts about individuals, but still provide some (difficult) means of external verification. This would extend to group conversations as well. Some part of it would only allow retrospective external access if and only if the case made it to court in a jurisdiction accepted by all of the individuals involved. That is, they can’t spy on you, but if you did something bad in a jurisdiction that you have accepted, the information could be retrieved if there were actual court proceedings. It’s the notion that they have to do the policing legwork to catch you, but once you are in trouble, the whole truth will come out.
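One conceivable mechanism for that kind of retrospective access, and this is just my sketch of a well-known technique rather than a full design, is threshold key escrow: split the decryption key so that no single party can read anything, but a court-ordered quorum can reconstruct it. A bare-bones Shamir secret sharing:

```python
# Bare-bones Shamir secret sharing: the key is recoverable only when a
# threshold of shareholders (e.g., the court plus custodians) cooperate.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte key

def split(secret, num_shares, threshold):
    """Split an integer secret into shares; any `threshold` of them suffice."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def evaluate(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, num_shares + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

key = secrets.randbelow(PRIME)
shares = split(key, num_shares=5, threshold=3)
assert reconstruct(shares[:3]) == key
# With only two shares, reconstruction yields garbage, not the key.
```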

Time/Complexity Simulations. Being able to list out the consequences of a given decision in a complex circumstance with lots of moving parts. You could throw together an approximate model of something, then play with the possible decisions to see how they fare in the long run. We need this, as too many people can’t see beyond extremely short horizons. Even if it is crude, it would help people think about more than just tomorrow, or next month, or next quarter. If the simulation came back saying there is a 48.2% chance that “doing that” would turn the profits negative in the next seven years, it would be harder for someone to just forge ahead blindly. Or that there really are “century” events that we do need to protect against, like pandemics.
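Even a toy version is within reach. A hedged sketch, with completely made-up numbers, of how a crude Monte Carlo simulation could put a probability on “profits go negative within seven years”:

```python
# A toy Monte Carlo: run one decision forward thousands of times under
# random yearly shocks and count the bad outcomes. Every parameter here
# (drift, shock size, horizon) is an invented illustration, not real data.
import random

def goes_negative(profit=1.0, drift=0.05, shock=0.6, years=7):
    for _ in range(years):
        profit += random.gauss(drift, shock)  # one year of noisy results
        if profit < 0:
            return True
    return False

trials = 100_000
bad = sum(goes_negative() for _ in range(trials))
print(f"estimated chance of going negative within 7 years: {bad / trials:.1%}")
```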

Consolidation. It sucks having to rely on dozens of different, widely inconsistent apps. Their collective value is eroded by the combined cognitive load. I’d want something simpler than a spreadsheet that brings together the common data and can trigger code in all sorts of remote places. A customized gateway that makes it easy to leverage the power of a computer, but just for you. The trick is to breach the complexity limits that so often hold us back. The abstraction that holds it together can’t be too abstract, but it still needs to be powerful enough to be all-encompassing.

If I could configure it for all of the repeatable parts of my life, like a crazy, distributed, super-integrated to-do list, with behaviours and data shareable for a wide range of scenarios, it would be my first point of contact on all my computers. It would have some deep way of reorganizing itself as I keep adding more to it.

The key, though, is that it isn’t a remote service; you don’t rent it. You own it, it is yours, it can be seamlessly upgraded over the decades of your life, and it is fully private. The costs are trivial, but it will consume your time. There are parts you can share if you want, but there would never be a way to make money off your contributions; all you get for your efforts is a better life.

Guardrails. Lots of awful stuff happens on the web. Why? Why can’t we keep the good qualities of the Internet without continuously opening doors for the bad ones? My guess is that capitalism drives an unquenchable thirst for monetization, and making that safe is just too costly; it eats into the profits. So we get half-baked stuff that the monsters eventually figure out how to leverage. From that perspective, it seems like we could put up some types of walls and fences that would protect this weak code from being exploited. Protect private data from going anywhere. You shouldn’t be subject to an attack unless you explicitly lowered your guard. It shouldn’t be possible to trick you into lowering your guard. All of the angst from this not being true today piles on friction that devalues the capabilities of our computers. Finding a way to stop that would be huge.

I’m sure there are a million more issues and ideas out there. Now is the time to flood the world with them, and then maybe we can figure out how to bring the best of them to life. If you do this, odds are you won’t get much credit, and it definitely won’t make you rich, but it is still a good thing to do, so it is worth the effort.

What we ultimately want is for computers to make our lives easier and more meaningful. To take away some of the drudgery and difficulty of reality, but not numb us into a coma or stupor. Sure, we’ll still turn to the machines for assistance, but we won’t get caught in negative incentive loops like doom scrolling. We will live life in reality, not digitally.

To get this, we need to stop the people who are financially motivated from bending all of these technologies against us. They see only the bad potential, realize its use in carving off profits, and then find ways to slip it into our lives. They’re tearing us apart so they can own mansions, sports cars, jets, and yachts. We have to stop allowing this.

Thursday, April 30, 2026

Shortcuts and Makework

On the face of it, shortcuts and makework may seem like they are opposites.

A shortcut is a faster way to do something that effectively pushes the consequences down the road. You take the quick and easy way now, only to pay for it later.

Makework, on the other hand, is anything that you are made to do that does not directly or indirectly contribute to the work at hand. For instance, you fill out a complicated form with copious details that is ultimately ignored forever.

Makework is usually some people trying to control or throttle others; it is often an abuse of power or a justification of their own value.

In bureaucratic organizations, the centralized control over poorly understood aspects of the company is usually thick with makework. There are plenty of administrators trying to control things that they do not understand, so the rules and processes get weird and form the basis for lots of politics.

But the two are oddly related. Where you see one, you usually see the other.

The root cause is time. There is a project with a tight timeline, but the people working on it keep losing huge blocks of time to makework. However, as makework is an integral part of the organization, blaming it for the lateness is not allowed. So, in order to try to catch up, they resort to a lot of shortcuts. The long-term consequences don’t matter if, in the short term, you will get in trouble for being late. The context of the project forces mistakes and panic.

It gets triggered the other way, too. Some people just take shortcuts out of habit; the project initially looks crazy successful, but as the long-term consequences come due, it collapses. In the downfall, lots of unrelated people jump in to “help”, like bureaucrats and generic management. Since they don’t understand the work and don’t know why a once-successful effort suddenly flipped, they propose a lot of extra work that they believe will fix the problem: more tracking, more documentation, more sign-offs, more meetings, etc. But all of this is effectively makework, and the real problem of replacing the shortcuts that are causing all of the grief gets ignored. The project ends up under the microscope, which amplifies its problems and does not correct them.

Mostly, though, the best approach is to be rigorously practical. Minimize both shortcuts and makework. Carefully assess any and all effort with respect to both of those categories. If it smells like one or the other, don’t do it. Get the core work done as best as possible.

The other part is to tackle the hardest parts first; don’t leave them for later, and don’t rush through them. While that gives the appearance of being late right from the get-go, it provides two valuable properties.

First, if you get stuck, you can raise a late flag early rather than later, which tends to mitigate some of the bureaucrats coming out of the woodwork and drowning you with makework.

Second, a shortcut on the hard stuff is way more destructive than a shortcut on the easy stuff. If time forces you to take shortcuts, then the ones with minimal consequences are far better; they are less costly to repair later. If the foundations are solid, you have a better shot at recovering when late. In many organizations, you are already late long before you even realize that you have work to do. It’s normal, so you need to adopt habits that mitigate it.

The biggest problem, though, is that coming up with shortcuts or makework is often a lot easier than doing things properly. It’s the easy path for both the workers and management. But it is an unsuccessful path too. It distracts from the things that really need to get done.

Ultimately, there is some work that needs to be finished with at least enough quality to keep the detractors at bay. Do that work; don’t get lost trying to avoid it.

Thursday, April 23, 2026

Users

A common misconception in software development comes from not understanding users.

For any piece of software, there is a group of primary users who are using it to solve their problems. This holds everywhere, from commercial products all the way to large enterprise systems.

If the software is large and has been evolving for quite some time, these users are often partitioned into different subgroups. Some use one feature set; others use a different one. Some users are tied directly to their group, and others will overlap between a bunch of groups.

There is usually a second set of users whose primary tasks are system management, usually related to the data in the software, but sometimes to configuration, access, or feature capabilities. They are occasionally referred to as data administrators.

They most often sit outside of the primary users and do not use the software to solve those problems; they are just involved in making sure the software itself is capable of allowing the other users to get their work done.

Often totally ignored, there is actually a third set of users. These users take the software, install it on the underlying hardware, and interact with it when there are problems. Sometimes they don’t even know what the software does, but they are still responsible for providing the platform for it to exist and fulfil its role.

It used to be that this was classified as an Operations Department, but over the decades, a lot of Software Developers have become directly involved as well. These are users, too. They should have their own ops, test, or development accounts; they have complex access issues; and sometimes they have to be able to get into the software to determine that it is working or malfunctioning in a specific way, though most times they should not be able to see what the data administrators can see.

Traditionally, people have not designated operations as users, which is a common mistake. They do “use” the software, and getting their work done is also dependent on it. It’s just that they are not focused on using the specific features or managing the data.

If you take a step back, then user requirements, and even user stories, should cover all three groups, not just the first or second one.

But it’s also true that if an enterprise has hundreds of software packages running, then from an operations perspective, all of them share a large number of common requirements. They all need to be installed and upgraded, they all need to be monitored, and they all need some form of smoke tests for emergencies. An operations department could put out a list of mandatory common user requirements long before any specific software project was a twinkle in someone’s eye.
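Even the smoke tests can be generic. A minimal sketch, with a hypothetical health endpoint, of the kind of check an operations department could mandate for every system it runs:

```python
# A generic smoke test: hit a service's health endpoint and report failure.
# The URL is a hypothetical placeholder; real systems would publish theirs
# as part of the mandatory operations requirements.
import sys
import urllib.request

def smoke_test(url, timeout=5):
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False  # unreachable, timed out, refused, or an HTTP error

if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080/health"
    if not smoke_test(url):
        print(f"ALERT: {url} failed its smoke test")  # wire into paging here
        sys.exit(1)
```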

What’s also true is that, for the most part, these types of requirements do not change significantly across tech stacks. The specifics may change a bit, but the base requirements are locked in stone.

This is a rather classic mistake that happens with new tech stacks or operations environments. Because they are new, everyone thinks they can start over again from scratch and ignore any previous round of complexities. However, once things get going, those same complexities come back to haunt the effort, and because they were ignored, they get handled extra poorly.

So, we see people put up software systems without any adequate monitoring, for example, and are surprised when the users complain about the system being down. Pushing the monitoring back onto the first and second user groups is common now, but it still makes the effort look rather amateur. Operations should be the first to know about a crash; they just can’t detect more subtle bugs buried deep under big features.

The users of software are the people who interact directly with that software, in any way. Non-users, while they may still be “stakeholders” in the effort, will never run the software, test it, log into it, or trigger any of its features.

Someone may be responsible for making sure the software project gets done on time, but if they do not interact with the software, they are not a user.

User requirements should have special priority above almost all other aspects of the work. They should only take a back seat when there are overriding cost or time constraints. But it really should be written down somewhere that the users did not get the feature or functionality they needed due to the enforcement of those constraints. At minimum, that builds up a wonderful list of future enhancements that should be considered as early in the effort as possible. The key point, though, is that knowing what the users need in order to actually solve their problems is different from the specifics of a technical solution implementation.

Thursday, April 16, 2026

Strange Loops

I remember when I was in school, we got a difficult programming assignment that I struggled with. The algorithm, if I remember correctly, was to fill an oddly shaped virtual swimming pool to different levels, and then calculate the volume of water. The bottom of the pool was made of wavy curves.

The most natural way to write the code to simulate adding little bits of water at a time to the pool was with recursion. No doubt, you could unroll that into a loop and maybe a stack or two, but we were supposed to use recursion; it was the key to the assignment.

I had never encountered recursion before, and I found it mystifying. A function calls itself? That just seemed rather odd and crazy. Totally non-intuitive. I coded it up, but it never really worked properly. I got poor marks on that assignment.

Many months later, the light went on. It came out of nowhere, and suddenly, not only did recursion make sense, but it also seemed trivial. Over the decades, I’ve crafted some pretty intense recursive algorithms for all sorts of complex problems. It now comes intuitively, and I find it far easier to express the answer recursively than to unroll it.
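Reconstructed from memory and simplified to a grid, the recursive heart of that pool assignment now looks almost trivial; the layout and names here are my own, not the original assignment’s:

```python
# Recursive flood fill: pour water into a 2D cross-section and count the
# cells it fills, down from a given water level. '#' is rock, '.' is air.
def fill(grid, row, col, level):
    if row < level or row >= len(grid) or col < 0 or col >= len(grid[0]):
        return 0                      # above the water line or out of bounds
    if grid[row][col] != '.':
        return 0                      # rock, or already water
    grid[row][col] = '~'              # mark as water before recursing
    return 1 + sum(fill(grid, r, c, level)
                   for r, c in ((row + 1, col), (row - 1, col),
                                (row, col - 1), (row, col + 1)))

pool = [list(line) for line in (
    "#........#",
    "#........#",
    "#..####..#",
    "#..#..#..#",
    "##########",
)]
volume = fill(pool, 1, 1, level=1)    # water surface at row 1
print(volume)  # 16 cells; the pocket under the rock ledge stays dry
```

Unrolling that into a loop with an explicit stack is mechanical once you see it, but the recursive version states the problem almost verbatim.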

Recursion is, I think, a simple version of what Douglas Hofstadter refers to in GEB as a ‘strange loop’; a paradox is a more advanced variety. For me, it is a conceptual understanding that sort of acts like a cliff. When you are at the bottom, it makes no sense; it seems like a wall. But if you manage to climb up, it becomes obvious.

There are all sorts of strange loops in programming. Problems where the obvious first try is guaranteed to fail, yet there is another simpler way to encode the logic that always works.

A good example is parsing. The world is littered with what I call string-split parsers. They are the most intuitive way to decompose some complex data. You just start chopping the data into pieces, then you look at those pieces and react. For very simple data, that works fine, but if you get into programming languages, or pretty much anywhere there are inherent ambiguities, it will fail miserably.
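To make the failure concrete: naively splitting “a+(b+c)” on ‘+’ yields the meaningless pieces “a”, “(b”, and “c)”. A tiny recursive-descent parser, with a grammar and names of my own invention, handles the nesting naturally:

```python
# A toy recursive-descent parser for expressions like "a+b+(c+d)".
# Grammar: expr := term ('+' term)* ; term := NAME | '(' expr ')'
def parse_expr(tokens, pos=0):
    node, pos = parse_term(tokens, pos)
    while pos < len(tokens) and tokens[pos] == '+':
        right, pos = parse_term(tokens, pos + 1)
        node = ('+', node, right)     # build the tree as we go
    return node, pos

def parse_term(tokens, pos):
    if tokens[pos] == '(':
        node, pos = parse_expr(tokens, pos + 1)
        return node, pos + 1          # step over the closing ')'
    return tokens[pos], pos + 1       # a single-letter name

tree, _ = parse_expr(list("a+b+(c+d)"))
print(tree)  # ('+', ('+', 'a', 'b'), ('+', 'c', 'd'))
```

The recursion mirrors the nesting in the data, which is exactly what a string split can never do.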

But all you really need to do to climb this cliff is read the Green Dragon book. It gives you all of the practical knowledge you need, not only to implement a parser, but also to understand any of the complicated parser generators, like ANTLR or Yacc.

I guess the cool part is that once you have encountered and conquered a particular strange loop, that knowledge is so fundamental that it transcends tech stacks. If you can write a solid parser in C, you can write it in any language. If you understand how to write parsers, you can learn any programming language way faster. And nothing about that knowledge will radically change over the course of your career. You’ll jump around from different domains and stacks, but you always find that the same strange loops are waiting underneath.

A slight change over the decades was that more and more of the systems programming aspects of building software ended up in libraries or components. While that means that you’ll have fewer opportunities to implement strange loops yourself for actual production systems, you really still do need to understand how they work inside the components to leverage them properly. Being able to pound out a parser and an AST does help you understand some of the weirdness of SQL, for example. You intuitively get what a query plan is, and with a bit of understanding of relational algebra and indexing, how it is applied to the tables to satisfy your request. You’ll probably never have enough time in your life to write your own full database, but at least now you can leverage existing ones really well.

I’m not sure it’s technically a strange loop, but there is a group of mathematical solutions that I’ve often encountered, which I refer to as ‘basis swaps’, that are similar. Essentially, you have a problem with respect to one basis, but in order to solve it, you need to find an isomorphism to some other basis, swap over, partially solve the problem there, then swap all or part of it back to the original basis. This happens in linear programming and exponential curve fitting, and it seemed to be at the heart of Andrew Wiles’s solution to Fermat’s Last Theorem. But I’ve also played around with it for purely technical formal systems, such as device-independent document rendering. I guess ASTs and cross-compilers are there too, as are language-specific VMs like the JVM.
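The exponential curve fit is the easiest one to show. A minimal sketch, with invented sample data: to fit y = a * e^(b*x), swap into log space where the problem becomes a straight line, solve ordinary least squares there, then swap the answer back:

```python
# A basis swap in miniature: an exponential fit solved in the log basis.
import math

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2.0, 5.4, 14.8, 40.2, 109.2]  # roughly 2 * e^x, made-up samples

# Swap: take logs, so ln(y) = ln(a) + b*x, which is a straight line.
ls = [math.log(y) for y in ys]

# Solve the linear least-squares fit in the swapped basis.
n = len(xs)
mx, ml = sum(xs) / n, sum(ls) / n
b = (sum((x - mx) * (l - ml) for x, l in zip(xs, ls))
     / sum((x - mx) ** 2 for x in xs))
ln_a = ml - b * mx

# Swap back to the original basis.
a = math.exp(ln_a)
print(f"y ~= {a:.2f} * e^({b:.2f} * x)")  # roughly 2.00 * e^(1.00 * x)
```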

What I’ve seen in practice is that some programmers, when confronted with strange-loop type problems, go looking for shortcuts instead of diving in and trying to understand the actual problem. I do understand this. There is so much to learn in basic software development that you really don’t want to have to keep going off and reading huge textbooks. But I also know that trying to cheat out a solution to a strange loop is a massive waste of time. You always feel like you’re just one more bug fix away from making it work, but it will never work correctly. The best choice, if you don’t have the time to do it properly, is always not to do it. Instead, find someone else who knows how, or use something that already exists and is of pretty decent quality.

There are very few strange loops in most applications programming, although there are knowledge swamps, like schema normalization, that cause similar grief. Once you stumble into systems programming, even if it is just a simple cache, a wee bit of locking, or the necessity of transactional integrity, you run into all sorts of sticky problems that really do require existing knowledge to resolve them permanently.

Strange loops are worth learning. They don’t change with the trends or stacks, and they’ll enable you to write, use, or leverage any of the software components or tools floating around out there. Sure, they slow you down a bit when you first encounter them, but if you bother to jump those hurdles, you’ll be lightning fast when you get older.

Thursday, April 9, 2026

The Quality Bars

For any given software development effort, there are a bunch of ‘bars’ that you have to jump over, all relating to quality.

At the bottom, there is a minimum quality bar. If the code behaves worse than this, the project will be immediately cancelled. Someone who is watching the money will write the whole thing off as incompetence. To survive, you need to do better.

A little higher is the acceptable quality bar. That is where both management and the users may not be happy about the quality, but the project will definitely keep going. It may face increased scrutiny, and there will probably be lots of drama.

Above that is the reasonable quality bar. The code does what it needs to, in the way people expect it to behave. There are bugs, of course, but none of them are particularly embarrassing. Most of them exist for a short time, then are corrected. The total number of the known long-term outstanding ones is one or two digits. There are probably several places in the code where people think “we should have ...”

Then we get into the good quality bar. Bugs are rare; there are very few regrets. People like using the code; it will stay around for a long, long time. Its weakness isn’t what’s already there; it is making sure future changes don’t negate that value.

There is a great quality bar too. The code is solid, dependable, and can be used as a backbone for all sorts of other stuff. It’s crafted with a level of sophistication that keeps making it useful even for surprising circumstances. People can browse the code and get an immediate sense of what it does and why it works so well.

Above that, there is an excellent quality bar, where the code literally has no known defects. It was meticulously crafted for a very clear purpose and is nearly guaranteed to do exactly that, and nothing but that. It’s the type of code that lives can depend on.

There is a theoretical ‘perfect’ quality bar, too, but it is unreachable. It’s asymptotic.

Getting to the next bar up is usually at least 10x more work than getting to the one below it; the scale is clearly exponential. If it costs 1 just to get to minimum, then it’s 10 for acceptable, and 100 for reasonable. Roughly. This occurs because the higher bars need people to continually revisit and review the effort and aggressively refactor it, over and over again. Code that you splat out in a few hours is usually just minimum quality. Maybe if you’ve written the same thing a few times already, you can start at a higher bar, but that’s unreliable. Getting up to those really high bars means having more than one author; it has to be a group of people with an intense focus, all working in sync with up-to-date collective knowledge. Excellent code carries a guarantee that it will not deviate from expectations, one that you can rely on, so it is far more than just a few lines of code.

A great deal of the code out there in libraries and frameworks falls far short of being reasonable. You might not be affected by that, as it’s often code that is sitting idle in little-used features. Still, you have to see it as a landmine waiting to go off when someone tries to push the boundaries of its usage. Code that has been battle-tested for decades can generally get near the good bar, but there is always a chance that some future version will fall way, way back.

The overall quality of a codebase is really its lowest bar. So if someone splats some junk into an excellent project, then whenever that junk is triggered, it can pull everything else down below acceptable. This is the Achilles heel of plugins, as a few poor ones getting popular can do a lot of damage to the perceived quality.