Thursday, October 31, 2024

Full Speed

I’ve never liked coding in a rush. You just don’t get enough time to put the right parts nicely into the right places, and it really hurts the quality of the system. But as rushed coding gained in popularity, I often had no choice.

One of the keys to not letting it get totally out of control is to schedule the work dynamically.

When non-technical people interfere, they have a bad tendency to try to serialize the work and then arbitrarily schedule it. They like to dictate what gets done and when. That’s pretty bad, as the optimal path is not intuitive.

The first point is to start every development cycle by cleaning up the mistakes of the past. If you were diligent in keeping a list as things went wrong, it can be very constrained work. Essentially you pick a time period, a week or two, and then get as much done in it as you can. To keep the workload smooth, you don’t take a break after a release; you switch immediately to cleanup mode and keep the pace steady. Steady is good.

Then I always want to tackle the hard or low-level stuff first. Hard stuff because it is tricky to estimate and prone to surprises; low-level stuff because so much is built on top of it that it needs to be in place first.

But that type of work often takes a lot of cognitive effort. Due to life and burnout, some days it is just too much. So the trick to rushing is to pace yourself; some days you wake up and want to do easy stuff. Some days you feel you can tackle that hard stuff. You don’t know until the day progresses. You always keep the option to do both.

I usually keep two lists as I go, an easy and a hard one, so I can have off days where I just focus on mindless stuff. I switch erratically.

So long as I keep those two lists up to date, and the cleanup one as well, it is organized enough to not spin out of control.

This makes it hard to say what I’ll get done and when. It makes it harder to work with others. But it is me at full speed, cranking out as much as I can, as fast as I can, with the best quality achievable under the circumstances. It’s the price of speed.

I always say that we should never code like this; you’re never really going to get anything other than barely acceptable quality this way. But it has become so rare these days to get anything even close to enough time to do things reasonably.

I miss the early days in my career when we actually were allowed to take our time and were actively encouraged to get it right. The stuff we wrote then was way better than the stuff we grind out now. It really shows.

Thursday, October 24, 2024

The Beauty of Technology

Long ago, when the book Beautiful Code was first published, I read a few blog posts where the authors stated that they had never seen beautiful code. Everything they worked on was always grungy. I felt sorry for them.

Technologies can be beautiful; it is rare in modern times, but still possible.

I’ll broadly define ‘technology’ as anything that allows humans to manipulate the world around us. It’s a very wide net.

Probably the first such occurrence was mastering fire. We learned to use it to purify food and for warmth and security. Clothes are another early example; they reduce the effect of the weather on us.

Everything we do to alter or control the natural world is technology. Farming, animal domestication, and tools. It has obviously had a great effect on us: we have reached our highest-ever population, which would not have been possible before. We’ve spanned the globe, surviving in even its darkest corners. So, as far as we are concerned, technology is good; it has helped us immensely.

But as well as making our lives easier, it is also the source of a lot of misery. We’ve become proficient at war, for example. We have a long history of inventing ever more powerful weapons that kill more effectively. Without knives, swords, guns, cannons, tanks, planes, ships, etc., wars would be far less lethal and cause less collateral damage. Technologies make it easy to rain down destruction on large tracts of land, killing all sorts of innocents.

So technologies are both good and bad. It depends on how we choose to use them.

Within that context, technologies can be refined or crude. You might have a finely crafted hammer with a clever design that makes it usable for all sorts of applications. Or a big stick that can be used to beat other people. We can build intricate machines like cars that transport us wherever we want to go at speeds far better than walking, or planes that can carry hundreds of people quickly to the other side of the planet.

When we match the complexities of our solutions to the difficulties of the problems we want to solve, we get well-fitting technologies. It might require a lot of effort to manufacture a good winter jacket, for example, but it holds in the heat and lasts for a long time. All of the hundreds of years of effort in materials, textiles, and production can produce a high-quality coat that can last for decades. You can rely on your jacket winter after winter, knowing that it will keep the cold at bay.

And it’s that trust and reliability, along with its underlying sophistication, that make it beautiful. You know that a lot of care went into the design and construction and that any unintentional side effects are negligible or at least mostly mitigated. You have some faith, for example, that the jacket is not lined with asbestos and that the price of warmth is not a sizable risk of cancer.

You can trust that such obvious flaws were avoided out of concern and good craftsmanship. It doesn’t mean there is a guarantee that the jacket is entirely non-carcinogenic, just that any such flaw is not intentional or even likely accidental. Everyone involved in the production cared enough to make sure that it was the best it could be, given all of the things we know about the world.

So, rather obviously, if some accountant figured out how to save 0.1% of the costs as extra profit by substituting in a dangerous or inferior material, that new jacket is not beautiful; it is an abomination. A trick, pretending to be something it is not.

In that sense, beauty refers not only to the design and the construction but also to the fact that everybody along the way in its creation did their part with integrity and care. Beauty is not just how something looks; it is ingrained into every aspect of what it is.

We do have many beautiful technologies around us, although since the popularity of planned obsolescence, we are getting tricked far more often. Still, there are these technologies that we trust and that really do help make our lives better. They look good both externally and internally. You have to admire their conception. People cared about doing a good job and it shows.

Beauty is an artifact of craftsmanship. It is a manifestation of all of the skill, knowledge, patience, and understanding that went into it. It is the sum of its parts, and all of its parts are beautiful. The way it was assembled was beautiful; everything in and around it was carefully labored over and done as well as it could be. Beauty in that sense is holistic. All around.

Beauty is not more expensive, just rarer. If you were tricked into thinking something is beautiful when it is not, you have already paid the price of beauty; the difference is just an increase in profit.

People always want easy money, so beauty is fleeting. The effort that went into something in the past is torn apart by those looking for money, and negative tradeoffs get made. Things are made a little cheaper in the hopes that people don’t notice. Little substitutions here and there quietly devalue the technology in order to squeeze more money from it. Tarnished beauty: a sort of monument to the beauty of the past, but no longer present. Driven by greed and ambition, counter to the skilled craftsmanship of earlier times.

Technologies can be beautiful, but that comes from the care and skill of the people who create them. Too often in the modern world, this is discouraged, as if just profit all by itself is enough of a justification. It is not. Nobody wants to surround themselves with crap or cheap untrustworthy stuff. Somehow we have forgotten that.

Thursday, October 17, 2024

Programming

Some programming is routine. You follow the general industry guidelines, get some libraries and a framework, and then whack out the code.

All you need to get it done correctly is a reasonably trained programmer and a bit of time. It is straightforward. Paint by numbers.

Some programming is close to impossible. The author has to be able to twist an incredible amount of detail and complexity around in their head in order to find one of the few tangible ways to implement it.

Difficult programming is a rare skill and takes a very long time to master. There is a lot of prerequisite knowledge needed and it takes a lot to hone those skills.

Some difficult programming is technical. It requires deep knowledge of computer science and mathematics.

Some difficult programming is domain-based, it requires deep knowledge of large parts of the given domain.

In either case, it requires both significant skill and lots of knowledge.

All application and system programming is a combination of the two: routine and difficult. The mix is different for every situation.

If you throw inexperienced programmers at a difficult task it will likely fail. They do not have the ability to succeed.

Mentorship is the best way that people learn to deal with difficult programming. Learning from failures takes too long, is too stressful, and requires humility. Lots of reading is important, too.

If it is difficult and you want it to work correctly, you need to get programmers who are used to coping with these types of difficulties. They are specialists.

Thursday, October 10, 2024

Avoidance

When writing software, the issues you try to avoid often come back to haunt you later.

A good example is error handling. Most people consider it a boolean; it works or does not. But it is more complicated than that. Sometimes you want the logic to stop and display an error, but sometimes you just want it to pause for a while and try again. Some errors are a full stop, and some are local. In a big distributed system, you also don’t want one component failure to domino into a lot of others, so the others should just wait.

This means that when there is an error, you need to think very clearly about how you will handle it. Some errors are immediate, and some are retries. Some are logged, others trigger diagnostics or backup plans. There may be other strategies too. You’d usually need some kind of map that binds specific errors to different types of handlers.
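As a minimal sketch of that kind of map, in Python and with made-up handler policies (none of these names come from a real system):

```python
import time

# Hypothetical handlers -- the policies here are illustrative only.
def retry_with_backoff(err, attempt):
    """Transient problem: wait a little, then let the caller try again."""
    time.sleep(min(2 ** attempt, 30))
    return "retry"

def log_and_continue(err, attempt):
    """Local problem: record it and keep the rest of the system running."""
    print(f"warning: {err}")
    return "continue"

def halt_and_alert(err, attempt):
    """Full stop: surface the error and stop processing."""
    print(f"fatal: {err}")
    return "stop"

# The map that binds specific errors to different types of handlers.
ERROR_HANDLERS = {
    TimeoutError: retry_with_backoff,
    ConnectionError: retry_with_backoff,
    KeyError: log_and_continue,
    PermissionError: halt_and_alert,
}

def handle(err, attempt=0):
    # Unknown errors default to a full stop rather than being silently ignored.
    handler = ERROR_HANDLERS.get(type(err), halt_and_alert)
    return handler(err, attempt)
```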

However, a lot of newer languages give you exceptions. Their appeal is that instead of worrying about how you will handle the error now, you throw an exception and figure it out later. That’s quite convenient for coding.

It’s just that when you’ve done that hundreds of times, and later never comes, the behavior of the code gets erratic, which people obviously report as a bug.

And if you try to fix it by just treating it as a boolean, you’ll miss the subtleties of proper error handling, so it will keep causing trouble. One simple bit of logic will never correctly cover all of the variations.

Exceptions are good if you have a strong theory about putting them in place and you are strict enough to always follow it. An example is to catch each exception low down and handle it there, or else let it percolate all the way to the very top. That makes it easy later to double-check that all of the exceptions are doing what you want. It puts the handling in a few consistent, predictable places.
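A rough sketch of the second option, assuming a tiny command-line program: every middle layer stays free of try/except, and the one handler sits at the very top.

```python
# A sketch of the "let it percolate to the very top" discipline.
# Nothing in the middle catches exceptions; only the entry point does,
# so all of the handling lives in one predictable place.

def load_record(record_id: str) -> dict:
    # Low-level code raises freely; it does not try to recover here.
    if not record_id:
        raise ValueError("empty record id")
    return {"id": record_id}

def process(record_id: str) -> None:
    record = load_record(record_id)   # no try/except in the middle layers
    print(f"processed {record['id']}")

def main() -> None:
    try:
        process("abc-123")
        process("")                   # will raise
    except Exception as err:          # the single, top-level handler
        print(f"error: {err}")
        raise SystemExit(1)

if __name__ == "__main__":
    main()
```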

But instead, many people litter the code indiscriminately with exceptions, which guarantees that the behavior is unpredictable. Thus, lots of bugs.

So, language features like exceptions let you avoid thinking about error handling, but you’ll pay for them later if you haven’t backed that up with enough discipline to use them consistently.

Another example is data.

All code depends on the data it is handling underneath. If that data is not modeled correctly -- proper structures and representations -- then it will need to be fixed at some point. Which means all of the code that relies on it needs to be fixed too.

A lot of people don’t want to dig into the data and understand it; instead, they start making very loose assumptions about it.

Data is funny in that, in some cases, it can be extraordinarily complicated in the real world. Any digital representation of it then has to be complicated too. So, people ignore that, assume the data is far simpler than it really is, and end up dealing with endless scope creep.

Incorrectly modeled data diminishes any real value in the code and usually produces a whole collection of bugs. The data is the foundation, and it sets the base quality for everything.

If you understand the data in great depth then you can model it appropriately in the beginning and the code that sits on top of it will be far more stable. If you defer that to others, they probably won’t catch on to all of the issues, so they over-simplify it. Later, when these deficiencies materialize, a great deal of the code built on top will need to be redone, thus wasting a massive amount of time. Even trying to minimize the code changes through clever hacks will just amplify the problems. Unless you solve them, these types of problems always get worse, not better.
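As a small, hypothetical illustration of the difference: a price that starts out as “just a number” versus a model that captures a bit more of what the data really is. The fields below are invented for the example, not taken from any particular domain.

```python
from dataclasses import dataclass
from decimal import Decimal

# Over-simplified: "a price is just a number". This quietly assumes a single
# currency, binary floating point arithmetic, and no tax treatment.
price = 19.99

# A richer sketch -- the fields are hypothetical, chosen only to show what
# understanding the data in depth might turn up.
@dataclass(frozen=True)
class Price:
    amount: Decimal      # exact decimal arithmetic, not binary floats
    currency: str        # e.g. "USD", "EUR"
    tax_included: bool   # whether tax is already part of the amount

item_price = Price(amount=Decimal("19.99"), currency="USD", tax_included=False)
```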

Performance is an example too.

The expression “Premature optimization is the root of all evil” is true. You should not spend a lot of time finely optimizing your code, until way later when it has settled down nicely and is not subject to a lot of changes. So, optimizations should always come last. Write it, edit it, test it, battle harden it, then optimize it.

But you can also deoptimize code. Deliberately make it more resource-intensive. For example, you can allocate a huge chunk of memory, only to use it to store a tiny amount of data. The size causes behavioral issues with the operating system; paging a lot of memory is expensive. By writing the code to only use the memory you need, you are not optimizing it, you are just being frugal.
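A tiny Python sketch of that contrast, with arbitrary numbers:

```python
# Deoptimized: reserve a huge buffer "just in case", then store almost nothing in it.
wasteful = bytearray(256 * 1024 * 1024)   # ~256 MB allocated
wasteful[:11] = b"hello world"            # ...to hold 11 bytes

# Frugal: allocate only what the data actually needs.
frugal = bytearray(b"hello world")
```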

There are lots of ways to express the same code that don’t waste resources but still maintain readability. These are the good habits of coding. They don’t take much extra effort, and they are not optimizations. The code keeps the data in reasonable structures, and it does not do a lot of wasteful transformations. It only loops when it needs to, and it does not have repetitive throwaway work. Not only does the code run more efficiently, it is also more readable and far more understandable.

It does take some effort and coordination to do this. The development team should not blindly rewrite stuff everywhere, and you have to spend effort understanding what the existing representations are. This will make you think, learn, and slow you down a little in the beginning. Which is why it is so commonly avoided.

You see these mega-messes where most of the processing of the data is just to pass it between internal code boundaries. Pointless. One component models the data one way, and another wastes a lot of resources flipping it around for no real reason. That is a very common deoptimization, you see it everywhere. Had the programmers not avoided learning the other parts of the system, everything would have worked a whole lot better.
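A caricature of that kind of flipping, with two hypothetical components:

```python
# Component A models users as dicts.
def fetch_users():
    return [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Grace"}]

# Component B insists on its own tuple representation, so every call across
# the boundary converts the same data back and forth for no real reason.
def to_b_format(users):
    return [(u["id"], u["name"]) for u in users]

def back_to_a_format(users):
    return [{"id": uid, "name": name} for uid, name in users]

# The round trip below does nothing except burn cycles and memory.
users = back_to_a_format(to_b_format(fetch_users()))
```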

In software development, it is very often true that the things you avoid thinking about and digging into are the root causes of most of the bugs and drama. They usually contribute to the most serious bugs. Coming back later to correct these avoidances is often so expensive that it never gets done. Instead, the system hobbles along, getting bad patch after bad patch, until it sort of works and everybody puts in manual processes to counteract its deficiencies. I’ve seen systems that cost far more than the manual processes they replaced and are way less reliable. On top of that, years later everybody gets frustrated with it, commissions a rewrite, and makes the exact same mistakes all over again.

Thursday, October 3, 2024

Time

Time is often a big problem for people when they make decisions.

Mostly, if you have to choose between possible options, you should weigh the pros and cons of each, carefully, then decide.

But time obscures that.

For instance, there might be some seriously bad consequences related to one of the options, but if they take place far enough into the future, people don’t want to think about them or weigh them into their decisions. They tend to live in the moment.

Later, when trouble arises, they tend to dissociate the current trouble from most of those past decisions. It is disconnected, so they don’t learn from the experiences.

Frequency also causes problems. If something happens once, it is very different than if it is a repeating event. You can accept less-than-perfect outcomes for one-off events, but the consequences tend to add up unexpectedly for repetitive events. What looks like an irritant becomes a major issue.

Tunnel vision is the worst of the problems. People are so focused on achieving one short-term objective that they set the stage to lose heavily in the future. The first objective works out okay, but then the long-term ones fall apart.

We see this in software development all of the time. The total work is significantly underestimated, which becomes apparent during coding. The reasonable options are to move the dates back or reduce the work. But more often, everything gets horrifically rushed. The resulting drop in quality sends the technical debt spiraling out of control. What is ignored or done poorly usually comes back with a vengeance and costs at least 10x more effort, sometimes way above that. Weighed against the other choices, without any consideration for the future, rushing the work does not seem that bad of an option, which is why it is so common.

Time can be difficult. People have problems dealing with it, but ignoring it does not make the problems go away, it only intensifies them.