Thursday, November 28, 2024

Portability Layer

Way back, when I was first coding, if we had to write code to run on a wide variety of different “platforms”, the first thing we would always do was write a portability layer.

The idea was that above that layer, the code stays simple and clean. The layer itself deals with all of the strange stuff on the different platforms and different configurations.

In that sense, it is a separation of the code itself from the ugliness of making it run everywhere.

Sometime later, the practice fell out of use. Partly this was because there were fewer operating systems, so most people just opted to rewrite their stuff for each one. But it was also because there was a plethora of attempts, like the JVM, to build universal machines that hide portability issues. They were meant to be that portability layer.

Portability is oddly related to user preferences and configurations. You can see both as a whackload of conditional code triggered by static or nearly static data. A platform or a session is just a tightened context.

And that is the essence of a portability layer. It is there to hide all of the ugly, nested messiness that gets triggered from above.

You can’t let the operating system, specific machine configs, or user preferences muddle the code. The code needs to do one thing. Only one thing. Part of that one thing might be a little different below, but above the layer, it is all the same. So the layer hides that.

There may be extra steps for certain users or hardware. The layer hides that too.

The presentation may be different for different users, or even the language. The layer should hide that as well.

It would be better to call this an ugliness layer than a portability layer. It separates the high-level actions of the code from the implementation nightmares below it. And it should be driven by configuration data, which is just the same as any other data in the system. Realistically, it is part of the database, part of the data model.

In a very abstract sense, some code is triggered by some configuration data, which selects between alternative code chains that combine to do the work. At the top level, you can lay out the full logic, while underneath the portability layer anything that isn’t required in the current context is a ‘no-op’, basically an empty function. When structured that way, it’s a lot easier to verify and debug.
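
To make that concrete, here is a minimal sketch in Python. It is entirely my own illustration, not code from any particular system: configuration data binds each high-level step to a context-specific implementation, and steps that aren’t needed in the current context fall back to no-ops.

    import platform

    def _noop(*args, **kwargs):
        # Empty function: used when a step is not required in this context.
        return None

    def _flush_unix(path):
        print(f"unix-style flush for {path}")

    def _flush_windows(path):
        print(f"Windows-style flush for {path}")

    # The "configuration data": static or nearly static facts about the
    # platform and the user, the same kind of data as anything else stored.
    CONFIG = {
        "os": platform.system(),   # e.g. "Linux", "Darwin", "Windows"
        "extra_audit": False,      # a hypothetical user-specific extra step
    }

    # The portability layer: each high-level action is bound once to whatever
    # messy, context-specific code applies below, or to a no-op.
    flush = {"Windows": _flush_windows}.get(CONFIG["os"], _flush_unix)
    audit = print if CONFIG["extra_audit"] else _noop

    def save_document(path, data):
        # Above the layer, the logic reads as one plain sequence of steps.
        with open(path, "w") as f:
            f.write(data)
        flush(path)
        audit(f"saved {path}")

Above the layer, save_document does one thing; below it, the binding of flush and audit carries all of the conditional ugliness.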

The high level is a large set of polymorphic instructions; the underlying level is specific to the context.

The system is a big set of functions that implement various features. Each function is high level; these are mapped to the lower-level stuff that is specific to the machine, the user, or any other sub-context that applies.
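
The same mapping can be sketched polymorphically. Again, this is just a hypothetical illustration: one high-level interface, and one small class per sub-context, including a no-op variant.

    from abc import ABC, abstractmethod

    class Notifier(ABC):
        # The high-level, context-free operation the rest of the code calls.
        @abstractmethod
        def notify(self, message: str) -> None: ...

    class DesktopNotifier(Notifier):
        def notify(self, message: str) -> None:
            print(f"[popup] {message}")

    class QuietNotifier(Notifier):
        # The no-op variant for users or machines that want nothing shown.
        def notify(self, message: str) -> None:
            pass

    def build_notifier(prefs: dict) -> Notifier:
        # Maps the sub-context (here, a user preference) to an implementation.
        return QuietNotifier() if prefs.get("quiet") else DesktopNotifier()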

When code is structured that way, it is easy to figure out what it is doing, and a whole lot less messy. Code that is easy to figure out is also easy to test and debug. Easy code is higher quality code. The encapsulated madness below is where you’ll hit the bugs, and it is harder to test. But if it is clearly separated, then fixing it is just a matter of reproducing the problem and seeing why the underlying code misbehaved.

We should not have forgotten about portability layers; they were definitely a far better decomposition for coping with overly complex runtime issues.

Friday, November 22, 2024

Wobble

As a software development project spins out of control, it wobbles all over the place. The real issues get replaced by fabricated ones.

This is evident in its political discussions.

If all of the issues are grounded, then they are just work that needs to be accomplished. But if they have become disjoint from the underlying technical or domain constraints, then the project is wobbling.

Because the issues are no longer bound to reality, it does not matter how they are handled. They are not the problem; the wobble that caused them is the problem.

A wobbling project quickly gets messy and disorganized. That lowers the quality of both the proposed solution and its implementation. You end up with a lot of badly fitting technology that requires significant handholding just to get through basic functionality. Its fragility defeats its utility.

Wobbles are caused by all sorts of technical, domain, and control issues. These issues are disconnected from the work of building the solution itself. They are irrational and sometimes quite emotional.

The best thing you can do is recognize them as a wobble and work carefully to not let them interfere with the actual development effort. That can be tricky.

If some technology is irrationally banned even though it is suitable for the current development, you end up just having to find an acceptable replacement that doesn’t force a full re-architecture.

If the domain features and functionality are in question, you usually need to go right back to the users themselves and figure out directly from them what parts they actually need, and what parts they will not use. Skip the middle players; they are the source of the wobble.

For process issues, sometimes you need to keep up the illusion of following a useless makework process, but ensure that that effort does not steal away too much focus from the real underlying work. Nontechnical people love to put all sorts of hoops in your way to jump through. You just have to make sure it is not too disruptive.

The biggest cause of most wobbles is a lack of real development experience, whether in the developers, management, stakeholders, or any others with influence. Software development often seems conceptually easy, but the size, scale, and sheer amount of work involved in even a medium-sized project make it somewhat non-intuitive. Good experience is necessary to detect and avoid wobbling.

Thursday, November 14, 2024

Clarity of Expression

For software programs, there are three conflicting goals.

First, the programmer has to express their understanding of the problems they are trying to solve.

Next, the computer has to understand the programmer’s instructions and do the right thing.

Third, other programmers will have to read the code and figure out what was intended to verify, change, or extend it. Code is never finished.

Some languages add all sorts of syntactic sugar and implicit mechanics in order to make it faster for programmers to express themselves.

Those features can trick the original programmer into not fully understanding why specific behaviors are occurring. Things seem to work, but they don’t really know why.

It can lead to ambiguities in the way the code executes.

But more importantly, it convolutes the code so that other programmers can’t easily understand what was intended.

It moves the language closer to what is known as ‘write once, read never’ code, where it is impossible to see the full scope of the expression, so you pretty much have to run it to see what it does, then decide if you like that based on very limited experience.

As languages mature, people often add more and more syntactic sugar or magical implicit handling. They believe that they are helping programmers write code faster, but really they’re just making it all too complicated and unreadable. They are negative value.

For a really simple language, it is often important to utilize all aspects of it.

For a mature language, the opposite is true: you just want to pick a reasonable subset that best expresses your intent. Overusing all of the later language features just impairs readability. It’s a balance; you need to use just enough to make your expression crystal clear. The thing you are doing is not trying to save your own time, but rather to get the meaning and intent to shine through. Code that is never read tends to get replaced quickly.
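
As a hypothetical illustration, mine and not from any particular codebase, here is the same small lookup written two ways in Python. The dense version leans on sugar; the plain version spells out the intent.

    # Dense: a comprehension plus an `or` fallback packs the logic into one
    # line, but the reader has to unwind it mentally.
    def active_names_dense(users):
        return [u["name"].title() for u in users if u.get("active")] or ["<none>"]

    # Plain: the same behavior spelled out step by step, so the intent is
    # obvious and each step is easy to inspect or change later.
    def active_names_plain(users):
        names = []
        for user in users:
            if user.get("active"):
                names.append(user["name"].title())
        if not names:
            names = ["<none>"]
        return names

Neither is wrong, but the second one is the version another programmer can read and trust without having to run it.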

Thursday, November 7, 2024

The Value of Technology

It is pretty simple.

For an individual, technologies only have value when they somehow make life better.

For a society, the same is also true. A valuable technology improves society.

Using this as a baseline, we can see that a technology that sometimes works well but fails very badly at other times is misleading. For some people the technology works, so they see it as valuable. However, for others, when the technology fails them, they become frustrated. Some of those failures are catastrophic. It is a negative value.

To really evaluate a technology properly, we need to look at those failures. They define more of its value than the successes.

For example, someone builds an inventory system. Great, now people can quickly see what is on hand. But the digital information in that system drifts from the actual inventory. Sometimes it says stuff is there when it is not. Sometimes it is the opposite. The value of such a system is its reliability, which is its ability to not drift too far from reality. If the drift is significant, the system is unreliable and has very little value; in fact, it might be hugely negative. Just smoke and mirrors that misleads people.
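
As a rough sketch of what that drift could mean, and purely my own illustration with made-up numbers, you could measure the fraction of items where the recorded count disagrees with a physical count.

    def drift_rate(recorded: dict, counted: dict) -> float:
        # Fraction of items where the system's count disagrees with reality.
        items = set(recorded) | set(counted)
        if not items:
            return 0.0
        mismatched = sum(
            1 for item in items if recorded.get(item, 0) != counted.get(item, 0)
        )
        return mismatched / len(items)

    # Hypothetical numbers: 2 of 4 items disagree, so the drift rate is 0.5.
    # A system that is wrong about half its stock is arguably worth less
    # than having no system at all.
    print(drift_rate(
        {"bolts": 100, "nuts": 40, "washers": 10, "screws": 0},
        {"bolts": 100, "nuts": 35, "washers": 10, "screws": 7},
    ))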

We see that with other categories of systems, particularly communications. If a system delivers accurate information, it is valuable. But if it is susceptible to being subverted to deliver misinformation, spin, or propaganda, then its value is low, or even worse it can be a literal threat to a society. It can be controlled by people with a negative agenda.

If people mistakenly trust that system, yet it delivers information that causes them to make tragic mistakes, the value is extraordinarily negative. It is a bad system. Very bad. They blindly trust the information they are getting, but it is actually not correct and controlled by other people who do not have their best interests at heart. It is a tool for mass exploitation, not for enlightenment.

We can verify that a technology is negative simply by looking at it retroactively. If the trajectory of something within its sphere of influence has gone off in a bad direction, and the technology was instrumental in changing that course, then it clearly has negative value. So individual or collective negative acts taint the mediums that influenced them.

If a group of people does something so obviously self-destructive, and was driven to do so by the use of some questionable technology, it is clear that that technology is bad. Very bad. There is something seriously wrong with it if it was effective at promoting bad choices.

What we need to do is get better at evaluating technologies and at mitigating the effects of any negative technologies on our societies. If we don’t, they will eventually be used to bring about our ruin. We went through this with machines too. Initially, they amplified our labor, but then they also started to amplify destruction. We still have a lot of confusion about the difference.