Thursday, April 16, 2015

Shades of Grey

Every profession changes its practitioners. How could it not? They'll spend a big part of their lives approaching problems from a given angle, applying knowledge built up by others with that same perspective. Prolonged immersion in such an environment gradually colours their views of the world.

Programmers encounter this frequently. Coding is all about engaging in a dialogue with rigid machines that don't tolerate ambiguity or understand intentions. They do exactly, and precisely, what we tell them to do. There is no greyness, no magic; the machines only follow their instructions as given and nothing else (if we ignore issues like faulty hardware and viruses).

This constant rigidity leads us to frame our instructions as being 'right' or 'wrong'. That is, if the code compiles and runs then it is right, else it is wrong. There is nothing in between.

Gradually this invades how we see things. We quickly go from the idea that the instructions are right, to the idea that we've built the right thing for the users, and then we talk about the right ways to design systems and run projects. But for many, it doesn't stop there. It continues to propagate outwards, affecting the rest of their views of the world, and the way they interact with it.

A long, long time ago, when first learning the basics of object oriented design, one of my friends declared "There can only be 'one' right object oriented design. Everything else is wrong". I giggled a bit when I heard this, but then I went on a rather prolonged ramble about the greyness of trade-offs.

I explained that well-known issues like space-time trade-offs meant there were actually a great number of different, yet equally valid designs, all of which should be considered 'right'. Some of those trade-offs might be constrained by known user or operational requirements, but many more differed only because of 'free' parameters.

My friend was sceptical at first, but he came around when I began talking about different sorting algorithms, ones with very different computational complexities in the best, average and worst cases. They all sorted, quite efficiently, but the whole collection of behaviours varied considerably. Optimizing for any one aspect of a complex system always costs some other aspect. That is, one is traded off for the other.
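
To make that concrete, here's a small sketch of my own (not part of the original conversation), in Java: both routines sort correctly, so both are 'right', but insertion sort works in place and degrades to quadratic time on unlucky input, while merge sort guarantees O(n log n) time and pays for it with extra memory.

    import java.util.Arrays;

    public class SortTradeoffs {

        // In place, no extra memory; O(n^2) worst case, but nearly linear on
        // input that is already mostly sorted.
        static void insertionSort(int[] a) {
            for (int i = 1; i < a.length; i++) {
                int key = a[i];
                int j = i - 1;
                while (j >= 0 && a[j] > key) {
                    a[j + 1] = a[j];
                    j--;
                }
                a[j + 1] = key;
            }
        }

        // O(n log n) in every case, but allocates scratch arrays as it goes.
        static int[] mergeSort(int[] a) {
            if (a.length <= 1) return a;
            int mid = a.length / 2;
            int[] left = mergeSort(Arrays.copyOfRange(a, 0, mid));
            int[] right = mergeSort(Arrays.copyOfRange(a, mid, a.length));
            int[] out = new int[a.length];
            int i = 0, j = 0, k = 0;
            while (i < left.length && j < right.length)
                out[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
            while (i < left.length) out[k++] = left[i++];
            while (j < right.length) out[k++] = right[j++];
            return out;
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 9, 1, 7};
            int[] b = a.clone();
            insertionSort(a);                        // sorted in place
            b = mergeSort(b);                        // sorted into a fresh array
            System.out.println(Arrays.toString(a));  // [1, 2, 5, 7, 9]
            System.out.println(Arrays.toString(b));  // [1, 2, 5, 7, 9]
        }
    }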

The simplest example of this is 2D vectors on a Cartesian plot. If magnitude is the only constraint, then there are an infinite number of vectors that all satisfy it equally well. It doesn't matter where they sit on the plot, or at what angle. A single constraint in n variables lumps a great many different things together.
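
For instance (my numbers, not anything from the original argument), the vectors (3, 4), (5, 0) and (0, 5) all have magnitude 5, as does (5 cos t, 5 sin t) for any angle t. A tiny sketch in Java:

    public class EqualMagnitude {
        // |(x, y)| = sqrt(x^2 + y^2)
        static double magnitude(double x, double y) {
            return Math.sqrt(x * x + y * y);
        }

        public static void main(String[] args) {
            System.out.println(magnitude(3, 4));  // 5.0
            System.out.println(magnitude(5, 0));  // 5.0
            System.out.println(magnitude(0, 5));  // 5.0
            // ...and infinitely many more, one for every angle.
        }
    }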

Getting back to software, it's discrete, so there is actually a fixed set of 'good' object oriented designs, but it is a really huge set. Outside of that set there is a much larger one of all the less than correct designs. But that still doesn't make any one specific design 'right'; it just makes it one of the many 'appropriate' solutions. 'Right' is definitely the wrong word to use in this context.

When I started programming, the big decision we had to make was whether we were 'vi' or 'emacs' people. Both editors require significant effort to learn to use correctly. Emacs was seen as the grander solution, but vi was pretty much available everywhere and consumed far fewer resources. On the early internet, wars raged between the proponents of both, each pounding on their keyboards about the rightness of their view. That of course is madness; editors are just tools, and the different varieties suit different people, but almost by definition there is no one perfect tool.

There is no 'right' editor. There are certainly some crappy editors out there, but they are invalid only because they are plagued with bugs, or are missing core functionality. They are deficient, rather than 'wrong'. If you can do the basics of editing text, then an editor is functional.

In that same sense, we can talk about lines of code being right, in that they actually compile, and we can talk about compiled code being right in the sense that it actually runs. Perhaps we could even include 'proofs of correctness' to show that the code does what we think it does, but beyond that it quickly becomes grey. There is no right program for the users. There is no right architecture. No right way to build systems. As we get farther away from the code, the shades of grey intensify. Right and wrong quickly become meaningless distinctions.

A quick perusal of the web shows that many of the discussions about computers out there are predicated on this mistake. People argue in general about how one end of a trade-off is so much better than the other. But it doesn't make sense, and it siphons energy away from more meaningful discussions.

A black and white perspective comes with the early stages of learning to program. It is very easy to let those views cloud everything else. I fell into this trap for a long time. Gradually, I have been trying to correct my views. Right and wrong are far too rigid a frame for making sense of the world. They certainly don't help us understand how to interact with people, or how to come together and build sophisticated computer programs. They don't make life any easier, nor do they make anyone happy. Their sole purpose is to help down at the coding and debugging level, but once you've learned their place, you have to learn to leave them where they belong.

The safest thing to do is to not use the words 'right' or 'wrong' at all. They tend to obscure the essential nature of the underlying trade-offs. It's far better to come at complex systems and environments with an objective viewpoint: any specific choice might have better qualities in one regard, at the expense of known or unknown side-effects in others.

Another easy example is computer languages. Some languages make a given task faster to do and easier to understand, but all of them are essentially equivalent in that they are Turing-complete. A given problem might be more naturally expressed in Lisp, but it also might be far easier to find Java programmers who could continue to work on the project. The time saved in one regard could be quite costly in another.

More to the point, the underlying paradigms available as primitives in Lisp can be reconstructed in Java, so that the upper-level semantics are similar. That is, you don't have to do lispy things only in Lisp; they are transportable to any other programming language, but you have to take one step back from the language in order to see that. And you can't take that step back if you are dead-set on the inherent wrongness of something.
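
As a rough illustration (my example, and deliberately simple): a classic Lisp idiom like "map a function over a list, then fold the results" carries over directly once a language has first-class functions, which Java has had since the Java 8 lambdas and streams.

    import java.util.Arrays;
    import java.util.List;

    public class LispyInJava {
        public static void main(String[] args) {
            List<Integer> xs = Arrays.asList(1, 2, 3, 4);

            // Roughly: (reduce + (map (lambda (x) (* x x)) '(1 2 3 4)))  =>  30
            int sumOfSquares = xs.stream()
                                 .map(x -> x * x)
                                 .reduce(0, Integer::sum);

            System.out.println(sumOfSquares);  // 30
        }
    }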

Putting that all together, although at the lowest levels we need to view things in a black and white context, at the higher levels labelling everything as right and wrong is debilitating. If you're discussing some complex aspect of software development and you use the word 'right', you're probably doing it wrong.
