Sunday, August 24, 2014


I'm feeling out of sync. When I started programming several decades ago, we basically followed an engineering ethic of always trying to build the "right" thing. These days, though, the software development culture seems to have drifted away from that mindset, leaving me stranded in the past.

By way of analogy, let us consider a simple program. For its requirements, let's say that the users specified that they only need this program on Tuesdays. Now, just for argument's sake, let's say that it is considerably easier to write this program with the day of the week hardcoded to Tuesday, so that the program will work only on Tuesdays; on every other day of the week it will have errors.
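To make the analogy concrete, here is a hypothetical sketch of what such a program might look like (the function name and messages are mine, invented for illustration, not from any real system):

```python
import calendar
from datetime import date

def tuesday_report(today=None):
    """Generate the report, but only on Tuesdays (the hardcoded version)."""
    day = calendar.day_name[(today or date.today()).weekday()]
    if day != "Tuesday":
        # On every other day of the week, the program simply fails.
        raise RuntimeError(f"cannot run on {day}")
    return "report generated"
```

The hardcoded check makes the program trivially easy to write, which is precisely its appeal.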

Personally, I would never say that the Tuesday program 'works'. Sort of works, partially works, sure, but I'd never describe it as a working program. However, lately, in several different variations, I have had people absolutely declare it to be working properly. After all, the requirements said 'Tuesday' and the program functions according to the requirements.

"It works on Tuesday, that's what is supposed to do. It doesn't need to do anything else," they say. "All we need to do now is to document its behavior."

For me, it seems negligent for programmers to know that there is more than one day in a week but choose to ignore it. What if someone wants to use the program on a Thursday? Their answer is, "All we have to do is make a new copy of the Tuesday program, then change the hardcoded Tuesday value to Thursday," as if that were somehow sane or obvious.

In a way I do get this new mindset. It's a form of minimalism, and damn the consequences. Is it really so bad to head to a place where there is a special hardcoded version of the Tuesday program for every day of the week? If people really need the program for all seven days, then there will only be seven almost identical copies of the program. "It's not so bad," they claim.

The problem, as I see it, is that this type of strict interpretation of the requirements overlooks the first unwritten requirement that most users have: they don't want crap. It's sometimes not the case, and it is never written down, but for most projects, most of the time, especially if the users are in a rush, they're not going to be pleased, on the first Thursday that they need to use the program, to sit around waiting for the Thursday version to be created. And they're not going to be impressed if this happens three to five more times. They'll forgive the programmers after that, but only until the next code modification comes along, and now they have to wait either seven times longer or live with different inconsistencies on different days. All of the early gains from quickly releasing the Tuesday program will be swept away by the slowness or aggravation they get later. They'll realize that their very first, unwritten, unspoken and believed-to-be-obvious requirement of "don't do this badly" will have been broken. If they eventually get angry, can you blame them? If they don't trust the developers later, would you?

In a much more abstract sense, particularly if you think my analogy is a straw man, what I've been seeing is that programmers have narrowed down the problem context well past a reasonable minimum so that they can deliver quickly. It's not complex at all to deal with all seven days of the week if you can deal with dates, but if you squeeze that context down to only Tuesday you can avoid some of the drudgery of doing it properly. Saves a bit of time... today. For lack of a better name I'll call this 'context hacking'. The real experts in this art can even take a useless piece of code and claim it "works" too, just by creatively hacking off enough context. It is similar to the all-too-common "it works on my machine" excuse. There are lots of different variations out there.

Now this should never be confused with over-engineering; the difference is one of direction. Over-engineering inflates the context well beyond reasonable maximums, while context hacking seeks to go below the minimum. Sometimes that can be a thin line: if, during the entire lifetime of the Tuesday program, nobody ever uses it on any day but Tuesday, one might argue that just working on Tuesdays was the real minimum. But the old ethic was that if we have to deal with one day of the week, and we know there are seven in total, then it would be wrong not to make it work for all of them, regardless of whether or not the users would actually use it on another day. That is, any day-of-the-week support, of any kind, means that we have to implement the code to do the "right thing" for all of the days. Put another way, the Tuesday requirement is moot if we need any day-of-the-week handling, which takes precedence. And of course there is the added bonus that, for the next project (because there will always be another one), if we know how to properly handle weekdays then we can encapsulate this and use it again, which at bare minimum means less testing.

In fact, not writing the Tuesday-specific code, but rather encapsulating the date handling in a reusable library, gradually builds up a toolkit of reusable parts that can greatly increase our ability to meet future challenges at higher levels. Compared to repetitively whacking out copies of the same program over and over again, it is also a much more satisfying programming experience.
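As a sketch of that alternative (again with hypothetical names, assuming only Python's standard library), the day handling can be pulled into a small reusable helper so that no day is hardcoded anywhere:

```python
import calendar
from datetime import date

def weekday_name(d=None):
    """Reusable helper: the full weekday name for any date."""
    return calendar.day_name[(d or date.today()).weekday()]

def daily_report(d=None):
    # Works on all seven days; 'Tuesday' is just one possible input.
    return f"report generated for {weekday_name(d)}"
```

Once a helper like `weekday_name` has been tested, every future program that needs a day of the week can reuse it rather than re-implementing, and re-testing, the same logic.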

When I started, writing the Tuesday program would have been considered very unprofessional. It is all too common these days, but I really am hoping that we rediscover our sense of ethics. Fast, high-stress programming at first appears more glamorous and seems to deliver quickly, but the mess it leaves behind does not make the work enjoyable. It just becomes a quagmire. If we find ourselves surrounded by far too much bad code, it's because that is what we are creating.

Monday, August 4, 2014

Mathematics and Software Development

Programming a computer is the act of building up a large number of instructions for a machine to follow, based on ‘primitive’ operations and underlying libraries. These instructions, or ‘algorithms’, are always executed rigorously, which is occasionally not what we intended. Thus ‘bugs’ may interfere with the user’s objectives, but they will not harm the machine itself (although they can occasionally damage peripherals). The machine is simply following the steps it was given, in a deterministic manner.

Underneath, the computer manipulates ‘data’, which itself has to fit a predefined format. Data stored in a computer is a symbolic placeholder for things that exist in the real world. It doesn’t really exist in the same sense, but it can be used to track and analyse the world around us.

As such, software can be viewed as a ‘system’ whose set of instructions is ‘formal’. We can’t just create any arbitrary collection of bits and expect it to run; the computer will reject the code or data if it isn’t structured properly.

Mathematics, in its simplest sense, is the study of ‘abstract’ formal systems. Mathematicians create sets of ‘primitives’ that act on abstract mathematical ‘objects’. Collectively, the primitives are used to express rigorous relationships between the objects; often these can be combined to form ‘theorems’ and algorithms. That is, for any mathematics to be valid, it must conform to strict formal rules. Mathematical objects exist only in the abstract sense, although they are often used symbolically to relate back to real things in this world. Doing so allows us to explain or predict the way things are working around us. There are many domains that attempt to model the real world by applying mathematics, including statistics, physics, economics and most other sciences.

Both mathematics and computer languages are primarily about formal systems. They both allow us to build up the underlying primitives into larger components such as theorems or libraries. They both exist apart from the real world, and their utility comes from mapping them back onto it. The most expressive underlying formal system for computers is the Turing machine. Within this context we often create other, more specific systems such as programming languages or applications. Turing machines also exist within mathematics; however, they are not the most expressive formal system known. There are larger formal systems that encompass Turing machines. As such, we can see that the formal systems within computers are a subset of those within mathematics.

An interesting difference is that mathematics is completely abstract. It can be written down in a serialized fashion, but it is not otherwise tangible. Software, however, runs on physical machines that are descended from the mechanization started by the industrial revolution. Internally software might be abstract, but externally the computers on which it runs are subject to real-world issues like electricity, temperature and moisture. In this way software manages to bridge the gap between our abstract thoughts and the real world around us. Software also often interfaces directly with users who are creating new input or trying to analyse what is already known.

Given the relationship, software is a form of applied mathematics. Its formal systems share all of the abstract qualities of mathematical formal systems, and the underlying data is essentially various mathematical objects. Building up software on a computer is similar to building up theorems or algorithms within a branch of mathematics. Both mathematics and computers have issues when getting mapped back to reality, since they are at best ‘approximations’ to the world around them.

Software development has been underway for at least five decades, and as such has built up a large base of existing code, knowledge and libraries. The many partial sub-systems of software, like operating systems or domain-specific languages, can help to hide its underlying mathematical nature, particularly when crafting graphical user interfaces, but ultimately a strong understanding of mathematics helps considerably in understanding and structuring new parts for systems. There are some aspects of programming that are non-mathematical, such as styling an interface or the content of an image, but for the rest of software development, having a good sense of mathematics and logic aids in being able to craft elegant, correct and consistent instructions.

Since software is an applied branch of mathematics, it is clear that not all code is intrinsically mathematics, but realistically, what isn’t lies only within the intersection between the machine and the user. That is, whatever irrational or illogical adaptations are needed to make the system more usable for people are non-mathematical. The rest of the system, however, if well written (and possibly elegant), is a formal system like the many that are known in other branches of mathematics.