Over the years I've worked in many development sites, read tonnes of code and heard way too many horror stories about development projects gone bad. While software development is inherently risky, many of the problems I have witnessed were self-inflicted, thus fixable. It is an odd, historically driven aspect of programming: our need to make the work more difficult than necessary.
I wasn't focusing on writing yet another list of things to simplify development, but in my other writings the same issues kept bubbling up over and over again. Sometimes even when you explicitly see something, it is not easy to put a name or description to it. In this case, as I explored more elementary topics I kept getting these 'pieces' floating to the top, but not fitting into the other works. Once I collected these four together, they fit quite naturally.
For this blog entry, I'd like to address this commonality with basic development problems, but try as much as possible to avoid falling into simple platitudes (no guarantees). These problems are elemental, mostly obvious, but surprisingly persistent. Software developers, I am strongly aware, are their own worst enemy. Sure, there are good rants to be had at the expense of management, users or technology, but the presence of other problems doesn't excuse overlooking the fixable ones.
One significant problem that rears its ugly head again and again is our actual perception of what we are doing. We all know that if you think you will fail, you definitely will. There must also be some well-known universal truth that states that if you go in with the wrong viewpoint, it will significantly reduce your likelihood of success. Even in the case where you are positive, if you really don't understand what you are doing, then you are implicitly relying on luck. And some days, you're just unlucky. If you want to be consistently good at something, you really need to understand it.
When I first started programming I subscribed to the 'magic broom' theory. The best illustration comes from the episode called The Sorcerer's Apprentice in Disney's animated film Fantasia; clips of it are easy to find on YouTube.
In the episode, Mickey Mouse, as the Wizard's apprentice, gets hold of the 'magic hat', then starts issuing special commands to an inanimate broom to deliver water to the Wizard's workshop. Mostly because he falls asleep, the circumstances rapidly spin out of control. Awaking in a panic, he is unable to stop the growing problem. A hatchet job only intensifies the issue. In the end, he even tries consulting the manual, a desperate step to be sure, but it is far too little, far too late. Is this not an excellent allegory for a programming project gone awry?
My early view of programming was similar. You issued a series of magic commands to the various 'brooms', and presto, blamo, your work is done for you. It is all magic underneath. In the animation, the fact that any apprentice with access to the hat can easily issue commands, even without fully understanding the consequences, relates back to computers quite effectively. As the common quote puts it:
"To err is human, but to really foul things up requires a computer." -- Farmers' Almanac, 1978
Computers amplify their user's abilities; they are the mental equivalent of a bulldozer.
Needless to say, I was disappointed when I discovered that computers were just simple deterministic machines that never deviated from what they are told, and that the commands themselves were only abstractions thought up by other programmers. A significant amount of the 'magic' is just artificial complexity added in over years and years of development.
There is nothing magical about computers. They are machines that, barring hardware failure, consistently do the same things day after day. Of course, that's no longer so obvious to most users of our modern 'personal computers'. They have become so complex, and often so erratic, that many people feel they have personalities. Some even think that their machines are actively plotting against them.
THE ART OF COMPUTER PROGRAMMING
For quite a while I subscribed to the 'writing software is an art form' school of thinking. This view sees programming as a very specific art form, similar to painting or writing poetry. It is an inherently elitist viewpoint based around the idea that some people are naturally born to 'it', while others shouldn't even try. Not everyone is cut out to be an artist, or in this case a programmer.
While not quite as fantastical as the magic broom theory, there is an implication that software is not repeatable, in the same way that any great work of art is not repeatable. If you accept this, the consequences are quite scary. Given that we are dependent on software, this view directly implies that there is no way to consistently make software stable. The 'fate' of any version of the system rests on the ability to get a hold of the right 'artists'. And these are limited in quantity. So limited, that many programmers would insist that there is only a small handful. Or, at the very least, a few thousand. But certainly not the masses of people slaving away in IT right now.
Software as an art form also implies that there is some 'essence' that we can't teach. Great art, for me, goes beyond technique and materials; it happens when an 'artist' manages to capture 'emotion' in the work along with all of the other details. A grand painting inspires some emotional response, while a work of 'graphic art' just looks pretty. It's the same with musicians and pop music. Rarely does a pop song paint a vivid picture; instead people just tap along to the beat. When it 'gets' to you it is art, otherwise it is just 'easy listening'.
Given that definition, it's hard to picture a software application that directly inspires the user on an emotional level. That would be a word processor that makes you depressed just by starting it, long before you've been disappointed by its inherent bugs. Or a browser that just makes you happy, or angry, or something. The idea, in this context, is clearly silly. A tool, such as a hammer, may look pretty but it does not carry with it any emotional baggage. For that you need art, and the 'thing' that a true artist has, which is not 'common' in the rest of the population (and often requires a bit of mental instability), is the capability to embed 'emotion' directly in their work for all to see. A very rare skill.
What is driving a lot of programmers towards the 'art form' theory is their need to see what they do as 'creative'. As it involves complex analysis, and building tools to solve specific problems, there is a huge amount of creativity required to find a working solution to a set of problems. But, and this needs to be said, once that design is complete you've got to get down to work and actually build the thing. Building is just raw and ugly work, it is never pretty. If you are doing it well, it shouldn't be creative.
I don't want, for example, the electrician in a new house randomly locating the plugs based on his own emotional creativity at the moment. That would suck, particularly if I have to plug things into the ceiling in some rooms because it was a bad hair day. Artists can be temperamental and have periods where they cannot work. Software developers are professionals, and come to the office every day, ready to work (lack of available coffee can change this).
As for the elitist perspective, there are certainly skills in software development, such as generalization and analysis, that are extremely difficult to master. And in the case of generalization, some people naturally have more abstract minds than others. Beyond that, the 'encoding' of a design into chunks of code with specific structures is not difficult, unless the design was just never specified. Elegance is also hard to master, and certainly some people see it right away, but most people should be able to write a basic software program that at least works. Of course, 'just working' is only a small part of the overall solution.
Software development is often 5% creativity to find the solution and 95% work to get it implemented. Sometimes, particularly at the beginning, when you are still looking for the 'shape' of the tool, it may be more like 10% or 15% creativity, but ultimately the work done to implement the code is just work and nothing more. If we want it to be more reliable, then we need to accept it as less creative.
We like to make programming hard, because it satisfies our egos. We feel good if we solve problems that we believe other people cannot solve. Our own personal self-esteem is riding on this. But our desire to inject 'creativity' into the implementation process is really quite insane. In no other field would it be acceptable to make a plan, then while implementing the plan, go off and do something completely different. Is our high rate of failure not directly linked to this self-destructive behavior? Even more oddly, the oft-stated evolution from foolishly ignoring the 'plan' is to just not have one in the first place. That way you don't have to ignore it. That can't end well.
For me, I have grown as a programmer only when I accepted that there is less creativity in what I do, than I would like. That's OK, I look to my 'life' to fulfil some of my creative juices, with things like drawing or photography. Really good hobbies that allow unadulterated expression to bloom. With that concession, I can see software development through an objective eye. From that perspective I found I could build faster, better and more accurately.
There are very complicated parts of programming. Analysis in particular is hard, because of the inherent messiness of people, and it is often the first domino to fall, knocking over the others in the project. But once we have decided what it is that we need to build, the 'hardness' of the problem is significantly reduced. The creative part comes and goes. We shouldn't cling to it because we want to appear smart, or because we are trying to be artists, or even because it makes the days more entertaining. Adding 'artistic' inconsistencies to a program is fun, but people hate it, it is messy, and it ends up being more work. You get a lot of misery in exchange for very little creativity.
Good programmers write elegant bits of code. Great programmers write consistent code. More so than any other 'skill', programmers need self-discipline to keep their code clean and consistent; after all, they will be working on it for years and years. Forget all of the techniques, approaches or styles, it all comes down to using one, and only one, consistent way to do things. Even if that way is arbitrary, that is irrelevant (well, it's not, but it is close enough that it doesn't matter).
Honestly, it doesn't actually matter what style of code you write so long as you are consistent. If you implement some programming construct into the code repeatedly, but identically, it becomes a simple matter of removing it and replacing it with something more advanced. If you implemented it into the system in twenty different ways, the problem of just finding what to change and what not to change is complicated, never mind replacing the actual code. A stupid idea implemented consistently is worth far more than a grand idea implemented in multiple different ways. At least with a consistent stupid idea, you can fix it easily.
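As a minimal sketch of this point (the handlers and error messages here are invented for illustration), here is what a consistently repeated construct looks like, and why swapping it out later is a mechanical edit rather than an archaeology project:

```python
# A guard clause repeated identically in two handlers. Because every copy
# is character-for-character the same, it can later be found and replaced
# with a shared helper mechanically.

def create_user(name):
    if not name or not name.strip():      # the consistent guard, copy 1
        raise ValueError("name required")
    return {"user": name.strip()}

def create_group(name):
    if not name or not name.strip():      # the consistent guard, copy 2
        raise ValueError("name required")
    return {"group": name.strip()}

# The refactored version: one helper, every call site changed the same way.
def require_name(name):
    if not name or not name.strip():
        raise ValueError("name required")
    return name.strip()

def create_user_v2(name):
    return {"user": require_name(name)}

def create_group_v2(name):
    return {"group": require_name(name)}
```

Had that guard been written twenty slightly different ways, each call site would first have to be deciphered before it could be replaced.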
It isn't hard to stop every once in a while and refactor the code to clean up the multiple ways of handling the same problem, or to consolidate several different versions of essentially the same block of code. It is just self-discipline. I've always liked to start all major development with some warm-up refactoring work. In that way, as the project progresses, the code base gets cleaner. Gradually over time the big problems are handled, and removed.
For this, programmers love to blame management for not giving them the time, but if it is a standard at the start of any development cycle, the time is usually available. Just don't give management the option to remove the work, make it a mandatory part of the development. Because it is first, it will get done, even if the current cycle ends in a rush. Cleaning up the code isn't hard, it usually isn't time consuming, and it is definitely professional.
For both individuals and groups, if you look at their code base, you get a picture of how well they are operating. Big, ugly, messy code bases show an inherent sloppiness. Sloppiness is always tied heavily to bugs and frequent interface inconsistencies. It is also commonly tied to schedule slippages and bad work estimates. Sloppy code is hard to use and nearly impossible to leverage. Sloppy code is common. Even the cleanest commercial products barely score a 6 out of 10 on any reasonable cleanliness scale, while most score far less than that. Other than procrastination, there often isn't even an excuse for messy code.
We don't like discipline because it is repetitive. We don't like repetition because it isn't creative. These two problems are clearly linked. The most 'creative' programmers often sit atop huge messes. Sure, they get it, now. But should they ever come back in eight years and have to alter their own code, as I did, they too would get to thinking "boy, did I make a mess of this".
An important corollary of maintaining self-discipline is to accept that the code isn't just written for yourself, successful code will have lots of authors over the years.
Computers need to follow long sequences of instructions in order to implement their functionality. 'Brute force' as a programming technique means pounding out, in as specific a manner as possible, all of the instructions in a computer language that implement a piece of functionality. Aside from being incredibly long, the results tend to be fragile. Because of the repetitive nature of the instructions, as they get changed over time the various blocks of code tend to fall out of sync with each other. These differences create the instabilities in behavior so often called bugs. They also make the software fugly. While brute force is slow to build, fragile and hard to maintain, it is by far the most common way for programmers to write systems.
If you generalize the code, a smaller amount of it can solve much larger problems. It is this 'batching' of code that is often needed to keep up with the demand and schedules for development. If you need to write 50 screens, and each one takes a week, you've got a year's worth of effort. But if you take two weeks to write a generalized screen that acts in place of 10 other screens, then you could compact your work down to 10 weeks. Each generalized screen is twice as hard as the original, but leveraging it cuts the work to a fifth. Of course you'll never be that lucky; you might only get 4 out of 5 to be generalized, but you're now at 18 weeks, which is still less than half the time. That type of 'math' is common in coding, but most people don't see it, or are afraid to go the generalization route out of fear that a screen might take 4 to 8 weeks instead of 2. Note that even then, 4*8 + 10 = 42 weeks is still less than 50, and you're 4X off your estimate. Doesn't that still allow for 8 weeks of vacation?
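The effort arithmetic above, worked through with the same illustrative numbers from the text:

```python
# Effort arithmetic for the screens example. All numbers come from the
# text: 50 screens at 1 week each, versus generalized screens that each
# stand in for 10 plain ones and take 2 weeks to build.

brute_force = 50 * 1          # 50 weeks: every screen coded by hand
best_case   = 5 * 2           # 10 weeks: 5 generalized screens, 2 weeks each
realistic   = 4 * 2 + 10 * 1  # 18 weeks: 4 generalize, 10 stay hand-coded
worst_case  = 4 * 8 + 10 * 1  # 42 weeks: each generalized screen takes 4x longer

print(brute_force, best_case, realistic, worst_case)  # 50 10 18 42
```

Even the worst case, where every estimate is off by a factor of four, still comes in under the brute-force cost.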
Generalization is hard for many people, and I think it is this ability that programmers so often confuse with creativity. Generalizing is probably a form of creativity, but it is the ability to alter one's perspective that is key. To generalize a solution, you need to take a step back from the problem and look at it abstractly.
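To make that shift in perspective concrete, here is a small sketch in the spirit of the screens example; the spec format and field names are invented for illustration, not taken from any real system:

```python
# Instead of hand-coding every screen, step back and notice they are all
# 'a titled list of labelled fields'. One routine driven by a declarative
# spec can then stand in for any number of individual screens.

def render_form(spec):
    lines = [f"== {spec['title']} =="]
    for field in spec["fields"]:
        lines.append(f"{field['label']}: [{field.get('default', '')}]")
    return "\n".join(lines)

# Each new screen is now just data, not another week of code.
user_form = {"title": "New User",
             "fields": [{"label": "Name"},
                        {"label": "Role", "default": "viewer"}]}

print(render_form(user_form))
```

The abstract view ('a titled list of fields') is what makes the single routine possible; the brute-force view ('the new-user screen, the new-group screen, ...') never reveals it.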
There have been many attempts to take generalizations, as specific abstractions, and embed them directly in programming languages or processes. Object-oriented design and programming, for example, appear to follow in the footsteps of Abstract Data Types (ADTs), taking a style-based abstraction and extending it into various programming languages. Design Patterns are essentially style-based abstractions that are commonly, and incorrectly, used as base implementation primitives. While a language or a style may provide a level of abstraction, the essence of abstracting is to compact the code into something that is generalized, reusable and tight. The inherent abstractions provide assistance, but used on their own, or abused, they will not really reduce the size of the problem. Often, when improperly used, they actually convolute the problems and make the complexity worse.
Even as we generalize, we still need the solution to be clean, consistent and understandable by the many people that will come in contact with the code over its years of life. Abstractions that convolute are not the same as ones that simplify. Focusing on the data always helps, and researching existing algorithms can be a huge savings in experimentation. Often the best approach is to start with some simplified algorithm, and extend it to meet the overall problem space.
This higher-level view simplifies the details, allowing less code, reuse, optimizations and many other benefits. Of course it is harder to write, but with experience and practice it comes smoothly for most programmers. It should be considered a core software developer skill.
Generalization, when it is implemented properly, is an abstraction that is encapsulated at specific layers within the system. An elegant implementation of an abstraction is the closest thing we have to a silver bullet. Most programming problems resemble fractals, with lots of little similar, but slightly different, problems all over the place. If you implement something that handles the base case and can be stretched in many ways to meet all of the variations, then one simple piece of code could solve a huge number of problems. That type of leverage is necessary in all programming projects to keep up with the work and meet the deadlines. The reduction of code is necessary to allow for easy expansions of the functionality.
If you have mastered the first three things in this post, you have the ability to build virtually anything. However, that doesn't help you unless you actually know 'what' to build.
The weakest link, and clearly the hardest task of most programming projects, is the analysis. It is usually weak for several key reasons: (a) the problem domain is not well understood, (b) the other influencing domains, like development and operations, are not factored in, and (c) the empathy for the user is missing, or was misdirected.
Software is just a tool for the users to work with. The only thing it can actually do is be used to build up a pile of data. Given that simplicity you'd think it would be easy to relate the tool back to the problem domain. But, as with all things, two common problems exist, (1) the designers never leave their shop to take a look at the actual usage of their tools, and (2) even if they do, they don't do it enough, so that they have the wrong impression about what is actually 'important'.
To be able to build up a workable software tool, you first need to understand the vocabulary of the industry. Then, you need to understand the common processes. Then you need to tie it together to find places where specific digital tools would actually make the lives of the users easier, not harder. Nothing is worse than a tool that just makes misery in a poor attempt to 'control' or 'restrict'. If you want the users to appreciate the tool, it needs to solve a problem for them. If you really want them to love the tool, it needs to solve lots of problems.
The key to understanding a problem domain is listening to the potential users. They have the various 'fragments' that are needed to piece together the solution. Although the users are the experts in the problem domain, the developers should be the experts in building tools. That means that the nature and understanding of the problem comes from the users, while the nature and understanding of the solution comes from the developers.
Analysis is an extremely difficult skill to master; it is not just taking notes, writing down 'requirements' or any other simple documentation process. It is listening to what the users are saying, and reading between the lines to put together a comprehensive understanding of what is really driving their behavior and how that relates back to their need to amass a big pile of data about something. In the end, though, it is always worth remembering that software is just a tool to manipulate data. Knowing a lot about the users is important, but knowing everything is not necessary. It is usually best to stick close to the data, the processes and the driving forces behind their actions.
Understanding the problem domain is important, but the whole space is not just the problem domain, but also the development and operations domains. How the program is created and maintained are often key parts of the overall problem. Building software in a place that cannot maintain it is asking for trouble. Building something that is hard to install is not good either. The development, testing and deployment domains have an effect if you consider the 'whole' product as it goes out over time and really gets used.
Removing big programming problems produces more stable releases, requiring less testing. That in turn means the code gets into usage faster. If you utilize this, the releases can be smaller, and the benefits get to the user faster. This shortened cycle of development, especially in the early days, helps to validate the direction more closely. It doesn't save you from going down the wrong path, but it does save you from going down too far to turn back.
An important point for analysis is to always err on the side of being too simple. The primary reason for this is that it is easy to add complexity when you determine that it is needed, but very difficult to remove it. Once you've locked yourself into believing a model or its rules are correct, deleting parts of it becomes nearly impossible. Working up from simple often meets the development goals, and the product goals as well. Another reason is that a tool that is too complex is not usable, while one that is too simple is just annoying. It is better to keep the users working, even if they are frustrated, than it is to grind it all to a halt. They have goals to accomplish too.
Many programmers are quick to dismiss the complaints of their users as misdirected, thinking they are just noise, or ignorance. Where there is smoke, there is usually fire; a lesson that all programmers should consider. The biggest problem in most systems today is the total failure of the programmers to have any empathy for their users. And it shows. "Use this because I think it works, and I can do it" produces an arrogant array of badly behaving, but pragmatically convenient code. That of course is backwards: if the code needs to be 'hard' to make the user experience 'simple', then that is the actual work that needs to be done. Making the code 'simple', and dismissing the complaints from the users, is a cop-out, to put it mildly.
People are messy, inconvenient, lazy, and they almost always take the easiest path. Sometimes this means that you need to control their behavior; however, overly strict tools diminish their own usefulness. Sometimes this means giving the users as much freedom as possible; however, this is more work. Balancing this contradiction is a key part of building great tools. If the essence of software development rests on deterministic behavior, the essence of understanding users rests on their irrational and inconsistent actions. There is nothing wrong with this, so long as you build any 'uncertainty' into the architecture. If the users really can't decide between two options, then both should be present. If they choose easily, then so can you. Match the fuzziness in the responses of the users with the fuzziness in the analysis and in the design.
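One way to build that 'uncertainty' into the code, sketched with invented behaviors and a hypothetical setting name: when the users can't choose between two behaviors, keep both behind a setting rather than hard-coding a winner.

```python
# If users can't decide whether results should be newest-first or
# alphabetical, don't decide for them: expose both behind one setting.

def sort_results(results, order="newest"):
    if order == "alphabetical":
        return sorted(results)
    # 'newest' here just means reverse insertion order, for illustration
    return list(reversed(results))

items = ["alpha", "gamma", "beta"]
print(sort_results(items))                        # newest-first
print(sort_results(items, order="alphabetical"))  # A-Z
```

If the users later do settle on one behavior, collapsing the setting is a small, local change; hard-coding the wrong guess first would not be.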
Feedback is the all-important final point. If your tool oversimplifies the issues, then there will be problems with its deployment. These should feed back into the upcoming development cycles, in a way that allows the overall understanding to grow properly. Ultimately, some new code should be added, and some code should be deleted. It is worth noting again, for self-discipline reasons, that any code that should be deleted actually is deleted. It should go.
In this way, over time the tool expands to fill in the solution, and more and more of the corner-cases get handled. The results of a good development project will get better over time. That may seem obvious, but with so many newer commercial versions of popular software getting significantly 'worse' than their predecessors, we obviously need to say it explicitly.
IT SHOULD, AND CAN, BE EASIER
Hang around enough projects and you'll find that most 'fixable' implementation problems generally come from one of the above issues. There are, of course, other issues, and possibly serious management problems circling around like vultures, but those are essentially external. You just have to accept some aspects of every industry as they are, forever beyond your sphere of control; a very wise senior developer once told me when I was young: "pick your battles". There is no point fighting a losing battle; it is a waste of your energy.
Those things that come from within, or nearby are things that can be controlled, fixed or repaired. Flailing away at millions of lines of sloppy code is painful, but enhancing millions of lines of elegant code is exhilarating. The difference is just refactoring. It could be a lot of time, for sure, but it will never get done if you never start, and to benefit, you needn't do it all right away. Cleaning up code is an ongoing chore that never ends.
You may have noticed that these development problems are mostly personal things. Things that each developer can do something about on their own. For individuals and small teams this is fairly easy. Big teams are a whole other problem. Often the real problems with huge teams are at the organizational level. Inconsistencies happen because the teams are structured to make them happen. At the lower levels of a big team, the best you can do is ensure that your own work is reasonable and try to enlighten as many of your colleagues as possible to follow suit. Getting all the programmers to 'orient' themselves in the same direction is a management issue. Mapping the architecture onto the various teams in a way that avoids or minimizes overlaps and inconsistencies is a design problem; part of the project's development problem domain.
Finally, you know you've stumbled onto something good when it summarizes well:
If you change your perspective and keep the discipline, mostly your code will work. If you add in good analysis it will satisfy. If you generalize, then you can save masses of work, and get it done in a reasonable amount of time. These four things more than any others are critical. Dealing with them makes the rest of software development easy and reliable.