"Familiarity breeds contempt" is a common cliche. It sums up an overall attitude, but for technology I don't think it is that simple.
I am certainly aware that the more we work with something, the more we come to really understand its weaknesses. To the same degree, we tend to overlook the most familiar flaws just because we don't want to admit their existence. We tend towards self-imposed blindness, right up to the instant before we replace the old with the new. It was perfect last week; this week it is legacy.
Software development has once again come to that state where we are searching for the next great thing. We are at a familiar set of crossroads, one that could possibly take us to the next level.
In the past, as we have evolved, we have also dumped out "the baby with the bath-water", so to speak. We'd go two steps forward, and then one and three-quarters of a step back. Progress has been slow and difficult. Each new generation of programmers has rebuilt the same underlying foundations, often with only the slightest of improvements, and most of the old familiar problems.
To get around this, I think we need to be more honest about our technologies. Sure, they sometimes work and have great attributes, but ultimately we should not be afraid to explore their dark sides as well. All things have flaws, and technology often has more than most. In that, nothing should really be unassailable. If we close our minds, and pretend that it is perfect, we'll make very little progress, if any.
For this blog entry, I wanted to look at the Object-Oriented family of programming philosophies. They have a long and distinguished history, and have become a significant programming paradigm. Often, possibly because of their history, most younger programmers just assume that they work for everything, and that they are the only valid approach to programming. They clearly excel in some areas, but they are not as well rounded as most people want to believe.
Although it is hard to establish, the principles of Object-Oriented (OO) programming appear to be based around Abstract Data Types (ADTs). The formalization of ADTs comes in and around the genesis of the first Object-Oriented programming language, Simula, in 1962. In particular, The Art of Computer Programming by Donald Knuth, which explores the then commonly known data structures, was also started in 1962, although the first volume wasn't published until 1968. Certainly, whatever the initial relationship between the two, they are very close, yet they are very different.
Abstract Data Types as a movement is about using data structures as the basis of the code. Around these data structures, the programmer writes a series of primitive, atomic functions that are restricted to just accessing the structure. This is very similar to Object-Orientation, except that in ADTs it is a "philosophy of implementation" -- completely language independent -- while in OO it is a fundamental abstraction buried deep within the programming language.
One can easily see that OO is the extension of the ADT ideas into the syntax and semantics of the programming languages. The main difference is that ADTs are a style of programming, while OO is a family of language abstractions.
Presumably, by embedding the ideas into the language, it makes it more difficult for the programmers to create unstructured spaghetti code. The language pushes the programmer towards writing better structured code, making it more awkward to do the wrong thing. The compiler or interpreter assists the programmer in preventing bad structure.
While ADTs are similar to OO -- you set up a data-type and then build some access functions for it -- at a higher level there is no specific philosophy. The philosophy only covers the low-level data structures; absolutely nothing is said about the rest of the code. The practice, however, is to use structured code to lay out each of the high-level algorithms that drive the functionality.
This means there is a huge difference between ADTs and OO. Outside of the data structures, you can fall back into pure procedural programming with ADTs. The result is that ADT-style programming looks similar to Object-Oriented code at the low level, but is really structured as 'algorithms' at the high level. A well-structured ADT program consists of a number of data structures, their access functions, and a series of higher-level algorithms that work the overall logic of the system. Since data drives most of the logic in most programs, the non-data parts of the code are usually control loops, interfaces that dispatch functionality, or glue code between different control loops. Whichever way, they can be coded in the simplest, most obvious fashion, since there are no structural requirements.
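The ADT style described above can be sketched in a few lines. The example below is a minimal illustration in Python (the names and the stack structure are my own, not from any particular system): a data structure, a handful of primitive access functions restricted to it, and a plain procedural algorithm driving them at the high level.

```python
# ADT style: a data structure plus a few primitive access functions.
# Nothing in the language enforces this discipline; it is a convention.

def stack_new():
    """Create an empty stack, represented here as a plain list."""
    return []

def stack_push(stack, item):
    stack.append(item)

def stack_pop(stack):
    return stack.pop()

def stack_empty(stack):
    return len(stack) == 0

# The high-level code stays plain and procedural: a simple control
# loop that reverses a sequence using only the access functions.
def reverse(items):
    s = stack_new()
    for item in items:
        stack_push(s, item)
    out = []
    while not stack_empty(s):
        out.append(stack_pop(s))
    return out
```

Note that the high-level `reverse` loop has no structural obligations at all; only the low-level accessors are disciplined.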
That ambiguity in defining the style of the code at a high level in ADTs is very important. It means there is a natural separation between the data and the algorithms, and they get coded slightly differently. But we'll come back to this later.
Software development often hinges on simplifications, so it is not surprising that we really only want one consistent abstraction in our programming languages.
Because of this, we spend a lot of time arguing about which approach is better: a language that is strongly typed, or one that is loosely typed. We spend a lot of time arguing about syntax, and a lot of time arguing about the basic semantics. Mostly we spend a lot of time arguing about whether the language should be flexible and loose, or restricted and strict. It is a central issue in most programming discussions.
If you come to an impasse enough times, sooner or later you need to examine why you keep returning to the same spot. Truthfully, we work at two different levels with our implementations. At the higher level, the instructions are more important. At the low level, it is the data.
At the low level we want to make sure the data that we are assembling is 'exactly' what we want.
When I was building a PDF rendering engine in Perl, for example, I added in an extra layer of complexity. The engine was designed to build up a complicated page as a data structure and then traverse it, printing each element out to a file. This type of processing is really simple in a strongly typed language, but can get really messy in a loosely typed one. To get around Perl's loose semantics, I wrapped each piece of data in a small hash table with an explicit type. With the addition of a couple of really simple access functions, this had the great quality of emulating strongly typed syntax, both making the code easier to write and guaranteeing fewer errors. It was a very simple structuring that also made it really easy to extend the original code.
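The original was done with Perl hashes, which aren't shown here; the sketch below reconstructs the same idea in Python, with invented names. Each value travels wrapped in a small dict carrying an explicit type tag, and the access function checks the tag, so type mistakes fail loudly instead of silently producing garbage.

```python
# Emulating strong typing in a dynamically typed language by wrapping
# each value in a small dict with an explicit type tag.
# (Illustrative sketch; the original technique used Perl hashes.)

def make_typed(type_name, value):
    """Wrap a raw value together with its declared type."""
    return {"type": type_name, "value": value}

def get_value(wrapped, expected_type):
    """Unwrap a value, failing loudly if the tag does not match."""
    if wrapped["type"] != expected_type:
        raise TypeError(
            f"expected {expected_type}, got {wrapped['type']}")
    return wrapped["value"]

# Building up part of a page as tagged data:
width = make_typed("points", 612)
title = make_typed("text", "Annual Report")

# The renderer asks for exactly the type it expects:
assert get_value(width, "points") == 612
# get_value(title, "points") would raise a TypeError immediately,
# rather than printing the wrong thing into the output file.
```

The check costs one comparison per access, but it moves a whole class of rendering bugs from the output file back to the line of code that caused them.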
At the higher level we want to focus on the instructions and their order; the data is not as important. Batch languages and text processing tools are usually loosely typed because it makes more sense: the data is only minimally important, but the order of the functions is critical.
As another example, I was building a dynamic interface engine in Java. In this case, I wasn't really interested in what the data was, only that at the last moment it was getting converted into whatever data type I needed for display or storage. This type of problem is usually trivial in a loosely typed language, so in Java I added a class to 'partially' loosely type the data. Effectively, with a single call, I could strip away the attributes of the data's type; it does not matter what the data is, only what it should be. Again it is a simple structuring, but one that effectively eliminated a huge amount of nearly redundant code and convoluted error handling.
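The Java class itself isn't reproduced in this post, so here is a rough analogue of the idea in Python, with invented names: hold the raw value untouched, and convert it only at the last moment into whatever type the display or storage layer asks for.

```python
# A 'loosely typed' holder: keep the raw value as-is, and convert it
# only at the last moment into whatever type the caller needs.
# (Rough analogue of the Java class described; names are invented.)

class Loose:
    def __init__(self, raw):
        self.raw = raw  # it does not matter what the data is...

    # ...only what it should be, at display or storage time:
    def as_str(self):
        return str(self.raw)

    def as_int(self):
        return int(str(self.raw).strip())

    def as_float(self):
        return float(str(self.raw).strip())

field = Loose(" 42 ")
assert field.as_int() == 42       # for storage in a numeric column
assert field.as_str() == " 42 "   # for echoing back to the screen
```

One conversion point per target type replaces the parse-and-handle-errors boilerplate that would otherwise be repeated at every use of the value.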
With enough coding experience in different languages, you come to realize that being strict at the low level is a great quality, but being flexible at the high level is also important. It is easy enough to get around the bias of the language, but it does count as additional complexity that might prove confusing to other developers. It is no wonder that the various arguments about which type of language is superior are never conclusively put to rest; it is more than a simple trade-off. There is an inherent duality related to the depth of the programming. The arguments for and against, then, are not really subjective so much as they are trying to compare two completely different things.
Well-rounded implementations need both strong-typing and loose-typing. It matches how we see the solutions.
Another fundamental problem we face comes from not realizing that there are actually two different models of the solution at work in most pieces of software.
The way a user needs to work in the problem domain is not a simple transformation from the way the data is naturally structured. The two models often don't even have a simple one-to-one mapping; instead, the user model is an internal abstraction that is convenient for the users, while the data model is a detail-oriented format that needs to be consistent and strictly checked.
A classic example is from a discussion between Jim Coplien and Bob Martin:
Specifically Jim said:
"It's not like the savings account is some money sitting on the shelf on a bank somewhere, even though that is the user perspective, and you've just got to know that there are these relatively intricate structures in the foundations of a banking system to support the tax people and the actuaries and all these other folks, that you can't get to in an incremental way."
In that part of the conversation, Jim talked about how we see our interaction with the bank as a savings account, but underneath, due to regulatory concerns, the real model is far more complex. This example does a great job of outlining the 'user's' perspective of the software, as opposed to the underlying structure of the data. In practice, a bank's customers deal with the bank tellers operating the software. The customer's view of their accounts is the same one that the system's users -- the tellers -- need, even if underneath, the administrators and regulators in the bank have completely different perspectives on the data.
While these views need to be tied to each other, they need not be simple or even rational. The user's model of the system is from their own perspective, and they expect -- rightly so -- that the computer should simplify their view-point. The computer is a powerful tool, and one of its strengths is to be able to 'transform' the model into something simpler allowing the user to more easily interact with it, and then transform it back to something that can be stored and later mined.
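That transformation can be made concrete with a toy version of the banking example (the ledger structure and names below are my own illustration, not how any real banking system stores data): the data model is a ledger of individual transactions, while the user model is simply 'an account with a balance'.

```python
# Toy version of the two-model idea from the banking example: the
# stored data model is a ledger of transactions; the user's model is
# a simple account with a balance. (Illustrative structure only.)

ledger = [
    {"account": "savings", "amount": 500},
    {"account": "savings", "amount": -120},
    {"account": "savings", "amount": 75},
]

def user_view(ledger, account):
    """Transform the data model into the simpler user model."""
    return {
        "account": account,
        "balance": sum(tx["amount"] for tx in ledger
                       if tx["account"] == account),
    }

view = user_view(ledger, "savings")
assert view["balance"] == 455
# Note that the mapping is lossy: from the balance alone you cannot
# reconstruct the ledger, so the two models are not one-to-one.
```

Going from ledger to balance is a simple fold; going back is impossible, which is exactly the asymmetry between the two models.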
It is a very common problem for many systems that the software developers insist there is only one 'true' way to look at the data: the way it needs to be structured in the database. It is not uncommon to see forms tools, generators or other coding philosophies that are built directly on the underlying data model.
This makes it possible to generate code and it cuts down on the required work, but the data model is rarely structured the way the user needs. The results are predictable: the users can't internally map the data to their perspective, so the applications are extremely awkward.
A single model type of design works only for very simple applications where there is little mapping between the user model and the data model. Mostly that is sample applications and very simple functions. While there are some applications of this sort, most domain specific systems require a significant degree of complex mapping, often maintaining state, something the computer can and should be able to do.
The essence of this is that the 'users' have a model that is independent of the 'data' model. That is, the interface translates between the real underlying structure of the data and the perspective that the users 'need' to complete their work. More importantly, the models are not one-to-one, or we would be able to build effective translators. This duality and inconsistent mapping is why code generators and forms tools don't work: you can't build a usable application from the data model, because it is only loosely tied to the user model.
So what does this have to do with Object-Oriented programming?
The structure of an Object-Oriented language makes it relatively easy to model the structure of the data in the system. At the lower data level, the semantics of the language allow it to be used to build complex systems. You create all the objects that are needed to model the data, or the user abstraction. Object-Oriented languages are at their height when they are encoding easily structured objects into the language.
The classic example is a GUI system where the objects in the code are mapped directly to the visual objects on the screen. That type of mapping means that when there are problems, it is really easy to go back to the screen and find the errors. The key here is the one-to-one link between the visual elements and the objects. The point of going to a lot of extra effort is to make it easy to find and fix problems: you pay for the extra effort in structuring the OO code, and get reduced effort in debugging it.
At the higher level, all of the OO languages totally fall apart. We add massive amounts of artificial complexity into the works just to be able to structure the problem in an Object-Oriented manner. Ultimately this makes the programs fragile and prone to breakage. Ideas such as inversion of control and dependency injection are counter-intuitive to the actual coding problems. They introduce too many moving parts.
Now, instead of the problem being obvious, you may need to spend a significant amount of time in a debugger tracing through essentially arbitrarily structured objects. Constructs like Design Patterns help, but the essence of the implementation has still moved far away from the real underlying problem. The developer is quickly buried in a heap of technical complexity and abstractions.
If what you wanted was a simple control loop, a traversal or some hooks into a series of functions, it would be nice if the simplicity of the requirement actually matched the simplicity of the code.
Even more disconcerting, we must leave our primary language to deal with building and packaging problems. We use various other languages to really deal with the problems at the highest level, splitting the development problem domain into pieces. Fragmentation just adds more complexity.
Our build, package and deploy scripts depend on shell, ant or make. We could write the build mechanics in our higher level language, but writing that type of building code in Java for example, is so much more awkward than writing it in ant. If we see it that clearly for those higher-level problems, does it not seem as obvious for the higher-level in the rest of the system?
Another critical problem with the Object-Oriented model is the confusion between the user model, the data model and the persistent storage. We keep hoping that we can knock out a significant amount of the code if we can simplify this all down to one unique thing. The Object-Oriented approach pushes us towards only one single representation of the data in the system. A noble goal, but only if it works.
The problem is that in heading in that direction, we tend to get stingy with the changes. For example, the first version of the application gets built with one big internal model. It works, but the users quickly find it awkward and start requesting changes. At different depths in the code, the two models start to appear, but they are so overlapped that it is impossible to separate them. As integration concerns grow, because more people want access to the data, the persistence model starts to drift away as well.
Pretty soon there are three distinct ways of looking at the underlying business logic, but nobody has realized it. Instead the code gets patched over and over again, often toggling between leaning towards one model and then back towards another, creating an endless amount of work and allowing the inconsistencies to create an endless amount of bugs.
If the architecture identifies the different models and encapsulates them from each other, the overall system is more stable. In fact, the code is far easier to write. The only big question comes in deciding the real differences between the user model and the data model. When a conflict is discovered, how does one actually know whether the models should differ from each other, or whether both are wrong?
Ironically, even though they are probably older and less popular, ADTs were better equipped for handling many implementation problems. Not because of what they are, but because of what they are not. Their main weakness was that they were not enforced by the language, so it was easy to ignore them. On the plus side, they solved the same low-level problem that Object-Oriented did, but allowed for better structuring at the higher level. It was just that it was up to the self-discipline of the programmer to implement it correctly.
I built a huge commercial system in Perl, which has rudimentary Object-Oriented support. At first I tried to utilize the OO features, but rapidly I fell back into a pure ADT style. The reason was that I found the code was way simpler. In OO we spend too much time 'jamming' the code into the 'right' way. That mismatch makes for fragile code with lots of bugs. If you want to increase your quality, you have to start by trying to get the 'computer' to do more work for you, and then you have to make it more obvious where the problems lie. If you can scan the code and see the bugs, it is far easier to work with.
We keep looking for the 'perfect' language: the one-size-fits-all idea that covers both the high- and low-level aspects of programming. Not surprisingly, because they are essentially two different problems, we've not been able to find something that covers them entirely. Instead of admitting to our problems, we prefer to get caught up in endless arguments about which partial solution is better than the others.
The good qualities of ADTs ended up going into Object-Oriented language design, but the cost of getting one consistent way of doing things was a paradigm where it is very easy to end up with very fragile high-level construction. Perhaps that's why C++ was so widely adopted: while it allowed OO design, it could still function as a non-OO language, allowing a more natural way to express the solution. Used with caution, this lets programmers avoid forcing the code into an artificial mechanism, not that any programmers who have relied on this would ever admit it.
The lesson here, I guess, is that the good things that make Object-Oriented programming popular are also the things that make it convoluted to some degree. If we know we have this duality, then for the next language we design for the masses, we should account for it. It is not particularly complicated to go back to ADTs and use them to redefine a newer paradigm, this time making it different at the low and high levels. Our own need for one consistent approach seems to be driving our technologies into becoming excessively complicated. We simplify for the wrong variables.
We don't want to pick the 'right' way to do it anymore; we want to pick the one that means we are most likely to get the tool built. Is a single consistent type mechanism better than actually making it easier to get working code?