Thursday, June 18, 2015

Encapsulation

One of the strongest, but possibly least understood, principles of object-oriented (OO) programming is 'encapsulation'.

The OO paradigm explicitly injects structure on top of code, which allows programmers to build and maintain considerably larger programs than in the distant past. This extra level of organization is the key to managing complexity. But while it amplifies our ability to build big programs, there is still a 'threshold of complexity' that, once crossed, will quickly start to degrade the overall stability of the development project, and eventually of the software itself.

An individual programmer has fixed limits on how quickly they can build up instructions and, later, on how quickly they can correct problems. A highly effective team can support and extend a much larger code base than the sum of its individuals, but eventually the complexity will grow beyond their abilities. There is always some physical maximum after which the work becomes excessively error-prone, consistently slower, or both. There is no getting around complexity; it is a fundamental limitation on scale.

However, we can significantly minimize it, to keep from crossing the threshold for as long as possible. The most obvious way is the strict avoidance of any artificial complexity, such as special cases, twisted logic, arbitrary categorizations and other forms of disorganization. That helps, but there is another approach as well.

Encapsulation can be seen as drawing a 'black box' around a subset of a complex system. The box prevents anyone on the outside from seeing the inner workings, but it also ensures that what's on the inside is not influenced by random outside behaviour. The inside and outside are explicitly walled off from each other, so that the only interaction is through a precisely defined 'interface'.
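As a rough sketch (the names here are purely illustrative, not taken from any particular library), in an OO language the 'interface' is often literally an interface type, with the workings hidden behind it:

    // A hypothetical boundary: callers see only this contract.
    public interface MessageStore {
        void save(String key, String message);
        String load(String key);
    }

    // Inside the box; outside code never sees or depends on these details.
    class InMemoryMessageStore implements MessageStore {
        private final java.util.Map<String, String> data = new java.util.HashMap<>();

        public void save(String key, String message) { data.put(key, message); }
        public String load(String key) { return data.get(key); }
    }

Outside code holds a MessageStore reference and nothing more, so the internals can be rewritten freely without disturbing it.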

To get the most out of encapsulation, the contents of the box must do something significantly more than just trivially implement an interface. Boxing off something simple is essentially a net negative, given that the box itself is a bump in complexity. To actually reduce the overall complexity, enough sub-complexity must be hidden away to make the box worth the effort.
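For instance, a wrapper like this hypothetical one (nothing here comes from a real library) hides nothing; every call forwards straight through, so a reader still has to understand the layer beneath, plus the wrapper itself:

    // A 'shallow' box: it adds a layer without removing any knowledge.
    class FileWrapper {
        private final java.io.File file;

        FileWrapper(String path) { this.file = new java.io.File(path); }

        boolean exists() { return file.exists(); }   // straight pass-through
        long size()      { return file.length(); }   // straight pass-through
        String name()    { return file.getName(); }  // straight pass-through
    }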

For example, one could write a new layer on top of a technology like sockets and call it something like 'connections', but unless this new layer really encapsulates enough underlying complexity, such as implementing a communications protocol and a data transfer format, it has hurt rather than helped. It is 'shallow'. For an encapsulation to be useful, it must hide a significant amount of complexity: there should be plenty of code and data buried inside the box that no longer needs to be known outside of it, and none of that knowledge should leak out. So a connection that seamlessly synchronizes data between two parties (how? We don't know) correctly removes a chunk of knowledge from the upper levels of the system. And it does so in a way that makes it clear and easy to triage problems as being 'in' or 'out' of the box.
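To make that concrete, a connection layer worth its box might look roughly like the sketch below. The class name, the framing scheme and the wire format are all assumptions for illustration; the point is that callers exchange plain strings while the sockets, framing and character encoding stay buried inside:

    import java.io.*;
    import java.net.Socket;

    // A hypothetical 'deep' encapsulation: callers never learn about
    // sockets, length-prefixed framing or character encodings.
    public class Connection implements Closeable {
        private final Socket socket;
        private final DataOutputStream out;
        private final DataInputStream in;

        public Connection(String host, int port) throws IOException {
            this.socket = new Socket(host, port);
            this.out = new DataOutputStream(socket.getOutputStream());
            this.in = new DataInputStream(socket.getInputStream());
        }

        // Hidden inside: a simple length-prefixed protocol over the socket.
        public void send(String message) throws IOException {
            byte[] bytes = message.getBytes("UTF-8");
            out.writeInt(bytes.length);
            out.write(bytes);
            out.flush();
        }

        public String receive() throws IOException {
            int length = in.readInt();
            byte[] bytes = new byte[length];
            in.readFully(bytes);
            return new String(bytes, "UTF-8");
        }

        @Override
        public void close() throws IOException { socket.close(); }
    }

If something misbehaves, 'is the framing or the encoding at fault?' is entirely an inside-the-box question, which is exactly what makes triage tractable.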

Once a subset of the complexity has been fully encapsulated and is easily diagnosable, it can be vetted and then ignored for the moment. That is, if the connections library is known to work for all of the circumstances currently used by the project, then programmers don't have to revisit its internals unless they want to expand them. That's the real power of encapsulation: for a little bit of extra thinking and work, some part of the system is carved off and put to rest. Later, because of reuse or enhancements, it may make sense to revisit the code and widen its functionality, but for the moment it is one less (hopefully large) problem to deal with. The system as a whole is more complex, but the system minus the encapsulated parts is relatively less complex, and thus it is easier to work on what remains unsolved.

In little programs, encapsulation isn't really necessary; it might help, but there just isn't enough overall complexity to worry about. Once the system grows, however, it approaches the threshold very quickly. Fast enough that many software developers ignore it until it is way too late, and then the cost of correcting the code becomes unmanageable.

It is for that reason that many seasoned developers have learned the habit of encapsulating early and often. They essentially make a second 'editing pass' over any new code, aimed at breaking off any potentially encapsulated parts into their own independent chunks. This is 'partial' encapsulation: the code is partitioned, but the boundary is not rigorously enforced. Doing this regularly and reliably means that at some point later, when it is required, the code can easily be upgraded to full encapsulation.
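One hypothetical shape of that editing pass: parsing logic that was scattered through a request handler gets gathered into its own small class. Nothing yet stops other code from reaching into it, so it is only partially encapsulated, but the boundary now exists and can later be hardened (private fields, a formal interface) without rewriting the callers:

    // Partial encapsulation: the related code now lives in one place,
    // but the boundary is not enforced yet (the fields are still open).
    class RequestLine {
        String method = "";
        String path = "";

        void parse(String line) {
            String[] parts = line.split(" ");
            if (parts.length > 0) method = parts[0];
            if (parts.length > 1) path = parts[1];
        }
    }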

If you look at any well-written code base, the related instructions tend to be localized, and there is at least an implicit organization that cleanly separates the underlying pieces. That is, the individual lines of code appear in the sub-parts of the system exactly where you would expect them to be located. Perhaps some of the code is formally encapsulated, but often that degree of rigor is not necessary at the current stage of the code base, so it's just partial. Contrast that with code containing an overabundance of hollow encapsulation, or code that seems to be located randomly anywhere, and you can see why this principle is so important in keeping the code useful.

To build big systems, you need to build up a huge and extremely complex code base. To keep it manageable you need to keep it heavily organized, but you also need to carve out chunks of it that have been done and dusted for the moment, so that you can focus your current efforts on moving the work forward. There are no shortcuts available in software development that won't harm a project; just a lot of very careful, dedicated, disciplined work that, when done correctly, helps extend the active lifespan of the code.