Saturday, May 17, 2014

Predictions

For as long as I've been following the industry, hardware innovations have always been the leading indicator of software direction.

Mainframes were followed by minis, then micros, networks and servers, and now smartphones and tablets. Each new wave of hardware spawned another set of waves in software a few years later: operating systems, better languages, data structures, GUIs, networks, libraries, client-server, object-oriented programming, the web, NUIs, distributed systems and mobile apps. These came as a response to the increased capability of the new hardware. Since it was all new territory, most of these new software technologies were redone from scratch. In this fashion we've built up billions of lines of code, in multiple distinct generations, managed by millions of programmers.

In a very general sense, software has never had the chance to really mature because adapting to the next wave of hardware took precedence. This dates way back beyond me and might come loosely under the category of "the software crisis" from the 60s: an ongoing epidemic in which each new generation of programmers ignores the knowledge of the previous one because they are too busy keeping up with the new demands brought on by advances in hardware. We don't get caught up in fixing the existing problems; we just move on to the next ones.

The Internet of Things promises to continue this trend for yet another round. There'll be another huge wave of small hardware devices, followed by billions of lines of code trying to do something useful with all of these interfaces and the data collected.

The 'crisis' part of this evolution is that although software has innovated to catch up, its general quality has been sacrificed to keep pace. A big change in approach like object-oriented programming did open the doors for us to build considerably larger systems, but each new generation compensated for the increased complexity by letting aspects of quality slip.

There are, of course, lots of counter-examples of great modern software that is extremely well built, but considering the billions of lines of code out there today, they just don't account for a sizeable share of the volume. Most code is fragile. More importantly, our critical data is kept on or processed by older, more reliable technologies like mainframes, COBOL or even Fortran. That's partly because they are better debugged by now, but also because they were better engineered. Cruder hardware forced more concentration on the solutions, which resulted in software that was smaller and more dependable. More thought went into the work, because there was more time available for it.

All of the new hardware coming out for the Internet of Things will once again hide this basic problem. Data was initially generated mostly by people, so it was small, prone to error and needed extensive editing. But it was highly structured and deep. With hardware generating the data, the opposite will be true: a massive amount of data that is extremely accurate, but also very shallow. Mostly it will be endless time-series of specific points in the real world, like a living room lamp. We'll get great volumes of stuff like ON 3:46pm, OFF 4:23pm, ON 7:23pm, and so on. In a sense, it is data that just 'outlines' some base information, like the fact that someone came home at quarter to four for a while and then returned later for dinner. It doesn't say that directly, but it can be inferred.
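
To make that concrete, here is a minimal sketch in Python, using made-up lamp events and a hypothetical infer_presence helper (none of this comes from a real product), of how such shallow time-series points might be turned into a rough inference about when someone was home:

    from datetime import datetime

    # Hypothetical lamp events: just a state and a timestamp -- accurate, but shallow.
    events = [
        ("ON",  datetime(2014, 5, 17, 15, 46)),
        ("OFF", datetime(2014, 5, 17, 16, 23)),
        ("ON",  datetime(2014, 5, 17, 19, 23)),
        ("OFF", datetime(2014, 5, 17, 23, 5)),
    ]

    def infer_presence(events):
        # Pair ON/OFF events into rough "someone was probably home" intervals.
        intervals = []
        start = None
        for state, when in events:
            if state == "ON" and start is None:
                start = when
            elif state == "OFF" and start is not None:
                intervals.append((start, when))
                start = None
        return intervals

    for start, end in infer_presence(events):
        print("probably home from", start.time(), "to", end.time())

The events themselves say nothing about people; the meaning only shows up once something aggregates and interprets them.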

The rush to utilize this new hardware input will once again obscure the fundamental problems with software development. We'll get an app to flicker the lights on and off, for example, but beyond the basic tricks the software side will only marginally improve our lives, at the cost of making them once again more complex. Now, to turn on the lights, you'll have to fumble around to find that lamp app buried among dozens of others.

In fact this has been the ongoing trend for software since the beginning. Computers haven't increased the 'quality' of our lives as much as they could have, and at the same time they've allowed us to amplify the complexities of our societies to a dangerous level.

This is best seen in all of the security problems we've been having lately. It has become far too easy for people to get important data on anything they want, and frequently they choose to use this data for ends that are not good for either individuals or our societies. Identity theft, credit card fraud and illegal spying are all consequences of the software industry's inability to reliably create large software systems. Since security isn't visible to the consumers, it is one of those issues that can be left for later.

Added to that, there are great questions we could answer right now about the various behaviours of 'things' in our societies. We've collected the data and it is stored, but unfortunately it spans so many repositories that it is essentially unmineable. We converged on this 'siloed' version of our collective data because the problems we could have tackled are just too large for any single individual to comprehend, and there haven't been any market or financial benefits for organizations or individuals in the software industry to work together to solve them. Thus we have very few standards, many, many exceptions and great walls between our data.

With more hardware on the way, the industry will obviously continue on its current trajectory. We'll just build more hardcoded ways to view the new data, and gradually combine little bits of it together, here and there. The new features will keep people amused long enough that they forget about all of the underlying bad things that are happening. That will last us at least another decade at its present rate, but what then? What do we do when we have trillions of lines of code, erratically spread all over, generating data that makes petabytes look tiny?

At some point, hardware just isn't going to be able to cover up the sins of the software industry anymore. Software will start to change very slowly, and there will be a great outcry for it to start living up to its promise. Of course by then, the collective mess that we've built will be well beyond any individual or group's comprehension. And worse, it will be nearly set in stone, because we won't have the resources to chase through the tangled webs and unravel the fundamental problems. Still, once progress is no longer apparent, there will finally be a demand to correct what was so hastily thrown together.

I could guess at what happens at that point, but I think many people are just hoping we invent artificial intelligences by then and make it their problem. They'll have the patience to slowly unwind what we've built up over decades of haste. With a bit of luck we can avoid the consequences of our immaturity.

We could, of course, start working on research to find a solution to our growing problem. Most likely it would take decades and involve going all of the way back to the basic models of computation. There are a few groups out there with this type of goal, but there seems to be very little funding available to really put significant effort into it. It's not short-term enough to interest investors, and if something great were discovered it wouldn't achieve widespread usage unless it was essentially 'shared' with the rest of the world. Those qualities are generally bad for profits. It would also be too long, deep and slow to make it a viable goal for movements like Open Source. Any notoriety or payoff would come so late that interest would have long since waned. People would just give up and move on to something more topical.

With all of that in mind, what we'll most likely see over the next decade is a new round of technology driven by the increase in hardware, meant to capitalize on the Internet of Things wave. It will probably center around being distributed, and will need to handle large volumes of simple data. These waves usually come with a new round of languages, so some of the more interesting distributed ones might grow in popularity, but it is more likely that the next set will be loosely based on what is out there right now. Like all of the other waves, it will ignore the ongoing problems with version changes, and it may even degrade the representational quality of the data itself to save space, time or both. It will be harder to install, and it will inadvertently take more effort to keep it constantly upgraded; both of these attributes have been getting worse lately. Of course its security will be abysmal, and any serious logging or auditing will be absent (sacrificed to save space). Still, people will flock to it, and the next generation of coders will pronounce the rest of us out of touch with modern software for not degrading our practices enough to keep up.

History will repeat itself at least once more. Some trends are just too strong to escape. On a related note, if anyone has millions of spare dollars and a hankering to finally break this cycle, I've got enough crazy ideas to last decades...


Saturday, May 10, 2014

Architecture

Writing a small program, a few thousand lines, is not particularly difficult once you've mastered logic and algorithms. You focus on a very specific part of the problem, work out the details and then pound it into shape. As the size of the program grows you might keep on with the same approach: decompose the problem into smaller ones, then dump each answer somewhere arbitrary in the code base. At some point, however, the amount of code that has built up becomes a problem in itself.

With a small program, if you make some sloppy mistakes they are fairly easy to find and fix. A couple of days at most. So the code can be disorganized. It can be highly redundant. Each instance of the same solution can have its own unique approach. You can get away with a lot of really bad practices. It can be a mess.

As the code base gets larger, the cost of changing it grows -- probably exponentially -- until it is a huge task. Somewhere around the medium range, say 40,000 to 60,000 lines of code, just dumping in code anywhere builds up enough disorganization that it hinders further development. It only gets worse as more code is added. A few hundred thousand disorganized lines is painful; millions of lines of that is totally unmanageable.

Disorganized code can make it very hard to correct problems. Finding a bug in a small program isn't too hard once you've mastered debugging, but in a big code base organization is required to let you quickly jump to the right area. Without organization, it could take weeks, months or even years to find the problem; you have to search everywhere.

When there is enough functionality and enough complexity, fixing the problems is a mess too. It starts to resemble stuffing straw into an old sack. You stuff it in one hole, only to watch it fall out from others. The changes cause unexpected side effects, which cause more changes, then more side effects, etc. It becomes an endless and mostly hopeless task made worse by impending deadlines.

A big mess can also be hard to extend. You don't want to disturb the existing functionality, so you copy and paste bits of code from all over into a new location, but the new redundancies will quickly start to contradict each other, causing their own special bugs. Instead of making things better, you are probably just making them worse.
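
As a tiny, hypothetical illustration (the email check and the function names are invented for this post, not taken from any real system), here is how two pasted copies of the same rule quietly drift into contradiction, and how one shared routine removes it:

    # Two pasted copies of the "same" check, each later tweaked in place:
    def is_valid_email_billing(address):
        return "@" in address

    def is_valid_email_signup(address):
        return "@" in address and "." in address.split("@")[-1]

    # The copies now disagree: billing accepts "bob@localhost", signup rejects it.
    print(is_valid_email_billing("bob@localhost"))   # True
    print(is_valid_email_signup("bob@localhost"))    # False

    # Consolidating behind one shared routine removes the contradiction:
    def is_valid_email(address):
        return "@" in address and "." in address.split("@")[-1]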

Big disorganized code bases are unstable, suffer from all sorts of horrible operational issues and usually have annoying interface problems, not to mention endless security holes. In most cases they look as ugly from the outside as they do internally.

As the size of any system grows, the organization of the code quickly becomes the most significant problem in the development. Fortunately there is a well-known solution.

Architecture, for software, is a combination of the underlying technology choices and the higher-level organization that encapsulates the many pieces of a big system from each other. Mostly, due to resource limitations, it is arrived at through a long-term vision that is then mediated via compromises and politics. It isn't necessary for small programs, but from medium to large and all the way through to huge programs, it becomes increasingly significant to the stability, quality and extensibility of the system.

A good architecture draws 'lines' through the code, providing high-level encapsulation between components. It makes it easy to attribute bugs to specific parts, and it allows one to place a scope on the impact of any code changes. It also defines exactly where common extensions should be made to the code and lays out a direction for how to add brand new pieces.
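
As a toy sketch of what one of those 'lines' might look like, here is a small Python example with invented names (OrderStore, OrderService and friends are purely illustrative): the rest of the system talks to storage only through a narrow interface, so a persistence bug can only live behind that line, and a new storage back end has exactly one place to plug in.

    from abc import ABC, abstractmethod

    class OrderStore(ABC):
        # The 'line': code above this interface knows nothing about storage details.
        @abstractmethod
        def save(self, order): ...
        @abstractmethod
        def find(self, order_id): ...

    class InMemoryOrderStore(OrderStore):
        # One component behind the line; a SQL or file-based store would slot in the same way.
        def __init__(self):
            self._orders = {}
        def save(self, order):
            self._orders[order["id"]] = order
        def find(self, order_id):
            return self._orders.get(order_id)

    class OrderService:
        # Business logic depends only on the interface, never on a concrete store.
        def __init__(self, store: OrderStore):
            self.store = store
        def place(self, order):
            self.store.save(order)

    service = OrderService(InMemoryOrderStore())
    service.place({"id": 1, "item": "living room lamp"})
    print(service.store.find(1))

Swapping in a different store only touches the line where the service is constructed; nothing above the interface has to change.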

An architecture can be documented, but if the lines are clear and consistent in the code base an experienced programmer can pick it up directly from reading the source. It might help to understand why things are organized in a particular manner, but not knowing that doesn't prevent someone from following the pattern of the existing implementation. 

One way to know there is a good architecture in place is to guess where some existing functionality should be, then verify that it is actually there and only there. If that is true for most of the functionality, then the code is well organized, in a manner that can be understood.

Besides all of its other qualities, a good architecture makes refactoring and code reuse easier. If you need to make changes there are a limited number of places to change, and they are all in consistent locations close together. This allows you to either enhance behaviour across the whole system, or route new functionality through one single interface. That, of course, provides a strategy for building a massive system over years and years, through a huge number of versions. Each new iteration usually involves first refactoring things, then extending them with consistent organization. Done well, the first part can even be non-destructive and quickly testable, saving time and increasing the confidence level for the new code.
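
A rough sketch of that refactor-then-extend rhythm, again with invented names and a deliberately trivial example: the behaviour-preserving step routes all of the duplicated date formatting through one helper, and the extension step then becomes a single change that every caller picks up.

    from datetime import date

    # Refactoring step (behaviour-preserving): every report used to format dates
    # its own way; now they all route through one helper.
    def format_date(d):
        # The single, consistent place where date formatting lives.
        return d.strftime("%Y-%m-%d")

    def sales_report(day):
        return "Sales report for " + format_date(day)

    def audit_report(day):
        return "Audit report for " + format_date(day)

    # Extension step: a new requirement such as "show the weekday" is now a
    # one-line change inside format_date, and every report picks it up.
    print(sales_report(date(2014, 5, 10)))
    print(audit_report(date(2014, 5, 10)))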

One frequent concern with extending an architecture in this manner is the possibility of creating too many intermediate layers, but the depth of the layers is itself an organizational issue. An endless number of layers is disorganized by definition, so this meta-problem needs the same treatment as the base organization. In fact, for millions of lines of code, this sort of upper-level organizational issue repeats itself over and over again, a bit higher each time. Finding a reasonable multi-level organization for millions of anything is of course a non-trivial problem, and it lies at the heart of architecture.

Small programs don't need an architecture. It doesn't hurt, but it is not necessary. Big programs absolutely must have one that is clean and consistent; otherwise they become unmanageable balls of mud. Architecture is a different set of skills than programming, but an architect should know both, and should have a lot of experience with programming first. They also need to be able to grasp the larger context and help focus the work of the programmers to avoid re-implementations. Efficiency for ongoing development is driven directly by the organization of the code base. Extending a well-architected program is considerably faster than just tacking stuff onto a ball of mud. The coding may take longer, but all of the other costs are minimized. The cost of not having an architecture is a steep decline in development speed over the lifetime of a project. That generally shows up as lower productivity, lower quality or both.

If you want to design and build big, complex systems, then the only way to get them to that size and keep them moving forward is to have, and enforce, an architecture. Disorganization may seem faster to some people, but that's only a limited, short-term perspective.