I tend to be a low-budget traveler, easily choosing quantity over quality for my accommodations. So, not surprisingly -- in some Chinese city, which will remain nameless -- I found myself checking into a rather inexpensive double room, with private bathroom, in a youth hostel.
All in all, it was a nice place. Youth hostels in China are badly categorized. Unlike their European inspirations, they don't just cater to the younger set. Old folks, families, and kids are also common sights; the designation is more about historic roots than actual clientele.
And so it was that I found myself checking out the bathroom of our latest room.
At first glance, neat and beautifully adorned with big deep-brown tiles, it seemed to belong to a much more expensive accommodation. Very modern looking, like something you'd find in a swank big-city hotel.
I reached out to use the sink, but the tap was wobbly. Not "a little wobbly". In order to turn it on, I had to hold it with both hands or risk causing some underlying damage to the pipes.
As I stood there, contemplating whether to poke my head underneath the vanity and manually tighten the screws, I noticed a sign on the wall making two big points.
The first point was a warning that in some of the rooms, the hot and cold water dials were reversed.
Sure enough, when I looked at the shower, while the hardware had a little red and blue sticker adorning each knob in the right places, the wall behind the taps had similar little red and blue stickers but reversed. Hot was cold, and cold was hot.
Checking out the shower controls made me suddenly aware of the large clear plastic square on the floor, lying over the shower drain.
Now, it's not uncommon in inexpensive hotels to find bathrooms where the shower stall is the bathroom. They just tile everything, and when you take a shower, the water gets sprayed all over. It's a sort of innovative use of space, and hey, you don't have to clean anything if people are showering a lot.
I've seen this layout before. But the see-through plastic square was weird.
The second point on the sign near the sink helped clear it up. It said that during showers the square should be moved off the drain so the water can escape, but afterward, it should be placed back over the drain again. It doesn't take long to figure out why.
It seems they forgot to put a trap on the drainage line, so the sewer gases find their way back up the plumbing. If you forget to replace the cover, the bathroom quickly smells like sewage. If you remember to replace the cover, then it only "slowly" smells like sewage. Quite the fix.
By now, you are probably wondering what a gross series of plumbing errors in a bathroom in China has to do with programming or software development. We'll get to that soon enough.
In North America, I would mostly expect, unless someone did a hacked home-brew job, that the sink bolt would be tight, the hot and cold water valves would be in the right order, and that there would be a trap on the drainage pipe. I'd expect this because we have standards. But also because those standards are enforced.
I'd like to think that the bathroom I was standing in would have resulted in the plumber being dragged out and beaten. Or possibly sued and drummed out of plumbing for life. OK, maybe not that serious, but I couldn't imagine someone here just accepting such a clusterfuck and not trying to fix it. Not demanding that it get fixed.
Now, I don't know if China even has standards, but if they do, they were clearly not followed. Although the bathroom was perfectly functional -- and it was a rather nice shower, all things considered -- from a North American perspective it was totally crazy that this was allowed to occur, and even crazier that all the proprietors did was post up some nicely written signs instructing their patrons to work around the issues.
Total madness, when it would not have been that hard to do it properly in the first place.
But in traveling, you quickly learn to accept these types of things. Parts of the world can be really disorganized and crude. This particular bathroom was actually one of the nicer, more modern and better laid out ones. As a traveler, I'd probably even recommend the hostel.
I'm fine with this in a hostel in China, but no doubt I'd be furious if I hired a plumber to fix my bathroom and this was the result. My expectations for my house are much higher. The hot water tap should be on the left. No excuses accepted.
Interestingly enough, if you asked someone who lived in China, it wouldn't surprise me if they just shrugged off the question, and secretly thought you were nuts. Since -- depending on how common it is -- this type of plumbing job is probably nothing out of the ordinary. It's probably standard, of normal quality. A local might even praise the extra effort of having warned someone with a sign of the tap reversal. Maybe.
People who live in those parts of the world where they frequently have to make do with "whatever" tend to have significantly lowered expectations. Little things, to them, aren't worth crying over; they generally have much larger, more significant problems to worry about. Problems that rarely affect wealthier nations. Problems that make inconsistent plumbing seem trivial.
It's all a question of perspective.
Well, not really. It's more about stability and advancement. My world has fewer big problems, so we can afford to concentrate on fixing the smaller, yet still annoying ones. We were probably back there once, but over time we managed to move ahead. We can trust the order of our taps.
What's interesting about these specific plumbing problems is that they bring me right back to software development.
In a very real sense, I see so much "commercial", "enterprise" and "professional" software out there that smacks of being put together by Chinese plumbers. It's exactly the same; there are so many awkward bits and gross inconsistencies that it is hard to imagine people ignoring them. Clearly, the stuff works well enough, and people are buying it, but I'm always shocked to see just how poor, messy and unprofessional it really is. How far back it is.
In software, I always feel like a tourist. I've often stopped by a few typical enterprises and been totally shocked to see how development and operations sit on the edge of chaos. I realize that things are working, but exactly like the plumbing, I come from a place where I don't think what I see is a necessary problem. It can, and should, be fixed.
Modern software development operates at the same level as plumbing in inexpensive hostels in China (I don't want to generalize too much :-). People just whack it together, and then accept whatever, if it's just good enough. They quickly move on to whack out more stuff.
There are few applicable standards, and even if there were, they are not being followed or enforced.
The software might work, but while I don't mind seeing it when I travel, I'd hate to have to depend on it. Oh wait, we do...
That many programmers go at their jobs with the ethics of Chinese plumbers is bad enough, but too often I see them trying to justify it as some sort of creative license. It's at that point, I think, that it is really insane. Sheer madness. I have a hard time believing that hooking up the hot water to the cold water faucet is an expression of artistic freedom, yet in one form or another, that argument echoes all over programming discussions.
No doubt the plumber working quickly in the hostel would have been irritated by someone criticizing his efforts, constraining his work or just slowing him down, but all I can see is a missed opportunity by someone -- anyone really -- to ensure that a critical job got done correctly. It wouldn't have taken much more effort to avoid that mess in the first place. It says a lot about the level of operation. Things that shouldn't be, shouldn't be.
Someday, perhaps, the software industry will find its way out of our current state. Someday things will install or upgrade easily, mostly work, and be easily fixable when they crash. Data won't get lost, and inter-operation won't be a mythical buzzword. Someday we will assemble massive systems, spend minimal effort collecting data and then use the results to further refine our efforts. Someday we won't have to closely read the manual in order to grok yet another level of arbitrary madness. Someday comparisons between programmers and people who hack off their own ears for undecipherable reasons won't be so common. Someday.
Finality: No, I didn't say that programming was the same as plumbing! Read it again.
Sunday, May 31, 2009
Architectural Ramblings
While there is a huge range in opinions, I think most software architects would agree that their position is primarily about defining broad strokes for the development of computer systems. Laying down a master plan, or an overview of some type. The development then happens by one or more teams of programmers.
The fastest way to pound out a big software system would be to lay out the instructions in the largest possible sets. Reusing highly repetitive sections helps in speed, but beyond that, adding structure is actually more work. Coding a big system into thousands of small functions is a significant effort, regardless of what's actually being built.
However, we break the code up into smaller pieces because it makes it easier to extend the system later. Most software development has shifted away from the idea that it is a huge one-shot deal, and accepted the fact that software is continuous iteration over the lifetime of the code. In this case, the big problem is not writing the first version, it is all of the work happening over years and years to move that version forward into various different incarnations of itself.
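To make that trade-off concrete, here is a minimal sketch in Python -- the report feature and every name in it are invented for illustration. Both versions produce the same output today; only the decomposed one is pleasant to extend years from now.

# One-shot version: fastest to pound out, hardest to extend.
def report_monolithic(rows):
    total = 0.0
    lines = []
    for row in rows:
        if row["active"]:
            total += row["amount"]
            lines.append(f"{row['name']}: {row['amount']:.2f}")
    lines.append(f"TOTAL: {total:.2f}")
    return "\n".join(lines)

# Decomposed version: more structure and more up-front work, but the
# filtering, formatting and summing can now each change independently.
def active_rows(rows):
    return [r for r in rows if r["active"]]

def format_row(row):
    return f"{row['name']}: {row['amount']:.2f}"

def report(rows):
    selected = active_rows(rows)
    body = [format_row(r) for r in selected]
    total = sum(r["amount"] for r in selected)
    return "\n".join(body + [f"TOTAL: {total:.2f}"])

rows = [{"name": "ACME", "amount": 120.0, "active": True},
        {"name": "Zenith", "amount": 45.5, "active": False}]
assert report(rows) == report_monolithic(rows)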
VERTICAL AND HORIZONTAL LINES
There are two big things that affect the development of software: the technology and the problem domain (business logic). Philosophically, the two lay themselves out perpendicular to one another.
Domain logic problems are vertical. The user needs some functionality which cuts through the system, and ultimately results in some changes to some underlying data. It's a thin line from the user to the data, and then back to the user again. The system implements an update feature, for example, or a report, or some other specific set of functions constrained by a specific set of data that the user (or system) triggers.
Technological problems are horizontal. The same problems repeat themselves over and over again across the whole system, no matter which functionality is being used. All Web applications, for example, have similar problems. All distributed systems need the same type of care and feeding. Relational databases work in a similar manner. The problems are all unrelated to the functionality (or at least unattached to it). For example, all systems that use transactions in relational databases have the same basic consistency problems.
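As a tiny sketch of how the two kinds of lines cross -- every class and function name here is invented -- the horizontal pieces are the technology layers that every feature passes through, while a single vertical feature cuts from the user down to the data and back up again:

# Horizontal lines: technology layers shared by every feature.
class Database:                      # persistence (horizontal)
    def __init__(self):
        self.users = {1: {"email": "old@example.com"}}
    def load(self, user_id):
        return self.users[user_id]
    def save(self, user_id, record):
        self.users[user_id] = record

class WebLayer:                      # presentation (horizontal)
    def respond(self, payload):
        return {"status": 200, "body": payload}

# Vertical line: one thin slice of domain functionality, running from
# the user, down through the layers to the data, and back again.
def update_email(db, web, user_id, new_email):
    record = db.load(user_id)        # down to the underlying data
    record["email"] = new_email      # the actual domain logic
    db.save(user_id, record)
    return web.respond(record)       # back up to the user

db, web = Database(), WebLayer()
print(update_email(db, web, 1, "new@example.com"))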
CONSISTENCY AND ONIONS
Easily, the greatest problem that most big systems have is consistency. It's not uncommon to see large software products where each sub-section of the system bears little resemblance to the others around it.
Aside from just looking messy, it makes for a poor user experience because it is harder to guess or anticipate how to utilize the system. If everything is different, then you have to learn it all in detail, rather than just being able to grok the basics and navigate around the system easily.
In small systems with only one programmer, the functionality, interface, data, etc. are often consistent as a consequence. It's one of those early skills that good programmers learn. Picking a dozen different ways to do things simply makes the code harder to maintain, and the users hate it. A messy system is an unpopular one. A pretty, consistent one, even if it has bugs, is always appreciated.
Ideally, if you were deploying programming resources, the best approach would be to assign individual programmers to each section that needs to be consistent. An easy way to do this is by arranging them in an onion-like structure. A series of containing layers, each one fully enclosing the others.
In this approach you would have a database programmer create one big consistent schema, and all of the database triggers and logic. Another programmer would be responsible for getting that model out of its persistent state and into a suitable form usable by the application, and perhaps augmenting that with some bigger calculations. The application/interface programmer would then find consistent ways to tie user elements to functionality. In this scenario, the inconsistencies between the styles of the different programmers would mostly go unnoticed by the users.
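A rough sketch of that onion, with made-up layer names and data: each programmer owns one enclosing layer, each layer only ever talks to the one directly beneath it, and so their stylistic differences stay hidden from the users.

class SchemaLayer:      # owned by the database programmer
    def __init__(self):
        self._rows = [("ACME", 120000), ("ZENITH", 45000)]
    def fetch_accounts(self):
        return list(self._rows)

class ModelLayer:       # owned by the persistence/model programmer
    def __init__(self, schema):
        self._schema = schema
    def accounts(self):
        # lift the raw rows into an application-friendly form
        return [{"name": name.title(), "balance": cents / 100}
                for name, cents in self._schema.fetch_accounts()]

class ApplicationLayer: # owned by the application/interface programmer
    def __init__(self, model):
        self._model = model
    def account_summary(self):
        return "\n".join(f"{a['name']}: ${a['balance']:,.2f}"
                         for a in self._model.accounts())

app = ApplicationLayer(ModelLayer(SchemaLayer()))
print(app.account_summary())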
It's probably because of the consistency that most of the really big systems out there actually started as smaller projects. Growing something into a bigger size, while mostly maintaining the consistency, is far easier than trying to enforce it initially. Big projects tend to fail.
STRUCTURE AND TEAMS
Deploying big teams for large software projects is a huge problem.
The domain logic runs through the system vertically. If you're building a large system and you partition the work out to various groups based on which parts of the system they'll develop, it is a vertical partition. Of course, we know that arrangement will generally result in massive inconsistencies along the functionality lines. It practically guarantees inconsistencies.
If you orient the developers horizontally, there are two main problems. Generally, technology is piled high for most systems, so each horizontal group is dependent on the one below, and affects the one above. The work cannot all start at the same time; it has to be staggered, or excess useless work gets done. In general, you tend to have teams standing around, waiting for one another, or building the wrong things.
The other big problem is that the developers are experts on, and focus on the technology, so they tend not to see the obvious problems with the domain data. That can lead to significant effort going off into useless directions that have to be scrapped. Developers that don't understand the domain always write the most convenient code from their perspective. That is, unless they really have a great grip on real domain problems -- which makes for some brutally complex coding issues -- they will choose to interpret the problem in the fashion that is the most convenient for making the code simple, not the functionality. This leads to functional, but highly convoluted systems.
The choice of how to divide up the teams, along either set of lines, heavily affects the architecture of the system. One is tied to the other.
MANAGEMENT, TECHNOLOGY AND BUSINESS
An architect's main job is to lay down a set of broad strokes for a group of developers or an enterprise. Horizontally or vertically, these strokes must match existing technological or domain logic lines. That is, the architects don't really create the lines, they just choose to highlight and follow a specific subset of them. If they don't and choose to make their own lines, it is assured that the cross-over points will contain significant errors and often become sinks for wasted effort.
Architects, then, need both a strong understanding of the underlying technologies and a strong understanding of the user domains. You cannot architect a system for a bank, for example, if you do not have a strong understanding of both finance and most of the technologies used in the system.
I've certainly seen millions of dollars get flushed away because the architects didn't understand the technology, or because they just didn't get the real underlying problems in the business domain. Guessing in either corner is a serious flaw.
Often, in practice, older architects will have great and vast business knowledge, but are weak on the more modern technologies. That's not uncommon, and it doesn't have to be fatal. Two architects can combine forces, one vertical, the other horizontal, if they are both wise enough to know not to step on each other's turf. Neither issue really takes precedence; both are critical.
The biggest thing to understand is that the arrangement and partition of the work into various teams is fundamentally an architectural issue. Management of the teams, and how they interact, which is reflected in the code, is also an architectural issue. So, an architect inheriting four or five cranky, unrelated development teams needs to be very wary of how they are deployed.
APPROACHING CONSISTENCY
The simplest way to build a big system is to start small and grow it. Careful extensions, backed by lots of refactoring and cleanup, can keep the code neat along both the horizontal and vertical lines, but it is a lot of work to keep up with it.
In time, when it needs more effort, or if the original project is time-dependent, organizing big development teams will directly affect the code base. In a very large system, it is just not possible or practical to have one and only one programmer spanning all of the things that are needed for consistency. But consistency is still critical to success.
Like water or electricity, people always flow towards the easiest path. It is just that different people have widely different ideas about what easy really means.
In general, for most of humanity, it is easier to be inconsistent than not. That's why, for example, building construction has a large number of different standards that have to be followed. But just standardizing something is not enough by itself.
Not reading or following a standard is far easier than doing it right. So there needs to be some extra incentive to push people into abiding by the rules. The only way that really works is by separating the concerns, and giving someone the space to be an expert in the consistency of a very small domain. That is, if you want people to follow some consistent subset of rules, you need an inspector (with the power of enforcement) to look after and enforce that consistency.
This, then, brings the issue back into the domain of a single person, although their actual depth in the work is very light.
So, what I am getting at is that in a very large project -- one with a couple of architects and several teams of developers -- enforcing consistency also requires teams of inspectors. Each individual is responsible for a small subset of rules, but one in which they can apply their efforts consistently. Where the architects lay down the broad strokes, the inspectors examine various pieces at appropriate times in the development, and point out infringements. The software is ready when it has reached a suitably low infringement state.
If you just have an architect laying out a grand scheme, it's unlikely to get followed closely enough to be able to justify the work. Normally, no matter how it starts, over time the meta-team structure disintegrates, and the various groups shift their focus away from commonality and towards just doing what needs to be done. From each of their individual perspectives, it seems like a reasonable approach, but for the project as a whole, it is a fatal step towards a death march. The breakdown in development structure, mirrored by a similar one in the architecture, opens up gaping holes that become endless time sinks.
Besides consistency, the inspectors also become the force that stops each team from solving their own little time sinks by throwing them at the other teams. If the lines are clear, then there shouldn't be any political issues about where the problems lie. If the lines are blurry, as things get worse, everything becomes political.
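As a loose automated analogue of those inspectors -- the rules here are invented, and real inspectors cover far more than formatting -- each one owns one small, narrowly scoped rule and reports infringements, and the work is "ready" when the infringement list is suitably short:

import re

def inspect_tabs(path, text):
    return [f"{path}: uses tab indentation"] if "\t" in text else []

def inspect_long_lines(path, text):
    return [f"{path}:{n}: line over 100 characters"
            for n, line in enumerate(text.splitlines(), 1)
            if len(line) > 100]

def inspect_todos(path, text):
    return [f"{path}: unresolved TODO"] if re.search(r"\bTODO\b", text) else []

# Each inspector is narrow enough that one person (or function) can
# apply it consistently across the whole project.
INSPECTORS = [inspect_tabs, inspect_long_lines, inspect_todos]

def infringements(files):
    found = []
    for path, text in files.items():   # files: path -> source text
        for inspect in INSPECTORS:
            found.extend(inspect(path, text))
    return found

sources = {"report.py": "def report():\n\treturn 0  # TODO: finish\n"}
for problem in infringements(sources):
    print(problem)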
AND FINALLY
There are a lot of software architects out there who grew up in the ranks of programmers. From this view, they want architecture to focus primarily on either technology or domain issues, depending on their own personal background. The big problem with that desire is that for a mere meta-programmer (a coder at a higher level) who does not take the environment into account, a huge chunk of the possibility of success or failure rests on someone else's shoulders. If you leave all of the personnel arrangements, politics and other development dynamics to an unrelated set of managers, you cannot control their huge effect on the overall work.
The environment we build things in affects how we build them. It can also be the biggest factor between success and failure. If these things represent big issues that can sway the results of an architecture, then the architect needs to be in control of them. If you can't break up and restructure the development teams as needed, then you're not really in control of a major aspect of the work, are you?
Although the architect's level is higher and more generalized, they do need the ability to exert total control over any and all aspects that could derail their efforts. That includes management and politics.
Friday, May 22, 2009
Driven by Work
Recently I returned from spending several weeks backpacking in China. I stomped on the Great Wall, gawked at the Terracotta warriors and even managed to plunge lightly into some of the mysteries of Tibet. All great locations in a land filled with a long and complex history.
Traveling always renews my excitement for life and reawakens my sense of curiosity. I find that if I'm stuck too long without some type of grand adventure, I tend towards pessimism. I think getting caught in a rut just clouds my mood.
Every so often I need to break free of the constraints of modern living and lead a more nomadic existence. I need to get away from the routine, and react more dynamically to the world around me. I just need to break free of all those bits of life that just keep piling up around us. I need to get back into the real world, not just my little corner of it.
LANGUAGE AND OCCUPATION
There are a lot of theories, such as the Sapir-Whorf hypothesis, that suggest that languages influence how we think and see the world around us. For many modern languages, that may seem to some to be a stretch, but I seem to remember a reference to an ancient Chinese "water" language with only 400 written characters (although I couldn't find an Internet reference, so you'll have to trust my memory). With such a limited vocabulary it would certainly be difficult, if not impossible, to craft an effective rant on most subjects. The words just aren't available. If you don't have the words, you can't express things easily. Even if you could create long, complex sentences, the lack of brevity starts to become an impediment to the ideas.
That's probably why so many professions end up with their own huge technical dictionaries. Short, precise terms that carry large, significant meanings. Words that encapsulate complex ideas. We always need to be able to communicate larger ideas with less bandwidth.
More obvious than language, one's occupation surely has an effect on how we perceive the world around us. You can't spend 40 to 60 hours a week engaged in something, year after year and not expect it to affect you in some way.
Lawyers probably argue more than most, doctors seem preoccupied with their health (although I know several that smoke and drink) and accountants tend to pinch pennies. If you keep looking at the same problems in the same ways, it's hard to prevent that from spilling over into other aspects of your life.
I'm sure everyone will have counter examples where they know of someone that breaks the mold -- there are always exceptions -- but I think it's still pretty likely that the way we live our lives directly affects how we see the world around us.
It certainly shows up heavily with programmers and other techies. You see it clearly in their ideas and interactions. The web is plagued with examples.
THREE CHANGES
Over the years I've noticed several changes in the way I see things. Some changes are purely a result of age or education, but there are definitely some that have come directly from how I am spending huge chunks of my time. Influences from a lifetime of pounding out software.
Three changes in particular appear over and over again. I've become more detail-oriented, I tend to view the world in more of a black-and-white perspective, and I'm often disappointed when things are not easily deterministic.
It makes sense. Each of these attributes helps significantly in developing software.
The key to getting any big project designed, coded and into production is to make sure all of the details are strictly and carefully attended to. No matter how you look at the big picture, the details always either act as time sinks or have to be carefully managed.
Also, as a software developer I always have to tightly constrain the world around me into a strict, yet limited black and white perspective. Dynamic shades of grey make for horribly inconsistent software.
And I've learned to stick to the things that I know are strictly deterministic in nature. If it is going to work, it needs to work every time, consistently. A program that works sometimes is all but useless.
Spending my days, and often nights, in pursuit of big functioning systems gradually takes a toll on the way I see the world. On the way I want to interact with it. For each new domain I enter, and each new product I create, I have to break everything down into relatively simple, straightforward chunks.
While these changes have helped me build bigger and better things in my career, I've often mistakenly applied the same back to the world around me, to a negative effect.
Software straddles the fence between the purity of mathematics and the messiness of the real world. The real world doesn't work the same way that a computable one does. You can't debug a personal relationship, for example, and some things, such as weather, are just too chaotic in nature to be predictable.
While often we can understand the underlying rules, that doesn't mean we understand the results. You can see how the stock market works for example, but still be unable to profit from it. Those attributes that help me create programs also set me at odds with the world around me. The harder I work at it, the more they seem to influence my perspective.
I think that's why traveling restores my sense of balance, and reminds me that not everything should be simple, consistent, rational or even clear. It's often a breath of fresh air, in an otherwise stuffy environment. A subtle reminder to not get too caught up in myself.
I do notice that besides myself, often others in the community are highly afflicted by a computationally-constrained perspective as well. One that probably makes their lives more difficult. There are a huge number of examples, but the web itself becomes a great historian for studying people who have become far too rigid.
THE DEVIL'S IN THE DETAILS
While the small details are very important in getting large complex projects to work, being too attentive to them can lead to missing the big picture. Given the choice, getting the broad strokes wrong is far worse than missing any of the details. In the latter case, the distance to fix the problem is often far shorter.
Focusing too hard on too many small things leads to tunnel vision. A perspective of a world where the importance of nearly trivial details is thrown out of proportion.
It's not uncommon, for example, to see techies highly irritated by spelling mistakes and typos.
Language is the container for ideas, and while good spelling and grammar make reading easier, it is rare that they introduce real ambiguities into the text. As such, they are really minor annoyances, yet some people seem so put off by even the simplest of them.
A well-edited piece, quite truly, is easier to read, yet often it is symptomatic of the work being less raw and more contrived. A polished effort. You'd expect good writing in a magazine for example, but you always know that the polishing 'twists' the content. You are not getting the real thoughts of the author, but instead you're getting their packaged version.
Ultimately, what's important is the substance of what people say, not how they say it. Missing that, leads to dismissing possibly correct ideas for the wrong reasons. The truth is true no matter how badly it is packaged, while falsehoods tend towards better packaging to make them more appetizing.
BLACK AND WHITE
You can't just toss anything into a computer and expect it to work. Computers help users compile data, which can be used to automate work, but ultimately everything in a computer is a shadow of the real world. A symbolic projection into a mathematical space.
We can strive for a lot of complexity, but we can never match the real world in depth, nor would we want to. To make our systems work, we have to reduce a world full of ambiguity and grey into a simplified black and white version. If it's not easily explainable, then it's hardly usable.
This can be a huge problem.
Too often you see programmers or developers trying to pound the real world into overly simplistic boxes of good and bad. Right or wrong. Left or right.
That type of poor deconstruction, driven by a Star Wars mentality, leads to many of the stupid flame wars that are so popular, where two sides pound on each other, each assuming the other is wrong.
You know: my programming language is good, yours is bad. My OS is better than yours. My hardware is right, yours is inferior. That type of nonsense.
Clearly, all technologies have their good and bad points, but that quickly gets ignored in a black and white world. Everything gets reduced into two overly simplified categories, whether or not it makes any sense. A stiff, rigid viewpoint fits well inside of a machine, but isn't even close to working with the world outside.
If you're oriented, for example, to see all of your fellow employees as only either good or bad, then because of that limited perspective you are missing out on a broad (and ultimately more successful) view of the world around you. People have such a wide range of strengths and weaknesses, that assigning them to one list or the other misses out on their potential. You'll end up relying on weak people at the wrong times, while passing up some well-suited resources.
Black and white works well for Hollywood or comic book plots, but it forces us to miss the depth of the world around us.
A LACK OF DETERMINISM
A programmer lays out a pre-determined series of steps for a computer to follow. Even with today's overly complex technologies, the computer still precisely follows these instructions. It does exactly what it is told to do. It does so in a deterministic and predictable manner. It is logical and rational in its behavior.
We get used to this behavior from the machines, so it's not uncommon that programmers start to expect this from other parts of the real world as well.
This becomes most obvious when you see technical people discussing business.
Management consultants lay out huge numbers of theories for business to follow, but hopefully they don't really believe that it works out that simply. Business, like weather, has some simple driving factors underneath, but at the practical level it is chaotic enough that it should be considered irrational.
If there existed some simple rules on how to succeed in business, eventually enough smart people would figure them out, to a significant enough degree that their exploiting the behavior would change the system. That is, if too many people know how to exceed the average, then they become the new average. If everybody wins, then nobody wins.
The markets are intrinsically irrational (from a distant perspective), yet that doesn't stop techies from applying bad logic to predict or explain their behavior. It's epidemic: people explaining why a superior technology will dominate the market, how being nice to the customers will increase business, how careful listening will capture the requirements, or any other assumption that presumes the underlying behavior is deterministic, logical or predictable.
This works well for dealing with the internals of software systems, but for business and politics most programmers would do well to accept that they are just irrational and unpredictable.
SUMMARY
If you know that you're biased, it is far easier to compensate for it. However, if you're walking around in an imaginary world, you tend to find that it's an uphill fight in the real one. You keep bumping into invisible walls.
The best way around this is to try very hard to separate these two worlds into two distinct perspectives. We don't have to unify our working perspective with the real one, but we do have to be aware when one is corrupting the other. We can keep them separate, and have separate expectations for each. They don't need to be combined.
When I travel, it reminds me of the greyness, the people and the irrationality of the world around me. Although I can often break things down into simple bits for the system development work, I always need to be reminded that that is just a mere subset of the reality surrounding me.
Computers should be predictable devices that are easily explainable, at least at a high usability level. The real world, outside of the machines, on the other hand, is an infinitely deep messy collection of exceptions to each and every rule. If you think you've figured it out, then you haven't gone deep enough yet.
It brings nothing but pain and frustration to expect that the real world will work on the same principles as a computational one, but still it's very common to see people caught in this mental trap. It certainly is something worth avoiding. Even if you have to occasionally trek around on the other side of the planet to do it.
Thursday, April 16, 2009
The End of Coding as We Know It
It's time for a bold prediction.
I know that most software developers see computer language programming as the essential element of software creation. For most people it is inconceivable that we might one day automate large chunks of this effort.
Automation, it is believed, can only come from tasks that are repetitive and require little thinking. Programming requires deep concentration, complex visualizations, and often some pretty creative thinking to get around both the technical and domain-based problems.
Thus, it is easy to assume that because it is rooted in intellectual work, programming in computer languages will always be an essential part of software development.
However, I think this is a false statement.
THE LAST OF THE TYPESETTERS
I can remember my Dad -- an editor for a printing-industry magazine -- taking me on a tour of one of the last vestiges of movable type. In its day, movable type was a technical marvel. They had invented a super-fast way of laying out printable pages.
Movable type involved the careful positioning of thousands of tiny metal letters, called glyphs. A typesetter could quickly create page after page, and reuse the glyphs for more layouts after they were finished printing.
It was a huge step forward for printing, a major leap up from the hand carved printing blocks of the past. Printing in a matter of days.
A typesetter was responsible for hand-positioning each element. Good quality workmanship was difficult, and the occupation was considered skilled. A great deal of knowledge and understanding was often required to get a high-quality, readable layout, including hyphenation, kerning and leading. Typesetters, as a subset of typographers (which also includes font designers), were considered master craftsmen.
The future had looked bright for typesetters. Their highly skilled profession involved the careful layout of huge amounts of text. Printing was a major growth area; one of the great technical endeavors of the industrial revolution. It was a great job, well paying, and with a lot of esteem. It was a smart career choice, surely a job that would be required forever.
Forever didn't last long.
Over time, the technologies improved, yet right up to the 1960s, typesetters were still required. From movable type to hot type, then later to cold type, the work endured while the technologies changed around it. Still skilled, still complex, but gradually becoming less and less significant.
Finally, the growth of desktop publishing killed it. Much of the beauty and art of the layout was lost. Computers, even with good font hints, don't generate the same quality of output; however, consumers no longer appreciate the difference. High-quality typesetting became a lost art.
Typesetting still occurs in some contexts, but it is very different from its origins. It is a specialty now, used for very limited occasions. My dad lamented the end of typesetting, often telling me that desktop published documents were not nearly as readable or classy.
LESSONS LEARNED
So what does this have to do with computers?
It is an excellent example of how easily technology creates and removes occupations. How easily it changes things. Typesetters can be forgiven for believing that their positions would survive far longer. It was a skilled-labor job that required a visual eye for detail, a considerable amount of intelligence, and a good knowledge of spelling and grammar for hyphenating the text. It would have been way better than laboring in a factory.
Way back then, with our current frame of reference, it would be easy to assume that the intellectual effort involved in typesetting rendered it impossible to automate. Anybody that has delved into the murky world of fonts and layout knows that it is far messier and way more complex than most people realize. But we know that aspects of the problem gradually got redistributed to fonts and layout programs, or were just lost. The constraints of the original efforts disappeared, and people gradually accepted the reduced quality in their outputs. Automation brought mediocrity. Average.
Programming has aspects that require intelligence, but it also contains large amounts of rather mindless effort. We code a big mess, then spend a lot of time finding fiddly small problems or reworking it. Intellectual work is intellectual work -- no computer can ever do it -- but that doesn't mean that it has to be done each and every time we build a system. That's the basis of a false assumption.
While the core of the intellectual work will never change, how it's done, how much of it really exists and whether or not we'll have to keep coding forever are all up for grabs. All we need to do is restructure the way we look at programming, and we can see some fairly simple ways to collapse the effort, or at the very least get far more reuse out of our current efforts.
I'll start with a simple model.
CONTEXTS
Consider the idea of a 'data context': basically a pool of data, each one holding some amount of it. In a context, a datum has a very specific structure and type. The overall collection of data may be complex, but it is finite and deterministic. There are a fixed number of things in a context.
Getting data from one context to another is a simple matter. The data in one context has a very specific type, while it may be quite different in another context. To go from one to the other the data must go through a finite series of transformations. Each transformation takes a list of parameters, and returns a list of modified values. Each transformation is well-defined.
We can see modern computers as being a whole series of different contexts. There is a persistent data context, an application model context, a domain model context and often a lot of temporary in-between contexts, while some computation is underway or the data is being moved about. Data appears in many different contexts, and these persist for varying lengths of time.
A context is simply a well-defined discrete pool of data.
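To make that concrete, here is a minimal sketch of a context in Python (any language would do; all of the names here are hypothetical illustrations of the idea, not a real implementation):

    # A minimal, hypothetical sketch: a context is a finite pool of
    # named datums, each with an explicit type.

    class Datum:
        def __init__(self, type_name, value):
            self.type_name = type_name  # the explicit type of this datum
            self.value = value

    class Context:
        """A well-defined, discrete pool of data."""
        def __init__(self, name):
            self.name = name
            self.data = {}  # name -> Datum; finite and deterministic

        def put(self, name, type_name, value):
            self.data[name] = Datum(type_name, value)

        def get(self, name):
            return self.data[name]

    # For example, a persistent context holding a single bond yield:
    persistent = Context("persistent")
    persistent.put("cad_10yr", "Canadian-bond-yield", 4.3)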
EXTENDED TYPE STRUCTURES
We often talk of 'type' in the sense of programming language variables being strongly typed or loosely typed. Type, even though it may be based on a hierarchy, is generally a reference to a specific data type of a specific variable. It is usually a singular thing.
In a general context, we do use it as a broader structure-based definition, such as referring to Lists, Trees and Hash Tables in abstract data structures, but most people don't classically associate an ADT like List with the 'type' of a variable. They tend to see them as 'typeless' containers.
For this discussion we need to go higher, and think more in terms of an 'extended type', a fully structural arrangement of a much larger set of variables, where the interaction isn't restricted to just hierarchies. The full structural information for an extended type includes all of the possible generalizations of the type itself, including any alternative terminology (such as language translations).
The type of any variable, then, is all of the information necessary to be very explicit or completely generalized in the handling of any collection of explicitly related data. The extended type information is a structure.
We can take 'type' to mean a specific node in this complex graph of inter-related type-based information. A place in a taxonomy for instance.
Type, then, includes any other reasonable alternative "terms" or aliases for the underlying names of the data. For example, floating-point-number, number, value, percentage, yield, bond-yield, bond-statistic, financial-instrument-statistic, Canadian-bond-statistic or Canadian-bond-yield may all refer to the same underlying value: 4.3. Each title is just another way of mentioning the same thing, although its reference ranges from being very generalized to being very specific.
Type can also include a restricted sub-range of the fully expressible type. For example, it may only be integers between 2 and 28. Thus an integer of 123 cannot be mindlessly cast to an integer_2..28 -- it does not belong to that 'type' -- but the integer 15 does.
Data of one type can be moved effortlessly to any other type in the same structure; they are one and the same. Data that is not within the same type structure requires some explicit transformation to convert it.
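As a rough sketch of how that might look (hypothetical again, and only one of many possible encodings), an extended type is a node in a graph carrying its generalizations, its aliases and any restriction on its values:

    # A hypothetical sketch of an 'extended type': a node in a graph of
    # inter-related type information, with aliases and optional sub-ranges.

    class ExtendedType:
        def __init__(self, name, parent=None, aliases=(), valid=None):
            self.name = name
            self.parent = parent         # a more general type, if any
            self.aliases = set(aliases)  # alternative terms for the same data
            self.valid = valid           # optional predicate restricting values

        def accepts(self, value):
            # A value belongs to this type only if it passes the restriction.
            return self.valid is None or self.valid(value)

    number = ExtendedType("number")
    bond_yield = ExtendedType("bond-yield", parent=number,
                              aliases={"yield", "Canadian-bond-yield"})
    small_int = ExtendedType("integer_2..28", parent=number,
                             valid=lambda v: isinstance(v, int) and 2 <= v <= 28)

    print(small_int.accepts(123))  # False: outside the sub-range
    print(small_int.accepts(15))   # True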
TRANSFORMATIONS
A transformation is a small amount of manipulation required to move some data from one unique type to a different one. Consider it to be a mini-program. A very specific function, procedure or method to take data of type A, and convert it to type B.
A transformation is always doable. Any data that comes in can be transformed to something outbound (although the results may not make sense to humans). Transformations are finite, deterministic and discrete, although they don't have to be reversible. The average value of a set of numbers, for example, is a non-reversible (one-way) transformation.
Transformations can have loops, possibly apply slightly different calculations based on input, and could run for a long time. Also, the output is a set of things: basically anything that has changed in some way from the input. There are no side-effects; everything modified is passed in, everything changed is returned.
The transformation is specific: its input is a set of specific values of specific types, and its output is another set of values of specific types. No conditional processing, no invalid input, no side-effects. A transformation takes some given variables, applies some simple logic and then produces a set of resulting variables.
The underlying language for the transformations could be any programming language, such as C, Java, Perl, etc. I think most of the modern functional programming languages, such as Haskell and Erlang, define their functions in the same manner (although I am just guessing), but Perl is the only language that I am aware of that can return lists (as native variables) from function calls.
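In Python, which can also return multiple values natively, a couple of tiny transformations might look like this -- a sketch with hypothetical names, not a prescription:

    # Transformations as pure functions: everything they read is passed
    # in, everything they change is returned. No side-effects.

    def average(values):
        # Non-reversible (one-way): the inputs cannot be recovered.
        return (sum(values) / len(values),)

    def celsius_to_fahrenheit(c):
        # Reversible: an inverse transformation exists.
        return (c * 9.0 / 5.0 + 32.0,)

    # Each transformation returns a tuple of resulting values, so the
    # outputs of one can feed directly into the next.
    (avg,) = average([2.0, 4.0, 6.0])
    (f,) = celsius_to_fahrenheit(avg)
    print(f)  # 39.2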
PUTTING IT ALL TOGETHER
The three simple concepts -- contexts, types and transformations -- form a complete computational model for utilizing computer hardware.
We can skip over any essential proofs if we accept that the model itself is just a way to partition an underlying Turing-complete language, in the same way that Object Orientation doesn't make anything more or less Turing complete.
I think that higher-level structural decompositions do not intrinsically change the expressibility of the underlying semantics. In other words, nothing about this model constrains or changes the usability of the underlying transformation programming language, it just restructures the overall perspective. It is an architectural decomposition, not a computational one.
A path from context to context involves a long pipeline of simple transformations. Each one takes a specific set of inputs, which it converts into outputs. To further simplify things, each transformation is actually the smallest transformation possible given the data. Each one does a near-trivial change, and then returns the data. If there are conditional elements to the path, that processing takes place outside of the transformations, at a higher level. The transformations are always a simple path from one context to another.
In that way, the entire system could consist of millions and millions of transformations, some acting on general data types, others gradually getting more specific as the transformations require. Each one is well defined, and the path from context to context for each datum is also well-understood.
From a particular context, working backwards, it is an entirely straightforward and deterministic pathway to get back to some known context starting point. That is, the computer can easily assemble the transformations required for a specific pipeline if the start and end contexts are known.
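A tiny sketch of that assembly step, building on the transformation sketch above (all the names and data here are hypothetical): treat the transformations as edges between types and search from one end to the other with something as simple as a breadth-first search.

    # A hypothetical sketch of pipeline assembly: the transformations
    # form edges in a graph of types, and the computer finds a path
    # between the start and end contexts with a breadth-first search.

    from collections import deque

    # registry: source type -> list of (target type, transformation)
    registry = {
        "celsius":    [("fahrenheit", lambda c: c * 9.0 / 5.0 + 32.0)],
        "fahrenheit": [("display-string", lambda f: "%.1f F" % f)],
    }

    def assemble_pipeline(start, end):
        # Returns the list of transformations leading from start to end.
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            type_name, path = queue.popleft()
            if type_name == end:
                return path
            for target, fn in registry.get(type_name, []):
                if target not in seen:
                    seen.add(target)
                    queue.append((target, path + [fn]))
        return None  # no known path between these contexts

    value = 100.0
    for fn in assemble_pipeline("celsius", "display-string"):
        value = fn(value)
    print(value)  # 212.0 F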
There is no limit to the number of contexts or the length of time they stay around. There could be a small number, or -- since we often cache a lot in modern systems -- there could be a very large number of smaller contexts.
We can build massive computer systems from a massive number of these transformations that help the system move data from one context to another. It would not take a huge amount of effort -- in comparison to normal programming efforts -- to break down all of the domain-specific data into explicit data types and then map out a huge number of transformations between the different types. We do this work constantly anyway when building a big system; this just allows us the ultimate 'reuse' for our efforts.
Users of this type of system would create a context for themselves. They would fill it with references to the various different bits of data they want to access, and then for each, map it back to a starting context. In a very real sense, the users can pick from a sea of data and assemble their own screens as they see fit. A big browser and some drag-and-drop capabilities would be more than enough to allow the users to create their own specific 'context' pages in the system.
We already see this type of interface with portal web applications like iGoogle, but instead of little gadgets, the users get to pick actual data from their accessible contexts. No doubt they would be able to apply a few presentation transformations to the data as well to change how it appears in their own context. Common contexts could be shared (or act as starting templates).
As an interface, it is simple and no more complicated than many of the web based apps. Other than the three core concepts, there are no unknown technologies, algorithms or other bits necessary to implement this.
RAMIFICATIONS
Software would no longer be a set of features in an application. Instead it would be millions and millions of simple transformations, which could be conveniently mixed and matched as needed.
Upgrading a computer would involve dumping in more transformations. The underlying software could be sophisticated enough to be able to performance test different pipeline variations, so you could get newer more optimized transformations over time. Bad pipeline combinations could be marked as unusable.
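As a hypothetical sketch of that idea, the underlying assembler could simply time each candidate pipeline on sample data, keep the fastest, and blacklist the broken combinations (all names invented for illustration):

    # A hypothetical sketch: time alternative pipelines for the same
    # start/end pair, keep the fastest, and blacklist bad combinations.

    import time

    blacklist = set()

    def benchmark(pipelines, sample):
        best, best_elapsed = None, None
        for pipeline in pipelines:
            key = tuple(id(fn) for fn in pipeline)
            if key in blacklist:
                continue
            value, start = sample, time.time()
            try:
                for fn in pipeline:
                    value = fn(value)
            except Exception:
                blacklist.add(key)  # broken combination; never retry
                continue
            elapsed = time.time() - start
            if best_elapsed is None or elapsed < best_elapsed:
                best, best_elapsed = pipeline, elapsed
        return best

    # e.g. best = benchmark([pipeline_a, pipeline_b], sample_datum)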
Big releases from different vendors or even different domains could be mixed and matched as needed. One could easily write a simple series of patching transformations to map different sets of obscure data onto each other. All of our modern vertical silo problems would go away.
Distributed and parallel programming are also easy in this model, since it becomes a smaller question of how the individual pipelines are synchronized. Once reasonable algorithms get developed -- since they don't change -- the overall quality will be extremely high and very dependable.
In fact, the system will stabilize quickly as more and more transformations get added, quantified and set into place. Unlike modern systems, the changes will get less and less significant over time, meaning the quality will intrinsically get better and better. Something we definitely don't have now.
Of course the transformations themselves are still programming. But the scope of the programming has gone from having to create hugely complex massive programs to a massive number of hugely simple small ones. The assembly is left to the computer (and indirectly to the user to pick the data).
Eventually, though, the need for new transformations would slow down, as all the major data types for all of the various different domains would get added. Gradually, creating new transformations would be rarer and rarer, although there would always be some need to create a few.
Just quickly skipping back to typesetting, it should be noted that graphic designers still occasionally tweak kerning or leading by hand to give some layouts a high-quality appearance. The job disappeared, but some vestiges of it still remain.
Of course, we will still need data analysis and operations people to handle setting up and running big systems in production, but the role of the programmer agonizing over line after line of code is not necessary in this model. The computer just assembles the code dynamically as needed.
SUMMATION
I presented these ideas to show that there is at least one simple model that could eliminate programming as we know it. Although these ideas are fairly simple, building such a system involves a great deal more complexity than I addressed.
It is entirely possible, but even if these ideas are picked up right away, don't expect to see anything commonly in production for a long time. Complex ideas generally need about twenty years -- a generation -- to find acceptance, and some ideas need to sit on the bench a lot longer before people are willing to accept them.
Even if we built the best distributed transformation pipeline systems with near-perfect quality, it would still take decades for the last of the old software to die out. People become rather attached to their old ways, even if those are shown to not work very well. Technology rusts quickly, but fades slowly, it seems.
Programming, while doomed, will be around for a while yet.
Saturday, April 4, 2009
My PC Crashed (Again)
An ode to my PC:
I hate my PC. I've had many computers over the years, and often they have found a soft place in my heart. My Apple ][+, bought used, served admirably for years. My XT lasted longer than I could have ever imagined. The super well-run BSD Unix boxes at the University of Waterloo were always a delight to use. Even my VMS workstation which could be a bit cranky, survived for five years without ever being turned off. Most of my Unix workstations went six months or more without issues or reboots. Yes, I've been lucky to be able to work with computers that actually worked for me.
I hate my PC because it's undependable. It's the hardware: cheap crappy stuff that keeps failing. I'd buy better, but I can't tell anymore what is good and what is crap. Possibly because it is all crap now. It's the software: millions upon millions of lines of slipshod, hacked junk so full of potholes that it would take several million lifetimes to patch them. The Microsoft stuff is bad, but the overall industry stuff is worse. It's an endless sea of spaghetti barfed up late in the night. It's the support: anything goes wrong, well too bad, we told you in the disclaimer that we weren't responsible. We'd like to help, we really would, but who really understands these things anymore? It's the product as a whole: the PC and all of those things that go along with it. The hardware, the software, the market, the add-ons, the environment, the culture and all of the services. The whole kit and caboodle. PCs started life as the hacker machine, allowing the hacker culture to build up high and mighty around them. You can do anything with these machines, except make them work consistently.
I hate my PC because they over-charged me. Research is expensive, but despite that, computers have spawned a huge number of fortunes. Fabulously rich people. All those millionaires and billionaires strutting around, raving about their successes, writing books and giving advice. That would be OK if my machine actually worked. But given that it doesn't, a fair price would have been a fraction of what I paid, or what they tricked me out of. It isn't for love or for knowledge that they work at building these machines, but for the rights to a big mansion, a fancy car and a huge boat. And to throw it all back in my face, many of them now spend their days giving away all of the 'extra' dough they collected to charities and needy causes, instead of actually making the machines work properly. We didn't get what we paid for.
I hate my PC because it is the gateway to the Internet. That once hallowed sea of massive information is now nothing more than a giant propaganda machine. A medium for cheap hustlers to make a buck. Proof that too much of humanity is irredeemable. Gone are the days of information and knowledge, replaced by cheap tabloids, marketing and gossip. Just another crowd of people hoping to cash in. It has become another form of TV, bent on keeping the masses mollified. A medium to be mastered, it's simply a question of which groups are winning in that endless race to waste your time and take your money.
I hate my PC because it is plagued with infestations. As if bugs weren't bad enough, it now has viruses, trojans and worms galore. Written by misguided kids to generate profits for organized crime or organized business, both eerily similar. One could understand the original phone phreaks and their inherent curiosity to explore technology, but it was bounded by a strong no destruction ethic. These days in the free-for-all online world, the motivations have changed. It is just mean and ugly now; for glory or for profit, it doesn't matter who gets hurt. Whatever good there might have been vanished long ago.
I hate my PC because I fear that some big corporate stooge is going to install one to maintain my bank account. The big, super-expensive, slow, honking great mainframe computers that we've relied on for decades are damn near impossible to change, yet that's probably the reason why I don't have to go rushing into my bank branch each month complaining about missing money from my account. With constant moves to take the retrograde PC operating system -- cobbled together from a long, nasty history of fast food, micromanagement and insane deadlines -- and jam it down the collective corporate throat, there is an increasing chance that we'll become more dependent, not just on those old run-of-the-mill crappy mainframe computers, but on these newer bottom-of-the-barrel super-crappy PC ones. A bad day, waiting to get worse, for sure. I don't want this crap on my desk, so I certainly don't want it in my bank's fancy air-conditioned machine room, nor anywhere near anything that is even remotely vital to our lives. The dump is my preferred location.
I hate my PC because it is a metaphor for what has happened to our society. We have become overly complex; well over the top. And yet underneath, we are increasingly dysfunctional. We're on a steady downward tumble, selling our souls for cheap disposable baubles. Filling our basements with dusty junk. Foolish victims of a society run amok, no longer grounded in the things that matter. We create stupid rules, and then pile them up on top of each other, so high that they collapse of their own weight. What decades ago started as a movement to fix the world, fell down to simply changing it, gradually for the worse. Now everybody wants to leave their mark, even if it is just graffiti. And we have no way of unfixing it after they're done. The status quo might not have been great, but our present technologically sophisticated, flashy, but all around fragile existence, like our cheap crappy bloated PCs, is constantly just one wetware virus away from us having to hit the reboot button and lose all of our data. Again.
I hate my PC because of what it is not. It is not the tool to save the world, nor is it the answer to mankind's problems. It doesn't automate stuff or make our lives better. Instead it disconnects us from what really matters and lures us with a shiny yet false promise. Over the years, each machine I used has become more sophisticated and faster following Moore's law, yet the later ones, one after another have followed an anti-law as well. Each one gets crappier than the last; each one gets more undependable. Each one is more vulnerable to spammers, virus writers and marketers. Each one descends a little farther down that slope, giving me only the barest sense of stupid improvement by flashing some more pretty spinning 3D graphics on my screen. Each one strengthens my disappointment. Each machine, now slowly eats away at our collective sanity.
Most of all, I hate my PC because it displaces what should have been there on my desk. A machine that works, one that I trust and one that improves my world. Now, instead of making my life easier, I find some new crisis there every three to six months, be it another dead piece of hardware, a bad software bug or a full-blown virus/trojan attack. And when it's not a major crisis, it is still continually whining about some useless upgrade, or that the net is unavailable again, or it is just being dog slow. I cannot trust this thing; it simply disappoints me whenever it gets the chance. Instead of helping me make sense of the world around me, I find a sea of messy disconnected data that is hopelessly inconsistent and incoherent. Showing me that while there might actually be answers, today is not my day. Instead of giving me more luxury time to explore the world around me, it chains me to my chair and forces me to endlessly install, restall, destall or just stall my life trying to find some temporary combination of crappy software, bad utilities and irritating sites that will momentarily make some minor life improvement, before, once again, sending me back into the breach to fix yet another stupid but entirely avoidable and moronic issue with these cursed machines. All I want is a real computer, one that works. One that I can trust.
I hate my PC.