I'm standing on the other side of what I think is some type of huge wall. A great wall that stretches for I don't know how far and is higher than I can see over or even climb. It is an obstacle through which I cannot pass.
We are drawn in by the simplest of things. You, for instance, probably started reading this post because of its title: "4 Easy Ways to Design Better Software". You're probably curious about what simple and easy things you can do to design better systems, and this seems like a great place to find that information quickly. A quick, simple solution is what you really want. Having to think about it, particularly thinking hard about it, will not do.
And so, software developers, programmers and other techies are drawn to the many fads that make up our daily fare of literary pulp for technical articles. 12 ways to do that; 5 ways for this; the best 20 of these. Most are seeking the easy answers, not understanding.
The things you didn't stop to read had big ugly titles, or maybe delved too deeply into the underlying makeup of the issues. Perhaps you bookmarked them into an optimistic 'readme' category or something similar, but still their ultimate fate, we can predict, is to languish in the dark corners of nowhere.
Ironic, because buried deep in all of that uninteresting stuff there lie at least four easy ways to design better software, if not many more. If only you looked there, you might find the real understanding that you are craving. Then again, there is always YouTube; have you seen the latest lolcats video?
I'm a horribly uncomfortable writer who feels compelled to keep churning out unreadable and uninteresting opinions, because it turns out that after many years of searching, understanding and contemplating software, the answers I was seeking were really close to home, and I need to share them. But, sadly, what I am saying is bound to keep me on the other side of that looming wall of obscurity. The side that nobody really wants to visit, because it is not easy or fun, and it requires too much thinking, which is clearly a drag.
I figure if I keep chucking things over the wall, someday, someone is going to come around and understand it. At some point, they may even forgive me for my pitiful attempts at self-promotion. It can be hard, if you think there is value in what you are saying, to see it ignored by the masses. Lost forever in a chorus of 95 million voices.
Ironically, in the end it is all really simple: the 4 easy ways to design better software are to stop looking for 4 easy ways to design better software. Well, almost. I still owe you three, but at least now -- maybe -- you'll be able to find them yourself if you want to.
Wednesday, August 29, 2007
Is Open Source hurting Computer Science?
Reality is bursting with counterintuitive examples. Thinking deeply on this statement shows that it makes perfect sense: we are still fairly limited intellectually, so we are drawn towards the conclusions we like, rather than the ones that are actually correct. Our intuition sometimes leads us astray.
You'd think that a whole lot of free stuff would be a good thing, wouldn't you? A movement, such as the Open Source one, of people willing to donate their time and effort should be a noble cause.
Not that the core of Open Source is really about free stuff. It is not. It is about having access to the underlying computer source code for all of your software dependencies. At least that was how it all started.
Poorly documented proprietary solutions that required Herculean effort to understand were driving people nuts. These early systems were often rushed to market, neglecting documentation as a consequence. There were few resources available for finding information, other than trial and error. Interfacing with another system was generally a slow and painful process.
Open Source came about as a reaction to this common and frustrating problem. But it was an unpopular reaction.
If you are building on someone else's underlying code, then having the source as a reference can make diagnosing problems easier. If of course, that source is readable, elegant and short, which is so rarely the case anymore. Still, just the ability to peek into the source provides a measure of comfort.
On the other side of the fence, if you make your source code visible to other programmers, it turns out to be rather easy for them to create similar but legal copies. They don't need to fully understand the design to come up with a related one. If they get the general idea, that is often enough. Not that they couldn't eventually figure it out on their own; it is just that you might save them lots of time and millions of dollars in resources. Millions that you already had to pay.
The side effect of all this openness is that it becomes extremely risky to release an Open Source application. Once it is out there, your competitors have as much understanding of the system as you do. So risky, given the millions needed to write a commercial product, that the only people initially willing to write Open Source code were the ones who could afford to give it away for free: the students and the academics. More poignantly, they were going to give it away for free anyway, as they always had, even before the Open Source movement started.
Along the way, this caused 'free' to become synonymous with Open Source, which started things rolling at an ever-increasing rate.
Early on, most companies hated the movement. Open Source tools were commonly banned in IT departments for years. There is still some hesitation, based around support issues, but mostly that has changed.
Once it got larger, more and more people were willing to try to capitalize on Open Source.
Two things emerge from this type of momentum: the first is the tendency for Open Source to consume all of the low-hanging fruit, the smaller coding challenges. The second is for it to target popular commercial offerings, trying to replicate their success. Both are serious problems.
With so many programmers willing to spend their evenings and weekends writing small bits of code for free, there are fewer and fewer places for companies to get launched. The small libraries and applications are all taken. This means that fewer companies are coming into the market, so there is less competition, and subsequently there is worse code. Quality becomes optional.
There is also less incentive for big companies to release new products. You can't come into the market if the smallest thing you can profitably write costs millions to build. It is difficult even if you are large and confident.
What about the existing Open Source code? Turning a free piece of code into a proprietary one creates enemies fast. You can't directly monetize the code once it has gone Open Source. A sudden conversion towards profit will quickly spawn rival projects that will copy the ideas and give them away for free. You'll lose, in the same way you'll lose if Microsoft decides to write a version of your application. In fact, that pressure from both sides keeps the market empty.
Software is always a huge risk involving a lot of money. Eliminating the smaller projects just ups the ante.
Since Open Source often holds few corporate ties, it could conceivably be a means for great research. There is freedom to truly experiment, and to build things that are beyond the bleeding edge.
Oddly, the behavior is reversed, as most Open Source projects chase commercial ones in vain attempts to give away the commercial functionality for free. No doubt it is easier to redo something similar than it is to create a brand new paradigm. People take the easy route. The less risky one. Even though they are not writing for commercial purposes, the programmers still want fame and, ultimately, riches. The game may change, but the motivations stay the same.
There are a few innovative Open Source projects, but the big ones have all grown successful by displacing earlier commercial offerings. Some have led the market, but many are just playing catch-up. Far too often, the Open Source programmers try to do too much on their own: the commercial products, for example, employ experienced graphic designers and technical writers, while the Open Source versions just wing it. They may be close technically, but that is only part of the battle.
Getting back to the question in the title: do I think Open Source is hurting Computer Science? My answer would be 'yes'. Primarily because it is stifling innovation and keeping the smaller commercial players from entering the market. Combined, these two issues have caused a huge glut of code, but mostly of the low-quality knock-off variety. Innovations have been rare. This has driven software to become a commodity, one that directly has little or no value. Companies can no longer rely on software revenues; instead they are driven by service contracts or advertising. Being almost worthless means there is little incentive to explore, push the bounds or even fix the known bugs in existing products. Effort always follows the money.
While it is nice that I can download so much functionality onto my machine for free, so much of it is unstable that it diminishes its own utility. It becomes something to collect for the sake of collecting it, but not to depend on when you need to get something done. And it has become a sore point that I end up downloading so many bad offerings, in the hopes of finding some new usable tools. A lot of time is wasted on near-misses. It's almost worth paying someone to separate out the usable technologies.
We do seem to be getting it from both sides. The quality of commercial products has dropped because of a decrease in the value of software. The Open Source code is quickly assembled by part-timers seeking fame, not quality. We end up living in a world of low-quality software. Plentiful, but undependable. One probably caused by too much 'free' work.
If I could change anything, I would push for the Open Source programmers to stop trying to copy their commercial counterparts. They have an opportunity to leapfrog the existing technologies, so squandering it on producing replicas of already broken interfaces just seems cruel. Computer Science has stagnated. All we get are really old ideas, from decades ago, recycled in shiny new wrapping paper.
It has been quite a while since someone, anyone actually, has produced a really new innovative idea. Sure, there are a few cases now where they've managed better implementations, but the underlying ideas were there all along.
We've reached the saturation point with our current technological factoring. Just as in the past, the pieces that we have created will bear no further weight. The whole becomes too complex and too unstable to manage. As always, at this point, some brave souls need to head back to the basics and find yet another factoring, one that supports a broader upper range. Instead of producing what are effectively brand-name knock-offs, the Open Source programmers have a chance to contribute to the industry and push it to the next level. It might help redeem some of the damage that they have caused.
Wednesday, August 22, 2007
Few Technological Conveniences
The photocopies are long overdue. There are several other more important things on my agenda for the day, but I've delayed doing them for so long I have no choice.
Wearily, I rush upstairs to the photocopier and insert my clump of papers. I enter a security code and a few other options, causing the machine to spring to life, sucking paper in at one end and spitting it out in several places at the other.
But then -- as one always knows will eventually happen with photocopying -- the sounds of activity cease. It all comes clanking to a halt, my work being left undone.
The lights blink ... then there is a message on the little console screen: please change the paper in tray 2.
Ok. No problem, I think. I can handle this.
As I look down, to my horror I see that there is no tray 2. There never was. For some insane reason the machine has gone on a bender halfway through my work and is now pining for something that never existed in the first place. Foog! Stupid technology. Fooled again.
But hey, I'm a software guy; this happens to me all the time. Maybe not with photocopiers, but certainly with at least 70% of the applications I use on a day-to-day basis. They suddenly choke up for no good reason.
Further investigation only shows that I should not have left my bed this morning. Some engineers, somewhere, decided for some reason that they don't have any empathy for their users, which I happen to be right now. They decided that they don't want to spend that extra effort to clean up the mess. They decided that it really ain't that bad. Or they even decided that they like the way it works; it speaks of their character.
The truth is, I don't much care. And if I could send them some bad karma, well, they should just be careful and not move around the house too much today.
So much crap, so little time to expose it.
At least with physical things like photocopiers, most people can immediately identify the crappiness of it, so it gradually gets fixed over time. Not so with software, where it may have been good once, but sooner or later some fun-starved geek is going to turn it into an uber-complexity nightmare. Even if it works today, it may be wrecked tomorrow.
Bad technology is so easy to build. Just sprinkle in a little bit of:
- Fiddly bits that you can play with, but shouldn't or things will break.
- Weird unintuitive abstractions that are unrelated to the problem.
- Inconsistency to show that the coder is a creative sort.
- Sensitivity to small changes, causing big random effects.
- Non-documentation and torture-torials, preferably with minimal editing.
- One-size-fits-all errors that are not traceable back to the original input.
Nothing is more fun than changing some small, seemingly insignificant parameter, only to see the whole application blow up spectacularly. Even better: have it blow up a few days later, so it is even more difficult to trace the crash back to the changed parameter. That kind of "well-designed" software is just such a joy to play with.
I could go on and on forever I think, as there are way too many good examples of bad technology out there. Worse though, is that people are becoming accustomed to the crappiness. They look at me like I am mad for just wanting something to work consistently or correctly. There was a time when much of what we use now would have been rated with an F-, but now it has become standard fare. The coders probably aren't even trying to fix it, are they?
I'm tired of complaining, but I keep getting sucked into this stuff. It is particularly awful with the stuff floating around the IDEs that people are using. "Use the new module X to solve all of your growing needs" they proudly exclaim while you watch another two to three of your hours get flushed away on another pitiful excuse for technology. I keep falling for this, I am such a sucker.
I get caught, I guess, because I am still looking for tools that work. And I am still being disappointed.
These days I can definitely say that the depth and frequency of my disappointment are increasing. If we have progressed, we have done so by learning how to churn out more crap at a faster rate and by convincing ourselves that it ain't so bad. Yes, that's it: progress from being able to tie the blindfold around our eyes that much faster. That's a form of accomplishment, isn't it?
So I kick the photocopier, then I open up all of the trays and close them. After that I turn it off, then on, then off again. I spin three times, kick the trays again (for good luck), hit the on switch and enter my security code. I hit go and it springs back into life, without any sense of why it had previously failed.
Whew, I exclaim, to nobody in particular. I dodged a big one. It could have been weeks before that machine worked again. If ever.
Monday, August 20, 2007
Driving Design
It was late. I was sitting in front of the keyboard, excitement flowing through my veins like the swarm of electrons that was feeding my computer's circuit board. The statistics for my site were excellent, I think. Both the visitors and the subscriptions were at an all time high.
The fan on the computer was quietly humming in its case, drowning out the stillness of the night. The cursor silently blinking, begging me to get to work.
The problem with technology is that you can find out too much information too quickly. Without a proper context you may know something, but do you really understand it?
Inspired by the statistics from my site, I figured I needed to do it again. Another post, but this time I'll really push the envelope; perhaps I can lead with a scene that has lots and lots of color. I should have remembered being disgusted, way back, while reading some technical book where the author went on and on about his dog. Apparently I learned little from that experience, as I quickly worked my own pet into the subtext.
I set the scene -- walking the dog late at night -- but regrettably I think I failed in tying it back to the topic. I'm new at this, so you need to forgive my enthusiasm.
I got the usual sense of relief when I finished posting the entry, but that was soon replaced by a growing sense of dread as I kept checking my statistics each day. Down they went. Spiraling counterclockwise like water in a newly flushed toilet bowl. Even if I were in Australia, where rumor has it that the rotation is reversed, I wouldn't be saved from the momentum of my writing. The post was tanking, and it was tanking fast.
Ok, so maybe dogs are a really bad idea in a technological post? Sure, that's it. Must be. It is some type of doggie kiss of death. A big slobbery one. Mention your pooch and watch your stats plummet. Tossing in clever references like "toilet bowl" or "hairball" can't be great either, just more ways to drive people away. You'd think I'd learn this at some point.
So now that I've killed off my readership, I might as well get back on the methodology thread. I feel it is important, but it is one of those things that people really don't want to read about. Ironically, I think it would make their lives easier if only they worked through the issues. Most people however, can't see past the poodle, so to speak.
When last I stumbled over it, I was pushing the idea that the elements of the design are actually the responsibility of the methodology itself. As well, the flaws in the design flow directly into the implementation, making it paramount to not only get the design defined correctly, but also tightly. With all of that in mind, we need -- as part of the process -- to find the least destructive way of gathering together the details of the design and compressing them into a usable blueprint.
A design is composed of: constraints, the key functionality, and the bulk of the matter. These three elements form the essence; beyond that, there remains the issue of how the design interacts with the world around it.
Constraints are those rules that must absolutely be in place or the project is considered a failure. Many projects truly have none that are absolute, but for some designs there are just things that cannot afford to be wrong. These can be technical, such as meeting a bare minimum of performance, or behavioral, such as guaranteeing that the results meet a level of conformance. If there are constraints and they are not met, then the project is a total failure. If that's not a true statement, then they're not really constraints and shouldn't be treated as such.
All systems are based around some key functionality, driven directly by the problem domain and addressed by the tool. That needs to be fleshed out, as the tool is generally useless without getting this right. This is the meat of the system. Mostly, this functionality is built on some set of complex algorithms, generally underlying the mechanics. Sometimes it is a batch thing. Whatever it is, understanding these functions is understanding the system. You need to spend time designing here, because not doing so would result in the system not working.
The remainder of the system is everything that needs to exist to round out the product, but is not vital in terms of being specified. For instance, in any application with users, there must be administrative facilities to change the user data. De-constructing that example, the secondary functionality of most systems revolves around viewing and editing all of the "other" data in the system that is not necessarily core. Now, we could list this out in terms of its functionality -- most commonly done as requirements -- but truthfully that is a long, redundant, boring list of view "this" and edit "that" descriptions. The "this" and "that" are data, and it is for handling this data that we require a design. This leads to our understanding that all we need to do is list the data and a few rules about its visibility within the system. Structurally, if I said that the system contains users and groups and a bunch of screens to update and modify them -- accessible only by the administrator -- you know exactly what I mean. There are a few variations on implementing groups, but they are all similar. If it really mattered because it affects the core functionality, then it should be referenced in the core.
What exactly do the screens look like? If they are in a GUI-based interface, they must fit within the existing visual layout. This makes them well-defined if they follow the conventions of the rest of the screens.
The system lets you manipulate the user and group information; simple functionality that is bound by the rest of the system. There is very little flexibility in the design. It needs to match to be correct.
Not exactly rocket science, but still people feel the need to repeat all of the tiniest details. Often in doing so, they create inconsistencies that are then transferred into the code.
Specifying it once is best, and least likely to be inconsistent.
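To make that concrete, here is a minimal sketch of what such a compressed specification could look like. The entities, fields and roles below are purely hypothetical; the point is only that the data and its visibility rules are stated once, while the view and edit screens follow from the system's existing conventions:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Entity:
    """One kind of secondary data in the system."""
    name: str
    fields: List[str]
    visible_to: List[str]   # roles that may view the data
    editable_by: List[str]  # roles that may modify it

# Hypothetical compressed spec: the data plus a few visibility rules.
# The conventional admin screens to view and edit each entity are implied
# by the rest of the system, so they are not spelled out one by one.
SPEC = [
    Entity("user",  ["login", "full_name", "email"],
           visible_to=["administrator"], editable_by=["administrator"]),
    Entity("group", ["name", "members"],
           visible_to=["administrator"], editable_by=["administrator"]),
]
```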
If you can get the constraints, the main functionality and all of the data fleshed out, what remains is how the system interacts with the rest of the world. There are many contact points with people or systems, ranging from simple configuration files, application programming libraries, installation interfaces, up to command line interfaces and large graphical user interfaces. Wherever anyone -- other than the original programmers -- interacts with the code, data or configuration, that is an interface and that needs to be defined.
We need to know the interfaces to build them, but people also need to know the interfaces to use them. In that, what is really important is consistency. Even in something simple like a configuration file, consistency makes it easy to use and leaves a good feeling about interacting with the system.
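As a small, hypothetical illustration of the point: two sets of settings that mean exactly the same thing, where the first style can be guessed after seeing a single key and the second sends the user back to the documentation every time:

```python
# Consistent: one naming style, one vocabulary.
consistent = {
    "server.host": "localhost",
    "server.port": 8080,
    "log.level": "info",
    "log.path": "/var/log/app.log",
}

# Inconsistent: four naming styles for four keys, all in the same file.
inconsistent = {
    "ServerHost": "localhost",
    "port_number": 8080,
    "logging-level": "info",
    "LOGFILE": "/var/log/app.log",
}
```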
Inconsistencies piss people off, kinda like spurious dog references in technical writing. Not a good idea.
The specs should exist both for the developers and the users. If we can draft them only once, we can leverage our effort. Not only does this save work, but it helps to ensure that the interfaces as documented are the interfaces as built. A labor-saving way to get the interfaces right, implement them and make sure they are accessible.
The key elements and the interfaces provide the full specification for a system, but they are still subject to the basic problem of people over-complicating the design. The great programmers simplify their designs naturally; everyone else must work hard at it manually. Applying brute force in the design stage leads to bloated code, which indirectly implies that the opposite should be true. If we work hard to compact the design -- particularly when we have something that is small and abstract, that can be manipulated in our heads -- we have a chance of finding those inspirations that lead to a simpler, more elegant approach. And we find these improvements at a time when it is still possible to work them into the code.
The standard approach to software design says to explicitly iterate each and every requirement, breaking down all compound ones. Recovering from that ill-fated approach is what many programmers accomplish during their implementation, if they are successful.
By compacting the pieces in the design there is a huge improvement in reducing both the size of the implementation and the risk of failure. Not only are we doing less work, but the work we are doing is more accurate. A small tight design leads to a better implementation. Practical experience has always shown me this, but for some reason it does not seem to be intuitive.
So much time has been wasted in needlessly expanding all of the minuscule details for designs. Effort that I know is misguided.
At the end of the day, a clear, well-formed, reasonably detailed specification that is only five pages is far better than sixty pages of redundant, repetitive, over-stated requirements and screen shots. Even if the sixty pages are consistent and perfect from a design perspective, there is still an increased chance that the programmers will not correctly get it into the implementation. It is more work to extract the data from a larger volume, and more likely that it won't be done correctly.
All that the extra, pedantic work creates is more work, more problems and more risk. Not something we really need on a big project. Personally, I prefer to win.
So I figured out the statistics thing. I'm sure it is the dog's fault. It has to be. I can't think of any other reason, or at least one that I would really like to accept.
In thinking hard about this I've come to a simple conclusion. I need to buy a parakeet or possibly a ferret. See, if I color my posts with something other than a canine, that will attract readers, won't it? What is better than a good ferret story? People love that kind of thing. It can't fail. It has got to be an exotic pet thing, but if you don't think so you can always send me some feedback. It is actually subjective, so you can't be wrong. Fish, fowl or even something better...
Just don't try telling me to stick to dog stories, I've been there and I think I've actually managed to catch on that it doesn't work. It just takes me a while, that's all.
Thursday, August 16, 2007
In the Still of the Night
I'm outside walking the dog; the air is cool and calm. It is a nightly ritual -- generally occurring right after watching the news -- giving me time to mull over the day's issues. I'm not sure what it is about being alone on a cold dark street, wandering around leashed to 65 pounds of pure panting energy, but my mind wanders. The dog pulls, we twist and turn down various streets and gradually my thoughts fall into place. They begin taking a shape all their own.
Claude Shannon delved into the nature of information and rolled it into a comprehensive theory. He might have started by wondering how few bits you really need to represent specific data, building it up from simple questions like that. Information Theory is an incredibly important branch of mathematics that has helped shape the foundations of software.
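As a small aside, the standard formula he arrived at is simple enough to play with directly. A quick sketch, with made-up strings, of how the symbol frequencies determine the minimum bits per symbol:

```python
import math
from collections import Counter

def shannon_entropy(message: str) -> float:
    """Average number of bits needed per symbol, given the symbol frequencies."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A repetitive string carries less information per character than a varied one,
# so it needs fewer bits per symbol to represent.
print(shannon_entropy("aaaab"))  # ~0.72 bits per symbol
print(shannon_entropy("abcde"))  # ~2.32 bits per symbol
```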
In the same way that he examined how little we need, I've often come to contemplate how little 'code' we need to represent a large sequence of operations. In a sense, how small an expression do you need to clearly express some underlying functionality?
The dog tugs hard as a cat goes screeching by. It jars me back into the scene. Some of the neighborhood cats are terrified when she approaches, while others actually head straight for the dog as if they were willing to confront her directly. Generally, for the crazy ones I intervene and change course, allowing the cat to win. Cats, you see, are not to be messed with. They may be a fraction of the size of a dog, but they go at life with far more determination.
Falling back into my line of thinking, I decide to steal an idea from Economics, where they love to talk about the "economies of scale". I like this saying, but I think the term "economies of expression" suits my current line of thought. My definition for this, flipping it a bit from its origins, would have to be something like: "reductions in syntactic expression for a set of underlying actions can produce decreases in complexity". You get -- if I can manage to clarify it -- less complexity in the system if you can encapsulate your ever-increasing sets of steps into smaller and smaller expressions. Each line of code built up should pack more of a punch than the earlier ones. If the expressions are still recognizably associated with the steps, then the complexity is reduced.
A great example of this is Graphviz. It is a little-language interpreter that draws complex graphs. It has a small, simple syntax that quietly hides its full expressive power. It is amazingly broad in its capabilities, yet stunningly simple in execution. A great example of elegance.
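For anyone who hasn't tried it, here is a tiny sketch of what that economy feels like in practice. The graph itself is made up, and the snippet assumes Graphviz's dot command is installed and on the PATH:

```python
import subprocess

# Three lines of DOT describe the whole graph; Graphviz works out
# the layout, routing and rendering on its own.
dot_source = """
digraph build {
    parse   -> analyze;
    analyze -> generate;
    parse   -> generate;
}
"""

# Feed the description to the 'dot' layout engine and ask for a PNG.
subprocess.run(["dot", "-Tpng", "-o", "build.png"],
               input=dot_source, text=True, check=True)
```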
We should seek solutions that have these characteristics. However, we often expand the interfaces instead of refactoring them. To work with the bloat, you need more code to express less underneath. It is the opposite of elegance.
When I develop, I am always aware of how I am building layers on top of each other. In each new layer, the expression is smaller, but the functionality is broader. Instead of just writing a program, I'm often building a toolkit in which the final program can be expressed as a small set of axioms. If well-balanced and tight, the higher abstractions can be combined to form ever higher abstractions. Over the life of the project, you should be developing the ability to specify a great deal with an ever-decreasing amount of code. Layer on layer, you build up the capabilities of the system. It becomes easier to add more as the code grows, not harder.
That is what I mean by leveraging the economies of expression. The earlier work makes the future work easier. As it progresses, I can express a complicated set of tasks with a minimal amount of syntax and still have the flexibility to apply it to related tasks. I build upwards, often by employing abstractions, but definitely by encapsulating the details -- completely -- below.
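A minimal sketch of that layering, with entirely hypothetical helpers: the low-level steps are written once, and the final "program" shrinks to a single expression built on top of them.

```python
# Bottom layer: the raw steps, spelled out once.
def load_rows(path):
    """Read a comma-separated file into a list of rows."""
    with open(path) as f:
        return [line.rstrip("\n").split(",") for line in f if line.strip()]

def column(rows, index):
    """Pull one column out of the rows."""
    return [row[index] for row in rows]

def total(values):
    """Sum a list of numeric strings."""
    return sum(float(v) for v in values)

# Next layer up: the same work collapses into one expression that still
# reads as the steps underneath it.
def column_total(path, index):
    return total(column(load_rows(path), index))

# The final program is now a one-liner built on the toolkit below it:
# monthly_sales = column_total("sales.csv", 2)
```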
The code gets easier to write as the project matures. The progress is faster. We can add more functionality, more correctly, and in less time; building on our foundations as we go.
This time, my dog sees a raccoon in a tree and she goes crazy. She is jumping up and down, making a racket and looking wild. It is a quiet area, late at night, so I often worry that her freaking out is making enough noise to disturb people. I've never been one to enjoy rocking the boat without a reason, but there is little I can do to calm her down. She sees the presence of this hairy scavenger as a worthy cause, making a Tasmanian Devil sort of sound effect as her hair stands on end while she bounces into the air in a sort of frenzy. It could be a scary sight, except that I know she is really just a big cream-puff underneath. All blow and no show, so to speak. If that raccoon turned and faced her, she'd be just as happy changing course and finding something else to bark at for the night.
Claude Shannon delved into the nature of information and rolled it into a comprehensive theory. He might have started with wondering about how few bits you really needed to represent specific data. Building it up based on simple questions like that. Information Theory is a incredibly important branch of mathematics that has helped shape the foundations of software.
In the same way that he examined how little we need, I've often come to contemplate on how little 'code' we need to represent a large sequence of operations. In a sense, how small an expression do you need to clearly express some underlying functionality?
The dog tugs hard, as a cat goes screeching by. It jars me back into the scene. Some of the neighborhood cats are terrified when she approaches, while others actually head straight for the dog as if they were willing to confront her directly. Generally, for the crazy ones I intervene and change course, allowing the cat to win. Cats, you see, are not to be messed with. They may be a faction of the size of a dog, but they go at life with far more determination.
Falling back into my line of thinking, I decide to steal an idea from Economics, where they love to talk about the "economies of scale". I like this saying, but I think the term "economies of expression" suits my current line of thought. My definition for this, flipping it a bit from its origins, would have to be something like: "reductions in syntactic expression for a set of underlying actions can produce decreases in complexity". You get -- if I can manage to clarify it -- less complexity in the system if you can encapsulate your ever-increasing sets of steps into smaller and smaller expressions. Each line of code built up, should pack more of a punch then the earlier ones. If the expressions are still recognizably associated with the steps, then the complexity is reduced.
A great example of this is Graph Viz. It is a little language interpreter that draws complex graphs. It has a small simple syntax that quietly hides its full expressive power. It is amazingly broad in its capabilities, yet stunningly simple in execution. A great example of elegance.
We should seek solutions that have these characteristics. However, we often expand the interfaces instead of refactoring them. To work with the bloat, you need more code to express less underneath. It is the opposite of elegance.
When I develop, I am always aware of how I am building layers onto top of each other. In each new layer, the expression is smaller, but the functionality is broader. Inside of writing a program, I'm often just building a toolkit in which the final program can be expressed as a small set of the axioms. If well-balanced and tight, the higher abstractions can be combined to form ever increasing higher abstractions. Over the life of the project, you should be developing the ability to specify a great deal, with an ever-decreasing amount of code. Layer on layer you build up the capabilities of the system. It becomes easier to add more as the code grows, not harder.
That is what I mean by leveraging the economies of expression. The earlier work makes the future work easier. As it progresses, I can express a complicated set of tasks with a minimal amount of syntax and still have the flexibility to apply it to related tasks. I build upwards, often by employing abstractions, but definitely by encapsulating the details -- completely -- below.
The code gets easier to write as the project matures. The progress is faster. We can add more functionality, more correctly, and in less time; building on our foundations as we go.
Tuesday, August 14, 2007
Return to the Dreaded M...
You're being spontaneous. You buy a huge collection of tools. Everything from hammers and saws all the way up to clamps, shovels and drills. You order in tonnes of concrete, wood and drywall. Perhaps you even throw in a bit of re-bar -- those steel rods used to reinforce concrete -- just for extra strength. With all of your energy and a few of your closest friends, you hit the dirt and start digging like mad. Dig, dig, dig, and then you pour the concrete. Creatively, of course; in various spots around the excavation. A few truckloads at least, if not more. A bit here, a bit there. You're not sure yet where it really belongs, but you'll figure that out as you go. You pound in some boards: big and small. Then it is time to whack up the drywall. So you add a wall here, maybe a window or a doorway over there. A bit more concrete; then some more wood; you keep going. It looks like some stairs might help at this point. Towards the end you toss up some "roofing" areas to keep the rain out. Add in a few shingles and "presto", you have a finished house. No fuss, no muss and definitely no plan. Sounds a little crazy, doesn't it?
So honestly now: how messed up do you think this house is going to be? I've been fixing parts of my house for decades, but I can't imagine building a whole house from scratch. I'd definitely never try it without having a well thought out plan in advance. It would be insane to spend tens of thousands of dollars on supplies, equipment, and all of that time, if you suspected from day one that the house was probably going to fall down. But you'd expect that, wouldn't you, if you started working without a plan? Even if you were a professional house builder with years under your belt, unless you were following a design, detail-by-detail from memory, things would go wrong. Horribly wrong. A house is a complicated thing; you just can't build one without some sort of coherent design. All of the pieces have to come together for it to work. Built randomly, the building will have an inordinately large number of problems. Any structure built under the same circumstances -- in fact, anything at all built under the same circumstances -- would have lots of problems, unless of course it was trivial.
If you wouldn't do it with thousands of dollars worth of housing supplies, you certainly wouldn't do it with millions of dollars worth of software development, would you? Or would you?
Virtual or not, building software without a design will result in an edifice that won't stand up to the rigors of time. For anything other than a simple project, there must be a plan, and it must contain a design. Arguing otherwise is not rational, and certainly not a position you want to dig in and defend for any length of time. A design is necessary to avoid wasting time and resources, and, in the case of a house, to keep the structure from collapsing on your head. Designing is a good thing. No one should seriously suggest otherwise.
You need a design, but how much of it do you really need? What about going out there and using a Polaroid camera to capture all of the good bits that you see in other houses? Polaroids are useful because you get a physical hard-copy of the image almost instantly after taking the picture. Snap, snap, snap, and you capture this corner, that wall, this part of the window frame, and so on. If you visit enough houses, and take enough pictures of all of the cool details, you should be able to put it all together to form an uber-design for the perfect house, right?
Is it that surprising that your stack of Polaroids holds no more value than just winging it?
An incoherent design, one that doesn't present a complete model of the thing to be constructed, can't be validated. If it is not small enough, or not abstract enough, to be manipulated inside someone's head, it can't be validated. As such, there will be problems; the only question is how fatal they will be. The problem with a stack of Polaroids as a design is that the details may all be there, but the big picture is missing and the organization is awful. The stack might be enough input to produce a design, but it is no substitute for a well-drafted blueprint of a house. Blueprints are a representation that can be worked from: small, abstract, yet detailed in a manner that is quickly readable and easily validated. Content is important, but how it is presented is equally vital.
So why, then, do software developers gather buckets of requirements (the little details) and oodles of screenshots (more little details), and then expect to build a coherent system from that mess? Tossing in more details in the form of a bunch of disconnected use cases isn't going to fix the problem.
Capturing the bits may be crucial, but if they aren't integrated into a design they are useless. Or even worse: misleading.
The bulk of most designs is pretty much 'routine engineering' anyway. It is repetitive, and has been done before. Lots of times, actually. It doesn't serve anyone to iterate out all of the same basic details over and over again. The real design quickly gets lost in the mush. You certainly don't see that with the blueprints for a house; the actual design is projected onto a 2D surface where the true spatial locations for things are not explicitly specified, yet there is enough there to correctly build it. That is a key point. A critical aspect.
A design needs to encompass most of the details -- it shouldn't get lost in that 80% of redundant stuff -- but the other 20% must be clear. To be useful, the developers need quick access to the right information at the right time. Breaking out each requirement into mind-numbingly trivial detail only slows down the process. By being pedantic we often obscure the details with noise. A massive list of all of the possible requirements for a system fails to work as a design. It is too big to validate, too redundant to pay attention to, and too noisy to fully understand.
As well, I think that fully listing out all fifty screens in the system is not a good design either. It makes more sense to design a couple of 'types' of screens, such as a detail view and a list view, and then match them back to the fifty. In that way, there is a concept (screen type), a high-level view (the different types of screens) and then the remaining details: a list of fifty screens and their resulting mappings.
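As a hypothetical sketch of that shape -- the screen names, layouts and actions are all invented for the example -- the design boils down to two screen 'types' plus a simple mapping from each of the fifty screens back to a type:

    # Concept and high-level view: two screen types cover the whole system.
    SCREEN_TYPES = {
        "list":   {"layout": "table", "actions": ["open", "sort", "filter"]},
        "detail": {"layout": "form",  "actions": ["save", "cancel", "delete"]},
    }

    # Remaining details: each of the fifty screens is just a name, a type
    # and the entity it shows. Three are listed here.
    SCREENS = {
        "customers":     {"type": "list",   "entity": "Customer"},
        "customer_edit": {"type": "detail", "entity": "Customer"},
        "invoices":      {"type": "list",   "entity": "Invoice"},
        # ... the other forty-seven follow the same two shapes.
    }

    def describe(name):
        screen = SCREENS[name]
        kind = SCREEN_TYPES[screen["type"]]
        actions = ", ".join(kind["actions"])
        return f"{name}: a {kind['layout']} of {screen['entity']} ({actions})"

    print(describe("customer_edit"))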
Going at the design in this manner serves several useful purposes. First, it is smaller, so it is more likely that a single human can validate its correctness. A feat whose usefulness should not be underestimated.
Second, it helps to provide the much needed consistency to the overall design. The screens flow, because they were designed to flow, from detail to list and back again. Users love that type of consistency, and getting it into the design early means a better likelihood of keeping it there for the implementation.
The most important reason for handling design like this is that any problems at the design stage directly influence the implementation: no design, for instance, causes chaos, while too many little bits, such as requirements and screenshots, blur the overall picture with noise. We are told to avoid compound requirements, as if they were somehow bad. But if the designers define a huge number of independent requirements, that "mass" of detail is exactly what will be coded, and the coding style will be "brute force". The result will be a big ugly mess of code. So, not only do all of those requirements make the design larger and more cumbersome than necessary, the implementation itself follows suit; a problem that threatens the design of the system and the ability of the development to finish on time.
Compacting the design compacts the implementation.
Getting back to house building, it is done with considerable care from a working design. Generally one that is thought out in advance. Designing a house is straightforward and well understood, particularly if you gloss over some of the weirder local conventions. Software development needs an equivalent mechanism for design.
We live in our houses, and renovate them over time. They get extended and modified to suit the various occupants. The original designers could never have conceived of what would happen down the road; it is not possible. My own house is over 100 years old and has been renovated so many times I could not even guess at the count. The original builders would barely recognize it today, and it looks quite different from the other similar houses in our neighborhood that all started life as the same design.
Knowing that about houses brings up two corollaries for software. One is that the design is going to change over time, as things change around it. People live in their software systems the same way we live in our houses. It changes over time, but that should never be an excuse not to design. You might not know exactly where it is headed, but you do know that it is a house you are building.
That leads to the second point: it is not possible to renovate my house, or any other small structure into an apartment building. Won't happen. If someday that's what ends up covering our land, it is because it came from tearing down our house (and several others) and rebuilding it. A house is a house, and you cannot enhance it to what it is not. You need to design and build an apartment building from the ground up to bear the types of stresses and strains that are unique to it. Often that point is missed in software. It is just that we can tear it down in bits and convince ourselves that we haven't made major changes. Or sometimes, we can choose not to tear it down, even though it would have resulted in better code and less work. Virtual monstrosities may not collapse on your head, but they do come tumbling down.
Still there? Good, the M from the title comes rolling back into this blog entry in the form of stating that a methodology defines the steps necessary to design and build software. The aspects of the design -- what goes into its makeup -- are key to the methodology. The way the needs of the users are collected, processed and carefully laid out in a design should be explicit steps in the process. How the design is packed together, should also spring from the process. If you were working with an architect to build or enhance a house, the "thing" that you are working on is the blueprints for the house. That much is defined by the process of building a house.
A good software methodology needs the same. It should tell you how to produce a design that is as simple and compact as possible. The blueprints for the system should be precise, but simplified abstractions that contain the overall view embedded with enough detail. What that actually looks like on paper is a problem for the methodology, not the software developer.
Saturday, August 11, 2007
With Cat-like Dignity and Grace...
And I thought I was doing so well. I had found my voice, knew what I wanted to say and was starting to express it in an enlightening and hopefully entertaining way. Then why, you ask, would I deliberately go out and make my readers cough up a hairball?
"Why not", I say. "Sometimes, you just have to talk about the things that people really don't want to talk about". Sometimes they should listen.
Building software is challenging, but many of our problems spring from the way we are going about it, not from the technology itself. At the very heart of all development is the dreaded m word. Muggle, you say, after having read too much Harry Potter? No, the other one: methodology. It is the 'thing' that sets the steps of the process, laying them out for us.
Perhaps it is an infancy thing, but for software the range of methodologies used is unusually vast, ranging from virtually nothing, all of the way up to inches and inches of documented process. The size of the range speaks volumes about where we are in our understandings.
Nothing is really bad, of course. A complete lack of any recognizable process, when you see it, is a mess. Like a steel ball in a giant pinball machine, flipping from crisis to crisis, the organization never follows the same path twice. The inconsistencies fuel the game. There may be occasional wins, but there will be lots of crashes too. Failure is an option. And a common one, as the ball is frequently missed by the flippers and goes sinking back into the drain. Chaos quickly takes its toll on the organization: low morale, high turnover, and volatility.
Contrast that with one of those long, slow, overwhelming processes. You know the type: it acts like a giant poisonous gas cloud, sucking the life out of everything it touches. Its presence leaves the coders fearful and huddling in their endless maze of cubicles. A full three-letter acronym of dread and despair that is sure to choke off even the most adventurous. The code gets done. Eventually. But it is so damn ugly by the end that most people turn a blind eye towards it, trying instead to develop hobbies outside of work. One last redeeming attempt to find some source of satisfaction in their lives.
Both ends of the spectrum are horrible places to be stuck, but they don't take away from the underlying need to have a good process. Even more important: the process you use absolutely shapes your output. Yep. You may throw all the cute technologies you want into the pile, you may even gather the best and the brightest, but the process always taints the results.
It is that important. It is that influential. It is that elementary.
I've used many processes over the years, none of which I would recommend. They fail more often because of their negative side-effects, not their central focus. It is sad because the process should be there to help me. I need something I can depend upon.
Over my next few blog entries I want to continue this discussion, focusing in on the areas where I think changing the process is crucial. Where it could work. We need new ideas for design, implementation and testing. Not complicated ones, but they need to fix the problems. Our current ones don't live up to our expectations. We either avoid the hairballs, or we rehash the same ideas, dressed differently, but still clinging to the same old flaws.
My ultimate goal is to write another book. A simple reference to a simple methodology. One that fixes the problems, not contributes to them. There is enough knowledge available, it just needs to be consolidated, sifted and repackaged in a way that it is usable. For this task, feedback is important to me.
We probably already know the next great methodology that will transform our industry for the 21st century, we just don't know that we know it, yet.
"Why not", I say. "Sometimes, you just have to talk about the things that people really don't want to talk about". Sometimes they should listen.
Building software is challenging, but many of our problems spring from the way we are going about it, not from the technology itself. At the very heart of all development is the dreaded m word. Muggle, you say after having read too much Harry Potter? No, the other one: methodology. It is the 'thing' that sets the steps of the process; laying it out for us.
Perhaps it is an infancy thing, but for software the range of methodologies used is unusually vast, ranging from virtually nothing, all of the way up to inches and inches of documented process. The size of the range speaks volumes about where we are in our understandings.
Nothing, is really bad, of course. A complete lack of any recognizable process, when you see it, is a mess. Like a steel ball in a giant pinball machine, flipping from crisis to crisis, the organization never follows the same path twice. The inconsistencies fuel the game. There may be occasional wins but there will be lots of crashes too. Failure is an option. And a common one too, as the ball frequently is missed by the flippers and goes sinking back into the drain. Chaos quickly takes its toll on the organization: low morale, high turnover, and volatility.
Contrast that with one of those long, slow, overwhelming processes. You know the type that act like a giant poisonous gas cloud, sucking the life out of everything they touch. Its presence leaving the coders fearful and huddling in their endless maze of cubicles. A full three letter acronym of dread and despair that is sure to choke off even the most adventurous. The code gets done. Eventually. But it is so damn ugly by the end that most people turn a blind eye towards it, trying instead to develop hobbies outside of work. One last redeeming attempt to get some source of satisfaction in their lives.
Both ends of the spectrum are horrible places to be stuck, but they don't take away from the underlying need to have a good process. Even more important: the process you use absolutely shapes your output. Yep. You may throw all the cute technologies you want into the pile, you may even gather the best and the brightest, but the process always taints the results.
It is that important. It is that influential. It is that elementary.
I've used many processes over the years, none of which I would recommend. They fail more often because of their negative side-effects, not their central focus. It is sad because the process should be there to help me. I need something I can depend upon.
Over my next few blog entries I want to continue this discussion, focusing in on the areas where I think changing the process is crucial. Were it could work. We need new ideas for design, implementation and testing. Not complicated ones, but they need to fix the problems. Our current ones don't live up to our expectations. We either avoid the hairballs, or we rehash the same ideas, dressed differently, but still clinging to the same old flaws.
My ultimate goal is to write another book. A simple reference to a simple methodology. One that fixes the problems, not contributes to them. There is enough knowledge available, it just needs to be consolidated, sifted and repackaged in a way that it is usable. For this task, feedback is important to me.
We probably already know the next great methodology that will transform our industry for the 21st century, we just don't know that we know it, yet.
Tuesday, August 7, 2007
As Written
Verbicide? "That's a rather interesting name for it," I say to no one in particular.
While that thought is rolling lazily around in my head I stretch out my feet, lean back and proceed to enjoy the summer sun.
I appear to be sitting in the middle of some type of insect super-highway. They travel back and forth around the yard, buzzing as they go, constantly passing me. The grass is green, the air is cool in the shade and I am reading a book on effective writing. I burn the sounds of the insects into my memory, but only because I am in the middle of reading a chapter on color in "A Writer's Coach", an excellent book, beautifully written by Jack Hart. In the chapter on humanity, the use of a vignette is suggested, which, as I read it, seems to alert me to the presence of the bugs; but for me, the pages reveal more appropriate wisdom.
If I am writing in Microsoft Word and I ask the grammar checker to highlight passive sentences, I always get an unusually large number of them. Somehow I skipped grammar in school, so it does not always stick that writing "the batter hits the ball" is an active sentence, while "the ball was hit by the batter" is passive. Before the book, it was just a bad habit I was trying to break.
Passive sentences are weak. They lack force, and according to the book, they are verbicide; killing a sentence with the presence of a weak verb.
So, my passive sentences are bad, I admit. A habit that I am desperately trying to change, but it does lead to an interesting question: why do I keep writing passive sentences in the first place? What is it about me or the English language that draws me back, again and again to a sentence structure that surely I should find as unsatisfying as any other reader?
I suspect, though I am unlikely to prove it any time soon, that the answer is related to my profession. As a computer programmer, I constantly struggle with complex rules and structure. On a daily basis, I work with logic, consistency and a huge dash of complexity. Somehow, the structure of a passive sentence comes easily, so I expect that it is the one most similar to coding. Like most programmers, I start with functionality, which I apply to the data. In that sense, a passive phrase like "the list is sorted by qsort" is far more natural than "the program qsorts the list". Coding begets passivity. My problems, then, are deeply rooted in my daily activities.
If that is really true then my verbicidal tendencies, while still my problem, are definitely less my fault. I'll always have to spend my life combating them, but at least I have a better understanding of why they happen. Feeling a little less impatience with myself, I figure I should relax and enjoy the scene, taking in the fresh air, the wind rustling through the trees and pondering on a way to end this piece with a hard consonant, for maximum impact.