We are awash in an endless sea of data. Collecting it is easy. Storing it is gradually getting cheaper.
All of this information we gather serves us absolutely no purpose if we are unable to structure it and then summarize its meaning. With so much data, if we can't abstract our representation into some geometrically simple form of presentation, then we can never understand what it is that we have collected.
Crunching that much information is the job of a massive system. The type of behemoth that probably doesn't exist, but that most of us can imagine.
You can't build systems of that caliber by yourself. We can't even build them with a single team. We have the technologies, but not the methodologies, for this type of work. Large systems require large groups of developers to come together.
Directing a big group of people towards a successful end is not an easy task. When that task involves building something complicated with lots of dependencies, it becomes even harder. Software, although virtual and seemingly 'lightweight', is plagued by a surprising amount of complexity. Hidden complexity that is usually underestimated.
By yourself, it may be easy to conceive of a simple design for some software program and over a few months or even a few years belt out the implementation. But the problem changes when you need to include other people, and it really changes when you include them right from the beginning.
To build big, you need a big group of people; there is no choice.
To get a group of programmers together, and to get them all building the same system, you need some form of control. You are unlikely -- for instance -- to get twenty great programmers; some of them will be more skilled than others. You are unlikely to get twenty mind readers; most of them require communication to direct them towards their work. You can't just fire the ones you don't like. The ones that don't measure up to your standards. You'll be left by yourself and you just aren't fast enough to build big systems.
So you need some way of aligning their goals towards a common vision. You need an architecture, and you need it in a form that is easily sharable. Blueprints. You need some type of blueprints that are:
- Simple, yet abstract.
- Small enough to understand.
- Unambiguous, so it gets done correctly.
It is not that taking input from all of the programmers is a bad thing, or should be discouraged, but that 'design by committee' rarely produces elegant results. Where it matters -- at the various 'levels' in the design -- you want consistency, so you want the same designer's influence. Where the pieces are genuinely independent, you can have multiple designers.
The quality of the design depends on its underlying consistency. The quality of what you build depends on the number of 'cooks'. Too many cooks spoil the broth.
That is why for many buildings -- at least where they actually care about appearance -- there is an architect. It is that one 'vision' that binds it all together, in a way that is coherent.
The core of a blueprint is just enough information, in an abstract enough form, that at least one person can cram it all into their mind and visualize it. The size of the abstraction is crucial, as you cannot easily validate it if it is too large.
A good architecture is an abstract design for the thing to be built, and its blueprints are the graphical (and textual) instantiation of that abstraction. They need just enough detail to go forward in an unambiguous manner. They needn't contain every little detail, but those details that are absent should be constrained in some way, for example by well known standards.
Having blueprints that you can share with the team is only the beginning. A good architecture does more than just enumerate the coding elements; it can also be used for:
- Iterative Development Support.
- Bug Reporting Triage.
- Testing.
In the architecture, you draw both horizontal and vertical 'lines'. The system is decomposed into layers and components. When these pieces are truly encapsulated from each other, you get stability in the project. Choosing to upgrade to a more professional resource management layer, for example, can be done all by itself, with no effect on the rest of the code. For each iteration, you can pick another layer or component and upgrade its depth and scope, growing the system piece by piece. If you know where you want to go in the future, this is a reliable way of getting there, one step at a time.
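To make those 'lines' a little more concrete, here is a minimal sketch in Java -- the names (ResourceManager, SimpleResourceManager) are hypothetical, invented only for illustration. The point is simply that the rest of the system talks to an interface, so the layer behind it can be upgraded in a later iteration without touching the code above it:

```java
import java.util.HashMap;
import java.util.Map;

// A hypothetical resource handle, just enough to make the sketch complete.
class Resource {
    final String name;
    Resource(String name) { this.name = name; }
}

// The architectural 'line': everything above it depends only on this
// interface, never on a concrete implementation.
interface ResourceManager {
    Resource acquire(String name);
    void release(Resource resource);
}

// Iteration one ships a simple in-memory implementation.
class SimpleResourceManager implements ResourceManager {
    private final Map<String, Resource> cache = new HashMap<String, Resource>();

    public Resource acquire(String name) {
        Resource r = cache.get(name);
        if (r == null) {
            r = new Resource(name);
            cache.put(name, r);
        }
        return r;
    }

    public void release(Resource resource) {
        // Nothing to clean up in the in-memory version.
    }
}
```

A later iteration can swap in a pooled or persistent implementation; the callers never notice, because they only ever see the ResourceManager interface.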
As well, with discrete, well-understood 'lines', it becomes easier to quickly call out the likely location of a bug. This attribute cuts down on support, as one person can accurately triage incoming bugs towards the right resources without having to get the whole team involved. A significant time savings.
The architectural lines affect testing as well, but I'll leave that as the subject of a follow-up post. It is too complex to fit into this entry.
When you lay down the blueprints for an architecture you set a structure over the project. This type of framing gives stability, but it also allows you to delegate responsibility for the pieces to various sub-teams. If you've gone with a 'big ball of mud' for an architecture, the sub-teams will inevitably overlap with each other. That causes confusion and strife.
While we can envision huge systems capable of processing mass amounts of data, coordinating our efforts as a team towards a single development goal has always been a cultural problem for programmers. You can see the possibilities, but by not working together, all you get is frustration. Unlocking that potential comes from finding a way to communicate a consistent unambiguous design to a team of eager developers. We don't want to be mindless drones, but when that little extra freedom we crave threatens to derail the project, it also brings with it that horrible stress of possibly failing.
Blueprints that really work would alleviate many of the anxieties. If we get past this problem, then we can surf the sea of data and do some really interesting work, instead of reinventing the same simple applications over and over again.
Software is a static list of instructions, which we are constantly changing.
Thursday, September 20, 2007
Turbulent Waters
I've been here before. The howling winds keep forcing me backwards. I'm drowning in a raging sea of technological insecurities. I hit a few highs from interesting features, only to be tumbled back by deep lows for bad implementation choices. The ups and downs of a perfect storm born of weak technology.
I'm back, again. At that initial point in a new project where I am seeking new technologies. But instead of enjoying my evaluations, I am being tossed about by broken features and annoying flaws. All of the bad ideas over the years come together to produce unstable tools and awkward libraries. And I keep coming across more horrible things. Bad, really deplorable things. Things we learned not to do so long ago, but somehow we forgot. It is a nasty wind lashing at my attempts to find ways to work around the problems. I long for the peace and quiet of a more stable technology base.
Unfortunately, I'm sure I've surpassed my quota for ranting lately. I don't want to be negative anymore. At least not for a little while. I don't want to talk about any of the technologies or any of their specific weaknesses. I figured I could use my time a little more productively by describing the types of things "I want" in technology. There are a few common attributes that I think are desirable, but get lost in that endless race to dish out the latest and greatest.
Simple things should be simple. Nothing, I think, states elegance more than this property. If you want to use a technology for something simple, hey, it is simple to do. If you want more depth, then you have to invest more. No other attribute has value over this one. Often claimed, rarely duplicated. It can be measured by the amount of time you need to spend reading the documentation or fiddling with the functionality.
Once I get it, I should really get it. I don't want to have to reach for the manual for every little function and I don't want to have to go to the Internet for each problem I encounter. If I use the technology enough, I should be able to understand it, guess at how it works, and it should be consistent enough to allow me to do that. The farther away I can sit from the manuals, the better it is. If I can't grok it, after nearly twenty years of working with similar things, well, it probably ain't me.
I can use it out of the box. If I need it, I need it now. Package it up for me in a way that I can use it. I don't want to have to wire up twenty more modules before it actually works. Or change twenty configuration parameters. It's not fun spending all day setting up software. Why, if you built it to do something for me, should I have to go to some site to download a bunch of common plugins that everyone needs in order to do anything useful? Package it up for me. If it's a distribution or licensing problem, fix it. The more crap we depend on, the more we pay for it.
I can set it up my way. My chief complaint for most IDEs is that some guy (or gal) is making a lot of choices about how I work when I code. Turns out, mostly they are wrong choices, and I end up hating the interfaces because of that. I'd rather someone just integrate vim, Perl and tk and let me set up my own functions mapped to keys and buttons. I want a couple of editor windows (not a lousy split one), and a couple of bars of buttons that are hooked to frequently used scripts. I'd like to share scripts with my friends, and possibly the rest of the world. Check the share option, and it is done. I want it to be that simple.
The ability to automate. If I can do it with the GUI, I should be able to automate it on the command line. Why do we have to do the same 30 clicks over and over again, worrying about forgetting one or two of them, when we can harness the power of a computer to remember the number and order of the steps for us? Computers are this powerful tool that we constantly fail to leverage. Progress in automation has been backwards.
Meaningful documentation. How much time has been spent on documentation that is entirely useless? Unreadable. It always boggles my mind. Reference-backed FAQs or answers to questions are best. Give me man page syntax: a quick reference, a bit of an overview and then more detail on the references. Don't try to impress me with your writing, save that for your blog. If I get one more programmer bragging about how "great" their lame technology is, I'm going to lose it (again).
Simple abstractions and common terminology. My first ever attempt to put words to this problem was 'mystical worlds'. Programmers keep creating their own customized worlds with their own terms and structures and whatnot. Hey, this ain't science fiction. You don't need to invent your own elf language. You start reading about something like a "foo-flow demarcating its reverse bounding anti-reduction pattern", and you start thinking "These guys are crazy. They really are". It's easy to make up new demented terms, but it's avoiding the problem and just making software worse for the ware. I don't want real-world brute force, but then I don't want over-the-top different either. Give me something that looks like the problem or looks like an abstraction of the problem. If you can't explain how the problem maps back to the abstraction, you may have been able to code it, but you really don't get it. And it probably ain't going to work correctly for me.
Real Encapsulation. Man, nothing bugs me more than some encapsulating layer that still leaves all of the original stuff visible and accessible. Usually because the designer was too worried about restricting the freedoms of others, or something lame like that. Why do I want to use your 'thing' if it only saves me a couple of weeks of coding, but requires months of reading the crappy docs to get it? Ahhh. You figure it out, encapsulate it, and then give me a simple way to access it. If I wanted to learn about the details I wouldn't be using your stuff.
A clean, simple, consistent and complete interface. Ohh, these horrible programmers that embed all of this so-called 'functionality' into annoying mini-widgets, incomprehensible icons or other dancing baloney. They get some kick from packing it so densely onto the screen that even twenty years of staring at the thing still won't shed light on all of the stupid trivial hidden functionality. We could call them: plain-sight easter-eggs packed into fugly interfaces. Blaa. I want simple and I want complete. If I can add, then I should be able to delete. All of the yins are balanced with opposing yangs. If I can cut, then I can paste, and I can cut it if I can see it. And I don't want to read any documentation "at all" in order to understand how it works. It should be self-evident. It should match the popular conventions of the day. Highly creative, but awkward interfaces are really just annoying and awkward interfaces.
Well, I'd better stop there. The storm rages on, and I need to get back to feeling battered and helpless. I'm sure there is a lot more angst buried deep down, just waiting for the next stupid bug to give it energy and meaning.
I did a poor job of not ranting, but it is so hard not to, particularly when so many technologies are dedicated to disappointing their users. It is just an endless array of examples of what elegance is not, and never will be. You get those rare moments of being caught up in the eye of the storm -- thinking just for a moment that you could use the technology for good -- only to find yourself thrown right back into the fury. You're lucky if you can just make something work fairly reliably.
Strangely enough, you'd figure that at least programmers would be able to build excellent tools for other programmers, given the obvious fact that they are expert users in the same field. But something odd drives them to shortcut their solutions and produce rather lame tools. Editors, for instance, have declined, not improved, over the years. Scripting has as well. IDEs are better, but still badly designed and implemented.
The newer crop of coders seem dedicated to reinventing all of the problems of old, while ignoring all of the innovations. In some cases I suspect that the Internet is acting as a crutch, allowing really bad technologies that should have died a natural death to carry on torturing their users. Feeding -- because of the easy availability of information -- the storms that plague our industry.
Sunday, September 16, 2007
Paddling Upstream
Another shiver ripped violently through Coco. She stood on the dock, facing a huge dilemma: whether to get into the canoe or not. Only minutes before, she had been tied to a tree as we drifted silently across the lake, away from her. Our departure had provoked loud yelps of displeasure and frantic barking.
We had tried to get her into the canoe, but she refused. Possibly she didn't like the noise of the water pounding up against the side of the canoe, or maybe it was just the idea of sitting there and not being allowed to move that made her fear it. Whatever fears were motivating her, she was clearly in turmoil.
We returned to the dock to pick her up. She seemed to get the choice: either we head off again with her or without her; it was that simple.
Poor Coco, but you have to see it from my perspective. My wife and I have gone to the cottage for a week off. There is a lake close by where we can canoe. We were looking forward to cruising around the lake and it was a beautiful day for it. However, Coco is afraid to go beyond her comfort zone. She is normally a strong dog -- the adventurous type -- but something about the canoe just wasn't clicking. The three of us had been out a bit earlier, but we had to turn back because Coco kept trying to jump out of the canoe, threatening in the process to tip our little boat over.
We figured we'd leave her behind, tied to a tree, but that didn't work either. Her furious yelps of distress were echoing across the lake. It sounded like someone was beating the dog. It was embarrassing. We returned to pick her up, but now, instead of jumping into the canoe, she was having the doggie equivalent of a hissy fit.
As usual for a cottage trip I brought along a number of books to read. One of them, "The Pragmatic Programmer" by Andrew Hunt and David Thomas, has been on my must-read list for years. I've read bits and pieces, but I've never sat down to read it cover to cover.
It is a great book -- one that I wish I had written -- full of understanding and good advice, but sometimes I find the programmer culture leaking through too much. One such section that struck me as particularly odd was:
"Finally, there is the straightjack effect. A design that leaves the coder no room for interpretation robs the programming effort of any skill and art. Some would say this is for the best, but they're wrong. Often, it is only during coding that certain options become apparent."
Mainstream programming culture is all about freedom. And that means giving every programmer the right to make major decisions in the project. Realistically that can't be a good thing; too much freedom is always chaos. It is no wonder, then, that so many projects implode by the hands of their own coders.
There are a multitude of ways of doing anything, particularly with a computer, but consistency is critical to managing complexity. Leaving too many degrees of freedom open until the last moment, for some coder to respond to 'instinctively', is a bad idea. You cannot build a big project if you cannot control how it is built.
The result of the position advocated in the book -- more freedom for the programmers -- is a common problem often found in software. Dig a bit deeper into many GUI interfaces and suddenly the paradigm changes. The interface feels different and obeys different conventions.
Freedom to design at the last moment means a lack of overall consistency, whether it be a library interface or a full graphical one. In a small program the inconsistencies might not be noticeable, but they can be quite severe and debilitating in a large one.
Ironically, programmers don't really need room for interpretation.
The Pragmatic Programmer's view of development is closely aligned with the lightweight methodology and Agile programming movement. A foundation of the development strategy -- alluded to in the book as "Tracer Bullets" -- is to build in a highly iterative approach, in small sets of development. Using that and refactoring, if a programmer discovers a better option for implementation, they could do the original design first and then enhance it in the next iteration, thus keeping the whole project on a determinable schedule.
If the design role and programming role are separated, then a post implementation discussion would set the stage for implementing better technology in the next phase. So long as there weren't professional jealousies involved. And with truly iterative development, the phases are short enough so there is little time to wait before the work actually gets into operation.
Developers would do better to stick to the original design, but use their understanding of its flaws to guide enhancements. In this way, brilliant inspirations of programming don't become islands of isolated code. The system -- the whole system -- should obtain a reasonable measure of consistency.
Another book I was reading at the cottage was called "Under the Banner of Heaven" by Jon Krakauer, the author of "Into Thin Air". This book was about Mormon fundamentalists and how they related back to the Mormon faith. Another good book, although it can be a bit preachy at times.
Strangely enough, it got me thinking about mainstream Computer Science, and what might be considered to be on the outskirts of the mainstream.
The mainstream for software seems split these days between those that are advocating a lighter Agile approach to development and those that are following some variation on the traditional heavyweight stance. While the lightweight side is more visible on the web, the heavyweight approach appears to be more popular with the actual software-producing organizations. Both sides tend towards the same cultural bias of allowing individual programmers significant leeway in what they build, although the heavyweight side tries not to admit it.
Whenever you drift away from the masses, you run the danger of being labeled a fundamentalist. The term seems to mean a desire to go back to the fundamentals, before the problems began, but it also implies a fanatical stance. For religion there will always be someone wanting to return to the golden years. For software, I think we are still waiting for the golden years to start, which makes being a fundamentalist a bit more challenging. You can't return to something that doesn't exist yet. Punch cards or assembler were hardly golden, and debugging dangling pointers ate huge amounts of time for C programmers.
Still, the problems with software development, when they are not ignored or dismissed, may have changed a bit but have not really improved. There is just more code out there because the industry has been around a lot longer. The use of brute force is still the most popular way of belting out computer instructions. It is still about endlessly stuffing straw into one side of a bag, while it is leaking out the other side. The technologies look better, but are less reliable.
I think it could be easier if we just made it easier, and it would be more successful if we stopped shooting ourselves in the foot all of the time. Many -- but not all -- problems with software development are self-inflicted because of our determination to stay with our specific development culture. The key in the quote was "... robs the effort ... of any skill and art".
The mainstream, it seems, wants what Paul Graham is advocating in "Hackers and Painters" to be true. That programming is an intensely creative discipline, much like painting. That we need freedom of expression to truly rise to great heights of development. That unlike construction, inconsistencies don't matter particularly if they are inspired. We are not bound by the physical world, so why do we need to be consistent?
A funny sort of approach, given that programmers are no less annoyed by ugly inconsistencies than their users are. Ranting about frustrations with a big company's messy interface is common practice. It is only when the inconsistencies are our own that we think they are acceptable.
I suspect that this will change. I think in time, as Computer Science matures, it will look for more rational and concrete approaches to programming. It will ask for more assurances that the right code is being developed, and it will move away from all of the snake-oil salesmanship about getting the documentation correct, or the wishful thinking that maybe we can skip a design altogether and just code merrily along all day.
The mainstream will bend away from personal freedoms towards being more successful.
Programmers will eventually move out of their comfort zones and accept a somewhat lesser degree of freedom during implementation. They have to, if they want to build bigger and more complex systems. But in exchange, they will get enhanced job satisfaction and less stress from impending failures.
We, as craftsmen, want to build really sophisticated systems, but our own culture and methodologies have become the gravity that binds our hands. Until you can accept a measure less control, you cannot effectively combine your skills with others. That defines the limits of your abilities. That defines the limits of the project. That defines the limits of the industry. Cross beyond those limits and the obvious happens.
Eventually Coco succumbed. She stood up, then collapsed with an exaggerated moan. Crying and pacing, and almost pleading for us not to go canoeing. But she realized that we 'would' go without her.
First one paw, then gingerly another; finally the whole dog was seated in the center of the canoe.
We shoved off from the dock, and made our way for a nice leisurely trip around the lake. It was a warm sunny day, with a light breeze. That fresh scent of lake water and evergreen trees was permeating the air. What could be better for a canoe ride?
After several minutes, Coco calmed down, and after a few more I think she actually started to enjoy the ride. At least she wasn't left behind, terrified to break out of her comfort zone. For all her fussing, fears and hysteria, it turned out that canoeing about the lake just wasn't such a big deal after all. If only I'd known how to explain this to her in the beginning.
Friday, September 7, 2007
Patterns of Misdirection
A while ago, I was reading a discussion group that I infrequent and I saw a post about design patterns. This particular post -- I remember -- was asking questions about the Visitor pattern. It seems that it is 'unpopular' to use this pattern, the author noted.
At the time, I didn't pay much attention to it other than to realize that I wanted to stay far away from that discussion, but I didn't know why.
For those few who are unfamiliar with them, design patterns come from a great reference book by the same name that was released in the middle of the 90s. Design Patterns has become so popular that it has its own set of Wikipedia entries. It has become a cult thing now.
The book starts with a programming example and then lists out a large number of 'design patterns'. It explains them in detail, describing how each one can be implemented and shows some sample code. These are common 'patterns' that the authors felt occurred frequently while programming in Object Oriented languages.
When I first read that book, I found it amazing.
I had always reused a collection of what I liked to call 'mechanisms' -- frequently used programming idioms -- that I had learned over the years. That was not uncommon, as most programmers I worked with at that time, had similar approaches. We all reused the best tried and true solutions for common coding problems.
Design Patterns took that a step further, putting some structure onto this common practice and making it possible for programmers to share their various knowledge with each other.
More importantly, at the time 'real world' objects were becoming increasingly popular. That was a backwards approach to Object Orientation, one that integrated brute-force-style programming right into the object-oriented model. The idea was to build behemoth objects that behaved as closely as possible to the real world. Instead of thinking about it, you just pound in as much knowledge of real-life circumstances as possible directly into the code. These mega-objects could then be used anywhere. In theory.
Design Patterns was a shift back towards abstraction.
It sought small, abstract representations for generically implementing reusable solutions, thus reducing the problems to something manageable. It was a good thing.
The ironic part of programming culture is that while programmers are always styling themselves as artisans, which is primarily subjective, they tend towards thinking in black and white, and thus, rather objectively. I saw an example of this when I was younger with one of my fellow programmers telling me that there was only 'one' true object oriented design for any programming problem. It is funny for people to believe it is a black art on one hand, but that there is only 'one' right way to do it on the other. But that contradiction has always been part of the culture.
Years ago, I knew something had gone terribly wrong with Design Patterns when in an interview, I was asked: which design patterns would you use to build an email server? That question blew my mind. I'd been away from Object Oriented programming for a while, but I had no idea that things had become so distorted.
By now, some of you are probably wondering what I am babbling about. You've been taught that Design Patterns are the primitives out of which you assemble your system. A few singletons, a couple of factories, a facade, a visitor and presto, blamo, you have a working system.
Well, it is exactly that approach that is the problem. First, the original idea behind design patterns was that they were places to start, not building blocks to assemble. While you are building your system you could refer to the patterns for some design, but the idea behind a pattern is that you morph it into what you need. It is not a matter of doing it right; it isn't a 'complete' thing; it is just a 'pattern' to get you started.
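As a small illustration of what 'morphing' means, here is a hypothetical Java sketch; the names are made up. The textbook Observer pattern comes with a Subject, an Observer interface, attach and detach methods, and a notification loop. If your system only ever needs a single listener, the pattern can be collapsed down to one callback and still do its job:

```java
// A trimmed-down Observer: one listener, one callback, no registration lists.
interface ChangeListener {
    void changed(String newValue);
}

class Setting {
    private String value;
    private ChangeListener listener;   // one listener is all this system needs

    void setListener(ChangeListener listener) {
        this.listener = listener;
    }

    void set(String newValue) {
        this.value = newValue;
        if (listener != null) {
            listener.changed(newValue);  // the 'notify' step, collapsed
        }
    }

    String get() {
        return value;
    }
}
```

The pattern was the starting point; the needs of the system decided its final shape.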
So, you don't 'design an email server' with patterns. That would be madness.
But there is more to it than that. Most of the Design Patterns are oriented around 'how' objects interact in the system: they are verbs; not nouns. Designing an easily understandable system comes from dealing with the underlying problem in the most simple and elegant manner. With respect to that constraint, software is just a tool to manipulate data. The code we write manipulates data. The data we manipulate is stored in objects.
We do apply 'functionality', or verbs to the data, but it is all about the nouns. That is why it is called "object" oriented, not "function" oriented programming.
When we decompose a system into 'how' it interacts with itself, we increase the complexity by huge orders of magnitude. This sideways view causes us to have to create a lot of unnecessary plumbing. If we decompose it by 'what' it is working on, we can simplify it. The only thing software does is grab data and manipulate it. If we structure our code to match what it actually does, the code is simple. And as an added bonus, we can abstract and generalize the data, allowing us to apply the same code to a broader context.
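Here is a hypothetical Java sketch of that noun-centric decomposition; the Account and Transaction names are invented purely for illustration. The data is the organizing principle, and the manipulations are just methods hanging off of it:

```java
import java.util.ArrayList;
import java.util.List;

// The nouns: an Account holds its Transaction data.
class Transaction {
    final double amount;
    Transaction(double amount) { this.amount = amount; }
}

class Account {
    private final List<Transaction> transactions = new ArrayList<Transaction>();

    void record(Transaction t) {
        transactions.add(t);
    }

    double balance() {
        double total = 0.0;
        for (Transaction t : transactions) {
            total += t.amount;
        }
        return total;
    }
}
```

If the balance on the screen is wrong, you go straight to Account and check its data and its one calculation. A verb-centric decomposition -- a TransactionProcessor here, a BalanceCalculationStrategy there, a visitor to stitch them together -- spreads the same work across several interacting classes, and the bug could be hiding in any of the seams between them.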
If you read a lot of code, you will come across examples of Object Oriented programming that are written to strictly adhere to the existing patterns.
They are always more complex than necessary. And often, there is a secondary layer of complexity built on top, caused by the authors stuffing in strange pieces between the patterns. This happens because the patterns themselves don't form a complete set of primitives. Programmers 'invert' the structure, just to piece together incompatible patterns, causing quite the mess.
So, sticking exactly to design patterns, and worse, sticking to only design patterns is really bad practice. While that statement might 'irk' a few people, its truth lies in the basis of what we are trying to do.
Programming is about telling the computer to perform a set of instructions. Usually they are a complicated set of instructions.
Programming takes so long and is so slow that we need to minimize the amount of work in order to get it done in a reasonable amount of time.
Thinking of instructions for a computer to perform is actually not very hard. The problems come once they get entered into the machine. Life, and humans, being what they are, mistakes happen often. Listing out instructions for a computer to follow is not difficult, but finding where they deviate from our original intent can be nearly impossible. Or at the very least phenomenally time consuming.
So, a key reason we decompose the steps into smaller pieces is to make it easier to fix problems when things go wrong. It also makes it easier to enter into the computer and can help in making it reusable, but those are secondary attributes.
The point behind Object Oriented programming is not just some arbitrary way of factoring the code into smaller bits. The idea is that those things in the program, visual or otherwise get intuitively mapped back to sections of code. There should be a direct relationship between what the program does and where the code is located.
So, for instance, if there is a problem with one of your programs that displays the number of visitors to your web site, what you need to do to find the problem is to go back to the web-site visitor's counter object and make sure it is working as expected. From there you can work your way back up to the display of the data.
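Sketched in Java, with a made-up VisitorCounter class standing in for that counter object, the mapping looks something like this:

```java
// A hypothetical VisitorCounter: the number the user sees maps directly
// onto one small piece of code.
class VisitorCounter {
    private long visits = 0;

    void recordVisit() {
        visits++;   // if the count is wrong, this is the first place to look
    }

    long total() {
        return visits;
    }
}
```

Debugging starts here: confirm the counter's data is right, then work your way back up to the code that displays it.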
You should be able to trace your problems to the code directly, and "easily".
A good factoring of the code makes it possible. Poor factoring makes that hard. You can't connect problems with the 'data' as you see it being manipulated in the system, if the code is structured around 'how' it interacts with itself. Relating one decomposition back to the other is problematic.
Ultimately the problem with a counter may be how it is calculated during a traversal as it 'visits' various places to be summed up, but you would look into that type of algorithmic problem only after you were assured that there wasn't something wrong with the data itself. Why go through a long, elaborate debugging process if the problem was a missing increment operator?
Finding a bug is time consuming, so we need to reduce that time to its minimum.
The key to a good factoring is to make it easier to debug the code. Why else would you go through all of that extra work? It should save you time, not waste it. It should relate the code back to the way it runs.
The most important techniques for programming make it easier to fix your code. It is all about readability, and about being clean and consistent. It is about making sure you are not trying to be clever. And it is about finding ways to express the code so that other programmers will understand too.
When you consider this, the idea of a pattern being popular or unpopular is just another bit of the madness. It shouldn't matter; if a pattern is close enough to what you need, then you should change it to make it fit in properly. If you are focusing on getting the data into the system, the patterns, when they do occur, are just on-the-side verb issues. They fit between the nouns.
A common way of sharing 'patterns' for solving problems is a great idea. An awkward way of decomposing problems into an inconsistent set of primitives is a bad idea.
What started out as getting people back on track turned and led them astray in a different direction. Because of this, so much of the code is littered with outrageously complex abstractions that don't cover their domain problems well.
Not that I think we should abandon the use of design patterns. They just shouldn't be used as primitives to try and frame the problems. They are things that come in handy now and again, but if the base pattern is not readable, then you should change it to be so. You always need to make the code as simple and clear as possible. That is the highest priority.
At the time I was reading the discussion group, I instinctively felt there were serious problems with the whole 'Design Patterns' thing that I was reading, but it took me quite a while to put it into words. It was wise, I think, to have stayed away from posting a reply in that discussion group. It would have taken so long to explain that someone would have accused me of being evil before I got finished.
At the time, I didn't pay much attention to it other than to realize that I wanted to stay far away from that discussion, but I didn't know why.
For those few who are unfamiliar with them, design patterns come from a great reference book by the same name that was released in the middle of the 90s. Design Patterns has become so popular that it has its own set of Wikipedia entries. It has become a cult thing now.
The book starts with a programming example and then lists out a large number of 'design patterns'. It explains them in detail, describing how each one can be implemented and shows some sample code. These are common 'patterns' that the authors felt occurred frequently while programming in Object Oriented languages.
When I first read that book, I found it amazing.
I had always reused a collection of what I liked to call 'mechanisms' -- frequently used programming idioms -- that I had learned over the years. That was not uncommon, as most programmers I worked with at that time, had similar approaches. We all reused the best tried and true solutions for common coding problems.
Design Patterns took that a step further, putting some structure onto this common practice and making it possible for programmers to share their various knowledge with each other.
More importantly, at the time 'real world' objects were becoming more increasingly popular. It was a backwards approach to Object Orient that integrated brute force style programming right into the object orient model. The idea was to build behemoth objects that behaved as closely as possible to the real world. Instead of thinking about it, you just pound in as much knowledge of real life circumstances as possible directly into the code. These mega-objects could then be used anywhere. In theory.
Design Patterns was a shift back towards abstraction.
Seeking to find small abstract representations for generically implementing reusable solutions. Thus reducing the problems into something manageable. It was a good thing.
The ironic part of programming culture is that while programmers are always styling themselves as artisans, which is primarily subjective, they tend towards thinking in black and white, and thus, rather objectively. I saw an example of this when I was younger with one of my fellow programmers telling me that there was only 'one' true object oriented design for any programming problem. It is funny for people to believe it is a black art on one hand, but that there is only 'one' right way to do it on the other. But that contradiction has always been part of the culture.
Years ago, I knew something had gone terribly wrong with Design Patterns when in an interview, I was asked: which design patterns would you use to build an email server? That question blew my mind. I'd been away from Object Oriented programming for a while, but I had no idea that things had become so distorted.
By now, some of you are probably wondering what I am babbling about. You've been taught that Design Patterns are the primitives out of which you assemble your system. A few singletons, a couple of factories, a facade, a visitor and presto, blamo, you have a working system.
Well, it is exactly that approach that is the problem. First, the original idea behind design patterns was that they were places to start, not building blocks to assemble. While you are building your system you could refer to the patterns for some design, but the idea behind a pattern is that you morph it into what you need. It is not a matter of doing it right; it isn't a 'complete' thing; it is just a 'pattern' to get you started.
So, you don't 'design an email server' with patterns. That would be madness.
But there is more to it than that. Most of the Design Patterns are oriented around 'how' objects interact in the system: they are verbs, not nouns. Designing an easily understandable system comes from dealing with the underlying problem in the simplest and most elegant manner. With respect to that constraint, software is just a tool to manipulate data. The code we write manipulates data. The data we manipulate is stored in objects.
We do apply 'functionality', or verbs, to the data, but it is all about the nouns. That is why it is called "object" oriented, not "function" oriented programming.
When we decompose a system into 'how' it interacts with itself, we increase the complexity by huge orders of magnitude. This sideways view forces us to create a lot of unnecessary plumbing. If we decompose it by 'what' it is working on, we can simplify it. The only thing software does is grab data and manipulate it. If we structure our code to match what it actually does, the code is simple. And as an added bonus we can abstract and generalize the data, allowing us to apply the same code to a broader context.
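As a rough illustration, here is a minimal Java sketch of a data-centric decomposition; the class name Tally and its methods are made up for this example, not taken from any particular library or from the Design Patterns book.

```java
import java.util.HashMap;
import java.util.Map;

// A noun-first decomposition: the counts are the data, and the only code
// here grabs that data and manipulates it. Because the key type is left
// abstract, the same few lines apply to page visits, orders or errors,
// a broader context than a design wired around one specific interaction.
final class Tally<K> {
    private final Map<K, Long> counts = new HashMap<>();

    void add(K key) {
        counts.merge(key, 1L, Long::sum);   // the one manipulation we do
    }

    long of(K key) {
        return counts.getOrDefault(key, 0L);
    }
}
```

Counting visits then reads as `new Tally<String>().add("/index.html")`, with no observers, visitors or mediators standing between the data and the code that changes it.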
If you read a lot of code, you will come across examples of Object Oriented programming that are written to strictly adhere to the existing patterns.
They are always more complex than necessary. And often, there is a secondary layer of complexity built on top, caused by the authors stuffing in strange pieces between the patterns. This happens because the patterns themselves don't form a complete set of primitives. Programmers 'invert' the structure, just to piece together incompatible patterns, causing quite the mess.
So, sticking exactly to design patterns, and worse, sticking only to design patterns, is really bad practice. While that statement might 'irk' a few people, its truth lies in the basics of what we are trying to do.
Programming is about telling the computer to perform a set of instructions. Usually they are a complicated set of instructions.
Programming is slow and labor-intensive, so we need to minimize the amount of work in order to get it done in a reasonable amount of time.
Thinking of instructions for a computer to perform is actually not very hard. The problems come once they get entered into the machine. Life, and humans, being what they are, mistakes happen often. Listing out instructions for a computer to follow is not difficult, but finding where they deviate from our original intent can be nearly impossible. Or at the very least phenomenally time consuming.
So, a key reason we decompose the steps into smaller pieces is to make it easier to fix problems when things go wrong. It also makes the code easier to enter into the computer and can help in making it reusable, but those are secondary attributes.
The point behind Object Oriented programming is not just some arbitrary way of factoring the code into smaller bits. The idea is that those things in the program, visual or otherwise, get intuitively mapped back to sections of code. There should be a direct relationship between what the program does and where the code is located.
So, for instance, if there is a problem with one of your programs that displays the number of visitors to your web site, what you need to do to find the problem is go back to the web-site visitor counter object and make sure it is working as expected. From there you can work your way back up to the display of the data.
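To make that concrete, here is a minimal Java sketch of what such a counter object might look like; the name VisitorCounter and its methods are hypothetical, invented for this example.

```java
// The number displayed on the page maps directly back to this one object,
// so when the count looks wrong, this is the first place to check. A
// missing increment here would be obvious at a glance, long before you
// start debugging the display code or any traversal logic above it.
final class VisitorCounter {
    private long count;

    void recordVisit() {
        count++;            // forget this line and the display never moves
    }

    long current() {
        return count;
    }
}
```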
You should be able to trace your problems to the code directly, and "easily".
A good factoring of the code makes this possible. Poor factoring makes it hard. You can't connect problems with the 'data' as you see it being manipulated in the system if the code is structured around 'how' it interacts with itself. Relating one decomposition back to the other is problematic.
Ultimately the problem with a counter may be how it is calculated during a traversal as it 'visits' various places to be summed up, but you would look into that type of algorithmic problem only after you were assured that there wasn't something wrong with the data itself. Why go through a long, elaborate debugging process if the problem was just a missing increment operator?
Finding a bug is time consuming, so we need to reduce that time to its minimum.
The key to a good factoring is to make it easier to debug the code. Why else would you go through all of that extra work? It should save you time, not waste it. It should relate the code back to the way it runs.
The most important techniques for programming make it easier to fix your code. It is all about readability, and about being clean and consistent. It is about making sure you are not trying to be clever. And it is about finding ways to express the code so that other programmers will understand too.
When you consider this, the idea of a pattern being popular or unpopular is just another bit of the madness. It shouldn't matter; if a pattern is close enough to what you need, then you should change it to make it fit in properly. If you are focusing on getting the data into the system, then the patterns, on the infrequent occasions where they do occur, are just on-the-side verb issues. They fit between the nouns.
A common way of sharing 'patterns' for solving problems is a great idea. An awkward way of decomposing problems into an inconsistent set of primitives is a bad idea.
What started out as getting people back on track turned around and led them astray in a different direction. Because of this, so much code is littered with outrageously complex abstractions that don't cover their domain problems well.
Not that I think we should abandon the use of design patterns. They just shouldn't be used as primitives to try and frame the problems. They are things that come in handy now and again, but if the base pattern is not readable, then you should change it to be so. You always need to make the code as simple and clear as possible. That is the highest priority.
At the time I was reading the discussion group, I instinctively felt there were serious problems with the whole 'Design Patterns' thing, but it took me quite a while to put it into words. It was wise, I think, to have stayed away from posting a reply in that discussion group. It would have taken so long to explain that someone would have accused me of being evil before I got finished.
Monday, September 3, 2007
Underlying Issues
Imagine -- if you have a moment -- that you've been hired to reveal the source of some critical problems. A string of dangerous accidents has occurred at a new housing development project, leading to some serious injuries. They want you to get to the bottom of it.
To analyze this, you head out immediately to the housing site.
There are three phases to the development, one where people are already living, one that is in progress, and another that is just starting up. The accidents have all happened in the first phase, and have ranged in severity.
To get a feel for the issues, you visit the second, 'in process' phase and while there you see several disturbing problems.
Blueprints exist for the individual house designs, but you find they are strangely incomplete. Sections of them are expressed in great detail, in fact too much detail, crammed together so densely you can barely read them. Contrasting that, some sections of the design are missing altogether. The lines in the design just fade away, leaving large gaping holes.
You look to see that there are people everywhere working extremely hard. They have tools, which at first seem sufficient, but on further investigation you realize two big issues.
Firstly, some of the tools are in poor repair or rusted; some have never been used, while others simply weren't taken care of.
Secondly, the workers are often using the wrong tools for the job: they are hammering away at screws, or using the wheelbarrow as a lunch table while carrying everything -- heavy or not -- by hand.
But it is the workers themselves that provoke the most interest.
Because of the problems with the plans and their own issues with the tools, they feel the need to inject a lot of 'independence' into the building process. If the plan calls for a beam in a specific location, they may choose to interpret it a bit loosely. Plus or minus a few feet really isn't a problem. Wallboards? Close enough is good enough. They tend towards the more creative placement of the various beams and structures. Sometimes choosing to nail the panels, sometimes choosing to screw them in. They exhibit a great deal of latitude in their workmanship.
All sites get messy, but the workers have developed a habit of ignoring the mess. For things like trash, they simply dump it directly into the walls as they build. Nothing is ever swept up, nor is it removed or fixed. The building sites get progressively messier as the work moves on, becoming more and more of a hazard.
Overseeing the project and ensuring quality control are a number of on-site inspectors, but they tend to either stand around and talk to each other or test the worthiness of the walls -- for example -- by giving them a good solid kick. Their methods of detecting problems are erratic at best. Mostly they don't get along with the other workers and feel neglected and under-valued.
As you put the various pieces together you note that there is a plan and a design for the buildings. There are enough people working. The schedule is being followed. There is a quality control inspection process. And the houses are actually getting built.
Taking just those elements into consideration there doesn't seem to be anything wrong. But, the problems with building the houses are the key. Several existing houses have fallen down, many are in serious trouble, and even the best built one is plagued with an endless supply of little problems. Not that it comes as a surprise. If the craftsmen take liberties with the design, any 'engineering' that went into it is effectively lost. Even if all of the work is done, if it is not done correctly it is unlikely to be effective.
In the physical world, if we visited such a dysfunctional project it would be easy to spot where the problems are occurring. Generally, this means that they are fixed pretty rapidly.
In a virtual world, however, the same problems are largely ignored. Many software development shops resemble the above description, but few people would attempt to change or fix it. Instead they blindly continue down the same road, surprised again and again that their systems have serious problems. That their houses are sometimes falling over. While pointing the finger at anything but the actual causes.
A good housing development project runs a fairly strong process to ensure that the development is going (mostly) according to plan. Software development projects are missing an analogous process that takes into account their own unique culture and development issues.
Once you know what you are looking for, you see the same basic problems repeated in many software development shops. In some shops with serious problems, the methodology itself enforces the bad behavior. Fixing these problems needn't be too complicated. We just need to create a methodology that really makes sure the issues are fixed and the problems aren't avoided. A working blueprint, the correct use of tools, consistency in the craftsmen's work, cleaning up the messes and effective quality control would go a long way to ensuring that the end results live up to our expectations.
If the process used for development is flawed it doesn't matter if you have the time, people and tools; the results are far from guaranteed. Many software development projects would be horrifyingly bad if they were physically visible. Most projects have all of the right elements, they just aren't using them correctly. It doesn't need to be this way.