"The jury is in. The controversy is over. The debate has ended, and the conclusion is: TDD works. Sorry."
-- Robert C. Martin (Uncle Bob)
A quote from the upcoming work "97 Things Every Programmer Should Know":
http://commons.oreilly.com/wiki/index.php/Professionalism_and_Test-Driven_Development
Honestly, I think this quote from a very well-known and outspoken consultant in the software industry says far more about the current state of our industry than its words alone.
Test-Driven Development (TDD), for those unfamiliar with it, is more or less a "game" to be played while coding, with a set of three "laws" orienting a programmer's efforts. The programmer essentially builds up the program by first writing simple tests and then making the code pass them. In this way, step by step, the tests and the code gradually get more sophisticated.
It's an interesting process offshoot of the vocal, but not universally popular, family of software methodologies collectively called Agile.
The overall essence of Agile is to change the focus of coding to be more customer-oriented, and in the process, hopefully, increase the speed of development and the quality of output. It's a mixed bag, where some of the Agile ideas are really good, and in general, most of the newer approaches are definitely a step better than the old report-oriented waterfall methods of the past.
However, even from its earliest days, Agile has been bent by its cheerleaders towards ideas that resonate well with young and inexperienced programmers but are not necessarily practical. Ideas meant to be popular. In short, it focuses far harder on trying to make programming fun and exciting, and far less on trying to actually solve any of the real underlying causes of our inability to consistently deliver workable products. The tendency is to address morale issues, rather than development ones. It's far more driven by what actually sells than by what would fix things.
Consequently, there are always great streams of feedback coming from practitioners about how things did not work out as planned. The ideas sound good, but the projects are still failing.
And, as is normal for any consulting industry trying to protect itself with spin, there are always experts coming forward to claim that all of this backlash is false, misleading, and not applicable. Somebody always seems to know of a successful project, somewhere, run by someone else. Any "discussions" are often long on personal attacks and short on tangible details. The term Agile more aptly describes its defenders than the actual ideas.
It's not that I don't want my job to be fun, far from it. It's just that I don't think fun comes from playing silly little games while working. I don't think fun comes from everybody jumping in with their two cents. I don't think work should be 100% fun all of the time. All laughs and chuckles.
I think fun comes from getting together with a group of skilled people and doing something well. I think it comes from being successful, time and time again, in building things that last and will get used for a long time. I think that it comes from not having too much stress, and not embarking on death marches or long drawn out programming disasters. In short, I think it comes from actually producing things, on time and on budget. I think success is fun.
A successful project is always more satisfying and way less stressful than a failure. But, and this is a huge but, to get a successful project, at times the individual parts of it are not necessarily intensely creative, they are not necessarily interesting, and they are not, in themselves, necessarily fun.
Work, as it always has been, is work, not play. It calls for getting our noses down to the keyboard and powering through the sometimes painful, monotonous bits of coding that so often surround the more entertaining stuff. Coding can and always will be tiring at times; that's just the way it is. Good programmers know this, and the rest should just learn to accept it.
If someone is coming around and selling ways to make it "fun" and "easier", then we ought to be very wary. Distractions from work may improve morale in the short term, but they are generally tickets to losing. And losing just builds long-term stress.
Getting back to TDD, I'm sure for some younger programmers there is some value in using this approach to help them better learn to structure their code. When you're first learning how to program, there is too much emphasis on the lines of code themselves and not enough on the other attributes. Looking at the code simultaneously from both the testing and coding perspectives is bound to strengthen one's overall understanding. Nice. Probably very applicable as a technique for teaching programming, and for helping juniors along in their early years.
But it is expensive. Very expensive. Obviously, you're doing more work. And obviously, you're creating far more code, far more dependencies, and far more downstream work.
These things, some people try to point out, are offset by some magical property of this new code being just so much better than what might have been written beforehand. To steal a phrase from Saturday Night Live: "Really?!".
Because honestly, if you were bound to write sloppy, bug-infested code in the first place, having a better overall structure doesn't change the fact that the things you "missed" the first time around will still be missed the second time around. That is, just because you have a "test" doesn't mean that the test completely covers all of the inputs and outputs of the code (infinity, after all, is hard to achieve).
And if you did write a suite of tests so large and extensive that they do in fact cover most of the expected possible inputs and outputs, while the code may be perfect, you've invested a huge (huge) amount of time and effort. Too much time.
Consider, for example, testing some trivial string concatenation code. We don't need to test every permutation to know that the code works. We as humans are quite capable of essentially doing a proof by induction on the algorithm, so all we need to test are nulls, empty strings and N character strings for each input.
Still, if we've been coding long enough, the tests for null are obvious (and we'd quickly notice if they didn't work), so we merely need to ascertain that the mainline concatenation works, and then we can move on. It should be a non-event for a senior programmer.
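That handful of boundary checks can be sketched in a few lines. This is only an illustration: the `concat` routine here is a hypothetical stand-in for whatever trivial function is under test, not anything from a real codebase.

```python
def concat(a, b):
    """Trivial routine under test; None is treated as an empty string."""
    return (a or "") + (b or "")

# The few boundary tests that a human "proof by induction" actually needs:
assert concat(None, None) == ""          # both null
assert concat(None, "x") == "x"          # null on one side
assert concat("", "") == ""              # empty strings
assert concat("abc", "def") == "abcdef"  # the mainline N-character case
```

Four quick checks and we're done; enumerating further permutations of the inputs adds effort without adding knowledge.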
But, if you were applying TDD, you would have to test for each and every permutation of the input strings, since you would have to ratchet up the testing to match the functionality of the code. Start small and grow. So I believe you'd start with the null inputs first and enhance the code to really work with all combinations of strings.
Well, maybe. Or maybe, you'd just add one test to ascertain that the strings are actually merged. Oops.
Of course, you can see the problem, since either way is no good. You've either spent way too long writing trivial tests, or you've not really written the tests that cover the routine (and now foolishly believe that you have). Both are recipes for failure. You've wasted time or you're wrong (or both).
More importantly, the functionality of merging two strings is trivial. An experienced programmer shouldn't even give it a second look, let alone its own test. If the problem exists with something this simple, it will only get magnified as the complexity grows. It only gets worse. Either the tests aren't complete, or the effort sunk into the work is massive (and redundant).
Worse still is that the suite of tests has ended up multiplying the overall code base by huge proportions. I don't care if the code is core or scaffolding (like tests); any and all code in a system must be kept up to date as the system grows. Complexity in code certainly grows far faster than linearly with its size. If you double the code, you have more than double the effort to update and fix it. If you're short on time now, adding more work to the mix isn't going to make that better.
Also of note: in coding, there are always parts of the program with real complexity issues, so it is not a great idea to get hung up on the parts that are nearly trivial. The bulk of most systems should be simple; that's still an artifact of our technologies. It's all that code that doesn't need much testing, generally works, and is really boring to work on. The biggest problems with this type of code are the growing number of small inconsistencies spread across redundant code, not algorithmic issues. Unit testing doesn't detect inconsistent code.
In programming, the clock is always ticking, and there is never enough time to get the work finished. Programming is still too slow and tedious to be priced correctly, so nobody wants to pay fair value, and everybody, all the time, is skipping steps.
But again, that in itself is not a reason to push in extra low-level testing. It is actually a reason to be very very stingy with one's time. To "work smarter, not harder". To protect our time carefully, and to always spend it where it has the most impact.
Inevitably the code will go out before it is ready, so do you want to protect it from little failures or do you want to protect it from big ones?
And it is big failures that are clearly the most significant issues around. It's not a widget that slightly misbehaves that really matters; it's usually when the system grinds to a complete and utter halt. That type of event causes big problems which are really noticeable. Little bugs are annoying, but big ones grind the whole show to a screeching halt. A bad screeching halt. An embarrassing screeching halt.
The only real way to ensure that all components of a system are humming along together in perfect unison is by ensuring that all components of the system are humming along in perfect unison. You have to test it.
And you have to test it in the way that it is being used, in the environment that it is being used in, and with the most common things that it is being used by. The only real tests that prove this are the top-down system tests on the final system (in an integrated environment). Unless you really know what the system is doing, you are just guessing at how it will behave. The more you guess, the more likely you will be wrong.
Not that having something like a suite of regression tests wouldn't be hugely useful and automate a significant amount of the overall effort, but it is just that these tests are best and most effective if they are done at the highest level in the code, not the lowest one.
You don't want a lot of automated unit tests; you want a lot of automated system tests. In the end, that's where the real problems lie, and that's what really works in practice. But it is a lot of work, and it's not the type of effort that is really practical for most typical systems. There are often better ways to spend the effort.
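A top-down system test of this sort might be sketched as follows. To keep the sketch self-contained, `SYSTEM_CMD` here is a hypothetical toy program (it just upper-cases its input); in practice it would be the real deployed executable, run in an integrated environment.

```python
import subprocess
import sys

# Stand-in "system": a tiny program that upper-cases its stdin.
# In real use, this would be the actual deployed binary or entry point.
SYSTEM_CMD = [sys.executable, "-c",
              "import sys; print(sys.stdin.read().upper())"]

def run_system(stdin_text):
    """Drive the whole program end-to-end, the way a user or batch job would."""
    result = subprocess.run(SYSTEM_CMD, input=stdin_text,
                            capture_output=True, text=True, timeout=30)
    return result.returncode, result.stdout.strip()

# The test exercises the integrated system, not an isolated unit:
code, out = run_system("hello")
assert code == 0
assert out == "HELLO"
```

The point of the shape, rather than the toy content, is that the test knows nothing about the internals; it only checks observable behavior from the outside, which is exactly what a regression suite at the highest level does.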
Coming right back to my initial starting place, my real concern about the initial quote isn't TDD itself. History may ultimately prove it to be more useful than I believe, although I doubt it. It's an interesting exercise that has its time and place but is not applicable at a global level. It is not a cure for what plagues most development projects.
My real concern is the "Mission Accomplished" tone of the statement, and how we're seeing that over and over again in our industry.
We've spent decades now claiming success on each and every new approach or technology that has come along, without ever having really verified the results. Without having any real proof. Without having many actual successes.
Meanwhile, we keep ignoring the glaring, and horrible fact that the really big companies keep all their really big and important data on forty-year-old "mainframe" technologies, precisely because all of this new stuff doesn't work. We've had a long string of broken technologies.
Actually, they sort of work, but never well enough. For most organizations, it has been one expensive dud after another.
So, now instead of taking a long, hard look at the real sources of our problems: flaky technology, poor analysis, and bad design, some of our industry has moved on and started declaring victories with this new version of game playing. Last week it was extreme, this week it's testing, and next week it will be lean. We'd certainly rather claim victory than actually be victorious.
Personally, I think it's good to get new ideas into the mix. Obviously, we haven't found what works, so we shouldn't just plop down and start building trenches. We're probably a long way from real answers, but that doesn't mean we should stop trying. Nor should we declare it "over", until it really is over. New ideas are good, but we shouldn't overstate them.
As long as our industry distracts itself with low-hanging fruit, purely for the purpose of convincing huge organizations to spend more money on more probable project failures, we are not going to put in the time or effort to fix any real problems. We are not going to make any significant progress.
Sure, we've had some successes; after all, we've managed to take a hugely promising technology like the Internet and allowed it to decay into something with the same intellectual capabilities as bad 70s TV. And, instead of mumbling to just ourselves in public, we can now mumble to our friends and family through our fancy phones, or just play video games on them. Yay us. We've clearly done well. We've learned to utilize our wonderful machines for churning out masses of noise and misinformation.
I guess in the end I shouldn't be too surprised or frustrated, we've clearly entered a period in history where truth and values are meaningless; all that matters is making profits and spinning reality. The software industry follows the larger overall trend towards style and away from substance. It's not whether it is right or wrong, but how it's played that is important. Truth is just another casualty of our era. If you can't win, at least you can declare that you did, that's close enough.
Monday, June 29, 2009
Thursday, June 11, 2009
Programming is Simple!
Writing software is a simple process. You start by deciding which new data you want the system to support. All systems revolve around their underlying data.
From there, you decide which functionality is necessary. Most of it is fairly trivial, just the usual adding, deleting and modifications. That accounts for at least 80% of most systems.
The other 20% can be complex and require some research in textbooks or the web. Chances are someone has at least tackled the basics of the algorithm, at some time in the past. Certainly, the category is probably covered.
Once the data and functionality are understood it is time to start actualizing the code.
This works best by starting at the persistence layer and working upwards. Most systems use some form of relational database, but all persistent data stores have some type of schema that needs to be extended for the new data types.
From the database, with its universal model, the data needs to work its way into a running application-specific model. This often involves a slight skew and the addition of some underlying context. Nothing horrible, unless it is ignored.
With the data in the application, the next big piece is to wire up the functionality to any required interfaces, be it GUI, command line or even some type of batch mechanism.
As the data moves forward to the user, it often needs to be dressed with fancy presentations, stuff that gets easily stripped away later. The original data type should drive these visualizations.
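The progression above, from new data type, to schema, to application model, to presentation, can be sketched in miniature. Everything here is illustrative: the `Customer` type, the table, and the rendering are hypothetical, standing in for whatever data the system actually needs.

```python
import sqlite3
from dataclasses import dataclass

# 1. The new data the system will support.
@dataclass
class Customer:
    id: int
    name: str

# 2. Persistence layer: the schema is extended for the new type.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO customer VALUES (1, 'Ada')")

# 3. The universal database model becomes an application-specific one.
def load(cid):
    row = db.execute("SELECT id, name FROM customer WHERE id = ?",
                     (cid,)).fetchone()
    return Customer(*row)

# 4. Wire the data to an interface; presentation is driven by the type.
def render(c):
    return f"Customer #{c.id}: {c.name}"

print(render(load(1)))  # -> Customer #1: Ada
```

Each layer only reshapes the same underlying data, which is why, in principle, the work proceeds so mechanically from the persistence layer upward.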
Thus, programming is simple.
Well, almost. There are a few things that keep this neat progression from happening on a quick and repeatable basis.
The first one is that our modern technology has serious problems. Really serious problems.
I can't imagine the very early years in computing, when "programmers" had to manually flip switches on the box just to boot their machines. It must have been so painful and boring. But all I can think is that someday in the future, people will look back at these days and think "I can't imagine how these guys coped with their millions of repeating lines of code, it must have been so painful and boring".
Toss in the supplementary code, scripts, documentation, designs, testing, and tutorials, and the same underlying simple bits of information get splattered redundantly across a humongous range of different formats, files, and locations. We don't just repeat ourselves, we go nuts in doing so.
Given what we actually need to accomplish, our technologies and their inherent weaknesses drive us to distraction.
In many ways, their most serious problem has become their insanely overcomplicated architectures. We jump through so many hoops and tricks, just to solve the same simple technical problems over and over again, that the real domain problems get lost in the carnage.
The second big thing we face is people. And it's a problem that hits programmers from two sides simultaneously.
From the front, the users know what they want, but we orient our systems vertically. Thus we are only interested in a thin slice of their problems. This impedance mismatch makes it hard to get them to communicate exactly, in a precise way, how we should quantify that little slice of the world and automate it. They can't say it, and often we're not interested in guessing or digging deeply. Many programmers think their boundaries stop at the edges of their little niche and choose to be very territorial.
From the back, most systems are more work than a single person could complete in a reasonable time. Thus, the work needs to be partitioned amongst a larger group. The culture of coding is about freedom and independence, so like any good herd of cats, shortly after the design meeting, everybody goes off in their own direction and does their own thing.
Management is genuinely surprised when it all doesn't come together in the end. All the individuals believe that their particular direction and approach was the correct one. It generally goes downhill from there.
Assuming that the lame technology and organizational problems aren't particularly fatal, the single greatest threat to simplicity faced by all programmers is themselves.
Most of us fell into our employment positions because we were fascinated by intricate things like the inner workings of a clock. That's good because we like what we do, but bad because it means that we have a real tendency to push an overwhelming amount of complexity onto even trivial problems.
We like complexity. We think it's neat. So it's no wonder that we're drawn toward taking some wantonly over-complex approach to even the simplest of development problems. It's inherent in our nature.
Competing with that is the fact that programming is a huge amount of work. Too much work. And the stress of all of that pending effort forces many programmers to try and rush through the coding process at their fastest possible speed. High-speed sustainable programming is great, but charging forward in spasmodic spurts leads programmers towards just crudely pounding out bad code, in the hope that it can be fixed later. And later never comes.
If it weren't for these three problems, programming would be simple. We would simply decide what data we wanted to add and how we were going to use it, and then go about implementing and testing the code.
In most programs, most of the time, once we've gotten past the initial technical architectures, the essence of programming shouldn't be any more complex than that.
But it just doesn't work that way. It should. It makes life easier, and programming more enjoyable, but in practice, we never seem to get there, although some of us figure it is possible.
So how do we get to simple?
There isn't much we can do about the technology in the short run. It is what it is. Although, we should try wherever possible to choose technologies that are inherently elegant. If the market demanded elegance, the vendors would have little choice. Right now we don't, so they don't provide it. Why do extra work if it doesn't pay?
Elegance is simple enough to distinguish. If the technology looks simple, like you might have been able to build it yourself in a few weeks, then it is probably elegant.
Choosing simple, straightforward technologies that minimize their requirements for integration will go a long way towards weeding out the many bad apples we currently have.
People, however, will always be a problem.
The best way to deal with the users is to understand that the software developers are really the experts in the equation. The users have a broad perspective across the whole landscape and need to be mined for that view, but they don't have the capacity to turn their knowledge and experiences into a working system. If they did, they wouldn't need programmers.
My earlier post on Architectural Ramblings dealt with matching system architecture to team structure. It's an open problem, and always a trade-off, but at least if you understand it, it is less likely to be disruptive. It's near impossible to achieve a strong amount of consistency in programming right now with a group, so the overall elements of the architecture should assume that, and not rely on it in order for the system to be successful.
Failure has to be built in or explicitly monitored.
As for ourselves, for most of us, we will always be the greatest enemy that we face. Our need to get excited by the work compels us to push our own envelopes. Doing the same old thing, again and again, is boring.
Inevitably, that leads to mistakes in experience or judgment. Simple choices, gone wrong and then hidden in the code. Often causing larger downstream problems. There probably isn't a system written that doesn't have some nasty kind of WTF buried at its core. At least one, if not hundreds.
Even when it's not obviously wrong, most programmers tend towards over-complicating their designs, then resorting to brute force to slam in the actual mechanics to meet their deadlines. They go extreme on one side, then extra crude on the other, creating brittle code that is trying too hard.
We can handle this in two ways. The first is by starting out with code that is ridiculously over-simplified. Complexity is easy to add but nearly impossible to remove. If the first iteration of code is an algorithm so simple that it couldn't possibly work, it is not that hard to slowly enhance it into something that meets the criteria. Over-simplify, and then work backward.
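As a sketch of that "over-simplify, then work backward" progression (the shipping-cost routine and its rules here are entirely hypothetical):

```python
# Iteration 1: so simple it "couldn't possibly work" -- a hard-coded answer.
def shipping_cost_v1(order):
    return 5.00  # flat rate, always

# Iteration 2: enhance only once the flat rate is shown to be wrong.
def shipping_cost_v2(order):
    if order["weight_kg"] > 10:
        return 12.00
    return 5.00

# Iteration 3: one more observed case, still nothing speculative.
def shipping_cost_v3(order):
    base = 12.00 if order["weight_kg"] > 10 else 5.00
    if order.get("express"):
        base *= 2
    return base

assert shipping_cost_v3({"weight_kg": 2}) == 5.00
assert shipping_cost_v3({"weight_kg": 20, "express": True}) == 24.00
```

Each step earns its complexity from an actual requirement; nothing is added in anticipation of one, so there is never anything to remove.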
The second way to handle this is by expecting that the significant bulk of our programming effort is to not be writing new code. Deletions and modifications to code are far more valuable to an existing system. If you start the day by deleting a few thousand lines of code, it is a good programming day.
Many people know about refactoring, but often they are too afraid to actually apply it to their own code. Instead, they'll put up with obvious deficiencies, because they don't want to tip over the apple cart. They don't want to introduce a whole new suite of bugs as they are working on stuff.
Truthfully, programmers have hot days, and not so hot days. The best skill one can add to their repertoire is to find the self-discipline to clean up and refactor on the off days. If you touch the file, and it's messy, you shouldn't leave it, unless you've cleaned up the problems.
For those paranoid institutions that don't want total chaos in their codebase every week as programmers maddeningly change everything, it is a simple matter of restricting the refactorings to be non-destructive. You can compress or clean up the code, so long as its (expected) algorithmic behavior doesn't change. Sure, initially some things will change a bit too often, but those will be bugs (or just insanely complex bits). Over time, the code will stabilize. The short term pain will be worth the long term gain.
Programming should be simple.
It's often not, but for reasons that are mostly fixable. There may be some scrambling initially as the direction of the project gets set, but over time, development on a codebase should always get easier. This happens because the base problems get solved and encapsulated.
As more and more of the mechanics get implemented, the scope of the functionality should grow larger, get more complete, and be more enjoyable to work on.
If you're on a development that is past its initial stage and it is still a hard slog, then the pain is likely self-inflicted: punishment for not pushing harder to make sure things are easier. After all, programming is simple.
From there, you decide which functionality is necessary. Most of it is fairly trivial, just the usual adding, deleting and modifications. That accounts for at least 80% of most systems.
The other 20% can be complex and require some research in textbooks or the web. Chances are someone has at least tackled the basics of the algorithm, at some time in the past. Certainly, the category is probably covered.
Once the data and functionality are understood it is time to start actualizing the code.
This works best by starting at the persistence layer and working upwards. Most systems use some form of relational database, but all persistent data stores have some type of schema that needs to be extended for the new data types.
From the database, with its universal model, the data needs to work its way into a running application-specific model. This often involves a slight skew and the addition of some underlying context. Nothing horrible, unless it is ignored.
With the data in the application, the next big piece is to wire up the functionality to any required interfaces, be it GUI, command line or even some type of batch mechanism.
As the data moves forward to the user, it often needs to be dressed with fancy presentations, stuff that gets easily stripped away later. The original data type should drive these visualizations.
Thus, programming is simple.
Well, almost. There are a few things that keep this neat progression from easily happening on a quick, and repeatable basis.
The first one is that our modern technology has serious problems. Really serious problems.
I couldn't imagine the very early years in computing when "programmers" had to manually trigger actual switches on the boxes in order to boot their machines as they started up. It must have been so painful and boring. But all I can think of is that someday in the future, people will look back at these days and think "I can't imagine how these guys coped with their millions of repeating lines of code, it must have been so painful and boring".
Toss in the supplementary code, scripts, documentation, designs, testing, and tutorials, and the same underlying simple bits of information get splattered redundantly across a humongous range of different formats, files, and locations. We don't just repeat ourselves, we go nuts in doing so.
Given what we actually need to accomplish, our technologies and their inherent weaknesses drive us to distraction.
In many ways, their most serious problem has become their insanely overcomplicated architectures. We jump through bigger hoops and tricks, just to solve the same simple technical problems over and over again, that the real domain problems get lost in the carnage.
The second big thing we face is people. And it's a problem that hits programmers from two sides simultaneously.
From the front, the users know what they want, but we orient our systems vertically. Thus we are only interested in a thin slice of their problems. This impedance mismatch makes it hard to get them to communicate exactly, in a precise way, how we should quantify that little slice of the world and automate it. They can't say it, and often we're not interested in guessing or digging deeply. Many programmers think their boundaries stop at the edges of their little niche and choose to be very territorial.
From the back, most systems are more work than a single person could complete in a reasonable time. Thus, the work needs to be partitioned amongst a larger group. The culture of coding is about freedom and independence, so like any good herd of cats, shortly after the design meeting, everybody goes off in their own direction and does their own thing.
Management is genuinely surprised when it all doesn't come together in the end. All the individuals believe that their particular direction and approach was the correct one. It generally goes downhill from there.
Assuming that the lame technology and organizational problems aren't particularly fatal, the single greatest threat to simplicity faced by all programmers is themselves.
Most of us fell into our employment positions because we were fascinated by intricate things like the inner workings of a clock. That's good because we like what we do, but bad because it means that we have a real tendency to push an overwhelming amount of complexity onto even trivial problems.
We like complexity. We think it's neat. So it's no wonder that we're drawn toward taking some wantonly over-complex approach to even the simplest of development problems. It's inherent in our nature.
Competing with that is that programming is a huge amount of work. Too much work. And the stress of all of that pending effort forces many programmers to try and rush through the coding process at their fastest possible speeds. High-speed sustainable programming is great, but charging forward in spasmodic spurts leads programmers towards just crudely pounding out bad code, in the hops that it can be fixed later. And later never comes.
If it wasn't for these three problems, programming would be simple. We would simply decide what data we wanted to add, how we going to use it, and then go about implementing and testing the code.
In most programs, most of the time, once we've gotten past the initial technical architectures, the essence of programming shouldn't be any more complex than that.
But it just doesn't work that way. It should. It makes life easier, and programming more enjoyable, but in practice, we never seem to get there, although some of us figure it is possible.
So how do we get to simple?
There isn't much we can do about the technology in the short run. It is what it is. Although, we should try wherever possible to choose technologies that are inherently elegant. If the market demanded elegance, the vendors would have little choice. Right now we don't, so they don't provide it. Why do extra work if it doesn't pay?
Elegance is simple enough to distinguish. If the technology looks simple, like you might have been able to build it yourself in a few weeks, then it is probably elegant.
Choosing simple, straightforward technologies that minimize their requirements for integration will go a long way towards weeding out the many bad apples we currently have.
People, however, will always be a problem.
The best way to deal with the users is to understand that the software developers are really the experts in the equation. The users have a broad perspective across the whole landscape and need to be mined for that view, but they don't have the capacity to turn their knowledge and experiences into a working system. If they did, they wouldn't need programmers.
My earlier post on Architectural Ramblings dealt with matching system architecture to team structure. It's an open problem, and always a trade-off, but at least if you understand it, it is less likely to be disruptive. It's near impossible to achieve a strong amount of consistency in programming right now with a group, so the overall elements of the architecture should assume that, and not rely on it in order for the system to be successful.
Failure has to be built in or explicitly monitored.
As for ourselves, for most of us, we will always be the greatest enemy that we face. Our need to get excited by the work compels us to push our own envelopes. Doing the same old thing, again and again, is boring.
Inevitably, that leads to mistakes in experience or judgment. Simple choices, gone wrong and then hidden in the code. Often causing larger downstream problems. There probably isn't a system written that doesn't have some nasty kind of WTF buried at its core. At least one, if not hundreds.
Even when it's not obviously wrong, most programmers tend towards over-complicating their designs, then resorting to brute force to slam in the actual mechanics to meet their deadlines. They go extreme on one side, then extra crude on the other, creating brittle code that is trying too hard.
We can handle this in two ways. The first is by starting out with code that is ridiculously over-simplified. Complexity is easy to add but nearly impossible to remove. If the first iteration of code is an algorithm so simple that it couldn't possibly work, it is not that hard to slowly enhance it into something that meets the criteria. Over-simplify, and then work backward.
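As a hypothetical sketch of that first approach -- all names here are purely illustrative, not from any real system -- a search-ranking routine might start out as something that obviously couldn't be the final answer, and only then grow a real enhancement:

```python
def rank_results_v1(results):
    # Iteration 1: ridiculously over-simplified -- no ranking at all,
    # just hand the list back. It "couldn't possibly work", but it runs.
    return list(results)


def rank_results_v2(results):
    # Iteration 2: the first real enhancement -- sort by a score field,
    # defaulting missing scores to 0 so malformed entries don't break it.
    return sorted(results, key=lambda r: r.get("score", 0), reverse=True)
```

Each later pass adds one more piece of sophistication to a version that already works, instead of starting from an over-complicated design and trying to pare it down.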
The second way to handle this is by expecting that the significant bulk of our programming effort is to not be writing new code. Deletions and modifications to code are far more valuable to an existing system. If you start the day by deleting a few thousand lines of code, it is a good programming day.
Many people know about refactoring, but often they are too afraid to actually apply it to their own code. Instead, they'll put up with obvious deficiencies, because they don't want to upset the apple cart. They don't want to introduce a whole new suite of bugs while working on other things.
Truthfully, programmers have hot days, and not so hot days. The best skill one can add to their repertoire is to find the self-discipline to clean up and refactor on the off days. If you touch the file, and it's messy, you shouldn't leave it, unless you've cleaned up the problems.
For those paranoid institutions that don't want total chaos in their codebase every week as programmers maddeningly change everything, it is a simple matter of restricting the refactorings to be non-destructive. You can compress or clean up the code, so long as its (expected) algorithmic behavior doesn't change. Sure, initially some things will change a bit too often, but those will be bugs (or just insanely complex bits). Over time, the code will stabilize. The short term pain will be worth the long term gain.
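A tiny, hypothetical example of what a non-destructive refactoring looks like -- the function names are mine, shown side by side only for comparison -- where the code gets compressed and cleaned up while its (expected) algorithmic behavior stays identical:

```python
def total_cost_before(items):
    # Original messy version: manual indexing and redundant state.
    t = 0
    for i in range(len(items)):
        if items[i]["qty"] > 0:
            t = t + items[i]["price"] * items[i]["qty"]
    return t


def total_cost_after(items):
    # Compressed, cleaner version -- identical behavior by design.
    return sum(i["price"] * i["qty"] for i in items if i["qty"] > 0)
```

Spot-checking both versions against the same inputs is a cheap way to convince a paranoid institution that nothing observable has changed.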
Programming should be simple.
It's often not, but for reasons that are mostly fixable. There may be some scrambling initially as the direction of the project gets set, but over time, development on a codebase should always get easier. This happens because the base problems get solved and encapsulated.
As more and more of the mechanics get implemented, the scope of the functionality should grow larger, get more complete, and be more enjoyable to work on.
If you're on a development effort that is past its initial stage, and it is still a hard slog, then the pain is likely self-inflicted. Punishment for not pushing harder to make sure things are easier. After all, programming is simple.
Thursday, June 4, 2009
Chinese Plumbers
I tend to be a low-budget traveler, easily choosing quantity over quality for my accommodations. So, not surprisingly -- in some Chinese city, which will remain nameless -- I found myself checking into a rather inexpensive double room, with private bathroom, in a youth hostel.
All in all, it was a nice place. The youth hostels in China are badly categorized. Unlike their European inspirations, they don't just cater to the younger set. Old folks, families, and kids are also common sights; the designation is more about historic roots than actual clientele.
And so it was, that I found myself checking out the bathroom for our latest room.
At first glance, neat and beautifully adorned with big, deep brown tiles, it seemed to belong to a much more expensive accommodation. Very modern looking, like something you'd find in a swank big-city hotel.
I reached out to use the sink, but the tap was wobbly. Not "a little wobbly". In order to turn it on, I had to hold it with both hands or risk causing some underlying damage to the pipes.
As I stood there, contemplating whether to poke my head underneath the vanity and manually tighten the screws, I noticed a sign on the wall making two big points.
The first point was a warning that in some of the rooms, the hot and cold water dials were reversed.
Sure enough, when I looked at the shower, while the hardware had a little red and blue sticker adorning each knob in the right places, the wall behind the taps had similar little red and blue stickers but reversed. Hot was cold, and cold was hot.
Checking out the shower controls made me suddenly aware of the large clear plastic square on the floor that was overlying the shower drain.
Now, it's not uncommon in inexpensive hotels to find bathrooms where the shower stall is the bathroom. They just tile everything, and when you take a shower, the water gets sprayed all over. It's a sort of innovative use of space, and hey, you don't have to clean anything if people are showering a lot.
I've seen this layout before. But the see-through plastic square was weird.
The second point on the sign near the sink helped clear it up. It said that during showers the square should be moved off the drain so the water can escape, but afterward, it should be placed back over the drain again. It doesn't take long to figure out why.
It seems they forgot to put a trap on the drainage line, so the sewer gases find their way back up the plumbing. If you forget to replace the cover, the bathroom quickly smells like sewage. If you remember to replace the cover, then it only "slowly" smells like sewage. Quite the fix.
By now, you are probably wondering what a gross series of plumbing errors in a bathroom in China has to do with programming or software development. We'll get to that soon enough.
In North America, I would mostly expect, unless someone did a hacked home-brew job, that the sink bolt would be tight, the hot and cold water valves would be in the right order, and that there would be a trap on the drainage pipe. I'd expect this because we have standards. But also because those standards are enforced.
I'd like to think that the bathroom I was standing in, would have resulted in the plumber being dragged out and beaten. Or possibly sued and drummed out of plumbing for life. OK, maybe not that serious, but I couldn't imagine someone here just accepting such a cluster fuck and not trying to fix it. Not demanding that it get fixed.
Now, I don't know if China even has standards, but if they do, they were clearly not followed. Although the bathroom was perfectly functional -- and it was a rather nice shower, all things considered -- from a North American perspective it was totally crazy that this was allowed to occur, and even crazier that all the proprietors did was post up some nicely written signs instructing their patrons to work around the issues.
Total madness, when it would not have been that hard to do it properly in the first place.
But in traveling, you quickly learn to accept these types of things. Parts of the world can be really disorganized and crude. This particular bathroom was actually one of the nicer, more modern and better laid out ones. As a traveler, I'd probably even recommend the hostel.
I'm fine with this in a hostel in China, but no doubt I'd be furious if I hired a plumber to fix my bathroom and this was the result. My expectations for my house are much higher. The hot water tap should be on the left. No excuses accepted.
Interestingly enough, if you asked someone who lived in China, it wouldn't surprise me if they just shrugged off the question, and secretly thought you were nuts. Since -- depending on how common it is -- this type of plumbing job is probably nothing out of the ordinary. It's probably standard, of normal quality. A local might even praise the extra effort of having warned someone with a sign of the tap reversal. Maybe.
If you live in those parts of the world where people frequently have to make do with "whatever", they tend to have significantly lowered expectations. Little things aren't worth crying over; they generally have much larger, more significant problems to worry about. Problems that rarely affect wealthier nations. Problems that make inconsistent plumbing seem trivial.
It's all a question of perspective.
Well, not really. It's more about stability and advancement. My world has fewer big problems, so we can afford to concentrate on fixing the smaller, yet still annoying ones. We were probably back there once, but over time we managed to move ahead. We can trust the order of our taps.
What's interesting about these specific plumbing problems is that they bring me right back to software development.
In a very real sense, I see so much "commercial", "enterprise" and "professional" software out there that smacks of being put together by Chinese plumbers. It's exactly the same; there are so many awkward bits and gross inconsistencies that it is hard to imagine people ignoring them. Clearly, the stuff works well enough, and people are buying it, but I'm always shocked to see just how poor, messy and unprofessional it really is. How far back it is.
In software, I always feel like a tourist. I've often just stopped by at a few typical Enterprises and been totally shocked to see how development and operations sit on the edge of chaos. I realize that things are working, but exactly like the plumbing, I come from a place where I don't think what I see is a necessary problem. It can, and should be fixed.
Modern software development operates at the same level as plumbing in inexpensive hostels in China (I don't want to generalize too much :-). People just whack it together, and then accept whatever, if it's just good enough. They quickly move on to whack out more stuff.
There are few applicable standards, and even if there were, they are not being followed or enforced.
The software might work, but while I don't mind seeing it when I travel, I'd hate to have to depend on it. Oh wait, we do...
That many programmers go at their jobs with the ethics of Chinese plumbers is bad enough, but too often I see them trying to justify it as some sort of creative license. It's at that point, I think, that it is really insane. Sheer madness. I have a hard time believing that hooking up the hot water to the cold water faucet is an expression of artistic freedom, yet in one form or another, that argument echoes all over programming discussions.
No doubt the plumber working quickly in the hostel would have been irritated by someone criticizing his efforts, constraining his work or just slowing him down, but all I can see is a missed opportunity by someone -- anyone really -- to ensure that a critical job gets done correctly. It wouldn't have taken much more effort to avoid that mess in the first place. It says a lot about the level of operation. Things that shouldn't be, shouldn't be.
Someday, perhaps, the software industry will find its way out of our current state. Someday things will install or upgrade easily, mostly work, and be easily fixable when they crash. Data won't get lost, and inter-operation won't be a mythical buzzword. Someday we will assemble massive systems, spend minimal effort collecting data and then use the results to further refine our efforts. Someday we won't have to finely read the manual in order to grok yet another level of arbitrary madness. Someday comparisons between programmers and people who hack off their own ears for undecipherable reasons won't be so common. Someday.
Finality: No, I didn't say that programming was the same as plumbing! Read it again.
Sunday, May 31, 2009
Architectural Ramblings
While there is a huge range in opinions, I think most software architects would agree that their position is primarily about defining broad strokes for the development of computer systems. Laying down a master plan, or an overview of some type. The development then happens by one or more teams of programmers.
The fastest way to pound out a big software system would be to lay out the instructions in the largest possible sets. Reusing highly repetitive sections helps in speed, but beyond that, adding structure is actually more work. Coding a big system into thousands of small functions is a significant effort, regardless of what's actually being built.
However, we break up the code into smaller pieces because it makes it easier to extend the system later. Most software development has shifted away from the idea that it is a huge one-shot deal, and accepted the fact that software is continuous iterations for the life-time of the code. In this case, the big problem is not writing the first version, it is all of the work happening over years and years to move that version forward into various different incarnations of itself.
VERTICAL AND HORIZONTAL LINES
There are two big things that affect the development of software: the technology and the problem domain (business logic). Philosophically, the two lay themselves out perpendicular to one another.
Domain logic problems are vertical. The user needs some functionality which cuts through the system, and ultimately results in some changes to some underlying data. It's a thin line from the user to the data, and then back to the user again. The system implements an update feature, for example, or a report or some other specific set of functions constrained by a specific set of data that the user (or system) triggers.
Technological problems are horizontal. The same problems repeat themselves over and over again across the whole system, no matter which functionality is being used. All Web applications, for example, have similar problems. All distributed systems need the same type of care and feeding. Relational databases work in a similar manner. The problems are all unrelated to the functionality (or at least unattached to it). For example, all systems that use transactions in relational databases have the same basic consistency problems.
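To sketch what a horizontal problem looks like in code -- assuming a Python system, with every function name here purely illustrative -- the same cross-cutting fix can be written once and draped uniformly across completely unrelated vertical features:

```python
import functools


def with_retries(attempts=3):
    # One horizontal concern -- retrying flaky network calls -- solved
    # once, then applied identically no matter which feature uses it.
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            last_error = None
            for _ in range(attempts):
                try:
                    return fn(*args, **kwargs)
                except ConnectionError as err:
                    last_error = err
            raise last_error
        return wrapper
    return decorate


# Two unrelated vertical features share the one horizontal fix.
@with_retries()
def update_account(account_id, amount):
    return f"updated {account_id} by {amount}"


@with_retries()
def generate_report(report_id):
    return f"report {report_id}"
```

The vertical features stay thin lines from user to data; the horizontal machinery stays in one place instead of being re-invented per feature.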
CONSISTENCY AND ONIONS
Easily, the greatest problem that most big systems have is consistency. It's not uncommon to see large software products where each sub-section in the system bears little resemblance to the others around it.
Aside from just looking messy, it makes for a poor user experience because it is harder to guess or anticipate how to use the system. If everything is different, then you have to learn it all in detail, rather than just being able to grok the basics and navigate around the system easily.
In small systems with only one programmer, the functionality, interface, data, etc. are often consistent as a consequence. It's one of those early skills that good programmers learn. Picking a dozen different ways to do things simply makes it harder to maintain the code, and the users hate it. A messy system is an unpopular one. A pretty, consistent one, even if it has bugs, is always appreciated.
Ideally, if you were deploying programming resources, the best approach would be to assign individual programmers to each section that needs to be consistent. An easy way to do this is by arranging them in an onion-like structure. A series of containing layers, each one fully enclosing the others.
In this approach you would have a database programmer create one big consistent schema, and all of the database triggers and logic. Another programmer would be responsible for getting that model out of its persistent state and into a suitable form usable by the application, and perhaps augmenting that with some bigger calculations. The application/interface programmer would then find consistent ways to tie user elements to functionality. In this scenario, the inconsistencies between the styles of the different programmers would mostly go unnoticed by the users.
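A minimal sketch of the onion, with all names hypothetical: each layer only talks to the one directly beneath it, so each can be owned by one programmer and kept internally consistent on its own terms:

```python
# Innermost layer: persistence. One owner keeps the schema consistent.
_DB = {1: {"name": "ada", "balance_cents": 1500}}


def load_account_row(account_id):
    # Raw persistent form, exactly as stored.
    return _DB[account_id]


# Middle layer: turns persistent rows into the form the application
# uses, adding any bigger calculations along the way.
def load_account(account_id):
    row = load_account_row(account_id)
    return {"name": row["name"].title(),
            "balance": row["balance_cents"] / 100}


# Outermost layer: ties the model to user-facing elements.
def render_account(account_id):
    acct = load_account(account_id)
    return f"{acct['name']}: ${acct['balance']:.2f}"
```

Because each layer fully encloses the one below, the stylistic differences between their three authors never collide in front of the user.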
It's probably because of the consistency that most of the really big systems out there, actually started as smaller projects. Growing something into a bigger size, while mostly maintaining the consistency, is far easier than trying to enforce it initially. Big projects tend to fail.
STRUCTURE AND TEAMS
Deploying big teams for large software projects is a huge problem.
The domain logic runs through the system vertically. If you're building a large system, and you partition the work out to various groups based on which parts of the subsystem they'll develop, it is a vertical partition. Of course, we know that arrangement will generally result in massive inconsistencies along the functionality lines. It practically guarantees inconsistencies.
If you orient the developers horizontally, there are two main problems. Generally, technology is piled high for most systems, so each horizontal group is dependent on the one below, and affects the one above. The work cannot all start at the same time; it has to be staggered, or excess useless work gets done. In general, you tend to have teams standing around, waiting for one another, or building the wrong things.
The other big problem is that the developers are experts on, and focus on the technology, so they tend not to see the obvious problems with the domain data. That can lead to significant effort going off into useless directions that have to be scrapped. Developers that don't understand the domain always write the most convenient code from their perspective. That is, unless they really have a great grip on real domain problems -- which makes for some brutally complex coding issues -- they will choose to interpret the problem in the fashion that is the most convenient for making the code simple, not the functionality. This leads to functional, but highly convoluted systems.
The choice of how to divide up the teams, along either set of lines, heavily affects the architecture of the system. One is tied to the other.
MANAGEMENT, TECHNOLOGY AND BUSINESS
An architect's main job is to lay down a set of broad strokes for a group of developers or an enterprise. Horizontally or vertically, these strokes must match existing technological or domain logic lines. That is, the architects don't really create the lines, they just choose to highlight and follow a specific subset of them. If they don't and choose to make their own lines, it is assured that the cross-over points will contain significant errors and often become sinks for wasted effort.
Architects, then, need both a strong understanding of the underlying technologies and a strong understanding of the user domains. You cannot architect a system for a bank, for example, if you do not have a strong grasp of finance and of most of the technologies used in the system.
I've certainly seen millions of dollars get flushed away because the architects didn't understand the technology, or because they just didn't get the real underlying problems in the business domain. Guessing in either corner is a serious flaw.
Often, in practice, older architects will have great and vast business knowledge, but are weak on the more modern technologies. That's not uncommon, and it doesn't have to be fatal. Two architects can combine forces, one vertical, the other horizontal, if they are both wise enough to know not to step on each other's turf. Neither issue really takes precedence; they are both critical.
The biggest thing to understand is that the arrangement, and partition of the work into various teams, is fundamentally an architectural issue. Management of the teams, and how they interact, which is reflected in the code, is also an architectural issue. So, an architect inheriting four or five cranky, unrelated development teams, needs to be very wary of how they are deployed.
APPROACHING CONSISTENCY
The simplest way to build a big system is to start small and grow it. Careful extensions, backed by lots of refactoring and cleanup, can keep the code neat along both the horizontal and vertical lines, but it is a lot of work to keep up with it.
In time, when it needs more effort, or if the original project is time-dependent, organizing big development teams will directly affect the code-base. In a very large system, it is just not possible or practical to have one and only one programmer spanning all of the things that are needed for consistency. But consistency is still critical to success.
Like water or electricity, people always flow towards the easiest path. It is just that different people have widely different ideas about what easy really means.
In general, for most of humanity, it is easier to be inconsistent, than not. That's why, for example, building construction has a large number of different standards that have to be followed. But just standardizing something itself is not enough.
Not reading or following a standard is far easier than doing it right. So, there needs to be some extra level of incentive given to push people into abiding by the rules. The only way that really works is by separating the concerns, and giving space for someone to be an expert in the consistency of a very small domain. I.e. if you want people to follow some consistent subset of rules, you need to make an inspector (with the power of enforcement) to look after and enforce that consistency.
This, then, brings the issue back into the domain of a single person, although their actual depth in the work is very light.
So, what I am getting at, is that in a very large project, one with a couple of architects, and several teams of developers: to enforce consistency you also need teams of inspectors. Each individual is responsible for a small subset of rules, but one in which they can apply their efforts consistently. Where the architects lay down the broad strokes, the inspectors examine various pieces at appropriate times in the development, and point out infringements. The software is ready when it has reached a suitably low infringement state.
If you just have an architect laying out a grand scheme, it's unlikely to get followed closely enough to be able to justify the work. Normally, no matter how it starts, over time the meta-team structure disintegrates, and the various groups shift their focus away from commonality and towards just doing what needs to be done. From each of their individual perspectives, it seems like a reasonable approach, but for the project as a whole, it is a fatal step towards a death march. The breakdown in development structure, mirrored by a similar one in the architecture, opens up gaping holes that become endless time sinks.
Besides consistency, the inspectors also become the force that tries to stop each team from solving their own little time sinks by throwing them at the other teams. If the lines are clear, then there shouldn't be any political issues about where the problems lay. If the lines are blurry, as things get worse, everything becomes political.
AND FINALLY
There are a lot of software architects out there who grew up in the ranks of programmers. From this view, they want architecture to focus primarily on either technology or domain issues, depending on their own personal background. The big problem with that desire is that as just a meta-programmer (a coder at a higher level), without taking the environment into account, a huge chunk of the possibility of success or failure rests on someone else's shoulders. If you leave all of the personnel arrangements, politics and other development dynamics to an unrelated set of managers, you cannot control their huge effect on the overall work.
The environment will build things in, effects how we build them. It can also be the biggest factor between success and failure. If these things represent big issues that can sway the results of an architecture, then the architect needs to be in control of them. If you can't break up and restructure the development teams as needed, then you're not really in control of a major aspect of the work are you?
Although the architects level is higher and more generalized, they do need the possibility of exerting total control over any and all aspects that could derail their efforts. That includes management and politics.
The fastest way to pound out a big software system would be to lay out the instructions in the largest possible sets. Reusing highly repetitive sections helps in speed, but beyond that, adding structure is actually more work. Coding a big system into thousands of small functions is a significant effort, regardless of what's actually being built.
However, we break the code up into smaller pieces because it makes the system easier to extend later. Most software development has shifted away from the idea that it is a huge one-shot deal, and accepted that software is a continuous series of iterations over the lifetime of the code. In that case, the big problem is not writing the first version; it is all of the work, over years and years, of moving that version forward into its various incarnations.
VERTICAL AND HORIZONTAL LINES
There are two big things that affect the development of software: the technology and the problem domain (business logic). Philosophically, the two lay themselves out perpendicular to one another.

Domain logic problems are vertical. The user needs some functionality which cuts through the system, and ultimately results in some changes to some underlying data. It's a thin line from the user to the data, and then back to the user again. The system implements an update feature, for example, or a report or some other specific set of functions constrained by a specific set of data that the user (or system) triggers.

Technological problems are horizontal. The same problems repeat themselves over and over again across the whole system, no matter which functionality is being used. All Web applications, for example, have similar problems. All distributed systems need the same type of care and feeding. Relational databases work in a similar manner. The problems are all unrelated to the functionality (or at least unattached to it). For example, all systems that use transactions in relational databases have the same basic consistency problems.
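The two directions can be sketched in code. In this hypothetical example (all names here are invented for illustration), each vertical slice is one user-triggered piece of functionality over the data, while a horizontal concern -- auditing, in this sketch -- wraps every slice in exactly the same way:

```python
# Hypothetical sketch: vertical slices are individual features that cut
# from the user down to the data; a horizontal concern (here, auditing)
# applies identically across every slice.

audit_log = []

def with_audit(feature):
    """Horizontal concern: wraps any vertical feature the same way."""
    def wrapped(*args):
        audit_log.append(feature.__name__)
        return feature(*args)
    return wrapped

@with_audit
def update_account(accounts, name, balance):
    """Vertical slice: one user-triggered change to the underlying data."""
    accounts[name] = balance
    return accounts

@with_audit
def report_accounts(accounts):
    """Another vertical slice: a report over the same data."""
    return sorted(accounts)

accounts = update_account({}, "alice", 100)
names = report_accounts(accounts)
```

The point of the sketch is only the shape: adding another feature adds a vertical line, while changing the auditing touches every feature at once, which is exactly why the two directions pull a team's structure in different ways.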
CONSISTENCY AND ONIONS
Easily, the greatest problem that most big systems have is consistency. It's not uncommon to see large software products where each sub-section of the system bears little resemblance to the others around it.

Aside from just looking messy, this makes for a poor user experience because it is harder to guess or anticipate how to use the system. If everything is different, then you have to learn it all in detail, rather than just being able to grok the basics and navigate around the system easily.

In small systems, if there is only one programmer, the functionality, interface, data, etc. are often consistent as a consequence. It's one of those early skills that good programmers learn. Picking a dozen different ways to do things simply makes the code harder to maintain, and users hate it. A messy system is an unpopular one. A pretty, consistent one, even if it has bugs, is always appreciated.
Ideally, if you were deploying programming resources, the best approach would be to assign individual programmers to each section that needs to be consistent. An easy way to do this is by arranging them in an onion-like structure. A series of containing layers, each one fully enclosing the others.
In this approach you would have a database programmer create one big consistent schema, with all of the database triggers and logic. Another programmer would be responsible for getting that model out of its persistent state and into a suitable form usable by the application, perhaps augmenting it with some bigger calculations. The application/interface programmer would then find consistent ways to tie user elements to functionality. In this scenario, the inconsistencies between the styles of the different programmers would mostly go unnoticed by the users.

It's probably because of consistency that most of the really big systems out there actually started as smaller projects. Growing something into a bigger size, while mostly maintaining the consistency, is far easier than trying to enforce it initially. Big projects tend to fail.
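The onion arrangement above can be sketched as three nested rings, each fully wrapping the one inside it. This is a hypothetical, minimal illustration (the class names, the key format and the account data are all invented), but it shows how each programmer's ring only ever talks to the one directly beneath it:

```python
# Hypothetical sketch of the onion arrangement: each layer fully wraps
# the one inside it, so one programmer can keep each ring consistent.

class Storage:
    """Inner ring: the database programmer's consistent schema."""
    def __init__(self):
        self._rows = {"acct:1": "100"}   # persisted as raw strings

    def fetch(self, key):
        return self._rows[key]

class Model:
    """Middle ring: gets the model out of its persistent state and
    into a form usable by the application."""
    def __init__(self, storage):
        self._storage = storage

    def balance(self, acct_id):
        return int(self._storage.fetch(f"acct:{acct_id}"))

class Interface:
    """Outer ring: ties user elements to functionality."""
    def __init__(self, model):
        self._model = model

    def show_balance(self, acct_id):
        return f"Balance: {self._model.balance(acct_id)}"

app = Interface(Model(Storage()))
text = app.show_balance(1)
```

Because each ring is one person's territory, a stylistic quirk in `Storage` never leaks past `Model`, which is the mechanism by which the users stay insulated from the programmers' differences.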
STRUCTURE AND TEAMS
Deploying big teams for large software projects is a huge problem.
The domain logic runs through the system vertically. If you're building a large system and you partition the work out to various groups based on which parts of the subsystem they'll develop, that is a vertical partition. We know that arrangement will generally result in massive inconsistencies along the functionality lines; it practically guarantees them.

If you orient the developers horizontally, there are two main problems. The technology is piled high for most systems, so each horizontal group depends on the one below and affects the one above. The work cannot all start at the same time; it has to be staggered, or excess useless work gets done. In general, you tend to have teams standing around, waiting for one another, or building the wrong things.

The other big problem is that the developers are experts on, and focused on, the technology, so they tend not to see the obvious problems with the domain data. That can lead to significant effort going off in useless directions that have to be scrapped. Developers that don't understand the domain always write the code that is most convenient from their perspective. That is, unless they really have a great grip on the real domain problems -- which make for some brutally complex coding issues -- they will choose to interpret the problem in the fashion that is most convenient for making the code simple, not the functionality. This leads to functional, but highly convoluted, systems.

The choice of how to divide up the teams, along either set of lines, heavily affects the architecture of the system. One is tied to the other.
MANAGEMENT, TECHNOLOGY AND BUSINESS
An architect's main job is to lay down a set of broad strokes for a group of developers or an enterprise. Horizontally or vertically, these strokes must match existing technological or domain logic lines. That is, the architects don't really create the lines; they just choose to highlight and follow a specific subset of them. If they don't, and choose to make their own lines instead, it is assured that the cross-over points will contain significant errors and often become sinks for wasted effort.

Architects, then, need both a strong understanding of the underlying technologies and a strong understanding of the user domains. You cannot architect a system for a bank, for example, if you do not have both a strong understanding of finance and of most of the technologies used in the system.

I've certainly seen millions of dollars get flushed away because the architects didn't understand the technology, or because they just didn't get the real underlying problems in the business domain. Guessing in either corner is a serious flaw.

Often, in practice, older architects will have great and vast business knowledge, but are weak on the more modern technologies. That's not uncommon, and it doesn't have to be fatal. Two architects can combine forces, one vertical, the other horizontal, if they are both wise enough not to step on each other's turf. Neither issue really takes precedence; both are critical.

The biggest thing to understand is that the arrangement and partitioning of the work into various teams is fundamentally an architectural issue. Management of the teams, and how they interact -- which is reflected in the code -- is also an architectural issue. So an architect inheriting four or five cranky, unrelated development teams needs to be very wary of how they are deployed.
APPROACHING CONSISTENCY
The simplest way to build a big system is to start small and grow it. Careful extensions, backed by lots of refactoring and cleanup, can keep the code neat along both the horizontal and vertical lines, but it is a lot of work to keep up with it.
In time, when it needs more effort, or if the original project is time-dependent, how the big development teams are organized will directly affect the code-base. In a very large system, it is just not possible or practical to have one and only one programmer spanning everything that needs to be consistent. But consistency is still critical to success.
Like water or electricity, people always flow towards the easiest path. It is just that different people have widely different ideas about what easy really means.
In general, for most of humanity, it is easier to be inconsistent than not. That's why, for example, building construction has a large number of different standards that have to be followed. But standardizing something is not, by itself, enough.

Not reading or following a standard is far easier than doing it right, so there needs to be some extra incentive to push people into abiding by the rules. The only way that really works is by separating the concerns and giving someone the space to be an expert in the consistency of a very small domain. I.e., if you want people to follow some consistent subset of rules, you need to make an inspector, with the power of enforcement, to look after that consistency.
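The inspector idea maps neatly onto how automated style checkers are structured. As a hedged sketch (the two rules here are deliberately tiny and invented, stand-ins for whatever subset a real inspector would own), each inspector enforces one narrow concern, and the project tracks the total infringement count:

```python
# Hypothetical sketch: each inspector owns one small subset of rules;
# the software is "ready" when the infringement count is suitably low.

def naming_inspector(lines):
    """Owns one narrow concern: flags single-letter variable names."""
    return [l for l in lines if " x = " in f" {l} "]

def length_inspector(lines):
    """Owns another narrow concern: flags overly long lines."""
    return [l for l in lines if len(l) > 40]

INSPECTORS = [naming_inspector, length_inspector]

def infringements(lines):
    """Runs every inspector; each reports only on its own rules."""
    found = []
    for inspect in INSPECTORS:
        found.extend(inspect(lines))
    return found

code = ["x = 1", "total = compute_total(orders)"]
issues = infringements(code)
```

The design choice is the same one argued above: no single inspector needs to understand the whole system, but each can apply its tiny rule set with perfect consistency, and the sum of them holds the lines in place.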
This, then, brings the issue back into the domain of a single person, although their actual depth in the work is very light.
So, what I am getting at, is that in a very large project, one with a couple of architects, and several teams of developers: to enforce consistency you also need teams of inspectors. Each individual is responsible for a small subset of rules, but one in which they can apply their efforts consistently. Where the architects lay down the broad strokes, the inspectors examine various pieces at appropriate times in the development, and point out infringements. The software is ready when it has reached a suitably low infringement state.
If you just have an architect laying out a grand scheme, it's unlikely to get followed closely enough to be able to justify the work. Normally, no matter how it starts, over time the meta-team structure disintegrates, and the various groups shift their focus away from commonality and towards just doing what needs to be done. From each of their individual perspectives, it seems like a reasonable approach, but for the project as a whole, it is a fatal step towards a death march. The breakdown in development structure, mirrored by a similar one in the architecture, opens up gaping holes that become endless time sinks.
Besides consistency, the inspectors also become the force that tries to stop each team from solving their own little time sinks by throwing them at the other teams. If the lines are clear, then there shouldn't be any political issues about where the problems lie. If the lines are blurry, as things get worse, everything becomes political.
AND FINALLY
There are a lot of software architects out there that grew up in the ranks of programmers. From this view, they want architecture to focus primarily on either technology or domain issues, depending on their own personal background. The big problem with that desire is that, as just a meta-programmer (a coder at a higher level) who does not take the environment into account, a huge chunk of the possibility of success or failure rests on someone else's shoulders. If you leave all of the personnel arrangements, politics and other development dynamics to an unrelated set of managers, you are unable to control their huge effect on the overall work.

The environment we build things in affects how we build them. It can also be the biggest factor between success and failure. If these things represent big issues that can sway the results of an architecture, then the architect needs to be in control of them. If you can't break up and restructure the development teams as needed, then you're not really in control of a major aspect of the work, are you?

Although the architect's level is higher and more generalized, they do need the ability to exert total control over any and all aspects that could derail their efforts. That includes management and politics.
Friday, May 22, 2009
Driven by Work
Recently I returned from spending several weeks backpacking in China. I stomped on the Great Wall, gawked at the Terracotta warriors and even managed to plunge lightly into some of the mysteries of Tibet. All great locations in a land filled with a long and complex history.
Traveling always renews my excitement for life and reawakens my sense of curiosity. I find that if I'm stuck too long without some type of grand adventure, I tend towards pessimism. I think getting caught in a rut just clouds my mood.
Every so often I need to break free of the constraints of modern living and lead a more nomadic existence. I need to get away from the routine, and react more dynamically to the world around me. I just need to break free of all those bits of life that just keep piling up around us. I need to get back into the real world, not just my little corner of it.
LANGUAGE AND OCCUPATION
There are a lot of theories, such as the Sapir-Whorf hypothesis, that suggest that languages influence how we think and see the world around us. For many modern languages, it may seem to some to be a stretch, but I seem to remember a reference to an ancient Chinese "water" language with only 400 written characters (although I couldn't find an Internet reference, so you'll have to trust my memory). With such a limited vocabulary it would certainly be difficult, if not impossible, to craft an effective rant on most subjects. The words just aren't available. If you don't have the words, you can't express things easily. Even if you could create long complex sentences, the lack of brevity starts to become an impediment to the ideas.
That's probably why so many professions end up with their own huge technical dictionaries. Short precise terms that have large significant meaning. Words that encapsulate complex ideas. We always need to be able to communicate larger ideas with less bandwidth.
More obvious than language, one's occupation surely has an effect on how we perceive the world around us. You can't spend 40 to 60 hours a week engaged in something, year after year, and not expect it to affect you in some way.
Lawyers probably argue more than most, doctors seem preoccupied with their health (although I know several that smoke and drink) and accountants tend to pinch pennies. If you keep looking at the same problems in the same ways, it's hard to prevent that from spilling over into other aspects of your life.
I'm sure everyone will have counter examples where they know of someone that breaks the mold -- there are always exceptions -- but I think it's still pretty likely that the way we live our lives directly affects how we see the world around us.
It certainly shows up heavily with programmers and other techies. You see it clearly in their ideas and interactions. The web is plagued with examples.
THREE CHANGES
Over the years I've noticed several changes in the way I see things. Some changes are purely a result of age or education, but there are definitely some that have come directly from how I am spending huge chunks of my time. Influences from a lifetime of pounding out software.
Three specific changes appear over and over again. I've become more detail-oriented, I tend to view the world more often from a black and white perspective, and I'm often more disappointed when things are not easily deterministic.
It makes sense. Each of these attributes helps significantly in developing software.
The key to getting any big project designed, coded and into production is to make sure all of the details are strictly and carefully attended to. No matter how you look at the big picture, the details always either act as time sinks or have to be carefully managed.
Also, as a software developer I always have to tightly constrain the world around me into a strict, yet limited black and white perspective. Dynamic shades of grey make for horribly inconsistent software.
And I've learned to stick to the things that I know are strictly deterministic in nature. If it is going to work, it needs to work every time, consistently. A program that works sometimes is all but useless.
Spending my days and often nights in pursuit of big functioning systems gradually takes a toll on the way I see the world. On the way I want to interact with it. For each new domain I enter, and each new product I create, I have to break everything down into relatively simple, straight-forward chunks.
While these changes have helped me build bigger and better things in my career, I've often mistakenly applied the same back to the world around me, to a negative effect.
Software straddles the fence between the purity of mathematics and the messiness of the real world. The real world doesn't work the same way that a computable one does. You can't debug a personal relationship, for example, and some things, such as weather, are just too chaotic in nature to be predictable.
While often we can understand the underlying rules, that doesn't mean we understand the results. You can see how the stock market works for example, but still be unable to profit from it. Those attributes that help me create programs also set me at odds with the world around me. The harder I work at it, the more they seem to influence my perspective.
I think that's why traveling restores my sense of balance, and reminds me that not everything should be simple, consistent, rational or even clear. It's often a breath of fresh air, in an otherwise stuffy environment. A subtle reminder to not get too caught up in myself.
I do notice that besides myself, often others in the community are highly afflicted by a computationally-constrained perspective as well. One that probably makes their lives more difficult. There are a huge number of examples, but the web itself becomes a great historian for studying people who have become far too rigid.
THE DEVIL'S IN THE DETAILS
While the small details are very important in getting large complex projects to work, being too attentive to them can lead to missing the big picture. Given the choice, getting the broad strokes wrong is far worse than missing any of the details. In the latter case, the distance to fix the problem is often far shorter.

Focusing too hard on too many small things leads to tunnel vision: a perspective of a world where the importance of nearly trivial details is thrown out of proportion.
It's not uncommon, for example, to see techies highly irritated by spelling mistakes and typos.
Language is the container for ideas, and while good spelling and grammar make reading easier, it is rare that such mistakes introduce real ambiguities into the text. As such, they are really minor annoyances, yet some people seem so put off by even the simplest of them.

A well-edited piece, it is true, is easier to read, yet often it is symptomatic of the work being less raw and more contrived. A polished effort. You'd expect good writing in a magazine, for example, but you always know that the polishing 'twists' the content. You are not getting the real thoughts of the author; you're getting their packaged version.

Ultimately, what's important is the substance of what people say, not how they say it. Missing that leads to dismissing possibly correct ideas for the wrong reasons. The truth is true no matter how badly it is packaged, while falsehoods tend towards better packaging to make them more appetizing.
BLACK AND WHITE
You can't just toss anything into a computer and expect it to work. Computers help users compile data, which can be used to automate work, but ultimately everything in a computer is a shadow of the real world. A symbolic projection into a mathematical space.
We can strive for a lot of complexity, but we can never match the real world in depth, nor would we want to. To make our systems work, we have to reduce a world full of ambiguity and grey into a simplified black and white version. If it's not easily explainable, then it's hardly usable.
This can be a huge problem.
Too often you see programmers or developers trying to pound the real world into overtly simplistic boxes of good and bad. Right or wrong. Left or right.
That type of poor deconstruction, driven by a Star Wars mentality, leads to many of the stupid flame wars that are so popular, where two sides pound on each other, each assuming the other is wrong.
You know: my programming language is good, yours is bad. My OS is better than yours. My hardware is right, yours is inferior. That type of nonsense.
Clearly, all technologies have their good and bad points, but that quickly gets ignored in a black and white world. Everything gets reduced into two overly simplified categories, whether or not it makes any sense. A stiff, rigid viewpoint fits well inside of a machine, but isn't even close to working with the world outside.
If you're oriented, for example, to see all of your fellow employees as only either good or bad, then because of that limited perspective you are missing out on a broad (and ultimately more successful) view of the world around you. People have such a wide range of strengths and weaknesses, that assigning them to one list or the other misses out on their potential. You'll end up relying on weak people at the wrong times, while passing up some well-suited resources.
Black and white works well for Hollywood or comic book plots, but it forces us to miss the depth of the world around us.
A LACK OF DETERMINISM
A programmer lays out a pre-determined series of steps for a computer to follow. Even with today's overly complex technologies, the computer still precisely follows these instructions. It does exactly what it is told to do. It does so in a deterministic and predictable manner. It is logical and rational in its behavior.
We get used to this behavior from the machines, so it's not uncommon that programmers start to expect this from other parts of the real world as well.
This becomes most obvious when you see technical people discussing business.
Management consultants lay out huge numbers of theories for business to follow, but hopefully they don't really believe that it works out that simply. Business, like weather, has some simple driving factors underneath, but at the practical level it is chaotic enough that it should be considered irrational.
If there existed some simple rules on how to succeed in business, eventually enough smart people would figure them out, to a significant enough degree that their exploiting the behavior would change the system. That is, if too many people know how to exceed the average, then they become the new average. If everybody wins, then nobody wins.

The markets are intrinsically irrational (from a distant perspective), yet that doesn't stop techies from applying bad logic to predict or explain their behavior. It's epidemic: examples of people explaining why a superior technology will dominate the market, how being nice to the customers will increase business, how careful listening will get the requirements, or any other assumption that presumes that the underlying behavior is deterministic, logical or predictable.

This works well for dealing with the internals of software systems, but for business and politics most programmers would do well to accept that they are just irrational and unpredictable.
SUMMARY
If you know that you're biased, it is far easier to compensate for it. However, if you're walking around in an imaginary world, you tend to find that it's an uphill fight in the real one. You keep bumping into invisible walls.
The best way around this is to try very hard to separate these two worlds into two distinct perspectives. We don't have to unify our working perspective with the real one, but we do have to be aware when one is corrupting the other. We can keep them separate, and have separate expectations for each. They don't need to be combined.
When I travel, it reminds me of the greyness, the people and the irrationality of the world around me. Although I can often break things down into simple bits for the system development work, I always need to be reminded that that is just a mere subset of the reality surrounding me.
Computers should be predictable devices that are easily explainable, at least at a high usability level. The real world, outside of the machines, on the other hand, is an infinitely deep messy collection of exceptions to each and every rule. If you think you've figured it out, then you haven't gone deep enough yet.
It brings nothing but pain and frustration to expect that the real world will work on the same principles as a computational one, but still it's very common to see people caught in this mental trap. It certainly is something worth avoiding. Even if you have to occasionally trek around on the other side of the planet to do it.
Traveling always renews my excitement for life and reawakens my sense of curiosity. I find that if I'm stuck too long without some type of grand adventure, I tend towards pessimism. I think getting caught in a rut just clouds my mood.
Every so often I need to break free of the constraints of modern living and lead a more nomadic existence. I need to get away from the routine, and react more dynamically to the world around me. I just need to break free of all those bits of life that just keep piling up around us. I need to get back into the real world, not just my little corner of it.
LANGUAGE AND OCCUPATION
There are a lot of theories, such as the Sapir-Whorf hypothesis, that suggest that languages influence how we think and see the world around us. For many modern languages, it may seem to some to be a stretch, but I seem to remember a reference to an ancient Chinese "water" language with only 400 written characters (although I couldn't find an Internet reference, so you'll have to trust my memory). With such a limited vocabulary it would certainly be difficult, if not impossible, to craft an effective rant on most subjects. The words just aren't available. If you don't have the words, you can't express things easily. Even if you could create long complex sentences; the lack of brevity starts to be become an impediment to the ideas.
That's probably why so many professions end up with their own huge technical dictionaries. Short precise terms that have large significant meaning. Words that encapsulate complex ideas. We always need to be able to communicate larger ideas with less bandwidth.
More obvious than language, one's occupation surely has an effect on how we perceive the world around us. You can't spend 40 to 60 hours a week engaged in something, year after year and not expect it to affect you in some way.
Lawyers probably argue more than most, doctors seem preoccupied with their health (although I know several that smoke and drink) and accountants tend to pinch pennies. If you keep looking at the same problems in the same ways, it's hard to prevent that from spilling over into other aspects of your life.
I'm sure everyone will have counter examples where they know of someone that breaks the mold -- there are always exceptions -- but I think it's still pretty likely that the way we live our lives directly affects how we see the world around us.
It certainly shows up heavily with programmers and other techies. You see it clearly in their ideas and interactions. The web is plagued with examples.
THREE CHANGES
Over the years I've noticed several changes in the way I see things. Some changes are purely a result of age or education, but there are definitely some that have come directly from how I am spending huge chunks of my time. Influences from a lifetime of pounding out software.
Three changes in specific, appear over and over again. I've become more detail-oriented, I tend to view the world more often in a black and white perspective, and I'm often more disappointed when things are not easily deterministic.
It makes sense. Each of these attributes helps significantly in developing software.
The key to getting any big project designed, coded and into production is to make sure all of the details are strictly and carefully attend to. No matter how you look at the big picture, the details always either act as time sinks or have to be carefully managed.
Also, as a software developer I always have to tightly constrain the world around me into a strict, yet limited black and white perspective. Dynamic shades of grey make for horribly inconsistent software.
And I've learned to stick to the things that I know are strictly deterministic in nature. If it is going to work, it needs to work every time, consistently. A program that works sometimes is all but useless.
Spending my days and often nights in pursuit of big functioning systems gradually takes a toll on the way I see the world. On the way I want to interact with it. For each new domain I enter, and each new product I create, I have to break everything down into relatively simple, straight-forward chunks.
While these changes have helped me build bigger and better things in my career, I've often mistakenly applied the same back to the world around me, to a negative effect.
Software straddles the fence between the purity of mathematics and the messiness of the real world. The real world doesn't work the same way that a computable one does. You can't debug a personal relationship for example, and some things such as weather are just too chaotic in nature to be predicable.
While often we can understand the underlying rules, that doesn't mean we understand the results. You can see how the stock market works for example, but still be unable to profit from it. Those attributes that help me create programs also set me at odds with the world around me. The harder I work at it, the more they seem to influence my perspective.
I think that's why traveling restores my sense of balance, and reminds me that not everything should be simple, consistent, rational or even clear. It's often a breath of fresh air, in an otherwise stuffy environment. A subtle reminder to not get too caught up in myself.
I do notice that besides myself, often others in the community are highly afflicted by a computationally-constrained perspective as well. One that probably makes their lives more difficult. There are a huge number of examples, but the web itself becomes a great historian for studying people who have become far too rigid.
THE DEVILS IN THE DETAILS
While the small details are very important in getting large complex projects to work, being too attentive to them can lead to missing the big picture. Given the choice, getting the broad strokes wrong is far worse than missing any of the details. In the later case, the distance to fix the problem is often far shorter.
Focusing too hard on too many small things leads to a tunnel vision. A perspective of a world were the importance of nearly trivial details are often thrown out of proportion.
It's not uncommon, for example, to see techies highly irritated by spelling mistakes and typos.
Language is the container for ideas, and while good spelling and grammar make reading easier, it is rare that they introduce real ambiguities into the text. As such, they are really minor annoyances, yet some people seem so put off by even the simplest of them.
A well-edited piece, quite truly, is easier to read, yet often it is symptomatic of the work being less raw and more contrived. A polished effort. You'd expect good writing in a magazine for example, but you always know that the polishing 'twists' the content. You are not getting the real thoughts of the author, but instead you're getting their packaged version.
Ultimately, what's important is the substance of what people say, not how they say it. Missing that leads to dismissing possibly correct ideas for the wrong reasons. The truth is true no matter how badly it is packaged, while falsehoods tend towards better packaging to make them more appetizing.
BLACK AND WHITE
You can't just toss anything into a computer and expect it to work. Computers help users compile data, which can be used to automate work, but ultimately everything in a computer is a shadow of the real world. A symbolic projection into a mathematical space.
We can strive for a lot of complexity, but we can never match the real world in depth, nor would we want to. To make our systems work, we have to reduce a world full of ambiguity and grey into a simplified black and white version. If it's not easily explainable, then it's hardly usable.
This can be a huge problem.
Too often you see programmers or developers trying to pound the real world into overly simplistic boxes of good and bad. Right or wrong. Left or right.
That type of poor deconstruction, driven by a Star Wars mentality, leads to many of the stupid flame wars that are so popular, where two sides pound on each other, each assuming the other is wrong.
You know: my programming language is good, yours is bad. My OS is better than yours. My hardware is right, yours is inferior. That type of nonsense.
Clearly, all technologies have their good and bad points, but that quickly gets ignored in a black and white world. Everything gets reduced into two overly simplified categories, whether or not it makes any sense. A stiff, rigid viewpoint fits well inside of a machine, but doesn't come even close to working with the world outside.
If you're oriented, for example, to see all of your fellow employees as only either good or bad, then because of that limited perspective you are missing out on a broad (and ultimately more successful) view of the world around you. People have such a wide range of strengths and weaknesses, that assigning them to one list or the other misses out on their potential. You'll end up relying on weak people at the wrong times, while passing up some well-suited resources.
Black and white works well for Hollywood or comic book plots, but it forces us to miss the depth of the world around us.
A LACK OF DETERMINISM
A programmer lays out a pre-determined series of steps for a computer to follow. Even with today's overly complex technologies, the computer still precisely follows these instructions. It does exactly what it is told to do. It does so in a deterministic and predictable manner. It is logical and rational in its behavior.
We get used to this behavior from the machines, so it's not uncommon that programmers start to expect this from other parts of the real world as well.
This becomes most obvious when you see technical people discussing business.
Management consultants lay out huge numbers of theories for business to follow, but hopefully they don't really believe that it works out that simply. Business, like weather, has some simple driving factors underneath, but at the practical level it is chaotic enough that it should be considered irrational.
If there existed some simple rules on how to succeed in business, eventually enough smart people would figure them out, and their exploiting the behavior would change the system itself. That is, if too many people know how to exceed the average, then they become the new average. If everybody wins, then nobody wins.
The markets are intrinsically irrational (from a distant perspective), yet that doesn't stop techies from applying bad logic to predict or explain their behavior. It's epidemic: examples abound of people explaining why a superior technology will dominate the market, how being nice to the customers will increase business, how careful listening will get the requirements, or any other assumption that presumes the underlying behavior is deterministic, logical or predictable.
This works well for dealing with the internals of software systems, but for business and politics most programmers would do well to accept that they are just irrational and unpredictable.
SUMMARY
If you know that you're biased, it is far easier to compensate for it. However, if you're walking around in an imaginary world, you tend to find that it's an uphill fight in the real one. You keep bumping into invisible walls.
The best way around this is to try very hard to separate these two worlds into two distinct perspectives. We don't have to unify our working perspective with the real one, but we do have to be aware when one is corrupting the other. We can keep them separate, and have separate expectations for each. They don't need to be combined.
When I travel, it reminds me of the greyness, the people and the irrationality of the world around me. Although I can often break things down into simple bits for my systems development work, I always need to be reminded that that is just a small subset of the reality surrounding me.
Computers should be predictable devices that are easily explainable, at least at a high usability level. The real world, outside of the machines, on the other hand, is an infinitely deep messy collection of exceptions to each and every rule. If you think you've figured it out, then you haven't gone deep enough yet.
It brings nothing but pain and frustration to expect that the real world will work on the same principles as a computational one, but still it's very common to see people caught in this mental trap. It certainly is something worth avoiding. Even if you have to occasionally trek around on the other side of the planet to do it.