The world is littered with a growing number of problems. Some of these problems can be solved with a computer, some cannot. To solve a problem with a computer, all it takes is to design and construct a software solution, then get people to use it. Easy?
Solving a problem with software is not nearly as simple as it looks. Many more solutions fail than succeed. The stats are ugly: 66% of all software projects fail, as do 90% of all startups (most of which are founded around software solutions). Why are these numbers so high?
I think the root of the problem is that a good software solution looks simple. It’s very easy to miss the depth of both the thinking and the work that have gone into making it possible. This leads to a lot of people casually tossing around ideas for solutions, with the expectation that they’ll actually solve a problem. Few of these ideas do, and within that very limited remaining subset, the execution and the details mean everything.
To actually solve somebody’s problem with software, the developers need to look long and deeply at the underlying problem. They need to understand it, not only from the 10,000-foot view, but also right down to each of the tiny details. A shallow understanding of the problem is not nearly enough to solve it. The real issues are often counter-intuitive, hidden and not easily grasped.
Even more complex is understanding the nature of how people use technology. Without some type of incentive, many ‘features’ of a system are useless. Making software ‘sticky’ is a rather complex combination of various issues including stability, usability, psychology, feedback and empathy. Again, a broad perspective is not enough to understand these depths.
As well, the developers need to understand the uses and limits of modern technology. Software is still fairly crude, and marketing claims for it rarely match reality. Technological issues are often deeply rooted in their theory and history, so both of these need to be well understood before you can properly assess the limitations, and you need to understand those limits before you can apply the technology to solve a problem.
Running a system development project is no picnic either. Software development is like a large train moving slowly down the track, from station to station. You can’t just throw in a right turn here or there. You can’t really change its direction until you hit a station, and even then the number of other accessible stations is very limited. It is easy to miss these inherent and often dangerous properties.
Right down at the core, software is about programming. But just being able to program is not enough either. Some code is good, while some is just a disaster waiting to happen. Experience teaches one more about the code they shouldn’t write than about what they should. Knowing what’s disorganized, or fragile, or outright dangerous isn’t well documented, and sadly only comes with a tremendous amount of experience. Hacking one’s way through a problem space won’t work; it won’t produce a stable, coherent solution, just a convoluted mess. Really understanding what that means in practice takes a lot of time and a lot of experience. It’s not something that people can just read about, or take a course in, or figure out on the fly.
So why is the rate of failure so high? Easy: most of the people out there proposing solutions don’t have a deep enough understanding of what works and what doesn’t to be able to propose valid ones. Most programmers are viewed as just ‘code monkeys’ whose task is to build someone else’s solution, so the success or failure of their labours depends heavily on the quality of the proposed solution. A crappy solution will either fail outright, or limp along for years. A good solution can be ruined by a lot of crappy sub-solutions, slowly mutating into something horrific.
It’s for this reason that I often distinguish between programmers and software developers. A programmer can code, but a software developer knows how to solve real problems correctly with software. There is a huge difference. It’s also for this reason that I’ve never liked the term ‘stakeholder’. It seems to imply that they have the right to choose the solution because they have a ‘stake’ in the outcome. But if they don’t have enough of a background to pick a valid solution, then the work will fail. I’ve also seen a great number of entrepreneurs out there with the belief that, as business people, they should be the ones to pick the solution; that somehow a business background is enough for them to fully grok the problem, the people and the technical issues. This is probably why most startups produce software that doesn’t actually solve any real problems. It just looks shiny and new. So they fail.
The takeaway from all of this is that if you want to create a software solution to solve people’s problems, you need to give the task to someone with experience in creating working solutions with software: an experienced software developer who understands the history, limits and possibilities of the technology. If you leave it to someone with no experience, the solution is unlikely to be valid.
Software is a static list of instructions, which we are constantly changing.
Wednesday, November 30, 2011
Tuesday, November 22, 2011
The Big Idea
Way back when I was a co-op student, I got a job at a small company that built analytics applications for statistics. My previous co-op employer had laid off the whole division, so I had to find a new gig. There were lots of choices back then, so I picked the company because I would get experience programming in C. It seemed like a good idea at the time.
At first the job was great. I got to work on an interesting little problem for a product about to be released. But as soon as that dried up, the trouble began.
One of the senior statisticians had been asking for resources so he could get a pet project done. He sat way back in a corner office; I hadn’t really noticed him before. In hindsight, the location of his office should have been enough of a clue.
The re-assignment started out OK. He had a bunch of little programs he wanted written in C. Nothing complex, not too hard, and the only big challenge for me was using the language well enough. I had experience programming in Pascal, but C was a little trickier.
My problems started when he began explaining his ‘big idea’. Variables, he said, were really hard to create and name when coding. It takes time to think up the names, and then you have to remember the names you thought up. Instead, he proclaimed, the secret to great coding is to declare a bunch of global arrays at the top of each program, one for each data type, e.g. int, float, double, char *, etc. Then all you have to do is index these to get to the correct variable. That is, if you have an integer variable that holds the number of iterations, all you need to do is remember that it was your seventh variable and type in ‘integer[7]’ and you’ve got it. No need to remember some complex name, you just need to know the type and when you created it. Simple.
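For readers who have never seen this style in the wild, here is a minimal C sketch of what his scheme looks like next to ordinary named variables. The function names, array sizes and the little summing loop are all hypothetical, just an illustration of the idea, not his actual code:

#include <stdio.h>

/* His 'big idea': one global array per data type, indexed by order of creation. */
int    integers[100];
double doubles[100];

void sum_the_big_idea_way(void)
{
    integers[7] = 10;                 /* the 'seventh' integer: the iteration count */
    doubles[3] = 0.0;                 /* the 'third' double: the running total */
    for (integers[8] = 0; integers[8] < integers[7]; integers[8]++) {
        doubles[3] += integers[8];    /* which value was doubles[3] again? */
    }
    printf("%f\n", doubles[3]);
}

/* The same loop with named, tightly scoped variables. */
void sum_the_normal_way(void)
{
    int iterations = 10;
    double total = 0.0;
    for (int i = 0; i < iterations; i++) {
        total += i;
    }
    printf("%f\n", total);
}

int main(void)
{
    sum_the_big_idea_way();
    sum_the_normal_way();
    return 0;
}

Both functions compute the same thing, but in the first one nothing in the code tells you what integers[7] or doubles[3] are supposed to hold.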
He didn’t really understand why I recoiled in horror. This was, of course, the most absurd thing I’d ever heard in my short little career. I understood just enough to know why this wasn’t just a step backwards, but rather a quantum leap backwards. It was ‘off the charts’ crazy as far as I was concerned.
I tried to explain why this would render the code completely unreadable (and probably full of bugs), but I was just a student programmer, so he dismissed it. I ignored his directive to implement it this way and wrote the code normally, you know, with real variable names and scoping as tight as possible. But I wasn’t used to C and I used some of its rope to hang myself (my code was worse than flaky). And so my failure re-affirmed his belief that he was right. The situation grew worse.
Fortunately I was saved by the co-op work term coming to an end. It was a horrific experience, but on the bright side I did end up with C on my resume, which led to my next job (where I apprenticed with someone who actually taught me how to code properly).
I learned a lot of different things from that experience, and even after more than two decades I’m still learning new things. But the most important of them was how easy it was for someone to get it horribly wrong. Although he had written far more code than I had at the time, he hadn’t written nearly enough to really understand what he was doing. A big idea is far away from actual understanding, but without the latter, the former is unlikely to be reasonable. And unfortunately for us, software development appears simple to those who haven’t plumbed its depths. Thus we get a lot of big ideas …
Wednesday, November 9, 2011
Thinking About It
That dreaded moment comes in every software project development cycle. The plans are too ambitious, the time too short. After an exciting start, the groove descends into the valley of just plugging away, day after day, while trying hard to keep to an impossible schedule.
It comes every time, on every project (if it doesn’t, please let me know where you work :-)
What I’ve witnessed over and over is that this moment pushes programmers into speeding up their pace. They convince themselves that what’s needed -- what would make it better -- is more code. So off they go, coding faster and faster, cutting more and more corners.
But that sense of panic is working against them. And against their goal of getting the software released.
In an abstract sense, one can view software development as the act of ‘thinking’ about how to solve problems in a robust manner. The code itself is just a by-product of that thinking. The work is problem solving, but the final output is code.
When you speed up programming, it comes at the cost of reducing thinking. The programmers just fall back into writing the most basic rudimentary splat of code that appears to run. Because of the rush, that code is inherently fragile; the little niceties and less obvious corner-cases are always the first casualties. Spaghetti is a real possibility.
Even worse, ‘code re-use’ goes completely out the window. So now the system is rapidly getting filled with redundant code that is fragile, inconsistent and most likely convoluted.
As this first wave results in a mounting bug list, it gets plowed under by even more waves, each one going faster, each one with a deeper sense of urgency, but each one causing more damage. The bug list explodes, the specs change and the technical debt swells to epic proportions, often ending the project or, at the very least, resulting in an unstable release that is a nightmare to use or upgrade.
And thus speeding up the coding is a recipe for failure.
What does need to happen is to either speed up the thinking (if that is even possible) or start slashing the goals. Deep, but well-thought-out cuts (to avoid painting oneself into a corner) can push the excess work into later cycles. So long as enough thought about the future is applied, the added technical debt can be minimized. Meanwhile the development pace should be maintained. Evenly.
Sure it’s scary, but with experience this point in the process can be anticipated and even managed properly. A code panic, on the other hand, is most likely to lead to ruin.
Tuesday, November 1, 2011
Problem Solving
"Mainframe
guys?! What do they know? They’re dinosaurs, the industry passed them
by years ago...” was my response twenty years ago, when I first got out
of school.
Someone had recommended that I take a look at how the mainframe guys (and gals) had solved a particular problem way back, while I was still just finding my identity in high school. But I dismissed it far too easily. Things, I felt, were changing too far, too fast.
In hindsight, of course, it took me a long, long time to understand how wrong I really was. There were indeed things that I could learn from all of that earlier intellectual work. Problems that were solved, done and dusted. And what I truly failed to grok was that the work was done when there were fewer expectations, and more of an ability to focus longer and deeper on getting better solutions.
“Don’t reinvent the wheel” isn’t about using libraries or other existing code. It’s about not re-solving problems that in many cases were solved decades ago. And even deeper, it’s about not just quickly solving them in a brute force manner if there is already an elegant solution that has been deeply thought about and made available.
Still, being young, I wanted to solve problems right away and I guess that orients one to search for low hanging fruit. But inevitably, as there has been a tremendous amount of water under the bridge already, pretty much all of the low hanging fruit has been tasted by others. Many others.
The trick then is to spend some time first, to see if the problem is really new and unique. If it is, great. But if it’s not, avoid falling into the trap; learn from what people already know and try to utilize what was done before. Under time pressure that can seem time-consuming and dangerous, but reinventing really common wheels is a guaranteed time sink that will only make the time pressures worse. After all, it is hubris to think that one can do a better job in a week or two of hacking than was done decades before with thorough analysis and experimentation. Most solutions for low-hanging problems out there are far better than anything we can dream up in a rush. The time spent learning them is time saved from falling prey to well-known issues.
But alas, I think it took me about 15 years to figure that out ...