Wednesday, April 23, 2008

Creatively Speaking

Somewhere in my youth, I came into possession of a very simplified model of the human brain. I'm not sure from where; it was so long ago that its roots were lost in time.

The idea is simple: the brain consists of three, and only three, things. First is memory. We can memorize stuff. By that, I mean that we take some fact, associate it with a number of keys and then, on demand, we can return the fact as needed. It is committed to memory. It becomes a piece of knowledge, something we know. I'll quietly ignore long-term vs. short-term memory for this model.

The next thing is understanding. That is, we understand something. By this, I mean that we can take some set of facts that we have learned, and at the appropriate time use them for some piece of higher reasoning. If you've just memorized a fact, that's nice, but until you understand it, you can't really take advantage of it. In some sense, understanding is really just how that set of facts is linked to other sets of facts that can in some way drive or change your behavior. You 'know' what a stove burner is (fact), and you 'understand' that once it's turned on, it is hot and will burn you.

The very last thing is creativity. This, conceptually, is a little more complicated. If a fact is a thing you remember, and understanding is how it is linked, creativity in some offbeat way is how you can create 'dynamic' linkages between unrelated facts or understandings. You might have a pencil, for instance, and you know how to use it to draw shapes and textures, but at the time you are working you're feeling blue, which is some intangible emotion. If you use the pencil and your ability to draw to express your emotional state by drawing a dark, moody image of yourself, you are taking your pencil and your self-view, both facts, applying your understanding of drawing, and then 'combining' that with your current emotional state to 'create' something that on its own would not come into existence for anyone else. You are creatively expressing your emotions through a self-portrait with a moody overtone. The work is not reproducible.

Creativity, then, is this ability to combine two entirely unrelated things, in arbitrary, random, or mostly irrational ways. The base things are either facts or understandings.
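
Just to make the model concrete, here is a toy sketch of it in code. This is purely illustrative -- the names and mechanics are all my own invention, not any real cognitive model -- but it shows the three parts cleanly: memory as keyed facts, understanding as deliberate links between facts, and creativity as an arbitrary jump between unlinked ones.

    import random

    # Toy model only: all names here are hypothetical.
    class Mind:
        def __init__(self):
            self.facts = {}    # memory: key -> fact
            self.links = {}    # understanding: key -> set of related keys

        def memorize(self, key, fact):
            self.facts[key] = fact
            self.links.setdefault(key, set())

        def understand(self, key_a, key_b):
            # a deliberate, rational link between two known facts
            self.links[key_a].add(key_b)
            self.links[key_b].add(key_a)

        def create(self):
            # an arbitrary 'short' between two facts that were never
            # linked (assumes at least two facts have been memorized)
            key_a, key_b = random.sample(sorted(self.facts), 2)
            if key_b not in self.links[key_a]:
                return (key_a, key_b)  # a novel, non-rational combination
            return None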


CONSEQUENCES

A consequence of this simplified model is that the abilities of a human are, on one level, quite simple. We build up knowledge -- linking it to other bits -- in ways that we find useful. Indirectly, as if it were a computer program run amok, one can see creativity as some type of inherent 'flaw' in our brains. After all, connecting two things that "shouldn't" be connected is not rational or deterministic. It's like a computer program randomly changing its data; it is outside of the norm. The brain shorts out, or something.

From a natural point of view, this model allows one to see human progress as a long history of people gradually creating new ideas and memes, either by observing them in nature or by randomly combining together unrelated bits. In a direct sense, all of our knowledge beyond what is quantifiable is accidental. We stumble into discovering it.

That's a little hard for many people to swallow, which likely shows that this model is far too simplified to be absolutely correct, but it is still quite interesting. To get to the really high concepts, they have to be based on lower ones, which somewhere down the line have to be based on observable things, or creative leaps.

I like this model because it provides nice tidy explanations for many things. For instance, why somebody with a 'photographic' memory does so well in their early university courses, but then has trouble later on: they never really understood the original material, just memorized it. The later courses require understanding as well as memorization. You can't build on a foundation that you don't have.

Intelligence is not just our ability to remember facts; understanding and creativity play a huge role as well. A brilliant person, then, is one who has a lot of facts and understands them well, while a genius is an intensely creative person who may not actually sit on that big a reservoir of understandings or facts, but can combine what little they have into interesting new bits. There are a lot of brilliant people, while creativity is a rarer thing.

This model harmonizes 'intelligence', and shows that one cannot easily compare one person's three values against another's. With three dimensions, you cannot easily rank intelligence. Often you can meet people who can quote tremendous amounts of knowledge, but seem to have trouble really putting the pieces together. Your typical absent-minded professor may be incredibly forgetful, but full of deep understandings. It is a great perspective on our abilities.


COMPUTER SCIENCE

Now, all of that being interesting, what does it have to do with computer science? I recently read an excellent write-up by someone talking about how they believe that math is actually creative. It is known as Lockhart's Lament and it is at:

http://www.maa.org/devlin/LockhartsLament.pdf

This, to me, is a hugely mistaken characterization of what it means to be creative, of what mathematics is, and of why it is so special. In a way, relating it back towards creativity takes away from the true underlying awe and beauty of mathematics. It trivializes it.

In a very real sense, mathematics is an infinite set of axioms over an abstract space. It does not exist in our real world, and because of its rigour, which could not exist in this world either, it is purely abstract. Math is sort of this pure, crystalline abstract structure that we can perceive if we concentrate on it long enough. It exists, but without substance. A truly amazing thing.

As such, all math exists and has always existed. It always will, with or without us.

The branches, axioms, primitives, etc. of the various different types of math are our expressions of that abstract foundation. It is out there, right now: all of our existing math -- a drop in the overall bucket -- and all of the stuff we still do not know. Mathematicians set out on a quest to 'find' new mathematics. They find it, prove it is rigorous, and then explain it to the rest of us.

In that sense, mathematics as we know it is a huge set of facts and a significant amount of understanding about those facts. In its abstractness, it is consistent and contained. It is not contaminated by the real world, where entropy and chaos dominate. It exists in a pure state: perfect, rigorous and eternal.

Knowledge and understanding then easily define our existing view of mathematics. They exist, so there is little that is inherently creative in their existence. Learning mathematics is not a creative exercise. You study the existing works, memorize the facts and work on problems until the understanding of the facts kicks in. If you are being led down the garden path, then as exciting as it is, learning about some new set of axioms and how to use them really isn't your creative spark, is it?

If learning mathematics is not particularly creative, then the big question must be about 'finding' the ideas initially: whether or not that is a really 'creative' endeavour.


DOWN ANOTHER HOLE

To digress just a little bit: software development is, in its very own way, a form of expressing a set of instructions to a computer that in its abstractness and rigour is similar to the act of creating a mathematical proof. Expression, be it about what you had for lunch, the weather, or some specific algorithm for searching, is a form of communication. The underlying topic may change, but choosing English, French, C or Java for your expression is still a way of communicating something. English is non-rigorous and goes to another human being; C is very rigorous and goes to a computer, but if both are about algorithms, then they are just different ways of expressing the same underlying thing. We only choose to express vastly different things in the different mediums.
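
To make that concrete with a small, hypothetical example of my own (I'll use Python here instead of the C or Java mentioned above, but the point is identical): the English sentence in the comment and the code below it both express exactly the same searching algorithm, one non-rigorously to a human, the other rigorously to a computer.

    # English: "look at each item in the list, in order, until you
    # find the one you want; if you run out of items, it isn't there."
    def linear_search(items, wanted):
        for position, item in enumerate(items):
            if item == wanted:
                return position   # found it
        return -1                 # it isn't there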

There is, of course, a question about the expression of intangible qualities. It is one of those that Richard Gabriel, in his book "Patterns of Software", says architect Christopher Alexander called "the quality without a name". We have to categorize the world into those things that are determinable, tangible in some way, and those that are not. Tangible usually means some form of physical existence. The intangible mostly comes to us personally, in some way that is spiritual to some degree. By that I do not evoke any higher meaning, other than to describe 'spiritual' as a mood experienced by humans. It is one of our many emotions. In Gabriel's book he talks about the feelings one gets on going into huge architectural wonders, often churches, but not always. It moves people in some intangible way. That unnameable quality, to my mind, is just another intangible human emotion. Perhaps awe, or something similar.

That, fortunately, makes for a nice and tidy taxonomy, since I've often described fine art -- for example -- as having an embedded emotional context, which is what separates it from merely being graphic design. If it stirs you in some way, it is art. If it is just pretty, then it is only graphic design. The same is true for music: it moves you, or it is 'pop' music.

What we can perceive with our intelligence is tangible, and what we can experience with our emotions is intangible.

And so, in my earlier example with the drawing, the self-portrait combines two very distinct things: one quite tangible, the representation of a human, and the other intangible, the emotion of feeling somewhat blue. The creative 'short' or spark that drives that is not natural to most people. Afterwards, some of us can see what the artist was getting at, and are moved by it, but left in the same circumstances, we -- not being that specific artist -- would not have been creative in that same way.


MATHEMATICS

Coming back to mathematics: we know that the math is out there, and that it is abstract, yet not intangible. It is a thing, and if you search hard enough, eventually you will come upon it. I am not saying that it is in any way easy, or that it would take humanity anything less than thousands of years of culture before stumbling on a complex concept like negative numbers, which is exactly what happened. It also took a long time for us to figure out zero, complex numbers, calculus, etc. Some humans clearly can get there faster, but the road needs to be traveled by all, collectively.

We could not have complex branches of mathematics, for instance, without having an understanding of negative numbers. The intuitive leap that one and only one person can make can only go so far. We are limited by our life spans. The imagery of standing on the shoulders of giants is exactly correct if you realize that the giants themselves are only just other people standing on other people's shoulders. And on it goes.

We are left, then, with the understanding that finding new forms of mathematics is a search, and one that starts only a limited distance from our last existing searches. With each new generation we can cover new ground, but only just barely. Although an individual learns the existing knowledge and then the existing understandings, they can only start searching from that point on. To them it is an open, unexplored field with hidden surprises; to humanity, however, it is just part of an ongoing permutation through all of the localized space relative to the last set of solved problems. So, in the end, the best that a mathematician can do -- in that sense -- is to find or get to their discovery before the horde of people following them do. Speed is important, but so is being in the right place at the right time, as the field is constantly filling up with others overturning the same rocks and prodding the same bushes too. If you don't discover it, they eventually will.

All of this really comes back to whether or not someone prodding around in a big field looking for interesting bits of mathematics is in fact being creative. And, to some very large degree, if mathematicians are creative in what they find, then by virtue of their similarity, computer scientists -- at least the very first time they are writing something really unknown -- are equally as creative.

As if you needed more twists, we still need to try and relate this situation back to something more physical.

The simplest mapping would be to look at a graphic artist. They are, by virtue of their training, able to produce things that are visually appealing. Yet, as we are awash in pretty graphic design, you know that their work is not stirring in any fashion.

It looks nice, but it does not give rise to emotions. That's because while they have taken their facts and their understanding -- in this case abilities and skills at things like drawing or matching pleasing colors -- they have not crossed the gulf and gone on to combine these with some intangible emotional context or any other thing that is out of the ordinary. Their skills may be beyond what some in the 17th century would have been able to do -- appearing back then to be 'art' -- but here and now, in this time, they are applying the things they learned in school to a specific end. Understanding, yes; creativity, probably not.


BACK AROUND AGAIN

So going back to our mathematicians: as they ponder and search throughout the field, are they really being creative, or are they just applying the skills they were taught towards the goal of identifying previously unknown bits of mathematics? There may, I admit, be some truly creative spark there that helps particularly gifted mathematicians get to a specific point in the field faster than others and allows them to identify their thing rapidly. How else can you explain Einstein? Although his leaps were related to a specific science, not general abstract mathematics, the distance that he appears to have travelled vs. his contemporaries at the time seems to be enormous. But again, what he found was there plainly for all to see, if they wanted to, or were willing to look. Relativity always was, and always will be. You just have to want to see it. But still, the leap was big, and it took twenty years for his fellows to catch up with him and get to that particular place.

And so, for me, I'd have to say that most of the leaps going on in mathematics are not leaps, but a natural progression of steps, and that these steps are an expected consequence of the knowledge and understanding that come from studying the field and really understanding it. Sometimes, but very rarely, a few make creative-ish giant leaps, but in general people are just working towards the goals of their field. If any one famous mathematician had died early, it might have taken us longer to get to their understanding, but we would still get there. The results might be slightly different, possibly composed of a different set of primitives, but they would still be functionally equivalent. As mathematicians create proofs, so too do computer programmers create code. It is exceptionally less complex, and far more mundane, but it still shares the same type of relationship.

The code exists; we just have to find the sequence of instructions that expresses the solution to the problem in the most elegant fashion. So, we have to ask again: is programming really creative? The answer would have to be rarely. Only at that moment where we are combining two unrelated things in a truly unique way are we actually allowing our brains to take creative leaps. Otherwise we are just applying our understanding of how to code.

If you're just doing another day's worth of problem solving, then those are exactly the skills programmers should have and were taught. Software design has creative moments, and often the design is left unspecified so that some of the effort is required of the programmers, but mostly the business domain, technical domain and best practices remove a significant number of the degrees of freedom. The interfaces share the same creative tones as graphic design. Most of it is really the application of an understanding.

Is software development art? Never. There is virtually no place in software for the expression of intangible emotions; I can barely imagine what that might look like. Tools are just tools. But, to slightly confuse the issue, the content of the programming could be art. That is, you could create a masterpiece of art using code as the means of expression. Game designers and many of the early pioneers of tools like Flash succeeded or came close.

But really, a tool that moves you to tears is somewhat counter-productive, yes? It would be difficult to be both stirring and functional. Although, I do have to admit that some big-company software products have caused me to cry, but really, that is only due to dealing with their crap, and my inability to find a better replacement. The code did not stir the emotions; the angst of dealing with it did.

Sunday, April 13, 2008

The Essence of Analysis

In any venture, if you do not start off on the right foot, your chances of success are dramatically diminished. It is easy to understand that the farther away we are from our initial target, the longer it will take to get there. If we have no idea where we are even going, then we won't even know if we have ever arrived. Some things just are.

Many software developers understand that design is a highly critical part of getting the software right. You can't just magically iterate your way into the perfect system without first understanding what the system is supposed to do. Formal, or on the back of a napkin, developers need a target to work towards. However, it is often the case that even the best of all designs still cannot help if the even earlier stages are fatal. Design is not actually the first step.

The very first thing, for anything being built, is to have a reason to build it. A building without occupants, a book without readers, or a software product that does not solve any real problems are all examples of things that got lost in their very first stage: analysis.


SOME EARLIER THOUGHTS

A while back, I wrote down some ideas about what I thought were the essential development problems:

http://theprogrammersparadox.blogspot.com/2008/01/essential-develoment-problems.html

I identified four big issues: a) perception, b) discipline, c) generalization and d) analysis. Of these, analysis is the one that I have talked about least, primarily because without resorting to domain-specific examples -- generally proprietary in the software world -- talking about it is difficult.

But analysis is clearly one of the keys to getting the right thing built at the right time. Turning a list of functions into executing code takes effort, but turning a vague understanding of someone's process into a list of functions takes genuine understanding. It is a very difficult, very overlooked and very uncommon skill set.


BASIC DEFINITIONS

In analysis, only one group of people is really important: the users of the software. All of the other 'stakeholders' have some effect in helping or derailing the project, but your quality of work comes from pleasing only one group. For this entry I'll focus only on them, leaving the others as just background obstacles to be overcome.

All that software can do is help a user manipulate a growing pile of data. Whether it is visualization, tracking or editing, software is incredibly simple; each function just helps play with the information in some way. The key to building good software, then, is obviously to tie this back to something useful in the user's life. Software that helps solve specific problems is software that is desirable. It extends the user's ability to manage their information efficiently and effectively. Software that violates that covenant just irritates its users. Realistically, most software is a mixture of useful and irritating features. Sometimes on purpose, often not.

Without a doubt, the most common approach to software development is a combination of hanging around in an ivory tower and collectively guessing at the sorts of problems that the user might want solved. If this sounds haughty and aloof, it generally is, which is why a great deal of software lacks a considerable amount of empathy for its users. Missing empathy equates to sticky, hard-to-use features that beg to be rewritten. It doesn't need to be like that, but that's the industry norm.

If you stand back a bit, you can see that software solves two basic kinds of problems: one from the business domain, the other from the technical. The technical domain is the packaging that is required by the programmers to get the business solution to the right people. The users, for example, may want to keep track of their financial information; the database, GUI, programming language, etc. are all just technical solutions that help the user access the financial tracking capabilities that they need. From a user's perspective, they are essentially noise that has to be put up with in order to accomplish their goals. Users don't, and shouldn't, care about the technical issues.

Programmers, on the other hand, love solving technical problems, which are easier and more straightforward. As such, they try to take those abilities learned from creating technical solutions and throw them at the business problems, generally with very painful results. Technical problems are consistent, simple and rational. Business problems, at least from the perspective of the programmers, are messy, irrational, crazy things that make no sense: the type of problems that scare most developers. Trying to wrap the irrational in a simple consistent model is a hopeless task.

Because of that messiness, most programmers prefer working on technical issues. If you need any real proof of that, a quick examination of the open source movement shows that for the most part the projects -- particularly the successful ones -- have been entirely spawned around and dedicated to technical problems. Volunteer programmers want to tackle easy, fun problems, so they are naturally drawn to the technical ones. Why tackle the painful stuff for free?

Not surprisingly, the bulk of the world's programming problems are not technical. They are domain specific, which is why commercial development and consulting are still significant industries, even in the face of so many programmers seemingly giving away their craft. Given the difference between the two types of problems and the obvious lack of appeal of the ugly business code, it is more than likely that software development will continue to be a well-paying profession for some time.


CONSULT THE EXPERTS

A computer programmer is someone who can assemble a complex set of instructions intended to implement some functionality in a software program. A software developer, on the other hand, is someone who can analyse real-world issues, isolate the problems and then conceive, design, implement, test and deploy a working solution to help with those problems. The difference is clearly the scope of the work being performed. There are lots of computer programmers, but very few real software developers. You need to master the technology first, and then grow your abilities to start tackling the really hard problems.

The users are the experts at understanding their own problems, but an experienced software developer is the expert at turning that into some workable solution. Naturally, because domain problems are extremely messy, deep and irrational, years of experience with a specific domain are important. You can't solve problems if you don't really understand them.

While the developers need to understand the domain in which they are working, they are not by definition the experts. They don't have to know it all, or necessarily be functional in it. They really only need to see it from the perspective of the computer software:

- the data, its structure, frequency, quality, etc.
- the processes, their timings, interactions, differences, etc.
- the players, their roles, responsibilities, expectations, etc.

A computer takes a deterministic, static view of the data, so all that the software developer really needs to understand is that viewpoint. But that viewpoint must be understood really thoroughly, to an incredible level of detail. The success of software rests on its implementation of the details.
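
As a minimal sketch of what capturing that viewpoint might look like -- the structure and field names here are my own invention, not any standard -- an analyst's notes for a single piece of data could be as simple as:

    from dataclasses import dataclass, field

    # Hypothetical analyst's record for one piece of domain data;
    # the attributes just mirror the three lists above.
    @dataclass
    class DataNote:
        name: str          # what the domain calls it
        structure: str     # e.g. "decimal, 2 places, always positive"
        frequency: str     # e.g. "updated once per business day"
        quality: str       # e.g. "hand-keyed, so expect typos"
        source: str        # the process where it is captured or entered
        players: list = field(default_factory=list)  # roles responsible

    trade_price = DataNote(
        name="trade price",
        structure="decimal(12,4), non-negative",
        frequency="thousands per day, bursts at open and close",
        quality="from an exchange feed, occasional corrections",
        source="upstream market-data feed",
        players=["traders", "middle office"],
    )

Nothing about the notes needs to be this formal; the point is only that the data, the processes and the players each get pinned down in detail.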


FINDING THE PROBLEMS

The first mistake most developers make is trying to solve the wrong problems. Computers can't change things; they can only act as an extension or tool for the users. In that sense, any software development is an automation project, one that is geared towards a specific pile of data. Thinking you're shifting their paradigm, rocking their world, or changing the way they work will ultimately fall flat. At the very most, someone farther upstream might be able to eliminate some lower-down position, with help from the computer, but at the very end, it all needs to be anchored back to a person, and they, at least in their capacity of using the software, aren't going to be pure management (which is a reporting-only position).

So, software problems are simple automation projects that involve building up piles of data. That leads to a fundamental property:

- data has to be captured or entered into the software.

It sounds silly, but in large projects people often fail to trace the data back to its source, a simple exercise that easily reveals many of the worst detail problems with the foundations. If you trace the data back to its source, you generally gain a broader understanding of the domain. Even in a big warehousing project, all of the data starts somewhere, be it an interface for service people, some type of capture scanning, or a feed from some other database. The other database had to get its data from somewhere. In general, most data was entered directly from manual forms on a screen. If you dig deep enough you'll find the real source, and thus the real problems. For each piece of data, you have to know the quality and frequency; that is vital.
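
A minimal sketch of that tracing exercise, with entirely made-up dataset names: if each store of data records where it got its data from, then following the chain upstream ends at the real source.

    # Hypothetical provenance map: dataset -> where its data came from.
    upstream = {
        "warehouse.accounts": "crm.accounts",
        "crm.accounts": "web form (hand-entered by the sales staff)",
    }

    def trace_to_source(dataset):
        path = [dataset]
        while dataset in upstream:   # keep walking upstream
            dataset = upstream[dataset]
            path.append(dataset)
        return path

    # trace_to_source("warehouse.accounts") ends at the web form --
    # hand-entered data, so the quality problems start right there.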


UNDERSTANDING AND STRUCTURE ARE KEY

Bad assumptions and misunderstood data are very common problems that propagate throughout development projects. I've speculated in some of my blog entries (Age of Clarity, The Science of Information) that there is essentially one great superstructure underneath, but whether or not you believe that, getting a real, working, proper model of the underlying data is a hugely problematic issue. In many industries, even after thirty years, huge elements of the underlying data are frequently misunderstood. Guessing -- which is the most common approach to understanding data -- is dangerous, particularly if those poor initial pot-shots get permanently enshrined in the schema for reasons of backward compatibility.

So many software programs start off with a slightly-off underlying model, and are never able to truly fix their original problems. Seeing the danger of that, it is important to simplify, but not to miss any obvious chunks. Because of the limits of our current technology, it is extremely hard to 'uncomplicate' a schema once it is in production.

Clearly, the best and most reasonable approach is to seek out the users on their own ground and have them explain their problems to you in their own language. This initial babble is only 'incomprehensible' when you don't understand their domain. As such, any communication impedance mismatches, or user complaints, most often point to significant domain issues that you need to understand and resolve. Programmers are quick to dismiss their users as 'stupid and ugly', while software developers know to look deeper. We build for them, and if we don't take the time to see their point of view, that is reflected in our work. With understanding and experience, the things that you initially took to be trivial turn out to have significant value.

There are of course, times when the users are just letting off steam, but unless you know that for sure -- not as an assumption -- then it is best to assume there is something tangible, even if it is slight, underneath that is worth considering.


THE USER IS NOT THE DATA

While data has an underlying structure, that structure is rarely what the user wants to understand or interact with. For example, we want to interact with our computers through a desktop metaphor, while underneath they need to deal with files, folders and links, and under that: file formats, data structures, bytes and bits.

An incredibly common mistake, particularly when it comes to code generators, is to assume that the user perspective and the data perspective are identical. The data contains inherent ugliness that the computer should shield the user from. In a classic system, the database schema will contain a level of detail that is completely uninteresting to the user. Their application summarizes, stores context, and essentially hides away a great deal of that detail. And the user perspective is irrational; there is no formulaic way to tie it back to the data model. It will be similar, but it will always be very different, in irrational ways.

Harmonizing that translation between the user perspective and the true data perspective is another great problem. Over time people have understood that it was there, but all too often they have attributed it to the wrong causes, like assuming it was a physical storage issue, or just an alternative viewpoint.

I guess for simplification reasons, we would really like to have one single consistent model for all of our data. And having one model that we could magically use to generate all of the code is the next logical conclusion. But, as many programmers know, there are a huge number of transformations happening in any well-written software. Right now, we just do the work and stick it where it is convenient, but the duality between the structure of the data and the user view points to an area where we really should be able to separate out the different models in the architecture and formalize (to some degree) the transformations back and forth between these different perspectives.
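
A minimal sketch of that separation, with invented names and an invented schema: the underlying data model keeps its full ugly detail, the user's view model keeps only what the user actually thinks about, and the translation between the two is one explicit, testable function rather than being smeared throughout the code.

    from dataclasses import dataclass

    # The underlying data model: full detail, as the schema needs it.
    @dataclass
    class AccountRecord:
        account_id: int
        legal_name: str
        balance_minor_units: int   # cents, to avoid floating-point trouble
        currency: str
        status_code: int           # 0=open, 1=frozen, 2=closed (hypothetical)

    # The user's view model: only what the user cares about.
    @dataclass
    class AccountView:
        name: str
        balance: str               # already formatted for display
        is_active: bool

    # The transformation between the two perspectives, in one place.
    def to_view(rec: AccountRecord) -> AccountView:
        return AccountView(
            name=rec.legal_name,
            balance=f"{rec.balance_minor_units / 100:,.2f} {rec.currency}",
            is_active=(rec.status_code == 0),
        )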

With further consideration, you can see this as a strong reason why many attempts at re-use fail so badly. The developers, in their quest for one overall model, mix the application-specific structure with the underlying real structure, tying the resulting code absolutely to the one and only application for which it mostly matches.

Beyond the technical problems of implementation, this has a big impact on analysis. The analyst needs to work with the users to see their perspective, but they also need to get down to the real underlying data perspective, something that the users may or may not be aware of. The user champions, then, are often only really experts at half of the details. The analyst still needs to find and understand the other half. That is a huge problem, but manageable if you are aware of its existence.


CONSOLIDATING THE KNOWLEDGE

So, great: you've successfully gone out and found a hundred or so really important 'functions' that would make your users' lives easier as they accumulate their growing pile of data. Randomly slapping these 'things' into a GUI is a bad idea. Sure, it is easy to just extend the system by attaching a 'bag' onto the side of it with some disconnected functionality, but that is the type of analysis and design that makes all of us hate those 'big company' programmers.

If you've really come to understand your users' needs, then you should understand how to integrate them back into the existing solution without just gluing on more disconnected features. It is this consolidation of the feature set that is often extremely difficult. But finding a way to accomplish it not only makes for a better tool, it also significantly cuts down on testing. Testing one slightly enhanced application is less work than testing two disjoint ones (although most companies fail to recognize this, and only increase their testing resources by a fraction of what is really needed, so they get two apps tested to a lesser degree than the original one).

Mostly, we design our tools to have some consistent overall metaphor, or style of working. Extending that, while maintaining its consistency, is crucial. At times, the right answer may actually be to throw away all of the old perspective and go up to the next level for a totally new one. Keeping everything simple and consistent is extremely hard, and extremely dependent on the underlying domain and tool set.

Backward compatibility is important to some degree, but that should not be used as an excuse to not properly integrate functionality into an application. The tools should get easier to use as the coverage of the functionality grows, something that is clearly not happening with a lot of modern software.

Consolidation is as hard as the initial analysis, and is considerably more important. We've reached the stage where we can bang out some pretty sophisticated software products, but we have not yet learned how to take a large amount of functionality and make it useful. This is obviously a significant area where Computer Science needs to grow up.


THE LAST WORDS

Bad, poor, or missing analysis is the start of a doomed project. You can't sit in a cubicle, miles away from your users, and expect to be able to write useful software for them. Where this approach succeeds, it is only and absolutely luck, and nothing else. It is easy to see why so much of our software is so defective.

In five years of constant programming, most people who stay at it will master, to some degree, the ability to structure sets of instructions for the computer to execute. These people are computer programmers, and mostly they should get some help from others to determine what those instructions should accomplish.

Our industry's standard is to recognize a five-year career as being senior, and while that might be true of someone attempting to master implementing specific functionality in a particular technology, it isn't even remotely true of someone trying to learn how to analyse a problem domain and use that information to build tools. Far more is required to build usable things than just belting out sets of instructions.

The secret, then, if there is one to be known and understood, is to master the various techniques and technologies of software development, but never ever, even for a moment, 'assume' that you have completely mastered the problem domain issues. Well, once you have re-written the same system three times, you're probably on your way towards mastering that specific issue, but jump problems, domains or industries and wham! you're back at ground zero again. We have to be very careful to understand the difference between what we know and what we assume we know. Building a simple system to assemble a big pile of data does not mean we really understand how that data works in the world around us.

If you can accept this, then you can prepare for a significant number of serious problems leaking into the development from your lack of knowledge of the problem domain. Once you are past your own ego, you are a little more likely to accept the problems and their irrational nature for what they really are: the things that your software needs to solve to become better. Building software is easy; knowing what to build is the hard problem. All too often our industry wants to make the former sound harder and ignore the latter.

Whether starting from scratch or extending a system, an experienced software developer makes a huge difference. Luck is the only thing that replaces a lack of analysis. The software industry's high failure rate comes from its inability to recognize the importance of hiring developers with senior analytical skills to properly lead development projects. Analysis is a significant skill set. It is unique and not related to writing code. It is not just taking notes from user conversations and building up a list of requirements. It is not writing down stories, or guessing at how people might use things. It is a full, in-depth understanding of the users' needs and the underlying data, and how to tie these two disparate things together into a tool that is usable.

Even after nearly twenty years and many different software development domains, I am still often surprised by the 'depth' of complexity that some domain-specific information is hiding. That is to say, each new problem I attempt to analyse is far deeper, far more complex and far more irrational than I'd ever really expect, and I've dealt with some pretty deep, complex and irrational problems already. But I have certainly seen, again and again, that if you get into the user's perspective and build them a tool that really works, while capturing (but hiding) the true complexity of the data, you've come very close to really mastering software development.

If you get the analysis correct, then all you have to do is build it :-)