[LACK OF EDITOR'S NOTE: I wrote and posted this entirely on my iPad, so there are bound to be spelling and formatting problems. Please feel free to point them out, I'll fix them as I go.]
I use a couple of metrics to size development work. On one axis I always consider the underlying quality of the expected work. It matters because quality lives in at least a logarithmic space: small improvements in quality get considerably more expensive as we aim for better systems. On the other axis I use scale. Scale is also at least logarithmic in effort.
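To spell out what 'logarithmic' means here (my own back-of-the-envelope framing, not part of the categories below): if a quality or scale level sits at roughly log(effort), then inverting gives

    effort ≈ b ^ level,   for some base b > 1

so each step up in either dimension multiplies the work rather than adding to it.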
My four basic categories for both are:
Software Quality
- Prototype
- Demo
- In-house
- Commercial
Project Scale
- Small
- Medium
- Large
- Huge
Software Quality
The first level of quality is the prototype. Prototypes are generally very specific and crude programs that show a proof of concept. There is almost no error handling and no packaging. Sometimes they are built to test out new technologies or algorithms; sometimes they are just people playing with the coding environment. Prototypes are important for reducing risk: they allow experience to be gained before committing to a serious development effort.
Demos usually bring a number of different things together into one package. They are not full solutions; rather, they focus on 'talking points'. Demos also lack reasonable error handling and packaging, but they usually show the essence of how the software will be used. Occasionally you see someone release one as a commercial product, but this type of premature exposure comes with a strong likelihood of turning off potential users.
My suspicion is that in-house software accounts for most modern software development. What really identifies a system as in-house is that it is quirky. The interface is a dumping ground for random, disconnected functionality; the system looks ugly and is hard to navigate. Often, but not always, the internal code is as quirky as the external appearance. There is little architecture, plenty of duplication and usually some very strange solutions to very common problems. Often the systems have haphazardly evolved into their present state. They get used, but more often because the audience is captive; given a choice, users would prefer a more usable product. Many enterprise commercial systems are really in-house quality. Sometimes they started out higher, but have gradually degenerated to this level after years of people just dumping code into them. Pretty much any unfocused development project is constrained to this level. It takes considerable experience, talent, time and focus to lift the bar.
What really defines commercial quality is that the users couldn't imagine life without the software. Not only does it look good, it's stunningly reliable, simple and intuitive to use. Internally the code is also clean and well organized. It handles all errors correctly, is well packaged and requires minimal or no support. Graphic designers and UX experts have contributed heavily to give the solution both a clear narrative and a simple, easily understood philosophy. A really great example actually solves all of the problems that it claims to. Even the smallest detail has been thought out with great care. The true mastery of programming comes from making a hard problem look simple; commercial systems require this both to maintain their quality and to support future extensions. Lots of programmers claim commercial-quality programming abilities, but judging from our industry very few can actually operate at this level. As users' expectations for scale and shorter development times have skyrocketed, the overall quality of software has been declining. Bugs that in the past would have caused a revolt or a lawsuit are now conveniently overlooked. This may mean more tools are available, but it also means more time wasted and more confusion.
Project Scale
It is impossible to discuss project scale without resorting to construction analogies. I know programmers hate this, but without some tangible basis, people easily focus on the wrong aspects or oversimplify their analysis. Grounding the discussion with links to physical reality really helps to visualize the work involved and the types of experience necessary to do it correctly.
A small project is akin to building a shed out back. It takes some skill, but the skill is easily learned, and when the inevitable problems occur they are relatively well contained. A slightly wonky shack still works as a shack; it may not look pretty, but hey, it's only a shack. Small projects generally come in at under 20,000 lines of code. It's the type of work that can be completed in days, weeks or a few months by one or two programmers.
A medium project is essentially a house. There is a great deal of skill in building a house; it's not an easy job to complete. A well-built house is impressive, but somewhere in the middle of the scale it's hard for someone living in the house to really get a sense of the quality. If it works well enough, then it works. Medium projects vary somewhat, falling in around 50,000 lines of code and generally staying under 100,000. Medium projects usually require a team, or they are spread across a number of years.
You can't just pile a bunch of houses on top of each other to get an apartment building. Houses are made of smaller, lighter materials and are really scaled to a family. Building an apartment building, on the other hand, requires a whole new set of skills. Suddenly things like steel frames become necessary to get the size beyond a few floors. Plumbing and electricity are different. Elevators become important. The game changes, and the attention to detail changes as well. A small flaw in a house might be a serious problem in an apartment building. As such, more planning is required, there are fewer options and bigger groups need to be involved, often with specializations. For software, large generally starts somewhere after 100,000 lines, but it can also be triggered by difficult performance constraints or more than a trivial number of users. In a sense it's going from a single-family dwelling to a system that can accommodate significantly larger groups. That leap upwards in complexity is dangerous. Crossing the line may not look like that much more work, but underneath the rules have changed. It's easy to miss, sending the whole project into what is basically a brick wall.
Skyscrapers are marvels of modern engineering. They seem to go up quickly, so it's easy to miss their stunning degree of sophistication, but they are complex beasts. It's impressive how huge teams come together and manage to stay organized enough to achieve these monuments. These days, there are many similar examples within the software world: systems that run across tens of thousands of machines or cope with millions of users. There isn't that much in common between an apartment building and a skyscraper, although the lines may be somewhat blurred. It's another step in sophistication.
Besides visualization, I like these categories because it's easy to see that they are somewhat independent. That is, just because someone can build a house doesn't mean they can build a skyscraper. Each step upwards requires new sets of skills and more attention to detail. Each step requires more organization and more manpower. It wouldn't make sense to hire an expert from one category and expect them to do a good job in another. An expert house-builder can't necessarily build a skyscraper, and a skyscraper engineer may tragically over-engineer a simple shed. People can move between categories, of course, but only if they put aside their hubris and accept that they are entering a new area. I've often seen that this is true for software as well. Each bump up in scale has its own challenges and its own skills.
Finally
A software project has an inherent scale and some type of quality goals. Combining these gives a very reliable way of sizing the work. Factors like the environment and the experience of the developers also come into play, but scale and quality dominate. If you want a commercial-grade skyscraper, for instance, it is going to be hundreds of man-years of effort. It's too easy to dream big, but there are always physical constraints at work, and as Brooks pointed out so very long ago, there are no silver bullets.
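To make the combination concrete, here is a minimal sketch of that sizing heuristic in Python. The base effort and the per-level multiplier are hypothetical numbers I picked purely for illustration; the argument above only claims that each step up in quality or scale multiplies the effort, not these particular values.

    # Back-of-the-envelope sizing sketch. The base effort (in person-months)
    # and the per-level multiplier are hypothetical, chosen only to show the
    # multiplicative structure, not to predict real projects.
    QUALITY = {"prototype": 1, "demo": 2, "in-house": 3, "commercial": 4}
    SCALE = {"small": 1, "medium": 2, "large": 3, "huge": 4}

    def size_in_person_months(quality, scale, base=1.0, factor=4.0):
        # Effort grows roughly exponentially with each step on both axes.
        return base * factor ** (QUALITY[quality] - 1) * factor ** (SCALE[scale] - 1)

    print(size_in_person_months("prototype", "small"))    # 1.0
    print(size_in_person_months("commercial", "huge"))    # 4096.0, roughly 340 person-years

With a multiplier of four per level, the commercial-grade skyscraper lands in the hundreds of person-years, in line with the estimate above; different multipliers shift the absolute numbers, but the shape of the curve stays the same.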
Sunday, January 6, 2013
Potential
Computers are incredibly powerful. Sure, they are just stupid machines, but they are endowed with infinite patience and unbelievable precision. So far, though, we’ve barely tapped their potential; we’re still mired in building up semi-intelligent instruction sets by brute force. Someday, however, we’ll get beyond that and finally be able to use these machines to improve both our lives and our understanding of the universe.
What we are fighting with now is our inability to bring together massive sets of intelligent instructions. We certainly build larger software systems now than in the past, but we still do this by crudely mashing together individual efforts into loosely related collections of ‘functionality’. We are still extremely dependent on keeping the work separated, e.g. apps, modules, libraries, etc. These are all the work of small groups or individuals. We have no reliable ways of combining the effort of thousands or millions of people into focused, coherent works. There are some ‘close but no cigar’ examples, such as the Internet or sites like Wikipedia, which collect contributions from a large number of people, but these have relied heavily on being loosely organized and as such they fall short of the full potential of what could be achieved.
If we take the perspective that software is a form of ‘encoded’ intelligence, then it’s not hard to imagine what could be created if we could merge the collective knowledge of thousands of people into a single source. We know that individual intelligence ranges; some people operate really smartly, some do not. But even the brightest of our species aren’t consistently intelligent about every aspect of their lives. We all have our stupid moments where we’re not doing things to our best advantage. Instead we’re stumbling around, often just one small step ahead of calamity. In that sense ‘intelligence’ isn’t really about what we are thinking internally, but rather about how we are applying our internal models to the world around us. If you really understood the full consequences of your own actions, for instance, then you would probably alter them to make your life better...
If we could combine most of what we collectively know as a species, we’d come to a radically different perspective of our societies. And if we used this ‘greater truth’ constructively we’d be able to fix problems that have long plagued our organizations. So it’s the potential to utilize this superior collective intelligence that I see when I play with computers. We take what we know, what we think we know, and what we assume for as many people as possible, then compile this together into massive unified models of our world. With this information -- a degree of encoded intelligence that far exceeds our own individual intelligence -- we apply it back, making changes that we know for sure will improve our world, not just ones based on wild guesses, hunches or egos.
Keep in mind that this isn’t artificial intelligence in the classic sense. Rather, it is a knowledge base built around our shared understandings. It isn’t sentient or moody, or even interested in plotting our destruction; instead it is just a massive resource that simplifies our ability to comprehend huge multi-dimensional problems that exceed the physical limitations of our own biology. We can still choose to apply this higher intelligence at our own discretion. The only difference is that we’ve finally given our species the ability to understand things beyond our individual capabilities. We’ve transcended our biological limitations.