Writing small software programs is easy. Just ‘code and go’.
But this changes dramatically as the size of the code grows.
For any potentially usable software, there is ‘value’. If the software has value -- in that it does what it is supposed to do and does it reliably -- then people will want to use it. If the value in the software is too low, then they will try to get away from it.
That can be true even if the system is getting enhanced. Sometimes the new incoming features don’t provide enough new value, while the existing ones stagnate and decay. That’s a great situation in that some people can now claim the system has ‘grown’, but the overall value has not kept pace with the growing expectations of the users, so they leave anyway.
Starting from the top, the best way to kill value is to keep piling features into the overall navigation until the navigation itself becomes a maze. That tortures people because they know the functionality is there, someplace; they just don’t have time to search through the whole freakin’ system to find it.
Non-deterministic features are great too. Basically, if people can’t predict what the code will do next, they quickly lose their appetite for using it. Closely related to this are systems that are up, then down, then up, then ...
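As a minimal sketch of how this kind of non-determinism sneaks in (the `build_report` function and its tags are made up for illustration): in Python, iterating a set of strings can produce a different order on every run of the program, so the same data yields a report that looks different each time.

```python
# Hypothetical example: a report whose row order depends on set iteration.
# Python randomizes string hashes per process, so a set of strings can
# iterate in a different order on each run -- users see the rows shuffle
# even though the underlying data never changed.

def build_report(tags):
    # 'tags' arrives as a set, so iteration order is not guaranteed
    return [f"{tag}: ok" for tag in tags]

# The same call can emit its lines in a different order on another run.
for line in build_report({"billing", "auth", "search"}):
    print(line)

# A deterministic version simply imposes an order before rendering:
def build_report_stable(tags):
    return [f"{tag}: ok" for tag in sorted(tags)]
```

The fix costs one `sorted()` call; the point is that determinism has to be chosen deliberately, because the default behavior quietly isn't.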
Crappy data quality bugs people as well, particularly when the system has lots of data: some of it is great, some of it is tragically wrong, and you can’t tell which is which. Nothing is more uninviting than having to scroll through endless bad data to find the stuff you are looking for. Fruitless searches through too much data belong in this category too.
While these are all external issues, some products peak early in their lives and then start the slow but steady descent into awfulness. Usually, that comes from internal or process issues.
If the code is a mess, testing can hide that for a few versions, but eventually it will not be enough and the mess will percolate out to the users. Good testing cannot compensate for poor analysis, bad design, disorganization, or messy code.
In the code, if a large group of programmers each builds their own parts in their own way, then much of their work is both redundant and conflicting. So, letting everyone have total creative freedom over their efforts will eventually kill a lot of value.
Another fun way to trash value is to go wild with the dependencies. Grabbing every library under the sun and just adding them all in quickly becomes unmanageable. Since many of the libraries have their own issues, just keeping the versions up-to-date is a mammoth task. What seems like a Cambrian explosion of features turns into just an explosion.
Within the code itself, any approach that increases the cognitive load without providing enough added benefit is sure to deplete value. This happens because the approach acts as a resource ‘sink’ that pulls time away from better activities like thinking, communicating, or organizing. Wasting time in a time-starved endeavor exacerbates all of the other problems.
A great thing to do is to simply ignore parts of the system, hoping that they will magically keep working. But what isn’t understood is always out of control, even if that isn’t obvious yet. If enough trouble is brewing, eventually it will overflow.
Even if the interface is great and the code is beautiful, that still doesn’t mean the value won’t disappear with time. Building systems involves five very different stages, so ignoring any of them, treating all of them the same, or even just going at them backward, sets up a turbulent stream of work. Code coming out of class VI rapids is highly unlikely to increase value.
The trick to moving a system from small to huge is that, all along the way, the value increases enough that the users still want to play with the code and data. If you focus on some other attribute and accidentally kill that value, then you have put a rather large hole in your own foot.