Saturday, June 3, 2017

Stores of Value

Lately, I’ve been asked several times what I think the underlying value of cryptocurrencies is. With their increased popularity, it is a rather pressing question.

Although I am not an economist, or a financial engineer, or even particularly knowledgeable, I have managed to glom onto little bits and pieces of knowledge about money that are floating about. Although these ideas all appear to fit together, I reserve the right to change my mind as time progresses and I learn more.

In a world without money -- if all other things were equal -- getting paid would be a bit problematic.

I might, for instance, go to work and, in exchange for my efforts, receive a chair as payment. I could roll that chair over to the local supermarket and exchange it for some groceries, and possibly a grocery cart as change. Once I returned to my house, I would have to take all of the tangible items I had received during the day and store them in various rooms, hopefully to be available for rainy days. My retirement savings would require a whole warehouse.

Of course, that would be a massive pain and completely impractical. The size of the house I would need would be enormous and the bartered goods would get gradually covered with dust, or decay, or entropy would find some other sneaky way of breaking them down.

Quite obviously, someone would come along and provide a service to allow me to drop off my items, in exchange for keeping them clean and safe. However, they would have considerable incentive not to just waste storage space either. They might then trade my newly acquired chair back to the same company that I work for, and that could actually form my payment for my next day's effort. In fact, I might end up getting the same chair, over and over, stacking it up in my inventory count until I’d have to withdraw, say, a sofa to get some plumbing fixed in the kitchen.

Clearly, that would be a rather insane society, so it is little surprise that we stepped up from bartering to having a means of keeping an abstract count. We don’t need to pass around physical items, just a “metric” that relates them all to each other. In that sense, money is a store of work, of the effort that I have put in, but it is not quite that simple.

The banks act much like that barter storage service, but they are regulated to only have the means to multiply money at some fixed percentage. That is, they can loan out the same chair, again and again, but they are ultimately restricted in how many times they can do that because they always need to keep a minimum number of chairs in storage. Given their multiplicative effect (and others), money is more realistically a small percentage of the work, not 100%. For argument's sake, let's just say that it is 20%.
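
To make that multiplicative effect concrete, here is a minimal sketch of the classic ‘money multiplier’ arithmetic, using the essay’s made-up 20% figure rather than any real regulatory number:

```python
# A toy simulation of fractional-reserve lending: each round the bank
# keeps the required reserve and lends out the rest, which gets
# re-deposited. The 20% reserve ratio is this essay's illustrative
# figure, not any real regulatory number.

def money_created(deposit: float, reserve_ratio: float, rounds: int = 100) -> float:
    total = 0.0
    loanable = deposit
    for _ in range(rounds):
        total += loanable
        loanable *= (1.0 - reserve_ratio)
    return total

# The iterated lending converges on the closed form: deposit / reserve_ratio.
print(money_created(100.0, 0.20))  # ~500.0
print(100.0 / 0.20)                # 500.0
```

The same chair, loaned out over and over, can back roughly five times its own value in circulating claims before the reserve requirement cuts the lending off.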

Now, we use money internally in a country to provide for daily interactions, but we also use it externally between countries. In order to minimize any internal disruptions, the sovereign central banks are allowed to play with the underlying percentages. So they might reduce that 20% underlying value down to, say, 15% because they want their country's stored values to be more competitive in the world markets.

It is of course considerably more complicated than that, but this perspective helps to illustrate that there is a ‘relative’ internal value to money, as well as a more ‘absolute’ external one, and that much of what governments are doing is manipulating the underlying value to help their internal economies remain stable.

There is often talk of the world’s economies moving away from the gold standard as being dangerous or reckless, but a close friend of mine pointed out that moving off it actually stabilized the system. Although money is a partial store of work, we like to peg it to something tangible to make it easy to compare different values. Gold emerged as this anchor, but it actually has a significant problem: the world’s supply of gold is not constant. Some of it might get used up for jewelry or industrial efforts, and there is always the potential for new, large mines to be discovered and flood the market. By pegging against a moving target, we introduce unwanted behaviors.

Since money is more or less a store of work, and we have issues with needing to interchange between nations, it seems most reasonable to just assume that the underlying value is essentially a country's GDP. That is, money is really backed by all of the products and services produced by a country. Watered down to be sure, but still anchored by something tangible. Except for wars and other catastrophic economic events, the amount of work produced changes slowly for most countries. Economies grow and shrink, but they don’t usually spike.

With that oversimplified introduction in place, we can almost talk about cryptocurrencies, but I still have one more tangent that is necessary first.

When electricity was first discovered, researchers found that they could quite easily create it in their science labs. This was long before our modern electrical appliances, in what was essentially a steampunk era. From their perspective, electricity was most likely just a curiosity. Essentially worthless. They would create it in small amounts and play with it. It was just some sort of parlor trick.

A modern electrical plant can generate a huge amount of electricity, sending it over the wires to millions of potential consumers. It has a significant value. As we created more devices like light bulbs and radios that needed power, electricity changed from being worthless to having a very strong underlying value. But without electrical devices to consume it, there is no value.

We couldn’t live without electricity now, and there are, no doubt, plenty of people that made their fortunes by supplying it to us. It’s a strong utility that drives our economies.

This shows quite clearly that value can be driven by demand alone; that something initially worthless can, in fact, become quite valuable once there are consumers. Keeping electricity in mind is useful when discussing the current worth of cryptocurrencies.

So with all of that in mind, we can finally get to cryptocurrencies.

It is somewhat obvious from the last analogy about electricity that if you produce something, there needs to be demand for it to gain value. For Bitcoin, it seemed like the demand in the first few years was driven by curiosity or even entertainment. There were no real “electrical appliances”, just interested people playing with it. So there was some value there, but it was tiny.

As different segments of the various markets came in and found uses for Bitcoin, the underlying value slowly increased.

The addition of other competitive cryptocurrencies like Ethereum appears to have strengthened Bitcoin’s value, rather than hurt it.

A good indication that there is something tangible below is that both Bitcoin and Ethereum have weathered rather nasty storms: Mt. Gox for one and The DAO for the other. If they were just 100% pure speculation, people would have walked away from them; instead, although they took significant hits, they recovered. Fads and Ponzi schemes don’t weather storms, they just get forgotten.

As the cryptocurrencies break into new markets and there are more and more options for spending them, their underlying value increases. What drives them underneath is how flexible they are to use. If there are lots of places to use these currencies, and it is easy to switch back and forth, then as the roadblocks come down, the underlying value increases. So there is some intrinsic underlying value, and it is increasing, probably quite rapidly at this point.

A hot topic these days is whether countries are for or against cryptocurrencies. Popular sentiment casts central banks as a means to control the masses, but I’ve never really seen it in that draconian light. Countries control their currency in order to try and stabilize their economies and to manage their debts to other nations. They’re probably not too worried about individual spending habits. Most countries want their citizens to be successful.

They do need, however, to collect their ‘cut’ of commerce as it flows; that is how they exist. Cryptocurrencies don’t have to disturb these mechanics at all. That is, a country can still strengthen or weaken its money supply, and the fiat currency is still the final arbiter of worth within the borders and between other nations.

Any deal with the Canadian government, for example, would be denominated in Canadian dollars. That won’t change even if all of the citizens used Bitcoin for their daily transactions. The dollar would still be the underlying measure.

Being global, cryptocurrencies also offer citizens a way to essentially hedge their funds externally. That option was always there: you can easily hold a US dollar bank account in Canada and then use it when you travel abroad, for example. Doing so eliminates any FX risk from travel expenses, and if you restock the account during periods of better exchange rates, it boosts the spending power of the money by a small amount.
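
As a toy illustration of that boost -- the rates below are entirely hypothetical, chosen only to make the arithmetic visible:

```python
# Hypothetical CAD/USD rates showing why pre-funding a foreign-currency
# account at a favourable exchange rate boosts spending power.

trip_cost_usd = 1000.0

prefund_rate = 1.25        # CAD per USD when the account was topped up
travel_time_rate = 1.40    # CAD per USD at the time of the trip

cost_prefunded = trip_cost_usd * prefund_rate            # 1250.0 CAD
cost_converted_later = trip_cost_usd * travel_time_rate  # 1400.0 CAD

print(f"Saved {cost_converted_later - cost_prefunded:.2f} CAD")  # Saved 150.00 CAD
```

The same logic carries over directly to holding a cryptocurrency balance for external transactions.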

If it works for a Canadian with US dollars, then there is essentially no difference in doing it with Bitcoin when we are talking about external transactions.

The funny part though is what happens if all of the citizens pay for their daily goods with Bitcoins. When it happens with internal transactions, does that somehow diminish the government's ability to heat or cool down the economy?

There would obviously be some effect. If Bitcoin were the better deal, people would switch to using it; if it crashed, they would all return to Canadian dollars. That really feels like an asymmetric hedging effect: the underlying transactions would shift towards the best currency. But it is worth noting that all of the taxes would still have to be paid in the fiat currency, and the people and vendors would likely try to naturally avoid currency risk and bad exchange rates when collecting those amounts. Canadian dollars would still be preferred.

Thus a fiat currency anchors the underlying value, and it wouldn’t surprise me in the least if, as the cryptocurrencies became more popular, they had somewhat different valuations in each region. That is, each country's own financial constraints will come to bear on the cryptocurrencies as well, and the central banks will lose only a small amount of control over the value. People would then try to arbitrage the regional differences, but the summation of these effects would likely tie all of the fiat currencies closer together. They would collectively become more stable. More intertwined.

Realistically this has been happening for at least thirty years now. Globalization has been making it easier for currencies, and people, to flow. That relaxation of restrictions on the movement of cash naturally draws all of the players closer together. That, of course, means that one country's fate is more dependent on the others, but it is also true that bigger organizations are intrinsically more stable. They move slower, but that is sometimes an advantage in a volatile world.

The cryptocurrencies are just going to increase that trend, allowing it to become more fine-grained and ingrained into the masses. Thus, they are likely in line with the financial trends of the 21st century, maybe speeding up the effect somewhat, but definitely not changing the dynamics.

All of this gets me back to the real underlying question, which is what the cryptocurrencies are ultimately worth today. If we compare them back to fiat currencies, we can start by looking at all of the work going into producing them: the hardware, network connections, operations, and electricity. That output is analogous to the GDP of a nation and it provides a definitive underlying value, but we also need to consider the analogy to electricity.

The base work of maintaining the Bitcoin infrastructure is contingent on there being demand. That is, as the end-points -- the various uses of the cryptocurrencies -- increase, they supply underlying value to the effort of keeping the coins running. The two grow hand in hand, just as the value of electricity increased as more electrical appliances found their way into common usage.

But it is also worth noting that, particularly for Bitcoin, the underlying market is in an expansion phase. It is rapidly growing to meet brand new needs; it is nowhere near saturation. This seemingly unlimited growth, fed by more and new types of demand, is in itself a temporary source of value. Being ‘expandable’ is a strong quality because it provides optimism for the future, which, while intangible, is still a driving force.

On the other hand, there are intrinsic fixed technical limitations to the cryptocurrencies, particularly for those bound to the high costs of proof-of-work (PoW).

No technology is infinitely scalable; the physical universe always imposes limits, and these technical limitations, rather than the full potential of the market itself, are where the saturation points arise. Thus the market will grow only as large and as strong as the technology allows, not to the current potential of the markets for the entire planet. That makes the issues around transaction speeds and backlogs far more pressing, since they seem to imply that the saturation point is coming far sooner than expected.

All told, we seem to have at least four distinct parts to the valuation of cryptocurrencies: there is obviously some element of speculation, there is an underlying value to maintaining the infrastructure, there is the ability to expand, and, finally, there is increasing demand. We could likely model these as essentially separate curves over time, based on the historic data, and use that to attribute their various effects on the prices. It would be trickier in that some of the issues actually span multiple cryptocurrencies, and the underlying base is somewhat volatile right now, as different actors switch between the competing technologies.
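
As a minimal sketch of what such a decomposition might look like -- every curve and number below is invented purely for illustration, and a real model would have to be fit against historic price data:

```python
import math

# Four invented component curves, one per part of the valuation.

def infrastructure(t):   # hardware, power, operations: slow, steady growth
    return 10.0 + 0.5 * t

def expansion(t):        # optimism about future growth: fades toward saturation
    return 40.0 * math.exp(-t / 50.0)

def demand(t):           # places to actually spend the coins: S-curve adoption
    return 60.0 / (1.0 + math.exp(-(t - 30.0) / 8.0))

def speculation(t):      # whatever the market price leaves unexplained
    return 15.0 * math.sin(t / 5.0)

def modeled_price(t):
    return infrastructure(t) + expansion(t) + demand(t) + speculation(t)

for month in (0, 12, 24, 36):
    print(month, round(modeled_price(month), 2))
```

Fitting real data would mean estimating these shapes jointly across several coins, since speculation and demand spill across them.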

Still, it says that the valuation certainly isn’t 100% speculative right now and that the coins should be moving towards some specific value. The rate of return is driven way up by the possibilities of expansion, but that will eventually converge on what is essentially the collective change in the GDP of the planet.

Is that higher or lower than today’s current price? That would take a huge amount of analysis, and likely a large number of lucky guesses, but it does seem that much like electricity, cryptocurrencies will eventually become a normal utility for our societies and that collectively they are undervalued right now.

Any individual coin though might be overvalued, specifically because of its technical limitations, but those are also subject to change as we increase our knowledge about how to build and deploy these types of systems.

So it appears very unlikely to me that the cryptocurrencies will just blow up one day and hit a valuation of 0. They’re not a Ponzi scheme, but rather a fundamental change to our financial infrastructure. If a total crash were going to happen, there have already been plenty of chances for it to occur. There is real value underneath, and it really is growing.

But caution is still required for any given currency, and it seems that spreading the risk across multiple ones is not only wiser but also in line with the observation that they seem to be stronger together than individually. The situation, however, is rapidly changing, and it will likely take at least a decade or two before it settles down.

Cryptocurrencies are here to stay, but there is still a long road to go before they settle into their utility state. And it will, no doubt, be quite a rocky road.

Tuesday, March 14, 2017

Navigation

The primary goal of software is to be useful. In order to do that, someone with a problem needs to use it to build up a ‘context’ and then execute some functionality. This context can be seen as all of the inputs necessary for their computation, whether it is persistent data, current configuration or the results of navigating from a broader context down to a specific one.

As an example of that latter input, for a GUI application, a user might start the program and select a number of menu items and buttons that gradually get them closer and closer to being able to run the desired code. They might open a file, move the cursor to a specific line, then enter some text. When satisfied, they will finally save the document, as a file, for long-term persistence on a disk or flash drive somewhere, achieving their goal.

The functionality that they require is really to modify an existing persistent file of a given format. Everything else they have to do -- starting the app, picking the file, changing it, etc. -- is just building up a context of inputs so that the save happens in the way they expect it to. Consequently, if the editor dies before the file is saved, then they have to go back and navigate through the context again. It’s not over until the final functionality is completed.

For a command line environment it is the same. Most often the user works their way into a directory, might change the file with something like sed or awk redirected into a temp file, and then copies the modified file on top of the old one. Their navigation through the various commands is nearly equivalent to utilizing a series of widgets in a GUI.
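
That workflow is common enough to sketch. Here is a minimal Python version of the same edit-into-a-temp-file-then-replace pattern; the filename and the transformation are hypothetical:

```python
import os
import tempfile

def edit_in_place(path, transform):
    """The same shape as `sed 's/.../.../' file > tmp && mv tmp file`:
    write transformed contents to a temp file, then move it over the
    original."""
    dir_name = os.path.dirname(os.path.abspath(path))
    with open(path) as src, tempfile.NamedTemporaryFile(
            "w", dir=dir_name, delete=False) as tmp:
        for line in src:
            tmp.write(transform(line))
        tmp_name = tmp.name
    os.replace(tmp_name, path)  # move the modified file over the old one

# e.g. the substitution a user might do with sed 's/foo/bar/g':
# edit_in_place("notes.txt", lambda line: line.replace("foo", "bar"))
```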

In that sense, we can frame any user action with a computer as a long series of navigations down to specific contexts in order to fire off some code. Programmers often focus on what that code is, and how it works, but from a user’s perspective the ease or pain of navigating is what they really see.

When programs are small, the amount of functionality is small enough that it can be pushed up to the top level. It really doesn’t need any serious organization. But as development continues and more and more functionality becomes available, knowing that some functionality exists and navigating to it become larger and larger problems. In many active but ancient products, a great deal of the features just aren’t used because they are nearly impossible to find when needed. They exist, but they are buried in unexpected places.

To counter this, the focus should change as the code grows: first it is on adding new functionality, then it switches over to finding ways to reorganize the interface so that all of the existing functionality remains usable. This of course involves heavy refactoring, which is hugely expensive once the underlying layers of code have become locked into position, so it is rarely done.

It is also extremely difficult, in that the designers need to re-imagine every aspect of the program from the top down, from the perspective of the user. And then they need to overlay an organizational philosophy on top of everything so that it is clearly intuitive to most people where to navigate to achieve their goals. There isn’t, that I know of, some magical means of applying ‘normalization’ that makes this process mechanical, and whatever arrangement they come up with will eventually degrade as the functionality continues to grow. It’s a perpetual problem.

What seems to occur in practice is that programs just gradually decay until they are essentially stuck. The navigation becomes so bad, and so arbitrary, and the work to fix it is so large, that almost no more forward progress is made. Generally this is followed by another program from someone else, with a different organizational perspective, getting written and taking away most of the users. Eventually that one stalls too, and the cycle repeats itself. In that way it may seem like we are progressing, but often the replacements simply ignore old but required functionality, which, when added back, hastens the inevitable outcome. These days it seems more like we are just going round and round in circles.

It is far easier to ignore the navigation and just focus on grinding out functionality, which is why that is common practice. When software was young, this approach made a lot of sense, in that there was a real need for new features. But most of the industry has matured now, and there is a plethora of functionality available for just about every imaginable computation. It’s no longer an issue of having to create the functionality; rather, it has become one of trying to find it. This change does not seem to be reflected in our development cultures. Yet.

Tuesday, December 6, 2016

Partial Knowledge

We live in an amazing time. Collectively our species knows more about the world around us than any one individual could ever learn in their lifetime. Some of that knowledge is the underlying natural complexity of our physical existence. Some of it is the dynamic complexity of life. On top of this, over a long history, we have built up hugely complex social interactions that have been evolving to maintain our growing technological innovations. Altogether, the details are staggering, and like Swiss cheese, our understanding of it all is filled with countless holes.

Full knowledge is rather obviously impossible for any individual. The best we can do is focus in on a few areas and try to absorb as many details as possible. We all have partial knowledge. To make matters worse, often the details are non-intuitive. An overview is a simplification, so it is easy on the outside to misunderstand, to think that the issues that matter are different from what really matters on the inside. We see this often, with people misdirecting resources because their knowledge is too shallow.

Given our out-of-control complexity, it is not surprising that there are frequent and long-running problems with our societies. Each new generation hyper-focuses on its own partial understanding, pursuing changes that, rather than fixing the problems, just shift them elsewhere and slowly amplify them. This gets continually repeated, as the underlying general knowledge gets watered down. In time most people start knowing less about the specifics because they get swamped by the volume, and their understanding becomes increasingly destructive. Their changes converge on making things worse.

What we need as a species is to be able to merge our partial knowledge. To bring it all back together, generation after generation, so that we are moving forward, rather than just cycling between calamities.

Knowledge was initially trapped in an individual. We eventually learned to verbally communicate it to our peers, then to preserve it for generations in physical form, but other than the gradual simplifications brought on by our repetitive communication we really don’t know how to extract the essence of underlying truth from it. If we could pull out the truth from the things we say, and we could combine it across large numbers of people, we could reassemble it in ways that enlighten our perspective on our world, our lives, our future.

What lies at the heart of all communications is a cloudy mix of aspects of our existence. Never really correct, but often not obviously wrong. As it flows through people over time, it maintains that anchor to its origins but reshapes itself from other exposure. It is slowly merging, but what we really want is to be able to quickly identify it, strip it down, collect it and build it back up again. We want to accelerate this, but also to ensure that we improve in quality, rather than ebb and flow. We have some means of moving forward, like the scientific method and the ideas underpinning engineering, but daily life is a constant reminder that these approaches fail often. They are not perfected, and they really rely on individuals for success. That makes them subject to being overwhelmed by the growing tsunami of misinformation.

What we need are tools to help the individuals work together, to help them merge their partial knowledge and to allow the merging to reliably converge on a quality of understanding that allows us to fix our past mistakes without just shifting them elsewhere. We have such a tool: computers. They can amass large amounts of information and they are patient enough to sift through it, using the underlying structural information to measure fitness. We can build tools to hold massive contexts and to gradually merge these into larger and larger ones. We can build tools to help leverage these huge structures to correct and fix grand problems.

We do some of this now, but we do it on an ad hoc basis with very specific modeling. The next generation of tools needs to do it across everything we know, rather than just for tiny sub-disciplines. It’s a model and merge problem, but one that spans the perspectives and understandings of billions of people, not just a small group of researchers.

If we can’t utilize our partial knowledge to improve things, then we know that complexity will continue to grow, now at an increasing rate, up to some threshold where it in itself will become the most significant problem for our species. That is, technology, being neither good nor bad, is just an amplifier. And at the point that it expands things to be large enough, the core gets so watered down that the entire edifice will stall, or, even worse, implode. It is a recurring pattern in history, it also seems to be a physical limitation of our universe, and it is this path that we are currently traveling on.

We can see that we cannot go on like this forever, but our reliance on partial knowledge also means that without better tools we will. No single individual can lead us away from our fate; now, not even a small number of people will suffice. We need to harness a gigantic understanding in order to correct our path, and we haven’t even begun to accept this as ‘the problem’, let alone started working on it. In a sense, technology got us into this mess; we have to admit that and redirect our resources to craft technologies that will really get us out, not just pile on more dangerous bandages or keep shifting the problems around in circles. This isn’t a practical problem that will get accidentally solved by a couple of people in a garage; it has grown too large for that now. If we don’t choose to accept, understand and work towards a solution, the universe will eventually find one for us...

Saturday, November 19, 2016

Backwards

It is easy to get lost when tackling a difficult problem. It is easy to go off on a dead end, wasting a lot of valuable time. When we are presented with a lot of choices, it is very difficult to pick the right one.

For whatever reason, most people are far more comfortable approaching problems ‘forwards’, from the top down. However, we could see the path to a solution as winding through a rather dense tree of decisions and effort. At every opportunity there are plenty of choices, options. If you make the right choice, you move down to the next level of the problem. If you pause or head down an irrelevant path, well…

The best way I’ve found is to visualize everything backwards. That is, instead of starting at the beginning, you go straight to the end. If you understand what parts of a solution ‘must’ exist, then you can work backwards to bring them together. Along the way you can collapse any special cases to simplify them. As you keep going backwards from the end, you are naturally pruning off all of those dangerous decisions, all of that useless work. You are avoiding massive amounts of wasted time.

When the outcome is predetermined, the path between there and here is straightforward. If you can walk just that path, and only that path, then you can create a solution in the fastest time possible. All of the work to be accomplished has a very specific, and targeted use, none of it is wasted.
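
The idea translates directly into search terms. Here is a minimal sketch, over a made-up decision graph, of walking backwards from a known goal; by searching the reversed edges, it never visits the dead-end branches that a forward search would have to explore:

```python
from collections import deque

def backwards_path(edges, start, goal):
    """Breadth-first search from the goal toward the start over
    reversed edges. parent[x] is the node one step closer to the goal."""
    reverse = {}
    for src, dst in edges:
        reverse.setdefault(dst, []).append(src)

    parent = {goal: None}
    queue = deque([goal])
    while queue:
        node = queue.popleft()
        if node == start:
            path = [start]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path  # reads start -> ... -> goal
        for prev in reverse.get(node, []):
            if prev not in parent:
                parent[prev] = node
                queue.append(prev)
    return None  # the goal cannot be reached from this start

# A tiny decision tree with dead ends; only A -> B -> D survives.
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "X"), ("X", "Y")]
print(backwards_path(edges, "A", "D"))  # ['A', 'B', 'D']
```

Every state visited can, by construction, actually reach the goal; the pruning is automatic.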

Software gets a little more difficult because it isn’t just one version; it grows over a long period of time, often changing direction a bit. As you build a system, there are usually multiple destinations at once, and with usage new ones become apparent. Still, if you go backwards from each, and you understand the overall territory you are moving through, then you easily become aware of multiple overlapping paths. You can take advantage of these to further optimize not just the next solution, but a whole series of them. The better you become at identifying viable opportunities to leverage existing work, the more ambitious you can be in tackling bigger and more interesting problems. Time is always the one resource that a software developer has far too little of, so saving huge chunks of it is way better than trading off little chunks now for massive problems in the future. The direction of a big project is never arbitrary, and if it seems like it is, then it is only because people haven’t spent enough time to really think it through. If you don’t get that time, because you are wasting it on unnecessary decisions and dead ends, then you’re squandering your most precious resource.

Most development projects are not nearly as difficult as they seem. Most of the endless series of decisions we make are not really necessary. Much of the work done has no real practical value. We get into time crunches most often because we are not spending our time doing valuable things. While often it's hard to see the right path going forward, it is much easier to see it if you are backing up from the solution. All you need to do is flip your perspective and the rest comes naturally.

Saturday, September 3, 2016

Meta-methodology

How we go about building complex software is often called ‘methodology’.

To be complete, a methodology should cover every step of the process, from gathering all the necessary information about the problem to deploying and operating the solution in a production environment.

What has become obvious over the years is that for the full range of software development there is no one consistent methodology that will work effectively for every possible project. The resources, scale, and environments that drive software development impose enough external constraints that each methodology needs to be explicitly tailored in order to remain effective.

Sometimes this lack of a specific answer is used as a justification for not having any formalities in the development, but for any non-trivial project, the resulting disorganization from this choice is detrimental to the success and quality of the effort. A methodology is so necessary that a bad one is often better than nothing.

At a higher level, however, there are certainly many known attributes of software development that we have learned over the last half-century. What's needed to ensure success in development projects is to extract and apply these lessons in ways that contribute to the effort, rather than harm it. Thus, if we can’t specify the perfect methodology, we can certainly specify a meta-methodology that ensures that what happens in practice is as good as the circumstances allow.

The first and most important property of a methodology is that it is always changing. That is, it needs to keep up with the overall changes in the resources and environment. That doesn’t mean changes are arbitrary; they need to be driven by feedback and sensitivity to any side-effects. Some parts of the methodology should actually be as static as possible; they need to be near constants throughout the chaos or work will not progress. A constantly shifting landscape is not a suitable foundation for building on. Still, as the development ebbs and flows, the methodology needs to stay in sync.

To keep any methodology in focus, it needs to be the responsibility of a single individual. Not someone from the outside, since they would only have a shallow view of the details, but rather the main leading technologist. The lead software developer. They are the key person on the development side whose responsibility is to ensure that the parts of the project get completed. That makes sense given that their role is really to push all of the work through to completion, so how that work gets done is a huge part of their responsibilities. Rather obviously that implies that they have significant prior experience in the full breadth of the software process, not just coding. If they only have limited experience in part of the effort, that is where they will incorrectly focus their attention. If only part of the development process is working, then overall the whole process is not.

This does tie the success of the project to the lead developer, but that has usually been the case, whether or not people have been willing to admit it. Projects without strong technical leadership frequently go off the rails, mostly by just endlessly spinning in circles. Expecting good leadership from the domain side is risky because they most often have expertise in anything and everything but software development, so they too tend to focus on what they understand, not on the rapidly accumulating problems.

For very large scale development, a single leader will not suffice. In that case, though, there should be a hierarchy of sub-leaders, with clear delineations between their responsibilities. That’s necessary to avoid competition and politics, both of which inject external complexity into the overall process. When leadership is spending too much effort on external issues, it has little time to correct or improve internal ones. At the top of this hierarchy, the overall picture still falls to a single individual.

Any usable methodology for software addresses all five different, but necessary, stages: analysis, design, programming, testing, and operations. Each of these stages has its own issues and challenges. To solve a non-trivial problem, we need to go out into the world and understand as much of it as possible in an organized manner. Then we need to bring that knowledge back, mix it with underlying technologies and set some overall encapsulating structure so that it can be built. All of that work needs to be coded in a relatively clean and readable manner, but that work also requires significant editing passes to be able to fit nicely into any existing or new efforts. Once it's all built, it is necessary to ensure that it is working as expected, both for the users and for its intended operating environment. If it is ready to go, then it needs to be deployed, and any subsequent problems need to be fed back into the earlier stages. All of this required work remains constant for any given software solution, but each stage has a very different perspective on what is being done.

Most problems in the quality or stability of the final running software come from process problems that occurred earlier. An all too frequent issue in modern development is for the programmers to be implicitly, but not directly, responsible for the other stages. Thus major bugs appear because the software wasn’t tested properly; because the programmers who set up the tests were too focused on the simple cases that they understood and not on the full range of possibilities.

In some projects, analysis and design are sub-tasked to the programmers, in essence, to make their jobs more interesting, but the results are significant gaps or overlaps in the final work, as well as lack of overall coherent organization.

The all too common scope creep is either a failure to properly do analysis or a by-product of the project direction wobbling too frequently.

Overall stability issues are frequently failures of the design to properly encompass the reality of operations; they skip or mishandle issues like error handling. Ugly interfaces or obtuse functionality come directly from design failures, where the prerequisite skills to prevent them were either not available or not believed necessary. Ugliness is often compounded by inconsistencies caused by lack of focus; too many people involved.

Following these examples, we can frame any and all deficiencies in the final product as breakdowns in the process of development. This is useful because it avoids just setting the blame on individuals. Most often, if a person on the project is producing substandard work, it is because the process has not properly guided them onto a useful path. This property is one of the key reasons why any methodology will need to be continuously tweaked. As the staff change, they will need more or less guidance to get their work correct. A battle-hardened team of programmers needs considerably less analysis and specification than a team of juniors. Their experience tends to focus them on the right issues.

Still, there are always rogue employees that don’t or can’t work well with others, so it is crucial to be able to move them out of the project swiftly. Responsibility for evaluating and quickly fixing these types of personality issues falls directly on the technical lead. They need full authority over who's involved in the work at most stages (operations is usually the exception to this rule) and who is no longer part of the project.

All of this sets a rather heavy burden on the technical lead. That is really unavoidable, but the lead is still subservient to the direction of the domain experts and the funding, so while they can modify the methodology to restructure the priorities, they can’t necessarily alter the overall scope of the work. They can’t run off and build something completely different, and if they end up not meeting at least the basic requirements, the project should be deemed a failure and they should be removed. Most times this is both what the different types of stakeholders want and what they need.

Sometimes, however, what the users need is not what the main stakeholders want. In those situations tying the responsibility for the system entirely to the lead developer is actually a good thing. Their strengths in doing their job come from being able to navigate these political minefields in order to get the best possible result for the users. Without at least a chance of moving this dial, the project is ultimately bound for disaster. With the responsibilities defined properly, at least the odds are better. And if the project does fail, at least we know who to blame, and what skills they were missing.

There is currently a huge range of known static methodologies. The heavy-weight ones follow the waterfall approach, while the lighter ones are loosely called agile. For any project, the static adoption of any one of these is likely as bad as any other, for the reasons previously mentioned. So the most reasonable approach is to pick and choose the best pieces or qualities. This may seem like a good way to get a mess, but it should really only be the starting point. As the project progresses, the understanding of its problems should be applied as fixes to the methodology, and this should be ongoing throughout the whole life of the project.

In practice, however, most gnarled veterans of software have experienced at least one decent, mostly working methodology in the past, so it's rather obvious that they start with that and then improve upon it. When qualifying someone to lead a big development project, then, a lot of interest should be shown in the methodology they intend to follow, and less in the specifics of their past coding, design, analysis, testing and operations experience, though clearly these are all tied together.

As for the pieces, software development is too subject to trends. We forget the past too quickly and seem to keep going around having to relearn the same lessons over and over again. Good leadership rises above this, so the right qualities for a methodology are not what is popular, but rather what has been shown to really work. For example, it is quite popular to say bad things about waterfall, but the justifications for this are not soundly based. Not all waterfall projects failed, and those that did frequently did so because of a lack of leadership, not because of the methodology. It does take time for waterfall projects to complete, but they also have a much better long-term perspective on the work, and when run well they can be considerably more effective and often more consistent. It’s not that we should return entirely to these sorts of methodologies, but rather that some of them did have some excellent properties, and these should be utilized if needed.

At the other end of the spectrum, many of the lighter approaches seem to embrace chaos by trying to become ultra-reactive. That might fix the time issue and prevent them from steering away from what the stakeholders want, but it comes at the cost of horrendous technical debt, which sets a very short shelf life on the results.

A good methodology would then obviously find some path between these extremes, but be weighted on one side or the other because of the available resources and environment. Thus, it would likely have variable length iterations, but could even pipeline different parts of the work through the stages at different speeds. Some deep core work might be slow and closer to waterfall, while some functionality at the fringes might be as agile as possible. The methodology would encompass both of these efforts.

Because of the past, many people think that methodologies are vast tomes; that to be a methodology, everything has to be written down in the longest and most precise detail. For a huge development effort that might be true, but at smaller scales what needs to be written down is only what would otherwise be quickly forgotten or abused. That is, the documentation of any methodology is only necessary to keep it from being ignored. If everyone involved remembers the rules, then the documents are redundant. And if the rules can change each time the circumstances change, then the documentation will also be redundant. As such, a small tiger team of experts might have exactly zero percent of their methodology on paper, and there isn’t anything wrong with that if they are consistently following it.

There are occasions, however, where outsiders need to vet and approve the methodology for regulatory or contractual reasons. That’s fine, but since parts of the methodology change, the dynamic parts need to be only minimally documented in order to avoid them becoming static or out-of-date.

Another reason for documentation is to bring new resources up to speed faster. That is more often the case for a new project that is growing rapidly. At some point, however, in the later stages of life, that type of effort exceeds its value.

From this it is clear that methodologies should include the intercommunication between the different people and stages of the project. All of this interim work eventually influences how the final code is produced, and it also influences how that code is checked for correctness. Some of the older heavyweight methodologies focused too intensely on these issues, but they are important because software really is the sum of these efforts. Thus, for example, the structure and layout of any analysis makes a direct difference to the final quality of the work, but it can also help show that some areas are incomplete and need further analysis. The analysts in a large project, then, should be laying out their results for the convenience of the designers, testers and the lead developer. They may need to confirm their work with domain users, but the work itself is targeted at the other stages.

Communication of the types of complex information needed to build non-trivial systems is a tricky issue. If every detail is precisely laid out in absolute terms, the work involved will be staggering and ironically the programmers will just need to write a program to read the specifications and then generate the code. That is completely hopeless in practice. The programmers are the specialists in being pedantic enough to please their computers, so most other people involved are going to be somewhat vague and sometimes irrational. The job of programming is to find a way to map between these two worlds. Still, that sort of mapping involves deep knowledge, and that type of knowledge takes decades to acquire, so most of the time the programmers have a bit of the ability, but not all of it. A good methodology then ensures that for each individual producing code they have everything they need to augment their own knowledge in order to successfully complete the work. Obviously, that is very different for each and every programmer, so the most effective methodology gives everybody what they need, but doesn’t waste resources by giving too much.

Then the higher level specifications are resolved only to the depth required by specific individuals. That might seem impossible, but really it means that multiple people in the development have to extend the depth of any specifications at different times. That is, the architects need to produce a high-level structuring for the system that goes to the senior developers. They either do the work, or they add some more depth and pass it down to the intermediates, who follow suit. By the time it arrives on a junior’s desk, it is deeply specified. If a senior does the work, they’ll just mentally fill in the blanks and get it done. This trickle-down approach prevents everything from being fully specified and resources wasted but does not leave less experienced people flailing at their workload. It also means that from a design perspective, people can focus just on the big issues without getting bogged down in too many details. All of the different individuals get the input they need, only when they really need it and the upper-level expertise is more evenly distributed across the full effort.

There are many more issues to be discussed with respect to methodology, but I think pushing them upwards to be properties or qualities of a meta-methodology is a viable way to proceed. It avoids the obvious problem of one approach not fitting different circumstances, while still being able to extract the best of the knowledge acquired in practice. We still have a long way to go before most software development produces results that we can really rely upon, but our societies are moving too quickly into dependence now. At this early stage, ‘software eating the world’ might seem to be an improvement, but we have yet to see the full and exact costs of our choices. Rather than waiting, it would be wiser to advance our knowledge of development up to the point where we can actually rely on the results. That might take some of the mystery out of the work, but hopefully it will also remove lots of stress and disappointment as well.