This is going to be a long and winding post, as there are always fundamental questions that do not have easy or short answers. Complexity is one of those concepts that may seem simple on its surface, but it encompasses a profoundly deep perspective on the nature of our existence. It is paired with simplicity in many aspects, which I wrote about in:
http://theprogrammersparadox.blogspot.ca/2007/12/nature-of-simple.html
It would be helpful in understanding my perspective on complexity to go back and read that older post before reading further.
The first thing I need to establish is that this is my view of complexity. It is inspired by many others, and it may or may not be a common viewpoint, but I’m not going to worry in this posting about getting my facts or references into the text. Instead, I’m just going to give my own intuitive view of complexity and leave it to others to pick out from it what they feel is useful (and to disregard the rest).
Deep down, our universe is composed of particles. Douglas Hofstadter in “I Am a Strange Loop” used the term ‘epiphenomenon’ to describe how larger meta-behavior forms on top of this underlying particle system. Particles form atoms, which form molecules, which form chemicals, which form materials, which we manipulate in our world. There are many ‘layers’ going down between us and particles. Going upwards, we collect together into groups and neighborhoods, into cities and various regional collections, interacting with each other as societies. Each of these layers is another set of discrete ‘elements’ bound together by rules that control their interaction. Sometimes these rules are unbreakable; I’ll call these formal systems. Sometimes they are very malleable: these are informal systems. A deeper explanation can be found here:
http://theprogrammersparadox.blogspot.ca/2011/12/informal-ramble.html
If we were to look at the universe in absolute terms, the sum total of everything we know is one massive complex system. It is so large that I sincerely doubt we know how large it actually is. We can look at any one ‘element’ in this overall system and talk about its ‘context’; basically all of the other elements floating about and any rules that apply to the element. That’s a nice abstract concept, but not very useful given that we can’t cope with the massive scale of the overall system. I’m not even sure that we can grok the number of layers that it has.
Because of this, we narrow down the context to something more intellectually manageable. We pick some ‘layer’ and some subset of elements and rules in which to frame the discussion. So we talk about an element like a ‘country’, and we focus on what is happening internally in it, or on how it interacts with the other countries around it. We can leverage the mathematical terminology ‘with respect to’ -- abbreviated to ‘wrt’ -- for this usage. Thus we can talk about a country wrt global politics or wrt citizen unrest. This constrains the context down to something tangible.
A side-effect of this type of constraint is that we are also drawing a rather concrete border around what is essentially a finite set of particles. If we refer to a country, although there is some ambiguity, we still mean a very explicit set of particles at a particular point in time (inferred).
So what does this view of the world have to do with complexity? The first point is that if we were going to craft a metric for complexity then whatever it is it must be relative. So it is ‘complexity wrt a,b,c,.., z’. That is, some finite encapsulation of both the underlying elements (possibly all the way down to particles or lower) and some finite encapsulation of all of the rules that control their behavior, at every layer specified. Complexity then relates to a specific subsystem, rather than some type of absolute whole. Absolute complexity is rarely what we mean.
In that definition, we then get a pretty strong glimpse of the underpinnings of complexity. We could just take it as some projection based on all of the layers, elements and rules. That is of course a simplification of its essence and that in itself is subject to another set of constraints imposed by the reduction. Combined with the initial subsystem, it is easy to see why any metric for complexity is subject to a considerable number of external factors.
Another, harder but perhaps more accurate, way of looking at complexity is as the size of some sort of multidimensional space. In that context we could conceive of what amounts to the equivalent of a ‘volume’, a spatial/temporal approach to looking at the space occupied by the system. This allows us to take two constrained subsystems and roughly size them up against each other, to be able to say that one is more ‘complex’ than the other.
Complexity in this way of thinking has some interesting attributes. One of them is that while there is some minimum level of complexity within the subsystem, organization does appear to reduce the overall complexity. That is, in a very simple system, if the rules that bind it are increased, but the increase reduces the interactions of the epiphenomenon, the overall system could be less complex than the original one. There is still a minimum, you can’t organize it down to nothing, but chaos increases the size of complexity (which is different from the way information theory sees the world). So there is some ‘organizational principle’ which can be used to push down complexity to its minimum; however, this principle is still bound by constraints similar to those that hold for any restructuring operation, like simplification. That is, things are ‘organized’ wrt some attributes.
Another interesting aspect of this perspective of complexity is how it relates to information. If complexity is elements and rules in layers, information is a path of serialization through these elements, rules and layers. That is, it is a linearized syntactic cross-section of the underlying complexity. It is composed of details and relationships that are interconnected, but flattened. In that sense we can use some aspects of Information Theory to identify attributes of an underlying subsystem. There is an inherent danger in doing this because the path through the complexity isn’t necessarily complete and may contain cycles and overlaps, but it does open the door to another method of navigating the subsystem besides ‘space’. We could also use some compression techniques to show that a particular information path is near a minimal information path, so that the traversal and the underlying subsystem are, in essence, as tightly woven as they could possibly be.
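As a rough, purely illustrative sketch (my example, not any established metric): a general-purpose compressor can act as a crude stand-in for how close a serialized description is to its minimal form. The absolute numbers mean very little, and compression only approximates a true minimal encoding, but comparing two descriptions of the same subsystem hints at which one carries more redundancy. In Java, with invented sample strings:

import java.nio.charset.StandardCharsets;
import java.util.zip.Deflater;

// Illustrative only: deflate a textual description of a subsystem and use the
// compressed size as a very rough proxy for how 'minimal' that description is.
public class PathSize {
    static int compressedSize(String description) {
        byte[] input = description.getBytes(StandardCharsets.UTF_8);
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        byte[] buffer = new byte[input.length + 64];
        int size = 0;
        while (!deflater.finished()) {
            size += deflater.deflate(buffer);   // count the compressed bytes
        }
        deflater.end();
        return size;
    }

    public static void main(String[] args) {
        String redundant = "rule A applies to X. rule A applies to Y. rule A applies to Z.";
        String organized = "rule A applies to X, Y and Z.";
        System.out.println(compressedSize(redundant) + " vs " + compressedSize(organized));
    }
}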
A key point is that complexity is subject to decomposition. That is, things can appear more or less complex by simply ignoring or adding different parts of the overall complexity. Since we are usually referring to some form of ‘wrt’, then what we are referring to is subject to where we drew these lines in the space. If we move the lines substantially, a different subsystem emerges. Since there are no physical restrictions on partitioning the lines, they are essentially arbitrary.
Subsystem complexities are not mutually independent of the overall complexity. We like to think they are, but all things are interrelated. However, there are some influences that are so small that they can be considered negligible. So for instance, fluctuations in the temperature of Pluto (the planetoid) are unlikely to affect local city politics. The two seem unrelated; however, they both exist in the same system of particles floating about in space, and they are both types of epiphenomena, although one is composed of natural elements while the other is a rather small group of humans interacting together in a regional confrontation. It is possible (but highly unlikely) that some chunk of Pluto could come crashing down and put an end to both an entire city and any of its internal squabbling. We don’t expect this, but there is no rule forbidding it.
The way we as a species deal with complexity is by partitioning it. We simply ignore what we believe is on the outside of the subsystem and focus on what we can fit within our brains. So we tend to think that things are significantly less complex than they really are, primarily because we have focused on some layer and filtered down the elements and rules. Where we often get into trouble with this is with temporal issues. For a time, two subsystems appear independent, but at some point that changes. This often misleads people into incorrectly assessing the behaviors.
Because we have to constrain complexity, we choose not to deal with large systems, but they still affect the complexity. For the largest absolute overall system, it seems likely that there is a fixed amount of complexity possible. One has to be careful with that assumption though, because we already know from Gödel’s Incompleteness Theorem that there is essentially an infinite amount of stuff theoretically out there as it relates to abstract formal systems. One could get caught up in a discussion about issues like the tangibility of ‘infinite’, but I think I’ll leave that for another post and just state an assumption that there likely appears to be a finite number of particles, a maximum size, an end to time and thus a finite number of interactions possible in the global system. For now we can just assume it is finite.
Because of the sheer size of the overall system, there is effectively no upper limit on how complex things in our world can become. We could apply the opposite of the earlier ‘organizational principle’ to build in what amounts to artificial complexity and make things more complicated. We could shift the boundaries of the subsystem to make it more complex. We could also add in new abstract layers, which again would increase the complexity. It is fairly easy to accomplish, and from our perspective there is effectively an infinite amount of space (wrt a lifetime) to extend into.
One way of dealing with complexity is by encapsulating it. That is, cleaving off a subsystem and embedding it in a ‘black box’. This works, so long as the elements and rules within the subsystem are not influenced by things outside of the subsystem in any significant way. This restriction means that working encapsulation is restricted to what are essentially mutually independent parts. While this is similar to how we as people deal internally with complexity, it requires a broader degree of certainty of independence to function correctly. You cannot encapsulate human behavior away from the rules governing economies, for instance, and these days you cannot encapsulate one economy from any other on the planet; the changes in one are highly likely to affect the others. Encapsulation does work in many physical systems and often in many formal systems, but again only wrt elements in the greater subsystem. That is, a set of gears in a machine may be independent of a motor, but both are subject to outside influences, such as being crushed.
Overall, complexity is difficult to define because it is always relative to some constraints and it is inherently woven through layers. We don’t tend to be able to deal with the whole, so we ignore parts and then try to convince ourselves that these parts are not affecting things in any significant way. It is evident from modern societies that we do not collectively deal with complexity very well, and that we certainly can’t deal with all of the epiphenomena currently interacting on our planet right now. Rather we just define very small artificial subsystems, tweak them and then hope for or claim positive results. Given the vast scale of the overall system, we have no realistic way of confirming that some element or some rule is really and truly outside of what we are dealing with, or that the behavior isn’t localized or subject to scaling issues.
Mastering complexity comes from an ever-increasing stretching of our horizons. We have to accept external influences and move to partition them or accept their interactions. In software, the complexity inherent in the code comes from the environment of development and the environment of operations. Both of these influence the flow and significance of the details within the system. Fluctuations in the outside needs and understanding drive the types of instructions we are assembling to control the computer. Our internal ‘symbols’ for the physical world align with or disconnect from reality based on how well we understand their influences. As such, we are effectively modelling limited aspects of informal systems in the real world with the formal ones in a digital world. Not only is the mapping important, but also the outside subsystems that we use to design and build it. As the boundaries increase, only encapsulation and organization can help control the complexity. They provide footholds into taming the problems. The worst thing we can do with managing complexity is to draw incorrect, artificial lines and then just blind ourselves to things crossing them. Ignoring complexity does not make it go away; it is an elementary property of our existence.
Friday, June 15, 2012
Globals and State
One of the great lessons learned -- long ago -- was that making variables ‘global’ in programming code was just asking for trouble. It is, of course, easier to write the code with globals; you just declare everything global then fiddle with it wherever you want. But that ease comes at the rather horrendous cost of trying to modify the code later. Get enough different sections of code playing with the same global and then suddenly it is very complicated to ascertain what any little change to the variable will do to the overall system. So we came to the conclusion that these ‘side-effects’ were both very expensive and completely undesirable. A lot of effort went into making it possible in most modern languages to batten down the scope for everything: variables, functions, etc. We turned our attention towards stuffing as much as possible into ‘black boxes’ -- encapsulation -- so that we minimize their interaction with the rest of the code base.
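As a small, hypothetical Java illustration of that trouble (the names are invented): two unrelated pieces of code fiddle with the same global, so a change made for one caller’s convenience silently alters the behavior of the other.

// A sketch of the problem: a mutable 'global' (a public static field) shared by
// pieces of code that have nothing to do with each other.
public class GlobalTrouble {
    static int retryLimit = 3;   // the global

    static void importFeed() {
        retryLimit = 10;         // tweaked here for this caller's convenience...
    }

    static void sendInvoice() {
        // ...while this code still assumes retryLimit is 3.
        System.out.println("sending with " + retryLimit + " retries");
    }

    public static void main(String[] args) {
        importFeed();
        sendInvoice();           // prints 10, not the 3 this code was written for
    }
}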
Another lesson often learned was that stateless code was considerably easier to debug than code that supported a lot of internal states. If you execute the code and each time its behavior is identical, it is fairly straightforward to determine whether it is correct or not. If, however, the behavior fluctuates based on changing internal state information, then testing becomes a long and drawn out process of cross-referencing all of the different inputs with the different outputs (a task usually short-changed, leading to corner-case bugs). Test cases become complex sequences that are both time consuming and hard to accurately reproduce. Simple tests for stateless code mean less work and better quality.
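A hedged sketch of the difference, again in Java with invented names: the stateless version always gives the same answer for the same arguments, while the stateful one also depends on the history of earlier calls, so any test has to set up and cross-reference that history.

public class Statefulness {
    // Stateless: the output depends only on the arguments, so one test per input.
    static double taxOn(double amount, double rate) {
        return amount * rate;
    }

    // Stateful: the output also depends on what happened before this call.
    static double accumulated = 0.0;
    static double taxOnWithMemory(double amount, double rate) {
        accumulated += amount;
        return accumulated > 150.0 ? amount * rate * 2 : amount * rate;
    }

    public static void main(String[] args) {
        System.out.println(taxOn(100, 0.1));            // 10.0, every single time
        System.out.println(taxOnWithMemory(100, 0.1));  // 10.0 the first time...
        System.out.println(taxOnWithMemory(100, 0.1));  // ...but 20.0 the second time
    }
}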
State changes can come from internal modification of variables, but they are most often triggered by things external to the scope of the code. Thus, function A modifies some state information, so that the behavior of function B changes. Generally the call to function A comes from somewhere outside of function B’s code block. This essentially forms an indirect reference to the state for function B, which relies not on a global variable, but rather on a function that can be accessed globally. A global function. When we banished globals we did so for static variable declarations; however, a code-based dynamic call is essentially the same thing. In a very real sense, any part of the program that is subject to changes, direct or indirect, originating from other parts of the program is some type of global. Global data or global action, it doesn’t matter.
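Something like this minimal Java sketch (hypothetical names) shows the idea: nothing below is declared as a ‘global variable’, yet functionA, callable from anywhere, changes hidden state that silently alters what functionB does.

public class GlobalAction {
    private static boolean verbose = false;   // hidden behind the functions below

    static void functionA() { verbose = true; }   // a globally reachable setter

    static void functionB() {                     // its behavior depends on A
        if (verbose) System.out.println("step 1... step 2... step 3... done");
        else System.out.println("done");
    }

    public static void main(String[] args) {
        functionB();   // "done"
        functionA();   // called from some distant corner of the program
        functionB();   // now prints the long form, for no reason visible here
    }
}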
Ideally, to make everything easily testable, we’d like 100% of the arguments explicitly pushed into every function, and to support changes within the system we’d like 0% side-effects, so everything changed is returned from the function. Global-less, stateless code.
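One possible sketch of that ideal, with invented names: everything the function needs comes in as arguments, and everything it changes comes back out as a return value, leaving nothing mutated behind the scenes.

public class PureStyle {
    // A tiny hypothetical, immutable session; changing it means building a new one.
    static final class Session {
        final String user;
        final int pageSize;
        Session(String user, int pageSize) { this.user = user; this.pageSize = pageSize; }
        public String toString() { return user + "/" + pageSize; }
    }

    // All inputs are explicit arguments; the only 'change' is the returned value.
    static Session withPageSize(Session s, int newSize) {
        return new Session(s.user, newSize);
    }

    public static void main(String[] args) {
        Session before = new Session("alice", 20);
        Session after = withPageSize(before, 50);
        System.out.println(before + " -> " + after);   // alice/20 -> alice/50
    }
}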
Often in APIs there are a large number of different primitives available. Different users of the API will access subsets of these functions in many different orders. In most Object Oriented (OO) languages this is handled by using something equivalent to set/get methods to alter the state of internal private variables, which other primitives use as values in their calculations. However, these methods are only available if the object is within the scope of the caller, so this has the effect of constraining their usage. So long as the object is not global, the methods are not either. The object becomes a local variable; interactions with it can happen in any order necessary. However, you can violate this easily by either making the methods static or by creating the object as a Singleton. Either way introduces a global effect.
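For example, a small hypothetical sketch of that escape hatch: the same settings object, reached through a Singleton, is effectively global even though no ‘global variable’ was ever declared.

public class SingletonGlobal {
    static final class Settings {
        private static final Settings INSTANCE = new Settings();   // the Singleton
        static Settings instance() { return INSTANCE; }
        private int timeoutMillis = 500;
        void setTimeoutMillis(int t) { timeoutMillis = t; }
        int timeoutMillis() { return timeoutMillis; }
    }

    static void tuneForBatchJobs() {
        Settings.instance().setTimeoutMillis(60_000);   // reachable from anywhere...
    }

    static void handleWebRequest() {
        // ...so this code quietly inherits whatever the last caller left behind.
        System.out.println("timeout is " + Settings.instance().timeoutMillis());
    }

    public static void main(String[] args) {
        handleWebRequest();   // 500
        tuneForBatchJobs();
        handleWebRequest();   // 60000
    }
}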
Another way to mess with things is to hold the internal data as a reference to an object that lives outside of the scope. When changes to that underlying object can occur anywhere in the code, this is another form of global manipulation.
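A small illustrative case (invented names): the ‘private’ field below is only a reference, so any code that still holds the original list can change the object’s internal data from the outside.

import java.util.ArrayList;
import java.util.List;

public class LeakyReference {
    static final class Report {
        private final List<String> rows;
        Report(List<String> rows) { this.rows = rows; }   // keeps the caller's list
        int size() { return rows.size(); }
        // A defensive copy -- new ArrayList<>(rows) -- would close the hole.
    }

    public static void main(String[] args) {
        List<String> shared = new ArrayList<>();
        shared.add("row 1");
        Report report = new Report(shared);
        shared.add("row 2");                  // mutated far away from Report's code
        System.out.println(report.size());    // 2, even though Report never changed it
    }
}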
In most systems, particularly if there is an interface for users, there is a considerable amount of mandatory state. Basically the computer is used to remember the user’s actions so that the user doesn’t have to keep supplying all of the contextual data over and over again. Depending on how the surrounding session mechanics interact with the underlying technology, this can leave a lot of little pieces of required state lying all over the code. This of course is a form of spaghetti (variable-spaghetti) and it can be quite nasty because it gets placed everywhere. Cleaning this up means collecting all of the state information together into a single location for any given technical context. So for instance, in a web app there is likely one big collection of user state information in the browser, and another collection associated with the user’s session on the server. That’s fine, and considerably better than having a huge number of little pieces scattered across both places.
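One possible shape for that clean-up, sketched in Java with invented names: all of the server-side state for a given user lives in a single holder, keyed by a session id, instead of being scattered through the code.

import java.util.HashMap;
import java.util.Map;

public class SessionStore {
    // Everything the server remembers about one user, collected in one place.
    static final class UserSession {
        String language = "en";
        String lastQuery = "";
        int resultsPerPage = 20;
    }

    private final Map<String, UserSession> sessions = new HashMap<>();

    UserSession forId(String sessionId) {
        return sessions.computeIfAbsent(sessionId, id -> new UserSession());
    }

    public static void main(String[] args) {
        SessionStore store = new SessionStore();
        store.forId("abc123").lastQuery = "global state";
        System.out.println(store.forId("abc123").lastQuery);   // "global state"
    }
}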
Long ago we identified that global variables were a big problem and in many circles we banished them. But I think we focused too hard on the ‘variable’ part, and not enough on the ‘global’ aspect. Global anything is a potential problem, anywhere. Like disorganization and redundancies, it is just a pool of gasoline waiting for a match. Software systems can be composed of an outrageous amount of complexity, and the only way to effectively deal with it is by encapsulating as much as possible into well-organized sub pieces. If you break that encapsulation... well, you are pretty much right back to where you started ...
Thursday, June 7, 2012
Micromanaging Software Development
Software development projects fail; often and badly. I’ve blogged a lot about the different causes so I won’t repeat most of that again, only to say that if you include all of the various people involved in one way or another, at every level (including the end-users), the whole thing is known to be a very difficult exercise in ‘herding cats’. All of the problems stem from a lack of focus.
There are as many “theories” out there about how to prevent failure as there are ways to fail. They range all over the map, but a few of them rely on micromanagement at their core.
What is micromanagement? My first real introduction to it was from working in mid-range restaurants as a kid. Most of them would be highly chaotic places but for their managers. The managers sit on the employees and make sure that they are all following the rules. They need to clock in when they start and clock out when they leave. They have to wash their hands, and the kitchen staff have to put on clean ‘whites’ and hairnets. Employees aren’t allowed to eat the food and they are always supposed to keep busy even if the restaurant is not. There are rules for preparing the food, rules for serving it and rules for handling the waste. As an employee your job is not to think, but to get your work done as quickly as possible, while obeying all of the rules.
When the manager is doing his or her job well, they’re running all over the place causing the restaurant to function like clockwork. A finely tuned machine for grinding out food and collecting revenues.
As a patron I’ve always appreciated a well-running restaurant with good food. As an employee I didn’t actually mind the discipline. At least you always knew where you stood and you didn’t have to think hard about stuff. After a while you just fell into the groove and did whatever the manager wanted you to do. Once work was over, it was over. You were done for the day.
What I realized is that doing a job like ‘line cook’ in a restaurant is a semi-skilled labour position. It takes a bit to learn the ropes, but once you’ve acquired the skills they are pretty much constant. Even into my middle age, I can still grill a ‘mean’ steak (I’ve mastered medium rare and rare. mmmm). Micromanagement is a very good way to manage non-skilled and semi-skilled labour, since it really comes down to a manager just extending their capabilities by directing the physical work of others. And if the micromanager is found to be annoying (as sometimes they are), changes in morale really don’t significantly affect people’s ability to get the job done. It’s probably one of the best ways of handling employees in this type of position.
For decades now I’ve seen many a person look enviously at how these types of organizations function and then try to apply this back to intellectual jobs like computer programming. As I’ve often said, by far the bulk of most programming is ‘jogging for the mind’, so it doesn’t always require the uber-deep levels of thought that something like research or mathematics needs. From the outside this seems to confuse people, who mistake it for being closely aligned with semi-skilled labour. Both require some thinking, so both ought to be the same, they surmise. There is however a huge difference. When cooking, for instance, my main output is the way I am handling the food as I prepare it. When programming, the byproduct of my work is typing on a keyboard or swinging a mouse around, but the main output of my work is actually in ‘understanding’ the solutions I am creating. An outsider can see that I am using the correct techniques to cook and, if they have acquired the same skill set, they can help me correct flaws in how I am working. An outsider however cannot peer into my brain and see that I am correctly understanding the puzzles I am solving and worse, depending on what I am building, they may need decades to acquire the same skill set, just to be able to critique my working habits.
Even if programming is jogging, and it gets considerably easier the more you do it, there is still a huge amount of learning involved. We solve a vast array of problems, on a wide range of equipment, with a plethora of different technologies. Too many things for any single programmer to master them all, and they change fast and frequently. Stop coding for long enough and suddenly the landscape looks completely foreign. It isn’t really, and a good knowledge of the many theories of computer science can really help to keep abreast, but still the amount of knowledge required is staggering.
In principle this means that a micromanager in software is unlikely to have more expertise than his employees, and that he has no way of judging how well his employees are working through the problems (until it is too late).
I’ve seen the first aspect of this solved by creating half-managers/half-programmers. In some sense it has been very common, since well-run teams are usually led by the seniors in the group. Some organizations have just formalized this relationship and built on it.
The second aspect of this however is more difficult to deal with. Nothing is worse than letting a bunch of programmers run wild for months or years, only to find out that they really didn’t understand what they were supposed to build or how they should build it. A great many projects have failed this way. The micromanagement solution is to reduce the work to very tiny time increments, say a few days. The manager issues the work, the programmer delivers it, and then it becomes very obvious whether the manager/programmer communication and the programmer’s skills are both at a functional level.
The downside to this is ownership. Once you take away a long-term commitment to building something from the coders, they’re just drones to belt out the pieces. A skilled micromanager is required to direct the process, but the inspiration and quality of work from the programmers is likely to be lackluster. The job has become akin to working in a restaurant and of course once the day is done, the programmers rush off to do better things with their time. This of course can be a functional way to develop software, but it is at the far end of the spectrum, caused by trading off development risk for increased surveillance. Lacking satisfaction, the turnover for programmers is high and few of them will have the ability to become effective micromanagers themselves.
There are no doubt circumstances where micromanagement works in software development; I’ve seen a couple of good examples. The work gets done, but from my personal experiences I’ve always seen that the overall quality suffers greatly for this. Attempts to formalize this relationship fail unless the significance of the manager’s role, and the exceptional level of skill it requires, is factored in correctly. Even then, with the manager basically over-extended and the staff turn-over often high, it’s no surprise that the organization is subject to significant volatility in both their output and quality. The risks have shifted from each individual programmer over to the relationship between the programmers and their handlers, but they are still there.
In general, although I find this type of organization structure interesting, I’d have to say that micromanagement is not a particularly effective way of getting over the herding cats problem. I’ve always felt that a well-educated and well-balanced team, with strong leadership is probably a more sustainable way to direct the work. A happy, engaged programmer is much more likely to catch significant issues before they’ve become too ingrained to fix. If they can transmit this upwards, while getting enough of a downwards context from management, they can be well positioned to deliver their individual work both quickly and correctly. Well that’s my “theory” anyways …
Tuesday, June 5, 2012
Another Stinkin’ Analogy
Yea, yea, I know. Programmers hate analogies. But I think that this attitude leads one to miss their importance. Sure the world is based on details -- facts -- but these facts are just chunks of information rooted in our physical existence. And more importantly, this information isn’t mutually independent; it is tied by context to all of the other information floating about. These ties form a meta-layer of information that binds together the underlying information. It’s these higher-level relationships that things like analogies, metaphors and similes try to address at a higher level of abstraction. Sure there are simplifications involved, and there isn’t always a one-to-one correspondence with all aspects of an analogy, but what there is, is a relationship that can be learned, understood, and applied back to help organize and utilize the facts. Facts don’t help unless you can turn them into actual knowledge …
This analogy comes in two parts. The first is simple: programming is like solving a Rubik’s cube. That is, it is a puzzle that involves figuring out a series of steps to solve it. Like Rubik’s cube there are many solutions, but there is often a minimal number of steps possible. Also, like solving the cubes, there are some states between ‘randomized’ and ‘solved’ where some of the sides appear solved, but the whole puzzle is not. Of course, programming has a significantly wider array of puzzles within it than just this one that need to be solved. They come in all sorts of shapes and sizes. Still, when coding it’s not unlike just working your way through puzzle after puzzle, while trying to solve them as fast as you can. It can often be a race to get the work completed.
The second part of the analogy comes in after the puzzle has been solved. Some people just toss the completed work into a big pile, but big software development needs structure, not piles. In that sense, once each puzzle is complete it is neatly stacked to form an architecture. Usually the architecture itself has its own needs and constraints. The puzzles form the substance of the walls, but the walls themselves need to be constructed carefully in order for the entire work to hold together. Sometimes you meet programmers that don’t get that second half of the work. They think it’s all about the puzzles. But it really should be understood that the users don’t use the puzzles directly, they use the system, and it’s the way the system holds together that makes the difference in their lives. The code could contain the cleverest algorithm ever invented, but if that’s wrapped in a disorganized mess, then its effect is lost on them.
Ok, I lied; there is a third part to this analogy as well. I didn’t pick Rubik’s cubes lightly as the underlying type of puzzle. In the second part, obviously the cubes can also act as bricks in a larger structure, but there is still more. Anyone that has spent time solving Rubik’s cube from scratch -- no hints -- knows that it isn’t an easy puzzle. You have to work really hard to work out the mechanics of the combinations that will leave a solution in place. There are, of course, lots of books you can buy to teach you how to solve the cubes quickly. Once you get the rules, it’s not very hard at all; in fact these days kids compete for speed in times that are, quite frankly, somewhat unbelievable. Some programmers will argue that using Rubik’s cube as the iconic puzzle is a gross simplification, but I chose it on purpose since it has a rather dual nature. If you don’t know how to solve it, it is a hard puzzle. But if you do your homework first, you can get it down to an incredible speed. That ‘attribute’ holds true for programming as well. Computer languages are simple in their essence, and all libraries are essentially deterministic. If you dive in without any context, it is a difficult world where it almost seems magical how things get built. But if you gain the knowledge first to solve the types of problems you are tackling, then you can belt through them quite quickly. In a prior post I referred to it as ‘jogging for the mind’. It’s not effortless and it takes time, but it’s not nearly as hard as it is if you have no context. Write enough code, and coding becomes considerably faster and more natural.
In turning what we know into knowledge we need to collect the details together then find the rules and patterns that bind them. It’s this higher level that we apply back to our lives, to hopefully make them better in some way. This applies back to software development as well. Programming can be very time intensive. We rarely get enough time to really bring the work up to an exceptional quality. And it’s this deficit in resources that drives most of our problems in development. To counter this, we need to be as effective as possible with our limited resources, and it is exactly this point that comes right back to the analogy. I’ve spent my life watching programmers struggle in a panic to solve Rubik’s cube-like problems faster than is possible, but mostly because they didn’t just stop and find some reference on how to actually solve the puzzles first. They’ve become so addicted to doing it from scratch that they see no other way. Meanwhile, they could have saved themselves considerable stress by just looking up the answer. And to make it worse, because of the time constraints, they often settle on getting a couple of sides nearly done, not even the whole puzzle.
There are, of course, a huge variety of Rubik’s-like puzzles that we deal with on a regular basis, but easily the majority of them are well-understood and have readily available answers scattered about the World Wide Web. The time taken to find an answer is just a fraction of the time required to solve it yourself, and the more answers you find the more puzzles you can breeze through. The puzzles are fun and all, but what we’re building for the user is the structure of the system. Getting that done is considerably more fun.