If they had to work on a ‘small part’ it would be fine, but if they need to do work that spans many parts, then they have to iterate through each one, one at a time. In doing that, they incur a significant risk that one or more of the sub-parts is incorrect; that somewhere along the way, they will derail.
What we would like to do is to give them a means of being certain that all of their work fits correctly into the big picture. The most obvious idea is to have a means of planning the work before doing it, such that any problems or issues with the plan are visible and can be corrected easily before the work is started.
The quality of a plan is dependent on its foundations. If the foundations are constantly shifting, the plan itself would need constant modification to keep up.
So an essential part of planning is to lock down every major thing that can vary.
In this case, those things would be locked down for the length of the work, but afterward, they could be revisited and changed. That length of time would be some guesstimate of the sum of the expected work for the parts.
If the guesstimate were too short, and the changes couldn’t be delayed, then they might derail the plan by throwing off the continuity of the work, which would make it more likely to be wrong. So locking for too long is considerably less risky than locking for too short.
The plan itself is subject to complexity.
In that, planning for a chaotic or disorganized environment is at least orders of magnitude more complex than if everything is neat and tidy.
In that sense, the plan itself becomes one of the parts of the project and if it’s a large plan, then it falls right back to the same issues above. One would need to work through the parts of the plan to ensure that they maintain consistency and in order to do that correctly, one would need a meta-plan. This, of course, is upwardly recursive.
If we wanted to ensure that things would happen correctly, we would need smaller and smaller meta-plans, until we arrive at one that is small enough that a single person could create and check it properly. Then, that would expand to the next level, and as we walked through those parts, it would expand to the level below, etc.
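To make that recursion concrete, here is a minimal sketch in Python of breaking a plan down until every level is small enough for one person to create and check. The names (Plan, is_small_enough, chunks, decompose) and the part-count threshold are invented purely for this illustration; it is a sketch of the idea above, not a prescribed method.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Plan:
    description: str
    parts: List["Plan"] = field(default_factory=list)

def is_small_enough(plan: Plan) -> bool:
    # Assumption: 'small enough' means one person can create and check it,
    # approximated here by a simple limit on the number of direct parts.
    return len(plan.parts) <= 3

def chunks(items, size):
    # Split a list into consecutive groups of at most 'size' items.
    return [items[i:i + size] for i in range(0, len(items), size)]

def decompose(plan: Plan) -> Plan:
    # Recursively wrap a large plan in layers of meta-plans until every
    # level is small enough to be created and checked by one person.
    if is_small_enough(plan):
        return plan
    groups = [Plan(plan.description + " / group " + str(i + 1),
                   [decompose(p) for p in chunk])
              for i, chunk in enumerate(chunks(plan.parts, 3))]
    return decompose(Plan(plan.description, groups))

Running decompose on a plan with dozens of parts just layers meta-plans on top of it until no level has more than a handful of pieces, which is the same level-by-level expansion described above.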
This shows that the costs of disorganization are at least multiplicative. Disorganization makes the actual work harder, it makes the planning harder, and it makes everything on top of that harder, while also making the overall hierarchy larger. If, say, it doubles the effort of each task and also doubles the planning needed for each task, the combined overhead is roughly four times, not two.
So, the first big thing that would improve the likelihood of success is to carefully organize as much as possible, with well-defined rules, so that any subsequent planning or work goes smoothly.
The converse is true as well: if a big project fell apart, the most likely reasons are changes to the foundations and disorganization. Changes themselves are fairly easy to track, and if they haven’t been tracked, that chaos is itself a form of disorganization. So the causes of the problem can be weighted between not locking down enough of the work and just being disorganized (in many, many different ways). Usually, the latter is far more significant.
In its simplest form, organization is a place for everything and everything in its place. On top of this, we also need to control how similar (but not exactly the same) things share the same place: if there are only a few similar things, they can be grouped together, but once there are more than a few of them, they need their own distinct ‘places’ to break up the collection. That is a scaling property, often seen in that when things are really small, everything looks similar, but as they start to grow, the similarities need to be differentiated. There is a recursive growth problem here too, in that the places themselves need to be organized, needing meta-places, etc. So, it’s never-ending, but it does slow down as things become large, massive, etc. Places need to be split, then, when there are enough of them, organized too.
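As a rough sketch of that scaling property, the following Python fragment shows a ‘place’ that holds a few similar items directly, then splits into distinct sub-places by category once the collection grows past a threshold. The names (Place, MAX_ITEMS) and the threshold of four are assumptions made for the illustration, not a recommended design.

MAX_ITEMS = 4  # assumption: 'more than a few' means more than four

class Place:
    def __init__(self, name):
        self.name = name
        self.items = []       # things stored directly in this place
        self.subplaces = {}   # meta-places, keyed by category

    def add(self, item, category):
        # A few similar things can share one place.
        if not self.subplaces and len(self.items) < MAX_ITEMS:
            self.items.append((category, item))
            return
        categories = {cat for cat, _ in self.items} | {category}
        if not self.subplaces and len(categories) == 1:
            # Nothing left to differentiate by, so keep them together.
            self.items.append((category, item))
            return
        # Too many similar-but-not-identical things: give each category
        # its own distinct place and push the existing items down.
        if not self.subplaces:
            for cat, existing in self.items:
                self._subplace(cat).add(existing, cat)
            self.items = []
        self._subplace(category).add(item, category)

    def _subplace(self, category):
        if category not in self.subplaces:
            self.subplaces[category] = Place(self.name + "/" + str(category))
        return self.subplaces[category]

Note that the splitting only goes as deep as the ability to differentiate the items; in practice each new level needs a finer way to tell similar things apart, which is exactly the recursive need for meta-places described above.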
If you know that a project will, say, go from small, through medium, and then to large, it is far more effective to settle on an organization scheme that supports the large size than to reorganize each time everything grows. Basically, if there were some ‘growth plan’ and its elements could be locked down enough, you could skip the less organized variations and go straight to the heavier version, taking more time initially but saving a huge amount overall.
So, the maximum locking time is an intrinsic quality of the environment.
The longer it is, the more you can leverage it to be more efficient. If it is short, perhaps because there are issues like market fit involved, such that the initial stages need viability testing first, then that sets the planning length, and indirectly the type of organization that can be applied.
But, almost by definition, that first stage can’t be massive, so for volatile environments, it is far better to just find the shortest path forward. The only caveat is that the method of working itself needs to change if the first stage turns out to be viable and the product needs to grow massive. Basically, absolutely every assumption needs to be reset and redone from scratch.
That indirectly says a lot about why treating everything from a reactive, short-term perspective fails so spectacularly when applied to big projects. If there is a strong likelihood of a long term, then not utilizing it, and instead treating all of the sub-parts as if they were independent, is going to derail frequently and eventually prevent the rest of the work from getting completed.
If one avoids explicitly organizing things, then, since organization doesn’t happen by accident, growth will be chaotic. It’s in and around this trade-off that we realize there can be no ‘one-size-fits-all’ approach for all sizes and time frames.
With all of that in mind, if we want to do something massively complex, then organization and planning are essential to ensuring that the complexity won’t overwhelm the work. If we want to do something we think is trivial, but it turns out that it isn’t, we pretty much need to return to the same starting point and do it all over again. If we don’t, the accumulated disorganization will shut it all down prematurely. That is hardly surprising: watch master craftsmen in various domains work, and they often keep their workspaces tidy and clean up as they go. It’s one of the good habits that helped them master their craft.