Sunday, April 8, 2012

Structured Questioning

I worked with a person who believed that the secret to getting things done was just to make a list, then start checking off items as they were done. Anyone who has worked on a large software development project knows that it’s just not that easy. Different items depend on each other, progress varies, people come and go, and that leaves a dynamic, shifting landscape that continuously forces the work to be reordered. It’s far more an exercise in juggling a huge number of balls than it is an act of just marching through a list.

But still, most people in a project are usually just tackling one, two or maybe three items at a time. From their perspective, their traversal through the work is very list-like: it’s just a matter of moving things from the ‘to be done’ category into the ‘completed’ one. Order is very important, but only if you are looking to get the most work completed with the least effort.

I’ve been thinking about this lately, but with a slightly different spin. Big projects generally require a massive amount of analysis to get finished. Failure to complete that work results in serious scope creep problems. If you only kinda know what to build, then as you progress and learn more, that new knowledge radically changes what you are building, invalidating some of the earlier work. That’s not very efficient. A little more up-front analysis always results in a lot less reworking.

So what is the best way to ‘structure’ analysis, so that you know you’ve completely covered the problem space?

Ultimately, I think graph theory holds the answer to this type of problem. A graph is an abstract collection of vertices and edges. The vertices (or nodes) are interconnected to each other via edges. A graph is independent of any strict positional layout; the vertices aren’t fixed to any dimensional locations. The ‘where’ of the nodes isn’t important, only their relationships to each other. You can of course constrain a graph dimensionally; a planar graph, for instance, is one that can be fully represented in 2D with no edges crossing each other. For larger graphs with more edges, each can be constrained within some specific N-dimensional space (with some higher-dimensional equivalent of ‘crossing’ defined).

What makes graphs so useful is that they can be used to model multi-dimensional problems without getting tied down into any spatial positioning issues. They simply represent the interrelationships between different entities. They’re pure information.
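This relationship-only view is easy to make concrete. A minimal sketch in Python (the node names and the adjacency-list representation are my own illustration, not anything prescribed here):

```python
# A graph stored as an adjacency list: each vertex maps to the vertices
# it connects to. There is no positional information at all -- the
# structure is pure relationship data.
graph = {
    "A": ["B", "C"],   # A is connected to B and C
    "B": ["A"],
    "C": ["A", "B"],
}

# We can ask what is related to what, without any notion of 'where'
# the nodes sit in space.
neighbors_of_a = graph["A"]
```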

Applying this to analysis is a matter of figuring out what entities the vertices represent. That comes pretty naturally if we realize that a question/answer format is a strong way of structuring complex informal systems. The question sets a constrained context, while the answer iterates through all of the associated details. We see this commonly in technical writing when the author uses a dialog between two parties to explore issues in depth, or when a group of people create a ‘howto’ reference document to make it easier for others to navigate quickly to the answers. Both of these forms provide a question/answer structure for expressing large or complex information.

Using this perspective, we can simply set each vertex to represent a question/answer pair. That sets the structure, but by itself it really isn’t all that useful. Depending on the question, answers can often result in a slew of more questions. These sub-questions are important, and related, but fall out of the original context defined by the question. That is, any question/answer pair has a relationship to many other question/answer pairs. There is some meta-relationship between them.

While the underlying structure of all of these related question vertices is a graph, what we are interested in isn’t their overall relationship structure; rather, we are interested in making sure that as we traverse all of these questions in the analysis process, we’re not dropping any important balls. So while the underpinnings are a graph, what we need is closer to a list of questions and answers.

Working that in is as simple as iterating through this information as a list of vertices plus all of their edges. That is, we set the question as a unique identifier for the vertex, the answer as the content, and include a list of other related questions. So it looks like:

Question: “...”
Answer: “...”
List of related Questions: [ “...”, “...”, ...]
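That record translates directly into code. Here is one possible sketch in Python, using the root question from later in this post as sample data (the class and field names are my own, purely illustrative):

```python
from dataclasses import dataclass, field

# One vertex in the question graph: the question text is the unique
# identifier, the answer is the content, and the related questions
# are the edges out to other vertices.
@dataclass
class QuestionVertex:
    question: str                                 # unique identifier
    answer: str = ""                              # filled in during analysis
    related: list = field(default_factory=list)   # edges to other questions

v = QuestionVertex(
    question="What problem are we trying to solve with a computer?",
    answer="Allow the users to communicate effectively with each other.",
    related=["Who are the users?", "Why would they use this software?"],
)
```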

At this point it may seem like we’ve spun right around into a ‘trivial’ approach, but I suspect that the overwhelming simplicity of this structure hides some of its underlying depth. With a careful selection of the syntax of the questions, we leave ourselves open to gaining some confidence in knowing just how much ground is covered by a list of these ‘structured questions’. We can, for instance, constrain the types of questions by spatial or temporal bounds. Thus some questions can be independent of time, essentially nouns, that set a context in our physical world. These come in the form of ‘who’, ‘where’ or ‘what’. We can also include temporal questions, based around verbs, that seek answers for causal relationships in the past or predictions of the future. These are of the form ‘why’ or ‘how’.

Strict structuring of questions both ensures that the answers are significantly ‘atomic’ and makes it easier to identify alternative or ambiguous questions that need to be aligned, removed or rewritten. This is necessary, considering the goal is to provide some structure that leads to confidence that the problem space has been fully explored. Redundancies, if not detectable, lead to concerns that the process has fallen into analysis paralysis. Chaotic exploration most likely results in large undetected gaps (which, in software, don’t get found until it’s too late).

All of this brings us full circle, back to the problem of trying to frame the analysis. If we were interested in analyzing an area related to building new software, we’d start with what is essentially the root of all such questions: “What problem are we trying to solve with a computer?” An answer to this, for a common problem like social networking, might be something like “Allow the users to communicate effectively with each other via software running on the computer”, which is simple enough that it immediately generates a slew of sub-questions:

  • Who are the users?
  • Why would they use this software?
  • How do they use the software?
  • What does it mean to ‘communicate effectively’?
  • etc.

Then we can take this collection of sub-questions -- constrained relative to our root question -- and start breaking them down into answers, and more related questions. This effectively brings us back to the idea of ‘just creating a list and working through it’. There is a process to move forward, constraints to prevent endless analysis, and a means to know that the work is approaching ‘completeness’. We just keep iterating the list, until the questions dissipate or become trivially uninteresting.
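The keep-iterating-the-list process above can be sketched as a simple worklist loop. In this hedged sketch, expand() stands in for the human work of answering a question and spotting its sub-questions; the stub table below is purely illustrative, and the depth bound is one possible way of encoding the constraints that prevent endless analysis:

```python
# Iterate the list of questions until they dissipate, skipping questions
# already answered (redundancy) or beyond the analysis boundary (depth).
def analyze(root, expand, max_depth=5):
    answered = {}                  # question -> answer: the 'completed' set
    pending = [(root, 0)]          # the working list of (question, depth)
    while pending:
        question, depth = pending.pop(0)
        if question in answered or depth > max_depth:
            continue               # redundant or outside the boundaries
        answer, sub_questions = expand(question)
        answered[question] = answer
        # each answer may spawn more questions; they go back on the list
        pending.extend((q, depth + 1) for q in sub_questions)
    return answered

# Illustrative stub: a tiny, fixed question/answer table.
def expand(question):
    table = {
        "What problem are we trying to solve?":
            ("Help users communicate.", ["Who are the users?"]),
        "Who are the users?": ("People who want to talk.", []),
    }
    return table[question]

results = analyze("What problem are we trying to solve?", expand)
```

The loop terminates exactly when the analysis does: when no answer produces a new, in-bounds question.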

Now of course, initially it’s hard to know how many sub-questions will spin off an initial one. There is a combinatorial explosion in progress, as each question spawns more questions. But this is why it’s so important that there are constraints. There are natural boundaries to the analysis which, if articulated early, prevent an endless process that wanders outside of the lines.

Inside of this space, however, is a finite area that we are hoping to cover completely. As the analysis progresses, the growth of new questions will slow and eventually reverse. This shift provides a good indication of the remaining work. That is, while the process is initially inestimable, as more becomes known it approaches increasingly reliable estimates. The unknowns will gradually vanish with time. Tracking this progress identifies how much work is left.

In practice this might come into play as setting a fixed period for the analysis, then mid-way through having to redo the original effort estimate; but hopefully enough is already known by then that this only has to happen once. And although the effort exceeds the initial prediction, the downstream cost savings from not having to cope with scope creep will put the project in better shape.

Overall, I like the idea of structured questioning, although I haven’t worked it into any real-world analysis yet. It just follows from my understanding that almost any ‘organization’ is critical to making large projects work. The key is that it is incredibly simple, yet it is highly constrained by a defined structure. These two attributes always combine well. The things that work in practice need to be easily understood, and they need to provide enough value that people gravitate towards them because they help, rather than hinder the process.