In software we talk about state. The value of a boolean variable is its state, and there are only two possible states.
For variables with larger ranges, i.e. more possible settings, there can be a huge number of possible states, but they are all discrete. An integer, for example, may be set to 42.
We usually use state to refer to a group of variables. The state of a UI, for example, is its settings, its navigation, and all of its preferences.
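To make the scale concrete, the state space of a group of variables is the product of each variable's individual range. A minimal sketch in Python, with purely illustrative numbers:

```python
# The combined state space of a group of variables is the product
# of their individual ranges, so even small groups explode quickly.
bool_states = 2          # a boolean has exactly two states
int32_states = 2 ** 32   # a 32-bit integer has about 4.3 billion
enum_states = 10         # e.g. a dropdown with ten options

# A tiny "UI" holding one of each has this many distinct states:
ui_states = bool_states * int32_states * enum_states
print(f"{ui_states:,}")  # 85,899,345,920 discrete states
```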
Context is similar, but broader. The context is all of the variables, whether explicit or implicit, formal or informal. It is really anything at all that can vary, digitally or even in reality.
Sometimes people restrict context to purely digital usage, but it is far more useful to open it up to include any informal variability in the world around us. That way we can talk about the context of a UI, but we can also talk about the context of the user using that UI. The first is a proper subset of the second.
The reason we want it to be wider than, say, just a context in the backend code is that it affects our work. Software is a solution to one or more problems. Some of those problems are purely digital, such as computations, persistence, or communications, but most of our problems are actually anchored in reality.
For instance, consider a software system that inventories cogs created at a factory. The cogs themselves and the factory are physical. The software mirrors them in the computer in order to help keep track of them. So some of the issues that affect the cogs, the factory, or the usage of the system are really just ‘informal’ effects of reality. What people do with the software is heavily influenced by what happens in the real world. The point of an inventory system is to help make better real-world decisions.
We may or may not map all of those physical influences onto digital proxies, but that does not mitigate their effect. They happen regardless. So if there are real events happening in the factory that affect the cogs but are not captured correctly, the digital proxies for those cogs can fall out of sync. We might, for example, have the wrong counts in the software because a batch of cogs went missing.
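One common defence is periodic reconciliation: compare the digital proxies against a physical stocktake and flag any drift. A rough sketch, where the function, the cog names, and the counts are all invented for illustration:

```python
# Reconcile recorded inventory against a physical stocktake. Any
# mismatch means the digital proxies have drifted from reality.
recorded = {"cog-a": 120, "cog-b": 75}  # what the software believes
counted = {"cog-a": 113, "cog-b": 75}   # what the stocktake found

def find_drift(recorded: dict[str, int],
               counted: dict[str, int]) -> dict[str, int]:
    """Return the per-item difference between reality and the records."""
    items = recorded.keys() | counted.keys()
    return {i: counted.get(i, 0) - recorded.get(i, 0)
            for i in items
            if counted.get(i, 0) != recorded.get(i, 0)}

print(find_drift(recorded, counted))  # {'cog-a': -7}: seven went missing
```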
The mappings between reality and the software can also be designed incorrectly. The factory might have twenty different types of cogs, but the software can only distinguish ten different types. The cogs themselves might relate to each other in some type of hierarchy, but the software only sees them as a flat inventory list.
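The second kind of loss is easy to see in code. A hypothetical sketch: the physical cogs form a parent/child hierarchy, but the software's flat record type simply has nowhere to put that relationship:

```python
from dataclasses import dataclass, field

# What reality looks like: cogs can contain or depend on other cogs.
@dataclass
class PhysicalCog:
    name: str
    children: list["PhysicalCog"] = field(default_factory=list)

# What the software stores: a flat row with no notion of hierarchy.
@dataclass
class InventoryRow:
    name: str
    quantity: int

# Flattening the tree drops the parent/child relationships entirely;
# nothing in InventoryRow can ever reconstruct them.
def flatten(cog: PhysicalCog) -> list[InventoryRow]:
    rows = [InventoryRow(cog.name, 1)]
    for child in cog.children:
        rows.extend(flatten(child))
    return rows
```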
In that sense the software developers are not free to model the factory and its cogs in any way they choose. The context in reality needs to properly bound the software context, so that whatever happens in the larger context can be correctly tracked in the software context.
The quality of the software is rooted in its ability to remain correct. Bad software will sometimes be wrong, so it is not trustworthy, and thus not very useful.
Now if the factory were very complex, it would be a massive amount of work to write software that precisely models everything down to each and every little detail. So we frequently apply simplifications to the solution context. That works if and only if the solution context is still a proper generalized subset of the problem context.
From our earlier example, if all twenty physical cog types map uniquely onto the ten software types, the context may be okay. But if some cogs can be mapped in different ways, or some cogs cannot be mapped at all, then the software solution will drift away from reality, and people will see this as bugs. If there are manual procedures and conventions to occasionally fix the mapping, then at some point they'll degrade and it will still fail.
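That condition can even be checked mechanically. A sketch, assuming both sets of types can be enumerated (all of the names here are invented): the simplification is safe only if every physical type maps to exactly one software type.

```python
# The simplification is valid only if the mapping from physical cog
# types to software cog types is total (nothing unmappable) and
# unambiguous (nothing mapped in more than one way).
physical_types = {"p01", "p02", "p03", "p04"}  # stand-ins for the twenty
mapping = {                                    # physical -> software types
    "p01": {"s1"},
    "p02": {"s1"},
    "p03": {"s1", "s2"},  # ambiguous: can be mapped two different ways
    # "p04" is absent: it cannot be mapped at all
}

unmappable = physical_types - mapping.keys()
ambiguous = {p for p, targets in mapping.items() if len(targets) != 1}

if unmappable or ambiguous:
    # Drift is now inevitable; the solution context must be fixed.
    print("context misfit:", unmappable, ambiguous)
```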
This is one of the most common fundamental problems with software. There often isn’t time to do the context mappings properly, and the shortcuts applied turn out to be invalid. The software context is shifted out from under the problem context, so it will gradually break. More software, or even manual procedures, will only delay the inevitable. The data, i.e. the proxies, in the computer will eventually drift away from reality.
So, if we see the context of the software as needing to be a proper subset of the context of the problem we intend to solve, it is easier to understand the consequences of simplifications.
This often plays out in interesting ways. If you build a system that keeps track of a large number of people, you obviously want to be able to uniquely identify them. Some people might incorrectly assume that a full name, as first, middle, and last, is enough, but most names are not particularly unique. Age doesn’t help, and duplicate birthdays are far too common. You could use a home address as well, but in some parts of the world even that is not enough.
Correctly and uniquely identifying ‘all’ individuals is extraordinarily hard. Identifying a small subset for an organization is much easier. So we cheat. But any mapping only works correctly for the restricted domain context when you don’t have to fiddle with the data. If you end up with Bob and Bob1, for example, then the mapping is broken and should be fixed before it gets even worse.
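The ‘Bob1’ symptom is easy to reproduce. A minimal sketch, with invented names: keying people on their natural attributes collides as soon as two real people share them, while an opaque surrogate key sidesteps the fiddling entirely.

```python
from itertools import count

# Keying people on natural attributes ("Bob Smith") collides as soon
# as two real people share them; mangling the name to "Bob1" is the
# mapping breaking, not a solution. An opaque surrogate key avoids it.
_ids = count(1)
people: dict[int, dict] = {}

def add_person(first: str, last: str, birthday: str) -> int:
    """Store a person under a surrogate key, so duplicate natural
    attributes never force us to fiddle with the data."""
    pid = next(_ids)
    people[pid] = {"first": first, "last": last, "birthday": birthday}
    return pid

# Two distinct Bobs with identical names and birthdays coexist cleanly.
a = add_person("Bob", "Smith", "1990-05-01")
b = add_person("Bob", "Smith", "1990-05-01")
assert a != b
```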
So, as a problem, we want to track a tiny group of people, and we don’t have to worry about the full context. Yet if whatever we do forces fiddling with the data, that means our solution context is misfocused and should be shifted or expanded. Manual hacks are a bug. Seen this way, it ends any sort of subjective argument about modeling or conventions. It’s a context misfit, and it needs to be fixed. It’s not ‘speculative generality’ or over-engineering; it is just obviously wrong.
The same issues play out all over software development. We build solutions, but we build them to fit against one or more problem contexts, and those often get bounced around by larger organizational or industry contexts.
That is, people often narrow down the context to make an argument about why something is right or wrong, better or worse, but the argument is invalid because the context is just too narrow. The most obvious example I know is the ancient argument about why Betamax tapes would beat out VHS, when in reality it went the other way. I think the best reference to explain it all is Geoffrey Moore in “Crossing the Chasm”, where he talks about the ‘whole product’, which is an expanded context.
All of that makes understanding the various contexts that bound the system very important. Ultimately we want to build the best-fitting solutions for the problems we are trying to solve. Comparing the two contexts is how we figure out whether we have done a good job or not.