Saturday, January 25, 2014

Our world just keeps on changing, and with all of these changes we must keep updating and asserting our basic human rights. To this end, I suggest a couple of new rights that I think we all possess:
We own the intellectual property rights to all of our interactions with the world. That is, if we buy something from a store, we own and control any data generated from that interaction which identifies us as individuals. The store can own data showing that they sold a bunch of stuff to a collection of anonymous people, but if that data singles anyone out in any way, shape or form, then that person owns it. That right applies not only to stores but to governments and healthcare as well; in fact, to any and all interactions that we have as we creatively engage with the world around us. If it is specifically about a person, then clearly they should own the rights to it, and it can't be used or even collected without their explicit consent.
Given that scientists seek to enlighten us with their research, we own the right to not accept anything they say unless they also present the raw, unedited data they gathered to back up their analysis. I don't want to see or hear about any work unless the process used to compile it was completely transparent. If the data really shows what their analysis claims it shows, then they will have no problem releasing two things: the paper explaining the analysis, and any data (including unused data) that was gathered to investigate the claim. Given the increasing sophistication of mathematical approaches for extracting conclusions from data, any claim presented without data should be considered untrustworthy and quite possibly propaganda designed to obscure rather than clarify the underlying truth. Papers without data should not be considered 'scientific works'. Science is about discovering the truth, not about advancing one's career.
We all own the right to be different. We are all unique and should value this. Diversity is a key strength of our species, so we shouldn't all be alike, think alike or follow blindly. Any person, organization or process that attempts to 'clean up our differences' is not acting in our best interests. They are violating our fundamental rights to be different and to remain that way forever. A homogeneous world is just one sad shade of grey; we know this, and we all need to incorporate it into our philosophies of getting along together. Different is good, even if it can be annoying at times.
That's it for now, but I'm sure that as the next wave of madness hits us I'll figure out some other basic tenets of existence.
Monday, January 13, 2014
Controlling Complexity
"Make everything as simple as possible, but not simpler."
Albert Einstein
Within a given context, every object or process has a certain amount of complexity. As Einstein said, there is a base level of complexity that cannot be circumvented, but there are at least two types of complexity: inherent and artificial. There are many other names for these, and many other ways to decompose complexity into subparts, but this simple breakdown highlights an important property of complexity: under specific circumstances, many complex things can be made simpler.
Simplification can occur for many reasons, but most commonly it comes from removing artificial complexity. That is, the complexity that gets piled on top for reasons like misunderstandings, short-cuts, disorganization, self-interest and lack of understanding. Note that all of these are directly attributable to people, and with that we can quite easily define 'inherent' complexity as the lower limit that is bounded by our physical world, in the sense that Einstein really meant in his quote. Also note that I started the first sentence by referring to context. By this I mean a combination of spatial and temporal context. Thus, things can get simpler because we have learned more about them over time, or because we are choosing to tighten the boundaries of the problem to avoid issues within the larger context. The latter, however, can be problematic if done under the wrong conditions.
For reducing complexity there is also the possibility of simplification by encapsulation, that is, some part of the whole is hidden within a black box. The context within the box is obviously simpler, but the box itself adds something to the larger complexity. This works to some degree, but boxes can only be piled so high before the pile itself becomes too complex.
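As a rough sketch of that idea (the class and data here are invented purely for illustration, not taken from any real system), the messy details of parsing several date formats can be hidden behind one small black box, so callers only ever deal with a single simple call:

    from datetime import datetime

    class DateParser:
        # Illustrative black box: callers never see the format list or the retry loop.
        _FORMATS = ["%Y-%m-%d", "%d/%m/%Y", "%b %d, %Y"]

        def parse(self, text):
            for fmt in self._FORMATS:
                try:
                    return datetime.strptime(text, fmt)
                except ValueError:
                    continue
            raise ValueError("unrecognized date: " + text)

    # The caller's context stays simple; the complexity now lives inside the box.
    parser = DateParser()
    print(parser.parse("Jan 13, 2014"))

The caller's world gets simpler, but the box is one more piece that the overall system has to account for, which is why the stacking only goes so far.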
Often people attempt to simplify by reducing the context, essentially "wearing blinders", but they don't follow through with the encapsulation. In that case, it is extremely unlikely that any underlying changes will actually simplify things; instead they spawn unexpected side effects, which themselves are just added artificial complexity. This often goes by the name 'over simplifying', but that is a misnomer: while the change within the narrowed context may look like a 'simplification', overall it isn't one.
Within this description we can also add abstraction as a means of simplifying things. In general an abstraction is really just a larger pattern or relationship manifested over a larger space of objects or processes, but its ability to help comes from the fact that it organizes the things underneath it. Organization, and sometimes categorization, relates similar things together by their properties, so exploiting these relations reduces the complexity of dealing with the individual parts. Abstraction, though, has its limits in that it acts much like a bell curve. Some abstraction reduces complexity, increasing to a maximum point, then falling off again because the abstractions become too general to be applied for organization. Still, a powerful abstraction, at that maximal point, can cut complexity by orders of magnitude, which is far more powerful than any other technique for controlling complexity. It isn't free, however, in that considerably fewer people can deal with or understand strong abstractions. That leaves them subject to being misunderstood and thus becoming a generator of artificial complexity.
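As a small sketch of that organizing effect (the functions and data are hypothetical), a handful of near-identical special cases can be collapsed under one abstraction, leaving a single piece of mechanics to understand instead of several:

    # Illustrative only; the field names and numbers are made up.
    # Without the abstraction: one function per report, each repeating the same loop.
    def total_prices(orders):
        return sum(o["price"] for o in orders)

    def total_quantities(orders):
        return sum(o["quantity"] for o in orders)

    # With the abstraction: one general mechanism organizes all of the cases.
    def total(orders, field):
        return sum(o[field] for o in orders)

    orders = [{"price": 9.5, "quantity": 2}, {"price": 3.0, "quantity": 5}]
    print(total(orders, "price"), total(orders, "quantity"))

Pushed much further, though, a fully generic aggregator over arbitrary fields, filters and grouping rules would be harder for most readers to follow than the two plain functions it replaced, which is the falling side of the curve.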
There are many ways to reduce or control complexity, but there are many more ways for people to introduce artificial complexity. It is this imbalance that is driving our modern age to the brink of serious trouble. All too often people cry "simplification" while actually making things worse, and it isn't helped by living in an age where the ability to spin the facts is valued far more than the ability to get things done well. Quantity and hype constantly trump quality and achievement.
Thursday, January 2, 2014
The Quality of Code
One of the trickier issues in programming is deciding whether or not a program is well-written. Personally, I believe that the overall quality of the software is heavily affected by its internal quality. This is because most actively used software is in continuous development; there is always more that can be done to improve it, so the project never really stops. To keep that momentum on a reasonable track, the underlying code needs to be both readable and extendable. These form the two foundations of code quality.
Readability is simple in its essence, but notoriously difficult to achieve in practice. Mostly this is because programming languages support a huge variance in style. Two different programmers can use the same language in very different ways and still get good results. This capacity opens the door to each programmer coding in their own unique style, an indirect way of signing their work. Unfortunately, a code base made up of four different styles is, by definition, four times harder to read: you have to keep adjusting your understanding when switching between sections written in different styles. Getting multiple programmers to align on nearly identical styles is incredibly hard, because they don't like having constraints, there are deadlines, and most programmers won't read other programmers' code. Style issues should really be settled before any development begins, and any new programmers should learn and follow the stylistic rules already laid down. When that happens well enough, the code quality increases.
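As a tiny illustration (both snippets are invented for this example), here is the same trivial routine written in two different personal styles; neither is wrong, but a code base that mixes them forces the reader to keep switching gears:

    # Illustrative example; the data and both functions are made up.
    from collections import namedtuple

    User = namedtuple("User", ["name", "active"])
    users = [User("ana", True), User("bob", False)]

    # Style one: compact and expression-oriented.
    def active_names(users):
        return [u.name for u in users if u.active]

    # Style two: explicit and statement-oriented.
    def get_active_user_names(user_list):
        result = []
        for user in user_list:
            if user.active:
                result.append(user.name)
        return result

    print(active_names(users), get_active_user_names(users))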
To get around reading others' code, many programmers will attempt to extend existing code by doing what Tracy Kidder described in his book "The Soul of a New Machine" as just attaching a bag on the side. Essentially, instead of extending, refactoring or integrating, they just write some external clump of code and try to glue it to the side of the existing system. This results in there effectively being two different ways of handling the same underlying mechanics, again doubling any new work needed to extend the system. Done enough, this degenerates the architecture into a hopeless 'ball of mud', eventually killing any ability to extend the system further. Many programmers justify this by stating that it is faster, but that speed comes at the cost of gradually choking off any further extensions.
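A minimal sketch of the difference, with entirely hypothetical names: the existing system already has one place that knows how a record is persisted, and the 'bag on the side' quietly duplicates that mechanics instead of extending it:

    # Illustrative only; none of these functions come from a real system.
    import json
    import time

    # Existing mechanics: the one place that knows how records are persisted.
    def save_record(record, path):
        with open(path, "w") as f:
            json.dump(record, f)

    # The bag on the side: a second, slightly different copy of the same mechanics,
    # glued on to add a timestamp without touching the original code.
    def save_record_with_timestamp(record, path):
        record = dict(record)
        record["saved_at"] = time.time()
        with open(path, "w") as f:
            json.dump(record, f)  # the persistence rules now live in two places

    # The integrated extension: the single existing path grows the new behaviour.
    def save_record_v2(record, path, timestamp=False):
        record = dict(record)
        if timestamp:
            record["saved_at"] = time.time()
        with open(path, "w") as f:
            json.dump(record, f)

The bolt-on version is quicker to write today, but every future change to how records are saved now has to be made twice.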
Both multiple styles and bad extensions are very obvious if you read through the code. In this way, if you read a lot of code, it is fairly obvious whether the system is well-written or not. If it's fairly consistent and the mechanics of the system are all encapsulated together, it probably won't be hard to read it and then extend its functionality. If, on the other hand, it looks like it was tossed together by a bunch of competing programmers with little structure or organization, then making any changes will probably be long and painful, and will require boatloads of testing to validate them. Given lots of experience with different systems, experienced programmers can often loosely rank a code base on a scale of 1 to 10, with the obvious caveat that any ranking from a programmer who hates reading others' code will be erratic.
An important side effect of achieving good quality is that although the project starts slower, it maintains a consistent pace of development throughout its lifetime, instead of slowing down over time. This opens the door to keeping a metric on long-term development that mirrors the underlying quality. If the amount of code getting into the final production state is rapidly decreasing, one of the possible causes is declining quality (there are several other causes to consider as well).
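A rough sketch of such a metric, assuming you already track how much code reaches production each month (the numbers below are made up for illustration): a healthy project holds a roughly steady pace, while a steep downward trend is a prompt to look at quality among the other possible causes:

    # Illustrative data; in practice these totals would come from your own release history.
    monthly_lines_to_production = {
        "2013-07": 4200, "2013-08": 3900, "2013-09": 4100,
        "2013-10": 2800, "2013-11": 1900, "2013-12": 1100,
    }

    months = sorted(monthly_lines_to_production)
    first = monthly_lines_to_production[months[0]]
    last = monthly_lines_to_production[months[-1]]

    # Crude trend check: has throughput fallen by more than half over the window?
    if last < first / 2:
        print("Pace is dropping sharply; declining quality is one possible cause.")
    else:
        print("Pace looks roughly steady.")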