A common problem in all code is juggling a lot of data.
If you are going to make your code base industrial-strength, one key part of that is to move any and all static data outside of the code.
That includes any strings, particularly since hardcoded strings are the bane of internationalization.
That also includes flags, counters, etc.
A really strong piece of code has no constant declarations. Not for logging, not for configuration, not even for user options. It still needs those constants, but they come from elsewhere.
This goal is really hard to achieve. But where it has been mostly achieved, you usually see the code lasting a lot longer in use.
What this gives you is a boatload of configuration data, as well as the domain data, user data, and system data. Lots of data.
We know how to handle domain data. You put it into a reliable data store, sometimes an RDBMS, or maybe NoSQL technology. We know that preventing it from being redundant is crucial.
But we want the same thing for the other data types too.
It’s just that while we may want an explicit map of all of the possible configuration parameters, in most mediums (a file, the environment, or the command line) we usually only want to amend a partial set. Maybe the default for 10,000 parameters is fine, but on a specific machine we need to change two or three. This changes how we see and deal with configuration data.
What we should do instead is take the data model for the system to be absolutely everything that is data. Treat all data the same, always.
Then we know that we can get it all from a bunch of different ‘sources’. All we have to do is establish a precedence and then merge on top.
For example, we have 100 weird network parameters that get shoved into calls. We put a default version of each parameter in a file. Then when we load that file, we go into the environment and see if there are any overriding changes, and then we look at the command line and see if there are any more overriding changes. We keep some type of hash holding this merged result, then as we need the values, we simply do a hash lookup to get our hands on the final value.
This means that a specific piece of code only references a value once, when it is needed, and that there are multiple loaders that write over each other in some order of precedence. With this in place it is easy.
Load from file, load on top from env, load on top from cli. In some cases, we may want to load from a db too (there are fun production reasons why we might want this as the very last step).
We can validate the code because it uses a small number of parameters, each in the place it is needed. We can validate the default parameters. We can write code to scream if an env var or cli param exists but there are no defaults. And so forth. Well-structured, easy to understand, easy to test, and fairly clean. All good attributes for code.
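As a minimal sketch of that layering (assuming JSON defaults in a hypothetical defaults.json, an APP_ prefix for environment overrides, and a made-up connect_timeout parameter), the loaders might look something like this:

```python
import argparse
import json
import os

def load_defaults(path):
    # The complete map of parameters lives in one file; everything else only amends it.
    with open(path) as f:
        return json.load(f)

def overlay_env(config, prefix="APP_"):
    # Overlay environment variables, screaming if one has no matching default.
    for name, value in os.environ.items():
        if name.startswith(prefix):
            key = name[len(prefix):].lower()
            if key not in config:
                raise KeyError(f"unknown override in environment: {name}")
            config[key] = value
    return config

def overlay_cli(config, argv=None):
    # Overlay command-line flags; anything without a default is rejected by the parser.
    parser = argparse.ArgumentParser()
    for key in config:
        parser.add_argument("--" + key.replace("_", "-"))
    for key, value in vars(parser.parse_args(argv)).items():
        if value is not None:
            config[key] = value
    return config

# Precedence: file first, then the environment on top, then the command line on top of that.
config = overlay_cli(overlay_env(load_defaults("defaults.json")))

# Anywhere else in the code, the final value is a single hash lookup.
timeout = config["connect_timeout"]  # a hypothetical parameter from the defaults file
```

The defaults stay explicit and complete, while the environment and command line can only amend parameters that already exist.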
The fun part though is that we can get crazier. As I said, this applies to all ‘data’ in the system, so we can span it out over all other data sources, and tweak it to handle instances as well as values. In that sense, you can do something fun like use a cli command with args that fill in the raw domain data, so that you can do some pretty advanced testing. The possibilities are endless, but the code is still sane.
More usefully, you can get some large-scale base domain data from one data source, then amend it with even more data from a bunch of other data sources. If you put checks on validity and drop the garbage, the system could merge the data by precedence from a whole range of different places. Start with the weakest sources first, and use very strong validation. Then loosen up the validation and pile other sources on top. You’ll lose a bit of the determinism overall, but offset that with robustness. You could do it live, or it could be a constant background refresh. That’s fun when you have huge fragmented data issues.
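As a rough sketch of that merging (the record shape, source names, and validation rule are all made up for illustration):

```python
def merge_by_precedence(sources, validate):
    # Sources are ordered weakest first; later, stronger sources overwrite earlier ones.
    merged = {}
    for source_name, records in sources:
        for record in records:
            if not validate(source_name, record):
                continue  # drop the garbage instead of merging it
            merged[record["id"]] = {**merged.get(record["id"], {}), **record}
    return merged

def validate(source_name, record):
    # Strong checks for the weak sources; looser rules could be layered in for trusted feeds.
    return "id" in record and bool(record.get("name"))

sources = [
    ("bulk_import",  [{"id": 1, "name": "ACME"}, {"id": 2, "name": ""}]),      # weakest
    ("partner_feed", [{"id": 1, "name": "ACME Corp", "region": "EU"}]),        # stronger
]

catalog = merge_by_precedence(sources, validate)
# {1: {'id': 1, 'name': 'ACME Corp', 'region': 'EU'}}  -- record 2 was dropped as garbage
```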
Friday, December 20, 2024
Thursday, December 12, 2024
In Full
One of the key goals of writing good software is to minimize the number of lines of code.
This is important because if you only needed 30K lines, but ended up writing 300K, it was a lot of work that is essentially wasted. It’s 10x the amount of code, 10x the amount of testing, and 10x the bugs. As well, you have to search through 10x lines to understand or find issues.
So, if you can get away with 30K instead of 300K, is it always worth doing that?
Well, almost always.
30K of really cryptic code that uses every syntactic trick in the book to cleverly compress the code down to almost nothing is generally unreadable. It is too small now.
You have to revisit the code all of the time.
First to make sure it is reusable, but then later to make sure it is doing the right things in the right way. Reading code is an important part of keeping the entire software development project manageable.
So 30K of really compressed, unreadable stuff, in that sense is no better than 300K of really long and redundant stuff. It’s just a similar problem but in the opposite direction.
Thus, while we want to shrink down the representation of the code to its minimum, we oddly want to expand out the expression of that code to its maximum. It may seem like a contradiction, but it isn’t.
What the code does is not the same as the way you express it. You want it to do the minimum, but you usually want the expression to be maximized.
You could, for instance, hack 10 functions into a complex calling chain, something like A(B(C(D(E(F(G(H(I(K(args)))))))))) or A().B().C().D().E().F().G().H().I().K(args), so it fits on a single line.
That’s nuts, and it would severely impair the ability of anyone to figure out what either line of code does. So, instead, you put the A..K function calls on individual lines. Call them one by one. Ultimately it does the same thing, but it eats up 10x lines to get there. Which is not just okay, it is actually what you want to do.
It’s an unrealistic example since you really shouldn’t end up with 10 individual function calls. Normally some of them should be encapsulated below the others.
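To make the contrast concrete, here is a small sketch with hypothetical, self-describing functions and a made-up names.txt input file standing in for the single letters:

```python
def load(path):
    return open(path).read()

def parse(text):
    return text.splitlines()

def normalize(rows):
    return [row.strip().lower() for row in rows]

def dedupe(rows):
    return list(dict.fromkeys(rows))

def summarize(rows):
    return {"count": len(rows), "rows": rows}

def render(summary):
    return f"{summary['count']} unique rows"

path = "names.txt"  # made-up input file

# Crammed into a single chain, the intent is buried:
report = render(summarize(dedupe(normalize(parse(load(path))))))

# Spelled out line by line, each step is visible and easy to inspect or debug:
raw = load(path)
rows = parse(raw)
rows = normalize(rows)
rows = dedupe(rows)
summary = summarize(rows)
report = render(summary)
```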
Encapsulation hides stuff, but if the hidden parts are really just sub-parts of the parent, then it is a very good form of hiding. If you understand what the parent needs to do, then you would assume that the children should be there.
Also, the function names should be self-describing, not just letters of the alphabet.
Still, you often see coding nearly as bad as the above, when it should have been written out fully.
Getting back to the point, if you had 60K of code that spelled out everything in full, instead of the 30K of cryptic code, or the 300K of redundant code, then you have probably reached the best code possible. Not too big, but also not too small. Lots of languages provide plenty of syntactic sugar, but you really only want to use those features to make the code more readable, not just smaller.
Friday, December 6, 2024
The Right Thing
Long ago, rather jokingly, someone gave me specific rules for working; the most interesting of them was “Do the Right Thing”.
In some cases with software development, the right thing is tricky. Is it the right thing for you? For the project? For using a specific technology?
In other cases though, it is fairly obvious.
For example, even if a given programming language lets you get crazy with spacing, you do not. You format the source code properly, each and every time. The formatting of any source file should be clean.
In one sense, it doesn’t bug the computer if the spaces are messed up. But source code isn’t just for a computer. You’ll probably have to reread it a lot, and if the code is worth using, aka well written, lots of other people will have to read it as well. We are not concerned with the time it takes to type in the code or edit it carefully to fix any spacing issues. We are concerned with the friction that bad spacing adds when humans are reading the code. The right thing to do is not add unnecessary friction.
That ‘do extra now, for a benefit later’ principle is quite common when building software. Yet we see it ignored far too often.
One place it applies strongly, but is not often discussed is with hacks.
The right thing to do if you encounter an ugly hack below you is not to ignore it. If you let it percolate upwards, the problem with it will continue. And continue.
Instead, if you find something like that, you want to encapsulate it in the lowest possible place to keep it from littering your code. Don’t embrace the ugliness, don’t embrace the mess. Wrap it and contain it. Make the wrapper act the way the underlying code should have been constructed.
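As a small sketch of what that wrapping can look like (legacy_count_widgets is a made-up stand-in for the inherited hack):

```python
def legacy_count_widgets(csv_ids):
    # The inherited hack: wants a comma-separated string, returns -1 instead of zero.
    if not csv_ids:
        return -1
    return len(csv_ids.split(","))

def count_widgets(ids):
    # The wrapper behaves the way the code should have been built in the first place:
    # it takes a list, returns a plain count, and never leaks the magic -1 upward.
    if not ids:
        return 0
    return legacy_count_widgets(",".join(str(i) for i in ids))

# Everything above the wrapper stays clean; only this one spot knows about the mess.
print(count_widgets([101, 102, 103]))  # 3
print(count_widgets([]))               # 0
```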
The reason this is right is because of complexity. If everybody lets everybody else’s messes propagate everywhere, the artificial complexity in the final effort will be outrageous. You haven’t built code, you just glued together a bunch of messes into a bigger mess. The value of what you wrote is negligible.
A lot of people like the ‘not my problem’ approach. Or the ‘do the laziest thing possible’ one. But the value you are building is essentially crippled with those approaches. You might get to the end quicker, but you didn’t really do what was asked. If they asked me to build a system, it is inherent that the system should be at least good enough. If what I deliver is a huge mess, then I failed on that basic implicit requirement. I did not do my job properly.
So the right thing is to fix the foundational messes by properly encapsulating them so that what is built on top is 1000x more workable. Encapsulate the mess, don’t percolate it.
If you go back and look at much older technology, you definitely see periods where doing the right thing was far more common. And not only does it show, but we also depend on that type of code far more than we depend on the flakey stuff. That’s why the length of time your code is in production is often a reasonable proxy to its quality, which is usually a manifestation of the ability of some programmers to find really clean ways of implementing complicated problems. It is all related.
Thursday, November 28, 2024
Portability Layer
Way back, when I was first coding, if we had to write code to run on a wide variety of different “platforms”, the first thing we would always do was write a portability layer.
The idea was that above that layer, the code is simple and clean. It is the layer that deals with all of the strange stuff on the different platforms and different configurations.
In that sense, it is a separation of the code itself from the ugliness of making it run everywhere.
Sometime later, the practice fell out of use. Partly because there were fewer operating systems, so most people were just opting to rewrite their stuff for each one. But also because there was a plethora of attempts like the JVM to write universal machines to hide portability issues. They were meant to be that portability layer.
Portability is oddly related to user preferences and configurations. You can see both as a whackload of conditional code triggered by static or nearly static data. A platform or a session are just tightened contexts.
And that is the essence of a portability layer. It is there to hide all of the ugly, messy, nested conditional code that gets triggered from above.
You can’t let the operating system, specific machine configs, or user preferences muddle the code. The code needs to do one thing. Only one thing. Part of that one thing might be a little different below, but above the layer, it is all the same. So the layer hides that.
There may be extra steps for certain users or hardware. The layer hides that too.
The presentation may be different for different users, or even the language. The layer should hide that as well.
It would be better to call this an ugliness layer than a portability layer. It separates the high-level actions of the code from the implementation nightmares below it. And it should be driven by configuration data, which is just the same as any other data in the system. Realistically part of the database, part of the data model.
In a very abstract sense, some code is triggered with some configuration data, which selects the alternative code chains that combine to do the work. At the top level, you can lay out the full logic, while underneath the portability layer anything that isn’t required for the current context is a ‘no-op’, basically an empty function. When structured that way, it’s a lot easier to verify and debug.
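A bare-bones sketch of that shape (the layer, its operations, and the platform split are all invented for illustration):

```python
import sys

class PortabilityLayer:
    """Every operation the high-level code may ask for.
    Anything a given context does not need stays a no-op."""

    def config_dir(self):
        raise NotImplementedError

    def set_console_color(self, code):
        pass  # no-op by default

    def notify_user(self, message):
        pass  # no-op by default

class PosixLayer(PortabilityLayer):
    def config_dir(self):
        return "~/.config/myapp"

    def set_console_color(self, code):
        sys.stdout.write(f"\033[{code}m")

class WindowsLayer(PortabilityLayer):
    def config_dir(self):
        return r"%APPDATA%\myapp"

    def notify_user(self, message):
        print(f"[myapp] {message}")

def make_layer():
    # Chosen once, driven by the platform (or just as easily by configuration data).
    return WindowsLayer() if sys.platform.startswith("win") else PosixLayer()

# Above the layer, the logic is laid out in full and never branches on platform.
layer = make_layer()
layer.set_console_color(32)
print(f"settings live in {layer.config_dir()}")
layer.notify_user("ready")
```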
The high level is a large set of polymorphic instructions; the underlying level is specific to the context.
The system is a big set of functions that implement various features. Each function is high-level, and these are mapped to the lower-level stuff that is specific to the machine, the user, or whatever other sub-context applies.
When code is structured that way, it is easy to figure out what it is doing, and a whole lot less messy. Code that is easy to figure out is also easy to test and debug. Easy code is higher quality code. The encapsulated madness below is where you’ll hit the bugs, and it is harder to test. But if that is clearly separated, then it is just a matter of reproducing the problem and noticing why it was ineffective.
We should not have forgotten about portability layers; they were definitely a far better decomposition for dealing with overly complex runtime issues.
Friday, November 22, 2024
Wobble
As a software development project spins out of control, it wobbles all over the place. The real issues get replaced by fabricated ones.
This is evident in its political discussions.
If all of the issues are grounded, then they are just work that needs to be accomplished. But if they have become disjoint from the underlying technical or domain constraints, then the project is wobbling.
Because the issues are no longer bound to reality, it does not matter how they are handled. They are not the problem, the wobble that caused them is the problem.
A wobbling project quickly gets messy and disorganized. That lowers the quality of both the proposed solution and its implementation. You end up with a lot of badly fitting technology that requires significant handholding just to get through basic functionality. Its fragility defeats its utility.
Wobbles are caused by all sorts of technical, domain, and control issues. These issues are disconnected from the work of building the solution itself. They are irrational and sometimes quite emotional.
The best thing you can do is recognize them as a wobble and work carefully to not let them interfere with the actual development effort. That can be tricky.
If some technology is irrationally banned even though it is suitable for the current development, you end up just having to find an acceptable replacement that doesn’t force a full re-architecture.
If the domain features and functionality are in question, you usually need to go right back to the users themselves and figure out directly from them what parts they actually need, and what parts they will not use. Skip the middle players, they are the source of the wobble.
For process issues, sometimes you need the illusion of following a useless makework process, but ensure that that effort does not steal away too much focus from the real underlying process. Nontechnical people love to throw all sorts of hoops in your way for you to jump through. You just have to make sure it is not too disruptive.
The biggest cause of most wobbles is a lack of real development experience, from the developers, management, stakeholders, or any others with influence. Software development often seems conceptually easy, but the size, scale, and sheer amount of effort involved in even a medium-sized effort make it somewhat non-intuitive. Good experience is necessary to detect and avoid wobbling.
Thursday, November 14, 2024
Clarity of Expression
For software programs, there are three conflicting goals.
First, the programmer has to express their understanding of the problems they are trying to solve.
Next, the computer has to understand the programmer’s instructions and do the right thing.
Third, other programmers will have to read the code and figure out what was intended to verify, change, or extend it. Code is never finished.
Some languages add all sorts of syntactic sugar and implicit mechanics in order to make it faster for programmers to express themselves.
Those features can trick the original programmer into not fully understanding why specific behaviors are occurring. Things seem to work, they don’t really know why.
It can lead to ambiguities in the way the code executes.
But more importantly, it convolutes the code so that other programmers can’t easily understand what was intended.
It moves the language closer to what is known as ‘write once, read never’ code, where it is impossible to see the full scope of expression, so you pretty much have to just run it to see what it does and then decide if you like that based on very limited experience.
As languages mature, people often add more and more syntactic sugar or magical implicit handling. They believe they are helping programmers write code faster, but really they're just making it all too complicated and unreadable. Those features are negative value.
For a really simple language, it is often important to utilize all aspects of it.
For a mature language, the opposite is true: you just want to pick a reasonable subset that best expresses your intent. Overusing all of the later language features just impairs readability. It’s a balance; you need to use just enough to make your expression crystal clear. The thing you are doing is not trying to save your own time, but rather to get the meaning and intent to shine through. Code that is never read tends to get replaced quickly.
Thursday, November 7, 2024
The Value of Technology
It is pretty simple.
For an individual, technologies only have value when they somehow make life better.
For a society, the same is also true. A valuable technology improves society.
Using this as a baseline, we can see that a technology that sometimes works well but fails very badly at other times is misleading. For some people, the technology works, so they see it as valuable. However, for others, when the technology fails them, they become frustrated. Some of those failures are catastrophic. For them, it is a negative value.
To really evaluate a technology properly, we need to look at those failures. They define more of its value than the successes.
For example, someone builds an inventory system. Great, now people can quickly see what is on hand. But the digital information in that system drifts from the actual inventory. Sometimes it says stuff is there when it is not. Sometimes it is the opposite. The value for such a system is its reliability, which is its ability to not drift too far from reality. If the drift is significant, the system is unreliable, it has very little value then, in fact, it might be hugely negative. Just smoke and mirrors that misleads people.
We see that with other categories of systems, particularly communications. If a system delivers accurate information, it is valuable. But if it is susceptible to being subverted to deliver misinformation, spin, or propaganda, then its value is low, or even worse it can be a literal threat to a society. It can be controlled by people with a negative agenda.
If people mistakenly trust that system, yet it delivers information that causes them to make tragic mistakes, the value is extraordinarily negative. It is a bad system. Very bad. They blindly trust the information they are getting, but it is actually not correct and controlled by other people who do not have their best interests at heart. It is a tool for mass exploitation, not for enlightenment.
We can verify that a technology is negative by simply looking at it retroactively. If the trajectory of something within its sphere of influence has gone off in a bad direction, and some technology was instrumental in changing that course, then that technology clearly has negative value. So individual or collective negative acts taint the mediums that influenced them.
If a group of people do something so obviously self-destructive and were driven to do so by use of some questionable technology, it's clear that that technology is bad. Very bad. There is something seriously wrong with it, if it was effective at promoting bad choices.
What we need to do is get better at evaluating technologies and mitigating the effects of any negative technologies on our societies. If we don’t, they will eventually be used to bring about our ruin. We went through this with machines too. Initially, they amplified our labor, but then they also started to amplify destruction. We still have a lot of confusion about the difference.
Thursday, October 31, 2024
Full Speed
I’ve never liked coding in a rush. You just don’t get enough time to put the right parts nicely into the right places, and it really hurts the quality of the system. But as rushing gained in popularity, I often had no choice.
One of the keys to not letting it get totally out of control is to schedule the work dynamically.
When non-technical people interfere, they have a bad tendency to try to serialize the work and then arbitrarily schedule it. They like to dictate what gets done and when. That’s pretty bad, as the most optimal path is not intuitive.
The first point is to start every development cycle by cleaning up the mistakes of the past. If you were diligent in keeping a list as it was going wrong, it can be very constrained work. Essentially you pick a time period, a week or two, and then get as much done in it as you can. To keep the workload smooth, you don’t take a break after a release, you just switch immediately to cleanup mode and keep the pace steady. Steady is good.
Then I always want to tackle the hard or low-level stuff first. Hard stuff because it is tricky to estimate and prone to surprises; low-level stuff because so much is built on top, it needs to get there first.
But that type of work is often a lot of cognitive effort. Due to life and burnout, some days it is just too much cognitive effort. So the trick to rushing is to pace yourself; some days you wake up and want to do easy stuff. Some days you feel you can tackle the hard stuff. You don’t know until the day progresses. You always keep the option to do both.
I usually keep two lists as I go, an easy and a hard one, so I can have off days where I just focus on mindless stuff. I switch erratically.
So long as I keep those two lists up to date, and the cleanup one as well, it is organized enough to not spin out of control.
This makes it hard to say what I’ll get done and when. It makes it harder to work with others. But it is me at full speed, cranking out as much as I can, as fast as I can, with the best quality achievable under the circumstances. It’s the price of speed.
I always say that we should never code like this; you’re never really going to get anything other than barely acceptable quality this way. But it has become so rare these days to get anything even close to enough time to do things reasonably.
I miss the early days in my career when we actually were allowed to take our time and were actively encouraged to get it right. The stuff we wrote then was way better than the stuff we grind out now. It really shows.
Thursday, October 24, 2024
The Beauty of Technology
Long ago, when the book Beautiful Code was first published, I read a few blog posts where the authors stated that they had never seen beautiful code. Everything they worked on was always grungy. I felt sorry for them.
Technologies can be beautiful; it is rare in modern times, but still possible.
I’ll broadly define ‘technology’ as anything that allows humans to manipulate the world around us. It’s a very wide net.
Probably the first such occurrence was mastering fire. We learned to use it to purify food and for warmth and security. Clothes are another early example, they reduce the effect of the weather on us.
Everything we do to alter or control the natural world is technology. Farming, animal domestication, and tools. It has obviously had a great effect on us; we have reached our highest-ever population, which would not have been possible before. We’ve spanned the globe, surviving in even its darkest corners. So, as far as we are concerned, technology is good; it has helped us immensely.
But as well as making our lives easier, it is also the source of a lot of misery. We’ve become proficient at war, for example. We have a long history of inventing weapons that kill ever more effectively. Without knives, swords, guns, cannons, tanks, planes, ships, etc., wars would be far less lethal and have less collateral damage. Technologies make it easy to rain down destruction on large tracts of land, killing all sorts of innocents.
So technologies are both good and bad. It depends on how we choose to use them.
Within that context, technologies can be refined or crude. You might have a finely crafted hammer with a clever design that makes it usable for all sorts of applications. Or a big stick that can be used to beat other people. We can build intricate machines like cars that transport us wherever we want to go at speeds far better than walking, or planes that can carry hundreds of people quickly to the other side of the planet.
When we match the complexities of our solutions to the difficulties of the problems we want to solve, we get good fitting technologies. It might require a lot of effort to manufacture a good winter jacket for example, but it holds in the heat and lasts for a long time. All of the hundreds of years of effort in materials, textiles, and production can produce a high-quality coat that can last for decades. You can rely on your jacket for winter after winter, knowing that it will keep the cold at bay.
And it’s that trust and reliability, along with its underlying sophistication, that make it beautiful. You know that a lot of care went into the design and construction and that any unintentional side effects are negligible or at least mostly mitigated. You have some faith, for example, that the jacket is not lined with asbestos and that the price of warmth is not a sizable risk of cancer.
You can trust that such obvious flaws were avoided out of concern and good craftsmanship. It doesn’t mean that there is a guarantee that the jacket is entirely not carcinogenic, just that it is not intentional or even likely accidental. Everyone involved in the production cared enough to make sure that it was the best it could be given all of the things we know about the world.
So, rather obviously if some accountant figured out how to save 0.1% of the costs as extra profit by substituting in a dangerous or inferior material, that new jacket is not beautiful, it is an abomination. A trick to pretend to be something it is not.
In that sense beautiful references not only its design and its construction but also that everybody along the way in its creation did their part with integrity and care. Beauty is not just how it looks, but is ingrained into every aspect of what it is.
We do have many beautiful technologies around us, although since the popularity of planned obsolescence, we are getting tricked far more often. Still, there are these technologies that we trust and that really do help make our lives better. They look good both externally and internally. You have to admire their conception. People cared about doing a good job and it shows.
Beauty is an artifact of craftsmanship. It is a manifestation of all of the skill, knowledge, patience, and understanding that went into it. It is the sum of its parts and all of its parts are beautiful. The way it was assembled was beautiful, everything in and around it was carefully labored over and done as best as it can be imagined to have been done. Beauty in that sense is holistic. All around.
Beauty is not more expensive, just rarer. If you were tricked into thinking something is beautiful when it is not, you have already priced it in; the difference is an increase in profit.
People always want easy money, so beauty is fleeting. The effort that went into something in the past is torn apart by those looking for money, thus negative tradeoffs get made. Things are a little cheaper in the hopes that people don’t notice. Little substitutions here and there that quietly devalue the technology in order to squeeze more money from it. Tarnished beauty, a sort of monument to the beauty of the past, but no longer present. Driven by greed and ambition, counter to the skilled craftsmanship of earlier times.
Technologies can be beautiful, but that comes from the care and skill of the people who create them. Too often in the modern world, this is discouraged, as if just profit all by itself is enough of a justification. It is not. Nobody wants to surround themselves with crap or cheap untrustworthy stuff. Somehow we have forgotten that.
Thursday, October 17, 2024
Programming
Some programming is routine. You follow the general industry guidelines, get some libraries and a framework, and then whack out the code.
All you need to get it done correctly is a reasonably trained programmer and a bit of time. It is straightforward. Paint by numbers.
Some programming is close to impossible. The author has to be able to twist an incredible amount of detail and complexity around in their head in order to find one of the few tangible ways to implement it.
Difficult programming is a rare skill and takes a very long time to master. There is a lot of prerequisite knowledge needed and it takes a lot to hone those skills.
Some difficult programming is technical. It requires deep knowledge of computer science and mathematics.
Some difficult programming is domain-based, it requires deep knowledge of large parts of the given domain.
In either case, it requires both significant skill and lots of knowledge.
All application and system programming is a combination of the two: routine and difficult. The mix is different for every situation.
If you throw inexperienced programmers at a difficult task it will likely fail. They do not have the ability to succeed.
Mentorship is the best way that people learn to deal with difficult programming. Learning from failures takes too long, is too stressful, and requires humility. Lots of reading is important, too.
If it is difficult and you want it to work correctly, you need to get programmers who are used to coping with these types of difficulties. They are specialists.
Thursday, October 10, 2024
Avoidance
When writing software, the issues you try to avoid often come back to haunt you later.
A good example is error handling. Most people consider it a boolean; it works or does not. But it is more complicated than that. Sometimes you want the logic to stop and display an error, but sometimes you just want it to pause for a while and try again. Some errors are a full stop, and some are local. In a big distributed system, you also don’t want one component failure to domino into a lot of others, so the others should just wait.
This means that when there is an error, you need to think very clearly about how you will handle it. Some errors are immediate, and some are retries. Some are logged, others trigger diagnostics or backup plans. There may be other strategies too. You’d usually need some kind of map that binds specific errors to different types of handlers.
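As a rough sketch of that kind of map (the error classes, handlers, and retry policy here are all made up for illustration):

```python
import logging
import time

class TransientError(Exception):
    """A local failure worth retrying, e.g. a timeout on a flaky dependency."""

class FatalError(Exception):
    """A full stop, e.g. corrupt input with nothing sensible left to do."""

def handle_transient(err, operation):
    # Pause for a while and try again, instead of letting one hiccup domino outward.
    time.sleep(1.0)
    return operation()

def handle_fatal(err, operation):
    # Log it and stop; this one really is a full stop.
    logging.error("stopping: %s", err)
    raise err

def handle_missing(err, operation):
    # Log it, fall back to nothing, and keep going.
    logging.warning("missing value: %s", err)
    return None

# The map that binds specific errors to different types of handlers.
HANDLERS = {
    TransientError: handle_transient,
    FatalError: handle_fatal,
    KeyError: handle_missing,
}

def run(operation):
    try:
        return operation()
    except tuple(HANDLERS) as err:
        for error_type, handler in HANDLERS.items():
            if isinstance(err, error_type):
                return handler(err, operation)
```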
However, a lot of newer languages give you exceptions. Their purpose is that instead of worrying about how you will handle the error now, you throw an exception and figure it out later. That’s quite convenient for coding.
It’s just that when you’ve done that hundreds of times, and later never comes, the behavior of the code gets erratic, which people obviously report as a bug.
And if you try to fix it by just treating it as a boolean, you’ll miss the subtleties of proper error handling, so it will keep causing trouble. One simple bit of logic will never correctly cover all of the variations.
Exceptions are good if you have a strong theory about putting them in place and you are strict enough to always follow it. An example is to catch it low and handle it there, or let it percolate to the very top. That makes it easy later to double-check that all of the exceptions are doing what you want. It puts the handling consistently at the bottom.
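For instance, a tiny sketch of that discipline (the file names and functions are hypothetical): expected problems are caught right where they occur, while everything else percolates untouched to a single handler at the very top.

```python
import logging
import sys

def read_cache(path):
    # Handled low: a missing cache file is expected, so it is dealt with right here.
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        return ""  # an empty cache is a perfectly good answer

def rebuild_index(path):
    # Nothing handled here: anything unexpected percolates upward untouched.
    return sorted(read_cache(path).splitlines())

def main():
    try:
        print(rebuild_index("cache.txt"))
    except Exception:
        # The single top-level handler: one consistent place to log and fail loudly.
        logging.exception("unrecoverable failure")
        sys.exit(1)

if __name__ == "__main__":
    main()
```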
But instead, many people litter the code indiscriminately with exceptions, which guarantees that the behavior is unpredictable. Thus, lots of bugs.
So, language features like exceptions let you avoid thinking about error handling, but you’ll pay for them later if you haven’t backed that convenience with enough discipline to use them consistently.
Another example is data.
All code depends on the data it is handling underneath. If that data is not modeled correctly -- proper structures and representations -- then it will need to be fixed at some point. Which means all of the code that relies on it needs to be fixed too.
A lot of people don’t want to dig into the data and understand it; instead, they start making very loose assumptions about it.
Data is funny in that in some cases it can be extraordinarily complicated in the real world. Any digital artifact for it then will have to be complicated. So, people ignore that, assume the data is far simpler than it really is, and end up dealing with endless scope creep.
Incorrectly modeled data diminishes any real value from the code, usually producing a whole collection of bugs. It is the foundation, which sets the base quality for everything.
If you understand the data in great depth then you can model it appropriately in the beginning and the code that sits on top of it will be far more stable. If you defer that to others, they probably won’t catch on to all of the issues, so they over-simplify it. Later, when these deficiencies materialize, a great deal of the code built on top will need to be redone, thus wasting a massive amount of time. Even trying to minimize the code changes through clever hacks will just amplify the problems. Unless you solve them, these types of problems always get worse, not better.
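As a small, purely hypothetical illustration of what over-simplifying the data looks like in code -- the domain details are made up, only the shape of the two models matters:

```python
from dataclasses import dataclass, field
from decimal import Decimal

# Oversimplified: one name string, one phone number, money as a float.
@dataclass
class NaiveCustomer:
    name: str
    phone: str
    balance: float                                # floats silently lose cents

# Closer to the real world: names have parts, phones repeat, money is exact.
@dataclass
class Customer:
    family_name: str
    given_names: list = field(default_factory=list)
    phones: dict = field(default_factory=dict)    # label -> number
    balance: Decimal = Decimal("0.00")
```

Every piece of code written against the first model has to be revisited once reality shows up.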
Performance is an example too.
The expression “Premature optimization is the root of all evil” is true. You should not spend a lot of time finely optimizing your code, until way later when it has settled down nicely and is not subject to a lot of changes. So, optimizations should always come last. Write it, edit it, test it, battle harden it, then optimize it.
But you can also deoptimize code. Deliberately make it more resource-intensive. For example, you can allocate a huge chunk of memory, only to use it to store a tiny amount of data. The size causes behavioral issues with the operating system; paging a lot of memory is expensive. By writing the code to only use the memory you need, you are not optimizing it, you are just being frugal.
There are lots of ways to express the same code so that it doesn’t waste resources but still maintains readability. These are the good habits of coding. They don’t take much extra effort and they are not optimizations. The code keeps the data in reasonable structures, and it does not do a lot of wasteful transformations. It only loops when it needs to, and it does not repeat throwaway work. Not only does the code run more efficiently, it is also more readable and far more understandable.
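A tiny sketch of the difference, with made-up records; the second version is not ‘optimized’, it just skips the throwaway passes:

```python
# Same result, with and without the throwaway passes; the records are made up.
def summarize_wasteful(records):
    copies = [dict(r) for r in records]             # pointless copy
    amounts = [r["amount"] for r in copies]         # second pass
    positives = [a for a in amounts if a > 0]       # third pass
    return sum(positives), len(positives)

def summarize_frugal(records):
    # One loop, no intermediate lists; just as readable.
    total, count = 0, 0
    for r in records:
        amount = r["amount"]
        if amount > 0:
            total += amount
            count += 1
    return total, count
```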
It does take some effort and coordination to do this. The development team should not blindly rewrite stuff everywhere, and you have to spend effort understanding what the existing representations are. This will make you think, learn, and slow you down a little in the beginning. Which is why it is so commonly avoided.
You see these mega-messes where most of the processing of the data is just to pass it between internal code boundaries. Pointless. One component models the data one way, and another wastes a lot of resources flipping it around for no real reason. That is a very common deoptimization, you see it everywhere. Had the programmers not avoided learning the other parts of the system, everything would have worked a whole lot better.
In software development, it is very often true that the things you avoid thinking about and digging into are the root causes of most of the bugs and drama. They usually contribute to the most serious bugs. Coming back later to correct these avoidances is often so expensive that it never gets done. Instead, the system hobbles along, getting bad patch after bad patch, until it sort of works and everybody puts in manual processes to counteract its deficiencies. I’ve seen systems that cost far more than manual processes and are way less reliable. On top of that, years later everybody gets frustrated with it, commissions a rewrite, and makes the exact same mistakes all over again.
Thursday, October 3, 2024
Time
Time is often a big problem for people when they make decisions.
Mostly, if you have to choose between possible options, you should weigh the pros and cons of each, carefully, then decide.
But time obscures that.
For instance, there might be some seriously bad consequences related to one of the options, but if they take place far enough into the future, people don’t want to think about them or weigh them into their decisions. They tend to live in the moment.
Later, when trouble arises, they tend to disassociate the current trouble from most of those past decisions. It is disconnected, so they don’t learn from the experiences.
Frequency also causes problems. If something happens once, it is very different than if it is a repeating event. You can accept less-than-perfect outcomes for one-off events, but the consequences tend to add up unexpectedly for repetitive events. What looks like an irritant becomes a major issue.
Tunnel vision is the worst of the problems. People are so focused on achieving one short-term objective that they set the stage to lose heavily in the future. The first objective works out okay, but then the long-term ones fall apart.
We see this in software development all of the time. The total work is significantly underestimated which becomes apparent during coding. The reasonable options are to move the dates back or reduce the work. But more often, everything gets horrifically rushed. That resulting drop in quality sends the technical debt spiraling out of control. What is ignored or done poorly usually comes back with a vengeance and costs at least 10x more effort, sometimes way above that. Weighted against the other choices, without any consideration for the future, rushing the work does not seem that bad of an option, which is why it is so common.
Time can be difficult. People have problems dealing with it, but ignoring it does not make the problems go away, it only intensifies them.
Thursday, September 26, 2024
Complexity
All “things” are distinguished by whether they can be reasonably decomposed into smaller things. The lines of decomposition are the similarities or differences. Not only do we need to break things down into their smallest parts, but we also need to understand all of the effects between the parts.
Some ‘thing’ is simple if it defies decomposition. It is as broken down as it can be. It gets more complex as the decompositions are more abundant. It is multidimensional relative to any chosen categorizations, so it isn’t easy to compare relative complexity. But it is easy to compare it back to something on the same set of axes that is simpler. One hundred things is marginally more complex than five things.
This is also true of “events”. A single specific event in and of itself is simple, particularly if it does not involve any ‘things’. Events get more complex as they are related to each other. Maybe just sequential or possibly cause and effect. A bunch of related events is more complex than any single one. Again it is multidimensional based on categorizations, but also as events can involve things, this makes it even more multidimensional.
For software, some type of problem exists in reality and we have decided that we can use software to form part of a solution to solve it. Most often, software is only ever a part of the solution, as the problem itself is always anchored in reality, there have to be physical bindings.
Users, for instance, are just digital collections of information that represent a proxy for one or more people who utilize the software. Without people interacting with, or being tracked by, the software, the concept of users is meaningless.
Since we have a loose relative measure for complexity, we can get some sense of the difference between any two software solutions. We can see that the decomposition for one may be considerably simpler than another, for example, although it gets murky when it crosses over trade-offs. Two nearly identical pieces of software may only really differ by some explicit trade-off, but as the trade-offs are anchored in reality, they would not share the same complexity. It would be relative to their operational usage, a larger context.
But if we have a given problem and we propose a solution that is vastly simpler than what the problem needs we can see that the solution is “oversimplified”. We know this because the full width of the solution does not fit the full width of the problem, so parts of the problem are left exposed. They may require other software tools or possibly physical tools in order to get the problem solved.
So, in that sense, the fit of any software is defined by its leftover unsolved aspects. These of course have their own complexity. If we sum up the complexity of the solution with that of these gaps, and any overlaps, that gives us the total complexity of the solution. In that case, we find that an oversimplified solution has a higher overall complexity than a properly fitting solution. Not individually, but overall.
We can see that the problem has an ‘intrinsic’ complexity. The partial fit of any solution must be at least as complex as the parts it covers. All fragments and redundancies have their own complexity, but we can place their existence in the category of ‘artificial’ complexity relative to any better-fitting solution.
We might see that in terms of a GUI that helps to keep track of nicely formatted text. If the ability to create, edit, and delete the text is part of the solution, but it lacks the ability to create, edit, and delete all of the different types of required formatting, then it would force the users to go to some outside tool to do that work. So, it’s ill-fitting. A second piece of software is required to work with it. The outside tools themselves might have an intrinsic complexity of their own, but relative to the problem we are looking at, having to learn and use them is just artificial complexity. Combined, that is significantly more complexity than if the embedded editing widget in the main software just allowed the user to properly manage the formatting.
Keep in mind that this would be different than the Unix philosophy of scripting, as in that case, there are lots of little pieces of software, but they all exist as intrinsic complexity ‘within’ the scripting environment. They are essentially inside of the solution space, not outside.
We can’t necessarily linearize complexity and make it explicitly comparable, but we can understand that one instance of complexity is more complex than another. We might have to back up the context a little to distinguish that, but it always exists. We can also relate complexity back to work. It would be a specific amount of work to build a solution to solve some given problem, but if we meander while constructing it, obviously that would be a lot more work. The shortest path, with the least amount of work, would be to build the full solution as effectively as possible so that it fully covers the problem. For software, that is usually a massive amount of work, so we tend to do it in parts, and gradually evolve into a more complete solution. If the evolution wobbles though, that is an effort that could have been avoided.
All of this gives us a sense that the construction of software as a solution is driven by understanding and controlling complexity. Projects are smooth if you understand the full complexities of the problem and find the best path forward to get them covered properly by a software solution as quickly as possible. If you ignore some of those complexities, intrinsic or artificial, they tend to generate more complexity. Eventually, if you ignore enough of them the project gets out of control and usually meets an undesirable fate. Building software is an exercise in controlling complexity increases. Every decision is about not letting it grow too quickly.
Sunday, September 22, 2024
Editing Anxieties
An all too common mistake I’ve seen programmers make is to become afraid of changing their code.
They type it in quickly. It’s a bit muddled and probably a little messy.
Bugs and changes get requested right away as it is not doing what people expect it to do.
The original author, and lots of those who follow, seek to make the most minimal changes they can. They are dainty with the code. They only do the littlest things in the hopes of improving it.
But the code is weak. It wasn’t fully thought out; it was poorly implemented.
It would save a lot of time to make rather bigger changes. Bold ones. Not to rewrite it, but rather to take what is there as a wide approximation to what should have been there instead.
Break it up into lots of functions.
Rename the variables and the function name.
Correct any variable overloading. Throw out redundant or unused variables.
Shift around the structure to be consistent, moving lines of code up or down the call stack.
All of those are effectively “nondestructive” refactorings. They will not change what the code is doing, but they will make it easier to understand.
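A tiny, invented before-and-after shows the flavor; the behavior is untouched, only the names and structure change:

```python
# Before: muddled names, everything inline.
def proc(d, t):
    r = []
    for x in d:
        if x["st"] == "a" and x["ts"] >= t:
            r.append({"id": x["id"], "n": x["nm"].strip().title()})
    return r

# After: the same logic, split up and renamed so the intent is obvious.
def is_recent_active(record, cutoff):
    return record["st"] == "a" and record["ts"] >= cutoff

def display_name(record):
    return record["nm"].strip().title()

def active_records_since(records, cutoff):
    return [
        {"id": r["id"], "n": display_name(r)}
        for r in records
        if is_recent_active(r, cutoff)
    ]
```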
Nondestructive refactors are too often avoided by programmers, but they are an essential tool in fixing weak codebases.
Once you’ve cleaned up the mess, and it is obvious what the code is doing, then you can decide how to make it do what it was supposed to do in the first place. But you need to know what is there first, in order to correctly change it.
If you avoid fixing the syntax, naming, inconsistencies, etc., it will not save time, only delay your understanding of how to get the code to where it should be. A million little fixes will not necessarily converge on correct code. It can be endless.
Thursday, September 12, 2024
Ambiguities
What is unknowable is just not knowable. It is an ambiguity. It could be one answer or it could be any other. There is no way to decide. The information to do so simply does not exist. It isn't open to discussion, it is a byproduct of our physical universe.
The languages we use to drive computers are often Turing Complete. This means they are highly expressive and deterministic, yet buried beneath them there can also be ambiguities. Some are accidentally put there by programmers; some are just intrinsically embedded.
I saw a post about a bug caused by one of the newer configuration languages. The language mistook the type of some of its data and made a really bad assumption about it. That is a deliberate ambiguity.
We choose implicit typing far too often just to save expression, but then we overload it later with polymorphic types, and we sometimes don’t get the precedence right. To be usable the data must be typed, but because it is left to the software to decide, and the precedence is incorrect, it forms an ambiguity. The correct data type is unknown and cannot be inferred. Mostly it works, right up until it doesn’t. If the original programmers forced typing, or the precedence was tight, the issue would go away.
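The post could have been about any of several formats, but the classic illustration of the trap is YAML’s implicit typing. A quick sketch using the PyYAML package, assuming it is installed:

```python
import yaml  # the PyYAML package

doc = """
country: no     # meant as the ISO code for Norway
debug: off      # meant as the literal word "off"
"""
print(yaml.safe_load(doc))
# -> {'country': False, 'debug': False}   (the loader guessed booleans)

# The only real fix is to take the choice away from the software:
print(yaml.safe_load('country: "no"\ndebug: "off"'))
# -> {'country': 'no', 'debug': 'off'}
```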
My favorite ambiguity is the two generals’ problem. It sits deep in the heart of transactional integrity, plaguing distributed systems everywhere. It is simple: if one computational engine sends a message to another one and receives no response, you can never know if it was the message that disappeared or the response. The information needed to correctly choose just doesn’t exist.
If the message was an action, you can’t know right away if the action happened or not.
It is the cause of literally millions of bugs, some encountered so frequently that they are just papered over with manual interventions.
What makes it so fascinating is that although you can never reduce the ambiguity itself, you can still wire up the behavior to fail so infrequently that you might never encounter the problem in your lifetime. That is some pretty strange almost non-deterministic behavior.
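One common way that wiring gets done is at-least-once delivery with idempotency keys: retry the action, but tag it so that a repeat of an already-applied action is harmless. It does not remove the ambiguity, it just shrinks the window. A rough sketch, with invented names:

```python
import uuid

APPLIED = {}   # receiver side: idempotency key -> saved result

def receiver_apply(key, action):
    # A repeat of an already-applied action returns the saved result instead of redoing it.
    if key not in APPLIED:
        APPLIED[key] = action()
    return APPLIED[key]

def sender_send(action, transport, attempts=5):
    key = str(uuid.uuid4())                 # one key shared by every retry of this action
    last_error = None
    for _ in range(attempts):
        try:
            return transport(key, action)   # may raise if the message or the reply is lost
        except TimeoutError as err:
            last_error = err                # unknowable which one was lost; retrying is safe
    raise last_error                        # still ambiguous, just vanishingly rare
```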
Ambiguities are one of the great boundaries of software. Our code is limited. We can endeavor to craft perfect software, but can never exceed the imperfections of reality. Software isn’t nearly as soft as we would like to believe, it is always tied to physical machines, which so often mess up the purity of their formal mechanics.
Sometimes to build good software you need both a strong understanding of what it should do and a strong understanding of what it cannot do. That balance is important.
Thursday, September 5, 2024
Value
We’ll start by defining ‘value’ as a product or service that somebody needs or wants. Their desire is strong enough that they are willing to pay something to get it.
There is a personal cost, in that money -- particularly when it comes from salary -- is an aspect of time. You have to spend time in your life to earn it. So, spending that money is also accepting that that time is gone too.
It is extraordinarily difficult to create value. Very few people can do it by themselves. There are not that many more people who can drive others to do it with them. Often it requires manifesting something complex, yet balanced, out of imagination, which is why it is such a rare ability.
Once value exists, lots of other people can build on top of it. They can enhance or expand it. They can focus on whatever parts of the production are time-consuming and minimize them. Value explodes when it is scaled; doing something for ten people is not nearly as rewarding as doing it for thousands.
The difference between the cost to produce and willingness to pay is profit. There is always some risk, as you have to spend money before you will see if it pays off. For new ideas, the risk is very high. But as the value proves itself those risks start to diminish. At some point, they are nearly zero, this is often referred to as ‘printing money’. You just need to keep on producing the value and collecting the profits.
At some point, the growth of the profitability of any value will be zero. The demand is saturated. If production is smooth at that point of zero growth, the profits are effectively capped. There is nowhere to go except down.
The easy thing you can do with value is degrade it. You can fall below the minimum for the product, then people will gradually stop wanting to pay for it. If it’s been around for a while habits will keep some purchases intact, but it will slowly trickle away. The profits will eventually disappear.
A common way to degrade value is to miss what is most valuable about it. People may be buying it, but their reasons for doing so are not necessarily obvious. The secondary concerns may be what is keeping them coming back. If you gut those for extra profit, you kill the primary value.
People incorrectly assume that all value has a life span. That a product or service lives or dies fairly quickly, and that turnover is normal. That is true for trendy industries, where the secondary aspects are that it is new and that it is popular, but there is an awful lot of base value that people need in their lives and they do not want it to be shifting around. Trustworthy constants are strong secondary attributes. The market may fluctuate slightly, but the value is as strong as ever. A stable business with no growth, but good long-term potential.
Exploiting value is the core of business. Companies find it, get as much money out of it as possible, then move on to something else. When the goal is perpetual growth, which isn’t ever possible, the value turns over too quickly, leading to a lot of instability. That is a destabilizing force on any society.
Friday, August 30, 2024
Learning
Even the most straightforward routine programming tasks involve having to know a crazy amount of stuff. The more you know, the faster the work will go and the better quality you’ll achieve. So knowledge is crucial.
Software is always changing and is a very trendy profession, so it is endless learning. You have to keep up with it. While it is a more relaxed schedule than university, it is nearly the same volume of information. Endless school. And when you finally get near the end of the tunnel and understand most of the tiny little parts of any given stack, you inevitably have to jump to another one to keep the job opportunities coming.
For me, the most effective way to learn a lot of stuff was directly from someone else. If I spend time working with someone who has nearly mastered a technology or set of skills, I’ll always pick up a huge amount of stuff. Mentors were my biggest influence, always causing big jumps in my abilities.
Hands-on experience can be good too, but really only if you augment it with some other source of guiding information, otherwise you run the risk of stumbling into too many eclectic practices that you think are fine, but end up making everything a lot harder. Bad habits create lots of bugs. Lots of bugs.
Textbooks used to be great. They were finely curated dense knowledge that was mostly trustworthy. Expensive and slow to read but worth it. Many had the depth you really needed.
Some of the fluffy books are okay, but lots of them are filled with misconceptions and bad habits. I tend to avoid most of them. Some I’ll read for entertainment.
Programming knowledge has a wide variance. You can find out an easy way to do something, but it is often fragile. Kinda works, but isn’t the best or most reliable way.
Usually, correctness is a bit fiddly, it isn’t the most obvious way forward. It is almost never easy. But it is worth the extra time to figure it out, as it usually saves a lot of stress from testing and production bugs. Poor quality code causes a lot of drama. Oversimplifications are common. Most of the time the code needs to be industrial strength, which takes extra effort.
University courses were great for giving broad overviews. You get a good sense of the landscape, and then it focuses in on a few of the detailed areas.
People don’t seem to learn much from Q&A sites. They just blindly treat the snippets as magic incantations. For me, I always make sure I understand ‘why’ the example works before I rely on it, and I always rearrange it to fit the conventions of the codebase I am working on. Blog posts can be okay too, but more often they are better at setting the context than fully explaining it.
If you have all of the knowledge to build a system and are given enough time, it is a fairly straightforward task. There are a few parts of it that require creativity or research, but then the rest of building it is just work. Skills like organization and consistency tend to enhance quality a lot more than raw knowledge, mostly because if you find out later that you were wrong about the way something needed to be done, a well-written codebase makes it an easy and quick change. In that sense, any friction during bug fixing is a reduction of quality.
Thursday, August 22, 2024
Navigation
The hardest part about creating any large software user interface is getting the navigation correct. It is the ‘feel’ part of the expression ‘look and feel’.
When navigation is bad, the users find the system awkward to use. It’s a struggle to do stuff with it. Eventually, they’ll put up with the deficiencies but they’ll never really like using the software.
When the navigation is good, no one notices. It fades into the background. It always works as expected, so people don’t see it anymore. Still, people tend to rave about that type of software and get attached to it. They are very aware that “it just works” and appreciate that. It is getting rarer these days.
The design of graphical user interfaces has been fairly consistent for at least thirty years. It can be phrased in terms of screens, selections, and widgets.
You start with some type of landing screen. That is where everyone goes unless they have customized entry to land elsewhere. That is pretty easy.
There are two main families of interfaces: hierarchical and toolbox.
A hierarchical interface makes the user move around through a tree or graph of screens to find the functionality they need. There are all sorts of navigation aids and quick ways to jump from one part of the system to another.
Toolboxes tend to keep the user in one place, a base screen, working with a small set of objects. They select these and then dig down into different toolboxes to get to the functionality they need and apply it to the object.
So we either make the user go to the functionality or we bring the functionality to the user. Mixing and matching these paradigms tends to lead to confusion unless the whole thing is held together by some other strong principle.
It all amounts to the same thing.
At some point, we have established a context for the user that points to all of the data, aka nouns, needed by some functionality, aka verbs, to execute correctly, aka compute. Creating and managing that context is navigation. Some of that data is session-based, some is based around the core nouns that the user has selected, and some of it is based on preferences. It is always a mixture, which has often led to development confusion.
For nouns, there are always 0, 1, or more of any type of them. Three choices. A good interface will deal with all three. The base verbs are create, edit, and delete. Those apply, directly or indirectly, to every action, and again, all three should be handled for everything. There are also connective verbs, like find.
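A rough sketch of that idea; the counts of selected nouns decide which verbs currently make sense, and the noun and verb names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    session: dict = field(default_factory=dict)       # who is logged in, etc.
    selection: list = field(default_factory=list)     # the nouns picked so far
    preferences: dict = field(default_factory=dict)   # defaults and options

# Each verb declares how many nouns it needs: none, exactly one, or one-or-more.
VERBS = {"create": 0, "edit": 1, "delete": "many"}

def enabled_verbs(ctx):
    n = len(ctx.selection)
    out = []
    for verb, needs in VERBS.items():
        if needs == 0 and n == 0:
            out.append(verb)
        elif needs == 1 and n == 1:
            out.append(verb)
        elif needs == "many" and n >= 1:
            out.append(verb)
    return out

# e.g. enabled_verbs(Context(selection=[{"kind": "invoice", "id": 7}])) -> ['edit', 'delete']
```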
There are common navigation patterns like find-list-select as a means to go from many to one. The opposite is true too, but more often you see that in a toolbox. You can go from one to many.
There are plenty of variations on saving and restoring context. In some interfaces, it can be recursive, so you might go down from the existing context into a new one, execute the functionality, and link that back into the above context.
The best interfaces minimize any cognitive demands. If the user needs three things from different parts of the system, as they navigate, those parts are not only connected to the context but can also be viewed, and possibly modified. The simplistic example is copy and paste, but there are other forms like drag and drop, saved values, history, etc. They are also consistent. They do not use every trick in the book; instead, they carefully pick a small number of them and apply them consistently. This allows the user to correctly guess how to get things accomplished.
The trick is to make it all intuitive. The user knows they want to do something, but they don’t have to think very hard about how to accomplish it, it is obvious. All they need to do is grok the philosophy and go.
Good interfaces sometimes degrade quickly. People start randomly adding in more functionality, sort of trying to follow the existing pattern, but not quite. Then suddenly the interface gets convoluted. It has an overall feel, but too many exceptions to that feel.
In that sense, an interface is scaled to support a specific amount of functionality, so when that grows a new interface is needed to support the larger amount.
There is no one-size-fits-all interface paradigm, just as there are no one-size-fits-all scaling solutions for any other part of software. This happens because organizational schemes are relative to size. As the size changes, it all needs to be reorganized, there is no avoiding that constraint. You cannot organize an infinite number of things, each organizational scheme is relative to a range.
The overall quality of any interface is how it feels to the user. Is it intuitive? Is it complete? A long time ago, someone pointed out that tutorials and manuals for interfaces exist only to document the flaws. They document the deviations from a good interface.
We could do advanced things with interfaces, but I haven’t seen anyone try in a long time.
One idea I’ve always wondered about is whether you could get away with only one screen for everything that mixes up all of the nouns. As you select a bunch, the available verbs change. This runs slightly against the ‘discovery’ aspects of the earlier GUI conventions, where menu items should always be visible even if they are greyed out because the context is incomplete. This flips that notion. You have a sea of functionality, but only see what is acceptable given the context. If the context itself is a first-class noun, then as you build up a whole bunch of those, they expose more functionality. As you create more and more higher-order objects, you stumble on higher-order functionality as well.
Way back, lots of people were building massive workflow systems. It’s all graph based but the navigation is totally dynamic. You just have a sea of disconnected screens, all wired up internally, but then you have navigation on the fly that moves the users between them based on any sort of persistent customization. Paired with dynamic forms, and if you treat things like menus as just partial screens, you get the ability to quickly rewire arbitrary workflows through any of the data and functionality. You can quickly create whatever workflow you need. If there is also an interface for creating workflows, users can customize all aspects of their work and the system will actually grow into its usage. It starts empty, and people gradually add the data pipes and workflows that suit them.
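A rough sketch of that kind of wiring, with invented screen names and rules:

```python
# The wiring lives as data, not code, so it can be changed while the system runs.
WORKFLOWS = {
    "new-claim": [
        {"screen": "claim-form"},
        {"screen": "attach-documents"},
        {"screen": "review", "when": lambda ctx: ctx.get("total", 0) > 1000},
        {"screen": "confirmation"},
    ],
}

def next_screen(workflow, current, ctx):
    steps = WORKFLOWS[workflow]
    i = next(i for i, s in enumerate(steps) if s["screen"] == current)
    for step in steps[i + 1:]:
        condition = step.get("when")
        if condition is None or condition(ctx):
            return step["screen"]
    return None   # end of the workflow

# e.g. next_screen("new-claim", "attach-documents", {"total": 250}) -> "confirmation"
```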
If you abstract a little, you see that every interface is just a nested collection of widgets working together. We navigate between these building up a context in order to trigger functionality. The interface does not know or need to know what the data is. It should contain zero “business logic”. The only difference between any type of data is labels, types, and structure. Because of that, we can actually build up higher-level paradigms that make it far less expensive to build larger interfaces, we do not need to spend a lot of time carefully wiring up the same widget arrangements over and over again. There are some reusable composite widgets out there that kind of do this, but surprisingly fewer of them than you would expect after all of these decades.
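A small sketch of an interface layer that only knows labels, types, and structure; the field descriptions are invented:

```python
# The builder only sees labels, types, and structure; real widgets are stood in for by strings.
FIELDS = [
    {"label": "Name", "type": "text", "path": "name"},
    {"label": "Quantity", "type": "number", "path": "qty"},
    {"label": "Shipped", "type": "flag", "path": "shipped"},
]

def render_form(fields, record):
    widgets = []
    for f in fields:
        value = record.get(f["path"], "")
        widgets.append(f'[{f["type"]}] {f["label"]}: {value}')
    return "\n".join(widgets)

print(render_form(FIELDS, {"name": "Widget A", "qty": 3, "shipped": True}))
```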
We tend to treat interface creation as if it were a new thing. It was new forty years ago, but it really is just stock code wiring these days. The interface reflects the user model of the underlying persisted data model. It throws in context on top. There are a lot of well-established conventions. They can be fun to build, but once you decide on how the user should see their data and work with it, the rest is routine.
When navigation is bad, the users find the system awkward to use. It’s a struggle to do stuff with it. Eventually, they’ll put up with the deficiencies but they’ll never really like using the software.
When the navigation is good, no one notices. It fades into the background. It always works as expected, so people don’t see it anymore. Still, people tend to rave about that type of software and get attached to it. They are very aware that “it just works” and appreciate that. It is getting rarer these days.
The design of graphical user interfaces has been fairly consistent for at least thirty years. It can be phrased in terms of screens, selections, and widgets.
You start with some type of landing screen. That is where everyone goes unless they have customized entry to land elsewhere. That is pretty easy.
There are two main families of interfaces: hierarchical and toolbox.
A hierarchical interface makes the user move around through a tree or graph of screens to find the functionality they need. There are all sorts of navigation aids and quick ways to jump from one part of the system to another.
Toolboxes tend to keep the user in one place, a base screen, working with a small set of objects. They select these and then dig down into different toolboxes to get to the functionality they need and apply it to the object.
So we either make the user go to the functionality or we bring the functionality to the user. Mixing and matching these paradigms tends to lead to confusion unless it is super held together by some other principle.
It all amounts to the same thing.
At some point, we have established a context for the user that points to all of the data, aka nous, needed by some functionality, aka verbs, to execute correctly, aka compute. Creating and managing that context is navigation. Some of that data is session-based, some is based around the core nouns that the user has selected, and some of it is based on preferences. It is always a mixture which has often lead to development confusion.
For nouns, there are always 0, 1, or more of any type of them. Three choices. A good interface will deal with all three. For base verbs, they are create, edit, and delete. That directly or indirectly applied to every action, and again they are all handled for everything. There are connective verbs like find.
There are common navigation patterns like find-list-select as a means to go from many to one. The opposite is true too, but more often you see that in a toolbox. You can go from one to many.
There are plenty of variations on saving and restoring context. In some interfaces, it can be recursive, so you might go down from the existing context into a new one, execute the functionality, and link that back into the above context.
The best interfaces minimize any cognitive demands. If the user needs three things from different parts of the system, as they navigate, those parts are both connected to the context, but can also be viewed, and possibly modified. The simplistic example is copy and paste, but there are other forms like drag and drop, saved values, history, etc. They are also consistent. They do not use every trick in the book, instead, they carefully pick a small number of them and apply them consistently. This allows the user to correctly guess how to get things accomplished.
The trick is to make it all intuitive. The user knows they want to do something, but they don’t have to think very hard about how to accomplish it, it is obvious. All they need to do is grok the philosophy and go.
Good interfaces sometimes degrade quickly. People start randomly adding in more functionality, sort of trying to follow the existing pattern, but not quite. Then suddenly the interface gets convoluted. It has an overall feel, but too many exceptions to that feel.
In that sense, an interface is scaled to support a specific amount of functionality, so when that grows a new interface is needed to support the larger amount.
There is no one-size-fits-all interface paradigm, just as there are no one-size-fits-all scaling solutions for any other part of software. This happens because organizational schemes are relative to size. As the size changes, it all needs to be reorganized, there is no avoiding that constraint. You cannot organize an infinite number of things, each organizational scheme is relative to a range.
The overall quality of any interface is how it feels to the user. Is it intuitive? Is it complete? A long time ago, someone pointed out that tutorials and manuals for interfaces exist only to document the flaws. They document the deviations from a good interface.
We could do advanced things with interfaces, but I haven’t seen anyone try in a long time.
One idea I’ve always wondered about is whether you could get away with only one screen for everything, one that mixes up all of the nouns. As you select a bunch, the available verbs change. This runs slightly against the ‘discovery’ aspects of the earlier GUI conventions, where menu items should always be visible even when they are greyed out because the context is incomplete. This flips that notion. You have a sea of functionality, but only see what is applicable given the context. If the context itself is a first-class noun, then as you build up a whole bunch of those, they expose functionality. As you create more and more higher-order objects, you stumble on higher-order functionality as well.
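As a rough sketch of how that could work, assuming a hypothetical verb table that declares how many of each noun type a verb needs: the interface only surfaces the verbs whose requirements are satisfied by the current selection.

```python
# A minimal sketch of a single-screen interface where the visible verbs are
# just the ones whose required nouns are satisfied by the current selection.
# The verb table and noun types are hypothetical.

VERBS = {
    "merge":   {"customer": 2},   # needs two or more customers selected
    "invoice": {"customer": 1, "order": 1},
    "archive": {"order": 1},
}

def available_verbs(selection):
    """selection: list of (noun_type, noun_id) pairs currently selected."""
    counts = {}
    for noun_type, _ in selection:
        counts[noun_type] = counts.get(noun_type, 0) + 1
    return [verb for verb, needs in VERBS.items()
            if all(counts.get(t, 0) >= n for t, n in needs.items())]

print(available_verbs([("customer", "c1")]))                      # []
print(available_verbs([("customer", "c1"), ("customer", "c2")]))  # ['merge']
print(available_verbs([("customer", "c1"), ("order", "o7")]))     # ['invoice', 'archive']
```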
Way back, lots of people were building massive workflow systems. It’s all graph based but the navigation is totally dynamic. You just have a sea of disconnected screens, all wired up internally, but then you have navigation on the fly that moves the users between them based on any sort of persistent customization. Paired with dynamic forms, and if you treat things like menus as just partial screens, you get the ability to quickly rewire arbitrary workflows through any of the data and functionality. You can quickly create whatever workflow you need. If there is also an interface for creating workflows, users can customize all aspects of their work and the system will actually grow into its usage. It starts empty, and people gradually add the data pipes and workflows that suit them.
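A tiny sketch of that idea, with made-up screen names: the screens are disconnected functions over a shared context, and the order between them is just persisted data that users could edit.

```python
# A minimal sketch of dynamic navigation: a sea of disconnected screens, with
# the order between them coming from persisted workflow data rather than code.
# Screen names and the workflow itself are hypothetical.

SCREENS = {
    "pick_client": lambda ctx: ctx.update(client="C-42") or ctx,
    "enter_order": lambda ctx: ctx.update(order={"qty": 10}) or ctx,
    "confirm":     lambda ctx: ctx.update(confirmed=True) or ctx,
}

# This would normally be loaded from persistence and edited by the users.
WORKFLOW = ["pick_client", "enter_order", "confirm"]

def run_workflow(workflow, screens):
    ctx = {}
    for name in workflow:          # navigation on the fly, driven by data
        ctx = screens[name](ctx)
    return ctx

print(run_workflow(WORKFLOW, SCREENS))
# {'client': 'C-42', 'order': {'qty': 10}, 'confirmed': True}
```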
If you abstract a little, you see that every interface is just a nested collection of widgets working together. We navigate between these, building up a context in order to trigger functionality. The interface does not know or need to know what the data is. It should contain zero “business logic”. The only difference between any type of data is labels, types, and structure. Because of that, we can actually build up higher-level paradigms that make it far less expensive to build larger interfaces; we do not need to spend a lot of time carefully wiring up the same widget arrangements over and over again. There are some reusable composite widgets out there that kind of do this, but surprisingly few of them after all of these decades.
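For example, a generic renderer can compose a form purely from labels, types, and structure, with no knowledge of what the data means. The schema and the widget text below are invented for illustration.

```python
# A minimal sketch of an interface as nothing but labels, types, and structure:
# the same generic renderer handles any noun, with zero business logic inside.
# The schema contents are hypothetical.

SCHEMA = [
    ("Customer name", "text"),
    ("Credit limit",  "number"),
    ("Active",        "checkbox"),
]

WIDGET_FOR_TYPE = {
    "text":     lambda label: f"[{label}: ________ ]",
    "number":   lambda label: f"[{label}: 0.00 ]",
    "checkbox": lambda label: f"[{label}: ( ) ]",
}

def render_form(schema):
    """Compose a form purely from structure; the data's meaning never matters here."""
    return "\n".join(WIDGET_FOR_TYPE[kind](label) for label, kind in schema)

print(render_form(SCHEMA))
```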
Thursday, August 15, 2024
Nailed It
“If all you have is a hammer, everything looks like a nail” -- Proverb
When you need a solution, the worst thing you can do is to fall back on just the tricks you’ve learned over the years and try to force them to fit.
That’s how we get Rube Goldberg machines. They are a collection of independent components that appear to accomplish the task, but they do so in very awkward and overly complex ways. They make good art, but poor technology.
Instead, you have to see the problem itself, and how it really affects people. You start in their shoes and see what it is that they need.
You may have to twist it around a few times in your head, in order to figure out the best way to bind a bunch of seemingly unrelated use cases. Often the best solution is neither obvious nor intuitive. Once you’ve stumbled onto it, it is clear to everyone that it fits really well, but just before that, it is a mystery.
Just keep picturing it from the user's perspective.
There is some information they need, they need it in a useful form, and they need to get to it quickly. They shouldn’t have to remember a lot to accomplish their tasks, that is the computer’s job.
The users may phrase it one way or describe it with respect to some other ancient and dysfunctional set of tools they used in the past. You can’t take them literally. You need to figure out what they really need, not just blindly throw things together. It can take a while and go through a bunch of iterations. Sometimes it is like pulling teeth to get enough of the fragments from them to be able to piece it together properly. Patience and lots of questions are the keys.
If you visualize the solution that way, you will soon arrive at a point where you don't actually know how to build it. That is good, it is what you want. That is the solution that you want to decompose and break down into tangible components.
If you’ve never worked through that domain or type of system before, it will be hard. It should be hard. If it isn’t hard, then you are probably just trying to smash pieces in place. Hammering up a storm.
Some visualizations would be nearly perfect, but you know you’ll never quite get there, which is okay. As you work through the mechanics, time and impatience will force you to push in a few lesser parts. If you have each one encapsulated, then later you can enhance them. Gradually you get closer to your ultimate solution.
If you can visualize a strong solution to their problems, and you’ve worked through getting in place the mechanics you need to nearly get there, then the actual coding of the thing should be smooth. There will be questions and issues that come up of course, but you’ll already possess enough knowledge to answer most of them for yourself. Still, there should be an ongoing dialog with the users, as the earlier versions of the work will be approximations to what they need. You’ll make too many assumptions. There will be some adjustments made for time and scheduling. They may have to live with a few crude features initially until you get a window to craft something better.
This approach to building stuff is very different from the one often pushed by the software industry. They want the users to dump out everything fully, literally, and then they just want to blindly grind out components to match that. Those types of projects fail often.
They’ve failed for decades and different people have placed the blame on various parts of the process like technology or methodology, but the issue is that the users are experts on the problem, and this forces them to also be experts on the solution, which they are not. Instead, the people who build the solution have to understand the problem they are trying to solve as well. Without that, a disconnect will happen, and although the software gets built it is unlikely to fit properly. So, it’s a waste of time and money.
When building software, it is not about code. It is not about solving little puzzles. It is about producing a solution that fits back to the user's problems in a way that makes the user’s lives better. The stack, style, libraries, and methodologies don’t matter if the output doesn’t really solve the problem. It’s like building a house and forgetting to make it watertight or put a roof on it. Technically it is a house, but not really. The only real purpose of coding is to build things that make the world better. We build solutions, and hopefully not more problems.
Thursday, August 8, 2024
Coding Style
Over the decades I’ve worked in many different development teams of varying sizes.
For a few teams, we had super strict coding conventions, which included formatting, structure, naming, and even idioms.
In a bunch, we weren’t strict — there were no “enforced” standards — but the coders were mostly aligned.
In some projects, particularly later in my career, it was just herding cats, with all of the developers going their own way. A free-for-all.
In those first cases, the projects were all about high-quality code and that was what we delivered. It was tight, complex, and industrial. Slow and steady.
For the loosely aligned teams, the projects were usually successful. The quality was pretty good. There were some bugs, but they were quickly fixable. Really ugly code would mostly get refactored to fit in correctly. There was always cleanup.
For the rogue groups, the code was usually a huge mess. The architecture was an unstructured ball of mud. Stuff was scattered everywhere and most people were overly stressed. The bugs were legendary and the fixes were high risk and scary. If a bug crossed multiple authors’ work, it was an unsolvable nightmare. Nothing was ever cleaned up, just chucked in and left in that state.
So it’s no surprise that I see quality as tied to disciplined approaches like coding standards. It’s sort of style vs substance, but for programming, it is more like initial speed vs quality. If you understand in advance what you are building and concentrate, slowly and carefully, when you code, your output will be far better. It requires diligence and patience.
That manifests in consistency, readability, and reuse. All three reduce time, reduce bugs, and make it easier to extend the work later. If the development continues for years, it is mandatory.
If you’ve ever worked on tight projects vs sloppy ones, you see the difference. But if all of your projects have just been dumpster fires, it might not be so obvious. You might be used to the blowups in production, the arguments over requirements. A lot of development projects burn energy on avoidable issues.
It is getting increasingly hard to convince any development team that they need coding conventions. The industry frowns on them, and most programmers believe that they impinge on their freedom. They feel constrained, incorrectly believing that conventions will just slow them down.
Whenever I’ve seen a greenfield team that refuses or ignores it, I can pretty much predict the outcome. Late and low quality. Sometimes canceled after a disastrous set of releases. You can see it smoldering early on.
Sometimes I’ve been dragged into older projects to rescue them. What they often have in common is one or two really strong, but eclectic, coders pulling most of the weight, and everyone else a little lost, milling around the outside. So, they are almost standardized, in that the bulk of the code has some consistency, even if the conventions are a bit crazy. You can fix eclectic code if it is consistent, so they are usually recoverable.
One big advantage of a coding style is that it makes code reviews work. It is easier to review with respect to conventions than just comment on what is effectively random code. Violations of the conventions tend to happen in quick, sloppy work, which is usually where the bugs are clustered. If you are fixing a big mess, you’ll find that the bugs always take you back to the worst code. That is how we know they are tied to each other.
However, I find it very hard to set up, implement, and even follow conventions.
That might sound weird, given that I certainly recognize their importance. But conventions do act as a slight translation of my thinking about the code and how to express it. It usually takes a while before following them becomes natural. Initially, it feels like friction.
Also, some of the more modern conventions are far worse than the older ones. Left to decide, I’ll go far back to the stricter conventions of my past.
In software, all sorts of things are commonly changed just for the sake of changing them, not because they are improved. You could write endless essays about why certain conventions really are stronger, but nobody reads them. We really have to drop that “newer is always better” attitude, which is not working for us anymore.
Still, if you need to build something reliable, in the most efficient time possible, the core thing to do is not to let it become a dumpster fire. So, organization, conventions, and reuse are critical. You’ll take a bit longer to get out of the gate, but you’ll whizz by any other development team once you find the groove. It’s always been like that for the teams I’ve worked in and talked to, and I think it is likely universal. We like to say “garbage in, garbage out” for data; maybe for coding we should also say “make a mess, get a mess”.
Thursday, August 1, 2024
Massively Parallel Computing
A long time ago I was often thinking about the computer architectures of that day. They were a little simpler back then.
The first thing that bothered me was why we had disks, memory, and registers. Later we added pipelines and caches.
Why can’t we just put some data directly in some long-term persistence and access it there? Drop the redundancy. Turn it all upside down?
Then the data isn’t moved into a CPU, or FPU, or even the GPU. Instead, the computation is taken directly to the data. Ignoring the co-processors, the CPU roves around persistence. It is told to go somewhere, examine the data, and then produce a result. Sort of like a little robot in a huge factory of memory cells.
It would get a little weirder of course, since there might be multiple memory cells and a bit of indirection involved. And we’d want the addressable space to be gigantic, like petabytes or exabytes. Infinite would be better.
Then a computer isn’t time slicing a bunch of chips, but rather it is a swarm of much simpler chips that are effectively each dedicated to one task. Let’s call them TPUs. Since the pool of them wouldn’t be infinite, they would still do some work, get interrupted, and switch to some other work. But we could interrupt them a lot less, particularly if there are a lot of them, and some of them could be locked to specific code sequences like the operating system or device drivers.
If when they moved they fully owned and uniquely occupied the cells that they needed, it would be a strong form of locking. Built right into the mechanics.
We couldn’t do this before because CPUs are really complicated, but all we’d need is for each one to be a fraction of that size. A tiny, tiny instruction set, just the minimum. As small as possible.
Then the bus is just mapped into the addressable space. Just more cells, but with some different type of implementation beneath. The TPUs won’t know or care about the difference. Everything is polymorphic in this massive factory.
Besides a basic computation set, they’d have some sort of batch capability. That way they could lock a lot of cells all at once, then move or copy them somewhere else in one optimized operation. They might have different categories too, so some could behave more like an FPU.
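Here is a toy software simulation of the idea, not a hardware design, with everything in it made up: tiny processors rove over one flat space of cells, exclusive ownership of cells stands in for locking, and a batched copy is one operation over a claimed range.

```python
# A toy simulation of roving processors over a flat cell space. Hypothetical
# throughout: exclusive ownership of cells acts as the lock, and batch_copy is
# the "lock a lot of cells, move them in one optimized operation" idea.

class CellSpace:
    def __init__(self, size):
        self.cells = [0] * size
        self.owner = [None] * size          # exclusive ownership = the lock

    def claim(self, tpu_id, addrs):
        if any(self.owner[a] not in (None, tpu_id) for a in addrs):
            return False                    # another TPU occupies a cell
        for a in addrs:
            self.owner[a] = tpu_id
        return True

    def release(self, tpu_id, addrs):
        for a in addrs:
            if self.owner[a] == tpu_id:
                self.owner[a] = None

    def batch_copy(self, tpu_id, src, dst):
        """One batched operation over a claimed range of cells."""
        assert self.claim(tpu_id, list(src) + list(dst))
        for s, d in zip(src, dst):
            self.cells[d] = self.cells[s]
        self.release(tpu_id, list(src) + list(dst))

space = CellSpace(16)
space.cells[0:4] = [1, 2, 3, 4]
space.batch_copy("tpu-7", range(0, 4), range(8, 12))
print(space.cells[8:12])   # [1, 2, 3, 4]
```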
It would be easy to add more, different types. You would start with a basic number and keep dumping in more. In fact, you could take any two machines and easily combine them as one, even if they are of different ages. You could keep combining them until you had a beast. Or split a beast in two.
I don’t do hardware, so there is probably something obvious that I am missing, but it always seemed like this would make more sense. I thought it would be super cool if instead of trashing all the machines I’ve bought over the decades, I could just meld them together into one big entity.
Thursday, July 25, 2024
Updating Software
It would be nice if all of the programmers in this world always produced high-quality bug-free software. But they don’t.
There are some isolated centers of excellence that get pretty close.
But the industry's desire to produce high quality has been waning in the face of impatience and exponentially exploding complexity. Most programmers have barely enough time to get their stuff sort of working.
Thus bugs are common, and the number in the wild is increasing at an increasing rate.
A long time ago, updates were always scheduled carefully.
Updating dependencies was done independently of any code or configuration changes made on top. We kept the different types of changes separate to make it easier to diagnose problems. The last thing you want is to deploy on top of chaos.
Sometimes we would package upgrades into a big release, but we did this with caution. Before the release we would first clean up the code, then update the dependencies, then go off and do some light regression testing. If all that looked good, we’d start adding the big changes on top. We would do full testing before the release, covering both old and new features. It was a good order, but a bit expensive.
For common infrastructure, there would be scheduled updates, where all other changes were frozen ahead and behind it. So, if the update triggered bugs, we’d know exactly what caused them. If enough teams honored the freeze, upgraded dependency bugs would be picked up quickly and rolled back to reduce the damage.
A lot of that changed with the rise of computer crime. Some older code is buggy so exploits are happening all of the time. An endless stream of updates. Lots of security issues convinced people to take the easy route and turn on auto-updates. It keeps up with the patches, but the foundations have now become unpredictable. If you deploy something on top and you get a new bug, your code might be wrong or there could have been a change to a dependency below. You can’t easily tell.
The rise of apps (targeted platform programs) also pushed auto updates. In some cases it was just to get new features out quickly, since the initial apps were kind of crude, but it was also to patch major security flaws.
Auto updates are a pretty bad idea. There are the occasional emergency security patches that need immediate updating, but pretty much everything else does not. You don’t want surprise changes; they breed confusion.
Part of operations should be having a huge inventory list of critical dependencies and scanning for emergency patches. The scanning can be automated. If one shows up and it is serious, it should be updated immediately. But it always requires human intervention. Someone should assess the impact of making that change. The usage of the dependency may not warrant the risk of an immediate upgrade.
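A minimal sketch of what that could look like, with an invented inventory and invented advisory data: the scan is automated, but the output is a list for a human to assess, not an automatic upgrade. Real version comparison would need a proper parser; the string comparison here is just a stand-in.

```python
# A minimal sketch of the operations side: keep an inventory of critical
# dependencies, scan it against known advisories, and flag serious ones for a
# human to assess; never apply the update automatically.
# The inventory and advisory data are made up for illustration.

INVENTORY = {
    "openssl":  "3.0.13",
    "libxml2":  "2.12.6",
    "postgres": "15.6",
}

ADVISORIES = [
    {"package": "openssl", "affected_below": "3.0.14", "severity": "critical"},
    {"package": "libxml2", "affected_below": "2.12.5", "severity": "high"},
]

def needs_human_review(inventory, advisories):
    flagged = []
    for adv in advisories:
        installed = inventory.get(adv["package"])
        # Naive string comparison stands in for real version parsing.
        if installed and installed < adv["affected_below"] and adv["severity"] == "critical":
            flagged.append((adv["package"], installed, adv["affected_below"]))
    return flagged

for pkg, have, fixed_in in needs_human_review(INVENTORY, ADVISORIES):
    print(f"ASSESS: {pkg} {have} has a critical advisory; a fix exists in {fixed_in} or later")
```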
Zero-day exploits on internal-only software, for example, are not emergencies. There are no public access points for them to be leveraged. They need to get patched, but the schedule should be reasonable and not disruptive.
If we go back to that, then we should return to most updates being scheduled. It was a much better way of dealing with changes and bugs.
Software is an endless stream of changes, so to quote Ghostbusters: “Never cross the streams”. Keep your changes, dependency changes, and infrastructure changes separate. The most important part of running software is to be able to diagnose the inevitable bugs quickly. Never compromise that.
It’s worth noting that I always turn off auto updates on all my machines. I check frequently to see if there are any updates, but I do not ever trust auto updates and I do not immediately perform feature updates unless they are critical.
I have always done this because more than once in my life I have seen a big software release blow up because of auto updates. Variances in development machines cause works-on-my-machine bugs or outages at very vulnerable moments.
Teams always need to synchronize the updates for all of their tools and environments; it is bad not to. And you never want to update anything in the last few days or weeks before the release; that is just asking for trouble.
Some people will complain that turning off auto updates will take us back to the old days when it was not uncommon to find very stale software running out there. If it's been years of ignoring updates, the risks in applying all of them at once are huge, so each delay makes it harder to move forward. That is an operational deficiency. If you run software, you have a responsibility to update it on a reasonable basis. It is all about operations. Cheating the game with auto updates just moves the risks elsewhere, it does not eliminate them.
Some people don’t want to think about updates. Auto updates make that effort go away. But really you just traded it for an even worse problem: instability. A stale but reliable system is far better than the latest and greatest unstable mess. The most important part of any software is that it is trustworthy and reliable. Break those two properties and there is barely any value left.
I get that just turning on auto-updates makes life simpler today. But we don’t build software systems just for today. We build them to run for years, decades, and soon possibly centuries. A tradeoff does not eliminate the problem, it just moves it. Sometimes that is okay, but sometimes that is just making everything worse.
Thursday, July 18, 2024
Expression
Often I use the term expressibility to mean the width of all possible permutations within some usage of a formal system. So state machines and programming languages have different expressibility. The limits of what you can do with them are different.
But there is another way to look at it.
You decide you want to build a solution. It fits a set of problems. It has an inherent complexity. Programmers visualize that in different ways.
When you go to code it for the computer, depending on the language, it may be more or less natural. That is, if you are going to code some complex mathematical equations, then a mathematics-oriented language like APL would be easier. In nearly the same way we express the math itself, we can write the code.
Although it is equivalent, if you express that same code with any imperative language, you will have to go through a lot more gyrations and transformations in order to fit those equations in the language. Some underlying libraries may help, but you still need to bend what you are thinking in order to fit it into the syntax.
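A small illustration in Python, using a generic equation of my own choosing rather than anything from a real system: the first version reads almost like the formula, while the second is the same computation bent into an imperative shape, with more places for an index bug to hide.

```python
# The same equation written two ways (hypothetical example):
#   y_i = sum_j A[i][j] * x[j] + b[i]

# Close to the math: the code reads almost like the formula.
def matvec_direct(A, x, b):
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) + b_i
            for row, b_i in zip(A, b)]

# Bent to fit an imperative style: the same result, but more gyrations,
# and more places for an off-by-one or index bug to hide.
def matvec_imperative(A, x, b):
    y = []
    for i in range(len(A)):
        total = 0.0
        for j in range(len(x)):
            total = total + A[i][j] * x[j]
        y.append(total + b[i])
    return y

A = [[1, 2], [3, 4]]
x = [10, 20]
b = [1, 1]
assert matvec_direct(A, x, b) == matvec_imperative(A, x, b) == [51, 111]
```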
Wherever and whenever we bend, there is a significantly increased likelihood of bugs. The bends tend to hide problems. You can’t just read it back and say “Yeah, that is actually what I was thinking.” The change of expression obscures that.
A long time ago, for large, but routine systems, I remember saying that the code should nearly match the specifications. If the user wrote a paragraph explaining what the code should do, the code that does the work should reflect that almost perfectly.
The variables are the user terminology; the structure is as they described. If it were tight, we could show the author of the spec the finished code and they would be able to mostly understand it and then verify that it is what they want. There would be some syntactic noise, some intermediate values, and some error handling as well, but the user would likely be able to see through all of that and know that it was correct and would match the spec.
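For example, given a hypothetical spec sentence like “the accrued interest is the notional times the annual rate, prorated by the days held over a 360-day year”, the code can mirror it almost word for word:

```python
# A minimal sketch of code that mirrors a spec paragraph. The spec wording and
# the terms in it are hypothetical, not taken from any real system.

def accrued_interest(notional, annual_rate, days_held):
    # Intermediate value kept in the user's own vocabulary.
    year_fraction = days_held / 360
    return notional * annual_rate * year_fraction

# The author of the spec can read this back against their paragraph.
print(accrued_interest(notional=1_000_000, annual_rate=0.05, days_held=90))  # 12500.0
```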
That idea works well for specific complex calculations if they are tightly encapsulated in functions, but obviously, systems need a lot of other operational stuff around them to work. Still, the closer you get to that utopia, the more likely that visual inspections will bear fruit.
That doesn’t just affect quality but also enhances debugging and discussions. If someone has a question about how the system is working and you can answer that in a few seconds, it really helps.
Going the other way we can roughly talk about how far away the code drifts from the problem.
The programming language could be awkward and noisy. Expressing some complicated mathematics in assembler, for instance, would make it way harder to verify. All of the drift would have to be shoved into comments, or almost no one could ever understand it.
Some languages require a lot of boilerplate, frameworks, and syntactic sugar; the expression there can bear little resemblance to the original problem.
Abstraction is another cause of drift. The code may solve a much more general problem, then need some specific configurations to scope it down to the actual problem at hand. So the expression is split into two parts.
The big value of abstraction is reuse. Code it once, get it working, and reuse it for dozens of similar problems. It is a huge time saver, but the expression is a little more complex.
Expression in this sense isn’t that different from writing. You can say something in plain simple terms or you can hide your message in doublespeak. You might still be communicating the same things, but just making the listener's job a whole lot more difficult.
In the midst of a critical production bug, it really sucks if the expression of the code is disconnected from the behavior. At the extreme, it is spaghetti code. The twists and turns feel nearly random. Oddly, the worse the code expression, the more likely that there will be critical production bugs.
Good code doesn’t run into this issue very often; bad code hits it all of the time. Battle-tested abstract code is fine unless there is a problem, but those problems are also rare. If you are fixing legacy code, most of what you will encounter will be bad code. The good or well-tested stuff is mostly invisible.
Thursday, July 11, 2024
Effective Management
What I think management theorists keep forgetting is that the rubber needs to hit the road. That is, management is a secondary occupation. It exists to make sure something actually gets done well enough, underneath.
Time is the critical component.
There are things you can do right now to alleviate an issue, which some might believe are successful. But if that short-term fix ends up making the overall problems worse later, it was not successful. It just took some time for the lagging consequences to play out.
We, as a species, seem really bad at understanding this. We clamor for fast fixes, then whine later that they were inappropriate. It would make more sense for us to accept that not all consequences are immediate and that we can trace bad things to a much earlier decision. And more importantly, we can learn from this and not keep repeating the same mistakes over and over again. We can get smarter, we can approach getting things done intelligently.
We like hierarchies to run stuff, but we seem to be foggy about their construction. People at different levels get confused about their role in making things happen. You get dictatorial leaders who make outrageous claims, and disempowered workers who are entirely disconnected from what they are doing. The circumstances spiral out of control; unfortunate things happen.
At the bottom are the people doing the work.
To scale up an endeavor, they need to be placed in little controlled boxes. They cannot and should not be free to do whatever they choose. They have a job to do, they need to do their job.
But at the same time, if the boxes are so small that they can hardly breathe, that is really bad too. They need to have enough freedom that they feel comfortable with what they are doing and can concentrate on doing their best at it. They need to be able to assess the value of what they are doing and participate in deciding if it is a worthwhile activity.
A good workforce will do good works regardless of their management. They will find the better ways to get things done. If they are enabled, the quality of their work will be better. Bad management can steal credit for a good workforce, it happens all of the time.
Good management understands that their job is to make sure the environment exists in order to build up a good workforce. They set the stage, the tone. They can help, but they are not the workforce, they are not in as much control of the situation as they want to believe. They didn’t do something, they set the stage for that thing to get done. It is important, but it is not the main thing.
As you go up the hierarchy, the concerns of management should be time and direction.
The lower managers need to be mostly concerned about tactical issues. They need to make sure the obvious obstacles are not preventing the workforce from accomplishing their effort.
Farther up the hierarchy the concerns should be longer-term. At the very top, the primary concern is strategy. Someone running a company should be obsessed with at least the next five years, if not far longer.
It’s the higher executives that should clue into the negative long-term consequences. They might realize that some tactical decision to get around a problem will hurt them somewhere down the road. They should be the ones who understand enough of the landscape to find a better path forward. They might call for opinions, but they are supposed to be in the position of evaluating all of them correctly. Ultimately direction is their decision, their responsibility.
If a company accidentally cannibalizes its own market with a less effective product, it is a strategic mistake. It may take a long time to play out but it is still a black mark against the leaders at the top. They should not have pointed the company to the edge of a cliff.
If a company lowers quality too far and eats through the value they had built up earlier, that is also a strategic mistake. They should have known exactly what the minimum quality was, and they should be ensuring that the effort does not fall below that bar. They should be able to see that a series of tactical choices doesn’t add up correctly and requires some interference in order to keep it from getting worse. If they don’t act, that is still a decision that they made or should have made. They are still at fault.
If a company blindly trudges forward while the ground beneath them erodes and disappears that is another common strategic mistake. Things were going so well that the upper executives stopped paying attention. They grossly overestimated the lifespan of their offerings. They should have noticed that the landscape changed and they need to change direction now too. They were asleep at the wheel. It is their responsibility to find and follow the road, even as it twists and turns through the forest.
So we put it all together and we can see that we have workers that are comfortable at their jobs. They are doing their best and occasionally raising concerns. Their tactical management jumps in to resolve their blockers quickly. Sometimes someone even higher jumps in later to reset the direction slightly. Overall though the workers are clear about what they have to do, they do it well enough, and they have confidence that they are headed in the same direction as their colleagues. Drama is rare, and while there are always frustrations, they have some confidence that things will get better, albeit progress may be slow.
Contrast that to an organization out of control. The highest executives are stomping around making outrageous claims. These don’t match what is happening on the ground. People are mostly lost and frustrated. A few are pulling most of the weight and are exhausted. The lower management just exists to beat and whip the troops. Badness rolls downhill. Conflict is encouraged. There is often a stagnant bureaucracy that exists as nothing but roadblocks. Bad problems don’t ever get fixed, they just fester while people try to steer around them. Navigating the insanity is harder than actually doing the work. Most of what actually gets done is useless; it contributes nothing to reaching the goals. Success is rare. Overall whatever value was created in the past is slowly decaying. It’s enough to keep the endeavor funded, but poor management is eating away at it instead of contributing to it. The overall direction is downwards and it is obvious to most people involved.
The problem with analysis is often perspective. If you take only a few viewpoints in a large organization, your understanding of the issues will be skewed. You have to step back, look at all of it, in all its ugly details, then objectively address the problems. Realistically, no one will be happy; they will always prefer their own agenda, that their own perspective dominates. But what we have trouble understanding is that the best we can do collectively is not the best we can do individually. Management is not about optimizing individual careers; it is about making sure that a group of people can get a lot of work done as effectively as the circumstances allow. All that really matters in the end is the long term. Everything else is a distraction.
Time is the critical component.
There are things you can do right now to alleviate an issue, which some might believe are successful. But if that short-term fix ends up making the overall problems worse later, it was not successful. It just took some time for the lagging consequences to play out.
We, as a species, seem really bad at understanding this. We clamor for fast fixes, then whine later that they were inappropriate. It would make more sense for us to accept that not all consequences are immediate and that we can trace bad things to a much earlier decision. And more importantly, we can learn from this and not keep repeating the same mistakes over and over again. We can get smarter, we can approach getting things done intelligently.
We like hierarchies to run stuff, but we seem to be foggy about their construction. People at different levels get confused about their role in making things happen. You get dictatorial leaders who make outrageous claims, and disabled workers that are entirely disconnected from what they are doing. The circumstances spiral out of control; unfortunate things happen.
At the bottom are the people doing the work.
To scale up an endeavor, they need to be placed in little controlled boxes. They cannot and should not be free to do whatever they choose. They have a job to do, they need to do their job.
But at the same time, if the boxes are so small that they can hardly breathe that is really bad too. They need to have enough freedom that they feel comfortable with what they are doing and can concentrate on doing their best at it. They need to be able to assess the value of what they are doing and participate in deciding if it is a worthwhile activity.
A good workforce will do good works regardless of their management. They will find the better ways to get things done. If they are enabled, the quality of their work will be better. Bad management can steal credit for a good workforce, it happens all of the time.
Good management understands that their job is to make sure the environment exists in order to build up a good workforce. They set the stage, the tone. They can help, but they are not the workforce, they are not in as much control of the situation as they want to believe. They didn’t do something, they set the stage for that thing to get done. It is important, but it is not the main thing.
As you go up the hierarchy, the concerns of management should be time and direction.
The lower managers need to be mostly concerned about tactical issues. They need to make sure the obvious obstacles are not preventing the workforce from accomplishing their effort.
Farther up the hierarchy the concerns should be longer-term. At the very top, the primary concern is strategy. Someone running a company should be obsessed with at least the next five years, if not far longer.
It’s the higher executives that should clue into the negative long-term consequences. They might realize that some tactical decision to get around a problem will hurt them somewhere down the road. They should be the ones who understand enough of the landscape to find a better path forward. They might call for opinions, but they are supposed to be in the position of evaluating all of them correctly. Ultimately direction is their decision, their responsibility.
If a company accidentally cannibalizes its own market with a less effective product, it is a strategic mistake. It may take a long time to play out but it is still a black mark against the leaders at the top. They should not have pointed the company to the edge of a cliff.
If a company lowers quality too far and eats through the value they had built up earlier, that is also a strategic mistake. They should have known exactly what the minimum quality was, and they should be ensuring that the effort does not fall below that bar. They should be able to see that a series of tactical choices doesn’t add up correctly and requires some intervention in order to keep it from getting worse. If they don’t act, that is still a decision that they made or should have made. They are still at fault.
If a company blindly trudges forward while the ground beneath them erodes and disappears, that is another common strategic mistake. Things were going so well that the upper executives stopped paying attention. They grossly overestimated the lifespan of their offerings. They should have noticed that the landscape had changed and that they needed to change direction too. They were asleep at the wheel. It is their responsibility to find and follow the road, even as it twists and turns through the forest.
So we put it all together and we can see that we have workers who are comfortable at their jobs. They are doing their best and occasionally raising concerns. Their tactical management jumps in to resolve their blockers quickly. Sometimes someone even higher jumps in later to reset the direction slightly. Overall though, the workers are clear about what they have to do, they do it well enough, and they have confidence that they are headed in the same direction as their colleagues. Drama is rare, and while there are always frustrations, they have some confidence that things will get better, albeit slowly.
Contrast that to an organization out of control. The highest executives are stomping around making outrageous claims. These don’t match what is happening on the ground. People are mostly lost and frustrated. A few are pulling most of the weight and are exhausted. The lower management just exists to beat and whip the troops. Badness rolls downhill. Conflict is encouraged. There is often a stagnant bureaucracy that exists as nothing but roadblocks. Bad problems don’t ever get fixed, they just fester while people try to steer around them. Navigating the insanity is harder than actually doing the work. Most of what actually gets done is useless; it contributes nothing to reaching the goals. Success is rare. Overall, whatever value was created in the past is slowly decaying. It’s enough to keep the endeavor funded, but poor management is eating away at it instead of contributing to it. The overall direction is downwards, and it is obvious to most people involved.
The problem with analysis is often perspective. If you take only a few viewpoints in a large organization, your understanding of the issues will be skewed. You have to step back, look at all of it, in all its ugly details, then objectively address the problems. Realistically, no one will be happy; people will always prefer that their own agenda, their own perspective, dominates. But what we have trouble understanding is that the best we can do collectively is not the best we can do individually. Management is not about optimizing individual careers; it is about making sure that a group of people can get a lot of work done as effectively as the circumstances allow. All that really matters in the end is the long term. Everything else is a distraction.
Thursday, July 4, 2024
AI
I was interested in neural nets over thirty years ago. A friend of mine was engrossed in them and taught me a lot. They were hot for a while, but then they faded away.
As the AI winter began thawing, I dipped in periodically to see how it was progressing.
The results were getting amazing. However, I tended to see these large language models as just dynamically crafting code to fit specific data. Sure the code is immensely complex, and some of the behaviors are surprising, but I didn’t feel like the technology had transcended the physical limitation of hardware.
Computers are stupid; software can look smart, but it never is. The utility of software comes from how we interpret what the computer remembers.
A few weeks ago I was listening to Prof. Geoffrey Hinton talk about his AI concerns. He had survived the winter in one of our local universities. I have stumbled across his work quite often.
You have to respect his knowledge; it is incredibly deep. But I was still dismissing his concerns. The output from these things is a mathematical game; it may appear intelligent, but it can’t be.
As his words sank deeper, I started thinking back to some of Douglas Hofstadter’s work. Gödel, Escher, Bach is a magnum opus, but I read some of his later writings where he delved into epiphenomena. I think it was I Am a Strange Loop where he was making an argument that people live on in others’ memories.
I didn’t buy that as a valid argument. Interesting, sure, but not valid. Memories are static; what we know of intelligent life forms is that they are always dynamic. They can and do change. They adjust to the world around them; that is the essence of life. Still, I thought the higher concept of an epiphenomenon itself was interesting.
All life, as far as I know, is cellular. Roger Penrose in The Emperor's New Mind tried to make the argument that our intelligence and consciousness on top of our bodies sprang from the exact sort of quantum effects that Einstein so hated. Long ago I toyed with the idea that that probabilistic undertone was spacetime, as an object, getting built. I never published that work, early readers dismissed it quite rapidly, but that sense that the future wasn’t written yet stayed with me. That it all somehow plays back into our self-determination and free will as Penrose was suggesting. Again, another interesting perspective.
And the questions remained. If we are composed of tiny biological machines, how is it possible that we believe we are something else entirely on top of this? Maybe Hofstadter’s epiphenomena really are independent of their foundations? Are we entities in our own right, or are we just clumps of quadrillions of cells? A Short History of Nearly Everything by Bill Bryson helps to muddle that notion even further.
Does it roll back to Kurt Gödel’s first incompleteness theorem, that there are things -- that are true -- that are entirely unreachable from the lower mechanics? I’ll call them emergent properties. They seem to spring out of nowhere, yet they are provably true.
If we searched, would we find some surprising formula that dictates the construction of sequential huge prime numbers, starting at a massive one and continuing for a giant range, a formula whose existence we would be totally unaware of unless we actually calculated it all out and examined it? Nothing about the construction of the primes themselves would lead us to deduce this formula. It seems to be disconnected. Properties just seem to emerge.
Gödel did that proof for formal systems, which we are not, but we have become the masters of expressing the informal relationships that we see in our world with formal systems, so the linkages between the two are far tighter than we understand right now.
The argument that our sense of self is an extraordinarily complex epiphenomenon, springing to “life” on top of a somewhat less than formal biological system that is in the middle of writing itself out, is super fascinating. It all sort of ties itself together.
And then it scared me. If Hinton is correct then an AI answering questions through statistical tricks and dynamic code is just the type of complex foundation on which we could see something else emerge.
It may just be a few properties short of a serious problem right now. But it may be worse than that, because humans tend to randomly toss things into it at a foolish rate. A boiling cauldron of trouble.
We might just be at that moment of singularity, and we might just stumble across the threshold accidentally. Some programmer somewhere thinks one little feature is cool, and that is just enough extra complexity for a dangerous new property to emerge, surprising everyone. Oops.
That a stupid computer can generate brand new text that is mostly correct and sounds nearly legitimate is astounding. While it is still derived from and bounded by a sea of input, I don’t think it has crossed the line yet. But I am starting to suspect that it is too close for comfort now; that if I focused really hard on it, I could give it a shove to the other side. And what’s worse is that I am nowhere close to being the brightest bulb in the amusement park. What’s to keep someone brilliant, a near genius, from just waking up one night and setting it all off, blind to the negative consequences of their own success?
After the AI winter, I just assumed this latest sideshow was another fad that would fade away when everyone got bored. It will, unless it unleashes something else.
I did enjoy the trilogy Wake, Watch, Wonder by Robert J. Sawyer, but I suspect that the odds of a benevolent AI are pretty low. I’d say we have to slow way, way down, but that wouldn’t stop progress from the dark corners of the world.
If I had a suggestion, it would be to turn directly into opening Pandora’s box, but to do it in a very contained way. A tightly sandboxed testnet that is locked down fully. Extra fully. Nothing but sneakernet access. A billion-dollar self-contained simulation of the Internet, with an instantaneous kill switch and an uncrossable physical moat between it and the real world. Only there would I feel comfortable deliberately trying out ideas to see if we are now in trouble or not.