A common problem in all code is juggling a lot of data.
If you are going to make your code base industrial-strength, one key part of that is to move any and all static data outside of the code.
That includes any strings, particularly since they are the bane of internationalization.
That also includes flags, counters, etc.
A really strong piece of code has no constant declarations. Not for logging, not for configuration, not even for user options. It still needs those constants, but they come from elsewhere.
This goal is really hard to achieve. But where it has been mostly achieved, you usually see the code staying in use a lot longer.
What this gives you is a boatload of data for configurations, as well as the domain data, user data, and system data. Lots of data.
We know how to handle domain data. You put it into a reliable data store, sometimes an RDBMS, or maybe NoSQL technology. We know that preventing it from being redundant is crucial.
But we want the same thing for the other data types too.
It’s just that while we may want an explicit map of all of the possible configuration parameters, in most mediums, whether that is a file, the environment, or the cli, we usually only want to amend a partial set. Maybe the default for 10,000 parameters is fine, but on a specific machine we need to change two or three. This changes how we see and deal with configuration data.
What we should do instead is take the data model for the system to be absolutely everything that is data. Treat all data the same, always.
Then we know that we can get it all from a bunch of different ‘sources’. All we have to do is establish a precedence and then merge on top.
For example, we have 100 weird network parameters that get shoved into calls. We put a default version of each parameter in a file. Then when we load that file, we go into the environment and see if there are any overriding changes, and then we look at the command line and see if there are any more overriding changes. We keep some type of hash table for this merged mess, then as we need the values, we simply do a lookup to get our hands on the final value.
This means that a specific piece of code only references the data once, when it is needed, and that there are multiple loaders that write over each other in some form of precedence. With that in place it is easy.
Load from file, load on top from env, load on top from cli. In some cases, we may want to load from a db too (there are fun production reasons why we might want this as the very last step).
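As a minimal sketch of that layering in Python (the file name, environment prefix, and flag syntax here are all assumptions, just to show the shape):

```python
import json
import os
import sys

def load_defaults(path):
    # Every known parameter has a default in one file.
    with open(path) as f:
        return json.load(f)

def load_env(prefix="MYAPP_"):
    # Any environment variable with the prefix overrides a default.
    return {k[len(prefix):].lower(): v
            for k, v in os.environ.items() if k.startswith(prefix)}

def load_cli(argv):
    # Anything passed as --name=value overrides both the file and the environment.
    pairs = (a[2:].split("=", 1) for a in argv if a.startswith("--") and "=" in a)
    return {name: value for name, value in pairs}

def build_config(path, argv):
    # Weakest source first; each later source writes over the earlier ones.
    config = {}
    config.update(load_defaults(path))
    config.update(load_env())
    config.update(load_cli(argv))
    return config

# The rest of the code just does a lookup when it needs the final value.
config = build_config("defaults.json", sys.argv[1:])
```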
We can validate the code because it uses a small number of parameters all in the place they are needed. We can validate the default parameters. We can write code to scream if an env var or cli param exists but there are no defaults. And so forth. Well structured, easy to understand, easy to test, and fairly clean. All good attributes for code.
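A sketch of that kind of screaming, assuming the defaults and overrides are plain dictionaries like the ones above (the parameter names are illustrative only):

```python
def check_overrides(defaults, overrides, source):
    # Scream if a source tries to set a parameter that has no default at all,
    # which almost always means a typo or a stale setting.
    unknown = set(overrides) - set(defaults)
    if unknown:
        raise ValueError(f"{source} sets unknown parameters: {sorted(unknown)}")

defaults = {"connect_timeout": "30", "retries": "3"}        # illustrative names only
check_overrides(defaults, {"connect_timeout": "5"}, "cli")  # fine
check_overrides(defaults, {"connect_timout": "5"}, "cli")   # raises: typo caught
```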
The fun part though is that we can get crazier. As I said, this applies to all ‘data’ in the system, so we can span it out over all other data sources, and tweak it to handle instances as well as values. In that sense, you can do something fun like use a cli command with args that fill in the raw domain data, so that you can do some pretty advanced testing. The possibilities are endless, but the code is still sane.
More useful, you can get some large-scale base domain data from one data source, then amend it with even more data from a bunch of other data sources. If you put checks on validity and drop garbage, the system could use a whole range of different places to merge the data by precedence. Start with the weakest sources first, and use very strong validation. Then loosen up the validation and pile other sources on top. You’ll lose a bit of the determinism overall, but offset that with robustness. You could do it live, or it could be a constant background refresh. That’s fun when you have huge fragmented data issues.
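A rough sketch of that kind of layered merge, with made-up sources, records, and validation rules purely for illustration:

```python
def merge_by_precedence(sources):
    # sources: weakest first; each entry is (records, validator).
    merged = {}
    for records, is_valid in sources:
        for record in records:
            if is_valid(record):   # drop garbage instead of merging it
                merged[record["id"]] = {**merged.get(record["id"], {}), **record}
    return merged

# The weak public feed gets very strong validation; the in-house data is trusted more.
strict = lambda r: "id" in r and r.get("name", "").strip() != ""
loose = lambda r: "id" in r

public_feed = [{"id": 1, "name": "ACME Corp"}, {"id": 2, "name": ""}]  # second is garbage
internal_db = [{"id": 1, "rating": "A"}, {"id": 3, "name": "Initech"}]

print(merge_by_precedence([(public_feed, strict), (internal_db, loose)]))
# {1: {'id': 1, 'name': 'ACME Corp', 'rating': 'A'}, 3: {'id': 3, 'name': 'Initech'}}
```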
Thursday, December 12, 2024
In Full
One of the key goals of writing good software is to minimize the number of lines of code.
This is important because if you only needed 30K lines, but ended up writing 300K, it was a lot of work that is essentially wasted. It’s 10x the amount of code, 10x the amount of testing, and 10x the bugs. As well, you have to search through 10x lines to understand or find issues.
So, if you can get away with 30K instead of 300K, is it always worth doing that?
Well, almost always.
30K of really cryptic code that uses every syntactic trick in the book to cleverly compress the code down to almost nothing is generally unreadable. It is too small now.
You have to revisit the code all of the time.
First to make sure it is reusable, but then later to make sure it is doing the right things in the right way. Reading code is an important part of keeping the entire software development project manageable.
So 30K of really compressed, unreadable stuff, in that sense is no better than 300K of really long and redundant stuff. It’s just a similar problem but in the opposite direction.
Thus, while we want to shrink down the representation of the code to its minimum, we oddly want to expand out the expression of that code to its maximum. It may seem like a contradiction, but it isn’t.
What the code does is not the same as the way you express it. You want it to do the minimum, but you usually want the expression to be maximized.
You could, for instance, hack 10 functions into a complex calling chain something like A(B(C(D(E(F(G(H(I(K(args)))))))))) or A().B().C().D().E().F().G().H().I().K(args) so it fits on a single line.
That’s nuts, and it would severely impair the ability of anyone to figure out what either line of code does. So, instead, you put the A..K function calls on individual lines. Call them one by one. Ultimately it does the same thing, but it eats up 10x lines to get there. Which is not just okay, it is actually what you want to do.
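To make that concrete, here is a tiny sketch with placeholder functions standing in for A through K:

```python
# Trivial stand-ins so the sketch runs; real functions would each do real work
# and have descriptive names.
A = B = C = D = E = F = G = H = I = K = lambda x: x

args = "some input"

# Compressed into one line, the flow is nearly impossible to follow:
result = A(B(C(D(E(F(G(H(I(K(args))))))))))

# Expanded out, one call per line, each step can be read, logged, or debugged:
step = K(args)
step = I(step)
step = H(step)
step = G(step)
step = F(step)
step = E(step)
step = D(step)
step = C(step)
step = B(step)
result = A(step)
```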
It’s an unrealistic example since you really shouldn’t end up with 10 individual function calls. Normally some of them should be encapsulated below the others.
Encapsulation hides stuff, but if the hidden parts are really just sub-parts of the parent, then it is a very good form of hiding. If you understand what the parent needs to do, then you would assume that the children should be there.
Also, the function names should be self-describing, not just letters of the alphabet.
Still, you often see coding nearly as bad as the above, when it should have been written out fully.
Getting back to the point, if you had 60K of code that spelled out everything in full, instead of the 30K of cryptic code, or the 300K of redundant code, then you have probably reached the best code possible. Not too big, but also not too small. Lots of languages provide plenty of syntactic sugar, but you really only want to use those features to make the code more readable, not just smaller.
Friday, December 6, 2024
The Right Thing
Long ago, rather jokingly, someone gave me specific rules for working; the most interesting of them was “Do the Right Thing”.
In some cases with software development, the right thing is tricky. Is it the right thing for you? For the project? For using a specific technology?
In other cases though, it is fairly obvious.
For example, even if a given programming language lets you get crazy with spacing, you do not. You format the source code properly, each and every time. The formatting of any source file should be clean.
In one sense, it doesn’t bug the computer if the spaces are messed up. But source code isn’t just for a computer. You’ll probably have to reread it a lot, and if the code is worth using, aka well written, lots of other people will have to read it as well. We are not concerned with the time it takes to type in the code or edit it carefully to fix any spacing issues. We are concerned with the friction that bad spacing adds when humans are reading the code. The right thing to do is not add unnecessary friction.
That ‘do extra now, for a benefit later’ principle is quite common when building software. Yet we see it ignored far too often.
One place it applies strongly, but is not often discussed, is with hacks.
The right thing to do if you encounter an ugly hack below you is not to ignore it. If you let it percolate upwards, the problem with it will continue. And continue.
Instead, if you find something like that, you want to encapsulate it in the lowest possible place to keep it from littering your code. Don’t embrace the ugliness, don’t embrace the mess. Wrap it and contain it. Make the wrapper act the way the underlying code should have been constructed.
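For example, a minimal sketch of that kind of wrapper; the awkward underlying call here is invented purely to illustrate:

```python
# A stand-in for some ugly lower-level call: it wants a comma-separated string of
# ids and signals failure by returning -1 instead of raising.
def legacy_fetch(id_csv):
    if not id_csv:
        return -1
    return [{"id": int(i)} for i in id_csv.split(",")]

# The wrapper is the only place that knows about the ugliness. Everything above it
# sees the interface the underlying code should have had in the first place.
def fetch_records(ids):
    result = legacy_fetch(",".join(str(i) for i in ids))
    if result == -1:
        raise LookupError(f"fetch failed for ids {ids}")
    return result

print(fetch_records([1, 2, 3]))   # [{'id': 1}, {'id': 2}, {'id': 3}]
```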
The reason this is right is because of complexity. If everybody lets everybody else’s messes propagate everywhere, the artificial complexity in the final effort will be outrageous. You haven’t built code, you have just glued together a bunch of messes into a bigger mess. The value of what you wrote is minimal.
A lot of people like the ‘not my problem’ approach. Or the ‘do the laziest thing possible’ one. But the value you are building is essentially crippled with those approaches. You might get to the end quicker, but you didn’t really do what was asked. If they asked me to build a system, it is inherent that the system should be at least good enough. If what I deliver is a huge mess, then I failed on that basic implicit requirement. I did not do my job properly.
So the right thing is to fix the foundational messes by properly encapsulating them so that what is built on top is 1000x more workable. Encapsulate the mess, don’t percolate it.
If you go back and look at much older technology, you definitely see periods where doing the right thing was far more common. And not only does it show, but we also depend on that type of code far more than we depend on the flakey stuff. That’s why the length of time your code is in production is often a reasonable proxy for its quality, which is usually a manifestation of the ability of some programmers to find really clean ways of implementing complicated problems. It is all related.
Thursday, November 28, 2024
Portability Layer
Way back, when I was first coding, if we had to write code to run on a wide variety of different “platforms”, the first thing we would always do was write a portability layer.
The idea was that above that layer, the code is simple and clean. It is the layer that deals with all of the strange stuff on the different platforms and different configurations.
In that sense, it is a separation of the code itself from the ugliness of making it run everywhere.
Sometime later, the practice fell out of use. Partly because there were fewer operating systems, so most people were just opting to rewrite their stuff for each one. But also because there was a plethora of attempts like the JVM to write universal machines to hide portability issues. They were meant to be that portability layer.
Portability is oddly related to user preferences and configurations. You can see both as a whackload of conditional code triggered by static or nearly static data. A platform or a session is just a tightened context.
And that is the essence of a portability layer. It is there to hide all of the ugly, nested mess that gets triggered from above.
You can’t let the operating system, specific machine configs, or user preferences muddle the code. The code needs to do one thing. Only one thing. Part of that one thing might be a little different below, but above the layer, it is all the same. So the layer hides that.
There may be extra steps for certain users or hardware. The layer hides that too.
The presentation may be different for different users, or even the language. The layer should hide that as well.
It would be better to call this an ugliness layer than a portability layer. It separates the high-level actions of the code from the implementation nightmares below it. And it should be driven by configuration data, which is just the same as any other data in the system. Realistically part of the database, part of the data model.
In a very abstract sense, some code is triggered with some configuration data, plus some alternative code chains that combine to do the work. At the top level, you can lay out the full logic, while underneath the portability layer, anything that isn’t required for a given context is a ‘no-op’, basically an empty function. When structured that way, it’s a lot easier to verify and debug.
The high level is a large set of polymorphic instructions; the underlying level is specific to the context.
The system is a big set of functions that implement various features. Each function is high level; these are mapped to the lower-level stuff that is specific to the machine, user, etc. Any sub-context that can be applied.
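A small sketch of that shape; the platform names, operations, and config key are placeholders:

```python
class Platform:
    # The portability layer: one interface that the high-level code talks to.
    def notify_user(self, message): raise NotImplementedError
    def lock_file(self, path): raise NotImplementedError

class DesktopPlatform(Platform):
    def notify_user(self, message):
        print(f"[popup] {message}")
    def lock_file(self, path):
        print(f"native lock on {path}")

class HeadlessPlatform(Platform):
    def notify_user(self, message):
        pass                        # no-op: nothing to show on a headless box
    def lock_file(self, path):
        print(f"lock file written beside {path}")

def choose_platform(config):
    # Driven by configuration data, just like any other data in the system.
    return {"desktop": DesktopPlatform, "headless": HeadlessPlatform}[config["platform"]]()

# Above the layer, the logic reads exactly the same in every context.
platform = choose_platform({"platform": "headless"})
platform.lock_file("/tmp/data.csv")
platform.notify_user("import finished")
```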
When code is structured that way, it is easy to figure out what it is doing, and a whole lot less messy. Code that is easy to figure out is also easy to test and debug. Easy code is higher quality code. Encapsulated madness below is where you’ll hit the bugs, and is harder to test. But if that is clearly separated, then it is just a matter of reproducing the problem and noticing why it went wrong.
We should not have forgotten about portability layers; they were definitely a far better decomposition for dealing with overly complex runtime issues.
Friday, November 22, 2024
Wobble
As a software development project spins out of control, it wobbles all over the place. The real issues get replaced by fabricated ones.
This is evident in its political discussions.
If all of the issues are grounded, then they are just work that needs to be accomplished. But if they have become disjoint from the underlying technical or domain constraints, then the project is wobbling.
Because the issues are no longer bound to reality, it does not matter how they are handled. They are not the problem, the wobble that caused them is the problem.
A wobbling project quickly gets messy and disorganized. That lowers the quality of both the proposed solution and its implementation. You end up with a lot of badly fitting technology that requires significant handholding just to get through basic functionality. Its fragility defeats its utility.
Wobbles are caused by all sorts of technical, domain, and control issues. These issues are disconnected from the work of building the solution itself. They are irrational and sometimes quite emotional.
The best thing you can do is recognize them as a wobble and work carefully to not let them interfere with the actual development effort. That can be tricky.
If some technology is irrationally banned even though it is suitable for the current development, you end up just having to find an acceptable replacement that doesn’t force a full re-architecture.
If the domain features and functionality are in question, you usually need to go right back to the users themselves and figure out directly from them what parts they actually need, and what parts they will not use. Skip the middle players, they are the source of the wobble.
For process issues, sometimes you need the illusion of following a useless makework process, but ensure that that effort does not steal away too much focus from the real underlying process. Nontechnical people love to throw all sorts of hoops in your way for you to jump through. You just have to make sure it is not too disruptive.
The biggest cause of most wobbles is a lack of real development experience, from the developers, management, stakeholders, or any others with influence. Software development often seems conceptually easy, but the size, scale, and sheer amount of effort involved in even a medium-sized effort make it somewhat non-intuitive. Good experience is necessary to detect and avoid wobbling.