A while back I wrote a small product that, unfortunately, had an early demise. It was great technically -- everyone who saw it loved it -- it’s just that it didn’t get any financial support.
The idea was simple. What if all of the data in an application were observable? Internally, it was all just thrown into a giant pool of data: variables in a model, calculations, state, context, etc. Everything was just observable data.
Then each piece of derived data would watch all of its dependencies. If it was a formula, it would notice when any of the underlying values changed, recalculate, and then tell anyone watching it that it was now different.
The stock way of implementing observables in an object-oriented paradigm is to keep a list of anyone watching, then issue an event, aka a function call, to each watcher. The fun part is that since any object can watch any other object, this is not just a tree or a DAG; it can be a complete graph.
Once you get a graph involved, any event percolating through it can get caught in a cycle. To avoid this, I traded off space by having each event carry a set of the objects it had already visited. If an object gets notified of an event and it is already in that visited set, it just ignores the event. Cycles defeated.
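A minimal sketch of those mechanics in Python; the names and shapes here are my inventions, not the original code:

```python
# Hypothetical observable pool: each value keeps a list of watchers, and
# every event carries a visited set to defeat cycles in the graph.

class Observable:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.watchers = []              # anyone watching this object

    def watch(self, watcher):
        self.watchers.append(watcher)

    def set(self, value, visited=None):
        visited = set() if visited is None else visited
        if id(self) in visited:         # already saw this event: cycle defeated
            return
        visited.add(id(self))
        self.value = value
        for w in self.watchers:
            w.on_change(self, visited)  # notify everyone downstream


class Formula(Observable):
    """Derived data: recalculates whenever any dependency changes."""

    def __init__(self, name, deps, fn):
        super().__init__(name)
        self.deps, self.fn = deps, fn
        for d in deps:                  # watch every dependency
            d.watch(self)

    def on_change(self, source, visited):
        self.set(self.fn(*[d.value for d in self.deps]), visited)


# Two leaf variables and a derived sum; updating A recalculates it.
a, b = Observable("A", 1), Observable("B", 2)
total = Formula("A+B", [a, b], lambda x, y: x + y)
a.set(5)
print(total.value)                      # 7
```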
So, now we have this massive pool of interconnected variables, and if we dumped some data into the pool, it would set off a lot of events. The leaf variables would just be updated and notify watchers, but the ones above would recalculate and then notify their watchers.
I did do some stuff to consolidate events. I don’t remember exactly, but I think I’d turn on a pause, update a large number of variables, then unpause. While paused, events would be noted but not issued. For any calculation with a lot of dependencies, I’d track the time of each recalculation; since a recalc had already grabbed all of the child variables’ data, the calculation would toss any events from its other dependencies that were timestamped earlier. So, for A+B+C, after the unpause you’d be notified that A changed and do the recalc, but the times of the B and C changes would be earlier than the recalc, so they’d be ignored. That cut down significantly on event spikes.
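As best I can reconstruct it, the consolidation was something like the sketch below; the pause flag and the timestamp check are assumptions pieced together from the description above, not the original code:

```python
import time

# Hypothetical coalescing: while the pool is paused, events are queued
# rather than issued, and a node that has already recalculated more
# recently than a queued event's timestamp simply drops that event.

class Pool:
    def __init__(self):
        self.paused = False
        self.pending = []               # (timestamp, target) while paused

    def emit(self, target):
        if self.paused:
            self.pending.append((time.monotonic(), target))
        else:
            target.deliver(time.monotonic())

    def unpause(self):
        self.paused = False
        for ts, target in self.pending:
            target.deliver(ts)          # deliver with the original timestamp
        self.pending.clear()


class CalcNode:
    """A calculation over several dependencies, e.g. A+B+C."""

    def __init__(self):
        self.last_recalc = 0.0

    def deliver(self, ts):
        if ts <= self.last_recalc:      # A's event already triggered a recalc
            return                      # that read B and C too; drop theirs
        self.recalculate()
        self.last_recalc = time.monotonic()

    def recalculate(self):
        pass                            # grab all dependency values, recompute


pool, node = Pool(), CalcNode()
pool.paused = True
for _ in range(3):                      # A, B, and C all change while paused
    pool.emit(node)
pool.unpause()                          # only the first event recalculates
```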
Then finally, at the top, I added a whole bunch of widgets. Each one was wired to watch a single variable in the pool. As data poured into the pool, the widgets would update automatically. I was able to wire in some animations too, so that if a widget displayed a number, whenever it changed it would flash white and then slowly return to its original color. Then I wired in tables and plotting widgets as well. The plot, for instance, was a composite widget whose children, the points of the curve, were each wired to different variables in the pool. So, the plot would change on the fly as things changed in the pool.
Now if this sounds like MVC, it basically is. I just didn’t care whether the widgets were in one view or a whole bunch of them; they’d all update correctly either way. And the entire model was in the pool. Any of the interface context stuff was in the model, so in the pool. Any of the user’s settings were in the model, so in the pool. In fact, every variable, anywhere in the system, was in the model, so in the pool. Thus the steroids designation. Anything that can vary in the code is an object, and that object is observable in the pool.
Since I effectively had matrices and formulas, it was a superset of a spreadsheet. A sort of disconnected one. A whole lot more powerful.
Because time was limited, my version had to wire up each object explicitly in code. But the objects didn’t do much other than inherit the mechanics, declare the children to watch, and provide a function to calculate. It would not have been too hard to make all of that dynamic and then provide an interface to create and edit new objects in the app itself. That would let someone create new objects on the fly and arrange them as needed.
It was easy to hook it up to other stuff as well. There was a stream of incoming data, handled by a simple mapping between the data’s parameters and the pool objects: get the next record in the stream, and update the pool as necessary. Keyboard and button events would also dump stuff directly into the pool. I think some of the widgets even had two-way bindings, so the underlying pool variable changed when the user changed the widget, and that could percolate to everything else. I had some half cycles, where a widget displayed a two-way-bound value; as the user changed it, it triggered other changes in the pool, which updated things on the fly, which would in turn change the widget. I used that for cross-widget validation as well: the widgets changed the meta information for each other.
I did my favorite form of widget binding, which is by name. I could have easily added scope to that, but the stuff I was working with was simple enough that I didn’t have any naming collisions. I’ve seen structure-based binding at times, but it can be painful and rigid. The pool has no structure, and the namespace was tiny because the objects were effectively hardwired to their dependencies. Extending it would need scope.
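A rough sketch of the name binding with the optional two-way hookup, building on the Observable sketch above; the registry and the widget API are invented for illustration:

```python
# Hypothetical name-based binding: a widget looks up its variable in the
# pool by name, watches it, and (if two-way) writes user edits back.

class Registry:
    def __init__(self):
        self.pool = {}                  # name -> Observable, no structure

    def register(self, obs):
        self.pool[obs.name] = obs

    def bind(self, widget, name, two_way=False):
        var = self.pool[name]           # lookup purely by name
        var.watch(widget)
        if two_way:
            # user edits flow back into the pool and percolate outwards
            widget.on_user_edit = lambda value: var.set(value)


class NumberWidget:
    def on_change(self, source, visited):
        self.flash_white()              # animate, then fade back
        print(f"{source.name} -> {source.value}")

    def flash_white(self):
        pass                            # stand-in for the real animation
```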
To pull it all together, I had dynamic forms and set them up to handle any type of widget, primitive or composite. I pulled a trick from my earlier work and expanded the concept of forms to include everything on the screen, including menus and other recursive forms. As well, forms could be set to not be editable, which lets one do new, edit, and view all with the same code, saving coding time and enforcing consistency.
Then I put in some extremely complex calculations, hooked it up to a real-time feed, and added a whole bunch of screens. You could page around while the stream was live and change stuff on the fly, while it was continuously updating. Good fun.
It’s too bad it didn’t survive. That type of engine has the power to avoid a lot of tedious wiring. Had I been allowed to continue I would have wired in an interpreter, a pool browser, and a sophisticated form creation tool. That and some way to dynamically wire in new data streams would have been enough to compete with tools like Excel. If you could whip up a new table and fill it with formulas and live forms, it would let you craft complex calculation apps quickly.
Software is a static list of instructions, which we are constantly changing.
Thursday, November 30, 2023
Thursday, November 23, 2023
Bottom Up
The way most people view software is from the top down. They see the GUI and other interfaces, but they don’t have any real sense of what lies behind it.
The way that bugs are reported is from the top down. Usually there is a problem visible at the interface, but the underlying cause may be deep in the mechanics.
The foundation of every system is the data it persists. If there isn’t a way to reliably keep the data around for a long time, it might be cute, but it certainly isn’t practical.
The best way to build software is from the bottom up. You lay down a consistent set of behaviors, then build more complicated behaviors on top of those. This leverages the common lower stuff; you don’t have to duplicate the work for everything on top.
The best way to extend a system is bottom-up. You start by getting the data into persistence, then into the core of the system, and then you work your way upwards until all of the interfaces reflect the changes.
The way to deal with bugs is top-down, but the trick is to keep going as low as you can. Fix things as low down as time will allow. Sometimes it is rushed, so you might fix a bunch of higher-level symptoms, but even then you still have to schedule the proper lower-level fixes soon.
The best way to screw up a large system is to let it get disorganized. The way you organize a system is from the bottom up; looking at it top-down will only confuse you.
Some people only want to see it one way or the other, as top-down or bottom-up, but clearly that isn’t possible. When a system becomes large, the clash in perspectives becomes the root of a lot of the problems. Going in the wrong direction, against the grain, will result in hardship.
Thursday, November 16, 2023
The Power of Abstractions
Programmers often complain about abstractions, which is unfortunate.
Abstractions are one of the strongest ‘power tools’ for programming. Along with encapsulation and data structures, they give you the ability to recreate any existing piece of modern software, yourself, so long as you have lots and lots of time.
There is always a lot of confusion about them. On their own, they are nothing more than a generalization. So, instead of working through a whole bunch of separate special cases for the instructions that the computer needs to execute, you step back a little and figure out what all of those different cases have in common. Later, you bind those common steps back to the specifics. When you do that, you’ve not only encoded the special cases, you’ve also encoded all of the permutations.
Put another way, if you have a huge amount of code to write and you can find a small tight abstraction that covers it completely, you write the abstraction instead, saving yourself massive amounts of time. If there were 20 variations that you needed to cover but you spent a little extra time to just create one generalized version, it’s a huge win.
Coding always takes a long time, so the strongest thing we can do is get as much leverage from every line as possible. If some small sequence of instructions appears in your code dozens of times, it indicates that you wasted a lot of time typing and testing it over and over again. Type it once, name it, make sure it works, and then reuse it. Way faster.
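As a toy illustration of that leverage (the report shapes here are made up): one generalized renderer instead of a dozen hand-rolled loops.

```python
# Before: one hand-rolled loop per report type, repeated dozens of times.
# After: the generalization, typed once, named, tested, and reused.

def render_report(rows, columns, title, sep=" | "):
    """Render any tabular report from a list of dicts."""
    lines = [title, "-" * len(title)]
    for row in rows:
        lines.append(sep.join(str(row[c]) for c in columns))
    return "\n".join(lines)

# Each former special case is now just a call with the specifics bound in.
trades = [{"symbol": "XYZ", "price": 101.5}, {"symbol": "ABC", "price": 7.2}]
users = [{"name": "Ana", "email": "ana@example.com"}]
print(render_report(trades, ["symbol", "price"], "Trades"))
print(render_report(users, ["name", "email"], "Users", sep=", "))
```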
A while back there were discussions that abstractions always leak. The example given was for third-generation programming languages: with those, you still sometimes need to go outside of the language to get some things done on the hardware, like talking directly with the video card. Unfortunately, it was an apples-to-oranges comparison. The abstractions in question generalized the notion of a ‘computer’, but just one instance of it. Modern machine architecture, however, is actually a bunch of separate computing devices all talking to each other through mediums like the bus or direct memory access. So, it’s really a ‘collection’ of computers. Quite obviously, if you put an abstraction over the one thing, it does not cover a collection of them. Collections are things themselves (which is part of what data structures are trying to teach).
A misfitting abstraction would not cover everything, and an abstraction for one thing would obviously not apply to a set of them. The abstraction of third-generation programming languages fit tightly over the assembler instructions that manipulated the one computer but obviously didn’t cover the ones used to communicate with peripherals. That is not leaking; really, it is just scope and coverage.
To be more specific, an abstraction is just a generalization. If it misfits and part of the underlying mechanics is sticking out, exposed for the whole world to see, the problem is encapsulation. The abstraction does not fully encapsulate the stuff below it. Partial encapsulation is leaking encapsulation. There are ugly bits sticking out of the box.
In most cases, you can actually find a tight-fitting abstraction. Some generalization with full coverage. You just need to understand what you are abstracting. An abstraction is a step up, but you can also see it as binding together a whole bunch of special cases like twigs. If you can visualize it as the overlaid execution paths of all of the possible permutations forming each special case, then you can see why there would always be something that fits tightly. The broader you make it the more situations it will cover.
The real power of an abstraction comes from a hugely decreased cognitive load. Instead of having to understand all of the intricacies of each of the special cases, you just have to understand the primitives of the abstraction itself. It’s just that it is one level of indirection. But still way less complexity.
The other side of that coin is that you can validate the code visually, by reading it. If it holds within the abstraction and the abstraction holds to the problem, then you know it will behave as expected. It’s obviously not a proof of correctness, but being able to quickly verify that some code is exactly what you thought it was should cut down on a huge number of bugs.
People complain though, that they are forced to understand something new. Yes, absolutely. And since the newer understanding is somewhat less concrete, for some people that makes it a little more challenging. But programming is already abstract and you already have to understand modern programming language abstractions and their embedded sub-abstractions like ‘strings’.
That is, crafting your own abstraction, if it is consistent and complete, is no harder to understand than any of the other fundamental tech stack ones, and to get really good at programming, you have to know those anyway. So adding a few more for the system itself is not onerous. In some cases, your abstraction can even cover a bunch of other lower-level ones, so if it is encapsulated, you don’t need to know those anymore. A property of encapsulation itself is to partition complexity, making the sum more complex but each component a lot less complex. If you want to write something sophisticated with extreme complexity, partitioning it is the only way it will be manageable.
One big fear is that someone will pick a bad abstraction and that will get locked into the code causing a huge mess. Yes, that happens, but the problem isn’t the abstraction. The problem is that people are locking things into the codebase. Treating all of the code in the system as write-once and untouchable is a huge problem. In doing that, it does not matter if the code is abstract or not, the codebase will degenerate either way, but faster if it is brute force. Either the code on top a) propagates the bugs below, b) wraps another onion layer around the earlier mess, or c) just spins off in a new silo. All three of these are really bad. They bloat up the lines of code, enshrine the earlier flaws, increase disorganization, and waste time with redundant work. They get you out of the gate a little faster, but then you’ll be stuck in the swamp forever.
If you pick the wrong abstraction, then refactoring to correct it is boring, but it is usually a constrained amount of work, and you can often do it in parts. If you apply the changes non-destructively during the cleanup phase, you can refactor away some of the issues and check their correctness before you pile more stuff on top. If you do that a bunch of times, the codebase improves with each release. You just have to be consistent about your direction of refactoring; waffling will hurt worse.
But that is true for all coding styles. If you make a mistake, and you will, then so long as you are consistent in that mistake, fixing it is always a smaller amount of work or at the very least can be broken down into a set of small amounts. If there are a lot of them, you may have to apply the sum over a large number of different releases, but if you persist and hold your direction constant, the code will get better. A lot better. Contrast this with freezing, where the code will always get worse. The mark of a good codebase is that it improves with time.
Sometimes people are afraid of what they see as the creativity involved with finding a new abstraction. Most abstractions however are not particularly creative. Really they are often just a combination of other abstractions fitted together to apply tightly to the current problem. That is, abstractions slowly evolve, they don’t just leap into existence. That makes sense, as often you don’t fully appreciate their expressibility until you’ve applied them a few times. So, it’s not creativity, but rather a bit of research or experience.
Programming is complicated enough these days that you will not get really far with it if you just stick to rediscovering everything yourself from first principles. Often the state of the art has been built up over decades, so going all of the way back in time and trying to reinvent everything again is going to be crude in comparison.
This is why learning to research a little is a necessary skill. If you decide to write some type of specific computation, doing some reading beforehand about others' experiences will pay huge dividends. Working with experienced people will pay huge dividends. Absorbing any large amount of knowledge efficiently will allow you to start from a stronger position. Code is just a manifestation of what the programmer understands, so obviously the more they understand the better the code will be.
The other side of this is that an inexperienced programmer seeking a super-creative abstraction will often be a disaster. This happens because they don’t fully understand what properties are necessary for coverage, so instead they hyper-focus on some smaller aspect of the computation. They optimize for that, but the overall fit is poor.
The problem though is that they went looking for a big creative leap. That was the real mistake. The abstraction you need is a generalization of the problems in front of you. Nothing more. Step back once or twice; don’t try to go way, way out until much later in your life and experience. What you do know should anchor you, always.
Another funny issue comes from concepts like patterns. As an abstraction, data structures have nearly full coverage over most computations, so you can express most things, with a few caveats, as a collection of interacting data structures. The same isn’t true for design patterns. They are closer to idioms than they are to a full abstraction. That is why they are easier to understand and more tangible. That is also why they became super popular, but it is also their failure.
You can decompose a problem into a set of design patterns, but it is more likely that the entire set now has a lot of extra artificial complexity included. Like an idiom, a pattern was meant to deal with a specific implementation issue; it would itself just be part of some abstraction, not the actual abstraction. They are implementation patterns, not design blocks. Patterns should be combined and hold places within an abstraction, not be a full and complete means of expressing the abstraction or the solution.
Oddly, programmers often seek one-size-fits-all rules, insisting that they are the one true way to do things. They do this because of complexity, but it doesn’t help. A lot of choices in programming are trade-offs, where you have to balance your decision to fit the specifics of what you are building. You shouldn’t always go left, nor should you always go right. The moment you arrive at the fork, you have to think deeply about the context you are buried in. That thinking can be complex, and it will definitely slow you down; thus the desire to blindly always pick the same direction. The less you think about it, the faster you will code, but the more likely that code will be fragile.
You can build a lot of small and medium-sized systems with brute force. It works. You don’t need to learn or even like abstractions. But if you want to work on large systems, or you want to be able to build stuff way faster, abstractions will allow you to do this. If you want to build sophisticated things, abstractions are mandatory. Once the inherent complexity passes some threshold, even the best development teams cannot deal with it, so you need ways of managing it that will allow the codebase to keep growing. This can only be done by making sure the parts are encapsulated away from each other, and almost by definition that makes the parts themselves abstract. That is why we see so many fundamental abstractions forming the base of all of our software, we have no other way of wrangling the complexity.
Thursday, November 9, 2023
Time Ranges
Sometimes you can predict the future accurately. For instance, you know that a train will leave the station tomorrow at 3 pm destined for somewhere you want to go.
But the train may not leave on time. There are a very large number of ‘unexpected’ things that could derail that plan, however, there is only a tiny probability that any of them will actually happen tomorrow. You can live with that small uncertainty, ignore it.
Sometimes you can only loosely predict the future. It will take me somewhere between 10 minutes and 1.5 hours to make dinner tonight. It’s a curve, and most likely the time it takes lands somewhere in the middle: not 10 minutes and not 1.5 hours. It may only take half an hour, but I will only be certain about that after I finish.
If you have something big to accomplish and it is made up of a huge number of little things, and you need to understand how long it will take, you pretty much should only work with time ranges. Some things may be certain like train schedules, but more often the effort is like making dinner. This is particularly true if it is software development.
So, you have 10 new features that result in 30 functional changes to different parts of the code. You associate a time range for each functional change. Then you have to add in the time for various configurations and lots of testing.
Worth noting that the time to configure something is not the actual time to add or modify the parameters; that part is trivial. It is the time required to both find and ensure that the chosen configuration values are correct. So, lots of thinking and some testing. It might take 1 minute to update a file, but up to 3 days to work through every possible permutation until you find one that works as expected. So the range is 5 mins if you get lucky, and 3 days if the universe is against you, which it seems to be sometimes.
For most things, most of the time, you’ll get a bit of luck. The work falls on the lower side of the range. But sometimes life, politics, or unexpected problems cause delays. With time ranges, you can usually absorb a few unexpected delays and keep going.
As the work progresses the two big levers of control are a) the completion date and b) the number of features. If luck is really not on your side, you either move the date farther out or drop a few features. You often need to move one or the other lever, which is why if they are both taken away it becomes more likely the release will explode.
Some things are inherently inestimable. You can’t know what the work is until someone has managed to get to the other side, and there is no way to know if anyone will ever get to the other side.
These types of occurrences are a small part of development work, but lots of other stuff is built on top of them. If you have something like that, you do that exploration first, then re-estimate when you are done.
For example, half the features depend on you making an unguessable performance improvement. If you fail to guess how to fix that lower issue in a reasonable time frame, then those features get cut. You can still proceed with the other features. The trick is to know that as early as possible, thus don’t leave inestimable work until the end. Do it right away.
It’s worth noting too that coding is often only one-third of the time. The analysis and design should be equal to the coding time, as should the testing time. That is, the work can take 3 times longer than most programmers expect, since their estimates effectively start 2/3rds of the way through.
People often shortcut this by doing almost no analysis or design, but that tends to bog things down in scope creep and endless changes. Skimping on testing lets more bugs escape into production, which makes any operational drama far more expensive. In both cases, attempting to save time by not doing necessary work comes back to haunt the project later and always ends up wasting more time than was saved. Shortcuts are nasty time trade-offs: save a bit of time today, only to pay more for it later.
If you have all of the work items as time ranges, then you can pick an expected luck percentage and convert the schedule into an actual date. Most times, it’s around the 66% mark. You can tell if you are ahead or behind, and you can lock in some of the harder and easier work early, so there is at least something at the end. If you end up being late, at least you know why you are late. For example, most of the tasks ended up near their maximum times.
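As a sketch, converting the ranges into a date at some luck percentage is just interpolation and a sum; the task list and start date below are made up, and 66% is the mark mentioned above:

```python
from datetime import date, timedelta

# Hypothetical: each work item is a (min_days, max_days) range; pick a
# luck percentage and sum the interpolated estimates into a finish date.

def finish_date(start, ranges, luck=0.66):
    total = sum(lo + (hi - lo) * luck for lo, hi in ranges)
    return start + timedelta(days=total)

tasks = [(1, 3), (2, 8), (0.5, 2), (3, 10)]     # 30 of these in practice
print(finish_date(date(2023, 11, 9), tasks))    # about 17 days out
```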
Time ranges also help with individual estimates. For example, you can ask a young programmer for an estimate and also a senior one. The difference will give you a range. In fact, everyone could chime in with dates, accurate or crazy, and you’d have a much better idea of how things may or may not progress. You don’t have to add in random slack time, as it isn’t ever anchored in reality. It is built in with ranges.
Time ranges are great. You keep them as is when you are working, but can convert them to fixed dates when dealing with external people. If you blow the initial ranges, you’ll know fairly early that you are unlucky or stuck in some form of dysfunction. That would let you send out an early ‘heads up’ that will mitigate some of the anger from being late.
Thursday, November 2, 2023
Special Cases
One of the trickiest parts of coding is to not let the code become a huge mess, under the pressure of rapid changes.
It’s pretty much impossible to get a concrete static specification for any piece of complex software, and it is far worse if people try to describe dynamic attributes as static ones. As such, changes in software are inevitable and constant. Whatever you write, be prepared for it to change; it will change.
One approach to dealing with this is to separately encode each and every special case as its own stand-alone, siloed piece of code. People do this, but I highly recommend against it. It is just an exponential multiplier on the amount of work and testing necessary. Time that could have been saved by working smarter.
Instead, we always write for the general case, even if the only thing we know today is one specific special case.
That may sound a bit weird, but it is really a mindset. If someone tells you the code should do 12 things for a small set of different data, then you think of that as if it were general. But then you code out the case as specified. Say you take the 3 different types of data and put them directly through the 12 steps.
But you’ve given it some type of more general name. It isn’t ProcessX; it’s something more akin to HandleThese3TypesOfData. Of course, you really don’t want the name to be that long and explicit. Pick something more general that covers the special case but does not explicitly bind to it. We’re always searching for ‘atomic primitives’, so maybe it is GenerateReport, even though it only actually works for this particular set of data and only goes through these 12 steps.
And now the fun begins.
Later, they have a similar case, but different. Say it is 4 types of data, but only 2 overlap with the first instance. And it is 15 steps, but only 10 overlap.
You wrap your generate report into some object or structure that can hold any of the 5 possible datatypes. You set an enumeration that switches between the original 12 steps and the newer 15 steps.
You put an indicator in the input to say which of the 2 cases match the incoming data. You write something to check the inputs first. Then you use the enum to switch between the different steps. Now someone can call it with either special case, and it works.
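In code, that first evolution might look something like this sketch; the step functions are placeholders standing in for the real 12 and 15:

```python
from enum import Enum

# Hypothetical first evolution of GenerateReport: one entry point, an
# input check, and an enum switching between the two known pipelines.

class ReportKind(Enum):
    ORIGINAL = 1                        # the first 12-step case
    EXTENDED = 2                        # the later 15-step case

def step_clean(data): return data       # shared by both pipelines
def step_totals(data): return data
def step_extended_totals(data): return data

PIPELINES = {
    ReportKind.ORIGINAL: [step_clean, step_totals],
    ReportKind.EXTENDED: [step_clean, step_totals, step_extended_totals],
}

def detect_kind(data):
    # check the inputs first; an indicator says which case this is
    return ReportKind.EXTENDED if "extra" in data else ReportKind.ORIGINAL

def generate_report(data):
    for step in PIPELINES[detect_kind(data)]:
        data = step(data)
    return data
```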
Then more fun.
Someone adds a couple more special cases. You do the same thing, trying very carefully to minimize the logic wherever possible.
Maybe you put polymorphism over the input to clean that up. You flatten whatever sort of nested logic hell is building up. You move things around, making sure that any refactoring that hits the original functionality is non-destructive. In that way, you leverage the earlier work instead of redoing it.
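And the polymorphic cleanup might go something like this, again with invented names; each subclass supplies its own pipeline while the mechanics stay written exactly once:

```python
# Hypothetical later pass: polymorphism over the input replaces the
# growing enum and flattens the nested logic.

def clean(data): return data            # steps shared by every case
def totals(data): return data
def fancy_totals(data): return data     # only the newer cases need this

class Report:
    steps = []                          # subclasses override

    def __init__(self, rows):
        self.rows = rows

    def generate(self):
        data = self.rows
        for step in self.steps:         # same mechanics for every case
            data = step(data)
        return data

class OriginalReport(Report):
    steps = [clean, totals]             # stands in for the 12 original steps

class ExtendedReport(Report):
    steps = [clean, totals, fancy_totals]   # the 15 newer steps, 10 shared
```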
And it continues.
Time goes by, you realize that some of the special cases can be collapsed down, so you do that. You put in new cases and collapse parts of the cases as you can. You evolve the complexity into the code, but make sure you don’t disrupt it. The trick is always to leverage your earlier work.
If you do that diligently, then instead of a whole pile of spaghetti code, you end up with a rather clean, yet sophisticated processing engine. It takes a wide range of inputs but handles them and all of the in-between permutations correctly. You know it is correct because, at the lower levels, you are always doing the right thing.
It’s not ‘code it once and forget it’, but rather carefully grow it into the complicated beast that it needs to become in order to be useful.