Even the most straightforward routine programming tasks involve having to know a crazy amount of stuff. The more you know, the faster the work will go and the better quality you’ll achieve. So knowledge is crucial.
Software is always changing and the profession is very trendy, so the learning is endless. You have to keep up with it. While it is a more relaxed schedule than university, it is nearly the same volume of information. Endless school. And when you finally get near the end of the tunnel and understand most of the tiny little parts of any given stack, you inevitably have to jump to another one to keep the job opportunities coming.
For me, the most effective way to learn a lot of stuff was directly from someone else. If I spend time working with someone who has nearly mastered a technology or set of skills, I’ll always pick up a huge amount of stuff. Mentors were my biggest influence, always causing big jumps in my abilities.
Hands-on experience can be good too, but really only if you augment it with some other source of guiding information, otherwise you run the risk of stumbling into too many eclectic practices that you think are fine, but end up making everything a lot harder. Bad habits create lots of bugs. Lots of bugs.
Textbooks used to be great. They were finely curated dense knowledge that was mostly trustworthy. Expensive and slow to read but worth it. Many had the depth you really needed.
Some of the fluffy books are okay, but lots of them are filled with misconceptions and bad habits. I tend to avoid most of them. Some I’ll read for entertainment.
Programming knowledge has a wide variance. You can find out an easy way to do something, but it is often fragile. Kinda works, but isn’t the best or most reliable way.
Usually, correctness is a bit fiddly; it isn’t the most obvious way forward. It is almost never easy. But it is worth the extra time to figure it out, as it usually saves a lot of stress from testing and production bugs. Poor quality code causes a lot of drama. Oversimplifications are common. Most of the time the code needs to be industrial strength, which takes extra effort.
University courses were great for giving broad overviews. You get a good sense of the landscape, and then they focus in on a few of the detailed areas.
People don’t seem to learn much from Q&A sites. They just blindly treat the snippets as magic incantations. For me, I always make sure I understand ‘why’ the example works before I rely on it, and I always rearrange it to fit the conventions of the codebase I am working on. Blog posts can be okay too, but more often they are better at setting the context than fully explaining it.
If you have all of the knowledge to build a system and are given enough time, it is a fairly straightforward task. There are a few parts of it that require creativity or research, but then the rest of building it is just work. Skills like organization and consistency tend to enhance quality a lot more than raw knowledge, mostly because if you find out later that you were wrong about the way something needed to be done, a well-written codebase makes it an easy and quick change. In that sense, any friction during bug fixing is a reduction of quality.
Software is a static list of instructions, which we are constantly changing.
Friday, August 30, 2024
Thursday, August 22, 2024
Navigation
The hardest part about creating any large software user interface is getting the navigation correct. It is the ‘feel’ part of the expression ‘look and feel’.
When navigation is bad, the users find the system awkward to use. It’s a struggle to do stuff with it. Eventually, they’ll put up with the deficiencies but they’ll never really like using the software.
When the navigation is good, no one notices. It fades into the background. It always works as expected, so people don’t see it anymore. Still, people tend to rave about that type of software and get attached to it. They are very aware that “it just works” and appreciate that. It is getting rarer these days.
The design of graphical user interfaces has been fairly consistent for at least thirty years. It can be phrased in terms of screens, selections, and widgets.
You start with some type of landing screen. That is where everyone goes unless they have customized entry to land elsewhere. That is pretty easy.
There are two main families of interfaces: hierarchical and toolbox.
A hierarchical interface makes the user move around through a tree or graph of screens to find the functionality they need. There are all sorts of navigation aids and quick ways to jump from one part of the system to another.
Toolboxes tend to keep the user in one place, a base screen, working with a small set of objects. They select these and then dig down into different toolboxes to get to the functionality they need and apply it to the object.
So we either make the user go to the functionality or we bring the functionality to the user. Mixing and matching these paradigms tends to lead to confusion unless it is tightly held together by some other principle.
It all amounts to the same thing.
At some point, we have established a context for the user that points to all of the data, aka nouns, needed by some functionality, aka verbs, to execute correctly, aka compute. Creating and managing that context is navigation. Some of that data is session-based, some is based around the core nouns that the user has selected, and some of it is based on preferences. It is always a mixture, which has often led to development confusion.
For nouns, there are always 0, 1, or more of any type of them. Three choices. A good interface will deal with all three. The base verbs are create, edit, and delete. Those apply directly or indirectly to every action, and again, all of them should be handled for everything. There are also connective verbs like find.
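To make that concrete, here is a minimal sketch of the noun and verb bookkeeping in Python. The noun names, the verb sets chosen for each cardinality, and the helper names are all hypothetical, just one possible wiring:

from dataclasses import dataclass, field
from enum import Enum, auto

class Cardinality(Enum):          # every noun selection is empty, single, or multiple
    NONE = auto()
    ONE = auto()
    MANY = auto()

@dataclass
class Selection:
    noun_type: str                # e.g. "customer" or "order" (hypothetical nouns)
    items: list = field(default_factory=list)

    @property
    def cardinality(self) -> Cardinality:
        if not self.items:
            return Cardinality.NONE
        return Cardinality.ONE if len(self.items) == 1 else Cardinality.MANY

def applicable_verbs(selection: Selection) -> set:
    """A good interface handles all three cardinalities for every noun type."""
    if selection.cardinality is Cardinality.NONE:
        return {"create", "find"}           # nothing selected: make one, or go find some
    if selection.cardinality is Cardinality.ONE:
        return {"edit", "delete", "find"}   # exactly one: act on it directly
    return {"delete", "find"}               # many selected: bulk actions only

print(sorted(applicable_verbs(Selection("customer"))))               # ['create', 'find']
print(sorted(applicable_verbs(Selection("customer", [{"id": 1}]))))  # ['delete', 'edit', 'find']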
There are common navigation patterns like find-list-select as a means to go from many to one. The opposite is true too, but more often you see that in a toolbox. You can go from one to many.
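As a tiny illustration of find-list-select, here is a sketch with made-up records; the field names and criteria are only for show:

# find: narrow a large collection with criteria; the matches are then listed for the user
def find(records, **criteria):
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

# select: the user picks exactly one of the listed candidates
def select(candidates, index):
    return candidates[index]

customers = [{"name": "ACME", "region": "east"},
             {"name": "Initech", "region": "west"},
             {"name": "Globex", "region": "east"}]

matches = find(customers, region="east")   # many -> fewer
chosen = select(matches, 0)                # fewer -> one
print(chosen["name"])                      # ACME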
There are plenty of variations on saving and restoring context. In some interfaces, it can be recursive, so you might go down from the existing context into a new one, execute the functionality, and link that back into the above context.
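Here is one rough way that recursive save-and-restore could look; the screen names and the ‘result’ convention are invented for the example:

class ContextStack:
    """Contexts nest: descend into a sub-context, run some functionality,
    then link the result back into the context above."""

    def __init__(self):
        self._stack = [{}]                 # start with an empty root context

    @property
    def current(self) -> dict:
        return self._stack[-1]

    def push(self, **initial):             # go down into a new context
        self._stack.append(dict(self.current, **initial))  # inherit what is already established

    def pop_and_link(self, key):           # come back up, linking the result into the parent
        finished = self._stack.pop()
        self.current[key] = finished.get("result")

# usage: pick a customer in a sub-context, then link it back into the order screen above
nav = ContextStack()
nav.current["screen"] = "order"
nav.push(screen="customer-search")
nav.current["result"] = {"id": 42, "name": "ACME"}   # hypothetical selection
nav.pop_and_link("customer")
print(nav.current)   # {'screen': 'order', 'customer': {'id': 42, 'name': 'ACME'}}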
The best interfaces minimize any cognitive demands. If the user needs three things from different parts of the system, then as they navigate, those parts are connected to the context, but can also be viewed and possibly modified. The simplest example is copy and paste, but there are other forms like drag and drop, saved values, history, etc. They are also consistent. They do not use every trick in the book; instead, they carefully pick a small number of them and apply them consistently. This allows the user to correctly guess how to get things accomplished.
The trick is to make it all intuitive. The user knows they want to do something, but they don’t have to think very hard about how to accomplish it, it is obvious. All they need to do is grok the philosophy and go.
Good interfaces sometimes degrade quickly. People start randomly adding in more functionality, sort of trying to follow the existing pattern, but not quite. Then suddenly the interface gets convoluted. It has an overall feel, but too many exceptions to that feel.
In that sense, an interface is scaled to support a specific amount of functionality, so when that grows a new interface is needed to support the larger amount.
There is no one-size-fits-all interface paradigm, just as there are no one-size-fits-all scaling solutions for any other part of software. This happens because organizational schemes are relative to size. As the size changes, it all needs to be reorganized; there is no avoiding that constraint. You cannot organize an infinite number of things; each organizational scheme is relative to a range.
The overall quality of any interface is how it feels to the user. Is it intuitive? Is it complete? A long time ago, someone pointed out that tutorials and manuals for interfaces exist only to document the flaws. They document the deviations from a good interface.
We could do advanced things with interfaces, but I haven’t seen anyone try in a long time.
One idea I’ve always wondered about is whether you could get away with only one screen for everything that mixes up all of the nouns. As you select a bunch, the available verbs change. This runs slightly away from the ‘discovery’ aspects of the earlier GUI conventions, where menu items should always be visible even if they are greyed out right now because the context is incomplete. This flips that notion. You have a sea of functionality, but only see what is acceptable given the context. If the context itself is a first-class noun, then as you build up a whole bunch of those, they expose functionality. As you create more and more higher-order objects, you stumble on higher-order functionality as well.
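A small sketch of that context-driven exposure, assuming verbs simply declare which nouns, and how many of each, they require; all of the verbs and nouns here are invented:

from collections import Counter

# Verbs declare the noun types they need; the single screen only shows the verbs
# whose requirements are met by the current selection.
VERB_REGISTRY = {
    "merge":   Counter({"customer": 2}),              # needs at least two customers
    "invoice": Counter({"customer": 1, "order": 1}),
    "archive": Counter({"context": 1}),               # the context itself as a first-class noun
}

def visible_verbs(selected_noun_types):
    selected = Counter(selected_noun_types)
    return [verb for verb, required in VERB_REGISTRY.items()
            if all(selected[t] >= n for t, n in required.items())]

print(visible_verbs(["customer"]))              # []
print(visible_verbs(["customer", "order"]))     # ['invoice']
print(visible_verbs(["customer", "customer"]))  # ['merge']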
Way back, lots of people were building massive workflow systems. It’s all graph based but the navigation is totally dynamic. You just have a sea of disconnected screens, all wired up internally, but then you have navigation on the fly that moves the users between them based on any sort of persistent customization. Paired with dynamic forms, and if you treat things like menus as just partial screens, you get the ability to quickly rewire arbitrary workflows through any of the data and functionality. You can quickly create whatever workflow you need. If there is also an interface for creating workflows, users can customize all aspects of their work and the system will actually grow into its usage. It starts empty, and people gradually add the data pipes and workflows that suit them.
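As a toy version of that rewiring, the workflow below is nothing but persisted data walked by a generic driver; every screen and field name is made up:

# A sea of self-contained screens, keyed by name.
SCREENS = {
    "pick-account": lambda ctx: ctx.update(account="ACME-001"),
    "enter-amount": lambda ctx: ctx.update(amount=250.0),
    "confirm":      lambda ctx: print(f"confirmed {ctx['amount']} for {ctx['account']}"),
}

# The workflow itself is just data, so a workflow-editing interface could rewire it on the fly.
WORKFLOW = ["pick-account", "enter-amount", "confirm"]

def run(workflow):
    ctx = {}
    for screen_name in workflow:   # navigation driven entirely by the persisted list
        SCREENS[screen_name](ctx)
    return ctx

run(WORKFLOW)   # prints: confirmed 250.0 for ACME-001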
If you abstract a little, you see that every interface is just a nested collection of widgets working together. We navigate between these, building up a context in order to trigger functionality. The interface does not know or need to know what the data is. It should contain zero “business logic”. The only difference between any type of data is labels, types, and structure. Because of that, we can actually build up higher-level paradigms that make it far less expensive to build larger interfaces; we do not need to spend a lot of time carefully wiring up the same widget arrangements over and over again. There are some reusable composite widgets out there that kind of do this, but surprisingly fewer of them than you would expect after all of these decades.
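As a rough sketch of that, a completely generic renderer only ever sees labels, types, and structure; the widget names and field descriptions below are placeholders:

from dataclasses import dataclass

@dataclass
class Field:
    label: str          # the only things the interface knows about the data:
    type: str           # labels, types, and structure
    value: object = None

def render_form(fields):
    """Generic rendering: zero business logic, just a walk over the structure."""
    widget_for = {"string": "[ text box ]", "date": "[ date picker ]",
                  "money": "[ amount box ]", "bool": "[ check box ]"}
    return "\n".join(f"{f.label:>12}: {widget_for.get(f.type, '[ widget ]')}"
                     for f in fields)

# The same renderer handles any domain; only the description changes, never the code.
print(render_form([Field("Customer", "string"),
                   Field("Due date", "date"),
                   Field("Amount", "money"),
                   Field("Paid", "bool")]))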
We tend to treat interface creation as if it were a new thing. It was new forty years ago, but it really is just stock code wiring these days. The interface reflects the user model of the underlying persisted data model. It throws in context on top. There are a lot of well-established conventions. They can be fun to build, but once you decide on how the user should see their data and work with it, the rest is routine.
Thursday, August 15, 2024
Nailed It
“If all you have is a hammer, everything looks like a nail” -- Proverb
When you need a solution, the worst thing you can do is to fall back on just the tricks you’ve learned over the years and try to force them to fit.
That’s how we get Rube Goldberg machines. They are a collection of independent components that appear to accomplish the task, but they do so in very awkward and overly complex ways. They make good art, but poor technology.
Instead, you have to see the problem itself, and how it really affects people. You start in their shoes and see what it is that they need.
You may have to twist it around a few times in your head, in order to figure out the best way to bind a bunch of seemingly unrelated use cases. Often the best solution is neither obvious nor intuitive. Once you’ve stumbled onto it, it is clear to everyone that it fits really well, but just before that, it is a mystery.
Just keep picturing it from the user's perspective.
There is some information they need, they need it in a useful form, and they need to get to it quickly. They shouldn’t have to remember a lot to accomplish their tasks, that is the computer’s job.
The users may phrase it one way or describe it with respect to some other ancient and dysfunctional set of tools they used in the past. You can’t take them literally. You need to figure out what they really need, not just blindly throw things together. It can take a while and go through a bunch of iterations. Sometimes it is like pulling teeth to get enough of the fragments from them to be able to piece it together properly. Patience and lots of questions are the keys.
If you visualize the solution that way, you will soon arrive at a point where you don't actually know how to build it. That is good, it is what you want. That is the solution that you want to decompose and break down into tangible components.
If you’ve never worked through that domain or type of system before, it will be hard. It should be hard. If it isn’t hard, then you are probably just trying to smash pieces in place. Hammering up a storm.
Some visualizations would be nearly perfect, but you know you’ll never quite get there, which is okay. As you work through the mechanics, time and impatience will force you to push out a few lesser parts. If you have each one encapsulated, then later you can enhance them. Gradually you get closer to your ultimate solution.
If you can visualize a strong solution to their problems, and you’ve worked through getting in place the mechanics you need to nearly get there, then the actual coding of the thing should be smooth. There will be questions and issues that come up of course, but you’ll already possess enough knowledge to answer most of them for yourself. Still, there should be an ongoing dialog with the users, as the earlier versions of the work will be approximations to what they need. You’ll make too many assumptions. There will be some adjustments made for time and scheduling. They may have to live with a few crude features initially until you get a window to craft something better.
This approach to building stuff is very different than the one often pushed by the software industry. They want the users to dump out everything fully, literally, and then they just want to blindly grind out components to match that. Those types of projects fail often.
They’ve failed for decades, and different people have placed the blame on various parts of the process like technology or methodology, but the issue is that the users are experts on the problem, and this forces them to also be experts on the solution, which they are not. Instead, the people who build the solution have to understand the problem they are trying to solve as well. Without that, a disconnect will happen, and although the software gets built, it is unlikely to fit properly. So, it’s a waste of time and money.
When building software, it is not about code. It is not about solving little puzzles. It is about producing a solution that fits back to the user's problems in a way that makes the user’s lives better. The stack, style, libraries, and methodologies, don’t matter if the output doesn’t really solve the problem. It’s like building a house and forgetting to make it watertight or put a roof on it. Technically it is a house, but not really. The only real purpose of coding is to build things that make the world better. We build solutions, and hopefully not more problems.
Thursday, August 8, 2024
Coding Style
Over the decades I’ve worked in many different development teams of varying sizes.
For a few teams, we had super strict coding conventions, which included formatting, structure, naming, and even idioms.
In a bunch, we weren’t strict — there were no “enforced” standards — but the coders were mostly aligned.
In some projects, particularly later in my career, it was just herding cats, with all of the developers going their own way. A free-for-all.
In those first cases, the projects were all about high-quality code and that was what we delivered. It was tight, complex, and industrial. Slow and steady.
For the loosely aligned teams, the projects were usually successful. The quality was pretty good. There were some bugs, but they were quickly fixable. Really ugly code would mostly get refactored to fit in correctly. There was always cleanup.
For the rogue groups, the code was usually a huge mess. The architecture was an unstructured ball of mud. Stuff was scattered everywhere and most people were overly stressed. The bugs were legendary and the fixes were high risk and scary. If a bug crossed multiple authors’ work, it was an unsolvable nightmare. Nothing was ever cleaned up, just chucked in and left in that state.
So it’s no surprise that I see quality as tied to disciplined approaches like coding standards. It’s sort of style vs substance, but for programming, it is more like initial speed vs quality. If you understand in advance what you are building and concentrate, slowly and carefully, when you code, your output will be far better. It requires diligence and patience.
That manifests in consistency, readability, and reuse. All three reduce time, reduce bugs, and make it easier to extend the work later. If the development continues for years, it is mandatory.
If you’ve ever worked on tight projects vs sloppy ones, you see the difference. But if all of your projects have just been dumpster fires, it might not be so obvious. You might be used to the blowups in production, the arguments over requirements. A lot of development projects burn energy on avoidable issues.
It is getting increasingly harder to convince any development team that they need coding conventions. The industry frowns on it, and most programmers believe that it impinges on their freedoms. They feel constrained, incorrectly believing that it will just slow them down.
Whenever I’ve seen a greenfield team that refuses or ignores it, I can pretty much predict the outcome. Late and low quality. Sometimes canceled after a disastrous set of releases. You can see it smoldering early on.
Sometimes I’ve been dragged into older projects to rescue them. What they often have in common is one or two really strong, but eclectic, coders pulling most of the weight, and everyone else a little lost, milling around the outside. So, they are almost standardized, in that the bulk of the code has some consistency, even if the conventions are a bit crazy. You can fix eclectic code if it is consistent, so they are usually recoverable.
One big advantage of a coding style is that it makes code reviews work. It is easier to review with respect to conventions than just comment on what is effectively random code. Violations of the conventions tend to happen in quick, sloppy work, which is usually where the bugs are clustered. If you are fixing a big mess, you’ll find that the bugs always take you back to the worst code. That is how we know they are tied to each other.
However, I find it very hard to set up, implement, and even follow conventions.
That might sound weird in that I certainly recognize the importance of them. But they do act as a slight translation of my thinking about the code and how to express it. It usually takes a while before following the conventions becomes more natural. Initially, it feels like friction.
Also, some of the more modern conventions are far worse than the older ones. Left to decide, I’ll go far back to the stricter conventions of my past.
In software, all sorts of things are commonly changed just for the sake of changing them, not because they are improved. You could write endless essays about why certain conventions really are stronger, but nobody reads them. We really have to drop that “newer is always better” attitude, which is not working for us anymore.
Still, if you need to build something reliable, in the most efficient time possible, the core thing to do is not to let it become a dumpster fire. So, organization, conventions, and reuse are critical. You’ll take a bit longer to get out of the gate, but you’ll whizz by any other development team once you find the groove. It’s always been like that for the teams I’ve worked in and talked to, and I think it is likely universal. We like to say “garbage in, garbage out” for data, but maybe for coding we should also add “make a mess, get a mess”.
Thursday, August 1, 2024
Massively Parallel Computing
A long time ago I was often thinking about the computer architectures of that day. They were a little simpler back then.
The first thing that bothered me was why we had disks, memory, and registers. Later we added pipelines and caches.
Why can’t we just put some data directly in some long-term persistence and access it there? Drop the redundancy. Turn it all upside down?
Then the data isn’t moved into a CPU, or FPU, or even the GPU. Instead, the computation is taken directly to the data. Ignoring the co-processors, the CPU roves around persistence. It is told to go somewhere, examine the data, and then produce a result. Sort of like a little robot in a huge factory of memory cells.
It would get a little weirder of course, since there might be multiple memory cells and a bit of indirection involved. And we’d want the addressable space to be gigantic, like petabytes or exabytes. Infinite would be better.
Then a computer isn’t time slicing a bunch of chips, but rather it is a swarm of much simpler chips that are effectively each dedicated to one task. Let’s call them TPUs. Since the pool of them wouldn’t be infinite, they would still do some work, get interrupted, and switch to some other work. But we could interrupt them a lot less, particularly if there are a lot of them, and some of them could be locked to specific code sequences like the operating system or device drivers.
If when they moved they fully owned and uniquely occupied the cells that they needed, it would be a strong form of locking. Built right into the mechanics.
We couldn’t do this before because CPUs are really complicated, but all we’d need is for each one to be a fraction of that size. A tiny, tiny instruction set, just the minimum. As small as possible.
Then the bus is just mapped into the addressable space. Just more cells, but with some different type of implementation beneath. The TPUs won’t know or care about the difference. Everything is polymorphic in this massive factory.
Besides a basic computation set, they’d have some sort of batch capability. That way they could lock a lot of cells all at once, then move or copy them somewhere else in one optimized operation. They might have different categories too, so some could behave more like an FPU.
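Purely as a software toy of the idea: cells in one big flat space, with exclusive ownership of a range acting as the lock, and a batched copy on top. All of the names and sizes below are arbitrary:

import threading

class CellSpace:
    """Toy model: a flat, persistent space of cells that tiny processors rove over.
    Exclusively occupying a range of cells doubles as the locking mechanism."""

    def __init__(self, size):
        self.cells = [0] * size
        self.owner = [None] * size      # which TPU currently occupies each cell
        self._guard = threading.Lock()

    def claim(self, tpu_id, start, length):
        """A TPU moves onto a range of cells; the range must be unoccupied."""
        with self._guard:
            span = range(start, start + length)
            if any(self.owner[i] is not None for i in span):
                return False            # someone else is parked there
            for i in span:
                self.owner[i] = tpu_id
            return True

    def release(self, tpu_id, start, length):
        with self._guard:
            for i in range(start, start + length):
                if self.owner[i] == tpu_id:
                    self.owner[i] = None

    def batch_copy(self, tpu_id, src, dst, length):
        """The batch capability: lock both ranges, then do one optimized copy."""
        if not self.claim(tpu_id, src, length):
            return False
        if not self.claim(tpu_id, dst, length):
            self.release(tpu_id, src, length)
            return False
        self.cells[dst:dst + length] = self.cells[src:src + length]
        self.release(tpu_id, src, length)
        self.release(tpu_id, dst, length)
        return True

space = CellSpace(1024)
space.cells[0:4] = [1, 2, 3, 4]
space.batch_copy(tpu_id=7, src=0, dst=100, length=4)
print(space.cells[100:104])   # [1, 2, 3, 4]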
It would be easy to add more, different types. You would start with a basic number and keep dumping in more. In fact, you could take any two machines and easily combine them as one, even if they are of different ages. You could keep combining them until you had a beast. Or split a beast in two.
I don’t do hardware, so there is probably something obvious that I am missing, but it always seemed like this would make more sense. I thought it would be super cool if instead of trashing all the machines I’ve bought over the decades, I could just meld them together into one big entity.