I was crafted by the Waterloo co-op program in the late eighties. Part of that experience was a crazy large number of interviews, so I got pretty good at them. Since then, I’ve worked for over a dozen companies.
For one interview (for my all-time favourite job), I was really young, so I got asked rapid-fire tech questions and had to bring printouts of my older code with me. That was fine; it was a systems programming job, really competitive, and a lot of people wanted it.
For another interview, at the turn of the century, I just went for dinner and drinks. Definitely my favourite interview.
I’ve had a couple of interviews in coffee shops; they were usually successful. The casual atmosphere really helps to connect.
Once I got ganged up on. I think it was six of them crammed into a little office, but the questions were product, process, and feature-related, not coding, so it was fine.
Often, I was at the interview because a friend I had worked with in the past was trying to pull me in. That generally made them go pretty easy on me.
I’ve had some bad interviews, though. Usually, when applying for advertised jobs.
Once I showed up, they put me in a big room with a bunch of other people and gave us a written test. I took it, sat down, and just signed my name. Then I got up, handed it in and left. No way I was going to work for them.
Another time, I was grilled on tech that I told them I hadn’t used for twenty years. The kid interviewing me was annoyed that I couldn’t remember some esoterica. Seriously?
One time, they gave me an online coding test. An editor embedded in an online chat. The first question was okay, but for the second question, they wanted me to correctly code something huge. I explained the actual theory behind it and why it wasn’t trivial, but they didn’t understand and told me just to grind it out in a little bit of ‘approximate’ code. I hemmed and hawed for a while, then said I wasn’t going to finish. They told me to try anyway, so I sat there quietly until the interview timed out. I was hoping awkward silence made my point.
After one interview, one of the executives proudly told me that I would have to look after his “hobby” system. I turned that down without a second thought.
For another, it was going well, but then I started making jokes about livelocks. Turns out the interviewer did his master's thesis on that topic. Oops.
One time, they said my take-home coding test had too many functions. I just laughed.
Another time, the interviewer started in with tricky little puzzle questions. I had seen them all before, so I could have answered, but I was already having a bad day, so I blew up. I got really angry, and the interviewer tried to calm me down. We agreed to meet in person, which went really well. I was sent across the country for a round of second interviews, but I was told I had a bit too much personality for them. It was still fun, and they paid my expenses.
One time, one of my co-workers showed me the tests he was going to use for interviews. He said to find the one problem; I pointed out a whole bunch of them, and he got mad at me. Lol.
I often interviewed candidates, too. If there were multiple interviewers, I’d get the others to ask the tech questions, and I’d focus on personality. I like to see that people are curious and keen to learn. If they had that and I could get them talking about something that excited them, I generally accepted them. That had a pretty good track record of finding good people.
I usually expected a long ramp-up time and the need for training, so I was rarely looking for prefab employees. I saw them as longer-term bets, which tended to pay off better.
For one company, since the code was brutal, I used a technical question that I was pretty sure the candidates couldn't answer. I just wanted them to try to work through the problem. I would help, but not give it all away. If they were stumped and started pitching ideas, it was perfect. I had some pretty good hires from that.
For that round of interviews, one of the senior candidates got insulted and said he wouldn’t take the test. I obviously sympathized, but for that type of work, it wasn’t optional; it was our daily grind.
Overall, my hiring track record is iffy. Some great hires, but also some duds. Usually, though, the duds were caused by scarcity and/or my not trusting my own instincts. Sometimes the candidates are too limited; there is not much you can do about it.
I really hate the big, long, stupid interviews, particularly when the questions are way out of whack with the actual work. Seems like an ego problem if they’ll test you on stuff that you’d never have to do. They’re trying too hard to be cool, and I really hate the taste of Kool-Aid. If the interview makes you uncomfortable, the actual job is probably worse. I never felt bad when I just walked away, but I usually wasn't desperate either. That helps.
Only once did I really have to take a job that I didn’t want. I stayed for a while, but some days were hard. The irony was that many of my later jobs were with people I had met at that early one. The job wasn’t great, but the contacts turned out to be awesome. That's why it's important to keep a positive attitude even in a negative situation.
I’ve always figured that if you got the world’s ten greatest programmers all together on a project, they would spend their days fighting with each other and nothing practical would get built. A less impressive team that works together really well is always better. It’s too bad modern interview practices don’t reflect that.
If you’re interviewing these days, be patient, stay strong. It’s a numbers game. Win a few, lose a lot.
The Programmer's Paradox
Software is a static list of instructions, which we are constantly changing.
Thursday, March 19, 2026
Thursday, March 12, 2026
Functions
Over the decades, I’ve seen the common practices around creating functions change quite a bit.
When I first started coding, functions had come out of the procedural paradigm. I guess, long ago, in maybe assembler, a program was just one giant list of instructions. That would be a little crippling, so one of the early attempts to help was to break it up into smaller functions. An added benefit was that you could reuse them.
By the time I started coding, the better practice was to break up the code along the lines of similarity. Code that is similar is clumped together.
As the data structures and object-oriented paradigms started taking hold, the practices switched to being targeted. For instance, you’d write a lot of little ‘atomic’ primitive functions for each action you did against the structure, like create, add, or traverse. Indirectly, that gave rise to the notion of a function just having a single responsibility.
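A minimal sketch of that 'atomic primitives' style, using a simple linked list. All of the names here are illustrative, not from any particular codebase: one tiny function per action against the structure, each with a single responsibility.

```python
# One small 'atomic' primitive per action against the structure.

def list_create():
    """Create an empty linked-list structure."""
    return {"head": None}

def list_add(lst, value):
    """Prepend a value; only the structure itself is touched."""
    lst["head"] = {"value": value, "next": lst["head"]}

def list_traverse(lst):
    """Yield every value, front to back."""
    node = lst["head"]
    while node:
        yield node["value"]
        node = node["next"]
```

Bigger operations then get built by composing these primitives, rather than by reaching into the structure's internals from scattered places in the code.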
In data structures, you might end up coding up a whole bunch of structures, then stacking them one on top of the other, mostly trying to get to one rather giant data structure for the whole program. That was excellent in building up sophistication from reusable parts, but a lot of programmers just saw it as excessive layering, not one big interactive structure. People kept wanting to decompose, without ever reassembling.
Object-oriented followed suit, but seemed to get lost on that notion of building up application objects. There were often dozens at the top. It also renamed functions to ‘methods’, but I’ll skip that. It was initially a very successful paradigm change, but later people started objecting to the feeling of layering, and to the idea that the entry points were somehow a ‘god’ object.
The very early Smalltalk object-oriented code had lots of functions, many of which were just one-liners. My first encounter surprised me. There were so many functions...
I guess, as a later reaction to the success of those earlier styles, the common practices moved back towards procedural. Almost no layers and very huge functions. Giant ones. This reopened the door for more brute force practices, which had been pushed away by those earlier paradigms.
I’ve always known that huge functions were a nightmare. Too much stuff all tangled together; it is unreadable and hard to follow. But any of the earlier attempts to limit function size were too restrictive and too fragile. You can’t just say that all functions must be fewer than 10 lines of code, for example. The attempts to categorize it as a single responsibility were pretty good, but because of the intense dislike of layers, they didn’t get across the notion that it is one thing at just one level. So, it would be coded as one thing, but with all of the raw instructions below that, as far down as the coder could go, all intertwined together. If you have messy low-level stuff to do, it should be hidden below in more functions, effectively a layer, but not really. For example, string fiddling in the middle of business logic is distracting, quickly killing off the readability.
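A quick hypothetical example of pushing string fiddling below the business logic. The account-code domain and both function names are made up for illustration; the point is that the top function reads as one thing at one level, while the character-level noise lives in a named helper underneath.

```python
# Hypothetical example: the messy string work is hidden in a helper,
# so the business logic above it stays at a single level.

def normalize_account_code(raw):
    """All of the string fiddling: trim, uppercase, strip separators."""
    return raw.strip().upper().replace("-", "").replace(" ", "")

def is_billable(account_code, active_accounts):
    """Business logic reads cleanly; no character-level noise here."""
    return normalize_account_code(account_code) in active_accounts
```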
Layers, as they first came out, were really architectural lines, not stacked data structures. For example, you cut a hard line between code that messes with persistence and code that computes derived objects, so you don’t mix and match. The derived stuff then sits in a layer above the persistence.
The point of those types of lines is to make the code super easy to debug later. If you know it is a derived calculation problem, you can skip the other code.
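A tiny sketch of that kind of hard line, with invented names: the persistence code only loads raw data, and the derivation code only computes over data it was handed. A bug in a derived number then cannot be hiding in the storage code, so you skip it entirely while debugging.

```python
# Persistence layer: fetches raw rows, nothing else.
def load_trades(store):
    return store["trades"]

# Derivation layer: a pure calculation over already-loaded data.
# It never touches persistence, so the two can be debugged separately.
def total_exposure(trades):
    return sum(t["qty"] * t["price"] for t in trades)
```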
Overall, I’d say that there can never be too many functions. Each one is a chance to attach some self-describing name to a chunk of code. Think of it as concise language-embedded comments. If you were tight about coding with some of the older paradigms, then the data structures or even objects are pretty much all named with nouns, and the functions are all verbs. Using that, you can implement the code with the same vocabulary that you might verbally describe to a friend or another programmer. The closer that implementation is to descriptive paragraphs of what it does, the more you will be able to verify its behaviour on sight. That doesn’t absolve you of testing, or even for some code, creating a real formal proof of correctness, but it does cut down a lot of the work later when catching bugs.
If you also put a hard split between the domain logic and the technical necessities, you can usually just jump right to the incorrect block of code. When someone describes the bug, you know immediately where it is located. Since diagnostics and debugging eat way more time than coding, any sort of practice to reduce friction for them will really help with scheduling and reducing stress. Code that you can fix effortlessly is worth far more than code that you can write quickly.
For me, I think we should return to that notion of stacking up data structures and objects in order to build up sophistication. The best code I’ve seen does this, and it has a crazy long shelf life. Its strength is that it encapsulates really well and makes reuse easy. It is also quite defensive, and it helps to zero in quickly on bugs. Realistically, it isn’t layering; it is a form of stacking, and the two should not be confused with each other. A layer is a line in the architecture that you should have; stacking is just depth in the call chain. If the stacking is really encapsulated, programmers don’t have to go down a rabbit hole to understand what is happening at the higher levels. Entangling it all together is worse.
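A small sketch of stacking, with illustrative names: a higher-level structure (an undo history) built entirely on top of a lower one (a stack), so callers of the top never reach into the bottom. The depth is in the call chain, but each level is fully encapsulated.

```python
# Lower structure: a plain stack with its own atomic primitives.
def stack_create():
    return []

def stack_push(s, v):
    s.append(v)

def stack_pop(s):
    return s.pop()

# Higher structure: an undo history stacked on top of the stack.
# Its callers never touch the stack primitives directly.
def history_create():
    return {"undo": stack_create()}

def history_record(h, action):
    stack_push(h["undo"], action)

def history_undo(h):
    return stack_pop(h["undo"])
```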
You can always get a sense of the code quality by quickly looking at the functions. Big, bloated functions with ambiguous, convoluted, or vague names are just nurseries for bugs. If you can skim the code and mostly know what it should do, then it is readable. If you have to pick over it line by line, it is a cognitive nightmare. If the function says DoX, and the code looks like it might actually do X, then it is pretty good.
Thursday, March 5, 2026
Stress
Being a software developer is difficult and stressful.
In the early days, there is an uncontrollable fear that you cannot build what you were asked to build.
The industry is awash with too many unknown unknowns, and few programmers receive adequate training. Newbies are often just pushed into the deep end with a brick tied to their ankle and expected to figure out how to swim.
Worse, the industry discourse is erratic. Some people claim one thing works correctly, while lots of people contradict them. Everyone argues, so there is usually never a consensus. It’s super trendy and plagued with myths and misunderstandings. Over the decades, this has gotten far worse. It’s a turbulent sea of opinions and amnesia.
At some point, if you survive long enough, you figure out how to build the things they ask you to build.
Well, almost. Each time around, the thing they ask for is larger and far more complicated. That seems to never end. Most programmers believe, as a result of this, that fundamentally everything you do is new, but oddly, most things you do will have been done before by hundreds, if not millions, of other people. Real greenfield software is a rarity, even if it’s a newly evolving domain. The basics that underlie the development have been around for a very long time, and haven’t really changed all that much over the decades, even if the dependencies and stacks are different.
After you’ve sort of managed to get your feet solidly on the ground -- for instance, you can build most common types of applications -- your problems get worse.
It is inevitable that as you outgrow the work of coding, you find yourself entangled in all sorts of other industry issues, like management, planning, usability, architecture, design, domain knowledge, etc. Once you are no longer an inexpensive kid, you find that you need to dip your toes into these other issues in order to justify your higher salary. You probably don’t want to, which is why you focused on coding instead. Still, you quickly learn that the more you bring to the table, the more people will be willing to put up with your demands.
That is when software development gets really hard.
The more knowledge you acquire, and the more experiences you survive, the more likely you will find yourself in a situation where you can anticipate a big problem, know absolutely how to avoid it, but are not taken seriously enough to be allowed to do that. So, you’ve shed the creation stress only to be pummeled by tactical or strategic stress. You’re expected to code, but are not supposed to control things. “They” just want you to be a cog in their machine. That is often the low point.
That is the time that you have endless discussions with people about how too little time will grind the quality far below usable, or how throwing in extra bodies will only slow things down, not speed them up. That’s when someone recommends a technology that you know is completely unreliable, or they push a change that is inherently destructive, even if it seems to work on their machine. You end up sitting through meetings about design, where the most popular options are awful and wasteful, and practices that you know will work have been deemed to be too old school.
The biggest skill you end up learning is to pick your battles carefully. Maybe the code is too messy, but the interface is better suited to what the users actually need. Maybe you rush through a throw-away feature in order to get enough time to do some mission-critical core work. As you get more and more experience, you find yourself higher up in the ranks, but if your fingers are still in the code, it is hard to be taken seriously. That irony, where the work is mostly controlled by people who are the most clueless about the nature of the work, starts to haunt you.
Some people give in at this point; others push through the pain. If you push through, you find yourself staring at yet another development effort that is one tiny step away from being a death march, and there is a huge wind trying to push it off the cliff. Sometimes you just have to shrug it off and walk away.
That’s kinda when you change. At first, you thought the priorities were technical engineering. That the code should be as good as the code can get. Then you switched to understanding that helping users through their problems is more important, even if the code gets dinged because of it. Now, though, you wake up and realize that building stuff is stupidly expensive, and what really matters is managing all of those strings tied to the money that you need to continue.
If you’re meta-physical, you’ve moved from being concerned about the code to being concerned about the data and the code. Then you were concerned about the users and whether they were happy or not. Then you’re concerned about the development shop itself. Is it functioning properly?
You get to a point where you're no longer trying to build software; now you are trying to build organizations that can build good software.
And if you wander past that, then you are concerned about creating organizations that can collect enough funds to be able to set up a development shop that can create software worth using.
Basically, the horizons of what you are trying to build just keep expanding farther and farther afield.
The irony is that the stresses of the early days are looking somewhat pedestrian at that point. You miss just being obsessed with creating good code; it all seemed so much more innocent in those days.
For most developers, the pattern is that they start out stressed, and as they conquer those stresses, the stresses are replaced by even bigger, less manageable ones. Just interacting with a computer was fun; interacting with people, politics, strings, and agendas is not. But if you want to keep on building bigger and more sophisticated things, you have to keep getting broader in your focus.
On the other side of the coin, if you picked a place where eventually you get to a point where there is little or no stress, then you’ll start to get stressed by the upcoming fate of your own obsolescence. That is, any path to avoid the stress will lead you to the stress of being too expensive, too far behind, or easily replaceable.
Stress, it seems, in programming, is unavoidable. At best, you can try to pick the types you are willing to put up with.
Thursday, February 26, 2026
When the Bubble Bursts
I’ve been deep into software since the mid-eighties, obsessively following the industry while I slog through its muddy trenches.
The benefit of having survived so long is that you get the repeated pleasure of seeing the next annoying hype cycle explode.
The pattern is always the same. Something almost newish comes along. It’s okay, but not that big of a deal. Still, it gets exposed to way more people than before. That fuels the adrenaline, which twists into a hype machine detached from reality. As it grows, its growth adds more fuel, until it has been so watered down that it is far beyond irrational. Eventually reality hits, and it goes *pop*.
AI, which started in the sixties, almost hit that point in the eighties. But now it’s returned with a vengeance, this time reaching stratospheric heights and causing untold damage to the world.
To be clear, it is cute. LLMs will survive, and eventually be relegated to the same bucket as full-text search or command line completion. Something that is useful for some people, but not significant and definitely not monetizable. A throwaway feature used by a few people, but not vital.
Not good enough to make profits and definitely not good enough to replace employees. If the world were sane, we would have barely noticed it and just shoved it into the ‘not worth the resources it consumes’ category.
But that’s not what happened. Instead, some tech bros are making suicidal bets on profits, while executroids foolishly believe it will liberate them from payroll woes. Neither will happen, but a lot of people will burn because of these delusions. Again.
The Web was similar. Yes, it survived the dotcom bomb, and gradually ate the world, but the initial gold rush turned out to mostly mine huge chunks of pyrite.
Technology takes a long time to mature. If you rely on it too early, it will bite you. Nothing ever changes that. Not well-written books, management theories, nor aggressive marketing. Immature technology might be fun to play with, but it is not yet industrial strength. It will collapse under any sort of weight.
LLMs play a clever trick: finding paths of tokens through a huge tensor space. That’s all they do. Nothing else. If you anthropomorphize those paths as being anything other than a random ant trail through intertwined data, you are being fooled. Sure, it looks pretty good sometimes. But “sometimes” isn’t even close to good enough.
You wouldn’t replace your employees with Furbies; LLMs are only marginally better. They are no threat to intelligence, even if they have triggered a notable absence of it.
But that isn’t even the real problem.
The technology sets resources on fire. It is an all-consuming flame of computation. So stupidly expensive that even our fabulous modern hardware can barely keep up. So stupidly expensive that its value is not even close to its costs.
Someday in the future, when our computers are thousands of times more powerful than today and have finally been optimized to use minimal electricity, that value may be there. But not today. Not next week, next year, and probably not for at least a decade.
Nothing short of scientific simulations or extreme mathematics eats that amount of compute. Burning that much on a massive scale isn’t viable, and whatever value comes out is clearly not worth it. There are no profits to be made here, at this point in time.
As an added benefit, the technology obliterates security and opens the door for outlandish surveillance. Since it is too expensive and too flaky to run locally, people have leaped in to help. You’re literally sending all of your IP and process knowledge to these unvetted third parties in the hopes they won’t betray you.
What’s consistent about the 21st Century is that eventually that information will become valuable enough for them to seek profits. And there is absolutely nothing out there to stop them. So, as we have seen over and over again, they’ll go whole hog into monetizing your secrets. Their impending financial crisis will be so large that they won’t even have a choice. There will be data buffets springing up on every corner, hawking your appetizers.
I’m old enough that I don’t even need to predict the burst. It will happen, it always does. And someday in the future, at most interactive text bars, you’ll be able to get stale gobbledygook generated locally from a decrepit model that hasn’t been retrained for years. It won’t be as good as now, but it won’t be that much worse either.
As for programmers and the panic setting into the industry, don’t worry. You get paid to know things; code is just what you do with that knowledge. You won’t be replaced by a mechanical procedure that doesn’t actually understand anything. Bounce that noise between a thousand models, and it will still fail eventually. And when it does, unless it’s been constantly retrained on its own slop, it will be clueless and unable to save the day. Sooner or later, management will wake up to the fact that they are exfiltrating their own information in an epic breach and put a stop to it. Even if some of that generated code is nearly usable today, when the resource excesses stop, the quality will plummet past hopelessness. Any development that isn’t entirely local is far too dangerous to be allowed to continue. This too shall pass.
Thursday, February 19, 2026
Data Collection
One of the strongest abilities of any software is data collection. Computers are stupid, but they can remember things that are useful.
It’s not enough to just have some widgets display it on a screen. To collect data means that it has to be persisted for the long term. The data survives the program being run and rerun, over and over again.
But it’s more than that. If you collect data that you don’t need, it is a waste of resources. If you don’t collect the data that you need, it is a bug. If you keep multiple copies of the same data, it is a mistake. The software is most useful when it always collects just what it needs to collect.
And it matters how you represent that data. Each individual piece of data needs to be properly decomposed. That is, if it is two different pieces of information, it needs to be collected into two separate data fields. You don’t want to over-decompose, and you don’t want to clump a bunch of things together.
Decomposition is key because it allows the data to be properly typed. You don’t want to collect an integer as a string; it could be misinterpreted. You don’t want a bunch of fields clumped together as unstructured text. Data in the wrong format opens the door to it being misinterpreted, causing bugs. You don’t want mystery data; each datum should have a self-describing label that is unambiguous. If you collect data that you cannot interpret correctly, then you have not collected that information.
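As a minimal sketch of that idea (the names here are hypothetical, not from this post), the difference between a clump of unstructured text and decomposed, typed, self-describing fields might look like:

```python
from dataclasses import dataclass

# Clumped: two different pieces of information hiding in one untyped string.
raw_contact = "Jane Doe, 1987"

# Decomposed: each datum gets its own properly typed, self-describing field.
@dataclass
class Contact:
    name: str        # textual data, unambiguously labelled
    birth_year: int  # an integer collected as an integer, not as a string

# Parse the clump once, at collection time, to recover the structure.
name_part, year_part = raw_contact.split(", ")
contact = Contact(name=name_part, birth_year=int(year_part))
```

Once the year is an `int` rather than a substring, it can no longer be misinterpreted downstream, and the field names carry the meaning that the raw string left ambiguous.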
If you have the data format correct, then you can discard invalid junk as you are collecting it. Filling a database with junk is collecting data you don’t need, and if you did that instead of getting the data you did need, it is also a bug.
Datums are never independent. You need to collect data, and that data has a structure that binds all of the underlying datums together correctly. If you downgrade that structure, you have lost information about it. If you put the data into a broader structure, you have opened up the possibility of it getting filled with junk data. For example, if the relationship between the data is a hierarchical tree, then the data needs to be collected as a tree; neither a list nor a graph is a valid collection.
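A small sketch of that last point, using a made-up org chart: collecting hierarchical data as an actual tree preserves the parent/child relationships that a flat list throws away.

```python
# Hypothetical Node class; collecting the hierarchy as a tree keeps structure.
class Node:
    def __init__(self, label):
        self.label = label
        self.children = []

    def add_child(self, label):
        child = Node(label)
        self.children.append(child)
        return child

root = Node("company")
eng = root.add_child("engineering")
eng.add_child("backend")

# A flat list of the same labels has lost the relationships entirely:
flat = ["company", "engineering", "backend"]  # who belongs under whom?
```

The tree can always be flattened later for display, but a list can never give back the structure that was discarded at collection time.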
In most software, most of the data is intertwined with other values. If you started with one specific piece of data, you should be able to quickly navigate to any of the others. That means that you have collected all of the structures and interconnections properly, and you have not lost any of them. There should only be one way to navigate, or you have collected redundant connections.
As such, if you have collected all of the data you need, then you can validate it. There won’t be data that is missing, there won’t be data that is junk. You can write simple validations that will ensure that the software is working properly, as expected. If the validations are difficult, then there is a problem with the data collection.
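A sketch of what those simple validations could look like (the field names are invented for illustration): when the collection is right, the checks are short and mechanical.

```python
def validate(records):
    """Check for missing fields, junk values, and duplicate keys."""
    errors = []
    seen_ids = set()
    for r in records:
        if not r.get("name"):
            errors.append(f"record {r.get('id')}: missing name")
        if not isinstance(r.get("id"), int):
            errors.append(f"junk id: {r.get('id')!r}")
        elif r["id"] in seen_ids:
            errors.append(f"duplicate id: {r['id']}")
        else:
            seen_ids.add(r["id"])
    return errors

good = [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}]
bad = [{"id": 1, "name": ""}, {"id": 1, "name": "b"}]
```

If checks like these turn out to be hard to write, that difficulty is itself the signal that the data collection is wrong.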
If you collect all of the data you need for the software correctly, then writing the code on top of it is way simpler and far easier to properly structure. The core software gets the data from persistence, then passes it out to some form of display. It may come back with some edits, which need to be updated in the persistence. There may be some data that you did not collect, but the data you did collect is enough to derive it from a computation. There may be tricky technical issues that are necessary to support scaling, but those are independent of the collection and flow of data.
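That core flow can be sketched in a few lines; the dictionary and functions below are stand-ins for a real persistence layer and display, not an actual implementation.

```python
import math

store = {"radius": 2.0}  # stand-in for persisted data

def display(data):
    # Derived data: the area was never collected, only computed from
    # the radius that was.
    area = math.pi * data["radius"] ** 2
    return f"radius={data['radius']}, area={area:.2f}"

def apply_edit(data, field, value):
    # Edits coming back from the display flow into persistence.
    data[field] = value
    return data

before = display(store)
apply_edit(store, "radius", 3.0)
after = display(store)
```

Note that the area is recomputed every time rather than stored, which keeps the collection free of the redundant copies warned about above.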
Collecting data is the foundation of almost all software. If you get it right, you will be able to grow the software to gradually cover larger parts of the problem domain. If you make a mess out of it, the code will get really ugly, and the software will be unreliable.