I’ve been deep into software since the mid-eighties, obsessively following the industry while I slog through its muddy trenches.
The benefit of having survived so long is that you get the repeated pleasure of seeing the next annoying hype cycle explode.
The pattern is always the same. Something almost newish comes along. It’s okay, but not that big of a deal. Still, it gets exposed to way more people than before. That fuels the adrenaline, which twists into a hype machine detached from reality. As it grows, that growth adds more fuel, until the idea has been so watered down that the hype is far beyond irrational. Eventually reality hits, and it goes *pop*.
AI, which started in the sixties, almost hit that point in the eighties. But now it’s returned with a vengeance, this time reaching stratospheric heights and causing untold damage to the world.
To be clear, it is cute. LLMs will survive, and eventually be relegated to the same bucket as full-text search or command line completion. Something that is useful for some people, but not significant and definitely not monetizable. A throwaway feature used by a few people, but not vital.
Not good enough to make profits and definitely not good enough to replace employees. If the world were sane, we would have barely noticed it and just shoved it into the ‘not worth the resources it consumes’ category.
But that’s not what happened. Instead, some tech bros are making suicidal bets on profits, while executroids foolishly believe it will liberate them from payroll woes. Neither will happen, but a lot of people will burn because of these delusions. Again.
The Web was similar. Yes, it survived the dotcom bomb, and gradually ate the world, but the initial gold rush turned out to mostly mine huge chunks of pyrite.
Technology takes a long time to mature. If you rely on it too early, it will bite you. Nothing ever changes that. Not well-written books, management theories, nor aggressive marketing. Immature technology might be fun to play with, but it is not yet industrial strength. It will collapse under any sort of weight.
LLMs play a clever trick: finding paths of tokens through a huge tensor space. That’s all they do. Nothing else. If you anthropomorphize those paths as being anything other than a random ant trail through intertwined data, you are being fooled. Sure, it looks pretty good sometimes. But “sometimes” isn’t even close to good enough.
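If that sounds abstract, here is a minimal sketch of what “finding a path of tokens” amounts to in practice: an autoregressive loop that repeatedly asks a model for a probability distribution over the next token and picks one. The `model` and its `next_token_probs` method here are hypothetical stand-ins, not any vendor’s API; the point is only that the mechanism is one token after another, nothing more.

```python
import random

def generate(model, prompt_tokens, max_new_tokens=20):
    """Trace one 'ant trail': repeatedly sample the next token from the model."""
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        # One forward pass through the tensors yields a distribution over the vocabulary.
        probs = model.next_token_probs(tokens)
        # Pick the next step of the path. Temperature or top-k tweaks change which
        # trail you get, but the mechanism is still just choosing a token at a time.
        next_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
        tokens.append(next_token)
    return tokens
```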
You wouldn’t replace your employees with Furbies; LLMs are only marginally better. They are no threat to intelligence, even if they do seem to have triggered a lack of it.
But that isn’t even the real problem.
The technology sets resources on fire. It is an all-consuming flame of computation. So stupidly expensive that even our fabulous modern hardware can barely keep up. So stupidly expensive that its value is not even close to its costs.
Someday in the future, when our computers are thousands of times more powerful than today and have finally been optimized to use minimal electricity, that value may be there. But not today. Not next week, next year, and probably not for at least a decade.
Nothing short of scientific simulation or extreme mathematics consumes resources on that scale. Burning that much on a massive scale isn’t viable, and whatever value comes out of it clearly isn’t worth it. There are no profits to be made here, at this point in time.
As an added benefit, the technology obliterates security and opens the door for outlandish surveillance. Since it is too expensive and too flaky to run locally, outside vendors have leaped in to help. You’re literally sending all of your IP and process knowledge to these unvetted third parties in the hopes that they won’t betray you.
What’s consistent about the 21st Century is that eventually that information will become valuable enough for them to seek profits. And there is absolutely nothing out there to stop them. So, as we have seen over and over again, they’ll go whole hog into monetizing your secrets. Their impending financial crisis will be so large that they won’t even have a choice. There will be data buffets springing up on every corner, hawking your appetizers.
I’m old enough that I don’t even need to predict the burst. It will happen; it always does. And someday in the future, in most interactive text boxes, you’ll be able to get stale gobbledygook generated locally from a decrepit model that hasn’t been retrained in years. It won’t be as good as now, but it won’t be that much worse either.
As for programmers and the panic setting into the industry, don’t worry. You get paid to know things; code is just what you do with that knowledge. You won’t be replaced by a mechanical procedure that doesn’t actually understand anything. Bounce that noise between a thousand models, and it will still fail eventually. And when it does, a model that has been endlessly retrained on its own slop will be clueless and unable to save the day. Sooner or later, management will wake up to the fact that they are exfiltrating their own information in an epic breach and put a stop to it. If some of that generated code is nearly usable today, then when the resource excesses stop, the quality will plummet past hopelessness. Any development that isn’t entirely local is far too dangerous to be allowed to continue. This too shall pass.