As the AI winter began thawing, I dipped in periodically to see how it was progressing.
The results were getting amazing. However, I tended to see these large language models as just dynamically crafting code to fit specific data. Sure, the code is immensely complex, and some of the behaviors are surprising, but I didn’t feel like the technology had transcended the physical limitations of hardware.
Computers are stupid; software can look smart, but it never is. The utility of software comes from how we interpret what the computer remembers.
A few weeks ago I was listening to Prof. Geoffrey Hinton talk about his AI concerns. He had survived the winter at one of our local universities, and I have stumbled across his work quite often.
You have to respect his knowledge; it is incredibly deep. But I was still dismissing his concerns. The output from these things is a mathematical game; it may appear intelligent, but it can’t be.
As his words sank deeper, I started thinking back to some of Douglas Hofstadter’s work. Gödel, Escher, Bach is a magnum opus, but I read some of his later writings where he delved into epiphenomena. I think it was I Am a Strange Loop where he made the argument that people live on in others’ memories.
I didn’t buy that as a valid argument. Interesting, sure, but not valid. Memories are static; what we know of intelligent life forms is that they are always dynamic. They can and do change. They adjust to the world around them; that is the essence of life. Still, I thought that the higher concept of epiphenomena itself was interesting.
All life, as far as I know, is cellular. Roger Penrose, in The Emperor's New Mind, tried to make the argument that the intelligence and consciousness on top of our bodies spring from exactly the sort of quantum effects that Einstein so hated. Long ago I toyed with the idea that that probabilistic undertone was spacetime, as an object, getting built. I never published that work; early readers dismissed it quite rapidly, but the sense that the future wasn’t written yet stayed with me. That it all somehow plays back into our self-determination and free will, as Penrose was suggesting. Again, another interesting perspective.
And the questions remained. If we are composed of tiny biological machines, how is it possible that we believe we are something else entirely on top of this? Maybe Hofstadter’s epiphenomena really are independent of their foundations? Are we entities in our own right, or are we just clumps of quadrillions of cells? A Short History of Nearly Everything by Bill Bryson muddles that notion even further.
Does it roll back to Kurt Gödel’s first incompleteness theorem, that there are things -- that are true -- that are entirely unreachable from the lower mechanics? I’ll call them emergent properties. They seem to spring out of nowhere, yet they are provably true.
If we searched, would we find some surprising formula that dictates the construction of a sequence of huge prime numbers, starting at a massive one and continuing across a giant range, yet, short of actually calculating it all out and examining it, we’d be totally unaware of the formula's existence? Nothing about the construction of the primes themselves would lead us to deduce this formula. It seems to be disconnected. Properties just seem to emerge.
Gödel did that proof for formal systems, which we are not, but we have become masters at expressing the informal relationships we see in our world with formal systems, so the linkages between the two are far tighter than we understand right now.
The argument that our sense of self is an extraordinarily complex epiphenomenon that springs to “life” on top of a somewhat less than formal biological system that is in the middle of writing itself out is super fascinating. It all sort of ties itself together.
And then it scared me. If Hinton is correct, then an AI answering questions through statistical tricks and dynamic code is just the type of complex foundation on which we could see something else emerge.
It may just be a few properties short of a serious problem right now. But possibly worse because humans tend to randomly toss things into it at a foolish rate. A boiling cauldron of trouble.
We might just be at that moment of singularity, and we might just stumble across the threshold accidentally. Some programmer somewhere thinks one little feature is cool, and that is just enough extra complexity for a dangerous new property to emerge, surprising everyone. Oops.
That a stupid computer can generate brand new text that is mostly correct and sounds nearly legitimate is astounding. While it is still derived from and bounded by a sea of input, I still don’t think it has crossed the line yet. But I am starting to suspect that it is too close for comfort now. That if I focused really hard on it, I could give it a shove to the other side, and what’s worse is that I am nowhere close to being the brightest bulb in the amusement park. What’s to keep someone of near genius from just waking up one night and setting it all off, blind to the negative consequences of their own success?
After the AI winter, I just assumed this latest sideshow was another fad that would fade away once everyone got bored. It will, unless it unleashes something else.
I did enjoy the trilogy Wake, Watch, Wonder by Robert J. Sawyer, but I suspect that the odds of a benevolent AI are pretty low. I'd say we have to slow way, way down, but that wouldn’t stop progress in the dark corners of the world.
If I had a suggestion, it would be to lean directly into opening Pandora's box, but to do it in a very contained way. A tightly sandboxed testnet, locked down fully. Extra fully. Nothing but sneakernet access. A billion-dollar, self-contained simulation of the Internet, with an instantaneous kill switch and an uncrossable physical moat between it and the real world. Only there would I feel comfortable deliberately trying out ideas to see whether we are now in trouble or not.