A while ago I was discussing some software architectural options with a fellow developer and we ran into a little hitch in our conversation. Put quite simply, the word 'dynamic' has a couple of possible meanings. He was using it one way, and I was off on a completely different definition.
For me, 'dynamic' has always literally meant the opposite of 'static', and 'static' -- in the context of coding -- means the precise, fixed specification of something specific, be it code or data.
Static variables, static data, static data-types, static code, etc. all imply that there are no inherent degrees of freedom within any of the code or data to do anything other than its basic function. It is what it is.
As we code, we can replace instructions with all sorts of variables, and with each new variability the code gets a little more dynamic. Things that are well-defined are static; if they can vary to any degree, they are dynamic.
More interestingly, if we allow the data-types of the variables themselves to go unspecified, the code gets even more dynamic.
There are of course techniques to do this with the code itself, like polymorphism or function pointers, so the code can be dynamic in this sense as well. In a similar fashion, as the code runs, the underlying objects and methods can remain unspecified until they are needed. Then they are bound to some concrete implementation. Exactly the same as above.
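A minimal sketch of this kind of late binding, in Python (the names and operations here are my own invention, purely for illustration):

```python
# Hypothetical sketch: the concrete operation is not fixed in the source;
# it is bound at runtime, much like a function pointer or a virtual method.
def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

# This mapping from name to behavior is the "degree of freedom".
operations = {"add": add, "multiply": multiply}

def apply_operation(name, a, b):
    # Until this lookup, the code is dynamic: any registered operation could run.
    op = operations[name]   # binding collapses the freedom
    return op(a, b)         # now the behavior is concrete

print(apply_operation("add", 2, 3))       # 5
print(apply_operation("multiply", 2, 3))  # 6
```

Until `apply_operation` is called, nothing in the code pins down which behavior will execute; the binding happens only at the moment it is needed.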
Most interpreters even allow for code to be created on the fly, another way of making it dynamic, although the construction techniques are generally bound to static permutations (the generated code is limited in breadth by the generating code).
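The point about generated code being bounded by its generator can be sketched as well (again a made-up example): the generator below can emit endlessly many functions, but only within the fixed pattern its author wrote.

```python
# Hypothetical sketch: code built on the fly, yet limited in breadth by
# the generating code -- every generated function fits one fixed template.
def make_power_function(n):
    # Construct source text at runtime and compile it into a function.
    source = f"def power(x):\n    return x ** {n}\n"
    namespace = {}
    exec(source, namespace)
    return namespace["power"]

square = make_power_function(2)
cube = make_power_function(3)
print(square(4))  # 16
print(cube(2))    # 8
```

However many functions we generate, they are all static permutations of the one template; the generator cannot produce anything its author didn't anticipate.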
In a sense my use of dynamic means stuff is generalized, but only in its behavior relative to a set of fixed actions or the data. The more dynamic a routine, the greater the variety of actions and data it can handle.
Of course, software is always grounded in practical things, which means that at some point in the running life of the code, all of those degrees of freedom must collapse down to something specific. The code must actually do something; the data must actually be something.
The variable nature of the data must go from being abstract to concrete. Specific lines of code must be executed. The whole thing must actualize. There must be static data and data-types moving through the low levels of the code at all times, or else the code can't eventually make heads or tails of its data.
You can pass around some generalized data structure, but to do something specific with it, you need to know exactly what the structure is. If it's too general it's useless. Thus we can dynamically pass around pointers to things, but we have to de-reference them to get the actual data back.
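A small illustrative sketch of that collapse (the function and messages are invented for the example): a routine can accept maximally general data, but to do anything specific it must pin the data down to a concrete type at the last moment.

```python
# Hedged sketch: the parameter is as general as possible (dynamic),
# but real work only happens once the abstract "some value" collapses
# to a concrete, known type.
def describe(value):
    if isinstance(value, int):
        return f"an integer: {value}"
    if isinstance(value, str):
        return f"a string of length {len(value)}"
    # Too general: without knowing the concrete structure, nothing
    # specific can be done with it.
    return "too general to do anything specific with"

print(describe(42))
print(describe("hello"))
print(describe([1, 2]))
```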
So, I tend to think of dynamic as just introducing more and more degrees of freedom into a body of code and data.
Ultimately, to really do something, everything that is dynamic must become static in the end. What binds these degrees of freedom to their concrete counterparts could be code-driven, come from config files, or even be passed in as some type of syntax.
Every computer system falls back on the same basic principles: a) the total amount of code is finite (or generated in a finite manner) and b) the total number of different data types is also finite (although some types can be very general). These two points become very important later in this discussion.
My developer friend however has a slightly different definition of dynamic. He sees it as the ability of something to extend itself. In his sense a dynamic protocol, for instance, would be one that could grow and get larger, adding new things as it goes. In my definition of dynamic, a protocol would be one where fewer things were initially bound to static values. It could range over a wide variety of outputs, but it couldn't actually grow.
I understand his definition, but I tend to think of this property of something really growing itself as being far beyond just being dynamic. In a sense, I think that what he is getting at is a far larger attribute that needs its own term, something more closely connected with extending a formal system, not just opening up the degrees of freedom.
Our discussion was interesting because lately I've been reading 'Gödel, Escher, Bach: An Eternal Golden Braid' by Douglas Hofstadter. It's an awesome book, written almost thirty years ago, that has been on my must-read list forever. I've delayed reading it over the years partially because it overlaps considerably with what I studied in university in the combinatorics and optimization branch of mathematics, and partially because I knew it would be a very hard slog, although well worth it.
The book really focuses on strange loops with respect to formal systems in mathematics. By that I mean particular corner cases in math such as recursion, self-description, stacks, infinity and other interesting but non-intuitive concepts. It ties these ideas back to Escher's work in art, and Bach's work in music, as a way of mapping such difficult abstract topics back onto very concrete things in reality.
He centers his ideas around formal systems: well-defined ways of specifying the rules of a system through a set of symbols, axioms, and rules of production. Basically a rigorous set of rules that, when applied correctly, can generate anything and everything within the system. Arithmetic is a simple example. Logic is probably the best known one (thanks to Spock). Various branches of mathematics are larger and more complex formal systems.
While I've seen these ideas in various courses in my university days, it has been decades since they've held a place so active in my thoughts.
In another more recent conversation with a friend -- over beers -- I found myself heading back over some of Hofstadter's key concepts. Most notably that Gödel and Turing showed that all formal systems were finite.
Not only were they finite, but also there is always at least one axiom that should (could) be in the formal system, but is not actually contained there. A valid axiom that falls out of bounds. Thus formal systems are incomplete by definition.
Another consequence of these ideas is that the overall space of all formal systems is infinitely large. It is effectively boundless. Always larger than the largest formal system.
While these ideas are simple in concept, they are hard for humans to accept since for centuries we've been categorizing the world as one large deterministic system.
The core of our beliefs is to accept that one day science will have all of the answers, but the core of Gödel's work was to show that mathematics itself is a formal system, that it is incomplete, and that it always will be. Our sciences are all founded on mathematics to one degree or another. So no matter how much we know, there will always be things we could know that are beyond our knowledge. It never ends.
Oddly, I recently saw an excellent video called Dangerous Knowledge, based on the lives of Georg Cantor, Ludwig Boltzmann, Kurt Gödel and Alan Turing, which focused on how all of their lives ended tragically, trying in many ways to blame it on their obsessions with the very kinds of strange loops that Hofstadter likes so much.
And even more recently I was having an interesting discussion with another blogger Mark Essel at his site called Victus Spiritus. That conversation started with the semantic web, but gradually drifted over to some concepts from AI.
I was finding it more than curious how all of these different references to the same underlying bodies of knowledge were crisscrossing each other just in time for me to be working on my next post.
Getting back to the beers, as they loosened my tongue, I found myself talking about how formal systems are always finite, and how computers themselves are intrinsically formal systems.
In that sense, as I have said before, the only intelligence in a computer program comes directly from someone putting it there.
Mapping that concept back to a formal system, we see that a formal system is composed of the base axioms that define it, and nothing else. The size of a formal system depends on the number of base axioms in it, although the axioms themselves might lead to the production of other valid axioms in the system. These derived axioms are entirely limited by the initial ones (like generated code). Thus the system is only as large, and only as complex, as someone makes it. What really feeds a formal system lies outside of it: the intelligence of its creator.
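Hofstadter's own MIU system from GEB makes this concrete: one base axiom ("MI") and four production rules. A small sketch of it in Python (my own implementation, trimmed for illustration) shows that every derived axiom comes purely from mechanically applying the rules; nothing new ever enters from outside.

```python
# Hofstadter's MIU system: axiom "MI", four string-rewriting rules.
def successors(s):
    out = set()
    if s.endswith("I"):                  # rule 1: xI  -> xIU
        out.add(s + "U")
    if s.startswith("M"):                # rule 2: Mx  -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):          # rule 3: III -> U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):          # rule 4: UU  -> (deleted)
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

theorems = {"MI"}                        # the single base axiom
for _ in range(3):                       # three rounds of derivation
    theorems |= {t for s in theorems for t in successors(s) if len(t) <= 10}

print(sorted(theorems))
```

However many rounds we run, the set of theorems is entirely determined by the one axiom and four rules; famously, "MU" is never derivable, no matter how long the system runs.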
Leaping back and forth between "programs" and "formal systems" we can see that the code and knowledge we program into the computer are just axioms in a formal system. That isomorphism bothers some people, but it is far too convenient to pass up or ignore. Our programs are only as smart as we make them.
Gödel's ideas however have some interesting consequences. One of which is that humans do not appear to be limited by formal systems. We seem quite able to re-write our axioms over longer periods -- changing as we need to -- so any simple model like logic is just too restrictive to account for our current level of sophistication (Vulcans aren't possible either, I guess).
That brings up another interesting bit: I was reading Scientific American and they had a good article on the two halves of the brain. They were suggesting that the two halves work differently in many species besides man. Mammals and other creatures show split-brain behaviors.
They hypothesize that one half of the brain handles the normal day to day living issues. We can see it as a formal system that contains all of the axioms on how to spend our days and nights.
The other half of the brain is more interesting. They implied that it performs some form of error handling: the part of the brain that handles special circumstances, strange conditions, etc.
As we work through the day, both halves of our brains are searching for answers, both working on the identical problems at the same time. However, one half or the other finishes first, and we respond to that result. It's a heuristic with a constant race condition between throwing an error and computing the results.
Although I suspect it may be too simple, I really like this model of the way a brain functions. It means, in a sense, that we are always thinking about what we are doing in two different ways. Assuming the day-to-day stuff works, it finishes first, so we do that and move on. If it doesn't finish first, we react with the other half of our brain, the error half. It seems to explain why people get funny under unusual circumstances. Perhaps it explains why things like beer liberate our behavioral patterns. It certainly is an interesting concept, isn't it?
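Purely as an illustration of the model (not neuroscience, and with the race collapsed into a simple first-answer-wins check), the idea can be sketched like this; the situations and responses are invented for the example:

```python
# Toy model of the race: two "halves" attack the same problem, and
# whichever produces an answer first drives the response.
def routine_half(situation):
    # The formal, day-to-day system: fast for familiar situations,
    # but has no answer at all for unfamiliar ones.
    known = {"thirsty": "get water", "tired": "sleep"}
    return known.get(situation)  # None if unfamiliar

def error_half(situation):
    # The error-handling half: fires when the routine half fails.
    return f"improvise a response to '{situation}'"

def respond(situation):
    answer = routine_half(situation)
    if answer is not None:        # routine half "finished first"
        return answer
    return error_half(situation)  # unusual circumstance: error half wins

print(respond("thirsty"))   # get water
print(respond("on fire"))   # improvise a response to 'on fire'
```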
But if we're trying to tie this back to a formal system concept we need to go deeper. Two formal systems are really just one larger formal system, and they are still both finite and incomplete. It wouldn't change us as people to have two halves of the brain or just one, unless something was different between them.
Beer and a bit of tobacco (I quit, but occasionally I misbehave) got me thinking about the opposite of a formal system. After all, formal systems sit in an infinite space. No one system can occupy it all, and it goes on forever. There are an infinite number of formal systems, and even the sum of all of them does not fill the space.
What if we had the opposite of a formal system? An anti-formal system. If it's defined as a set of rules which are not in the system and the whole thing is unbounded and infinitely large then it would be exactly the theoretical opposite of a formal system. I vaguely recall Hofstadter talking about how such a thing isn't well defined, but I sense that his meaning was a little different than mine.
If one half of our brains were a set of axioms about what we should do, then the other half might be a set of axioms about what we should not do. I.e. we should not hang around, we should not get that glass of water, we should not go to the store. Really it's this negative space of axioms: everything that doesn't fit into the formal system. A system that is infinitely large.
That's a neat idea because we know that the only way a formal system can grow is if we (as intellectual beings) grow it to be larger. But two formal systems are just one giant one, so they can't really grow on their own (derived axioms don't count).
This gets around that: if a human is a combination of a formal system in one half of the brain and an anti-formal one in the other half, suddenly we have a model (albeit a peculiar one) where one half of the brain can add rules to the other half, but is itself not bounded (though it's also not enough to drive us to do things). In some funky way, that secondary half could be seen as the subconscious, while the formal half is the conscious mind.
OK, it's a bit weird, but in that way we have a solid (and nearly deterministic) perspective on people that explains why we're not bound by formal systems. Of course "intelligence" sits on top of all of this, since according to the article other mammals have the split brain as well, yet they are not (nearly) as intelligent as we are (although I have occasionally been outwitted by my dog, so who really knows).
In thinking about this I had another idea as well. Although computers are bounded by finite formal systems, and a near-infinite capacity is a necessity for intelligent behavior, we don't necessarily need to break that restriction in order to have something close enough.
If we build a system that makes it nearly trivial to add new intelligent axioms -- although the system would have to go outside of itself (to a human) to get them -- it could possibly grow fast enough to more or less appear infinite. I.e. a rapidly expanding finite system converges in the long run toward infinity (although it never fully gets there).
Get enough people adding enough data to some finite base, and to most people it seems to go on forever; Wikipedia is an example of that.
We might not be able to get to AI, but at the very least we might be able to create formal systems that for all intents and purposes are completely and totally indistinguishable from AI.
A NEARLY DETERMINISTIC UNIVERSE
Just because it walks like AI, and talks like AI, doesn't mean it really has to be AI.
In many ways this is the lesson that Hofstadter, Cantor, Boltzmann, Gödel and Turing have left us with: the universe is not as simple and deterministic as we think, but it is not entirely incomprehensible either. Once we grok the strange loops, we can get beyond trying to force the world around us into our narrow human perspective.
We try to orient things into our simplest viewpoint, but in doing so we often conveniently leave out the really awkward, but hugely important ugly bits. Like infinity in Cantor's time or the space of all formal systems being infinite in Gödel's and Turing's time.
Getting back to my friend's definition of dynamic: in my version, a dynamic system, however dynamic, is still a formal system. The dynamic nature is in how the axioms are specified, not in the overall size of the system itself.
My friend's definition, however, meant that any dynamic system in his sense was not a formal system, but one that was larger and complete (and possibly not bounded). Theoretically I'm not sure where that stands, but with respect to computers it is certainly somewhat of an oxymoron. A dynamic system, in his sense of the word, is not a system that could or would fit into a computer. It is one where the system grows all on its own, under its own power. A fabulous thing if we could get one, but not something we're going to see any time soon.
A little farther down in the beer, I started talking about a little blog post I wrote for one of my lesser blogs called The Differences. Mostly the ideas in that blog are just jotted down to be forgotten in time, but one of them was particularly interesting. I had watched a video on Richard Feynman and it got me thinking about gravity.
Insanely I wrapped my ideas of the universe into a 5D version, but I figured that I had probably overlooked something obvious.
My friend, who has a bit of a mathematical background but like me is no physicist, and I started talking about some of these underlying ideas. At least in the beer haze, they didn't seem entirely offbeat.
They managed to pass the "let's talk about it" test, where one should really bounce one's weirder concepts off friends first. Given that, I figure someone out there can give a good explanation of why it isn't so. Or maybe take it further if it is possible (but don't forget to include me in your Nobel prize speech).