History dictates today’s computer architecture. That’s the normal progression for knowledge and technology, but often there comes a time when we can and should take another look at the foundations. Some things don’t ‘need’ to exist the way they do; they just exist that way.
Memory and disk split because of practical hardware considerations: the faster RAM was considerably more expensive than the larger but slower hard drives. This duality meant a lot of work moving data and code back and forth across the divide. It also meant interesting race conditions when we tried to optimize the split with caching.
With today’s solid-state drives, and their ever-increasing speed, one starts to wonder whether this split is really necessary. What if there were only one location? What if you basically installed the software right into memory, and memory were huge (and completely addressable)? What then?
Well, for one, the issues with writing things out to persistent storage would just go away. Internally, as you created instances of data, they would just stay put until you got rid of them. The static nature of the database would give way to a fully dynamic representation.
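The closest everyday approximation of that feel might be something like Python’s standard shelve module, where assigning into a dict-like object is the persistence; in the imagined architecture, every ordinary assignment would behave this way, with no separate “save” step at all:

```python
import shelve

# A rough sketch of "data just stays put": entries in this dict-like
# object live on disk, so assignment and persistence are the same act.
with shelve.open("world") as heap:
    heap["count"] = heap.get("count", 0) + 1  # survives restarts
    print(heap["count"])
```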
Another interesting issue comes from the program counter (PC). Right now we pick up the code and toss it into memory, where one or more PCs march through it, building up an execution stack. What if, instead of the code going to the PCs, the PCs went to the code? That is, the whole nature of time-slicing and process/thread handling changes. PCs become wandering minstrels, dropping into particular executables and moving them forward a few paces. Any stack information is left with the code, and they all exist together, always in memory.
In that way, the user could redirect a number of different PCs towards different tasks, allowing them to bounce around as needed. It sounds eerily similar to what we have now, except that the whole thing is turned on its head.
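Here’s a toy Python sketch, just to make the idea concrete. The tasks are generators that live permanently “in memory” with their own suspended stacks, and a PC is whatever visits them and nudges them forward. None of the names mean anything beyond this sketch; it’s the shape of the idea, not a design:

```python
# Tasks live permanently in "memory" as generators: code together with
# its own suspended stack. A PC visits one, advances it a few paces,
# leaves the stack behind, and wanders on.

def task(name):
    step = 0
    while True:
        step += 1
        print(f"{name}: step {step}")
        yield                      # suspend; state stays with the task

memory = {"X": task("X"), "Y": task("Y"), "Z": task("Z")}

def pc(visits, paces=3):
    for name in visits:            # the PC goes to the code...
        gen = memory[name]
        for _ in range(paces):     # ...moves it forward a few paces...
            next(gen)              # ...and leaves everything in place

pc(["X", "Z", "X", "Y"])           # the user redirects the PC as needed
```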
One peculiar aspect of this is that programmers would no longer create threads. Instead, threads would essentially just spring into existence. No longer could one assume that the code doesn't need to be thread-safe, since at any time there could be multiple PCs working in the same area. This would likely shift atomicity from each individual instruction to blocks of instructions, a change that might require a fundamental paradigm shift in our programming languages.
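A rough sketch of what block-level atomicity might feel like, using a plain lock to stand in for whatever language construct would actually mark the block:

```python
import threading

balance = 0
block = threading.Lock()           # placeholder for an "atomic block" marker

def deposit(amount):
    global balance
    with block:                    # any PC wandering in here waits
        read = balance             # the whole read-modify-write is one
        balance = read + amount    # indivisible block, not three steps

# Simulate many PCs dropping into the same code at once.
threads = [threading.Thread(target=deposit, args=(1,)) for _ in range(100)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                     # 100, regardless of interleaving
```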
There would be issues with addressing, since disks these days are huge, but one could implement some form of segmented indexing, maintained in a master table somewhere. If PC1 goes to process X, it looks up the general location from the table, then gets specifics from process X. At that point it happily marches through the code until it’s told to go elsewhere.
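As a sketch, that two-level lookup might look something like this; the table layouts and addresses are invented purely for illustration:

```python
master_table = {                   # process -> general location
    "X": {"segment": 4, "base": 0x1000},
    "Y": {"segment": 7, "base": 0x8000},
}

process_offsets = {                # fine-grained detail kept by each process
    "X": {"main": 0x0010, "handler": 0x0200},
}

def resolve(process, entry):
    general = master_table[process]             # step 1: the master table
    offset = process_offsets[process][entry]    # step 2: ask the process itself
    return (general["segment"], general["base"] + offset)

print(resolve("X", "main"))        # (4, 4112): where PC1 starts marching
```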
I’m not entirely sure this would work, and no doubt there are dozens of corner cases I haven’t considered, but it would a) make coding a lot easier and b) change the nature of exceptions. It would be interesting to try, and if it worked I think it would be revolutionary.