A continuous system is one where, no matter how small a section you examine, there is an infinite amount of further depth. A discrete one is built from indivisible atomic pieces that can be broken down no further.
Initially, as a by-product of how we naturally view the world, our perceptions were likely that the things around us were continuous. The Ancient Greeks were among the earliest to propose that everything was composed of an underlying discrete set of atoms. That view seemed to prevail, more or less, until the Industrial Revolution launched us into a deeper perspective, implicitly showing us that the world was continuous once again. But the rise of computer technology has been gradually reversing our perceptions back towards the discrete. We’ve been flip-flopping between these two alternate perspectives as our knowledge of the world around us changes and grows.
The change in perspective is not what I find most interesting. It’s the behavior of the boundaries between these conflicting views when they are layered that is most fascinating. That is, moving from a continuous system to an underlying discrete one or vice versa.
A great number of interesting details about the human body can be learned from Bill Bryson’s “A Short History of Nearly Everything”, a book I highly recommend because it is both entertaining and enlightening. By now we accept that humans are a large collection of trillions of co-operating cells. Bryson also points out that we are composed of a nearly equal number of bacteria and other symbiotic living organisms (which we need to function). He also notes that, from a matter perspective, we’re basically replacing ourselves continuously. New matter comes in, old matter goes out. New cells grow, old ones die. I think he speculates that we are completely re-composed of new matter approximately every seven years. If you’re interested, you can get the full details from his book.
With that in mind, from a perspective purely based on particles, we can view ourselves as a big collection -- a cloud -- of related matter. For a given human being -- we’ll pick a male and call him Bob -- as he interacts with the world on a daily basis, he is leaving behind some of this cloud. That is, Bob’s discarded skin cells add to the dust in the room, his bacteria and fingerprints get left on everything he touches, and his moisture evaporates into the area around him. If Bryson’s speculation about seven years is correct, then over that period half of the particles that have made up Bob have been dissipated to wherever Bob has been. From a cloud of particles perspective, ‘Bob’ is the densest part, but the whole cloud covers a much wider area based on his travels.
If we could build a machine to detect ‘Bob’ particles over a period of time, then some really interesting things occur. For instance, in any average week, Bob goes back and forth between his house and work. The cloud would include both locations, but also, to a much sparser degree, the pathways in between. Setting the machine for a granularity of 7 days would show that Bob was both at home and at work. Depending on the mechanics of the detector, the bulk of Bob might appear in either location. From this viewpoint, we might conclude that Bob was in both states simultaneously, and that when actuated he would appear randomly in one or the other. We could compile a set of Bob particle-cloud states that he might be simultaneously inhabiting: work, home, traveling, cottage, etc.
One can easily suspect that Bob would disagree with this world view. He is, after all, either at work or at home. He knows where he is. Although he is a discrete entity, he likely sees himself as continuous. The detector, however, takes a discrete measurement over a continuous time period, which at 7 days is not in line with Bob’s continuous view of his discrete underpinning. Bob does not see himself as a cloud, and certainly not as a cloud over a long period of time.
If we set the detector’s time frame to 1ms, then the resulting Bob cloud would be roughly what we see and think of as ‘Bob’. Shortening it some more would produce gradually more refined versions of Bob, until we get down to some discrete time-block where our perception of Bob matches the detector’s. In this case, the detector’s information would align with Bob’s understanding of himself and his world. There would be no randomness with regard to Bob’s state.
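To make the granularity argument a little more concrete, here is a toy sketch (the schedule and numbers are purely invented): a completely deterministic weekly routine, sampled once with a 7-day window and once with a 1-hour window. The wide window smears Bob across both states; the narrow one always finds him unambiguously in one.

```c
/* Toy illustration of the detector argument: a deterministic schedule
 * (home at night and on weekends, work during weekday office hours)
 * sampled at two different granularities. */
#include <stdio.h>

/* Bob's location for a given hour of the week: 0 = home, 1 = work. */
static int location(int hour_of_week) {
    int hour_of_day = hour_of_week % 24;
    int day = hour_of_week / 24;
    if (day >= 5) return 0;                               /* weekend: home */
    return (hour_of_day >= 9 && hour_of_day < 17) ? 1 : 0;
}

static void report(int window_hours) {
    int home = 0, work = 0;
    for (int h = 0; h < window_hours; h++)
        location(h) ? work++ : home++;
    printf("window = %3d hours: %.0f%% home, %.0f%% work\n",
           window_hours,
           100.0 * home / window_hours,
           100.0 * work / window_hours);
}

int main(void) {
    report(168);   /* 7-day window: Bob appears smeared over both states */
    report(1);     /* 1-hour window: Bob is unambiguously in one state   */
    return 0;
}
```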
What this comes down to is that, as we flip back and forth between continuous and discrete perspectives, if our discrete boundaries are not perfectly aligned (even though they are not adjacent), then an element of randomness appears to creep into the mechanics. Since our continuous/discrete perceptions of the world around us are not explicit (we use both of them without being aware of it), we are not always aware of the effect this bias has on the way we model the world. Thus one might conclude that the randomness isn’t actually there, but that it is just an artifact of the boundary.
Software is a static list of instructions, which we are constantly changing.
Friday, May 27, 2011
Saturday, May 21, 2011
Focus
So, you’ve got a large team of talented programmers, a problem to solve and some people with an understanding of the underlying domain details. What now?
Any big job needs to get work done in order to be successful. Any type of constructive effort needs that completed work to be brought together into one large coherent piece. In this, software is no different than constructing buildings, bridges, machines, paintings etc. Raw man-power alone is not enough to be successful; the effort must converge on a final by-product, one that makes sense.
If you ask twenty different people their opinions, you’ll get twenty different answers. If you let twenty different artists produce work, you’ll get twenty different styles. Sometimes that is a good thing, but when, in the end, you’re looking for one consistent piece -- not twenty -- that can be a huge problem.
Even where there is little latitude, it’s hard to get a couple of people to precisely agree on the same points. Sometimes there are individuals who sync up and get on the same page, but that is a rare occurrence. We are individuals and, to some degree, we are all unique. It’s precisely because of this fundamental aspect of human behavior that, if there is a desire for consistency, the flow needs to be focused by a single individual. This is true in leadership, design, art and anywhere else that rests on making a series of choices. If it can’t be spelled out right down to the details, then any degrees of freedom involve trade-offs, and if there are enough of them, everyone will choose different options.
It’s easy to see examples of unfocused work in software. At the interface level, there may be a huge amount of functionality, but it is haphazardly scattered all over the interface. Without extensive guidance, navigation becomes a major obstacle to usage. Unfocused tools are ugly, frustrating and annoying to use, even if they work properly.
At the operational level, the system might be installed, but keeping it running or upgrading it to the next version is a nightmare. Magic, undecipherable incantations are required at unpredictable intervals in order to restore some semblance of a working state. Thus necessary maintenance gets hampered by fears of just making it worse. Rust settles in.
At the coding level, the only hope of fixing problems or extending the functionality in an unfocused work comes from just slogging through the details one-by-one. But because there is no boundary to how much could be affected, there is hesitation to start. As the problems get worse, the likelihood of finding someone with enough patience to deal with the problem decreases rapidly. Thus the problems get ignored as they grow and propagate throughout the system.
What a large team of talented programmers needs is someone to bring their work into focus. What a problem needs in order to get solved is for someone to bring the scope of the work into focus. What the domain experts need is for someone to help them focus on which aspects of the problem need to be addressed, and in which order. Without focus, the work will become a blurry disaster, gradually getting worse with each new attempt, not better.
While the necessity of focus is obvious, there are always those who feel that having some other person handle it limits their freedom. We are awash in attempts to distribute this control out to the masses to make the work more appealing, but while these ideas may be attractive to many people, the consequences of their usage should not be. Is it better to go wild on an unfocused project than to be constrained by a focused one? It might seem so in the heat of the moment, but contributing to a long string of unfocused failures eventually starts to eat at one’s own self-worth. What good is a masterpiece if it is buried in the mud? Some people work purely for money, but for anyone who cares about what they do, it’s the ultimate success of the project that matters most, not their freedoms. And to have any chance at success, the project needs focus. Trading freedom for the possibility of success is an acceptable cost.
And thus a project, particularly a software one, needs someone to bring it into focus. But this is no easy task and requires a vast amount of real-world experience to understand the subtleties and costs of the various trade-offs. It’s more than just knowing how to do the work, it’s also about knowing what to avoid, where the problems come from, how to fix them and how the work fits into the environment around it. Focus comes from practical experience. Lots and lots of it. But the software development culture has been particularly good at ignoring this, and as a consequence a massive number of the development efforts have died, or are limping along in barely usable states, gradually getting worse. It’s a lesson we seem destined not to learn.
Tuesday, May 17, 2011
Deep Refactoring II
In my last post, I laid out some very high-level ideas about refactoring our basic software architecture by removing the distinction between memory and disk. It was a very simplistic sketch of some of the underlying characteristics of this type of change.
A reader, Kemp, commented on several issues. The first was with regard to temporary memory usage and the ability to clear out short-term objects. The second was with respect to the program counters (PCs) and scheduling. I thought these were interesting points, worthy of a bit more exploration of the idea.
Please keep in mind that both of these posts are really just playing around with the basic concept, not laying out a full proposal. Ideas must start somewhere, in some simplified form, but it is often worthwhile to give them consideration, even if only to understand why they will or will not work appropriately. We tend to take the status quo too seriously. Just because things have evolved in a particular fashion doesn’t make them right, or other alternatives wrong. At some point big progress has to leapfrog over what already exists. The path to innovation is not linear, but often curvy and riddled with gaps and lumps.
Kemp’s first concern was with temporary data. A process has two basic sections of allocatable memory: the stack and the heap. The stack works by pushing a new frame onto the structure for each function call, and popping it when the call completes, thus allowing for recursion. Temporary variables are allocated within that chunk of memory. The heap, on the other hand, generally just grows as new blocks of memory are allocated. It is explicitly managed by the process (although languages like Java now hide it from the programmer). In many systems, the code and any static data are loaded roughly into the center of memory, while the stack and heap grow out in both directions.
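As a small illustration of the two regions (a minimal C sketch, not tied to any particular platform): the first function keeps its working data in the current stack frame, which vanishes on return, while the second hands back heap memory that persists until someone explicitly frees it.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Temporary data: lives in the current stack frame and is reclaimed
 * automatically when the function returns. */
static int sum_of_squares(int n) {
    int total = 0;                        /* stack-allocated */
    for (int i = 1; i <= n; i++)
        total += i * i;
    return total;
}

/* Long-term data: explicitly allocated on the heap and kept around
 * until the program decides to free it. */
static char *make_label(const char *name) {
    char *label = malloc(strlen(name) + 8);   /* heap-allocated */
    if (label != NULL)
        sprintf(label, "label:%s", name);
    return label;                         /* caller owns it and must free() it */
}

int main(void) {
    printf("%d\n", sum_of_squares(10));
    char *label = make_label("bob");
    printf("%s\n", label);
    free(label);
    return 0;
}
```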
Way back, when I was building long-running processes in C, it was considered good practice to only use the heap for long-term data. Anything temporary was placed on the stack, which was essentially cleared when a function finished. Buffer overflow concerns, Object Oriented languages, and more advanced memory management techniques contributed to changing this, with more and more data ending up on the heap.
These changes, and a reduced understanding of the underlying behavior of the machines, have contributed to an increasing problem with bloat in modern software. There was a day when 64K could be used effectively, but these days many programs gratuitously eat megabytes or even gigabytes. Although the hardware has followed Moore’s law, doubling every two years, software programs aren’t significantly more responsive than they were twenty years ago. In some cases, like scrolling, they’ve even gotten noticeably slower.
If we are considering refactoring the underlying architecture, then it is not unreasonable that this would have a significant effect on how we design programming languages as well. The compiler or interpreter can tell in many cases from the scope of a variable that it is temporary. But there will be many situations where some data is allocated, used and then passed around into other regions of the code, where it will be assumed to be global.
One way to fix this would be to explicitly indicate that some memory should be deleted, or preserved. In an Object Oriented-style language, long-term objects could inherit from Persistence, for example (OODBs worked this way). That still might leave cyclic dependencies, but given that the program never actually stops running, there is a considerable amount of available CPU power during slack times to walk the structures and prune them. For workstations, late at night they might start a cleanup process (similar to how dreaming compacts/deletes memories).
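A minimal sketch of that idea in C, assuming an explicit persistence flag and a hand-rolled registry in place of inheriting from a Persistence class: allocations are registered as temporary or long-term, and a slack-time sweep walks the registry and prunes whatever wasn’t marked to be kept.

```c
#include <stdio.h>
#include <stdlib.h>

/* Every allocation carries a flag saying whether it should outlive the
 * current burst of work.  A slack-time sweeper walks the registry and
 * reclaims anything not marked persistent. */
typedef struct Object {
    int persistent;              /* 1 = keep across sweeps, 0 = temporary */
    void *data;
    struct Object *next;
} Object;

static Object *registry = NULL;

static Object *alloc_object(size_t size, int persistent) {
    Object *o = malloc(sizeof(Object));
    o->persistent = persistent;
    o->data = malloc(size);
    o->next = registry;
    registry = o;
    return o;
}

/* Called during idle time (the "late at night" cleanup in the post). */
static void sweep(void) {
    Object **link = &registry;
    while (*link) {
        Object *o = *link;
        if (!o->persistent) {
            *link = o->next;
            free(o->data);
            free(o);
        } else {
            link = &o->next;
        }
    }
}

int main(void) {
    alloc_object(64, 1);    /* long-term: survives the sweep   */
    alloc_object(64, 0);    /* temporary: reclaimed by sweep() */
    sweep();
    printf("remaining object is persistent: %d\n", registry->persistent);
    return 0;
}
```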
One thing that is really noticeable with most modern systems is that they are paging excessively. Disks have considerably more space than addressable memory, and we have the expectation that there are many different programs and threads all running at the same time. There just isn’t enough physical memory to hold all of the code and data for everything that is running. Depending on the OS (Unix and Windows use different underlying philosophies for paging), there can be a considerable amount of resources burned, causing excessive copying (thrashing) back and forth between memory and disk. In many overloaded Windows systems, this is often the reason for the strange, inconsistent pauses that occur.
If you consider even a simple arrangement for many systems, it’s easy to see how inefficient these operations really are. For example, consider someone running Outlook on their workstation. If they have a large number of inbox messages and they want to search them, it can be very resource-intensive. Their client talks to an Exchange server, which then talks to a database server. The emails are stored in the database on disk, so if you want to use Advanced Find to search for some text in the emails, each and every one of them needs to be loaded into memory by the database. Most databases have some sort of internal caching mechanism, plus there is the OS paging as well. For a large number of emails, the data is loaded, searched, then probably paged out and/or cached. The actual text matching code may be in the database, or it may be in the Exchange server. The resulting Subjects (and possibly text) are collected, probably cached/paged and then sent to the Exchange server, which again caches/pages. Finally the data is sent over a socket to the client, but again this triggers more caching or paging. In the worst case, any of the email bodies might exist twice in memory and three times on disk. The Subjects could be replicated as many as six times on several machines. And all of this extra work doesn’t necessarily make the process faster. Caches are only effective if the same data is requested multiple times (which in this case it isn’t), while memory thrashing can add large multiples of time to the necessary work (depending on the paging algorithm).
One large block of fully accessible memory that covers all programs and their current execution states would still have to have some way of indexing memory blocks (or our pointers would be arbitrarily long), but it could follow an architecture more like a memory mapped file. The process would have some internal relative addresses that would then be indexed back onto the larger space (but no need to copy). Given the rapid growth of modern storage, the indexing method would likely be hierarchical, to support a much larger range than 32 or 64 bits (or we could fall back into ‘memory models’ from the old DOS days).
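A small POSIX-flavoured sketch of the relative-addressing part (the file name and sizes are arbitrary): structures inside a memory-mapped region link to each other by offsets from the region’s base rather than by absolute pointers, so the same bytes stay valid however and wherever the region happens to be mapped.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Inside the mapped region, records refer to each other by offsets
 * relative to the base of the region, not by absolute pointers. */
typedef struct {
    long next_offset;            /* relative address of the next record */
    char text[56];
} Record;

int main(void) {
    int fd = open("region.bin", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, 4096);
    char *base = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    Record *first = (Record *)base;
    Record *second = (Record *)(base + sizeof(Record));
    strcpy(first->text, "first record");
    strcpy(second->text, "second record");
    first->next_offset = (char *)second - base;   /* relative, not absolute */

    /* Follow the relative link, exactly as any other mapping of the
     * same file could, wherever it lands in its address space. */
    Record *followed = (Record *)(base + first->next_offset);
    printf("%s -> %s\n", first->text, followed->text);

    munmap(base, 4096);
    close(fd);
    return 0;
}
```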
Access to each instruction in the code, and its data, might be slower, but these days instruction caching in the chips is far more effective. With some early RISC systems it was necessary to turn off all of the cache optimizations for programs with complex flows, but that doesn’t seem to be the case anymore (or people just stopped analyzing performance at this level).
Paging and caching aren’t the only places that eat resources. Most systems store their data in a relational database (although other types of data stores are becoming popular again). The representation in the database is considerably different from the one used in memory, so there are often huge translation costs associated with moving persistent data into memory and back again. These can be alleviated by keeping the persistent representation almost identical to the internal one (absolute memory addresses have to be made relative, at least), but that cripples the ability to share data across different programs. Thus programmers spend a great deal of effort writing code to convert from a convenient memory representation to a larger, more generalized persistent one. Not only does this eat up lots of time, but it also makes the code far more laborious to write, and the underlying static nature of the database representation tends to be fragile. So basically we do more work to waste more resources and get more bugs.
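A toy C sketch of that translation tax, with invented table and column names: the convenient in-memory struct gets flattened into a generalized persistent form and then parsed back out again, work that largely disappears if the persistent representation is kept essentially identical to the internal one.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* The in-memory representation the program actually wants to use. */
typedef struct {
    int id;
    char name[32];
    double balance;
} Account;

/* Convert the convenient memory representation into the persistent one. */
static void to_row(const Account *a, char *sql, size_t len) {
    snprintf(sql, len,
             "INSERT INTO accounts (id, name, balance) VALUES (%d, '%s', %.2f)",
             a->id, a->name, a->balance);
}

/* And back again: parse the persistent form into the memory one. */
static void from_row(const char *id, const char *name, const char *balance,
                     Account *a) {
    a->id = atoi(id);
    strncpy(a->name, name, sizeof(a->name) - 1);
    a->name[sizeof(a->name) - 1] = '\0';
    a->balance = atof(balance);
}

int main(void) {
    Account in = { 42, "bob", 100.50 };
    char sql[160];
    to_row(&in, sql, sizeof(sql));
    printf("%s\n", sql);

    Account out;
    from_row("42", "bob", "100.50", &out);  /* columns as a DB would return them */
    printf("%d %s %.2f\n", out.id, out.name, out.balance);
    return 0;
}
```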
With all of that in mind, changing our language paradigm somewhat to be more vigilant in cleaning out temporary data isn’t an onerous shift, particularly if we can eliminate the thrashing problems and, although I didn’t talk about them, the race conditions caused by faulty caching.
That leads nicely into Kemp’s second point. My original description of the idea in the last post was considerably vague. But what I had in mind is to flip the dynamics. Right now we send the code and data to the CPU, but in this new idea we could send the CPU (and all of its PCs) to the code, wherever it lies.
For a single machine, since this new memory would be slower, it would take longer. But not necessarily longer than it already takes to page in the data, then cache it in the chip. Like our current architecture, there could be many different threads all running within the same process. One simple arrangement would be to basically emulate our modern control flow, so that there is a main thread which spawns new threads as needed.
This is OK, but I can envision also changing this arrangement. Instead of a main line branching off different execution paths, we could reconstruct the system so that everything executing was basically similar to async handling. That is, the code is completely event-driven. Nothing happens in the program until an event is processed. There is no main busy-wait loop. Each program is just a massive collection of independent functionality. As the user interacts with the system, one or many of these are triggered. They too can trigger other bits of code. In this way, the user interaction and the scheduler can send various PCs to spots in a program, based on their needs. If the user triggers an operation that can run in parallel, a considerable amount of resources can be sent in that direction to handle the processing. A system-level scheduler can also launch background processing as it is required. Given this flexibility, the system has a greater degree of control over how to balance the load between competing processes and background jobs. Re-indexing the file system, or checking for viruses, can be temporarily delayed if the user is working furiously with their mouse. This flexibility doesn’t exist in modern systems because the control over the internal resources is buried inside of each program. The scheduler isn’t privy to the main loop, or to any busy-waits.
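A minimal sketch of that shape in C, with invented handler names: the program is nothing but a table of independent handlers, and whatever plays the role of the scheduler dispatches events into it, postponing work it knows is deferrable while the user is busy.

```c
#include <stdio.h>
#include <string.h>

/* The program as a table of independent handlers; there is no main
 * busy-wait loop of its own. */
typedef void (*Handler)(void);

static void on_keypress(void) { printf("handle keypress\n"); }
static void on_save(void)     { printf("handle save\n"); }
static void on_reindex(void)  { printf("reindex files (deferrable)\n"); }

static struct {
    const char *event;
    Handler handler;
    int deferrable;              /* scheduler may postpone these under load */
} table[] = {
    { "keypress", on_keypress, 0 },
    { "save",     on_save,     0 },
    { "reindex",  on_reindex,  1 },
};

/* Stand-in for the system scheduler: dispatches an event by name,
 * postponing deferrable work while the user is active. */
static void dispatch(const char *event, int user_busy) {
    for (unsigned i = 0; i < sizeof(table) / sizeof(table[0]); i++) {
        if (strcmp(table[i].event, event) == 0) {
            if (table[i].deferrable && user_busy)
                return;                      /* postponed, not dropped */
            table[i].handler();
        }
    }
}

int main(void) {
    dispatch("keypress", 1);
    dispatch("reindex", 1);   /* deferred while the user is active */
    dispatch("reindex", 0);   /* runs once the system is idle      */
    return 0;
}
```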
From a user perspective on a workstation, there would be no noticeable difference, except that intermittent pauses would be less likely to occur. But underneath, the only main loop in operation would be the operating system’s.
Now, since we’ve got one massive chunk of memory, and some ability to memory map the process space onto this, it isn’t a stretch to extend this further. Instead of going to a particular external server to request data with a sub-language like SQL, we could just map a portion of the server’s memory right into our workstations. Then we could access its data as if it were already in our process. That, of course, would require coordination, locking, and fighting with some particular race conditions from writes, but if the performance were fast enough then we could use the network to effectively extend our workstation out to terabytes or petabytes. Why ‘copy’ the data over, when you can just ‘mount’ it and use it directly?
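As a local stand-in for that kind of mounting (a POSIX shared-memory sketch; the region name is invented): two processes on one machine can map the same named region and work on the data in place, with nothing copied over a socket. The post imagines stretching the same idea across the network.

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* A local analogue of "mounting" someone else's memory: any process that
 * maps this named region sees and edits the same bytes in place. */
int main(void) {
    int fd = shm_open("/bob_shared_region", O_RDWR | O_CREAT, 0644);
    ftruncate(fd, 4096);
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    /* Whichever process runs first writes; any later process mapping the
     * same name sees the data directly, as if it were its own memory. */
    if (region[0] == '\0')
        strcpy(region, "written in place, never copied over a socket");
    printf("%s\n", region);

    munmap(region, 4096);
    close(fd);
    return 0;
}
```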
While that would be fun (if it’s workable), even more entertaining would be to mount our memory onto other workstations. Late at night, when everyone is gone, we could just co-opt their CPUs to start updating our processes. You’d just need to send some type of communication to the other stations to access your memory, and respond to, or generate, events. After working hours, programmers could turn their now-abandoned offices into giant clusters, all working in tandem to handle some heavy-duty processing.
This type of architecture is far from unique. Way (way) back, I worked on some Apollo computers that ran a CAD/CAM system for super-computer circuit boards. The system was developed in Pascal, and as the co-op student my task was to write some code to distribute processing to any workstation that wasn’t being used (unfortunately the company crashed before I had made significant progress on the work). Systems like that and Beowulf clusters distribute the code and data to their nodes, but in this model the nodes would instead converge on the code and data.
There would of course be lots of resource contention issues, and some maximum threshold for how many CPUs could effectively work together, but distributing or replicating the underlying information to speed up computation would be a lot simpler, and could be handled dynamically, possibly on demand.
Alright, that’s it for now (or possibly forever). With this much text, I’ve got a 2% chance of any reader getting down to this paragraph, so although I could go on, I’ll stop after one final round-up paragraph :-)
If the underlying synchronization, locking and race conditions are manageable, one big addressable chunk of memory would reduce a lot of redundant data movement and translation. It would also allow for ways to extend it across networks, to both provide massive available memory and to allow independent machines to co-operate with each other. Scheduling would be more effective, since it would have more information to work with. From a coding perspective, it would drop out a lot of the work required to convert the data from the database, although some other multi-process synchronization technique would still be necessary. As an odd point, it would make many users’ lives easier, because they already can’t distinguish between memory and disk. Also, after power outages, the programs would go right back to their last state; there would be no rebooting time. Backups would no longer be hot/cold, and there might be a way to use RAID-like ideas for synchronizing memory, leading to better fault tolerance and recovery (particularly if the same chunk of memory were shared on multiple machines, and kept up-to-date).
Then again, it might not be possible :-) We can’t know, until someone tries to build it.
Monday, May 16, 2011
Deep Refactoring
History dictates today’s computer architecture. That’s the normal progression for knowledge and technology, but often there comes a time when we can and should take another look at the foundations. Some things don’t ‘need’ to exist the way they are; they just exist that way.
Memory and disk were split because of practical hardware considerations. The faster RAM was considerably more expensive than the larger hard-drives. This duality meant a lot of work moving data and code back and forth across the divide. It also meant that there were interesting race conditions created when we tried to optimize this split with caching.
With today’s solid state drives, and their increasing speed, one starts to wonder whether this split is really necessary. What if there were only one location? What if you basically installed the software right into memory, and memory were huge (and completely addressable)? What then?
Well for one, the issues with storing things out to persistent storage would just go away. Internally as you created instances of data, they would just stay put until you got rid of them. The static nature of the database would give way to a full dynamic representation.
Another interesting issue comes from the program counter (PC). Right now we pick up the code, and then toss it into memory where one or more PCs march through it, building up an execution stack. What if, instead of the code going to the PCs, the PCs went to the code? That is, the whole nature of time-slicing and process/thread handling changes. PCs become wandering minstrels, dropping into particular executables and moving them forward a few paces. Any stack information is left with the code, and they all exist together, always in memory.
In that way, the user could redirect a number of different PCs towards different tasks, allowing them to bounce around as needed. It sounds eerily similar to what we have now, except that the whole thing is turned on its head.
One peculiar aspect of this is that programmers would no longer create threads. Instead, threads essentially just spring into existence. No longer could one assume that the code doesn't need to be thread-safe, since at any time there could be multiple PCs working in the same area. This would likely shift the atomicity from each individual instruction into blocks of instructions, a change that might require a fundamental paradigm shift in our programming languages.
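A conventional C sketch of what declaring atomicity over a block of instructions looks like today: a mutex marks the boundaries of the block, so any number of PCs wandering into deposit() still see the update as a single unit.

```c
#include <pthread.h>
#include <stdio.h>

static long balance = 0;
static pthread_mutex_t balance_lock = PTHREAD_MUTEX_INITIALIZER;

/* The block between lock and unlock is the declared atomic unit; no
 * individual instruction inside it can be assumed atomic on its own. */
static void deposit(long amount) {
    pthread_mutex_lock(&balance_lock);    /* start of the atomic block */
    long updated = balance + amount;      /* several instructions ...  */
    balance = updated;                    /* ... treated as one unit   */
    pthread_mutex_unlock(&balance_lock);  /* end of the atomic block   */
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++)
        deposit(1);
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("balance = %ld\n", balance);   /* always 200000 with the lock */
    return 0;
}
```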
There would be issues with addressing, since disks these days are huge, but one could implement some form of segmented indexing, maintained in a master table somewhere. If PC1 goes to process X, it looks up its general location from the table, then gets the specifics from process X. At that point it happily marches through the code, until it’s told to go elsewhere.
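A toy sketch of that two-level lookup (all addresses invented): the master table maps a process to its general location in the one big address space, and a process-relative entry point is added on top to get the absolute spot a PC should resume at.

```c
#include <stdio.h>

/* Master table: where each process sits in the single large address
 * space, plus a process-relative address to resume execution at. */
typedef struct {
    int process_id;
    unsigned long base;          /* general location in global memory */
    unsigned long entry_point;   /* process-relative resume address   */
} MasterEntry;

static MasterEntry master_table[] = {
    { 1, 0x100000UL, 0x040UL },
    { 2, 0x800000UL, 0x1c0UL },
};

/* Resolve a process id to the absolute address a PC should jump to. */
static unsigned long resolve(int process_id) {
    for (unsigned i = 0; i < sizeof(master_table) / sizeof(master_table[0]); i++)
        if (master_table[i].process_id == process_id)
            return master_table[i].base + master_table[i].entry_point;
    return 0;   /* unknown process */
}

int main(void) {
    printf("PC1 resumes process 2 at 0x%lx\n", resolve(2));
    return 0;
}
```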
I’m not entirely sure if this would work, and no doubt there are dozens of corner-cases I haven’t considered, but it would a) make coding a lot easier and b) change the nature of exceptions. It would be interesting to try it, and if it worked I think it would be revolutionary.
Tuesday, May 10, 2011
Software Virgin
I know, I know. The term 'software virgin' is derogatory, but in my defense it wasn’t me that coined it. It was a fellow exec from long ago who came up with the phrase. Ironically, while she was referring to others, she herself was never able to transcend its base definition.
Software is exceedingly complex, but it is easy to miss that fact. Every year, more people are attracted to it and many come in with preconceived notions about what they can do, and how simple it will be to get it done.
For those of us that have been around for a long while, and have gradually come to learn what’s possible and what is not, this influx of new people has always represented a problem. It’s hard enough to get the system working, without having to fight off someone who is dismissing your knowledge as pessimism.
Knowledge is a funny thing. The 10,000 ft view may appear simple, but the true understanding comes from the depths. Doing something, and doing it well, are very different beasts. I wouldn’t discourage people from playing with a new field, but that is completely different from doing it professionally.
Software virgins come in three basic flavors. Some are business people that are drawn to technical bubbles because they sense easy wealth. Some are managers trying to make their careers by implementing a grand project. And some are programmers, fresh out of school, capable of coding solutions to homework and small systems, but dreaming of creating giant masterpieces.
They all have an over-simplified expectation of the effort. And of their own contributions. The business people think all they need to be successful is an idea. The managers believe that if they get a list of things to be done, that is enough. The young coders think it’s some clever functionality or a nice algorithm that will get it done. Each, in their own way, is missing the essentials of the problem.
Software is about building something. Something large.
An idea is great, but they are a dime a dozen these days. You see an endless stream of new web sites out there, all with a slightly different variation, what my friend humorously refers to as “roadkill on the information super highway”. You can’t win with an idea; it has to go deeper than that. Ultimately software is about people, and about what the software does for them. If the idea is just how to make money, then it has little to do with people. You have to convince someone to use the software; it has to do something better than what they have now.
Managers love lists. Many think that if you just get the work items into a list, and start checking them off, then you will achieve success. So often I’ve seen that fail because they’ve chosen to ignore the dependencies between the list items, or the exponential explosion of the sub-items. Some things must be done first in order to keep the work possible. In a big software project, the dependencies quickly become a scrambled nest of complex inter-relationships. Trade-offs have to be made, and the consequences can be severe. If you start in the wrong place, each thing you do simply adds more work to the list. It grows forever. A big project is really a massive sea of details, buried in a tonne of complexity. Any list is a shallow reflection of the work needed to be done. If you believe in Jack Reese’s perspective, the final list, the one that contains everything, is the code itself. You don’t have all of the details in place until you have all of the details in place. The list is the code, the configuration items, documentation and all of the other stuff that is assembled to be able to move the system into a place where it can be utilized. Until then, you have no idea about ‘everything’, just some things.
When programmers first learn to code, they quickly become entranced by their own ability to assemble increasingly larger sets of instructions. At some point, usually before they’ve experienced working on a full project from start to finish, they come to an over-estimation of their experience. If they can build something small, then clearly they can build something massive. I guess it’s a natural part of our human behavior, best summarized by “a little knowledge can be a dangerous thing”. Since software isn’t well organized, and there isn’t some 60 inches of textbooks that will give you the full sense of just how wide and deep things really are, it’s easy to miss its vastness. And to make it worse, those with experience tend to frequently contradict each other, or drop out early. We’re a long way from being a sane profession, or even a craft. Within this environment, it is easy to draw certainty from too little knowledge. The consequences of this litter the web with endless arguments over things poorly understood. In many instances, both sides are wrong.
Software has a startlingly high failure rate. So do technical startups. And every time, each different type of virgin blames the others. Many, even after some experience, become fixed in their errors, believing that they are somehow superior, even if their efforts are barely working. It’s a strange industry, software. A place where one of the most difficult problems is working around the software virgins, on your way towards trying to get something substantial done. And the more you know, the more you learn, the less certain you are of anything, or everything. Someday I’ll get to the point where I am absolutely sure I know absolutely nothing. The zen of software knowledge, I guess.
Software is exceedingly complex, but it is easy to miss that fact. Every year, more people are attracted to it and many come in with preconceived notions about what they can do, and how simple it will be to get it done.
For those of us that have been around for a long while, and have gradually come to learn what’s possible and what is not, this influx of new people has always represented a problem. It’s hard enough to get the system working, without having fight off someone who is dismissing your knowledge as pessimism.
Knowledge is a funny thing. The 10,000 ft view may appear simple, but the true understanding comes from the depths. Doing something, and doing it well, are very different beasts. I wouldn’t discourage people from playing with a new field, but that is completely different from doing it professionally.
Software virgins come in three basic flavors. Some are business people that are drawn to technical bubbles because they sense easy wealth. Some are managers trying to make their careers by implementing a grand project. And some are programmers, fresh out of school, capable of coding solutions to homework and small systems, but dreaming of creating giant masterpieces.
They all have an over-simplified expectation of the effort. And their own contributions. The business people think all they need to be successful is an idea. The managers believe that if they get a list of things to be done, that is enough. The young coders think its some clever functionality or a nice algorithm that will get it done. Each in their own way, is missing the essentials of the problem.
Software is about building something. Something large.
An idea is great, but they are a dime a dozen these days. You see an endless stream of new web sites out there all with a slightly different variation, what my friend humorously refers to as “roadkill on the information super highway”. You can’t win with an idea, it has to go deeper than that. Ultimately software is about people, and about what the software does for them. If the idea is how to make money, then its has little to do with people. You have to convince someone to use the software, it has to do something better than what they have now.
Managers love lists. Many think that if you just get the work items into a list, and start checking them off, then you will achieve success. So often I’ve seen that fail because they’ve chosen to ignore the dependencies between the list items, or the exponential explosion of the sub-items. Some things must be done first in order to keep the work possible. In a big software project, the dependencies quickly become a scrambled nest of complex inter-relationships. Trade-offs have to be made, and the consequences can be severe. If you start in the wrong place, each thing you do simply adds more work to the list. It grows forever. A big project is really a massive sea of details, buried in a tonne of complexity. Any list is a shallow reflection of the work needed to be done. If you believe in Jack Reese’s perspective, the final list, the one that contains everything, is the code itself. You don’t have all of the details in place until you have all of the details in place. The list is the code, the configuration items, documentation and all of the other stuff that is assembled to be able to move the system into a place were it can be utilized. Until then, you have no idea about ‘everything’, just some things.
When programmers first learn to code, they quickly become entranced by their own ability to assemble increasingly larger sets of instructions. At some point, usually before they’ve experienced working on a full project from start to finish, they come to over-estimate their experience. If they can build something small, then clearly they can build something massive. I guess it’s a natural part of human behavior, best summarized by “a little knowledge can be a dangerous thing”. Since software isn’t well organized, and there isn’t some 60 inches of textbooks that will give you the full sense of just how wide and deep things really are, it’s easy to miss its vastness. And to make it worse, those with experience tend to frequently contradict each other, or drop out early. We’re a long way from being a sane profession, or even a craft. Within this environment, it is easy to draw certainty from too little knowledge. The consequences of this litter the web with endless arguments over things poorly understood. In many instances, both sides are wrong.
Software has a startlingly high failure rate. So do technical startups. And every time, each different type of virgin blames the others. Many, even after some experience, become fixed in their errors, believing that they are somehow superior, even if their efforts are barely working. It’s a strange industry, software. A place where one of the most difficult problems is working around the software virgins on your way towards trying to get something substantial done. And the more you know, the more you learn, the less certain you are of anything, or everything. Someday I’ll get to the point where I am absolutely sure I know absolutely nothing. The zen of software knowledge, I guess.
Thursday, May 5, 2011
Risk and Inexperience
In most of the discussions about startups, it is commonly mentioned that the special rewards belong to those who take huge risks by investing and working full-time for no money. Most founders see this as a rite of passage, and justification for them to get the bulk of the winnings and credit.
The general idea is that you’re not taking significant risks unless you’ve put your life on hold, mortgaged your house and thrown all of your savings into a project that basically has a 1 in 10 chance of success. It’s an all or nothing deal. You either win, or you end up broke, divorced and living on the streets. VCs also feel that without this type of commitment, you’re liable to bail early.
Honestly, that’s about as whacked as it gets.
The first flaw in this delusion is that many of the people playing in startups have significant cash holdings. They can go for a year or two without a salary, and doing so won’t mean eating “cat food” in retirement. From that perspective, it really reeks of a rich person’s game to keep out those pesky ‘commoners’. If they don’t need to work for a while, then the risk is minimal.
But for many of the successful startups in the past, the founders were also young, had few commitments and could easily recover if things floundered. It wasn’t an all or nothing deal, it was just a diversion in life. In the worst case they could live with their parents again for a while. Failure wasn’t going to cripple them. Their risks were comparatively low.
However, for an experienced technical person the risks are huge even if they are getting a salary. When I fell out of one startup, it took years to get stable again. And it didn’t come without a lot of pain; I basically had to fall back several huge notches in my career and start over. Before I left, I had foolishly assumed that the experience would pay for itself, but that turns out to be true if and only if you win. If you win, everybody loves you. If you lose, well, let’s just say your phone isn’t ringing off the hook. It’s a pretty nasty face plant.
Risk then is relative to where you are in life, and what you really have to lose. If you’re thinking about forming a startup then you need to think about losing. It’s the most likely scenario. You don’t have to dwell on it, or use it to prevent you from trying, but deliberately wearing blinders just seems, well, foolhardy. Often startups don’t work, so if you really want to play, it may take quite a few before you are finally successful. Believing in an idea does not require willfully ignoring reality.
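As a rough back-of-the-envelope illustration (assuming the oft-quoted 1 in 10 success rate from above, and pretending each attempt is independent, which real life certainly is not), here is what “quite a few” works out to:

```python
# Chance of at least one win over repeated attempts, assuming each
# startup independently has a 1 in 10 chance of succeeding.
p_success = 0.10

for attempts in (1, 3, 5, 7, 10):
    at_least_one = 1 - (1 - p_success) ** attempts
    print(f"{attempts:2d} attempts -> {at_least_one:.0%} chance of at least one win")

# Roughly: 1 -> 10%, 3 -> 27%, 5 -> 41%, 7 -> 52%, 10 -> 65%
```

Even ten serious tries still leave you with about a one-in-three chance of never winning at all, which is exactly why planning for the loss matters.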
A good software developer is good not just because they can write code to do something. They’re good because they can write code that anticipates the chaos of the world, and then survives long enough to do something. They're good because they can communicate that to others, and send the herd off in a single direction. That understanding only solidifies with experience; it’s not something they can teach you in school. That experience is vital in being able to get something up and running. If a company goes forth without that experience, success is just an accident. If someone is serious about setting up a new company, then they should be serious about partnering with someone who knows what they are doing. Software is too complicated to just ‘wing it’ these days, and it gets worse every year, not better. The heady days of garage computing have been over for a long time now.
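As a small, purely hypothetical example of what ‘anticipating the chaos’ looks like in code (the client, endpoint and failure modes here are invented for illustration, not drawn from any particular system):

```python
import time

def fetch_report(client, report_id, retries=3):
    """Hypothetical sketch: assume nothing about the outside world.

    `client.get` stands in for any network or I/O call; the point is the
    defensive shape, not the specific API.
    """
    if not report_id:
        raise ValueError("report_id is required")

    last_error = None
    for attempt in range(retries):
        try:
            response = client.get(f"/reports/{report_id}", timeout=5)
            if response is None:           # the world returns garbage sometimes
                raise RuntimeError("empty response")
            return response
        except Exception as err:           # timeouts, dropped connections, bad data
            last_error = err
            time.sleep(2 ** attempt)       # back off, then try again
    raise RuntimeError(f"could not fetch report {report_id}") from last_error
```

The naive version is one line; the version that survives contact with the real world is the rest of it.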
The thing is, that development skill set is also applicable in the real world. If you can arrange things so they don’t fail on a computer, then it would seem pretty crazy to ignore that understanding and dive into a startup head-first without first ensuring that the costs of failure are mitigated in some way. Experience ages you (literally) and along the way you’ve picked up dependencies. A wife, a mortgage, kids, etc. Things that you can’t ignore. Because of that, failure rapidly becomes worse than a face plant. Worse than a few years without pay. Worse than losing some disposable income. Even if you get a salary, you’re putting up a huge chunk of your life and your career. Failing fast is nice, but it often takes years before the dust has finally settled. Early promise fades into autumn blues.
A good, experienced developer wouldn’t release code that has a 1 in 10 chance of working; those are worse odds than Russian roulette. So why would he, or she, be interested in an all or nothing game with those same odds?
Basically, the types of people that would make a good technical founder are the types that have been around long enough not to want the position. Well, unless they are already rich of course, and if they are, hmmm, what’s their risk again? Is it any wonder that so many startups fail ...
Monday, May 2, 2011
Acting on My Behalf
One thing that has really bothered me lately is how many software developers have lost one of the key essentials in software. The computer is a tool to do what I want, when I want it done. Installing or running software shouldn’t be beyond my knowledge or control. I should always be aware of what the computer is doing when it is acting on my behalf.
It used to be that way. You installed software, and that software only did things when you requested it to. The functionality could be at a very high level, or a set of low-level tools. The older programmers were very careful about making sure that all actions, all of the time, were directed by the users.
I think that started to change when Microsoft became a target for viruses. Somehow they convinced us that they had to have control over releasing immediate critical security updates. It was in our best interests to leave the problem to them.
Once that door opened, everybody flooded in, and now we have all sorts of software sending and communicating on our behalf without us being aware. There are annoying popups on the screen, things turning all sorts of colors, and it’s a constant battle just to type something in without some type of interruption. To me, these are very unwelcome behaviors. I need to trust my machine in order to get the most out of it. I can’t trust it if I don’t know what it is doing.
Of course, it was security concerns that got us into this mess, but because of the way people chose to address the problems, we’ve done nothing but make that problem worse. The more crap that interrupts, the more people ignore it. That makes it easier for bad code to get in, not harder. A hasty approach to security has sent us into a downward spiral.
I’d love to see this trend get reversed. If the computer can’t initiate actions without our consent, then it becomes harder for people to run malicious code on our machines. The answer to our security issues is to put the users back into the driver’s seat. They choose what functionality they want to execute and when, and only then can the machine initiate actions. From there we can strengthen their ability to know and understand what they are doing (or at least certify it in a way that they can know it’s not malicious). Users, not the software, need to drive their computers. The current approach is clearly not working.
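A minimal sketch of what ‘the user initiates, the machine acts’ could look like in practice (all of the names here are hypothetical; the only point is that nothing runs unless it was explicitly requested):

```python
# Hypothetical consent-first shape: the machine only acts when asked.
ACTIONS = {}

def action(name):
    """Register a capability without ever running it on the program's own initiative."""
    def register(func):
        ACTIONS[name] = func
        return func
    return register

@action("check for updates")
def check_for_updates():
    print("checking for updates, because you asked...")

def run(requested):
    # The single entry point is an explicit user request: nothing scheduled,
    # nothing in the background, nothing quietly done 'on our behalf'.
    handler = ACTIONS.get(requested)
    if handler is None:
        print(f"unknown action: {requested}")
        return
    handler()

if __name__ == "__main__":
    run(input("what would you like the machine to do? ").strip())
```

Even a gate that small changes the trust relationship: if nothing happens without being asked, then everything that happens can be traced back to a request.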