Sunday, July 26, 2015

Intelligence

I've always been curious about intelligence, both as it relates to our species's ability to reason about its surroundings and to how much of it we can actually fit into software. I've done little bits of reading here and there for years, but it's never been intentional. As such, I've built up a considerable number of disjoint knowledge fragments that I thought I'd try to organize into something a little more coordinated. Given my utterly amateur status on this topic, any comments, scepticism, corrections or other fragments are highly welcome.

I'll start with a long series of statements, then I'll do my best to explain what I think is both right and wrong about them:

  1. Life forms are dynamic, driven internally.
  2. Life forms choose how to spend their energy.
  3. Smart is the sophistication of that choice.
  4. The choice can be very simple, such as fight or flee.
  5. The choice can be on-the-fly, thus choosing the best current option.
  6. The choice can be based on internal models, which supports longer-term benefits.
  7. The models are dynamic, both the data and the structure itself can adapt and change.
  8. Intelligence is the ability to create models of the world and benefit from them. To be able to adapt to changes faster than evolution.
  9. Life forms can be self-aware.
  10. Self-awareness makes a life form smarter.
  11. Internal questions are spontaneous queries directed at models. They are not direct responses to external stimuli.
  12. The purpose of asking internal questions is to refine the models.
  13. Consciousness is a combination of models, being self-aware and asking internal questions.
  14. Humans are not entirely logical, their models get created for historic and/or emotional reasons.
  15. Some understanding comes initially as instinctive models.
  16. Some understanding gets built up over time from the environment and cultures.
  17. Common sense is only common to inherited cultural models.
  18. Contradictions arise from having multiple models.

Definitions

I'll start by expanding a few basic definitions and then gradually delve into the more interesting aspects:

An object can be static, like a rock. With respect to its surroundings, a rock doesn't change. An object can be dynamic, like an ocean. It is constantly changing, but it does so because of the external forces around it. If an object can change, but the reason for the change is encapsulated in the object itself, then I see this as some form of life. In order to accomplish the change, the object must build up a supply of energy. A cell is a good example, in that it stores up energy and then applies it in a variety of different ways, such as splitting, signalling other cells or repairing itself. A machine, however, is not alive. It may have sophisticated functionality, but it is always driven by some external life form. It is an extension.

One big problem I am having with the above definition is hybrid cars. They store up energy and they choose internally whether to run on electric or gas power. A car is just a machine and its destination is the choice of the driver, but that energy switchover collides with my first two points. I'll try to address that later.

I generally take 'smart' to be an external perspective on an object's behaviour and 'intelligent' to be the internal perspective. That is, something can behave smartly in its environment even if it doesn't have a lot of intelligence, and intelligent beings aren't necessarily being smart about what they are doing externally. Splitting the two on external vs. internal boundaries helps clarify the situation.

Given the external perspective, a life form that is smart is one that adapts well to its environment in a way that benefits it as much as possible. Depending on the environment, this behaviour could range considerably in sophistication. Very simple rules for determining when autumn has arrived help maple trees shed their leaves. That behaviour is likely a simple chemical process that gradually accumulates over a period of time to act as a boolean switch. Cluster enough cold days together over a month or so and a tree decides to hibernate. That may be smart, but we generally do not define it as intelligence. It is a reactive triggering mechanism driven by external stimuli.
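That kind of trigger is simple enough to sketch in a few lines of code. This is only an illustration of the mechanism; the thresholds are invented for the example, not botany:

```python
# A reactive boolean switch: accumulate clustered cold days, then flip.
COLD_DAY_C = 10       # a daily mean below this counts as "cold" (invented)
TRIGGER_COUNT = 20    # roughly a month of clustered cold days (invented)

def should_shed_leaves(daily_mean_temps_c):
    """Return True once enough cold days have accumulated."""
    cold_days = 0
    for temp in daily_mean_temps_c:
        if temp < COLD_DAY_C:
            cold_days += 1
        else:
            cold_days = max(0, cold_days - 1)  # warm days slowly undo the buildup
        if cold_days >= TRIGGER_COUNT:
            return True  # the switch flips: hibernate
    return False

print(should_shed_leaves([22] * 60))             # a warm summer: False
print(should_shed_leaves([22] * 30 + [5] * 25))  # a cold autumn: True
```

Nothing in that loop models anything about the world; it just reacts to accumulated external stimuli, which is exactly why I'd call it smart but not intelligent.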

Some life forms make considerably more sophisticated choices. Not just reactions; there is a long-term component. In order to project out into the future, one needs some data about both the present and the past. But this data can't be unstructured. Since it is internal, at best it can be some combination of symbolic representations and explicit sensory input. The structure is no doubt chaotic (naturally organized), but it exists nevertheless. It is this resource that I call a model. A very simple example might just be a crude map of where to go to get food. A more complex one might handle social organizational relationships. Practically, it is likely that models get coalesced into larger models, and that there are several different internal and external ways in which this is triggered. Still, it is expected that life forms with this capacity would have multiple models, in different stages of development.

The purpose of keeping a model is that it can be used to project future events. It is essentially a running total of all the relevant information the life form has been exposed to. Once you know that night is coming in a few hours, you can start arranging early to find a safe sleeping place. Certainly, a great many creatures have crude models of the world around them. Many apply crude logic (computation?) to them as well. We have shown a strong preference for only accepting sufficiently advanced models as truly being intelligence, but that is likely a species bias. Sometimes people also insist that what we really mean by intelligence is the ability to serialize paths through a model and communicate them to others, who can then update their own understanding. The ability to transfer parts of our models. As far as we know, we're the only creatures with that ability. Other animals do communicate their intentions, but it seems restricted to the present.
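To make the distinction from the maple tree concrete, here is a toy sketch of a crude model being queried for a projection rather than reacting to a stimulus. Every fact and number in it is invented for the example:

```python
# A crude internal model: stored regularities about the world.
world_model = {
    "sunset_hour": 20,          # learned regularity: it gets dark around 20:00
    "safe_spots": ["hollow tree", "rock ledge"],
    "travel_hours_to_spot": 2,  # how long it takes to reach shelter
}

def plan(current_hour, model):
    """Project a future event from the model and act ahead of it."""
    hours_until_dark = model["sunset_hour"] - current_hour
    if hours_until_dark <= model["travel_hours_to_spot"]:
        return "head for the " + model["safe_spots"][0] + " now"
    return "keep foraging"

print(plan(14, world_model))  # keep foraging
print(plan(18, world_model))  # head for the hollow tree now
```

The decision at 18:00 isn't a response to anything currently happening; it comes from querying stored structure about the future. That is the extra machinery a model buys.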

Douglas Hofstadter made an interesting point in "I Am a Strange Loop" about the strength of self-awareness. A feedback loop can have very deep and interesting behaviours. Life forms whose models include an understanding of their own physical existence can no doubt make more sophisticated choices about their actions.

Self-awareness clearly plays a crucial role in behaviour; I'm just not sure how deeply that goes. Most mobile creatures have at least the fight or flee responses, and it seems that fleeing could require self-awareness, as an attempt to save oneself. Perhaps, or it could just be wired into the behaviour at the instinctual level? Or both.

The really big question about life forms has always been consciousness. Some feel that it is just self-awareness, but I think you can make many valid objections to it being that simple. As I mentioned above, lots of creatures will flee at the first sign of danger, but some of them, like insects, seem simple enough that it is hard to imagine calling them intelligent, let alone conscious. Their behaviour most often seems to be statically wired. It doesn't really change quickly, if at all.

Some others feel it is just a thread or stream of conscious thought. That really only works if the stream is aware of itself, and is essentially monitoring itself. Still, that doesn't seem strong enough to explain consciousness properly, and it doesn't provide any reason why it would have evolved that way.

Recently I've come to think of it as the ability to internally ask questions of one's own models. That is, it is a stream of self-aware thought that frequently questions the models on how (and why) to expend energy. A non-conscious life form would just react to the external changes around it directly, via the gradual buildup of rules or simple models. A conscious life form would also question those models and use those questions to further enhance them. Simply put, a conscious life form is curious about itself and its own view of the world.

That opens the door to someone correctly arguing that there is a big hole in this definition. You could point out that, by it, people who never question their own actions are not conscious. For example, someone who joins a cult and then does crazy stuff only because the leader says that they should. I'm actually fine with the idea that there are degrees of consciousness, that some people are barely conscious, and that some people can be turned that way. It matches up with the sense I got when I was younger that I had suddenly 'emerged' as a being. Before that I was just a kid doing whatever the adults specified; then almost suddenly I started getting my own views, questioning what others were saying or what I was told. I can accept that as becoming gradually more conscious. So it isn't boolean; it is a matter of degrees.

Asking questions does show up in most fiction about artificial intelligence. From Blade Runner to Ex Machina, the theme of the AI deviating from expectations is always driven by its questioning the status quo. That makes sense, in that if it just did what it was told, the story would be boring. It's just the story of a machine. Instead, there needs to be tension, and this derives from the AI coming to a different conclusion than at least one of the main characters. We could see that as "questions that are used to find and fill gaps in their models". Given that each intelligent life form's model is unique, essentially because it is gradually built up over time, internal expansions would tend towards uniqueness as well. No doubt the models are somewhat sticky; that is, it is easier to add stuff than it is to make large modifications. People seem similar in this regard. Perhaps that is why there are so many disagreements in the world, even when there are established facts?

Artificial Intelligence

Now that I've sort of established a rough set of definitions, I can talk about more interesting things.

The most obvious is artificial intelligence. For a machine to make that leap, it would have to have a dynamic, adaptable model embedded within it; otherwise it is just a machine with some embedded static intelligence. To test this, you would have to ensure that the model changed in a significant way. Thus, you might start with a blank slate and teach the machine arithmetic on integers. Then you would give it a definition for complex numbers (which are structurally different from integers) and see if it could automatically extend its mathematical model to those new objects.

These days in software development we always make a trade-off: static, finite data types on one side, or less structure in some generic dynamic representation on the other (a much rarer form of programming, but still done at times, as in symbolic algebra systems). I've never seen both overlaid on top of each other; it is currently one or the other.
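To make that trade-off concrete, here is a toy sketch of the dynamic side, loosely aimed at the integer-to-complex test above. Everything in it is hypothetical scaffolding: the arithmetic "model" is just runtime data, so a structurally new kind of number can be grafted on after the fact:

```python
# The arithmetic model is a runtime-extensible table of rules,
# not a fixed set of compile-time types.
add_rules = {}  # (kind, kind) -> function

def register_add(kind_a, kind_b, fn):
    add_rules[(kind_a, kind_b)] = fn

def add(a, b):
    return add_rules[(a["kind"], b["kind"])](a, b)

# Start with a model that only knows integers.
def int_val(n): return {"kind": "int", "n": n}
register_add("int", "int", lambda a, b: int_val(a["n"] + b["n"]))

# Later, graft on a structurally different object: complex numbers.
def cpx_val(re, im): return {"kind": "cpx", "re": re, "im": im}
register_add("cpx", "cpx",
             lambda a, b: cpx_val(a["re"] + b["re"], a["im"] + b["im"]))
register_add("int", "cpx",
             lambda a, b: cpx_val(a["n"] + b["re"], b["im"]))

print(add(int_val(2), int_val(3)))     # {'kind': 'int', 'n': 5}
print(add(int_val(2), cpx_val(1, 4)))  # {'kind': 'cpx', 're': 3, 'im': 4}
```

In a statically typed program that distinction would be frozen at compile time. The leap described above would be the machine writing those registrations itself, rather than a programmer doing it.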

A fully dynamic structure, as a massive graph, would give you some form of sophisticated knowledge representation, but that would still not be artificial intelligence as we often think of it. To get that next leap, you'd need this thread of constant questioning that is gradually refining the model. Testing to see if this exists would be problematic, since by definition it isn't connected to external events. My guess is that you need to use time. You give the machine a partial model with an obvious gap, and later you return to see whether the gap has been filled in. An important point here is that, at bare minimum, this requires two different tests spaced out in time, which exposes a huge weakness in the definition of a Turing test. Such a test would have to be really long, in order to allow the modifications to happen, and possibly by definition they never would, because the subject would be preoccupied by the test itself.

A second critical point is that any life form that is questioning itself and its surroundings is inherently non-controllable. That is, you could never guarantee that the answers to those questions always come out in your favor. As such, any fixed set of rules, like Isaac Asimov's three laws of robotics, is probably not enough to guarantee a domesticated and benevolent artificial intelligence. It's more like a slot machine, where a jackpot means big trouble. Once the wrong question is asked, all bets are off. A conscious being always has enough free will to extend the model in a way that is not in the best interests of those around it. It is looking after itself with respect to the future, but that forward projection could literally be anything.

Self-driving and self-repairing cars

The current state of self-driving cars leads to some interesting questions. The version at Google clearly models the world around it. I don't know how dynamic that modelling is, but it's aligned in a way that could potentially make it a life form. The differentiator is probably when the car encounters some completely new dynamic object, and not only adds it to its database but also all of its new, previously unseen, properties as well. That update would have to alter both the data and its structure. But to arrive at a life form it might not even need to be dynamic...

At some point, a self-driving car will be able to drop you off at work, and then find a cheap place to park during the day. What if one day it has an accident after it leaves you? It could easily be wired with sensors that notify it that the fender is dented. At this point, it could call up a garage, schedule an appointment and afterwards pay for the repair with your credit card. This is fairly autonomous; is it a life form now? One could argue that it is still reliant on external objects for the repair, but we have the technology to bypass that. What if it had a built-in 3D printer and some robot arms? At that point it could skulk away somewhere and repair itself; no one needs to know.

So let's say it's a normal day: the car drops you off and you go to work. If later in the afternoon the police show up at your office and start questioning you about a fatal hit-and-run accident, what happens? You tell them that you've been at work all day; they ask to see the car. You summon it, and it is in pristine condition, nothing wrong. What if, as part of the repair, the car automatically deleted its logs? You ask the car if it got into an accident. It says no. At this point, you would not expect to be held liable for the accident. You weren't there, there isn't any proof, and the car itself was involved in the cover-up, not you. It does seem that with the right combination of existing technologies the car could appear to be fully autonomous. Making its own decisions. Repairing itself. At this point, has it crossed the threshold between machine and life form? It also seems as if it doesn't even need to be dynamic to make that leap. It has just become smart enough that it can decide how to spend its energy.

Origins of life

If life is just an internally dynamic process that got set in motion, a crucial question is how and why did it all start? My naive view is that somehow bundles of consumable energy coalesced. From there, all that would be necessary to go forward is a boolean mechanism to build up containment around this energy. Given those two pieces, something like a cell would start. Initially this would work to fully contain the current bundle, and then gradually extend out to more sophisticated behaviours, such as gathering more energy. In that sense, one could start to see how the origins could be derived out of the interplay between mechanical processes.

In a way that's similar to the issues with a self-driving car. What's radically different, though, is what set it all in motion. Our perspective of evolution implies that some of these internally dynamic processes are a consequence of external dynamic processes. That is, the mechanism to attempt to contain, and later collect, energy is a normal part of our physical world. Something akin to the way weather and water shape the landscape, but in this case the external force gradually morphs to maintain its own self-interest. This is an extraordinarily deep issue, in that it draws the line between machines and life forms. If a self-driving car does become a life form, it is because we set it in motion. We're in motion because of all the life forms before us. But before that, it seems as if we need to look to physical processes for an explanation.

Dynamic Relationships

There are sometimes very complex relationships between different things. They might be continually dynamic, always changing. Still, they exist in a physical system that bounds their behaviour. Computation can be seen as the dynamic expression of these relationships. In that sense it is a complex, time-based linkage, whether explicit or not, that relates things together. We have some great theories for what is computable and what is not. We know the physical world has boundaries as well; a classic example is the limits on exponential operations, such as paper folding. Over the millennia we have refined all sorts of ways of expressing these relationships, and have dealt with different degrees of formality as well as uncertainty. In doing so, we externally model the relationships and then apply them back to the real world, in order to help us make better choices. If that sounds remarkably like my earlier description of intelligent life forms, it is no coincidence.
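The paper-folding example is worth actually working out, since the numbers are what make the physical boundary vivid. Assuming a standard sheet about 0.1 mm thick:

```python
# Thickness doubles with every fold, so it grows exponentially.
thickness_m = 0.0001  # ~0.1 mm sheet of paper
for fold in range(1, 43):
    thickness_m *= 2

print(f"after 42 folds: {thickness_m / 1000:,.0f} km")  # ~440,000 km
```

The relationship is trivially computable, but after a few dozen doublings the result is past the Moon; no physical sheet can instantiate it. That gap between what is expressible and what is physically realizable is the boundary I mean.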

We started out communicating parts of the models, but now we are developing ways to work on the models externally. If the addition of internal models helped us along, the ability to create shared external ones is central to our continued survival. Change is the default state of the universe and of the planet, and it can be locally faster than many life forms can handle. Thus speed, as well as accuracy, is essential for dynamic processes to continue. The sooner we figure out how to collectively model the world around us, the more likely we will be prepared for the next oncoming major change. There will always be a next major change.

Getting back to computation, we can express any mechanism as a collective series of relationships. If we think of the flow of the universe as going from chaos to natural organization and then back to chaos again, this could be seen as permuting most forms of relationships. That is, eventually, somewhere, a given relationship is going to be instantiated. Most likely more than once. If that relationship exists, then the mechanism exists, and if that mechanism is the ability to contain energy, then we can see how the dynamic nature of the universe can morph, rather unintentionally, into some internally dynamic process. At that point, the snowball just starts rolling down the hill...

Final thoughts

Given my lack of concentrated depth on this subject, my expectation is that my views will continue to change as I learn more. I am not certain of anything I said in this post, but I do find this line of reasoning interesting, and it seems to be fairly explanatory. It is, explained in the current context, just a minor model that has been gradually and accidentally accumulating over time. Lots of gaps, and plenty of overlap with the many other models floating about in my thoughts.

We have arrived at a point in our development as an intellectual species where it seems that we really need a deeper understanding of these issues. They need to be clarified, so we can work upwards to correct some of our accidental historic baggage. That is, our loosely organized collective model of the world, as it is chaotically distributed between the billions of us, is not particularly accurate. With lower population densities and less technological sophistication this wasn't a problem, but that seems to have changed. Every person will predict the future differently because their starting points, their models, will differ. Those differences lead to conflict, and enough conflict leads us into taking a rather massive leap backwards. History implies this is inevitable, but I tend to think of it as just one more changing aspect of the world that we have to learn to adapt to. Intelligence, or our collective lack of it, is crucial to understanding how we should best proceed in the future.

Sunday, July 19, 2015

Privacy

There are some things that I only tell my wife. There are things that I share with my family, and there are other things that I share with my closest friends. It's not that any of these things are bad, or dangerous, or in any way harmful to society, but rather that I deliberately choose to manage how I present myself to different people.

One of the pillars of a close personal relationship is that you trust each other enough to share private information. If everything is already public, you lack the ability to build up these deeper relationships.

Thus controlling the 'scope' of the information that we share about ourselves is fundamental. I let my goofy side hang out for my friends, but at work, I try to appear professional. I want to be seen as happy and confident to most people, restricting my interactions on bad days to those closest to me. Sometimes I even bite my tongue and don't say what immediately comes to mind, just because it doesn't fit. It would be awful if every little tidbit about me was constantly floating about in a digital tsunami. I don't want to be an 'open book'. Some aspects of my life are for selected people only.

Long before computers disrupted the world we used to be the masters of our own information. We could choose to tell stuff to certain people, and if we picked them correctly that information would not end up in the gossip mill. It would not go public. What was said between friends stayed between friends. If something was leaked, then it was because you trusted the wrong person.

As more and more of our interactions became digital, we started losing control of our private information. Now it is far too easy for an unknown person to alter the scope of what we are communicating. You no longer know whom you are implicitly trusting. That currently makes private digital communication impossible.

If we want to restore our necessary privacy in the digital age, we're going to have to set down some very specific rules about how information is properly shared. The most obvious one is that unintentional intermediaries to anyone's data should never, ever, share it. They should not alter that person's original chosen scope. They should not steal control from them.

That is, a system administrator who is not a direct party to an email conversation between two other people should never give those emails over to a third party. Communication between a group of people is meant only for those people, and only one of those people can widen the scope of the information. If someone was part of the intended audience, then they have the right to mess with the scope, but if they were not, it should be considered morally wrong for them to 'spy' on others.

This is very simple in principle. If you log into a website and fill out the registration forms, you are having a conversation with the company that created the website. If later you are utilizing their website to have a private conversation with a couple of other people, that conversation no longer includes the company that runs the site. They should not alter the scope.

If you submit a post to a public forum, it's a public conversation. If you post something to a small select group, then it is not public and should not be visible to the public unless one of the members of that group explicitly makes it so.

Only the people in the discussion can change the scope, no one else. If we set this as the convention, then some of our privacy issues go back to normal. We can start building the next generation of technology that enforces this behavior. We can also easily determine when something leaked was morally wrong and choose not to make it worse. To be a whistleblower, you have to be on the inside of the conversations, not just stumble across them while spying on others.
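As a sketch of what technology 'enforcing this behavior' might look like, the whole convention reduces to a single access-control check. The class and names here are invented purely for illustration:

```python
class Conversation:
    def __init__(self, participants):
        self.scope = set(participants)  # the intended audience

    def widen_scope(self, requester, new_member):
        """Only someone already inside the scope may widen it."""
        if requester not in self.scope:
            raise PermissionError(requester + " is not a party to this conversation")
        self.scope.add(new_member)

chat = Conversation({"alice", "bob"})
chat.widen_scope("alice", "carol")  # fine: alice chose to share

try:
    chat.widen_scope("sysadmin", "public")  # an intermediary, not a participant
except PermissionError as e:
    print(e)
```

Everything else in such a system is detail; the invariant is simply that changing the scope requires already being inside it.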

There are, of course, the remaining and rather ugly law enforcement issues. Previously, in order to spy on people, you had to have some evidence that they were up to no good and you had to get permission from the courts (essentially a sober third party) before you could proceed. All of that was to prevent any collected information from being abused, and certainly, history shows again and again that abuse will always be rampant.

We need to return to that as well. No organization should ever be collecting mass amounts of any type of information on people on the off chance that it might be useful for punishing them later. That's so far off the scale of decent behavior that I've really been surprised that more people aren't disgusted by it. A government peeping Tom, located at every house, watching everybody, is beyond creepy.

We need to regain control over the scope of our information. We need to return to a set of rules that doesn't glorify peeping Toms. We need to do this because we respect the rights of our citizens to be able to present themselves as they choose. We need to respect their choices. It's really that simple.

Sunday, July 12, 2015

Sweatshops

Decades ago, when I got out of school, I remember being very wary of accidentally working for a software sweatshop. 

It was a frequent topic of conversation among new grads. All sorts of industries have variations on this theme, but in programming, sweatshops had already started to become significant career hazards.

What we wanted to avoid in those days were jobs with crazy hours, hacking at mediocre code while under humiliating micromanagement. The sort of work that makes you wish you had pursued an alternate career. Maybe forestry.

It's not that I minded long hours when I was young, but given that I was essentially donating my personal time to a company, in exchange I expected to work on meaty problems that would help my career. Most code is fairly routine, and with plenty of practice you can shut off your brain and just pound out stuff that kinda works. The jobs that I have loved over the years have been the ones where we went deeper than that. Where we didn't just 'barely' get things working; rather, we carefully crafted sophisticated solutions that solved real problems for people. Partially solving some aspect of a problem is easy, refining that to work flawlessly is exponentially harder, but it's those deep solutions that people really need; that truly make their lives easier.

Of course, the type of code I wanted to write wasn't just chucking some data onto the screen and then saving it a few minutes later. A sloppy GUI to support editing a crappy data model is just automating something already broken. Deep solutions require complex modeling so that they can add value on top of the basic data collection process. Really big but low-quality data isn't going to be mined for any real insights; there is just too much noise embedded within it. Good user interfaces also require deep insights in order to properly structure the workflow. Littering the screen with tables, dropdowns and buttons is easy for the programmers, but horrible for everyone else.

Besides the long hours and the weak solutions, sweatshops also tended towards hovering over their employees, while trying not to let them know much about the higher-up planning. My guess is that this is done for two reasons.

The first is to keep squeezing as much out of everyone as possible. That's sort of an odd irony since programming is mostly intellectual work; the closer programmers get to burnout, the worse the quality of their efforts. But a really intense sweatshop doesn't get this, so they think continually throwing the development into "crunch mode" will actually net them more results; they'll somehow get a better return on their salaries.

Of course, that directly leads to the second reason. The continual whipping to extract more from the coders will always result in a degenerating code base. If the initial version of the system wasn't too bad, but the next version is expected to be a total bugfest, then that is a strong indication of a sweatshop. When 'mediocre' is the best possible outcome, it sours the morale of many of the programmers. My sense has been that in order to prevent the programmers from realizing this, you have to keep them super busy. If they're just too tired to think at the end of the day, then they are less likely to reach the conclusion that their jobs suck. That they are just churning code.

Thus getting caught up in a sweatshop is a somewhat self-perpetuating cycle. They work you hard enough that you are too tired to escape, so you just hang in there on a day-to-day basis until way too much time has passed or you melt down. If you're really screwed, you don't even have any relevant market experience that would make you attractive to other employers. With no visible options, you'd have to take a serious risk in order to leave.

So, of course, as new grads, we really wanted to avoid this horrific fate. We quickly realized that if you wanted a long career in building modern software, you had to keep finding good positions that didn't burn you out, but also didn't lock you into some unsellable skill.

Decades ago, there were lots of good development shops that were focused on building up reliable systems at a reasonable pace. That, of course, all changed with the dot-com era. All of a sudden programmers became scarce, and programming shifted from deep intellectual construction to just an endless panic. Tech has always suffered from stupid fads, but suddenly the fads themselves became the bleeding edge, and the cost to play was that you had to work for a sweatshop.

Not surprisingly, the software being produced dropped heavily in quality. There are always counterexamples, but collectively the mentality shifted away from taking enough time to build something meaningful, to just getting anything out there and then trying to make it look pretty later so consumers don't notice that it's broken.

It certainly hasn't helped that the methodologies drifted into being hyper-reactive towards the "stakeholders". The idea being to cover one's ass when building a mess by rigorously ensuring that the blame can be placed on the clients. If we did "exactly" and "only" what they wanted (even if they don't understand the consequences), then how can that be wrong? To me, that always seemed like a doctor asking patients to self-diagnose their own illnesses, so that the doctor can't be blamed later if it goes wrong. The clients aren't professional software developers, so why would they know how to build a working system in a reasonable time? They don't even have a realistic sense of what 'working' and 'reasonable' mean. It's a classic recipe for disaster.

Lately, I've been watching the job market again and I am saddened by the number of sweatshops out there. Even worse is that many of them actually brag about it in their job descriptions. Job ads that a few decades back would have drawn zero serious resumes are suddenly extremely popular. Programmers seem attracted to these like moths to a bonfire. Companies are so over the top that they add perks like "frequently paying for dinner" as if they were actual benefits, but to me that just enshrines a rather sick work/life balance. If their priorities are that skewed, the work is probably pretty shoddy as well. I love to work, but not enough to stop feeling bad about missing out on the rest of what life can offer. In those extreme periods in the past when my years just disappeared in an instant, I've looked back with disappointment. Coding is fun, but so are a lot of other things.

It seems like we're in an epidemic of the sort of career-destroying culture that, as a new grad, I was trying so hard to avoid. Programming shouldn't be a continual high-stress hackfest. There are always times in any system's life when it becomes difficult and stressful, but the measure of a good job is that those times are rare, not constant. If the development is thought through and the process is organized, then most days in programming are spent carefully building up reliable code. You need to spend the time to make things of decent quality; they don't just get created by accident. When the work is crazy, you might feel proud that you survived, but you can never really feel proud about creating that inevitable ball of duct tape and bandages. It is just ugly. Myself, I am driven to build things; software is just the medium I choose to work in. If I am going to spend a lot of time building, then I need to be proud of my creations.

High stress, of course, leads to rather poor thinking, and code is only ever as good as the thinking that went into it. Computers are "multipliers", thus bad code realistically just makes the problems worse. You can hide that, but only for a while. So it's been clear for a long time now that even if the code started out okay, throwing stressed-out teams at extending it is detrimental in the long run. Setting up an environment that glorifies stressed-out teams is unlikely to be sustainable (with a few notable exceptions). If you care about what you create, then you have to find an environment that does too.

In many ways, it has been extremely frustrating to watch this madness continue to grow, even as we become dangerously reliant on software. It's like watching a car crash in slow motion. The pools of good jobs are gradually drying up, getting replaced by sweatshops. Each year more people proclaim crazier and crazier approaches to building software, approaches that pivot on exploiting some obviously bad shortcut or some hopelessly redundant waste of time. Eventually, enough people catch on that the latest fad isn't helping, but by then something even worse has come along to replace it. We're trapped in bad approaches right now, so deeply that many newbies think they are actually the core of our profession.

The truth is that at some point, with enough variation, you start to realize that most of the industry's persistent problems are self-inflicted. And by the time you finally get to this realization, you're heavily wishing that you had pursued an alternate career. Like forestry.