Saturday, April 19, 2014

What and How

The real work in programming comes from building up massive sets of instructions for the computer to follow blindly. In order to accomplish this, modern programmers rely on a huge number of ready-made instruction sets, including the operating system, frameworks, libraries and other large-scale infrastructure like databases and web servers. Each one of these technologies comes with an endless number of APIs that provide all of the functional primitives required for interaction.

A modern programmer thus has to learn all sorts of different calling paradigms, protocols, DSLs, formats and other mechanisms in order to get their machines to respond the way that is desired. Fortunately the world wide web provides a seemingly endless collection of examples, reference documents and discussions on utilizing all of this underlying functionality. All a programmer need do is read up on each different collection of functionality and then call it properly. But is that really enough?

Underneath all of the pieces are literally billions of lines of code, written over a 30-40 year period. We've piled it so high that it is easy to lose touch with the basic concepts. Programmers can follow examples and call the appropriate functions, but what happens when they don't work as expected? What happens during the corner cases or disruptions? And what happens when the base level of performance given isn't even close to enough?

In order to really get a system to be stable and perform optimally, it is not enough to just know 'what' to do with all of these calls and interfaces. You have to really understand what is going on beneath the covers. You can't write code that handles large amounts of memory if you don't understand virtual memory and paging in the operating system. You can't write caching if you don't understand the access frequency of the cached data and how to evict old entries. You can't push the limits of an RDBMS if you don't get relations or query optimization. Nor can you do error handling properly if you don't understand transaction semantics.
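To make the caching point concrete, here is a minimal sketch of an eviction-aware cache built on Java's LinkedHashMap. The capacity and keys are purely illustrative; a real cache would be sized from the observed access patterns of the data it holds.

    import java.util.LinkedHashMap;
    import java.util.Map;

    // A minimal least-recently-used cache: old entries are evicted automatically
    // once the cache grows past a fixed capacity.
    public class LruCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public LruCache(int capacity) {
            // accessOrder = true makes iteration order follow recency of access,
            // which is what lets removeEldestEntry() pick the right victim.
            super(16, 0.75f, true);
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Evict the least recently used entry once we exceed capacity.
            return size() > capacity;
        }

        public static void main(String[] args) {
            LruCache<String, String> cache = new LruCache<>(2);
            cache.put("a", "1");
            cache.put("b", "2");
            cache.get("a");          // touch "a" so "b" becomes the eviction victim
            cache.put("c", "3");     // evicts "b"
            System.out.println(cache.keySet()); // prints [a, c]
        }
    }

The point isn't this particular structure; it is that without understanding why and when entries get evicted, the cache's behaviour under real load is anyone's guess.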

Examples on the Web aren't going to help because they are generally lightweight. Nothing but a full sense of 'how' these things actually work will allow a programmer to correctly utilize these technologies. They can sling them around for small, simple programs, but the big serious jobs require considerable knowledge. 

It's much harder for younger programmers to get a real sense of 'how' things work these days. The technologies have become so complex and so obfuscated that unless you grew up watching them evolve, you can't grok their inner madness. The myths we have about younger programmers being better value for the cost are really predicated on an era of simpler technology. We have surpassed that point now in most of our domains. Slinging together code by essentially concatenating online examples may appear quick, but software is only really 'engineered' when it behaves precisely as the builder expects, and it does this not just for the good days, but also for the very bad ones. Understanding 'how' all this functionality works underneath is unavoidable if you need to build something that is resilient in the real world.

This complexity shift quietly underlies the problems that we are having in trying to effectively leverage our technologies. Newbie programmers might be able to build quickly, but if the secondary costs of maintaining the system are stratospheric, the value of the system could be turned negative. In our rush to computerize, few paid attention to this; we were too busy heralding the possible benefits of building the next wave of systems, but it seems as if our tunnel vision is quickly catching up with us.

If computers are really doing what they can, life should be both easier and more precise. That is, our tools should take care of the simple and stupid problems allowing us to focus on higher level concerns. But as anybody out there who has lots of experience with utilizing modern software knows, instead we spend our days bogged down in an endless battle against trivial problems, leaping from silo to silo. Some simple problems have vanished, like copying bits from one machine to the next, but they've been replaced by far more silliness and a plague of partially usable tools.

If we are going to make good on the potential of computers, more people need to really understand 'how' they work under the covers. And as they understand this, they'll be able to craft better pieces that really will simplify our lives, rather than complicate them. Just being able to sling together a larger collection of instructions for a machine does not mean that those instructions are good, or even usable. The great mastery in programming is not in being able to code, but rather in knowing what code is helpful and what code causes harm.

Saturday, April 12, 2014

Collective Protection

Over the last few decades of my career, at least half of my working experiences have been really good. They've been great opportunities to solve complex problems, learn new domains and to build things that make people happy. I've been proud of the things I've built even though none of them are famous or made me any significant money. My favorite jobs have all been wrapped in a strong engineering culture; one that strives to always do the right thing, pay attention to the details, avoid politics and focus on getting the best quality possible.

The other half of my jobs, however, have not been great experiences. Most of them were dominated by horrible people or bad cultures. In those circumstances, you try to do what you can to improve things, but generally the deck has been stacked against you. The only real option is to vote with your feet. To walk away. And I've had to do that many times, although sometimes I wasn't able to escape as early as I wished.

In a really bad environment you get pummelled by what I call 'unqualified decisions'. That is, bad choices made by someone in a position of authority who doesn't have the background or knowledge to make a good choice. Back in the dotcom boom, when software development became popular, it attracted a great deal of outsiders. It still does. These are people with little or no direct experience in serious software development. They know enough to be dangerous and they are usually over-confident enough to be convincing, at least to upper management. Once someone like that takes root in a high enough position, nothing a software developer says will prevent trouble. Since we've never really gelled into a recognized profession, a software developer has less credibility as seen from above when compared to a non-technical, egotistical jerk. And without credibility, any attempt to escalate problems, raise concerns or prevent outright disasters will fail miserably. There is just nothing you can do if your stakeholders don't take you seriously or don't want to listen.

What has often occurred in this type of scene is that an increasing number of bad decisions flow downhill, causing bigger problems, which usually spurs on more bad decisions. The troublemakers get aggressive if the engineers point to the real underlying causes, so the engineers just end up passive and do whatever they are told to do until they get their opportunity to leave. Often it can be really sad, as it amounts to the slow and steady death of some previous work. The antagonists don't even catch the blame for their mistakes because they attribute the faults to departing staff. They quite often get away with it, and sometimes even have long-running careers hopping from one organization to the next replaying the same sad scenario. Life is tough.

Besides walking away, I've often wondered if there was some other solution for this type of problem. Obviously if software developers were respected as a profession, their advice and opinions would carry more weight during personality or directional conflicts. But we're an immature industry and we can't even agree on basic standards or titles, let alone some larger knowledge base that's respected as professional. Our industry track record is quite horrible right now. Security breaches, information theft and the rapid erosion of everyone's privacy are all tragedies that are laid at our feet. At its worst, software has killed people, brought down companies and wreaked havoc in the world markets. Most people's home computers are plagued with an endless series of nasty viruses, and the one time that we all pulled together to avert a technological calamity, Y2K, it turned out that the problem was grossly overrated. No wonder people don't trust us. Don't believe that we are professionals. And even think that the whole mess is just some random performance art where we hack away madly while hoping that it will somehow magically work well enough in the end.

As a 'profession' we are not taken seriously. There are pockets of goodness, but even in many software companies we are still the downtrodden. Crazy coders who whack out instructions that most of our handlers believe they could do trivially, if only they wanted to spend a little more time on it. That's why we lose so often in personality clashes with these people and that's why so much of what we build these days is reckless crap thrown together in a fraction of the time that would have been needed to do it properly. We even have popular movements that twist our own values to allow for increased micro-management within ever shortening spasms of coding panic. Ideas that might sound fun, but that any other sane profession would immediately see as demeaning and fight against. We, on the other hand, just keep getting sucked in again and again.

Maybe some day we'll mature into a real profession, but we've actually moved farther away from that during my career. The rapid growth of the dotcom age diluted our respect and the inevitable implosion at the end sealed the deal. Our latest security and privacy debacles aren't helping either. 

Our last ditch idea, the Hail Mary pass of any oppressed workforce, is the one that 25 years ago I could never have imagined myself suggesting. I'm still not sure I like it, but I don't see any viable alternatives. At the turn of the last century, profits and greed were out of control. Blue collar workers were at the mercy of their employers and the world for many was an unhappy place. Robert Reich, in his film Inequality for All, pointed to the decline of unions as one of the causes of a shrinking middle class in North America. Not explicitly mentioned, but worth noting I think, has been the shift from blue collar jobs to white collar ones. In the early days of this shift an office job was still seen as higher status, so it didn't seem to need the same level of protection as those poor souls being exploited in the factories.

Much has changed, and computers have helped to thin the ranks of office workers, replacing them with vast armies of IT workers, on both the development and operations sides. If developers sometimes get it bad from crazy non-techies, operations people have it ten times worse. Together we're constructing vast piles of hopelessly tangled complexity, most of which could have been avoided if things weren't so disorganized and rushed.

Learning from the past, pretty much the easiest thing we can do to change our destinies is to unionize. Not for pay or seniority, but rather to protect ourselves from all of the bad people out there who want to compromise the work down to an unusable quality. If we did go this route, we could create a new IT union that espoused our values and that helped us enforce a code of good conduct for the things that we are creating. If we're being pushed to do bad things, we could appeal to the union to intervene on our behalf. A neutral third party to arbitrate disputes. We'd have some way of preventing technical debt from swamping the projects. At minimum we'd have access to a broader perspective on what is really current practice, so we could tell right from wrong. Good practice from foolishness. We'd have some way of really standardizing things that wasn't controlled by specific vendors.

The dark side would be ensuring that control doesn't fall into the hands of the wrong people, either vendors or others with self-oriented agendas. We could mitigate that by voting for leadership and setting term limits to prevent people from settling in. Big companies would hate the idea and actively work against it, but if we united ourselves in order to stabilize the profession and build better stuff, even the public would find it hard to be against the idea right now. No doubt the fiercest opposition would come from within the ranks of programmers themselves. We're an uncontrollable herd of cats, setting off in every direction possible, so any attempt that is perceived to change that status quo is usually fought heavily. But the idea here is not to rein in coding practices; rather it is to clearly establish reasonable practices and then give the programmers some leverage with their stakeholders to follow those directions if desired. And to lend some type of real authority to inter-programmer disagreements. If one coder has gone way off the map, it would be useful for a project to be able to establish this with some certainty before it is too late.

Overall I'm still not certain that we don't have other alternatives, but I'm not really aware of any at the moment (comments are welcome). As our societies rely more heavily on computers, the pressure to write crappier stuff has increased. This is a dangerous trend that I know we have to reverse if we really want to leverage our technologies. If I felt our industry was maturing, this type of remedy wouldn't be necessary, but there is little trustworthy evidence that we've improved and considerably more evidence that we've actually degenerated in the face of our mounting complexity. Right now I don't trust most software, and I generally find myself dancing around an endless series of annoying bugs just to get simple things done. I've lost my belief that computers are actually helping us. We need to fix this before it becomes catastrophic.

Sunday, March 2, 2014

Unwinding Complexity

I keep coming back to complexity. It underlies so much of what software developers do for a living. Building a big, sophisticated software system is all about juggling millions of little moving parts. When it goes well the results are organized, stable and hopefully incredibly useful. We strive to build software that people can rely on to make their lives easier.

Building software is like building anything else, except that few people can see the final results. They can see the interface, feel the performance and sense the stability, but rarely can they stand back and appreciate all of the pieces. This allows programmers to hide a tremendous amount of dirt in the system from their stakeholders. Given the high expectations and short timeframes, programming has become increasingly about sweeping dirt under the 'rug'. People chase every possible shortcut, even when they know the long-term consequences are dire. Added to those problems, deep analysis of any business reveals lots of skeletons in its closets plus a raft of inefficiencies that have built up over its history. Rarely does a development project clean up the mess it finds; rather it just lifts the rug a little higher and sweeps more dirt under it.

Thus the millions of little parts in a project aren't just code and configuration. They are the huge collection of what is essentially artificial complexity, both internal and external, that has built up over time. There is never enough time to get it cleaned up and reorganized, so the results just contribute to the ever-growing organizational background noise that increasingly hampers progress. Of course it is far worse for larger and older organizations, but even little companies have their messes.

You'd think that a big new development would be the perfect time to clean house, but no one ever leaves the time to get it done. No one ever has the energy. The inevitable consequence is that the dirt builds up layer upon layer. Decades go by while the artificial complexity grows into an ever larger Gordian knot. In many ways our rush to automate with software has just made these problems worse. Computers can simplify, but they can more easily obfuscate, and given the way we build software these days, the latter is the more frequent occurrence.

At some point this complexity exceeds the ability of people to deal with it. These days most large non-technical organizations just flitter around that threshold, spending most of their time barely in control. Our natural response is to try to tighten down the processes, usually with new rules, but these rules just add more rope to the growing knot. The core, all of those skeletons and dirt, almost never gets touched. The ball of complexity just grows larger. This vicious cycle is what fascinates me. Surely there is some way to reverse the cycle; to unwind the problems?

In the early days of the Industrial Revolution people were automating manual tasks in a rather ad hoc fashion. New inventions were siloed, and then thrown into use to deal with isolated aspects of the existing labor. In time, people like Henry Ford approached the problem from a broader perspective. They set the stage for modern factory production: well-organized 'lines' of production that smoothly turn out works of high precision. An individual craftsman may put more skill and care into handcrafting their work, but their individuality shows in the output. Each piece is slightly different. Factories took some 'soul' out of the work, but they brought down the price while increasing the quality. This type of tradeoff is now well-established for a lot of manual work. We know how to mass-produce things like cell phones and build incredibly sophisticated projects like skyscrapers. In a sense we can use scale to minimize the overall complexity to a point where we can control quality. But all this knowledge only applies to physical things.

Organizational processes and software share the same property: they are crafted mostly with intellectual effort, not physical effort. The people doing this work these days operate as individual craftsmen, each infusing the output with their own personal stamp. Added to that, we have no way to measure the quality of our intellectual effort on either the small scale or the large one. We don't know if the results have been well thought out or are half-baked. Anyone can propose an idea to fix a process problem, but there is no way to measure the quality or depth of that idea. It may fix the issues or it may be misguided and make them worse. There is no way to tell.

What I keep wondering about complexity is whether or not it really is possible to unwind it correctly, in a controlled manner. Could I, for instance, actually fix some of the deep company problems that I've run across? Not just in the software, but also in the organization?

My best guess is that complexity can be unrolled, and that it can be unrolled in a controlled process that is predictable. I keep thinking about what the equivalent of a factory line might be in this case. The problems roll in on one side and the appropriate thinking rolls out the other. In between, any and all of the necessary intellectual work that needs to be completed is done in an albeit uncreative, but very reliable way. That is, there is some definable process that one should apply to intellectual pursuits that eliminates artificial complexity by essentially normalizing the work of thinking into a finite number of defined steps. To clean up the mess, you just need to correctly follow the process. It might be time-consuming, but it is also deterministic. Once the work gets stripped down that way, it should gain intellectual precision, but of course it loses that individual touch. It clearly wouldn't be as fun or creative as intellectual work is today, but the results would hopefully make up for that.

Industrialized thinking would be a much larger leap forward than factory automation was. We could, for instance, tidy up our governmental organizations so that for once they operate efficiently and at the least cost. That probably wouldn't decrease our taxes; rather our countries could provide more services at a much higher quality. The costs incurred by a bedraggled bureaucracy would finally be minimized instead of gradually eating more money for less output. Companies would be freed from their internal fog to really be able to compete well in their markets. Costs would be controllable.

I can definitely see how these ideas could change our world, but are they really possible? It could just be that the invisible nature of thinking negates this possibility, in the same way that software isn't visible to most people. We can't yet measure what we can't see. It is this indirect link that binds building software to any possible industrialized thinking concept. How do you guarantee the quality of intellectual output? You can't just hire anybody for quality control, and you can't even be assured that the people you hire are really seeing things as they truly are. These types of problems already dominate software development.

So far I don't have any decent answers for this problem. Some people might suggest documentation, but I've seen that fail too often as an approach. People don't proofread very well, and when they do it is usually for syntax, not concepts like semantics or higher. To industrialize, the output has to be checkable in some way at an intermediate stage, long before it is too late. It would have to be reliable.

One approach that might have merit is to trace all things backwards. Start at the end and traverse back to the start. That is, if you wanted to simplify something like the laws of a country, you start tracking those that are actually being applied. Where they depend on other laws for properties like consistency or integration, those get added into the mix as well. At some point you'll end up with a denormalized tree (a DAG, actually) of what's currently in practice. Normalize these relative to a consistent underlying set of principles and the result should hopefully be a considerably smaller set of 'clean' laws that could effectively replace the older ones. Even from the description you can tell that this whole process sounds rather tedious and boring, but if it really has those qualities and it is complete when done, then it is likely the sort of industrialized approach that we are looking for. It might not be as fun as crafting endless new laws on top of each other, but compressing the overall complexity of the legal system would make it far easier for people to get fairer resolutions to their problems. Some lawyers might not be happy about that, but it would benefit society.
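As a rough sketch only, and assuming the rules and their dependency links could ever be captured explicitly, the backward trace itself is just a reachability walk over that structure. The Rule type and names below are entirely hypothetical; the separate normalization step is not shown.

    import java.util.*;

    // Start from the rules actually being applied and walk their dependencies,
    // collecting the denormalized DAG of everything still in practice.
    public class BackwardTrace {

        static class Rule {
            final String name;
            final List<Rule> dependsOn = new ArrayList<>();
            Rule(String name) { this.name = name; }
        }

        // Collect every rule reachable (backwards) from the ones in active use.
        static Set<Rule> traceBack(Collection<Rule> inActiveUse) {
            Set<Rule> reachable = new LinkedHashSet<>();
            Deque<Rule> pending = new ArrayDeque<>(inActiveUse);
            while (!pending.isEmpty()) {
                Rule rule = pending.pop();
                if (reachable.add(rule)) {
                    // Follow dependencies needed for consistency or integration.
                    pending.addAll(rule.dependsOn);
                }
            }
            return reachable;
        }

        public static void main(String[] args) {
            Rule base = new Rule("base principle");
            Rule oldLaw = new Rule("old law");      // never referenced: drops out of the trace
            Rule current = new Rule("current law");
            current.dependsOn.add(base);

            Set<Rule> keep = traceBack(List.of(current));
            keep.forEach(r -> System.out.println(r.name)); // only what's actually in use
        }
    }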

Roughly, it took about a hundred years for manual automation to become industrialized, and it probably took another hundred to get to today's level of precision. Cleaning up our disorganization and ad hoc thinking processes will most likely follow suit at some point in the future. It may take generations, but it is likely inevitable. Until then, the only ways to clean up artificial complexity are time and luck, and perhaps the occasional brilliant insight.

Friday, February 21, 2014

Levels of Code

Throwing together computer code isn't that tricky, but doing it well enough that it's usable in a serious environment requires a lot of work. Often programmers stop long before their code gets to an 'industrial strength' level (5 by this classification).

For me the different levels of code are:

        1. Doesn't compile
        2. Compiles but doesn't run correctly
        3. Runs for the main logic
        4. Runs and handles all possible errors
        5. Runs correctly and is readable

And the bonus case:

        6. Runs correctly, is readable and is optimized

The level for any piece of code or system under consideration is its lowest one; its weakest link. Thus if the code is beautifully readable but doesn't compile then it is level 1 code, not level 5.

Level 1 can be random noise or it can be only slightly broken, but it doesn't matter. It wouldn't even be worth mentioning except that people sometimes check this level of code into their repositories, so it deserves its own level. Level 2 isn't much of an accomplishment either, but a lot of level 2 code actually makes its way into software unnoticed.

Level 3 is where most programmers stop: they get the main logic functioning, but then fail to deal with all of the problems that the code can encounter as it runs. This includes not checking returns from lower level calls and not being able to cope with the unavailability of shared resources like databases.

Level 4 involves being able to correctly deal with any type of external failure, usually in a way that doesn't involve a crash and/or manual intervention. Networks, databases, filesystems, etc. can all become unexpectedly unavailable. The code should wait and/or signal the problem. Once the resource is available again the code should automatically return to using it properly. Any incomplete work should be correctly finished. If the downtime period is short, it shouldn't even be noticed, except for a log entry.
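A minimal sketch of what level 4 behaviour might look like for one shared resource, assuming a JDBC database; the URL, credentials, table name and retry delay are all illustrative placeholders, not a prescription.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // If the database is temporarily unavailable, log it, wait, and retry until
    // the resource returns, rather than crashing or needing manual intervention.
    public class ResilientQuery {

        private static final String URL = "jdbc:postgresql://localhost/appdb"; // placeholder
        private static final long RETRY_DELAY_MS = 5_000;

        public static int countOrders() throws InterruptedException {
            while (true) {
                try (Connection conn = DriverManager.getConnection(URL, "app", "secret");
                     Statement stmt = conn.createStatement();
                     ResultSet rs = stmt.executeQuery("SELECT COUNT(*) FROM orders")) {
                    rs.next();
                    return rs.getInt(1);   // level 3 stops here and assumes success
                } catch (SQLException unavailable) {
                    // Level 4: signal the problem and wait; apart from this log
                    // entry, a short outage should go unnoticed by the caller.
                    System.err.println("database unavailable, retrying: " + unavailable.getMessage());
                    Thread.sleep(RETRY_DELAY_MS);
                }
            }
        }
    }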

Level 5 is somewhat subjective, but not as much as most people might assume. Realistically it means that some other qualified programmer can come along and quite easily figure out how to enhance the code. That doesn't mean the code will remain at level 5 after the changes, but it does mean that the work to change it won't vex the next coder. If they understand the domain and the algorithm then they will understand where to insert the changes. Indirectly this also implies that there are no extra variables, expressions or other distractions. It is surprising how little level 5 code is actually out there.

Level 6 requires a deep understanding of how things work underneath. It means that the code meets all of the other levels while doing the absolute minimum amount of work, and it also takes advantage of techniques like memoization when that is both possible and practical. Extremely rare, it is often the minimum necessary for code to be described as 'elegant'. Failed attempts to reach level 6 can result in the code dropping several levels, sometimes all the way back to level 2; hence the expression about premature optimization being the root of all evil.
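For the memoization mentioned above, here is a minimal sketch; the expensiveLookup computation is a made-up stand-in for any costly, pure function.

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.function.Function;

    // Wrap an expensive, pure function so repeated calls with the same argument
    // do the work only once.
    public class Memoizer {

        static <K, V> Function<K, V> memoize(Function<K, V> fn) {
            Map<K, V> cache = new ConcurrentHashMap<>();
            // computeIfAbsent only invokes fn the first time each key is seen.
            return key -> cache.computeIfAbsent(key, fn);
        }

        // Stand-in for any slow, deterministic computation.
        static long expensiveLookup(int n) {
            long result = 0;
            for (int i = 0; i < 50_000_000; i++) { result += (long) i % (n + 1); }
            return result;
        }

        public static void main(String[] args) {
            Function<Integer, Long> fast = memoize(Memoizer::expensiveLookup);
            System.out.println(fast.apply(7));  // slow: computed once
            System.out.println(fast.apply(7));  // fast: served from the cache
        }
    }

Of course this only meets the level if it is both possible (the function is pure) and practical (the cache doesn't grow without bound); done carelessly it is exactly the kind of optimization that drops code back down the levels.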

Level 4 code is often good enough, but going to level 5 makes it possible for the code to be extended. Level 6 is a very worthwhile and achievable goal, often seen within the kernel of successful products, but it's incredibly difficult to maintain such a high standard across a large code base.

In this classification I haven't explicitly dealt with reuse or architecture, but both of these are absolutely necessary to get to level 6 and certainly help to get to level 5. Readability is clearly impaired if similar chunks of code are copy-pasted all over the place. A good architecture lays out the organization that allows the underlying sections of the code to efficiently move the data around, which supports level 6. In general, disorganized code usually converges towards level 2, particularly if it is under active development.

Sunday, February 16, 2014

Principles

This post http://tekkie.wordpress.com/2014/02/06/identifying-what-im-doing/ by Mark Miller really got me thinking about principles. I love the video he inserted by Bret Victor at http://vimeo.com/36579366 and while the coding examples in it were great, the broader theme of finding a set of principles really resonated with me. I've always been driven to keep building larger, more sophisticated systems, but I wasn't really trying to distill my many objectives into concrete terms. Each new system just needed to be better than the last one (which becomes increasingly hard very quickly).

Framing my objectives as a set of principles however sets an overall theme for my past products, and makes it easier to be honest about their true successes and failures. 

As for principles, I no doubt have many, but two in particular drive me the hardest. One for the front end and another for what lies behind the curtains. I'll start with the latter since to me it really lays the foundations for the development as a whole.

Software is slow to write; it is expensive and it is incredibly time consuming. You can obviously take a lot of short-cuts to get around this, but the usefulness of software degrades rapidly when you do, often to the point of negating the benefits of the work itself. As such, if you are going to spend any time building software you ought to do it well enough that it eventually pays for itself. In most instances this payoff doesn't come from just deploying some code to solve a single problem. There are too many development, operational and support costs to make this an effective strategy. It's for exactly this reason that we have common code like operating systems, libraries, frameworks, etc. But these pieces are only applied to the technical aspects of the development; what about the domain elements? They are often way more complex and more expensive. What about the configuration and integration?

My backend principle then is really simple: any and all work done should be leveraged as much as possible. If you do the work for one instance of a problem then you should be able to leverage that effort for a whole lot of similar problems. As many as possible. For code this means 'abstraction', 'generalization' and eventually 'reuse'. At an organizational level this means some architectural structure that constrains the disorganization. At the documentation level this means that you minimize time and maximize readership.

Everything, at every level, should be designed and constructed to get the utmost leverage out of the initial effort. Every problem solved needs to be viewed in a much larger context to allow people to spot similar problems elsewhere.
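As a small, contrived illustration of that kind of leverage (the Customer type and its fields are purely hypothetical), the effort of one formatting routine can be generalized once and then reused for every similar problem.

    import java.util.List;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    // Instead of a one-off routine that turns a list of customers into a CSV
    // line, generalize the work so any list and any field can reuse it.
    public class Csv {

        // One piece of work, generalized: join any list of items using any
        // field extractor. Every report, export or log line can lean on it.
        public static <T> String toCsv(List<T> items, Function<T, String> field) {
            return items.stream().map(field).collect(Collectors.joining(","));
        }

        record Customer(String name, String city) {}

        public static void main(String[] args) {
            List<Customer> customers = List.of(new Customer("Ada", "London"),
                                               new Customer("Grace", "New York"));
            // The same initial effort is now leveraged for two different problems.
            System.out.println(toCsv(customers, Customer::name));  // Ada,Grace
            System.out.println(toCsv(customers, Customer::city));  // London,New York
        }
    }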

Naysayers will invoke the specter of over-engineering as their excuse to narrow down the context to the absolute smallest possible, but keep in mind that it is only over-engineering if you never actually apply the leverage. If you manage to reuse the effort, the payoff is immediate, and if you reuse it multiple times the payoff is huge. This does mean that someone must grok the big picture and see the future direction, but there are people out there with this skill. It's always hard for people who can't see big pictures to know if someone else really does or not, but that 'directional' problem is more about putting the wrong people in charge than it is about the validity of this principle. If the 'visionary' lacks vision then nothing will save the effort; it is just doomed.

When a project has followed this principle it is often slower out of the gate than a pure hackfest. The idea is to keep building up sets of larger and larger lego blocks. Each iteration creates bigger pieces out of the smaller ones, which allows for tackling larger and larger problems. Time is no longer the enemy, as there are more tools available to tackle a shrinking set of issues. At some point the payoff kicks in and the project actually gets faster, not slower. Leverage, when applied correctly, can create tools well beyond what brute force can imagine. Applied at all levels, it frees up the resources to push the boundaries rather than to be stuck in a tar pit of self-constructed complexity.

My principle for the front end is equally effective. Crafting software interactions for people, whether it be a command line, a GUI or a NUI, is always slow and messy work. It is easily the most time-consuming and bug-prone part of any system. It is expensive to test, and any mistakes can cost significant resources in debugging, support, training and documentation. A GUI gone bad can suck a massive hole into a development project.

But an interface is just a way of hanging lots of entry points to functionality for the users to access. There is a relative context to save the users from having to respecify things, and there is often some navigational component to help them get quickly from one piece of functionality to another, but that's it. The rest is literally just window dressing to make it all look pretty.

So if you are going to build a GUI, why would you decompose everything into a billion little pieces and then start designing the screens from a bottom-up perspective? That would only ensure extra effort in making endless screens with nearly the same bits displayed in a redundant manner. You can't design from the bottom up; rather it must be from the top down. You need to look at what the users are really doing, how it varies, and then minimize that into the smallest, tightest set of entry points that they need. An interface built this way is small. It is compact. It contains fewer screens, less work and less code. It takes the users quickly to what they need and then gets them back to a common point again with as little effort as possible. It's less work and they like it better.

A system with hundreds of scattered screens and menus is almost by definition a bad system, since its sheer size prevents it from being cohesive; from being usable. Functionality is useless if you can't find it. Sure, it is easier to write, since you don't have to agonize over the design, but that lack of thought comes with a heavy price tag.

Programmers build GUIs from the bottom up because they've been told to build the rest of the code from the bottom up. But for an interface this is backwards. To be effective, the interface has to be optimized for the user, and of course this will make the programmer's job far more difficult, but so what? Good coding is never easy, so forcing it to be easy simply dumps the problems back onto the people we are trying to help. The system should be easy to use even if that means the code is harder to write. And if the work is hard, but relatively redundant, then that is precisely what the first principle is for. The difficult bits should be collected together and encapsulated so that they can be leveraged across the entire system. So, for example, if the coders spent extra time generalizing a consistent paging mechanism for a screen, then that same code should be applied to all screens that need paging. Ten quick but flaky paging implementations are ultimately more expensive and very annoying for the users.
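As a minimal sketch of what that shared paging mechanism might look like (the page size and item types are illustrative, and a real screen would likely page at the query level rather than over an in-memory list), one generalized Pager can back every screen that needs it.

    import java.util.List;

    // One consistent paging mechanism, reused by every screen that pages,
    // instead of ten slightly different, flaky implementations.
    public class Pager<T> {
        private final List<T> items;
        private final int pageSize;

        public Pager(List<T> items, int pageSize) {
            this.items = items;
            this.pageSize = pageSize;
        }

        public int pageCount() {
            return (items.size() + pageSize - 1) / pageSize;
        }

        // Return the items for a zero-based page, clamped to the valid range.
        public List<T> page(int pageNumber) {
            int from = Math.max(0, Math.min(pageNumber, pageCount() - 1)) * pageSize;
            int to = Math.min(from + pageSize, items.size());
            return items.subList(from, to);
        }

        public static void main(String[] args) {
            Pager<String> screen = new Pager<>(List.of("a", "b", "c", "d", "e"), 2);
            System.out.println(screen.page(0)); // [a, b]
            System.out.println(screen.page(2)); // [e]
        }
    }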

It's hard to put a simple name to this second principle, but it could be characterized by stating that any people/machine interface needs to be designed in a top-down manner to ensure that it is optimized for the convenience of people rather than for the convenience of the construction. If people are going to benefit from software then they have to be the highest priority for its design. If money or time is a problem, less stuff should be delivered, but the priority must be people first.

Both principles echo strongly in my past works. Neither is really popular within the software development communities right now, although both frequently get lip service. People say they'll do both, but it is rare in actuality. Early agile, for instance, strongly focused on the end users, but that gradually devolved into focusing on the stakeholders (management) and generally got pushed aside for the general gamification of the development process. These days it is considered far better to sprint through a million little puzzles, tossing out the results erratically, than it is to ensure that the work as a whole is consistent and cohesive. Understanding the larger context is chucked outside the development process onto people who are probably unaware of what that really means or why it is vital. This is all part of a larger trend where people have lost touch with what's important. We build software to help people; making it cheap or fun or any other tasty goal is just not important if the end product sucks.