The real work in programming comes from building up massive sets of instructions for the computer to follow blindly. To accomplish this, modern programmers rely on a huge number of ready-made instruction sets, including the operating system, frameworks, libraries and other large-scale infrastructure like databases and web servers. Each one of these technologies comes with an endless number of APIs that provide all of the functional primitives required for interaction.
A modern programmer thus has to learn all sorts of different calling paradigms, protocols, DSLs, formats and other mechanisms in order to get their machines to respond the way that is desired. Fortunately the world wide web provides a seemingly endless collection of examples, reference documents and discussions on utilizing all of this underlying functionality. All a programmer need do is read up on each different collection of functionality and then call it properly. But is that really enough?
Underneath all of the pieces are literally billions of lines of code, written over a 30-40 year period. We've piled it so high that it is easy to lose touch with the basic concepts. Programmers can follow examples and call the appropriate functions, but what happens when they don't work as expected? What happens during the corner cases or disruptions? And what happens when the base level of performance given isn't even close to enough?
To really get a system to be stable and perform optimally, it is not enough to just know 'what' to do with all of these calls and interfaces. You have to really understand what is going on beneath the covers. You can't write code that handles large amounts of memory if you don't understand virtual memory and paging in the operating system. You can't write caching if you don't understand the access frequency of the cached data and how to victimize old entries. You can't push the limits of an RDBMS if you don't get relations or query optimization. Nor can you do error handling properly if you don't understand transaction semantics.
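To make the caching point concrete, here is a minimal sketch of one common victimization policy, least-recently-used (LRU) eviction. The class name and capacity are illustrative, not drawn from any particular library; it only shows the underlying idea that every 'get' refreshes an entry's recency, and the stalest entry is the victim when the cache is full.

```python
from collections import OrderedDict

class LRUCache:
    """A toy cache that victimizes the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # insertion order doubles as recency order

    def get(self, key):
        if key not in self.entries:
            return None
        self.entries.move_to_end(key)  # mark as most recently used
        return self.entries[key]

    def put(self, key, value):
        if key in self.entries:
            self.entries.move_to_end(key)
        self.entries[key] = value
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # victimize the oldest entry
```

A programmer who only knows 'what' call to make would never guess that a cold cache, or a working set just one entry larger than the capacity, can make this structure strictly worse than no cache at all; that insight comes only from understanding 'how' eviction works.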
Examples on the Web aren't going to help because they are generally lightweight. Nothing but a full sense of 'how' these things actually work will allow a programmer to correctly utilize these technologies. They can sling them around for small, simple programs, but the big serious jobs require considerable knowledge.
It's much harder for younger programmers to get a real sense of 'how' things work these days. The technologies have become so complex and so obfuscated that unless you grew up watching them evolve, you can't grok their inner madness. The myths we have about younger programmers being better value for the cost are really predicated on an era of simpler technology. We have surpassed that point now in most of our domains. Slinging together code by essentially concatenating online examples may appear quick, but software is only really 'engineered' when it behaves precisely as the builder expects, and it does this not just for the good days, but also for the very bad ones. Understanding 'how' all this functionality works underneath is unavoidable if you need to build something that is resilient in the real world.
This complexity shift quietly underlies the problems we are having in trying to effectively leverage our technologies. Newbie programmers might be able to build quickly, but if the secondary costs of maintaining the system are stratospheric, the value of the system could be turned negative. In our rush to computerize, few paid attention to this; we were too busy heralding the possible benefits of building the next wave of systems. But it seems as if our tunnel vision is quickly catching up with us.
If computers are really doing what they can, life should be both easier and more precise. That is, our tools should take care of the simple and stupid problems allowing us to focus on higher level concerns. But as anybody out there who has lots of experience with utilizing modern software knows, instead we spend our days bogged down in an endless battle against trivial problems, leaping from silo to silo. Some simple problems have vanished, like copying bits from one machine to the next, but they've been replaced by far more silliness and a plague of partially usable tools.
If we are going to make good on the potential of computers, more people need to really understand 'how' they work under the covers. And as they understand this, they'll be able to craft better pieces that really will simplify our lives, rather than complicate them. Just being able to sling together a larger collection of instructions for a machine does not mean that those instructions are good, or even usable. The great mastery in programming is not in being able to code, but rather in knowing what code is helpful and what code causes harm.