We live in an era where a massive volume of information is available about any topic. An era where people combine common knowledge into idealized concepts like "best practices", giving them a semi-official-sounding endorsement. An era where much of what we "know" to be right, isn't really.
Those of us who build or operate computers work in an industry where we wildly guess at our needs. Where we cling to ideas that are easily proven incorrect, because we lack better ones. Where we react, rather than understand. An industry that often states incorrect things with a certainty that would be laughable, if only it were a laughing matter.
There is a huge distance between an idea that sounds good and a good idea; a distinction that too easily escapes so many people. Recently I stumbled across another classic example: forcing users to change their passwords at regular, enforced intervals.
If a person stands back and thinks hard about a problem, they can often envision a reasonable number of the variables. In that context, they might come up with some pretty reasonable-sounding solutions, but they always need to understand that their "universe" of construction was just an artificially limited subset of the real one. We're not able to work at the full scale, only a subset. That makes a huge difference, because while the primary behaviors may work as expected with the primary variables, it's the secondary side-effects that are not properly accounted for. And sometimes, those side-effects are far more significant than the original variables.
Forcing users to change their passwords on a fixed cycle, and to not reuse old passwords, is considered a "best practice" by most of the IT industry. A lot of intelligent-sounding arguments have been devised to justify this approach. In theory it sounds great.
Anyone who has actually worked in an environment where this has been implemented, if they are objective and thorough, will find empirically that it frequently does not work. It can actually force a significant number of the users into making their passwords less physically secure, and far less secure in general, not to mention that it is seen as an irritant and a morale killer.
The problem is that most people barely remember their password to begin with. And in many environments it is not at all uncommon to have lots of passwords. All of the grand attempts at unifying systems under big corporate user databases have generally failed due to politics, complexity, or technological problems. Most users still have multiple passwords, and many users have dozens of them, particularly operations and development staff.
While some truly organized person with a great memory devised this password-changing scheme -- perhaps fearing that a stale password was a weak one -- the bulk of the population does not have the capacity or desire to change their multiple passwords every few months. Stale passwords are no easier or harder to crack through brute force; their biggest problems are getting out verbally to the public, or a crack going unnoticed. Not sharing accounts and passwords, locking out dormant accounts, and telling users each time they log on when they were in last are all effective ways around these issues.
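The "tell users when they were in last" mitigation is simple enough to sketch. The function and storage below are hypothetical illustrations, not any particular system's API; the point is that showing the previous login time lets a user notice a compromise themselves.

```python
from datetime import datetime, timezone

# Hypothetical in-memory record of each account's last successful login;
# a real system would persist this alongside the account data.
last_login = {}

def record_login(username):
    """Return a notice about the previous login, then record this one.

    If the "last login" time shown isn't one the user recognizes,
    someone else has been in the account -- the crack gets noticed.
    """
    previous = last_login.get(username)
    last_login[username] = datetime.now(timezone.utc)
    if previous is None:
        return "First login on record."
    return f"Last login: {previous.isoformat()}"
```

The first login for a user returns "First login on record."; every later login reports the previous timestamp.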
Stale passwords aren't necessarily weak passwords. Forcing a fresh crack attempt after every password change doesn't negate why the initial attempt was successful; the problem remains.
Forcing people to change their password every three months forces the passwords out of most people's active memory. That is, it's far harder to remember something that is constantly changing, so you don't. And that is one big, giant, huge, enormous step backwards.
So once the users cannot remember their passwords, they must find other ways not to forget them: writing them down, memory schemes, common dates, etc. Each and every one of these schemes is weak by definition. Where a user "might" have picked a strong password, frequent forced changes push most of them into definitely picking weak ones. The problem hasn't been fixed; it has simply been traded for a worse one.
The missed variable in this case is people's inability to remember constantly changing things. For a security expert whose focus is on passwords, the fact that the bulk of the population is too focused on their work to be able, or to care, to spend continuous effort on creating unique passwords is totally missed; it just isn't added to the equation. Their focus is too narrow to match the reality.
An additional problem is procrastination: most users wait until the very last moment, when the computer forces them to change their password, so the new password is often chosen in panic and haste.
In many cases, using some type of simplification scheme is the only way to avoid constantly forgetting the passwords. Automated checks may prevent a bit of this, but there are a multitude of clever ways around them. For instance, my six-digit phone-mail password is actually just a two-digit number that increments by 2. For the once-a-month, call-me-back message I get, real security is pointless. I kept forgetting six digits; remembering one small number isn't that bad.
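Some quick arithmetic shows how much a rotation-friendly scheme like that shrinks an attacker's search space. The numbers below are just the example above worked out, nothing more:

```python
# A truly random six-digit PIN: 10^6 equally likely values.
random_space = 10 ** 6

# The scheme described above: effectively a two-digit counter that
# increments by 2 each cycle, zero-padded out to six digits.
# Only the even two-digit values are ever used: 00, 02, ..., 98.
scheme_space = len([n for n in range(100) if n % 2 == 0])

print(random_space)  # 1000000
print(scheme_space)  # 50 -- a 20,000x smaller space for an attacker
```

The forced rotation didn't strengthen anything; it replaced a million-value space with fifty predictable states.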
Thus the passwords end up on paper, stickies, and other bits floating physically around the office. Clever schemes are created to remember them. Frequently they are lost, causing wasted hours of work waiting for resets. Operations people have to keep resetting them, using up more of their time and more cost. In the end, it irritates most of the users and it is all a big mess.
There are probably some environments where this type of scheme may work well -- perhaps low-skill jobs in a high turnover environment -- but for most environments that involve a computer this type of practice is counter-productive.
The really big problem is that it is more than obvious to many of the users that this type of practice is needlessly controlling, and therefore insulting to them. It's a "we don't trust you" practice. When IT departments fall back on the mantra that it's officially a "best practice", so it's not arguable, they diminish their standing in front of their users. People hate to be told that things are good when they are clearly not. It's just another reason why there is a growing gulf between many IT departments and their users. If they don't trust our security practices, why should they trust our estimates, or even our architectural choices?