A common one is when they are working on a massive system, with a ton of rules and administration. They feel like a little cog: they can’t do what they want, the way they want to do it, because if they went rogue their work would harm everyone else’s.
Having a small place in a large project isn’t always fun. So people rail about complexity, but they mean the overall complexity of the whole project, not specific parts of the code. That is, the standards, conventions, and processes are complex. Sometimes they single out little pieces, but usually it’s the whole thing that is bugging them.
The key problem here isn’t complexity; it is that a lot of people working together need serious coordination. If it’s a single-person project, or even a team of three, then sure, the standards can be dynamic, and inconsistencies, while annoying, are rarely fatal in small codebases. But when hundreds of people all have to stay in sync, that takes effort. Complexity. It’s overhead, but absolutely necessary: even a small deviation from the right path costs a lot of time and money. Coding for one-person throw-away projects is very different from coding for huge multi-team efforts. It’s a rather wide spectrum.
I’ve also seen programmers upset by layering. When some programmers read code, they want to see everything, all the way down to the lowest level. Reading code built on lots of underlying function calls annoys them, presumably because they feel they have to read all of those functions first. The irony is that most code interacts with frameworks or calls lots of libraries, so one way or another it is all heavily layered these days.
Good layering picks primitives and self-descriptive names so that you don’t have to look underneath. That it hides code, i.e. encapsulates complexity, is actually its strength. When you read higher-level code, you can simply trust that the functions do what they say they do. If they are used all over the system, that reuse makes them even more reliable.
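As a small sketch of what good layering buys you: the function and variable names below are hypothetical, invented for illustration, but the point is that the top-level function reads almost as a sentence, and you can trust the primitives without reading their bodies.

```python
def normalize_email(raw):
    """Lower-level primitive: one small, well-named job."""
    return raw.strip().lower()

def is_valid_email(addr):
    """Another primitive. A deliberately naive check, for illustration only."""
    return "@" in addr and "." in addr.rsplit("@", 1)[-1]

def register_user(raw_email, users):
    """Higher-level code: readable without looking underneath.

    You don't need to re-read normalize_email or is_valid_email to
    follow what this does -- their names carry the contract.
    """
    email = normalize_email(raw_email)
    if not is_valid_email(email):
        return False
    users.add(email)
    return True
```

If `normalize_email` and `is_valid_email` are reused across the system, every caller exercises them, which is exactly the reliability-through-reuse argument above.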
But still, you’ll have a pretty nicely layered piece of work and there will always be somebody who complains that it is too complicated. Too many functions; too many layers. They want to mix everything together into a giant, mostly unreadable mega-function that is optimized for single-stepping with a debugger. Write once, read never. They might code super fast, but only because they keep writing the same code over and over again. That’s not mastery, just speed.
I’ve seen a lot of programmers choke on the enormous complexity of the problem domain itself. I guess they are intimidated enough by learning all of the technical parts that they really don’t want to understand how the system itself is being used as a solution in the domain. This leads to a noticeable lack of empathy for the users, and to features that are awkward: present, but essentially unusable.
Sometimes they ignore some part of reality and drop it out of the underlying data model entirely, then throw patches on top everywhere to fake it. Sometimes they ignore the state of the art and craft crude algorithms that don’t work very well. There are lots of variations on this.
The complexity they are upset about is the problem domain itself. It is what it is, and inside almost any domain you’ll find all sorts of crazy historical and counter-intuitive hiccups. It is messy. But it is also reality, and any solution that doesn’t accept that will likely create more problems than it fixes. Overly simple solutions are often worse than no solution.
You sometimes see application programmers reacting to systems programming like this too. They don’t want to refactor their code to put in an appropriate write-through cache, for example; instead, they just fill up a local hash table (map, dictionary) with a lot of junk and hope for the best. Coordination, locking, and any sort of synchronization get glossed over as too slow or too hard to understand. The very worst case is when their stuff mostly works, except for the occasional Heisenbug that never, ever gets fixed. Integrity isn’t a well-understood concept either: sometimes the system crashes cleanly, but sometimes it gets corrupted. Oops.
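To make the contrast concrete, here is a minimal write-through cache sketch. The class name and the use of a plain dict as the "backing store" are assumptions for illustration (in practice the store would be a database or service); the point is that every write goes to the store first, then the cache, under one lock, so the two structures can never disagree, unlike the junk-filled local hash table described above.

```python
import threading

class WriteThroughCache:
    """Minimal write-through cache: writes hit the backing store first,
    then the cache, all under a single lock for integrity."""

    def __init__(self, backing_store):
        self._store = backing_store      # source of truth (dict stand-in here)
        self._cache = {}                 # fast local copy
        self._lock = threading.Lock()    # guards both structures together

    def put(self, key, value):
        with self._lock:
            self._store[key] = value     # write through to the store first
            self._cache[key] = value     # then keep the cache in sync

    def get(self, key):
        with self._lock:
            if key in self._cache:
                return self._cache[key]  # cache hit
            value = self._store[key]     # miss: read from the store
            self._cache[key] = value     # and remember it
            return value
```

The lock is exactly the coordination cost that gets glossed over: skipping it is what turns "mostly works" into an occasional Heisenbug.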
Pretty much any time a programmer doesn’t want to investigate or dig deeper, the reason they give is over-complexity. It’s the one-size-fits-all answer for everything, including burnout.
Sometimes over-complexity is real: horrifically scrambled spaghetti code written by someone who was completely lost, crazily obfuscated names written by someone who just didn’t care, or a scrambled, heavyweight architecture that goes way too far. But sometimes the problem is that the code is far too simple to solve things correctly and is spinning off grief all over the place; it needs to be replaced with something that is actually more complicated, but that better matches the real complexity of the problem.
You can usually tell the difference. If a programmer says something is over-complicated but cannot list any specifics about why, then it is probably a feeling, not an observation. If they understand why it is too complex, then they also understand how to remove that complexity; they would see it tangled there, caught between the other necessary stuff. So they would be able to fix the issue, and would have a precise sense of the time difference between refactoring and rewriting. If they don’t have that clarity, then it is just a feeling that things might be made simpler, which is often incorrect. From the outside everything seems simpler than it is on the inside, and the complexity we have trouble wrangling is always that inside complexity.