Wednesday, July 8, 2020

Defensive Coding: Direction

A rather big question in software development that typically gets avoided is whether or not the development project is going well. 


That seems like an easy question. There is more code than last year, there are more features, more people are using it, etc. But those types of metrics really don’t capture momentum. They are still somewhat short term. 


For example, you start building a domain-based inventory system, and it all seems great. It’s using a fairly recent tech stack, there are a growing number of users and lots of new features are in development. So, it’s a success, yes? If you could fast forward to 2 years later, you might find that the system has become hopelessly over-complicated, it’s kinda ugly and slow now, the database is full of questionable data, the code is a mess and the original dev team has moved on to greener pastures. What happened? 


We could have looked at the project 2 years earlier and seen the seeds of its destruction. They were there, in the workmanship, the process, and generally the direction. 


Those ongoing little problems gradually become the dominant, fatal issues. They start small but multiply quickly. To see them through the noise requires looking at higher-level properties of the project. 


For instance, it’s not really the amount of code you have, it’s the amount of code that isn’t crappy or misplaced that matters. It’s not how many features you have, but rather the number that is easily accessible and obvious to a user during their normal workflows. It’s not the number of releases you’ve done, or whether you have made it on schedule, but really the operational stability that matters. Looking at these types of metrics gives a better sense of momentum.


We could really build up some serious underlying metrics that are geared towards showing these growing problems, but there is a much easier way to see it. 


You’re working hard for this upcoming release, but after that is done, will the work get easier or harder for the next release? That’s it. That is all there is to it. 


If the ongoing work is getting easier, it’s because you’ve built up and refined better code, processes, knowledge, etc. Then the momentum of the project is positive. 


If each time it is getting harder, because the things you are trying to ignore, work around, or just hope will go away are getting bigger and becoming more of a blocker, then the momentum of the project is negative. If a project suffers from a string of negative releases, it has most likely gotten caught in a cycle, and that is very difficult to get out of. 
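The easier-or-harder question can even be made crudely quantitative. A minimal sketch, assuming you track roughly how much effort each release took (the release figures below are invented for illustration):

```python
# Hypothetical sketch: gauge project momentum from per-release effort.
# The effort figures below are invented placeholders, not real data.

def momentum(efforts):
    """Return the average change in effort between consecutive releases.

    A negative result means releases are getting easier (positive
    momentum); a positive result means they are getting harder
    (negative momentum).
    """
    deltas = [b - a for a, b in zip(efforts, efforts[1:])]
    return sum(deltas) / len(deltas)

# Person-days spent on each release, oldest first.
releases = [30, 34, 41, 55]

trend = momentum(releases)
if trend > 0:
    print(f"Releases are getting harder (avg +{trend:.1f} days each)")
else:
    print(f"Releases are getting easier (avg {trend:.1f} days each)")
```

The exact unit doesn't matter; what matters is the sign and size of the trend across several releases.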


With that foundation then it isn’t hard to start getting into particular details. What would make some upcoming work easier? What would make it harder? We just want to spend time reducing friction and providing more ability to make the work go easily. 


We can look at a few specific issues. 


First, if there are 4 or 5 programmers, and each one is coding in their own unique style, then unless they are fully siloed from each other, their ability to utilize, fix or incorporate each other's work is compromised. Slower. 


The opposite is also true. If there are well-defined standards and conventions that everyone is forced to follow, then moving around the entire codebase isn’t that difficult. There might be some type of domain understanding necessary, but the technical implementations are obvious. 


Following standards is obviously a bit slower, and there is a bit of ramp-up in learning them, but when traded off against a detached, siloed codebase or a big ball of mega-mud, it is a huge improvement.


The same is true for frameworks and libraries. If everyone has thrown in their own massive set of dependencies, then moving around the code requires epic amounts of learning, which eats time.


Often in big systems, there is a lot of build mechanics and configuration floating around. If that is set up cleanly, then it's not too hard to absorb it and enhance it. If it is spaghetti, then it just becomes another obstacle.


Abstractions are the double-edged sword of most development. On the one hand, they reduce code often by orders of magnitude, and in doing so they kick up the quality. If they are reused all over, they also cut down on the redundancies and impedance mismatches. On the other hand, they can be weird enough that most other programmers have little hope of understanding them, or their implementations can be impenetrable. If they are nicely documented, with clean decompositions, and encapsulated, they are a huge strength and a massive reduction in work, code, time, etc. But they need to be clean and there needs to be a way that most of the current and future team understands them. That has been a growing problem, particularly with the over-reliance on question-and-answer sites for patching code to avoid understanding how it works. 


The overall flow of the development matters too. 


Are the ideas for new changes coming from feedback from people who actually use the system, or is it more of a wanton creative exercise based on assumptions? The way the work enters 'the development pipeline' usually defines its quality. A weird, non-essential, misplaced feature is just wasting space and time; its contribution is negative. 


Once the work is in the pipeline, there are lots of questions that need to be answered, usually with very precise details. Again, if that step gets skipped, it is substituted with more assumptions or bad facts, and any downstream work is unlikely to be positive. 


There is also a strong need to ensure that the goals are technologically feasible. Adding a feature to search everything doesn't help if it takes hours to return, or forces a crazy complex replication and caching architecture into being. That might work for Google search, but it outscales most systems' limitations. The costs (money and time) are unrecoverable.


In larger shops, there is usually a need to parallelize the pipelines, so making sure that they don't knock each other off course means having a process around them to control, track, and adjust as the priorities and schedules shift around. 


I could continue, adding a lot more, but stepping back and assessing whether the endless series of releases is getting easier or harder is such a strong way to focus in on slow-brewing problems that the finer details don't need to be explicit. It's an easy question to ask, and you can get valid answers directly from the development teams themselves. If things are getting harder, then you mostly need to identify the many reasons why that is happening and start to mitigate them one by one, in order of contribution. For example, if releasing the code is hard, requiring lots of steps that are often forgotten, then capturing them as a single script is obviously going to shave off a significant part of the blockage. Getting rid of the blockages gets rid of the friction, and frees up more time to spend on better issues like quality, readability, or even performance. A positive direction for a big project is far better than letting gravity do its job.
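That release script can be almost trivially simple and still remove the "forgotten step" problem entirely. A minimal sketch, where every step name and command is an invented placeholder for whatever a real project actually runs:

```python
# Sketch of the "single release script" idea. The steps and commands
# below are hypothetical placeholders (echo stands in for real
# commands); substitute the project's actual build, test, tag, and
# deploy commands.
import subprocess
import sys

STEPS = [
    ("run tests",     ["echo", "pytest"]),
    ("build package", ["echo", "make build"]),
    ("tag release",   ["echo", "git tag v1.2.3"]),
    ("deploy",        ["echo", "./deploy.sh"]),
]

def release():
    """Run every release step in order, stopping at the first failure.

    Once the steps live in one script, none of them can be forgotten,
    and the script itself documents the release process.
    """
    for name, cmd in STEPS:
        print(f"==> {name}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED at: {name}", file=sys.stderr)
            return 1
    print("release complete")
    return 0

if __name__ == "__main__":
    sys.exit(release())
```

The point is not the sophistication of the script but that the knowledge of the steps moves out of people's heads and into code.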
