I'm not sure what happened along the way, but my initial understanding of the role of DevOps comes from my experiences in the early 90s.
I was working for a big company that had a strong, vibrant engineering culture. We were doing very complex, low-level work: worldwide, massively distributed fault tolerance. Very cutting-edge at the time.
Before we spent a lot of time coding, we'd write down what we were going to do in a document and send it out for feedback. The different teams were scattered all over the planet, so coordinating all of the efforts was vital.
Once we knew what we needed to do, we'd craft the code and test it in our own environments, which were effectively miniature versions of that global network.
When we were happy with how our code was performing, we’d give the code to the QA group. Not the binaries, not an install package, but the actual source code itself.
We had our own hand-rolled source code control repository; this was long before CVS, Subversion, or Git. The QA team had its own separate repo and, more importantly, its own environment configurations for our code. Only our source would get copied into their repo.
We had some vague ideas about their environment but were never told the actual details. It was more like we were a vendor than an internal group. The code we wrote was not allowed to make weak, hardcoded assumptions about where or how it would run.
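To make that concrete, here is a minimal sketch of the discipline, in Python rather than anything we actually used back then; the REGISTRY_HOST name and the surrounding details are hypothetical. The point is that the code refuses to guess at its environment, so every group that runs it has to supply its own configuration.

```python
import os

def registry_host() -> str:
    # Read the service registry's location from the environment,
    # rather than baking in a hostname from the dev network.
    host = os.environ.get("REGISTRY_HOST")
    if host is None:
        # Fail loudly instead of falling back to a "convenient" default;
        # dev, QA, and production each configure this for their own networks.
        raise RuntimeError("REGISTRY_HOST is not set for this environment")
    return host
```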
They took the code, configured it as they needed to, set it up on their test networks, and beat it mercilessly. When problems occurred, they would send us the details, usually with some indication of whether the problems needed to be fixed right away or could wait until the next release.
After that, we had no idea what happened to it. We never knew when stuff was going into production; we only got infrequent reports outlining its behavior. We delivered the code to QA, and they integrated it, configured it, tested it, and then, when they were happy, deployed it.
That separation between development and operations was really, really great.
We were able to focus on the engineering, while they took care of the operational issues. It worked beautifully. We only ever had one bug in production, and given the complexity of what we were building, that quality was super impressive.
When DevOps came along, I figured it would play that same intermediary role. Get between dev and ops. That it would be the conduit for code going out and feedback coming in. Developers should develop and operations should operate; mixing that up just rampantly burns time, degrades quality, and causes chaos. A middle group that can focus on what they have now, how it works, and what could make it better is a much stronger arrangement.
Over the decades, it seems like people have gotten increasingly confused about operations. It's not just throwing the software out there and then reacting to it later when it glitches. Operations is actively 'driving' the software; it is proactive. And certainly, for any recurring issues, a good operations department should be able to handle them all on their own.
On the other side, development is slow, often tedious, and expensive. As technology has matured, it has also gotten increasingly convoluted. While there are more developers than ever, it's harder to find developers with enough underlying knowledge to build complex, yet stable, code. It is easier to craft trivial GUIs and static reports, but how many of those do you really need? Hiring tanks to get the core and infrastructure solid is getting harder to do; driving them away by getting them tangled up in operational issues is self-defeating.
So we get back to the importance of having strong developers focus entirely on producing strong code. And we put in intermediaries to deal with the mess of getting that tested and deployed. Then, if the operations group really is monitoring properly, we can build less but leverage it for a lot more uses. That is working smarter, not harder.