There are lots of strategy questions that circulate around software development. I thought I would address a few of the major ones.
Buy vs build?
A company that depends on, but does not own, their main technology is going to have a couple of problems: a) anyone else can use the same technology to compete with them and b) it limits any possibilities of competitive advantage.
If a company is basically a service company that is underpinned by technology, then since that technology is core to their line of business, they should set up their own development effort and build the key pieces that they need. In that sense, the ‘coding’ is actually part of the business, and they are essentially an applied software company. If they are going to do the work themselves, then it is also super important that they take it seriously and acquire enough experienced resources that the result has at least reasonable quality.
The counter is also true. If there is a complex technical problem that is necessary, but outside the scope of their business, then it doesn’t make sense for them to build up the resources to solve that unless they are considering getting into the business of selling that specific technology.
So, it’s not a question of buy vs build, but rather a question of which specific parts we need to build to out-compete the other companies in our industry.
Outsource, offshore, or assemble local talent?
Code is a manifestation of a programmer’s understanding of a solution to a problem. That’s a pretty loaded sentence, but what it boils down to is that if the programmers don’t ‘get’ what they are building, then it is unlikely that the outcome will be cost-effective. So, it’s a bad idea to try to cheap out and staff a development effort with the lowest-priced resources, unless there is a reliable way to drive them to the goal correctly. But that means having to produce very precise, very low-level specifications for everything, or at least most of the major things. That work itself is almost the same as writing the program.
Thus, if you have a group of programmers that don’t already know how to build something to solve the problem, then you essentially need another group that does, to write the specifications for the first group. Which obviously a) doesn’t save any money and b) opens up the risks of communications errors. If you try to get a group of non-technical people to write high-level specifications, then the intrinsic vagueness of their output is highly likely to result in the actual code being unusable for its purpose.
Or in short, with programming, you get what you pay for. There is no finding a cheaper way around it. If you pay crappy, you get a crappy system. If you are willing to spend the extra money, and you’ve gone to the extra work of hiring the right resources with the right knowledge to get the thing built, then you should get the system you need. But it’s also worth noting that it is extremely hard for non-technical people to validate the skillsets of technical ones, so you still run the risk of hiring someone to lead who says they can do it, but ultimately they can’t.
Proprietary vs OpenSource?
It’s nice to get things for free, but when your business depends on something, it is important to make sure that it has decent quality control in its construction. In that sense, it doesn’t matter what you’ve paid; the quality of the components is what drives their usefulness. So, if it’s a well-known OpenSource project that is amazingly well-supported and has a great track record for heading in a positive direction, then it is quite usable. If it is some little proprietary piece that is badly built and whose releases are scary, then you really don’t want to be dependent on it. Quality is more important than origin.
The only other factor is time. If the OpenSource project is hot today but dies off quickly, then the code might end up being unsupportable before the lifetime of the system is finished. When it comes down to money vs popularity, companies that are making a bit of money off their software have a strong tendency to keep it going for as long as possible. So, proprietary software has the edge, with respect to time. The popularity of OpenSource projects tends to be shorter-lived (but not always, as big projects like Linux prove). So, the expected lifespan of the system should be a big factor in choosing the dependencies, but oddly it’s rarely considered.
Refactor or Rewrite?
I didn’t save the link, but I saw a good post a while back that basically said that it depends on where the knowledge of the system lies. If the knowledge is outside of the system, and it is relatively complete, then a rewrite might be faster and less resource-intensive. If however, the system is decades worth of understanding piled together in a heap, then a rewrite is most likely to set the whole game backward.
Refactoring a big ugly mess is very slow, but it can be done fairly safely, one self-contained part at a time. As well, if it is non-destructive refactoring (the behavior of the system doesn’t change afterward), then it preserves the knowledge that went into it. Technical people will often prefer rewrites, as they don’t like to read or deal with other people’s code, so getting their opinion on it is difficult because of their bias.
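The non-destructive refactoring idea can be made concrete with a small sketch. The function names and the invoice-pricing logic below are hypothetical, invented purely for illustration: a tangled ‘legacy’ function is split into named pieces, and a characterization test pins the old behavior so the refactor can be shown to change nothing observable.

```python
def legacy_total(items, member):
    """Hypothetical legacy code: pricing, discount, and tax all tangled together."""
    t = 0
    for name, qty, price in items:
        t += qty * price
    if member and t > 100:
        t = t - t * 0.05   # 5% member discount over 100
    return round(t * 1.13, 2)  # 13% tax folded in


def subtotal(items):
    """Sum of quantity times unit price."""
    return sum(qty * price for _, qty, price in items)


def apply_discount(amount, member):
    """Member discount, kept arithmetically identical to the legacy expression."""
    if member and amount > 100:
        return amount - amount * 0.05
    return amount


def refactored_total(items, member):
    """Same observable behavior as legacy_total, split into named steps."""
    return round(apply_discount(subtotal(items), member) * 1.13, 2)


# Characterization test: the refactor only counts as non-destructive if the
# outputs match the legacy version on representative inputs.
cases = [
    ([("widget", 2, 10.0)], False),
    ([("widget", 20, 10.0)], True),
    ([], True),
]
for items, member in cases:
    assert legacy_total(items, member) == refactored_total(items, member)
```

The point is the process, not the pricing rules: each extraction is tiny, verifiable, and leaves the system working, which is what allows a big mess to be cleaned up one self-contained part at a time.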
If the hole is big enough, then what seems to mostly work is to identify the lowest-level and most costly mistakes and then refactor them one by one in a long series of releases. Obviously, from a business perspective, that is horrible, as it means the technical resources remain at the same cost, but the outward status of the project looks like it is frozen. So what happens in practice is that the project might flip between long-running refactoring, followed by new features, then followed by long-running refactoring again. The temptation is to do both in parallel, but the cross-dependencies between the work tend toward a merge nightmare, which ultimately makes it all slower (and can often throw the whole thing back into the same dysfunctional state that you were trying to get out of).
And Finally ...
Choosing to use software to automate part of a company is a non-trivial and difficult decision. When it is handled well and executed properly, it can keep costs low enough that it is hard for others to compete with you. When it goes wrong, it is a huge cost sink. It’s not the type of decision where it is easy to get lucky, or even one with a 50/50 chance, but rather the type where the number of bad outcomes easily exceeds the number of good ones. The wisest thing to do is to find someone who understands it at a very deep level and has been living with the consequences of it for decades. To understand the trade-offs, you have to have spent a lot of time in the trenches; otherwise, it is far too easy to over-simplify the nature of the task.