One
of the most persistent myths in software development is that it is
impossible to estimate the amount of programming work required for a
project. No doubt this is driven by people who have never been put in a
position where estimates mattered. It is indeed a very difficult skill
to master but a worthwhile one to have, since it routinely helps to make
better decisions about what to write and when to write it.
In
my younger days I was firmly in the camp that estimates were
impossible. That changed when I got a job working with a team of very
experienced coders. I whined that it was "hopeless",
but they said to "just try it", so eventually I did. We settled on
lines-of-code as the metric, and over the years we tracked how much we
wrote and how large the system was. Surprisingly, after years of
tracking our progress, we found that we could make quite strong
estimates about our future work output.
After
I left that job, I found myself in a startup that was particularly
dependent on selling custom code to clients. It accounted for a
significant part of our revenue, and it all relied on being able to
produce quick, accurate estimates during client meetings. Estimate too
high and they wouldn't buy in. Estimate too low and we'd lose money on
the work. Since it was just after the dotcom bomb, things were really
tricky, so we needed every penny we could get to avoid bankruptcy. My
earlier experience really paid off. I would sit in meetings, noting down
all of the pieces in discussion, then be able to quickly turn these
features into dollars so that we could hammer out an appropriate set of
functionality that matched back to the client's budget. That on-the-fly
style of price negotiation could have been really dangerous, except that
I was quite accurate with the numbers. When you have no
alternatives, you get forced to sharpen your skills.
For
the rest of this post I'll do my best to describe how I estimate work.
There isn't a formal process and of course there are some tricky
tradeoffs to be made, but with plenty of practice anyone can keep track
of their past experiences and use that to project future events. It's a
very useful skill, one that really helps ensure that
any software project does not get derailed by foreseeable issues.
The
first thing that is necessary is to choose a reasonable metric for
work. Although it is imperfect, I really like lines-of-code (LOC) since
it's trivial to compute and when used carefully it does reasonably match
the output of a development project. It's also easy for programmers to
visualize.
Most
of the systems I've worked on have been at least tens of thousands of
lines, often hundreds of thousands, so to simplify this discussion I'll
use the notation 10k to mean 10,000 LOC; it just makes things easier to
read.
For
me, I count most of the lines including spaces and comments, since they
do require effort. I generally avoid counting configuration files, and
for web apps I usually separate out the GUI. Coding in languages like
HTML and CSS is often slower than coding in more generalized languages
like Java or C#. This causes a multiplier effect between different
programming languages. That is, one line in Java is worth two lines in
straight C, since the underlying primitives are larger. Tracking the
differences in multipliers due to language, or even the type of code
being written, is important for being able to assess the size of the
work properly.
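
To make the multipliers concrete, here is a minimal sketch in Python of how raw counts might be normalized across languages. The weight values are purely illustrative assumptions, not measured data; in practice you would calibrate them from your own tracking history.

    # Sketch: normalize raw LOC across languages so system sizes are
    # comparable. The weights are illustrative assumptions only.
    LANGUAGE_WEIGHT = {
        "java": 1.0,   # baseline
        "cs":   1.0,
        "c":    0.5,   # smaller primitives: twice the lines per feature
        "html": 0.4,
        "css":  0.4,
    }

    def normalized_loc(counts):
        """Convert per-language raw line counts to baseline-equivalent LOC."""
        return sum(LANGUAGE_WEIGHT.get(lang, 1.0) * lines
                   for lang, lines in counts.items())

    # 10k of Java plus 20k of straight C is about 20k "Java-equivalent" lines.
    print(normalized_loc({"java": 10_000, "c": 20_000}))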
I
generally like to watch two numbers. The first is the current running
total of lines for the system. So we might talk about a 150k system or
perhaps a rather smaller 40k one. If along the way someone replaces 5k
with a better 10k piece, the total has only increased by 5k. That is,
modified lines count for nothing, deleted lines are subtracted and new
lines are added. It's only the final total in the repo that really
matters. It's easy to calculate this using a command line tool like word
count, e.g. "wc -l *.java". One of the first scripts I write for any
new job is to traverse all of the source and get various counts for the
existing code. The counts are always tied to the repo. If it isn't
checked in, it hasn't been completed.
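
As a rough sketch of that kind of first-day counting script, something like the following Python walks a source tree and totals lines per file type, much as "wc -l" would. The extensions and skipped directories are assumptions to adjust for the repo at hand.

    # Sketch: walk a source tree and total lines per file type, the way
    # "wc -l" would. Extensions and skipped directories are assumptions.
    import os
    from collections import Counter

    SOURCE_EXTS = {".java", ".cs", ".js", ".pl", ".html", ".css"}
    SKIP_DIRS = {".git", "build", "node_modules"}

    def count_lines(root="."):
        totals = Counter()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
            for name in filenames:
                ext = os.path.splitext(name)[1].lower()
                if ext in SOURCE_EXTS:
                    with open(os.path.join(dirpath, name),
                              errors="replace") as f:
                        totals[ext] += sum(1 for _ in f)
        return totals

    totals = count_lines()
    for ext, lines in totals.most_common():
        print(f"{ext:6} {lines:10,}")
    print(f"{'total':6} {sum(totals.values()):10,}")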
The
second number I like to watch is any single developer's production.
That is the total number of lines added to the repo per year (although
sometimes I talk about weekly numbers). So I think of programmers as
being able to contribute, say, 20k or 50k per year. Sometimes I think in
terms of new lines, sometimes in terms of net lines, depending on the
level of reuse in the code. Again, language and code type are also
significant; you can't compare a 15k JavaScript programmer to a 25k C#
coder, but knowing the capacity of the coders on a given team really
helps in determining if any particular project is actually viable.
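
If the code happens to live in git (an assumption; any repo with history will do), a developer's yearly production can be pulled straight from the log. A minimal sketch, with a placeholder author name:

    # Sketch: a developer's added/deleted lines for one year, assuming the
    # repo is in git. "git log --numstat" emits "added<TAB>deleted<TAB>path"
    # per file per commit; the author name below is a placeholder.
    import subprocess

    def yearly_lines(author, year):
        out = subprocess.run(
            ["git", "log", "--numstat", "--pretty=format:",
             f"--author={author}",
             f"--since={year}-01-01", f"--until={year}-12-31"],
            capture_output=True, text=True, check=True).stdout
        added = deleted = 0
        for line in out.splitlines():
            parts = line.split("\t")
            if len(parts) == 3 and parts[0].isdigit():  # binaries show "-"
                added += int(parts[0])
                deleted += int(parts[1])
        return added, deleted

    added, deleted = yearly_lines("Jane Coder", 2007)
    print(f"new: {added:,}   net: {added - deleted:,}")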
One
big issue to watch out for is the copy-and-pasters. They might have
awesome numbers like 75k, but the messiness of their output and the
dangers of their redundant code really reduce their work to 1/3 or
worse of their actual counts. That, and they endanger any ability to
safely extend their code. If you were to try to capture the counts, the
copied code would count for 0 lines, modifications and additions would
add, but deletes would be ignored.
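
As a tiny illustration of that adjusted rule, with made-up numbers:

    # Sketch of the adjusted counting rule for heavy copy-and-pasters:
    # pasted lines are worth nothing, changes and new lines count,
    # deletions are ignored rather than subtracted.
    def adjusted_count(copied, modified, added, deleted):
        return modified + added   # copied -> 0, deleted -> ignored

    # A gaudy "75k" year collapses to its real contribution:
    print(adjusted_count(copied=50_000, modified=10_000, added=15_000,
                         deleted=5_000))   # 25000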
Now
generally a 50k programmer might have lots of 1k weeks, but very few
programmers are actually consistent in their output and modern software
development is highly disruptive. It's a lot of "hurry up and wait".
Back in the waterfall days, when the development stage lasted months or
years, it was easier to be consistent for long periods, and it also
meant higher yearly counts. But even then some weeks will have very low
counts, while others might be over 2k. Because of that variability it is
more appropriate to look at the yearly numbers. As the project matures,
the counts should get lower as well. That happens, particularly if the
software is well-written, because the size of the underlying primitives
grows. Instead of having to start from scratch each time, most new work
can leverage the existing code (if the counts aren't going down, then
possibly the cut & pasting is increasing).
Code
that is algorithmically complex or is particularly abstract is
obviously a lot slower to write and debug. In that sense, the backend
guy on a project will progress much slower than the front end guy. So
you might have a domain expert coder at 15k, with a GUI guy at
35k. A decent generalized piece might have really low counts, but each
time it's reused it saves the overall system from bloating,
although that doesn't reflect directly in the numbers.
Thinking
in terms of totals and capacity has the nice secondary attribute that
it makes it easy to translate into man-years. For example, a small but
functional enterprise web-app product needs at least 100k to round out
its functionality (including administration screens). For a 50k coder
that's at least 2 man-years. If you see a 350k system, you can get a
sense of how long it might take to rewrite it if you had a team of 25k
coders: it's sitting at about 14 man-years to replace, unless you can
get away with a lot less code.
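
The arithmetic is deliberately simple; a sketch of it:

    # Back-of-the-envelope translation: system size over yearly capacity.
    def man_years(system_loc, loc_per_year):
        return system_loc / loc_per_year

    print(man_years(100_000, 50_000))   # small web-app product: 2.0
    print(man_years(350_000, 25_000))   # the 350k rewrite: 14.0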
My
suggestion to most programmers is to learn to track both numbers for
themselves. They don't have to share them with management, but
understanding their own abilities to produce code and the size of the
projects that they've experienced really helps in knowing their current
skill level. That keeps programmers from over-promising on their
assignments. If you are a 30k programmer and someone wants you to hammer
out a 40k web app in half a year, you can be pretty sure that you're in
trouble coming right out of the gate. If you know that in advance -- as
opposed to the last month or so -- you can start taking action to
address the issue, such as asking for more resources or reducing
expectations (or reducing the quality).
It's
also worth noting that speed isn't what counts, it's organization and
quality that really matter. I would be far happier with a
detail-oriented 15k programmer whose work we never have to debug
or rewrite than a reckless 75k programmer who is
essentially just contributing more problems. Projects last forever, so
it's important to get the work done well, encapsulate it and then move
on to other more interesting code. A well-rounded team usually has a
mixture of fast and slow programmers, with hopefully a wide range of
different skills.
My
first code tracking experiences came when building a massive 150k
distributed cache. In the early years I was cranking out about 50k per
year as my primary project (I've always had at least one secondary
project that was not intended for production, but just to sharpen my
skillset). Years later, I was working on a 120k enterprise product in
Perl at about 30k per year (with lots of sales and management duties).
These days I'm somewhat less than 15k in C# on a 350k behemoth (which if
it wasn't for cut & paste should have been closer to 150k), but
again my role isn't just pure coding anymore.
Size
matters. I've worked on a few >1M systems, but at that level there
are usually teams that are responsible for 50k - 200k chunks of the
system. From experience I've found that a code base over 50k - 100k
starts to get unwieldy for a single individual to handle by themselves.
Most single developer in-house systems tend to range from 25k - 50k.
Commercial systems have to be larger (to justify the sales) and tend to
be higher quality so there are more lines; they usually start around
100k and go into the millions (except possibly for apps). Quality
requires more code, but reuse requires quite a bit less, and there can
be big multipliers such as 2x or even 10x for brute force and cut &
paste. It's those latter multipliers that always make me push reuse so
hard. Time might be saved by pasting in code, but it gets lost again
in testing, in bug fixing, and in obfuscating any attempt to extend the
work. In a
big project, bad multipliers can result in the work falling "years"
behind schedule particularly if the work is compared to a competitor
with an elegant code base. If you have to do three times the work,
you'll have 1/3 of the features.
Once
you've mastered the skill of tracking, the next thing to learn is
estimating. I always use ranges, so a new bit of work might be 10k - 20k
in size. The best and worst cases are drawn directly from experience
with having written similar code. The ranges get wider if there is a lot
of uncertainty. They are always relative to the system in question, so
that a new screen for one rather brutish system might require 20k in
code, but for an elegant one with lots of reuse that might be 5k or less
(getting below 1k is awesome).
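
A minimal sketch of rolling up those ranges, using placeholder features and numbers:

    # Sketch: keep each estimate as a (best, worst) range drawn from past
    # experience and roll them up. Features and numbers are placeholders.
    estimates = {
        "new screen":  (5_000, 20_000),
        "import job":  (3_000, 6_000),
        "reporting":   (10_000, 20_000),
    }

    low = sum(lo for lo, hi in estimates.values())
    high = sum(hi for lo, hi in estimates.values())
    print(f"total: {low:,} - {high:,} LOC")   # 18,000 - 46,000

    # At a 30k/year pace, that's roughly 0.6 - 1.5 man-years of coding.
    print(f"{low / 30_000:.1f} - {high / 30_000:.1f} man-years")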
Of
course estimating in an ongoing project is easier than a green field
one, but in some sense, if programmers aren't experienced enough to make
a reasonable estimate, they might take that as a sign that they don't
currently have enough experience to get the work completed properly.
Most system extensions are essentially horizontal; that is, they are
just adding in features that follow the same design patterns as existing
features. In that case, if the new work leverages existing work, then a
huge amount of time can be saved per feature and the code will
automatically maintain good consistency (and need less testing).
I
pretty much expect that something similar to the Pareto rule works for
overall complexity. That is, between 10 and 20 percent of the system is
difficult, slow code, while the rest is fairly routine. The 80% still
requires work, but it should be quite estimable with practice. The
difficult stuff is nearly impossible to estimate since unexpected
problems crop up, which is why each key aspect of it should be
prototyped first during the analysis and/or design phase (for startups
this is essentially version 1.0). If you separate the code into
independent problems, it becomes easier to plan accordingly. Often
under-estimates in some parts balance against over-estimates in others.
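
That balancing effect is easy to see in a quick simulation. This sketch, with made-up numbers, sums twenty independently mis-estimated routine pieces and shows the total landing much closer, relatively, than any single piece would:

    # Quick simulation of the balancing effect: twenty routine ~2k pieces,
    # each estimated within +/-30%. The total's relative error comes out
    # far smaller than any single piece's. Purely illustrative.
    import random

    random.seed(1)
    TRIALS, PIECES, ESTIMATE = 10_000, 20, 2_000

    errors = []
    for _ in range(TRIALS):
        actual = sum(ESTIMATE * random.uniform(0.7, 1.3)
                     for _ in range(PIECES))
        errors.append(abs(actual - PIECES * ESTIMATE) / (PIECES * ESTIMATE))

    print(f"mean relative error of the total: {sum(errors) / TRIALS:.1%}")
    # roughly 3%, versus up to 30% on any individual piece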
Another
common issue is uncontrolled scope creep, where the underlying
requirements are constantly shifting. Sometimes that is just a lack of
reasonable analysis. Since analysis is way cheaper than code, the
development should be put on hold until the details are explored more
thoroughly. It's always faster than just flailing at the code. Sometimes
scope creep is caused by natural shifts in the underlying domain, in
which case the design needs to be more dynamic to properly solve the
real problem.
It's
worth keeping in mind that depending on the testing requirements and
the release procedures, a coding estimate is only a fraction of the
work. Testing, in reasonable systems, tends to stay in lockstep as a
fraction of the coding work. Releases are more often a fixed amount of
effort.
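
A sketch of that full-effort arithmetic; the 40% testing fraction and the two-week release cost are assumptions, not rules:

    # Sketch: testing scales with the coding, a release is a roughly fixed
    # cost. The 40% fraction and two-week release are assumed values.
    def total_effort_weeks(coding_weeks, testing_fraction=0.4,
                           release_weeks=2.0):
        return coding_weeks * (1 + testing_fraction) + release_weeks

    print(total_effort_weeks(10))   # 10 weeks of coding -> 16 weeks total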
Once
you can size incoming work, the next trick is to be able to shift the
team around to get it done effectively. Programmers are like chess
pieces: they all differ in strengths. It doesn't work to give a backend
guy some GUI piece if you have a faster front end guy available for the
job. This is complicated, of course, since some programmers don't want
to focus on their strengths; they want assignments that stretch their
abilities. Good estimates actually make it easier to let
them try out different parts, since tracking lets you know how much work
is remaining and whether there is actually some slack time available.
High efficiency teams are generally ego-less, such that the programmers
aren't siloed; they can work on any section of the code at any time. Of
course having at least two coders familiar with every part of the system
reduces disruptions with emergencies, holidays and people leaving. It
also helps with communication and overall quality. But it has to be
managed in a way that doesn't just blindly assume that all programmers
are interchangeable cogs. Each one is unique, with their own skills and
suitability, thus it takes plenty of observation and some deep
management skill to deploy them well.
For
me, with deadlines, I'll either accept a variable number of features
with a hard date, or the full set of features with a variable date. That
is, we can deliver 4 - 10 features on date X, or we can deliver all 10
features at some floating time in the future. Sometimes the stakeholders
argue, but generally there are one or two really key things they are
after anyway. With reasonable estimates, fixed dates are manageable.
You can choose to do a couple of priority items early, then go after
some low hanging fruit (easy items), then just aggressively cut what
isn't going to make it. A key trick to this approach is to get the
developers working on just one or possibly two items at a time, and
not let them move on until the items are done and dusted. Estimates
really help here because you can quickly see if any of the items exceeds
the length of the next iteration, and thus should be branched off from
the main development (and then not tied up in cross dependencies).
Obviously the more accurate the estimates, the more options you have in
planning the development and managing the expectations of the
stakeholders. Accurate estimates reduce a lot of the anxieties.
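
A sketch of that iteration check, with placeholder items and capacities; anything whose worst case won't fit in the iteration gets branched off:

    # Sketch of the iteration check: any item whose worst case exceeds the
    # iteration's capacity gets branched off the main development line.
    ITERATION_LOC = 5_000   # what can realistically land this iteration

    items = {
        "priority fix": (1_000, 2_000),
        "easy report":  (500, 1_500),
        "big refactor": (8_000, 15_000),
    }

    for name, (lo, hi) in items.items():
        where = "main line" if hi <= ITERATION_LOC else "branch it off"
        print(f"{name:14} {lo:6,} - {hi:6,}   {where}")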
Once
a project gets established, the incoming workload tends to fluctuate.
That is, it's quiet for a while, then a whole load of new work comes in
all at once, followed by a rush to get it done. Software development is
best when it is a smooth and steady process, so being able to scale the
workloads and re-arrange the teams really helps in smoothing out the
development to a nice consistent pace. This avoids destructive practices
like 'crunch mode' and allows for intermediate periods to reduce
technical debt. It also makes it easier to decide when to just hack up
some throw-away code and when to take the time to generalize the code so
that you can leverage it later. Foresight is an incredibly useful skill
for big development projects.
What
gets a lot of projects into trouble is that they become reactive. They
focus
on just trying to keep up with the incoming work, which ultimately
results in very high technical debt and further complicates the future
workload. Learning to estimate is tricky, but valuable as it helps to
make better decisions about what to write and when to write it. This
helps in getting out ahead of the development, which leads to
getting enough 'space' to be able to build clean, efficient and high
quality systems.
Once
a programmer has mastered the basics of coding, they start getting the
itch to build larger and larger systems. For medium sized
systems and above, the technical issues quickly give way to the
architectural, process and management issues. Within that level of
software development, life is much easier if you have the skills to size
both the work and the final system. It allows significant planning and
it provides early indicators of potential trouble.
Estimating
is a tricky skill and one that cannot be trivially formalized, but
it's not impossible and once you have enough mastery you really wonder
how you ever lived without it. Nothing is perfect, estimates less so,
but with practice and experience software developers can produce very
usable numbers for most of their development efforts, and the technical
leads can understand and utilize the underlying capacity of their team
members. Estimates cannot reasonably come from non-programmers, and they
should not be used to rank programmers against each other. There are
very real limits to how and why they can be practical, but when used
correctly they can at least eliminate many of the foreseeable problems
that plague modern development efforts. A little less chaos is a good
thing.