You don’t need story-points either

Or an estimation-less approach to software development

I’ve had too many conversations (and overheard a few) about re-estimating stories or “re-baselining” the effort required for software projects. The latest one was about when it might be a good idea to do this re-estimation, and how, and what Mike Cohn says about it, and so on. The logic appears to be that as you gain more knowledge about the software being built, you can create better estimates and thereby gain better predictability. This post talks about estimates, how they are just another form of muda (waste), and what a better way might look like.

First things first

Let’s start with fundamentals again. In order to stay profitable and prosperous, a software product company must be competitive. If you do enough root-cause analysis, the only thing that can ultimately ensure this is raw speed. Hence, the only metric that matters is cycle-time, and organizations must always try to reduce it.

OK, so now, the issue simplifies itself – all we have to do is look for constraints, remove them, and repeat – all the while measuring cycle-time (and thereby throughput). Simple enough! In my first few passes of doing so, I realized that if you must estimate things, use story points. Then I realized that estimation in general is a form of muda – and that converting from story-points to real-hours at the start of each iteration is a really stupid waste of time.

Even less muda

Now, having said all that, we can move on to the point of this post. I’ve come to realize there is a better way. One which achieves the goal of faster throughput, with less process and less waste. Eliminate estimation altogether. You don’t even need story points anymore. Here’s how it works –

Take a list of requirements. Ensure they always stay current – you know, by talking with the stakeholders and users and all the other involved parties. Ensure it is always in prioritized order – according to business value. Again, not hard to do – you just need to ensure that the stakeholders are as involved with the product development as the rest of the team is.

Next, pick a duration – say a couple of weeks. If in your business two weeks is too short (you’re either a monopoly or you’re on your way out), then pick what makes sense. Pick up the first requirement and break it down into stories. Here’s the trick – each story should be small: no more than 1–3 days of work to implement. In other words, each story, on average, should be about 2 days of development time. Simple enough to do, if the developers on your team are as involved in the process of delivering value to your customers as the business is. Pick the next requirement and do the same thing; keep going until your two weeks seem full. Do a couple more if you like. Ensure that this list stays prioritized.

Then, start developing. There is no need for estimation. Sure, some stories will not really be 2 days’ worth – break them down when you realize it. Sure, you need good business-analysts and willing developers. You’ve hired the best, right? Next, as business functionality gets implemented to the point where releasing it makes sense, release it! Then, repeat.

What (re) estimation?

In this scenario, there really is no reason to do formal-estimation at all. Sure, super-high-level sizing is probably still needed to get the project off the ground, but beyond that, there is no need to do anything other than deliver quality software, gather real-user feedback, and repeat.

Now come the questions of how to say how much work is done, or when a certain feature will be delivered. The dreaded “how much more work is left” questions. The answer in this new world is simply: 2 days × the number of stories in question. (Remember the average story size?) Many people like to track things, but measuring actuals is also muda.
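If you want that answer spelled out, here is a minimal sketch in Python (the function name is mine, and the 2-day average is just the assumption from above):

```python
# Remaining work under the "no estimation" scheme: every story is
# assumed to average out to about 2 days of development time.
AVG_STORY_DAYS = 2

def days_remaining(num_stories: int) -> int:
    """Days of work left = average story size x stories left."""
    return AVG_STORY_DAYS * num_stories

# e.g. 17 stories left on the list -> about 34 days of work
print(days_remaining(17))  # 34
```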

The important thing is of course to not look too far ahead. In today’s business environment, the world changes at the speed of the Internet, so luckily this works to our advantage. Whenever something changes, you just re-prioritize the list of stories. This process is not only much leaner, it is also much simpler.

Let’s come back to the issue of re-estimation. People cite the fact that at the start of a project, there isn’t enough knowledge to truly estimate the high-level requirements. Then, as the project proceeds and they learn more, they want to re-estimate things. I ask, why bother?

The key here is to recognize that “accurate” estimates don’t deliver any business value. It is only important that complex requirements are broken down into incremental stories and they are implemented and released quickly.

The real benefits

The fact that this saves time and effort is actually just a good side-effect. The real value comes from being able to truly stay in control of the development process. Tiny stories allow you to change them when required, pull them from the backlog when needed, or add new ones whenever business demands it. They also let you move faster, because it’s easier to write code in small incremental chunks, testing is easier, and pushing small releases out to users is easier.

And finally, the discipline that each story be small ensures that people think about the requirement deeply enough before coding begins. It also forces the team to break down requirements into incremental pieces, and totally avoids never-ending stories that always have a little bit left to complete.

So there

This allows the whole process to be streamlined. It allows the team to focus on just what the users care about (working software at their fingertips), and not on all the meaningless artifacts mandated by many methodology books (and indeed unenlightened management). Also, in this manner we handle the issue of estimation and re-estimation. In a zen-like fashion, we do it by not doing it.

Why I don’t use real hours for sprint planning

OK, so several of my friends and colleagues pointed me to Mike Cohn’s recent blog post about why he doesn’t use story points for sprint planning. I left a couple of comments on his blog, but I couldn’t wait for him to moderate them… so I thought I’d post my thoughts here.

In my opinion, using either story-points or real-hours for sprint planning makes one assumption. The assumption is that it is somehow very important to get a sprint “just right”. In other words, it is important for a team to estimate a sprint’s worth of work well enough so as to make it fit perfectly within a sprint.

I believe that it’s not important at all. Here’s why.

First of all, the length of a sprint is picked by the team in a mostly arbitrary manner. Some pick 30 days, some pick 1 week. For the most part, it could be any number in between. So if the duration is not that critical, then why should fitting work into it be? In any case, when a team happens to under-estimate, the undone work gets pushed into a future sprint. And when a team over-estimates, the stakeholders gleefully add more work to the sprint. So… what was the point of all the real-hours-based estimation?

Next, let’s talk about predictability. Take the example of a basketball team. They can make a general statement about their long-term average, but can they predict what they’ll score in the next game? No… but even more critical, is it even important to try to make this prediction? Not really – it is only important that they do their darned best. A software team can certainly spend time breaking stories down into tasks, then even more time estimating each in real hours, and then bargaining with the stakeholders to make sure whatever they pick all fits in… but it’s just more important that they work on the highest priority cards, with as few distractions and road-blocks as possible.

Whether they fit it all in, into that one sprint or not, is a mere detail.

Now, it’s not to say that having dates is not important. It is, but you can’t be scope-bound AND time-bound. Pick one. My default choice is time-bound with variable-scope. In other words, I like to set a release-date, and then ensure that the team is burning down the highest priority items at all times, as that date approaches. When the date is nigh, release! We’re all writing production-ready software, aren’t we? Then, repeat.

This approach then makes it possible to simply drive the whole project off the product-backlog… without strict sprint-backlogs. It allows velocity to be measured in story-points (from the product-backlog), and prediction is a matter of total-story-points divided by velocity-in-story-points. Simple. If anyone asks what will be done this sprint, the answer is: the highest priority stories on the backlog, as many as the team can manage – here’s our best guess (average-velocity-in-story-points). This approach is actually faster than sitting and re-estimating individual stories in real-hours, because, well, the team doesn’t have that distraction anymore.
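Here’s a rough sketch of that prediction in code (the function name and the numbers are made up, purely for illustration):

```python
import math

def sprints_remaining(total_points: int, done_points: int,
                      avg_velocity: float) -> int:
    """Predicted sprints left: remaining points over average velocity."""
    return math.ceil((total_points - done_points) / avg_velocity)

# e.g. a 900-point backlog, 300 points done, averaging 75 points a sprint
print(sprints_remaining(900, 300, 75))  # 8
```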

In fact, from my experience, when the team gets comfortable doing this, they even ask… OK, so if the length of the sprint was set fairly arbitrarily, and as long as we work on the most important stuff at all times, then why are we bothering with the concept of sprints at all? These teams go beyond what I described above and eliminate the waste of iterations. Fun, eh?

If I talk about this stuff to people, sooner or later, someone brings up the happy concept of rhythm – how it is important for the team to have a pulse, a beat to which they work – sprint-planning, development, sprint-close, retrospective, repeat. It gives them satisfaction. It probably does, but I think we can do better. I prefer my teams to not just close an iteration based on some rhythm, but even push the software into production each time. Let the rhythm be set not by an arbitrarily picked number, but through customers getting their hands on the next release of the software. They still do the retrospectives regularly, and have team outings all the time. But the rhythm of the team is set through actual releases. When teams get better and better at this, and realize the critical importance of cycle-time, they do all they can to reduce it.

Anyway, I can talk about this stuff for a long time. Back to sprint-planning. My take is – do it like agile architecture/design. Think of it as evolutionary. We’ve eliminated the big-up-front-design; it is time to eliminate the big-up-front-sprint-plan. Prioritize the backlog constantly, and always ensure that the team is working on the highest priority item. Ensure there are as few distractions and road-blocks as possible. And release quickly, as early and as often as possible – make a rhythm out of it.

And if the stake-holder asks when a particular story will be done, see where it stands in the prioritized backlog. The answer to the question is, of course, after all the ones above it are completed. Ask also if they would like to re-prioritize. That’s all there is to it!

Estimation – a necessary evil

Recent conversations with some co-workers have made me realize that both sides are equally right and equally wrong about the value that estimation provides.

Developers: Yes, it is what it is. Putting a number on a set of features will not change how long it will take to implement, test, and release. I think people understand that, and if they don’t, then you should try to explain it to them. However, you do have to realize that without any idea of how large an effort something might be, you would end up with a chaotic plan that has no basis whatsoever. How else can one plan for the right budget and resources, or know what expectations to set with, say, users and marketing?

Asking for estimates for these purposes – as long as managers understand that they are estimates – is only fair.

Project managers: The other side of the coin. Yes, estimation is important for the reasons outlined above. But, and I’m going to repeat myself – estimates do not (and can never) change the inherent amount of work that something entails. Estimates are only useful for high-level planning purposes when used with a list of risks and assumptions, and an understanding of queuing theory (which will help derive the right buffer sizes).

There is no substitute for actual velocity – that is the only thing that is real. Also, “accurate estimates” is an oxymoron.

So there.

Measuring actuals considered harmful

I don’t quite know what it is, but some project managers really like to measure ‘actuals’. They say it helps them plan better, and that it helps them predict the end-date better. Again, thanks to my love of the dramatic, the title of this article is a tad more severe than what I really mean. In this post, I only make the point that many project managers measure unnecessary things, and often the wrong things – and understanding the what, the how, and most importantly the why, of metrics can make a world of difference to the overall success of a project.

People who advocate ‘measuring actuals’ often mandate their developers report how much actual time they spend working on a user story. So, let’s say that a particular card was estimated at 100 points of work. If the developer pair spends 2 hours working on unexpected build-failures, 1 hour on a staff-meeting and another couple of hours fixing bugs on a story from the previous iteration, then they must report a total of 5 hours less than the total time they might otherwise claim for the card. They must also, of course, report the 5 hours separately.

Often, developers are required to update these numbers on a daily basis. Sometimes even on a per-task basis. (Ouch).

These PMs then subtract the total amount of actual time the team worked from the total available time (capacity) to determine what is sometimes called ‘drag’. The ratio of drag to capacity is their ‘drag factor’, and they then use it to plan the remaining work and predict the end date. I believe this practice is less than optimal for several reasons.
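For concreteness, before we get to those reasons, here is roughly the calculation being described (the function name and numbers are hypothetical):

```python
def drag_factor(capacity_hours: float, story_hours: float) -> float:
    """Ratio of non-story time ("drag") to total available time."""
    return (capacity_hours - story_hours) / capacity_hours

# e.g. a pair with 80 hours of capacity reports 56 hours on stories
print(drag_factor(80, 56))  # 0.3 -- a 30% "drag factor"
```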

The first reason is that it is painful to report time like this. Ask any developer. It distracts from writing code and breaks flow. And if part of the process doesn’t directly add value to the software being built, it should be a candidate for streamlining. It also sends a signal to the team that tracking/accounting/number-crunching is apparently more important (or at least as important) to the management as the value of the software being built. A related reason is that PMs and HR in certain organizations use these numbers during performance reviews. This, of course, makes the measurement completely useless (because using the numbers in this manner alters behavior too much for the measurement to be accurate anymore).

Another reason I don’t like calculating ‘drag’ and using it to plan future work (and predict the end-date) is that it simply gives an incorrect answer. This is because a truly agile team that is doing the right things will always see an increase in velocity, iteration after iteration. And so, using drag to project plans becomes less and less accurate. That leaves even less value in measuring and calculating it.

Now, assuming that the end goal is better estimation of the end-date and a better idea of current rate of development, measuring actuals in this manner is not the best way to achieve it. That’s simply because there is a much easier and more efficient method – it’s called velocity.

It’s called velocity for a reason – it helps answer the question ‘are we there yet?’ Take the analogy of driving a car from city A to city B. Since the drive is long (Google Maps estimates it will be about 12 hours of continuous driving), you make several stops along the way. Making the stops is natural, so when asked how long you might take to get to city B, your answer will usually include those unproductive stops; so you might say – about 15 hours. You don’t say – well, if I stick to the speed limit then I’ll be driving for 12 hours, and I’ll also take a one-and-a-half-hour break for lunch and to recoup a little, and two or three other half-hour breaks. All that matters is how much total time you might take to get to city B.

Software development also has a convenient metric like this – and as mentioned above, it’s also called velocity. It is the total amount of work that gets done per unit of time (usually an iteration). This already includes things like meetings, lunches, bathroom-breaks, training, filling out review forms and so on. The total time spent, of course, is a matter of counting the number of developer bodies that were in the team-room for that duration, and multiplying by the number of days in question. Done.
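As a sketch, the entire ‘measurement system’ fits in a few lines (the numbers are invented):

```python
# Velocity: points finished per iteration. Meetings, lunches, and
# breaks are already "priced in" -- nothing is ever subtracted out.
completed_points = [480, 520, 500]    # last three iterations
avg_velocity = sum(completed_points) / len(completed_points)
print(avg_velocity)                   # 500.0 points per iteration

# Total time spent: bodies in the team room times days elapsed. Done.
developers, iteration_days = 6, 10
print(developers * iteration_days)    # 60 developer-days per iteration
```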

Now – a confession. I do care about ‘drag’. I care about it because by reducing it, the project can move faster. This, however, is what the PM or Scrum-Master should be watching out for during the day, every day. If developers (or BAs or QAs) are being pulled out of the team room into meetings that do not add value directly to the project, then the PM should work the system to get these meetings cancelled or, at least, get the concerned folks out of them. If builds are constantly failing, the PM should sell to management all the benefits that the team will get from an improved build-system. Letting the team refactor ugly parts of the code follows the same reasoning – and sometimes, these refactorings might become rewrites of parts of the system. It is the PM’s job to do the cost-benefit analysis with the help of the technical folks, and get permission from management to do the right thing. These things ought to be the goal, not telling management that the team only got 1200 points done because of a 30% drag.

So – the takeaway? Don’t measure actuals the way some people do. Use a ‘drag’ of 0 percent for your iteration planning. Use historical team velocity to estimate team capacity. Be vigilant about the time your team spends on non-project-related things, and fight for them to reduce that time. Enable them to optimize themselves (refactoring, investment in fixing annoyances, better software tools, outings), and watch their velocity increase along with quality. Your job becomes easier, too!

If estimation is harmful… then, what’s not?

Or leaner Agile – part II
(Part I is here)

During conversations with a couple of co-workers about estimation (after writing this), we were all reminded of strange situations where estimation went haywire. Here’s an example. I was with a small team at a potential client, and we had been creating a proposal for a fairly simple project. We intuitively felt that it would take the four of us about 3 months to get done, and our high-level estimates validated this guess. However, the project-management team wanted us to break things down to a more granular level, to ensure nothing was missed and so on. Which was fine, and so we did. When we re-estimated the smaller, lower-level tasks, we ended up with a total of 8 months for the same team. No one could really argue with that – the cards were all laid out on the table. We didn’t get the project; a competitor came in and did it in about four months.

I’ve heard several war-stories that talk of how people break things down, analyze them to death, and then estimate for worst-case scenarios. Then they pad the estimates, sometimes at all levels – at the task-level, at the story-level, and at the project-level. This simply bloats the estimates to a bizarre level. In the case where the estimation is being done by an external vendor, they often risk losing the work. When this is done by an internal, and often captive group, the customer is left with little choice. They budget for the bloated estimate, get the money approved, and end up spending it. After all, no one ever got in a fight with Mr. Parkinson, and won.

We also talked about times where teams spent more time breaking things down and estimating them than they spent to actually implement the darn thing. All fun stuff.

These stories make me think of Heisenberg’s uncertainty principle – the very act of trying to measure something seems to change it. Obviously this doesn’t quite fit what we’re talking about here, but still… I’ve seen people break things down into minute tasks – and then, when they begin to assign time-based estimates to them, they still go with fairly large units of time. Maybe half-day units, or sometimes a couple of hours. And then, next thing you know, 2 or 3 simple tasks take up an entire day…

Of course, in certain situations there is no option – you just have to break things down and make your best guess. This is especially true for vendors bidding on projects. When you’re not in this situation, however, there really is little benefit to taking this approach. Collaborate with your customer instead – do high-level estimates for baseline sizing, get a team together and get your hands dirty, then make claims about when you might be done. This will definitely work better – and will allow the customer to be more involved in the whole process – and they’ll be able to give feedback about what they really want, based on their involvement with the evolving software. In other words, keep it simple – just use your velocity to determine when you might be done. If I have 700 points of work (high-level estimate), and after four iterations I’m doing about 60 points an iteration – I think I’ll be done in another 8 iterations! Quite simple!
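That math, spelled out with the same numbers:

```python
import math

total_points = 700      # the high-level estimate
velocity = 60           # observed over the first four iterations
done = 4 * velocity     # 240 points finished so far

print(math.ceil((total_points - done) / velocity))  # 8 more iterations
```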

And by all means, revise your high-level estimates every so often if you wish. Every time something changes, or a risk becomes a reality, or scope changes, or your architecture group changes direction – take another pass at the sizing exercise and derive a new answer. Just keep it quick and simple; your answer will probably be close enough to the answer you’d get with more detailed (and more painful and more expensive) analysis.

The trick to this, of course, is to step away from time-based estimation. When people think of time-based estimates, they begin to practice defensive-estimation – and pad estimates “just to be safe”. This is done almost unconsciously, and also quite casually, so much so that it seems perfectly natural. And it adds up. So, by using story points instead, one can easily side-step this issue. The theory is that although the end goal still is to answer the question “what will it take (in terms of time, resources, money)?”, using story-points splits the answer into two parts. The what, and the how much. Story points address the what – by only focusing on the relative complexity of the work. It also has other advantages. The second part of the answer manifests itself after a few iterations – you divide the total number of estimated story points by the team velocity to get an approximate duration. Again, pretty straight-forward.

P.S. – Finally, buffers have their place, since a system with no slack can’t cope with change. However, that is a topic for a different post.

Estimation considered harmful

Or leaner Agile – Part I
(Part II is here)

This is a rather long post on a topic that has sparked many arguments whenever anyone – colleagues or friends – brought it up. It is about applying a key concept of Lean to, what some may consider, a rather extreme degree. And I’ll admit this – the title is a little misleading, and deliberately a bit inflammatory. Anyway, here goes…

The what

I want you, dear readers, to walk with me. Walk with me and follow along as I travel the path of an imaginary project that we, as forward thinking Web 2.0 visionaries, take from an idea stage through to construction. And during this journey, I want to show you what place estimation as a practice ought to occupy in the ideal software development life-cycle.

So, first we need an idea. Hmmm, a quick look around tells us that a mobile social-networking website would be cool – it would be just the thing to get easily funded, and hopefully, bought by Google in short order. Having satisfied ourselves that this is a solid business plan (the VCs are happy with it), and that it’s (clearly) bound to work, it is now time to get started on getting the thing built.

Now, what features should the site have? Since we are the founders, we brainstorm among ourselves, and since we’re hip, we even ask a couple of potential users to participate. At the end of the day, and after a long and grueling session of arguing about what’s useful to teenagers and what isn’t, we have a nice, shiny list of stories. Again, since we’re hip, we decide to call it our Master Story List.

The how much

OK, how much money are we going to need? And when can we launch the first release? Certainly, our friendly VCs will ask us such questions. Hmmm… time to call in the cavalry. Through our connections, we hire the best developers we can find. We also hire a couple of QA folks. Since we plan to play the role of the business-analysts ourselves, we have a full quorum. We’re ready to do some estimation. We don’t really know too much about the system, at least none of the specific details, nor do we know exactly how certain things are going to be implemented. What we do know, at a high level, is what we want built and what each feature might look like. With this input, and with the years of software development experience that the development team has, we assign story-points to each requirement on our Master Story List. After another couple of days of intense sessions (discussing and arguing about what each feature is, how it relates to the market and to other features, how it might be implemented, and so on) we finish our first cut of the sizing exercise. Our story-points have a simple scale (let’s ignore unknowns for now) – 10, 20, 40, and 80 – where a 20 is a small story, 40 a medium, and 80 a large; a 10 is reserved for really simple stories. When we add everything up, we have a total of 13,340 story points.

Wheee! We’re done estimating! At least for now. Again, from all the experience that we have as a group, and from our best guess of how things might go on this particular project, we think we can get 600 story points done every couple of weeks. The full project now appears to be about a year in length. We also want to do multiple releases so we can release something quicker. So we pick the features that we think we absolutely need for the initial launch, and it turns out to be about half the overall set. We therefore plan for a six month initial release, followed by several shorter releases. We’re ready to go!
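For the curious, the back-of-the-envelope math behind that plan is only a few lines (rounded as generously as we rounded it):

```python
total_points = 13_340
points_per_iteration = 600          # our guess, per two-week iteration

iterations = total_points / points_per_iteration   # ~22.2 iterations
weeks = iterations * 2
print(round(weeks))                 # 44 weeks -- call it about a year

# The initial launch is roughly half the scope:
print(round(weeks / 2))             # 22 weeks -- the "six month" release
```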

By the way, when we spent those few days locked inside the conference room discussing features and so on, we also derived a list of questions, assumptions, and risks. We’re going to use that list as a starting point for our project-long risk-management activities. We’ve got all of it written down (in a nice Excel spreadsheet, no less) and the plan is to update and monitor everything on a weekly basis.

Go! Wait! Iteration planning

So – we’re ready to start! We bite off a bunch of work that we think we can get done in the first couple of weeks – and the developers go at it. How exciting! Now, we’ve read books on Agile software development, and some of them recommend that at the iteration planning session, we should re-estimate those stories that are planned for that iteration. They say that this detailed, more granular, task-level estimation is a key part of the iteration planning meeting (IPM), or sprint planning meeting. Much of the literature also says that we do this second level of estimation in real units of time (as opposed to the story-points we used earlier) – so that we can plan iterations better, and track progress better.

We think about this – and question the value of this effort. We already know what needs to be built – for this sprint and for the release. More accurate estimates will not change the amount of work, much less reduce it. So we decide to save the half-day or whole day that the team might spend discussing design and implementation details to get at those more accurate estimates (wow, that’s some oxymoron, huh?). All that stuff changes the minute someone writes the first bunch of code, anyway. So we decide to let the code, and hence the software, speak for itself. We let the developers do what they do best – we know they’ll use good software development practices to ensure high quality. The QA folks are here to test the application as well. The actual working system is what matters, anyway!

We do, however, ask the development team to break things down into manageable chunks of work – in other words, they task each story out. This is useful to them, because it helps them think through things, and keeps them focused throughout the implementation of each story.

Keeping it simple

In making the decision not to bother with more detailed estimates, we also thought of this – there are only two things that we can play with: scope and schedule. Quality has to be high – that’s non-negotiable. We’ve hired good people – they’re enthusiastic about the project, and they care about their craft. We’re sitting in the room with them, and helping move things along whenever they’re blocked. If we want to get done quicker, the only option we have is to pull stuff out of the release plan. If we want all the stuff that we had decided on, no matter if things take longer than we hoped – the only option we have is to delay the release. It’s that simple.

Remember the old days, before this whole Agile thing, when people thought that the way to control the complexity of building software was to freeze requirements before starting work? That didn’t work so well, did it? Today, another delusion persists in the minds of many managers – that of being able to predict the date of completion with accuracy. Just like the weather (the most powerful supercomputers can’t predict weather with any accuracy more than a few days out, and they often get it wrong anyway), software is too complex to think that by controlling a couple of variables, one can control the trend-line. Even that control doesn’t have predictable results – by de-scoping a complex story (whatever complex might mean to you), is there a quantifiable way to say how much time was saved? Or by re-factoring a gnarly area of code, can you tell how much efficiency was gained?

Why do we still persist in trying to adjust for velocity, drag, complexity, performance, estimates, re-estimates, actuals, rate of scope increase, etc.? Why do we expect that by running massaged numbers we can produce the date? After all, what software team ever announced a date that they actually met? These days, many companies just stick the number of the year at the end of the name of their product – Office 2007, Windows 2003, Pocket PC 2006. They’ve given up the charade – they’re now saying they’ll just deliver it sometime in the year. Just keep your fingers crossed.

So now what?

So then the question becomes, if the software development is going “slow” (in other words, estimates turned out to be just that – estimates), when will we be done? And can we do anything to speed things up? The answer to when comes from the story-points we’d assigned to every story coupled with the data we gathered over the past several iterations. If it appears we’ve been getting done only 450 points each time, then, hmm… we’re going about 25% slower than originally “planned” and realistically, our six month release is looking more like an eight month release. Again, fairly simple math gave us the answer. The second question is also fairly simple – and in fact, it has nothing to do with estimation at all. It is a quintessential question about software development – how can we deliver faster? We can’t answer that – maybe no one can. Doing things right, hiring smart people, these things help. For now, we either accept a delivery in about 8 months time, or we trim scope.
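Spelled out, the arithmetic from the paragraph above really is trivial:

```python
planned_velocity = 600   # points per iteration, from the original sizing
actual_velocity = 450    # what the iterations are actually delivering

print(1 - actual_velocity / planned_velocity)  # 0.25 -- 25% slower

planned_months = 6
print(planned_months * planned_velocity / actual_velocity)  # 8.0 months
```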

The thing to note, also, is that if we’d spent the time to do detailed estimation (at the task level, for each story in the sprint) – we’d have lost another half-day to a day every iteration, and we’d still have seen no real change in the outcome. Sure, we’d have data on how accurate (or entirely off) our development team was with their estimates, but that would be about it… it wouldn’t change the amount of time they’d have taken. And we’d be short about 5 to 10 percent of the time available to actually build the software. Oh boy, we sure made a good decision not to go even slower!

Some books on Agile also talk about how important the planning meeting is… what with the estimation effort, and the communication that goes on during it. If you’ve been following along, we just thought through why detailed estimation is a wasted effort. Now let’s talk about the communication aspect. The theory is that this meeting allows developers to find out what other developers are working on, and also to gain understanding about the other areas of functionality. All this as the business analysts drone on about this feature or that. Didn’t we catch sight of those two QA people dozing off? And that developer? Some of the other BAs looked pretty bored too… The reality is that long meetings are not good forums for anything, much less for exchanging information about something as complex as software. When developers are curious about someone else’s code, they bother each other, and talk about it at a whiteboard – our whiteboards are always filled with the remains of all these technical discussions… squiggles that only mean something to the techies.

Re-estimation

The literature also points us to another advantage of detailed estimation and re-estimation. The idea is that if we keep careful notes (and numbers) about what and how the team originally estimated each type of story, then if certain estimates were consistently off, we could go back and fix those types of estimates across the project. Thereby, we’d improve the accuracy of the end date quickly and easily. This makes several assumptions – the biggest one is that the original estimates were all consistently inconsistent. Ouch, now we’re in the land of mirrors. Not only were they consistently off, but they were consistently off for the same reason – a leap of faith, really, given all the variables.

Now, I’ll admit, I don’t see any reason not to try and “fix” incorrect estimates, especially after you know more about the system. It’s just that you don’t need the whole detailed-estimation waste-of-time for this (or actuals, for that matter – they’re evil too, and I’ll talk about them another day). Any good development team will realize that certain estimates were made on an assumption that was wrong (or whatever), and will provide information that will enable management to go back and correct for such things. This is just a normal, everyday software development activity – you make an assumption and move ahead; when things change, you go back and adjust. Normal communication habits are all one needs. Some people say developers are not capable of doing this – that they only care about writing code. They say they’re geeks who don’t know how to communicate, and don’t know what’s important to management. I’ve even heard that you can’t trust developers to raise visibility of such things because it’s like admitting a mistake or something. Oh boy – if you have such developers, your project is doomed no matter how good the estimates are.

However, like I said, there is value in going back and correcting your plan based on more information or a risk being realized. Again, this should happen automatically, every day! This is also a happy side-effect of the regular risk-management sessions we talked about earlier. Finally, just like quality, risk-management is everyone’s problem.

Further planning

One more question that invariably gets asked when this topic is being discussed: OK, so if I go with this approach, then how do I know how much functionality to plan for in a given sprint or iteration? I have two answers – the first is to simply use yesterday’s weather – if you got done 500 points last sprint, try the same thing this time. If half your team is going to be gone on vacation, try shooting for 250 points. Again, simple and as effective as anything else.
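A minimal sketch of that first answer, ‘yesterday’s weather’ (the function name and numbers are just for illustration):

```python
def next_sprint_target(last_sprint_points: int,
                       available_fraction: float = 1.0) -> int:
    """Plan what you finished last time, scaled by who's around."""
    return round(last_sprint_points * available_fraction)

print(next_sprint_target(500))       # 500 -- same as last sprint
print(next_sprint_target(500, 0.5))  # 250 -- half the team on vacation
```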

The other answer is that iterations are evil, they add no value, they’re considered harmful, and should not be done. But that’s fodder for another post.

Anyway…

So this is my pitch about why detailed (or task-based) estimation is, in general, a waste of time and resources, and a contributing factor to lowered velocity (the 5 to 10 percent we talked about, depending on how much time is spent every iteration – more, thanks to the interrupted flow). It also adds an unexciting element to the software development process that contributes no value to the actual deliverable.

So, do I believe that all teams ought to abandon low-level (or detailed, or task-level) estimates? No – I believe this is an advanced technique – and can only be applied effectively if all the players in the development team truly grok software development. It assumes that all players are equally enthused by the art of building software, and are engaged in the iterative process of development that is Agile. If your team is just starting, or you have team members that aren’t experienced using Agile techniques, and especially if you have a project manager that doesn’t fundamentally understand agile and/or lean software development, then you’re most definitely better off doing things the traditional way – and estimating at the task-level.

For others, here’s a way to trim some muda off your process. Flame away.

Story points – handling unknowns

I’m a big fan of using story points to estimate effort on software development projects. It is a good way of keeping things simple, and it ensures that teams can be efficient and quick about their estimation tasks.

One question that often comes up is how a team ought to handle stories (or epics) that they just don’t know enough about at a given moment. My answer depends on the kind of scale the team is using – if they’re using T-shirt sizing, then the scale ought to have an UNKNOWN size (translating to a scale of, say, XS, S, M, L, and UNKNOWN).

If they’re using a numbered scale, then I suggest a twist on the UNKNOWN level. I’ve recently switched to using a geometric scale (as opposed to a Fibonacci scale) – and to that I’ve added an UNKNOWN level of 1,000,000 points. So now, my scale is – 10, 20, 40, 80, and 1,000,000.

What this does is radiate a little more information about the total estimate for the project. Let’s say you have 200 stories, of which 10 are unknown. Let’s also say that the remaining 190 stories add up to 8,450 points. Now, the total (because of the new level for the unknown items) becomes 10,008,450 points. It is still clear that we have about 8,450 points of estimated work, but it is also clear that we have 10 items that are unknowns.
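A small sketch shows why one million works so nicely – as long as the estimated work stays under 1,000,000 points, a single total cleanly encodes both numbers:

```python
UNKNOWN = 1_000_000

known_total = 8_450      # the 190 estimated stories from the example
unknown_count = 10

total = known_total + unknown_count * UNKNOWN
print(f"{total:,}")      # 10,008,450

# Reading both facts back out of the single total:
print(total // UNKNOWN)  # 10   unknown stories
print(total % UNKNOWN)   # 8450 points of estimated work
```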

If the UNKNOWN level were only two or three or even ten times the size of a Large story, then the total for the project would obfuscate how many items are unknown – and give an impression that is not quite accurate. The large one-million-point level for unknown stories fixes that. And it also effectively broadcasts the idea that a subset of stories just doesn’t have the same level of detail and clarity as the rest.