How Lean and Agile are different, not that it matters

This post is an attempt to clear the air a little bit around this issue. First off, I want to make it clear that, in my opinion, the theoretical distinctions between the two concepts are not important enough for teams to change anything they’re doing to produce good software. In other words, who cares what you call it?

However, it is useful to understand what each concept is. This allows you to have clear conversations about issues and solutions, and it also makes it easier to see what needs to be done at every level to improve the overall throughput of the team or organization (or, indeed, of the entire value-stream). The way I see it, Agile and Lean are similar in many ways, and where they differ, they complement each other.

Agile

Agile is a framework that focuses on iterative development and feedback to build out a software product. Agile focuses on quality by making things like test-driven development, continuous integration, and refactoring essential pillars of the process. By definition, Agile fosters the creation of an adaptable and evolving team process – it accommodates the fact that each project is different, and processes need to fit the situation. Agile also emphasizes communication and collaboration over documentation – thus, it enables teams to move quicker by resolving things face-to-face. It also considers the customer a part of the team, thereby ensuring maximum feedback from the ultimate users, and allowing them in turn to change the course of the work product.

Agile methods usually measure things within the software development world – number of stories (scope), estimates (time), velocity (speed). This in turn drives costs and budgets.

Overall, Agile can be thought of as a framework for the software-construction (planning, development/QA, release) part of creating a product.

Lean

Lean, on the other hand, is a management philosophy. Ultimately, it focuses on throughput (of whatever is being produced) by taking a strictly systems-level view of things. In other words, it doesn’t focus on particular components of the value-stream like code-construction or QA, but on whether all the components of the chain are working as efficiently as possible so as to generate as much overall value as possible. Value, of course, includes things like high quality and optimal use of time and resources.

Lean is based on several things – queuing theory, the theory of constraints, concurrent engineering (set-based development), and delaying commitment until the last responsible moment. Careful metrics (only the ones that truly measure throughput, which often means going one level up) are an important part of Lean – they allow everyone to be objective. Speaking of which, Lean software development recommends measuring things across the value chain – and as extended as that chain can be. It recommends measuring return on investment (ROI), customer satisfaction, customer usage patterns, market share, and so on. This in turn drives budgets and costs within the software organization.

Where they’re similar

Both Lean and Agile focus on people – over pretty much everything else. They both focus on inspecting and adapting – in order to improve the work-product and the efficiency of producing it. In other words, feedback is critical – from people, from customers, from stake-holders, and from the product itself. They’re both quality-focused, and they encourage early discovery and, more importantly, prevention of defects.

The area where they complement each other most is in the breadth of their world-view. Agile is usually focused very much within the software development team or organization, while Lean focuses on the entire system, including as many workers, partners, customers, and external stakeholders as possible.

How they fit

Having made these remarks, I’d like to present this picture – it shows the way I like to think of how Agile and Lean fit together. It is sort of a stack – Lean is the foundational framework, and Agile sits on top of it. I’ve also taken a shot at depicting some of the building blocks within each.

[Figure: how Lean and Agile fit together]

It doesn’t really matter

I’ve found that as you incorporate more and more aspects of Lean into your Agile processes, the distinctions start to blur. In the end, Lean subsumes everything, and ultimately, the only things that matter are whether the organization delivers what customers want and like, how fast it can deliver it, and how much money the organization and its customers can make together.

Lean Software Development: How to find bottlenecks – Metrics that Matter

In the past, I’ve written about why measuring actuals the way a lot of people do isn’t particularly effective. I’ve also written that I think the only way to truly improve the quality of a development team is by reducing its cycle-time. Which raises the question – what metrics ought one to collect in order to reduce cycle-time? This post focuses on one set of numbers that can be useful in pursuing this reduction.

Queue-sizes as symptoms of constraints

Typical software teams have multiple roles – customers, analysts, developers, testers, and so on. To develop the best possible software, these folks should work together in a collaborative way. Despite this mode of interaction, there are often localized work products that each function works on – story cards and narratives, the code that implements the desired functionality, the manual, ad-hoc, and exploratory testing that needs to get done outside of what automation covers.

If we consider the entire team to be a system comprising these various functional components, then the work of creating working software flows through it in the form of a story card that changes state from requirements-analysis to elaboration to development to testing. This shouldn’t be mistaken for a waterfall process – this is just how the work flows. An agile team would iterate through these phases very quickly and in a highly collaborative manner.

To be able to determine the health of such a system – which, by the way, can also be thought of as a set of queues that feed into each other – one needs to know how much work-in-progress is actually inside it at any given moment. To tell if the system is running at steady state, the project team’s dashboard must include the number of story-cards in each queue.

Let’s take an example –

[Figure: story-card counts in each queue]

So the moment we see that the number of cards in a particular area begins to grow out of proportion, we know there is a bottleneck – or in other words, we have found a constraint. By resolving it – maybe by adding people (always a good idea, right?), or reducing the batch-size (the number of stories picked up in an iteration), or reducing the arrival rate (stretching out the iteration), or removing other waste that might be the cause (found through root-cause analysis) – one can improve the throughput of the software development team (system).
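To make this concrete, here is a minimal sketch (in Python, with entirely hypothetical queue names, counts, and threshold) of the kind of check such a dashboard could run – flag any queue whose recent size has grown well past its earlier average:

```python
# A sketch, not a prescription: flag queues whose size is growing
# out of proportion. All names, counts, and thresholds are hypothetical.
from statistics import mean

# Cards counted in each queue at the end of each day
history = {
    "analysis":    [3, 3, 4, 3, 4],
    "development": [5, 5, 6, 5, 6],
    "testing":     [2, 4, 7, 11, 16],   # steadily growing
    "deployment":  [1, 1, 2, 1, 1],
}

def find_bottlenecks(history, growth_threshold=1.5):
    """Flag queues whose recent average size is well above their earlier one."""
    suspects = []
    for queue, counts in history.items():
        earlier, recent = mean(counts[:-2]), mean(counts[-2:])
        if earlier > 0 and recent / earlier >= growth_threshold:
            suspects.append(queue)
    return suspects

print(find_bottlenecks(history))  # ['testing']
```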

A Finger Chart

Analyzing carefully collected data is the only way to reliably diagnose a problem in a system – probably something familiar to any developer who has ever been tasked with improving the performance of a piece of software. After collecting data, visualizing it helps even more. In the case of trying to discover a bottleneck through the data above, a finger chart can help. It looks like this –

[Figure: a finger chart – stacked bands of queue sizes over time]

It’s called a finger chart because it is supposed to look like the lines formed between stacked fingers. The width of each color represents the size of a queue. Now, finding the bottleneck is as simple as looking for a widening band. There’s your trouble area, the one you need to optimize.
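If you are already tracking daily queue counts, drawing such a chart takes only a few lines. A rough sketch (assuming matplotlib is available, and reusing the same hypothetical numbers as above):

```python
# A finger chart is essentially a stacked area plot of queue sizes over time.
import matplotlib.pyplot as plt

days = range(1, 6)
analysis    = [3, 3, 4, 3, 4]
development = [5, 5, 6, 5, 6]
testing     = [2, 4, 7, 11, 16]   # the widening band
deployment  = [1, 1, 2, 1, 1]

fig, ax = plt.subplots()
ax.stackplot(days, analysis, development, testing, deployment,
             labels=["analysis", "development", "testing", "deployment"])
ax.set_xlabel("day")
ax.set_ylabel("story cards in queue")
ax.legend(loc="upper left")
plt.savefig("finger_chart.png")
```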

Systems thinking

This technique may be useful in your team situation. However, it never does to miss the forest for the trees. In many cases, much more bang for the buck can be had by looking at an extended value-stream, as opposed to a localized one. In other words, the largest constraint may lie outside the team boundary – for example, multiple stake-holders that often set conflicting goals for the team, or several critical team-members who are involved in multiple projects and are therefore spread too thin to be productive on any. It behooves the team members, and indeed the management, to try and resolve issues at all levels by taking a systems approach to improvement.

In any case, despite the fact that queueing theory teaches us (among much else) that optimizing locally at various points ultimately results in lowered throughput of the complete system, it is useful at times to fix what one can. There are certainly several areas where things can be improved, and several ways of handling each. The finger chart above helps in one such area, and by mounting it on a team wall, it becomes an information radiator that lets everyone in on the health of the system.

Lean software development: Why reduce cycle-time?

Or why reducing cycle-time is like Aikido

Why, indeed?

To cut a long story short, the answer to why reducing cycle-time matters is that a short cycle-time is the only thing that can truly differentiate a great development team from a mediocre one. A team that has reduced its cycle-time to, say, about a day, has mastered many things in the process. In other words, it takes a lot of things for a team to be able to deliver software in a super-short cycle, but it is those very things that become its biggest advantages.

Let’s talk about some of these – both what it takes, as well as the advantages that these translate to.

Understanding delivery

To be able to even talk about a short cycle-time, it is important to first establish what the starting point is and what the finish-line is. The first one is relatively easy. It is simply the point in time when a requirement has been recognized as something that would add value to the software being developed. This recognition might come from a customer who made a request, or it might come from an internal product-management function, or a combination of the two.

The finish-line is fuzzier – at least it is with many traditional software development teams. Some people call a feature done when it moves out of development, some call it done when it moves out of QA, some when it moves out of user-acceptance testing, some when it moves into the backlog of the deployment team, and so on. Sometimes, there are multiple levels of done-ness, so to speak – development-done-but-not-integrated, integrated-but-not-tested, done-but-not-verified, done-but-not-user-accepted, all-but-deployed, and so on and so forth.

The first thing that a focus on reducing cycle-time brings about is a clear understanding of what being done means. And here, like in many places, the common-sense answer is the best. Something is done, when a customer can use it. Period.

The aim, then, of reducing cycle-time simply becomes that of getting a feature into the hands of a customer as fast as possible.
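Once both ends are pinned down, the measurement itself becomes trivial. A minimal sketch (with hypothetical dates) of computing cycle-time as the gap between recognizing a requirement and a customer being able to use the feature:

```python
# Cycle-time per story: from "requirement recognized" to "usable by a customer".
from datetime import date

# (recognized, usable_by_customer) for a few hypothetical stories
stories = [
    (date(2008, 3, 3), date(2008, 3, 10)),
    (date(2008, 3, 4), date(2008, 3, 7)),
    (date(2008, 3, 5), date(2008, 3, 17)),
]

cycle_times = [(done - recognized).days for recognized, done in stories]
print("cycle-times (days):", cycle_times)              # [7, 3, 12]
print("average:", sum(cycle_times) / len(cycle_times)) # about 7.3
```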

Identifying waste

If the goal is to get something into the hands of the customer as fast as possible, then anything that gets in the way of doing that becomes waste. This gives a simple definition of what qualifies as an inefficiency. Cause and effect – the goal of delivering quickly to a customer gives rise to the next goal: removing obstacles from the path of this delivery. In other words – removing waste from the team’s software development process.

If weekly or daily status meetings that last an hour or longer don’t seem to be helping anyone move towards this delivery, then in the spirit of removing waste, these meetings should be cancelled. The best way for an integration defect to get fixed usually isn’t filling out a bug-report in the tracking system and having it run through a workflow involving the enterprise-architect, two tech-leads, the QA manager, and the project-manager. Again, to eliminate waste, the process needs to be changed. It might become as simple as scheduling time for the two concerned developers to talk and fix the issue. And so on.

Avoiding rework, upping quality, and root-cause analysis

If a feature has to be worked on again because of defects, it represents a large waste of time and effort. This wasted time doesn’t help in getting software into the hands of customers; therefore, an effort should be made to reclaim it. The only way to do that is to get it right the first time. To do that, one has to ensure that no (or as few as possible) bugs are found during final testing – instead, everything ought to be caught and fixed earlier.

This means two things – one, that testing needs to be done early in the game and continuously through all stages of the game. The second is that each time a non-trivial issue is found, a root-cause analysis is done to ensure that the same kind of thing doesn’t occur again. This is the equivalent of the ‘stop-the-line’ philosophy of lean manufacturing.

Getting better at the difficult stuff

One reason why certain teams don’t truly deliver software frequently is that deployment is hard. They’re not good at it, or it is an arduous process with lots of moving parts, and so on. These are all just excuses – if something is difficult to do, it shouldn’t be put off and worked around. Instead, it should be tackled head-on, thoroughly analyzed for root causes, and fixed so the problem actually goes away.

So basically, time should be taken to look into and simplify the difficult parts of the process, and the team should ultimately become good at those things. There can be no excuses about this – the debt that inevitably piles up from not doing this simply must be avoided at all costs – else it will kill even the possibility of speedy delivery.

Poly-skilled people

When a team-member is an expert at something, and only that one person is an expert at it, she becomes a single point of failure. If she takes some time off, getting something that involves her area of expertise into the hands of the customer becomes impossible. What if she should leave the team? What if there is more than one thing in the pipeline at a given point in time that needs the expert?

To be able to truly deliver software to customers in the shortest time possible, such bottlenecks must be resolved. When there is work for two in a certain area and only one ‘expert’, then another team-member should be able to help out. If a QA person becomes unexpectedly unavailable for a period of time, a developer or anyone else ought to be willing and able to step in to fill the gap.

Being poly-skilled in a lean team is extremely important – because the final delivery of software is the culmination of the whole team’s effort, not that of a subset of team-members or of only one or two functions. Sometimes, having poly-skilled team-members is exactly what allows the team to perform as efficiently as possible – a real life-saver.

A culture of excellence

There are other things a team must do (indeed, has to!) to be able to reduce cycle-time to a minimum. These include taking on just enough work, taking on work in smaller chunks, limiting the size of its queues, using pull scheduling systems, and so forth. It also entails always looking for the next bottleneck so it can be resolved, the next piece of waste that can be trimmed from the process, and the next optimization that would improve the throughput of the team.

Each team-member should have a caring attitude toward the customer – and also an attitude that asks a lot of questions. One set of questions is directed towards the customers themselves – so as to gain a real understanding of what they’re trying to achieve, so that the best, most creative, and most efficient solution can be found. Another set of questions is directed at the team and its processes – there should be no sacred cows when it comes to trying to improve things. Anything that wants to stay on as part of the team’s process should be able to withstand stringent questioning and devil’s advocacy. By doing all this, things become better for the customer, and thus for the company and the team as well.

This culture of excellence judges its results by one metric alone – throughput. It realizes that any step taken to improve things needs to be questioned, yes, but if put into practice, its effects are to be measured. Measurements are extremely important to a lean software development team – but it only cares about the one measure that matters – that of delivery to the customer. This is why no optimizations should be made at a local level (say within QA or deployment), at the expense of the global work-product. In other words this culture is always looking to optimize the whole system, and not a part of it. Cycle-time is the easiest and most effective measure of this throughput.

A short cycle-time as a secret-weapon

This matters most to me, on a personal level, because I work for a small company today – and probably always will. If you’re competing with large software corporations with deep pockets that throw large development teams at projects – indeed, specifically at ones related to the product your little start-up is undertaking – then the only advantage you have is that of agility.

This is where the Aikido thing comes in – to succeed in a world dominated (for better or worse) by large companies that usually have wasteful processes, traditional project-management philosophies, and old technology stacks, but often lots of cash and people – you have to use their ‘strengths’ and weaknesses against them.

If your little start-up can turn out a feature in a day or two, what customer will be able to resist? Sure, you still have to get over other obstacles in the path of business success (like building what the customer actually wants) – but at least one thing will be possible. Because of the small size of your team, you’ll be able to adopt all kinds of agile and lean practices quickly – and deliver features into the hands of customers so fast that you could run circles around your larger competitors.

Reducing cycle-time and doing all that it entails is all you have to do.

P.S. If large corporations are able to successfully pull this off as well – then… who knows what innovation is possible?

Lean software development: An iteration-less agile process

A lean software process offers a lot of possibilities. In this post, I want to try and explain why I think iterations are superfluous in a good agile team environment.

Premises

I thought I’d present my reasoning as a set of statements that together build up the argument. I’ll try to address the following things – business analysis, estimation and tasking, ensuring communication, iterations, requirements prioritization, planning, showcasing and closure, taking breaks. There’s probably more, but these are the areas where most people ask questions about this approach.

So here goes!

Analysis

– Business conditions and requirements can change quickly.
– Analysis therefore, is only valid for as long as the relevant business requirement is valid.
– To ensure that the team takes the latest requirements and information into account, therefore, it makes sense to delay analysis to keep it as close to development as possible.
– In an ideal world, analysis could be done real-time. Or, as they say – just-in-time.

Tasking (work-breakdown)

– Tasking a user-story is valuable.
– Developers don’t really pay attention when other developers are tasking other stories. Especially in large meetings. They communicate and figure things out in other ways.
– Developers often don’t even remember how/why they tasked something a few days ago. Sometimes, things change enough to make their earlier tasking less valuable as well.
– It makes sense, therefore, to have the pair that’s going to develop a card do the tasking.
– It is also more valuable, by the same reasoning, to try to minimize the delay between tasking and implementing.
– The ideal is right before implementation – in other words – in real-time, or as they say – just-in-time.

Communication

– Large meetings are inefficient.
– It is valuable to have developers immersed in the domain.
– They learn from each other, and from BAs and QAs during discussions. However, the value of this exchange is diminished when it occurs in large meetings.
– The best form of communication is organic – face to face, across the team-room, during work, a little now and a little later, as required.
– Domain-related (and other) information, therefore, need not be distributed only during IPMs – nor are large meetings the most desirable form of this exchange.
– The best way is to have domain people and developers co-locate, and to encourage conversations and daily interaction.

Iterations

– The duration of an iteration is arbitrary.
– The only important factor is that it shouldn’t be too long. People have discovered that 1 to 4 weeks work well. Can we do better?
– Iterations are an artificial construct, and just like estimation, they don’t (and can’t) alter the amount of inherent work.
– Zero is the smallest natural number.

Prioritization

– Customers/product-owners are constantly learning new things that impact requirements.
– Prioritization of stories must, therefore, be an on-going activity. It has nothing to do with ongoing development iterations.
– Even a developed user-story (the partially-developed product) provides feedback into this changing understanding and priorities.
– It therefore makes sense to reduce the cycle-time of developing a story – to allow for maximum, and fastest feedback.
– Prioritizing stories can, and should, therefore run in parallel, at all times.

Closure

– Showing progress to stake-holders is important.
– The work-in-progress should be showcased to them whenever significant features are implemented.
– This implies a non-deterministic schedule. Pre-planned showcase-schedules therefore are not optimal.

Breaks

– A hard-working team deserves to take breaks and have fun.
– This again should be on an on-demand schedule – especially after a burst of extra-hard work or overtime.
– This also implies a non-deterministic schedule.
– This holds good for things like retrospectives as well.

Recap

– Prioritization is an on-going, parallel activity that runs throughout the duration of the project.
– Analysis ought to be done just-in-time.
– Estimation adds little direct value to implementation.
– Tasking ought to be done just-in-time.
– An iteration is an artificial construct, and the shorter the better.
– Showcasing work is important, and taking breaks is important; these are non-deterministic activities, and shoe-horning them into a ‘regular’ schedule is sub-optimal.

Proposal

– Reduce iteration lengths to zero. Do away with the artificial construct altogether.
– Ensure business analysts work with product-owners and stake-holders to maintain a constantly prioritized list of user-stories.
– Ensure maximum effectiveness of the development team by allowing business analysis, and tasking, and development to occur in a just-in-time manner.
– Showcasing product, conducting retrospectives, and taking breaks should occur whenever needed!

Final pitch

Hopefully, I’ve been a little more coherent in my thinking here than I normally am during my… well, incoherent rants. Ultimately, I want to be able to remove all parts of a process that add no value by themselves. Any opposition to this proposal would have to be in terms of what value iterations provide that this process cannot.

Oh, and also – this is a work in progress 🙂

Extra credit

– Do away with re-estimation altogether.

Coming soon

– What requirements should be satisfied in order for a team to be successful at iteration-less agile?
– What impact does this have on a client-vendor relationship: project management, expectations-management, risk-management and so on?
– Does iteration-less agile make life difficult for development teams that like to have short-term targets? E.g.- ‘200 points this week’?
– Possible disadvantages of saying no to iterations (everyone else is doing it!).
– Also – applying more lean principles to traditional Agile processes.

** Accurate estimate: this phrase is quite an oxymoron.

Estimation – a necessary evil

Recent conversations with some co-workers have made me realize that both sides are equally right and equally wrong about the value that estimation provides.

Developers: Yes, it is what it is. Putting a number on a set of features will not change how long it will take to implement, test, and release. I think people understand that, and if they don’t, then you should try to explain it to them. However, you do have to realize that having no idea of how large an effort might be would result in a chaotic plan with no basis whatsoever. How else can one plan for the right amount of budget and resources, or know what expectations to set with, say, users and marketing?

Asking for estimates for these purposes – as long as managers understand that they are estimates – is only fair.

Project managers: The other side of the coin. Yes, estimation is important for the reasons outlined above. But, and I’m going to repeat myself – estimates do not (and can never) change the inherent amount of work that something entails. Estimates are only useful for high-level planning purposes when used with a list of risks and assumptions, and an understanding of queuing theory (which will help derive the right buffer sizes).

There is no substitute for actual velocity – and that is the only thing that is real. Also, ‘accurate estimate’ is an oxymoron.

So there.

Measuring actuals considered harmful

I don’t quite know what it is, but some project managers really like to measure ‘actuals’. They say it helps them plan better, and that it helps them predict the end-date better. Again, thanks to my love of the dramatic, the title of this article is a tad more severe than what I really mean. In this post, I only make the point that many project managers measure unnecessary things, and often the wrong things – and understanding the what, the how, and most importantly the why, of metrics can make a world of difference to the overall success of a project.

People who advocate ‘measuring actuals’ often mandate that their developers report how much actual time they spend working on a user story. So, let’s say that a particular card was estimated at 100 points of work. If the developer pair spends 2 hours working on unexpected build-failures, 1 hour in a staff-meeting, and another couple of hours fixing bugs on a story from the previous iteration, then they must report a total of 5 hours less than the total time they might otherwise claim for the card. They must also, of course, report the 5 hours separately.

Often, developers are required to update these numbers on a daily basis. Sometimes even on a per-task basis. (Ouch).

These PMs then subtract the total amount of actual time the team worked from the total available time (capacity) to determine what is sometimes called ‘drag’. The ratio of drag to capacity is their ‘drag factor’, and they then use it to plan the remaining work and predict the end date. I believe this practice is less than optimal for several reasons.
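Before getting into those reasons – for the record, the arithmetic behind such a ‘drag factor’ is trivial. Here it is spelled out with hypothetical numbers (a sketch of the practice being described, not an endorsement of it):

```python
# The 'drag factor' computation, with made-up numbers.
team_size = 6             # developers
iteration_days = 10
hours_per_day = 8
capacity = team_size * iteration_days * hours_per_day   # 480 hours

non_story_hours = 144     # meetings, build failures, old bugs, ...
drag_factor = non_story_hours / capacity                 # 0.30
print(f"drag factor: {drag_factor:.0%}")                 # 30%
```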

The first reason is that it is painful to report time like this. Ask any developer. It distracts from writing code and breaks flow. And if part of the process doesn’t directly add value to the software being built, it should be a candidate for streamlining. It also sends a signal to the team that tracking/accounting/number-crunching is apparently more important (or at least as important) to the management as the value of the software being built. A related reason is that PMs and HR in certain organizations use these numbers during performance reviews. This, of course, makes the measurement completely useless (because using the numbers in this manner alters behavior too much for the measurement to be accurate anymore).

Another reason I don’t like calculating ‘drag’ and using it to plan future work (and predict the end-date) is that it simply gives an incorrect answer. This is because a truly agile team that is doing the right things will always see an increase in velocity, iteration after iteration. And so, using drag to project plans becomes less than accurate. That leaves even less value in measuring and calculating it.

Now, assuming that the end goal is better estimation of the end-date and a better idea of current rate of development, measuring actuals in this manner is not the best way to achieve it. That’s simply because there is a much easier and more efficient method – it’s called velocity.

It’s called velocity for a reason – it helps answer the question ‘are we there yet?’ Take the analogy of driving a car from city A to city B. Since the drive is long (Google Maps estimates it will be about 12 hours of continuous driving), you make several stops along the way. Making the stops is natural, so when asked how long you might take to get to city B, your answer will usually include those unproductive stops; so you might say – about 15 hours. You don’t say – well, if I stick to the speed limit then I’ll be driving for 12 hours, and I’ll also take a one-and-a-half-hour break for lunch and to recoup a little, and two or three other half-hour breaks. All that matters is how much total time you might take to get to city B.

Software development also has a convenient metric like this – and as mentioned above, it’s also called velocity. It is the total amount of work that gets done per unit of time (usually an iteration). This already includes things like meetings, lunches, bathroom-breaks, training, filling out review forms and so on. The total time spent, of course, is a matter of counting the number of developer bodies that were in the team-room for that duration, and multiplying by the number of days in question. Done.
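By contrast with tracking actuals, velocity needs almost no bookkeeping. A minimal sketch, with hypothetical numbers:

```python
# Velocity: points completed per iteration. Meetings, lunches, and
# bathroom-breaks are already absorbed into the number.
points_per_iteration = [52, 58, 61, 63]   # last four iterations (hypothetical)
velocity = sum(points_per_iteration) / len(points_per_iteration)   # 58.5

developers = 6
iteration_days = 10
developer_days = developers * iteration_days   # "counting bodies" – done

print(f"velocity: {velocity:.1f} points/iteration, "
      f"{velocity / developer_days:.2f} points per developer-day")
```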

Now – a confession. I do care about ‘drag’. I care about it because by reducing it, the project can move faster. This, however, is what the PM or Scrum-Master should be watching out for during the day, every day. If developers (or BAs or QAs) are being pulled out of the team room into meetings that do not add value directly to the project, then the PM should work the system to get these meetings cancelled or, at least, get the concerned folks out of them. If builds are constantly failing, the PM should sell to management all the benefits that the team will get from an improved build-system. Letting the team refactor ugly parts of the code follows the same reasoning – and sometimes, these refactorings might become rewriting parts of the system. It is the PM’s job to do the cost-benefit analysis with the help of the technical folks, and get permission from management to do the right thing. These things ought to be the goal – not telling management that the team only got 1200 points done because of a 30% drag.

So – the takeaway? Don’t measure actuals the way some people do. Use a ‘drag’ of 0 percent for your iteration planning. Use historical team velocity to estimate team capacity. Be vigilant about the time your team spends on non-project-related things, and fight for them to reduce that time. Enable them to optimize themselves (refactoring, investment in fixing annoyances, better software tools, outings), and watch their velocity increase along with quality. Your job becomes easier, too!

If estimation is harmful… then, what’s not?

Or leaner Agile – part II
(Part I is here)

During conversations with a couple of co-workers about estimation (after writing this), we were all reminded of strange situations where estimation went haywire. Here’s an example. I was with a small team at a potential client, and we had been creating a proposal for a fairly simple project. We intuitively felt that it would take the four of us about 3 months to get done, and our high-level estimates validated this guess. However, the project-management team wanted us to break things down to a more granular level, to ensure nothing was missed and so on. Which was fine, and so we did. When we re-estimated the smaller, lower-level tasks, we ended up with a total of 8 months for the same team. No one could really argue with that; the cards were all laid out on the table. We didn’t get the project – a competitor came in and did it in about four months.

I’ve heard several war-stories that talk of how people break things down, analyze them to death, and then estimate for worst-case scenarios. Then they pad the estimates, sometimes at all levels – at the task-level, at the story-level, and at the project-level. This simply bloats the estimates to a bizarre level. When the estimation is being done by an external vendor, they risk losing the work. When it is done by an internal (and often captive) group, the customer is left with little choice. They budget for the bloated estimate, get the money approved, and end up spending it. After all, no one ever got in a fight with Mr. Parkinson and won.

We also talked about times where teams spent more time breaking things down and estimating them than they spent to actually implement the darn thing. All fun stuff.

These stories make me think of Heisenberg’s uncertainty principle – the very act of trying to measure something seems to change it. Obviously this doesn’t quite fit what we’re talking about here, but still… I’ve seen people break things down into minute tasks – and then, when they begin to assign time-based estimates to them, they still go with fairly large units of time. Maybe half-day units, or sometimes a couple of hours. And then, next thing you know, 2 or 3 simple tasks take up an entire day…

Of course, in certain situations, there is no option – you just have to break things down and make your best guess. This is especially true for vendors bidding for projects. When you’re not in this situation, however, there really is little benefit in taking this approach. Collaborate with your customer instead – do high-level estimates for baseline sizing, get a team together and get your hands dirty, then make claims about when you might be done. This will definitely work better – and it will allow the customer to be more involved in the whole process – and they’ll be able to give feedback about what they really want, based on their involvement with the evolving software. In other words, keep it simple – just use your velocity to determine when you might be done. If I have 700 points of work (high-level estimate), and after four iterations I’m doing about 60 points an iteration – I think I’ll be done in another 8 iterations! Quite simple!
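That back-of-the-envelope projection, spelled out as code (same hypothetical numbers as in the example above):

```python
# Remaining iterations = points left / observed velocity, rounded up.
import math

total_points = 700      # high-level estimate
velocity = 60           # points per iteration, observed
iterations_done = 4

remaining = total_points - velocity * iterations_done   # 460 points left
iterations_left = math.ceil(remaining / velocity)       # ceil(7.67) = 8
print(f"{remaining} points left – about {iterations_left} more iterations")
```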

And by all means, revise your high-level estimates every so often if you wish. Every time something changes – a risk becomes a reality, scope changes, or your architecture group changes direction – take another pass at the sizing exercise and derive a new answer. Just keep it quick and simple; your answer will probably be close enough to the one you’d get from more detailed (and more painful and more expensive) analysis.

The trick to this, of course, is to step away from time-based estimation. When people think in time-based estimates, they begin to practice defensive estimation – and pad estimates “just to be safe”. This is done almost unconsciously, and also quite casually, so much so that it seems perfectly natural. And it adds up. By using story points instead, one can easily side-step this issue. The theory is that although the end goal is still to answer the question “what will it take (in terms of time, resources, money)?”, using story points splits the answer into two parts: the what, and the how much. Story points address the what – by focusing only on the relative complexity of the work. (They have other advantages, too.) The second part of the answer manifests itself after a few iterations – you divide the total number of estimated story points by the team velocity to get an approximate duration. Again, pretty straightforward.

P.S. – Finally, buffers have their place, since a system with no slack can’t cope with change. However, that is a topic for a different post.