Why I don’t use real hours for sprint planning

OK, so several of my friends and colleagues pointed me to Mike Cohn’s recent blog post about why he doesn’t use story points for sprint planning. I left a couple of comments on his blog, but I couldn’t wait for them to come out of moderation… so I thought I’d post my thoughts here.

In my opinion, using either story-points or real-hours for sprint planning makes one assumption. The assumption is that it is somehow very important to get a sprint “just right”. In other words, it is important for a team to estimate a sprint’s worth of work well enough so as to make it fit perfectly within a sprint.

I believe that it’s not important at all. Here’s why.

First of all, the length of a sprint is picked by the team in a mostly arbitrary manner. Some pick 30 days, some pick one week, and for the most part it could be anything in between. So if the duration is not that critical, why should fitting work into it be? In any case, when a team happens to under-estimate, the undone work gets pushed into a future sprint. And when a team over-estimates, the stakeholders gleefully add more work to the sprint. So… what was the point of all the real-hours-based estimation?

Next, let’s talk about predictability. Take the example of a basketball team. They can make a general statement about their long-term average, but can they predict what they’ll score in the next game? No… but more critically, is it even important to try to make this prediction? Not really; it is only important that they do their darned best. A software team can certainly spend time breaking stories down into tasks, then even more time estimating each in real hours, and then bargaining with the stakeholders to make sure whatever they pick all fits in… but it’s just more important that they work on the highest priority cards, with as little distraction and as few road-blocks as possible.

Whether it all fits into that one sprint is a mere detail.

Now, it’s not to say that having dates is not important. It is, but you can’t be scope-bound AND time-bound. Pick one. My default choice is time-bound with variable-scope. In other words, I like to set a release-date, and then ensure that the team is burning down the highest priority items at all times, as that date approaches. When the date is nigh, release! We’re all writing production-ready software, aren’t we? Then, repeat.

This approach then makes it possible to simply drive the whole project off the product-backlog… without strict sprint-backlogs. It allows velocity to be measured in story-points (from the product-backlog) and prediction is a matter of total-story-points divided by velocity-in-story-points. Simple. If anyone asks what will be done this sprint, the answer is the highest priority stories on the backlog, as many as the team can, here’s our best guess (average-velocity-in-story-points). This approach is actually faster than sitting and re-estimating individual stories in real-hours, because well, the team doesn’t have that distraction anymore.
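The forecasting arithmetic here is simple enough to sketch. This is a minimal illustration, not anyone’s prescribed tool; the numbers are invented:

```python
def sprints_remaining(backlog_points, avg_velocity):
    """Forecast: total story-points left, divided by velocity, rounded up."""
    if avg_velocity <= 0:
        raise ValueError("velocity must be positive")
    # Round up: a partially filled final sprint still costs a whole sprint.
    return -(-backlog_points // avg_velocity)

# Example: 120 points left on the backlog, team averages 25 points a sprint.
print(sprints_remaining(120, 25))  # -> 5
```

That one division is the entire “sprint plan” – no per-story re-estimation in hours required.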

In fact, from my experience, when the team gets comfortable doing this, they even ask… OK, so if the length of the sprint was set fairly arbitrarily, and as long as we work on the most important stuff at all times, then why are we bothering with the concept of sprints at all? These teams go beyond what I described above and eliminate the waste of iterations. Fun, eh?

If I talk about this stuff to people, sooner or later, someone brings up the happy concept of rhythm – how it is important for the team to have a pulse, a beat to which they work – sprint-planning, development, sprint-close, retrospective, repeat. It gives them satisfaction. It probably does, but I think we can do better. I prefer my teams to not just close an iteration based on some rhythm, but even push the software into production each time. Let the rhythm be set not by an arbitrarily picked number, but through customers getting their hands on the next release of the software. They still do the retrospectives regularly, and have team outings all the time. But the rhythm of the team is set through actual releases. When teams get better and better at this, and realize the critical importance of cycle-time, they do all they can to reduce it.

Anyway, I can talk about this stuff for a long time. Back to sprint-planning. My take is – do it like agile architecture/design. Think of it as evolutionary. We’ve eliminated the big-up-front-design; it is time to eliminate the big-up-front-sprint-plan. Prioritize the backlog constantly, and always ensure that the team is working on the highest priority item. Ensure there are as few distractions and road-blocks as possible. And release quickly, as early and as often as possible – make a rhythm out of it.

And if the stake-holder asks when a particular story will be done, see where it stands in the prioritized backlog. The answer to the question is, of course, after all the ones above it are completed. Ask also if they would like to re-prioritize. That’s all there is to it!

A Project Delivery Sanity Test

I’ve been working at ThoughtWorks as a software consultant for many years. During this time, I have done development as well as project-management work. My most recent exploits were on a large project ($100 million+ program) as the PM of a sub-team of about 40 people.

After working with many clients and projects and also having worked on several side projects during the past few years, I like to think I’ve seen projects that span many characteristics – size, complexity, team composition, geography, domain, and technology. Based on the observations I’ve made over this time I’d like to present a list of things – a checklist if you will – of the most important issues that may determine the ultimate success or failure of a project.

1. A Delivery Focus

Being delivery focused begins with releasing software the moment it is written (almost). Releasing means to get the software into the hands of real, live users. Nothing else counts – developer-done doesn’t count, deploying on QA environments doesn’t count, staging doesn’t count.

Until the software reaches at least a single real user, all the code written (and all the design documents, passing tests, whatever else) is just inventory. (This implies that the goal of making money from the software can’t be achieved, thereby reducing the throughput to zero.)

The biggest benefit of getting code early into production (after creating a value-stream that might actually have some output) is that it brings the kind of focus on doing the right things that nothing else can.

I’d like to make a related point here. One of the few metrics that makes sense on software projects is cycle-time. The team must endeavor to reduce this cycle-time – and should be able to take an idea that needs to be implemented and push it into the hands of users in as little time as possible.
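Measuring this takes nothing more than two timestamps per feature: when the idea was accepted, and when it reached real users. A minimal sketch, with an invented feature log:

```python
from datetime import date

# Hypothetical log of features (dates invented for illustration).
features = [
    {"idea": date(2009, 1, 5),  "live": date(2009, 1, 19)},
    {"idea": date(2009, 1, 12), "live": date(2009, 2, 2)},
    {"idea": date(2009, 2, 1),  "live": date(2009, 2, 10)},
]

# Cycle-time per feature: days from idea to the hands of a user.
cycle_times = [(f["live"] - f["idea"]).days for f in features]
print(cycle_times)
print(sum(cycle_times) / len(cycle_times))  # the number to drive down
```

The average (and the trend) is the number the team should endeavor to reduce.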

Another important issue of note here is that of usability. Once the software is out there, in the hands of the customers, are they happy using it? Or is it an exercise in frustration? If the users are captive (say, internal users of an IT application), then the software still gets used (albeit as the cause of multiple cuss-words a day). If it is a web-app meant for the general internet population, then the product will tank. Either way, the only way to truly understand and fix usability problems is by releasing it to the actual users.

a) Is your project in production? Are you satisfied with how long something takes to get into the hands of your users?

b) Is the feature-set being designed with a usability focus?

2. Clear Priorities

Speaking of bringing focus to software development, the most important thing is to ensure that you’re building the right thing! This relates to what feature comes next (and what feature never does), what to improve, and what to remove. Setting a priority cannot be arbitrary; it must come from a deep understanding of the market and the domain, and from listening to feedback from real users.

This is not to say that one should do whatever any fickle user says, but it is important to listen, and include the feedback with the other things that go into prioritizing goals and features.

Without a source for rock-solid priorities, a development team will flounder. Rock-solid means that at any given time, anyone on the team can say what the next few most important things are. It also means that it is rock-solid until it is changed (which can happen at any time, based on changing reality and discovery).

I also like to think of a similarity between priorities and constraints. There can be only one top priority. When there are too many top priorities, there aren’t any.

Do you have a clear set of priorities (at any given point in time)? Does everyone on the team understand them, and how they are set?

3. Stakeholder Involvement

The one thing you can’t outsource is your product-owner function. Many teams seem to forget this; their product-owners spend so little time with the team that they might as well not be present.

In today’s busy world of everyone playing multiple roles and having too much to do, it is common for people to multitask their days away. The effect of this on developers has been documented and studied – I have seen pretty much the exact same thing happen to busy product-owners.

The reality, however, is that if the bread and butter of an organization is the software their users buy, nothing can be more important than ensuring that it is built right and will garner the customer satisfaction and loyalty that is a must for it to succeed. Nothing.

A side-effect of this is some form of seagull-product-ownership. Here’s an example – the product-owner appears during a planning meeting (or the like), and makes comments that alter priorities (which is OK in itself). This changes what the team has understood so far and sends them into a flurry of activity, trying to get things done that don’t necessarily flow from what they were doing until then.

A product-owner must, well, own the product. There is no substitute for spending several hours (at least) with the team each week – truly understanding what the development issues are, and providing the team with the critical information that is driving the priorities of the project.

Is your team’s product-owner involved enough with the project?

4. Business Analysis

This item is a larger issue in more complex domains. Remember, to truly complete a story, it should be pushed into production. Small, incremental stories that make business sense are thus critical to running things in this manner. To help break requirements down into suitable chunks of work, stories should be created with the INVEST principles in mind.

On teams dealing with sufficiently complex domains, having business analysts who both understand the domain and are able to create such stories is a must. Again, the important thing here is the role – and that someone needs to fill it. A team of poly-skilled people that includes this skill-set works just as well.

For what it’s worth, a good ratio of BA people to developers is about one BA for every one or two pairs of developers.

Does your project run off of well-written user stories? Is it sufficiently staffed up from a BA standpoint?

5. Team Size and Structure

From my experience, this one is a bit counter-intuitive to most people. People seem to think that for large projects (cost, number of requirements, whatever) large teams are required and even justified. There is plenty of research and literature out there that speaks to the negative impact of having too many people on a team. Even empirical observation allows one to reach the same conclusion – after all, how many smooth and successful large projects have you seen?

Ultimately, I think, this issue boils down to one of competent people. I find it somewhat difficult to believe that your average IT shop can attract not a dozen or two, but a couple of hundred developers to a project, all of them super-smart, super-motivated, and super-interested in the given domain.

The other issue that hits hard is how the team gets ramped up. Often there are too many people right at the beginning. Teams need to be ramped up slowly at the start – while high-level architecture is solidified (through code, being pushed to production!), and patterns are laid out, things of that sort. As these things happen and everyone understands the code-base well, the team can begin to add new members. This should be done slowly, as each new person needs to reach the same level of deep understanding. Once there are several people who know the existing system well, the ramp up may accelerate a bit.

However, I have yet to see a project team of about 10 to 20 developers that couldn’t outperform a team of a couple of hundred. After all, you can easily scale up hardware, get more office-space, buy more snacks and food, but you can’t always scale up the rock-star developers that are required for a project to be successful. In other words, it’s probably OK to hire fewer developers, even if you have to pay the awesome ones twice as much as your finance department thinks they’re worth. You’ll end up ahead anyway.

A related issue here is that once a project does get saddled with a large team, how is it sub-divided? Are the divisions made along functional lines (modules) or are they put into fluid workflow teams as the need arises? Is it one large team working on different stories with the whole code-base being collectively owned? Or is it bickering factions that jealously guard their areas of code from any change?

People who have suffered through artificial team separation along ‘module’ lines know the pain this can cause. Throw in the typical office politics into the mix, and the project ends up producing a system that conforms highly to Conway’s Law.

Finally, most people on projects know when these things have become issues – just listen in on any lunch conversation – or for the full details, hang out with the team at the bar.

Is your project over-staffed? Is the team sub-divided along the right lines?

6. Planning and Tracking

I’ve seen many teams ‘do Agile’, while not being agile in any fashion at all. Being agile is about focusing only on what’s important – and ignoring the rest. It means doing just enough of something so that it works as desired and produces the right kind of output. It means not doing anything just for the sake of it, and it means questioning any kind of process-thing which doesn’t add value to the software being produced.

Being agile allows and aids you in being lean.

How does this relate to planning and tracking? Teams that truly understand what being agile and lean is all about focus on the activity of planning, and then react responsibly to the changes that follow. Iterations are used to gather more and more knowledge about the system being built (while delivering to production at all times) so that better decisions can be made and better priorities can be set. Really mature teams don’t even need iterations.

What agile isn’t is a set of smaller periods of waterfall-type software development. An iteration planning meeting doesn’t produce a plan that must be met with perfect accuracy. Not meeting an iteration goal doesn’t mean it was a failure. Estimates still need to be treated as estimates, and iteration capacity still shouldn’t be filled to the maximum (basic queueing theory).

Planning on many teams becomes an exercise in craziness. This is especially true when the level of detail that people get into becomes so great, that they might as well just get the work done right then and there. Planning should be high-level goal setting, and as long as each individual team member understands the desired outcome for anything he/she picks up to work on, the intention of the planning meeting has been met.

There is often a tendency to re-estimate work in ‘real hours’, and then to track the progress and completion of each sub-task throughout the length of the iteration. This is a waste of time, and simply doesn’t need to be done. Might as well spend the time on taking the team bowling and spare them the wasteful effort of this sort of planning. At least the team will have a good time, and will get time to gel on a more personal level.

A more insidious issue here is the false sense of security or concern tracking things this way often produces. Graphs, pie-charts, trend-lines – all show the amount of work done, amount outstanding, tasks in progress, how many builds were green, how many bugs were found and how many were fixed, how many hours were worked overall, etc., etc. Instead, how about how many customers liked the product (is it even in their hands yet?), how many releases were made to the customer (boy, are we responsive!), how old a discovered bug is (why isn’t it fixed yet?), how little work-in-progress there is (unreleased software is just inventory), how fast the cycle-time is (turnaround time – from idea to profit) and so on? Metrics do matter, but ensure that they’re the right ones.

Does your project plan at the right level, and track and measure the right things?

7. Technical Architecture

Technical architecture can be a strategic advantage. This is most true if the design of the code allows new features to be added (or bad ones to be taken out) quickly and safely, and also allows maintenance to be done without breaking half of the system. It is an advantage if the turn-around time of a new feature or a fix is so short that the team can run circles around its competitor.

This is a system-wide issue (what isn’t?). If every part of the architecture except one is very well designed, is maintainable and extensible, etc., then that one module will become the constraint of the system when it comes to features in that area. The code-base needs to be kept clean and trimmed of fat at all times, across all areas. (All disclaimers apply – in the end it boils down to the cost of change vs. ROI).

Developer productivity is a related issue here, and a very important one. The write-test, write-code, deploy, test, see-results, repeat, check-in cycle is how each developer spends his/her entire day. If the deploy-test-see-results part of this cycle is not down to less than a minute or so (at worst), you can expect major productivity losses from the lack of flow. Build times must be carefully monitored and everything must be done to ensure that developers aren’t waiting around for something like a build result.

Does the technical architecture of your project support a quick response (new feature, bug-fix, enhancement)? Does your project have long build times?

8. Testing, QA practices

Automated tests are the most important part of a code-base. Period.

No one can get a design right the first time, no one can get all the requirements down pat the first time. Things always need to be changed, and software spends four-fifths of its life in maintenance mode. The only thing that can automatically tell a developer that he has broken something is the test-suite. Without this, even making a simple change is a nightmare. Enough said.

QA people are also an important part of this. They are the independent verifiers of the developers’ understanding of the requirements. Testing early and often is the key, as they say, and QA folks need to be part of the development team. They need to be in there while a story is being written up, before it is played, and soon after it leaves the developer’s workstation. And everywhere else with their ad-hoc testing. And they should automate, automate, automate. Why should they do things the computer is perfectly suited for – repeating a step-by-step procedure and ensuring the output is correct? Enough said.

A good ratio of QA to developers is about one QA person for every one or two pairs of developers.

Is your team staffed sufficiently from a QA standpoint? Do you have automated GUI testing in place?

9. Understanding software development

This one is actually the simplest to state, but I’ve noticed that a lot of people have trouble accepting it. Here it is – people who run software projects should know what software development is all about.

If your project is being run by someone who doesn’t really understand the nuances of software development, and heaven forbid thinks he/she does, then you’re in trouble. Generally speaking, of course. I won’t say too much more about this – I just happen to think that people who either write the software or those who manage the people who write the software should know what the heck it’s all about. It just saves a lot of silly mistakes, absurd conversations, rework, and generally helps getting things done right, quicker.

Does your project team have managers that truly understand software development?

So there it is – my first cut at crystallizing many of the issues I’ve seen in the past few years on various projects into a list of nine items. There are other important issues to be watchful for, be sure of that. Also, I don’t think that the issues on any project that is in trouble can be traced to one single point above. It will always be a mix of several things, some listed above, some not.

I hope this is useful though!

Lean Software Development: How to find bottlenecks – Metrics that Matter

In the past, I’ve written about why measuring actuals the way a lot of people do isn’t particularly effective. I’ve also written that I think the only way to truly improve the quality of a development team is by reducing its cycle-time. Which raises the question – what metrics ought one to collect in order to reduce cycle-time? This post focuses on one set of numbers that can be useful in pursuing that reduction.

Queue-sizes as symptoms of constraints

Typical software teams have multiple roles – customers, analysts, developers, testers, and so on. To develop the best possible software, these folks should work together in a collaborative way. Despite this mode of interaction, there often are localized work products that each function works on – story cards and narratives, the code that implements the desired functionality, the manual, ad-hoc, and exploratory testing that needs to get done outside of what automation covers.

If we consider the entire team to be a system composed of the various functional components, then the work of creating working software flows through it in the form of a story card that changes state from requirement-analysis to elaboration to development to testing. This shouldn’t be considered a waterfall process – this is just how the work flows. An agile team would iterate through these phases very quickly and in a highly collaborative manner.

To be able to determine the health of such a system – which, by the way, can also be thought of as a set of queues that feed into each other – one needs to know how much work-in-progress is actually inside it at any given moment. To tell if the system is running at steady-state, the dashboard of the project team must include the number of story-cards in each queue.

Let’s take an example –


So the moment we see that the number of cards in a particular area begins to grow out of proportion, we know there is a bottleneck – in other words, we have found a constraint. By resolving it – maybe by adding people (always a good idea, right?), or reducing the batch-size (the number of stories picked up in an iteration), or reducing the arrival rate (stretching out the iteration), or removing other waste that might be the cause (found through root-cause analysis) – one can improve the throughput of the software development team (system).
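The dashboard itself needs nothing fancier than counting cards by state. A sketch, with a hypothetical board invented for illustration:

```python
from collections import Counter

# Hypothetical board: each story card carries its current state.
cards = [
    {"id": 1, "state": "analysis"},
    {"id": 2, "state": "development"},
    {"id": 3, "state": "development"},
    {"id": 4, "state": "development"},
    {"id": 5, "state": "development"},
    {"id": 6, "state": "testing"},
]

# Queue size per state = the work-in-progress in each part of the system.
queue_sizes = Counter(card["state"] for card in cards)
print(dict(queue_sizes))  # {'analysis': 1, 'development': 4, 'testing': 1}
```

Sample these counts once per iteration (or per day) and the trends become visible.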

A Finger Chart

Analyzing carefully collected data is the only way to reliably diagnose a problem in a system – probably something familiar to all developers that were ever tasked with improving the performance of a piece of software. After collecting data, visualizing it helps even more. In the case of trying to discover a bottleneck through the data above, a finger chart can help. It looks like this –


The reason it’s called a finger chart is because it is supposed to look like the lines formed between stacked fingers. The width of each color represents the size of the queue. Now, finding the bottleneck is as simple as looking for a widening band. There’s your trouble area, the one you need to optimize.
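Spotting the widening band can even be automated once the queue sizes are sampled per iteration. A sketch, with invented numbers; the "net growth from first to last sample" heuristic is one simple choice, not the only one:

```python
# Queue size per state, sampled once per iteration (invented data).
history = {
    "analysis":    [4, 4, 5, 4, 4],
    "development": [3, 4, 4, 3, 4],
    "testing":     [2, 4, 6, 9, 13],  # the steadily widening band
}

def net_growth(samples):
    """How much the queue grew between the first and last sample."""
    return samples[-1] - samples[0]

# The state whose band keeps widening is the likely constraint.
bottleneck = max(history, key=lambda state: net_growth(history[state]))
print(bottleneck)  # -> testing
```

Here the analysis and development queues hold steady while testing accumulates work, so testing is the place to look first.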

Systems thinking

This technique may be useful in your team situation. However, it never does to miss the forest for the trees. In many cases, much more bang for the buck can be had by looking at an extended value-stream, as opposed to a localized one. In other words, the largest constraint may lie outside the team boundary – for example, from having multiple stake-holders that often set conflicting goals for the team, or from several critical team-members being involved in multiple projects and therefore spread too thin to be productive on any. It behooves the team members, and indeed the management, to try and resolve issues at all levels by taking a systems approach to improvement.

In any case, despite the fact that queueing theory teaches us (among much else) that optimizing locally at various points ultimately results in lowered throughput of the complete system, it is useful at times to fix what one can. There are certainly several areas where things can be improved, and several ways of handling each. The finger chart above helps in one such area, and by mounting it on a team wall, it becomes an information radiator that lets everyone in on the health of the system.

Lean software development: Why reduce cycle-time?

Or why reducing cycle-time is like Aikido

Why, indeed?

To cut a long story short, the answer to why reducing cycle-time matters is that a short cycle-time is the only thing that can truly differentiate between a great development team and a mediocre one. A team that has reduced its cycle-time to say, about a day, has mastered many things in the process. In other words, it takes a lot of things for a team to be able to deliver software in a super-short cycle, but it is those very things that become its biggest advantages.

Let’s talk about some of these – both what it takes, as well as the advantages that these translate to.

Understanding delivery

To be able to even talk about a short cycle-time, it is important to first establish what the starting point is and what the finish-line is. The first one is relatively easy. It is simply the point in time when a requirement has been recognized as something that would add value to the software being developed. This recognition might come from a customer who made a request, or it might come from an internal product-management function, or a combination of the two.

The finish line is fuzzier – at least it is with many traditional software development teams. Some people call a feature done when it moves out of development, some call it done when it moves out of QA, some when it moves out of user-acceptance testing, some when it moves into the backlog of the deployment team, and so on. Sometimes, there are multiple levels of done-ness, so to speak – development-done-but-not-integrated, integrated-but-not-tested, done-but-not-verified, done-but-not-user-accepted, all-but-deployed, and so on and so forth.

The first thing that a focus on reducing cycle-time brings about is a clear understanding of what being done means. And here, like in many places, the common-sense answer is the best. Something is done, when a customer can use it. Period.

The aim, then, of reducing cycle-time simply becomes that of getting a feature into the hands of a customer as fast as possible.

Identifying waste

If the goal is to get something into the hands of the customer as fast as possible, then anything that gets in the way of doing that becomes waste. This gives a simple definition to what qualifies as an inefficiency. Cause and effect – the goal of delivering quickly to a customer gives rise to the next goal – removing obstacles from the path of this delivery. In other words – removing waste from the team’s software development process.

If weekly or daily status meetings that last an hour or longer don’t seem to be helping anyone move towards this delivery, then in the spirit of removing waste, these meetings should be cancelled. The best way for an integration defect to get fixed usually isn’t filling out a bug-report in the tracking system and having it run through a workflow involving the enterprise-architect, two tech-leads, the QA manager, and the project-manager. Again, to eliminate waste, the process needs to be changed. It might just become scheduling time to get the two concerned developers to talk and fix the issue. So on and so forth.

Avoiding rework, upping quality, and root-cause analysis

If a feature has to be worked on again because of defects, it represents a large waste of time and effort. This wasted time doesn’t help get software into the hands of the customers; therefore, an effort should be made to reclaim it. The only way to do that is to get it right the first time. To do that, one has to ensure that no (or as few as possible) bugs are found during the final testing; instead, everything ought to be caught and fixed earlier.

This means two things – one, that testing needs to be done early in the game and continuously through all stages of the game. The second is that each time a non-trivial issue is found, a root-cause analysis is done to ensure that the same kind of thing doesn’t occur again. This is the equivalent of the ‘stop-the-line’ philosophy of lean manufacturing.

Getting better at the difficult stuff

One reason why certain teams don’t truly deliver software frequently is because deployment is hard. They’re not good at it, or it is an arduous process with lots of moving parts etc. These are all just excuses – if something is difficult to do – it shouldn’t be put off and worked-around. Instead, it should be tackled head-on – and be thoroughly analysed for root-causes – and all issues should be fixed so the problem actually goes away.

So basically, time should be taken to look into and simplify the difficult parts of the process, and the team should ultimately become good at those things. There can be no excuses about this – the debt that inevitably piles up from not doing this boldly, simply must be avoided at all costs – else it will kill even the possibility of speedy delivery.

Poly-skilled people

When a team-member is an expert at something, and only that one person is an expert at it, she becomes a single-point of failure. If she takes some time off, getting something that involves her area of expertise into the hands of the customer becomes impossible. What if she should leave the team? What if there is more than one thing in the pipeline at a given point of time that needs the expert?

To be able to truly deliver software to customers in the shortest time possible, such bottlenecks must be resolved. When there is work for two in a certain area and only one ‘expert’, then another team-member should be able to help out. If a QA person becomes unexpectedly unavailable for a period of time, a developer or anyone else ought to be willing and able to step in to fill the gap.

Being poly-skilled in a lean team is extremely important – because the final delivery of software is the culmination of the team’s effort – not of a subset of team-members or of only one or two functions. To be able to perform as efficiently as possible, sometimes, having poly-skilled team-members becomes a life-saver.

A culture of excellence

There are other things a team must be able to do (indeed, has to!) to be able to reduce cycle-time to a minimum. These include taking on just enough work, taking on work in smaller chunks, limiting the size of its queues, using pull scheduling systems, and so forth. It also entails always looking for the next bottleneck so it can be resolved, the next piece of waste that can be trimmed from the process, and the next optimization that would improve throughput of the team.

Each team-member should have a caring attitude towards the customer – and also an attitude that asks a lot of questions. One set of questions is directed at the customers themselves – so as to gain a real understanding of what they’re trying to achieve, so that the best, most creative, and most efficient solution can be found. Another set of questions is directed at the team and its processes – there should be no sacred cows when it comes to trying to improve things. Anything that wants to stay on as part of the team’s process should be able to withstand stringent questioning and devil’s advocacy. By doing all this, things become better for the customer, and thereby for the company, and thus for the team as well.

This culture of excellence judges its results by one metric alone – throughput. It realizes that any step taken to improve things needs to be questioned, yes, but that once a step is put into practice, its effects are to be measured. Measurements are extremely important to a lean software development team – but the team only cares about the one measure that matters: delivery to the customer. This is why no optimizations should be made at a local level (say within QA or deployment) at the expense of the global work-product. In other words, this culture is always looking to optimize the whole system, and not a part of it. Cycle-time is the easiest and most effective measure of this throughput.

A short cycle-time as a secret-weapon

This matters most to me, on a personal level, because I work for a small company today – and probably always will. If you’re competing with large software corporations with deep pockets that throw large development teams at projects – indeed, specifically at ones related to the product your little start-up is undertaking – then the only advantage you have is agility.

This is where the Aikido thing comes in – to succeed in a world dominated (for better or worse) by large companies that usually have wasteful processes, traditional project-management philosophies, and old technology stacks, but often lots of cash and people – you have to use their ‘strengths’ and weaknesses against them.

If your little start-up can turn out a feature in a day or two, what customer will be able to resist? Sure, you still have to get over other obstacles in the path of business success (like building what the customer actually wants) – but at least one thing will be possible. Because of the small size of your team, you’ll be able to adopt all kinds of agile and lean practices quickly – and deliver features into the hands of customers so fast that you could run circles around your larger competitors.

Reducing cycle-time and doing all that it entails is all you have to do.

P.S. If large corporations are able to successfully pull this off as well – then… who knows what innovation is possible?

Lean software development: An iteration-less agile process

A lean software process offers a lot of possibilities. In this post, I want to try and explain why I think iterations are superfluous in a good agile team environment.


I thought I’d present my reasoning as a set of statements that together build up the argument. I’ll try to address the following things – business analysis, estimation and tasking, ensuring communication, iterations, requirements prioritization, planning, showcasing and closure, taking breaks. There’s probably more, but these are the areas where most people ask questions about this approach.

So here goes!


Business analysis

– Business conditions and requirements can change quickly.
– Analysis, therefore, is only valid for as long as the relevant business requirement is valid.
– To ensure that the team takes the latest requirements and information into account, it makes sense to delay analysis so as to keep it as close to development as possible.
– In an ideal world, analysis could be done in real-time. Or, as they say – just-in-time.

Tasking (work-breakdown)

– Tasking a user-story is valuable.
– Developers don’t really pay attention when other developers are tasking other stories. Especially in large meetings. They communicate and figure things out in other ways.
– Developers often don’t even remember how/why they tasked something a few days ago. Sometimes, things change enough to make their earlier tasking less valuable as well.
– It makes sense, therefore, to just have the pair that’s going to develop a card, task it.
– It is also more valuable, by the same reasoning, to try to minimize the delay between tasking and implementing.
– The ideal is right before implementation – in other words – in real-time, or as they say – just-in-time.


Communication

– Large meetings are inefficient.
– It is valuable to have developers immersed in the domain.
– They learn from each other, and from BAs and QAs, during discussions. However, the value of this exchange is diminished when it occurs in large meetings.
– The best form of communication is organic – face to face, across the team-room, during work, a little now and a little later, as required.
– Domain-related (and other) information, therefore, should not be distributed only during IPMs – nor are IPMs the most desirable forum for this exchange.
– The best way is to have domain people and developers co-located, and to encourage conversations and daily interaction.


Iterations

– The duration of an iteration is arbitrary.
– The only important factor is that it shouldn’t be too long. People have discovered that 1 to 4 weeks works well. Can we do better?
– Iterations are an artificial construct, and just like estimation, they don’t (and can’t) alter the amount of inherent work.
– Zero is the smallest natural number.


Prioritization

– Customers/product-owners are constantly learning new things that impact requirements.
– Prioritization of stories must, therefore, be an on-going activity. It has nothing to do with development iterations.
– Even a developed user-story (the partially-developed product) provides feedback into this changing understanding and these priorities.
– It therefore makes sense to reduce the cycle-time of developing a story – to allow for maximum, and fastest, feedback.
– Prioritizing stories can, and should, run in parallel, at all times.


Showcasing

– Showing progress to stake-holders is important.
– Work-in-progress should be showcased to them whenever significant features are implemented.
– This implies a non-deterministic schedule. Pre-planned showcase schedules, therefore, are not optimal.


Breaks

– A hard-working team deserves to take breaks and have fun.
– This, again, should be on an on-demand schedule – especially after a burst of extra-hard work or overtime.
– This also implies a non-deterministic schedule.
– This holds good for things like retrospectives as well.


Summing up

– Prioritization is an on-going, parallel activity that runs throughout the duration of the project.
– Analysis ought to be done just-in-time.
– Estimation adds little direct value to implementation.
– Tasking ought to be done just-in-time.
– An iteration is an artificial construct, and the shorter the better.
– Showcasing work and taking breaks are both important; they are non-deterministic activities, and shoe-horning them into a ‘regular’ schedule is sub-optimal.


The proposal

– Reduce iteration lengths to zero. Do away with the artificial construct altogether.
– Ensure business analysts work with product-owners and stake-holders to maintain a constantly prioritized list of user-stories.
– Ensure maximum effectiveness of the development team by allowing business analysis, tasking, and development to occur in a just-in-time manner.
– Showcase the product, conduct retrospectives, and take breaks whenever needed!

Final pitch

Hopefully, I’ve been a little more coherent in my thinking here than I normally am during my… well, incoherent rants. Ultimately, I want to remove every part of a process that adds no value by itself. Any opposition to this proposal would have to be in terms of what value iterations provide that this process cannot.

Oh, and also – this is a work in progress 🙂

Extra credit

– Do away with re-estimation altogether.

Coming soon

– What requirements should be satisfied in order for a team to be successful at iteration-less agile?
– What impact does this have on a client-vendor relationship: project management, expectations-management, risk-management and so on?
– Does iteration-less agile make life difficult for development teams that like to have short-term targets? E.g.- ‘200 points this week’?
– Possible disadvantages of saying no to iterations (everyone else is doing it!).
– Also – applying more lean principles to traditional Agile processes.

** Accurate estimate: this phrase is quite an oxymoron.

Estimation – a necessary evil

Recent conversations with some co-workers have prompted me to realize that both sides are equally right and equally wrong about the value that estimation provides.

Developers: Yes, it is what it is. Putting a number on a set of features will not change how long it will take to implement, test, and release. I think people understand that, and if they don’t, then you should try to explain it to them. However, you do have to realize that without any idea of how large an effort might be, any plan would be chaotic and have no basis whatsoever. How else can one plan for the right budget, the right resources, or the right expectations to set with, say, users and marketing?

Asking for estimates for these purposes – as long as managers understand that they are estimates – is only fair.

Project managers: The other side of the coin. Yes, estimation is important for the reasons outlined above. But, and I’m going to repeat myself – estimates do not (and can never) change the inherent amount of work that something entails. Estimates are only useful for high-level planning purposes when used with a list of risks and assumptions, and an understanding of queuing theory (which will help derive the right buffer sizes).

There is no substitute for actual velocity – that is the only thing that is real. Also, ‘accurate estimate’ is an oxymoron.

So there.

Measuring actuals considered harmful

I don’t quite know what it is, but some project managers really like to measure ‘actuals’. They say it helps them plan better, and that it helps them predict the end-date better. Again, thanks to my love of the dramatic, the title of this article is a tad more severe than what I really mean. In this post, I only make the point that many project managers measure unnecessary things, and often the wrong things – and understanding the what, the how, and most importantly the why, of metrics can make a world of difference to the overall success of a project.

People who advocate ‘measuring actuals’ often mandate their developers report how much actual time they spend working on a user story. So, let’s say that a particular card was estimated at 100 points of work. If the developer pair spends 2 hours working on unexpected build-failures, 1 hour on a staff-meeting and another couple of hours fixing bugs on a story from the previous iteration, then they must report a total of 5 hours less than the total time they might otherwise claim for the card. They must also, of course, report the 5 hours separately.

Often, developers are required to update these numbers on a daily basis. Sometimes even on a per-task basis. (Ouch).

These PMs then subtract the total amount of actual time the team worked from the total available time (capacity) to determine what is sometimes called ‘drag’. The ratio of drag to capacity is their ‘drag factor’, and they then use it to plan the remaining work and predict the end date. I believe this practice is less than optimal for several reasons.
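To make that arithmetic concrete, here is a minimal sketch of the ‘drag factor’ bookkeeping described above. All numbers are hypothetical, chosen only to illustrate the calculation:

```python
# Hypothetical iteration: 5 developer pairs, 10 working days, 8 hours/day.
capacity_hours = 5 * 10 * 8            # total available time: 400 hours

# Hours the developers actually logged against story cards.
reported_story_hours = 280

drag_hours = capacity_hours - reported_story_hours    # 120 hours of 'drag'
drag_factor = drag_hours / capacity_hours             # 0.3, i.e. 30%

# The PM then discounts the next iteration's capacity by the drag factor
# to decide how much work to plan.
plannable_hours = capacity_hours * (1 - drag_factor)  # back to 280 hours

print(drag_factor, plannable_hours)
```

Note that all the painful per-developer, per-day reporting exists only to produce that one ratio – which, as argued below, historical velocity gives you for free.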

The first reason is that it is painful to report time like this. Ask any developer. It distracts from writing code and breaks flow. And if part of the process doesn’t directly add value to the software being built, it should be a candidate for streamlining. It also sends a signal to the team that tracking/accounting/number-crunching is apparently more important (or at least as important) to the management as the value of the software being built. A related reason is that PMs and HR in certain organizations use these numbers during performance reviews. This, of course, makes the measurement completely useless (because using the numbers in this manner alters behavior too much for the measurement to be accurate anymore).

Another reason I don’t like calculating ‘drag’ and using it to plan future work (and predict the end-date) is that it simply gives an incorrect answer. This is because a truly agile team that is doing the right things will see an increase in velocity, iteration after iteration. Using drag to project plans, therefore, becomes less and less accurate – which leaves even less value in measuring and calculating it.

Now, assuming that the end goal is better estimation of the end-date and a better idea of current rate of development, measuring actuals in this manner is not the best way to achieve it. That’s simply because there is a much easier and more efficient method – it’s called velocity.

It’s called velocity for a reason – it helps answer the question ‘are we there yet?’ Take the analogy of driving a car from city A to city B. Since the drive is long (Google Maps estimates about 12 hours of continuous driving), you make several stops along the way. Making the stops is natural, so when asked how long you might take to get to city B, your answer will usually include those unproductive stops; you might say – about 15 hours. You don’t say – well, if I stick to the speed limit, I’ll be driving for 12 hours, and I’ll also take a one-and-a-half-hour break for lunch and to recoup a little, and two or three other half-hour breaks. All that matters is how much total time you take to get to city B.

Software development also has a convenient metric like this – and as mentioned above, it’s also called velocity. It is the total amount of work that gets done per unit of time (usually an iteration). This already includes things like meetings, lunches, bathroom-breaks, training, filling out review forms and so on. The total time spent, of course, is a matter of counting the number of developer bodies that were in the team-room for that duration, and multiplying by the number of days in question. Done.
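As a sketch (again with made-up numbers), projecting from velocity needs nothing more than the per-iteration totals – no per-hour ‘actuals’ at all:

```python
import math

# Hypothetical history: total points completed in the last four iterations.
# These totals already absorb meetings, build failures, breaks, and so on,
# so no separate 'actuals' bookkeeping is required.
completed_points = [38, 42, 45, 47]

velocity = sum(completed_points) / len(completed_points)  # 43.0 points/iteration

# Remaining scope, in the same units.
remaining_points = 215

iterations_left = math.ceil(remaining_points / velocity)  # 5 iterations

print(velocity, iterations_left)
```

A rising trend in those per-iteration totals is also immediately visible here, which is exactly what a drag-based projection fails to account for.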

Now – a confession. I do care about ‘drag’. I care about it because by reducing it, the project can move faster. This, however, is what the PM or Scrum-Master should be watching out for during the day, every day. If developers (or BAs or QAs) are being pulled out of the team-room into meetings that do not add value directly to the project, then the PM should work the system to get these meetings cancelled or, at least, get the concerned folks out of them. If builds are constantly failing, the PM should sell to management all the benefits that the team will get from an improved build system. Letting the team refactor ugly parts of the code follows the same reasoning – and sometimes these refactorings might turn into rewriting parts of the system. It is the PM’s job to do the cost-benefit analysis with the help of the technical folks, and to get permission from management to do the right thing. These things ought to be the goal – not telling management that the team only got 1200 points done because of a 30% drag.

So – the takeaway? Don’t measure actuals the way some people do. Use a ‘drag’ of 0 percent for your iteration planning. Use historical team velocity to estimate team capacity. Be vigilant about the time your team spends on non-project-related things, and fight for them to reduce that time. Enable them to optimize themselves (refactoring, investment in fixing annoyances, better software tools, outings), and watch their velocity increase along with quality. Your job becomes easier, too!