One of the basic tenets of lean software development is that you try to delay decisions and commitments. This allows you to stay as flexible as possible because you keep the greatest number of options open for as long as possible – that is, until you actually make a decision that narrows those options down.
However, this becomes a cause for concern when planning and running a non-trivial and/or large project with multiple teams working in parallel. A big part of ensuring maximum throughput in such a scenario is making sure that dependencies between teams are not missed and that they are tackled in the correct order. This ultimately keeps blockages to a minimum.
To achieve this, most project managers step up the planning. The idea is to analyze a lot of the upcoming work, see what dependencies might lurk in it, and ensure that each team plans to work on them before they block another team. This approach – planning as the answer – is definitely useful. It is a tad misleading, though, because it gives the illusion of having things under control. The trouble is that software development is extremely susceptible to even small changes. And, as anyone who has ever written software knows, changes happen. Quite often. Plans, as they say, are mildly useful at best. Instead, it is the planning itself that holds the value.
What I’m saying, then, is that in my experience the effort spent on planning never seems to a) cover everything, or b) survive the changes that happen along the way, which make the whole exercise decay in value exponentially. This is not to say that planning is useless, of course – see above.
Instead, I believe a better approach might be to do a basic amount of planning so that folks know what’s coming up in terms of work. This ought to be followed by teams planning for only about, say, 70% of their total capacity. The remaining 30% can be used for one of two things: a) satisfying dependencies that other teams discover, and b) extra work, if the former doesn’t take up all of the remaining time. This way, teams can be more agile, and actually complete work as they delve into it and discover issues. It also lessens the need for deep and detailed analysis, especially around integration nodes – which (again) is inventory with a short half-life.
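To make the arithmetic concrete, here’s a minimal sketch (in Python) of how a team might carve up its sprint capacity along these lines. The 70/30 split, the point values, and the function name are all hypothetical and purely illustrative – the real ratio is whatever your teams find works for them.

```python
# Hypothetical sketch of the capacity split described above.
PLANNED_RATIO = 0.7  # share of capacity committed to planned work up front

def plan_sprint(total_capacity_points: float) -> dict:
    """Split a team's capacity into planned work and a buffer held back
    for dependencies that other teams discover mid-sprint."""
    planned = total_capacity_points * PLANNED_RATIO
    dependency_buffer = total_capacity_points - planned
    return {"planned": planned, "dependency_buffer": dependency_buffer}

if __name__ == "__main__":
    # e.g. a team with 40 points plans 28 and holds 12 in reserve
    print(plan_sprint(40))  # {'planned': 28.0, 'dependency_buffer': 12.0}
```

The point of the buffer isn’t the exact number; it’s that the slack for just-in-time dependency work is budgeted explicitly rather than squeezed out of overtime.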
One other benefit of going this route is that it makes explicit (and OK!) the phenomenon of discovering dependencies as teams work on their stuff, and the idea of fulfilling orders (functionality requests from dependent teams) in a just-in-time manner with analysis as fresh as it possibly can be.
What do you think?