On lean software development: Just-in-time vs. dependency management

One of the basic tenets of lean software development is that you try to delay decisions and commitments. This allows you to stay as flexible as possible because you keep the greatest number of options open for as long as possible – that is, until you actually make a decision that narrows those options down.

However, this becomes a cause for concern when planning and running a non-trivial and/or large project with multiple teams working in parallel. A big part of ensuring maximum throughput in such a scenario is making sure that dependencies between teams are not missed and that they are tackled in the right order. This ultimately ensures that blockages are held to a minimum.

To achieve this, many project managers step up the planning. The idea is to analyze a lot of the upcoming work, see what dependencies might lurk in it, and ensure that each team plans to work on them before they block another team. This approach, that of planning as the answer, is definitely useful. It is a tad misleading, though, because it gives the illusion of having things under control. The trouble is that software development is extremely susceptible to even small changes. And, as anyone who has ever written software knows, changes happen. Quite often. Plans, as they say, are mildly useful at best. Instead, it is the planning itself that holds the value.

What I’m saying then, is just that in my experience, the effort spent on planning never seems to a) cover everything, and b) survive the changes that happen along the way, which makes the whole exercise decay in value exponentially. This is not to say that planning is useless, of course – see above.

Instead, I believe, a better approach might be to do a basic amount of planning so that folks know what’s coming up in terms of work. This ought to be followed by teams planning for only about, say, 70% of their total capacity. The remaining 30% can be used for one of two things: a) satisfying dependencies that other teams discover, and b) extra work if the former doesn’t take up all the remaining time. This way, teams can be more agile, and actually complete work as they delve into it and discover issues. It also lessens the need for deep and detailed analysis, especially around integration points – which (again) is inventory with a short half-life.

One other benefit of going this route is that it makes explicit (and OK!) the phenomenon of discovering dependencies as teams work on their stuff, and the idea of fulfilling orders (functionality requests from dependent teams) in a just-in-time manner with analysis as fresh as it possibly can be.

What do you think?

The software development process and chaos theory

Over the weekend, I was talking to a non-software friend, and he asked what the big deal was about this whole software “engineering” thing. He’d been working in the manufacturing industry (he makes iron-ore furnaces) for many years, and said they’d solved the variability problem. They could make the furnaces within really small tolerances, he could predict how long each would take, and everything they made would pass the quality levels they’d set up.

The fact of the matter was, as I explained to him, that it is impossible to control the effect of variability in the software process. Sure, Agile methods help with many things – setting up a constant heartbeat of 2-4 week iterations, providing a constant supply of jobs (stories) to be worked on, ensuring constraints are identified and optimized, etc.

However, even the smallest of changes can ripple into a totally different outcome. It’s like the whole thing about the butterfly beating its wings over the east coast of the United States – and causing a hurricane in Japan. This is most evident if you try the thought experiment of having the same team build the same software twice in a row. The end result will be fairly different in each case – and certainly, the path taken will be very different.
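This sensitivity to tiny initial differences is easier to feel than to argue, so here’s a toy sketch of it (my own illustration, nothing to do with software per se): the logistic map, the standard textbook example of chaos, written in Scheme. Two starting points that differ by one part in ten million end up nowhere near each other after a few dozen iterations.

```scheme
; The logistic map: x' = r * x * (1 - x). For r = 3.9 it is chaotic.
(define (logistic x) (* 3.9 x (- 1 x)))

; Apply a function f to x, n times over.
(define (iterate f x n)
  (if (= n 0)
      x
      (iterate f (f x) (- n 1))))

; Two nearly identical starting conditions...
(display (iterate logistic 0.2 50)) (newline)
(display (iterate logistic 0.2000001 50)) (newline)
; ...produce trajectories that have completely diverged by step 50.
```

The amplification of the initial one-in-ten-million difference is exponential, which is exactly the “butterfly” behavior – and, I’d argue, a decent metaphor for a developer being out sick on Tuesday instead of Thursday.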

Small variations – say, some critical resource being on vacation on different days in each run – will cause changes in the schedule and dependencies that could snowball into more significant changes in the overall project. Certainly, a different design (in the code) would respond differently to the various requirements as they came down the pipe. Different pairs of developers working on the same stories would produce different outcomes – maybe even different bugs. Heck, even the same developer coding the same thing twice would do it differently (right, guys?)

Given that these types of changes are arbitrary and therefore bound to happen, it is unreasonable to expect that software can be created (and the process controlled) to produce completely predictable results.

Just something to think about when trying to understand software development! And yes, I know it’s called complexity theory now, but chaos still sounds cooler.

More on Lisp syntax, and language extensions

Following my recent post on the topic, I thought of one more thing that the syntax of Lisp allows you to do. Because the language is homoiconic, and because code manipulation is so simple (it’s all lists), layering on “language extensions” becomes possible. For example, if Betty Programmer realizes that OO is a great way to design and write code, but that Lisp by itself doesn’t provide an OO facility (there are no “class” constructs, no inheritance, etc.) – she doesn’t need to despair.

She can write code to add an OOP system to the language. Yes, this means Lisp really blurs the distinction between the language designer and the programmer. In other words, while it’s fairly obvious that Lisp is very well suited to writing DSLs, it is also possible to fundamentally extend the language itself – by adding an OO system, or pattern-matching, or logic-programming (à la Prolog).
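To give the tiniest possible taste of what I mean (my own toy illustration – real Lisp object systems like CLOS are vastly richer), you can fake an “object” using nothing but a closure that dispatches on messages. State lives in the closed-over environment, and “methods” are just branches of the dispatcher:

```scheme
; A toy "object": a closure whose local state (count) is private,
; and which responds to messages by name.
(define (make-counter)
  (let ((count 0))
    (lambda (message)
      (cond ((eq? message 'increment)
             (set! count (+ count 1))
             count)
            ((eq? message 'value) count)
            (else (error "unknown message" message))))))

(define c (make-counter))
(c 'increment)  ; => 1
(c 'increment)  ; => 2
(c 'value)      ; => 2
```

From here, it’s “just” more code to add classes, inheritance, and the rest – which is exactly the designer/programmer blurring the post is about.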

Now, obviously, I’m not proficient enough yet to do anything of this sort. But, as I said before, it is my intention to learn 🙂

Lisp. A language where being meta is something worth thinking about.

Story Points vs. “Real” Hours – The Advantages

This from another recent lunch conversation –

1. Story points are a pure measure of size and complexity
2. Story points are relative (say, with respect to the simplest story) and so have a much longer shelf-life
3. Story points are usually independent of who gives the estimate (as in, an experienced developer and an apprentice can usually agree on something like complexity fairly quickly)
4. Story points avoid the need for discussions like “what are *ideal* hours, really?” or “My ideal hours are different from your ideal hours, stupid.” These add no value.
5. Story points don’t influence behavior (e.g. Parkinson’s Law)
6. Story points are easier to work with – especially when product owners start to wonder why “3 ideal days take a week…”
7. Story points are more fun – especially when they’re in units like gummy-bears, polar bears, or other endangered species.

On a slightly different note –

When you use a geometric series for your story-point scale (say 10, 20, 40, 80, 160), as opposed to, say, the Fibonacci sequence (1, 2, 3, 5, 8, 13, or multiples thereof), it is a lot easier for your scale to stay closed under addition and subtraction. In other words, with a geometric scale, a product owner can say “Hmm… I think this 40-point story is not ready to be played yet”, and you can respond with “How about we swap it with these two 20-pointers?” This can be a bit less intuitive when dealing with the Fibonacci scale. All IMHO.

Lisp syntax, and when code is data

Like I said earlier, my friend Ravi introduced me to Lisp several years ago, but it has taken me many years to really want to learn it properly. I’ll write about my reasons in another post. In any event, at the beginning of this year, I started to pick it up again, promising myself that I’d be serious. This time. So far so good.

I think I’ve started to grok one of the core ideas of Lisp. I had always read that the syntax of Lisp was one of its strengths. And I had always struggled with that idea, knowing it was important, yet was quite unable to really put my finger on it. I think I’m closer to it today.

If you had to create a programming language to write programs that wrote programs (as in, say, DSLs) – what design choices would you make?

For one thing, you’d have to be able to generate and manipulate (walk parse-trees, compare and transform nodes etc.) code as though it were just another data-structure. Right? OK, so the code that was being generated would look like and behave like data.

You would then create an EVAL function that could run the generated code. Maybe your generated code would in turn produce generated code, so, to keep things easy and simple, your language’s syntax would be the same as that of the code it generated. In other words, you’d end up with a homoiconic programming language. Finally, you would bootstrap your language processor and arrive at your final metacircular evaluator.

To recap, this language would have syntax that looked and behaved like data, and because of that it could generate and manipulate that data – which itself could be code. What would this data structure look like? One obvious choice is a tree (because of parse-trees). If you think about it, XML is just a tree. But it’s kludgey. What we want is something like XML but without all the cruft. For example –


<program>
  <function name="add_to_stock">
    <param name="counter"/>
    <call_function name="increment">
      <argument value="counter"/>
    </call_function>
  </function>

  <function name="remove_from_stock">
    <param name="item"/>
    <call_function name="decrement_from_stock_file">
      <argument value="item"/>
    </call_function>
  </function>
</program>
The syntax is truly disgusting, but useful – especially if you need to programmatically generate it. Let’s now try to make it easier for humans, too. I’m going to remove the ‘program’ tag, because all this stuff is code. I’m then going to change from XML tags to simple ‘(‘ and ‘)’ without the tag names, and make an assumption: the first word that appears is always a function call – except for define, which I’ll use to denote a function definition. I’ll also lose the XML attribute names, assuming that the words that follow a function name are always its parameters (unless one is a code block itself – which would get evaluated first). So, we’re left with –

(define (add_to_stock counter)
    (increment counter))

(define (remove_from_stock item)
    (decrement_from_stock_file item))

Where does this leave us?

It’s the exact same structure as the XML, just with a modified syntax and a few rules thrown in. Importantly, it’s still as easy to generate as the XML was. It’s just a list of lists of words. As in, a unit of code in this format always starts and ends with a parenthesis, enclosing either zero or more symbols, or other lists.

In fact, a language that was good at list processing and had an eval function would probably do a really good job with this stuff!
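To make that concrete, here is the idea in Lisp itself (a small sketch assuming an R5RS-style Scheme, where eval takes an environment argument). A quoted list is simultaneously ordinary data and runnable code:

```scheme
; A piece of code, held as plain data: just a list of symbols and numbers.
(define code '(+ 1 2 3))

; We can inspect and manipulate it like any other list...
(car code)                         ; => the symbol +
(define doubled (list '* 2 code))  ; builds the list (* 2 (+ 1 2 3))

; ...and then hand either list to eval to run it.
(eval code (interaction-environment))     ; => 6
(eval doubled (interaction-environment))  ; => 12
```

No parser, no parse-tree API – the list operations car, cdr, and cons are the parse-tree API. That is the whole trick.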

Advice to new project managers

I was having lunch with my scrum-master colleagues today, and one of them – Eric Plue – asked, “If you had to list four of the top things that you’d advise a new PM to do, what would they be?” It was a lively sort of discussion, and here are mine –

1. Risk management
2. Manage expectations
3. Apply lean thinking
4. Inspect and Adapt

What are yours? Remember, no thinking about it for hours, just the top four on your mind!

Multiple stake-holders and Agile

There are many times when a team ends up having to deal with multiple simultaneous stake-holders, or at least with several external folks who have an interest in what the team does. As they all try to direct and prioritize the team’s work, often in somewhat contradictory ways, the team finds itself thrashing – running down one path of execution one day and another the next. This makes for a sub-optimal situation and leaves team-members feeling unsatisfied, as they get almost nothing done.

How does a project-manager or scrum-master handle this? After all, while the scenario described can clearly distress a team, it is quite stressful for the scrum-master as well. I’ve seen many people try to find a “real” solution by getting all the stake-holders in a room together and trying to get them to come to some form of consensus. This is definitely worth pursuing, but in many large organizations it can take a long time. Eric Anderson, a colleague, and I were conducting a retrospective the other day when he made a comment about how one can use a basic aspect of the Agile process to help clarify the direction for such a team.

While each stake-holder feels that his or her agenda is as important as anyone else’s, quite often, everyone involved is aware of the overall goals for the team. When faced with the clear consequences of picking one priority over another, they can make good decisions. In many cases, the issue is not so much one of conflicting goals as it is of not having a clear picture of the outcomes of the different prioritizations. Agile processes excel at elaborating just such scenarios. Every time an alternative presents itself (say, in the form of a stake-holder changing direction or priorities mid-sprint), the project-manager can juggle the sprint and product backlogs (or master-story-lists) to show how that change might affect the implementation of other features downstream. When things become as clear as particular areas of functionality being pushed out to particular (approximate) dates, or even outside acceptable timelines, it becomes a lot easier to reconsider and plan accordingly. It makes it easier to think in terms of the project and the team instead of particular agendas or personal beliefs about priorities.

So, the take-away is: one doesn’t need new tools or techniques to resolve problems that arise from multiple customers or stake-holders or interested parties. Simply taking each plan (based on each person’s idea of priority) and demonstrating what it would do to the overall project deliverables is often enough to get everyone onto a similar page. Of course, getting the various stake-holders to work as a team and funnel requirements and priorities to the team as one entity is still the ideal situation to be in. Demonstrating the projections described above may help get the right conversations started.