You don’t need story-points either

Or an estimation-less approach to software development

I’ve had too many conversations (and overheard a few) about re-estimating stories or “re-baselining” the effort required for software projects. The latest one was about when it might be a good idea to do this re-estimation, and how, and what Mike Cohn says about it, and so on. The logic appears to be that as you gain more knowledge about the software being built, you can create better estimates and thereby have better predictability. This post talks about estimates, how they are just another form of muda (waste) and what might be a better way.

First things first

Let’s start with fundamentals again. In order to stay profitable and prosperous, a software product company must be competitive. If you do enough root-cause analysis, the only thing that can ultimately ensure this is raw speed. Hence, the only metric that matters is cycle-time and organizations must always try to reduce it.

OK, so now the issue simplifies itself – all we have to do is look for constraints, remove them, and repeat – all the while measuring cycle-time (and thereby throughput). Simple enough! And in the first few passes of doing so, I realized that if you must estimate things, you should use story points. Then I realized that estimation in general is a form of muda – and converting from story-points to real-hours at the start of each iteration is a really stupid waste of time.

Even less muda

Now, having said all that, we can move on to the point of this post. I’ve come to realize there is a better way. One which achieves the goal of faster throughput, with less process and less waste. Eliminate estimation altogether. You don’t even need story points anymore. Here’s how it works –

Take a list of requirements. Ensure they always stay current – you know, by talking with the stakeholders and users and all the other involved parties. Ensure it is always in prioritized order – according to business value. Again, not hard to do – you just need to ensure that the stakeholders are as involved with the product development as the rest of the team is.

Next, pick a duration – say a couple of weeks. If, in your business, two weeks is too short (you’re either a monopoly or you’re on your way out), then pick what makes sense. Pick up the first requirement and break it down into stories. Here’s the trick – each story should be no more than 1-3 days of work to implement. In other words, each story, on average, should be about 2 days of development time. Simple enough to do, if the developers on your team are as involved in the process of delivering value to your customers as the business is. Pick the next requirement and do the same thing; keep going until your two weeks seems full. Do a couple more if you like. Ensure that this list stays prioritized.
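The filling exercise above is simple enough to sketch in code. This is a minimal illustration only – the names (`Story`, `plan_iteration`) and the backlog contents are hypothetical, not from any real tool:

```python
from dataclasses import dataclass

@dataclass
class Story:
    title: str
    days: float = 2.0  # every story is broken down to roughly 2 days

def plan_iteration(prioritized_stories, capacity_days=10):
    """Take stories in priority order until the iteration looks full.

    capacity_days defaults to 10 working days, i.e. a two-week iteration.
    """
    planned, used = [], 0.0
    for story in prioritized_stories:
        if used + story.days > capacity_days:
            break
        planned.append(story)
        used += story.days
    return planned

# A made-up, already-prioritized backlog:
backlog = [Story("login"), Story("signup"), Story("password reset"),
           Story("profile page"), Story("avatar upload"), Story("audit log")]
iteration = plan_iteration(backlog)
# 10 days of capacity / 2-day stories -> the top 5 stories make the cut
```

The point of the sketch is that once every story is roughly the same small size, "planning" reduces to counting down the prioritized list – no per-story estimation step appears anywhere.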

Then, start developing. There is no need for estimation. Sure, some stories will not really be two days’ worth – break them down when you realize it. Sure, you need good business analysts and willing developers. You’ve hired the best, right? Next, as business functionality gets implemented to the point where releasing it makes sense, release it! Then, repeat.

What (re) estimation?

In this scenario, there really is no reason to do formal estimation at all. Sure, super-high-level sizing is probably still needed to get the project off the ground, but beyond that, there is no need to do anything other than deliver quality software, gather real-user feedback, and repeat.

Now come the questions of how to say how much work is done, or when a certain feature will be delivered. The dreaded “how much more work is left” questions. The answer in this new world is simply: 2 days × the number of stories in question. (Remember the average story size?) Many people like to track things, but measuring actuals is also muda.
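The whole remaining-work calculation fits in one line; here it is spelled out, with an illustrative backlog size (the `remaining_days` name is made up for this sketch):

```python
AVG_STORY_DAYS = 2  # the average story size imposed by the process

def remaining_days(stories_left: int, avg_days: int = AVG_STORY_DAYS) -> int:
    """Remaining work = average story size * stories not yet done."""
    return stories_left * avg_days

# 7 stories still in the backlog -> roughly 14 working days of effort
print(remaining_days(7))  # 14
```

No burn-down arithmetic, no velocity conversion – the uniform story size is the whole trick.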

The important thing is of course to not look too far ahead. In today’s business environment, the world changes at the speed of the Internet, so luckily this works to our advantage. Whenever something changes, you just re-prioritize the list of stories. This process is not only much leaner, it is also much simpler.

Let’s come back to the issue of re-estimation. People cite the fact that during the start of a project, there isn’t enough knowledge to truly estimate the high-level requirements. Then, as the project proceeds and they learn more, they seem to want to re-estimate things. I ask, why bother?

The key here is to recognize that “accurate” estimates don’t deliver any business value. It is only important that complex requirements are broken down into incremental stories and they are implemented and released quickly.

The real benefits

The fact that this saves time and effort is actually just a good side-effect. The real value comes from being able to truly stay in control of the development process. Tiny stories allow you to change them when required, pull them from the backlog when needed, or to add new ones whenever business demands it. It also lets you move faster because it’s easier to write code in small incremental chunks, testing is easier, and pushing small releases out to users is easier.

And finally, by imposing the discipline that each story be small, it ensures that people think about the requirement deeply enough before coding begins. It also forces the team to break down requirements into incremental pieces and totally avoids never-ending stories that always have a little bit left to complete.

So there

This allows the whole process to be streamlined. It allows the team to focus on just what the users care about (working software at their fingertips), and not on all the meaningless artifacts mandated by many methodology books (and indeed unenlightened management). Also, in this manner we handle the issue of estimation and re-estimation. In a zen-like fashion, we do it by not doing it.

30 thoughts on “You don’t need story-points either”

  1. Interesting concept, I can see the idea working well in practice, but our biggest challenge is that our clients expect tight estimations before we even start the work. From a budgeting perspective, estimations are very valuable to assign a value to the work being done.

    Is there a way to work with this model and still find a way to accurately estimate the value that a project provides? For consultants, this is crucial to closing the deals!

  2. Even though you don’t state it explicitly, your model still relies on estimation.

    – How do you know a story is 2-3 days worth? Unless you know approximately how big the story is (possibly relatively) and have a sense for the speed of the team (velocity).
    – How do you determine the ‘value’ of a story for prioritisation? I’ve blogged before about how a story represents the ‘benefit’ a business will get – ‘value’ is determined by comparing that benefit to the estimated cost to implement.

You’re right in some ways though. The only reason to re-estimate is to get an increasingly more accurate idea of scope (in the face of learning), which is useful if you’re trying to hit a scope target. But if you instead treat the software as a system (which supports a business function) on which development will continually be done, then you don’t need to go through this process.

    In the last case, however, there is still a challenge that I’m grappling with. How do you measure ‘throughput’ on a software project if not by ‘story points’? The throughput of a system is the amount of the system goal achieved for some given constraint time (say machine minute, or team day). For a software project, the system goal is stories implemented (and in production). We could just do stories per day, but since stories vary wildly in the amount of effort required to implement them this isn’t going to give meaningful measures of throughput. Instead I feel there needs to be some sort of relative size metric of stories that allows us to give a meaningful measure of team throughput – which is what story points are.

  3. Well, yes – it does rely on estimation in the same way that agile methods are really about discipline (despite seeming chaotic).

    This approach focuses on something different though – it is less on the idea of sizing and predictability than on trying to maximize throughput by reducing cycle-time. This is achieved in this case with small job-sizes and small batch-sizes (queueing theory). The fact that this leads to greater predictability and simplifies the process is a bonus.

    Throughput should *never* be measured in the manner you are describing, because measuring it that way leads to local optimizations. This is because throughput (by definition) should be measured *end-to-end*. Which means, the metric should be something final like market-share, or profit/cash-flow, or customer-satisfaction. Not by the number of story-points – which is a meaningless metric, by itself. After all, as a user, I don’t care what your velocity is or how good your test-coverage is… I only care about how well your solution fits my needs and if I get value out of it for my purposes. So, story-points are a bad measure of throughput.

  4. I think ‘never’ is a very strong term that is probably a massive overstatement here. The approach I describe for measuring throughput is certainly not ideal, but we have to be a little pragmatic.

    One has to draw a boundary around a system at some point. Whilst it is a nice perfectionist idea to consider market-share and the like as a measure of throughput, this is not really practical for numerous reasons including the fact that the feedback time would be unreasonably long. We must find an appropriate boundary within which to optimise, and make this boundary as wide as reasonably possible.

    You are right in that using story points as a measure will lead to optimisation of the delivery process alone. I think this is a reasonable place to start. I usually push to include deployment into this (measure throughput of story points into production), to widen that view as much as possible.

Outside of this boundary, one must rely on customer prioritisation (demand) to control the system and ensure the throughput it is creating is leading to ultimate value. This is akin to using customer demand for a car as the driver for a car manufacturing process – and the throughput is the value of that sold car minus the fully-variable costs of producing it. We don’t consider the customer root cause need within this system boundary – such as if the customer would have achieved a better outcome from buying train tickets than a car. Similarly in software delivery, we must draw a line somewhere and then show performance within that.

I should also point out that, as I said in my first response, I’m still uncertain about the use of ‘story points’ as the metric for measurement. My concern with these is mostly around the alignment of incentive – it creates an incentive to increase the number of story points given to a similarly sized story over time (which would show a false increase in throughput that isn’t representative of more real work being done). However, I feel the metric must be directly associated with the story, and measured within the boundary of the delivery system, to be reasonably effective – and story points currently seem the best of all the bad options.

Measuring the benefit received for a story (market-share, profit/loss, customer satisfaction) would be awesome, but not only do these have long feedback periods (as I mentioned), it is also near impossible to show that these are the result of a single story (or even of the software delivery process entirely).

  6. This sounds like a small part of the idea behind Feature Driven Development- by using a standard size feature, each individual feature does not have to be estimated, you can use a velocity type measure based purely on the number of features.

    However, you state the case far too broadly here. The data from software development can actually be used quite well to improve predictability for organizational coordination, measuring ROI, plan the number of developers required, etc. Have you read Steve McConnell’s book?

  7. Yes – I’m familiar with FDD. And I think getting all stories similarly sized (as small as possible) is a great idea. They’d all have a ‘story point’ of 1, and we don’t have to do as much math anymore, which totally works for me. But I think the relatively sized story point system is recognition that it’s not always reasonable or efficient to try and make every story the same size – so it’s also being a little bit pragmatic and at least giving us a way to manage that variance in size. Either way, as you said, you get a velocity type measure out.

    I’m not sure what you mean by “state the case far too broadly”. I totally agree that the data from development is really useful to the business for planning. We just have to be honest with them about how accurate those plans could possibly be.

    Haven’t read McConnell’s book. I assume you mean “software estimation” – or is there another you recommend to start with? I’m just starting on “software by numbers”, which I’m hoping will have some good suggestions.

  8. Never is indeed a very strong word. And, in my post, it certainly is most suited to the ideal world. Having said that, being “pragmatic” is a rather slippery slope… After all, CMM seemed like a great idea on paper, and function-points and lines-of-code are also used as metrics for the “lack of better ones” to often disastrous results.

    It is a sad reality that most organizations become too dysfunctional to make it possible to measure the actual benefit that IT has on the bottom-line. Most such organizations actually don’t even make software for their users really, they just create me-too products which sales then tries to sell to the CIO-types over games of golf. My post, to be quite honest, is focused on those smaller, more innovative organizations that want to undercut such competition, and want to create products that users actually want to use and enjoy doing so.

    It behooves us as software professionals, especially those of us in the consulting space, to try to change the sorry state of affairs our industry is in. We can’t throw in the towel because something is difficult to do. At the very least, if measuring real throughput of a software team is too difficult, we shouldn’t sow the seeds of sub-optimal behavior by measuring something that is merely convenient.

    Specifically to your point, including deployment into the acceptance criteria is a good start, but again, something that falls short. The software industry folk-lore is littered with examples of companies that launched fantastic products into the market, but died out… Measuring cash-flow/customer-satisfaction/market-share is the only thing we should strive for.

Let’s take your analogy of the car manufacturer – after spending tons of money on building the most efficient mass-produced car assembly line in the world, if the market only wants to buy two cars – well, then it wouldn’t matter if they could be built in the blink of an eye. And if buying train tickets would indeed leave customers better off, then the business is doomed in the long term anyway. It would be better to factor this kind of stuff into the analysis rather than saying it is “not my problem”.

    Finally, here’s an example of how to measure the effectiveness of individual features – good old A/B testing. It isn’t that complicated at all, even a tiny startup can host two versions of an application and then compare logs. This is just one way to do this – smart people can come up with others – all that is needed is understanding and inclination.
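    To make the A/B suggestion concrete, here is a minimal sketch of comparing two versions from parsed request logs. Everything here is illustrative – the `conversion_rate` helper and the log tuples are made up, and a real comparison would need far more traffic and proper significance testing:

```python
def conversion_rate(log_entries):
    """log_entries: iterable of (variant, converted) tuples parsed from logs."""
    hits = [converted for _, converted in log_entries]
    return sum(hits) / len(hits) if hits else 0.0

# Toy log: which variant each visitor saw, and whether they converted
log = [("A", True), ("A", False), ("A", False),
       ("B", True), ("B", True), ("B", False)]

rate_a = conversion_rate([e for e in log if e[0] == "A"])
rate_b = conversion_rate([e for e in log if e[0] == "B"])
# rate_a = 1/3, rate_b = 2/3 -> version B's feature looks more effective
```

    The mechanics really are this simple; the hard part, as the comment says, is the understanding and the inclination to do it.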

  9. This is a great discussion.

I think, based on your last comment, that you’re missing a small point.

    The success of the business, when leveraging or enabled by software, is dependent on 2 things:

1. The rate (throughput) at which the software can be built and put to use (internally or in the market).
    2. The suitability of that software to the business needs or to the market.

    I’m talking about using a metric to measure the improvement of the first. And yes, this will lead to the optimisation of only the first.

    But by doing so we provide a more ‘agile’ environment in which the business can make decisions, try directions, succeed or fail fast. Sure, it would be great if we only ever built the exact solution required – but we know that predicting that is near impossible. So instead we try to have a development process with high throughput so that we can respond rapidly.

    The car analogy is still appropriate – there is a system in place for producing cars that is driven by customer demand. It is assumed that the demand will always be there in some degree and hence we have a system for building cars that we monitor and improve. We measure the system by how much ‘throughput’ it can produce. If the ‘throughput’ is too low (because our system sucks or there actually is no demand) then we can at least rapidly see that, identify the cause and respond appropriately.

    Sure it would be lovely to measure the successfulness of car manufacturing on how well it enables the end users real goal of getting from A to B, but we have to draw a line somewhere.

  10. Amit,

I have to say that I agree with most of the ideas, but in some points I think they aim for too ideal a situation.

One of the most important points, in my opinion, which was also mentioned by Chris, is the utilization of estimates to prioritize business requirements. It is essential for the stakeholder to know how much a feature will cost (even if approximately) to make a decision about implementing it or not.

And measuring market-share or ROI to improve software development productivity is really hard (not to say impracticable), even for small and innovative companies. If agile is about feedback, how long does it take to have enough information for market-share feedback, even when every feature is deployed as soon as it gets done? We have to accept that someone will always be driving the business requirements in software development, and this someone might also be wrong, but that’s part of the game.

  11. Hmmm… on the face of it, the ROI question seems to justify careful estimation, and the relentless (but oxymoronic) drive towards “accurate” estimates. However, I find it hard to believe that rough estimates (high-level, experience/expertise-driven, back-of-the-envelope, or gut-driven) are not sufficient to answer the ROI question. Usually, the desired pay-off from implementing a piece of software is an order or two of magnitude larger than what it costs to build… which therefore means that even if the estimates were say off by half, it shouldn’t matter that much. At least not enough to bother going down the rabbit-hole of trend-lines, re-estimation and re-baselining, and velocity/load/drag factor calculations and so on…

    If the cost to build vs. return question is really so critical because the opportunity is approaching arbitrage-like dimensions, then perhaps it will serve the stake-holders more to take a harder look at the business model…

    Still… I guess this question is important enough in some cases… and I think aggregate stories that roll into features can be used to determine cost. So if a feature was divided into 5 stories, then (approximately) 2 days * 5 stories * team-size * average salaries for 2 days would be the starting point of the answer. Also, you could add the time to test/fix/deploy.
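    The back-of-the-envelope arithmetic above looks like this in code. The salary figure and team size are made-up assumptions purely for illustration:

```python
AVG_STORY_DAYS = 2            # the average story size from the post
stories = 5                   # a feature that broke down into 5 stories
team_size = 4                 # assumed team size
daily_cost_per_person = 500   # assumed fully-loaded daily rate, in dollars

feature_days = stories * AVG_STORY_DAYS             # 10 working days
feature_cost = feature_days * team_size * daily_cost_per_person
print(feature_cost)  # 20000
```

    Precise enough for a go/no-go conversation, and it took seconds rather than an estimation meeting.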

    Like I said above, though, I don’t think it should be that critical… especially when the process of creating software (point #1 in a comment above) was being constantly fine-tuned to reduce-cycle time by removing constraints.

  12. In an ideal world, eliminating estimation could work (and, in my opinion, and as comment #2 states, you’re subtly using estimation). But the question is how much involved will the stakeholders be (they have to be so much involved for this to work). Even in a perfectly agile project, there’s a limit to how much you want to involve your stakeholders.

    In this time & age, everyone is expected to give estimates, and will be judged based on those estimates. The trick is to manage expectations by promising low (long estimates) and delivering low or high… But then again, most clients expect tight estimates as comment #1 states.

  13. “Usually, the desired pay-off from implementing a piece of software is an order or two of magnitude larger than what it costs to build… which therefore means…”

I think that sentence has some problems. You cannot assume that every requirement (or even the majority) will provide more value than the money spent to build it. If you say that, it means that no software project can go wrong – it simply pays off every time.

Giving a specific example, I had a company before joining TW. We developed two products. Since I’m not a millionaire, you can probably infer that the ROI for most of the functionality we developed was negative. And that happens every day, because business is always changing, and the client can never be absolutely sure about everything he is ordering. And that is even worse if he does not know how much he’s paying for it.

  14. Well, obviously, I meant that the analysis a company does before deciding to implement something should show that there is more profit to be made than the cost incurred. Things don’t always turn out that way, reality overrides the best laid plans.

    That doesn’t mean that estimation with a precision of a hundred significant digits is helpful, or that it will improve the odds of making money. Given that, my suggestion was to go with high-level numbers and focus instead on rapid cycles to maximize the chances of making the product right based on feedback. Estimates don’t matter as much, if you do this part right, and in super-small chunks as described in my post above.

  15. I’m loving this discussion 🙂

    We have to admit that it’s highly unlikely that anyone will come up with a process that makes every project succeed. Estimates or re-estimates are not precise and even less a prediction about project success. But if you want to have the real visibility of how your project is delivering (transforming ideas into deployed software), I see much more value in using an approach such as this described by Amit than any other. What’s even better is that it gives you this visibility as a by-product, when it’s actually showing where your process can be improved continuously.

  16. Now we’re agreeing more than disagreeing… 🙂
I also believe that estimating with high precision is pure waste – and big waste at that. I actually wrote about it in a post some weeks ago.

    But I still think that the idea of super-small chunks is still very hard to achieve, despite agreeing that it would help. In order to achieve it, the customer commitment would have to be very high, and then you have to start to ask yourself if that is really possible.
I like to think that this is like going to a restaurant and, while waiting and chatting with your friends, being asked to go to the kitchen five times to check how your food is.

Software development is still part of a business, and has to adapt itself to provide the most benefit to this business, and not the other way around. But that doesn’t mean that there aren’t other points to improve. I just believe that super-fast releases are not the best way to approach it (at least not the first one).

  17. I disagree – it is not at all like going to a restaurant. When you go to a restaurant, you trust the chef to make you a good dish. You don’t need to dictate the recipe to him/her. However, if you have a business that needs a software solution, and especially if you sell a software product as your business, then you *must* dictate the business logic to the developers. You must be committed, not just involved.

    Further, there is hardly any discovery or evolutionary behavior in cooking something at a restaurant (the chef must follow a recipe as described in the menu). In software, the whole point of agile/lean is to embrace change and to iterate as fast as possible in order to incorporate the (required) feedback as fast as one can. The stakeholder, therefore, must be involved!

    What the stakeholder can (and should) get away from is micro-management of the development team – “is this task done yet?” or “what library are you using for this functionality?” and so on. That stuff is the job of the dev team. But the stakeholders should very much be involved in the process of prioritizing business requirements, getting the feedback from their users, and incorporating it into their world view of the domain. That’s the job of the stakeholder.

And super-fast releases make this possible to a degree that allows even small startups to run circles around their larger competitors.

  18. I agree with you… the restaurant example was just illustrative, but I still think that the cooking level of a steak is subject to exploratory testing 🙂

Now we are disagreeing over details. I haven’t accepted the whole idea yet, but that probably won’t change here. If you come to London I can buy you a pint and then we might reach the final terms 🙂


I’m having a hard time figuring out how to write a comment that doesn’t sound too much like a “flame” :). I respect the out-of-the-box opinion but feel strongly that your point is both hypocritical and inefficient. You’re saying not to estimate, but then you’re saying to make sure every story conforms to a two-day estimate. That’s even more constraining than just estimating any size story. How does a product manager know how to write a two-day story? Do you spend effort refining until it’s worth two days?

    My real question, if nothing else, is why avoid estimation? Is it killing you to estimate?

    I’m curious if you have a full understanding of story points and their benefits. Please take a refresher before you hurt someone :).

  20. Sorry I took so long to respond, but I was taking a break from doing harm (in Vegas…)

    Anyway – I wanted to respond to your flame-bait. Estimation is not ‘killing’ anyone, in the same way that in the old days of waterfall, documentation never *killed* anyone. That doesn’t make either of them a value-addition to the process of software development. When you have a bunch of average to below-average developers (or some other equally terrible constraint), maybe documentation *is* the right thing to do. Who knows?

    What I do know for sure is that given a high-performing team, and a short-iterative process – *explicit* estimation is waste. The whole reason you estimate is that you want to try and deliver *exactly* what you sign up for in the period of time you chose as your iteration length. This is the whole in-search-of-perfect-estimates thing. However, what if you over-estimated, and finished early? Wouldn’t you try to take on more and thus deliver more? What if you under-estimated? You’d have to cut scope. So why this silly pursuit of ‘perfect’ estimates? Clearly, the product here is the working software – none of the other things matter. (Or are your developers being rated on their “estimation” skills? *shudder*)

    The idea of getting all your stories down to 2-3 days in size serves several purposes – a) it eliminates the need for *explicit* estimation and lets your team be more productive by just focusing on the only thing your customers care about (the actual software), and b) it helps stakeholders and managers keep an eye on the progress, while giving them the opportunity to provide feedback – early and often, and c) homogeneous stories like this provide an automatically averaged velocity that can be used as a *reality* based prediction tool.

Who should do the job of breaking large-size features into these manageable chunks? I don’t know – the answer depends on the people on your team, their level of skill, and their depth of domain knowledge. If the project-manager (or scrum-master) can’t do this job, then they should obviously ensure that there is enough skill-set on the team for this to happen properly.

    Hope this helps!

  21. Completely agree. Agile is reverting more to waterfall every day with an over-reliance on process over people.

The team needs to find the quickest and most efficient way to deliver the maximum benefit in the shortest time (which, with a good project, might have some real business-benefit analytics to clarify team-wide goals).

    Good to have some measures so development teams can collaborate and compete, but the measures are a tool not a goal in themselves. They let great teams seek out even better ones to compete with!

  22. Interesting article and discussion.

    I personally think the foundational statement is the problem.

    “If you do enough root-cause analysis, the only thing that can ultimately ensure this is raw speed. Hence, the only metric that matters is cycle-time and organizations must always try to reduce it.”

    Raw speed is not the measure we should be using. The real measure is whether optimal business value is delivered through optimal attention to customer need and preference. Quality is a huge part of that. Yes, speed for an organization is important in terms of time to market, first to market, etc, however that’s not the whole story. Also if speed is the actual measure, then we can toss out all sorts of engineering rigor and just get crap (and I mean crap) out the door – which clearly, no one is advocating.

    You are right about having small bite sized pieces (small stories) that are easier to manage, but estimating is what tells you what is small and what is not. Estimating allows an organization to make plans, allows teams to manage their throughput, allows for growth goals, allows for comparisons over time to see improvement in efficiency, and so much more.

Another problem with not estimating is that, while it might work better for a tiny company and a small project, would a large company be able to do this? Are they going to be able to give upper management the numbers they need to make decisions (especially in a non-agile world)? Agile can give those numbers, but estimating is an engineering practice that remains necessary regardless of the project management process you’re using.

    Estimating allows you to think through the process and then gives you a measure with which to compare future work. Knowing what you can do and where you can go is part of the core of management. The good takeaway for me was the highlighting of small stories being more optimal than large ones, and for challenging us to think about what we do, allow ourselves to be uncomfortable, and then make choices instead of letting them get made on autopilot.

  23. Well… first of all when I say speed, the obvious question is speed of what? In this case, it is the speed at which you can deliver quality, value-adding software to real customers. Delivering shoddy output which requires re-work is negative speed.

About the other points you raised – you’re probably right – it is definitely very difficult to do this kind of thing in “large” organizations. Or at least those that haven’t started thinking about the inherent waste in software development methods, how to eliminate it, and about throughput and how to maximize it. There are other ways to deal with “dates” and “communication with upper management”. You just have to find creative solutions to these problems.

One team I know did something quite cool – they got a budget approved from management for a new product (using a visioning exercise and supporting market research). The management trusted them to build/release it right – so they just ran with the program – releasing internally, and building up sales/marketing based on these preview releases. They adjusted in real time, and things went unbelievably smoothly. Sure, they had to cut back on the original vision because it was way too much to build given the time/budget constraints… but they didn’t need to nail things down before starting… What they did was make decisions based on the super-fast iterations (2-3 times a week sometimes) and showcasing the preview product to prospective customers and stakeholders.

    And this was in a big (Fortune 500) company.

  24. On the whole I think the concepts are great, and we’ve found they work really well in practice, although there are a couple of things that aren’t fully covered.

    First of all, to realise “business value” you need to make a complete coherent release. Sometimes called a Minimal Marketable Feature (MMF). This release must be actually useful and beneficial to the business, otherwise there’s probably no reason to release it. (Or build it in the first place.) Identifying coherent, internally consistent releases like this isn’t always easy, and a good business analyst is invaluable in the process.

    So from a business point of view, it’s not stories that they’ll be prioritising but releases. Stories aren’t worth a dime if they’re not released. So figuring out stories that average 2 days each, while an estimation exercise in itself, probably isn’t worth that much to you – or, more specifically, to the business.

    Now that we realise that it’s releases that are important, not stories, we want to start prioritising releases. To prioritise a release, we need to figure out the cost/benefit ratio. How much value is this release going to generate vs. how much is it going to cost. Ah, you say, that involves estimation, right? Exactly!

    But we’re not just estimating the _cost_ of a release, we’re also estimating the _benefit_ of a release. And there’s little point in getting highly accurate cost estimates if we’re not going to get highly accurate value estimates – we need to be able to compare like for like. Which is partly why _cost_ estimates are often seen as waste: their accuracy almost always far exceeds the accuracy of the _value_ estimates they’re weighed against. When that’s true, precise cost estimates _are_ waste!

    With both sides of the equation, we start to think in terms of acceptable ranges of risk. A cost/benefit ratio of 1/5 is excellent; it’s a no-brainer, just get on with it. A cost/benefit ratio of 1/2 is much less excellent, and because estimates aren’t always accurate it could turn out to be 1/1.5, or even 1/1, or worse. Either you de-prioritise it, or you see if you can get more accurate estimates before you commit.

    Now we’re seeing why and where estimation accuracy might be important: high benefit ratios probably don’t require high accuracy, because even if you’re way out you’re probably still going to make money. Low benefit ratios either require more accurate estimates, or you probably shouldn’t bother at all.
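    The ranking rule described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the post: the release names, the cost/benefit numbers, and the `commit_threshold` cutoff are all made-up assumptions, with cost and benefit expressed in the same rough units so the ratio is meaningful.

    ```python
    # Hypothetical sketch of prioritising releases by benefit/cost ratio.
    # High-ratio releases are committed to even with rough estimates;
    # low-ratio ones need re-estimation or get de-prioritised.

    def prioritise(releases, commit_threshold=3.0):
        """Return releases sorted by benefit/cost ratio, highest first,
        each annotated with its ratio and a suggested action."""
        ranked = sorted(releases, key=lambda r: r["benefit"] / r["cost"],
                        reverse=True)
        for r in ranked:
            ratio = r["benefit"] / r["cost"]
            r["ratio"] = round(ratio, 2)
            # A low ratio is risky: ordinary estimation error could push it
            # below break-even, so re-estimate or drop rather than commit.
            r["action"] = ("commit" if ratio >= commit_threshold
                           else "re-estimate or drop")
        return ranked

    releases = [
        {"name": "reporting MMF", "cost": 10, "benefit": 50},  # 1/5 ratio
        {"name": "billing MMF",   "cost": 20, "benefit": 40},  # 1/2 ratio
    ]
    for r in prioritise(releases):
        print(r["name"], r["ratio"], r["action"])
    ```

    The point of the sketch is the asymmetry the comment describes: the 1/5 release stays a “commit” even if its estimates are quite wrong, while the 1/2 release trips the threshold and demands either better estimates or de-prioritisation.
    
    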


