
> PERT 3-point estimates... It's taught in project management courses but there're no simple tools for it.

Thanks for mentioning some of the theory behind this. What is lost by doing a simple best/worst/likely time estimate of all subtasks, as compared to doing the full-blown statistical analysis? The simple approach can be achieved relatively easily with a spreadsheet. For small-ish projects, does the full-blown approach provide any significant benefit?

I work mostly on small projects, implementing new server/network infrastructure for small and medium businesses. Time estimation is always a crapshoot, and the only approach I know to combat it is to pad the estimate with generic line items like "testing" and "troubleshooting". If the statistical analysis would help create a better time estimate, I would spend time developing or contributing to an estimating tool.



> What is lost by doing a simple best/worst/likely time estimate of all subtasks, as compared to doing the full-blown statistical analysis?

I'm not sure I understand you, but I'm just going to talk at you anyway.

In the 3-point method you break the estimate down into smaller parts, then have experts assign best/likely/worst-case values to each part. These per-task estimates are then rolled up into a single CDF for the whole project.
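If it helps to see the arithmetic, here is a minimal sketch of that rollup, using the classic PERT weighting (O + 4M + P)/6 and a normal approximation for the sum. The task names and numbers are made up, and this is exactly the kind of thing a spreadsheet can do:

```python
import math

# Each task: (optimistic, most likely, pessimistic) estimate in days.
# These example values are hypothetical.
tasks = [
    (1, 2, 5),    # design
    (3, 5, 10),   # implementation
    (1, 2, 6),    # testing
]

# Classic PERT weighting: mean = (O + 4M + P) / 6, sigma = (P - O) / 6.
means  = [(o + 4 * m + p) / 6 for o, m, p in tasks]
sigmas = [(p - o) / 6 for o, m, p in tasks]

# Roll up assuming independent tasks: means add, variances add.
total_mean  = sum(means)
total_sigma = math.sqrt(sum(s * s for s in sigmas))

# Normal-approximation percentile, e.g. an 85% confidence estimate
# (z = 1.036 is roughly the 85th percentile of the standard normal).
p85 = total_mean + 1.036 * total_sigma
print(f"mean {total_mean:.1f} days, 85% confidence {p85:.1f} days")
```

The interesting output is not the mean but the high-percentile figure: quoting the 85% number instead of the mean is one way the rollup pads the estimate for you, instead of tacking on generic line items by hand.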

The thing is that the formulae used are basically made up. They give a "triangular distribution", which superficially resembles a normal distribution at low precision. But lots of real-world task durations don't resemble a normal distribution. You can hope that, through the central limit theorem, the rolled-up estimate converges toward a normal distribution ... except that humans persistently underestimate everything.

A lot of the Ops Research / PM literature is about fiddling with the formulae, or introducing adjustments based on historical data, which all helps. But by itself the 3-point method improves outcomes simply because you bother to enumerate stuff.

The other secret is that if you do it a few times, you tend to look at previous estimates and you begin to remember things that are frequently forgotten. A common cause of underestimation is leaving off common tasks. For example, software developers can often give a reasonable estimate for the core work they're doing (code and tests), but tend to forget to account for everything else that's needed before a feature can be considered done done: merging, discussion with others, integration, deployments, documentation and so on.


Thanks. So the main benefits are from simply enumerating subtasks, assigning best/likely/worst-case values, and repeating the exercise to improve subtask coverage. It sounds like the actual statistical analysis is only an incremental improvement beyond that, and probably does not add significant value to small projects.


I don't know if anyone has checked.



