p-values must be the most misinterpreted statistic in the history of statistics. As an example of the trouble you can get into with p-values, see the recent discussion of misreporting in the LHC experiments (a good starting point is http://normaldeviate.wordpress.com/2012/07/11/the-higgs-boso...). Every time I see p-values reported in A/B tests I get leery -- people are trying to adopt a scientific mindset, but the tools are leading them astray. (Disclosure: I work on an A/B testing product, called Myna [mynaweb.com], that takes a different approach.)
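One concrete way the tools lead people astray (a hypothetical illustration, not from the discussion above): dashboards encourage peeking at the p-value after every batch of traffic and stopping as soon as it dips below 0.05. A quick simulation of an A/A test, where both variants are identical, shows how far above 5% the false-positive rate drifts under that habit:

```python
import random

random.seed(0)

def peeks_to_significance(n_checks=20, batch=100, z_crit=1.96):
    """Run one A/A test, peeking at a z-test after every batch of users."""
    a_sum = b_sum = n = 0
    for _ in range(n_checks):
        for _ in range(batch):
            a_sum += random.random()  # both arms draw from the same distribution
            b_sum += random.random()
        n += batch
        # z-statistic for a difference of means; Uniform(0,1) has variance 1/12
        se = (2 * (1 / 12) / n) ** 0.5
        z = abs(a_sum / n - b_sum / n) / se
        if z > z_crit:
            return True  # declared a (spurious) winner and stopped early
    return False

trials = 2000
false_positives = sum(peeks_to_significance() for _ in range(trials))
print(f"false-positive rate with peeking: {false_positives / trials:.1%}")
```

With twenty peeks the rate lands well above the nominal 5%, even though there is no difference to find.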
The other point I particularly like is the discussion on mean values and robust statistics. Most analytics packages just report means, losing so much information in the process. Again I bet many bad decisions are made due to poor tools.