> but now you have a function call per renormalization and that has a cost that may be unacceptable.
I would go for the function, and pass along Knuth's advice about premature optimization. If you're writing at such a low level that function calls actually aren't acceptable, go with a comment "// renormalize vector." Your instinct should be the function though. I bet there is more than one vector normalization going on in this hypothetical codebase, and that line looks pretty typo-prone.
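To make the trade-off concrete, here is a minimal sketch of what that function might look like (the `Vec3` type and `normalize` name are hypothetical, since the original codebase is too). With a small function like this, marking it `inline` or just relying on the optimizer usually removes the call overhead the parent comment worries about:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 {
    double x, y, z;
};

// Renormalize v to unit length. A modern compiler will almost
// certainly inline this, so the "cost of a function call" is
// typically zero in optimized builds.
inline void normalize(Vec3 &v) {
    double len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    assert(len > 0.0 && "cannot normalize a zero-length vector");
    v.x /= len;
    v.y /= len;
    v.z /= len;
}
```

The point stands either way: one well-named function beats repeating the same three divisions (and the same potential typo) at every call site.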
> I disable the flash on every piece of camera equipment I own. In my book, there is never a scenario where it is okay to use a flash, if it is too dark to take the photo, then don’t take it.
I think the fact that you (and I) don't trust current camera flashes to make photos worth keeping is actually an argument that it's a problem worth fixing. It remains to be seen how successful the 5S flash will be, but I think it's cool they're trying. I'm also glad Apple made the sensor elements larger rather than cramming in more megapixels. Normally that's a concern of geeky DSLR consumers, while point-and-shooters suffer with noisy low-light performance and more pixels than they need.
I welcome improvements in the iPhone flash, because there are cases in which I'm forced to use it. But the biggest problem is also the hardest to solve: the flash is too close to the lens. A flash right next to the lens will always look bad, no matter what you do. Just moving it a few inches away makes a big difference, and being able to pivot and bounce it is even better. Unfortunately, all of those things are fairly incompatible with the need to make phones that fit easily in people's pockets.
Moving it a few inches away isn't inherently impossible, since you could put the flash at the other end of the phone from the lens. At the minimum it'd be a lot more awkward to use, though.
The coolest use I've seen for the iPhone's flash is the Azumio heart rate app¹. It uses the flash to illuminate your finger, which you place over the lens of the camera. And then they do some fantastic image processing (presumably comparing shades of red) to detect your pulse.
I recently read Software Estimation (Steve McConnell, author of Code Complete). My main takeaway was to distrust developer intuition and instead build estimates from data. One approach is to look at the duration of past projects of similar size. Another is to count the number of distinct "features" or "components" and build an estimate from those counts. Both still require some qualitative analysis, and that will be the weak point of any estimate. The goal is to minimize inputs from developer intuition and lean on quantitative analysis as much as possible.
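The count-and-calibrate idea can be sketched in a few lines. This is my own illustration, not a formula from the book: scale the best and worst per-feature durations observed in past projects by the new project's feature count, which naturally yields a range rather than a single number:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Range {
    double lowWeeks, highWeeks;
};

// Data-driven estimate: rather than asking a developer to guess one
// number, multiply the feature count by the fastest and slowest
// per-feature delivery times seen historically. Assumes the history
// vector is non-empty.
Range estimateWeeks(const std::vector<double> &pastWeeksPerFeature,
                    std::size_t featureCount) {
    auto [lo, hi] = std::minmax_element(pastWeeksPerFeature.begin(),
                                        pastWeeksPerFeature.end());
    return {*lo * featureCount, *hi * featureCount};
}
```

Even this crude version has the right shape: the inputs are historical measurements plus a count, and the output is a range whose width honestly reflects the spread in the data.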
My main takeaway from McConnell's book has been not to give an estimate as a single point number (say, "it will be done in 5 weeks") but rather as a range (say, "5-8 weeks"), because a single number gives the impression that it is the target date when it is not. Unfortunately, most of the time all I get is blank stares when I give estimates as a range, and the question "why don't you just give me the time it takes to finish?" :( I wish more of the people who do project management would read at least the first two parts of McConnell's book on software estimation.
I ran into the same problem, and eventually the only workable solution was to use ranges within the software development group, but to communicate only the high end of the range to the project management group and executive team. Most of the time our projects came in "under budget" this way, which seems self-serving, and sometimes projects we thought we could complete on time were shelved as too expensive. But whenever we tried to give those groups better information (e.g. the range), they always went straight for the low end and gave that out to marketing and other groups as a committed deadline date.
I don't think Steve McConnell was suggesting that developer intuition was not relevant. In some of the formulas he suggests for estimation he includes a factor for how accurate the team was in estimating previous projects. Ultimately each developer will need to estimate individual features or other small units.
The other big take away from that book is that estimates should not be a single number but rather a range. If the client insists on a fixed price bid then you start with the higher end of your estimated range and add in something for the risk of a fixed price bid.
One important lesson to take from McConnell's book is that you should not rely on a single sort of estimation. Serious estimators will estimate in distinct ways and then compare the outcomes. If they are widely divergent, it's time to investigate why.
> A scientist isn't committed to believing in natural selection in the same way a biblical literalist is committed to rejecting it. All he's committed to is following the evidence wherever it leads.
At the risk of responding out of identity, but also with the hope of contributing constructively despite having some identity at stake:
A literal interpretation of the bible doesn't preclude accepting natural selection as an ongoing, observable process. It may reject that process as the explanation of our species' origins. Those two are often conflated, contributing to the identity-driven, emotion-filled discussion that PG describes.
Regarding the main premise of the article, I think it's possible to have rational, logical discussions about topics in which identity is involved. They are more challenging than, say, discussing JavaScript. But I don't think those discussions work well on the internet. They require trust (that both parties will be heard in good faith) and a careful check of emotions, motivations, and reasoning. Those are even more challenging without high-bandwidth communication aids like facial expressions and tone of voice.
It's quite possible to see flamewars when discussing JavaScript, e.g. Ember vs. Angular, OO style vs. functional, etc. Or the classic Emacs vs. Vim. So I don't think argumentativeness is exclusive to politics and religion by any means.
I think the problem is that software / algorithm patents have broader implications and abuses than patents for chemical formulas. They are also being granted for topics that many would consider obvious to those active in the field. The issue isn't protecting valid and specific invention from copying, it's the ease with which software / algorithm patents can be used to sue / bully others with legitimate (not copied) creative inventions. I think the ephemeral nature of software makes it inevitable that a patent eventually gets approved by an examiner if resubmitted enough times. A bogus chemical patent is probably easier to prove bogus.