
HN'ers may get a good laugh at these, taken from Yann LeCun's page. LeCun has made a tradition of posting Hinton jokes in the vein of Chuck Norris facts (or, more appropriately, Doug McIlroy ones).

A few will recall that neural networks all but died out in the US after Minsky and Papert's damning book, Perceptrons. Hinton helped give us backpropagation, one of the foundational pillars of feed-forward neural network training. With his new thrust on what are called "deep belief networks", he is challenging his own early seminal contribution to the field. It is not often that you see researchers throw away such huge swathes of their own work and start over on the same problem. Unless you are Niklaus Wirth, of course.

Some background is needed to get the inside jokes, but I have tried to keep that requirement to a minimum.

    Geoff Hinton doesn't need to make hidden units. They hide
    by themselves when he approaches.

    Geoff Hinton discovered how the brain really works.
    Once a year for the last 25 years.

    Markov random fields think Geoff Hinton is intractable.

    Geoff Hinton can make you regret without bounds.

    Geoff Hinton doesn't need support vectors. He can
    support high-dimensional hyperplanes with his pinky.

    All kernels that ever dared approaching Geoff Hinton woke up convolved.

    The only kernel Geoff Hinton has ever used is a kernel of truth.

    After an encounter with Geoff Hinton, support vectors become unhinged.

    Geoff Hinton's generalizations are boundless.

    Geoff Hinton goes directly to third Bayes.

Links: http://yann.lecun.com/ex/fun/index.html

http://en.wikipedia.org/wiki/Backpropagation

http://en.wikipedia.org/wiki/Perceptrons_%28book%29

http://en.wikipedia.org/wiki/Niklaus_Wirth

Re: Downvotes. I seem to have touched a nerve. Yes, I agree humor is frowned upon here, and I largely go along with that, but when it's not humor for humor's own sake I sometimes make an exception. Not everyone will know who Hinton is, but they may gauge from these fun anecdotes that he is someone important. Of course you are free to like it or dislike it; no hard feelings either way.



I like the quotes, however:

> With his new thrust on whats called "deep belief networks" he is challenging his own early seminal contribution in the field

I don't know if I agree with that; he still uses backprop. Backprop has long been known to have problems when you scale to millions of connections, and his work on RBMs/DBNs is really quite old. What was novel more recently was showing that the contrastive divergence step need only be performed once rather than 100 times, with similar performance. The networks are generally still 'fine tuned' with backprop.
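
For the curious, here is a minimal sketch of what a single contrastive divergence step (CD-1) looks like for a binary RBM. This is only an illustration of the idea (single-sample update, numpy, variable names of my own choosing), not Hinton's actual recipe or code:

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def cd1_update(W, b, c, v0, lr=0.1):
        # One CD-1 step for a binary RBM on a single visible vector v0.
        # W: (n_hidden, n_visible) weights; b: visible bias; c: hidden bias.

        # Positive phase: hidden activations driven by the data.
        h0_prob = sigmoid(W @ v0 + c)
        h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

        # One Gibbs step: reconstruct the visibles, then recompute the hiddens.
        v1_prob = sigmoid(W.T @ h0 + b)
        h1_prob = sigmoid(W @ v1_prob + c)

        # Approximate gradient: data statistics minus reconstruction statistics.
        W += lr * (np.outer(h0_prob, v0) - np.outer(h1_prob, v1_prob))
        b += lr * (v0 - v1_prob)
        c += lr * (h0_prob - h1_prob)
        return W, b, c

The point of the "once rather than 100 times" remark is that the negative statistics come from a single Gibbs step away from the data, rather than from a long chain run toward equilibrium.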

Still, the focus on generative networks (not sure if that's still the right term; it's been a while) and single-layer training is fairly recent, even if the concepts are quite old.


Mostly agreed. One difference I would highlight is that errors are not always backpropagated across all the layers. Beyond contrastive divergence, the breakthrough has been that you can get away with unsupervised learning (as with autoencoders) within the individual layers.
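
To make the layer-wise idea concrete, here is a rough sketch of greedy unsupervised pretraining with small untied-weight autoencoders, each trained only on the codes of the layer below. Everything here (sizes, learning rate, the plain squared-error autoencoder) is an assumption for illustration, not the actual DBN training procedure:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def train_autoencoder_layer(X, n_hidden, lr=0.05, epochs=20, seed=0):
        # Fit one autoencoder on X (n_samples, n_in); return the encoder (W, b).
        rng = np.random.default_rng(seed)
        n_in = X.shape[1]
        W = 0.01 * rng.standard_normal((n_hidden, n_in))   # encoder weights
        V = 0.01 * rng.standard_normal((n_in, n_hidden))   # decoder weights (untied)
        b, a = np.zeros(n_hidden), np.zeros(n_in)
        for _ in range(epochs):
            for x in X:
                h = sigmoid(W @ x + b)           # encode
                x_hat = sigmoid(V @ h + a)       # decode / reconstruct
                d_out = (x_hat - x) * x_hat * (1 - x_hat)
                d_hid = (V.T @ d_out) * h * (1 - h)
                V -= lr * np.outer(d_out, h); a -= lr * d_out
                W -= lr * np.outer(d_hid, x); b -= lr * d_hid
        return W, b

    def greedy_pretrain(X, layer_sizes):
        # Train each layer unsupervised on the codes produced by the layer below;
        # errors never flow across layer boundaries during this phase.
        stack, inputs = [], X
        for n_hidden in layer_sizes:
            W, b = train_autoencoder_layer(inputs, n_hidden)
            stack.append((W, b))
            inputs = sigmoid(inputs @ W.T + b)   # codes feed the next layer
        return stack   # a supervised backprop fine-tuning pass usually follows

For example, greedy_pretrain(X, [256, 64]) would pretrain a two-hidden-layer stack before any labels are touched; the cross-layer backprop only happens afterwards, during fine-tuning.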

As for RBMs being new: I have come to accept that if one looks hard enough, almost everything turns out to be old; only the names change!


I think these are hilarious; thanks for posting



