Mostly agreed; one difference I would like to highlight is that errors are not always backpropagated across all the layers. Besides contrastive divergence, the breakthrough has been that you can get away with unsupervised learning (e.g., with autoencoders) trained layer by layer, as in the sketch below.
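For concreteness, here is a minimal sketch (my own illustration, assuming PyTorch; the layer sizes and training loop are made up) of that greedy layer-wise scheme: each autoencoder is trained locally to reconstruct the codes of the layer below it, so no error signal ever crosses the whole stack.

```python
import torch
import torch.nn as nn

def pretrain_layer(data, in_dim, hidden_dim, epochs=5, lr=1e-3):
    """Train one encoder/decoder pair to reconstruct its own input."""
    encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
    decoder = nn.Linear(hidden_dim, in_dim)
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr
    )
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        recon = decoder(encoder(data))
        loss = loss_fn(recon, data)  # gradient stays local to this layer pair
        loss.backward()
        opt.step()
    return encoder

# Stack the layers: each new autoencoder only ever sees the codes
# produced by the (frozen) layer below it.
x = torch.rand(256, 784)          # toy data; 784 = a flattened 28x28 image
dims = [784, 256, 64]             # hypothetical layer sizes
encoders, h = [], x
for in_dim, out_dim in zip(dims[:-1], dims[1:]):
    enc = pretrain_layer(h, in_dim, out_dim)
    encoders.append(enc)
    h = enc(h).detach()           # freeze the codes before training the next layer

stacked = nn.Sequential(*encoders)  # ready for supervised fine-tuning, if desired
```

The point of the `detach()` is exactly the one above: training never backpropagates through more than one layer pair at a time.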
On the comment that RBMs are new: I have now come to accept that if one looks hard enough, almost all things are old; only the name changes!