Overview of Deep Learning


Conor Nugent

on 8 March 2013


Transcript of Overview of Deep Learning

Deep Learning
Conor Nugent

I got a new Android phone: Android Jelly Bean. Lots of interesting new features and, in particular, a massive jump in voice recognition performance.

Then I stumbled upon this Wired article: 'How Google retooled Android with help from your brain'
http://www.wired.co.uk/news/archive/2013-02/19/android-voice-commands

Vincent Vanhoucke
http://research.google.com/pubs/author37534.html

'voice error rate with the new version of Android -- known as Jelly Bean -- is about 25 percent lower than previous versions of the software'

"It kind of came as a surprise that we could do so much better by just changing the model."

'It's just one example of the way neural network algorithms are changing the way our technology works -- and the way we use it. This field of study had cooled for many years, after spending the 1980s as one of the hottest areas of research, but now it's back, with Microsoft and IBM joining Google in exploring some very real applications.'

Jeff Dean
"Jeff Dean once failed a Turing test when he correctly identified the 203rd Fibonacci number in less than a second."
MapReduce, BigTable, Spanner

What Changed?

A bunch of people were still working away on neural nets, including Hinton and his team, and they had a series of breakthroughs.
Some of their work involved a switch to a different type of network, Restricted Boltzmann Machines (RBMs), for which they had found an improved, gradient-based training algorithm called contrastive divergence.
Quite a different approach: RBMs are a generative model.
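As a rough sketch (my toy illustration, not from the talk), contrastive divergence training of a small RBM might look like the following, assuming NumPy and CD-1 (a single Gibbs sampling step for the negative phase):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary data: two repeated patterns the RBM should learn to model.
data = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)

n_visible, n_hidden = 4, 2
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

lr = 0.1
for epoch in range(200):
    v0 = data
    # Positive phase: hidden probabilities and samples given the data.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Negative phase (CD-1): one step of Gibbs sampling back to the visibles.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    p_h1 = sigmoid(p_v1 @ W + b_h)
    # Updates approximate the log-likelihood gradient:
    # <v h>_data minus <v h>_model.
    W += lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(data)
    b_v += lr * (v0 - p_v1).mean(axis=0)
    b_h += lr * (p_h0 - p_h1).mean(axis=0)

# Reconstruction error should fall as the model captures the two patterns.
recon = sigmoid(sigmoid(data @ W + b_h) @ W.T + b_v)
err = float(np.mean((data - recon) ** 2))
```

Note that CD-1 only approximates the true gradient; in practice that approximation is what made RBM training tractable.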
Improved computing power!

Rick Rashid: Microsoft Research
Microsoft Chief Research Officer Rick Rashid showed a live demonstration of Microsoft's neural-network-based voice processing software in Tianjin, China. In the demo, Rashid spoke in English and paused after each phrase. To the audience's delight, Microsoft's software simultaneously translated what he was saying and then spoke it back to the audience in Chinese. The software even adjusted its intonation to make itself sound like Rashid's voice.

Geoffrey Hinton
Really interesting Google tech talk in 2007.
Coursera: Neural Networks course!
https://www.coursera.org/course/neuralnets

RBMs, Generative Models and Deep Belief Networks

Jake Luciani
Took the course and wrote a really nice overview of some of the topics covered.


Implemented RBM & DBN

https://github.com/tjake/rbm-dbn-mnist

New York Times Article:

http://www.nytimes.com/2012/06/26/technology/in-a-big-network-of-computers-evidence-of-machine-learning.html?pagewanted=all&_r=1&

Cat detection
http://research.google.com/pubs/pub38115.html
'Building high-level features using large scale unsupervised learning', Andrew Y. Ng

Improved Predictive Drug Interactions
http://blog.kaggle.com/2012/11/01/deep-learning-how-i-did-it-merck-1st-place-interview/

Serious computing power!

'the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet. We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days'

Deep Belief Networks, a high-level overview

Yoshua Bengio
http://www.iro.umontreal.ca/~bengioy/yoshua_en/research.html
Yann LeCun
http://yann.lecun.com/

A traditional feedforward neural network:
  • Required a lot of labeled training data
  • Slow and difficult to train
  • Back-propagation doesn't always work that well
  • etc., etc.

Convolutional Neural Networks (CNNs)

Hadn't everyone gone off Neural Networks?

Google tech talk: Learning Deep Representations
Google Tech Talk 2012: Recent developments in Deep Learning

'Generative models contrast with discriminative models, in that a generative model is a full probabilistic model of all variables, whereas a discriminative model provides a model only for the target variable(s) conditional on the observed variables' -- Wikipedia
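A toy numeric illustration of that generative/discriminative distinction (my example, not from the talk), with a single binary feature and binary label:

```python
import numpy as np

# Toy binary data: one feature x in {0,1}, one label y in {0,1}.
x = np.array([0, 0, 0, 1, 1, 1, 1, 0])
y = np.array([0, 0, 0, 1, 1, 1, 0, 1])

# Generative: model the full joint P(x, y) = P(y) * P(x | y),
# then recover P(y | x) via Bayes' rule.
p_y1 = y.mean()
p_x1_given_y1 = x[y == 1].mean()
p_x1_given_y0 = x[y == 0].mean()
joint_y1 = p_y1 * p_x1_given_y1          # P(x=1, y=1)
joint_y0 = (1 - p_y1) * p_x1_given_y0    # P(x=1, y=0)
posterior = joint_y1 / (joint_y1 + joint_y0)  # P(y=1 | x=1)

# Discriminative: model only P(y | x) directly, here by counting --
# no model at all of how x itself is distributed.
direct = y[x == 1].mean()                 # P(y=1 | x=1)
```

On fully observed counts like these the two routes agree on P(y | x); the difference is that the generative model can also answer questions about x itself (e.g. generate samples), which is exactly what an RBM does.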
  • Don't use labels to train feature detectors
  • Train the feature detectors to model the structure in the underlying data, basically unsupervised learning
  • Stack them to create a hierarchy of feature detectors
  • Create some extra inputs in the topmost layer which can be used as discriminative variables
  • You can then use back-propagation to fine-tune the discriminative task if you wish
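The layer-wise recipe above might be sketched like this (a toy illustration assuming NumPy; for brevity the fine-tuning step trains only an added logistic unit on the top-level features, rather than back-propagating through every layer as a full DBN would):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(v, n_hidden, lr=0.1, epochs=100):
    """Train one RBM layer with CD-1; return weights and hidden biases."""
    W = 0.01 * rng.standard_normal((v.shape[1], n_hidden))
    b_v, b_h = np.zeros(v.shape[1]), np.zeros(n_hidden)
    for _ in range(epochs):
        p_h0 = sigmoid(v @ W + b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ W.T + b_v)   # one Gibbs step back to visibles
        p_h1 = sigmoid(p_v1 @ W + b_h)
        W += lr * (v.T @ p_h0 - p_v1.T @ p_h1) / len(v)
        b_v += lr * (v - p_v1).mean(axis=0)
        b_h += lr * (p_h0 - p_h1).mean(axis=0)
    return W, b_h

# Toy labelled data: two binary patterns, one per class.
X = np.array([[1, 1, 0, 0], [0, 0, 1, 1]] * 50, dtype=float)
y = np.array([0, 1] * 50)

# Greedy layer-wise pretraining: each layer models the structure of its
# input without labels; the previous layer's hidden activities (here the
# mean-field probabilities) feed the next layer.
layers = []
h = X
for n_hidden in (3, 2):
    W, b_h = train_rbm(h, n_hidden)
    layers.append((W, b_h))
    h = sigmoid(h @ W + b_h)

# Supervised fine-tuning stage: a logistic unit on the top-level features.
w, b = np.zeros(h.shape[1]), 0.0
for _ in range(500):
    p = sigmoid(h @ w + b)
    w -= 0.5 * h.T @ (p - y) / len(y)
    b -= 0.5 * np.mean(p - y)

acc = np.mean((sigmoid(h @ w + b) > 0.5) == y)
```

The design choice this mirrors is the one in the bullet list: the expensive representation learning happens without labels, and the (comparatively cheap) labelled data only has to teach the association between learned features and classes.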
An interesting approach: first learn concepts, and then learn to associate the concepts with a particular label.

Lots of other stuff going on under the banner of deep learning.
I have only touched on a tiny tiny bit of Hinton's work.
Lots of other people are doing really interesting work:
http://deeplearning.net/

http://www.cs.toronto.edu/~hinton/

Some Successful Applications of this Type of Approach

Voice Recognition
http://research.microsoft.com/apps/pubs/default.aspx?id=171498

RBM
Discriminative Deep Belief Network
discriminative variables