So I now have a little Google team in Toronto, part of the Brain team. And I've been doing more work on it myself. Let's see, any other advice for people that want to break into AI and deep learning? >> Yes, so that's another of the pieces of work I'm very happy with, the idea that you could train your restricted Boltzmann machine, which just had one layer of hidden features, and you could learn one layer of features. >> You've worked in deep learning for several decades. And that may be true for some researchers, but for creative researchers I think what you want to do is read a little bit of the literature. And maybe that puts a natural limiter on how many you could do, because replicating results is pretty time-consuming. Although it wasn't until we were chatting a few minutes ago that I realized you think I'm the first one to call you that, which I'm quite happy to have done. >> Yes, happily, so I think that in the early days, back in the 50s, people like von Neumann and Turing didn't believe in symbolic AI; they were far more inspired by the brain. >> To different subsets. What are your, can you share your thoughts on that? So one example of that is when I first came up with variational methods. And notice something that you think everybody is doing wrong; I'm contrarian in that sense. And I'm hoping it will be much more statistically efficient than what we currently do in neural nets. 
And the answer is you can put that memory into fast weights, and you can recover the activities of neurons from those fast weights. What's happened now is, there's a completely different view, which is that what a thought is, is just a great big vector of neural activity; so contrast that with a thought being a symbolic expression. And by about 1993 or thereabouts, people were seeing ten megaflops. >> I see, great, yeah. It was the first time I'd been somewhere where thinking about how the brain works, and thinking about how that might relate to psychology, was seen as a very positive thing. >> I see. And I went to talk to him for a long time, and explained to him exactly what was going on. And so the question was, could the learning algorithm work in something with rectified linear units? >> Over the years I've heard you talk a lot about the brain. >> [LAUGH] I see, yeah, that's great, yeah. So we managed to make EM work a whole lot better by showing you didn't need to do a perfect E step.
Heroes of Deep Learning: Andrew Ng interviews Geoffrey Hinton
>> Yeah, I think many of the senior people in deep learning, including myself, remain very excited about it. So you can try and do it a little discriminatively, and we're working on that now at my group in Toronto. 
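The fast-weights idea mentioned above can be caricatured in a few lines: store a pattern of activities in a Hebbian outer-product memory, then recover the activities from a partial cue by letting the network settle under those weights. This is a minimal sketch of the general idea, not Hinton's actual model; the pattern size, the `np.sign` settling rule, and all constants are my own choices.

```python
import numpy as np

# Minimal sketch: storing neural activities in Hebbian "fast weights"
# and recovering them from a partial cue (hypothetical toy, not Hinton's model).
rng = np.random.default_rng(0)
h = np.sign(rng.standard_normal(100))   # binary activity pattern to remember

A = np.outer(h, h) / len(h)             # fast-weight memory: Hebbian outer product
np.fill_diagonal(A, 0.0)                # no self-connections

cue = h.copy()
cue[:40] = 0.0                          # degrade part of the pattern

x = cue
for _ in range(5):                      # settle under the fast weights
    x = np.sign(A @ x)

print(np.array_equal(x, h))             # True: pattern recovered from the partial cue
```

With a single stored pattern the cue snaps back to the full activity vector in one settling step, which is the sense in which the memory lives in the weights rather than in the activities.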
And then to decide whether to put them together or not, you get each of them to vote for what the parameters should be for a face. >> That's why you did all that work on face synthesis, right? In these videos, I hope to also ask these leaders of deep learning to give you career advice for how you can break into deep learning, for how you can do research or find a job in deep learning. So this is advice I got from my advisor, which is very unlike what most people say. As far as I know, their first deep learning MOOC was actually yours, taught on Coursera back in 2012, as well. >> One other topic that I know you follow, and that I hear you're still working on, is how to deal with multiple timescales in deep learning? And if you give it to a good student, like for example. Because if you work on stuff that your advisor feels deeply about, you'll get a lot of good advice and time from your advisor. 
As long as you know there's any one of them. And I'd submit papers about it and they would get rejected. >> I guess recently we've been talking a lot about how fast computers, like GPUs and supercomputers, are driving deep learning. I guess my main thought is this. I'm actually really curious, how has your thinking, your understanding of AI, changed over these years? And the reason it didn't work would be some little decision they made, that they didn't realize is crucial. Maybe you do, I don't feel like I do. And I think what's in between is nothing like a string of words. There just isn't the faculty bandwidth there, but I think that's going to be temporary. So it hinges on, there's a couple of key ideas. And generative adversarial nets also seemed to me to be a really nice idea. In the early 90s, Bengio showed that you could actually take real data, you could take English text, and apply the same techniques there, and get embeddings for real words from English text, and that impressed people a lot. 
>> The variational bounds, showing as you add layers. And what this backpropagation example showed was, you could give it the information that would go into a graph structure, or in this case a family tree. And they don't understand that, sort of, this showing computers is going to be as big as programming computers. I think it'd be very good at getting the changes in viewpoint, very good at doing segmentation. But in recirculation, you're trying to make the postsynaptic input, you're trying to make the old one be good and the new one be bad, so you're changing in that direction. Normally in neural nets, we just have a great big layer, and all the units go off and do whatever they do. And it provided the inspiration for today; tons of people use ReLUs and it just works without- >> Yeah. But I saw this very nice advertisement for Sloan Fellowships in California, and I managed to get one of those. I figured out that one of the referees was probably going to be Stuart Sutherland, who was a well-known psychologist in Britain. And I was very excited by that. So the simplest version would be you have input units and hidden units, and you send information from the input to the hidden and then back to the input, and then back to the hidden and then back to the input, and so on. 
And I think the people who thought that thoughts were symbolic expressions just made a huge mistake. Which was that a concept is how it relates to other concepts. >> Some of it, I think a lot of people in AI still think thoughts have to be symbolic expressions. >> Yes. And I got much more interested in unsupervised learning, and that's when I worked on things like the wake-sleep algorithm. >> I see. 
>> And the idea is a capsule is able to represent an instance of a feature, but only one. >> I had a student who worked on that, I didn't do much work on that myself. >> I was really curious about that. And to capture a concept, you'd have to do something like a graph structure or maybe a semantic net. And we actually did some work with restricted Boltzmann machines showing that a ReLU was almost exactly equivalent to a whole stack of logistic units. So we actually trained it on little triples of words about family trees, like "Mary has-mother Victoria". I'm actually curious, of all of the things you've invented, which of the ones are you still most excited about today? Now if the mouth and the nose are in the right spatial relationship, they will agree. And in psychology they had very, very simple theories, and it seemed to me it was sort of hopelessly inadequate to explaining what the brain was doing. >> So when I was at high school, I had a classmate who was always better than me at everything; he was a brilliant mathematician. 
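The claim that a ReLU is almost exactly a whole stack of logistic units can be checked numerically. In the formulation I am assuming here (Nair & Hinton's), the stack shares weights but has biases offset by 0.5, 1.5, 2.5, ...; its sum closely tracks the softplus function, which in turn differs from a ReLU by at most log 2. The function name `stacked_logistics` and the tolerances are my own.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A stack of logistic units with tied weights and biases -0.5, -1.5, -2.5, ...
def stacked_logistics(x, n_units=50):
    return sum(sigmoid(x - i + 0.5) for i in range(1, n_units + 1))

x = np.linspace(-5, 5, 11)
relu = np.maximum(x, 0.0)
softplus = np.log1p(np.exp(x))   # smooth approximation the stack converges to

print(np.max(np.abs(stacked_logistics(x) - softplus)))  # small (on the order of 0.01)
print(np.max(np.abs(softplus - relu)))                  # at most log(2), reached at x = 0
```

So on this grid the 50-unit stack is within about a hundredth of softplus, and softplus hugs the ReLU everywhere except near zero, which is the sense in which one ReLU stands in for the whole stack.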
I think generative adversarial nets are one of the sort of biggest ideas in deep learning that's really new. And so I think thoughts are just these great big vectors, and that big vectors have causal powers. And because of the work on Boltzmann machines, all of the basic work was done using logistic units. So Google is now training people, we call them brain residents; I suspect the universities will eventually catch up. And then the other idea that goes with that. Provided there's only one of them. >> To represent, right, rather than- >> I call each of those subsets a capsule. We invented this algorithm before neuroscientists came up with spike-timing-dependent plasticity. And that memories in the brain might be distributed over the whole brain. >> And in fact, a lot of the recent resurgence of neural nets and deep learning, starting about 2007, was the restricted Boltzmann machine and deep belief net work that you and your lab did. And you could look at those representations, which are little vectors, and you could understand the meaning of the individual features. And then there was the AI view of the time, which was a formal structuralist view. There may be some subtle implementation of it. >> Well, I still plan to do it with supervised learning, but the mechanics of the forward paths are very different. And I did quite a lot of political work to get the paper accepted. And you stayed out late at night, but I think many, many learners have benefited from your first MOOC, so I'm very grateful to you for it, so. 
So you're changing the weight in proportion to the presynaptic activity times the new postsynaptic activity minus the old one. So when you get two capsules at one level voting for the same set of parameters at the next level up, you can assume they're probably right, because agreement in a high-dimensional space is very unlikely. So it would learn hidden representations and it was a very simple algorithm. I've heard you talk about the relationship between backprop and the brain. After it was trained, you then had exactly the right conditions for implementing backpropagation by just trying to reconstruct. You look at it and it just doesn't feel right. In a lot of top 50 programs, over half of the applicants are actually wanting to work on showing, rather than programming. I think the idea that thoughts must be in some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels; pixels come in. I mean you have cells that could turn into either eyeballs or teeth. >> Yeah, one thing I noticed later when I went to Google. In 1986, I was using a Lisp machine which was less than a tenth of a megaflop. >> I see, right, so rather than FIFO learning, supervised learning, you can learn this in some different way. You could do an approximate E step. You can give him anything and he'll come back and say, it worked. I think what's happened is, most departments have been very slow to understand the kind of revolution that's going on. 
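The "agreement in a high-dimensional space" argument can be illustrated with a toy computation. This is not a capsule-network implementation; the pose vectors, the transforms they came from, and the agreement threshold are all hypothetical numbers I made up to show the test itself.

```python
import numpy as np

# Toy illustration of routing by agreement: each lower-level capsule
# (mouth, nose) votes for the pose of the whole face. Votes that cluster
# tightly in pose space are unlikely to do so by chance, so agreement
# is taken as evidence that there really is a face.
face_pose_from_mouth = np.array([10.0, 20.0, 0.1, 1.0])  # x, y, angle, scale
face_pose_from_nose  = np.array([10.1, 19.9, 0.1, 1.0])

votes = np.stack([face_pose_from_mouth, face_pose_from_nose])
mean_vote = votes.mean(axis=0)
spread = np.linalg.norm(votes - mean_vote, axis=1).max()

# If the votes cluster tightly, route both capsules' outputs to the face capsule.
print(spread < 0.5)   # True: the mouth and nose agree on a single face pose
```

A mouth and a nose from two different faces would produce votes far apart in this four-dimensional pose space, so the same test would reject grouping them.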
>> Right, that's why you did all that. And in fact, from the graph-like representation you could get feature vectors. I kind of agree with you that it's not quite a second industrial revolution, but it's something on nearly that scale. What advice would you have for them to get into deep learning? The value paper had a lot of math showing that this function can be approximated with this really complicated formula. There's no point not trusting them. If you want to produce the image from another viewpoint, what you should do is go from the pixels to coordinates. >> I think that at this point you more than anyone else on this planet have invented so many of the ideas behind deep learning. Which is, if you want to deal with changes in viewpoint, you just give it a whole bunch of changes in viewpoint and train on them all. That was almost completely ignored. Spike-timing-dependent plasticity is actually the same algorithm but the other way round, where the new thing is good and the old thing is bad in the learning rule. And so I guess he'd read about Lashley's experiments, where you chop off bits of a rat's brain and discover that it's very hard to find one bit where it stores one particular memory. >> Yes, it was a huge advance. It was a model where at the top you had a restricted Boltzmann machine, but below that you had a Sigmoid belief net, which was something invented many years earlier. And then I gave up on that and tried to do philosophy, because I thought that might give me more insight. And then, trust your intuitions and go for it; don't be too worried if everybody else says it's nonsense. They cause other big vectors, and that's utterly unlike the standard AI view that thoughts are symbolic expressions. I was never as big on sparsity as you were, buddy. Which is, I have this idea I really believe in and nobody else believes it. And he came into school one day and said, did you know the brain uses holograms? 
>> Actually, it was more complicated than that. And so I was showing that you could train networks with 300 hidden layers, and you could train them really efficiently if you initialize with the identity. >> I eventually got a PhD in AI, and then I couldn't get a job in Britain. >> Yes, so from a psychologist's point of view, what was interesting was it unified two completely different strands of ideas about what knowledge was like. Later on, Yoshua Bengio took up the idea and has actually done quite a lot more work on that. And then you'll use a bunch of neurons, and their activities will represent the different aspects of that feature, like within that region, exactly what are its x and y coordinates? So, around that time, there were people doing neural nets who would use densely connected nets, but didn't have any good ways of doing probabilistic inference in them. I think when I was at Cambridge, I was the only undergraduate doing physiology and physics. So it was a directed model, and what we'd managed to come up with by training these restricted Boltzmann machines was an efficient way of doing inference in Sigmoid belief nets. >> Yes and no. Because in the long run, I think unsupervised learning is going to be absolutely crucial. So I knew about rectified linear units, obviously, and I knew about logistic units. And what we managed to show was a way of learning these deep belief nets so that there's an approximate form of inference that's very fast; it just happens in a single forward pass, and that was a very beautiful result. So what advice would you have? 
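The point about identity initialization for very deep (here, recurrent-style) networks can be seen numerically. This is my own construction, assuming linear hidden layers: an identity-initialized layer passes the signal through 300 layers unchanged, while a small random matrix applied 300 times shrinks it to nothing.

```python
import numpy as np

# Why identity initialization helps very deep linear stacks:
# the identity preserves the signal exactly, layer after layer,
# while a generic small random matrix attenuates it exponentially.
rng = np.random.default_rng(0)
x = rng.standard_normal(64)

identity_out = x.copy()
random_out = x.copy()
I = np.eye(64)                                 # identity-initialized layer
W_rand = rng.standard_normal((64, 64)) * 0.05  # small random layer

for _ in range(300):
    identity_out = I @ identity_out
    random_out = W_rand @ random_out

print(np.allclose(identity_out, x))            # True: signal preserved through 300 layers
print(np.linalg.norm(random_out) < 1e-6)       # True: signal has effectively vanished
```

With gradients the story is symmetric: backpropagating through 300 identity layers leaves the gradient intact, whereas the random stack kills it, which is why training such depths was hopeless without a trick like this.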
But the crucial thing was this to-and-fro between the graphical representation, or the tree-structured representation, of the family tree, and a representation of the people as big feature vectors. Welcome Geoff, and thank you for doing this interview with deeplearning.ai. So that's what first got me interested in how the brain stores memories. They're sending different kinds of signals. The people that invented so many of these ideas that you learn about in this course or in this specialization. And then when I was very dubious about doing it, you kept pushing me to do it, so it was very good that I did, although it was a lot of work. So they thought what must be in between was a string of words, or something like a string of words. And you'd give it the first two words, and it would have to predict the last word. But then later on, I got rid of a little bit of the beauty, and it started letting me settle down and just use one iteration, in a somewhat simpler net. So let's suppose you want to do segmentation, and you have something that might be a mouth and something else that might be a nose. >> I see, yeah. 
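The family-trees setup, where the net gets the first two words of a triple and must predict the last, can be sketched in miniature. This is my own toy re-creation, not Hinton's 1986 network: the people, relations, embedding size, and the fact that only the output weights are trained here are all simplifying assumptions.

```python
import numpy as np

# Toy family-trees predictor: given embeddings for (person, relation),
# predict which person completes the triple, e.g. (Mary, has-mother, Victoria).
rng = np.random.default_rng(0)

people = ["Mary", "Victoria", "Arthur"]
relations = ["has-mother", "has-father"]
triples = [(0, 0, 1),   # Mary has-mother Victoria
           (0, 1, 2)]   # Mary has-father Arthur

d = 6                                            # embedding size
P = rng.standard_normal((len(people), d)) * 0.1  # person feature vectors
R = rng.standard_normal((len(relations), d)) * 0.1
W = rng.standard_normal((2 * d, len(people))) * 0.1

def forward(p, r):
    h = np.concatenate([P[p], R[r]])             # person + relation features
    logits = h @ W
    e = np.exp(logits - logits.max())
    return h, e / e.sum()                        # softmax over output persons

def loss():
    return -sum(np.log(forward(p, r)[1][t]) for p, r, t in triples)

before = loss()
lr = 0.5
for _ in range(200):                             # plain SGD on the output weights
    for p, r, t in triples:
        h, probs = forward(p, r)
        grad = probs.copy()
        grad[t] -= 1.0                           # d(cross-entropy)/d(logits)
        W -= lr * np.outer(h, grad)
after = loss()
print(after < before)   # True: the net learns to complete the triples
```

In the real version the feature vectors themselves are also learned, and that is where the interesting part happens: the features come to encode things like generation and branch of the family.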
But what I want to ask is, many people know you as a legend; I want to ask you about your personal story behind the legend. >> Yes, so actually, that goes back to my first years as a graduate student. But when you have what you think is a good idea and other people think is complete rubbish, that's the sign of a really good idea. And in particular, in 1993, I guess, with Van Camp. >> Yes. And over the years, I've come up with a number of ideas about how this might work. If your intuitions are good, you should follow them and you'll eventually be successful. >> I think that's a very, very general principle. If it turns out that backprop is a really good algorithm for doing learning. >> And then what you can do if you've got that, is you can do something that normal neural nets are very bad at, which is you can do what I call routing by agreement. And therefore can hold a short-term memory. Where you take a face and compress it to a very low-dimensional vector, and then you can fiddle with that and get back other faces. So the idea is that the learning rule for a synapse is: change the weight in proportion to the presynaptic input and in proportion to the rate of change of the postsynaptic input. Discriminative training, where you have labels, or you're trying to predict the next thing in the series, so that acts as the label. It turns out people in statistics had done similar work earlier, but we didn't know about that. And what you want is to train an autoencoder, but you want to train it without having to do backpropagation. 
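Training an autoencoder without backpropagation, with the local rule just described (presynaptic input times the change in postsynaptic activity), can be sketched along the lines of recirculation. This is a simplified, hypothetical toy after Hinton and McClelland's idea, assuming linear units and a "regression" factor `lam` that keeps the new activities close to the old ones; the sizes, rates, and number of epochs are my own choices.

```python
import numpy as np

# Toy recirculation: circulate activity visible -> hidden -> visible -> hidden,
# and update each weight from its presynaptic activity times the change
# (old minus new) in its postsynaptic unit's activity. No backprop anywhere.
rng = np.random.default_rng(1)
data = rng.standard_normal((20, 8))          # toy visible patterns
W = rng.standard_normal((8, 4)) * 0.1        # visible -> hidden weights
V = rng.standard_normal((4, 8)) * 0.1        # hidden -> visible weights
lr, lam = 0.05, 0.75                         # lam regresses new states toward old ones

def recon_error():
    h = data @ W
    return np.mean((data - h @ V) ** 2)

before = recon_error()
for _ in range(300):
    for x0 in data:
        h0 = x0 @ W                          # first pass through the hidden units
        x1 = lam * x0 + (1 - lam) * (h0 @ V) # reconstructed visible state
        h1 = lam * h0 + (1 - lam) * (x1 @ W) # second pass through the hidden units
        V += lr * np.outer(h0, x0 - x1)      # pre = h0, post change = old - new
        W += lr * np.outer(x1, h0 - h1)      # pre = x1, post change = old - new
after = recon_error()
print(after < before)   # reconstruction error decreases
```

The update for `V` is (up to scale) exactly the reconstruction-error gradient, while the update for `W` only approximates it, using `W` in place of the transpose of `V`; the surprising empirical finding was that this still works because the two weight matrices tend to align during learning.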
And from the feature vectors, you could get more of the graph-like representation. >> Right, and I may have misled you. They think they've got a couple, maybe a few more, but not too many. I usually advise people to not just read, but replicate published papers. >> So that was the second thing that I was really excited about. And a lot of people have been calling you the godfather of deep learning.