The following is the transcript from the 4/18 JC with Stephen Grossberg. As with all transcripts, it’s imperfect; blame otter.ai for any inaccuracies.
Chaytan Inman (00:00:22): ...it's gonna resonate. Really, where I can see myself.

Stephen Grossberg (00:01:35): Jana? Can you hear me? I don't...

Chaytan Inman (00:01:47): We're like 15 minutes early, because we're still setting things up right now.

Stephen Grossberg (00:01:51): Are you in a place where you need your mask?

Chaytan Inman (00:02:25): All of us.

Stephen Grossberg (00:02:43): Ready?

Chaytan Inman (00:02:45): Yes, I am. Let me fix my background so you can see. How are you doing?

Stephen Grossberg (00:02:57): I'm good. It's a little later than I would usually be doing something like this, but here I am.

Chaytan Inman (00:03:06): We certainly appreciate it. We're excited to talk with you.

Stephen Grossberg (00:03:15): I think your mask muffled your voice.

Chaytan Inman (00:03:22): Like that?

Stephen Grossberg (00:03:24): That would be better. So, you are vaccinated and all those good things?

Chaytan Inman (00:03:35): Yes.

Stephen Grossberg (00:03:40): Good. Four shots already? Wow. My fourth shot is the geezers' shot, and I'm a geezer. So do we wait till 8:30 in case more people show up, or is everyone here?

Chaytan Inman (00:04:00): We're already here. I mean, you have a room full of people right now.

Stephen Grossberg (00:04:11): Well, whenever you want to start, I'm glad to start.

Chaytan Inman (00:04:19): Sure, I mean, I think...

Stephen Grossberg (00:04:25): We could work on the...

Chaytan Inman (00:04:27): ...promised time, which is...

Stephen Grossberg (00:04:30): Fine. Whatever. What is it now? It's 19 after.
Stephen Grossberg (00:04:37): So this is part of a journal club?

Chaytan Inman (00:04:41): Yes. Our club is called Interactive Intelligence, and our goal is to try to understand the shortcomings of machine learning right now — why it's not as smart as brains are — and how we can improve that. We have a journal club every Monday, and last week we talked about dendritic spikes and predictions in the brain. We're still trying to understand your adaptive resonance theory. There are many videos on it, but they're either too long, too complicated to understand, or in a different language.

Stephen Grossberg (00:05:33): You mean the work I do? Well, that's why I wrote my book. If you ever want to understand the work that I and many collaborators have done, that's the one-stop shop you should go to. It's written to be self-contained and as non-technical as possible, in a conversational style. I don't know if any of you have been reading parts of it.

Chaytan Inman (00:06:03): Yeah, I started reading the overview.

Stephen Grossberg (00:06:07): Right. Well, if you read the preface: I wrote the chapters to be readable basically independently of each other, because it is a very long book, and not everyone has the time or the interest to read 768 double-column pages. So I think a lot of people read the preface and then jump into the chapter whose topic interests them the most, and then another chapter, for as long as they want to read. Of course, if you do read it sequentially, you get a somewhat better sense, because I start with perception and then go on to cognition and emotion and action, and the whole perception-cognition-emotion-action cycle with the environment. So there's a natural progression in the chapters, but you don't have to follow it if you don't have the time or interest.

Unknown Speaker (00:07:18): So...

Chaytan Inman (00:07:23): What's your favorite chapter?

Stephen Grossberg (00:07:25): That's like asking who my favorite child is. But what pleases me a lot is that I have quite a few friends who aren't scientists who've been reading parts of the book with pleasure, in the spirit that I just mentioned — not necessarily cover to cover. One of my friends is a rabbi, another a cantor, another a visual artist, another owns a gallery, another is a lawyer, another is a social worker: people who know nothing about the brain. And they've enjoyed the parts they read; they read the parts that attract their interest. That's what I wrote it for. I wrote it for the general public who may be interested in the topic. If you're not interested in the topic, you know, go read a murder mystery, or a thriller, or a nature book, or whatever interests you the most. So, what is your major, Chaytan?

Chaytan Inman (00:08:35): I am majoring in computer science.

Stephen Grossberg (00:08:40): That can mean so many things.

Chaytan Inman (00:08:42): Yeah, sometimes it feels like it's just a math major in disguise. What did you major in?

Stephen Grossberg (00:08:54): Well, I have an odd story. You know, the first question that I'm supposed to answer is how I'd explain my work to someone who doesn't have any prior knowledge, and to explain that, I really have to explain how I got started. So rather than try to answer that question twice, if you wait a few minutes I'll give you an answer, and if that raises further questions, by all means ask me.
Unknown Speaker (00:09:27): So...

Stephen Grossberg (00:09:28): The short answer is, I consider myself a theoretical psychologist. On the other hand, my title — apart from my endowed chair, Wang Professor of Cognitive and Neural Systems, which is about this stuff — is that I also hold professorships in mathematics and statistics, psychological and brain sciences, and biomedical engineering, which sort of captures the interdisciplinary nature of the work that I've done. The core always starts with mind, but then it goes on into things like computer science: people have generated algorithms based on our learning models, and our vision models lead to image-processing and machine-vision algorithms, and so on. So, Jana, what is your major?

Chaytan Inman (00:10:41): I'm majoring in neuroscience right now, but I'm also double-minoring in education and computational neuroscience.

Stephen Grossberg (00:10:50): So, is your neuroscience major primarily experimental?

Chaytan Inman (00:10:58): [inaudible]

Stephen Grossberg (00:11:00): Well, that's experimental, yeah.

Chaytan Inman (00:11:02): Yeah. For example, last quarter we were just talking about one neuron — like how action potentials propagate — and this quarter is all about the brain. So, lots of memorization: a single day might be a hundred words.

Stephen Grossberg (00:11:27): Well, part of the art of modeling, the way I do it, is to clarify how multiple levels interact together: behavior, anatomy and physiology, physics, chemistry; networks, down to individual neurons. You know, I've written papers about dendritic spikes, for example, and I made predictions about learning on the dendritic spikes that were later supported by people like Henry Markram, who's a friend of mine. So true theory isn't merely explanatory or descriptive; it really can be predictive and shed new functional light on data. Data alone is very passive. I mean, for example: what is a database? What criteria do you use to assemble experiments? If you don't have a conceptual understanding of the underlying unity in apparently different experiments, you can't even bring together the set of experiments that you want to explain. So, anyway — what's our time? Oh, in three minutes we begin.

Chaytan Inman (00:13:03): Do you happen to know Karl Friston?

Stephen Grossberg (00:13:06): What was that?

Chaytan Inman (00:13:14): Do you happen to know Karl Friston? Dr. Karl Friston?

Stephen Grossberg (00:13:19): Friston? Yes, I know Karl. A very nice and very smart man. He doesn't know any psychology.

Chaytan Inman (00:13:39): Oh, wow.

Stephen Grossberg (00:13:40): He's quite formal. He likes using formal concepts, like free-energy principles, stuff like that, because that's what he knows. But as I'll clarify in a minute: if you don't understand psychological data, or behavior, you don't understand the functions that our brains are carrying out. And if you don't understand the functions, you can't really clarify what the mechanisms are in a functionally meaningful way. You can describe them, and a lot of the greatest experimental neuroscientists classically knew very little psychology.
Stephen Grossberg (00:14:30): But that's no longer true. The greatest experimental neuroscientists now may, for example, use awake behaving monkeys to do multiple-electrode experiments while the monkeys perform cognitively challenging tasks, like my colleague and friend Earl Miller at MIT. But in the old days — for example, John Eccles, or even more so Sherrington, and before them Ramón y Cajal — they were just trying to figure out what neurons are, let alone their functional utility in networks and systems.

Chaytan Inman (00:15:28): Let me get the PowerPoint up and get started.

Stephen Grossberg (00:15:30): Good. Jana is going to do a little introduction and then ask a couple of questions, and then Chaytan is going to ask further questions. And we have to keep the order of questions, because I have notes: I'm going to try to tell you a lot of stuff in a very compact way, and to make it self-contained we need to keep the order.

Chaytan Inman (00:16:00): Right.

Stephen Grossberg (00:16:03): Well, that's not the first question we agreed to. I'll answer the questions even if you don't ask me them. So, do you want to start, Jana?

Chaytan Inman (00:16:19): Yeah, sure. So, my name is Jana, and we here at Interactive Intelligence are very honored to have such a distinguished guest to talk with us today. Dr. Grossberg is widely regarded as an important founding member of the fields of computational neuroscience, theoretical psychology, and neural network technology. His work has focused on developing theories that help us understand the mechanisms behind learning and memory, and he has published over 500 articles and books. Dr. Grossberg also founded the Department of Cognitive and Neural Systems at Boston University, where he has worked for more than 47 years, and he currently serves as Professor Emeritus — which means that he retired with honors — of Mathematics and Statistics, Psychological and Brain Sciences, and Biomedical Engineering. If you want to learn more about the topics we are going to discuss today, please check out his recently published book, Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.

To further introduce his background: after becoming the first joint undergraduate major in mathematics and psychology at Dartmouth College, he studied mathematics as a PhD student at Stanford and Rockefeller, then became an assistant professor of applied mathematics, all the while continuing to discover and develop a stream of conceptual and mathematical results about many aspects of neural networks — long before computational neuroscience existed as a distinct field in the 1970s. Dr. Grossberg eventually moved to Boston University in 1975, where the President and Provost recruited him and awarded him an endowed chair in cognitive and neural systems. There he established the Department of Cognitive and Neural Systems, which he and many colleagues developed into a world-leading graduate department that theoretically explains how brains make minds, carries out experiments to test these explanations and predictions, and applies these insights to applications in engineering, technology, and AI. He is a foundational architect of this field; in particular, his models help to explain essentially all the fundamental brain processes that make us human, and provide a blueprint for AI with human-level intelligence. It is worth mentioning Dr. Grossberg's biggest collaborator: his mathematician wife, Gail Carpenter.
Together they developed the widely used adaptive resonance theory and founded Boston University's Department of Cognitive and Neural Systems. The 600-page book you see right here, which I introduced at the beginning, is actually dedicated to Gail Carpenter. One of Dr. Grossberg's goals is to reach a broader audience, so not only did he write his book in a self-contained and non-technical manner, he also paid Oxford University Press his own money to make the 600-page book cheaper for the general public to access. He will talk in more detail about his book and work in today's discussion; note that this very long intro is only scratching the surface of his work. Now it is time to finally meet Dr. Grossberg and start the Q&A session. Could you start by giving us a brief overview of the work that you have done over the years, and how would you explain your work to someone who does not have any prior knowledge?

Stephen Grossberg (00:19:57): Sure. Can you open the screen, so I can see you and people can see me, Jana? Yeah, great.

So, first I should say — given that the audience consists of students — that my work began in 1957, when I took introductory psychology as a 17-year-old college freshman at Dartmouth College. So you're never too young to start. At that time, I was fascinated by classical psychological data about how humans learn lists of things. In particular, there was a famous serial verbal learning experimental literature, and there were paradoxes in those data that forced me, in a way that I will clarify, to introduce the paradigm of using systems of nonlinear differential equations to explain how our brains make our minds, as well as the basic equations for doing that. They include equations for cell activation — neuronal activation, or short-term memory — activity-dependent habituation, or medium-term memory, and learning and memory, or long-term memory, that still form the foundation of all biological models of how brains make minds to the present. As I rapidly acquired more and more collaborators, we went on to use the models to explain an increasingly broad range of psychological phenomena. So my point here is that since this early beginning, 65 years ago, my work has tried to explain how brain mechanisms, starting just with the basic equations, control psychological functions. That's why I like to emphasize that brain evolution needs to achieve behavioral success: it doesn't matter if you have gorgeous neurons — if they can't control successful behavior, then Darwinian selection will wipe you out. And that's why my new book is called Conscious Mind, Resonant Brain: How Each Brain Makes a Mind.

And why do we need models? Well, interactions within and across several brain regions are often needed to generate these psychological functions, and these behaviors are thus emergent properties of the brain interactions. You know, the cells themselves are simple; the emergent properties can be language, can be emotion, can be cognition. So purely experimental approaches can't really understand emergent properties. That's where models are essential, and in order to derive these models I developed a modeling method and cycle.

Because brain evolution needs to achieve behavioral success, the modeling method always starts with scores, or even hundreds, of psychological experiments, because that's the level on which behavioral success is defined. Now, when we study data in a book, it's just, you know, static curves of this variable against that. And the art of modeling is to think about the data long enough, and develop a deep enough intuition, that you can imagine them as being created by an individual mind acting autonomously in real time to adapt to its environment. That's the art of modeling; it's a speculative leap. There's no algorithm for it. Ask any scientist who's done serious theory to explain that speculative leap — they won't be able to tell you. Even Albert Einstein couldn't tell you.

So, by thinking in real time about the data — that's the hardest thing: how the data emerge from an individual mind interacting with the world — the method, if you think hard enough about hundreds of experiments, leads you to discover underlying design principles. And then you can convert the design principles into the minimal mathematical models that embody them, and use the models to explain a lot more psychological data than went into the derivation. But the big surprise, when I was a kid, was that the models looked like neural networks. We were explaining data about mind, and out came biological neural networks — I didn't even know about neurons, and when I derived them, my pre-medical student friends told me about neurons and axons and transmitters and synapses.

Well, clearly you can't derive a brain in one step. And so, after doing one stage of the derivation, I could see its explanatory limits. Said in a more colorful way, I could see the boundary between what I knew and what I didn't know, and understanding the shape of that boundary always suggested a new design principle. Then I could go through the cycle again and get an expanded, more predictive, more explanatory, deeper model that often included the earlier model, although maybe in a refined way. For example, ultimately the neural networks forced me into predicting how circuits in the neocortex — laminar circuits — work, and that was part of this unlumping process. So, to sum this up: I developed a modeling cycle that leads to increasingly realistic models with ever greater explanatory power, and since I started, many years ago, I've gone through the cycle many, many times. If you look at my book — a picture's worth 1000 words — Figure 2.37 in Chapter 2 gives you a picture of the modeling cycle.

The crucial thing is, I always try to understand the meaning of the data, and to do that you have to develop experimental intuition. So my recommendation to young people who want to understand the mind, or brain, or both: you've got to fall in love with data you desperately want to understand. You know, if you're not dying to know what they mean, you're not going to put in the work needed.

Now, in particular, I fell in love with a problem — or rather, a lot of data — in 1957, going back from the '30s through the '50s, about how we learn and remember lists of events or items. And that's because the passage of time greatly influenced those data. Let me just sketch for you why that led me to introduce neural networks. Starting with psychological data about how, for example, you learn the alphabet — A, B, C, D — I was led to equations for short-term, medium-term, and long-term memory. That's a huge leap, yet it seemed very natural, even to a 17-year-old. In serial verbal learning, you practice a list of items or events over and over again: you present each item at a certain rate, then you rest, you present the list again at that rate, you rest, and you keep going until you reach a criterion where you can predict the next item correctly before you see it. So I say A, you predict B; I say B, you predict C; and so forth. It's just an example of practice makes perfect.
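The short-term and long-term memory equations mentioned here can be given a minimal numerical sketch. The following is not Grossberg's published model — the constants, input schedule, and the specific gating law are illustrative, and the medium-term (habituation) trace is omitted for brevity — but it has the flavor he describes: a shunting short-term-memory (STM) activation that saturates below a bound B no matter how intense its input, coupled to a long-term-memory (LTM) trace that only changes while the presynaptic STM trace is active, so presenting item A and then item B strengthens an A-to-B association.

```python
def simulate(T=2.0, dt=0.001):
    A, B = 1.0, 1.0       # passive decay rate and upper activity bound (illustrative)
    lam = 2.0             # LTM learning rate (illustrative)
    xA = xB = z = 0.0     # STM traces for items A and B, and the A -> B LTM trace
    peak = 0.0
    for step in range(int(T / dt)):
        t = step * dt
        IA = 10.0 if t < 0.5 else 0.0          # item A is presented first...
        IB = 10.0 if 0.4 <= t < 0.9 else 0.0   # ...then item B, overlapping briefly
        # Shunting STM: excitatory input is gated by (B - x), so the
        # activation saturates below B even for a very intense input.
        xA += dt * (-A * xA + (B - xA) * IA)
        xB += dt * (-A * xB + (B - xB) * IB)
        # Gated LTM: z changes only while the presynaptic (A) trace is
        # active, and tracks the postsynaptic (B) activation while it does.
        z += dt * lam * xA * (xB - z)
        peak = max(peak, xA)
    return peak, z

peak, z = simulate()
print(round(peak, 3), round(z, 3))   # peak stays below B = 1.0; z ends positive
```

Run forward in time with Euler steps, the STM traces decay on a timescale of seconds while the LTM trace accumulates across the presentation, which is the separation of timescales the serial-learning data demanded.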
And the reason I was excited by these data is the classical serial-position curve of cumulative errors: across a number of learning trials, how many errors occur at each list position — at A, at B, at C, and so on? It was such a paradoxical thing that people were writing that anyone who could explain it should get a Nobel Prize in psychology, which of course doesn't exist. Anyway, why was it a paradox? It's because there are more errors in the middle of the list than at the beginning or the end. The beginning and the end are easier to learn than the middle. I like saying it in a cute sort of way: it's sort of like a relationship — you remember how it started and how it ended, but the middle is a muddle. So why is it paradoxical? You might have assumed that as you get deeper and deeper into the list — as if this were a computer — you would accumulate more and more errors, because there's more response interference from the items that have already been presented. But it doesn't just get harder and harder: it gets harder and then it gets easier. There's an inverted U in the errors.

Moreover, if you increase the amount of rest between trials, the entire distribution of errors through the whole list changes. Let's say you present the items every two seconds and rest for six seconds, versus presenting them every four seconds — which is slower — and resting for two minutes and six seconds. In the latter, the whole error curve, compared with the first case, crashes, and the longer you rest, the bigger the crash. So that showed that the non-occurrence of a future item — i.e., a longer rest before you repeat the list — can influence the learning of past list items. That implies, in some sense, that events can influence each other backwards in time, because a future non-occurrence can influence the whole distribution of past errors. As a 17-year-old, I found that really exciting.
A simpler example of that is called backward learning. Let's say I practice A-B. Well, if you test it, that will also increase the probability that, given B, you'll say A. So learning A-B helps you to learn B-A: that's backwards in time. But on the other hand, we can still learn A, B, C, which means the association from B to C must have been stronger than the one from B to A. So even though there's backward learning, what affects the future is stronger than the past. Well, what does that mean? Because of all these interactions between everything that has recently occurred, you need a network, with nodes representing A, B, C, or whatever, and associations from A to B and from B to A, from A to C and from C to A. That's what forced me into neural networks. And because of the rates at which all these things fed back and influenced the learning, I needed a timescale finer than two seconds, or four seconds, or one second, or whatever. In science, the classical way of representing a timescale finer than any macroscopic event timescale is differential equations. So I needed nonlinear differential equations, because of the recurrent interactions between the short-term memory activations and how they activated long-term memory at the synapses, et cetera. Now, about my book — I think you want to ask me another question.

Chaytan Inman (00:33:26): So basically our next question was... [screen-sharing difficulties]

Stephen Grossberg (00:33:41): If it's hard for you to keep pulling it up, I wrote all your questions down.

Chaytan Inman (00:33:46): Right, so I'm trying to log into another account and maybe put it up there, but you can go on and answer this question.

Stephen Grossberg (00:33:56): Okay, so the question is: could you say a little more about the book and its contents? So, as I remarked, the book is called Conscious Mind, Resonant Brain: How Each Brain Makes a Mind, and I want to emphasize that it was written for the general public. That doesn't mean it isn't challenging: we're talking about our minds, which are among the most complicated systems we can understand with science. But to help the people who want to read it, it's self-contained; I tried to keep it as non-technical as possible, I wrote it in a conversational style, and wherever I could, I wrote it as a series of stories.

It really tries to give an overview of the main processes whereby each brain makes a mind. Said in another way, it helps to scientifically explain the human condition, which is all about that. It provides principled and unifying explanations of the data in hundreds of psychological and neurobiological experiments, and it makes radical predictions, many of which have been confirmed by subsequent data. If you haven't looked at the book: it's spread over a preface and 17 chapters. As I was saying to Chaytan before, if you don't have the time or interest to read the whole thing, read the preface. I wrote the chapters to be readable independently of each other, so jump into the chapter whose topic interests you the most. The book begins with perception, both visual and auditory, and then goes on to discuss cognition, emotion, and action. And it does so in both healthy individuals and people who suffer from clinical disorders, including Alzheimer's disease, autism, amnesia, schizophrenia, post-traumatic stress disorder, attention deficit hyperactivity disorder, and so on. You might say: well, why did you get into studying mental disorders? And here I want to make a general point. If you work really hard to understand the underlying principles that govern a scientific process, it will give you more than you ever expected to explain. So after explaining large amounts of data about normal, typical behaviors, I was able to ask what happens if those mechanisms become unbalanced or damaged in prescribed ways, and that leads to mechanistic explanations of the behavioral symptoms of quite a few mental disorders. I call it the gift that keeps on giving: if you work really hard, you'll get the gift that keeps on giving. Just like I never tried to understand consciousness — I only tried to understand learning — but you'll notice that consciousness is in the title of my book.

Anyway, Chapter 17, if you want to jump ahead, gets more speculative and discusses biological bases of creativity, morality, religion, causality — you know, how do we learn what causes what? It also clarifies how we can adhere to superstitious, or self-defeating, or even false beliefs in certain social environments, despite disconfirming evidence, just like people who believe conspiracy theories today in the United States. And I should emphasize that the work tries to clarify how the perception-cognition-emotion-action cycle with the environment supports this kind of intelligence. So I think that's enough — on to question two.

Chaytan Inman (00:38:30): Okay, wait, so I just have a follow-up question on that before we ask the next one. You mentioned that, in general, your book helps to explain the human condition, which I guess makes sense, in that any way we can explain why the mind works, and how the mind works, would help explain the human condition. But I just feel like that was a broad statement — do you want to elaborate a little?

Stephen Grossberg (00:38:55): Well, only in the sense that the book tries to clarify the foundations of the main processes whereby we know the world. It doesn't do everything — one of your questions later is what I haven't done that I would like to do, or that someone should do, and that'll clarify that I don't think it's everything. But it gives the foundation. That's why I said, when you talk about the human condition: that includes things like biological bases of creativity, morality, religion, and so on, and these are important parts of the human condition. So it's in that sense that it gives the foundation. Nothing in the book is sufficient — I hope it's all necessary — but remember that science is never finished. Even theoretical physics: even Newton didn't finish it, even Einstein didn't finish it, Heisenberg didn't finish it. It's always a process. But a foundation can endure: a revolution in physics might refine the foundation, or it might lead in a new direction, like in quantum theory, but the foundation will endure — just like Newton's equations were assimilated into Einstein's general relativity.

Chaytan Inman (00:40:32): Okay, cool. So, one of the things that we talk about a lot is backpropagation. You know, one of the questions machine learning people ask a lot is: does this exist in the brain — is it biologically plausible? So, what do you think about that? Oh, I'm sorry, before I move on, I'm just going to say: we're going to ask audience questions at the very end, so if you have any questions right now, you can...

Stephen Grossberg (00:41:10): I welcome that. So I'm going to give a short piece of an answer now, and then the next question you ask me will enable me to lay the foundation for a deeper answer. The short answer is that back propagation — and deep learning, which uses back propagation as its learning engine — is just a feed-forward adaptive filter: you go from here to there with adaptive weights.
And it learns using a mechanism called non-local weight transport. What that means is that you artificially move the adaptive weights that you learn from the location in the network where you learned them to where you need them to filter the bottom-up inputs. That non-local weight transport is not a physical operation, let alone a brain operation. Non-local transport has no analog in the brain, where all interactions — as in macroscopic physical theories — are local. Now, the next question you have will let me answer in a more specific and deeper way, so you should go on to that.

Chaytan Inman (00:42:39): Will do. Okay, then: in what ways does deep learning differ from adaptive resonance?

Stephen Grossberg (00:42:50): Okay, so I have to tell you a little about what it is they do for us before I can say how they differ. Essentially all of the deepest insights about how our brains work that I've ever discovered have come from an analysis of how brains self-organize, both through childhood development and adult learning. You know, you start with an individual cell, it goes through mitosis, and development and learning go on at an incredible rate. Like we're learning now, in real time: you won't remember everything we discussed, but you'll remember a shocking amount. Or if we go to an exciting movie, you'll remember a shocking number of scenes that flash by at a very fast frame rate. So I would claim that adaptive resonance theory — ART, for short — is the most advanced cognitive and neural theory of how our brains learn to attend, recognize, and predict objects and events in a changing world, and I say that because of its explanatory and predictive success. So, unlike deep learning, ART is not just a feed-forward adaptive filter; it's a principled theory.
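The "non-local weight transport" objection can be made concrete with a tiny two-layer network. This is a generic sketch (the data, layer sizes, and learning rate are made up, not taken from any of Grossberg's papers): note the line computing the hidden-layer error, which reuses the transposed forward weights W2 — information stored in one set of "synapses" that must somehow be copied onto the backward pathway, the step with no local physical analog.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))            # toy inputs
Y = X @ rng.normal(size=(4, 1))         # toy linear targets

W1 = rng.normal(scale=0.5, size=(4, 8)) # input -> hidden weights
W2 = rng.normal(scale=0.5, size=(8, 1)) # hidden -> output weights
lr = 0.05

losses = []
for _ in range(200):
    H = np.tanh(X @ W1)                 # forward pass through the hidden layer
    err = (H @ W2) - Y                  # output error
    losses.append(float((err ** 2).mean()))
    dW2 = H.T @ err / len(X)
    # Non-local weight transport: the backward pass reuses the *forward*
    # weights W2 (transposed) to route the error to the hidden layer.
    # No single synapse has local access to this information.
    dH = (err @ W2.T) * (1 - H ** 2)
    dW1 = X.T @ dH / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1

print(losses[0], losses[-1])            # loss decreases, via the non-local rule
```

The algorithm works as an optimizer — the loss falls steadily — but every hidden-layer update depends on weights that live elsewhere in the network, which is exactly the locality violation being described.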
00:44:27 It's not a filter, it's not an algorithm, it's a theory. ART proposes that biological intelligence is embodied by a self-organizing, explainable (and I have to say what that means) prediction system. 00:44:47 Um, 00:44:50 I'll have to explain what that means if you ask me. And it carries out hypothesis testing, fast self-stabilizing learning, classification, and prediction in a rapidly changing, or nonstationary, world. 00:45:07 Deep learning dies in a nonstationary world, unlike ART. 00:45:13 Deep learning is untrustworthy because it's not explainable, and I'll say what that means, 00:45:23 and it's unreliable because it experiences catastrophic forgetting. 00:45:31 I'll explain what that is. And then 00:45:34 one other thing: 00:45:37 as is reviewed in the book, I showed that back propagation has 17 problems, not just those two, that adaptive resonance solves. 00:45:51 So what does 00:45:54 "not explainable" mean? 00:45:57 Well, as I will be explaining to you in a moment, because ART has short-term memory traces, activations, ART can focus attention 00:46:11 on 00:46:13 a set of critical features that are going to control predictive success. I call it a critical feature pattern; I'll say more about it in a moment. And anyone who is looking at an ART network, by looking at the critical features, knows the information that it's using 00:46:36 to make a prediction. So you can explain how it's making its prediction, but you can't do that with deep learning. So if you use it in a medical or financial 00:46:47 application, and someone died or you went broke, and it was achieved with deep learning, you'd be sued for everything you're worth, 00:46:57 because you don't know why it works or whether it works; it's not explainable. And catastrophic forgetting means that when you're doing this very slow supervised learning in deep learning,
00:47:12 which is another of the problems, one of the 17 problems. ART can learn in one trial, like I learned your faces in one trial today, within a second. 00:47:30 At any point in that slow learning, an unpredictable part of memory can crash and burn; it's forgotten catastrophically, because it's a feed-forward 00:47:42 adaptive filter. So it's un- 00:47:48 trustworthy because it's not explainable, and it's unreliable because it experiences catastrophic forgetting. 00:47:57 Now, ART: the reason I say it's the most advanced theory is that not only have all of its foundational hypotheses 00:48:06 been confirmed by subsequent experiments, but it's also provided principled and unifying explanations of hundreds of additional experiments since I introduced it in 1978. I should say it's not as if it was 00:48:25 created with no changes; it's been evolving incrementally. Remember, I said that there were laminar circuit embodiments in the neocortex. 00:48:39 That's called LAMINART. That wasn't part of the original theory, but it still embodies the original hypotheses, in a refined form. It's an evolving process. 00:48:53 Now, why should you believe it? Explanatory success is one criterion, but there's a much more profound reason that, in a way, you have to believe it. 00:49:08 And, 00:49:11 in 1980, I was able to derive ART from a thought experiment 00:49:17 about how any learning system can autonomously correct predictive errors in a changing world. 00:49:28 You may know the thought experiments with which Einstein derived special relativity and general relativity; I don't know if you're studying physics. 00:49:37 But when you can use a thought experiment to derive a theory, 00:49:44 you've hit gold, and that's because the hypotheses of my thought experiment, and Einstein's, are facts that are familiar to everyone, 00:49:57 because they represent ubiquitous environmental pressures on the evolution of our brains.
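The catastrophic forgetting just described can be demonstrated with a deliberately tiny example. This is an editorial illustration, not Grossberg's: a single linear unit trained by gradient descent learns pattern A, then training on an overlapping pattern B overwrites the response to A.

```python
import numpy as np

# One linear unit trained by plain LMS / gradient descent, the
# simplest stand-in for slow supervised deep learning.
w = np.zeros(4)

def train(x, target, steps=300, lr=0.1):
    global w
    for _ in range(steps):
        w += lr * (target - w @ x) * x

a = np.array([1.0, 1.0, 1.0, 0.0])   # pattern A, target +1
b = np.array([1.0, 1.0, 0.0, 1.0])   # pattern B overlaps A, target -1

train(a, +1.0)
resp_a_before = w @ a                 # learned response to A

train(b, -1.0)                        # now train only on B...
resp_a_after = w @ a                  # ...and A's memory is destroyed

print(round(resp_a_before, 2), round(resp_a_after, 2))  # prints: 1.0 -0.11
```

Because the same shared weights serve both patterns, learning B erases A; an ART-style system would instead commit A and B to separate categories, so new learning cannot overwrite old memories.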
00:50:04 So it's not like you have to go running to a psychology or neuroscience book to follow the thought experiment; the hypotheses are trivial. 00:50:15 What gives the thought experiment its power is realizing what will happen if a few ubiquitous constraints are operating together on the evolution of your brain. 00:50:27 So I recommend, either in the book or in my Psychological Review paper, that you read the thought experiment. And critically, nowhere in the thought experiment 00:50:42 are the words mind or brain. 00:50:47 So in this sense, ART's design principles and mechanisms are a universal solution to this learning problem. 00:50:59 I don't take it personally: if you can find a problem in the thought experiment, I have to take it back. If you can't, you're stuck. 00:51:08 And I call the learning problem the stability-plasticity dilemma, 00:51:16 because it asks how any system can learn quickly without experiencing catastrophic 00:51:24 forgetting. The plasticity is the learning; the stability is no catastrophic forgetting. 00:51:31 And so, as we can talk about further if we find time, it's, I think, remarkable that these ART results about learning led me, in many, many steps, decades later, 00:51:46 to try to explain how and why, from an evolutionary perspective, we have consciousness: how we consciously see, hear, feel, and know things about the world, and 00:52:04 why we use conscious states to plan and act to realize valued goals. I didn't know this in 1978; I didn't know this till 2009. You know, that's a long time. 00:52:22 So the conscious states all arise from resonances, which occur when excitatory feedback signals between two or more brain regions approximately match 00:52:36 their signal patterns well enough to cause the active cells to synchronize, boom boom boom boom, and, because of the excitatory feedback, to stay active 00:52:50 long enough to trigger a conscious state and learning in the adaptive weights that gate the signals, bottom-up and top-down, in the adaptive filters. 00:53:02 And it's because of that that I call the theory adaptive resonance theory: because resonance provides the dynamic state that drives learning, or adaptation. 00:53:14 And it's the top-down expectations in ART that solve the stability-plasticity dilemma and enable you to get fast learning without catastrophic forgetting. It isn't a feed-forward adaptive filter; it's a resonating bottom-up, top-down system. 00:53:32 The top-down expectations carry out top-down matching, and then drive memory search, or hypothesis testing, and that's what solves catastrophic forgetting. 00:53:44 So if you want to, you know, talk highfalutin: the proposed solution here of the mind-body problem arises from a computational analysis of how humans and other higher animals autonomously learn and predict 00:54:04 without catastrophic forgetting. 00:54:07 And I want to emphasize, I also used another thought experiment to derive a model of cognitive-emotional processes, of how thinking and feeling interact 00:54:23 to realize valued goals. 00:54:25 And that also uses hypotheses you're all familiar with; you just have to think about them in the right way. 00:54:34 And I use this model to explain lots of facts about how thinking and feeling interact. I call it the CogEM model, which is short for cognitive-emotional-motor, because there's a cognitive-emotional interaction leading to actions that have predictive consequences in the environment. 00:55:00 And so, because the thought experiments that derive both cognitive models like ART, as well as cognitive-emotional models like CogEM, never mention the word mind or brain, 00:55:14 they provide a blueprint for autonomous adaptive intelligent applications in engineering, technology, and AI.
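The core ART mechanisms described above (uniform top-down expectations, a vigilance-gated match, mismatch reset and search, and fast learning that prunes weights toward critical features) can be sketched in miniature. This is an editorial cartoon of ART-1 for binary inputs; the choice rule, the 0.5 parameter, and the flat search order are simplifications, not Grossberg's circuit.

```python
import numpy as np

def art1(inputs, vigilance=0.7, n_categories=8):
    """Minimal ART-1-style category learner for binary vectors."""
    dim = len(inputs[0])
    # Top-down expectations start uniform (all ones), so before any
    # learning occurs they can match *any* input.
    top_down = np.ones((n_categories, dim), dtype=int)
    labels = []
    for x in inputs:
        x = np.asarray(x)
        # Bottom-up choice: rank categories by how well each learned
        # expectation overlaps the input.
        scores = (top_down @ x) / (0.5 + top_down.sum(axis=1))
        for j in np.argsort(-scores):
            match = np.logical_and(top_down[j], x)
            # Vigilance test: does the top-down expectation match the
            # bottom-up input closely enough to resonate?
            if match.sum() / x.sum() >= vigilance:
                # Resonance: one-trial learning prunes the top-down
                # weights toward the critical feature pattern.
                top_down[j] = match
                labels.append(int(j))
                break
            # Otherwise: mismatch reset, search the next category.
        else:
            raise RuntimeError("all categories exhausted")
    return labels, top_down

A1 = [1, 1, 1, 0, 0, 0]
A2 = [1, 1, 0, 0, 0, 0]   # a partial view of the same object
B1 = [0, 0, 0, 1, 1, 1]
B2 = [0, 0, 0, 1, 1, 0]

labels, weights = art1([A1, B1, A2, B2])
```

The two A inputs land in one category and the two B inputs in another, and learning B never disturbs the weights of A's category, which is the point of the matching-and-reset cycle.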
00:55:25 They're not about mind and brain per se; they're about how you would adapt autonomously, in real time, to different kinds of environmental constraints. 00:55:35 So really, I think, if I have to say what Steve Grossberg did in three words or less, I'd say I introduced a 00:55:46 revolutionary paradigm of autonomous adaptive intelligence, and that is going to be more and more what our engineering and technology and AI will be doing in the future: 00:56:04 increasingly autonomous, intelligent, adaptive devices. You know, small beginnings now, in self-driving planes and trains and cars, but wait 50 years. I won't be around but, hopefully, you will. 00:56:25 So, to make this work, 00:56:29 the top-down expectations in adaptive resonance theory are defined 00:56:35 by a circuit that avoids catastrophic 00:56:37 forgetting by being designed in a very specific way. They obey what I call the ART matching rule, and it's realized by a particular anatomical circuit that's now been confirmed anatomically in multiple species, including bats. 00:56:57 So it's realized by 00:56:58 a top-down, 00:57:01 let me say the words, modulatory on-center, off-surround network. 00:57:09 The on-center cells are excited by the top-down inputs, but it only primes, or sensitizes, them: they get just a little boost, because it's modulatory. It's getting you ready; it's getting you ready to expect something that may or may not occur. 00:57:28 The off-surround is a driving off-surround: it's inhibiting cells around the modulatory on-center. 00:57:37 So the modulatory on-center is priming you, 00:57:41 and these primed cells are selectively coding the features to which the network will start to pay attention. 00:57:51 And I call that feature pattern the critical feature pattern,
00:57:57 because the critical feature patterns, when you go into resonance, are going to be the patterns that are learned in the adaptive weights in your bottom-up filters 00:58:08 and in your top-down expectations. The critical features are going to be the ones that drive predictions and action, and that's why they're explainable: you can actually record them, if you have the right microelectrode array. 00:58:28 So the ART matching rule, in summary, enables these top-down expectations to select, pay attention to, and learn the critical feature patterns that control predictive success. 00:58:45 And to do that, you need short-term memory activation, cell activation, which deep learning doesn't have. 00:58:56 So, in summary, the ART matching rule solves the stability-plasticity dilemma because outlier features that fall in the off-surround are inhibited. 00:59:12 Only predictive critical features are attended and learned, so the outlier features cannot cause catastrophic forgetting. Chaytan Inman 00:59:26 Next question. 00:59:28 Um, 00:59:30 yeah, I mean, 00:59:32 so do we have top-down expectations when we're born, or how does that work? Stephen Grossberg 00:59:43 Oh, that's a fine question, great. 00:59:48 So I'd like to recast the 00:59:52 question a little bit. See, I want to keep the order of exposition; I built the whole thing up 00:59:57 so it's self-contained, even though it's all brief. 01:00:02 Um, let's translate it into the question of how learning gets started, you know, 01:00:10 especially if there are no learned top-down expectations to match learned patterns, because you haven't learned anything yet. 01:00:19 Said another way, how does a top-down expectation match a pattern before anything has been learned?
01:00:29 And my prediction, and it's consistent with all our models, is that it's done by the choice of the initial top-down adaptive weights. So you have cells, they're active, they send signals down axons that are 01:00:49 gated by 01:00:52 long-term memory traces, or adaptive weights. These adaptive weights are chosen to be uniformly distributed from each category cell to all of its target feature cells. In other words, because the top-down weights are uniform across the network, they can match any input 01:01:18 before learning begins. And so what learning does is prune these big top-down weights, 01:01:27 so that they gradually match the critical feature patterns that are incrementally discovered in the bottom-up feature patterns on multiple learning trials. So the top-down weights are gradually converging on a stable set of critical features, as are the bottom-up weights too. Chaytan Inman 01:01:54 Yeah. Stephen Grossberg 01:01:55 You can't match any particular thing at first. 01:01:58 So you have top-down 01:01:59 uniform weights, so you can match anything, and that's how you can get started. Because if you mismatched on the first trial, you could never go into resonance; you'd get a reset every time. Chaytan Inman 01:02:14 Yeah, that makes sense. But okay, one other question: I can see how ART can explain, you know, clustering, and 01:02:26 grouping similar things together, 01:02:30 but for problems like regression, does it have a way to do that? Stephen Grossberg 01:02:35 Did you say regression? Unknown Speaker 01:02:37 Yeah. Chaytan Inman 01:02:38 Like predicting a particular number, for example. Stephen Grossberg 01:02:43 In what context? You have to be more specific; that's much too broad. Chaytan Inman 01:02:48 Right, so I mean, in machine... Stephen Grossberg 01:02:50 Learning
01:02:50 to predict anything: you can do that if you have ARTMAP. In other words, it's one thing to talk 01:02:58 about unsupervised ART, which is just category learning, but to do prediction, let's say, for example, you're trying to learn, 01:03:09 by looking at multiple fonts of letters that you see visually, a whole series of categories that will respond selectively 01:03:21 to different letter fonts. Then, before that learning occurs, there are preprocessing stages that will let you learn 01:03:30 those letter shapes no matter what size they are, no matter what position they are in, no matter what orientation they are at. So you learn invariant 01:03:42 categories for recognizing the letters. But at the same time you have another modality where you're learning to understand: 01:03:53 let's say we did that for the letter A in vision, and then in audition you learn how to say and understand it. So you have an auditory 01:04:06 thing going on, and then you'll have an associative map from vision to audition, so that if you see any letter A, in any font, 01:04:18 you can come out and say verbally, "Oh, that's the letter A." It could be that map; it could be "it's the number 10"; it doesn't matter. 01:04:30 So if you look at fuzzy ARTMAP, or distributed ARTMAP, these ARTs can operate in both supervised and unsupervised 01:04:41 modes. You can learn on an arbitrary number of trials, either supervised, unsupervised, or some hybrid. So look up fuzzy ARTMAP, look up distributed ARTMAP. Now it's no longer just category learning; these are prediction systems. Okay? Chaytan Inman 01:05:04 Okay. 01:05:06 So then our next question has to do with some of the things we talked about in our journal club previously, where 01:05:14 there are, you know, about 150,000 cortical columns in the neocortex. So we're curious: compared with some other theories, does ART have an explanation for what each one does, the modular structure there? Stephen Grossberg 01:05:32 The answer is yes,
01:05:36 although I didn't have it in '78; it took decades to do. 01:05:41 So what I call my LAMINART model provides detailed anatomical, neurophysiological, and functional explanations of why the neocortex is laminar, and how 01:05:57 identified cells in different cortical columns operate. And so I call the fact that all neocortex 01:06:07 has laminar circuits, typically six main layers, in perceptual and cognitive 01:06:17 circuits alike, the paradigm of laminar computing. 01:06:23 And 01:06:25 laminar computing began to explain how all higher forms of biological intelligence are generated by variations of a single canonical laminar cortical circuit. In particular, 01:06:44 with multiple colleagues, I've developed laminar models of vision, speech perception, and cognition that all use variations of the same 01:07:00 canonical laminar cortical circuit. So that's an existence proof, not the end of the story; there's a century of additional work for young people like you. 01:07:11 But that existence proof shows how things as different as vision, speech, and cognition, tons of seemingly very different data, can all be explained as emerging from specializations of the same canonical circuit. 01:07:33 And now, 01:07:36 if you want to talk about laminar computing on a very high level, and ask what it does, not at the specific level of explaining a lot of data: 01:07:48 first, 01:07:51 well, it realizes three high-level computational goals. The first is the developmental and learning process whereby cortex shapes its circuits 01:08:04 to match environmental constraints and dynamically maintain them, that is, to solve the stability-plasticity dilemma. So, as I said, ART morphed into LAMINART to explain the stability-plasticity dilemma. 01:08:22 It also carries out the binding process whereby cortex groups distributed data into a coherent object representation.
01:08:32 And there are very different appearances of grouping in visual perception, then in speech perception, then in cognitive working memory, but they all use, if I'm correct, and the data support it so far, variations of the same canonical circuit. 01:08:54 And finally, the attentional process whereby cortex selectively processes important events. So: first, self-stabilizing learning; then distributed binding; and then attention. And what I've shown is, 01:09:13 if you understand how to solve the stability-plasticity dilemma 01:09:19 in a laminar setting, 01:09:21 that is to say the first property, you get the grouping and attention for free; they fall out of the wash. It's really one problem; it's the learning problem. 01:09:36 And 01:09:38 I also prepared some notes, because, you know, laminar computing is one paradigm I introduced. I also introduced what I call complementary computing, and maybe, if you'll bear with me, 01:09:55 I'll say a little about that, because it's through an understanding of complementary computing that I was led to be able to explain some data about consciousness. Should I do that, or should I skip it? Whatever. Chaytan Inman 01:10:12 I think we're a little bit short on time. 01:10:16 So a little bit... Stephen Grossberg 01:10:20 We're what right now? Chaytan Inman 01:10:21 We're short on time right now. Stephen Grossberg 01:10:23 We're on time? Chaytan Inman 01:10:29 Short on time. 01:10:31 Okay, all right. Stephen Grossberg 01:10:32 I'll skip it, fine with me. If it comes up and there's time, okay. 01:10:39 Next question, then. Chaytan Inman 01:10:41 Sure. 01:10:45 So, 01:10:48 is there anything that ART has not explained to your satisfaction about the brain? Stephen Grossberg 01:10:56 Well, the first thing I should say is ART 01:10:58 is just a small part of my work. So,
01:11:04 I mean, I can say a little more in a few minutes about the broader perspective of my work, as reflected in my book. But if you ask 01:11:16 what I haven't explained to my satisfaction about how the brain works, which is how the question was phrased, let me make the following remarks. First, as I already commented, no scientific theory is ever complete. 01:11:33 Even superstring theory is not complete. 01:11:37 So, I'd like to have a better understanding of language in all its complexity. 01:11:44 I have created a foundation for doing it in my work, which clarifies basic mechanisms of audition, speech learning and perception, and cognition. 01:11:57 But to really understand language in all its complexity, let alone poetry, will take a lot of people working for a long time. 01:12:08 But I have also been recently working on how language meanings are learned in infants and children, and so language isn't just about explaining data independent of meaning, okay? 01:12:28 Secondly, I'd like to understand music better, and social cognition better. 01:12:36 I just published the first article about music in Frontiers in Systems Neuroscience; it just sort of touches the surface. 01:12:46 But it's called "Toward understanding the brain dynamics of music: learning and conscious performance of lyrics and melodies with variable rhythms and beats," and what's interesting about it, in part, is that I was able to use the stuff I summarized in my book to make that paper. 01:13:08 And 01:13:10 twelve years ago, in 2010, with my postdoc Tony Vladusich, I published the first article in the journal Neural Networks about social cognition, 01:13:22 including an explanation of how a student can share joint attention with a teacher while learning a new skill from her. In other words, if you watch someone doing something, 01:13:36 you can try to imitate it even though you're seeing it from a different perspective. That requires what's called joint attention.
01:13:46 And it's a problem in autism: a lot of autistic individuals can't do joint attention. And that paper is called "How do children learn to follow gaze, share joint attention, imitate their teachers, and use tools during social interactions?" I've done a lot of work on how and why we can use tools. 01:14:13 And both of these contributions are just beginnings, so there are many things I'd like to understand better; language, music, and social cognition are just drops in the bucket. 01:14:25 But in each case, the foundation of my book helped me. 01:14:31 So, do you want to go to the next question? Chaytan Inman 01:14:34 Yes, okay. I'm Andre. 01:14:38 Andre. Stephen Grossberg 01:14:39 Andre, that's... 01:14:43 okay, go ahead. Chaytan Inman 01:14:45 Oh, I'm just asking the next few questions. 01:14:50 The next question is: can we reach general artificial intelligence by simulating the brain, or by simulating the evolution of the brain? Stephen Grossberg 01:15:02 So please keep 01:15:03 in mind that the 01:15:04 thought experiments from which my cognitive and cognitive-emotional models were derived don't mention mind or brain; they're consequences of several familiar types of environmental constraints that jointly shaped the evolution of our brains. So the models are universal 01:15:28 solutions to how any system can autonomously correct predictive errors in a changing world. 01:15:46 So if you want to emulate human intelligence, which we often do, if only to be able to interact with human operators, then these results show the answer is yes, because the solution is universal. Chaytan Inman 01:16:10 And then, next question: generally, what do you think is the next milestone task, or milestone tasks, for AI development? Stephen Grossberg 01:16:22 So, whenever I'm asked a question like that,
01:16:24 I shy away: I couldn't predict the present, so I never try to predict the future. That's just me, for what it's worth. 01:16:33 But that being said, my work tries to explain how biological intelligence works, but AI doesn't have to restrict itself in that way. 01:16:46 You know, if AI 01:16:49 has a completely different set of goals and doesn't care if it's like biology, that's fine. Part of the problem is, 01:17:00 if you want to be intelligent, how are you going to define intelligence, if not in terms of correcting predictive errors? And then you have a universal solution. 01:17:10 So, 01:17:14 anyway, I can't say more than that. 01:17:17 But 01:17:19 I hope that helps. You have to really think about the universality of the solutions, and to do that you have to read the thought experiments. And if you can't get out of them, you're stuck, and I haven't ever seen a way out of them, so I'm stuck. 01:17:39 But the good thing about being stuck is: I've been a theorist for 65 years, and because I've gotten to this level of foundational principle that the thought experiments, among other methods, helped me to find, I never hit a brick wall. 01:17:59 I have lived through multiple fads, each of which was the hottest thing since, I don't know, chocolate cake or Marilyn Monroe, whatever your preference, for two or three years, and then they hit a brick wall and they're gone and forgotten. 01:18:18 Really, 20 of them or more over the decades. But I never hit a brick wall. So another way of saying that is: if you want to have a productive research life, which you don't necessarily (there are lots of other ways to enjoy your life), 01:18:37 worry about the foundation, so you don't hit a brick wall. And if that requires that you have to study Grossberg, because he worked for 65 years already,
01:18:48 then do it. Don't just try to rediscover the wheel, because someone will come up to you and say, "But Grossberg did that already; you just wasted your time." But if you can use people like me as a launching pad, then you can go on without hitting a brick wall. Chaytan Inman 01:19:10 So then, I guess, 01:19:12 on that question, how do you see your work being used in the future? 01:19:20 To expand on that, how would you see your work being used in the future? Stephen Grossberg 01:19:27 Now, that's just... 01:19:29 not fair, right? Chaytan Inman 01:19:31 No, yeah, we're just curious about it. Stephen Grossberg 01:19:34 I already answered that question: I can't predict the present. 01:19:39 So what I feel is the following. Chaytan Inman 01:19:43 Let's talk scientific Unknown Speaker 01:19:46 politics. Stephen Grossberg 01:19:49 Um, 01:19:51 you know, we pride ourselves on living in a world where there's almost instantaneous communication, 01:19:59 but the signal-to-noise ratio is exceedingly low. 01:20:05 It's a lot of noise, 01:20:08 where the marketers, the shameless marketers, are exceptionally loud. 01:20:15 So, for example, both backprop and deep learning have sold their wares in a way that I consider intellectually dishonest. 01:20:26 Does Geoff Hinton tell you, "my work is unexplainable and unreliable," you know, "it's untrustworthy and unreliable"? No. 01:20:40 He says this is the way the brain works. 01:20:44 You know, it's just 01:20:48 unfortunate. 01:20:51 So between the ignorant or dishonest marketing, and just all the other things that can attract your attention, 01:21:02 work like mine, even though it's been building incrementally, with thousands of people having applied it and hundreds having developed it,
01:21:14 gets a little lost, because there isn't a buzzword. You have to actually study for a while to learn something, and then you can get a huge reward. 01:21:27 I mean, I don't know if you've read the reviews of my book. Well, the pre-publication reviews were from something like 22 of the most famous people 01:21:37 in psychology, neuroscience, and technology, and they wrote things that made me blush. But of the hundred-odd post-publication reviews, two of them said I'm the Einstein of the mind. 01:21:55 I didn't write that; they did. Now let's stay there, right? 01:22:02 You might say, well, if he's an Einstein, 01:22:06 and the mind is, you know, the next biggest revolution in science, which many people feel, 01:22:13 then why don't we all know about it? 01:22:16 And that has to do, I think, not only with the fact that we live in a world with a very low signal-to-noise ratio, but also with the nature of the mind and the nature of Einstein's work. 01:22:29 You know, Einstein makes a famous prediction about the perihelion of Mercury. Everyone, since you were a child, looks up at the heavens. The heavens obsessed all the ancient civilizations: you'd look at them to know your omens, you'd look at them to navigate, 01:22:52 the different moons predict, will it be romantic tonight. Everyone sees the heavens. So when the perihelion of Mercury was predicted, we all knew what that meant. 01:23:06 But the whole point about the mind is the property of cognitive impenetrability. 01:23:14 For many, many years, people didn't even know that the mind was in the brain. Some people thought it was in the pancreas; some people thought it was in the heart. 01:23:29 You know, there's been a lot of discussion, because of its cognitive impenetrability. And now you might say, well, why do we have cognitive impenetrability?
01:23:41 Well, thank God that we do, because, since we don't have to worry about cells and axons and synapses and transmitters and resonances, all we have to worry about are thoughts and feelings and actions. 01:24:00 We can live in a macroscopic world, like 01:24:04 our interaction now. 01:24:07 So those are some of the responses 01:24:13 to these questions here, or, 01:24:19 I think, in a way, anticipating a number of them. Chaytan Inman 01:24:24 Yeah, I think we've got to the tenth one; I think we talked about that kind of stuff. 01:24:31 So if you don't mind, 01:24:35 we're going to skip them. Do you mind if we take some questions from the audience? Stephen Grossberg 01:24:41 No, but let me just make a remark 01:24:48 about one of the questions you didn't ask, which I think might preempt some answers. 01:24:54 There was a question: do you think that some form of ART is necessary and sufficient for intelligence today? 01:25:02 And I want to clarify: 01:25:05 although ART and its variants are universal, they're a very small part of our brains. Classifiers, even ones learning invariant object categories, occur in just a couple of parts of our brain, in inferotemporal cortex, the posterior and anterior inferotemporal cortex. And if you look in my book, 01:25:31 I talk about the predictive ART, or pART, model, 01:25:37 which shows how this 01:25:41 classification part is embedded in a much larger 01:25:49 brain architecture for achieving biological intelligence. Just for starters: working memory circuits in prefrontal cortex, which I didn't discuss at all. 01:26:02 They temporarily store sequences of recently experienced events, 01:26:08 like storing a sentence or something, and they use the sequences to learn and choose plans to achieve currently valued goals. There are linguistic working memories 01:26:23 for speech and language, there are spatial working memories for navigation, there are motor working memories for skilled movements like dancing and tool use.
01:26:37 So what we talked about with the classifier today, and even the ARTMAP supervised mapping system, is a very small part of what makes intelligence. 01:26:54 So, to be precise about it, look it up in the book. But if you don't have the book, although I think it's good 01:27:03 for one-stop shopping, 01:27:07 I published an article in 2018 about it. 01:27:13 It has a funny name: "Desirability, availability, credit assignment, category learning, and attention," colon, "cognitive-emotional and working memory dynamics 01:27:27 of orbitofrontal, ventrolateral, and dorsolateral prefrontal cortex." So there are many parts of the prefrontal cortex, at least seven. 01:27:37 They all do different things; they all interact to control higher-order cognitive behavior as it interacts with reinforcement learning and motivational circuits, and visual and auditory circuits, and categorization circuits, and that's what the predictive ART model brings together. So, 01:28:02 okay, now. 01:28:04 I guess you had a couple of questions 01:28:11 from students; I don't know if you want to ask those. Unknown Speaker 01:28:16 Questions, whatever. Chaytan Inman 01:28:21 Okay, so, um, my question concerns the idea of top-down expectations that you mentioned earlier. So it's my understanding there are top-down expectations that are learned, and that are compared to bottom-up inputs from the real world. 01:28:37 So my understanding is that top-down expectations are predictions in some way: they predict the input that you're going to receive from the world. But how does the mind know which top-down expectation to activate before receiving the bottom-up input? Stephen Grossberg 01:28:59 Well, that's a very good question. That's why I'm glad that I mentioned prefrontal cortex, because prefrontal cortex, by
01:29:10 storing sequences of events that just occurred, can learn sequence chunks, or what I call list chunks, that are the categories that are going to predict the most likely outcomes in that environment, if it's familiar to you. Unknown Speaker 01:29:29 Okay. Stephen Grossberg 01:29:32 So you can't get that just from a single classifier, but you can get it as a prediction by a list chunk. And there are two kinds of predictions. 01:29:43 One kind is bottom-up, top-down: whether it's on a feature pattern or a working memory pattern, you'll have a resonance, bottom-up and top-down. The second is an associative prediction: given this sequence of events, I'm going to predict, I'm going to prime, 01:30:06 what event I expect to happen next. 01:30:11 And then, if I want, I can volitionally activate that prime and fantasize it, so I can have internal thought and planning without acting. 01:30:27 Does that help? 01:30:30 You can't use a single category to predict the future. Chaytan Inman 01:30:36 So the prefrontal cortex stores these list chunks that you mentioned, and those are integral to 01:30:44 picking the right top-down expectation, because without them you would have no way of 01:30:50 priming the brain for the bottom-up input. Stephen Grossberg 01:30:55 Well, first, 01:30:57 list chunks are learned. 01:30:59 They're categories, just like ART categories are learned. So you have sequences coming into working memory that are temporarily stored, 01:31:11 and you have to know how to design a working memory. For example, 01:31:16 let's say I've learned the following words before: MY, SELF, and ELF, 01:31:26 three different words, MY, SELF, and ELF, and I've learned to categorize them, so when I hear those words I can go into a whole song and dance with associations to their meaning. Now, for the first time, I'm going to store in working memory the word MYSELF,
01:31:47 and I want to learn what MYSELF means; it means something different from MY ELF. And so 01:31:55 the question is: how do you store MYSELF in working memory without forcing catastrophic forgetting, while you learn categories for MYSELF? 01:32:06 And that's a constraint on how working memories are designed. 01:32:14 Okay, so first you have to know how to design working memories. Then you can have events coming in that are stored in working memory; you have to know how to do that. 01:32:24 And then, as you experience them until they become familiar, they can learn list chunks. And there you have to deal with the question: well, if MYSELF is now familiar, 01:32:42 with familiar subchunks MY, SELF, and ELF, as well as the phonemes for the individual letters, how do you ensure that the category for the whole word MYSELF wins 01:32:58 the competition? You have to know how to design it, and my work suggests a solution to that problem. So for everything we say here, 01:33:08 you can easily state what the problem is, and the solutions have very easy answers, if you think about them in the right way. 01:33:18 For example, to understand how working memory is designed, I impose what I call the long-term memory invariance principle. It forces you to very simple explanations; these working memories explain a ton of data, and they show why linguistic, 01:33:39 spatial, and motor working memories have a similar circuit design. 01:33:47 And that's why, for example, a person can easily sign with their hands 01:33:56 the linguistic words that they're trying to communicate: these working memories can interact with each other seamlessly, because they have the same design, and they're all lying next to each other in ventrolateral and dorsolateral prefrontal cortex. Chaytan Inman 01:34:21 Okay, sweet. Stephen Grossberg 01:34:23 The two students, do you want to ask something else? I'm happy to do whatever you want.
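[Editor's sketch] The competition Grossberg describes, in which the list chunk for the whole word MYSELF must beat its familiar subchunks MY, SELF, and ELF, can be caricatured in a few lines of Python. This is a toy under stated assumptions, not his masking-field equations: the chunk inventory and the simple length-based weighting (bigger chunks get bigger weight, echoing the "self-similar growth" idea) are illustrative.

```python
# Toy caricature of a masking-field-style competition: list chunks that code
# longer, fully supported item sequences inhibit ("mask") the chunks for their
# subsequences. Chunk names and the length-based weighting are assumptions.

def contains(seq, sub):
    """True if sub occurs as a contiguous run inside seq."""
    return any(seq[i:i + len(sub)] == sub for i in range(len(seq) - len(sub) + 1))

def winning_chunk(working_memory, chunks):
    """Return the name of the chunk that wins the competition, or None.
    Each fully matched chunk gets an activation proportional to its length,
    so MYSELF can mask MY, SELF, and ELF once the whole word is stored."""
    scores = {name: len(items)
              for name, items in chunks.items()
              if contains(working_memory, items)}
    return max(scores, key=scores.get) if scores else None

# Hypothetical chunk inventory for the MY / SELF / ELF / MYSELF example.
CHUNKS = {
    "MY": ("m", "y"),
    "SELF": ("s", "e", "l", "f"),
    "ELF": ("e", "l", "f"),
    "MYSELF": ("m", "y", "s", "e", "l", "f"),
}
```

With the full word stored, `winning_chunk(("m", "y", "s", "e", "l", "f"), CHUNKS)` selects "MYSELF" even though MY, SELF, and ELF are all present as subsequences; with only ("e", "l", "f") stored, ELF wins.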
Chaytan Inman 01:34:28 I think we have another question from the audience; why don't we get that. Okay. Chris’ whiteboard 01:34:37 Yeah, I can unmute myself. Thank you for coming, Professor Grossberg. I just want to start by asking a few clarifying questions to make sure we're on the same page. My name is Chris; I'm in computer science, going to a PhD program at UChicago, so closer to you than Seattle, right? 01:34:53 The first one is, I just want to understand: it seems like 01:34:56 ART, which we're focusing on for this, but maybe also the majority of your work, is about understanding the brain and creating theories for it, 01:35:03 and then there's the underlying principle that, if you can effectively model the brain, you can effectively model 01:35:09 learning systems. And so that's why you're making the comparison of ART not only as a neuroscientific theory, but also as a comparison to deep learning. Is that accurate? Stephen Grossberg 01:35:21 I start by trying to understand 01:35:25 hundreds of psychological experiments about many different aspects of our behavior. I'm mostly interested in things 01:35:37 like learning, 01:35:38 because I believe that the developmental and learning processes have a rate-limiting role in 01:35:49 our biological intelligence. So that, for example, you can learn a category 01:35:57 that will recognize a letter using the same dynamics as a category that will recognize a word. 01:36:08 So there's a great uniformity in the circuits. 01:36:17 Now, could you repeat your question in the light of that? Chris’ whiteboard 01:36:21 I think my key question is about deep learning. Stephen Grossberg 01:36:24 Deep learning? I couldn't care less about deep learning. 01:36:29 Deep learning is a fad.
01:36:32 And backprop got very popular. Well, first, backprop was popularized by Rumelhart, Hinton, and Williams in 1986; to hear them talk, you'd never realize they didn't discover it. 01:36:50 It was discovered by Shun-ichi Amari, David Parker, and Paul Werbos; Werbos did his PhD thesis on it in the 1970s. They just popularized it. 01:37:05 And they developed some applications, so it became popular for a while, until people realized its problems: very, very slow learning, catastrophic forgetting, 01:37:20 no internal representation, because it's just basically a feedforward adaptive filter, no intelligence. So it stopped being popular, faded, and lots of other models became more 01:37:41 popular. But then, in the interim, two things happened. One is, due to the World Wide Web, 01:37:51 there were huge databases that you could Google, 01:37:57 like millions of pictures of cats; and the speed of computers and computer networks became blindingly fast, so that now I carry a supercomputer in my pocket. 01:38:16 When you put these things together, you get a blistering speed of computation: huge databases, very fast computation. So suddenly you could train a backprop network, or 01:38:36 backprop on steroids, deep learning, on 100 million pictures of cats, and you put in a picture of a cat and out would come "cat," and this was supposed to be some exciting thing. Well, in the field of neural networks, which I founded, that's a joke. 01:39:01 Okay, so now do you want to repeat your question? Chris’ whiteboard 01:39:07 Well, I think what I'm interested in is, you say that you couldn't care less about deep learning, right, and you say it's a fad. As a computer scientist, that really... Stephen Grossberg 01:39:17 I don't care about it because it is a very weak model that teaches me nothing that I want to know. That doesn't... Chris’ whiteboard 01:39:27 mean, as a neuroscientist... Stephen Grossberg 01:39:29 No, no.
Chris’ whiteboard 01:39:31 As a, as a... Stephen Grossberg 01:39:33 As the leading technical model of biological intelligence? I don't think it's bad if people take a deep learning package off the shelf 01:39:47 and it helps them. You know, if you have stationary data and a lot of it, and you do offline learning, you can get it to learn some predictions, and then you can put it, hardwired, into your iPhone or whatever. It can be useful; use it. 01:40:09 That's fine: anything that's useful, use it. But don't delude yourself about what it is. You have to know 01:40:20 the strengths and weaknesses of the models you use, including mine. So what I find objectionable is the sheer intellectual dishonesty of how some people describe 01:40:35 the significance of deep learning. They never compare it with other models in comparative benchmarks. When Gail Carpenter and I introduced 01:40:49 our various ART algorithms for applications, we benchmarked them against every available popular model, including back propagation, which we blew out of the water. 01:41:04 Now, I'm not going to spend the rest of my life benchmarking the models I developed, because they're used by thousands of people who know their provable 01:41:17 properties. For example, in 1987 Gail Carpenter and I proved a complete set of theorems 01:41:24 about how ART 1 learns, and they're reviewed briefly 01:41:30 in my book, including that it doesn't experience catastrophic forgetting, and that there are situations where you can learn a whole database 01:41:40 on a single learning trial, in a self-stabilizing way. So because of these things, deep learning and backprop teach me nothing. And I know Paul Werbos; 01:41:54 I knew Frank Rosenblatt; I liked Frank, and I think he was an authentic scientist. It's just, I don't need them to understand what I want to know. I know more than that. That's why I don't care.
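[Editor's sketch] The ART 1 properties Grossberg cites, fast learning on a single trial, vigilance-controlled search, and self-stabilizing memories, can be illustrated with a minimal toy. This is a caricature, not Carpenter & Grossberg's full 1987 model: inputs are sets of active binary features, a category's prototype is the intersection of the inputs it has coded, and the vigilance parameter sets how complete a match must be before resonance (learning) rather than reset.

```python
# Minimal sketch of ART 1-style fast, self-stabilizing category learning.
# Assumptions: set-intersection prototypes and a simplified match/choice rule.

def art1_learn(patterns, vigilance=0.6, passes=2):
    """Cluster binary feature sets; return the learned list of prototypes."""
    prototypes = []
    for _ in range(passes):
        for pattern in patterns:
            p = set(pattern)
            best = None
            # Search categories in order of bottom-up match strength.
            for proto in sorted(prototypes, key=lambda pr: len(pr & p), reverse=True):
                if len(proto & p) / len(p) >= vigilance:   # top-down match (vigilance) test
                    best = proto
                    break                                  # resonance: stop searching
            if best is None:
                prototypes.append(set(p))                  # mismatch everywhere: recruit new category
            else:
                best &= p                                  # fast learning on one trial
    return prototypes
```

Because learning only refines a prototype toward the intersection of the inputs it codes, re-presenting the same data leaves the prototypes unchanged, which is the sense in which learning self-stabilizes rather than forgetting catastrophically.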
Chris’ whiteboard 01:42:14 So let me make sure I understand that, then. What you're saying is that, for you, deep learning has a low signal-to-noise ratio, right? There's a lot of dishonesty about its application, and you're not opposed to the... Stephen Grossberg 01:42:27 Methods. Chris’ whiteboard 01:42:28 But, can I, I'm sorry, can I finish? Because I feel like there's a disconnect somewhere. So, to clarify: 01:42:35 it sounds like what you're saying is that, again, you're not opposed to these methods; they don't teach you anything, but if there's value in them, for example for converting speech to text, or for recognizing a cat image, you're fine with that, right? Stephen Grossberg 01:42:49 Why shouldn't I be? I'm a technologist. Chris’ whiteboard 01:42:52 Okay. Stephen Grossberg 01:42:53 I do insist that 01:42:57 people who use it should understand its range of applicability, and I've heard a number of people, I don't look for them, saying, you know, "Oh my God, it crashed." Now, certainly, if you do it offline, 01:43:15 and you're careful, and if it crashes you do it again and again and again until you get to criterion, and then you freeze the weights: okay, good. 01:43:28 One issue, though, is scalability. 01:43:32 So... Chris’ whiteboard 01:43:34 Catastrophic... Stephen Grossberg 01:43:35 Forgetting is much, much more likely if you have inputs that are dense in the vector space of inputs, and one way to avoid that is to sparsify the network, which means you have millions and millions of nodes, or hundreds and hundreds of layers, which is what they've done. 01:44:01 That's very inefficient; you don't have to do that in algorithms that self-stabilize their learning. 01:44:11 If you look at our brain, we don't have 100... like, I think I have a figure
01:44:17 of a deep learning net with 100 layers. We don't have 100 layers in our whole brain, 01:44:24 let alone for classification and prediction. 01:44:29 I mean, you know, you should give that some thought. The reason I'm concerned is just that I want people who use it 01:44:38 to know it's untrustworthy and unreliable in a very specific scientific sense. The fact that it's not always presented with those facts in mind is, I think, unfortunate for the user community. 01:44:58 You might want to call it intellectually dishonest; for some people that's true, for other people it may just be that they don't really know. Chris’ whiteboard 01:45:08 I think that's a great point, because I agree with you that there are a lot of challenges with explainability, right, and with convergence. I think what I'm interested in, as someone who doesn't have the background in neuroscience that you do, is I want to understand what 01:45:20 place ART has in the field and in your work, right? Because what I'm confused about is, you analogize a lot of your work to other great scientists, which is really respectable, but what 01:45:31 I'm confused about is just: 01:45:33 what other competing theories exist right now, and, again, exactly the comparative analysis you're talking about, comparisons between these theories, and understanding where yours may hold weight over others, and vice versa, right? Stephen Grossberg 01:45:47 Well, part of what you say has nothing to do with what I think. Unknown Speaker 01:45:53 (inaudible) Stephen Grossberg 01:45:56 Don't forget, I'm a mathematician and an engineer, okay? I got my PhD in mathematics. Don't keep focusing on neuroscience, please; I think like a mathematician and an engineer. 01:46:15 And to me,
01:46:17 the unexplainability and catastrophic forgetting are worrisome. If I knew that about an algorithm, I would ask: is there an algorithm that doesn't have those problems? 01:46:30 The answer is, there is at least one, and there are probably more. I'm not going to try to give a tutorial on the space of neural models; 01:46:45 for example, support vector machines... Unknown Speaker 01:46:48 (inaudible) Stephen Grossberg 01:46:52 So no, I'm not talking like a... Unknown Speaker 01:46:56 (inaudible) Chris’ whiteboard 01:47:02 I think I'm getting word from Jana that it's about at our time, right? Stephen Grossberg 01:47:08 I think that's correct. 01:47:10 If you want to ask me more, I can hang out for a while. Chris’ whiteboard 01:47:15 It's up to the students, I could tell you. Stephen Grossberg 01:47:22 I mean, the main thing I already said: 01:47:25 if you want to 01:47:28 spend a lot of your life studying something, you should love it. 01:47:33 Find a problem you really are dying to know the answer to. If you're not dying to know the answer, you're not going to put in the work. Pretty simple. 01:47:47 Anyway, 01:47:49 it's getting late, so do you want to ask more, or should we call it a night? Chris’ whiteboard 01:47:56 I personally might stick around, but, 01:47:58 Chaytan, why don't you go ahead and say whether or not the group is obligated to be stuck with me. Stephen Grossberg 01:48:02 Well, everyone could leave if they want. If people want to stay, stay, and if you want to leave, leave; I won't be insulted. You've already heard me flap my lips for an hour, and I 01:48:15 know you've got to go. I'm staying up late; I usually sleep at 10 o'clock. 01:48:21 So I stayed up late, I want to talk some more, and I'm glad you don't all 01:48:26 go to bed.
Chaytan Inman 01:48:29 Yeah, thank you so much for talking about this. I think we're just interested in your advice for students. Stephen Grossberg 01:48:38 What did you say? I didn't understand, Chaytan. Chaytan Inman 01:48:43 Thank you for staying up late; I also go to bed around 10. Stephen Grossberg 01:48:50 Well, but for you it's not ten, it's seven. Chaytan Inman 01:48:56 What is your advice for us? Stephen Grossberg 01:49:00 Young people? Well, I just gave it. Chaytan Inman 01:49:05 It was to fall in love with the data? I would argue, didn't you say to fall in love with the problem, and then go find the data? Stephen Grossberg 01:49:12 Well, that's what I was trying to say: 01:49:15 find something that you are dying 01:49:19 to know. Chaytan Inman 01:49:21 Before you've even defined the problem, yeah. Stephen Grossberg 01:49:24 You know, so you've got to... Chaytan Inman 01:49:28 Sort of... Stephen Grossberg 01:49:29 ...try to figure out what you are willing to work your tail off for. 01:49:35 And, you know, you had a question, what would I do differently if I were a kid today, and my answer is, I haven't a clue; I haven't been a kid for decades. But 01:49:49 I also have specific hopes about how I want you to develop, and in the last few paragraphs of my book I talk about that, in brief. I hope that... 01:50:05 You know, so much of the success of our society is based on 01:50:12 being able to make things in the external world: our buildings and our automobiles and our railroads. We spent a huge amount of effort 01:50:24 developing technologies for the external world, with huge blessings to humanity, but there has been much less study of our internal world: you know, what makes us tick, how we think, how we feel,
01:50:45 what makes us happy and sad. And at the end of my book I just said, I hope the book helps people to rebalance that a little bit, by having, you know, a pretty thick summary of some of the things we now know about how our minds work, 01:51:06 to give our internal worlds a little more clarity, 01:51:11 within the spirit that there's a huge amount to do. So, many of you people: what are your majors? Engineering, math, psychology, neuroscience? What are you? 01:51:27 You're a neuroscientist, right? Chaytan Inman 01:51:32 Computer science. Stephen Grossberg 01:51:34 Oh, nice, a computer scientist. Chaytan Inman 01:51:38 Maybe you misheard. Stephen Grossberg 01:51:39 Maybe I did; I'm a little sleepy, yeah. Chaytan Inman 01:51:50 Yeah. Stephen Grossberg 01:51:54 Any other comments? Yeah. Chaytan Inman 01:51:57 Andres is in computer science, Yegor is in computer science, Divya is in neuroscience, 01:52:07 almost, maybe will be soon. 01:52:10 CS for two more, CS for Alec, neuroscience 01:52:16 for Jana, 01:52:20 applied math for one more, 01:52:23 math for Eric, 01:52:28 intended neuroscience. Stephen Grossberg 01:52:30 Pretty interdisciplinary group. Well, you know, when you study in neuroscience, usually, unless Chaytan Inman 01:52:39 you're in a computational neuroscience Stephen Grossberg 01:52:41 group, you've got to get in a lab. And there are two quite different 01:52:47 things to think about in getting in a lab. One is: will a particular lab teach you 01:52:55 the hottest experimental methods available, so that you could 01:53:02 later get in a lab doing what you really want to do? You could get in the lab of a very young person who doesn't really know much more than the technique, but it's also good to look for a more senior person who has a clear
01:53:22 research program, where your skills might induce them to hire you Unknown Speaker 01:53:30 to do stuff you don't know anything about. Stephen Grossberg 01:53:33 And I've trained a lot of students in our department, which is primarily a modeling department, although we also 01:53:42 did the kind of experiments that might take days or weeks or months, but not years; we didn't do any 01:53:53 microelectrode neuroscience. And our students were in very great demand, because 01:54:01 they had a conceptual training that enabled them to rapidly learn large databases, and then learning the actual techniques to use in the lab was relatively easy. 01:54:19 So you should ask yourself: how do you sell yourself effectively? 01:54:26 So maybe we should call it a day, or do you have another question? Chris’ whiteboard 01:54:34 I think I'm just confused as to... 01:54:38 Well, first off, thank you, thank you for talking. I'm confused... 01:54:41 Let me also say that I really identify with having a mathematician's background; my background is in computer science research, specifically quantum computing, so lots of math there. 01:54:52 What I'm interested in, and Chaytan, I hope you find this conversation illuminating, what I'm really interested in is why you don't want to be labeled as a neuroscientist, right, Professor Grossberg? Because I really... 01:55:04 Sorry, what? Stephen Grossberg 01:55:06 I didn't say that. Chris’ whiteboard 01:55:08 Oh, I'm sorry, I must have misheard you, then. Do you mind elaborating more? Stephen Grossberg 01:55:15 My title is Wang Professor of Cognitive and Neural Systems. 01:55:23 I'm a professor of mathematics and statistics, psychological and brain sciences, and biomedical engineering.
01:55:34 You kept focusing on me as a neuroscientist. I'm an interdisciplinary scientist; that's why I mentioned that I'm a mathematician and an engineer, to try to point out I'm not just a neuroscientist. 01:55:51 I'm an interdisciplinary scientist, and I bring together all the skills to do what I do. Chris’ whiteboard 01:56:00 Okay. Do you think that sometimes other people in the field misunderstand you, because you come from a more theoretical, like, literally math-theoretical background? Stephen Grossberg 01:56:13 Well, I don't know what people think who I never met, okay? I do know that I win prizes. 01:56:24 In 2015 I won the lifetime achievement award of the Society of Experimental Psychologists; that is the most distinguished group 01:56:35 of experimental psychologists in the world. In 2017 I won the IEEE Frank Rosenblatt Award; IEEE is the biggest engineering society in the world. 01:57:00 In 2019 I won 01:57:06 the award of the International Neural Network Society, which is the biggest society of neural modelers in the world, including psychologists, neuroscientists, mathematicians, computer scientists, and engineers, for my work on learning. 01:57:26 And in 2022 my book won the 2022 PROSE Book Award in Neuroscience of the Association of American Publishers. 01:57:41 So somebody out there likes me, and if you read the 122 reviews of my book, you'll see that among the people who like me are some of the most gifted and famous people in psychology, neuroscience, and technology. So I don't know what you mean; I don't know what you mean. 01:58:08 Sure, there are people who don't know what I do. But, you know, you say I compare myself to Einstein; I didn't, I was trying to make a point. 01:58:19 But even in the case of Einstein, the myth of Einstein is that there are only two people in the world who understand Einstein. That was a lie.
01:58:31 You know, I don't know what people say about me who I haven't met. What do they say? I haven't a clue. 01:58:40 What I say to you is the following: 01:58:44 if someone makes a remark about anyone's work, 01:58:48 they say, "Oh, that work is crap," 01:58:51 I always say, "Well, which article are you talking about? 01:58:59 Which result was crap?" And then get quiet and wait for the answer. If there is no answer, you've got your answer; if they do have a genuine answer, then you've learned something. Unknown Speaker 01:59:16 Okay. Chris’ whiteboard 01:59:19 Thank you, I appreciate that. Chaytan Inman 01:59:21 I think the answer is that you're pretty well recognized. 01:59:28 Okay, so we have one more question. 01:59:32 Hello, Professor Grossberg. I know the answer to this question is... Stephen Grossberg 01:59:36 You're on mute, so you're not coming through. Chaytan Inman 01:59:38 Okay. Um, I know you've been around for a long time, and a lot has happened in science, like with some other scientists. I was wondering, how do you distinguish between what's really essential and what's just a fad in science? Stephen Grossberg 01:59:51 What is what? Chaytan Inman 01:59:52 What's just a fad in science. Stephen Grossberg 01:59:55 Well, that's hard. 02:00:00 First, the more you know, the easier it is, because you have a broader context of knowledge to evaluate things. So, for example, in my life, 02:00:13 you know, 02:00:16 I trust the people whose work I usually hear about; their technical ability is unquestionable. 02:00:27 And then, of course, if they use proper statistical methodology, that's always published up front. 02:00:36 So for me, lots of experiments of one or another kind that I read about, I would say, "Oh yeah, that's just that," or "that's just that": you know, it's familiar.
02:00:50 It's a variation on a theme, and some experimentalists make their whole career doing variations on one theme, and there's nothing wrong with that. 02:01:01 You probe it a little more deeply and you elaborate it, but it's not like I said, "Oh wow, what the hell does that mean?" 02:01:12 Every once in a while there's an "oh wow, what the hell does that mean" experience for me, and then I dive in with both feet and get really excited and try to figure it out. With a lot of my stuff, I couldn't figure it out, 02:01:30 and by "figure it out" I mean what I currently think I know; for 10, 20, 30 years I wasn't ready, there wasn't enough data to guide my thinking. 02:01:44 So it's not like, you know, "Oh, you're such a smart person, you should understand everything right away." That's not the way it works. You've got to be patient. That's why I say, 02:01:56 find things you really are dying to know the answer to, because only in that way will you put in the work, the scholarship, the thinking, 02:02:07 you know, all that goes into it. 02:02:11 So at this point in my life, most things I hear about, within the realm of the kinds of science I do, feel familiar to me. 02:02:23 But some work is radically 02:02:28 creative. For example, new methods like optogenetics, which has become the rage. 02:02:37 A lot of those experiments that I read are done very well, and the technology is wonderful, but at least so far, a lot of what they've done is rediscover known facts, but in a glitzier way. Now, for quite a while that was true of 02:02:59 functional neuroimaging, fMRI, but it's no longer true; functional neuroimaging has matured, and now they have some very challenging and interesting experiments with real new discoveries, 02:03:16 especially when you do fMRI superimposed on, you know, MEG and 02:03:27 EEG measures, where you have all of these happening at once. 02:03:34 So, yeah. 02:03:38 But the main thing is: find your passion. Without passion, don't go into science.
02:03:46 I mean, of course, you can go into computer science because you want a good job; you know, there are a lot of jobs, and a lot of the problems have interesting applications. You don't have to be in love, you know, but it's nice to have a passion. 02:04:04 You know, it's nice to be in love, right? But if you have a good job and you find the work interesting, that's good; that's what most of us get. I'm lucky: I fell in love when I was 17. 02:04:19 And I've worked like a dog to justify it. Unknown Speaker 02:04:24 Still do. Chaytan Inman 02:04:29 Oh, thank you so much for that answer. Stephen Grossberg 02:04:33 So, shall we say goodnight? Chaytan Inman 02:04:37 Sure, we'll let you go. Yeah, thank you so much. Stephen Grossberg 02:04:42 My pleasure. And you recorded this; are you going to use it for anything? Chaytan Inman 02:04:49 Yes, sir. I mean, we're going to post it on YouTube. 02:04:55 Can't say we have a big following, but... 02:05:00 Yeah. Stephen Grossberg 02:05:02 Well, then, when you post it, 02:05:05 Chaytan, please give me a heads up with the URL. And you might want to trim the beginning, because 02:05:14 you started recording before 02:05:15 we actually began; 02:05:20 you'll figure that out. 02:05:22 But thank you 02:05:23 so much for inviting me. I enjoyed it. Take care, good luck, be well. Chaytan Inman 02:05:30 Bye, thank you.