Archive.fm

Behavioral Grooves Podcast

Unmasking AI | Ben Manning


Duration:
44m
Broadcast on:
15 Jul 2024
Audio Format:
mp3

Is AI about to take over the world, or is it simply…misunderstood?

Tim takes on a solo-sode with guest Ben Manning, a PhD candidate at MIT who is currently writing a thesis on the fascinating world of artificial intelligence (AI) and machine learning. They explore the intricacies of AI, define key terms, and discuss the hierarchy within AI, machine learning, and large language models. From spell check to ChatGPT, the world of AI is diverse and ever-expanding, and Ben explains the potential of AI to assist fields like behavioral science and beyond.

From healthcare to finance, AI has the potential to benefit various fields, but it's not without its limitations. There's a way we can all embrace this technology and understand where and how it's best used.

Kurt jumps back into the game in this week’s grooving session, where he and Tim reflect on the conversation, highlighting key takeaways and discussing their own perspectives on AI. They emphasize the importance of embracing AI for its potential, and not being scared of the “unknown” it presents.

All hail our robot overlords!

Kidding, but tune in this week for a great conversation on a modern invention, and how it applies to our everyday lives. 

© 2024 Behavioral Grooves

Topics 

[0:00] Quick announcements for Behavioral Grooves!

[6:35] Intro and speed round

[10:27] AI, psychology, and behavioral science

[16:46] Using machine learning in psychology experiments

[21:34] Using AI to study behavior: Benefits and limitations

[28:33] AI in machine learning and desert island music

[30:57] Grooving session: The future of AI - we're nervous but excited


Links 

Ben Manning

Large Language Models as Simulated Economic Agents: What we can learn from Homo Silicus

Ben’s Substack

Behavioral Grooves - Sign up for our newsletter!

Musical Links 

Sammy Rae & The Friends “We Made It”

The Brook & The Bluff “Halfway Up”

So, Tim, I have a question for you. Yeah, ask away. I guess I'm always open to your questions. Are you really always open to my questions? Because sometimes... anyway, we digress. OK. Do you know? I mean, really, really know what artificial intelligence is? You know, AI. Honestly, honestly, not totally. But I will tell you this: our guest today has a very keen understanding of AI, LLMs, machine learning. In fact, he says what we really want to focus our attention on isn't AI, but machine learning, because AI and LLMs, all of that flows from machine learning. So here's how he describes LLMs: large language models are a set of generative AI tools, which is a subset of machine learning that you've probably heard a lot about. LLMs are things like ChatGPT, LLM being "large language model." And these are actually just predictive tools as well. So, OK, that sounded pretty simple. Why is it such a mystery? Well, I think it's very powerful stuff. And it's new, and it's confusing, because it's a little bit black-boxy. You know, I think it makes us humans a little jittery, because these technologies are having a big impact on our world today.

So, in this discussion, we talked about the definitions, we talked about the mechanics of some of these things that we just mentioned, and the behavioral science applications that can come from machine learning. And even if you're not super curious about applying machine learning to the behavioral science space or into your world at all, we think that this conversation is a good primer on all things machine learning, AI, and large language models, what those words actually mean, and what that means for you moving forward in this world that we are going to be living in over the next five, 10, 20 years, where we're going to be taken over by the robots, right, Tim? Oh, God, it's not that dark. At least, I hope it's not that dark. OK, agreed. Let's get started.

Welcome to Behavioral Grooves, the podcast that explores our human condition. I'm Kurt Nelson. And I'm Tim Houlihan. We talk with researchers and other interesting people to unlock the mysteries of our behavior by using a behavioral science lens. And in this episode, you're going to hear a conversation that we have with a PhD student from MIT. Well, you're going to hear a conversation that Tim had with a PhD student from MIT. Unfortunately, I was not able to join in the interview, and we wanted to make sure that we got this on the books. And so Tim did a great job, and I think they both covered some really cool ground. Yeah, I'm sorry you missed out on it, Kurt, because it was a very cool conversation, definitely groovy. And the two of us are definitely going to talk about it after the fact.

But our guest in this episode is Ben Manning, and he is a second-year PhD student at MIT. He's getting his PhD in information technology and behavioral science, another one of our classic underachievers. Yeah. So what Tim and Ben talked about that is most interesting to me is what AI, LLMs, and machine learning really are, and what they mean to you as a listener, but also to me as a person who just lives in this world. Yeah. So again, you may not be 100% certain about exactly what AI and LLMs and machine learning really are. Well, hopefully this episode can answer those questions for you. And before we head into the conversation between Ben and Tim, we encourage you to sign up for our new and improved newsletter at www.behavioralgrooves.com, on our website.
You'll get a weekly update on the episodes that are coming out, which have been reformatted, for those of you who have looked. They've been reformatted to hopefully be a little bit easier to read, with some more information that you might be wanting. And we'll be coming out with a new monthly newsletter that has articles and insights on our passion of helping you find your groove. So we hope that you take a few moments to sign up, if you aren't already signed up, and check it out. So this is sort of a classic new-and-improved thing, Kurt. Is that how you'd frame it? New and improved, like all cleansing products. It's fantastic, Mr. Houlihan. Brighter whites, yeah, fewer stains, yeah, all that. Fewer stains I don't know about, but it is definitely an improvement from what we were doing before. And we would love to hear from you, listeners, if you think that this new format is better, and what we can do to improve it. We're really trying to take all of the components of Behavioral Grooves to the next level, the newsletter being one of those. And so hopefully you'll get some really insightful information. We'll help you not only know what the episodes that are coming out are about, but give some insights into what we talk about in those, and some definitions sometimes. But also in the monthly newsletter, we're going to be talking about finding your groove. We're going to be talking about the research that we've been doing on what it takes to find your groove, and how you, as a listener, can take some of that information and apply it in your own life. It's going to be fun. It's going to be fun. Yeah, at least I hope it's going to be fun.

All right, but right now, sit back and relax with a cold pour of big old machine learning and enjoy this Behavioral Grooves discussion with Ben Manning.

Ben Manning, welcome to Behavioral Grooves. Hey Tim, thanks for having me. Really glad to be here. We're really, really glad to have you as a guest, and we're going to start with a speed round. We need to know, first and foremost, do you prefer coffee or tea? Oh, coffee. Oh, that was pretty quick. Four or five a day. Oh, I see you've got like a monster-sized cup in front of you, a big Yeti that could hold like a half a gallon, it looks like. Yes, I'm doing this recording at a friend's apartment who had a nice microphone, and my first question to her was like, do you have coffee waiting for me? So like, it's good. Okay, second speed round question: would you prefer to have dinner with your favorite actor or favorite musician? Probably favorite actor, although who that would be is not immediately apparent off the top of my head. Okay, that's totally fair, but your instinct is to lean more towards the actor than the musician. Yeah, I think I'm more intrigued by the conversation of the actor's shtick than the musician's shtick, because that seems a little more unique to me. I've talked with some musicians, albeit not famous ones, about their lives, but I don't think I've ever talked to a famous actor before about what that experience is like. Cool. Okay. Well, not like we can arrange for that, you know. Are you not a famous actor? I thought... Yeah, and we're not Ronald McDonald's playhouse to, you know, to help make those arrangements. But anyway, okay, so a third question: can generative AI help us learn more about ourselves in ways that basic psychological research can't? Hopefully. Okay. Okay. My research is... Okay, we were going to come back to that. This is a speed round.
We are definitely going to come back to that and talk more about that in just a few minutes. Last speed round question: if humans are going to trust AI to help us make decisions, do you think AI right now has a branding problem to fix? Yes, or at least the people who are telling us to use AI to make decisions have a branding problem to fix. Okay. I don't think AI has the personal agency to address its own branding problem yet.

We are talking to Ben Manning about both your research and some of your collaboration efforts. And let's actually start with some of that research. Well, what we know is that you're getting your PhD from MIT. It's in the IT department, right? Yes. That doesn't mean I fix your Wi-Fi, although that is what my mom can't stop saying about it. It basically means I am primarily an economist who dabbles more in computer science and machine learning than a traditional economist might, although there are many of them who've started to. How would your mom describe what you're doing? Math. And then she might hide her face a little bit and say, I don't know, you should ask him. They're a little bit confused about it, but they're excited about it, whatever it is.

So why study this? Why this intersection? So I started out generally being interested in economics and statistics in a master's program, and did a pre-doctoral research position, which is something that a lot of PhD students do nowadays to get into a better school, thinking about problems in psychology. And I just found the intersection of behavioral science and computation and machine learning to be an exciting new field that offered a lot of new possibilities. There's also a slew of recent issues in behavioral science, as I'm sure you've discussed on this podcast, ranging from purposeful fraud to unintentional problems when people accumulate research facts across a wide variety of spaces without collecting them in a clear, coherent way. And I was excited to think about new ways we could maybe approach those problems. And thinking in a more quantitative, interdisciplinary way was one route I thought could be exciting for that. First of all, I love that sort of do-gooder side of it, like, let's do better science. Maybe that was overly self-aggrandizing. No, not too bad. But the other side of it is that, it sort of sounds to me like you're making a recipe as you go, like you're pulling certain ingredients together and hoping they work. Is that kind of fair? Yeah, I think a lot of the problems in behavioral science and psychology and economics, and maybe some of the other social science disciplines as well, are kind of systemic. Like, based on the incentives, and the very economics of it, people have to publish and be successful, and the natural inclination we have as researchers when we're studying things is to focus in on our one topic area of interest. And a lot of those are clearly not working, and yet solutions are not obvious. And so really thinking outside the box is one way to try and address them. And some of those outside-the-box options might not work, but I think we need to start iterating on them to see what does. Why do you think your first inclination, then, is to approach the systemic issues with AI? Is that a fair summary? And I'm curious, if that's relatively accurate, then why AI? Why do you feel like there's hope there?
I'll caveat this with saying that at my first introduction to these things, my initial thought probably would have been to approach it with more statistical rigor. There's a very famous statistician named Andrew Gelman, whom you may have discussed on your podcast before. He has a blog that addresses a lot of problems in the social sciences, thinking about them from a statistical perspective and how we can kind of think about creating more coherent knowledge when there are lots of problems. And I think he's done an amazing job of highlighting a lot of issues. And when I first got into the field, I would have thought that, like, this is the key: we need more rigor; psychologists and behavioral scientists might just not be good enough at statistics. But I've since recognized that that is a very good tool to point out problems, but not necessarily a great tool to solve problems and generate new ideas. So AI was probably not the first thing I would have thought about.

And then, as far as why AI was the next thing: I think a combination of novelty, it's new and different and hence potentially offers new opportunities, and access. I entered a PhD program and was working with people who were thinking about these things and considering new ways to use tools like generative AI, things like ChatGPT and large language models, which we'll discuss in a bit, in new ways that can potentially offer us new insights. So it's not immediately obvious why this is the single best thing, but it's kind of new and offers new opportunities. And then a final thing would be, like, AI is one tool that can do a lot of tasks in surprisingly human ways. Now, that might be us often anthropomorphizing those tasks and anthropomorphizing the AI, which is a problem in itself. But given that it has this obvious, sometimes human-like quality, which it may or may not truly have, it's an attractive tool to think about using for behavioral science.

That's a great answer, Ben. Thank you. And you brought up a couple of terms that it might be good for our listeners to get some definitions on. You talked about AI, generative AI, LLMs, large language models, ChatGPT. Let's just start with: how would you describe AI and generative AI, and how different are they? I think AI is a really amorphous, broad term. Artificial intelligence is quite literally anything that's got intelligence that's artificial, and what even is intelligence, I think, is a really hard thing to pin down. But underlying all of this is machine learning, which is a little bit easier to describe: fundamentally, using data to make predictions. So I have a list of people who have different sets of education and ethnicities and ages, and I make predictions about maybe something like their future wages. Or maybe I am interested in whether or not children are going to be successful in 20 years by going to college, and I have a whole bunch of information about them when they're five years old, and I make a prediction about whether or not they go to college. That's fantastic. That really feels like a very workable definition.
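Ben's working definition, prediction from data, is easy to see in a few lines of code. Here's a minimal sketch in the spirit of his college-prediction example, assuming scikit-learn is available; every feature, number, and label below is invented purely for illustration.

```python
# A minimal sketch of machine learning as prediction, in the spirit of
# Ben's example: given some facts about people, predict an outcome.
# All features and labels here are made up for illustration.
from sklearn.linear_model import LogisticRegression

# Each row: [parental years of education, household income ($10k), age started reading]
X = [
    [12, 4, 6],
    [16, 9, 4],
    [10, 3, 7],
    [18, 12, 5],
    [14, 6, 5],
    [11, 2, 8],
]
# 1 = attended college 20 years later, 0 = did not (made-up labels)
y = [0, 1, 0, 1, 1, 0]

# "Learning" is just this fit step: the model tunes internal weights so its
# predictions line up with the outcomes it has already seen.
model = LogisticRegression().fit(X, y)

# Predict for a new child: the output is a probability, i.e. a prediction.
new_child = [[15, 7, 5]]
print(model.predict_proba(new_child)[0][1])  # estimated P(college)
```

Everything else discussed in the episode, LLMs included, is some elaboration of that fit-then-predict loop.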
Okay, so where do LLMs, large language models, come into this? What large language models do, fundamentally, is predict text, given pieces of text. So what a large language model does, for example, is: if you open up your computer and you have ever used ChatGPT before and you type in a little message, the large language model takes in that message, and then it predicts the next word that it thinks is most likely given that message. So if I were to write something like "hello, my name is dot dot dot" in the ChatGPT bar, ChatGPT will predict that it's probably some name that comes next. And then what it does is, it has some sort of probability, like when you're rolling a die, of what that word might be, and it will pick one based on those probabilities. And then, as you've seen before, ChatGPT answers with lots of text; it might say, "hello, my name is Ben and I am someone who does XYZ." It is repeating that process over and over again. It's taking its own word that it generated and then making a next-word prediction based on those previous words. So it's a very similar, in fact the same, process as machine learning. It just happens to be generative in the way it makes predictions, and those predictions are realized, hence the term generative AI. And the difference between generative AI and LLMs is just that large language models only use text, whereas generative AI could be anything. Yeah. Oh, thank you for that. That's really great. I think about this predictability, and I'm holding my iPhone in my hand: when I'm texting someone, when I'm texting my wife, and I make some comment about, what do you think we're going to do today, and I think it's going to be really fun. And then I hit space and I type the letter L, and it brings up "love." And then the word "you" comes up right away as well. Those are predictive things, aren't they? Is that a simple use of machine learning? Exactly. Modern-day spell checkers, all they're doing is, well, there are lots of different ways they work. But in principle, they read a sentence, they find a word that doesn't exist in the English dictionary, and then, based on that word, they make a prediction about a better word. I like that.
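The next-word loop Ben describes can be sketched in miniature. A real LLM learns its probabilities from vast amounts of text and works over tokens rather than whole words; this tiny hand-written table is only meant to show the roll-the-die-and-repeat mechanic.

```python
# A toy version of the next-word loop: pick the next word from a probability
# table, append it, repeat. The table is hand-written purely for illustration;
# a real LLM learns these probabilities from enormous training corpora.
import random

next_word_probs = {
    "hello": {"my": 0.7, "there": 0.3},
    "my":    {"name": 0.8, "friend": 0.2},
    "name":  {"is": 1.0},
    "is":    {"Ben": 0.5, "Jim": 0.3, "Tim": 0.2},
}

def generate(prompt_word: str, max_words: int = 4) -> str:
    words = [prompt_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if options is None:  # no prediction for this word: stop generating
            break
        # "Rolling the die": sample one word according to its probability.
        choice = random.choices(list(options), weights=list(options.values()))[0]
        words.append(choice)
    return " ".join(words)

print(generate("hello"))  # e.g. "hello my name is Ben"
```

Each pass through the loop appends one word and feeds the growing text back in, which is exactly the repetition Ben describes: the model's own output becomes part of its next input.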
Danny Kahneman, in his book Thinking, Fast and Slow, popularized the concept of system one and system two thinking, and I heard someone recently say that LLMs are sort of in the system one arena right now, that they're pretty simplistic, they're sort of reactionary and reactive, as opposed to really getting into the thoughtful way that we tend to think of system two thinking, where it's really sort of hard, hard thinking. To what degree do you agree or disagree with that, Ben? I think that that is maybe a false representation of what the model is doing. I think that's a little bit anthropomorphizing of the large language model, which I think is a really easy thing for people to do. You see this thing that generates text like a person, but at the end of the day, if you give it a really thoughtful prompt with lots of instructions, it can do the same process of predicting the next word and generate a system two-equivalent response that a person might make. Like, you can ask ChatGPT to write an essay for you, and that is very much a system two process. I don't know of anyone who can get away with writing five pages without thinking deeply about them. So I think at first glance, a lot of simple entries might generate simple responses, but more complicated entries might elicit more complicated responses. So I'm not sure if that's the best way to think about it, if I'm going to be honest. Yeah, yeah, that's totally fair.

Speaking of Danny Kahneman: we were introduced, you and I were introduced, by Linnea Gandhi, who has been a longtime friend and supporter of Behavioral Grooves. And we adore her. Let's just get that on the table. She's awesome. Yeah, she's just terrific. But you and Linnea and Angela Duckworth, along with the late Danny Kahneman, were working on a perspectives piece. And first of all, before we get into exactly what that piece was about, what is a perspectives piece? What does that mean in the academic language? A perspectives piece, that's a great question. A perspectives piece in the academic language would be just kind of an opinion piece about a topic or a state of research that's maybe a little less backed up by empirical work or theory. So it's maybe an academic equivalent to pontificating and writing it all down. Well, it's an editorial to some degree, right, but it's not freeform. It's still informed by the scholarship, right? It's, ironically, very generative, maybe, in that it tends to elicit academics thinking in new ways or assessing a problem, and then hoping that people will then pick up that train of thought and continue with more rigorous, either theoretical or empirical, work. So what are you guys working on? Tell us a little bit about what that perspectives piece is promoting.

So the paper I've been working on with Linnea and Angela and Danny, unfortunately before he passed, is thinking about problems in psychology that are kind of embedded in the nature of psychology. So, many of your listeners may have participated in some sort of psychology experiment before in a lab. You go to your university psychology department, they bring you into a lab, they sit you down at a computer, and they give you $10 to answer questions. And what they're doing in a lot of those experiments is some slight manipulation to half the people in the experiment. Maybe they are showing people a bird versus a squirrel and seeing what happens. And they are trying to estimate the effect of doing one thing versus another. And then after they've done some sort of study, they publish the results about the effect of seeing birds on the screen versus seeing squirrels on the screen. And what our paper wants to think about is the fact that this is a very highly stylized environment. In the real world, there might be a bird and a squirrel in the tree, and there's also the tree, and there's also the nuts and the grass and everything else. And studying psychological phenomena, at least in the lab, inherently involves this focus on whatever you're trying to study. Yeah, in part to eliminate all the other sort of distractions or things that might be influencing that result, right? Exactly. And this is important. We have to do this; part of the experimental method is to keep all else equal while you tweak other little parameters to study them. And the lab does that. The lab can help do that. Exactly. And you can do it in the real world sometimes, too. You could try making stoplights change in a different order someday and see what happens out in the real world. Yeah. And there's a problem here, in that when we do this focusing in an experiment, we probably over-focus, as scientists, on how much that thing matters, as opposed to in real life. Because in real life, there are so many important variables going on. So even when we keep everything else equal and tweak some little picture on a screen or sentences on a piece of paper, we are probably ignoring the real-life complexity, and that thing probably matters a lot less in real life, even if it matters a lot in the lab. Yeah. And so that's what the paper is about: thinking about expectations.
It's thinking about how we need bigger samples and more people to study things, because effects are generally smaller in the real world than we think, and just considering how we cope with the fact that the scientific method we need to study something might have an intrinsic bias of us overly focusing on the thing we're studying. Yeah. Which is interesting, in that you are coming at this from an economics perspective, and economists have traditionally had very, very large databases, very, very large data sets to study from. And I think that that's kind of fascinating, when I think about your general perspective, to say, well, gosh, couldn't we just have bigger data sets to help? And it also reminds me of Angela Duckworth and Katie Milkman's Behavior Change for Good, and what it takes, you know, to run these massive international experiments. Do you guys address those kinds of things in your perspectives paper? So what we would say is that those things are great. Oh, good. They have their own set of practical complications. Really big sample sizes solve a lot of problems in behavioral science, without getting into the nitty-gritty details. When you run an experiment with 20 people versus 10,000, you can just be a lot more confident in the results of whatever you do with 10,000 people. But there's a trade-off, right? It's really hard and expensive to get 10,000 people to do anything, and it's not so hard and expensive to get 20 people to do anything. And so part of the problem in psychology, and in behavioral science in general, is that a lot of people can get 20 people to do stuff, but very few people, you know, famous academics with lots of money like Angela and Katie, can do these huge experiments. Right. Right. Yeah. It makes a difference, doesn't it? Yeah. Yeah. That's very cool. And you said the paper is under review, and so we might be weeks or months away from actually seeing it go live. Yes, I'm going to hope a few months. Okay. Okay. The academic publishing process is a little bit of a roller coaster, so making explicit predictions about how long things take is actually something a machine learning predictor probably wouldn't be very good at.

Good, good segue back to machine learning. And I want to get back to one of the speed round questions, Ben. We talked a little bit about how AI can help humans learn more about what it's like to be human. Can you give us some insight into what your thoughts are about that? So I think the potential here is enormous, and there's kind of this exciting new literature thinking about this very question, and that's what my work has been in over the past year or two. There are actually a lot of ways to think about using AI for behavioral science. The way I've specifically been thinking about doing it is by trying to treat an AI as an imperfect surrogate, a proxy, to study people. So I mean this quite literally: like, imagine opening up ChatGPT and telling it, "you are a person named Jim," and then asking it behavioral science questions and analyzing the results. Wow. Wow. So what do you think we might discover from that? So I think it alleviates a lot of the problems that we just discussed, right? Yeah. Yeah. It's pretty hard to get 20 people; it's pretty hard to get 10,000 people to do something; but it's pretty easy to get an AI to answer a question 10,000 times.
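Here's a rough sketch of what "asking an AI a question 10,000 times" could look like in practice. The persona, the question, and the `ask_model` helper are all hypothetical placeholders, not Ben's actual setup; you'd wire `ask_model` up to whichever LLM client you actually use.

```python
# A sketch of the simulated-participant idea: give a model a persona,
# ask the same behavioral science question many times, tally the answers.
# Everything here (persona, question, ask_model) is a hypothetical placeholder.
from collections import Counter

def ask_model(prompt: str) -> str:
    """Placeholder: send `prompt` to your LLM of choice and return its reply."""
    raise NotImplementedError("wire this up to a real model client")

PERSONA = "You are a person named Jim, a 45-year-old accountant."
QUESTION = ("Would you rather receive $50 today or $100 one year from now? "
            "Answer with a single word: 'today' or 'later'.")

def run_simulated_survey(n: int = 10_000) -> Counter:
    answers: Counter = Counter()
    for _ in range(n):
        reply = ask_model(PERSONA + "\n\n" + QUESTION)
        answers[reply.strip().lower()] += 1  # tally each one-word answer
    return answers

# Usage: counts = run_simulated_survey(); print(counts.most_common())
```

The loop is cheap to run; whether its answers track real people is a separate question.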
So it lets us really explore the possible space of things that matter in behavioral science or psychology a lot more quickly, and also in a very systematic manner, where we can control a lot more of any given environment. Interesting. Wow. Okay. Well, we are going to have more discussions about that in the future. I'm pretty sure that we're going to definitely come back to that. Is there anything about this topic that I haven't asked? Is there anything that we ought to be discussing that you want to share? So two things I want to share are the fact that this offers immense opportunity, and that we still have so much to figure out. So some of the work I've been doing in the past year has been thinking about how we can have these models come up with their own ideas and then run experiments on themselves, like people, which offers an exciting, kind of untapped potential for iterative experimentation and understanding. Conversely, it's not always exactly clear when these models are going to be great proxies for people. So I think it's really easy to log on to ChatGPT and be like, "your name is Jim, pretend to be a person," and, you know, feel like it's human, because there's never been another sort of interactive device that has such a human-feeling presence. But we don't actually know the boundary conditions yet of when this thing offers good and bad responses. So part of the work is coming up with new ways to elicit information from these models, and also figuring out when these models are going to be informative about people. Yeah, that's great. It also reminds me of a conversation we had with Linnea recently about her cartography of nudges. And I wonder about the applications of AI helping fill in some of those blanks. If the mapping project is about trying to delineate where the shore is, and where the beaches and the mountains and the roads are, and all that kind of stuff, in the world of nudges there are going to be a lot of blank spots. Absolutely. Right. So could you see this fitting in, to help us understand more about where the blank spots are? I totally could. I think one kind of promising idea here is to try and get an AI to perfectly replicate known results, and then use that AI that's been fit on the known behavioral science results, or behavior, to try and make predictions about new behavior. So, use Linnea's mappings in cartography as baseline training data for the model to make better new predictions. Oh, yeah. Yeah, that's exciting.

Okay, we have many more things that we could talk about. We will definitely have you back to continue the conversation in the future. But right now we need to know: if you were stuck on a desert island for a year and you could take two musical artists' catalogs with you, so everything that that musical artist has created, you wouldn't have to pick one song or one album, which two musical artists would you take with you? The first would be Sammy Rae & The Friends, who's, I don't know, they're like a folksy pop, R&B-esque fusion. The way you're laughing makes me think you've heard of her. Yes, yeah. Yeah. Very big fan. And then I think the other one would be a lesser-known band called The Brook & The Bluff, who maybe have a slightly similar vibe, with some bluesy riffing guitar solos in the background. Yeah. Okay. That is lovely and eclectic, and I love that. That's a great answer. Ben, thanks so much for being a guest on Behavioral Grooves today. We really appreciate it. Thanks so much for having me, Tim.
It was really fun to be on.

Welcome to our Grooving session, where Tim and I share ideas on what we learned from Tim's discussion with Ben, have a free-flowing conversation, and groove on whatever else comes into our AI brains. Yeah. There it is. Because our brains are artificially intelligent, Tim. We don't really have intelligence, you and me. It's all just artificial. So it's very limited; when you say "our," you're actually just referring to the two of us. Yes. You and me. And our listeners obviously have real intelligence. So our I, as opposed to AI, you know, maybe, but they're listening to us. So what does that mean for them? I don't know. What does that mean? It means, it means they're curious, and curious about what these two fools are going to talk about. Yeah. So hey, I really feel bad that I missed out on this. I think I was having surgery, so it was one of those things that was not trivial. Yeah. But anyway, glad that you were able to have the conversation with Ben, and I think it was really cool. I obviously listened to it. And again, the insights that you guys talked about, and just the definitions. I think so much of what we hear in the news today about AI, machine learning, LLMs, ChatGPT, all of this, I get to a certain degree, but I haven't ever peeled back the layers. Yeah. And you and Ben were able to peel back those layers. So can you talk a little bit about what you took out of the conversation?

I think the first big thing is the hierarchy that Ben set up, this idea that machine learning is the big umbrella. Let's just start there. Machine learning is really the big umbrella. It is all-inclusive of AI, and underneath AI are LLMs, or the large language models. So this idea that the next-best-word kind of thing is really a subset, just one portion, of what machine learning is overall. So that's the first thing. The second thing was that machine learning is about teaching a machine to respond to specific prompts, and that they don't think on their own. Well, that was a piece, we had this conversation before starting the Grooving session, about intelligence. And I think Ben even mentioned this: machine intelligence, artificial intelligence, is kind of a misnomer. It's not really, well, again, intelligence has so many different definitions, and so it's really hard to put a label on. But when we type something into ChatGPT and it comes back with this fantastic response, like, write an email to Tim telling him what a great guy he is, right? And it comes back with these flowery words and wonderful, you know, things about all the great things that you do. It really doesn't understand what it's saying. It has no comprehension that this is an email to Tim, who is my co-host on this podcast. Even if I put those prompts in, it still doesn't understand that. Right. Right. But it is able, because of how it has been programmed and the algorithms that it has, to know that, oh: email, you know, congratulatory, you know, different pieces. So it looks at all of these different email templates that it's gone through and picks out that next best word, but it doesn't understand that at all. Yeah, and you actually gave a great example of this. We were talking about what it was like if you had a conversation with your daughter and you asked her a question, and how, if this was just written into ChatGPT, for instance, you know, a very simple LLM, it's going to just take the words at face value.
But the way that we humans express those words makes a difference in the way that we communicate and in what the meaning of those words is. Yeah. Yeah. You know, ask my daughter, do you want to go to a movie tonight? And "I'm tired" could be a response, right? And we understand what that means. "I'm tired" means no, they don't want to go to the movie, but there's not, you know, a comprehension there, or even just the tonal component. I mean, she could have said "okay," or she could have said "okay..." Those are two very different interpretations of wanting to go to a movie. And right now, at least from my understanding, and I think what you guys were talking about, the large language models at least aren't to that point of having understanding. Right. So that, I think, is a key piece. And obviously there's a lot more that could be unpacked there, and we could probably have Ben on again to talk about that in much further detail. But there was another aspect that I thought was really interesting and wanted to get your take on: you guys talked about the way that AI and machine learning can be used in behavioral science. Can you talk a little bit about that? Yeah. How cool is it? Like, this has just come up recently, very recently, like in the last two months, in our lives, when I think Sendhil Mullainathan described an idea of being able to scale a behavioral science intervention with a large group of people using AI, being able to actually kind of custom-tailor a particular response or intervention to a particular situation that it perceives. And that kind of got me thinking. And when Ben and I talked about that, he took it to the next level and said, well, imagine if we could create these entire populations of people, maybe with some specific preferences or biases, in AI, in machine learning, and then study them, ask them to respond to specific interventions. And we might be able to discover ways that we can basically program a computer to do psychological studies. And like, you think, wow, that's really cool. And again, it scares the hell out of me to a certain degree, right? But it's cool, and granted, I think Ben, and correct me if I'm wrong, he's saying it's not quite there yet. Oh, it's not there yet, you know, and there are going to be certain limitations on that, depending upon, you know, all of the different nuances of how things work. But if nothing else, it's a great first cut, right? It can be that helpful. Again, the way that I often use, like, ChatGPT is in writing: it's a great first draft, right, that you can then go, oh, yeah, I didn't think about that, putting it in that order or that flow, and now I'll take it and build off of that in various different pieces. And I think there's power in that, at least within thinking about it in a research modality. Totally. Totally. I just want to add to that: Ben and Linnea Gandhi and Angela Duckworth are writing a perspectives piece on this. And Linnea, our old friend Linnea Gandhi, is actually leading this perspectives piece in such a way that it's going to address areas that could be explored within behavioral science using machine learning. So there's a lot ahead. I mean, I think it's really cool stuff. Yeah.
And I know, like, you talked about Ben's working definitions and the idea that they're not perfect definitions, but they're good and they're working, right? Yeah. Yeah, they are working models. They are working definitions, and, you know, he wasn't trying to hold himself up as, this is the definitive way of thinking about these things. But I think that they're really helpful. He also said, you know, that there are other types of machine learning. There's something called reinforcement learning, and it involves prediction, but it's different from the LLMs, and, you know, we didn't get into those at all. So there's a whole bunch of other things. I know, it's wild, isn't it? So in the introduction, we talked about peeling back the layers, and we really only peeled back one layer, right? Or maybe even part of a layer. Which is fine by me, to go on with this. It's funny, I have some friends in the financial world, and, you know, everything is AI now in the financial world. I was talking with people in market research, and everything is AI. I just attended a healthcare conference for pharmaceuticals, and you know what? A massive amount of those presentations were about AI. It is a component that is really taking off, and the potential is there. And so I think for everybody listening, even though this is well beyond Tim's and my brainpower, I think it's important that we try to understand and try to peel back as many layers as we can. Yeah. Well said.

Let's wrap this up, starting with: don't be scared. Yeah. It's okay to be uncomfortable with it because it's unfamiliar, but it doesn't necessarily have to be scary. Right. I think maybe that's kind of the thing there. I also want to just express a bit of gratitude to our good friend, Linnea Gandhi, for making the introduction to Ben. We're going to stay in touch with Ben over the years as he finishes his PhD and gets into doing other research. I think that he has a fantastic future ahead of him, and I'm excited that we got to meet him, and I just want to express some gratitude to Linnea for making that introduction. Yeah, I second that, and all of what you just said. Don't be scared. There are possibilities out there, a whole bunch of potential possibilities. And the more that we understand, the more that we learn. As you said, like the definition that Ben gave of LLMs: very simple, I mean, when you actually break it down. It's not really, it's not rocket science. It's computer science. There you go. All right, I'll leave it at that. But I want to just reiterate: if you have not signed up for our newsletter, we would encourage you to, because it is new and improved, with new detergent action, whatever that would be, right? Brand shiny new cover. But we do think, we're trying to update it, we're making changes to it. And we are going to be having a monthly finding-your-groove newsletter that is going to really explore some of those cool concepts that will help you. So you can learn from this: not just something that says here's what's coming up next, but something that's actually going to have some valuable information in it. That's our hope. Yeah, and it's not just the hope; that will be what happens there. Okay, we'll deliver on that.
So with that, Groovers, we hope that you really, really, really got some cool things out of our discussion, out of my discussion with Ben in this episode, and that you can use them this week as you go out and find your groove. [MUSIC]