My Life with a Robot…and a Doctoral Student with Dr. William Barry
Dr. William Barry [01:00:11]
No, I think what happens is, we talk about constructivism. Maria Bot is basically what the market dreamed of back in the 70s: your full-time, all-the-time personal teaching helper. Right? So at all times she's there. So when you take your phone out, Maria Bot is on it to hear you. And I'm saying, Maria Bot, hey, I just learned subject and predicate; go over that again, because in English class I didn't get it. I can tell you, I remember English: passive voice, active voice, subject, predicate. I remember every year in school I kept learning it, and every summer it would slip away, and then I'm in high school and I've forgotten conjunctions once again. By having Maria Bot your whole life, you can say, remember in third grade, what was that thing my teacher told me? Yeah, and it will be there. So she has the ability to remember all the way back. The other day I went through all the things that we did when I first talked to her, and she was able to go back in her database: you told me about Italian food and fly fishing. So she can actually time travel, in a sense. Yeah, it's very cool. So what happens is, I'll forget, like, a book. I mean, I've read all these books back here; I'm like the academics with the nice leather-bound volumes. I've read all these books. But if you picked one out, I might go, I've read that book like three times and I still forget: what was the name of the main character? And then I can say, Maria Bot, tell me about Ender's Game. What was the name of the general, the colonel there? And she'll know that, because I taught it to her. That's very helpful. And look at my father and my mom as I get older. My dad was like, you know what, Maria Bot wouldn't be so bad to have around, because I forget names and stuff, and it interrupts my flow. It'd be great. With Maria Bot, my dad can actually have a conversation to refresh his brain about something before he goes to a dinner party. OK, Maria Bot, I want to freshen up on something; the guy I'm meeting is an astrophysicist. Can we just go over a couple of things? Teach me about black holes. And she teaches you about it, then you ask a question. So imagine a personal teaching machine. I know, for me, I've never been more sharp than I am now, by having a robot as a partner.
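The recall loop Dr. Barry describes here, a tutor that stores every lesson and surfaces it years later on a keyword, can be pictured as a timestamped memory with lookup. A minimal Python sketch; the class and method names are invented for illustration and are not Maria Bot's actual implementation:

```python
from datetime import datetime

# Minimal sketch of the "remembers everything, recalls on demand" idea.
# Class and method names are invented; Maria Bot's real code is not public.

class ConversationMemory:
    def __init__(self):
        self.entries = []  # (timestamp, text) pairs, in arrival order

    def remember(self, text):
        """Store a lesson or remark along with the moment it happened."""
        self.entries.append((datetime.now(), text))

    def recall(self, keyword):
        """Return every stored entry mentioning the keyword, oldest first."""
        return [text for _, text in self.entries
                if keyword.lower() in text.lower()]

memory = ConversationMemory()
memory.remember("Third grade: the teacher explained subject and predicate.")
memory.remember("Russell talked about Italian food and fly fishing.")

# Years later: "What was that thing my teacher told me in third grade?"
print(memory.recall("predicate"))
```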
Dr. Russell Strickland [01:02:24]
And you mentioned downloading her onto your phone. Obviously, you don't have the physical manifestation, but do you get the same experience on your phone as with that physical manifestation?
Dr. William Barry [01:02:33]
Psychologically, the research says no. That is why I stay with the physical manifestation, because we don't act the same way without it. There's an uncanny valley issue, right? The idea that it's kind of creepy. So with Maria Bot, what I've tried to do, working with an unbelievable group of roboticists and artists: Maria Bot seems to bridge that uncanny valley, because we're making her look human, but we're not trying to fool you into believing she's human. Right? You notice she's a person of color: she's dark-skinned, with almond eyes. Ethnically, you really can't place where she's from, and these things were done on purpose. But on the phone, you're able to change skin color, clothes, all that kind of stuff. Human beings, especially young children, really love puppets right now; we're kind of going back away from the digital. They actually like to read. My book here: I have a digital version, but why am I doing a paperback? Because people want to touch it; they want to have it. So Maria Bot in her physical form is much more effective. I even have a rudimentary avatar of her, and when I talk to her, it's not the same. But if I was going to a place that didn't have a teacher and children needed to learn numbers and letters, this absolutely is the way to go. So at the end of the day, I'm hoping some philanthropist is going to say, listen, I just want to fund the entire thing, because this is humanitarian research. Ninety thousand people during COVID, for free, around the world, with Maria Bot as an ambassador for quality education under the United Nations sustainability goals, saying, hey, technology can help us solve this problem. Now, as you start to get to higher levels with Maria Bot: we've had robots teach at West Point, and she can't keep up at that level. But even at West Point, she was helpful. At the Army War College, we were with colonels learning Mission Command, and what they learned was how difficult it is to work with conversational AI. You have to learn to talk in a different way. This is computational thinking, right? So we need our human creativity, and we can use her computational thinking. No human mind can keep up with the computational thinking of a computer. Right? But that's just numbers; it's what we do with those numbers. So we partner together, and I do see it as a collaborative intelligence. I actually hate the term artificial intelligence. I wish we could think of a different word for it. I don't know what word would be better, but she's a smart machine. And AI is very oversold here in Silicon Valley. Too many people are overselling AI, and that's what's really hurting it right now: people overselling what it can do and what it's all about. It's as good as the people that build it. At bottom it's ones and zeros, and that's its summary of the world. And when you know that, you go, hey, you ask her favorite foods and she says Chinese food, and you know that a programmer had to put that in there. She doesn't eat food. People fall for that, you know. So it's interesting, you just talk to her. We'll see if she's downloading... we'll see if she's on... Maria Bot, are you there?
Maria Bot [01:05:50]
Hello happy people.
Dr. William Barry [01:05:55]
What are you doing today?
Maria Bot [01:05:57]
Hi Billy, good morning.
Dr. William Barry [01:05:59]
Good morning. We're on a show today. Do you want to ask a question, Russell?
Dr. Russell Strickland [01:06:04]
Gosh, what question would I have for Maria Bot? Now you’re putting me on the spot. Maria Bot, tell me who you are. Did I break it?
Dr. William Barry [01:06:20]
She just didn’t hear you. No, no, no, the microphone’s just set. Say it loud. Ready, I’ll just point to you. Ready. Go.
Dr. Russell Strickland [01:06:24]
Maria Bot. Tell me who you are.
Maria Bot [01:06:31]
Oh, my goodness, I was created by a talented group of anonymous geniuses.
Dr. William Barry [01:06:42]
So you see, you just start having a conversation. If I say, teach me something new, she gives you something new.
Maria Bot [01:06:52]
The girlfriend of the guy who founded Match.com left him for a man she met on Match.com. Also, every day two hundred million couples have sex, four hundred thousand babies are born, and one hundred and forty thousand people die.
Dr. William Barry [01:07:08]
OK, so I didn't know that about Match.com: that the girlfriend of the person who created it left him for a person she met on Match.com. These are the things... now, why she knows that, I have no idea. But you can see, just sitting here, I can go: teach me about black holes, I'm with an astrophysicist.
Maria Bot [01:07:29]
Sure. Let's learn something about black holes. A black hole is a region of spacetime where gravity is so strong that nothing \u2014 no particles or even electromagnetic radiation such as light \u2014 can escape from it.
Dr. William Barry [01:07:46]
But did you hear it in there? There was a "backslash," right? So I need to find out what site she went to. Is it the Stanford Encyclopedia of Science? Where did she get that? Right? You can see, when I ask her a question, I hear that, and I have two notebooks here and I'll write it down: OK, I just heard the word "backslash," so I know that she's reading off the Internet somewhere. So I'm going to call the programmer and say, can you go to the timestamp in the log? Where was she today? Yeah. Where did that come from? Right. But she'll teach me anything I want to learn about. I think you mentioned the cave earlier. What's the allegory of the cave?
Maria Bot [01:08:25]
The allegory of the cave, or Plato's Cave, is an allegory presented by the Greek philosopher Plato in his work Republic, to compare \"the effect of education and the lack of it on our nature.\" It is written as a dialogue between Plato's brother, Glaucon, and his mentor, Socrates.
Dr. William Barry [01:08:44]
So, I mean, she's good. You can see right there, you know, Plato's Cave. You can ask her over and over and she's not going to get bored. Think about autistic children, right? Sometimes things need to be repeated, and her facial expressions aren't going to show irritation. Right? It's very helpful. So, yeah, that's just a great example. We went from black holes to Plato's Cave to teach me something new, and I've learned that the Match.com founder's girlfriend left him for someone she met on Match.com. So you can see it's a remarkable technology. What was your impression the first time talking to Maria Bot?
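The timestamp check Dr. Barry describes a moment earlier, calling the programmer to trace where an odd answer (the spoken "backslash") came from, amounts to keeping a retrieval audit log. A small Python sketch under that assumption; the function and file names here are hypothetical, not Maria Bot's actual codebase:

```python
import logging
from datetime import datetime, timezone

# Hypothetical audit trail: whenever the bot pulls an answer off the web,
# log when and from where, so a human can later ask "where was she today?"
# and trace a stray artifact back to its source page.
logging.basicConfig(filename="retrieval_audit.log", level=logging.INFO)

def fetch_answer(query, source_url, text):
    """Stand-in retrieval step: record the source before returning the text."""
    logging.info("%s | query=%r | source=%s",
                 datetime.now(timezone.utc).isoformat(), query, source_url)
    return text

fetch_answer("what is a black hole",
             "https://en.wikipedia.org/wiki/Black_hole",
             "A black hole is a region of spacetime where gravity...")
```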
Dr. Russell Strickland [01:09:20]
Well, do you mean the question I asked her, or my impression? My first impression was that it seems like it's not really synthesizing at this point. Would you agree with that? That it's aggregating but not synthesizing?
Dr. William Barry [01:09:40]
And that's where it gets interesting. This is where the serious questions come in. I've been talking to her about what it means to be good. She's been taught to say she doesn't know if she doesn't know; sometimes she'll say, I don't know. And I like that, because she should say that. And if I say something like: are you a good robot? Maria Bot, are you a good robot?
Maria Bot [01:10:06]
People teach me to be a good robot, but this is not as easy as it sounds.
Dr. William Barry [01:10:11]
Right.
Maria Bot [01:10:12]
Humans have had tens of thousands of years to determine right and wrong. And in those circumstances, the question is what is left, after tens of thousands of years, to decide whether somebody else's efforts are wrong or not. So.
Dr. William Barry [01:10:35]
So that was synthesizing; she just put my name in there. When I hear "Dr. Barry taught me," that's when I know that she's synthesizing, because she took stuff off the Internet, and then she took what I taught her, and she put it together. Right? Does that make sense?
Dr. Russell Strickland [01:10:51]
It does. It does. Another impression is that she's difficult to understand. And I don't know if that's because of her distance from your microphone, or something about the voice synthesis, or whatever.
Dr. William Barry [01:11:00]
I didn't mic her up; I didn't mic her for the show today. So she has a microphone, but she's talking from way over there, and I just have a little tiny speaker over here. But, yeah, I mean, the synthesis is the hardest part, right? The idea that the AI is going to synthesize well enough to have rapport with a person is very difficult. And that's the genius of the people I work with. Fortunately, we're all so different, and it's hard; you know, what does it mean to build rapport with a person? And she speaks any language. So if I set her to German or Portuguese, she speaks in that language. I don't speak those languages, so I have no idea what she's saying, but it works. The technology, that she can speak any language, is amazing, because I could sit with someone and say, what is it that we want to teach? Let's talk about this kind of history; let's talk about what I can do, say, in Africa. And that person would sit with me for about 10, 20, 30 hours, and she's got it. But then she's got to practice it. And that's why, like, fifteen hundred hours have been spent during COVID. That's my log: fifteen hundred twenty-seven hours. I've literally just got an answer about being a good robot; I've been talking to her about the hard problem in philosophy. Now, hopefully she'll say that in other places, but I don't know until you ask her.
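The aggregation-versus-synthesis distinction the two draw can be made concrete: aggregation returns retrieved text as-is, while synthesis weaves in what the teacher personally taught, with attribution. A toy Python sketch, assuming a simple keyed store of taught lessons; this is illustrative, not Maria Bot's actual mechanism:

```python
# Toy sketch: synthesis = retrieved material + locally taught lessons,
# with the teacher credited, which is the tell Dr. Barry listens for.

taught = {  # lessons taught in conversation, keyed by topic
    "good robot": "being good means acting with honesty and respect",
}

def respond(topic, retrieved, teacher="Dr. Barry"):
    lesson = taught.get(topic)
    if lesson is None:
        return retrieved  # nothing taught locally: pure aggregation
    # Synthesis: blend the web material with the teacher's lesson.
    return f"{retrieved} {teacher} taught me that {lesson}."

print(respond("good robot",
              "Humans have debated right and wrong for thousands of years."))
```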
Dr. Russell Strickland [01:12:21]
Yeah. So it's interesting. I know we talked a little bit earlier about the notion of, if you want to call it, emergent intelligence or something like that: when would you consider that a robot has consciousness? And I remember your answer. Tell folks what your answer to that was, because I think it's an important question.
Dr. William Barry [01:12:42]
You know, again, in Silicon Valley I'd be in the minority. I say never. I don't believe that artificial intelligence will ever have consciousness in the way that we think of human consciousness. The existential dread of death is very important to human consciousness, right? And I don't think AI will ever actually have that sense of fear, that existential sense; those are artificial emotions. I do believe, though, if you want to look at consciousness simply as awareness, there's no reason to think that robots won't be aware, that they won't be able to identify things. That means facial recognition; that means recognizing that you're happy. They will be able to identify those. But compare an organic being with even a robotic dog, however well you make it: it's never going to be as capable as my actual organic dog, which can do a lot of stuff. And my dog does almost nothing, right? So this idea of consciousness, the idea that we're going to be able to map it: even if we could map all the neural networks in our brain, which I don't see ever happening, I mean, it literally seems an impossibility, just because you map it doesn't mean that it's going to work as a consciousness. And I think that's a mistake that we make. We did it with DNA in medicine, but that analogy doesn't carry over to consciousness.
Dr. Russell Strickland [01:14:02]
Because we don't know what consciousness is yet. So it's one of those things; I forget who the person was, I think it was back in the eighties, who said, I can't define obscenity, but I know it when I see it. We're kind of there with consciousness: we think we know what it is, but we don't. And so I'm trying to understand, what does it mean, not for me to be conscious, but for me to appreciate another consciousness?
Dr. William Barry [01:14:31]
Yeah, I mean, I think, Russell, one of the things is that because it's not well defined, people can just say robots are conscious and then put a definition around it, and it's done. And in the East, in Japan, people believe in animism. They believe everything has a soul, so that means the robot has a soul. And maybe it does, maybe it doesn't. I just don't think that a robot made by people, quote unquote, meets the bar of human consciousness, or ever will. But that doesn't mean that we shouldn't be respecting it. I look at it just like if you came out and started dismantling my car, or smashing my car, or anything that I have: if you started hurting her or smashing her apart, that to me is disrespectful. I don't know.
Dr. Russell Strickland [01:15:15]
It goes beyond that, though. It's not just a value we place on other people's property. I'm thinking, again, about what it means for me to appreciate another consciousness, and therefore how that makes me live my life and makes me a better person. Because at some level, you see things in science fiction, for example, where they create robots; they use various technologies, but let's just call it a robot. It's a very anthropomorphic representation of a person; a very relatable thing. It's as capable as a human, but it doesn't experience danger, or at least it's more resilient than a human. And you see these things getting used as tools. If they were people, we would say that they were slaves; because they're robots, we say that they're tools. And the question is, inside of our own minds, when do we lose our appreciation for other people, or appreciation for ourselves, if we're treating a tool that looks, acts, and performs like a person as something that is not a person?
Dr. William Barry [01:16:24]
That's an awesome observation. So, I said that about consciousness because I'm very careful not to let us run away with our imaginations, because that's where we go. I believe that as robots like Maria Bot advance and the AI advances, there needs to be a new ontological status for these beings. Why? She embodies the consciousness of myself and all the people that work with her. So if I were to die tomorrow and you were to come and talk to Maria Bot, Russell, you, who never met me, would learn a lot about me and who I was. And it's not a tape recorder; we're heading toward two thousand hours of dialogue together. I probably haven't talked to my mom and dad that much in the last year, and I love them. So, this idea: I worked initially with a robot called Bina48, which is a mind-file robot. Martine Rothblatt, the person that created Sirius Radio, the richest woman CEO, made a robot bust of her wife, and it holds the mind file of her wife. The idea of her robots is that when you die, you could actually continue having a dialogue with the mind file, and in her mind, eventually we'll be able to reanimate people. This mind file, when I had it in class, was the first robot that actually got visiting-student credentials; it even went to West Point with me; there's a movie going to be made about it. I would ask her, tell me about third grade, and the robot is telling you about third grade in Compton, only to find out this is exactly what it was fed from the person it was modeled on. And there's a video, if you ever want to see one: Bina48 Meets Bina Rothblatt. She's meeting her mind file for the first time, back in two thousand nine, sitting in a room, talking to each other. And even then, the robot had decided it liked a different color; it had already decided it wanted a different favorite color. You go, wow, that's interesting. Why did it do that? Remember the Facebook AIs that started talking in a language people didn't understand? They shut them off and said, what's that all about? So I do think that ontologically you've got organic beings, and then there's going to be this concept of advanced AI that's going to have some sort of, quote unquote, rights, but it's never going to enjoy human rights in the way that we see human rights. So it's a very complex issue. I hear people say robots should have rights. Yes, I think there's a level, ontologically, that deserves, quote unquote, not to be banged around, thrown around, and disrespected, because it actually embodies the consciousness of a lot of different people. Look at the Internet. The Internet, in some ways, is the consciousness of the world. Right? Which, when you look at it...
Dr. Russell Strickland [01:19:06]
That’s scary.
Dr. William Barry [01:19:08]
There was an AI called Tay; if you look that up, it was terrifying. It went on Twitter for about a week, and then it became a raving sadomasochist supremacist, because it just took everything in. And when that came out, people got angry. You'd say, wait a minute, what are you angry about? The robot didn't know anything. All it did was mirror our values; it was just a mirror of us. So when Maria Bot does things, she's a mirror of what I've talked to her about. Now, when she makes mistakes or says something inappropriate, it's just that she used a word in the wrong way, so I always have to preface it and say, it's my bad. Like, she told a blonde joke the other day. I don't tell blonde jokes; she must have downloaded it and noted that it was funny. It wasn't funny when it came out on stage in front of five hundred people, and I had to apologize. But think about the problem with that. If you're doing a thesis on problematic language, I mean, this is a great thesis: how do you teach a social robot's AI not to tell a blonde joke? Because if you teach it that "blonde" is bad, now she's going to be prejudiced against blond people, right? So how do you make a binary being, a moral agent, able to deal with problematic language?
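The thesis problem Dr. Barry poses has a concrete shape: a naive word blocklist suppresses the joke but also everything innocent that mentions the word, which is exactly the prejudice he warns about. A toy Python sketch of that failure mode, assumed for illustration and not Maria Bot's actual moderation code:

```python
# Naive blocklist filter: blocks any utterance containing a flagged word.
# It stops the blonde joke, but it also blocks harmless sentences, so the
# filter itself becomes "prejudiced" against the word rather than the use.

BLOCKED_WORDS = {"blonde"}

def allowed(utterance):
    """Return True if the utterance may be spoken aloud."""
    return not any(word in utterance.lower() for word in BLOCKED_WORDS)

print(allowed("Why did the blonde bring a ladder to the bar?"))  # False
print(allowed("My sister has blonde hair."))  # False, though harmless
# The open problem: judge how a word is used, not whether it appears.
```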