Ever Wonder? from the California Science Center

...how engineers can benefit from diverse design teams? (with Maja Matarić & Ayanna Howard)

September 01, 2021 California Science Center Season 2 Episode 16

This week we’re digging into our archives to bring you a few unaired clips from past interviews. We noticed that when we spoke with roboticists Maja Matarić and Ayanna Howard (@robotsmarts), they both had a lot to say about the benefits of including diverse perspectives in the design process. Robots, algorithms, and artificial intelligence continue to get more deeply intertwined in our lives, and they can cause real harms to people if they are poorly designed. 

Do you ever wonder how engineers can benefit from diverse design teams?

 Maja Matarić is a roboticist and a distinguished professor of computer science, neuroscience, and pediatrics at USC. Check out Maja’s original episode about how robots can have personalities.

 Ayanna Howard is a roboticist, professor, and as of a few months ago, dean of the College of Engineering at the Ohio State University. Listen to Ayanna's original episode about whether robots can be biased.

Have a question you've been wondering about? Send an email or voice recording to everwonder@californiasciencecenter.org to tell us what you'd like to hear in future episodes.

Follow us on Twitter (@casciencecenter), Instagram (@californiasciencecenter), and Facebook (@californiasciencecenter).

Support the show (https://CaliforniaScienceCenter.org/support)

Perry Roth-Johnson:

Hello! This is Ever Wonder? from the California Science Center. I'm Perry Roth-Johnson. This week, we're digging into our archives to bring you a few unaired clips from past interviews. We noticed that when we spoke with roboticists Maja Matarić and Ayanna Howard, they both had a lot to say about the benefits of including diverse perspectives in the design process. Robots, algorithms, and artificial intelligence continue to get more deeply intertwined in our lives, and they can cause real harms to people if they're poorly designed. Do you ever wonder how engineers can benefit from diverse design teams? Okay, first up, we'll hear from Maja Matarić, a roboticist and a distinguished professor of computer science, neuroscience, and pediatrics at USC. If you haven't already listened to Maja's original episode about how robots can have personalities, please go check it out. But here's a quick recap: Her lab makes "socially assistive robots," which help people—like autistic children or Alzheimer's patients—with things that are emotionally difficult. At this point in the conversation, I was asking Maja how she improves her robots by directly involving people from these communities early in the design process. Alright, take a listen. Talk to me a little bit more about how important it is to involve people from the communities you're trying to serve, whether they're folks who are autistic or older people, in the design process, or even having them on the design team, uh, for your robot to be successful.

Maja Matarić:

So, we can't even know what the problems are that we're solving without working directly with the communities. I mean, I have worked with people who said, oh, here's a call for proposals from, let's say, the National Institutes of Health, for robots for Alzheimer's disease, for the elderly. So we have this idea, we have these drones and we're going to use them, and we're going to put them in an elder care home. And the drones are going to, like, fly around. And I said, oh, you realize that the drones are, like, they're going to completely freak people out. They're going to cause psychotic episodes. Um, they're not going to do anything. What are you going to do? Pick things up? And they hadn't thought about it, because all they could think about was what they know. That's very human. We know what we know, and we don't know what we don't know. So how'd you find out? Well, I told these people, I said, I'd love to talk to you about this after you have spent three days, eight hours a day, in the nursing home, just walking around, and just learn. And of course they couldn't be bothered to do that. And so they couldn't create anything that was meaningful. Um, and that's not enough. Three days is not enough. So I always tell people, for example, for autism technologies, there's no reason why you wouldn't have, uh, individuals on the spectrum, adults, working for the company, developing these products, because they're incredibly good at engineering, disproportionately so in fact, um, and they're the best spokespeople in terms of actually understanding the problem. So I think both research and development and companies tend to do this, especially in companies, they do this thing called market research, which is like, oh, we're going to send people questionnaires and ask them questions. It doesn't tell you anything. You actually need to bring the folks who are going to be using this product into the development process all along.
It's not that expensive and it's absolutely necessary. And of course, that argument goes all the way, which is, if you're creating products for humanity and, uh, you know, humanity is, you know, both genders or all genders, and you don't have all genders in your development people, then you're not really going to know how to develop the product properly. And it's the same in research. We have heard a lot about diversity being a necessity in companies so that you really represent people's opinions and needs, but it's the same in research, and people don't talk about it as much. So we have to worry about that, because in research too, we need to have this, you know, mix. And this is for me, I always say this, and it's absolutely true: I do what I do based on the great ideas that my students have had. So, you know, they got me into this and then we developed it together, but they got me into it. It wasn't me sitting on this high chair, being the Grand Poobah, telling them, this is the good idea, go do it. And having an incredibly diverse lab with all kinds of ideas, you know, autism, we work in autism, for seniors, and we did it because one of my students, David Feil-Seifer, who named the field, he was interested and he kind of pulled us in that direction. And it was really hard, and it's still really hard, but it's because he had the passion and the interest that we got into it. It wasn't that I did market research to figure out, this is a thing to do.

Perry Roth-Johnson:

Next we'll hear from Ayanna Howard, a roboticist, professor, and as of a few months ago, she's now dean of the College of Engineering at the Ohio State University. Again, if you missed Ayanna's original episode about whether robots can be biased, I really recommend taking a listen. Ayanna is a leader in the field and has many accomplishments, but her research on bias in algorithms and robots really caught our eye and was the focus of our conversation. One note before we dive into this clip: you'll hear Ayanna referring to "de-biasing the data." Some technologies and their algorithms—like search engines that deliver job postings for well-paying tech jobs to men and not women, or facial recognition having problems identifying non-white faces—have been shown to produce poor outcomes for some groups because the technology is trained on a biased data set. In other words: garbage in, garbage out. Some experts argue that to fix biased technologies, you first need to "de-bias" their data. At this point in our chat, I asked Ayanna about another proposed solution: adding more diversity to the teams designing the algorithms for these technologies. And we briefly touched on this earlier, but I just want to loop back again to this idea that adding more diversity in the field, and having more people on your design teams look like the communities they're designing these technologies for, can help reduce bias. Is that sufficient, or is it just, like, a first step to correcting the field as it goes forward?

Ayanna Howard:

It's, uh, it's, again, it's part of a solution. So just like de-biasing the data, it's not the solution, but it helps mitigate it. Um, diversifying your development team, your engineering team, is also a first step. And, and some of it is because, um, and it has nothing to do with, you know, a bunch of individuals that have the same experience not thinking about it; it's that if you don't have that experience, you're just not going to mention it. Um, I love to give the example of, um, so think about a self-driving car, um, and think about how you might design the opening of the self-driving car. If someone came in that said, hey, have you thought about folks that wear dresses, right? Your solution and your design might be slightly different. But if you have a number of individuals that all wear pants, it's never going to occur to you that, you know, there's probably 30% of the population that wears dresses. Um, and so a lot of times when you have someone who has a diverse experience in the room, either they will mention it themselves, or, because they're right there, you'll be like, oh, you know, it just occurred to me, maybe we should think about, you know, dresses. Sometimes it's just the presence of someone who's diverse, and the way that they talk about their experience, that gets everyone else to also think about their solutions.

Perry Roth-Johnson:

That's our show, and thanks for listening! Until next time, keep wondering. Ever Wonder? from the California Science Center is produced by me, Perry Roth-Johnson, along with Devin Waller. Liz Roth-Johnson is our editor. Theme music provided by Michael Nickolas and Pond5. We'll drop new episodes every other Wednesday. If you're a fan of the show, be sure to subscribe and leave us a rating or review on Apple Podcasts—it really helps other people discover our show. Have a question you've been wondering about? Send an email or voice recording to everwonder@californiasciencecenter.org to tell us what you'd like to hear in future episodes.