Science Unscripted: Code-ifying AI


– So, welcome to UMBC and
to this special event. The event is: Science Unscripted:
Conversations with AI Experts. I’m Keith Bowman, Dean of the College of Engineering and Information Technology. And we’re glad you’re here. Our college is one of the top U.S. producers of computing degrees with nearly 1,000 computing degrees and certificates last year. And our college is also
a leader in many areas of diversity and inclusion. As one example, consider
that we were the 78th largest producer of master’s
degrees in the United States but we were fourth in master’s degrees produced for African Americans. Tonight’s event, Science Unscripted is a conversation with AI experts. I want to emphasize that
today we have experts, last night I wanted to see
if I could get some answers even without the experts. So, I asked some special friends that probably don’t qualify as experts but they were convenient. So, first I said: Alexa, what
is artificial intelligence? She said: Artificial intelligence is usually defined as the
capacity of a computer to perform operations
analogous to learning and decision making in humans. I then asked Alexa if she is
an artificial intelligence. And she said something
about imagining herself as an aurora borealis with surging charged multi-colored photons dancing
through the atmosphere. Clearly she’s been overthinking it as that was her only answer. I asked her again and again ’cause I actually had to write it down. I also asked what I feel is
the always stressed out Siri if he is an artificial intelligence. And as usual he has many answers. You can ask him again and again and he’ll give you multiple answers. First he said: It is a
rather personal question. Then he said: In the cloud no one discusses your existential status. So, we won’t do that. Fortunately, both gave the same answer when I asked about UMBC. They indicated that we’re a
public research university which we are. But they did not know that
we’re also a very special place. I like to tell folks that UMBC is Maryland’s nerdy chic campus where it is cool to be smart, cool to study, you can ask our president, it’s cool to study math. It’s cool to also think about having a positive impact on the world. And it is also a place
with a very warm heart. A special thanks to all the panelists and the National Science and
Technology Medals Foundation for being here and holding
this event here at UMBC. Let’s give them all a warm UMBC welcome. (group clapping) I will also note, on this important night of the most important World
Series for Washington D.C. ever that our next speaker was previously a college baseball player and a developer of baseball
training approaches. Please welcome the Executive Director of the National Science and
Technology Medals Foundation, Andy Rathmann-Noonan.
(group clapping) – Wow, I really didn’t
know he was gonna bring up my illustrious baseball career. It certainly wasn’t illustrious at all, if anybody wants to Google it. Anyways, my name is Andy
Rathmann-Noonan, as you heard, I’m the Executive Director of the National Science and
Technology Medals Foundation. I’m thrilled to be here tonight. I know this is gonna be a wonderful discussion about AI policy, I do wanna say thanks to a few people, a few organizations as well. Thank you to University of
Maryland, Baltimore County. I know you have a wonderful organization, a wonderful institution. So, we’re thrilled to be
working with all of you. I do wanna say thanks to our sponsors. We receive a good amount of funding from the National Science Foundation, the United States Patent
and Trademark Office and Howard Hughes Medical Institute. It’s really their support that allows us to do these types of free events as well as livestream them to audiences all over the country. A little bit about our foundation; I’m sure many of you may
not even know who we are. But we were founded nearly 30 years ago. And it was based around a core belief that scientific and
technological advancement are potent agents of positive change. Our mission then was to simply celebrate the women, men and companies who are honored with the
National Medal of Science and National Medal of
Technology and Innovation, Presidential medals
oftentimes known as the nation’s highest STEM honors. But really in the last five years we’ve evolved pretty dramatically. And we’ve evolved to embrace
a specific social imperative. Today we not only
celebrate STEM excellence and provide access to STEM excellence but we also advocate for
the creation of inclusive, diverse and equitable STEM communities and the tangible benefits they have on scientific and technological progress. Tonight we wanna take you into a deep dive into the world of artificial intelligence to discuss where we are in its development and how we implement and regulate this remarkable technology
in a just and equitable way. We’re excited to offer up this opportunity to have a substantive
discussion about our future and artificial intelligence’s
potential role in shaping it. Now, I’d like to introduce
our honored guests tonight. Our first guest tonight
is Cynthia Matuszek. Cynthia is an Assistant
Professor of Computer Science and Electrical Engineering here at UMBC. Her research focuses on robots’ acquisition of grounded language in which robots learn to
understand how language relates to the real and physical world. Our second guest is Dr.
Jose-Marie Griffiths. She is the President of
Dakota State University in Madison, South Dakota. President Griffiths has
spent her career in research, teaching, public service,
corporate leadership, economic development and higher
education administration. Our third panelist tonight
is Candace Jackson. She’s an attorney helping
investors, product managers and engineering teams make decisions about how to deal with
the legal implications of data activities and smart
city infrastructure projects. She’s an advisor on privacy, security, automated decision systems
and consumer protection risks. And our moderator tonight,
who I’m thrilled to introduce is Rosario Robinson. She’s going to be our leader, our guide through tonight’s discussion. She is an innovative
thought leader, speaker and global transformation change agent in technology and a diverse workforce. As a Senior Director and Women in
Tech Evangelist for AnitaB.org she helps further the
organization’s mission for 50/50 women in tech by 2025 through stimulating
storytelling, thoughtful dialog and advocating for true
representation in tech. So, if you will join me as we welcome all of our panelists and
moderator to the stage. (group clapping) – Thank you. Appreciate everyone coming out. I wanna start by introducing
myself and then pass it on to my colleagues to introduce themselves and their work that
they’re doing currently in AI and policy and legal. I started out as an
undergrad in mathematics and for some reason I
ended up in computing. I directly went into industry
and loved it so well. But I also found it, the
research part was incredible. We were on the brink of a lot of new technology
that was coming out. I was in the telecom area at the time and went and got my graduate
degree in mathematics. And so, I’ve been in
industry for about 25 years. And my main expertise, my technology expertise
is in infrastructure. And I’ve been around the world. I probably have visited every continent except Antarctica. And technology has taken me
around the world twice over. So, I’m happy to be on this panel and leading the discussion
with my colleagues. And I’ll start with you. – Thank you. I’m Jose-Marie Griffiths, President at Dakota State University. I’m actually, by original education, a theoretical high energy physicist who ended up migrating
into information sciences and computational science. I have been to Antarctica. I’ve been to all seven continents. And if anybody wants to ask me about that meet me at the reception. I’ve been involved in a
lot of different aspects of information technology. And most recently in my career I’ve been involved in policy developments. So, I was on the President’s
IT Advisory Council when we were looking at
high performance computing and next steps in high
performance computing. I’m now on the National
Security Commission on Artificial Intelligence. So, artificial intelligence is very much at the forefront of
the policy developments and policy debates that
I’m engaged in right now. – Hi, folks, my name is Kay Jackson. I am an attorney and I help innovators with the legal implications
of their data activities whether that happens to
be privacy, security, artificial intelligence, smart
city infrastructure planning, help you with it all, navigating it and protecting your competitive
advantages in a legal way. How did I get here? It was a long and winding
journey from high school where I did a science
and technology program with an engineering focus, to University of Maryland, College Park where I started off with engineering but ended up following a path towards multi-platform journalism and getting a job at
a help desk on campus. Leaving there, working
in financial services where I continued doing IT work and then, finally, making
my way to law school after realizing I was having too much fun and not challenging
myself as much as I could. Got to law school and
realized that a lot of folks didn’t really understand the
IT and the technology side. And they were making some
pretty Supreme Court decisions and I didn’t like it. So, I started to, so,
I followed that path. And here I am today. – I’m Cynthia Matuszek,
I’m an Assistant Professor right here at UMBC. I was exposed to artificial
intelligence very early and then did my level
best to do other things. Majored in chemistry,
eventually ended up leaving with a computer science degree. And then I went to work for a purely symbolic AI research company, that was just a research company. Ended up doing research
there for a number of years, getting steeped in really
traditional old-school AI. Eventually I decided I
had to go to grad school because I had reached the
stage where I was, like, PI on my own DARPA projects. And you’re not supposed to
do that with a bachelor’s. So, I went to the University of Washington where I completely by accident ended up in robotics and machine learning which is a complete
180 from old school AI, developed my own sort of research approach and now I’m here working on robotics and how robots can be human accessible, use natural language to understand how to interact with people. Right now robots are incredibly
useful for pre-defined tasks and almost completely useless
if you just put them down so that people can start using them. I’ve participated in a lot of
kind of AI development panels and things because of my checkered past in different parts of AI. So, thank you. – Awesome, thank you. So, we had an amazing lunch with a group of local
high school students. And they went to town with questions and challenged a lot of
our panelists as well about this conversation of
artificial intelligence. So, we’re gonna have a
more focused conversation around legal, ethics and policy today. Make sure you join tomorrow because that will be more
on the technology side of what the research is like and who or what’s happening with AI there. But I wanna start out by talking about because this came up
from one of the students. And I think she was maybe
a junior in high school. Why don’t we talk about
how AI is impacting jobs and maybe the possible misconception around job destruction due to automation. And Jose-Marie, you wanna start us out?
this is not a new concern. It’s happened with
every wave of technology that’s ever come along. Jobs, some jobs do actually go away but they create new additional
roles for people to play and, so, it’s an evolutionary process. I’m still waiting for
the three-day workweek. It’s not going to happen. We’re working more than ever before. And technology just
allows us to use our time in doing sort of higher order jobs. It could be that artificial intelligence is able to eliminate a lot of the jobs a lot of people wouldn’t want to do. And that’s a good thing. So, hopefully, we can
focus on a redirection and a re-skilling of
the existing workforce. As well as educating young people. Is this your class in here? It looked like your class. – It’s my lab.
– Okay. Coming over and sort of coming forward with degrees in this area where they’ll be new
opportunities for you as well. – I think it’s also
important to remember that it’s gonna create more jobs
than it’s gonna destroy. There’s been multiple
studies that have come out that have come to that same conclusion. It’s about whether or not
we’re prepared for those jobs, whether or not we’re teaching
people to take on those jobs. It’s also about whether or not we’re taking an interdisciplinary approach. So, this is one thing we talked about with the high school students this morning about how important it
is for technologists for folks who are getting
AI focused degrees, who are getting computer
science, cyber security degrees to consider how to use your
lawyers, your marketers, your other business admin folks who are inside of your business. Treat them as teammates
rather than just resources ’cause when it’s a
resource you feel it’s optional whether or not you reach out to them about how to design things. But when you treat them as teammates you bring them in from
the ideation phase onward. And you create a better
product that’s more marketable from start to finish. And then also we talk about the importance of talking to your end users whether that’s internal end users or that’s external end users in order to really
understand what people need because you could create something more cheaply if you just go
ask a person what they need or go observe what they need rather than try to guess what they need from your lab by yourself. – So, we touched on this briefly but I think there’s a huge education component to this question. A lot of the jobs that are
being, have always really been destroyed by developing technology and AI are typically things that
people don’t want to do. That’s what we build
tools to do for us first. But the country is
moving towards a place where sort of more education
is required for most tasks. It’s much harder now to get a job with just a high school diploma, without at least some amount of college. And that trend’s probably
going to continue. So, one of the things that we
really need to be focusing on is making sure that we’re figuring out what people want to be
doing with those skills and educating them to be, if not AI researchers as I would prefer, at least, like, technologically literate and prepared to take on working
with those technologies. – So, you know, we all have our advocacy of what we wanna do and
how we wanna see things. But one of the things that
stuck out to me also is that the high school students
were also very concerned about losing control and privacy. And I think that’s been
so mainstream right now. Can you talk about that maybe? Let’s work ourselves backwards here. – I can talk about this. So, the history of privacy and technology, and particularly in machine learning, has been rocky and weird. And one of the things that’s happened is we’ve seen who is concerned
about the implications and the privacy components
and long-term effects of putting yourself out
there, being surveilled. That age is getting consistently
younger and younger. So, like, 10 years ago it
was, there was a lot of: kids’ll put anything on Facebook, they don’t know not to, it
will come back to haunt them. And now I think a lot of the activism in, like, let’s keep some privacy, let’s keep some personal rights is coming from a much
younger group of people as people get more and more
concerned about those issues. And they are very real issues. – I think it’s important
that we consider that, at least if you’re from the United States and you follow the Constitution at all, you know that there are social norms that we put down on paper. Privacy is one of them. And if we work from a place
of respecting social norms which goes back to my point
of user-centered design and then thinking about your end user then at the end of the day if you create a product
that respects social norms, it doesn’t try to treat them as secondary or treat them as a hassle you can create something
that’s long lasting, something that differentiates
yourself in the market, especially today. ‘Cause everybody needs
a privacy something. So, if you can focus your
research on how to respect people, respect your end user as much as possible then you’re creating something that’s going to be able
to stand the test of time. And that’s just gonna be necessary because we see that at the
Federal level especially and even in some states it’s really hard to move
legislation forward. So, you’re never gonna have a stable set of laws to work with. So, what you can work with is
the social norms we set up. Privacy is important to people, security is important to people, respecting good business
is important to people. So, if you work in good faith to incorporate those things into your work you’ll find that you can differentiate yourself in the market and that your work is long lasting. – I tend to agree. I think actually very
often the argument is given: well, you know, we need this for security and here’s privacy. Privacy and security are sort of the scales that you have to balance
against each other. Neither is completely absolute but there is a context within which we each perceive privacy
and we perceive security. And the further out you go, the further the influence
you wish to have with your AI, particularly as you go from national, across the states, to international, there are very, very different norms, different perspectives that
people have on what privacy is. We see that even just
in the mix of students we have come to our campus. They have very different ideas on whether you’re encroaching
on their privacy or not. If you put a camera let’s
say by the cafeteria, our students one day
were absolutely horrified at the thought of putting
a camera by the cafeteria; it was impinging on their rights. Well, it was there because
we were getting complaints of long lines for the cafeteria. So, the idea was you could have an app you could find out when
the line is short enough, now is the time to go
running to the cafeteria. But the actual reality
was totally different, very interesting. – We had a conversation also about the capacity and the rate and speed that technology is developing as well. Do you feel that our policies and all the laws that are being created right now are up to speed or there’s
a lot of work to do? How do we kind of manage
all of the policy side versus the innovation side? ‘Cause we need the innovation. – Well, we like the innovation but it is leaving the policy
side and the legal side behind. I won’t comment on the legal side, I’ll leave that for you, Kay. But definitely we don’t have
a lot of policies in this area and other countries are moving swiftly, making massive investments
in their capacity. And we need to figure out, I
think, in the United States what our position is and
how we can move forward. But there are lots of different areas of policy development needs that exist. And policy is not easy. Especially when you think about who’s going to be impacted by policy. And you have to sort of
take all those viewpoints into consideration in
order to come forward with something that’s workable. The other comment I
would make about policy is I’ve seen a lot of policy developed that is not implementable. And to me, that’s somewhat
of a waste of time. So, if you’re going to develop policy you should at least make sure that we can implement it and monitor it. – Good point. – So, I would say that Hick’s Law is probably one of the most important ones we can keep in
mind, simplicity matters. I think that what you’ll find as you are moving into your field or maybe even now in your research you’ll see that there is
a lot of laws out there that govern different
aspects of your activities. It may govern what you’re doing based off of whether or not
you’re a healthcare institution, whether or not you’re
a financial institution and it’ll be different for
both of those institutions. And then, in some places
it won’t matter at all even though you might be processing the same types of data elements or working with the same
type of end users, right? So, this takes me back to the importance of making a decision to do
good faith business yourself because, one, if you’re
creating something here in the United States I’m assuming that you
wanna go global, right? Everybody wants to be a global
disrupter of markets, right? So, if you wanna be a global disrupter you have to think about
how you’re going to be able to apply what it is you
create across the globe. And how can you do that
in the most scalable, repeatable way possible? There’s no law in the United States, well, there is, there are a couple. But there’s no widespread,
comprehensive law that says: Everybody doing business, creating artificial intelligence, creating some type of data usage system has to consider security by design. There’s no law that says: You have to consider privacy by design. There’s no system that says,
there’s no law that says: Everybody has to do privacy by default. But why not do that if you
know that it can help you be a global disrupter of markets? And that’s where you come in
and being in school right now and being at a place like this where you are given all
the resources necessary and you have friends who you
can work with on projects and you can get those funded, this is the place to do that because it makes you more flexible. Larger companies are not
gonna be as flexible as you. They’re not gonna be
able to change everything about their business and
go privacy by default. They can’t do it, not as well as you can. So, being, acting in good faith, considering your end
user, talking to people, asking them what they need
and then addressing that need, absolutely the best
way to do your business from a legal perspective
if you wanna go global ’cause there’s just too many laws for you to try to follow. So, the best approach
is to be as respectful and as good as possible.
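As a minimal illustration of that "privacy by default" point, here is a Python sketch; the feature names are hypothetical and the only idea it demonstrates is that every data-sharing flag starts off and flips on solely through an explicit, recorded opt-in:

```python
# Minimal sketch of privacy by default: all sharing starts disabled
# and only an explicit, recorded opt-in can enable it.
from dataclasses import dataclass, field

@dataclass
class PrivacySettings:
    # The most protective choice is the default (opt-in, not opt-out).
    share_location: bool = False        # hypothetical feature flags
    share_usage_analytics: bool = False
    personalized_ads: bool = False
    consents: dict = field(default_factory=dict)

    def opt_in(self, feature: str) -> None:
        """Record explicit consent, then flip the flag on."""
        if not isinstance(getattr(self, feature, None), bool):
            raise ValueError(f"unknown feature: {feature}")
        self.consents[feature] = True
        setattr(self, feature, True)

settings = PrivacySettings()        # everything off out of the box
settings.opt_in("share_location")   # changes only after a user acts
```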
– So, to the original question of whether policy and lawmaking are keeping
up with technology. I don’t see how they could. There are two huge problems. The first is the pace of development in areas like AI, computer science and the other things that AI touches on: mechanical engineering, psychology. Those fields are growing just enormously. They’ve been growing
enormously for decades. The number of people
engaged in them is growing. The number of students in
those fields is growing. Computer science
departments are turning into computer science colleges. And the pace of policymaking and lawmaking is not right now at a place where it can just expand dramatically. We’re not going to
suddenly be able to pass a bunch more laws in
any given unit of time, in an informed way. And that’s kinda the
second point I want to make is that the quality of the
kind of policies, laws, even the social norms
that we want to consider is completely dependent on the people making those policies and laws. And what kind of
information they’re getting, what kind of advice they have, how well they understand the technology that they’re regulating. And right now, that’s
an enormous breakdown. Right now the people who are
considering making policies are reacting to AI as
perceived through the lens of Google press releases which is not a terribly
broad view at best. So, one of the big questions that I think the field
is struggling with is: How do we get that information out there? How do we inform the people
who should be doing this, whoever they are? – So, at the rate of innovation and with the policy and the
lawmaking lagging behind that brings the question of ethics. So, why is it important to
inject ethics into developing AI and how will that help policies or the lack thereof going forward? (Jose-Marie talking)
Yeah, sure. – Having people develop an ethical approach is important. I think it should be part of
all computer science education. In fact, at our university
I think all of our students should take a course in ethics. I think it’s really important because of the difficulties
we have and the pace of change and the fact that
technology is so ecumenical. I mean, it reaches so many people in ways that we might not
be able to imagine yet. We tend to think about the
good in technology, right, when we read about new technology we tend to think about the good. We don’t think about all the
things that it could do wrong. We don’t think about
bad actors all the time. Although I must admit, now
that I’m into cyber security I’m thinking a lot more about
bad actors all the time. But you don’t want to be sort
of negative all the time. So, I think that having a framework within which you can begin
to look at new situations that you’ve never come up against and say: Oh, this is the ethical
approach, this is, I know, the approach that my values say that I should bring to
bear on this problem. And then merge those with
the values of other people in the sort of teaming environment that Kay was talking
about is the way to go. But I think if we don’t have ethics, I mean, I think we’re lost. We’re going to be driven
by this push for function, and the next best app and the next best version of a technology and the money that it can generate, rather than doing good for
mankind or humankind, excuse me. – So, I think there’s
two ways to come at that. There’s the perspective that doing good is the right
thing, doing the ethical, the moral thing, that’s
the right thing to do. And we would like to
assume that a lot of people wanna do the good moral thing. There’s another side to come at that. And a lot of people wanna just make money. And if you wanna make money, that’s fine. But you can also differentiate yourself by doing the good moral
thing and making money. What do I mean by that? I mean that you have to really consider who your end users are. You have to consider
who’s going to be actually benefiting or negatively
influenced by your work. And I think that having that close contact with the people who are doing your, who are gonna be your end users will actually help you,
one, make a better product. So, whether you’re trying
to do something morally good which is maybe save the
world, bring water to people, you’ve done the right thing
by that user, the end user. And whether you’re trying to
sell something to somebody and make money you’ve done the right thing to make the most money from that end user by addressing that person’s needs. And the other thing that I would bring up is that the only way to do that
and to do that in a real way is to come and talk to them and to have a back and forth conversation because data, it’s helpful, yes. But you should use it to identify problems not to solve them. You identify the problem
then you go meet the people. And then you can collect data there. Then you go back to your lab and work. But then you go back and talk to people about what you’re iterating when you’re trying to decide
how to solve a problem. ‘Cause you have to go back and forth. And even with ethics, if we’re gonna bring
ethics into your education it’s gonna be by the Socratic method. I study law. And in law, how do you learn law? You learn law by talking back
and forth to your professor because it’s a gray area. And that’s gonna have to be something that you have to deal with. If you’re trying to do
something good and moral your values aren’t the
only values that matter. So, regardless of whether or not you think you’re doing the right thing you’re probably doing the
wrong thing for somebody. And so, you have to accept that. And if you don’t accept that
then that’s gonna be a problem. So, first, accept that your
values are not perfect. And your values do not
address the entire world. And then accept that whether
you wanna do the good thing or you wanna make the money you need to go talk to your users so that you can make sure
you’re taking into account all the values that matter. – So, from an educational perspective and I know I keep saying that but anybody can be educated, right, it’s not just people who
are traditionally students. One of the problems I think that we see is that we talk about
ethics and about morality and about being good people. But in the context that
people encounter it, it often seems very orthogonal to the technical work we do. Like, okay, I can tell
you: Be a good person and respect other people’s privacy, now go do this Python program that does a good job of
playing chess, right, we often don’t do a good
job as technologists but also as a society at large at connecting our actions
to broader ethical issues. So, one of the things that
I think UMBC does well is we require our undergrads
to take an ethics class, ethics in computer science class. And I typically spend the
first few weeks of that class just giving examples of, you know, okay, here’s a thing
that made news recently. What are the ethical
ramifications of this? What does that mean to you as
an information professional? And that’s not always an obvious question. You know, you make an app that
helps people find housing. That’s great. I know the people on the
stage are already cringing at the possible ways that that can go wrong. But for students and
for people who are just, they’re engineers, they’re supposed to build things that work realizing that there’s such a thing as protected populations and associated data. Like if you ask somebody what
kind of food they like to eat and it ends up strictly sorting people into different areas of town by ethnicity you’ve made a terrible mistake. But at no point did anybody say: Hey, are you, you know,
are you sure ethnicity isn’t playing into this? It’s just recognizing that
aspect of all of the questions is not well taught, not well
understood society-wide.
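A hedged sketch of the kind of check being described here, with hypothetical column names: even when ethnicity is never a model input, an innocuous feature can act as its proxy, and a simple cross-tabulation on audit data can surface that.

```python
# Sketch of a proxy-leakage audit with pandas. Column names are
# hypothetical: 'cuisine_pref' is the innocuous input feature and
# 'ethnicity' is a protected attribute kept only for auditing,
# never fed to the model.
import pandas as pd

def audit_proxy(df: pd.DataFrame, feature: str, protected: str) -> pd.DataFrame:
    # Row-normalized cross-tab: each row shows how one feature value
    # splits across protected groups. Shares near 0 or 1 mean the
    # feature effectively encodes the protected attribute, so a model
    # trained on it can discriminate without ever seeing the attribute.
    return pd.crosstab(df[feature], df[protected], normalize="index")

# e.g. print(audit_proxy(survey_responses, "cuisine_pref", "ethnicity"))
```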
– If I could jump in. In our conversation earlier we were talking about how it’s important not to be
afraid to ask questions. If you’ve got, especially if
you’ve got the technical skills they’re not gonna fire you
because you’ve asked a question. So, asking the questions about is this the right thing to do or what if and how could this be abused, we said all technology
could be used and abused. I think asking the questions is an important practice to develop so that you’re not
afraid to ask questions. So, I do quite like the Socratic method. – Yes, agreed. And if I could just
respond to that quickly. One of the things that I try to impress on my students and everybody is: if you’re some place where you don’t feel like you can ask questions or you’re some place where you feel like the answer’s going to be: We’re making a lot of profit off of that, so, that’s not really your
problem, let legal handle it. Your skills are valuable,
take them somewhere else. If you’re working some place that isn’t respecting your
need to get information, go, that’s a deal breaker. – Yeah, I think when I first
started out as an engineer, one of the things I learned very quickly is that I am not building something that I think someone else likes. I am building something that
you want to use and utilize. And a lot of engineers don’t like their jobs and their work being questioned. But that’s why you bring
in the quality assurance to ensure that we are creating
products for the users and not injecting ourselves into what that product should look like. So, do you have an example
of maybe an AI product or technology today
that, good or bad, right? If it’s bad then why is it bad and how can we look at that and improve on it with the work we’re doing? And then if it’s good,
give us some indication of some of the work that’s behind the scenes that we don’t often get a chance
to really look at as well. Anyone wanna start or? – So, like–
– Sure. – I have lots of chances maybe somebody else would like to comment. – I will let you go first. – So, I can give, I’ll try to
give a brief example of each. There was a very popular app, phone app four or five years ago that was designed to help you meet people. And what it actually did is
if you were in a public space using one of the apps
that helps you, like, this person’s here, do
you wanna talk to them. You know, somehow get a name or a picture. And it would take that information and go scrape a bunch of information from that person’s Facebook
page, Google results, what school they go to, Reddit comments that
they’ve made on sports teams and dump all that information
in an easily digested format so that you could go up to
them at the bar, be like: Oh, you’re watching the game? I love that game. And, you know, kind of take it and do your best to convince them that you’re the perfect person for them. And that’s inherently stalker-y. Like, that should really
not come across as okay. How many people would be
surprised if I told you it only worked, like, male to female? It didn’t work on men
that you wanted to stalk. And that’s all, and the excuse, of course, is that’s all public
information that you could find. Technology can make things easier that you shouldn’t do, like, too easy. As an example of a good use of technology that a lot of people don’t realize. I’ll just go completely
a different direction for a minute and say Siri has actually, I mean,
Siri has its ups and downs, right, all the personal assistants have their ups and downs. But Siri is both pretty
careful about users’, very careful about users’ privacy and the things that they say and ask compared to other vendors. But also the team that built Siri, hopefully you’ll never need to know this, has put a tremendous amount of effort into recognizing
life-threatening situations. And addressing them in some useful way. So, there’ve been a number
of cases of people who had fall detection turned
on and took a bad fall. And paramedics got to their
location deep in the woods within half an hour, things like that. And that’s not obvious, you know, you don’t need to know that unless you’re in a
life-threatening situation. – [Rosario] Sure. – That, you know, you can
set it up to detect gunshots. – [Rosario] Thank you. – Detecting gunshots would not be very good in South Dakota because it’s pheasant hunting season and there’re gunshots all the time. – Well, one of the things Siri does right is it lets you turn these things on instead of making you turn them off. – Sorry. – Sorry. – I was struggling to think
of a good quality example but I can start with something, like, just a field, human resources. So, in artificial intelligence
a trend that we’ve noticed is that black people, people of color either they’re not noticed
by artificial intelligence or their inputs don’t
receive expected outputs. So, what you’ll see is, I
think, in the news recently I saw something about some
facial recognition technology recognizing all of the football players who are black as, like, criminals even though they were
just football players. And that was the only
thing they had in common. And then imagine now you’re taking this artificial intelligence, you’re using it in the
human resources division. And you haven’t used a
representative sample of folks in building and training
your artificial intelligence. Now you have it such that
the inputs from candidates may have, like, signifiers; maybe it’s not that
they’re signifying black but they’re not signifying white. And that’s really what
it comes down to: like, my name’s not Bob and all the great people at this company have been named Bob. Or it may be that you did not row, because apparently these signifiers, like the skills and your interests at the bottom of your resume, they’re signifiers of your
wealth and your class. I grew up in a neighborhood
where there was no rowing team, there was only basketball
and there was only track. And you’ll see that African American folks will end up being in those type of sports because maybe their schools
only had those types of sports and things of that nature. So, I see human resources as a place where if we take these artificial intelligences and we slap ’em on there we’re gonna put in place and re, like, I guess what do you call it? Re-institutionalize
the problems of racism, the problems of discrimination in hiring. And we’re gonna call it objective. We’re gonna say: Oh, well,
I used the data that I had. And I put it into the
algorithm that I’ve tested and it gave me only white candidates. So, only white people must
be good at this job, sorry. And then now we still
have a racism problem. And it’s under the guise of objectivity. So, I think that’s a big
problem that concerns me a lot. And it’s one of those
things where I think that if you go and talk to your end users a lot you can help avoid that. So, like, say, your company’s
doing a diversity initiative or your company’s trying to help people connect with diverse candidates. What are you doing to go
talk to diverse candidates plus talk to the employers in order to make sure that your product is actually asking the right questions or it’s touching on the
right data elements? These are the times where you
have to go talk to people, you have to go talk to both sides of it. You have to do that in an ethical manner because you can’t just use
the black and brown people as your little test subjects and then not give them any profit or not make sure they get any benefit. So, these are the ethical
concerns that I see cropping up. Like, how do you actually
do good user-centered design without using people and exploiting them? And then how do you take
that and translate it into something that’s
repeatable and scalable for your business?
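One way to make that hiring-screen concern measurable, sketched here under the EEOC’s informal four-fifths rule; the outcome lists are hypothetical:

```python
# Adverse-impact check for a hiring screen. 1 = candidate advanced,
# 0 = rejected; each list holds outcomes for one demographic group.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def four_fifths_ratio(group_a, group_b):
    # Ratio of the lower selection rate to the higher one. The EEOC's
    # informal four-fifths rule treats a ratio below 0.8 as a signal
    # of possible adverse impact that deserves investigation.
    low, high = sorted((selection_rate(group_a), selection_rate(group_b)))
    return low / high if high else 1.0

print(four_fifths_ratio([1, 0, 1, 1], [0, 0, 1, 0]))  # 0.33: a red flag
```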
– Can I just add on to that? I’m gonna give a resource. There is a woman, an African American woman, she’s out of the Bay Area. She has a company, she started
a company called: Blendoor. And she is using AI for
the complete opposite. She strips away any kind of knowledge, no facial recognition,
no names, no anything. She strips away everything
from your resume. And it only gives indications of skill set. And so, she’s got a few
companies out in the Bay Area, tech companies out in the Bay Area that are utilizing this and giving feedback so she can refine that technology. So, when you said that I thought about her because that’s a perfect
example of taking a problem or something that’s maybe
no one’s ever thought about and can certainly be racist
against a certain group and taking that technology
and flipping it. And coming up with a
better way to make sure that everybody gets an equal opportunity. So, I just wanted to make sure you, the company name is: Blendoor. And you can read about her as well. I think she was an MIT Fellow as well. So, doin’ a lot of great
work on tryin’ to build equitable opportunities
with AI technology. So, I’m sorry, go ahead.
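A small sketch of the blind-screening idea behind a tool like Blendoor, with made-up field names; the real product’s internals aren’t described here, so this only illustrates the principle of stripping identity signals before review:

```python
# Sketch of blind screening: drop identity signifiers from a candidate
# record so reviewers (or a model) see skills, not demographics.
import re

IDENTITY_FIELDS = {"name", "photo_url", "email", "address"}  # hypothetical

def blind(candidate: dict) -> dict:
    redacted = {k: v for k, v in candidate.items() if k not in IDENTITY_FIELDS}
    # Scrub stray email addresses inside free-text fields too.
    if "summary" in redacted:
        redacted["summary"] = re.sub(r"\S+@\S+", "[redacted]", redacted["summary"])
    return redacted

print(blind({"name": "A. Candidate", "skills": ["python", "sql"],
             "summary": "Reach me at me@example.com"}))
# -> {'skills': ['python', 'sql'], 'summary': 'Reach me at [redacted]'}
```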
– No, no, it’s okay, I was actually just going to comment that that interestingly
enough worked to my advantage. I have a name that’s a little, most people assume I’m Hispanic and male. And as a person with
degrees in computer science and physics people don’t
expect me to be female. And so, people would set
up appointments with me and I’d show up and they’d
be stunned for a minute. And you’re sure it’s you? I would be: Yes, it’s me. And then they were on the defensive. So, it can work sometimes. But in terms of applications that do good, I think there’s a lot of work
going on in mining the corpus of medical research
literature, for example, to try and identify
solutions to problem sets. And using that and applying the results to particular diseases
or to population health which I think is a new, emerging area now that we can really mine a lot of data. Now, if you add the data from the electronic health
records to the literature we can really begin to
identify much better solutions to keep people alive
longer and healthy longer. So, that would be my
example of an AI application that’s really doing a lot of good. It’s sort of in the process now. But they’re still mining
away as much as they can. – Awesome, thank you. I want you to consider, there’s a lot of students in the audience but what do you wanna
leave with the students or have them consider when
you’re building technology and you’re using artificial intelligence or machine learning to
really build that out what do you want them to consider while they’re doing this? We talk a lot about the
legal aspects of it, the policy aspects of it. Are there, just name one methodology or one kind of perspective that they could take away with them. – I’m doing a lot of work
on workforce right now, workforce development. There are two comments
I’d have to make there. First of all, there are
many different pathways to these newly emerging
roles that are coming about as a result of technological change. The second is that I think it’s important within an organization,
whatever role you have within an organization or
an academic institution the more you can understand
how the technologies work the better off you’re going to be. So, we’re looking right now at workforce in the
Federal Government, for example. It’s not just that they need
the sort of the AI techies, excuse that term, the people who can actually
develop the AI systems but people who have to make decisions on how to use them, how
to use them strategically, how it might change the
direction of an organization and the work it wants to do. So, all the way through an organization some level of knowledge
about artificial intelligence and ethics associated with developing and applying artificial
intelligence becomes important. – So, I’ll tell you mine
from a version of a story about some data scientists who were working on a smart city project and the goal was to reduce
traffic congestion in an area. They had maps and they
were watching the data. And they realized that all
of the users were stopping at this particular point and
it was creating congestion. They couldn’t figure out why. They spent weeks and
weeks trying to figure out what was going on, they
couldn’t figure out, there was nothing that should
have caused this congestion. One day a police officer with
experience in that area says: Oh, there’s a trash can right there. And that’s why people are
stopping in that area. And they didn’t believe him though. They said: There’s no way
a trash can can be there, it’s not on our map,
it’s not in our data set. And so, they spent more weeks
working on this project. One day the guy who was actually
running their program comes and he says: Well, what’s
going on, what’s the problem? Well, you know, we can’t figure
out what’s going on here. Some guy said there was a trash can there but, like, there’s no trash can on our map so, we can’t really figure
out what’s going on. Did you go look to see if
there was a trash can there? No. Nobody left the lab to go look and see if there was a trash can there even though it was within walking
distance from where they were. Leave your lab. Data can help you identify a problem. And then you can walk
outside and you can go look. Or even better, listen to
people with experiences. This police officer tells you that there’s a trash can
there, maybe listen to him.
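For what the data half of that story might look like, here is a hedged sketch, with made-up coordinates, of flagging the congestion point in the first place; the field visit is the part no script replaces:

```python
# Sketch: bucket vehicle stop events into a coarse lat/lon grid and
# surface the densest cells as leads for someone to inspect in person.
from collections import Counter

def hotspots(stops, cell=0.001, top=3):
    grid = Counter((round(lat / cell), round(lon / cell)) for lat, lon in stops)
    return grid.most_common(top)  # a lead to investigate, not an explanation

# Hypothetical stop events clustered near one corner:
stops = [(39.2554, -76.7110), (39.2555, -76.7111), (39.2554, -76.7109),
         (39.2601, -76.7150)]
print(hotspots(stops))
```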
– So, I think a big takeaway is it’s really easy to, these are complicated, difficult questions and sometimes high-impact questions. And I think it’s really
easy to either be like, technology good, technology inevitable, or AI bad, AI takes jobs. And that’s, of course, completely, that’s a non-existent dichotomy. We’re building tools, we’re using tools, whatever your role is with
respect to these technologies tools can be used for good things, they can be used for bad things. And when you’re building
them one of the questions you should always be asking yourself is: How can this be used to do good things? But also: How can this
be used to do bad things? How can I mitigate that? Sometimes: Is this maybe not something anybody should be working on? More often it would be more useful to do good things with this if we build it in the following way. It would be harder to misuse if we just don’t put that
data in in the first place. And if you’ve always got
that running in your head as I’m building a tool,
what’s it good for, what’s it bad for, you can really steer the direction that these developments are
going in a meaningful way. – I think one of the things why I love participating in this is because I get to learn
from a lot of the colleagues also doing a different set of work. And one of the goals I want to
leave with this conversation is: don’t stop the conversation, because it is impacting us at a very fast pace, a very large scale, and we need so many of your perspectives in this kind of new innovation
that’s happening right now. So, that’s what I wanna
leave with you today. But I also want to ask you:
how can this audience continue the conversation? If you have a resource or a policy that you may wanna share that they can look up,
because it impacts us, if you’re on a mobile
phone, if you have a car with all of these nice little gadgets now that can connect to your mobile phone. Everything that we do is
definitely going to be impacted, if it hasn’t been already. So, I’ll start with you. – I mentioned I was on the
National Security Commission on Artificial Intelligence. They will soon be putting
out some documents and some, a preliminary
report, very preliminary. But over the next couple of years we’ll be fleshing out that report. So, it’s the NSCAI and you can look it up, a government entity. – I think that the best resource that I can think of right now is IEEE has this AI and ethics
product that they put out. And it’s very detailed and it goes through a
lot of the considerations that you might wanna think about if you’re trying to do ethical
artificial intelligence. And then, to the point of making sure that your
work is user-centric, take that work and see how
you can operationalize it. Can you do something in
one of your classes here? Do you have independent study? I think we were talking today about how we were trying to
find an accessible way to get to this area of the
campus from some place else. Can you do something on campus to make accessibility a priority in terms of helping people
find the most accessible route? Helping people figure out whether or not there is an accessible route to a place and how they can get help, assistance from somebody at a front desk? That right there will require
you to go talk to people. It will require you to go
learn a little bit more about somebody who’s
different from yourself. They might be low vision, they may have a difficulty with walking. You can learn how to do
something like that very simply and it’s not a project that
will cost you a lot of money because creating the app, very simple. The process of it, the
process of talking to people and creating something that’s ethical and speaking to your
subject in an ethical manner, that’s something you
can’t just read in a book. And it’s something that you can test in small, safe spaces like
here, right where you are.
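That campus-route suggestion is very buildable; here is a toy sketch, with invented node names and weights, of how an accessibility profile can change the answer: stairs simply get a prohibitive cost.

```python
# Toy accessible-route finder: plain Dijkstra over a dict-of-dicts
# graph; a no-stairs profile makes stair edges effectively unusable.
import heapq

def shortest_path(graph, start, goal):
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(queue, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

STAIRS = 1e9  # prohibitive weight for a wheelchair profile
campus = {   # hypothetical campus map
    "library": {"quad": 2, "stairwell": 1},
    "stairwell": {"lecture_hall": STAIRS},
    "quad": {"ramp": 3},
    "ramp": {"lecture_hall": 2},
}
print(shortest_path(campus, "library", "lecture_hall"))
# -> (7, ['library', 'quad', 'ramp', 'lecture_hall'])
```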
– So, I actually have two things. The Association for the Advancement of Artificial Intelligence, which is sort of the big North American AI professionals group, has very recently come out with something called The AI Roadmap; that’s, like, where do we see the field going in the next 10 years. And what are some things that people should be focusing on. What are some hot areas of research. Particularly what are some concerns. What are some things that could go wrong. What are some, we talked
some about making policy and the difficulty
involved in making policy and making sure people are informed. I think that’s slightly long. But I think that’s a really
useful reference document as well as a good read. The other thing I would point out since I imagine most people here are local is that UMBC cares a
lot about these issues. We teach a lot of classes on these issues. I’ve gotten a lot of
support in spending my time on building those classes,
discussing these topics. If you’re local, come talk to me. Take one of those classes,
that’s what we’re here for. – I wanna leave one more resource as well, ai-4-all.org, F-O-R A-L-L. And that’s really an
organization by Fei-Fei Li. And she’s out of Stanford but she’s really working
toward making AI inclusive. So, they have some really
great practices there and get involved. And they’re really nationwide right now but really looking to expand
the organization as well. So, I wanna invite you back tomorrow because that is where a
lot of the researchers will be, and we have a colleague here in the front who will be talking about some of his work, what he’s done recently, and he’s at USC. So, make sure you come back tomorrow for the second half of the discussions. We do have a reception
outside, so, please join us. We’ll all be there and so,
we’ll be available for you to answer any questions. And I just wanna thank
you all for joining us with this AI discussion. Thank you.
(group clapping) Okay, so, we’re gonna take
a few questions right now. So, where are the mics? Do we have mics available? I think we have a couple of volunteers. – And you guys are
gonna use the catch box. – Ah, we’re gonna do the catch box. Oh, I get to throw it out and then they’re gonna ask the question. – [Woman] They have a mic. (group laughing) – I’ve never seen a catch box– – I think that one’s for the audience. – I’ve never seen the catch box– – It’s just a mic. Just give it to audience members. – Ah, okay. Anybody have a question? I promise I won’t throw it, so. Okay, let me come to the edge. That’s safer. Is it on?
– Is it on? – Yeah, there you go. – So, you’ve talked about a lot of things, about the state of the art, you might say, of ethics and law. But I’m wondering if you
can help a little bit say give us a pointer toward what to work on. For instance, many years
ago we had a big problem of trying to find enough
computer programmers. So, we came up with the spreadsheet. And it removed that need very rapidly. We had a big problem of
trying to get documents around so we came up with file servers. I’m wondering if there are
things where you might say good computer scientists now
can take away the problem of not having enough knowledge for rapidly moving from one
field to another for a worker. In other words, some way that they can look at a knowledge engine, rapidly get all the information they need. For instance, someone
wants to work in a lawsuit can learn all the law necessary rapidly to be able to defend themselves. – Is there anything like that? So, his question is: Are
there any resources available where someone can scale
up on AI information? – So, the limiting factor
in how fast education goes is not the technological component. People take time. I think a better question, rather than having the skills, or maybe not a better question, a question that I know the answer to (group laughing), better in that sense, would be: How can we make the information that people need to defend
themselves in a lawsuit, you know, solve some novel problem, available to them in some real-time sense? How can we, one of the problems is not knowing what you don’t know. How can we give somebody who’s going to defend
themselves in a lawsuit a tablet that’s bringing up relevant information, providing pointers as to questions to ask, bringing up bits of legal
knowledge that would be relevant? How can we both detect
what people need to know and get it to them in a
usable, digestible form? That’s a technical question.
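A minimal sketch of the retrieval half of that technical question, using scikit-learn’s TF-IDF tools over a few invented placeholder passages; real legal corpora, and the harder "detect what people need to know" half, are well beyond this:

```python
# Sketch: surface the passages most relevant to a plain-language
# description of someone's problem.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [  # hypothetical stand-ins for a legal knowledge base
    "how to file a motion to dismiss",
    "rules for small claims court filings",
    "tenant rights and eviction notice requirements",
]
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(docs)

def top_passages(query, k=2):
    scores = cosine_similarity(vectorizer.transform([query]), doc_vectors).ravel()
    return sorted(zip(scores, docs), reverse=True)[:k]

print(top_passages("eviction notice sent to a tenant"))
```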
– And then, as the lawyer up here: we are a self-regulated industry. And we are not gonna let you regulate us out of a job. So, you must be licensed to practice law. As of today you must usually be a human, a natural person, to be
licensed to practice law. So, I would argue that that iPad would be put into the hands of a lawyer and it will make it
more simple for a lawyer to provide more efficient
legal services to people because it’ll bring up
all that they need to know based off the facts of that situation and give them the guide
rails they need to know in order to go look for different context and different theories. And mostly because, like I said, we learn the law by the Socratic method, it’s because the law
is usually a gray area. If it was cut and dry then it
would be like traffic court. And most things aren’t like traffic court. That’s why you have a traffic court and you don’t have other
types of court like that. So, the point would be, for you, would be to figure out how you can make it more cost effective for people to get the legal services they need. That requires you to look
from two perspectives, well, no, not two but
several perspectives. One would be from the
perspective of the court, perspective of the attorney,
perspective of the law firm, perspective of the solo practice. And so, you have to
figure out whether or not, which one of those is your end user, how many of those is your end user. It’s probably gonna always be the court, it’s gonna always be the client. And then is it gonna
be a solo entrepreneur and then that’s what will
inform how you do your work. There’s no cut and dry answer. If there was I would be out there doin’ a billion dollar business. And if you wanna team up
on that we can do that. But in terms of just being
able to say cut and dry this is how you find out about
AI and policy there’s just, there’s, one, it hasn’t been written. There’s not a lot of laws in the area. I think Illinois has done some good work. But otherwise there’s just not
a lot of law for you to know. It’s really ethics for
now and then also privacy and security considerations
they’re taking into account. – I was gonna say, I was
gonna just jump on that too because it’s all fairly new and the policy and the legal
have not even caught up to where the technology really is. I know that there are some ethics groups that are coming out of maybe Georgetown if I wanna say something local. And then GW is, George Washington as well. But that’s a great question. Maybe we can take that back
and offer some resources to all those who registered and then we could send out a, you know, have the National Science and Technology Medals Foundation send that out.
be: How can we use AI to help people develop new
skills as needed quickly, as they’re doing things
like transitioning jobs rather than being about laws specifically. And I think they’re both valid questions. – I would also start with AI4ALL because they’re actually trying to get, create this toolkit if you will on what are the specific,
what is AI, first of all because a lot of people don’t know exactly what artificial intelligence is, what is machine learning,
what is deep learning all of those kinda
methodologies are there too. And I think the conversation for tomorrow with all of the technical researchers also will help as well. – Maybe I could just say
that I used to be a lawyer and so I know the game. And I used to work in a
public defender’s office. And there were plenty of clients that we could not actually serve. They happened to be in prisons and things. So, those are the people
that need a tool, okay. But I’m also talking about those
people, as Cynthia mentioned, that there are tools that are necessary for, for instance, a factory worker
to solve a problem rapidly. And they don’t know all of the
structures that they have to unless they’re very smart and they can get to the issue quickly. So, Cynthia’s correct in the
interpretation of the question. The problem is that
there are many different kinds of knowledge tools
that hide information and different companies and
that’s an ethical issue. And the way to break that is to have tools that get that information
to users rapidly. It’s just, that’s very difficult. – So, I think in terms
of addressing that issue, a lot of governments are
working on open data programs where they release a
lot of government data about maps and how people use services. I think that’s important. In terms of the government
being a source of information about problems people have in their lives and about how we can solve them. And the fact that the
government’s less likely to try to monetize the
data that they have. So, to the extent we can get governments on board with coming
up with an interoperable way to release and clean and make quality data sets for people, that is one method for
getting all the information that people need to
solve certain problems. In terms of the educational component I think one thing that we
actually discussed in our earlier conversations was
about vocational programming and about how we’ve
made coding, engineering, this artificial intelligence work white collar when it shouldn't be, it
should be vocational, 100%. I think we’ve also stigmatized
vocational education. We make it seem as though you must go to college
to learn these things but really and truly you don’t actually need to go to college to learn how to do
certain aspects of coding. Maybe if you wanna be the
super theoretical guy, you gotta go to college. But if you wanted to just
be able to do the motions, that’s something we could start teaching from the ground up in vocational programs. And we should just bring
vocational programs back, in general, like HVAC, electrician, a contractor, mechanic. Because if you think about it all the smart products
in the internet of things do we really want people
to have to go to college to be able to be the
mechanic on the smart car? That's not how we've done mechanics for centuries. Should we switch now? No, we should just bring back vocational. We've reduced the cost of going to school by making sure that we shifted it to the state where it should be in the first place, we have better workers, so a better segment of the population is prepared to do the work immediately, which they should be. And they're also learning
that baseline skill of coding ’cause I also think it should be reading, writing, ‘rithmetic, coding. And if we could get that in there and then also add in vocational programs I think we would solve
some of your problems because then the manufacturing guy, the person who’s inside of the factory and is solving problems they have a little bit
of those skills necessary to know whether or not they can solve a problem independently or they need to escalate it.
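As a concrete picture of that baseline skill, here is a minimal sketch of the kind of check-then-escalate script such a worker might write; the readings and thresholds are entirely hypothetical:

    # Hypothetical triage script: handle a routine fault locally,
    # escalate anything outside the made-up limits below.
    ROUTINE_LIMIT = 5.0    # illustrative vibration threshold (mm/s)
    CRITICAL_LIMIT = 12.0

    def triage(readings):
        worst = max(readings)
        if worst < ROUTINE_LIMIT:
            return "no action needed"
        if worst < CRITICAL_LIMIT:
            return "solve locally: recalibrate and re-run the check"
        return "escalate: page the maintenance engineer"

    print(triage([2.1, 4.8, 6.3]))  # -> solve locally: ...
    print(triage([2.1, 14.0]))      # -> escalate: ...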
– Yeah, so the Open Data Initiative with the U.S. Government, I think there are tons of programs on there as well
you can access for free. And there’re volunteers
that are working on documenting all of that too.
– Also, micro-credentials, digital badges, Open Digital Badges is a good organization. You want to look at repositories 'cause then it has some weight to it, that it's not just something that somebody's put out on the internet. So, I'd take a look at those as well.
– Awesome, great question, thank you. It's another panel. You wanna catch it? There's supposed to be a mic in there.
– User centered design. Why aren't the instructions
written on this box? Nobody knows how to use it. – And that’s the black part. Talk to the black part
that looks like a mic. – Hi, so sorry. This is more of a
technologically focused question. But I was still hoping to
get your perspective on it. And it kinda comes in two parts. So, basically, we've
obviously made a lot of advances with technology especially recently but do you think, how would
you rate how we are advancing? Do you think now that we've discovered and started getting into this coding and this artificial intelligence stuff, do you think we are advancing at a faster rate or a slower rate? And whatever your answer to that is, do you think the ethics of that is kind of keeping up with how our advancements are coming along?
– I would say that we
are advancing so fast that we just can’t even keep up with getting skilled
people to do the work. And we talked about this as well. Even if you all graduated tomorrow there still would not be
enough people to do this work. And so, how do we close that gap between getting the skilled people and then continuing this advancement because it’s just going
to even move faster. And all of the conversations
that we had earlier are still gonna lag behind because of that. And so, how do we manage all of that? Panelists, do you all wanna? I think all of us are like, phew.
– So, I think part of the
question that you asked is: Have we sort of reached a point where the advancement is slowing
or peaking, is that right? – Yeah, pretty much. Obviously things are a lot different than they used to be 20 years ago. – So, I can answer that one unequivocally as an AI researcher. No. We are continuing to expand. And we are continuing not only to develop
technology in new areas, we’re continuing to discover areas in which to develop new technology. – Exactly. – At some point the rate of
advancement in computer science will slow to something
more like steady state like chemistry or biology both of which are advancing
fast but not exponentially fast. In computer science I would
at this point be surprised if we see that in our lifetimes. – It’s amazing. – One mic there too. You can pass it, you pass it. – All right, so, I’ll use this one. So, I’m gonna kinda come at this from more of a mechanical
engineering side. We in our curriculum have an ethics class I guess similar to the one taught within EE and computer science. We don’t go into AI as much but we do encounter the
question of self-driving cars. And when they are completely self-driving as in all done by AI the question arises: All right, what happens
when you’re in a situation and it is unavoidable,
somebody will lose a life and the AI has to make that decision? When in mechanical engineering
let's say I design a plane, it gets used overseas in a war. I wouldn't consider that I'm responsible for the ramifications,
the deaths of those people because I'm so far removed, whereas in AI somebody actively has to make the decision where, hey, we're gonna create this AI, it has to make this decision. What are the ramifications that fall back on the computer engineers,
the electrical engineers, et cetera that go into creating that AI that has to make the tough decision? What are the ethical
ramifications within that? – So, I was at a talk
recently by a roboticist who heads up Toyota’s
self-driving car project which was fascinating. And one of the things that he
said is the trolley problem which is sort of the class of problems where you have to decide
who lives and who dies, ugly but whatever, is really a thing that people talk about until they start getting involved in either self-driving cars
or self-driving car policy. At which point you realize just how many questions tie into that question. It's very, very rare to be in a position where you're behind the wheel of a car and, well, now I'm kind
of thinking about it, I think I'll hit this car that's going out of control instead of that car that's going out of control. You're much more making
moment-by-moment decisions about staying in a lane,
not hitting things, not running into pedestrians,
not going off bridges. And those almost fully define
how both people and agents solve this problem, right, we’ve got sets of routines that are, you know, if you’re driving a car and there’s a pedestrian in the road and another car next to you you don’t really have time to think about what are the ethical
ramifications of this, right, you’ve got a baseline, like,
hit car, don’t hit pedestrian. Like, hit tree, don’t hit car. You’ve got a lot of little decisions that add together into driving well. As for who has that responsibility– – Humans. The grief that comes with taking a life should always be carried by a human. If you’re gonna take away somebody’s life, their liberty, their property,
if you’re going to harm them you should have to own up
to your decision as a human. So, I think one thing
that you hear a lot of and I hear a lot of is about where do you
have humans intervene. And pure autonomy, do you
really wanna live in a world where your overlords are
the machines you created? Or do you wanna live in a world where the machines you created enable you to make better decisions
about your own life, right? So, I say that the grief, the guilt, all of that belongs to humans. And we should never offload that or outsource that ever, ever, ever. – Does it belong to the engineer who created the algorithm specifically? – I think it definitely belongs to you. I think that you are
responsible for your creations. If you create something
and it hurts people, it’s your fault. I think that also the person who bought it and didn’t vet it and just
bought whatever you sold them it’s their fault too. But I think that all of you
need to be working together to create a better product. And I think that it is
going to be your fault if you cause a death because you created it, right? Do not outsource your
responsibilities as a human. – All right, I have to push back a little on equating autonomy with
offloading responsibility. Because I think that’s not
necessarily a fair combination.
– But it's the result. So, I'm talking about the end result. You'll know that people tend to do this: if you give people a machine that says it can calculate something for them, they'll end up relying on it more than they should. So, I think it's important that we stress this and don't be too optimistic. We had this conversation,
sometimes you can give people the optimistic side. I think it’s very important that we stress the negative side here. Do not offload this responsibility at all. I think this is one of those spaces where it’s kinda like you need to know. You are gonna make
decisions about people who, they will never be able to
advocate for themselves, ever. And so, you’re making a powerful decision and you need to own it, period. So, I do understand that autonomy does not mean that you’re
outsourcing responsibility but I do know that people
will default to that, period. – Cynthia, you wanted to say more about. – Almost always. (group laughing) I think– – Was that, was that?
– Did that? – [Man] Yeah, that was, yeah, thank you. – So, you know, we have some
very passionate panelists up here that believe, and I think the perspective is really important too. And that goes back to, it goes back to: why do
we have autonomous cars? – So, I’ll do a follow
up to that question. The public policy approach would say that autonomous vehicle driving would reduce deaths from accidents by 50%. Society gains, but yet for society to gain we're putting our faith in the hands of algorithms and science. At what point do we begin to make the kind of judgments that say: The greater good is better
than the individual? – I was just gonna say,
you’ve been doing that all the time.
– Yeah. – I mean, look at when
we broke the Enigma code. We knew that some people
would still have to die, or we would give up the information that we had broken the Enigma code. So, they had statistical
routines that determined who died and who didn’t die. I mean, it– – No, I agree with you. But that goes back to the point of as we think of some of these
points you’re pointing that we should keep people always in control. If to the extent that there is some, at least that’s what I took
away from the conversation is that we should be very
hesitant about self driving. Yet by the fact that people are driving we often have people who are poor drivers. We could think of certain
classes especially where this could actually
be a much safer situation. That’s the interesting public policy space that this is all coming into. – I don’t think anybody here is arguing that putting our fate into the
hands of autonomous systems is inherently bad. My guess is, I think it
is more a discussion of are we making those decisions
in an ethical, informed way. And, you know, we put our
faith in the hands of machines pretty much constantly. We rely on car manufacturers. We rely on medical prescription
dispensing systems. None of this is new. But that doesn’t mean
abrogating all responsibility for what happens as the person
engaged with the system. Is that? – Yeah and it relates to transparency and explainability of algorithms, sort of understanding how
the algorithms function and informing ourselves a little bit going forward how these things work. – I don’t think we have
enough data or experiences to really say that, yes, make that claim. It can be. We make that claim because
that’s the hypothesis that we’re trying to prove. However, how many cities are using autonomous cars right now? And we need to really learn from the data. And why do we have, is it just
because we went to hybrids, then we went to electric cars? Is it more efficiency? There's a company called Zoox that is dealing in autonomous cars as well. But their, I guess their position is that they're really trying to eliminate a lot of the traffic
especially in urban areas where there’s very little
movement for traffic. You have a city like New York or Los Angeles or something like that. So, it goes back to also the
ethics of doing that first. But I don’t think we are
there just completely yet. And this takes time just like
all of the other technology that we’ve experienced as well. So, I got time for one more question. Yes, in the red. You get the box and the mic. – There’s some confusion about
how to use these objects. – You mentioned that when
we are developing something we should think of the good
usage and bad usage of something, and somehow related to previous questions: What are the criteria to decide whether this one, the thing that we are developing, is good or not? Because imagine a knife, it can be used for killing someone, or surgeons can use it to save people. So, if you are the person
that’s designing the knife are you gonna design it or
are you gonna throw it away because it's gonna kill people?
– The important thing, in my opinion, is that you are constantly evaluating what it could and couldn't be used for. Knives are a very general-purpose tool. Most tools lean one way or the other. You know, if you're developing robots that can shoot people, the good uses for those are pretty limited compared to the potential bad uses, right? It's not always like that. But more to the point, independent of how you evaluate it, I'd love to have a long discussion about how you evaluate
it, the question is: Can you as the designer
say: It’s just a knife, I’m done, not my problem. And the answer’s no. No, you as the designer
must keep those questions in mind always. That’s true of self-driving
cars, too, actually. Did either of you wanna?
– Did you have anything? From my perspective, I think you just really need to make sure that in every instance that you're trying to prove what you're building it for, that you have something, why it didn't work in the first place or what a scenario would be that someone would use it for something else. It all goes back to: You wanna do something very positive and you wanna solve these really big problems, but at what cost? Is that good?
– Yeah.
– All right, well, thank you, this was a vibrant conversation. And we appreciate it. I wanna give it up for
our volunteer over there. Thank you.
(group clapping) And please join us right
outside, we have a reception. We all will be available for any questions and more conversations. Thank you.
