The bottom line of ethics and governance in AI

June 5, 2018 at 04:01AM

“What’s good for society might just be good for business,” believes Aimee van Wynsberghe

AIIA Network strives to discuss the issues that are most pressing to corporate enterprise AI & IA executives. We certainly want to ensure that we’re talking about what everyone is talking about.

Having said that, sometimes it’s important to talk about what no one else is talking about. This helps us uncover potential areas of innovation within the industry.

That’s precisely the approach to the ethics and governance conversation that I had with President and Co-Founder of The Foundation for Responsible Robotics (FRR), Aimee van Wynsberghe (listen to the corresponding podcast episode 50, available soon, to hear how to pronounce that last name).


It’s a conversation we should all be having, but to ensure that everyone inside the room is listening (including shareholders outside the room), how can good ethics and governance result in good business?


Here’s a segment of the conversation that I had with Aimee van Wynsberghe in the upcoming AIIA Podcast:


Seth Adler (SA): What would you say are the most pressing questions/challenges for artificial intelligence (AI) that need to be addressed today?


Aimee van Wynsberghe (AVW):


There is a range of issues, from ethical concerns about how AI and robotics will affect our ability to lead a good life, to issues related to the governance of the technology: how, and whether, we can regulate it to protect innovation and consumers at the same time.


The main focus of the FRR is on the societal and human rights issues that arise from widespread adoption of AI and robotics. For us, the most pressing questions/challenges that need to be addressed to ensure that robotics and AI don’t pose a risk to humans focus on certain trends in innovation and their repercussions for ethical, legal, and societal issues.


SA: On ethics, what do you mean when you talk about a “bias in training data”?


AVW: There is a great video by Google that shows the various kinds of bias that can result in machine learning, such as latent bias and selection bias, and the role that training data plays in these. We've seen some popular examples in the media that echo this as well: you might remember Tay, the Microsoft chatbot, which demonstrated that when given racist, discriminatory, white supremacist input data, the resulting chatbot exhibited those same traits.



This is important to consider not only in terms of how AI should be developed (rigorous standards for training data and so on), but also in terms of educating the public—the public needs to know these things and to be aware of the risks when AI is used to assist in decision making for sentencing of prisoners or accepting mortgage loan applications.

"The public needs to know these things and to be aware of the risks when AI is used to assist in decision making"


SA: Could you elaborate on the issue of a lack of transparency?


AVW: This refers to the problem of AI being a kind of black box: we don't know how the AI arrives at an output after it's been given a certain set of inputs. This is a problem because there is no explanation for the answers AI gives us. We would never accept this from people, but for some reason it is accepted with technology and AI.
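
As an aside from the interview, here is a tiny sketch of what "black box" means in practice, again using scikit-learn with hypothetical, synthetic data: the model returns an answer, but inspecting it yields only raw weights, not reasons.

```python
# A minimal, hypothetical sketch of the black-box problem: a small
# neural network answers a yes/no question, but its internals offer no
# human-readable reason for the answer. Data and "rule" are synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))           # ten anonymous input features
y = (X[:, 0] * X[:, 3] > 0).astype(int)   # hidden rule to be learned

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)

case = rng.normal(size=(1, 10))
print(model.predict(case))                # an answer: 0 or 1, no rationale

# The only "explanation" on offer is a pile of raw weights, which is
# why post-hoc interpretability tools such as LIME and SHAP exist.
print(sum(w.size for w in model.coefs_))  # count of learned parameters
```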


SA: Tell me more about the lack of accountability.


AVW: If we consider responsibility to be taking the blame or praise when something goes right or wrong, then we could consider accountability to be the willingness to accept that blame or praise.


SA: You’ve come up with an algorithm of sorts, which strikes at the heart of where innovators in the sphere are going. It’s also the vice that transformation executives crave. What do you mean by ‘cool vs. good = misdirected’?


AVW: So much money is being invested in robotics and AI because of the promise that it will do good at some point and become a moneymaker, but there hasn't been adequate testing, and standards haven't been set (and I don't just mean regulations but guidelines for best practice). Everyone is hoping that something good will come from their efforts. It's like a big social experiment.


"I like to think that ethics can be a source of inspiration for design"

This also means that people are just trying to see what they can do with the technology rather than channeling the technology toward the world's problems. From this perspective, I like to think that ethics can be a source of inspiration for design. This is in contrast to the usual view of ethics as a hindrance or a roadblock to innovation. Instead, ethics (social values) can be treated as design constraints in the same way that the context or the user is, and the task of the engineer is to find innovative ways to achieve success within those constraints.


SA: We’ve already given up some of our compute to technology—I don’t know nearly anyone’s phone number anymore, whereas I used to know the number of nearly everyone in my (figurative) Rolodex. What are you getting at when you say we’ve got an over-trust in robots?


AVW: When the personal navigation system was first introduced into cars, we heard examples of people driving into lakes because the device “told them to go that way” rather than using their own judgment. This is an example of over-trust in the technology: the assumption that the technology is right and that we must defer to the decisions it makes. It is important for those interacting with robotics and AI to have realistic expectations of the capabilities and limitations of robots, to encourage an appropriate level of trust in the technology, one that reflects the actual functioning of the robot.

Aimee van Wynsberghe was a key speaker at AI LIVE 2018.


SA: Those are the ethics issues at hand. What are some of the governance issues?


AVW: There is a lack of regulation around protecting consumers. We've seen what happens when industry takes over and steers the path of technology (look at what happened with Facebook and Cambridge Analytica): economics becomes the bottom line, not dignity, well-being, justice, or fair access to goods, the traditional ethical values.


GDPR: "This kind of protection is just the tip of the iceberg, more will come when we watch the ways companies find loopholes"

Because companies weren’t inspired or given incentives to act on social justice issues, governments stepped in and created the GDPR. This is a step towards protecting the consumer, because we've seen what happens without it! However, this kind of protection is just the tip of the iceberg; more will come as we watch the ways companies find loopholes and workarounds.


SA: I take your point that organizations are spending money to find loopholes, which then get closed and cost more money in the long run than dealing with the issue up front would have. Facebook and Google got hit with over $8B in lawsuits on the first day of GDPR. We’ll see what actually happens, but it seems like one step forward, two steps back.


As we saw in the US and then the EU, lawmakers can’t even keep up with Facebook, let alone AI. What are your thoughts on this?


AVW: There is talk about creating laws to control the speed of AI, but we've seen throughout history that laws are rendered moot by the time they are put into action because technology moves too fast.


SA: As this relates to AI and IA, how can practitioners think ahead of oncoming ethics and governance?


AVW: Companies should be inspired to find new ways to innovate with social values (ethics) in mind.


Consumers are voting with their wallets for Fair Trade businesses. It has already been shown that what’s good for society can be good for business.

While we do cover some of the same ground in the corresponding podcast episode, the conversation picks right up on that final point...