Ethics and the pursuit of artificial intelligence

August 6, 2018


So many businesses and governments are scurrying to join the artificial intelligence (AI) race that many appear to be losing sight of things that should matter along the way, such as legality, good governance, and ethics.

In the AI arena the stakes are extremely high and it is quickly becoming a free-for-all from data acquisition to the stealing of corporate and state secrets. The “rules of the road” are either being addressed along the way or not at all, since the legal regime governing who can do what to whom, and how, is either wholly inadequate or simply does not exist. As is the case in the cyber world, the law is well behind the curve.

Ethical questions abound with AI systems, raising the issue of how machines recognise and process values and ethical paradigms. AI is certainly not unique among emerging technologies in creating ethical quandaries, but ethical questions in AI research and development present unique challenges in that they ask us to consider whether, when, and how machines should make decisions about human lives, and whose values should guide those decisions.

In a world filled with unintended consequences, will our collectively shared values fall by the wayside in the effort to reach AI supremacy? Will the notion of human accountability eventually disappear in an AI-dominated world? Could the commercial AI landscape evolve into a winner-takes-all arena in which only one firm or machine is left standing? Will we lose our ability to distinguish between a victory and a victory worth having, in business as well as on the military battlefield? Some military strategists already view future AI-laden battlefields as “casualty-free” warfare, since machines will be the ones killing and at risk.

While AI remains in an embryonic state, now is the perfect time to establish the rules, norms, and standards by which AI is created, deployed, and utilised, and to ensure that it enhances globally shared values and elevates the human condition in the process. While there will probably never be a single set of universal principles governing AI, in trying to understand how to shape the ethics of a machine we are at the same time forced to think more about our own values, and about what is really important.

New forms of threat are evolving as AI becomes more widely utilised, so it is important that we regain agency over it. In the United States, the technology giants of Silicon Valley have pledged to work together to ensure that any AI tools they develop will be safe. Equivalent Chinese discussions about the limits of ethical AI research are occurring, but they are more opaque, and some other governments seem entirely unconcerned about ethical considerations.

Many of the West’s leading AI researchers are signatories to a 2015 open letter calling for a ban on the creation of autonomous weapons. Just as Microsoft proposed a Digital Geneva Convention in 2017 to govern how governments use cyber capabilities against the private sector, an international protocol should be created to govern not only how governments deploy AI against one another, but how they will do so with the private sector, and how the private sector will do so with itself.

Attempting to govern AI will not be an easy or pretty process, for there are overlapping frames of reference. New norms are emerging, but it will take a long time to work through the various questions being raised. Many are straightforward issues about technology, but many others concern what kind of societies we want to live in and what type of values we wish to adopt in the future. If AI forces us to look at ourselves in the mirror and tackle such questions with vigour, transparency, and honesty, then its rise will be doing us a great favour in the long term.

History would suggest, however, that the things that should really matter will either get lost in translation or be left by the side of the road in the process.

We may see a profound shift in agency away from man and toward machine, wherein decision-making could become increasingly delegated to machines. If so, our ability to implement and enforce the rule of law could prove to be the last guarantor of human dignity and values in an AI-dominated world. As we continue to grapple, with great difficulty, with such fundamental issues as equality and gender bias, what should sit at the top of the AI “values” pyramid? How can we even know what human-compatible AI is or will become?

In 2017 the Asilomar AI Principles were created as a framework to govern how AI may be used ethically and beneficially. Thousands of AI researchers, and others, have signed on to these principles. Some professionals in the field worry that regulations imposed in the future could prove unhelpful or misguided, or could even stifle innovation and cede competitive advantage to individuals and organisations in countries where the principles are not adopted. Others see the principles as a definitive step in the right direction. There is, naturally, disagreement among AI researchers about just what the risks of AI are, when those risks could arise, and whether AI could ultimately pose an existential risk to humankind.

Few researchers would suggest that AI poses no risk. The number of AI researchers who signed the Asilomar Principles, as well as the open letters on developing beneficial AI and opposing lethal autonomous weapons, shows a strong consensus among researchers that much more needs to be done to understand and address the known and potential risks of AI. The right policy and governance solutions could help align AI development with these principles, and could encourage interdisciplinary dialogue about how that may be achieved.

What appears inevitable is that AI will fall into the same abyss that the cyber arena has succumbed to, with nefarious actors hijacking the domain and negatively impacting its evolution.

Serious ethical questions have already been raised about AI, and they are only likely to multiply with time. It is up to governments and the global business and academic communities to provide and maintain momentum, and to propose solutions for how the ethical dilemmas raised by AI will be addressed in the decades to come. Failure to do so could pose potentially existential threats to the human race.

Daniel Wagner is CEO of Country Risk Solutions. Keith Furst is Managing Director of Data Derivatives. They are the co-authors of the forthcoming book AI Supremacy, which will be published in September.