June 7, 2018
Following an internal rift over a Pentagon project that would use artificial intelligence as a surveillance tool, Google CEO Sundar Pichai has released a set of ethical guidelines to govern the company’s use of AI.
The rules don't take a strong or controversial stand on the values that should underpin the company's technology: the ideal Google AI system is socially beneficial, unbiased, tested for safety, accountable, private, and scientifically rigorous. These are traits everyone can agree on, and all of them are open to extremely flexible interpretation.
“As we consider potential development and uses of AI technologies, we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides,” Pichai wrote in a blog post, reiterating basic corporate responsibility.
But alongside these broad ethical guidelines, Google has also drawn a line in the sand around the AI applications it will not develop.
These no-nos include AI specifically for weaponry, as well as surveillance tools that would violate “internationally accepted norms.”
Google also now has a general rule that its AI should not cause harm, but that principle comes with a caveat: the company will still build an AI system that may cause harm if it believes the benefits outweigh the risks. It's easy to see the logic in applications like self-driving cars, which could theoretically kill a pedestrian but might still reduce road deaths overall.
However, the search company will still take on government contracts for work that it believes won't be used to hurt people (or will at least be beneficial enough to justify the harm).
“We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come,” Pichai wrote.
In addition to these ethical guidelines, Google published a starter guide for building responsible AI, covering practices such as testing for bias and understanding the limitations of the data used to train a model.
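The guide itself is prose, but the bias-testing idea is easy to make concrete. Below is a minimal sketch of one common check, demographic parity, which asks whether a model makes positive predictions for different groups at different rates. It is an illustration of the general concept, not Google's actual tooling; every name and number in it is hypothetical.

```python
# Hypothetical sketch of a demographic-parity check: does a model
# approve different groups at different rates? Not Google's tooling.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two
    groups; 0.0 means every group is approved at the same rate."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Invented binary predictions for eight applicants, four per group.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

# Group "a" is approved 75% of the time, group "b" only 25%,
# so the gap is 0.5 -- a red flag worth investigating.
print(demographic_parity_gap(preds, groups))  # 0.5
```

A large gap does not prove the model is unfair on its own, but it is exactly the kind of signal the guide recommends surfacing before deployment, alongside an audit of how the training data was collected.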