Google’s A.I. Ethics Statement Is a First Step, But Tech Needs More

June 15, 2018

Google recently published a corporate blog post laying out its ethical principles for artificial intelligence (A.I.). “How A.I. is developed and used will have a significant impact on society for many years to come,” Google CEO Sundar Pichai wrote. “As a leader in A.I., we feel a deep responsibility to get this right.”

Those principles are pretty straightforward: Google’s A.I. should be “socially beneficial,” avoid “creating or reinforcing unfair bias,” include strong safety and security safeguards, incorporate “privacy design principles,” and uphold “high standards of scientific excellence.”

In addition, any Google A.I. platform should be “accountable to people,” meaning that humans should be able to direct and control its workings. And last but certainly not least, the A.I. must “be made available for uses that accord with these principles.” In other words, Google will watch to make sure that its A.I. isn’t easily adaptable to “harmful use,” and that it can scale in a way that has a widespread positive impact.

At the same time, Google has pledged not to pursue technologies that are likely to cause overall harm. It also won’t research A.I.-based weapons. This is clearly a nod to Google’s controversial contract with the Pentagon, which it declined to renew after employees protested. That contract, intended to use Google’s A.I. to interpret objects in images and video feeds, could have been used to sharpen the “eyesight” of military drones, which often fire missiles at targets.

Google’s A.I. ethical framework is considerably more detailed than others put forward over the past few years. For example, OpenAI, a nonprofit organization devoted to ensuring that A.I. benefits humanity as a whole, has been somewhat vague about ethical specifics. From OpenAI’s introductory blog post:

“We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field.”

As A.I. grows more sophisticated in the coming years, and its impact more widespread, discussions of ethics will increasingly move out of the theoretical realm and into the real world. And the ethical conundrums are coming: militaries will throw lots of money at researchers in an attempt to weaponize A.I.; criminals may try to use platforms such as Google Duplex to launch social-engineering attacks on an industrial scale; and there’s always the potential for unintended consequences, such as a “smart grid” deciding to shut itself down.

Even if other companies join Google in publishing detailed ethical guidelines, that might not be enough to prevent at least some of these negative consequences. Over the long term, the tech industry as a whole may face hard decisions about how to shape A.I. in a way that’s truly beneficial.

via Dice Insights