Google Promises to Stay Away from Developing AI Weapons in New Ethics Guidelines
Google has released its much-anticipated guidelines for its work with artificial intelligence. The succinct document states that Google won't work to develop AI weapons but will continue its work with the military. Google promised the guidelines following the controversy around the company's involvement with a Department of Defense drone project.
The document, titled "Artificial Intelligence at Google: Our Principles," doesn't go into specifics about the company's involvement in the drone project, but it does firmly state that Google will not develop AI weapons. It adds, however, that the company will continue to work with the military "in many other areas."
Today we’re sharing our AI principles and practices. How AI is developed and used will have a significant impact on society for many years to come. We feel a deep responsibility to get this right. https://t.co/TCatoYHN2m
— Sundar Pichai (@sundarpichai) June 7, 2018
Google outlines seven guiding objectives for its AI program as well as four applications it will not pursue. The report concludes with a closing statement:
“We believe these principles are the right foundation for our company and our future development of AI. We acknowledge that this area is dynamic and evolving, and we will approach our work with humility, a commitment to internal and external engagement, and a willingness to adapt our approach as we learn over time.”
Google may regret Drone Project involvement
A Google spokesperson told online media outlet The Verge that if the guidelines had been in place earlier, Google would probably not have pursued its involvement in the Department of Defense drone project. The project used AI to analyze surveillance footage. Although the footage was reportedly used for non-offensive purposes, the technology's potential to be used otherwise would likely have put it in breach of the guidelines.
The main takeaway Google wants to give people is that it is using its vast network of engineers to focus on AI projects that are "socially beneficial." Google CEO Sundar Pichai wrote a blog post to accompany the release of the guidelines, saying, "At Google, we use AI to make products more useful—from email that's spam-free and easier to compose, to a digital assistant you can speak to naturally, to photos that pop the fun stuff out for you to enjoy. We recognize that such powerful technology raises equally powerful questions about its use. How AI is developed and used will have a significant impact on society for many years to come. As a leader in AI, we feel a deep responsibility to get this right."
Thousands of Google employees signed an open letter to management urging the company to cut ties with the Department of Defense drone program after details were leaked. The project, called Project Maven, even prompted the resignation of a dozen or so employees who objected to the company's involvement in such a potentially damaging effort.
Although Google downplayed the project as simply "low-res object identification using AI," many Google employees saw the potentially darker side of the technology. Google has said it will honor its contract with the Pentagon until Project Maven wraps up in 2019.
AI ethics is a hot topic in 2018, with scientists and observers calling for firmer rules and guidelines regarding the development of AI so that it respects basic principles of equality and non-discrimination.
Via: Google AI