Following an internal employee petition against the project, Google has decided it will not renew its contract with the US Pentagon's artificial intelligence program.
Thousands of Google employees signed a petition protesting the company's partnership with the Pentagon's Maven project. Several Google employees even resigned in protest over the artificial intelligence project. One of the employees' biggest fears was that Maven would be used for lethal purposes.
Google has yet to release an official statement on the matter.
Company sources have told media outlets that top executive Diane Greene informed Google staff there would be no renewal after the current contract expires in March 2019. However, Gizmodo reporters told the BBC that Google hadn't actually canceled Project Maven and hadn't ruled out future work with the Pentagon. The contract is currently valued at less than $10 million, but could lead to further contracts.
Gizmodo reported last month that Google was in the running against Amazon and Microsoft for the contract.
“It gives me great pleasure to announce that the US Undersecretary of Defense for Intelligence—USD(I)—has awarded Google and our partners a contract for $28M, $15M of which is for Google ASI, GCP, and PSO,” Scott Frohman, a defense and intelligence sales lead at Google, wrote in a September 29, 2017 email. “Maven is a large government program that will result in improved safety for citizens and nations through faster identification of evils such as violent extremist activities and human right abuses. The scale and magic of GCP [Google Cloud Platform], the power of Google ML [machine learning], and the wisdom and strength of our people will bring about multi-order-of-magnitude improvements in safety and security for the world.”
Various emails from late 2017 detailed meetings with Pentagon representatives at both Google's Sunnyvale and Mountain View offices. By December, Google had demonstrated a high success rate in classifying images for Project Maven, according to Gizmodo's reports. Google successfully built a system that could identify vehicles missed by previous systems.
“Customer’s leadership team was extremely happy with your work, your active participation, and the early results we demonstrated using validation dataset,” Reza Ghanadan, a senior engineering program manager at Google, wrote. “Among other things, the results showed several cases that with 90+% confidence the model detected vehicles which were missed by expert labelers.”
However, despite the technological advances, Google employees were not comfortable keeping such a high-stakes project under wraps. They also petitioned against the potential exploitation of those technologies, wanting the company to remain at the forefront of the ethical use of artificial intelligence.
Interesting Engineering will continue to keep readers updated with new decisions regarding this story.