Will artificial intelligence (AI) be the death knell for humanity, or will it improve our lives immeasurably? It depends who you ask. Tech luminary Elon Musk, for example, believes AI is the biggest threat we face as a civilization, while Facebook’s Mark Zuckerberg thinks such viewpoints are irresponsible, at best.
The bottom line is, we don’t really know how AI will evolve. But we do know that a lot of money is being invested in developing AI technologies, and we are also aware that many people in the know expect that machines will surpass human intellect and abilities at some point in the foreseeable future.
It’s against this backdrop that Alphabet’s U.K.-based AI subsidiary DeepMind has launched a new “ethics and society” research unit tasked with “exploring and understanding” the implications of AI gradually permeating the world.
“It [the unit] has a dual aim: to help technologists put ethics into practice, and to help society anticipate and direct the impact of AI so that it works for the benefit of all,” DeepMind explained in a blog post co-authored by Verity Harding and Sean Legassick, who will head up the research unit.
DeepMind has hit the headlines in recent years for various AI-focused initiatives. Arguably the most notable of its global headline-grabbing achievements came when the company’s AlphaGo program defeated the world’s top Go players. More recently, however, DeepMind has battled controversy in the U.K. over an ongoing data-sharing deal with the country’s public health service, the NHS, to develop AI technology that promises to better detect patients at risk of developing kidney injuries. The U.K.’s data protection watchdog ruled that this partnership contravened U.K. privacy laws.
These two episodes illustrate the growing debates we’ll see as AI develops. On the one hand, AI is already capable of great things — as evidenced by its beating humans at such a technical and skilled game as Go. It also promises to improve human life by helping doctors diagnose and detect serious medical conditions, even if the systems have yet to prove their efficacy.
But intertwined in all of this is the question of ethics — how should AI be used, applied, and governed? And more importantly, how do you respect people’s privacy while using their data to improve an AI algorithm?
When Google snapped up DeepMind back in 2014 — beating out Facebook in the process — the terms of the deal included setting up an ethics board to ensure that the AI technology isn’t abused. DeepMind has maintained over the years that such an ethics board exists but has declined to offer details, such as who sits on the board and what they discuss at meetings.
With its new ethics and society unit, DeepMind wants to reassure the public that its research and technology are underpinned by morality and mindful of the greater good of people everywhere — even if its ethics board remains opaque.
“As history attests, technological innovation in itself is no guarantee of broader social progress,” the company said. “The development of AI creates important and complex questions. Its impact on society — and on all our lives — is not something that should be left to chance. Beneficial outcomes and protections against harms must be actively fought for and built in from the beginning. But in a field as complex as AI, this is easier said than done.”
Indeed it is easier said than done. A common thread underlying much of the AI debate is that the technology needs to be regulated proactively rather than waiting until after an adverse event has occurred. And as more companies invest in AI development, there will undoubtedly be heated debate as to how “proactive” we should be in regulating it.
For now, DeepMind just wants you to know that it cares about ethics and is planning to carry out “interdisciplinary research” that will combine experts from the humanities and social sciences with “voices from civil society” and “technical insights” from DeepMind itself. It has also developed a set of principles in conjunction with a group of “independent thinkers” it calls the DeepMind Ethics & Society Fellows. “These Fellows are important not only for the expertise that they bring but for the diversity of thought they represent,” the company said.
Today’s news comes as DeepMind triples down on its spending to attract top talent. In the company’s recently published accounts, DeepMind revealed its wage bill rose from £54 million ($71 million) to £164 million ($217 million) between 2015 and 2016.
Whether you like it or not, AI will play a major part in everyone’s future.