Ethics and AI

June 30, 2019
Joy Ducey Miller

Nine big tech companies, six American and three Chinese, are primarily responsible for the future of artificial intelligence. In the U.S., they are Google, Microsoft, Amazon, Facebook, IBM and Apple; in China, Baidu, Alibaba and Tencent.

Within these companies, humans are responsible for building AI systems. This raises the question: what principles guide their work?

As a society, we have sets of rules that help guide our decisions: democracy, communism, socialism, religion, veganism, nativism, colonialism. These constructs aren't static; we are always adapting them. How will AI be guided? How will it adapt?

A few questions we should be asking:

  • What is the motivation for AI?
  • What are the inherent biases?
  • Will inclusivity be ensured?
  • How are the technological, economic and social implications of AI understood by those involved in its creation?
  • What role do those commercializing AI play in addressing the social implications of AI?
  • Is it acceptable to build AI that recognizes and responds to human emotion?
  • What rights should we have to access and question the data sets, algorithms and processes being used to make decisions?

The big companies driving AI forward produce ethics studies and white papers, convene experts to discuss ethics and host panels about ethics. But I have to wonder: how much of this translates to the team members actually writing the code?

Artificial intelligence systems are increasingly drawing on our real-world data to build products with commercial value. Investors are excited, so development cycles are shortening to keep them interested. Whether we know it or not, we are participants in a future that is being created without those questions first being answered. As AI systems advance and our lives become more automated, we will have less and less control over the decisions made about and for us.

Common guiding themes are emerging, such as the need for AI that respects human rights, security, safety, transparency, trustworthiness and accountability. Recently, 42 countries adopted the new OECD Principles on Artificial Intelligence, which promote five principles for the responsible development of trustworthy AI. However, the recommendations are broad and not legally enforceable.

Given that societies differ on what is ethical, how can there ever be a global consensus on the ethical development of AI? We need to identify a shared set of norms for ethical behavior, use those norms as the basis for agreed-upon global rules, and build trust among the big global tech companies driving the future of AI.