Morality and the Machine: What the C-Suite Should Know About Machine Ethics

"As the use and impact of autonomous and intelligent systems (A/IS) become pervasive, we need to establish societal and policy guidelines in order for such systems to remain human-centric, serving humanity’s values and ethical principles. These systems have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between people and technology that is needed for its fruitful, pervasive use in our daily lives."

The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, Version 2.” IEEE, 2017.

Defining Machine Ethics

Algorithms are becoming self-taught and capable of independent decision-making. As intelligent, autonomous systems continue to evolve, human judgment and behavior are influencing machine decision-making. The field of machine ethics involves efforts to ensure that the behavior of AI-enabled machines is ethically acceptable. According to Nell Watson, AI and Robotics Faculty at Singularity University and Co-Founder of EthicsNet, “machine ethics is an emerging domain which is concerned with teaching machines about human values.”

Machine ethics should not be confused with computer ethics or robo-ethics, lines of thought that deal with the ethical use of technology by humans.

Programming Values

"The technical path towards socializing machines is one of the biggest questions of our age. It will take a global village of enthusiasts to help with the 'raise of the machines,' a massive collaborative effort across many geographies, demographics, cultures, and creeds."

Nell Watson, AI and Robotics Faculty at Singularity University and Co-Founder of EthicsNet

Is teaching computers about human values a matter of code? Watson explains that trying to specify or program things like right and wrong can be very challenging. “A lot of our moral impressions about things are not really things that we can explain easily: they bubble up from our subconscious, and so we don’t have a direct understanding of them.”

“Generally, when we try to explain why we believe something is morally okay or not okay, we’re kind of giving an ex post facto justification—we invent an explanation after-the-fact. But that may not actually correlate to the actual underlying reasons for why we have these kinds of impressions. And so it’s incredibly difficult to code.”

Computer programming tends to be decision-based, Watson says. For example: “If this is a condition, then do this; if this condition is slightly different, then do something else.” She says, “But a lot of the developments in machine intelligence are not about programming the machines, they’re about showing examples to machines and teaching them. It’s about machine learning.”

Data is an integral part of machine learning, so for an algorithm to learn human values, Watson says, “It’s all about collecting good examples, collecting a lot of information about something and then annotating it. Human beings may be giving systems a little bit of supervision and saying ‘this is good’ or ‘this is not so good.’ And in that way, machines can start to learn for themselves.”

It’s about recognizing patterns. “If you’re trying to program a system to recognize a bus, it would be incredibly difficult,” she says. “In fact it was basically impossible until very recently in the last five to ten years.”
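To make that contrast concrete, here is a minimal sketch in Python, assuming a toy set of hand-annotated action descriptions. The examples, labels, and scikit-learn pipeline are purely illustrative, not anything Watson or EthicsNet has built: a hand-coded rule covers only the cases its author anticipated, while a model trained on annotated examples can generalize to new phrasings.

```python
# A toy sketch: rule-based judgment vs. learning from labeled examples.
# All examples and labels are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Rule-based approach: every case must be anticipated and hand-coded.
def rule_based_judgment(action):
    if "help" in action:
        return "ok"
    if "deceive" in action:
        return "not ok"
    return "unknown"  # anything the programmer did not foresee

# Learning approach: annotate examples and let the model find the pattern.
examples = [
    "return the lost wallet to its owner",
    "help an elderly neighbor carry groceries",
    "deceive a customer about hidden fees",
    "share a user's private data without consent",
]
labels = ["ok", "ok", "not ok", "not ok"]  # the human supervision step

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(examples, labels)

# The rule fails on unforeseen phrasing; the model generalizes from
# the annotated examples it resembles.
print(rule_based_judgment("mislead a client about risks"))  # "unknown"
print(model.predict(["share private information without consent"])[0])
```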

Whose values?

"Danah Boyd, principle researcher at Microsoft Research, says there are serious questions about the values that are being written into such systems—and who is ultimately responsible for them. ‘There is increasing desire by regulators, civil society, and social theorists to see these technologies be fair and ethical, but these concepts are fuzzy at best,’ she says."

"Why the biggest challenge facing AI is an ethical one" - by Bryan Lufkin, BBC, 7 March 2017.

If we are to attempt teaching human values to machines, whose values are we talking about? And aren’t ethical behaviors often cultural and situational? Can an algorithm discern appropriateness of time and place, and who decides what that looks like?

Watson explains, “These systems need to be able to cater to different worldviews and different sets of values, and so I think that one of the biggest tasks of the years ahead is to be able to map the space of values, and to basically connect to the entire political and religious and values gamut—the whole spectrum, and to connect to everyone on that, the outliers and everyone in between.”

Imagine pulling norms and ideals from every philosophy, every religion, and every culture. “It is basically saying ‘we are inviting you to sit down at the table, we have a chair for you—please join us and put your values here.’ And enable different perspectives to get together as individuals, or groups, or communities, and to basically teach the machines with their own way of understanding the world. And so having mapped all of those, we can then begin to understand whether a given action, or a given decision, or a given behavior would be a good fit in a certain culture or a certain situation or context.”
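The mapping Watson describes could begin with something quite modest. Below is a minimal sketch, assuming communities contribute annotated judgments of behaviors in context; the behaviors, community names, and scores are all invented for illustration and are not drawn from Watson or EthicsNet.

```python
# A toy "map of values": context-specific judgments aggregated per community.
from collections import defaultdict

# Each community teaches the system by annotating behaviors with a score
# from -1.0 (poor fit) to 1.0 (good fit). All entries are hypothetical.
annotations = [
    ("tipping at a restaurant", "us_urban", 1.0),
    ("tipping at a restaurant", "jp_tokyo", -0.5),   # can read as rude
    ("direct criticism in a meeting", "nl_business", 0.8),
    ("direct criticism in a meeting", "jp_business", -0.7),
]

# Keep judgments keyed by (behavior, community) so context is never lost.
value_map = defaultdict(list)
for behavior, community, score in annotations:
    value_map[(behavior, community)].append(score)

def fit_for_context(behavior, community):
    """Average community judgment, or None if that context is unmapped."""
    scores = value_map.get((behavior, community))
    return sum(scores) / len(scores) if scores else None

print(fit_for_context("tipping at a restaurant", "jp_tokyo"))   # -0.5
print(fit_for_context("tipping at a restaurant", "unmapped"))   # None
```

Real value-mapping would of course need far richer representations, but the key design choice survives even in the toy: judgments stay attached to the community that made them, rather than being averaged into a single global verdict.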

Watson warns, “The last thing that we should want is for somebody to have a monopoly on deciding which values go into this thing—and it’s likely to be somebody in Silicon Valley, and that likely will not reflect the rest of the world. I think that would be a terrible thing.”

If machines are to learn to make decisions that benefit humans, how do we define that benefit? According to the IEEE, “Common metrics of success include profit, occupational safety, and fiscal health. While important, these metrics fail to encompass the full spectrum of well-being for individuals or society. Psychological, social, and environmental factors matter. Well-being metrics capture such factors, allowing the benefits arising from technological progress to be more comprehensively evaluated, providing opportunities to test for unintended negative consequences that could diminish human well-being.”

Machine ethics today

“We are increasingly delegating all kinds of social and economic tasks to machines,” Watson says. And machine ethics will help automated systems act more appropriately. “We want them to represent us in a good way, so we can trust machines as our ambassadors in the world.”

As an example of how machine ethics will manifest in our day-to-day lives, Watson says, “The earliest forms of implementation we are likely to see is helping personal assistants to understand our needs better. For example, Google has released Duplex, which is this system that can talk on the telephone, and it’s incredibly convincing. And if we are to have machines that are our own voice in the world, they need to understand a little bit about social graces.” She adds, “They also need to understand what kind of behavior is reasonably safe in terms of ethics, and where a line might be crossed.”

Machine ethics is a topic of concern in several areas, including:

Automated customer service: a customer may or may not be initially aware they are communicating with a chatbot. And if the computer is making purchasing recommendations, what is the driving principle – profit margin or a customer’s best interest?

Automated vehicles: should a driverless car sacrifice its passengers or pedestrians in an accident scenario? When human participants were given the opportunity to make those choices in MIT’s Moral Machine platform, the answers were not conclusive, and varied by geography.

Financial services: automation is transforming financial services with 24/7 customer service and on-demand transactions. However, biased algorithms can discriminate against groups of people, and automated, ultrafast trading has led to disruptive flash crashes and price spikes in financial markets.

Healthcare: AI and data analytics could help doctors and patients worldwide, but the amounts of data needed for AI to contribute to global health put patient privacy at risk. Tainted data could also introduce bias, and a machine-generated recommendation or analysis could be manipulated by an operator with unethical motives.

On a very human level, Watson says, “Machine ethics is, perhaps in the near future, going to help us to connect with other people who share our worldview, or to help us to understand those who we may not necessarily at first agree with, that perhaps we have more in common with each other than we immediately would believe.”

These are the early days. “We are only at the beginning of the effort to finally deploy these kinds of technologies for the first time, but it is going to be very transformative,” Watson says. “In the same way that people initially just used Blockchain for things like Bitcoin, we’re now starting to understand all the kinds of different things that we can map with Blockchain, and the things that we can start to do with these crypto-technologies. We will see something similar in machine ethics.”

Machine ethics technology is largely about people. “It can help us to better map the space of human values. This means that we will have opportunities to find amazing combinations of characters that make very effective teams, or to find people who could be amazing friends for us, whom we have not had an opportunity to get to know yet,” Watson says.

Executive Search Meets Machine Ethics

What is the role of machine ethics in search? From mitigating bias in recruiting and assessment to helping build better teams, values-based AI may have an impact on the executive search profession. “I think search is an exciting use for these new technologies, because machines are going to be able to better understand the values that drive people as well as the dynamics of different teams and how those dynamics fit together,” Watson says.

Team building: “Some of the best teams are going to be those that have different people of different skills and possibly different ways of understanding the world, but they share a kind of set of values, more or less, and therefore they can gel with each other,” Watson says.

A deeper assessment: “I think that there are tremendous opportunities to use these technologies to understand people better on much deeper levels, beyond the surface and resume, to get a better actual picture of who that person really is, and what drives them.”

Making the placement: “These technologies can help determine if a person is more focused on a package that has a higher salary, or they want more flexibility, or the kinds of things that can make a person happier,” Watson says.

“Perhaps even the machines can recommend a particular manager or management style that might be ideal for one executive versus another. There are huge opportunities here, and we’re only beginning to scratch the surface,” she says.

Though not yet deployed in the high-touch executive and board search profession, two particularly interesting technologies are being used in the broader recruiting space. In “Artificial Intelligence: The Robots Are Now Hiring,” The Wall Street Journal reports that most Fortune 500 companies use automation in recruiting and assessment. One company, “DeepSense, based in San Francisco and India, helps hiring managers scan people’s social media accounts to surface underlying personality traits. The company says it uses a scientifically based personality test, and it can be done with or without a potential candidate’s knowledge.”

Another company mentioned in the WSJ article, “HireVue says its algorithm compares candidates’ tone of voice, word clusters and micro facial expressions with people who have previously been identified as high performers on the job.”

The Risks

In the article “AI, People and Society,” published by Science magazine in July 2017, Eric Horvitz writes, “Excitement about AI has been tempered by concerns about potential downsides. Some fear the rise of superintelligences and the loss of control of AI systems, echoing themes from age-old stories. Others have focused on nearer-term issues, highlighting potential adverse outcomes. For example, data-fueled classifiers used to guide high-stakes decisions in health care and criminal justice may be influenced by biases buried deep in data sets, leading to unfair and inaccurate inferences.”

Algorithms are vulnerable to the biases in the data sets that inform them—data sets compiled by humans. The risks can have high-stakes consequences. For example, as Will Knight reported in “Forget Killer Robots—Bias Is the Real AI Danger,” published in MIT Technology Review: “Black box machine-learning models are already having a major impact on some people’s lives. A system called COMPAS, made by a company called Northpointe, offers to predict defendants’ likelihood of reoffending, and is used by some judges to determine whether an inmate is granted parole. The workings of COMPAS are kept secret, but an investigation by ProPublica found evidence that the model may be biased against minorities.”

One way to mitigate the risks associated with machine learning and to reinforce machine ethics is transparency, but that may be very hard to accomplish. According to Knight’s article, “Many of the most powerful emerging machine-learning techniques are so complex and opaque in their workings that they defy careful examination.”

The Scary Bits

So which is worse—AI without any grounding in human values, or a super-moral machine?

For all the promise that AI holds for the world in terms of research, productivity, and the imaginable range of human benefits, the converse is also true: the same capabilities can be turned to harmful ends.

The Future of Humanity Institute published “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation” in February 2018, identifying the areas in which malicious AI threatens society. For example: high-quality voice and video forgeries that could foment public mistrust and lead to the manipulation of target populations; the proliferation of robotic aids in our homes, offices, and airspace that could be armed and threaten both people and infrastructure; and malware coupled with machine learning, wielded by cyber attackers to thwart cybersecurity efforts.

The IEEE also warns, “according to some theories, as systems approach and surpass AGI (Artificial General Intelligence), unanticipated or unintended system behavior will become increasingly dangerous and difficult to correct. It is likely that not all AGI-level architectures can be aligned with human interests, and as such, care should be taken to determine how different architectures will perform as they become more capable.”

The Future of Humanity Institute report concludes, “As AI systems increase in capability, they will first reach and then exceed human capabilities in many narrow domains, as we have already seen with games like backgammon, chess, Jeopardy!, Dota 2, and Go and are now seeing with important human tasks like investing in the stock market or driving cars. Preparing for the potential malicious uses of AI associated with this transition is an urgent task.”

Perhaps the most frightening scenario is posited in a blog post by Watson. In “The Supermoral Singularity,” Watson reasons that giving artificial intelligence moral agency could lead to a collective of morally righteous machines “actively campaigning as missionaries to enact their unified vision of an ideal world.”

She writes, “The ultimate outcome might be some sort of non-violent utopian paradise, but I fear that in the process a great number of persons (animal, human, and non-organic) may be destroyed.”

Looking ahead

In the near future, machines that understand norms and values will be able to help organizations better understand people, manage solutions, and construct effective teams. Watson says, “I think these technologies are going to unlock new forms of market segmentation as well, so not just the demographics—these technologies will be able to drill down into the individual, and understand what really makes them tick. I think that’s going to create all kinds of new opportunities.”

One additional opportunity is bringing machine ethics into economics itself. Watson explains, “One of the biggest problems in our world is that we don’t have a good way of accounting for what in economics terms are called ‘externalities.’ Externalities are those things that, when one does something, there is a consequence for somebody else. Pollution is an externality.” Watson describes a factory whose production creates pollution as an externality: somebody unrelated to the transaction ends up bearing its effects. “Our world GDP is about 80 trillion dollars per year, and we increasingly live in tremendous luxury, and yet what isn’t on the balance sheet is all of the externalities that we’re creating,” she says. “In a sense we are kind of cheating, by not accounting for the externalities that are not on the balance sheet.”

Watson describes our failure to measure externalities as “fundamentally an accounting problem.” She describes Blockchain as “a triple-entry ledger system. It is an advancement of the old double-entry ledger system we’ve had since the Renaissance. And in combination with machine ethics, finally, over the next 10 to 15 years we will be able to start accounting for externalities for the first time in human history.”

The impact of measuring externalities through Blockchain could be immense. “That means that the price of the externalities that you create, whenever you consume something, will be able to go into the pricing mechanism,” Watson explains. “It makes people pay up front upon consumption of that product or service—so finally we can enjoy capitalism, but in a sustainable way. And I think that’s going to be an essential technology for making a safe and sustainable world in the years to come.”
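Reduced to arithmetic, the pricing mechanism Watson envisions is simple; the hard part is measuring the costs. Here is a minimal sketch with invented figures, assuming per-unit externality costs have already been measured and recorded somewhere authoritative (the job Watson assigns to machine ethics plus a shared ledger):

```python
# A toy model of externality-inclusive pricing. All costs are hypothetical.
from dataclasses import dataclass

@dataclass
class Externality:
    description: str
    cost_per_unit: float  # estimated societal cost per unit sold, in dollars

def price_with_externalities(base_price, externalities):
    """Fold measured externality costs into the sticker price."""
    return base_price + sum(e.cost_per_unit for e in externalities)

widget_externalities = [
    Externality("CO2 emitted in manufacturing", 0.40),
    Externality("wastewater treatment burden", 0.15),
]

# The consumer pays the full societal cost up front, as Watson describes.
print(price_with_externalities(10.00, widget_externalities))  # 10.55
```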

“That’s why I think that the blending of machine intelligence, machine economics, and machine ethics is going to give us the smarts, the heart, the wisdom, and also the incentive structure to form society in a new way.”
