Towards an ethical Artificial Intelligence

With autonomous agents increasingly present in our daily lives, some Artificial Intelligence specialists propose implementing a set of ethical rules so that these agents can grasp the scope of their actions. Others call for the establishment of rights for robots as a way to resolve the legal issues that are emerging.

Among the numerous articles dedicated to autonomous cars today, one particular thought experiment resurfaces often: the trolley problem. Adapted to self-driving cars, it is usually presented as follows: an autonomous car, with a person on board, sees five pedestrians suddenly appear in its path. The only way to avoid them is for the car to swerve off the road, crashing and thereby causing its passenger’s death. Faced with this Cornelian dilemma, what choice should the Artificial Intelligence controlling the car make?

This thought experiment is anything but science fiction. Several American states have already authorized some manufacturers to test their autonomous vehicles on their roads (with a human driver ready to intervene and take over if necessary). Tesla’s commercial vehicles are equipped with an Autopilot mode, which allows the car to drive itself under certain conditions (on a highway, for example, where driving is easier). Nor does the subject apply only to autonomous cars: from virtual assistants (Siri, Cortana, Google Now…) to humanoid robots, Artificial Intelligence is increasingly woven into our daily lives. Given this reality, many insist on the importance of establishing ethical rules to govern Artificial Intelligence.

A Tesla

A pragmatic approach

Grégory Bonnet, an AI researcher at the University of Caen in France, is one of them. For three years, he has led a team of computer scientists, sociologists and philosophers who take a pragmatic approach to the ethical issues raised by the emergence and multiplication of autonomous agents. Grégory Bonnet explains:

“Ethics, or morals, can depend on the context, the users or the country in which the technology is used. We therefore do not seek to define a universal set of morals that AI must conform to. Our objective, rather, is to make sure that machines are always capable of justifying each one of their decisions by manipulating logical symbols. This involves defining data formats, philosophical notions, values, principles and maxims that can be integrated into the design process of these machines, so as to make them accountable for their actions in all circumstances.”
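To make this concrete, here is a minimal sketch, in Python, of what such symbol-manipulating justification could look like. The option names, principles and scoring rule are hypothetical illustrations, not Bonnet’s actual formalism.

```python
# A toy agent that picks among options by explicit ethical principles
# and records a symbolic justification for its choice.
from dataclasses import dataclass

@dataclass
class Option:
    name: str
    upheld: list     # principles this option satisfies
    violated: list   # principles this option breaks

def decide(options, weights):
    """Score each option by the weights of the principles it upholds
    minus those it violates, and keep a human-readable justification."""
    def score(o):
        return sum(weights[p] for p in o.upheld) - sum(weights[p] for p in o.violated)
    best = max(options, key=score)
    justification = (f"Chose '{best.name}': upholds {best.upheld}, "
                     f"violates {best.violated}.")
    return best, justification

weights = {"minimize_casualties": 5, "protect_passenger": 1}
options = [
    Option("swerve", ["minimize_casualties"], ["protect_passenger"]),
    Option("stay_course", ["protect_passenger"], ["minimize_casualties"]),
]
choice, why = decide(options, weights)
print(why)  # the machine can account for its action a posteriori
```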

When mathematics meets philosophy

Through practical cases, Grégory Bonnet’s team raises ethical issues that it then tries to resolve by crossing disciplines.

“We choose a problem, the trolley problem for instance, we model it mathematically, then, drawing on the literature and on philosophy, we look for the best tools to resolve it.”

Consequentialist philosophy, which judges an action by its consequences in order to guide decision-making, and of which utilitarianism is a branch, has turned out to be a rich resource.

“For example, there is the doctrine of double effect, which helps decide between two options that each have good and bad effects. According to this doctrine, for an action to be considered ethical, on the one hand, the good effects have to proportionally outweigh the bad effects, and on the other hand, the good effect cannot be a direct consequence of the bad effect.”
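Read as an algorithm, the doctrine is a two-condition test. The sketch below is a rough illustration; the numeric weighting and the causal flag are simplifying assumptions, not part of the doctrine’s philosophical formulation.

```python
def permissible_by_double_effect(good_value, bad_value, good_caused_by_bad):
    """An action passes only if (1) its good effects proportionally
    outweigh its bad effects and (2) the good effect is not produced
    *through* the bad effect."""
    proportionate = good_value > bad_value
    return proportionate and not good_caused_by_bad

# Trolley-style reading: swerving saves five (good) at the cost of the
# passenger (bad); the pedestrians are saved by the swerve itself, not
# by the passenger's death, so the bad effect is only a side effect.
print(permissible_by_double_effect(good_value=5, bad_value=1,
                                   good_caused_by_bad=False))  # True
```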

 

An unethical robot

 

Machines: not perfect but responsible

Based on the ethical dilemma cited above and by virtue of this doctrine, the ethical choice here would be to sacrifice the passenger and save the five pedestrians: five lives saved proportionally outweigh one life lost, and the pedestrians’ survival results from the swerve itself rather than from the passenger’s death. But it is not that simple, as Grégory Bonnet explains:

“If we add the notion of Kant’s Categorical Imperative, then an external observer affirming that the vehicle must sacrifice itself to save the five pedestrians has to also accept that his own vehicle must sacrifice him to save five other people”.

Yet who in their right mind would agree to get into a vehicle knowing that it would sacrifice them to save a greater number of people? If autonomous cars are to be accepted by the wider population, it is important to make sure that saving their passengers is also a priority… The emergence of Artificial Intelligence clearly generates many such Cornelian dilemmas! For this reason, it is crucial that AI be capable of justifying its decisions.

“What is important is that the machine is capable of saying a posteriori: ‘I acted this way in accordance with the doctrine of double effect’ or ‘I proceeded this way because any other action would have put my passengers’ lives in peril,’” concluded Grégory Bonnet.

Towards rights for robots?

Even though his group of researchers focuses on ethical questions and does not venture into the legal arena, others have taken the plunge. This is the case for Alain Bensoussan. An attorney at the Paris Court of Appeals specializing in advanced technologies, he defines himself as “the advocate for the rights of robots and their sovereignty”. According to him, the law must adapt to the growing role that Artificial Intelligence plays in our lives.

“Human freedom is framed by the notion of the natural person, and the freedom of other holders of rights and obligations by that of the legal person. In the same way, because robots are automatons, intelligent machines capable of making decisions autonomously, I think that it is necessary to create the notion of a robot person. With everything that implies: the robot’s identification, so that we know whom we are addressing, responsibility, a governance equivalent to that of legal persons, capital, insurance and full traceability, that is, constant monitoring of their actions to serve as proof of their eventual responsibility,” explains Alain Bensoussan.

A judge

Preventing legal dilemmas

According to him, in the absence of such an evolution, we will quickly find ourselves facing legal dead ends.

“Let’s imagine a Tesla in Autopilot mode causes an accident. Who is responsible, today? Nobody. The driver was not driving, and the developers who coded the software will argue that it trained itself on its own, through machine learning, and so on.”

The notion of a robot person, however, would introduce a chain of responsibility:

“In the case of a failure, the robot would be held responsible first; then, in order, the user, the AI creator, the manufacturer and the owner would be to blame.” Alright, but how do we sanction a robot? “If the robot is judged too unreliable to operate in a public environment, it could be scrapped. If it has capital, it could be fined,” continues Alain Bensoussan, who has drafted a charter on the rights of robots, available on his website.
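Seen as a data structure, this is simply an ordered fallback list. The sketch below, with its hypothetical parties and solvency test, illustrates how such a chain could be walked to find the answerable party.

```python
# Bensoussan's proposed chain of responsibility as an ordered fallback list.
LIABILITY_CHAIN = ["robot", "user", "ai_creator", "manufacturer", "owner"]

def liable_party(can_answer):
    """Walk the chain in order and return the first party able to answer
    for the damage (e.g. with capital or insurance)."""
    for party in LIABILITY_CHAIN:
        if can_answer(party):
            return party
    return None

# Example: a robot with no capital of its own shifts liability to the user.
print(liable_party(lambda party: party != "robot"))  # user
```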

The good Samaritan principle

For the lawyer, in addition to making it possible to judge robots, the law must also govern their creation, so that only ethical artificial intelligences can be conceived:

“We must make sure that certain decision-making rules are implemented during the design process so as to ensure that the robot is benevolent. Some kind of good Samaritan rule. This implies, for example, that one cannot create robots capable of causing harm, as stipulated in one of Asimov’s laws. Lethal robots must be prohibited. The robot must also be transparent: it must not be capable of deceiving its human interlocutor.”

 

Anthropomorphic robots from the series Real Humans

 

Andra Keay, Director of Silicon Valley Robotics, shares this last concern. According to her, anthropomorphic robots in particular are dangerous because they elicit empathy and inspire trust in humans, while remaining machines in the service of their owners, possibly large companies motivated by profit rather than by the interests of the humans their robots interact with. She worries:

“A person interacting with a robot salesperson in a store could, for example, misread the robot’s intentions, not realizing that it seeks, above all, to sell the product no matter what.”

The France AI Initiative

Finally, for Alain Bensoussan, we must ensure a degree of transparency in Artificial Intelligence software:

“We must also make the algorithms transparent so that they can be audited and traced.”

The lawyer therefore proposes the establishment of an Office for Algorithms, charged with overseeing algorithms and ensuring that they are ethical, much as an auditor makes sure that the books are in balance. While political leaders have so far remained very timid on the issue, the idea is slowly gaining ground. The France AI Initiative, set up by Axelle Lemaire, French Minister of Digital Affairs, and Thierry Mandon, her Higher Education counterpart, has recently launched seven working groups, one of which will focus on the ethical and societal issues raised by Artificial Intelligence. A development worth following closely.