
When it comes to AI, is a Black Mirror scenario actually possible?

Black Mirror is THE series of the moment. Its combination of innovation and the anxiety it provokes has made it a reference point for everything that could go wrong with society. Though it was originally created for and broadcast on Channel 4, Netflix acquired the rights to share it with the world in 2016. Since then, the dystopian show has used each episode to present a digital trend affecting society and show what happens when it’s taken too far, raising a whole host of ethical issues for viewers.

If you’ve seen a few episodes, you know one thing: Black Mirror is scary. Viewers rarely emerge unscathed as the show covers everything from social networks to dating apps and censorship, pushing the limits of society and forcing us to question our use of technology. It’s no surprise, then, that Black Mirror has also tackled the subject of Artificial Intelligence throughout the series.

AI is definitely one of the trends currently sparking a lot of discussion. However, the reality of AI remains relatively unknown to the public. As a result, it is mainly talked about for the damage it can cause. Through a selection of Black Mirror episodes, we invite you to explore the negative effects of AI as the series portrays them – and take the opportunity to do a reality check because, despite what you may have heard, AI is actually not (yet?) ready to conquer the world.

Black Mirror & AI: Arkangel (© Black Mirror, Netflix)

AI as a means of censorship: Arkangel (S4E2)

Synopsis: In this episode, a mother equips her daughter with a brain chip that allows her to monitor the girl’s actions. The device gives her real-time biological data and access to everything her daughter sees, all while censoring any brutal images that could shock her. From violent TV scenes to scary dogs: anything that causes a state of stress is filtered and blurred by this Artificial Intelligence.

Questions: Through the lens of child surveillance, this episode highlights the difference between protection and control. Where is the line between the two? In this case, Artificial Intelligence makes us question the value of censorship, even when it is done “for the good” of the individual.

Reality: Today, any kind of alliance between artificial intelligence and neuroscience remains largely theoretical; for now, the two fields mainly inspire each other’s development of algorithms. For this scenario to come true (which we do not want, by the way), we would need an exhaustive library of shocking content to censor, since an AI cannot perform this kind of transfer on its own. It would also require the technological and scientific capability to develop such a chip and implant it in a human brain, an innovation we simply do not have. So even if this scenario is conceivable, today it is, fortunately, still science fiction.

Black Mirror & AI: Hang the DJ (© Black Mirror, Netflix)

AI as the ultimate decision-making tool: Hang the DJ (S4E4)

Synopsis: This episode features a dating application that measures the love compatibility of individuals. Two characters decide to submit completely to this algorithm and to comply with all its decisions concerning their love life. At stake: the promise of the ideal romantic partner, with 100% compatibility. As their adventures progress, however, the two protagonists realize that despite what the AI says, they are made for each other.

Questions: The underlying question of this episode is that of free will. We know that we rarely make the optimal choice in a given situation – we have neither the capacity nor the energy. An AI promising to make better decisions than we can could be tempting in that regard, especially when it comes to important choices such as relationships, investments, nutrition, careers and so on. But as powerful as it is, can an algorithm really replace our decision-making process? What role would intuition, chance and choices that cannot be measured play in a world where we no longer decide for ourselves?

Reality: As we see at the end of the episode, the best choices we make are still our own. Although many tools exist to make our lives easier today, trusting them blindly for decisions as important as choosing a partner is simply foolish. Fortunately, we have the ability to step back and choose for ourselves! That said, for something as simple as scheduling appointments (which we all know is a time-consuming task with little added value), Julie and Slash can take charge and optimize those choices for you.

Black Mirror & AI: Be Right Back (© Black Mirror, Netflix)

AI as a digital double: Be Right Back (S2E1)

Synopsis: In “Be Right Back”, the main character is a woman who has just lost her partner. Assailed by grief, she begins to chat with an Artificial Intelligence that mimics her companion. Thanks to the data available online and machine learning, it recreates a digital duplicate of the deceased. The service gradually evolves, eventually giving this digital double a physical, android body.

Questions: This episode touches on many topics: the process of mourning, immortality in the digital age, the right of access to post-mortem data and what it ultimately means to be alive. The reason this episode has provoked such a strong reaction is that it is extremely realistic and blends practical and emotional issues – including romantic relationships with AI.

Reality: This episode is the most true-to-life of the whole series, since conversational AIs that imitate deceased people already exist. The griefbot (a contraction of grief and chatbot) built in memory of Roman Mazurenko is the best-known example. Such tools make it possible to keep people’s memories alive while providing a sounding board for the feelings of those left behind. Whether you’re for or against it, one thing is reassuring: we’re still far from creating a human android double like the one presented in the series.
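To make the idea a little more concrete, here is a deliberately tiny, illustrative sketch in Python. It is not how any real griefbot or product mentioned above works – actual systems rely on far more sophisticated machine learning – and the message archive is entirely made up. It simply “imitates” a person by retrieving the most similar message from an archive of their past texts.

```python
# Toy "griefbot" sketch: reply by retrieving the most similar message
# from an archive of someone's past texts. Purely illustrative.

import re
from typing import List


def tokenize(text: str) -> set:
    """Lowercase a message and split it into a set of word tokens."""
    return set(re.findall(r"[a-z']+", text.lower()))


def best_reply(prompt: str, archive: List[str]) -> str:
    """Return the archived message whose words overlap most with the prompt."""
    prompt_tokens = tokenize(prompt)

    def overlap(message: str) -> float:
        tokens = tokenize(message)
        if not tokens or not prompt_tokens:
            return 0.0
        # Jaccard similarity: shared words / total distinct words
        return len(tokens & prompt_tokens) / len(tokens | prompt_tokens)

    return max(archive, key=overlap)


if __name__ == "__main__":
    # Hypothetical archive of a person's past messages (invented for this example)
    archive = [
        "Don't forget to water the plants, they miss you when you travel.",
        "I always feel better after a long walk by the river.",
        "Let's order pizza tonight and watch something silly.",
    ]
    print(best_reply("I went for a walk today and thought of you", archive))
```

Even this crude retrieval trick can produce replies that feel eerily familiar, which is precisely why the real, far more capable versions raise the questions the episode explores.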


AI is a tool, designed by and for humans. As with any tool, it has its limits. The important question is whether we trust those who develop and manage these tools.

This is what emerges from the Black Mirror episodes: the context is deliberately opaque. For artistic reasons and for the sake of clarity, the episodes show simplified universes, detached from any real setting. Little information is given about the institutions behind the tools, and the legal, social and political frameworks in which they operate seem to be absent altogether.

In fact, a Chinese journalist stated that each story presents the consequences of accepting tacit rules and that disaster awaits those who welcome technology while ignoring morality. And although Black Mirror describes situations that are unlikely, it raises questions that are essential to humanity.

Technology organizations – but also consumers – have a duty to inform themselves: to understand the ins and outs of the tools they use, their rights, and the ways in which that power can be abused.