The evolution of AI has led to interesting and passionate debates. One question I am often asked, and will try to answer here, is: how will virtual assistants impact the Internet and, on a larger scale, society as a whole?
First, it’s important to define what we mean when we talk about virtual assistants based on Artificial Intelligence (AI). We are mainly influenced by science-fiction movies, and we all have our own fantasies about what our lives could look like with an assistant like the one in “Her”, capable of completing every task we ask of her and, in the end, becoming part of our daily lives, even a close friend!
Delegating vs. Directing
We often think of Siri or Cortana when we talk about virtual assistants, but compared to a Google Car, Siri is less of an assistant! The Google Car actually drives you around; in theory, you could even stop paying attention altogether, because the car is completing the task for you. Siri, on the other hand, is “just” interacting with your phone on voice command and fulfilling the tasks you ask of her.
- A system that lets you drive a car through voice commands is a new way of interacting, not an assistant.
- A Google Car is a virtual assistant.
“A virtual assistant is a system to which you can delegate the completion of a task from A to Z.”
Another example, one I know very well, is Julie Desk. For those of you who might not know about Julie (you’re missing out! 😉), here is a quick introduction to what she does:
The purpose of Julie isn’t to send your contacts an automatic link for them to choose the time slot that suits them best. Julie is actually there to do the scheduling on your behalf.
Her responsibility is to book slots in your calendar without checking in with you for every single action taken. In fact, you could be on holiday and Julie would still do her job.
With this system, we currently have 250+ clients and, so far, we have scheduled more than 300,000 meetings!
Delegating implies a stronger commitment than directing through instructions: you actually give someone, or something, the power to complete a task on your behalf. The value proposition of virtual assistants like Julie Desk is to free users from non-value-added, time-consuming tasks and to optimize their time so they can focus on the things that really matter to them. The key to achieving this goal is for people to accept giving Julie those responsibilities.
Great responsibilities only come with Trust…
Would you put your life in the hands of a virtual driver you don’t trust? I wouldn’t…
Indeed, we don’t give responsibilities to someone we don’t trust. But why should we trust Julie to schedule our meetings? In general, why should we trust AI and virtual assistants to complete complex and important tasks on our behalf?
Our anxiety grows when we think of Siri and Alexa as virtual assistants and AI. Don’t get me wrong, they are great tools, but we see them only as gadgets. For instance, if you ask Siri to call Zoé and she calls Chloé instead, all you can do is smile and cancel the call. But if Julie messes up a meeting, the damage is greater: she was responsible for the task and did it incorrectly, and an important meeting might be jeopardized in the process. It can’t be canceled; it’s done.
“To delegate tasks to a Virtual Assistant, you first need to trust them”
… And I trust what I understand!
We are not confident about virtual assistants and AI because most of us use AI as a gadget in our daily lives, so we already know that AI can often make mistakes.
At the beginning, we thought that if the AI made fewer errors than a human, then it would be easier to trust the AI when it comes to scheduling meetings. My Google Car example follows the same logic: if there are fewer accidents with the Google Car than with human drivers, then we should trust it more, right?
When we started the Julie Desk adventure, we worked as assistants ourselves for eight months, manually answering all the requests we received! This allowed us to understand the recurrent patterns in the meeting-scheduling process; then, with the help of data scientists, we coded them to give birth to Julie. But above all, we realized that a significant part of the process was just noise: it was never the same, it was unpredictable. It was human.
All of us working on AI know one thing: AI is built on statistics at some point. So a 0% error rate is a myth. There will always be a chance of error.
“AI is built on statistics at some point: the 0% error rate is a myth”
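To put a hedged number on it: if we assume, purely for illustration, a system that is right 99.9% of the time, it would still make around 300 mistakes over the 300,000 meetings we have scheduled (300,000 × 0.001 = 300). At scale, some errors are a statistical certainty.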
The question is: how do we handle the errors made by the AI? Are we ready to accept them? Can we understand them?
From a rule-based to a probabilistic system
Once, at Julie Desk, there was an error with one of our users, and he kindly asked for the “bug to be fixed”! The same thing happened with the Google Car when it crashed again in August of this year. We usually ask why the “problem” appeared and hope it will be resolved soon; and that reasoning makes sense: it’s exactly what we do with all the software and applications we use right now. If something doesn’t work the way we expect, then there is a bug.
However, we can’t think the same way about AI. If I had to make a comparison, I’d rather compare it to a child: if your child has trouble pronouncing words correctly, would you ask your husband, wife, God, Science, or whoever you hold responsible for his/her creation, “Can you please fix it”? No, I don’t think you would. Instead, you’ll think, “It’s okay, he/she will learn.” Well, the same goes for Julie and AI. You have to tell yourself: “Ah, Julie is an AI; it messed up my meetings this time, but it will learn for next time.”
“Bugs do not exist in AI. There is just a learning process.”
One thing that is key to humans using a technology is their ability to set limits for the system, and more specifically to identify the situations where it will work and where it won’t.
That’s what we do with Julie: she can schedule or reschedule your meetings, but she can’t order flowers for you. We define the limits. But within those use cases, we’re still playing a probability game, and that defies the logic most people are used to.
The challenge with AI is to make the computer learn; if we start fixing each error with a “rule”, then it’s no longer AI. It’s an old rule-based system that would, in the end, be unmaintainable!
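To make that contrast concrete, here is a minimal sketch in Python, using scikit-learn. Everything in it is invented for illustration (the hand-written rules, the tiny training set); it is not Julie Desk’s actual code. The rule-based version needs a new hand-written rule for every error, while the statistical version learns from examples and can only ever return a probability:

```python
# Illustrative sketch only -- not Julie Desk's actual code.

# The rule-based approach: every error gets patched with yet another rule.
def is_meeting_request_rules(email_text: str) -> bool:
    text = email_text.lower()
    if "schedule a meeting" in text:
        return True
    if "let's meet" in text:
        return True
    # ...each new failure adds another special case, forever.
    if "are you free" in text and "next week" in text:
        return True
    return False

# The statistical approach: learn a decision from labeled examples instead.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A tiny, made-up training set; a real one would hold thousands of emails.
emails = [
    "Can we schedule a call next Tuesday?",
    "Let's meet for coffee to discuss the project.",
    "Here is the report you asked for.",
    "Thanks for the update, no action needed.",
]
labels = [1, 1, 0, 0]  # 1 = meeting request, 0 = not

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# The model never outputs certainty, only a probability -- which is why
# a residual chance of error is built into the approach itself.
proba = model.predict_proba(["Are you free on Friday morning?"])[0][1]
print(f"P(meeting request) = {proba:.2f}")
```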
From the user’s perspective, the inability to predict how the AI will behave creates anxiety.
But what if the situation that led to an error is so specific and complicated that even a human would have produced the same outcome? Would the user accept it then?
The Empathy Problem
With all the developments we’ve made at Julie Desk, we now have a work ratio of 80% AI to 20% human: 80% of the work is done correctly by the machine and 20% is corrected by humans. However, all the meeting requests Julie receives are sent to our operators for a final human validation before Julie replies to our clients. The AI pre-processes everything and the human operators give the “go-ahead” for a reply.
Not only do our operators guarantee the quality of service, by dealing with the unpredictable human behavior, but they also make Julie more understandable for our clients.
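To make that workflow concrete, here is a minimal human-in-the-loop sketch in Python. Every name in it (Draft, ai_preprocess, operator_review) is hypothetical and invented for the example; it is not Julie Desk’s actual pipeline. The AI drafts every reply, and a human operator either approves it or corrects it before anything is sent:

```python
# A hypothetical human-in-the-loop validation step -- not Julie Desk's real code.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Draft:
    request_text: str    # the incoming meeting request
    proposed_reply: str  # the reply the AI wants to send

def ai_preprocess(request_text: str) -> Draft:
    # Stand-in for the real model: it drafts a reply to every request.
    return Draft(request_text, "How about Tuesday at 10am?")

def operator_review(draft: Draft, correction: Optional[str] = None) -> str:
    # Every draft passes through a human operator before it is sent:
    # the operator either gives the go-ahead or supplies a corrected reply.
    return correction if correction is not None else draft.proposed_reply

def handle_request(request_text: str, correction: Optional[str] = None) -> None:
    draft = ai_preprocess(request_text)               # the AI pre-processes everything
    final_reply = operator_review(draft, correction)  # the human gives the "go-ahead"
    print(f"Sending: {final_reply}")                  # stand-in for the actual email send

# Roughly 80% of the time, the operator simply approves the AI's draft...
handle_request("Can we meet next week to discuss the contract?")
# ...and the remaining 20% of the time, the operator corrects it before it goes out.
handle_request("Can we meet next week?",
               correction="Tuesday is fully booked; how about Wednesday at 2pm?")
```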
Something very funny happened when we replied to the client who had asked for the bug to be fixed. We told him: “You know what, it’s our operator’s fault. He missed it; we’re sorry.”
We asked the operator what had happened and then explained his reasoning behind the mistake to the client, who told us he completely understood!
It’s a problem of empathy. People are not empathetic towards a machine, but they are very empathetic towards other humans, because they can put themselves in other people’s shoes!
“Humans can be empathetic towards other humans, not towards machines.”
Therefore, the main question when you build an AI is: How can we build empathy for a machine?
From what we’ve discussed before, the solution might be to make the AI understandable and predictable.
That’s hard for an AI, though, because we use statistical models; it’s difficult to explain why the AI is wrong or why it’s right. When a human makes a mistake, you can ask for the reasons behind it: either there is a good explanation and you forgive because you understand (you could have made the same mistake), or it’s just a random mistake, so you tell yourself it happens. You’re human, so you understand.
But how can we build a machine that can explain itself when it makes an error? A machine that a human could understand and feel empathy for?
As long as we cannot build such a machine, we will have humans in the loop. To some extent, that’s actually something positive for the future, because it will create new jobs such as “AI trainers” or “AI supervisors”, like our operators at Julie Desk! They are key to making AI understandable right now and, eventually, to making people trust it completely.
Facebook M is doing this very well: they plan on having human supervisors in their headquarters actually running Facebook M for a very long time.
“Humans are key to making AI understandable and trustworthy right now.”
If we ever become empathetic towards AI, and if we ever reach a time when machines can bear responsibility for their actions and feel emotions, maybe the question then will be: “Did we create a new race?” Time to re-watch Ex Machina?
For more information or comments, feel free to visit our FAQ page or contact our team.