Algorithms will turn tomorrow’s employees into augmented workers. This shift calls for a rethinking of education and professional training, and for the development of ethical artificial intelligence.
One of the most interesting episodes of the science fiction series “Electric Dreams”, inspired by the works of the writer Philip K. Dick, is undoubtedly episode 8. Entitled “Autofac”, it depicts a dystopian universe controlled by a monopolistic company operated entirely by machines. Some say science fiction reveals more about the present (and the fears and fantasies that characterize it) than about the future, and the show illustrates this perfectly. Viewers recognize the fear of an unequal future created by the replacement of humans with machines, a nightmare that has infused the collective imagination in recent years. This fear can be traced back to a 2013 Oxford University study by Carl Benedikt Frey and Michael A. Osborne, which concluded that 47% of American jobs were likely to be automated over the following 20 years. Since then, research, controversy and discussion on the subject have multiplied. Some have revised the two researchers’ figures upwards or downwards, while others have preferred to focus on the new jobs that artificial intelligence will create. A recently published World Economic Forum report, for example, states that artificial intelligence and robotics will create 60 million new jobs by 2022.
Towards augmented workers
Numbers aside, some prefer to shift the conversation from “how many” to “how”: how artificial intelligence will transform the labor market, and how to prepare for those transformations. The question was hotly debated at the latest edition of Dreamforce, the annual technology conference that the software giant Salesforce holds in San Francisco. James Manyika, Chairman and Director of the McKinsey Global Institute, says that estimating how many jobs will be automated in the coming years is so difficult because the framework itself is wrong. “To get a better idea of the situation, it is important to think in terms of tasks rather than jobs. Each job consists of a number of activities, and advances in artificial intelligence make it possible to automate some of them,” he said.
“Few jobs will be able to hand over all of their tasks to machines or software. On the other hand, according to figures from the US Department of Labor, 60% of today’s jobs contain a third of tasks that can be automated or augmented through artificial intelligence.” The prospect to prepare for, then, is not mass unemployment but a working world in which human and artificial intelligence collaborate ever more closely. This is also the thesis defended by the historian Yuval Noah Harari in his latest book, 21 Lessons for the 21st Century. “The labour market of 2050 may well be defined by collaboration between humans and artificial intelligence rather than by competition between them. In fields as varied as policing and finance, humans augmented by artificial intelligence could be more powerful than either humans or computers alone.” In essence, Harari argues that since machines will take on the most routine and tedious tasks, tomorrow’s jobs will be much more rewarding.
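A back-of-the-envelope sketch helps show why the task-level framing changes the story. The numbers below are synthetic, chosen only to mirror the orders of magnitude quoted above, and are not drawn from McKinsey’s or the Labor Department’s data: even when roughly a third of all tasks are automatable, almost no job can be automated end to end, while a majority of jobs are substantially exposed.

```python
# Toy model: decompose jobs into tasks and compare "share of automatable
# tasks" with "share of fully automatable jobs". Synthetic numbers only.
import numpy as np

rng = np.random.default_rng(1)
n_jobs, n_tasks = 1_000, 12  # each job decomposed into 12 activities

# For each task of each job, draw whether current AI could automate it
# (roughly one task in three, independently).
automatable = rng.random((n_jobs, n_tasks)) < 1 / 3

task_share = automatable.mean()                       # automatable tasks
full_jobs = automatable.all(axis=1).mean()            # fully automatable jobs
exposed = (automatable.mean(axis=1) >= 1 / 3).mean()  # jobs 1/3+ automatable

print(f"automatable tasks:          {task_share:.0%}")  # ~33%
print(f"fully automatable jobs:     {full_jobs:.1%}")   # essentially 0%
print(f"jobs with 1/3+ automatable: {exposed:.0%}")     # a majority
```

The same underlying exposure reads as “machines will take almost no jobs” or “machines will touch most jobs”, depending on whether you count whole jobs or individual tasks.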
The importance of learning
However, he adds that these jobs will require new technical skills, which will have to be updated regularly to keep up with advances in artificial intelligence. It will therefore be essential to rethink education so that individuals can learn throughout their lives, and to give workers access to quality professional training. On the education side, MOOCs, or massive open online courses, are a good starting point for James Manyika. As a member of Khan Academy’s management team, he says that online education not only allows people to be trained outside the university classroom, but also makes it possible to experiment with new forms of teaching better adapted to tomorrow’s world. “By analyzing learning data, we realize that individuals are not learning the way we thought they were. Users jump from one discipline to another and build bridges between subjects that we would not have thought of… so we discover new ways of learning, which will allow students to learn more quickly,” he explained.
The first online course platforms were, moreover, designed by specialists in artificial intelligence. Sebastian Thrun, a Google alumnus and professor of computer science at Stanford, and Andrew Ng, also a Stanford professor and co-founder of Google Brain, each founded their own online education startup: Udacity and Coursera, respectively. Thrun claims to have founded Udacity as an “antidote to the revolution brought about by artificial intelligence.” As for Ng, he believes that artificial intelligence researchers have a responsibility to find solutions to the problems their research may cause, and describes Coursera as his own contribution.
On the professional training side, several companies are trying to set an example, Salesforce among them. “We have implemented Salesforce Trailhead, a completely free training program, to enable individuals who are not technicians and have no specific artificial intelligence skills to learn how to use the tools that will make them more effective in their work,” said Sarah Franklin, vice president of Trailhead. Several large American companies, such as Walmart and AT&T, have developed similar programs. According to James Manyika, however, companies’ efforts remain insufficient for the time being. “Public and private organisations’ spending on vocational training has decreased over the last twenty years. And it was already quite low to begin with! It is thus imperative that companies change their strategy in this area.”
Ethics in Menlo Park
Even so, the arrival of artificial intelligence on the labor market is not just a question of employability. As computers and algorithms take on a growing role, the question of their reliability arises as well. Although they are often considered infallible and impartial, algorithms can, like their human designers, make mistakes or be biased in their decisions. In her book Weapons of Math Destruction, mathematician Cathy O’Neil shows that the algorithms used by American courts and police to predict crime, and by insurance companies to assess risks and calibrate their rates, are often biased against the most disadvantaged populations. “The fact is rarely mentioned, but to train decision-support algorithms, the different actors who build them often use the same data set. So if there are biases in this initial data set, biases in decision-making can quickly proliferate,” James Manyika explained. “Since algorithms are designed by humans, the data chosen to train them is unlikely to be perfectly neutral,” he continued.
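To make that mechanism concrete, here is a minimal sketch with entirely synthetic data and a hypothetical approval scenario (nothing here comes from the cases O’Neil or Manyika describe). A model trained on a biased decision history faithfully reproduces the bias, even for identically qualified people:

```python
# Synthetic example: a model trained on biased historical decisions
# reproduces the bias. Data and scenario are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups (0 and 1) with identical underlying qualification scores.
group = rng.integers(0, 2, n)
score = rng.normal(0, 1, n)

# Historical decisions were biased: group 1 needed a higher score to be
# approved, so the training labels themselves encode the bias.
approved = (score > np.where(group == 1, 0.5, -0.5)).astype(int)

# Train on the biased history, with group membership as a feature.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, approved)

# Two applicants with the same score but different groups.
print(model.predict_proba(np.array([[0.0, 0], [0.0, 1]]))[:, 1])
# Group 0 gets a far higher approval probability than group 1,
# despite identical qualifications: the model learned the bias.
```

Nothing in the training step is “wrong” in a technical sense; the model simply learns the regularities of its starting data, which is exactly how a historical bias proliferates.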
“It’s a complex debate, because algorithms can also help humans make better decisions,” added Timnit Gebru, a Google Brain researcher in charge of artificial intelligence ethics at Google. “For example, a company called HireVue uses artificial intelligence to support human resources. Its algorithms automatically select the most promising candidates for a given position so that human resources can interview them, and the company claims that its solution produces a more diverse workforce. The risk is that, in the near future, a company of this kind could end up with a monopoly on such a service; any bias in its algorithm could then have devastating effects. To avoid this, I believe it is essential to have a government agency responsible for auditing such solutions before they are deployed.”
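What might such an audit look like in practice? One widely used check, sketched below with made-up numbers (and not tied to any particular vendor), is the “four-fifths rule” from US employment law: compare selection rates across groups and flag ratios below 0.8.

```python
# Minimal sketch of a disparate-impact audit (illustrative data only).
import numpy as np

def disparate_impact(selected: np.ndarray, group: np.ndarray) -> float:
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are conventionally flagged as adverse impact."""
    rates = [selected[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

# 1 = selected for interview; group is a protected attribute.
selected = np.array([1, 1, 0, 1, 0, 0, 0, 1, 0, 0])
group    = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

print(f"disparate impact ratio: {disparate_impact(selected, group):.2f}")
# 0.33 here: group 1 is selected at a third of group 0's rate,
# well below the 0.8 threshold an auditor would flag.
```

A real audit would go much further (error rates by group, proxies for protected attributes), but even this single ratio illustrates the kind of thing a pre-deployment check could measure.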
For Sarah Franklin, it is also the responsibility of technology companies to ensure that these algorithms work properly. “It is up to companies to mobilize on this subject. Setting up departments dedicated to the ethics of artificial intelligence can be a good start. It is also essential to provide free educational opportunities so that as many people as possible become familiar with these technologies and more of us can take part in the discussion.” According to Timnit Gebru, the demand must also come from the public and from pressure groups, which have begun to mobilize successfully. “Over the past year, the media and civil society have shown growing interest in the challenges posed by new technologies, around cybersecurity on the one hand and the ethics of artificial intelligence on the other. Companies are becoming aware that they must be cautious in deploying their solutions, promoting artificial intelligence that is consistent with human values and equity.” In September 2016, Amazon, Facebook, Google, Microsoft and other technology companies created the Partnership on AI to promote ethical artificial intelligence.