
International Reports

Humanity Will Not So Swiftly Replace Itself

by Dr. Aljoscha Burchardt

Interjection

Terms such as artificial intelligence and machine learning have triggered a wave of expectations, full of both hopes and fears. Hopes that we are on the brink of finding solutions to the great problems of humanity are tempered by fears, most commonly that of being made redundant. A sober look at the facts.


An artist dancing, surrounded by digital sculptures.

Four Preliminary Questions

To speak of digitalisation today is also, inevitably, to speak of artificial intelligence (AI). This is remarkable in that AI dates back to the mid-1950s, and yet it is only now starting to attract broad attention. If we are to believe those who shout the loudest, humankind is now facing drastic changes, which are, moreover, wholly without parallel in history.

When it comes to the larger considerations of life, valuable insight may be gained from the key questions posed by the great philosopher from Königsberg, Immanuel Kant: What can I know? What should I do? What may I hope? What is man? These questions take us to the boundaries of knowledge, ethics and religion, and ultimately to the question of being. In this paper, I will discuss the current situation and the need for action with regard to digitalisation and AI. An interesting answer to the last question is provided by Yuval Noah Harari in his recent book Homo Deus. He argues that we are no more than a highly complex algorithm made of flesh and blood. Following this logic, there is no fundamental difference between us and a supercomputer that knows the state of our cells, our hormone levels, our sensory impressions, our experiences, and so on. Although this seems rather like science fiction, it is worth keeping in mind as an intellectual exercise; I mention it here only for the sake of completeness.

In Western society we tend to define ourselves by our work, or to be more precise, by our gainful employment. In southern Germany people use the verb schaffen, literally to create. When we work, we create. What will happen to us and to our working world in this age of digitalisation? I will now turn the spotlight on this question.

Predictions and Changes

One constantly hears and reads predictions of the future based upon the past, yet these nearly always fall wide of the mark. As AI researchers, we are often asked how digitalisation will change people’s working lives. I sometimes wonder how someone like Gottlieb Daimler, at the end of the 19th century, would have responded when asked how the automobile would change the world of work. Perhaps he would have replied that the first thing to do was to build up the network of pharmacies so as to ensure enough fuel for hundreds of cars. He would likely never have imagined that one day Germany would have more cars than men, nor, one supposes, would he have foreseen the fact that the majority of the working population would simply use their cars to commute to work, where they would then be parked all day long. But that’s another issue.

Since industrialisation, the working lives of the mainly agricultural population have undergone several radical changes. These changes have not only affected working hours, but also production methods. In the early days of industrialisation, people worked 60 hours a week, including Saturdays. After the Second World War they worked 45 hours a week, whereas today some people work 35-hour weeks. Occupations and entire professions have come and gone in a short space of time. A few years ago, a multitude of workers spent their whole day doing things on typewriters and calculators that we now quickly deal with on our computers – thanks to digitalisation. The sphere of non-remunerated work has also changed dramatically. My grandmother sometimes told me about her mother’s life as a housewife in the 1920s. In winter, she had to carry coal; in summer, she had to buy milk and cheese every day because she had no refrigerator; and in the evenings, they had to head to the local pub if they wanted a beer. Washing day really did take a whole day if the laundry also had to go through the mangle. The arrival, over the last few decades, of all kinds of household appliances meant that the full-time job of housewife – which was considered a normal occupation for around half the population – changed. Now, a working couple can handle the housework in their spare time. Unless they have young children, in which case there is still a need for the mother (or father) to stay home with them.

Working hours and production methods have changed drastically since industrialisation.

The industrial and technological revolutions of the past have never led to mass unemployment – neither the emergence of assembly-line production in the 1920s, nor the robots of the 1970s, nor, most recently, the computers of the 1980s. Over recent years, studies have been published at regular intervals providing astonishingly detailed predictions about how our working lives will change. Foremost among these is the widely cited study by Frey and Osborne from 2013, which estimates that almost half of all jobs in the United States have the potential to be computerised. Notably, this study appeared before the most recent breakthroughs in the field of AI. Such studies are, of course, extremely speculative: they categorise groups of occupations according to the amount of social intelligence they require, and then estimate the probability that machines will be able to demonstrate that social intelligence. A whole dissertation could be written on each of these concepts and assumptions. A more recent study by McKinsey from 2017 shows that technology has, in fact, always created more jobs than have been lost to the ensuing disruption. So the key question is this: will the upcoming wave of digitalisation be any different?

What Are We Really Talking About?

As a specialist in this field, I would like to outline three key concepts that are relevant to the debate: digitalisation, artificial intelligence and machine learning. In a nutshell, artificial intelligence is a tool for digitalisation, and machine learning is a tool for realising artificial intelligence.

Digitalisation encompasses an array of technical processes that differ in both nature and complexity. Looking back, we can roughly distinguish two waves of digitalisation. The first, which gained momentum in the 1990s and is still far from complete, is the switch from analogue data carriers (such as paper, film and tape) to digital equivalents that can be processed by machines. Initially, only the carrier medium changed: a photo was now available as an image file, an address book as a database. But the machine could do little more than store and play back the data; any work on the content had to be carried out by humans. The second wave of digitalisation, currently in full swing, is about making data understandable to machines. This requires sophisticated analysis and processing capabilities, and AI is often used to provide them. The impact of digitalisation is often wrongly attributed to AI. For example, the retail trade’s shift towards online retailing is a consequence of digitalisation and (so far) has little to do with AI.

Artificial intelligence describes information technology (IT) applications that aim to demonstrate intelligent behaviour. To different extents, this requires a range of key skills: perceiving, understanding, planning, acting and learning. Today we mostly talk about weak AI, which refers to intelligently helping people achieve their goals, i.e. smart interaction and collaboration between humans and machines. Strong AI has a more philosophical relevance: it aims to imitate humans, to be, ultimately, a homunculus.

Machine learning (ML) refers to procedures through which computer algorithms learn from data. In other words, they learn to recognise patterns or show desired behaviours without each individual case having to be expressly programmed. For example, in the online book trade, algorithms learn that certain types of books are bought by certain categories of customers, without the need to define in advance what a romance novel is, or a young father. Autonomous vehicles can learn by simply having people control them for a while. The same method is used to train automatic image labelling: people label images with information such as whether a face appears cheerful or sad, and after several thousand or tens of thousands of examples, an algorithm can learn to classify new images by itself. While ML is often used in AI, it is only one method – one AI tool amongst many. Neural networks and deep learning, which are also often mentioned, are themselves part of ML.
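
To make the image-labelling example concrete, consider a minimal sketch of supervised learning in Python. It is purely illustrative: it assumes the scikit-learn library, and the “images” are invented stand-in feature vectors (say, measurements of mouth curvature and eye openness) rather than real photographs.

    # A purely illustrative sketch of supervised machine learning,
    # assuming the scikit-learn library is installed.
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: each row is a feature vector extracted
    # from an image (mouth curvature, eye openness), and each label records
    # whether a human annotator judged the face cheerful (1) or sad (0).
    X_train = [
        [0.9, 0.8],  # broad smile, open eyes -> labelled cheerful
        [0.8, 0.7],
        [0.1, 0.3],  # downturned mouth -> labelled sad
        [0.2, 0.2],
    ]
    y_train = [1, 1, 0, 0]

    # The algorithm infers the pattern from the labelled examples; no rule
    # such as "a smile means cheerful" is ever programmed explicitly.
    model = LogisticRegression()
    model.fit(X_train, y_train)

    # The trained model can now classify a new, unseen example on its own.
    print(model.predict([[0.85, 0.75]]))  # prints [1], i.e. "cheerful"

In reality, of course, the feature vectors come from many thousands of real images and the models are far larger, but the principle is exactly this: labelled examples in, learned classifier out.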

What Jobs Can Be Done by AI?

Now we come to the question “What can we know?” Where ML is the method of choice, AI can take over repetitive tasks. Indeed, predictions about future cases can be derived from the analysis of existing data, even when the data situation changes. For this to work, it must be possible to model the underlying patterns or rules of the game. For example, it is relatively easy for a machine to learn, from previously translated texts, how to translate words and phrases, and in this way produce completely new sentences without errors. But analysing existing marketing texts does not allow a machine to learn how to write good, persuasive new marketing copy. One could say that machines can read the lines, but only humans can read between them. Especially in fields such as marketing, it is important to arouse unspoken expectations and garnish them with new plays on words, subtle allusions and so forth. This is where current technology reaches its limits.
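
As a concrete illustration of translation learned from existing data, the following Python sketch calls a pretrained neural translation model. It assumes the Hugging Face transformers library (with its tokenizer dependencies) and downloads a publicly available English-to-German model on first use; it is one possible way of invoking such a system, not a reference to any particular tool named in this article.

    # A minimal sketch: translating a brand-new sentence with a model that
    # has learned from large volumes of previously translated text.
    # Assumes the "transformers" library; the default model is fetched
    # automatically on first use.
    from transformers import pipeline

    translator = pipeline("translation_en_to_de")  # English-to-German
    result = translator("Machines can read the lines.")
    print(result[0]["translation_text"])

The model has never seen this exact sentence; it generalises from the patterns in its training data, which is precisely why it handles routine text well and creative copywriting poorly.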

But when it comes to the tasks that machines can do, they often do them much better or faster than humans. When evaluating MRI scans, systems that have been trained on tens of thousands of images and their corresponding findings are already superior to experienced physicians, who may have seen only a few dozen cases of rare conditions in their career. However, the machines have no understanding whatsoever of medical contexts and cannot explain their diagnoses. In this respect, they remain a tool that expands people’s capabilities and backs up their decisions. In the world of translation, Google Translate claims that it machine-translates 100 billion words every day. This would be impossible using human translators, just as it would be impossible to do without online search engines, which are based on information retrieval using AI. Here, AI is already part of our information society and helps to guide our destinies.

There are some limitations to what machines can do, but what they are able to do, they do better and faster than humans.

AI has great potential in office and administrative work. As previously mentioned, many of us now have to do a number of tasks alongside our other work, such as making appointments, bookkeeping, archiving and filing. This work often consumes a great deal of our productive energy. The same is true of reporting, documentation, taking the minutes of meetings, and so on. One might hope that intelligent technologies will relieve us of this burden in the near future and make many public administration processes faster and more transparent.

When it comes to manual labour, the same rule of thumb applies to the question of whether jobs can be automated. Repetitive, uniform processes are easier to learn than complex ones that require a great deal of knowledge and the ability to apply what has been learned. One example is autonomous driving. Driving on a motorway in the United States can be learned relatively easily with the help of camera images, geo-coordinates and the like. It is much harder to learn how to drive in a mountain village in southern Europe.

Robots can relieve workers when it comes to lifting heavy weights, overhead work, or bathing a sick or elderly person and putting them to bed. But from a technical standpoint, these kinds of mixed teams of humans and robots are much more complicated than fully automated systems. Health and safety has to be a key priority in factories where people are walking around while robots are operating. If a robot is passing a component to a human, it will not notice that the person cannot take it because they have been momentarily distracted, for example, by sneezing.

In the area of AI, there is still vast potential for development.

We can only touch upon the technological opportunities and hurdles at this point, but one thing is clear: there is vast potential for development and it would be advisable to explore this further. Such technologies also provide ample potential to achieve a more inclusive society, by supporting those who cannot – or can no longer – participate in working and social life because of cognitive or motor disabilities, or simply because of language barriers.

What Should We Do?

Previous technological and industrial revolutions have generally led to increased productivity, and this has always been accompanied by the question of distributive justice, though only between employees and employers. The rules of the game that can be used to create a balance – on the national level – and satisfy all sides to a certain degree are well known. Perhaps we should slowly adjust to a time in which we only work 20 hours a week and otherwise have time for our children, older people, new citizens, or for pursuing further education, fine arts, and so on. We have to be prepared for this. If, for many of us, the end of gainful employment comes faster than expected, we should have some ideas for new structures. In light of the above, the rule of thumb that lower-paid jobs are the first to be hit does not necessarily apply. In ten years’ time, it is more likely that a construction worker will still have a job than an HR manager.

Today, the welfare state is suffering more than ever from the fact that global corporations such as the AGFA big four (Amazon, Google, Facebook, Apple) divert their profits into tax havens. This phenomenon is caused by globalisation rather than digitalisation, but the virtual nature of their products and services makes such practices much easier. There is also another question of distribution, which could even be the decisive one: namely, who has the data? It is virtually impossible for smaller companies to gain critical mass and assert themselves in the face of the huge “data cartels”. There are, however, some welcome exceptions, such as the small Cologne-based company DeepL, which offers a machine translation tool that is qualitatively superior to Google Translate. Additionally, to all intents and purposes, we submit our data voluntarily every time we search for something online or click on the news on our phones. All these seemingly free services are actually paid for with our data. We need a debate about the data economy and potential regulatory measures.

We also need a public debate about how we want technology to be used, with less focus on what is technically possible. If we return to the example of the robot that helps caregivers with hard physical work, it could be that the robot lifts the bedridden person while the carer makes the bed, washes them, gives them their medication and asks them how they are feeling. Sooner or later it will be possible for robots to wash the person, give them their medication and converse with them. That may sound strange at first. But in light of the shortage of nurses and carers, and situations where one nurse has to monitor three corridors with 60 patients on a night shift, it is perhaps conceivable, and indeed the lesser evil, if a robot can give a patient something to drink, or clean up vomit, until a member of staff has time to do it. It is a question of priorities and of the affordability of alternatives. But above all, the discourse should be objective and not dominated by doom-mongers or self-appointed technical gurus.

Finally, we still need an answer to the question: “What may we hope?” If we approach this in a general rather than religious sense, I would say it largely depends on our mindset. Technologically advanced countries, such as Japan and Germany, are better placed to compete on the global stage than countries that still largely rely on manual labour. Despite this, in Germany one often encounters a general mistrust of new technology. This starts with the Germans’ love of manually changing car gears, something that is virtually unknown in countries like the US. In Germany, even a normal electric car – let alone a self-driving car – triggers fears of loss of control. Here, we must once again develop the inventive spirit of “participating and creating” that drove people like Gottlieb Daimler onwards, despite all kinds of resistance, both human and technical. Technology should expand our horizons, and no-one should ever be replaced by technology. However, those who steadfastly oppose the march of technology will find themselves, if not fully replaced, then significantly less relevant.

– translated from German –

-----

Dr. Aljoscha Burchardt is Senior Researcher and Lab Manager of the Language Technology Research Unit at the German Research Centre for Artificial Intelligence, as well as Deputy Chairman of the Berlin Scientific Society.

