What are the Social Implications of Humanoid Robots?

Sophia Robot

“I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual, characteristics such as the shape of the human body; it appears to me quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers.”

– Alan Turing

Science fiction has been imagining what it would be like to interact with humanoid robots for decades, if not longer. However, the most sophisticated robots today are a far cry from their fictional counterparts. Sophia, the Hanson Robotics humanoid robot, has perhaps done the most to popularise this type. Even more lifelike examples exist, such as Kokoro Japan’s Nadine.

Even with their limited capabilities (mostly pre-programmed responses and minimal facial movements), the trend shows that these robots are becoming more human-like every year. It might not happen within five years, but given the exponential nature of technological progress, the Turing test will most probably be passed in the next few decades by a functional, mobile humanoid robot.

Most recently this idea was brought to the US through HBO’s Westworld TV series, a remake of the 1973 film of the same name. The highly advanced humanoid robots of the series remained, for the most part, in an isolated location, interacting only with people who chose to visit the ‘robot park’ where they were contained. But what we see today is that social robots are being used in more and more applications around us (healthcare, elderly care, entertainment, research, etc.). And robot assistants, though less capable, are being accepted into our lives at an even faster rate: think of Google’s or Amazon’s home assistants, Woebot and other AI digital psychologists on phones, or robots for our homes like the Roomba and Grillbot. So it is reasonable to assume that the humanoid robots of the future will not live in an isolated robot park but will be integrated into our lives.

Westworld

When this happens, what will this mean for society, jobs, and us?

Sci-fi stories from Hollywood, like Westworld, have brought a number of social issues surrounding humanlike robots to the forefront over the last few years.

One of the most evident issues, one actually tied to most new technologies, is the problem of relying on these robots for tasks that we were responsible for before. Think of driving a car in 50 years’ time: with self-driving cars set to become the dominant form of transport in a decade or two, it is unlikely that most people will even know how to drive in the future.

The same can be said for cooking, cleaning, and certain tasks involved in caring for the elderly or sick. Though it is easy to argue that nothing will really be lost if we ‘lose’ the know-how of cleaning an apartment, the same cannot be said of tending to a loved one, or of aspects of raising a child. We can already see some of the negative side-effects of using our iPads and tablets as partial babysitters today.

But these are surface issues, as technology has always changed human tasks wherever it has been adopted. What is particularly problematic about humanoid robots is that they will look and act like us, creating a unique mirror in which to examine our own behaviour.

The problem of cruelty towards humanoid robots

If you were to kick, drop, and smash apart your robotic vacuum or a similar household robot today, you would be justified in not feeling any real ethical anguish (apart from the monetary pain of replacing it), as these objects contain as much self-awareness and capacity for pain as a toaster, car, or computer. Westworld and other sci-fi stories add a ‘human’ element to their robots, foreshadowing the scenario in which our humanoid robots of the future are capable of self-awareness, feeling, expression, and pain. In that case, we would owe the robots moral obligations not to treat them as toasters.

“insofar as they had inner lives like our own, we would have duties not to kill them”

– Colin Gavaghan and Mike King

However, a humanoid robot does not necessarily need to ‘feel’ anything to perform the actions we would want from it: cooking, cleaning, running errands, caring, teaching, etc. It is just as likely that the humanoid robots of the future will share the same deficit in capacity as today’s robot vacuums. This makes for less interesting TV and film, but it points to a more interesting problem: there would still be a distinct difference between committing the same act of cruelty against an unfeeling humanoid robot and against a robotic vacuum.

The fundamental problem would not be for the robot but for individuals and society as a whole.

There are two key problems that come up:

  1. Committing cruelty against humanoid robots can desensitize the perpetrator, increasing the chance that they will inflict the same cruelty on another human. This is the same argument Immanuel Kant made when discussing animal abuse: Kant had odd views about animals, seeing them as mere things devoid of moral value, but he insisted on their proper treatment because of the implications for how we treat one another: “For he who is cruel to animals becomes hard also in his dealings with men.”
  2. If there are bystanders who witness this cruelty, whether or not they know the robot is a robot, some form of psychological harm may be inflicted on them. As inherently empathic creatures, we feel a comparable level of pain or discomfort when we witness the physical or mental pain of others we connect with (humans, animals, and sufficiently human-looking robots).

These issues are of course exacerbated when it comes to sex bots, as another layer of complexity is added to the human-robot interaction. (This was covered in a previous blog post, if you are curious.)

Overall, the idea of humanoid robots has been popularised more than most technological advances because of the interesting and complicated ethical issues that arise from their existence. Much of the discussion concerns harm caused to these future entities and revolves around self-awareness, consciousness, and legal questions. However, even without the capacity for self-awareness and pain, harm inflicted upon humanoid robots is problematic for society and for individuals. This differs from the ‘harm’ caused to other robotic objects like toasters or vacuums because of the human appearance these robots will have. Essentially, even absent any real self-awareness or consciousness in the machine, to commit a negative act upon a machine that resembles a human stains the actor. Even if we experience tremendous boons from the labour these robots will be able to accomplish, we should be cautious going forward with the full application of this powerful technology.

11th blog post of the Automated Podcast. Check out the podcast episodes and other blogs at https://automatedpodcast.org/

The Weekly Podcast Exploring the Impact of Technology on Jobs. Website: Automatedpodcast.org / e-mail: info@automatedpodcast.org