Will Robots Take My Job? The Ethics of Our Automated Future


Recommendation

Robots are already changing life and work. Each new development, from automated care in nursing homes to self-driving taxis, sex robots, and drone warfare, presents new moral challenges. How much privacy should people surrender? How many human workers should robots replace? Should machines perform surgery, process loan applications, supervise children, and fight wars on our behalf? Belgian philosopher Mark Coeckelbergh examines these and other questions, concluding that the real question is this: What kind of future do we want for our children?

Take-Aways

  • Robots are already changing the world; humans must consider whether those changes are desirable.
  • Sooner or later, robots will transform nearly every occupation.
  • Robotic home companions and personal assistants present new issues regarding privacy and deception.
  • Using robots in medical settings redefines quality in health care.
  • Self-driving cars and other autonomous robots require programming for ethical decision-making.
  • As robots become more lifelike in appearance, speech, and behavior, there may be ethical considerations regarding their treatment.
  • Military robots reduce the costs and risks of warfare. Some think that might not be a good thing.
  • Robot ethics are human ethics.

Summary

Robots are already changing the world; humans must consider whether those changes are desirable.

The dangers robots present to people are not primarily science fiction scenarios where sentient machines revolt against humanity. Robots are changing the world, but in more mundane ways: altering how people work, travel, interact with one another, and more. And the issues they create are more subtle: job losses, negative psychological effects, and invasions of privacy. The ethics of their use is, therefore, of the utmost importance. Used incorrectly, this technology can deepen economic disparities, harm groups with special vulnerabilities, and lead to the loss of human life and dignity.

“Some robots may be dangerous indeed — not because they will try to kill or seduce you…but usually for more mundane reasons such as because they may take your job, may deceive you into thinking they are a person, and can cause accidents when you use them as a taxi.”

Inquiries into the ethical dilemmas of robotics must include the question of who holds responsibility for the problematic effects of this new technology. An automaton can make decisions, but it is still just a tool that cannot bear the blame when it causes harm. Who, then, is responsible? The user? The manufacturer? The programmer? The marketer? Some regulatory agency? Robots present new ethical challenges because, unlike earlier technologies that were thought of strictly as tools or machines, these promise to be companions, caregivers, pets, co-workers, supervisors, surgeons, and soldiers. As their capabilities grow more humanlike, society will face challenges to its understanding of what makes us human, and them not.

Sooner or later, robots will transform nearly every occupation.

People once envisioned machines as the means by which humanity would enjoy a future free of drudgery; instead, machines have created new kinds of servitude for many people. As Marx observed in his critique of capitalism, technological advances generally benefit a fortunate few. Machines make these already wealthy individuals richer while reducing the masses’ autonomy: Workers serve the machines, which dictate the nature and pace of their work.

“Is there still a place for humans in the robotic factories of the future, and if so, what will be their place, and under what conditions will they work?”

Modern industrial sites involve far more human-robot interaction than the factories of Marx’s time did, bringing new challenges and concerns about worker welfare:

  1. Safety — Robotic equipment is often massive, carries heavy payloads, and may move in rapid and unpredictable ways. However, slowing machines down (for the safety of nearby humans) diminishes their productivity.
  2. Security — Hacked operating systems can cause dangerous malfunctions.
  3. Privacy and surveillance — What kinds of monitoring and collection of worker data should be considered acceptable? Will workers lose personal autonomy as a consequence of constant monitoring by machines?

The new industrial revolution involves the automation not only of repetitive mechanical tasks but also of complex mental work. Many of the jobs at greatest risk of automation in the near future, such as customer service and administrative assistance, involve a great deal of routine work, but newer technology is bringing robots into less routine work as well, from medical diagnostics to loan processing.

Even in professions where robots do not eliminate jobs, human work will be different. Ideally, if robots handle more mundane tasks, humans can engage in more stimulating, creative work. But it’s equally possible that, as in the earlier industrial revolutions, many people will find themselves in high-pressure and low-meaning occupations, servicing or serving machines. Change will come at different times to different groups and places, but humanity can mitigate any negative effects through planning and forward-looking policies. Perhaps some jobs like care work, teaching, and artistic endeavors should remain solidly in human hands even if the technology to automate these jobs exists. Education will be essential to prepare the workforce of the future. It may be time to consider restructuring the socioeconomic framework through such measures as universal basic income to prevent widespread suffering.

Utopian visions of a post-work society have so far proven mirages, but with such transformations looming, it is timely to ponder what gives human life meaning and purpose: not only productive labor but also creative pursuits, service to others, leisure and recreation, and time with loved ones. What can humanity do to ensure that the good life, however one defines it, is available to more than just the fortunate few?

Robotic home companions and personal assistants present new issues regarding privacy and deception.

Robotic personal assistants respond only when called by name, but they are always listening, and the data they collect may accumulate on a server somewhere, where it can be sold, used, and hacked. In the absence of legal protections, the scenario of a surveillance state is not far-fetched. As cultural artifacts, robots designed to resemble people or speak with humanlike voices may also perpetuate problematic racial, gender, or other stereotypes.

Using robots to provide companionship and care for children, the elderly, or people with disabilities raises numerous concerns related to deception and dignity. The person being cared for may not understand that the companion is neither human nor capable of empathy. Some argue that what matters is believing you are loved and cared for, but most people want real (human) connections and see the distinction as a vital one. Robotic child care presents further issues. When and how should a robot restrain a child? How will nonhuman caregivers affect children’s social development, given that robotic playmates cannot teach true empathy or offer authentic reciprocal relationships? Who is to blame if a robot inadvertently harms a child?

“The discussion about sex robots is a good example of how personal robots, as social robots, make us reflect on human relationships. It makes us think about what good human relationships are.”

Sex robots illustrate the flip side of the deception question: Their imitation of human behavior is understood as a performance for which the user willingly suspends disbelief. And yet the potential for harm remains. Such robots may leave people unable to handle the vicissitudes of human romantic relationships, or they may normalize the idea of a sexual partner as a “thing” to use and set aside at will.

Using robots in medical settings redefines quality in health care.

Robots could play important roles in health care. Already, they help deliver medications, facilitate telehealth, and assist with complex surgeries. Health care robotics raises a number of ethical questions, including:

  1. How will people’s privacy be protected? How much surveillance is acceptable? What kinds of data are collected? Who has access to that data?
  2. How many human workers will be displaced by machines, and will certain demographic groups be disproportionately affected?
  3. How will robots change the way care is provided? How will the roles of various care providers be affected? Will patients and their families experience diminished levels of human contact and warmth?
  4. Who will take responsibility when something goes wrong?

A coherent ethic for the use of robots in medicine should rest on a more general ethic of quality in human life. Technology is desirable when it supports and extends human capabilities, but not when it infantilizes people or robs them of their dignity. People must develop concrete standards of good care, both human and robotic, that take patients’ physical, emotional, and relational needs into account, including their need for human contact. These standards must also bear in mind providers’ need for meaningful engagement and support loved ones’ involvement in patient care. A patient’s financial means should not be a dominant consideration.

Self-driving cars and other autonomous robots require programming for ethical decision-making.

Most people agree that robots, which lack consciousness and emotion, are not capable of true morality. Nonetheless, they are being programmed to make decisions, and some kind of ethical framework can be incorporated into that programming. As robotic autonomy increases, the question of who is responsible when things go wrong becomes more important and more complicated. Who was to blame when a self-driving car killed a pedestrian in Arizona in 2018, for example? There was a driver behind the wheel, but the car was in autonomous mode, and the pedestrian was not crossing at a designated crosswalk. Should the driver have reacted faster? Should the pedestrian have proceeded with more caution? Should the vehicle’s manufacturer, or its engineers, have built more safety features into the car? Should city officials have allowed the car on the road in the first place?

The number of factors that can contribute to a robot’s failure to act as intended is overwhelming, and the number of people involved in its development and deployment is usually huge. It isn’t always possible to predict how or when a failure might occur. Still, efforts to build greater safety into machines like self-driving vehicles should incorporate input from everyone a failure could affect, directly or indirectly; developers might, for example, ask taxi drivers, pedestrians, and cyclists to share their concerns and insights as part of the design process. Regulations could also help maintain transparency. Taking a wider view, if people wish to enjoy the benefits autonomous robots offer, they must be prepared to incorporate those robots into community planning and policymaking, and to take responsibility for the inevitable tragedies.

As robots become more lifelike in appearance, speech, and behavior, there may be ethical considerations regarding their treatment.

The question of how people ought to treat lifelike machines is complicated. It is easy to dismiss the idea that robots have any direct moral standing, but doing so implies that empathizing with them, a natural human response, is a mistake. A 2015 video of employees at a technology company kicking a robotic dog made many viewers uncomfortable, for instance. It can also be argued that mistreating androids is wrong because such actions degrade the abuser’s own moral character or encourage similar behavior toward fellow humans. Some feel it is unethical to create robots that could masquerade as humans, deceiving the people around them.

“Does it suffice to say that robots are ‘just machines,’ or is there a way to take seriously and at least better understand what is going on in these human-robot interactions?”

One possible answer is to ground duties toward a robot not in the robot’s own consciousness but in the relationship people have established with it. This is analogous to the way you feel a different duty to an animal you keep as a pet than to one you raise for meat. Just as attitudes about animal rights have evolved, humans’ feelings toward the machines they rely on may evolve as well.

Military robots reduce the costs and risks of warfare. Some think that might not be a good thing.

Technology has always been a factor in warfare, but the rise of fully or partially automated weapons systems raises new ethical dilemmas. Opponents of automated warfare argue that reducing war’s human costs could make it easier for politicians to justify military action, and they note that noncombatants often become collateral casualties. A related issue is the diffusion of responsibility, especially for unintended harm. Some say that killing is easy and impersonal for drone pilots, but drone operators deny that the experience is game-like. They say they have a far clearer view of what is happening on the ground than users of other remote weapons systems do, so clear that many drone pilots experience high levels of stress and even psychological trauma.

“Killing human beings, if ever justified, should be left to humans.”

The ultimate ethical question killer robots raise is whether the use of fully automated weapons is justified under any circumstances. To most people, a robot that can decide to kill a human being is a morally repugnant concept, given that no machine can understand what death means to a living being. Increasingly, international law is taking this position.

Robot ethics are human ethics.

The philosophical questions raised by robots, with their growing capabilities and increasingly humanlike qualities, are questions about what it means to be human.

“Robots function as mirrors that show and reflect us — that is, the human being in all its facets, and with all of its problems and challenges, including ethical ones.”

Transhumanists suggest that humanity should embrace a future cyborg identity, look for ways to maximize the quality of life within it, and hope that enhanced intelligence will let people work out superior ethical solutions. Posthumanists envision a world in which humans, robots, and animals cooperate, celebrating hybridity rather than defending barriers between categories of beings and seeking to perfect the ecology and economy of a “technologically mediated planetary environment.” A third approach emphasizes restoring the health of the planet’s ecology by decentering humanistic preoccupations and using technology to seek a better future for all life on Earth, with sustainability as the guiding ethical principle for those endeavors.

Examining these perspectives clarifies some of the bigger-picture questions of robot ethics:

  • Are humans building robots to subdue the Earth, or to sustain it?
  • Must human relationships with robots be based on human patterns of domination and submission?
  • Will technology let people escape the politics of power?
  • Are human-centered values the only ones worth considering?

Humans have to think through the questions robots’ presence raises; they cannot leave that work to the machines. When people build robots, they are responsible for the robots’ impact on the world.

About the Author

Mark Coeckelbergh is a Belgian philosopher of technology, professor of philosophy of media and technology at the University of Vienna, and author of several books including New Romantic Cyborgs.