What would you do if you saw a self-driving car hit a person? In Robot Ethics, Mark Coeckelbergh, Professor of Philosophy of Media and Technology at the University of Vienna, posits a trolley problem for 2022: should the car continue its course and kill five pedestrians, or divert its course and kill one?
In the chapter presented here, Coeckelbergh examines how humans have conceptualized robots as a broader frame for the discussion, and probes how self-driving cars would handle lethal traffic situations, as well as whether that is even a worthwhile question.
In the 2004 US science-fiction film I, Robot, humanoid robots serve humanity. Yet not all is going well. After an accident, a man is rescued from a sinking car by a robot, but a twelve-year-old girl is not saved. The robot calculated that the man had a higher chance of survival; a human might have made a different choice. Later in the film, the robots try to take power from humans. The robots are controlled by an artificial intelligence (AI), VIKI, which has decided that restraining human behavior and killing some humans will ensure the survival of humanity. The film illustrates the fear that humanoid robots and AI are taking over the world. It also points to the hypothetical ethical dilemmas that would arise should robots and AI reach general intelligence. But is this what robot ethics is, and should be, about?
1. Are the Robots Coming, or Are They Already Here?
Usually, when people think about robots, the first image that comes to mind is a highly intelligent, humanlike robot. Often that image is derived from science fiction, where we find robots that look and behave more or less like humans. Many narratives warn about robots that take over; the fear is that they are no longer our servants but instead make us into their slaves. The very term "robot" means "forced labor" in Czech and appears in Karel Čapek's play R.U.R., staged in Prague in 1921, just over a hundred years ago. The play stands in a long history of stories about humanlike rebelling machines, from Mary Shelley's Frankenstein to films such as 2001: A Space Odyssey, Terminator, Blade Runner, and I, Robot. In the public imagination, robots are frequently objects of fear and fascination at the same time. We are afraid that they will take over, but at the same time it is exciting to think about creating an artificial being that is like us. Part of our romantic heritage, robots are projections of our dreams and nightmares about creating an artificial other.
At first, these robots are mainly scary; they are monsters, uncanny. But at the beginning of the twenty-first century, a different image of the robot emerged in the West: the robot as companion, friend, and perhaps even partner. The idea now is that robots should not be confined to industrial factories or remote planets in space. In the contemporary imagination, they are liberated from their dirty slave work and enter the home as pleasant, helpful, and sometimes sexy social partners you can talk to. In some films they still ultimately rebel (think of Ex Machina, for example), but generally they become what robot designers call "social robots." They are designed for "natural" human-robot interaction, that is, interaction in the ways we are used to interacting with other humans or pets. They are designed to be not scary or monstrous but cute, helpful, entertaining, funny, and seductive.
This brings us to real life. The robots are not coming; they are already here. But they are not quite like the robots we meet in science fiction. They are not like Frankenstein's monster or the Terminator. They are industrial robots and, sometimes, "social robots." The latter are not as intelligent as humans or their science-fiction kin, though, and often do not have a human shape. Even sex robots are not as smart or conversationally capable as the robot depicted in Ex Machina. In spite of recent developments in AI, most robots are not humanlike in any sense. That said, robots are here, and they are here to stay. They are more intelligent and more capable of autonomous functioning than before, and there are more real-world applications. Robots are used not only in industry but also in health care, transportation, and home assistance.
Often this makes the lives of humans easier. Yet there are problems too. Some robots may indeed be dangerous, not because they will try to kill or seduce you (although "killer drones" and sex robots are also on the menu of robot ethics), but usually for more mundane reasons: they may take your job, may deceive you into thinking that they are a person, and can cause accidents when you use them as a taxi. Such fears are not science fiction; they concern the near future. More generally, ever since nuclear, digital, and other technologies began to affect our lives and planet, there has been a growing awareness and recognition that technologies are making fundamental changes to our lives, societies, and environment, and that we had therefore better think more, and more critically, about their use and development. There is a sense of urgency: we had better understand and evaluate technologies now, before it is too late, that is, before they have impacts nobody wants. This argument can also be made for the development and use of robotics: let us consider the ethical issues raised by robots and their use at the stage of development rather than after the fact.
2. Self-Driving Cars, Moral Agency, and Responsibility
Imagine a self-driving car driving at high speed through a narrow lane. Children are playing in the street. The car has two options: either it avoids the children and drives into a wall, probably killing its sole human passenger, or it continues on its path and brakes, but probably too late to save the children's lives. What should the car do? What will cars do? How should the car be programmed?
This thought experiment is an example of a so-called trolley dilemma. A runaway trolley is about to run over five people tied to a track. You are standing by the track and can pull a lever that redirects the trolley onto another track, where one person is tied up. Do you pull the lever? If you do nothing, five people will be killed. If you pull the lever, one person will be killed. This type of dilemma is often used to make people think about what are perceived as the moral dilemmas raised by self-driving cars. The idea is that data about people's choices in such dilemmas could then help machines decide.
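To see what this framing implies computationally, here is a minimal sketch in Python of the naive utilitarian decision rule embedded in the dilemma. Everything in it (the Outcome type, the action names, and the idea of scoring outcomes purely by expected casualties) is an illustrative assumption for discussion, not a method proposed in the book or used in real driving systems.

    # A minimal sketch of the utilitarian rule implied by the trolley framing.
    # The types, names, and casualty-count scoring are illustrative assumptions;
    # production autonomous-driving stacks do not decide this way.
    from dataclasses import dataclass

    @dataclass
    class Outcome:
        action: str            # hypothetical maneuver, e.g., "stay_course"
        expected_deaths: int   # hypothetical casualty estimate for that maneuver

    def choose_action(outcomes: list[Outcome]) -> Outcome:
        # Pure consequentialism: minimize expected deaths, ignoring who dies,
        # duties, or the act/omission distinction.
        return min(outcomes, key=lambda o: o.expected_deaths)

    dilemma = [Outcome("stay_course", 5), Outcome("divert", 1)]
    print(choose_action(dilemma).action)  # prints "divert"

Even this toy rule makes the objection vivid: the whole ethical question is compressed into a single integer to be minimized, and considerations such as the difference between killing and letting die have nowhere to appear in the code.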
For instance, the Moral Machine online platform has gathered millions of decisions from users worldwide about their moral preferences in cases where a driver must choose "the lesser of two evils." People were asked whether a self-driving car should prioritize humans over pets, passengers over pedestrians, women over men, and so on. Interestingly, there are cross-cultural differences in the choices made. Some cultures, such as Japan and China, were less likely to spare the young over the old, whereas others, such as the United Kingdom and the United States, were more likely to spare the young. The experiment thus not only offers a way of approaching the ethics of machines but also raises the more general question of how to take cultural differences into account in robotics and automation.
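As a rough illustration of how such crowd-sourced judgments might be aggregated by country, consider the following sketch. The record format, country codes, and values are invented for illustration; the actual Moral Machine dataset and its published statistical analysis are far more elaborate.

    # Toy aggregation of Moral Machine-style responses by country.
    # The data layout and values are invented for illustration only.
    from collections import defaultdict

    # Each record: (country, dimension, option chosen in a forced binary scenario).
    responses = [
        ("US", "age", "spare_young"), ("US", "age", "spare_young"),
        ("US", "age", "spare_old"),
        ("JP", "age", "spare_old"), ("JP", "age", "spare_old"),
        ("JP", "age", "spare_young"),
    ]

    def preference_shares(records):
        counts = defaultdict(lambda: defaultdict(int))
        for country, dimension, choice in records:
            counts[(country, dimension)][choice] += 1
        # Share of respondents picking each option per (country, dimension).
        return {key: {c: n / sum(opts.values()) for c, n in opts.items()}
                for key, opts in counts.items()}

    print(preference_shares(responses))
    # e.g., US respondents leaning toward sparing the young, JP toward the old

The sketch also shows why "letting the data decide" is itself an ethical choice: whatever aggregation rule we pick (majority share, weighting across countries, and so on) encodes a normative stance.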
Figure 3 shows an example of a trolley dilemma situation: should the car continue its course and kill five pedestrians, or divert its course and kill one? Applying the trolley dilemma to the case of self-driving cars may not be the best way of thinking about their ethics: luckily, we rarely encounter such situations in traffic; the real challenges may be more complicated and not involve binary choices; and this problem definition reflects a specific normative approach to ethics (consequentialism, and in particular utilitarianism). There is discussion in the literature about the extent to which trolley dilemmas represent the actual ethical challenges. Nevertheless, trolley dilemmas are often used to illustrate the idea that when robots become more autonomous, we have to think about whether or not to give them some kind of morality (if that can be avoided at all), and if so, what kind of morality. Moreover, autonomous robots raise questions regarding moral responsibility. Consider the self-driving car again.
In March 2018, a self-driving Uber car killed a pedestrian in Tempe, Arizona. There was an operator in the car, but at the time of the accident the car was in autonomous mode. The pedestrian was walking outside the crosswalk. The Volvo SUV did not slow down as it approached the woman. This is not the only fatal crash on record. In 2016, for instance, a Tesla Model S in autopilot mode failed to detect a large truck and trailer crossing the highway and hit the trailer, killing the Tesla driver. To many observers, such accidents show not only the limitations of present-day technological development (currently it does not look as if the cars are ready to participate in traffic) and the need for regulation; they also raise challenges with regard to the attribution of responsibility. Consider the Uber case. Who is responsible for the accident? The car cannot take responsibility. But the human parties involved can all potentially be held responsible: the company Uber, which deployed a car that was not yet ready for the road; the car manufacturer Volvo, which failed to develop a safe car; the operator in the car, who did not react in time to stop the vehicle; the pedestrian, who was not walking inside the crosswalk; and the regulators (e.g., the state of Arizona) that allowed the car to be tested on the road. How are we to attribute and distribute responsibility, given that the car was driving autonomously and so many parties were involved? How are we to attribute responsibility in all kinds of autonomous-robot cases, and how are we to deal with this issue as a profession (e.g., engineers), as companies, and as a society, preferably proactively, before accidents happen?
3. Some Questions Concerning Autonomous Robots
As the Uber accident illustrates, self-driving cars are not entirely science fiction. They are being tested on the road, and car manufacturers are developing them. Tesla, BMW, and Mercedes, for example, are already testing autonomous vehicles. Many of these cars are not fully autonomous yet, but things are moving in that direction. And cars are not the only autonomous, intelligent robots around. Consider again autonomous robots in homes and hospitals.
What if they harm people? How can this be avoided? Should they actively protect humans from harm? What if they have to make ethical choices? Do they have the capacity to make such choices? Moreover, some robots are developed in order to kill (see chapter 7 on military robots). If they choose their targets autonomously, could they do so in an ethical way (assuming, for the sake of argument, that we allow such robots to kill at all)? What kind of ethics should they use? Can robots have an ethics at all? With regard to autonomous robots in general, the question is whether they need some kind of morality, and whether this is even possible (whether we can, and should, have "moral machines"). Can they have moral agency? What is moral agency? And can robots be responsible? Who or what is, and should be, responsible if something goes wrong?
Adapted from Robot Ethics by Mark Coeckelbergh. Copyright 2022. Used with permission from The MIT Press.
Translated by 数字化转型网 (www.szhzxw.cn). Author: Mark Coeckelbergh; editing/translation: 默然, 数字化转型网.

