Chilling: French scientists reveal that robots can influence human behavior by applying pressure

2018-08-16 15:15

Compiled by 小弈

(This article was automatically translated by 小弈.)

Original title: How Rude Humanoid Robots Can Mess With Your Head


The little humanoid robot's name is Meccanoid, and it is a scoundrel. The well-meaning human test subject asks the robot: If you were to make a friend, what would you want them to know?

"That I'm bored,"  Meccanoid says.

"我很无聊,"麦卡尼说。

Alright, let's start over. A new participant asks Meccanoid the same question, but now the robot is programmed to be nice.

What does this robot want the friend to know? "I already like him a lot," Meccanoid says. Much better.

Researchers in France have been exposing human subjects to nasty and pleasant humanoids for good reason: They're conducting research into how a robot's attitude affects a human's ability to do a task.

On Wednesday, they published their research in the journal Science Robotics, an issue that also includes research on how robots can pressure children into making certain decisions.

The pair of studies show how the development of advanced social robots is far outpacing our understanding of how they're going to make us feel.

First, back to Meccanoid. The participants began with an exercise where they had to identify the color in which a word is printed, as opposed to the word itself. So for instance the word "blue" printed in green ink.

The temptation may be to blurt out "blue," when you need to say green. This is known as a Stroop task.
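
To make the task concrete, here is a minimal sketch of a single Stroop trial in Python. The color list, console-based prompt, and scoring are illustrative assumptions; the article does not describe the researchers' actual materials or software.

```python
import random
import time

COLORS = ["red", "green", "blue", "yellow"]

def stroop_trial():
    """One Stroop trial: a color word is shown in a (possibly
    mismatched) ink color; the correct response is the ink color,
    not the word itself."""
    word = random.choice(COLORS)   # the printed word, e.g. "blue"
    ink = random.choice(COLORS)    # the ink color; the trial is incongruent if ink != word
    start = time.monotonic()
    answer = input(f'The word "{word.upper()}" appears in {ink} ink. Name the ink color: ')
    reaction_time = time.monotonic() - start
    return answer.strip().lower() == ink, reaction_time

if __name__ == "__main__":
    correct, rt = stroop_trial()
    print(f"correct={correct}, reaction_time={rt:.2f}s")
```

Accuracy and response time on the incongruent trials are the standard performance measures, which is where an effect of a watching robot would show up in the data.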

The participants initially did the test on their own, and then had a little conversation with Meccanoid—questions volleyed back and forth between the bot and the participant.

But each participant only got to experience one of Meccanoid's mercurial moods. Then they returned to the Stroop testing while the robot watched.  

"What we've seen is that in the presence of the bad robot, the participants improved their performance significantly compared to the participants in the presence of the good robot, says study lead author Nicolas Spatola, a psychologist at the Université Clermont Auvergne in France.

So what's going on here? "When we were doing the experiment, we saw how a person could be emotionally impacted by the robot," says Spatola. "The bad robot is seen as more threatening."

Despite the fact that this is a nonsentient robot, its human beholder seems to actually care what and how it thinks. Well, kinda. "Because the robot is bad, you will tend to monitor its behavior and its movement more deeply because he's more unpredictable," says Spatola.

That is, the participants who tangled with the bad robot were more alert, which may have made them better at the test.

In the second study published Wednesday, the robots were much less ornery.

Three small humanoids, the Nao model from SoftBank Robotics, sat around a table (adorably, the machines sat on booster seats when interacting with adults to boost them up to the same level as the big kids).

They looked at a screen that showed a single vertical line on the left, and three vertical lines of various lengths on the right. Participants had to choose which of those three lines matched the length of the one on the left.

But first, their robot peers had to choose. The autonomous machines, which ran on custom software, all gave the wrong answer two thirds of the time, but that didn't faze the adult participants.

Compared with a group of participants who did the same experiment with human adults, rather than robots, giving the wrong answers, these participants conformed more to their fellow humans than to the machines.

Children, on the other hand, followed the robots down the path of incorrectness. Fully three quarters of their answers matched the robots' incorrect answers. In other words, the researchers say, the kids gave in to peer pressure.
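
As a rough illustration of how such conformity can be quantified, here is a hypothetical Python sketch in the same spirit: the robots answer first and are wrong on two thirds of trials, and the conformity rate is the share of those trials on which the child repeats the robots' wrong answer. The trial count and the probabilistic answer model are assumptions for illustration, not the study's protocol or code.

```python
import random

OPTIONS = ["A", "B", "C"]  # the three comparison lines on the screen

def conformity_rate(n_trials=30, p_follow=0.75):
    """Simulate an Asch-style block: robots answer before the child
    and give a wrong answer on two thirds of trials; the simulated
    child repeats that wrong answer with probability p_follow."""
    followed, robot_wrong_trials = 0, 0
    for _ in range(n_trials):
        correct = random.choice(OPTIONS)
        if random.random() < 2 / 3:  # robots answer incorrectly on this trial
            robot_answer = random.choice([o for o in OPTIONS if o != correct])
            robot_wrong_trials += 1
            child_answer = robot_answer if random.random() < p_follow else correct
            if child_answer == robot_answer:
                followed += 1
    return followed / robot_wrong_trials if robot_wrong_trials else 0.0

if __name__ == "__main__":
    print(f"observed conformity rate ~ {conformity_rate():.2f}")
```

With p_follow set to 0.75, the simulated rate lands near the roughly three quarters of answers the children copied from the robots.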

Children, after all, are prone to suspend disbelief, says Bielefeld University's Anna-Lisa Vollmer, lead author on the study.

"We know something similar is going on with robots: rather than seeing a robot as a machine consisting of electronics and plastic, they see a social character," she says. "This might explain why they succumb to peer pressure by the robots."

Is this really peer pressure, though, if the kids' peers are robots?

This is where things get tricky. "I think that makes a big assumption about the children's reactions, because it doesn't necessarily have to have that social aspect of peer pressure," says Julie Carpenter, who studies human-robot interaction, but who wasn't involved in these studies.

"Children and adults can over-rely on technology."  Maybe the kids didn't think of the humanoids as peers, but simply as useful technological tools.

Still, both this robot and the mean/nice robots are eliciting a reaction from the human subjects. Which is what's so interesting and daunting about a near future in which we interface with machines, particularly humanoids, more and more.

What these studies suggest is that humanoid robots can manipulate us in complex ways. And scientists are just barely beginning to understand those dynamics.

Consider a super smart robotic doll that a kid develops an intense bond with. Great, fine, kids have been loving dolls for millennia. But what if that robot doll starts to exploit that bond by, say, trying to convince the kid to spend $19.99 to upgrade its software to be even smarter and even more fun?

Machines don't just do things out of the blue. Someone at some point has programmed them to behave a certain way, whether that's picking the wrong line on a screen or just being mean or bilking unsuspecting kids. 

 "What you have to ask yourself is, what are the robot's goals?"  says Carpenter.  "Are they aligned with my own?"

Sophisticated companion robots have arrived. But we need to be very careful about how we interact with them.
