Here is a video of what, if only humans were involved, would be considered a case of serious abuse and be met with counselling for all parties. The video shows a robot trying to evade a group of children abusing it. It is part of two projects: “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda of ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Sachie Yamada of Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, both presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.
Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and that violence against it is therefore not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse this tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, a quality particular to conscious organisms.
The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity in the language of machines [think the quantified-self movement]. If that is the case, then why not do the same from the other side and start reifying machines in the language of humans – i.e. anthropomorphise and animate them?
2 Comments
I like how you point out that society has a clear lack of ethical/moral concern about the abuse of machines yet absolutely fears the opposite; it’s a good case for some kind of regulation to be put in place for the treatment of socially aware machines. However, I’d like to point out: isn’t it entirely simpler for humans to entangle with machines than the opposite? Wouldn’t this be because machines were created by humans in the first place to complete a set of tasks, and are limited to the resources humans have available? Of course, if this were the future and humans had the knowledge and resources to create a robot that could simulate a human in almost every way, it would be a different story, but currently isn’t that still an idea present only in a galaxy far, far away?
Consider the Google Car, driving along a road with no human inside it [maybe it is going to pick up a human from somewhere]. Is it permissible to engage in road rage against it, just because there is no human inside? If your answer is in the negative, then you are effectively arguing that violence against AI-infused machines is immoral. If your answer is in the positive, and you think road rage against a self-driving car is alright, then you have to explain why the presence of a human on the receiving end of road rage should be the qualifier for the ethics of the action in this scenario.