This is Episode 2 of Naive and Dangerous, the podcast series about emergent media I am recording together with my colleague Dr Chris Moore. In this episode we discuss the notion of the cyborg and the tension between being a cyborg and being a human. We start by unpacking the various meanings injected into the concept of the cyborg, using recent movies such as Alita: Battle Angel and Ghost in the Shell as a starting point. As is our habit, we engage in extensive speculative analysis of the cyborg trope, from contemporary cinema to cyberpunk, early science fiction imaginaries of robots, the assembly line, and ancient mythology. In the process we develop a definition of the cyborg/human and manage to have a lot of fun. Have a listen.
Here is a video of what, if there were only humans involved, would be considered a case of serious abuse and be met with counselling for all parties involved. The video is of a robot trying to evade a group of children abusing it. It is part of two projects titled “Escaping from Children’s Abuse of Social Robots,” by Dražen Brščić, Hiroyuki Kidokoro, Yoshitaka Suehiro, and Takayuki Kanda from ATR Intelligent Robotics and Communication Laboratories and Osaka University, and “Why Do Children Abuse Robots?”, by Tatsuya Nomura, Takayuki Uratani, Kazutaka Matsumoto, Takayuki Kanda, Hiroyoshi Kidokoro, Yoshitaka Suehiro, and Sachie Yamada from Ryukoku University, ATR Intelligent Robotics and Communication Laboratories, and Tokai University, presented at the 2015 ACM/IEEE International Conference on Human-Robot Interaction.
Contrary to the moral panic surrounding intelligent robots and violence, symbolized by the Terminator trope, the challenge is not how to avoid an apocalypse spearheaded by AI killer-robots, but how to protect robots from being brutalized by humans, and particularly by children. This is such an obvious issue once you start thinking about it. You have a confluence of Luddism [rage against the machines] in all its technophobic varieties – from the economic [robots are taking our jobs] to the quasi-religious [robots are inhuman and alien] – with the conviction that ‘this is just a machine’ and therefore violence against it is not immoral. The thing about robots, and all machines, is that they are tropic – instead of intent they could be said to have tropisms, which is to say purpose-driven sets of reactions to stimuli. AI-infused robots would naturally eclipse the tropic limitation by virtue of being able to produce seemingly random reactions to stimuli, which is a quality particular to conscious organisms.
The moral panic is produced by this transgression of the machinic into the human. Metaphorically, it can be illustrated by the horror of discovering that a machine has human organs, or human feelings, which is the premise of the Ghost in the Shell films. So far so good, but the problem is that the other side of this vector goes full steam ahead as the human transgresses into the machinic. As humans become more and more enmeshed and entangled in close-body digital augmentation-nets [think FitBit], they naturally start reifying their humanity with the language of machines [think the quantified self movement]. If that is the case, then why not do the same for the other side, and start reifying machines with the language of humans – i.e. anthropomorphise and animate them?
Amazon’s warehouse robots in a machinic routine. I can watch this all day.
A thought-provoking look at the impact of massive automation on existing labor practices by C.G.P. Grey.
We have been through economic revolutions before, but the robot revolution is different. Horses aren’t unemployed now because they got lazy as a species, they’re unemployable. There’s little work a horse can do that pays for its housing and hay. And many bright, perfectly capable humans will find themselves the new horse: unemployable through no fault of their own. […]
This video isn’t about how automation is bad — rather that automation is inevitable. It’s a tool to produce abundance for little effort. We need to start thinking now about what to do when large sections of the population are unemployable — through no fault of their own. What to do in a future where, for most jobs, humans need not apply.
What collapsing empire looks like by Glenn Greenwald: – The title speaks for itself. A list of bad news from all across the US – power blackouts, roads in disrepair, no streetlights, no schools, no libraries – reads like Eastern Europe after the fall of communism, only that the fall is yet to come here.
Special Operations’ Robocopter Spotted in Belize by Olivia Koski: – Super quiet rotors, synthetic-aperture radar capable of following slow moving people through dense foliage, and ability to fly autonomously through a programmed route. This article complements nicely the one above.
Open Source Tools Turn WikiLeaks Into Illustrated Afghan Meltdown by Noah Shachtman: – Meticulous graphical representation of the WikiLeaks Afghan log. The Hazara provinces in the center of the country, and the Shia provinces next to the Iranian border, seem strangely quiet.
Google Agonizes on Privacy as Ad World Vaults Ahead by Jessica E. Vascellaro: – A fascinating look at the inside of the Google machine. They seem to have reached a crossroads of their own making – they either start using the Aladdin’s cave of data they have gathered already, or they keep it at arm’s length and lay the foundations of their own demise. Key statement: ‘In short, Google is trying to establish itself as the clearinghouse for as many ad transactions as possible, even when those deals don’t actually involve consumer data that Google provides or sees.’