Your thoughts on that Hitchbot story this summer? Over the holidays I decided I would blog mine, even though it means straying from the strictly tech focus of this blog. If I insult your intelligence below, it’s only because I wish more of your common sense were out there on the web; most of what I found was fluffy-minded outrage at the “killing” of a “friendly” childbot. Anyway:
There is no basis for saying that Hitchbot could think, was conscious or felt emotion (which would be three separate propositions). Hitchbot’s makers didn’t try to prove their robot had these powers; in their statements they simply took them for granted. A proof would have been extremely difficult to achieve; apparently even the Turing Test isn’t universally accepted as a measure of machine thinking.
Perhaps the project wasn’t serious, and the twee tone of the website text was there to add “fun”. On the other hand, the real purpose may have been to see how far humans would go in treating Hitchbot as a real person, with the makers giving an extra push in that direction.
They stated they would take no action against whoever had destroyed the robot, but it’s unlikely that they had any legal rights over it or that any crime was committed. A robot is not a person, so only the law relating to property damage would apply, and then only if Hitchbot was someone’s property. Given the nature of the project it wouldn’t be easy for the makers to claim that it still belonged to them. It was hardly a public asset, and may have been nobody’s property, or have belonged to whoever was currently in possession. If it had no owner, no loss could occur; if the finders owned it, they were free to do with it as they pleased.
The Hitchbot scenario was like an elaborate version of a message in a bottle. Makers, senders and onlookers were understandably disappointed that not every finder was willing to play the game. That’s about all that can be said, although of course it was a gift to journalists that the finale took place in Philadelphia, “City of Brotherly Love” (despite the robot being nobody’s brother and impossible to love in the true sense).
Some of the internet commentary on the story from non-journalists may simply have been “suspension of disbelief” taken a bit too far. The best way to enjoy a play or a film isn’t to constantly test whether the plot is realistic, and people are free to approach news stories the same way. However, it wouldn’t be true to say that everyone thinks of robots as entertaining or useful but nothing more.
Rights and Duties
“The American Society for the Prevention of Cruelty to Robots” accepts that robots aren’t people (yet), but does consider that at some point they will be. Apparently they are not alone in thinking that, “should robots reach the level of self-awareness and show genuine intelligence, we must be prepared to treat them as sentient beings, and respect their desires, wants and needs as we respect those things in our human society.” Such fastidious ethics might be admirable, albeit much too pure for the rough and tumble of the real world.
But note that if sentient robots were given rights, human rights would be affected, and in some cases a compromise between the two would have to be reached. It would then be politically incorrect (at least) for me to say: “I don’t care how intelligent a robot is; it is there to serve humans and has no rights. Even if it can be proved that it is a sentient being and capable of ‘suffering’, that is a problem created by the makers, who should now remove that capability. If the robot has evolved that characteristic on its own it should be destroyed.” This primitive attitude would not be acceptable: robots’ rights would be as valid as human rights and the two would have to co-exist somehow.
We accept that our interests as humans are to some extent limited by the rights of animals (or, if animals have no rights, by our duties to them). We take most care in relation to higher animals, but humans are undoubtedly top of the tree. If robots had rights this wouldn’t necessarily be the case in some people’s eyes. They might feel that robots with fabulous computing power were actually higher beings. In that case, wouldn’t their rights take priority?
New Friends
This still sounds like science fiction, but have you noticed how many people like to avoid dealing with people? The automated tills at supermarkets seem to be a success. I wouldn’t have thought that reducing human contact in a restaurant would work, but in one episode of “This American Life” many patrons of a semi-automated restaurant preferred typing their order into a machine instead of giving it to a waiter. This article in “Wired” explains that there is a strong tendency for humans to anthropomorphise objects that have “human” characteristics. So far, so factual: it’s the last paragraph that’s scary:
“Our attitudes and ethical behaviours towards robots are just one element of Darling’s study however. The question she is tackling now focuses less on how our interactions with robots reflect our psychology and more on how robots can be used to affect change in humans. ‘Can we change people’s empathy with robots – might we be able to use robots to make people more empathic?’ she asks. This, she says, is ‘at the core of what I view as ethics. I don’t think robot ethics is about robots, it is about humans.’”
Robots changing our ethics? Why would we want that? One advantage of the old methods (reading and listening) was that you could compare the teachers’ lives against the messages they were putting forward, to see whether they were practising what they preached and how it worked out. Apparently that isn’t necessary any more: what we need to do is supercharge the old message by using a robot to reach the audience in a way that teachers or books can’t.
Once robots were explaining right and wrong to the human race, it wouldn’t be a big step for some people to begin thinking that robots had evolved their own superior wisdom that we should listen to. This would be a modern form of idolatry, man worshipping something he has made: a more sophisticated version of the scene described in Isaiah chapter 44 (New International Version):
“He let it grow among the trees of the forest, or planted a pine, and the rain made it grow. It is used as fuel for burning; some of it he takes and warms himself, he kindles a fire and bakes bread. But he also fashions a god and worships it; he makes an idol and bows down to it.”
Any robot which made a convincing “higher being” could be expected to have something to say about the rights of robots and where human rights must give way to them.
The ASPCR thinks in terms of humans having the upper hand:
“Failure to recognize and grant these rights to non-human artificial intelligences would be similar to early western cultures’ failure to recognize the humanity and attendant rights of non-European peoples.”
I think they have it the wrong way round. The science of robotics is the emerging power. If robots continue improving their “human” characteristics, they may turn into the new imperialists. Just like the colonists in West Africa, they’ll need help from the locals to win, and it seems they may already have allies. People who don’t find robotic waiters and hotel receptionists disturbing may in future prefer the company of such sophisticated machines to that of human beings, and be willing to defend those machines.
The whole thing is completely bonkers of course. Are there really enough nutty people out there to make it happen? Perhaps not; human nature often surprises for the good as well as the bad. If you need reassurance, though, I suggest you don’t spend too much time looking at the topic on the web.