Amid the broader debate about robotics, growing attention is being directed at the robots themselves: one unintended effect of their integration into society has been a rise in incidents of robots being vandalized or severely damaged. Children growing up in this decade, however, will encounter and engage with robotic technology in a vastly different way. For this reason, a team of researchers based in South Korea built a tortoise-like robot that teaches children not to abuse robots.
The team presented their work at this year’s ACM/IEEE International Conference on Human Robot Interaction (HRI), which was held earlier this month in Chicago. The group of South Korea-based researchers from Naver Labs, KAIST and Seoul National University presented two papers, “Shelly, a Tortoise-Like Robot for One-to-Many Interaction with Children” and “Designing Shelly, a Robot Capable of Assessing and Restraining Children’s Robot Abusing Behaviors”.
Shelly’s shell is equipped with LEDs to attract children (the colors also change with Shelly’s mood), as well as sensitive vibration sensors that respond to their touch. The tortoise design is a clever choice because it reinforces the lesson for the children: they learn that if they hit or abuse Shelly, it will retreat inside its shell, staying there for 14 seconds until it decides that the coast is clear.
The team took away first prize in the IEEE-HRI Student Robot Design Challenge.
Countering the Rise of Robot Vandalism
With the advent of artificial intelligence (AI) and the visible impact of robotics on the automation of labor and various service sectors, tech giants have been ushering in a new era of unprecedented convenience and efficiency. One consequence that should also be discussed, however, is the mixed reaction from the general public: some tech enthusiasts and business and finance leaders eagerly await the next creation, while some workers cite growing job insecurity, fearing that robots may one day claim their jobs. In the end, some are taking out their frustration on innocent robots.
One of the most widely reported cases dates back to 2015 and involved hitchBOT, a friendly robot and the brainchild of McMaster University Professor David Smith and Ryerson University Assistant Professor Frauke Zeller. hitchBOT had been hitchhiking across several countries, relying on the kindness of strangers to complete each leg of its trip (via instructions appearing on its back), when it was found dismembered in Philadelphia.

On an optimistic note, however, hitchBOT delivered a posthumous message thanking all the people along the way who had supported it. Professor Smith said about the fascinating project:

“We did learn a lot in getting some sense of people's aspirations, fears, their attitudes, values and beliefs around technology and around artificial intelligence. And, of course, this is happening at a time when some of the world leaders of science and business—Elon Musk, Bill Gates and Stephen Hawking—are weighing in on the perils of a world in which artificial intelligence becomes solely the domain of the military, for example.”
The truth is that AI and robotics are a double-edged sword: the more we create intelligent machines imbued with human characteristics, capabilities and behaviors, the more they will be perceived as human and be vulnerable to random attacks. Though it is difficult to fully predict the long-term impact of educational robots like Shelly (the children in the study were only six to nine years old, after all), the team that created the robot has taken a proactive stance based on the reality that robots are here to stay, and that, therefore, we should all learn to coexist with them peacefully.