
Humans as Robot Caretakers

Command Line Heroes Team
Tech history


About the episode

HitchBOT was an experiment in stewardship: a small, rudimentary robot, unable to move on its own, that depended on the kindness of passersby to help it along its journey, until it met an untimely end. Trust is a two-way street, and because robots are not powered by their own moral code, they rely on humans to supply both empathy and support.

Dr. Frauke Zeller shares HitchBOT’s origin story. Eli Schwartz recounts his heartbreak upon learning what happened in Philadelphia. Dr. Julie Carpenter analyzes why it all went down. And Georgia Guthrie epitomizes the outpouring of sympathy that followed. Together, they tell a layered story about humans, and how we respond to robots. With HitchBOT, we find a little hope in the shadow of its demise.

Command Line Heroes Team | Red Hat original show

Subscribe

Subscribe here:

Listen on Apple Podcasts
Listen on Spotify
Subscribe via RSS Feed

Transcript

The story begins the way a lot of hitchhiking adventures start out. There's an innocent faith in the kindness of strangers. But when our hitchhiker arrived in Philadelphia in the summer of 2015, it was the end of the road.

"Guys, over here. Hitchhiking can be dangerous ... I think I found something. At least parts of something."

... especially when you're a robot. You know, humans spend a lot of time worrying about the evil things robots might do to them. We worry whether we can trust our robots, but we don't often wonder whether our robots can trust us—and the answer to that question matters more than you might think.

I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat.

This season, we're exploring robots every way we can. But this time we're flipping things around a bit and looking at ourselves. Can a robot trust a human? And why should we humans care whether they can or not? To answer those questions, we begin with that hitchhiking robot you just heard about. Its name was HitchBOT, and it was designed to travel the world, relying on the kindness of strangers who gave it a ride from town to town. HitchBOT had a larger goal, though. It was designed to explore not just new landscapes, but new dimensions of human empathy.

"To this day, we don't really know what happened."

Frauke Zeller is an associate professor in the School of Professional Communication at Ryerson University in Toronto. She's deeply interested in human-robot interaction, and she studies this problem by putting robots in odd situations. She teamed up with David Harris Smith from McMaster University, and the pair started thinking of ways to push robots into uncomfortable situations—areas where they'd have to rely on human kindness.

"We came up with the idea of a hitchhiking robot, because nobody would ever expect a robot to be hitchhiking."

A hitchhiking robot. A robot with no ability to walk on its own, thumbing a ride all the way across a country.
It was Smith who suggested they leave it on the side of the road and abandon little HitchBOT to its fate. What better way to see whether a robot really could trust the humans it encountered?

"And because I was trained more on human-robot interaction, I said, 'We can't do that.' First rule of thumb: you'd never leave your robot out of sight."

It wasn't that the hardware was especially valuable. Inside HitchBOT's body, there was just a tablet, a GPS, a mic, and a camera. Still, Zeller worried about what might happen if they lost track of their little robot. The experiment was way too risky. But Smith kept pushing, and Zeller eventually came around. They would build a robot that needed human caretakers to survive. And then they'd abandon it to see what happened.

"We wanted to see if a robot could trust a human, and not the other way around. So we kind of flipped the whole script on robotics research."

HitchBOT was built out of a beer cooler, some pool noodles for arms, and a bucket for a head. A tablet embedded in its chest could display weather reports and Wikipedia articles. Rubber boots completed the look. It was charming and helpless at the same time. And it carried a little sign that said, "Going to" whatever the destination was.

"We thought we can track it with GPS, so we always know where it is. We have a camera on it, so we can see what it experiences. And then we have a social media account, so we can share that with people and see how people react to it."

On July 27, 2014, HitchBOT was placed beside a road in Halifax, Nova Scotia. Its destination? Victoria, British Columbia, 6,000 kilometers away. The researchers had no idea what would happen. Would people stop? Would they care about this little robot? Would HitchBOT make it to the other side of the country? Within two hours, someone had picked it up. The experiment was working. People were stopping for HitchBOT, taking photos with it, bringing it along on their adventures.
Some drivers would talk to HitchBOT during the ride, asking about its journey and telling it about their own lives. The robot couldn't really respond in a meaningful way, but people didn't seem to mind.

"People were treating it like a travel companion. They would stop at roadside attractions, they would take it to camping sites, they would include it in family photos. It was really heartwarming to see how people embraced this little robot."

Over the course of three weeks, HitchBOT crossed Canada successfully. It had attended a wedding, visited Niagara Falls, and even gone to a rock concert. The experiment was such a success that Zeller and Smith decided to try it in other countries. HitchBOT traveled through Germany and the Netherlands without incident. People continued to show kindness and care for the helpless robot.

And then came the American leg of the journey. In July 2015, HitchBOT was placed beside a road in Boston, Massachusetts. The plan was for it to travel across the United States to San Francisco. The team was optimistic. If HitchBOT could make it across Canada and Europe, surely it could handle America too.

"I was following HitchBOT's journey on social media. I thought it was such a cool experiment. Here was this innocent little robot, depending entirely on human kindness to get across the country. It was like a test of whether people would take care of something vulnerable."

Eli Schwartz is an IT professional who became fascinated with HitchBOT's journey. Like thousands of others around the world, he was following the robot's progress on social media, cheering it on as it made its way across America.

"For the first couple of weeks, everything seemed to be going great. HitchBOT was making good progress, meeting lots of interesting people, having adventures. And then one day, the updates just stopped."

Two weeks into its American journey, HitchBOT had reached Philadelphia. And there, on the night of August 1, 2015, something terrible happened.
The robot was found the next morning, vandalized and dismantled. Its head was missing, its arms were torn off, and its body was badly damaged.

"When I found out what happened to HitchBOT, I was genuinely upset. I know it sounds silly to be emotional about a robot, but HitchBOT represented something important. It was this symbol of trust and kindness, and someone had just destroyed it for no reason."

The destruction of HitchBOT became international news. People around the world were shocked and saddened by what had happened. But it also raised important questions about human nature and our relationship with robots.

"When we see a robot like HitchBOT, we're not just seeing a machine. We're projecting human qualities onto it. We see it as vulnerable, innocent, trusting. And when something bad happens to it, we react as if it were a living being."

Julie Carpenter is a research fellow with the Ethics + Emerging Sciences Group. She studies human-robot relationships and was fascinated by the public reaction to HitchBOT's destruction.

"What happened to HitchBOT reveals something important about how humans relate to robots. We don't just see them as tools or machines. We form emotional connections with them, especially when they're designed to seem friendly or vulnerable."

This emotional connection is something that robot designers are increasingly thinking about. As robots become more common in our daily lives, understanding how humans relate to them becomes crucial. Do we see them as partners, pets, tools, or something else entirely?

"The person who destroyed HitchBOT probably didn't see it the same way that most people did. While thousands of people saw a friendly, innocent robot deserving of care and protection, someone else saw it as just an object to be destroyed. That disconnect is fascinating from a psychological perspective."

The destruction of HitchBOT raised uncomfortable questions about human nature.
If we can't be trusted to take care of a harmless robot, what does that say about us? But the overwhelming response to HitchBOT's death told a different story.

"We were overwhelmed by the response. People from all over the world were sending us messages of condolence, offering to help rebuild HitchBOT, wanting to do something to make up for what had happened."

The outpouring of support showed that while one person had chosen to destroy HitchBOT, thousands more cared about it and wanted to protect it. In a way, HitchBOT had succeeded in its mission even in death. It had revealed both the best and worst of human nature.

"HitchBOT's story shows us that most people are ready to care for robots, to see them as deserving of protection and kindness. That's actually encouraging for our robotic future. It suggests that as robots become more common, most humans will be good caretakers."

But the story also highlights an important point about robot vulnerability. Unlike in science fiction, where robots are often portrayed as powerful and potentially dangerous, real robots are often fragile and dependent on human care. This creates a new kind of responsibility for humans.

"As we develop more sophisticated robots, we need to think about what it means to be their caretakers. Are we responsible for their wellbeing? Do we have obligations to protect them from harm? These aren't just technical questions, they're ethical ones."

HitchBOT's journey also revealed something interesting about how we anthropomorphize robots. People didn't just see HitchBOT as a machine hitchhiking across the country. They saw it as a character with its own personality and agency.

"People would talk to HitchBOT like it was a person. They would introduce it to their families, include it in their vacation photos, tell it stories about their lives. It became more than just a robot to them. It became a companion."

This tendency to anthropomorphize robots has important implications for the future.
As robots become more sophisticated and more human-like, our emotional connections to them will likely become even stronger. That could be both beneficial and problematic.

"On one hand, emotional connections with robots could lead to better care and more thoughtful integration of robots into society. On the other hand, it could lead to unrealistic expectations about what robots can do, or inappropriate emotional dependence on them."

The story of HitchBOT also raises questions about trust in the age of robots. Zeller and Smith designed their experiment to see if a robot could trust humans. But in doing so, they also revealed how much humans are willing to trust robots.

"People didn't just pick up HitchBOT and drop it off at the next town. They invited it into their homes, took it to meet their families, shared personal stories with it. They trusted this robot in ways that surprised us."

This mutual trust between humans and robots will be crucial as robots become more integrated into our daily lives. We'll need to trust robots to help us with important tasks, and robots will need to trust humans to care for them and use them responsibly.

"Trust is fundamental to any relationship, whether it's between humans or between humans and robots. HitchBOT showed us that this trust can develop naturally, but it also showed us how fragile it can be."

After HitchBOT's destruction, Zeller and her team faced a difficult decision. Should they try again? Should they rebuild HitchBOT and send it back out into the world? The risk was real: it had been destroyed once, and it could be destroyed again.

"We decided not to try the American journey again. The destruction in Philadelphia was traumatic for everyone involved, including the people who had been following HitchBOT's journey. We didn't want to put people through that again."

But HitchBOT's story didn't end with its destruction. The robot was rebuilt and went on to star in a play in France about its adventures.
And the experiment itself had generated valuable data about human-robot interaction that researchers continue to study.

"Even though HitchBOT was destroyed, the experiment was ultimately a success. We learned so much about how humans relate to robots, about empathy and care and trust. And we saw that the vast majority of people are good caretakers for robots."

The response to HitchBOT's destruction also revealed something important about human community. People didn't just mourn the robot individually; they came together to express their shared grief and outrage.

"The online community that had formed around HitchBOT was incredible. People from all over the world were sharing their memories of following the robot's journey, expressing their sadness about what happened, and talking about what it meant for the future of robotics."

This community response suggests that robots like HitchBOT can serve as focal points for human connection and shared values. In caring for robots, we're also caring for each other and for the values we want to see in the world.

"When people rallied around HitchBOT, they weren't just defending a robot. They were defending the ideas that the robot represented: kindness, trust, cooperation, care for the vulnerable. In a way, you're breaching the trust of all those humans who care about the robot."

So when we ask whether humans can be trusted with a random vulnerable robot, like HitchBOT, we're really asking whether humans are ready to support each other as we enter a robotic future.

"Pick it up. Yeah, it just hitchhikes. And there were instructions on what you do with this. And it's been to Germany, the Netherlands, and it's been across Canada."

[German news announcement: No love in the city of brotherly love. HitchBOT's remains will be shipped back to Canada.]

"Well, it's a sad end for a much-loved robot, just two weeks into its U.S. tour. Funeral arrangements are yet to be made, but HitchBOT will be memorialized on its website."
Even in its gruesome death, HitchBOT had become a media sensation. Fans around the world were downcast. Whoever had committed this crime had gotten away with it, slunk back into the shadows. And all the trust, kindness, and care that HitchBOT had inspired seemed to be threatened. But folks weren't about to let HitchBOT die a meaningless death.

"I can't get over how much people love robots. I really had no idea before that situation happened."

When HitchBOT was destroyed, a lot of people living in Philadelphia felt their city was somehow responsible. It's a place with a reputation for being a bit rough, a little inhospitable to outsiders. Georgia Guthrie found herself at the center of a group that wanted to push back on those stereotypes. She worked at a place in Philly called the Hacktory, a makerspace where people could share tools and trade information about technology, and Guthrie thought the Hacktory might be able to respond to the attack on HitchBOT.

"The next day, I saw that there were probably like five articles, maybe more, in our local press. And then there was some national press talking about this HitchBOT situation. So I started tweeting at the reporters who wrote the stories and saying, 'Hey, we're a group in Philly. We're offering to fix the robot and send it on its way.' And then it just snowballed. I could not believe where it went after that."

The body parts were actually shipped back to Zeller in Canada, so rebuilding HitchBOT wasn't in the cards. But Guthrie still did not want to drop the idea.

"Our whole reason for existing was to be an accessible, friendly place for people to learn about technology, and then to have some technical thing that a lot of people loved be destroyed in our city—it just felt like we needed to respond."

The mayor was calling her, worried about how this all looked. Others wanted to have a parade for HitchBOT. They wanted to make things right. So Guthrie decided to show the world that Philly cared about robots.
They wouldn't use HitchBOT's actual name, which belonged to Zeller's team, but they brought an informal group together at the Hacktory to brainstorm ideas for a new kind of robot in HitchBOT's honor, a robot inspired by HitchBOT's travels. It would boast a programmable piece of software, one that could be shared with the whole world and housed in any basic body. Their software could be built into a teddy bear, for example. Or, if you felt like borrowing HitchBOT's look, you could bring to life an old beer cooler with some pool noodle arms.

"The idea was you would have this kit and then you would give it to someone else. And then upon receiving it, they would have to do an act of kindness. And then they would share the act of kindness with some hashtags that we came up with. Our initial name for it was Philly Love Bot."

Inspiring kindness in the memory of HitchBOT was a way to heal the city's relationship with robots, and with itself. So in a way, HitchBOT did accomplish its goal. It might not have made it all the way across America, but it did get people to prove they cared about robots—cared more than some thought possible. Meanwhile, robots themselves can be very forgiving of our occasional betrayals. They can be rebuilt. HitchBOT, for example, was remade by Zeller's team and shipped to France, where it starred in a play about its adventures.

"Bonjour. Et bonjour. Comment ça va? Je suis en grande forme." [In French: Hello. And hello. How are you? I'm in great shape.]

The play's director, Linda Blanchet, says that the audience doesn't just see a robot when they watch the play; they see a mirror.

"Qui es-tu? Je suis un seau plastique qui parle." [In French: Who are you? I'm a talking plastic bucket.]

And when they learn that HitchBOT was ripped to pieces, the audience is often brought to tears. To find success in this robotic revolution, we're going to need all the trust-building exercises we can find. That's why experiments like HitchBOT are so much more than a quirky adventure.
We're learning how to respect and care for robots, and that's going to make all the difference to the humans who rely on those robots down the road. And what about HitchBOT's vandal, the mobile murderer who slipped back into the shadows? Folks like that might always be lurking around. But what HitchBOT's story really tells you is that there are way more people ready to build something up than take it apart.

Next time, the boundaries of trust get pushed to the limit. We're learning about robots that get turned into weapons. Who's responsible when good robots do bad things?

To make sure you don't miss an episode, follow or subscribe wherever you get your podcasts. I'm Saron Yitbarek, and this is Command Line Heroes, an original podcast from Red Hat. Keep on coding.

About the show

Command Line Heroes

During its run from 2018 to 2022, Command Line Heroes shared the epic true stories of developers, programmers, hackers, geeks, and open source rebels, and how they revolutionized the technology landscape. Relive our journey through tech history, and use #CommandLinePod to share your favorite episodes.