
The Easiest Way for Humans to Get Used to Robots

Researchers who study human-robot interaction often focus on understanding human intentions from the robot’s perspective, so the robot can learn to work with people more effectively. But human-robot interaction is a two-way street: the human also needs to learn how the robot behaves.

Thanks to decades of cognitive science and educational psychology research, scientists have a pretty good handle on how humans learn new concepts. So, researchers at MIT and Harvard University collaborated to apply well-established theories of human concept learning to challenges in human-robot interaction.

They examined past studies that focused on humans trying to teach robots new behaviors. The researchers identified opportunities where these studies could have incorporated elements from two complementary cognitive science theories into their methodologies. They used examples from these works to show how the theories can help humans form conceptual models of robots more quickly, accurately, and flexibly, which could improve their understanding of a robot’s behavior.

According to Serena Booth, a graduate student in the Interactive Robotics Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the paper’s lead author, humans who create more accurate mental models of a robot are frequently better collaborators. This is crucial when humans and robots collaborate in high-stakes settings like manufacturing and healthcare.

“Whether or not we try to help people build conceptual models of robots, they will build them anyway. And those conceptual models can be wrong. That can put people in serious danger. We need to do everything we can to give each person the most accurate mental model possible,” says Booth.

Booth and her advisor, Julie Shah, an MIT professor of aeronautics and astronautics and the director of the Interactive Robotics Group, co-authored this paper in collaboration with researchers from Harvard. Elena Glassman ’08, MNG ’11, Ph.D. ’16, an assistant professor of computer science at Harvard’s John A. Paulson School of Engineering and Applied Sciences, with expertise in theories of learning and human-computer interaction, was the primary advisor on the project. Harvard co-authors also include graduate student Sanjana Sharma and research assistant Sarah Chung. The research will be presented at the IEEE Conference on Human-Robot Interaction.

A theoretical strategy
The researchers examined 35 studies on human-robot teaching through the lens of two key theories. The “analogical transfer theory” holds that people learn by analogy: when they encounter a new topic or concept, they implicitly search for something familiar they can use to make sense of it.

The “variation theory of learning” argues that strategic variation can reveal concepts that might otherwise be difficult to grasp. It holds that people pass through four stages when engaging with a new concept: repetition, contrast, generalization, and variation.

While many of the papers incorporated elements of one theory, Booth says this was most likely coincidental. Had the researchers consulted these theories at the outset of their work, they might have been able to design more effective experiments.

For instance, when teaching people to interact with a robot, researchers often show them many examples of the robot performing the same task. But for people to build an accurate mental model of that robot, variation theory suggests they need to see an array of examples of the robot performing the task in different environments, as well as examples of it making mistakes.

“It is very rare in the human-robot interaction literature because it is counterintuitive, but people also need to see negative examples to understand what the robot is not,” Booth says.
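
To make that idea concrete, here is a minimal sketch of how a study might assemble a demonstration set in the spirit of variation theory, mixing successful and failed trials across environments. The robot, environments, and data structures below are hypothetical, invented purely for illustration; they are not taken from the paper.

```python
# Hypothetical sketch: building a demonstration set that follows
# variation theory -- vary the environment and include failures, so
# observers see what the robot can and cannot do.
import random
from dataclasses import dataclass, field

@dataclass
class Demo:
    environment: str        # setting in which the robot acted (invented)
    succeeded: bool         # whether the robot completed the task
    trajectory: list = field(default_factory=list)  # (state, action) pairs

def build_curriculum(demos, per_env=2):
    """Select both successes and failures from every environment."""
    curriculum = []
    for env in {d.environment for d in demos}:
        successes = [d for d in demos if d.environment == env and d.succeeded]
        failures = [d for d in demos if d.environment == env and not d.succeeded]
        curriculum += random.sample(successes, min(per_env, len(successes)))
        curriculum += random.sample(failures, min(per_env, len(failures)))
    random.shuffle(curriculum)
    return curriculum
```

The point of the sketch is simply that the selection rule treats failures as first-class teaching material rather than filtering them out.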

These cognitive science theories could also inform the design of physical robots. If a robotic arm looks like a human arm but moves in ways humans do not, Booth says, people will struggle to build accurate mental models of it. As analogical transfer theory suggests, because people map the robotic arm onto the human arm they are familiar with, the mismatch can confuse them and make it harder to learn how to interact with the robot.

Improving explanations

Booth and her collaborators also studied how theories of human concept learning could improve the explanations that seek to help people build trust in new, unfamiliar robots.

“In explainability, we have a really big problem of confirmation bias. There are not usually standards around what an explanation is and how a person should use it. As researchers, we often design an explanation method, it looks good to us, and we ship it,” she says.

Instead, they suggest that researchers use theories from human concept learning to think about how people will use explanations, which are often generated by robots to clearly communicate the policies they use to make decisions. By providing a curriculum that helps the user understand what an explanation method means and when to use it, but also where it does not apply, they will develop a stronger understanding of a robot’s behavior, Booth says.

Based on these results, they offer several suggestions for how research on human-robot teaching can be improved. Among other things, they recommend that researchers account for analogical transfer theory by guiding people toward apt comparisons when they are learning to work with a new robot. That guidance, Booth says, can ensure people draw the right analogies so they are not surprised or confused by the robot’s actions.

They also contend that human learning can be aided by showing users both positive and negative examples of robot behavior, and by demonstrating how strategic changes to a robot’s “policy” affect its behavior across strategically varied environments. The robot’s policy is a mathematical function that assigns a probability to each action the robot can take.
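
To make that last definition concrete, here is a minimal sketch of a policy as a function that maps a state to a probability for every available action. The action names, state features, and softmax weighting below are assumptions chosen for illustration, not details from the paper.

```python
# Minimal sketch of a policy: a function assigning a probability to
# each possible course of action. All names and numbers are invented.
import math

ACTIONS = ["move_left", "move_right", "grasp", "wait"]  # hypothetical action set

def policy(state_features, weights):
    """Softmax policy: scores each action from the state features, then
    converts the scores into a probability for every possible action."""
    scores = {a: sum(w * f for w, f in zip(weights[a], state_features))
              for a in ACTIONS}
    total = sum(math.exp(s) for s in scores.values())
    return {a: math.exp(s) / total for a, s in scores.items()}

# Example: with identical weights the policy is uniform over actions.
probs = policy(state_features=[1.0, 0.5],
               weights={a: [0.2, -0.1] for a in ACTIONS})
print(probs)  # each action gets probability 0.25
```

Strategically changing the weights, in this toy version, is what shifts probability from one action to another, which is the kind of variation the researchers suggest showing to users.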

“Though we have been conducting user studies for years, we have always relied on our own instincts about what would or would not be helpful to show a human. The next step is to be more rigorous about grounding this work in theories of human cognition,” says Glassman.

Now that she has completed this initial review of the literature through the lens of cognitive science theories, Booth plans to test whether the theories actually help people learn by re-creating some of the studies she examined.
