
Robots are guided by humans in the loop.

Like humans, robots cannot see past barriers, and they occasionally need a little help to get where they’re going.

Rice University engineers have developed a method that lets humans help robots “see” their surroundings and complete tasks.

The method, Bayesian Learning IN the Dark, or BLIND, is a novel approach to the age-old problem of motion planning for robots that work in environments where not everything is clearly visible all the time.

The research, led by computer scientists Lydia Kavraki and Vaibhav Unhelkar of Rice’s George R. Brown School of Engineering, with co-lead authors Carlos Quintero-Peña and Constantinos Chamzas, was presented in late May at the Institute of Electrical and Electronics Engineers’ International Conference on Robotics and Automation.

According to the paper, the method, developed primarily by graduate students Quintero-Peña and Chamzas in collaboration with Kavraki, keeps a human in the loop to “augment robot perception and, significantly, prevent the execution of unsafe motion.”

To help robots with “large degrees of freedom,” that is, many moving parts, they combined Bayesian inverse reinforcement learning, a method by which a system learns from continually updated information and experience, with tried-and-true motion-planning techniques.
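The Bayesian half of that combination can be illustrated with a toy example. The sketch below is a generic Bayesian update over a handful of hypothetical “preference” hypotheses, assuming binary accept/reject observations; the hypotheses, numbers, and function names are invented for illustration, and the actual BLIND model is far more involved.

```python
# Toy Bayesian update: refine a belief over which of several candidate
# "preferences" the human holds, given one piece of binary feedback.
# This illustrates Bayesian inference in general, not the authors'
# actual inverse-reinforcement-learning model.

def bayes_update(prior, likelihoods):
    """One Bayesian update: posterior is proportional to likelihood x prior."""
    unnormalized = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three hypotheses about what the human prefers, initially equally likely.
belief = [1 / 3, 1 / 3, 1 / 3]

# The human accepts a proposed segment; each hypothesis assigns a
# different probability to that acceptance.
accept_likelihood = [0.9, 0.5, 0.1]
belief = bayes_update(belief, accept_likelihood)
# The hypothesis most consistent with the acceptance now dominates.
```

With each new accept or reject, the same update is applied again, so the system’s belief about the human’s intent sharpens as feedback accumulates.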

To test BLIND, the Rice lab used a Fetch robot, an articulated arm with seven joints, instructing it to grab a small cylinder from one table and move it to another while maneuvering past a barrier.

“Instructions to the robot get complicated if you have more joints,” Quintero-Peña said. “When you’re directing a human, you can just say, ‘Lift up your hand.’”

But when barriers obstruct the machine’s “view” of its target, a robot’s programmers must be precise about the movement of each joint at every point in its trajectory.

Rather than programming a trajectory in advance, BLIND inserts a human mid-process to refine the choreographic options, or best guesses, suggested by the robot’s algorithm. “BLIND allows us to take information in the human head and compute our trajectories in this high-degree-of-freedom space,” Quintero-Peña explained.

“We employ a particular technique of feedback called critique, essentially a binary form of feedback in which the human is given labels on segments of the trajectory,” he said.

These labels appear as a series of connected green dots that represent candidate paths. As BLIND steps from dot to dot, the human approves or rejects each movement to fine-tune the path and steer it around obstacles.
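The accept/reject loop described above can be sketched in a few lines. This is an invented one-dimensional illustration, with made-up names (`propose_segments`, `human_critique`) and a scripted “human” that rejects waypoints near an obstacle; the real BLIND system plans for a seven-joint arm and learns from the critiques rather than merely filtering with them.

```python
# Toy sketch of the critique loop: the planner proposes candidate
# waypoints (the "green dots") and a human accepts or rejects each one.

def propose_segments(start, goal, n=5):
    """Propose n candidate next waypoints between start and goal."""
    step = (goal - start) / n
    return [start + step * (i + 1) for i in range(n)]

def human_critique(candidate, obstacle, margin=0.5):
    """Binary feedback: reject any waypoint too close to the obstacle."""
    return abs(candidate - obstacle) > margin  # True means "I like this"

def plan_with_critique(start, goal, obstacle, tol=0.1):
    """Build a path step by step, keeping only human-approved waypoints."""
    path = [start]
    current = start
    while abs(current - goal) > tol:
        accepted = None
        for cand in propose_segments(current, goal):
            if human_critique(cand, obstacle):
                accepted = cand  # take the first step the human approves
                break
        if accepted is None:
            break  # no safe step was approved
        path.append(accepted)
        current = accepted
    return path
```

Running `plan_with_critique(0.0, 10.0, 2.5)` produces a path whose approved waypoints all stay clear of the obstacle at 2.5, which is the essence of the interface: the human only ever answers yes or no.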

The interface is easy for people to use, Chamzas said, because “we can tell the robot, ‘I like this’ or ‘I don’t like that.’” Once a set of movements is approved, the robot can carry out its task, he said.

“One of the most important things here is that human preferences are hard to describe with a mathematical formula,” Quintero-Peña said. “Our work simplifies human-robot relationships by incorporating human preferences. That’s how I think applications will get the most benefit from this work.”

“This work wonderfully exemplifies how a little, but targeted, human intervention can significantly enhance the capabilities of robots to execute complex tasks in environments where some parts are completely unknown to the robot but known to the human,” said Kavraki, a robotics pioneer whose resume includes advanced programming for NASA’s humanoid Robonaut aboard the International Space Station.

“It shows how methods for human-robot interaction, the topic of research of my colleague Professor Unhelkar, and automated planning pioneered for years at my laboratory can blend to deliver reliable solutions that also respect human preferences.”
