Educating a Robot to Identify and Pour Water

Researchers at Carnegie Mellon University taught a robot to recognize water and pour it into a glass with the help of a horse, a zebra, and artificial intelligence.

Water poses a difficult challenge for robots because it is transparent. Robots have learned to pour water before, but earlier approaches, such as heating the water and using a thermal camera or placing the glass in front of a checkerboard background, don’t work well in real-world settings. A simpler method could let robot waitstaff refill water glasses, robot pharmacists measure and mix medications, and robot gardeners water plants.

Gautham Narasimhan, who earned his master’s degree from the Robotics Institute in 2020, worked with a team in the institute’s Robots Perceiving and Doing Lab to apply AI and image translation to the problem.

Image translation algorithms use collections of images to train artificial intelligence to convert an image from one style to another, such as transforming a photograph into a Monet-style painting or making an image of a horse look like a zebra. For this research, the team used a method called contrastive learning for unpaired image-to-image translation (CUT, for short).
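At its core, CUT trains the translator with a patch-wise contrastive objective: features of a patch in the translated image are pulled toward the features of the same patch location in the input and pushed away from other patches. The PyTorch sketch below is a minimal, illustrative version of that PatchNCE-style loss; the function name, tensor shapes, and temperature value are assumptions for illustration, not the authors’ code.

```python
# Minimal sketch of the patch-wise contrastive (PatchNCE-style) loss behind
# CUT. Illustrative only: names, shapes and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def patch_nce_loss(feat_src, feat_tgt, temperature=0.07):
    """Contrastive loss over corresponding feature patches.

    feat_src: (N, C) features of N patches from the input image.
    feat_tgt: (N, C) features of the same N patch locations in the
              translated image. Matching locations are positives; every
              other patch in the batch serves as a negative.
    """
    feat_src = F.normalize(feat_src, dim=1)
    feat_tgt = F.normalize(feat_tgt, dim=1)
    # (N, N) similarity matrix; the diagonal holds the positive pairs.
    logits = feat_tgt @ feat_src.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

# Toy check: 256 patches with 64-dimensional features.
src = torch.randn(256, 64)
tgt = src + 0.1 * torch.randn(256, 64)  # translated patches stay close
print(patch_nce_loss(src, tgt).item())
```

Minimizing this loss encourages each translated patch to stay recognizably tied to its source patch, which lets the model change appearance, such as turning colored liquid transparent, without paired training examples.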

David Held, an assistant professor in the Robotics Institute who advised Narasimhan, noted that during the training phase of learning, “You need some method of notifying the algorithm what the right and wrong answers are.” Labeling data, however, can be a laborious process, particularly when training a robot to pour water, where a person might have to label individual water droplets in an image.

“Just like we can train a model to translate an image of a horse to look like a zebra, we can similarly train a model to translate an image of colored liquid into an image of transparent liquid,” Held said. “We used this model to enable the robot to understand transparent liquids.”

A transparent liquid like water is hard for a robot to see because the way it reflects, refracts and absorbs light varies with the background. To teach the computer to see different backgrounds through a glass of water, the team played YouTube videos behind a transparent glass full of water. Training the system this way allows the robot to pour water against varied backgrounds in the real world, regardless of where it is located.
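The team’s setup used a physical screen behind a real glass. As a loose software analogue of the same varied-background idea, one could composite a cutout of a glass over random video frames so a vision model sees many backgrounds; the sketch below does this with OpenCV, and the file names are hypothetical.

```python
# Loose software analogue of the varied-background training idea: alpha-blend
# a four-channel (color + alpha) cutout of a glass over random video frames.
# The actual experiment played YouTube videos on a screen behind a real glass;
# the file names below are hypothetical.
import numpy as np
import cv2

def composite(glass: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Blend a glass cutout (H, W, 4) onto a background frame (H, W, 3)."""
    alpha = glass[..., 3:4].astype(np.float32) / 255.0
    fg = glass[..., :3].astype(np.float32)
    bg = cv2.resize(background, (glass.shape[1], glass.shape[0]))
    return (alpha * fg + (1.0 - alpha) * bg.astype(np.float32)).astype(np.uint8)

cap = cv2.VideoCapture("background_video.mp4")                # any varied footage
glass = cv2.imread("glass_cutout.png", cv2.IMREAD_UNCHANGED)  # 4-channel cutout
ok, frame = cap.read()
if ok and glass is not None and glass.shape[2] == 4:
    cv2.imwrite("training_sample.png", composite(glass, frame))
```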

Even for humans, Narasimhan noted, “there are moments when it can be challenging to accurately define the boundary between water and air.”

Using their method, the robot was able to pour water into a glass to a specific height. The team then repeated the experiment with glasses of different shapes and sizes.
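Pouring to a target height suggests a simple feedback loop: segment the liquid in each camera frame, estimate the fill level from the mask, and stop once the target is reached. The sketch below shows that control idea; `get_mask`, `tilt` and `hold` are hypothetical stand-ins for the perception and robot interfaces, not the authors’ actual code.

```python
# Hypothetical closed-loop pouring sketch: estimate the fill level from a
# binary liquid mask and stop tilting once a target height is reached.
# `get_mask`, `tilt` and `hold` are stand-ins for real robot interfaces.
import numpy as np

def liquid_height_fraction(mask: np.ndarray) -> float:
    """Fraction of the image height filled with liquid, from a (H, W) mask."""
    rows = np.flatnonzero(mask.any(axis=1))  # image rows containing liquid
    if rows.size == 0:
        return 0.0
    return (mask.shape[0] - rows.min()) / mask.shape[0]

def pour_to(target: float, get_mask, tilt, hold, max_steps=500) -> bool:
    """Tilt the container until the estimated fill level reaches `target`."""
    for _ in range(max_steps):
        if liquid_height_fraction(get_mask()) >= target:
            hold()   # stop pouring
            return True
        tilt()       # pour a little more
    return False

# Toy check with a synthetic half-full mask:
fake = np.zeros((100, 50), dtype=bool)
fake[50:] = True
print(liquid_height_fraction(fake))  # 0.5
```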

Future research could build on this approach, Narasimhan said, by incorporating varied lighting conditions, having the robot pour water from one container into another, or estimating not just the height but also the volume of the water.

The study was presented last month in Philadelphia at the IEEE International Conference on Robotics and Automation. Reaction to the work has been positive, Narasimhan said.

“People in the robotics community really appreciate it when research works in the real world and not only in simulation,” said Narasimhan, who is now a computer vision engineer at Path Robotics in Columbus, Ohio. “We wanted to do something that was straightforward yet still had impact.”
