Kavraki Lab presents four papers at ICRA 2023 in London

Rice computer scientists showcase advances in robotics and automation

Rice University computer scientists in the Kavraki Lab presented multiple groundbreaking results at the June 2023 International Conference on Robotics and Automation (ICRA), held this year in London. Research conducted by graduate students and postdoctoral research associates working with Noah Harding Professor of Computer Science Lydia Kavraki led to four robotics and automation papers, covering improved grasping and manipulation in cluttered environments, learning from human demonstrations, extracting robot skills from a single example, and using a physics simulator to help a robot expedite the reconfiguration of objects.

“Our members have presented multiple papers at ICRA in the past, but this is the first year our lab presented four papers at once,” said Kavraki. “It is exciting to see our team members pushing the envelope in their individual areas of interest. This year’s strong ICRA showing is indicative of the amazing work the students and postdocs are accomplishing in our lab, but it also reflects the wide variety of our lab’s collaborations. Few robotics and AI labs share this kind of access to top researchers with such different areas of expertise.”

Robots that learn better by asking humans better questions

Rahul Shome and Shlok Sobti collaborated with Kavraki to incorporate human demonstrations in robot learning, publishing their results in the paper “Efficient Inference of Temporal Task Specifications from Human Demonstrations using Experiment Design.”

Shome said their paper had its roots in a 2022 conversation about “how to make robots of the future better at learning the perfect way you like your apple pie baked. We realized that robots need to understand how to ask you the right questions to narrow down the order of ingredients.

“We were inspired by compelling applications where robots need to coexist with and be helpful to humans. Such a technology needs to be aware of human factors. A variety of real-world problems, like recipes or IKEA instructions, represent temporal tasks. Learning from demonstration is a way a human teaches a robot what the problem (or recipe) is. What we propose is to make the robot use experiment design to intelligently change the setup of the task before every prompt to the human. This makes each interaction with the human more informative and useful.”

According to Shome, the team recorded a dataset of 600 human demonstrations to evaluate the performance of a robot inferring temporal tasks from demonstrations. “We made substantial progress in improving a robot’s performance in recovering correct task specifications and rules with fewer human interactions,” he said.
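To make the flavor of this concrete, here is a minimal Python sketch of an experiment-design loop of this kind. It is not the paper’s algorithm: the precedence-rule hypotheses, the setups, and the stand-in for the human demonstrator are all invented for illustration.

```python
# Illustrative sketch only: candidate temporal specifications are precedence
# rules ("a must come before b"), and the robot picks the setup whose
# predicted demonstration prunes the most remaining candidates.
STEPS = ("flour", "sugar", "apples", "crust")

# Hypothetical candidate specifications the robot is trying to narrow down.
hypotheses = [
    {("flour", "sugar")},
    {("sugar", "flour")},
    {("flour", "sugar"), ("apples", "crust")},
]

def satisfies(spec, demo):
    """A demonstration satisfies a spec if every rule's steps appear in order."""
    pos = {step: i for i, step in enumerate(demo)}
    return all(pos[a] < pos[b] for a, b in spec if a in pos and b in pos)

def best_setup(setups, predict_demo):
    """Experiment design: pick the setup expected to leave the fewest candidates."""
    def survivors(setup):
        demo = predict_demo(setup)
        return sum(satisfies(h, demo) for h in hypotheses)
    return min(setups, key=survivors)

# Hypothetical usage: each setup exposes only a subset of steps to the human.
setups = [("flour", "sugar"), ("apples", "crust"), STEPS]
print(best_setup(setups, lambda setup: tuple(setup)))
```

Each prompt to the human then carries maximal information, which is why fewer interactions suffice to recover the correct specification.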

Grasping and Placing Objects

Shome joined Carlos Quintero Peña to produce a second ICRA 2023 paper, “Optimal Grasps and Placements for Task and Motion Planning in Clutter.” As the paper’s first author, Peña coordinated research that included contributions from Rice CS assistant professor Anastasios Kyrillidis and Kavraki Lab members Zachary Kingston and Tianyang Pan. Their combined work improves robot dexterity in manipulating objects located in crowded environments like a chessboard or pantry shelf.

Peña said, “Our work on the task and motion planning (TAMP) framework for cluttered scenes is inspired by practical motivations: many of the existing TAMP algorithms become highly inefficient when the environment the robot is acting in is cluttered, making them impractical. The main reason for this is that many grasps and possible object locations within the environment are invalid, and finding one that works takes time.

“To address this, we proposed a specialized layer capable of explicitly reasoning about complex geometric relationships between the objects and the robot hand, one that can find these variables for long-horizon plans even in the presence of clutter. Our method uses numerical optimization to jointly reason over a sequence of these variables. This is important because it dramatically reduces planning time for robots whose manipulation tasks require these capabilities, e.g., a robot that needs to clean a table or tidy up a room.”
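As a rough illustration of that joint reasoning (a toy sketch, not the paper’s solver), the fragment below packs a grasp angle and a placement location into one decision vector and optimizes them together with SciPy, subject to a clearance constraint; the obstacle layout, cost terms, and goal region are all invented.

```python
# Toy sketch: the grasp angle and placement coordinates form one decision
# vector, optimized jointly under a clearance constraint against clutter.
import numpy as np
from scipy.optimize import minimize

obstacles = np.array([[0.4, 0.2], [0.5, 0.5]])  # invented clutter positions (x, y)
CLEARANCE = 0.1                                  # required distance from clutter

def cost(x):
    grasp_angle, px, py = x
    # Prefer a neutral grasp and a placement near an invented goal at (0.8, 0.8).
    return grasp_angle**2 + (px - 0.8)**2 + (py - 0.8)**2

def clearance(x):
    _, px, py = x
    # Positive when the placement keeps the required distance from every obstacle.
    return np.min(np.linalg.norm(obstacles - np.array([px, py]), axis=1)) - CLEARANCE

result = minimize(cost, x0=[0.1, 0.3, 0.3],
                  constraints=[{"type": "ineq", "fun": clearance}])
print(result.x)  # jointly optimized grasp angle and placement location
```

Optimizing the variables together, rather than sampling each independently and rejecting failures, is what avoids the wasted time on invalid grasps and placements described above.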

Automating Life-long Learning for Robots

In the paper “Extracting Generalizable Skills from a Single Plan Execution using Abstraction-Critical State Detection,” Khen Elimelech presented findings from a collaboration with Kavraki and Rice University Professor Moshe Vardi.

“This paper is, in fact, our third successful publication on this topic over the past year, continuing our promising line of work on robotic skill learning,” said Elimelech. “To decide how to pursue their goals, robots need to reason on the outcomes of their actions. Yet, to do so effectively, they should reason on high-level skills, such as ‘pick the item from the floor’ or ‘find the exit of the room,’ and not only on simple mechanical motions.”

He said one of the most prominent challenges in modern robotics research is how to acquire such skills. “Our work suggests a conceptual shift in the way robotic learning is pursued by the current state-of-the-art, which typically requires numerous repetitive examples for the robot to learn each skill. This shift makes the learning process much more efficient, intuitive, automatable, and human-like.”

Their paper explains how to leverage the concept of abstraction to extract high-level, reusable skills after observing only a single successful execution, similar to the way humans learn. The idea is to understand which parts of the information the robot observes are relevant to the task, and which are not affected by the skill execution and thus irrelevant.

Elimelech said, “For example, when learning the action sequence required to tidy a room, a robot can often ignore details such as the color of the object it picks from the floor, or the location of the objects that are already in place. This information can be abstracted away and ignored, focusing the robot only on the core of the task solution, which it can utilize the next time it faces a similar task.”
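A toy sketch of that intuition, with invented states and names rather than the paper’s abstraction-critical state detection, might compare the world state before and after one successful execution and keep only what the skill read or changed:

```python
# Hypothetical pre/post states from one recorded "tidy the room" execution.
pre_state  = {"cup_on_floor": True,  "cup_color": "red", "lamp_on": True}
post_state = {"cup_on_floor": False, "cup_color": "red", "lamp_on": True}

def extract_skill(pre, post, preconditions=("cup_on_floor",)):
    """Keep what the skill read (preconditions) or changed (effects); drop the rest."""
    effects = {k: post[k] for k in post if post[k] != pre[k]}
    context = {k: pre[k] for k in preconditions}
    # cup_color and lamp_on are neither read nor changed: abstracted away.
    return {"preconditions": context, "effects": effects}

print(extract_skill(pre_state, post_state))
# {'preconditions': {'cup_on_floor': True}, 'effects': {'cup_on_floor': False}}
```

Because the abstracted skill no longer mentions the irrelevant details, it transfers to any room where the same preconditions hold, which is what makes a single example sufficient.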

Reconfiguring Stacked Objects

Yiyuan Lee, who has co-authored six papers overall, three of them with Kavraki Lab colleagues, is the first author of the ICRA 2023 paper “Object Reconfiguration with Simulation-Derived Feasible Actions.” In this work, Lee, Kingston, and Wil Thomason incorporated a physics simulator within a motion planner.

Lee said, “Humans can intuitively rearrange a cluster of items, as when retrieving a favorite coffee mug that ended up at the bottom of a stack of cups. Robots should be able to do the same, without having to explicitly specify which actions are allowed and which are not. We investigated the use of a physics simulator as an implicit way to provide such knowledge. By embedding a physics simulator in our planner, we show that it is possible to identify valid actions automatically, providing a more natural approach to reconfiguring objects.”
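In sketch form, the planner/simulator handshake could look like the following; the simulator interface and the Outcome fields here are hypothetical, and the point is only that the physics engine, not hand-written rules, labels an action feasible.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """Hypothetical result of simulating one candidate action."""
    stable: bool            # did the remaining stack settle without toppling?
    dropped_objects: int    # how many objects fell during the action?
    state: dict             # resulting object poses

def feasible_actions(state, candidates, simulator):
    """Keep only candidate actions whose simulated outcome is physically valid."""
    feasible = []
    for action in candidates:
        simulator.reset(state)            # rewind to the current configuration
        outcome = simulator.step(action)  # physics, not hand-written rules, decides
        if outcome.stable and outcome.dropped_objects == 0:
            feasible.append((action, outcome.state))
    return feasible
```

The planner can then search only over the actions the simulator has vetted, so no one has to enumerate by hand which manipulations of a stack are safe.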

Kavraki Lab Career Impact

The trajectory of Kavraki Lab alumni leads to renowned universities around the world and a wide range of industry brands including Google, Meta, Microsoft, Schlumberger, Tesla, Two Sigma, and Waymo. Last year, Shome accepted a faculty offer from the Australian National University in Canberra. This year, another lab member, Constantinos Chamzas, accepted a position as an assistant professor at Worcester Polytechnic Institute (WPI). Sobti is applying his automation experience to the construction industry. Elimelech, Kingston, and Thomason are thriving in several research collaborations.

Peña worked as an instructor and researcher at two of Colombia’s top universities for eight years before accepting a Fulbright Scholarship to pursue his Ph.D. with Kavraki. He is also working with Professors Kyrillidis and Unhelkar at Rice. Lee, a second-year Ph.D. student, is spending the summer as a machine learning and robotics research intern in the New York offices of Samsung Research America. Pan’s goal is to make robots that can better help people, and he will be exploring both industry and academia after graduation.

Prospective graduate students and postdoctoral research associates interested in the Kavraki Lab’s near-future focus, using robotics and AI to create robots that work in the service of people, should contact Lydia Kavraki at kavraki@rice.edu to inquire about openings.

 

Carlyn Chatfield, contributing writer