Room 360 (SuWT13.1) · 1 October 2023 · IROS 2023, Detroit, USA
Workshop on Integrated Perception, Planning, and Control for Physically and Contextually-Aware Robot Autonomy

About the workshop

Robots capable of autonomous navigation and manipulation with advanced perception and decision-making skills offer tremendous potential to assist people with challenging and repetitive tasks in the service industry, including transportation, logistics, and healthcare.

Recent advances in artificial perception enable robots to have semantic understanding and contextual awareness of their surroundings. Similarly, recent years have seen significant progress in decision-making for autonomous navigation and manipulation in complex situations. However, the gap between robot perception and decision-making remains large, as many techniques continue to rely on separation principles between perception, planning, and control. The objective of this workshop is to inspire the robotics community to pursue techniques that tightly integrate perception, planning, and control to achieve physically and contextually safe robot navigation and manipulation in real human environments.

Robots sharing the same environment with people need novel semantic planning objectives that integrate perception and planning at a high level to generate contextually relevant robot behavior. At a low level, integrated perception and control require new metric and contextual safety constraints to enable physically safe and socially aware robot behavior. Uncertainty and error quantification in learning-based perception and control are essential for safe and robust robot operation.

This workshop will bring together experts from academia and industry to identify and discuss current challenges and emerging opportunities in perception-aware robot navigation and manipulation. The aim is to advance robot perception techniques that actively plan observations and interactions to acquire informative data, as well as robot planning and control techniques that actively utilize geometric and semantic perceptual information in generating, executing, and adapting robot actions.

Topics of interest

  • Integrated Perception, Planning, and Control
  • Perception-Aware Motion Planning and Control
  • Active and Interactive Perception
  • Semantic Perception and Semantic Planning
  • Contextual Risk and Safety Assessment of Perception, Planning, and Control
  • Uncertainty-Aware Planning and Control
  • Uncertainty and Error Quantification in Metric and Semantic Perception
  • Uncertainty and Error Quantification in Robot Skill Learning
  • Integrated Perception, Planning, Control for Safe Human-Robot Interaction
  • Egocentric and Allocentric Motion Prediction for Physical and Contextual Safety
  • Perception for Task and Motion Planning
  • Affordance-based Planning and Control

Invited Speakers

We are thrilled to have the following invited speakers presenting at the workshop.
  • Kostas Alexis, Norwegian University of Science and Technology
  • Efi Psomopoulou, University of Bristol
  • Georgia Chalvatzaki, TU Darmstadt
  • Changhyun Choi, University of Minnesota
  • Coline Devin, Google DeepMind
  • Hyeonbeom Lee, Ajou University
  • Roberto Martin-Martin, University of Texas at Austin
  • Peter Karkus, NVIDIA Corporation
  • Rohan Chandra, UT Austin

Schedule

08:50 - 09:00

Opening and welcome

09:00 - 09:30

Kostas Alexis
Norwegian University of Science and Technology

Resilient Autonomy in Perceptually-degraded Environments

Abstract: Enabling autonomous robots to access, navigate, and broadly operate in perceptually-degraded industrial or natural environments is a strenuous and daunting challenge. Motivated by this fact, this talk focuses on methods and systems for instilling resilient autonomy, using both classical and data-driven methods, across diverse robot configurations, with the aim of seamless access and operation anywhere and under any conditions. Results and experiences from the victorious journey of Team CERBERUS in the DARPA Subterranean Challenge are presented, lessons learned are outlined, and a multitude of experimental studies from follow-up research activities are discussed.

09:30 - 10:00

Efi Psomopoulou
University of Bristol

Physically Interactive Robots

Abstract: For robot manipulators to move out of industrial settings and into human environments, they will need physical intelligence for their interactions with the environment and with humans, as well as the dexterous capabilities of the human hand. This raises many unsolved problems, from designing the mechanisms and actuator technologies for such dexterous manipulators to their fine motor control with force and tactile sensing capabilities. These problems are interlinked: the mechanism of a manipulator is interdependent with its control, which is in turn interdependent with its sensing capabilities. This talk will present my past and recent work towards solving these problems.

10:00 - 10:30

Poster Session: Robot Learning and Perception for Navigation and Manipulation

  1. "LIVE: Lidar Informed Visual Search for Multiple Objects with Multiple Robots" by Ryan Gupta, Minkyu Kim, Juliana Rodriguez, Kyle Morgenstein, and Luis Sentis
  2. "Cooperative UAV Autonomy of Dronument: New Era in Cultural Heritage Preservation" by Pavel Petracek, Vit Kratky, Matej Petrlik, and Martin Saska
  3. "Semantic-SuPer: Employing Semantic Perception for Endoscopic Tissue Identification, Reconstruction, and Tracking" by Shan Lin, Jingpei Lu, Florian Richter, and Michael Yip
  4. "Dynamic Object Avoidance using Event-Data for a Quadruped Robot" by Shifan Zhu, Nisal Perera, Shangqun Yu, Hochul Hwang, and Donghyun Kim
  5. "Enhancing Autonomous Reinforcement Learning: A Demonstration-Free Approach via Implicit and Bidirectional Curriculum" by Daesol Cho, Jigang Kim, Hyoun, and Jin Kim
  6. "Multi-Modal Semantic Perception Using Bayesian Inference" by Parker Ewen, Gitesh Gunjal, Hao Chen, Anran Li, Yuzhen Chen, and Ram Vasudevan
  7. "Cooperative Probabilistic Trajectory Forecasting under Occlusion" by Anshul Nayak and Azim Eskandarian
  8. "Shape Reconstruction of Soft, Continuum Robots using Differentiable Rendering with Geometrical Shape Primitive" by Fei Liu and Michael Yip
  9. "PyPose v0.6: The Imperative Programming Interface for Robotics" by Zitong Zhan, Xiangfu Li, Qihang Li, Haonan He, Abhinav Pandey, Haitao Xiao, Yangmengfei Xu, Xiangyu Chen, Kuan Xu, Kun Cao, Zhipeng Zhao, Zihan Wang, Huan Xu, Zihang Fang, Yutian Chen, Wentao Wang, Xu Fang, Yi Du, Tianhao Wu, Xiao Lin, Yuheng Qiu, Fan Yang, Jingnan Shi, Shaoshu Su, Yiren Lu, Taimeng Fu, Karthik Dantu, Jiajun Wu, Lihua Xie, Marco Hutter, Luca Carlone, Sebastian Scherer, Daning Huang, Yaoyu Hu, Junyi Geng, Chen Wang
  10. "Monocular 3D Object Detection with Viewpoint-Invariant Inter-Object Estimation for Better Contextual Behavior Understanding" Minghan Zhu
  11. "Gripper-Aware GraspNet: End-Effector Shape Context for Cross-Gripper Generalization" Alina Sarmiento, Anthony Simeonov, and Pulkit Agrawal
  12. “Inflatable Fingertips with Stretchable Pressure Sensors for Adaptive Grasping and Manipulation,” by Hongyang Shi and Nanshu Lu
  13. “Motion Planning using Transformers,” by Jacob J. Johnson and Michael Yip

10:30 - 11:00

Coffee Break

11:00 - 11:30

Roberto Martin-Martin
University of Texas at Austin

Perceiving to Interact, Interacting to Perceive

Abstract: Humans and other biological agents interact with the environment to obtain the information they need for their tasks. In contrast, robots are "afraid" of contacting the environment and rely heavily on passive modes of information gathering, which restricts their capabilities. In this talk, I will present my past and recent work on endowing robots with interactive capabilities to perceive their environment, for example, to find objects of interest and manipulate them. I will also present our recent effort to develop an affordable robotic hand that is well suited to the type of behavior my lab wants to create in robots: full of contact and where learning is enabled by physical interactions.

11:30 - 11:50

Peter Karkus
NVIDIA Corporation

Invited PhD Talk: Differentiable Robotics: Integrated Perception, Planning, and Control with Differentiable Algorithm Networks

Abstract: What architecture will scale to human-level robot intelligence? Classical perception-planning-control methods often assume perfect models and tractable optimization; learning-based methods are data-hungry and often fail to generalize. In this talk I will introduce the Differentiable Algorithm Network (DAN), a compositional framework that fuses classical algorithmic architectures and deep neural networks. A DAN is composed of neural network modules that each encode a differentiable robot algorithm, and it is trained end-to-end from data. I will illustrate the potential of the DAN framework through applications including visual robot navigation and autonomous vehicle control.

11:50 - 12:20

Panel Discussion: Robot Perception for Navigation and Manipulation

Abstract: Recent advances in artificial learning and perception enable robots to achieve semantic understanding and contextual awareness of their surroundings through multimodal interactive sensing and learning. This panel discussion explores the cutting-edge field of robot learning and perception techniques that actively plan observations and interactions to acquire informative data from the environment, thereby enhancing navigation and manipulation skills. Our panelists, Coline, Efi, Peter, and Roberto, will discuss the challenges and emerging opportunities in robot learning and perception for navigation and manipulation. The panel will be chaired by Nikolay Atanasov.
Coline Devin
Google DeepMind
Efi Psomopoulou
University of Bristol
Peter Karkus
NVIDIA Corporation
Roberto Martin-Martin
University of Texas at Austin

Chair

Nikolay Atanasov
University of California San Diego

12:20 - 13:30

Lunch Break

13:30 - 14:00

Changhyun Choi
University of Minnesota

Interactive Robotic Object Perception and Manipulation

Abstract: Robotic manipulation of diverse objects in unstructured environments remains a major challenge in the field of robotics. Humans adeptly search, grasp, and manipulate objects across a variety of challenging scenarios, whereas current state-of-the-art robotic systems fail to achieve human-level proficiency. The overarching goal of my research is to develop computational learning models that enable robots to perceive the world, acquire new skills, and perform dexterous manipulation at or beyond human capabilities. In this workshop, I will present recent progress from my research group toward the research goal. Specifically, I will discuss (1) object searching and grasping under occlusion and clutter and (2) interactive manipulation for object segmentation. Both projects aim to integrate perception and decision-making to address key challenges in robotic perception and manipulation.

14:00 - 14:30

Coline Devin
Google DeepMind

Transformer-based policies for multi-task robotic manipulation

Abstract: Robot learning has been difficult to scale due to the cost of obtaining data for each additional task. This talk will discuss how we can instead benefit from broad, robot-agnostic knowledge about the world and then improve by reducing the cost of acquiring each new task.

14:30 - 15:00

Poster Session: Perception-aware Robot Navigation and Manipulation

  1. "Autonomous Power Line Inspection with Drones via Perception-Aware MPC" by Jiaxu Xing, Giovanni Cioffi, Javier Hidalgo-Carrió, and Davide Scaramuzza
  2. "iPLAN: Intent-Aware Planning in Heterogeneous Traffic via Distributed Multi-Agent Reinforcement Learning" by Xiyang Wu, Rohan Chandra, Tianrui Guan, Amrit Singh Bedi, and Dinesh Manocha
  3. "Degradation-Aware Point Cloud Sampling in Robot Ego-Motion Estimation" by Pavel Petracek, Nikhil Khedekar, Morten Nissov, Kostas Alexis, and Martin Saska
  4. "Imitative Models for Passenger-Scale Autonomous Off-Road Driving" by Nitish R Dashora (University of California, Berkeley)*; Sunggoo Jung (KAIST); Dhruv Shah, Valentin Ibars, Osher Lerner, Chanyoung Jung, Rohan A Thakker, Nicholas Rhinehart, and Ali Agha
  5. "HiFaive: Learning Human-inspired Dexterous Manipulation with the Faive Robotic Hand" by Erik Bauer, Elvis Nava, and Robert Kevin Katzschmann
  6. "Hide and Seek with Visibility Constraints using Control Barrier Functions" by Shumon Koga, Minnan Zhou, Nikolay Atanasov, and Dimitra Panagou
  7. "Greedy Perspectives: Dynamic Multi-Drone View Planning for Collaborative Coverage" by Krishna Suresh, Aditya Rauniyar, Micah Corah, and Sebastian Scherer
  8. "Risk-Aware Multi-Robot Target Tracking with Dangerous Zones" by Jiazhen Liu, Peihan Li, Yuwei Wu, Vijay Kumar, and Lifeng Zhou
  9. "General In-Hand Object Rotation with Vision and Touch" by Haozhi Qi, Brent H Yi, Sudharshan Suresh, Mike Lambeta, Yi Ma, Roberto Calandra, and Jitendra Malik
  10. "Hierarchical Multi-modal Quadruped Navigation for Experience-informed Rebar Grid Traversal" by Max Asselmeier, Eohan George, Patricio A Vela, and Ye Zhao
  11. "Bridging Real-to-Sim Gaps through Online Stiffness Optimization with Perception-Enabled Residual Mapping" by Xiao Liang, Fei Liu, Yutong Zhang, and Michael Yip
  12. "Aligning Robot Navigation Behaviors with Human Intentions and Preferences" by Haresh Karnan
  13. "Reinforcement Learning for Agile Flight: From Perception to Action" by Yunlong Song and Davide Scaramuzza

15:00 - 15:30

Coffee Break

15:30 - 16:00

Hyeonbeom Lee
Ajou University

Autonomous Navigation of Outdoor Mobile Robot Using Monocular Depth Estimation

Abstract: This study presents a viable approach to outdoor mobile robot autonomy by integrating perception, planning, and experimentation. For perception, we develop a real-time depth estimation algorithm, an approach that is gaining interest as a viable alternative to large and heavy sensors such as LiDAR (light detection and ranging). To achieve this goal, we first designed a depth estimation network for a wide-FOV stereo camera. We then estimated the depth image using a convolutional neural network and improved its accuracy using stereo matching. Exploiting our proposed optimization-based planning algorithm, we conducted experiments with a real drone and a ground mobile robot in an outdoor environment to demonstrate the performance. The experimental results are analyzed, and we further discuss precautions for operating outdoor mobile robots.

16:00 - 16:30

Georgia Chalvatzaki
TU Darmstadt

Interactive Robot Perception and Learning for Mobile Manipulation

Abstract: The long-standing ambition for autonomous, intelligent service robots that are seamlessly integrated into our everyday environments is yet to become a reality. Humans develop comprehension of their embodiments by interpreting their actions within the world and acting reciprocally to perceive it: the environment affects our actions, and our actions simultaneously affect our environment. Despite great advances in robotics and Artificial Intelligence (AI), e.g., through better hardware designs or algorithms incorporating advances in Deep Learning, we are still far from achieving robotic embodied intelligence. The challenge of attaining artificial embodied intelligence, that is, intelligence that originates and evolves through an agent's sensorimotor interaction with its environment, is a topic of substantial scientific investigation and remains open. In this talk, I will walk you through our recent research on enabling humanoid mobile manipulation robots with spatial intelligence through perception and interaction, allowing them to coordinate and acquire the skills necessary for promising real-world applications. In particular, we will see how we can use robotic priors for learning to coordinate mobile manipulation robots, how neural representations can allow for learning safe interactions, and, at the crux, how we can leverage those representations to allow the robot to understand and interact with a scene, or guide it to acquire more "information" while acting in a task-oriented manner.

16:30 - 16:50

Rohan Chandra
UT Austin

Invited PhD Talk: Human-like Mobility to Deploy Robots... Everywhere!

Abstract: Deploying intelligent mobile robots in the real world has been a longstanding goal in robotics and AI. These environments are often dense, heterogeneous, constrained, and unstructured. In this talk, I'll discuss my research on enabling intelligent mobile robots to navigate such complex environments by instilling human-like mobility in robots. In particular, my talk will describe advanced computer vision and machine learning techniques for improved tracking of dynamic entities in dense traffic. It will also introduce an innovative model for estimating drivers' risk preferences by conceptualizing traffic as an undirected dynamic graph and applying the risk estimation algorithm to resolve conflicts between drivers at unsignalized intersections and during merging. Lastly, the talk will offer insights into the creation of simulators, tools, and datasets to spur further research, providing the audience with a comprehensive understanding of the advancements and innovations in intelligent mobile robot navigation.

16:50 - 17:20

Panel Discussion: Perception-Aware Robot Navigation and Manipulation

Abstract: Autonomous decision-making for navigation and manipulation in complex situations necessitates contextual and situational awareness. Robots must possess the ability to translate their semantic understanding of the environment into meaningful planning objectives and control constraints. At a high level, perception-aware planning can produce contextually relevant robot behavior. At a lower level, perception-aware control can facilitate contextually and physically safe robot motion. This panel discussion explores the cutting-edge field of perception-aware robot planning and control techniques that actively utilize geometric and semantic perceptual information in generating, executing, and adapting robot actions. Our panelists, Changhyun, Georgia, Kostas, and Rohan, will discuss the opportunities and challenges of perception-aware robot navigation and manipulation. The panel will be chaired by Rafael Papallas.
Rohan Chandra
UT Austin
Changhyun Choi
University of Minnesota
Georgia Chalvatzaki
TU Darmstadt
Kostas Alexis
Norwegian University of Science and Technology

Chair

Rafael Papallas
University of Leeds

17:20 - 17:30

Closing Remarks and Announcement of "Best Poster" Award


Keynote Presentations and Panel Discussion Recordings


Photo Gallery

Parker Ewen (left) receiving the EAISI Best Poster Presentation Award from Rafael Papallas (right)
Peter Karkus (left) receiving the MoMa PhD Talk Award from Rafael Papallas (right)
Rohan Chandra (right) receiving the SNU PhD Talk Award from Nikolay Atanasov (left)
IPPC Audience
IPPC Audience
Roberto Martin-Martin Presenting
Peter Karkus Presenting
Kostas Alexis Presenting
Panel Discussion (left to right): Rafael, Rohan, Georgia, Kostas and Changhyun
Panel Discussion (left to right): Rafael, Rohan, Georgia, Kostas and Changhyun
Efi Psomopoulou Presenting
Poster Session
Poster Session
Panel Discussion: Nikolay, Roberto, Peter, Efi and Coline
Panel Discussion: Nikolay, Roberto, Peter, Efi and Coline
Changhyun Choi Presenting
Coline Devin Presenting
Hyeonbeom Lee Presenting
Georgia Chalvatzaki Presenting
Rohan Chandra Presenting

Calls

Call for workshop papers

We are inviting paper submissions related to key challenges in perception, planning, and control for safe robot autonomy in human environments. Topics of interest include integrated perception, planning, and control; perception-aware motion planning and control; active and interactive perception; semantic perception and semantic planning; contextual risk and safety assessment of perception, planning, and control; uncertainty and error quantification in metric-semantic perception; uncertainty and error quantification in robot skill learning; egocentric and allocentric motion prediction for physical and contextual safety; perception for task and motion planning; affordance-based planning and control.

Contributed workshop paper submissions should be no longer than 4 pages (excluding references) and should follow the standard IEEE conference formatting guidelines (see here). The papers will be reviewed by a Program Committee, assembled from the organizers, the invited speakers, and other experts in the field. The Program Committee will provide at least 2 high-quality reviews per submission.

Accepted papers will be published on the workshop website, and the authors will be invited to present their research during one of the workshop poster sessions. Not only will you have the chance to present your work to a diverse and engaged audience of academics, industry professionals, and fellow researchers, but you will also be in the running for the Best Poster Presentation Award! Please stay tuned for the details.

Important Dates:
  • Submission deadline: Aug 31, 2023 (extended from Aug 15 and Aug 28, 2023)
  • Acceptance notification: Sep 10, 2023 (extended from Sep 1 and Sep 8, 2023)
  • Camera-ready submission: Sep 20, 2023 (extended from Sep 7, 2023)

Call for invited PhD talks

We invite junior researchers who are either close to completing their PhD or recent graduates to share their PhD research and research vision with the robotics community in a 20-minute workshop talk.

Applicants must have either defended their PhD thesis after October 2020 or be in at least their third year of PhD study. Applicants are invited to submit a talk proposal in the form of an extended abstract of up to 4 pages (excluding references) summarizing their PhD research on a topic of interest to the workshop.

The submitted talk proposals will be reviewed by the workshop Program Committee, and two will be selected for presentation based on research quality and relevance to the workshop topics. In addition to presenting, the two selected junior speakers will each receive a PhD Talk Award recognizing the excellence and impact of their work. Any submitted PhD talk proposal will, by default, also be considered for a poster presentation (see the call for workshop papers above).

Important Dates:
  • Extended abstract deadline: Aug 31, 2023 (extended from Aug 15 and Aug 28, 2023)
  • Acceptance notification: Sep 10, 2023 (extended from Sep 1 and Sep 8, 2023)

Sponsors, Funding and Awards

The workshop will recognize outstanding robotics research on its topics of interest via awards and monetary prizes, including financial support to encourage participation of a diverse audience (e.g., with respect to race, ethnicity, gender, age, economic status, and other backgrounds).

Awards

Outstanding research contributions to our workshop will be recognized by two PhD Talk awards and one best poster presentation award.

  • MoMa PhD Talk Award ($1,000): This award will be given to one of the two PhD research talks (see "Call for invited PhD talks") selected by the program committee in recognition of excellent robotics research on mobile manipulation. The $1,000 award is generously sponsored by the IEEE RAS Technical Committee on Mobile Manipulation (MoMa) and will be presented at the end of the PhD talk during the workshop.
  • SNU PhD Talk Award ($1,000): This award will be given to one of the two PhD research talks (see "Call for invited PhD talks") selected by the program committee in recognition of excellent robotics research on autonomous navigation. The $1,000 award is generously sponsored by the Seoul National University (SNU) Research Center for Advanced Unmanned Vehicles and will be presented at the end of the PhD talk during the workshop.
  • EAISI Best Poster Presentation Award ($1,000): This award will be given to the best poster presentation among the accepted papers (see "Call for workshop papers"). The $1,000 award is generously sponsored by the TU/e Eindhoven AI Systems Institute (EAISI). The winner will be selected by an award committee based on the quality of research and presentation, and the award will be announced at the end of the workshop.

Diversity Grants

Our workshop accepts applications for a diversity grant from all contributing participants. Diversity grants are provided to encourage participant diversity (e.g., with respect to race, ethnicity, gender, age, economic status, and other backgrounds) in our workshop. A diversity grant can be used to partially cover conference/workshop registration, accommodation, transportation, visa costs, etc.

To apply for a diversity grant, please submit a short motivation letter (maximum 400 words) with your paper submission for a poster presentation or a PhD talk. Our total budget for diversity grants is $3,000, generously supported by SNU and MoMa. This total amount will be shared among selected applicants depending on their specific situations (e.g., at two or three levels: $200, $500, or $1,000).

Applying for a diversity grant is optional, and grant decisions will depend on the limited available budget, the number of applicants, and the relevance and motivation of the applications. We aim to support as many people as possible to make the meeting diverse and inclusive.

Sponsors

The workshop awards and diversity grants are made possible thanks to the generous support of our sponsors:
  • IEEE RAS Technical Committee on Mobile Manipulation (MoMa).
  • Seoul National University (SNU) Research Center for Advanced Unmanned Vehicles.
  • TU/e Eindhoven AI Systems Institute (EAISI).

Organisers

  • Omur Arslan, Assistant Professor, Eindhoven University of Technology (TU/e), o.arslan@tue.nl
  • Nikolay Atanasov, Assistant Professor, University of California San Diego, natanasov@ucsd.edu
  • Mehmet Dogar, Associate Professor, University of Leeds, m.r.dogar@leeds.ac.uk
  • H. Jin Kim, Professor, Seoul National University, hjinkim@snu.ac.kr
  • Rafael Papallas, Research Fellow, University of Leeds, r.papallas@leeds.ac.uk