Attend a seminar on Safe Planning Under Perceptual and Linguistic Uncertainty, Nov. 7

Join the School of Manufacturing Systems and Networks, part of the Ira A. Fulton Schools of Engineering, as it hosts Yiannis Kantaros, an assistant professor in the Preston M. Green Department of Electrical & Systems Engineering at Washington University in St. Louis.
About
Yiannis Kantaros joined the department of electrical and systems engineering at Washington University in St. Louis as an assistant professor in January 2022. Before that, he was a postdoctoral associate in the GRASP and PRECISE labs at the University of Pennsylvania. He earned his Diploma in electrical and computer engineering from the University of Patras in Greece in 2012, followed by his master’s and doctoral degrees in mechanical engineering from Duke University in 2017 and 2018, respectively. Kantaros received the Best Student Paper Award at the second IEEE Global Conference on Signal and Information Processing, or GlobalSIP, in 2014 and was a finalist for the Best Multi-Robot Systems Paper at the IEEE International Conference on Robotics and Automation, or ICRA, in 2024. He also received the 2017–18 Outstanding Dissertation Research Award from Duke University’s department of mechanical engineering and materials science and a 2024 National Science Foundation Faculty Early Career Development Program (CAREER) Award.
Abstract
Designing robots that can navigate unfamiliar environments and follow natural language, or NL, commands is central to advancing embodied intelligence. While recent AI architectures show strong empirical performance, they often lack introspection — acting with unwarranted confidence and without awareness of their limitations, which hinders safety and reliability.
In this talk, Kantaros will introduce an introspective, neuro-symbolic autonomy framework that enables robots to execute NL tasks in unknown environments with user-specified success probabilities under linguistic and perceptual uncertainty. The neural module uses large language models, or LLMs, to translate NL commands into temporal logic specifications and employs uncertainty quantification to assess translation confidence. When uncertainty exceeds a threshold, auxiliary LLMs or human feedback are engaged for clarification. The symbolic module then plans actions for mobile robots with AI perception systems to meet these logic-based goals while reasoning explicitly about uncertainty. This approach allows robots to decide when to act and when to gather more data, ensuring task completion with the desired confidence. The talk will conclude with demonstrations on aerial, wheeled and legged robots, and a discussion of open challenges.
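To make the confidence-gated translation step concrete, the sketch below shows one plausible way such a gate could be structured. It is a minimal illustration only, not Kantaros's implementation: the translate_to_ltl, ask_auxiliary_llm and ask_human functions are hypothetical stand-ins for the primary LLM query, the auxiliary LLM and the human-in-the-loop clarification, and the confidence values and threshold are assumed placeholders.

from dataclasses import dataclass

@dataclass
class Translation:
    formula: str       # candidate temporal logic specification
    confidence: float  # uncertainty-quantified translation confidence in [0, 1]

def translate_to_ltl(command: str) -> Translation:
    # Hypothetical stand-in for an LLM call that maps a natural language
    # command to a temporal logic formula with a confidence score.
    return Translation(formula="F(goal_region)", confidence=0.62)

def ask_auxiliary_llm(command: str) -> Translation:
    # Hypothetical stand-in for querying an auxiliary LLM when the primary
    # translation is not confident enough.
    return Translation(formula="F(goal_region)", confidence=0.90)

def ask_human(command: str) -> Translation:
    # Hypothetical stand-in for requesting human clarification.
    clarified = input(f"Please confirm or restate the task: {command}\n> ")
    return Translation(formula=clarified, confidence=1.0)

def resolve_specification(command: str, threshold: float = 0.8) -> Translation:
    # Accept the translation only if its confidence clears the threshold;
    # otherwise escalate first to an auxiliary LLM, then to a human.
    result = translate_to_ltl(command)
    if result.confidence >= threshold:
        return result
    result = ask_auxiliary_llm(command)
    if result.confidence >= threshold:
        return result
    return ask_human(command)

if __name__ == "__main__":
    spec = resolve_specification("Eventually reach the goal region")
    print(spec.formula, spec.confidence)

In this toy version, the escalation order and the 0.8 threshold are arbitrary choices made for illustration; the framework discussed in the talk ties the threshold to a user-specified success probability.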
Safe Planning Under Perceptual and Linguistic Uncertainty seminar
Friday, Nov. 7, 2025
11 a.m.–noon
Interdisciplinary Science and Technology Building 12 (ISTB12), Room 215, Polytechnic campus