Frontiers: Neuro-Symbolic Adaptation for Autonomous Agents

This was our kickoff for Frontiers, a new series of invited talks by researchers, engineers, and industry leaders working on the most interesting problems in foundational and applied AI.

Frontiers is an exciting new experiment for us - we want to highlight as many researchers working at the cutting edge as possible, share their work with the world, and identify how Manifold can support, collaborate on, and help drive that work forward.

Helen Lu, who works with the Human-Robot Interaction Lab at Tufts, joined us to share her work on Neuro-Symbolic Adaptation for Autonomous Agents.

Abstract

In dynamic open-world environments, autonomous agents often encounter novelties that hinder their ability to find plans to achieve their goals. Specifically, traditional symbolic planners fail to generate plans when the robot's knowledge base lacks the operators that can enable it to interact appropriately with novel objects in the environment. We propose a neuro-symbolic architecture that integrates symbolic planning, reinforcement learning, and a large language model (LLM) to adapt to novel objects. In particular, we leverage the common sense reasoning capability of the LLM to identify missing operators, generate plans with the symbolic AI planner, and guide the reinforcement learning agent in learning the new operators.
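For readers who want a concrete picture of the loop the abstract describes, here is a minimal, illustrative sketch in Python. It is not the lab's implementation; the planner, LLM, and RL interfaces used here (`planner.plan`, `llm.propose_missing_operator`, `rl_agent.learn`) are hypothetical placeholders meant only to show how the three components could hand off to one another.

```python
# Illustrative sketch of the adaptation loop described in the abstract.
# All class, method, and parameter names below are hypothetical placeholders,
# not the lab's actual API.

from dataclasses import dataclass, field


@dataclass
class Operator:
    """A symbolic planning operator (action schema) with preconditions and effects."""
    name: str
    preconditions: set = field(default_factory=set)
    effects: set = field(default_factory=set)


def adapt_to_novelty(planner, llm, rl_agent, knowledge_base, goal, env):
    """Plan toward `goal`; when planning fails because of a novel object,
    ask the LLM for a candidate missing operator, learn a policy for it
    with RL, then replan with the expanded knowledge base."""
    plan = planner.plan(knowledge_base, goal)
    while plan is None:
        # 1. LLM common-sense step: propose which operator is missing,
        #    given the current state, the goal, and the operators already known.
        candidate = llm.propose_missing_operator(
            state=env.observe(), goal=goal, known_ops=knowledge_base.operators
        )
        new_op = Operator(candidate.name, candidate.preconditions, candidate.effects)

        # 2. RL step: learn a low-level policy that realizes the proposed
        #    operator, using the LLM-specified effects as the success condition.
        policy = rl_agent.learn(env, reaches=new_op.effects)

        # 3. Add the grounded operator to the knowledge base and replan symbolically.
        knowledge_base.add_operator(new_op, policy)
        plan = planner.plan(knowledge_base, goal)
    return plan
```

The point of the sketch is the division of labor: a symbolic planning failure triggers the LLM to propose the missing operator, and reinforcement learning grounds that proposal in executable behavior before the planner tries again.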

Check out our Events page for more info on this and other upcoming events! Want to get involved? Join our Discord community here!