Background
Generally interested in building reliable and capable autonomous (driving) systems.
Currently, I work as an MLE at DeepRoute.ai, where I focus on autonomous agent intention and trajectory prediction, planning algorithms, RL self-play for smart agents, and RL post-training with preference alignment for autonomous vehicles deployed at scale in real-world environments.
I earned my Ph.D. in Computer Science at Purdue University, where I worked under the guidance of Suresh Jagannathan. During my Ph.D., I focused primarily on developing robust and reliable autonomous agents, leveraging reinforcement learning, LLMs, and end-to-end planning, underpinned by rigorous formal specifications, to ensure the reliability and performance of autonomous agents’ decision-making.
News
- Generate reliable and consistent robot task plans with LLMs.
- Feb 1, 2025: “FLoRA: A Framework for Learning Scoring Rules in Autonomous Driving Planning Systems” accepted by RAL
  Learning interpretable temporal logic rules from driving data and using them to select the best plan proposal online.
- Sep 4, 2024: One paper accepted by CoRL '24
  Our paper “Scaling Safe Multi-Agent Control for Signal Temporal Logic Specifications” was accepted by CoRL '24! Congratulations to Joe and the team!
- Alert! A large set of malicious behaviors can be injected and triggered by slightly perturbing a neural path planner's input. Check out our latest work on backdoor attacks on neural path planners.
- Jan 29, 2024: DSCRL accepted by ICRA '24
  Want to effectively harness programmable logic specifications for learning-based navigation and control tasks? Check out Differentiable Specification Constrained Reinforcement Learning (DSCRL), a logic-constrained, map-adaptive co-learning framework for navigation/planning and control.