2025 Distinguished TrAC Seminar Registration


Join us for the Inaugural Distinguished TrAC Lecture with Subbarao Kambhampati.



From LLMs to LRMs: The Jagged Quest to Tease Planning and Reasoning from Humanity's Digital Footprints


March 25, 2025, 1-2 pm CT

2055 Hoover Hall, Iowa State University, Ames, IA


Large Language Models (LLMs), auto-regressively trained on the digital footprints of humanity, have shown impressive abilities in generating coherent text completions for a vast variety of prompts. While they excel at producing completions in the appropriate style, factuality and reasoning/planning abilities remain their Achilles heel. More recently, a variety of approaches have been tried to improve their reasoning abilities, and some are starting to show promise. These approaches leverage two broad and largely independent ideas: (i) test-time inference, which involves getting LLMs to do more work than simply producing the most likely completion, including using them in generate-and-test approaches such as LLM-Modulo; and (ii) post-training methods, which go beyond simple auto-regressive training on web corpora by collecting, filtering, and training on derivational traces (popularly referred to as chains of thought), and modifying the base LLM with them using supervised fine-tuning or reinforcement learning. Their success notwithstanding, there are considerable misunderstandings about these methods, including whether they can provide correctness guarantees, whether they perform adaptive computation, and whether the intermediate tokens they generate can be viewed as reasoning traces in any meaningful sense. Drawing from our ongoing work, I will present a broad unifying perspective on these approaches, their promise, and their limitations.