Ryan Lagasse

AI/ML Research Scientist & Engineer

Research Scientist @ Lockheed Martin AI Center
Director @ Algoverse AI Research
Berkeley AI Fellow · Blue Dot Fellow

I care about understanding AI systems mechanistically: not just observing that they work, but knowing why. The problems I keep coming back to: making interpretability tools useful at training time (not just for post-hoc analysis), scalable oversight that actually scales, and architectures that support genuine reasoning and memory. Right now that means interpretability research on looping transformers and building multi-agent planning systems. The through-line is simple: alignment without mechanistic grounding is just vibes, and I want to build the science that changes that.

Highlights

Dec 2025 Promoted to AI/ML Research Scientist; founding member of the Future AI Research Team, leading interpretability research on looping transformers
Dec 2025 "Alignment-Constrained Pruning LLMs" accepted to DAI @ AAAI 2026
Sep 2025 "A Few Bad Neurons: Isolating and Surgically Correcting Sycophancy" accepted to CogInterp @ NeurIPS 2025
Sep 2025 "Circuit Discovery via Hybrid Attribution-Pruning Framework" accepted to MechInterp @ NeurIPS 2025
Sep 2025 "Active Inference Control: Steering, Not Just Scaling, Language Model Reasoning" accepted to Efficient Reasoning @ NeurIPS 2025
Aug 2025 "HalluTree: Explainable Multi-Hop Hallucination Detection" accepted to NewSumm @ EMNLP 2025
Jul 2025 Promoted to Director at Algoverse AI Research
Jun 2025 "Iterative RAG with Semantic Entropy" accepted to VecDB @ ICML 2025
May 2025 Joined the Lockheed Martin AI Center full-time as an AI/ML Research Engineer
Apr 2025 Won track at Yale Quantum Hackathon
Jan 2025 Selected as Berkeley AI Policy Fellow and Blue Dot Impact Fellow
Jan 2025 "Hybrid Quantum Algorithms for N-Body Simulations" accepted as oral presentation at QCNC 2025
Sep 2024 Led research for autonomous UAV-UGV teaming demo at EDGE24 (100% field success rate)
Dec 2023 Created and taught UConn's "Intro to Transformers" course (CSE 4095)