AI/ML Research Scientist & Engineer
I care about understanding AI systems mechanistically: not just observing that they work, but knowing why. The problems I keep coming back to: making interpretability tools useful at training time rather than only for post-hoc analysis, scalable oversight that actually scales, and architectures that support genuine reasoning and memory. Right now that means a lot of experimentation and ablations to better understand how LLMs work, from token density to attention attribution (a small sketch of the kind of experiment I mean is below). The through-line is simple: alignment without mechanistic grounding is just vibes, and I want to build the science that changes that.
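
A minimal sketch of the kind of ablation experiment I mean: zero out a single GPT-2 attention head with a forward hook and measure how the next-token distribution shifts. The layer/head indices and the assumption that GPT-2's attention module returns a tuple whose first element is the merged-head output are illustrative, not a description of any specific tooling or result.

```python
# Illustrative sketch: ablate one attention head in GPT-2 and compare
# next-token distributions before and after. LAYER/HEAD are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

LAYER, HEAD = 5, 3                                   # illustrative choice
HEAD_DIM = model.config.n_embd // model.config.n_head

def ablate_head(module, inputs, output):
    # Assumes the attention module returns a tuple whose first element is
    # the merged-head output of shape (batch, seq, n_embd); zero one head.
    attn_out = output[0].clone()
    attn_out[..., HEAD * HEAD_DIM:(HEAD + 1) * HEAD_DIM] = 0
    return (attn_out,) + output[1:]

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    baseline = model(ids).logits[0, -1].softmax(-1)

handle = model.transformer.h[LAYER].attn.register_forward_hook(ablate_head)
with torch.no_grad():
    ablated = model(ids).logits[0, -1].softmax(-1)
handle.remove()

# How much probability mass moved when this head was removed.
print("total variation distance:", 0.5 * (baseline - ablated).abs().sum().item())
```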