Mathematical Machine Learning
I study how learning systems can be compared, calibrated, and trusted under distribution shift and imperfect benchmarks.
Research
I build reproducible computational systems for model evaluation, scientific inference, and uncertainty-aware decision-making. The work sits between applied mathematics, ML, and research infrastructure: controlled numerical experiments, explicit assumptions, artifact provenance, calibrated uncertainty, numerical stability checks, and statistically valid comparisons.
Research Interests
I am interested in optimization as a computational lens on stability, approximation, constraints, and scientific inverse problems.
I want ML systems for science to preserve structure, expose uncertainty, and remain numerically testable.
I treat AI evaluation as an experimental system whose claims need provenance, statistical comparison, and failure analysis.
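To make "statistical comparison" in evaluation concrete, here is a minimal paired-bootstrap sketch for comparing two models on a shared evaluation set. The function name, the toy per-example scores, and the resample count are illustrative assumptions, not part of the original text; pairing the resamples cancels per-example difficulty so the comparison reflects the models, not the items.

```python
import random

def paired_bootstrap(scores_a, scores_b, n_resamples=10_000, seed=0):
    """Estimate how often model A outscores model B when the shared
    evaluation set is resampled with replacement (paired bootstrap)."""
    rng = random.Random(seed)
    n = len(scores_a)
    # Per-example score differences; pairing cancels item difficulty.
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    wins = 0
    for _ in range(n_resamples):
        sample = [diffs[rng.randrange(n)] for _ in range(n)]
        if sum(sample) / n > 0:
            wins += 1
    return wins / n_resamples  # fraction of resamples where A wins

# Hypothetical per-example accuracies for two models on one benchmark.
a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]
b = [1, 0, 0, 1, 1, 0, 0, 1, 1, 0]
print(paired_bootstrap(a, b))
```

A win rate near 0.5 means the observed gap is within resampling noise; a rate near 1.0 suggests the ranking would survive a different draw of evaluation items.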
Research Statement Preview
Modern ML systems should be calibrated, auditable, stress-tested, falsifiable, and reproducible, not merely ranked by benchmark scores. I am interested in the mathematical structure that makes such systems stable and reliable, and in the computational practice that can test those properties at scale.
My research direction spans reliable AI for science, robust optimization, uncertainty quantification, graph-structured inference, differentiable simulation, and evaluation methods for foundation models.
Current Questions
How can stochastic approximation and optimization theory better explain the empirical behavior of modern learning systems?
How can numerical linear algebra and spectral structure improve graph learning, retrieval, and representation learning?
What does reproducibility mean for complex ML pipelines where datasets, prompts, models, tokenizers, and evaluation logic all evolve?
How can evaluation systems be designed so that empirical ML behaves more like a controlled scientific discipline?
Preparation
My systems work has repeatedly centered on the same concerns that matter in computational research: whether an experiment can be replayed, whether a metric is measuring what it claims to measure, whether a model comparison is contaminated by leakage, and whether a numerical or statistical procedure remains trustworthy when scaled.
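The replayability concern above can be sketched as a run fingerprint: hash everything a rerun would need, so two runs can be compared by fingerprint before their metrics are compared. The function name, field names, and toy inputs are hypothetical illustrations, not a description of any existing pipeline.

```python
import hashlib
import json

def run_fingerprint(config: dict, dataset_rows: list, code_version: str) -> str:
    """Produce a stable digest of the inputs an experiment depends on."""
    payload = json.dumps(
        {
            "config": config,        # hyperparameters, prompts, seeds
            "data": dataset_rows,    # or a precomputed digest for large data
            "code": code_version,    # e.g. a version-control commit id
        },
        sort_keys=True,              # key order must not change the hash
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

fp1 = run_fingerprint({"lr": 1e-3, "seed": 7}, [["x", 1], ["y", 0]], "abc123")
fp2 = run_fingerprint({"seed": 7, "lr": 1e-3}, [["x", 1], ["y", 0]], "abc123")
assert fp1 == fp2  # same experiment, same fingerprint, regardless of key order
```

Sorting keys before serialization is the design choice that matters here: without it, logically identical configurations could hash differently and spurious "new" runs would appear.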
I now want to direct that computational maturity toward a more mathematical research path: optimization, stochastic modeling, numerical computation, graph methods, and reliable AI systems.