About
I'm an AI Engineer who builds production ML systems.
My work sits at the intersection of machine learning and software engineering — taking models from research to reliable, monitored production systems. I've built RAG pipelines that serve real queries, observability tooling that catches issues before users notice, and MLOps infrastructure that lets teams iterate with confidence.
Background
I started in software engineering and moved toward ML when I saw how many promising models never made it to production. The gap between "works in a notebook" and "runs reliably at scale" became my focus.
Most of my work involves:
- RAG systems — retrieval pipelines, embedding strategies, evaluation frameworks
- LLM applications — prompt engineering, orchestration, cost optimization
- MLOps — model deployment, monitoring, CI/CD for ML
- Observability — tracing, metrics, alerting for ML systems
Current Focus
I'm building tools and systems for LLM applications, focused on evaluation, monitoring, and the infrastructure that makes AI systems trustworthy in production.
Values
- Ship, then iterate. Working software beats perfect plans.
- Measure what matters. If you can't evaluate it, you can't improve it.
- Simplicity scales. Complex systems fail in complex ways.
- Write it down. Documentation is a feature, not a chore.
If you're working on something interesting in AI infrastructure, I'd like to hear about it.