Job Title: Senior Software Engineer
Company Name: Sentry
Job Url: https://jobs.ashbyhq.com/sentry/95d2eeab-291d-40ad-97a2-86b104f3c7ad?utm_source=Otta

Job Description:

About the role
As a Senior Software Engineer on Sentry’s AI/ML team, you’ll build the evaluation infrastructure that measures the accuracy, reliability, and real-world performance of our AI systems. This role is critical to ensuring that our debugging agents and AI-powered features behave correctly, safely, and predictably as they scale. You’ll design datasets, benchmarks, and test harnesses that turn ambiguous AI behavior into measurable signals, helping the team ship AI with confidence.

In this role you will
- Design and build robust evaluation frameworks to measure accuracy, reliability, regressions, and edge cases in AI systems
- Create and curate high-quality datasets, golden test cases, and benchmarks grounded in real production data
- Build automated test harnesses and metrics pipelines to continuously evaluate models, prompts, and agentic workflows
- Partner closely with applied AI engineers and product leaders to define what “good” looks like and translate it into measurable criteria
- Own the evaluation lifecycle for major AI initiatives, from early experimentation through production monitoring

You’ll love this job if you
- Care deeply about correctness, rigor, and measurement in AI systems
- Enjoy turning fuzzy product goals and model behavior into concrete tests and metrics
- Like building foundational infrastructure that unlocks faster iteration and higher confidence for the entire AI team
- Thrive in cross-functional environments and enjoy influencing model design through better evaluation

Qualifications
- 5+ years of professional experience and a Bachelor’s degree in computer science, machine learning, or a related field
- Experience building testing, evaluation, or data infrastructure for complex systems (AI/ML experience strongly preferred)
- Comfort writing production-quality code (we use Python and TypeScript)
- Experience working with structured and unstructured datasets, labeling workflows, or data quality pipelines
- Familiarity with modern ML systems and evaluation techniques (e.g., offline metrics, online evaluation, regression testing for models or prompts)
- Bonus: experience evaluating LLMs, agentic systems, or AI-assisted developer tools
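To illustrate the kind of golden-test-case evaluation harness the posting describes, here is a minimal sketch in Python. All names here (GoldenCase, evaluate, the stub model) are hypothetical for illustration only and are not Sentry code; a real harness would call an actual model and use richer scoring than substring matching.

```python
# Hypothetical sketch of a golden-test-case evaluation harness.
# None of these names come from Sentry's actual codebase.
from dataclasses import dataclass
from typing import Callable


@dataclass
class GoldenCase:
    """One curated input/expected-output pair grounded in production data."""
    name: str
    prompt: str
    expected: str  # substring the model's answer should contain


def evaluate(model: Callable[[str], str], cases: list[GoldenCase]) -> dict:
    """Run every golden case through the model and report pass/fail metrics."""
    failures = []
    for case in cases:
        output = model(case.prompt)
        # Simple containment check; real harnesses often use graded scoring.
        if case.expected.lower() not in output.lower():
            failures.append(case.name)
    total = len(cases)
    passed = total - len(failures)
    return {
        "total": total,
        "passed": passed,
        "accuracy": passed / total if total else 0.0,
        "failures": failures,
    }


# Usage: a stub "model" standing in for an LLM or debugging-agent call.
cases = [
    GoldenCase("null-deref", "Why did frame.user crash?", "NoneType"),
    GoldenCase("timeout", "Why did the request fail?", "timeout"),
]
report = evaluate(lambda prompt: "Likely a NoneType attribute error", cases)
```

Running such a harness in CI against a fixed set of golden cases is one common way to turn "regression testing for models or prompts" into a concrete, measurable signal.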