Senior Software Engineer - GenAI & ML Evaluation Frameworks
This is a remote opportunity; at this time we are only considering applicants based in USA time zones.
At Grafana, we build observability tools that help users understand, respond to, and improve their systems – regardless of scale, complexity, or tech stack. The Grafana AI teams play a key role in this mission by helping users make sense of complex observability data through AI-driven features. These capabilities reduce toil, lower the level of domain expertise required, and surface meaningful signals from noisy environments.
We are looking for an experienced engineer with expertise in evaluating Generative AI systems, particularly Large Language Models (LLMs), to help us build and evolve our internal evaluation frameworks and integrate best-of-breed external tools where they fit. The role involves designing and scaling automated evaluation pipelines, integrating them into CI/CD workflows, and defining metrics that reflect both product goals and model behavior. As the team matures, there is broad scope to expand or redefine the role based on impact and initiative.
The kind of problems you’ll be tackling:
Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification (a brief illustrative sketch follows this list).
Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
Define and refine metrics for both structure and semantics, ensuring alignment with realistic use cases and operational constraints.
Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.
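By way of illustration only, here is a minimal sketch of what one stage of such a pipeline could look like: a structural check on model output, a judge-style semantic score over a golden set, and a regression gate suitable for CI. Every name in it (GoldenCase, verify_structure, judge_semantics, evaluate, the threshold) is hypothetical, and the judge is a trivial exact-match stand-in for a real grading-model call, so the sketch runs on its own:

    import json
    from dataclasses import dataclass

    @dataclass
    class GoldenCase:
        prompt: str
        expected: dict  # reference structured output for this prompt

    def verify_structure(output: str, required_keys: set) -> bool:
        # Structured-output check: must parse as JSON and contain every required key.
        try:
            parsed = json.loads(output)
        except json.JSONDecodeError:
            return False
        return isinstance(parsed, dict) and required_keys <= parsed.keys()

    def judge_semantics(output: str, reference: dict) -> float:
        # LLM-as-judge placeholder: a real pipeline would ask a grading model to
        # score the output against the reference on a 0-1 scale; here an
        # exact-match stand-in keeps the sketch self-contained.
        return 1.0 if json.loads(output) == reference else 0.0

    def evaluate(model_fn, cases, threshold=0.8):
        scores = []
        for case in cases:
            output = model_fn(case.prompt)
            if not verify_structure(output, set(case.expected)):
                scores.append(0.0)  # structural failures score zero outright
                continue
            scores.append(judge_semantics(output, case.expected))
        mean = sum(scores) / len(scores)
        # Regression gate: fail the CI job if the golden-set mean drops below threshold.
        assert mean >= threshold, f"golden-set score {mean:.2f} fell below {threshold}"
        return mean

    # Usage: a one-case golden set and a stub model, purely for illustration.
    cases = [GoldenCase(prompt="Summarize alert A",
                        expected={"severity": "high", "summary": "disk pressure"})]
    print(evaluate(lambda p: '{"severity": "high", "summary": "disk pressure"}', cases))

In practice the stub model and exact-match judge would be replaced by the system under test and an actual grading-model call, with per-case scores logged over time for regression tracking.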
What we’re looking for:
Experience designing and implementing evaluation frameworks for AI/ML systems.
Familiarity with prompt engineering, structured output evaluation, and context-window management in LLM systems.
The autonomy to collaborate across teams and translate their goals into clear, testable criteria supported by effective tooling.
General qualities we’re seeking:
Experience working in environments with rapid iteration and experimental development.
A pragmatic mindset that values reproducibility, developer experience, and thoughtful trade-offs when scaling GenAI systems.
A passion for minimizing human toil and building AI systems that actively support engineers.
In the United States, the base compensation range for this role is USD 148,505 - USD 178,206. Actual compensation may vary based on level, experience, and skill set as assessed in the interview process. Benefits include equity, bonus (if applicable), and other benefits listed here.
*Compensation ranges are country-specific. If you are applying for this role from a different location than listed above, your recruiter will discuss your specific market’s defined pay range & benefits at the beginning of the process.
*Grafana Labs may utilize AI tools in its recruitment process to assist in matching information provided in CVs to job postings. The recruitment team will continue to review inbound CVs manually to identify alignment with current openings.