Senior Backend Engineer - Grafana Databases, Managed Services
This is a remote opportunity; at this time we are only considering applicants based in USA time zones.
The Opportunity:
The Managed Services team is a newly formed squad within the Databases department. It owns and operates shared, production-critical infrastructure that powers Grafana Cloud’s next-generation database products (Mimir, Loki, and Tempo). Today, this includes operating 100+ WarpStream clusters across multiple cloud providers and regions, with continued growth anticipated. WarpStream acts as the streaming backbone for ingestion and read/write decoupling across databases. It sits directly on the hot path for metrics, logs, and traces, handling high-throughput, multi-consumer workloads at massive scale.
In addition to streaming infrastructure, the team works closely with high-volume analytical and storage systems that power query-heavy and aggregation-heavy workloads, where latency, compression behavior, storage layout, and scaling characteristics matter deeply.
What You’ll Be Doing:
As a Senior Engineer on Managed Services, you will take ownership of running these systems in production. This involves:
Operating and evolving 100+ multi-cloud streaming clusters and related database infrastructure
Diagnosing and eliminating cross-layer failure modes (e.g., object storage latency, noisy neighbors, control-plane bottlenecks, query performance regressions)
Designing safe upgrade and rollout strategies at scale
Improving observability, automation, and operational ergonomics
Partnering closely with database and platform teams to ensure safe scaling, partitioning, consumer fan-out, and query performance
Working directly with distributed systems behavior, Kubernetes scheduling dynamics, storage engines, and compression trade-offs
Serving as a primary escalation point and on-call for relevant incidents
Owning relationships with system vendors, including WarpStream Labs
Our engineering organization is remote-first; we provide guidance and meet regularly over video calls, so an independent attitude and strong communication skills are a must.
This role blends deep distributed systems work with the opportunity to influence how the team approaches reliability, scaling, and operational excellence.
We invest heavily in developer productivity. You can use modern AI coding assistants as part of your daily workflow (your choice of tools, within security guidelines), backed by a company-funded usage budget so you can iterate quickly without unnecessary friction. We encourage pragmatic AI-assisted development: faster prototyping, test generation, refactors, documentation, and incident follow-ups—always paired with strong code review and quality standards. You’ll also have access to frontier models (e.g., GPT-Codex 5/3, Claude Opus 4.6, Gemini 3 Pro).
There is an on-call component to this role, and it is one we take seriously. As a company, we hire globally (remote-first) to keep our on-call rotations healthy and aligned to approximately 12 daylight hours per day. You will work closely with counterparts in other regions to provide balanced coverage and shared ownership.
What Makes You a Great Fit:
Regular 1:1s with your manager and close collaboration with teammates across regions
Reviewing and defining SLOs for shared database infrastructure, proactively reducing error budget burn through improvements to monitoring, automation, scaling strategies, and system design
Improving the diagnosability of core streaming and database systems in production, where possible
Implementing solutions that ensure reliability, scalability, and performance of high-throughput, multi-cloud infrastructure
Developing fault-tolerant patterns that account for distributed system realities such as storage latency, partition imbalance, noisy neighbors, and control-plane dependencies
Planning and executing safe upgrades and rollouts across dozens of production clusters
Collaborating with database and platform engineering leaders to influence architecture, roadmap priorities, and long-term strategy
Participating in PR review and contributing to design documents, automation, tooling, and code improvements that reduce operational risk
Sharing best practices and distributed systems knowledge with partner teams
Participating in incident response, from investigation through resolution and post-incident reviews (PIR)
Requirements:
6+ years of engineering experience, including meaningful time in SRE, platform engineering, production engineering, infrastructure engineering, or distributed systems roles.
Experience operating distributed systems in production (e.g., streaming systems, analytical databases, large-scale storage backends). Examples of these include Kafka, Redpanda, WarpStream, Postgres, ClickHouse, Snowflake, or Cassandra.
Strong Kubernetes experience in AWS, GCP, or Azure, and familiarity with infrastructure-as-code tooling (Helm, Terraform, Jsonnet, etc.).
Solid understanding of distributed systems design and large-scale system trade-offs.
Proficiency in at least one programming language (Go preferred, but not required).
Working knowledge of Linux internals, networking, cloud storage, and performance/scaling behavior.
Experience participating in blameless incident response and writing high-quality post-incident reviews.
Clear communicator who can collaborate across teams and work autonomously.
Curious, pragmatic, action-oriented, and kind (this is important!)
Compensation & Rewards:
In the United States, the base compensation range for this role is USD 154,445 - USD 185,334. Actual compensation may vary based on level, experience, and skillset as assessed in the interview process. Benefits include equity, bonus (if applicable), and other benefits listed here.
All of our roles include Restricted Stock Units (RSUs), giving every team member ownership in Grafana Labs' success. We believe in shared outcomes—RSUs help us stay aligned and invested as we scale globally.