Member of Technical Staff - Inference
About the role
We're looking for an ML infrastructure engineer to bridge the gap between research and production at Runway. You'll work directly with our research teams to productionize cutting-edge generative models—taking checkpoints from training to staging to production, ensuring reliability at scale, and building the infrastructure that enables fast iteration.
You'll be embedded within research teams, providing platform support throughout the entire model development lifecycle. Your work will directly impact how quickly we can ship new models and features to millions of users.
A peek at our technical stack
Our API endpoints for real-time collaboration and media asset management are written in TypeScript and run in ECS containers on AWS Fargate. We leverage multiple AWS-native components, such as S3, CloudFront, Lambda, Kinesis, and SQS, as building blocks of our infrastructure.
Our inference backend is written in Python (PyTorch, TorchScript) and is deployed across multiple clusters and cloud providers. We use Kubernetes for container orchestration, with k8s-native components such as Flyte, Kueue, and Kyverno for efficient job orchestration. We invest in Prometheus and Grafana for monitoring, and Terraform to manage our infrastructure.
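For flavor, here is a minimal sketch of the shape a service in this stack can take. The checkpoint path, metric name, and port are hypothetical and purely illustrative, not our actual code:

```python
import torch
from prometheus_client import Histogram, start_http_server

# Hypothetical metric name; real services also export errors,
# throughput, and GPU utilization.
INFERENCE_LATENCY = Histogram(
    "inference_latency_seconds", "End-to-end inference latency"
)

# Load a TorchScript checkpoint directly onto the GPU.
model = torch.jit.load("checkpoint.pt", map_location="cuda")
model.eval()

@INFERENCE_LATENCY.time()
@torch.inference_mode()
def infer(batch: torch.Tensor) -> torch.Tensor:
    return model(batch.to("cuda"))

if __name__ == "__main__":
    start_http_server(9090)  # exposes /metrics for Prometheus to scrape
```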
What you’ll do
Productionize model checkpoints end-to-end: from research completion to internal testing to production deployment to post-release support
Build and optimize inference systems for large-scale generative models running on multi-GPU environments
Design and implement model serving infrastructure specialized for diffusion models and real-time diffusion workflows
Add monitoring and observability for new model releases: track errors, throughput, GPU utilization, and latency (see the sketch after this list)
Embed with research teams to gather training data, run preprocessing scripts, and support the model development process
Explore and integrate with GPU inference providers (Modal, E2E, Baseten, etc.)
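To make the monitoring bullet above concrete, here is a hedged sketch of the kind of per-release metrics involved. The metric names and the NVML polling pattern are illustrative assumptions, not our production conventions:

```python
import pynvml
from prometheus_client import Counter, Gauge

# Hypothetical metric names, for illustration only.
REQUEST_ERRORS = Counter(
    "inference_errors_total", "Failed inference requests", ["model_version"]
)
GPU_UTILIZATION = Gauge(
    "gpu_utilization_percent", "Instantaneous GPU utilization", ["gpu"]
)

def sample_gpu_utilization() -> None:
    """Poll NVML once and export per-GPU utilization for scraping."""
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)
            GPU_UTILIZATION.labels(gpu=str(i)).set(util.gpu)
    finally:
        pynvml.nvmlShutdown()

def record_error(model_version: str) -> None:
    REQUEST_ERRORS.labels(model_version=model_version).inc()
```

Dashboards built on series like these are typically what tell you whether a newly shipped checkpoint is healthy in its first hours of production.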
What you’ll need
4+ years of experience running ML model inference at scale in production environments
Strong experience with PyTorch and multi-GPU inference for large models
Experience with Kubernetes for ML workloads—deploying, scaling, and debugging GPU-based services
Comfortable working across multiple cloud providers and managing GPU driver compatibility
Experience with monitoring and observability for ML systems (errors, throughput, GPU utilization)
Self-starter who can work embedded with research teams and move fast
Strong systems thinking and pragmatic approach to production reliability
Humility and open-mindedness; at Runway we love to learn from one another
Nice to Have
Experience building custom inference frameworks or serving systems
Deep understanding of distributed training and inference patterns (FSDP, data parallelism, tensor parallelism)
Ability to debug low-level issues: NCCL networking problems, CUDA errors, memory leaks, performance bottlenecks
Experience with diffusion models or video generation systems
Knowledge of real-time or latency-sensitive ML applications
Runway strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on competitive market rates for our size, stage and industry, and salary is just one part of the overall compensation package we provide.
There are many factors that go into salary determinations, including relevant experience, skill level and qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description; in that case, we will communicate any updates to the expected salary range.
Lastly, the provided range is the expected salary for candidates in the U.S. Outside of the U.S., the range may differ, and any change will be communicated to candidates.
Salary range: $240,000-$290,000