Working Nomads

Software Engineer L5 - Offline Inference, Machine Learning Platform

Netflix

Full-time
USA
$100k-$720k per year
machine learning
software engineering
engineer
java
python

Netflix is one of the world's leading entertainment services, with over 300 million paid memberships in over 190 countries enjoying TV series, films and games across a wide variety of genres and languages. Members can play, pause and resume watching as much as they want, anytime, anywhere, and can change their plans at any time.

Machine Learning (ML) is core to that experience. From personalizing the home page to optimizing studio operations and powering new types of content, ML helps us entertain the world faster and better.

The Machine Learning Platform (MLP) organization builds the scalable, reliable infrastructure that accelerates every ML practitioner at Netflix. Within MLP, the Offline Inference team owns the batch-prediction layer, enabling practitioners to generate, store, and serve predictions from a variety of models, including LLMs, computer-vision systems, and other foundation models. One of our most critical customer groups today is Netflix's content and studio ML practitioners, whose work influences what we create and how we produce the movies and shows you see when you open the Netflix app.

The Opportunity:

We’re looking for a talented Software Engineer L5 to join the newly formed Offline Inference team. You will design, build, and operate next-generation systems that run large-scale batch inference workloads—from minutes to multi-day jobs—while delivering a friction-free, self-service experience for ML practitioners across Netflix. Success in this role means not only building robust distributed systems, but also deeply understanding the ML development lifecycle to build platforms that truly accelerate our users.

What You’ll Do

  • Build developer-friendly APIs, SDKs, and CLIs that let researchers and engineers—experts and non-experts alike—submit and manage batch inference jobs with minimal effort, particularly in the domain of content and media.

  • Design, implement, and operate distributed services that package, schedule, execute, and monitor batch inference workflows at massive scale.

  • Instrument the platform for reliability, debuggability, observability, and cost control; define SLOs and share an equitable on-call rotation.

  • Foster a culture of engineering excellence through design reviews, mentorship, and candid, constructive feedback.
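
The responsibilities above center on a self-service surface for submitting and tracking batch inference jobs. As a purely illustrative sketch (the names here, `JobSpec`, `InferenceClient`, and `submit`, are invented for illustration and are not an actual Netflix API), a minimal submission client might look like:

```python
from dataclasses import dataclass, field
import uuid

@dataclass
class JobSpec:
    """Declarative description of a batch inference run (hypothetical)."""
    model: str                   # model identifier, e.g. an LLM or vision model
    input_uri: str               # where the batch inputs live
    output_uri: str              # where predictions should be written
    max_runtime_hours: int = 24  # guardrail for long-running jobs

@dataclass
class BatchJob:
    spec: JobSpec
    job_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    state: str = "PENDING"

class InferenceClient:
    """Toy in-memory stand-in for a job-submission service."""

    def __init__(self):
        self._jobs = {}

    def submit(self, spec: JobSpec) -> str:
        """Register a job and hand back an id the caller can poll."""
        job = BatchJob(spec)
        self._jobs[job.job_id] = job
        job.state = "RUNNING"  # a real scheduler would transition this asynchronously
        return job.job_id

    def status(self, job_id: str) -> str:
        return self._jobs[job_id].state

client = InferenceClient()
job_id = client.submit(JobSpec(
    model="llm-foundation-v1",
    input_uri="s3://example-bucket/prompts/",
    output_uri="s3://example-bucket/predictions/",
))
```

Returning an opaque job id rather than blocking on completion is the usual design for workloads that, as the posting notes, can run from minutes to multiple days.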

Minimum Qualifications:

  • Hands-on experience with ML engineering or production systems involving training or inference of deep-learning models.

  • Proven track record of operating scalable infrastructure for ML workloads (batch or online).

  • Proficiency in one or more modern backend languages (e.g., Python, Java, Scala).

  • Production experience with containerization & orchestration (Docker, Kubernetes, ECS, etc.) and at least one major cloud provider (AWS preferred).

  • Comfortable with ambiguity and working across multiple layers of the tech stack to execute on both 0-to-1 and 1-to-100 projects.

  • Commitment to operational best practices—observability, logging, incident response, and on-call excellence.

  • Excellent written and verbal communication skills; effective collaboration with peers and partners distributed across US geographies and time zones.

Preferred Qualifications:

  • Deep understanding of real-world ML development workflows and close partnership with ML researchers or modeling engineers.

  • Familiarity with cloud-based AI/ML services (e.g., SageMaker, Bedrock, Databricks, OpenAI, Vertex) or open-source stacks (Ray, Kubeflow, MLflow).

  • Experience optimizing inference for large language models, computer-vision pipelines, or other foundation models (e.g., FSDP, tensor/pipeline parallelism, quantization, distillation).

  • Open-source contributions, patents, or public speaking/blogging on ML-infrastructure topics.
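
Of the optimization techniques named above, quantization is the simplest to illustrate. The following is a framework-free toy (real systems quantize tensors per-channel with calibrated scales; this is only symmetric int8 rounding over a flat list):

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats onto integers in [-127, 127]."""
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(quantized, scale):
    """Recover approximate float weights from the int8 representation."""
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03, 1.0]
quantized, scale = quantize_int8(weights)
restored = dequantize(quantized, scale)
# Round-trip error is bounded by half a quantization step (scale / 2).
max_error = max(abs(a - b) for a, b in zip(weights, restored))
```

Storing weights as int8 cuts memory bandwidth roughly 4x versus float32, which is the main lever for cheaper large-model inference.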

What We Offer:

Our compensation structure consists solely of an annual salary; we do not have bonuses. You choose each year how much of your compensation you want in salary versus stock options. To determine your personal top-of-market compensation, we rely on market indicators and consider your specific job family, background, skills, and experience. The range for this role is $100,000 - $720,000.

Netflix provides comprehensive benefits including Health Plans, Mental Health support, a 401(k) Retirement Plan with employer match, Stock Option Program, Disability Programs, Health Savings and Flexible Spending Accounts, Family-forming benefits, and Life and Serious Injury Benefits. We also offer paid leave of absence programs. Full-time hourly employees accrue 35 days of paid time off annually, to be used for vacation, holidays, and sick time. Full-time salaried employees are immediately entitled to flexible time off. See more detail about our Benefits here.

Netflix has a unique culture and environment. Learn more here.

Inclusion is a Netflix value and we strive to host a meaningful interview experience for all candidates. If you want an accommodation/adjustment for a disability or any other reason during the hiring process, please send a request to your recruiting partner.

We are an equal-opportunity employer and celebrate diversity, recognizing that diversity builds stronger teams. We approach diversity and inclusion seriously and thoughtfully. We do not discriminate on the basis of race, religion, color, ancestry, national origin, caste, sex, sexual orientation, gender, gender identity or expression, age, disability, medical condition, pregnancy, genetic makeup, marital status, or military service.

This job is open for no less than 7 days and will be removed when the position is filled.

Apply for this position

About the job

5 Applicants
Posted 3 days ago


Working Nomads curates remote digital jobs from around the web.

© 2025 Working Nomads.