Staff Software Engineer - ML Serving Platform

Full-time
USA
$161k-$330k per year
Posted 1 year ago
The job listing has expired. Unfortunately, the hiring company is no longer accepting new applications.

To see similar active jobs, please follow this link: Remote Development jobs

The ML Platform team delivers essential tools and infrastructure used by hundreds of ML engineers across Pinterest, powering crucial functions such as recommendations, ads, visual search, growth/notifications, and trust and safety. Our primary objectives are to ensure ML systems maintain production-grade quality and to enable rapid iteration for modelers.

We are seeking a Staff Software Engineer to join our ML Serving team and spearhead the technical strategy for our ML inference engine. The ML Serving team builds large-scale online systems and tools for model inference, deployment, monitoring, and feature fetching/logging. As ML workloads grow increasingly large, complex, and interdependent, the efficient use of ML accelerators has become critical to our success. You’ll be part of the ML Platform team in Data Engineering, which aims to keep ML healthy and fast across all 40+ ML use cases at Pinterest, ranging from recommender systems and computer vision to LLMs and other models.

 

What you’ll do:

  • Architect and develop large-scale, robust, and efficient ML inference engines and serving systems leveraging GPUs and other hardware accelerators

  • Formulate and implement strategic roadmaps for ML inference technologies at the team and company levels

  • Collaborate with cross-functional teams to drive innovative ML projects, applying advanced inference optimization techniques

  • Engage extensively with ML engineers across Pinterest to understand their technical requirements, address pain points, and create generalized solutions

  • Provide technical mentorship and guidance to junior engineers within the team

 

What we’re looking for:

  • Comprehensive understanding of production-scale ML use cases and systems, with a focus on scalability and efficiency

  • Hands-on experience in building large-scale ML systems in production environments, preferably with expertise in state-of-the-art ML inference technologies and optimizations

  • In-depth knowledge of common ML frameworks and systems, including PyTorch, TensorRT, and vLLM, along with their best practices and internal mechanisms

  • Familiarity with GPU programming and common optimization techniques such as ML compilation and quantization

  • Strong programming skills in Python and C++, coupled with a solid grasp of distributed systems principles

 

Relocation Statement:

  • This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.

 

In-Office Requirement Statement:

  • We let the type of work you do guide the collaboration style. That means we're not always working in an office, but we continue to gather for key moments of collaboration and connection.

  • This role requires in-person collaboration in the office 1-2 times per quarter and can therefore be based anywhere in the country.

 

#LI-HYBRID

#LI-AH2
