Senior Research Engineer, JAX
About the Role
We are seeking a highly skilled Senior Research Engineer to collaborate closely with both Research and Engineering teams. The role involves diagnosing and resolving bottlenecks across large-scale distributed training, data processing, and inference systems, while also driving optimizations for existing high-performance pipelines.
The ideal candidate possesses a deep understanding of modern deep learning systems, combined with strong engineering expertise in areas such as layer-level optimization, large-scale distributed training, streaming, low-latency, and asynchronous inference, inference compilers, and advanced parallelization techniques.
This is a cross-functional role requiring strong technical rigor, attention to detail, intellectual curiosity, and excellent communication skills. The position is embedded within the Research team and is responsible for developing and refining the technical foundation that enables cutting-edge research and translates its outcomes into production.
What You’ll Do
Maintain and evolve our JAX training framework, ensuring scalability and efficiency for large-scale distributed training runs.
Optimize production JAX inference systems for speech-to-text models using advanced techniques like continuous batching, model sharding, paged attention, and quantization (a minimal sharding sketch follows this list).
Refactor and modernize model architectures and infrastructure, translating research prototypes into production-ready systems.
Investigate and resolve performance bottlenecks across the stack—from low-level kernels (XLA, Pallas) to high-level system design.
Design and deploy scalable, distributed workloads optimized for TPU and GPU architectures.
Bridge Research and Engineering teams, ensuring seamless knowledge transfer and alignment on technical priorities.
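To give a flavor of the sharded JAX inference work referenced above, here is a minimal sketch, assuming a single-axis device mesh and a hypothetical projection layer; it is illustrative only, not AssemblyAI's actual inference stack, and a production system would add continuous batching, paged attention, and quantization on top.

```python
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

# One-dimensional mesh over all local accelerators; the single axis
# ("model") is used for tensor parallelism.
mesh = Mesh(np.array(jax.devices()), axis_names=("model",))

# Hypothetical weight matrix for one projection layer, sharded
# column-wise across the "model" axis; activations stay replicated.
w = jnp.zeros((1024, 4096))
w = jax.device_put(w, NamedSharding(mesh, P(None, "model")))

@jax.jit
def project(x, w):
    # XLA's SPMD partitioner propagates the sharding of `w` through the
    # matmul, so each device computes its slice of the output columns.
    return x @ w

x = jnp.ones((8, 1024))   # small replicated batch of activations
y = project(x, w)         # output is sharded along its last axis
print(y.sharding)
```

The same pattern scales from a single host to multi-host TPU or GPU topologies by extending the mesh and partition specs, which is where most of the optimization work in this role lives.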
What You’ll Need
Expert-level proficiency with JAX and its ecosystem (Flax, Optax, XLA compilation pipeline); a brief training-step sketch follows this list.
Strong experience optimizing inference systems for production, ideally with LLMs or speech models.
Hands-on experience with TPU programming and optimization; GPU/CUDA expertise is also valuable.
Passion for refactoring and improving existing systems—you thrive on making code faster, cleaner, and more maintainable.
Familiarity with modern inference optimization techniques: continuous batching, KV-cache management, sharding strategies, quantization.
Domain knowledge in Speech-to-Text (ASR architectures, audio processing, streaming inference) is a plus.
Strong Python skills; C++ or Rust experience for kernel-level work is a plus.
Deep understanding of distributed training at scale and ML infrastructure best practices.
Excellent communication skills and a collaborative mindset—you can clearly explain complex tradeoffs and prioritize high-impact work.
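As context for the ecosystem requirement above, this is a minimal, illustrative Flax/Optax training step under assumed toy inputs; the TinyMLP model and the data shapes are hypothetical placeholders, not the training framework described in this posting.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
import optax

class TinyMLP(nn.Module):          # hypothetical toy model
    @nn.compact
    def __call__(self, x):
        x = nn.Dense(128)(x)
        x = nn.relu(x)
        return nn.Dense(10)(x)

model = TinyMLP()
params = model.init(jax.random.PRNGKey(0), jnp.ones((1, 32)))
tx = optax.adamw(learning_rate=1e-3)
opt_state = tx.init(params)

@jax.jit
def train_step(params, opt_state, x, y):
    # Standard JAX pattern: pure loss function, value_and_grad, then an
    # Optax update applied functionally to the parameter pytree.
    def loss_fn(p):
        logits = model.apply(p, x)
        return optax.softmax_cross_entropy_with_integer_labels(logits, y).mean()
    loss, grads = jax.value_and_grad(loss_fn)(params)
    updates, opt_state = tx.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
    return params, opt_state, loss

# Toy usage with placeholder data.
x = jnp.ones((16, 32))
y = jnp.zeros((16,), dtype=jnp.int32)
params, opt_state, loss = train_step(params, opt_state, x, y)
```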
Pay Transparency:
AssemblyAI strives to recruit and retain exceptional talent from diverse backgrounds while ensuring pay equity for our team. Our salary ranges are based on paying competitively for our size, stage, and industry, and are one part of many compensation, benefit, and other reward opportunities we provide.
There are many factors that go into salary determinations, including relevant experience, skill level, qualifications assessed during the interview process, and maintaining internal equity with peers on the team. The range shared below is a general expectation for the function as posted, but we are also open to considering candidates who may be more or less experienced than outlined in the job description. In that case, we will communicate any adjustments to the expected salary range.
This is a remote role open to candidates across Europe. The provided range is listed in Swiss francs (CHF) as the position is posted in Zurich. Compensation will be adjusted to reflect local market rates and paid in the appropriate local currency for each candidate’s location. Any variations from the listed range will be clearly communicated during the interview process.
Salary range: CHF190,050.00 - CHF217,200.00