Working Nomads

MLOps Engineer

CloudWalk

Full-time
Brazil
engineer
devops
python
aws
machine learning
Apply for this position

Who we are

CloudWalk is a fintech company reimagining the future of financial services. We are building intelligent infrastructure powered by AI, blockchain, and thoughtful design. Our products serve millions of entrepreneurs across Brazil and the US every day, helping them grow with tools that are fast, fair, and built for how business actually works. Learn more at cloudwalk.io.

Who We’re Looking For

We're looking for an MLOps Engineer to help us build ML infrastructure that scales dynamically from dozens to thousands of GPUs, reliably and efficiently.

You’ll be part of the AI R&D team, working closely with researchers and engineers to design systems for training, evaluating, and monitoring machine learning models at scale. This isn’t a research position, but your work will directly support researchers running large-scale experiments. You’ll help build fault-tolerant pipelines that preserve progress even when things break (like OOMs), and ensure model development flows can iterate with confidence.
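Fault tolerance of this kind usually comes down to checkpointing: persist progress at regular intervals so a restarted job resumes where it died instead of from step zero. A minimal, stdlib-only Python sketch of the idea (the file name, loop, and simulated OOM are illustrative, not CloudWalk's actual stack):

```python
import json
import os
import tempfile

CHECKPOINT = "train_state.json"

def save_checkpoint(path: str, step: int, metrics: dict) -> None:
    # Write atomically: dump to a temp file, then rename over the old
    # checkpoint, so a crash mid-write never corrupts resumable state.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump({"step": step, "metrics": metrics}, f)
    os.replace(tmp, path)

def load_checkpoint(path: str) -> dict:
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"step": 0, "metrics": {}}

def train(total_steps: int, fail_at: int = -1) -> None:
    # Resume from the last saved step rather than restarting from zero.
    state = load_checkpoint(CHECKPOINT)
    for step in range(state["step"], total_steps):
        if step == fail_at:
            raise MemoryError("simulated OOM")  # stand-in for a real OOM kill
        state["metrics"]["last_loss"] = 1.0 / (step + 1)
        save_checkpoint(CHECKPOINT, step + 1, state["metrics"])
```

If the first run dies at step 7, the retried run loads the checkpoint and starts at step 7, which is the progress-preserving property described above.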

Our current focus is on large-scale, non-interactive workloads: batch training, dataset-wide model evaluation, and metric-driven improvement loops. That said, the infrastructure you build may later support interactive tools and APIs.

You'll be contributing to system design under the guidance of senior ML researchers and infra engineers; your role is to bring modern tooling and practical engineering to a demanding, GPU-heavy environment.

As a Machine Learning Engineer, your mission is to design and deploy intelligent systems that power core product experiences. You'll transform rich data into models that drive automation, personalization, and smart decision-making at scale. This role blends engineering and applied science, focused on building robust, adaptive ML systems that evolve continuously and make a tangible impact.

Responsibilities:

  • Build and maintain ML pipelines for data processing, training, evaluation, and model deployment.

  • Orchestrate batch and training jobs in Kubernetes, handling retries, failures, and resource constraints.

  • Design systems that scale dynamically from small GPU jobs to thousands of GPUs on-demand.

  • Collaborate with researchers to productionize their experiments into reproducible, robust workflows.

  • Implement model serving endpoints (REST/gRPC) and integrate with internal tooling.

  • Set up monitoring, logging, and KPI tracking for ML pipelines and compute jobs.

  • Automate CI/CD and infra provisioning for ML workloads.

  • Manage experiment tracking, model versioning, and metadata with tools like MLflow or W&B.

  • Support model serving infrastructure that may be used by internal UIs or tools in the future.
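In Kubernetes, the retry and GPU-scheduling concerns above mostly live in the Job spec: `backoffLimit` bounds retries after failures (such as an OOM kill), and an `nvidia.com/gpu` resource request drives placement onto GPU nodes. A sketch of such a manifest, expressed as the Python dict you might hand to a Kubernetes client (the name and image are made up):

```python
def training_job_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a Kubernetes batch/v1 Job spec for a GPU training run."""
    return {
        "apiVersion": "batch/v1",
        "kind": "Job",
        "metadata": {"name": name},
        "spec": {
            "backoffLimit": 3,  # retry up to 3 times before marking the Job failed
            "template": {
                "spec": {
                    # Let the Job controller own retries, not the kubelet.
                    "restartPolicy": "Never",
                    "containers": [{
                        "name": "trainer",
                        "image": image,
                        # GPU requests are extended resources; the scheduler
                        # only places this pod on a node with free GPUs.
                        "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
                    }],
                }
            },
        },
    }
```

The same dict serializes to the YAML you would apply with `kubectl`; the interesting knobs for this role are `backoffLimit` and the GPU limit.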

Required Skills:

  • Kubernetes: Strong experience orchestrating jobs, not just deploying services. You should be confident in managing training workloads, GPU scheduling, job retries, and Helm-based deployments.

  • Python: Comfortable writing scripts and services that glue systems together. You don’t need to be a full-stack dev, but notebooks won’t cut it. Automation is the word here.

  • ML Workflows: Familiarity with data preprocessing, training, evaluation, and deployment pipelines.

  • Model Serving: Ability to expose models via FastAPI, TorchServe, or equivalent serving stacks.

  • Linux: Strong CLI skills; you should know your way around debugging compute-heavy jobs.

  • Experience with ML metadata systems (MLflow, W&B, Neptune).

  • Know how to work side by side with AI assistants and agents.

  • Ability to communicate and debate in English and Portuguese.
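On the serving side, FastAPI or TorchServe would be the usual stack; purely to illustrate the REST contract a `/predict` endpoint exposes, here is a stdlib-only sketch (the model is a hypothetical stand-in that sums the input features):

```python
import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

def predict(features: list) -> float:
    # Hypothetical stand-in for a real model.
    return float(sum(features))

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        # Read the JSON request body, run the model, return JSON.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass
```

Serve it with `ThreadingHTTPServer(("", 8080), PredictHandler).serve_forever()` and POST `{"features": [...]}` to `/predict`; a production stack would add input validation, batching, and health checks on top of the same contract.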

Nice-to-Have:

  • Experience with orchestration tools (Airflow, Argo Workflows, Prefect).

  • Fluency in cloud environments (GCP, AWS, Azure).

  • Ability to write lean and customized Dockerfiles and Helm charts that run smoothly.

  • Exposure to distributed training frameworks (Ray, Horovod, Dask).

  • Deep understanding of GPU scheduling and tuning in Kubernetes environments.

  • Experience supporting LLM workloads or inference systems powering internal tools.

What You’ll Need to Succeed:

  • Curiosity about how things fail and how to keep them from failing.

  • Strong debugging chops, especially in distributed, resource-constrained environments.

  • A practical mindset: you know when to patch and when to fix.

  • Ability to collaborate across ML, research, and backend teams.

  • Ownership: you care about keeping systems reliable, scalable, and clean.

Recruiting process outline:

  • Online assessment: a test to evaluate your theoretical skills and logical reasoning.

  • Essay: a technical project for you to share your thoughts.

  • Technical interview and essay presentation.

  • Cultural interview.

If you are not willing to take an online quiz, do not apply.

Diversity and inclusion:

We believe in social inclusion, respect, and appreciation of all people. We promote a welcoming work environment, where each CloudWalker can be authentic, regardless of gender, ethnicity, race, religion, sexuality, mobility, disability, or education.


About the job

Full-time
Brazil
Posted 17 hours ago
engineer
devops
python
aws
machine learning


Working Nomads curates remote digital jobs from around the web.

© 2025 Working Nomads.