Data Engineer
The Motley Fool is looking for a highly skilled Freelance Data Engineer with strong Python and SQL skills. You’ll be at the forefront of our AI integration initiatives, designing and optimizing the data pipelines and infrastructure that power our next generation of AI and machine learning products.
This is a senior-level independent contractor position requiring a minimum of 5 years of relevant experience and ~40 hours of work per week for at least 12 months.
The role is 100% remote.
Who are we?
We are The Motley Fool, a purpose-driven financial information and services firm with more than 30 years of experience making the world smarter, happier, and richer. But what does that even mean?! It means we’re helping Fools (always with a capital “F”) demystify the world of finance, beat the stock market, and achieve personal wealth and happiness through our products and services.
The Motley Fool is firmly committed to diversity, inclusion, and equity. We are a motley group of overachievers who have built a culture of trust founded on Foolishness, fun, and a commitment to making the world smarter, happier, and richer. However you identify or whatever winding road has led you to us, please don't hesitate to apply if the description above leaves you thinking, 'Hey! I could do that!'
What does this team do?
Our AI & Data team builds the backbone for AI-driven initiatives across the company. We design and maintain data pipelines, enable large language model (LLM) training and inference, and productionize AI prototypes into scalable, reliable systems.
What would you do in this role?
Design, construct, and maintain highly scalable data pipelines and data lake architectures
Implement data ingestion routines from diverse sources and formats
Optimize data flow and data collection across cross-functional teams (executive, product, analytics, design)
Work with data scientists and AI specialists to ensure data availability and quality for training/inference
Productionize proof-of-concept models into production-ready pipelines
Oversee and enhance system observability (monitoring, logging, diagnostics)
Stay current with emerging data tools and patterns, recommending improvements
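To give a flavor of the ingestion and data-quality work described above, here is a minimal, self-contained Python sketch of a validate-and-load step. The record fields, table, and column names are purely illustrative, not taken from any actual Motley Fool system:

```python
import sqlite3

# Illustrative raw records, as they might arrive from an upstream source.
raw_records = [
    {"ticker": "ABC", "close": "101.5", "date": "2024-01-02"},
    {"ticker": "XYZ", "close": None, "date": "2024-01-02"},  # incomplete row
]

def clean(record):
    """Validate and normalize one record; return None to drop it."""
    if record.get("close") is None:
        return None
    return (record["ticker"], float(record["close"]), record["date"])

# In-memory SQLite stands in for the real warehouse target.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (ticker TEXT, close REAL, date TEXT)")
rows = [r for r in (clean(rec) for rec in raw_records) if r is not None]
conn.executemany("INSERT INTO prices VALUES (?, ?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM prices").fetchone()[0]
print(count)  # 1 valid row loaded; the incomplete row was dropped
```

In practice this kind of logic would run inside an orchestrated pipeline (e.g. an Airflow task) against a real warehouse such as Snowflake, but the validate-transform-load shape is the same.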
Required Skills & Experience:
5+ years of data engineering or backend development experience
Strong proficiency in Python and SQL
Experience building and maintaining data pipelines (ETL/ELT) and data lake architectures
Hands-on experience with Snowflake (tasks, exports, unstructured data, S3 integration)
Experience with Iceberg (ideally on AWS, e.g., S3 + Athena/Glue) for managing large-scale data lake tables
Cloud-based development (AWS preferred; familiarity with serverless patterns a plus)
Experience with Docker and Terraform (Infrastructure as Code)
Strong problem-solving skills and ability to work in a fast-paced, fully remote environment
Excellent communication and collaboration skills
Preferred Experience:
Experience with big data frameworks (Spark, Kafka, data streaming tech)
Familiarity with NLP and transformer models (GPT, BERT, etc.)
Knowledge of ML frameworks and libraries
Experience with workflow orchestration tools (Airflow)
Compensation:
Below is our target compensation range. While we are budget conscious, we're also eager to find the right person for this role, so if your target is outside of this range, please don't hesitate to apply; we'd be happy to have a conversation.
Hourly Pay Range
$95–$110 USD