Staff Data Engineer
About the Role
We are seeking a Staff Data Engineer to lead the design, development, and optimization of our data infrastructure and analytics layers, enabling the creation of reliable, scalable, and high-quality data products across the company. This role will report to the Director of Data Engineering and Applications and work closely with both technical teams and non-technical stakeholders across the business. While this is an individual contributor role, it emphasizes mentorship and strategic guidance across both data engineering and analytics engineering functions.
The ideal candidate has deep experience building and maintaining modern data platforms using tools such as dbt, Airflow, and Snowflake (or equivalent), and brings strong expertise in data modeling, orchestration, and production-grade data pipelines. They excel at engaging with non-technical stakeholders to understand business needs and are skilled at translating those needs into well-defined metrics, semantic models, and self-serve analytical tools. They are comfortable shaping architectural direction and promoting best practices across the team, and they thrive in environments that require cross-functional collaboration, clear communication, and a strong sense of ownership.
What You'll Work On:
Design, build, and maintain scalable, reliable data pipelines and infrastructure to support analytics, operations, and product use cases
Develop and evolve dbt models, semantic layers, and data marts that enable trustworthy, self-serve analytics across the business
Collaborate with non-technical stakeholders to deeply understand their business needs and translate them into well-defined metrics and analytical tools
Lead architectural decisions for our data platform, ensuring it is performant, maintainable, and aligned with future growth
Build and maintain data orchestration and transformation workflows using tools like Airflow, dbt, and Snowflake (or equivalent)
Champion data quality, documentation, and observability to ensure high trust in data across the organization
Mentor and guide other engineers and analysts, promoting best practices in both data engineering and analytics engineering disciplines
Your Areas Of Expertise:
Expertise in building and maintaining ELT data pipelines using modern tools such as dbt, Airflow, and Fivetran
Deep experience with cloud data warehouses such as Snowflake, BigQuery, or Redshift
Strong data modeling skills (e.g., dimensional modeling, star/snowflake schemas) to support both operational and analytical workloads
Proficient in SQL and at least one general-purpose programming language (e.g., Python, Java, or Scala)
Experience with streaming data platforms (e.g., Kafka, Kinesis, or equivalent) and real-time data processing patterns
Familiarity with infrastructure-as-code tools like Terraform and DevOps practices for managing data platform components
Hands-on experience with BI and semantic layer tools such as Looker, Mode, Tableau, or equivalent
Compensation & Benefits:
At Taskrabbit, our approach to compensation is designed to be competitive, transparent, and equitable. Total compensation consists of base pay + bonus + benefits + perks.
The base pay range for this position is $136,000 - $180,000. This range represents base pay only and does not include any other total cash compensation amounts, such as company bonus or benefits. Final offer amounts may vary from the amounts listed above and will be determined by factors including, but not limited to, relevant experience, qualifications, geography, and level.