Senior Data Engineer
About the Job
You will be a member of the Data Application and Engineering Team. We are a force multiplier, owning the data, analysis, and knowledge infrastructure that enables us and our teammates to move faster and smarter.
The Data Application and Engineering team’s mission is to empower decision-making with data, maintain data integrity and security, and enable scalability and agility. The team’s work includes ingesting data, building ETL pipelines, and creating services and tools that let others use data more efficiently. You will develop and enhance our data warehouse, define processes for data monitoring and alerting, and maintain data integrity across our data ecosystem. You will work with cross-functional teams and internal stakeholders to define requirements and build solutions that meet them, and with other engineers to ensure that our data platform and infrastructure are scalable and reliable.
Responsibilities
Work on high-impact projects that improve data availability and quality and provide reliable access to data for the rest of the business.
Build and manage a state-of-the-art data pipeline architecture, leveraging our tech stack to fulfill business requirements.
Assemble large, complex data sets that meet functional and non-functional business requirements.
Oversee the ingestion of data into Snowflake, using Fivetran as the data integration platform, and make that data available for use through dbt and Looker.
Conduct thorough analyses and debugging of data pipeline issues, ensuring data integrity and reliability.
Communicate strategies and processes around data modeling and architecture to the data engineering team as well as to other teams.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL, Snowflake and AWS technologies.
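For illustration only, here is a minimal sketch of the kind of Snowflake load step described above, written with the Snowflake Python connector. The warehouse, database, schema, stage, and table names are hypothetical placeholders, not Taskrabbit's actual configuration.

```python
# Illustrative sketch only: a minimal extract-and-load step into Snowflake.
# All object names and credentials below are hypothetical placeholders.
import os
import snowflake.connector


def load_raw_events() -> None:
    """Copy newly staged files from an external stage into a raw table."""
    conn = snowflake.connector.connect(
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        warehouse="LOAD_WH",   # hypothetical warehouse
        database="RAW",        # hypothetical database
        schema="EVENTS",       # hypothetical schema
    )
    try:
        cur = conn.cursor()
        # COPY INTO pulls any new files from the external stage (e.g. S3)
        # into the raw table; files already loaded are skipped automatically.
        cur.execute(
            """
            COPY INTO RAW.EVENTS.TASK_EVENTS
            FROM @RAW.EVENTS.S3_TASK_EVENTS_STAGE
            FILE_FORMAT = (TYPE = 'JSON')
            """
        )
        print(f"Load complete; {cur.rowcount} files processed")
    finally:
        conn.close()


if __name__ == "__main__":
    load_raw_events()
```

In a production setup, a step like this would typically be handled by Fivetran or an orchestrator rather than a standalone script, with dbt models transforming the raw table downstream.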
Requirements
Minimum of 3 years of experience in data engineering, with substantial work on ETL pipeline construction, preferably in environments using Fivetran, dbt, Snowflake, and Looker.
Proficient in advanced SQL, with a strong background in designing and implementing ETL processes.
Demonstrable coding skills in Python.
Proven track record of managing large datasets, including their processing, transformation, and transportation.
Experienced with cloud services, particularly AWS, and familiar with services such as EC2, SQS, SNS, RDS, and ElastiCache.
Bachelor’s degree in Computer Science, Software Engineering, or a related field.
Deep understanding of the complete data stack, including Apache Hadoop, Apache Spark, Spark Streaming, and Kafka, and the ability to adapt to and learn new technologies.
Direct experience in deploying machine learning models into production environments, particularly using Java/Python.
Familiarity with data visualization and business intelligence tools, specifically Looker/Sigma, to translate data into actionable insights.
Compensation & Benefits:
At Taskrabbit, our approach to compensation is designed to be competitive, transparent and equitable. Our total compensation consists of base pay + bonus + benefits + perks.
The base pay range for this position is $115,000 - $160,000. This range is representative of base pay only, and does not include any other total cash compensation amounts, such as company bonus or benefits. Final offer amounts may vary from the amounts listed above, and will be determined by factors including, but not limited to, relevant experience, qualifications, geography, and level.
