Data Engineer - Analytics Data Engineering
Role Description
In this role you will build large, scalable analytics pipelines using modern data technologies. This is not a “maintain existing platform” or “make minor tweaks to current code base” kind of role. We are effectively building from the ground up and plan to leverage the most recent Big Data technologies. If you enjoy building new things without being constrained by technical debt, this is the job for you!
Our Engineering Career Framework is publicly viewable and describes what is expected of our engineers at each career level. Check out our blog post on this topic and more here.
Responsibilities
Help define company data assets (data models) and build Spark, SparkSQL, and HiveSQL jobs to populate them
Help define and design data integrations and data quality frameworks, and evaluate open source and vendor tools for data lineage
Work closely with Dropbox business units and engineering teams to develop a long-term Data Platform architecture strategy that is efficient, reliable, and scalable
Conceptualize and own the data architecture for multiple large-scale projects, while evaluating design and operational cost-benefit tradeoffs within systems
Collaborate with engineers, product managers, and data scientists to understand data needs, representing key data insights in a meaningful way
Design, build, and launch collections of sophisticated data models and visualizations that support multiple use cases across different products or domains
Optimize pipelines, dashboards, frameworks, and systems to facilitate easier development of data artifacts
On-call work may occasionally be necessary to help address bugs, outages, or other operational issues, with the goal of maintaining a stable and high-quality experience for our customers.
Requirements
5+ years of Spark, Python, Java, C++, or Scala development experience
5+ years of SQL experience
5+ years of experience with schema design, dimensional data modeling, and medallion architectures
Experience with the Databricks platform and data lake architectures for large-scale data processing and analytics
Excellent product-strategy thinking and communication skills, with the ability to influence product and cross-functional teams by identifying data opportunities that drive impact
BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent technical experience
Experience designing, building and maintaining data processing systems
Preferred Qualifications
7+ years of SQL experience
7+ years of experience with schema design, dimensional data modeling, and medallion architectures
Experience with Airflow or other similar orchestration frameworks
Experience building data quality monitoring using Monte Carlo or similar tools
Compensation
Poland Pay Range
183 600–248 400 zł (PLN)