(Senior) Analytics Engineer
Company Description
InPost Group is an innovative European out-of-home delivery company, revolutionizing the way parcels are delivered to customers. With operations across several countries, our network of intelligent lockers (Paczkomat®) gives customers a fast, convenient, and secure delivery option. Our mission is to provide a best-in-class user experience for merchants and consumers – 'Simplify everything' – redefining e-commerce logistics. We innovate the market through constant technological research and meticulous attention to the customer.
The Data & AI department is seeking a (Senior) Analytics Engineer to join our Core Team. In this role, you'll shape analytical standards and implement innovative solutions, impacting our operations across Poland and 7 international markets. Remote work is possible.
Daily you’ll work with: Apache Spark in Databricks and its various platform features, Python/PySpark, SQL, Kafka, Power BI, GitLab, Google BigQuery, and an in-house data modeling tool.
Job Description
On a daily basis you will:
drive innovation and improvements by evaluating new tools (e.g., Data Quality monitoring) and platform features (e.g., Genie Space on Databricks).
monitor the effectiveness of solutions by tracking implemented actions (e.g., naming convention adherence, metadata completeness, merge request (MR) quality).
define workflows and coding standards for style, maintainability, and best practices on the analytical platform (a minimal sketch of such a standard follows this list).
evangelize best practices among platform users and encourage teams to continuously improve their working methods; advocate for coding standards through workshops and guidelines.
monitor the market for new tools and methodologies in the data product development space.
while the role involves conceptual work, you'll also have opportunities for hands-on coding, such as analyzing AI readiness and implementing AI solutions to automate data development tasks.
work with various Data & AI competencies (Data Consultants, Data Engineers, AI Engineers, Cloud Engineers, Data Architects).
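For flavor, here is a minimal, hypothetical sketch of the kind of coding standard the role would define and promote: explicit naming, a typed signature, and a docstring in a small PySpark transformation. The column names (picked_up_at, delivered_at) are illustrative assumptions, not a real InPost schema.

```python
from pyspark.sql import DataFrame
from pyspark.sql import functions as F


def add_delivery_duration_hours(parcels: DataFrame) -> DataFrame:
    """Return `parcels` with an added `delivery_duration_hours` column.

    Assumes `picked_up_at` and `delivered_at` are timestamp columns;
    both names are illustrative, not a real schema.
    """
    # Casting a timestamp to long yields epoch seconds in Spark.
    duration_seconds = (
        F.col("delivered_at").cast("long") - F.col("picked_up_at").cast("long")
    )
    return parcels.withColumn("delivery_duration_hours", duration_seconds / 3600)
```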
Qualifications
Which skills should you bring to the pitch:
At least 5 years of experience in an analytical role working with large datasets
Experience in data modeling and implementing complex data-driven solutions is a strong plus
Excellent proficiency in Python/PySpark for data analysis, SQL for data processing, and bash scripting for managing Git repositories
Comprehensive understanding of the technical aspects of data warehousing, including dimensional data modeling and ETL/ELT processes
Experience with real-time data processing and the ability to handle data from various backend/frontend systems (a minimal streaming sketch follows this list)
Familiarity with cloud-based data platforms (GCP/Azure/AWS)
The ability to present technical concepts and solutions to diverse audiences
Self-motivated with the ability to work independently and manage multiple tasks
Excellent interpersonal skills with the ability to collaborate effectively with cross-functional teams
Fluent in English: verbal and written
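To illustrate the real-time processing mentioned above, here is a minimal PySpark Structured Streaming sketch that reads from Kafka, one of the technologies in the daily stack. The broker address, topic name, and in-memory sink are placeholder assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("parcel-events-preview").getOrCreate()

# Kafka source; broker and topic are hypothetical placeholders.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "parcel-events")
    .load()
)

# Kafka delivers key/value as binary, so cast the payload before parsing.
parsed = events.select(F.col("value").cast("string").alias("raw_event"))

# In-memory sink for quick inspection only; a production job would write
# to a durable sink such as Delta tables instead.
query = (
    parsed.writeStream.format("memory")
    .queryName("parcel_events_preview")
    .start()
)
```

A production pipeline would additionally parse the JSON payload against a schema and apply checkpointing; this sketch only shows the source-to-sink shape.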
Nice to have:
Experience in working with Apache Spark in Databricks
Familiarity with modern data stack tools such as Apache Airflow and dbt
Familiarity with data visualization tools such as Power BI/Tableau/Looker
Knowledge of data governance principles and practices
Ability to thrive in a highly agile, intensely iterative environment
Positive and solution-oriented mindset
Additional Information
The recruitment process:
Step 1: HR Interview
Step 2: Devskiller test
Step 3: Technical Interview (60 min)
Step 4: Home task
Step 5: Home task presentation and discussion (60 min)