Analytics Engineer
About the role:
We are looking for an Analytics Engineer to work within our Data Platform team, part of the Infrastructure Domain. This role will collaborate with teams across Product and Engineering to simplify access to cleaned and transformed data from our two brands.
Together, you will build new data models and optimise existing ones to drive insights and recommendations from our data. You will integrate dbt into analysts' and engineers' ways of working, reduce the complexity of our existing SQL, and empower data consumers with reliable data to further Zepz’s mission. This role is great for someone who enjoys variety, can work independently, and wants to own projects.
Recent projects include:
Repointing one brand's data models from a monolithic source system to microservices
Refactoring intensive Compliance queries into a dbt workflow
Modelling a multi-brand conversion funnel
Tech Stack:
We have recently consolidated technologies across the two brands and built the new Zepz Data Platform. Our tech stack now includes:
Fivetran
Databricks
dbt
Metaplane
Astronomer
OpenMetadata
Mode
What you will own:
Building and maintaining data models to expose reliable data for analysis and reporting.
Communicating with engineering and business stakeholders to understand commercial requirements, and translating those requirements into technical solutions that make reliable, self-serve data available for decision-making.
Developing standards and best practices for data consumption, including educating data consumers on data quality, availability, and interpretation.
Identifying opportunities to reduce complexity and increase efficiency across data models and our data warehouse.
Ensuring the data and output are of high quality—tested, automated, scalable, and documented.
This role will have an immediate impact on the quality of data presented and used to solve business problems.
What you bring to the table:
You can work independently with dbt to design and implement data models, and you can coach others to do the same.
You are comfortable using SQL daily and are interested in reading Python scripts to understand and extract the data transformation logic they contain.
You have experience with modern data warehouses (e.g., Databricks, Snowflake, BigQuery) and with orchestration and ingestion tools (e.g., Airflow, Fivetran).
You see yourself as a problem-solver who wants to understand the business problem and communicate the commercial impact alongside the technical solution.
You are an advocate for data-driven decision-making who strives to improve processes, establish best practices, and define standards.
You can collaborate across multiple time zones, with a focus on asynchronous communication and documentation.
Nice to have:
You have previously worked at a scaling startup.
You have experience working with batch and real-time data platforms.