Data Engineer II (L3)
About Your Team
At Teachable, our Data team supports data-driven decision-making across the company. Reporting to the Data Engineering Manager, you will work closely with stakeholders to gather requirements, model data, and support functions across Teachable’s business.
What is the role?
We are seeking a skilled Data Engineer to join our dynamic Data team. The ideal candidate will have a comprehensive understanding of the data lifecycle from ingestion to consumption, with a particular focus on data modeling. This role will support various business domains, such as Product, Marketing and Finance, by organizing and structuring data to support robust analytics and reporting.
This role will be part of a highly collaborative team made up of US- and Brazil-based employees.
In this role you'll:
Data Ingestion to Consumption: Manage the flow of data from ingestion to final consumption. Organize data, understand modern data structures and file types, and ensure proper storage in data lakes and data warehouses.
Data Modeling: Develop and maintain entity-relationship models. Relate business and calculation rules to data models to ensure data integrity and relevance.
Pipeline Implementation: Design and implement data pipelines, preferably in SQL or Python, to ensure efficient data processing and transformation.
Reporting Support: Collaborate with business analysts and other stakeholders to understand reporting needs and ensure that data structures support these requirements.
Documentation: Maintain thorough documentation of data models, data flows, and data transformation processes. Documentation also supports knowledge sharing and status updates with non-technical stakeholders.
Collaboration: Work closely with other members of the Data Team and cross-functional teams to support various data-related projects.
Quality Assurance: Implement and monitor data quality checks to ensure accuracy and reliability of data.
What You’ll Bring
2+ years of experience working within Data Engineering, Analytics Engineering, Business Intelligence or similar functions.
Experience with databases and data storage technologies (PostgreSQL, Redshift, S3), including indexing and partitioning to handle large data volumes and build optimized queries and databases, and with file manipulation and organization using formats such as Parquet.
Experience with the 'ETL/ELT as code' approach for building Data Marts and Data Warehouses.
Experience with cloud infrastructure (AWS) and knowledge of solutions like Athena, Redshift Spectrum, SageMaker and Apache Airflow for creating DAGs.
Experience with error and inconsistency alerts, including detailed root cause analysis, correction, and improvement proposals.
Critical thinking for evaluating contexts and making decisions about delivery formats that meet the company’s needs (e.g., materialized views, etc.).
Knowledge in Python is a plus.