Staff Data Engineer

G-P
Full-time
India
Posted 1 year ago
The job listing has expired. Unfortunately, the hiring company is no longer accepting new applications.


Job Summary:

We are seeking a highly experienced Staff Data Engineer to lead the design, development, and optimisation of our data architecture, pipelines, and workflows. This role will serve as a technical lead within the organisation, setting best practices, mentoring team members, and solving complex data challenges to enable data-driven decision-making at scale.

As a Staff Data Engineer, you will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to design systems that transform raw data into actionable insights while ensuring scalability, security, and reliability.

Key Responsibilities:

Technical Leadership

  • Design and implement scalable, reliable data pipelines capable of processing large volumes of structured and unstructured data.

  • Define and enforce data engineering best practices, coding standards, and architectural principles across teams.

  • Conduct code reviews and provide mentorship to junior and senior data engineers.

Data Pipeline Development

  • Build and maintain batch and real-time data pipelines using tools such as Apache Spark, Amazon Kinesis, and other AWS services.

  • Work with multiple teams to coordinate the event-driven architecture, managing inter-dependencies and promoting consistency.

  • Ensure data quality, governance, and security by implementing monitoring, validation, and compliance tools.
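In practice, the quality, validation, and monitoring responsibilities above are often implemented as explicit gates inside the pipeline itself. The sketch below is a minimal, illustrative example in plain Python with invented toy data — not the Spark/Kinesis stack named in this posting — showing the general pattern of parsing, validating, and quarantining records before aggregation:

```python
import json

# Hypothetical toy records standing in for raw pipeline input; a real
# pipeline would consume these from a source such as Kinesis or S3.
RAW_EVENTS = [
    '{"user_id": 1, "amount": 40.0}',
    '{"user_id": 2, "amount": -5.0}',  # fails validation: negative amount
    'not-json',                        # fails validation: malformed record
    '{"user_id": 1, "amount": 60.0}',
]

def validate(record: dict) -> bool:
    """Basic data-quality gate: required field present and amount non-negative."""
    return "user_id" in record and record.get("amount", -1) >= 0

def run_pipeline(raw_lines):
    """Parse, validate, and aggregate per-user totals; quarantine bad records."""
    good, quarantined = [], []
    for line in raw_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            quarantined.append(line)   # malformed input kept for inspection
            continue
        (good if validate(record) else quarantined).append(record)
    totals = {}
    for r in good:
        totals[r["user_id"]] = totals.get(r["user_id"], 0.0) + r["amount"]
    return totals, quarantined

totals, quarantined = run_pipeline(RAW_EVENTS)
print(totals)            # {1: 100.0}
print(len(quarantined))  # 2
```

The same separation of concerns — parse, validate, quarantine, aggregate — scales up to distributed engines, where the quarantine path typically feeds monitoring and compliance tooling.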

Collaboration & Cross-Functional Engagement

  • Partner with product, analytics, and data science teams to understand business requirements and translate them into technical solutions.

  • Work closely with DevOps and software engineering teams to deploy and maintain production-ready data infrastructure.

Innovation & Scalability

  • Evaluate and recommend emerging technologies and frameworks to ensure the data platform remains future-proof.

  • Drive initiatives to improve the performance, scalability, and efficiency of existing systems.


Required Skills & Experience

  • 12+ years of experience in the data engineering field, with at least 2 years in a senior or staff-level role.

  • Expertise in designing and implementing scalable data architectures for big data platforms.

  • Strong programming skills in Python and Scala.

  • Deep experience with distributed data processing systems such as Apache Spark, Databricks, and Delta Lake.

  • Proficiency with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., DynamoDB).

  • Strong understanding of ETL/ELT workflows, data warehousing concepts, and modern data lake architectures.

  • Ability to apply an established data governance model to sustain data quality for data objects, and to implement the operating mechanisms needed to ensure compliance.

  • Knowledge of CI/CD practices.

  • Excellent problem-solving skills and the ability to design creative, efficient solutions for complex data challenges.

  • Background in AI and machine learning pipelines is a plus.

  • Proactive, self-driven, and detail-oriented with a strong sense of ownership.
