Data Engineer (P3)
See Yourself at Twilio
Join us as our next Data Engineer (L3) on the Customer Data Platform team.
About the Job
As a Data Engineer (L3), you will play a key role in building and maintaining data infrastructure that processes large-scale datasets efficiently and reliably. You’ll contribute to the design and implementation of high-volume pipelines, collaborate with engineers across teams, and help ensure our platform remains robust, scalable, and easy to use.
This is a great role for someone with a strong data engineering background who’s ready to step into broader responsibilities and help shape the evolution of the Customer Data Platform.
Responsibilities
Build and maintain scalable data pipelines using Spark, Scala, and cloud-native services.
Improve the performance and reliability of our real-time and batch data processing systems.
Contribute to platform features that support key customer-facing products such as identity resolution, audience segmentation, and real-time personalization.
Work closely with Staff and Principal Engineers to execute architectural decisions and implementation plans.
Collaborate across product and engineering teams to deliver high-impact, customer-facing capabilities.
Write clean, maintainable, and well-tested code that meets operational and compliance standards.
Participate in code reviews, technical discussions, and incident response efforts to improve system quality and resiliency.
Qualifications
Required:
5-7 years of industry experience in backend or data engineering roles.
Strong programming skills in Scala, Java, or a similar language.
Solid experience with Apache Spark or other distributed data processing frameworks.
Working knowledge of batch and stream processing architectures.
Experience designing, building, and maintaining ETL/ELT pipelines in production.
Familiarity with AWS and tools like Parquet, Delta Lake, or Kafka.
Comfortable operating in a CI/CD environment with infrastructure-as-code and observability tools.
Strong collaboration and communication skills.
Nice to Have:
Experience with Trino, Flink, or Snowflake.
Familiarity with GDPR, CCPA, or other data governance requirements.
Experience with high-scale event processing or identity resolution.
Exposure to multi-region, fault-tolerant distributed systems.
Location
This role is remote and based in India (Karnataka, Maharashtra, Telangana, New Delhi, and Tamil Nadu only).
Travel
Occasional travel may be required for team meetings or company events.
What We Offer
Segment offers a competitive salary, equity, generous time off, healthcare, wellness leave, and a supportive remote-first culture. You’ll get to work on complex engineering challenges with a team that values mentorship, autonomy, and career growth.