Staff Data Engineer - tvScientific
About tvScientific
tvScientific is the first and only CTV advertising platform purpose-built for performance marketers. We leverage massive data and cutting-edge science to automate and optimize TV advertising to drive business outcomes. Our solution combines media buying, optimization, measurement, and attribution in one efficient platform. Our platform is built by industry leaders with a long history in programmatic advertising, digital media, and ad verification who have now purpose-built a CTV performance platform advertisers can trust to grow their business.
As a Staff Data Engineer at tvScientific, you will be a key player in implementing the robust data infrastructure that powers our data-heavy company. You will collaborate with our cross-functional teams to evolve our core data pipelines, design for efficiency as we scale, and store data in optimal engines and formats. This is an individual contributor role in which you will define and implement a strategic vision for data engineering within the organization.
What you'll do:
Design and implement robust data infrastructure in AWS, using Spark with Scala
Evolve our core data pipelines to efficiently scale for our massive growth
Store data in optimal engines and formats, matching your designs to our performance needs and cost factors
Collaborate with our cross-functional teams to design data solutions that meet business needs
Design and implement knowledge graphs, exposing their functionality via both batch processing and APIs
Leverage and optimize AWS resources while designing for scale
Collaborate closely with our Data Science and Product teams
How we'll define success:
Successful design and implementation of scalable and efficient data infrastructure
Timely delivery and optimization of data assets and APIs
High attention to detail in the implementation of automated data quality checks
Effective collaboration with cross-functional teams
What we're looking for:
Production data engineering experience
Proficiency in Spark and Scala, with proven experience building data infrastructure in Spark using Scala
Experience delivering significant technical initiatives and building reliable, large-scale services
Experience in delivering APIs backed by relationship-heavy datasets
Familiarity with data lakes, cloud warehouses, and storage formats
Strong proficiency in AWS services
Expertise in SQL for data manipulation and extraction
Excellent written and verbal communication skills
Bachelor's degree in Computer Science or a related field
Nice-to-haves:
Experience in adtech
Experience implementing data governance practices, including data quality, metadata management, and access controls
Strong understanding of privacy-by-design principles and handling of sensitive or regulated data
Familiarity with open table formats such as Apache Iceberg and Delta Lake
Previous experience building out a Data Engineering function
Proven experience working closely with Data Science teams on machine learning pipelines
In-Office Requirement Statement:
We recognize that the ideal environment for work is situational and may differ across departments. What this looks like day-to-day can vary based on the needs of each organization or role.
Relocation Statement:
This position is not eligible for relocation assistance. Visit our PinFlex page to learn more about our working model.
#LI-SM4
#LI-REMOTE