Distributed Systems Engineer (Kafka)
Job Summary
We are in search of a talented backend developer with a strong background in Kafka, systems, and data integration. Our team leverages databases, event streaming, Cloud Service Provider (CSP) services, and EDB software to create fault-tolerant and high-performance data solutions. We're looking for candidates who are passionate about distributed systems, machine learning, analytics, and data engineering to enhance our transactional systems excellence.
In this role, you'll work with EDB's data integration and migration team to deliver next-generation data movement and integrated data platform solutions to our customers. Together, we'll build systems that not only attract customers to Postgres but also amplify the value of their data.
Candidate Note: This role is 100% remote. We are only considering candidates located in Brazil.
What your impact will be:
Develop backend systems that power exceptional user experiences by working closely with frontend engineers, designers, and product management
Collaborate with our globally distributed engineering team using tools like GitHub, Jira, and Slack
Participate in all stages of the product development life cycle spanning ideation, design, implementation, testing, deployment, and operation
Contribute to continuous improvement of our platform through knowledge sharing and mentoring within the team and across EDB organizations
Approach problem-solving with an inquisitive, innovative, and detail-oriented mindset while maintaining a focus on delivering value to our customers
What you will bring:
2+ years of hands-on operational experience with Apache Kafka, including the ability to fully manage and maintain Kafka infrastructure for scalability and performance
3+ years of experience in a back-end development role using Go or Java
3+ years of experience with distributed systems (e.g., database systems, Kubernetes clusters, cloud infrastructure)
Proficiency in building production software components (CLIs, services, clients) that interact over HTTP and gRPC
1+ years of experience with Kubernetes (or another container orchestration platform such as OpenShift or EKS)
Experience with cloud environments: AWS, Azure, or GCP (Terraform or other infrastructure-as-code solutions preferred)
Comfortable working iteratively in our lightweight Scrum process and open to feedback from others
Calm and methodical approach with good communication skills
What will give you an edge:
Understanding of the Postgres ecosystem and experience developing against Postgres
Experience working with Oracle database systems, especially change-data-capture and REDO log interactions
Experience with AWS Kinesis, Azure Event Hubs, or GCP Pub/Sub
Experience with ANTLR
“Full stack” experience extending into the front-end layer, with a basic understanding of React, Vue.js, and/or TypeScript
Experience with deploying and operating software on Kubernetes
Strong opinions on the state of data engineering, ML/AI, and data movement solutions or academic literature – and what’s next in the field
Experience with DevOps/SRE responsibilities, ideally in a “you build it, you run it” environment
Experience with logs and metrics integration (Prometheus, Grafana, CSP services)
Experience with change-data-capture (CDC) / ETL / ELT systems (Debezium, Fivetran)
Experience with Spark, Flink, Hive Metastore (or other metadata/catalog stores)
Experience with the Snowflake or Databricks platforms
Involvement in OSS communities (not just Postgres, though that’s a double plus!)