What are the responsibilities and job description for the Kafka Engineer / Streaming Data Platform Engineer position at I-Giants?
Job Details
Job Title: Kafka Engineer / Streaming Data Platform Engineer
Location: O'Fallon, MO
Work Arrangement: Day 1 onsite, hybrid (3 days/week)
Employment Type: C2H (Contract-to-Hire)
Experience: 9 years
About the Role:
We are looking for a highly skilled Kafka Engineer with strong expertise in Apache Kafka, Apache Flink, and distributed systems to join our team at our client's site in O'Fallon, MO. This role demands hands-on experience with real-time stream processing, security protocols, and observability practices. You will be instrumental in architecting and maintaining scalable, resilient, and secure streaming data platforms in a hybrid cloud environment.
Key Responsibilities:
- Design, implement, and maintain high-throughput, low-latency data pipelines using Apache Kafka and Apache Flink.
- Manage Kafka cluster configurations, tuning, monitoring, and troubleshooting.
- Build and enhance observability dashboards and monitoring alerts for Kafka and Flink using platform metrics and logs.
- Develop and support event-driven architectures using message brokers like Kafka and Pulsar.
- Implement secure Kafka configurations using SSL/TLS, certificate-based authentication, and mutual SSL (mTLS).
- Collaborate on implementing Infrastructure-as-Code (IaC) for cloud-based deployments.
- Participate in debugging JVM-based applications, including analysis of heap dumps, thread dumps, and performance bottlenecks.
- Contribute to CI/CD workflows using industry-standard tools such as Jenkins, Azure DevOps, and XL Release.
- Build and secure APIs aligned with enterprise API governance standards and assist in API cataloging.
- Work in Agile teams, applying Scrum or Kanban methodologies, and contribute to sprint planning and release cycles.
Required Skills & Experience:
- Strong expertise in Apache Kafka setup, administration, security, performance tuning, and producer/consumer configuration
- Hands-on experience with the Apache Flink DataStream API, including windowing, state management, and checkpointing
- Proficiency in Java, with experience in three or more languages or technologies (Java, SQL, .NET, JavaScript, etc.)
- Experience with Change Data Capture (CDC), schema registries, and Avro serialization
- In-depth understanding of SSL/TLS, certificate handling, and encryption/decryption in Kafka
- Experience working with cloud platforms (AWS/Google Cloud Platform/Azure), Kubernetes, and Docker
- Ability to work with non-functional requirements such as throughput, scalability, authentication, authorization, and regulatory compliance
- Skilled in creating CI/CD pipelines, release orchestration, and infrastructure automation
- Familiarity with debugging techniques for distributed JVM-based systems
- Strong problem-solving and collaboration skills in a modern SDLC (DevOps, Continuous Delivery)
Preferred Qualifications:
- Experience with Swagger/OpenAPI, API versioning, and API marketplaces
- Familiar with batch vs streaming architectural patterns
- Exposure to big data pipelines and large-scale message processing
- Strong understanding of security best practices in distributed and microservices architecture
Salary: $50 - $55