What are the responsibilities and job description for the Senior Kafka Engineer position at URSI Technologies Inc.?
Job Details
Job Title: Senior Kafka Engineer
Location: Tempe, AZ
Duration: 6 Months
Experience Required: 8 Years
Job Summary:
We are seeking a highly experienced Kafka Engineer with expertise in Confluent Kafka, Java/Scala, and distributed systems. The ideal candidate will have a strong background in designing scalable, fault-tolerant data pipelines, optimizing performance, and troubleshooting Kafka messaging systems in cloud environments. Familiarity with Agile methodologies and a strong automation mindset are essential.
Key Responsibilities:
Identify and resolve Kafka messaging issues in a timely and effective manner.
Collaborate with business and IT teams to understand requirements and deliver scalable solutions using Agile methodologies.
Independently design and implement solutions across multiple environments: DEV, QA, UAT, and PROD.
Provide technical leadership, mentorship, and code reviews for engineers on the project.
Administer and maintain distributed Kafka clusters across environments and troubleshoot cluster performance issues.
Design and implement subsystems, microservices, and related components.
Promote and practice an automate-first philosophy throughout the development and deployment lifecycle.
Contribute hands-on code, primarily in Java or Scala, with Python used where project needs dictate.
Required Skills & Expertise:
Confluent Kafka Expertise:
Deep understanding of Kafka core concepts: producers, consumers, topics, partitions, brokers, and replication.
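To illustrate one of these core concepts: Kafka routes a keyed record to a partition by hashing its key, so all records with the same key land on the same partition and stay ordered. The sketch below substitutes `String.hashCode()` for Kafka's actual murmur2-based default partitioner, purely for illustration; the class and method names are hypothetical.

```java
/**
 * Simplified illustration of Kafka's keyed partitioning: records with the
 * same key always map to the same partition, preserving per-key ordering.
 * Kafka's real default partitioner hashes the serialized key with murmur2;
 * String.hashCode() stands in for it here.
 */
public class PartitionSketch {
    static int partitionFor(String key, int numPartitions) {
        // Mask the sign bit so the result is a valid non-negative index,
        // mirroring how Kafka's partitioner handles negative hash values.
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 6;
        // The same key always maps to the same partition...
        System.out.println(partitionFor("order-42", partitions)
                == partitionFor("order-42", partitions)); // prints true
        // ...while different keys spread across the partition set.
        System.out.println(partitionFor("order-42", partitions));
        System.out.println(partitionFor("order-43", partitions));
    }
}
```

This per-key stickiness is why choosing a good record key matters: it determines both ordering guarantees and load distribution across brokers.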
Programming Proficiency:
Strong experience in Java or Scala; Python is a plus.
System Design & Architecture:
Proven ability to design robust, high-throughput, and low-latency Kafka-based data pipelines.
Data Serialization & Schema Management:
Familiarity with JSON, Avro, and Protobuf.
Experience managing data schema evolution.
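As a sketch of what backward-compatible schema evolution looks like in Avro: a new field may be added without breaking consumers on the old schema only if it carries a default value. In the illustrative schema below (record and field names are hypothetical), `currency` is the newly added field; the `"default": "USD"` entry lets readers resolve records written before the field existed.

{
  "type": "record",
  "name": "OrderEvent",
  "namespace": "com.example.events",
  "fields": [
    { "name": "orderId", "type": "string" },
    { "name": "amount", "type": "double" },
    { "name": "currency", "type": "string", "default": "USD" }
  ]
}

Removing a field or changing its type is where evolution gets risky; a schema registry with compatibility checks is typically used to enforce rules like this before producers deploy.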
Kafka Streams API: (Preferred)
Knowledge of stream processing using Kafka Streams.
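To ground this requirement, the core idea behind a Kafka Streams stateful operation such as `groupByKey().count()` is a per-key running aggregate maintained as records arrive. The plain-Java sketch below shows that concept without any Kafka dependency; in a real topology the state would live in a fault-tolerant state store backed by a changelog topic, and the class name here is illustrative.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Plain-Java sketch of the stateful per-key aggregation that
 * Kafka Streams' groupByKey().count() performs over a record stream.
 * A HashMap stands in for the fault-tolerant state store.
 */
public class StreamCountSketch {
    private final Map<String, Long> counts = new HashMap<>();

    // Process one record: update the key's running count and return it,
    // as a Streams processor would forward the updated value downstream.
    long process(String key) {
        return counts.merge(key, 1L, Long::sum);
    }

    public static void main(String[] args) {
        StreamCountSketch sketch = new StreamCountSketch();
        for (String key : List.of("click", "view", "click")) {
            System.out.println(key + " -> " + sketch.process(key));
        }
        // Prints: click -> 1, view -> 1, click -> 2
    }
}
```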
Monitoring & Troubleshooting:
Experience with tools for Kafka cluster health monitoring and issue resolution.
Cloud Integration:
Experience deploying and managing Kafka on AWS, Azure, or Google Cloud Platform (GCP).
Distributed Systems:
Solid understanding of distributed systems principles and data consistency models.