What are the responsibilities and job description for the Bigdata Engineer position at Trident Consulting Inc.?
Job Details
Trident Consulting is seeking a "Bigdata Engineer" for one of our clients in Irving, TX.
Job Title: Bigdata Engineer
Location: Irving, TX (3 days onsite per week) - Only local to TX
Employment Type: Full-time
Key skills: Spark, PySpark, Java, Python, Scala, DevOps, CI/CD, Kafka, Spring Framework, AWS, and MongoDB/NoSQL DB
Our Data Platform Engineering team is on the cutting edge. We research, adapt, and deploy the latest open-source data platforms to meet unique needs. We're a collaborative group that thrives on technical challenges and the satisfaction of building highly performant systems.
We're seeking a passionate and highly skilled Lead Java Data Engineer to guide and mentor a talented team of engineers in building and maintaining a next-generation data platform. If you're a natural leader with a deep understanding of Java and distributed systems, and a passion for pushing the boundaries of Big Data technology, we want to hear from you.
Responsibilities:
* Technical Leadership: Provide technical leadership and mentorship to a team of data engineers, fostering a culture of collaboration, innovation, and continuous learning.
* Architecture & Design: Lead the design and development of highly scalable, low-latency, fault-tolerant data pipelines and platform components that meet Citi's evolving business needs.
* Open Source Expertise: Stay abreast of emerging open-source data technologies and evaluate their suitability for integration into Citi's platform.
* Performance Optimization: Continuously identify and implement performance optimizations across the data platform to ensure optimal efficiency and responsiveness.
* Collaboration & Communication: Partner closely with stakeholders across engineering, data science, and business teams to understand requirements and translate them into robust technical solutions.
* Delivery & Execution: Drive the timely and high-quality delivery of data platform projects, adhering to agile methodologies and best practices.
Qualifications:
* Java Expertise: 5 - 8 years of hands-on experience developing high-performance Java applications (Java 11 preferred) with a strong foundation in core Java concepts, OOP, and OOAD.
* Data Engineering Fundamentals: Proven experience building and maintaining data pipelines using technologies like Kafka, Apache Spark, or Apache Flink. Familiarity with event-driven architectures and experience in developing real-time, low-latency applications is essential.
* Distributed Systems: Deep understanding of distributed systems concepts and experience with MPP platforms such as Trino (Presto) or Snowflake.
* Deployment & Orchestration: Experience deploying and managing applications on container orchestration platforms like Kubernetes, OpenShift, or ECS.
* Leadership & Communication: Demonstrated ability to lead and mentor engineering teams, communicate complex technical concepts effectively, and collaborate across diverse teams.
* Problem Solving & Analytical Skills: Excellent problem-solving skills and a data-driven approach to decision-making.
This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.
Great candidates are curious about data engineering and about understanding business processes through the lens of the data traces they leave; knowledgeable in data technologies (databases, streaming systems like Kafka, data modeling and transformations, and data-quality tools and methods, as well as Java and Python); and, equally importantly, eager to learn more and master new technologies.