What are the responsibilities and job description for the Data Engineer position at Adaptive Technology Insights?
We are seeking a highly skilled Data Engineer with expertise in Java, Scala, Kafka, GCP or AWS, and SQL. The ideal candidate will design, develop, and optimize data pipelines and workflows while collaborating with cross-functional teams to ensure seamless data integration and processing.
Key Responsibilities:
- Design, develop, and maintain scalable and efficient data pipelines.
- Use Java and Scala with Kafka to process and stream large datasets.
- Utilize GCP/AWS services to manage and store data efficiently.
- Develop and optimize SQL queries for data retrieval and reporting.
- Collaborate with data scientists, analysts, and engineers to support business requirements.
- Implement best practices for data governance, security, and compliance.
- Troubleshoot and resolve issues related to data pipelines and infrastructure.
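To give candidates a feel for the day-to-day work, here is a minimal, illustrative sketch of the kind of aggregation step a pipeline in this role might perform: counting events per user before the results are persisted to a warehouse. All names here (`ClickEvent`, `countsByUser`) are hypothetical, and a production version would consume events from a Kafka topic rather than an in-memory list.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical pipeline stage: aggregate click events per user.
// In production, events would be deserialized from a Kafka topic
// and results written to a warehouse such as BigQuery or Redshift.
public class PipelineSketch {

    // A minimal event record; field names are illustrative only.
    record ClickEvent(String userId, String page) {}

    // Count events per user -- the kind of groupBy/count aggregation
    // a streaming or batch pipeline routinely performs.
    static Map<String, Long> countsByUser(List<ClickEvent> events) {
        return events.stream()
                .collect(Collectors.groupingBy(ClickEvent::userId, Collectors.counting()));
    }

    public static void main(String[] args) {
        List<ClickEvent> events = List.of(
                new ClickEvent("alice", "/home"),
                new ClickEvent("bob", "/pricing"),
                new ClickEvent("alice", "/docs"));
        Map<String, Long> counts = countsByUser(events);
        System.out.println(counts.get("alice") + "," + counts.get("bob")); // prints "2,1"
    }
}
```

In a real deployment, the same logic would typically live inside a Kafka Streams or Spark job so the aggregation runs continuously over the event stream rather than over a finite list.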
Required Skills & Qualifications:
- Experience: 5-10 years in data engineering or a related role.
- Programming: Proficiency in Java and Scala for data processing.
- Cloud Platforms: Hands-on experience with GCP (Google Cloud Platform) or AWS (Amazon Web Services).
- Database & SQL: Strong knowledge of SQL and database management.
- Data Streaming: Experience with Kafka for real-time data processing.
- Big Data Tools: Exposure to Spark, Hadoop, or similar technologies (preferred).
- Problem-Solving: Ability to troubleshoot and optimize data workflows.
Preferred Qualifications:
- Experience with data warehousing solutions (e.g., BigQuery, Redshift, Snowflake).
- Familiarity with ETL tools and data transformation processes.
- Strong knowledge of CI/CD pipelines for data deployment.