What are the responsibilities and job description for the AWS Data Engineer (GC/USC Only) - (W2 Only) - JDT 2024 position at Techlink Systems?
Job Title: IT Data Integration Engineer/Developer
Work Type: Hybrid (Less than 60% onsite)
Travel: Yes (5%, quarterly)
Overtime: Not Required
Job Description
Daily Responsibilities:
1. Develop and Maintain Data Integration Solutions:
- Design and implement data integration workflows using AWS Glue/EMR, Lambda, and Redshift (see the illustrative sketch after this list).
- Leverage PySpark, Apache Spark, and Python to process large datasets effectively.
- Ensure accurate and efficient extraction, transformation, and loading (ETL) of data into target systems.
2. Ensure Data Quality and Integrity:
- Validate and cleanse data to maintain high-quality standards.
- Implement monitoring, validation, and error-handling mechanisms to ensure data integrity across pipelines.
3. Optimize Data Integration Processes:
- Improve the performance, scalability, and cost-efficiency of data workflows on AWS infrastructure.
- Identify and resolve performance bottlenecks by fine-tuning queries and optimizing Redshift performance.
- Regularly refine integration processes to align with evolving business needs.
4. Support Business Intelligence and Analytics:
- Translate business requirements into technical specifications for data pipelines.
- Ensure timely availability of integrated data for analytics and business intelligence purposes.
- Collaborate with data analysts and business stakeholders to address their data needs effectively.
5. Maintain Documentation and Compliance:
- Document data integration workflows, processes, and technical specifications.
- Ensure adherence to data governance policies, industry standards, and regulatory requirements.
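For context, here is a minimal, hypothetical sketch of the kind of PySpark ETL work described in the responsibilities above: extracting raw data from S3, applying basic validation and cleansing, and writing curated output for downstream loading into Redshift. All bucket names, column names, and thresholds are placeholders, not details from this posting.

```python
# Illustrative PySpark ETL sketch (hypothetical paths and column names).
# Reads raw order data from S3, applies basic validation/cleansing,
# and writes curated Parquet back to S3 for downstream loading into Redshift.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-example").getOrCreate()

# Extract: raw CSV landed in an S3 "raw" zone (bucket/prefix are placeholders).
raw = (
    spark.read
    .option("header", "true")
    .csv("s3://example-raw-bucket/orders/")
)

# Transform: cast types, drop rows missing required keys, and deduplicate.
orders = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("double"))
    .dropna(subset=["order_id", "customer_id"])
    .dropDuplicates(["order_id"])
)

# Simple data-quality check: fail fast if any amounts are negative.
bad_rows = orders.filter(F.col("amount") < 0).count()
if bad_rows > 0:
    raise ValueError(f"{bad_rows} rows failed the amount validation check")

# Load: write partitioned Parquet to a curated zone; a Redshift COPY
# (or a Glue/Redshift connector) would typically ingest from here.
(
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

In practice, the exact tooling varies with the team's stack, for example Glue DynamicFrames versus plain Spark on EMR, or a Redshift COPY command versus a Spark-to-Redshift connector.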
Role Focus:
The IT Data Integration Engineer/Developer will design, develop, and manage data integration processes to ensure seamless data flow across the organization. The role involves integrating data from diverse sources, transforming it to meet business requirements, and loading it into target systems such as data warehouses or data lakes. The objective is to enable data-driven decision-making by providing high-quality, consistent, and accessible data.
Desired Qualifications:
Education & Experience:
- Bachelor’s degree in Computer Science, Information Technology, or a related field (Master’s degree preferred).
- 7–10 years of experience in data engineering, database design, and ETL processes.
- 5 years of experience with AWS tools and technologies (e.g., S3, EMR, Glue, Athena, Redshift, Postgres, RDS, Lambda, PySpark).
- 5 years of programming experience in Python, including PySpark.
- 3 years of experience with databases, data marts, or data warehouses.
Skills & Knowledge:
- Proven experience in ETL development, system integration, and CI/CD implementation.
- Expertise in building complex database objects used to move data across multiple environments.
- Strong understanding of data security, privacy, and compliance standards.
- Proficiency in Agile development practices, including sprint planning and retrospectives.
Key Attributes:
- Excellent problem-solving and communication skills.
- Ability to collaborate with cross-functional teams effectively.
- Commitment to data quality and attention to detail.
- Ability to provide technical guidance and mentorship to junior developers.
- Continuous learning mindset to stay updated on emerging technologies and best practices in data engineering.
Salary: $50 - $70