1. Snowflake, Snowpark: The candidate should have a deep understanding of the Snowflake data warehousing platform and be proficient in using Snowpark for data processing and analytics (a minimal Snowpark sketch follows this list).
2. AWS services (Airflow): The candidate should have hands-on experience with AWS services, particularly Apache Airflow for orchestrating complex data workflows and pipelines.
3. AWS services (Lambda): Proficiency in AWS Lambda for serverless computing and event-driven architecture is essential for this role.
4. AWS services (Glue): The candidate should be well-versed in AWS Glue for ETL (Extract, Transform, Load) processes and data integration.
5. Python: Strong programming skills in Python are required for developing data pipelines, data transformations, and automation tasks.
6. DBT: Experience with DBT (Data Build Tool) for modeling data and creating data transformation pipelines is a plus.
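For illustration only, and not part of the original posting: a minimal sketch of the kind of Snowpark for Python work the role involves. The connection parameters, table names (ORDERS, ANALYTICS.CUSTOMER_TOTALS), and column names are hypothetical placeholders.

```python
# Illustrative sketch: aggregate data inside Snowflake with Snowpark for Python.
# All names and credentials below are placeholders, not details from the posting.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_

connection_parameters = {
    "account": "<account_identifier>",  # placeholder credentials
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "EXAMPLE_DB",
    "schema": "RAW",
}

session = Session.builder.configs(connection_parameters).create()

# Aggregate order amounts per customer entirely inside Snowflake,
# then persist the result to a (hypothetical) analytics table.
orders = session.table("ORDERS")
customer_totals = (
    orders.group_by(col("CUSTOMER_ID"))
          .agg(sum_(col("AMOUNT")).alias("TOTAL_AMOUNT"))
)
customer_totals.write.mode("overwrite").save_as_table("ANALYTICS.CUSTOMER_TOTALS")

session.close()
```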
Responsibilities:
- Design, develop, and maintain data pipelines and ETL processes using Snowflake, AWS services, Python, and DBT (see the orchestration sketch after this list).
- Collaborate with data scientists and analysts to understand data requirements and implement solutions.
- Optimize data workflows for performance, scalability, and reliability.
- Troubleshoot and resolve data-related issues in a timely manner.
- Stay updated on the latest technologies and best practices in data engineering.
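As a hedged sketch of how such a pipeline might be orchestrated with Apache Airflow: the DAG name, task names, Glue job name, and dbt project path below are illustrative assumptions, not details from the posting.

```python
# Illustrative Airflow DAG: run a (hypothetical) AWS Glue ingest job, then
# transform the loaded data with dbt. All names/paths are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.providers.amazon.aws.operators.glue import GlueJobOperator

with DAG(
    dag_id="example_daily_etl",          # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Launch a placeholder AWS Glue job that lands raw data for Snowflake.
    extract_and_load = GlueJobOperator(
        task_id="extract_and_load",
        job_name="example_raw_ingest_job",   # placeholder Glue job name
        region_name="us-east-1",
    )

    # Build the downstream models with dbt (placeholder project directory).
    transform = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/example_project",
    )

    extract_and_load >> transform
```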
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience in data engineering roles with a focus on Snowflake, AWS services, Python, and DBT.
- Strong analytical and problem-solving skills.
- Excellent communication and teamwork abilities.
- AWS certifications (e.g., AWS Certified Data Analytics - Specialty) are a plus.
Contractor
$111k-136k (estimate)
06/29/2024
07/27/2024
sysmind.com
Princeton Junction, NJ
200 - 500