What are the responsibilities and job description for the "ETL Developer || Must be Local" position at Cbase Inc?
Position: ETL Engineer
Location: Chicago, IL
Duration: 12 months, with possible extension
Onsite: 2 days per week
Qualifications
Top 5 skill sets:
1. Python or PySpark (see the illustrative sketch after this list)
2. Complex SQL development, debugging, and optimization
3. AWS: Glue, Step Functions
4. Knowledge of the inner workings of databases, such as AWS RDS MySQL
5. Big data processing
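To make skills 1, 2, and 5 concrete, here is a minimal, hypothetical PySpark sketch of the kind of hands-on coding this role involves: extract a large dataset, transform it with SQL, and load partitioned output. All bucket, table, and column names are illustrative assumptions, not details from this posting.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw order events from a hypothetical S3 location.
orders = spark.read.parquet("s3://example-bucket/raw/orders/")
orders.createOrReplaceTempView("orders")

# Transform: the complex-SQL skill in practice -- deduplicate on a
# business key, then aggregate daily revenue per customer.
daily_revenue = spark.sql("""
    WITH deduped AS (
        SELECT *,
               ROW_NUMBER() OVER (PARTITION BY order_id
                                  ORDER BY updated_at DESC) AS rn
        FROM orders
    )
    SELECT customer_id,
           CAST(order_ts AS DATE) AS order_date,
           SUM(amount)            AS revenue
    FROM deduped
    WHERE rn = 1
    GROUP BY customer_id, CAST(order_ts AS DATE)
""")

# Load: write partitioned output for downstream analytics.
daily_revenue.write.mode("overwrite") \
    .partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")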
Nice-to-have skills or certifications:
Experience leading a decent-sized ETL team
Experience with Apache Iceberg (a brief write sketch follows this list)
Observability tools such as Dynatrace or DataDog
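For the Apache Iceberg nice-to-have, the hedged sketch below writes a DataFrame to an Iceberg table with Spark's DataFrameWriterV2 API. The catalog name glue_catalog and the table identifier are assumptions; they depend on the Iceberg runtime and catalog settings configured on the Spark session (for example, against the AWS Glue Data Catalog).

from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("iceberg-sketch").getOrCreate()
df = spark.createDataFrame(
    [("c1", "2024-01-01", 120.0)],
    ["customer_id", "order_date", "revenue"],
)

# Write to a hypothetical Iceberg table; the catalog and table names are
# placeholders that depend on the session's Iceberg configuration.
df.writeTo("glue_catalog.analytics.daily_revenue") \
    .using("iceberg") \
    .partitionedBy(col("order_date")) \
    .createOrReplace()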
Must have extensive experience building ETL jobs and be a hands-on coder working with large data sets in an enterprise setup.
Job Summary:
An ETL developer designs, builds, tests, and maintains systems that extract, load, and transform data across multiple different systems.
Primary Responsibilities:
Lead, design, implement, deploy, and optimize backend ETL services.
Support a massive-scale enterprise data solution using AWS data and analytics services.
Analyze and interpret complex data and related systems, and provide efficient technical solutions.
Support the ETL schedule and maintain compliance with it.
Develop and maintain standards for ETL code and maintain an effective project life cycle for all ETL processes.
Coordinate with cross-functional teams, such as architects, platform engineers, other developers, and product owners, to build data processing procedures.
Perform root cause analysis on production issues, routinely monitor databases, and support ETL environments.
Help create functional specifications and technical designs, working with business process area owners.
Implement industry best practices for code and configuration in production and non-production environments, in a highly automated setting.
Provide technical advice, effort estimates, and impact analysis.
Provide timely project status and issue reporting to management.
Qualifications:
6 years' experience using ETL tools to perform data cleansing, data profiling, transformation, and scheduling of various workflows.
Expert-level proficiency in writing, debugging, and optimizing SQL.
3-4 years' programming experience using Python or PySpark/Glue required.
Knowledge of common design patterns, models, and architectures used in big data processing.
3-4 years' experience with AWS services such as Glue, S3, Redshift, Lambda, Step Functions, RDS, Aurora/MySQL, Apache Iceberg, CloudWatch, SNS, SQS, and EventBridge (a short operational sketch follows this list).
Capable of troubleshooting common database issues; familiar with observability tools.
Self-starter: responsible, professional, and accountable.
A finisher, seeing a project or task through to completion despite challenges.
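As a purely illustrative example of the AWS experience listed above, the sketch below uses boto3 to start a Glue job and check its run state. The job name daily-revenue-etl is a hypothetical placeholder, and in production this kind of orchestration would more commonly live in a Step Functions state machine.

import boto3

# Assumes AWS credentials and region are configured in the environment.
glue = boto3.client("glue")

# Start a hypothetical Glue job, then fetch the state of that run.
run = glue.start_job_run(JobName="daily-revenue-etl")
result = glue.get_job_run(JobName="daily-revenue-etl", RunId=run["JobRunId"])
print(result["JobRun"]["JobRunState"])  # e.g. RUNNING, SUCCEEDED, FAILED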