What are the responsibilities and job description for the ETL Developer position at Rishabh RPO?
Role: ETL Developer
Location: Chicago, IL, Hybrid (Day 1 Onsite)
Duration: 12 Months
Note: Top 5 skill sets:
1. Python or PySpark
2. Complex SQL development, debugging, and optimization
3. AWS Glue and Step Functions
4. Knowledge of the inner workings of databases such as AWS RDS MySQL
5. Big Data Processing
Nice-to-have skills or certifications:
1. Experience as a lead for a decent-sized ETL team
2. Experience with Apache Iceberg
3. Observability tools such as Dynatrace or Datadog
Please do not submit any candidate who does not have extensive experience building ETL jobs and is not a hands-on coder working with large data sets in an enterprise setup.
Job Description:
An ETL developer designs, builds, tests, and maintains systems that extract, load, and transform data from multiple source systems.
Primary Responsibilities:
Lead, design, implement, deploy, and optimize backend ETL services.
Support a massive-scale enterprise data solution using AWS data and analytics services.
Analyze and interpret complex data and related systems, and provide efficient technical solutions.
Support the ETL schedule and maintain compliance with it.
Develop and maintain standards for ETL code and maintain an effective project life cycle for all ETL processes.
Coordinate with cross-functional teams, including architects, platform engineers, other developers, and product owners, to build data processing procedures.
Perform root cause analysis on production issues, routinely monitor databases, and support ETL environments.
Help create functional specifications and technical designs, working with business process area owners.
Implement industry best-practice code and configuration for production and non-production environments in a highly automated environment.
Provide technical advice, effort estimates, and impact analysis.
Provide timely project status and issue reporting to management.
Qualifications:
6 years of experience using ETL tools to perform data cleansing, data profiling, transformation, and scheduling of various workflows.
Expert-level proficiency in writing, debugging, and optimizing SQL.
3-4 years of programming experience using Python or PySpark/Glue required.
Knowledge of common design patterns, models, and architectures used in Big Data processing.
3-4 years of experience with AWS services such as Glue, S3, Redshift, Lambda, Step Functions, RDS Aurora/MySQL, Apache Iceberg, CloudWatch, SNS, SQS, and EventBridge.
Capable of troubleshooting common database issues; familiarity with observability tools.
Self-starter: responsible, professional, and accountable.
A finisher, seeing a project or task through to completion despite challenges.
Key Skills
SQL, Pentaho, PL/SQL, Microsoft SQL Server, SSIS, Informatica, Shell Scripting, T-SQL, Teradata, Data Modeling, Data Warehouse, Oracle
Employment Type: Full Time
Vacancy: 1