What are the responsibilities and job description for the Senior Integration Engineer (Astronomer Airflow Experience) - NYC, NY (Hybrid) - Contract position at iPivot?
Hi,
I am Suresh from IPivot. Please find the job description below for your reference. If interested, reply with an updated resume.
Role: Senior Integration Engineer (Astronomer Airflow Experience)
Location: NYC, NY - Hybrid (3 days/week onsite)
Duration: Contract
Note: Open for W2 Contract / Visa Transfers only
Job Description
This position is for an Integration Engineer with a background in Airflow, Python, PySpark, SQL, Databricks, and data warehousing for enterprise-level systems.
The role calls for someone who is comfortable working with business users and who also brings business-analyst expertise.
Required Skills
3 years of Astronomer/Airflow DAG development (see the sample DAG sketch after this list).
5 years of Python coding experience.
5 years of SQL Server-based development on large datasets.
5 years of experience developing and deploying ETL pipelines using Databricks PySpark.
Experience with a cloud data warehouse such as Synapse, BigQuery, Redshift, or Snowflake.
Experience in data warehousing: OLTP, dimensions, facts, and data modeling.
Prior experience leading an enterprise-wide cloud data platform migration, with strong architecture and design skills.
Experience with cloud-based data architectures, messaging, and analytics.
Cloud certification(s).
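For context, a minimal sketch of the kind of Astronomer/Airflow DAG development described above might look like the following. The DAG name, task names, and schedule are hypothetical placeholders, and the sketch assumes Airflow 2.4+ (where the schedule argument is available); it is illustrative only, not the client's pipeline.

    # Minimal illustrative Airflow DAG: two Python tasks chained extract -> load.
    # All names here are hypothetical placeholders.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def extract_orders(**context):
        # Placeholder extract step; a real task would pull from a source system.
        print("extracting orders for", context["ds"])


    def load_orders(**context):
        # Placeholder load step; a real task would write to the warehouse.
        print("loading orders for", context["ds"])


    with DAG(
        dag_id="orders_daily",            # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_orders", python_callable=load_orders)
        extract >> load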
Major Responsibilities
Build and optimize data pipelines for efficient data ingestion, transformation, and loading from various sources while ensuring data quality and integrity.
Design, develop, and deploy Spark programs in the Databricks environment to process and analyze large volumes of data (see the PySpark sketch after this list).
Apply experience with Delta Lake, data warehousing, data integration, cloud, design, and data modeling.
Develop programs in Python and SQL.
Apply dimensional data modeling for the data warehouse.
Work with event-based/streaming technologies to ingest and process data.
Work with structured, semi-structured, and unstructured data.
Optimize Databricks jobs for performance and scalability to handle big data workloads.
Monitor and troubleshoot Databricks jobs; identify and resolve issues or bottlenecks.
Implement best practices for data management, security, and governance within the Databricks environment.
Design and develop Enterprise Data Warehouse solutions.
Write SQL queries and programs, including stored procedures, and reverse-engineer existing processes.
Perform code reviews to ensure fit to requirements, optimal execution patterns, and adherence to established standards.
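For context, a minimal sketch of a Databricks PySpark transformation of the kind described above. The table names are hypothetical placeholders, and the sketch assumes a Databricks workspace where Delta Lake is available; it is illustrative only.

    # Minimal illustrative PySpark job: read a raw Delta table, clean it,
    # and write the result back as a Delta table. Table names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_etl_example").getOrCreate()

    # Ingest raw data (hypothetical source table).
    raw = spark.read.table("raw.orders")

    # Basic quality steps: de-duplicate, derive a date column, drop bad rows.
    cleaned = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Persist as a managed Delta table for downstream consumption.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")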
Education
Minimum of a bachelor's degree in an engineering and/or computer science discipline.
Master's degree strongly preferred.
Thanks and Regards,
Suresh Durgam
Senior Recruiter
M: (732) 813-4401
E: durgams@ipivot.io
A: 405 Ridge Road, Dayton, NJ 08810
W: ipivot.io