What are the responsibilities and job description for the Azure Databricks Engineer (Hybrid) position at Serigor Inc?
Job Title: Azure Databricks Engineer (Hybrid)
Location: Raleigh, NC
Duration: 12 Months
Job Description:
The client's Web Systems Team seeks an Azure Databricks Engineer who will work with existing staff to plan and design ETL pipelines and product solutions using Azure Databricks. The person filling this role will create resilient processes to ingest data from a variety of on-prem and cloud transactional databases and APIs. Responsibilities also include developing business requirements, facilitating change management documentation, and actively collaborating with stakeholders. This individual will work closely with a development technical lead and discuss all aspects of design and planning with the development team.
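For context only, a minimal sketch of the kind of ingestion work described above, assuming a hypothetical on-prem SQL Server source; the JDBC URL, table name, and credentials are placeholders and do not come from this posting:

```python
# Rough, illustrative sketch: one common way to land an on-prem transactional
# table in Delta Lake with PySpark. Connection details are hypothetical and
# would normally come from a Databricks secret scope, not be hard-coded.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("oltp_ingest_sketch").getOrCreate()

# Read a source table over JDBC from a (hypothetical) on-prem SQL Server.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-db:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<retrieved-from-secret-scope>")
    .load()
)

# Land the raw data as a Delta table for downstream batch or streaming use.
orders.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")
```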
Roles And Responsibilities
- Research and engineer repeatable and resilient ETL workflows using Databricks notebooks and Delta Live Tables for both batch and stream processing (a brief illustrative sketch follows this list)
- Collaborate with business users to develop data products that align with business domain expectations
- Work with DBAs to ingest data from cloud and on-prem transactional databases
- Contribute to the development of the data architecture for the client by:
  - Following practices for keeping sensitive data secure
  - Streamlining the development of data products for use by data analysts and data scientists
  - Developing and maintaining documentation for data engineering processes
  - Ensuring data quality through testing and validation
  - Sharing insights and experiences with stakeholders and engineers throughout the client's organization
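For context only, a minimal Delta Live Tables sketch of the batch-and-streaming work referenced in the first bullet, assuming a hypothetical JSON landing path, illustrative table names, and an example data quality rule; `spark` is supplied by the Databricks runtime inside a DLT pipeline:

```python
# Minimal Delta Live Tables sketch: a bronze table ingested incrementally
# with Auto Loader and a silver table guarded by a data quality expectation.
# Paths, table names, and the expectation rule are illustrative assumptions.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events loaded incrementally from cloud storage.")
def bronze_events():
    return (
        spark.readStream.format("cloudFiles")
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/events/")  # hypothetical landing path
    )

@dlt.table(comment="Validated events with an ingestion timestamp.")
@dlt.expect_or_drop("valid_event_id", "event_id IS NOT NULL")
def silver_events():
    return (
        dlt.read_stream("bronze_events")
        .withColumn("ingested_at", F.current_timestamp())
    )
```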
| Skill | Required / Desired | Years of Experience |
| --- | --- | --- |
| Excellent interpersonal skills as well as written and verbal communication skills | Required | 5 |
| Able to write clean, easy-to-follow Databricks notebook code | Required | 2 |
| Deep knowledge of data engineering best practices, data warehouses, data lakes, and the Delta Lake architecture | Required | 2 |
| Good knowledge of Spark and Databricks SQL/PySpark | Required | 2 |
| Technical experience with Azure Databricks and cloud providers like AWS, Google Cloud, or Azure | Required | 2 |
| In-depth knowledge of OLTP and OLAP systems, Apache Spark, and streaming products like Azure Service Bus | Required | 2 |
| Good practical experience with Databricks Delta Live Tables | Required | 2 |
| Knowledge of object-oriented languages like C#, Java, or Python | Desired | 7 |