What are the responsibilities and job description for the Azure Databricks Engineer at Camden, NJ (onsite/hybrid) position at Reliable Software Resources Inc?
My name is Pardha, and I am a professional recruiter at DataFactZ. Our client is interested in hiring an Azure Databricks Engineer in Camden, NJ (hybrid).
Azure Databricks Engineer
Location: Camden, NJ (onsite/hybrid)
Long-term
ROLE SUMMARY
The Azure Data Analytics Engineer will be the Azure SME tasked with the development and optimization of cloud-based Business Intelligence solutions. This role advances data analytics capabilities and drives innovative solutions. The engineer possesses deep technical expertise in data engineering and plays an instrumental role in managing data integrations from on-premises Oracle systems, Cloud CRM (Dynamics), and telematics, collaborating closely with the Data Science and Enterprise Data Warehouse teams as well as business stakeholders.
Primary Responsibilities:
Data Ingestion and Storage:
- Designs, develops, and maintains scalable, efficient data pipelines using Azure Data Factory and Databricks, leveraging PySpark for complex data transformations and large-scale processing.
- Builds and manages extract, transform, load (ETL) and extract, load, transform (ELT) processes that move data from on-premises Oracle systems, customer relationship management (CRM) platforms, and connected vehicles into data storage solutions such as Azure Data Lake Storage and Azure SQL Database.
- Integrates and harmonizes data from diverse sources including on-premises databases, cloud services, application programming interfaces (APIs), and connected vehicle telematics.
- Ensures consistent data quality, accuracy, and reliability across all integrated data sources.
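To illustrate the kind of cleansing and de-duplication the data-quality responsibilities above imply, here is a minimal sketch in plain Python. It uses only the standard library; a production pipeline on this stack would use PySpark on Databricks, and all field names (`vehicle_id`, `recorded_at`, `odometer_mi`) are hypothetical:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class TelematicsReading:
    vehicle_id: str
    recorded_at: datetime
    odometer_km: float


def transform_readings(raw_rows):
    """Clean raw telematics rows: drop records with no vehicle id,
    parse timestamps, convert miles to kilometers, and de-duplicate
    on (vehicle_id, recorded_at), keeping the first occurrence."""
    seen = set()
    out = []
    for row in raw_rows:
        vid = (row.get("vehicle_id") or "").strip()
        if not vid:
            continue  # data-quality rule: reject unidentified vehicles
        ts = datetime.fromisoformat(row["recorded_at"]).replace(tzinfo=timezone.utc)
        key = (vid, ts)
        if key in seen:
            continue  # de-duplicate repeated transmissions
        seen.add(key)
        out.append(TelematicsReading(vid, ts, round(row["odometer_mi"] * 1.60934, 1)))
    return out
```

In a real Databricks notebook the same rules would typically be expressed as PySpark `filter`/`dropDuplicates` transformations writing to a Delta table.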
Data Engineering:
- Creates high-code data engineering solutions using Databricks to clean, transform, and prepare data for in-depth analysis.
- Develops and manages data models, schemas, and data warehouses, utilizing Lakehouse Architecture to enhance advanced analytics and business intelligence.
- Leverages Unity Catalog to ensure unified data governance and management across the enterprise's data assets.
- Optimizes data storage, retrieval strategies, and query performance to drive scalability and efficiency in all data operations.
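The storage and query-performance optimization mentioned above often comes down to partition pruning: scanning only the data partitions a query actually needs. Below is a toy sketch of the idea in plain Python; the partition layout and column names are hypothetical, and in a Lakehouse this is handled by the engine via Delta Lake partition columns and file skipping:

```python
from datetime import date

# A toy "table" laid out as date-partitioned chunks, mimicking a
# partitioned Delta/Parquet layout such as .../sale_date=2024-05-01/
partitions = {
    date(2024, 5, 1): [("V1", 120.0), ("V2", 80.0)],
    date(2024, 5, 2): [("V1", 95.0)],
    date(2024, 5, 3): [("V3", 60.0)],
}


def query_total(partitions, start, end):
    """Sum amounts between start and end (inclusive), scanning only
    the partitions whose key falls inside the date range -- the same
    pruning a Lakehouse engine applies to partition columns."""
    scanned = 0
    total = 0.0
    for part_date, rows in partitions.items():
        if not (start <= part_date <= end):
            continue  # pruned: partition key outside the filter range
        scanned += 1
        total += sum(amount for _, amount in rows)
    return total, scanned
```

Querying May 1-2 touches only two of the three partitions; choosing partition keys that match common query filters is what makes this optimization pay off.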
GitHub Development:
- Utilizes GitHub for version control and collaborative development, implementing best practices for code management, testing, and deployment.
- Develops workflows for continuous integration (CI) and continuous deployment (CD), ensuring efficient delivery and maintenance of data solutions.
Technical Expertise:
- Azure Data Platform: Extensive experience with Azure Data Factory, Databricks, and Synapse, as well as proficiency in Python and PySpark.
- Data Integration: Experience integrating data from on-premises Oracle systems and connected vehicle data into cloud-based solutions.
- Lakehouse Architecture & Governance: Deep knowledge of Lakehouse Architecture and Unity Catalog for enterprise data governance.
- Version Control & Collaboration: Demonstrated proficiency in GitHub for development, collaboration, and deployment in large-scale environments.
- Infrastructure as Code (IaC): Experience with IaC tools such as Azure Resource Manager (ARM) templates or Terraform.
- Problem-Solving & Troubleshooting: Strong analytical skills with the ability to diagnose and resolve complex data infrastructure challenges.
Pardha S S Moyida
Pardha.moyida@datafactz.com
Salary: $115,300 - $138,500