What are the responsibilities and job description for the Azure Data Engineer position at Method360 Talent Acquisition?
Job Title: Azure Data Engineer
Employment Type: Contract (W2 or C2C)
Start: 3/17/2025
Duration: 6 months
End Client: Confidential
Industry: Health
Workplace Type: Hybrid (Dallas, Texas, or Richmond, Virginia)
Job Description: The Azure Data Engineer will be responsible for migrating critical on-premises applications (databases) to the cloud, evaluating ELT tools, and working the data and advanced analytics backlog to build Azure data ingestion and orchestration pipelines leveraging Azure Data Factory and Azure Databricks, while following company data compliance standards and the Matillion architecture. The primary focus is building scalable, secure data pipelines, transforming raw data into meaningful insights, and enabling advanced analytics on the Azure platform.
Key Responsibilities:
- Data Ingestion:
  - Building pipelines to extract data from multiple sources (on-premises, cloud, or APIs).
  - Utilizing tools like Azure Data Factory, Azure Synapse Pipelines, or Databricks.
- Data Transformation:
  - Designing ETL/ELT processes for data cleansing, enrichment, and transformation.
  - Leveraging tools like Azure Data Factory, Azure Databricks, and SQL Server Integration Services (SSIS).
- Data Storage:
  - Designing and managing data storage solutions using Azure Blob Storage, Azure Data Lake, Azure SQL Database, Cosmos DB, or Synapse Analytics.
- Data Security & Compliance:
  - Ensuring data is secure by implementing encryption, access controls, and compliance with regulations (e.g., GDPR, HIPAA).
- Big Data & Analytics:
  - Implementing big data solutions using Apache Spark, Databricks, and Synapse Analytics for advanced analytics.
  - Integrating with Azure Machine Learning or Power BI for insights.
- Real-Time Data Processing:
  - Working with tools like Azure Stream Analytics, Event Hubs, and IoT Hub for real-time data ingestion and processing.
- Monitoring and Optimization:
  - Optimizing pipelines for cost and performance.
  - Monitoring workflows and systems using Azure Monitor or Log Analytics.
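To make the ingestion and transformation duties above concrete, here is a minimal, Azure-agnostic sketch of a cleanse-and-enrich step. In a real engagement this logic would run inside an Azure Data Factory pipeline or a Databricks notebook; the schema, field names, and lookup table below are all hypothetical, for illustration only.

```python
import csv
import io

def cleanse(rows):
    """Drop records missing a patient_id and trim whitespace (hypothetical data-quality rule)."""
    for row in rows:
        if not row.get("patient_id"):
            continue  # skip incomplete records
        yield {key: value.strip() for key, value in row.items()}

def enrich(rows, region_lookup):
    """Add a derived 'region' column from a reference lookup (hypothetical enrichment)."""
    for row in rows:
        row["region"] = region_lookup.get(row["state"], "UNKNOWN")
        yield row

# Simulated extract from an on-premises CSV export
raw = io.StringIO("patient_id,state\n100,TX\n,VA\n101,VA\n")
loaded = list(enrich(cleanse(csv.DictReader(raw)),
                     {"TX": "South", "VA": "Mid-Atlantic"}))
print(loaded)
```

The second row is dropped by the cleansing rule, and each surviving record gains a derived `region` column before load.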
Key Skills:
- Proficiency in Python, SQL, and possibly Scala or Java for Spark jobs.
- Designing star schemas, snowflake schemas, and other data models.
- Strong understanding of Azure services like Data Factory, Synapse, Databricks, and more.
- Familiarity with Hadoop, Spark, and distributed data processing systems.
- Knowledge of CI/CD pipelines and version control for data workflows.
- Problem-solving, critical thinking, and collaboration with business stakeholders.
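As an illustration of the dimensional-modeling skill listed above, the sketch below builds a tiny star schema (one fact table joined to dimension tables) using SQLite for portability; an Azure deployment would target Azure SQL Database or Synapse instead, and every table and column name here is hypothetical.

```python
import sqlite3

# Minimal star schema: one fact table referencing two dimensions (hypothetical names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, calendar_date TEXT);
CREATE TABLE dim_facility (facility_key INTEGER PRIMARY KEY, facility_name TEXT);
CREATE TABLE fact_visits (
    date_key INTEGER REFERENCES dim_date(date_key),
    facility_key INTEGER REFERENCES dim_facility(facility_key),
    visit_count INTEGER
);
INSERT INTO dim_date VALUES (20250317, '2025-03-17');
INSERT INTO dim_facility VALUES (1, 'Dallas'), (2, 'Richmond');
INSERT INTO fact_visits VALUES (20250317, 1, 42), (20250317, 2, 17);
""")

# A typical star-schema query: join the fact to a dimension and aggregate.
rows = conn.execute("""
    SELECT f.facility_name, SUM(v.visit_count)
    FROM fact_visits v
    JOIN dim_facility f ON f.facility_key = v.facility_key
    GROUP BY f.facility_name
    ORDER BY f.facility_name
""").fetchall()
print(rows)
```

A snowflake schema differs only in that the dimensions themselves are further normalized into sub-dimension tables.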
Nice to Have:
- Microsoft Certified: Azure Data Engineer Associate (DP-203): Covers data storage, processing, security, and integration.
- Databricks Unity Catalog: Covers centralized access control, auditing, lineage, and data discovery.