What are the responsibilities and job description for the Data Engineer position at Panzer Solutions LLC?
Position: Data Engineer
Location: Seattle, WA (onsite)
Duration: 6-month contract
Key Responsibilities:
Develop, optimize, and maintain data pipelines using Azure Data Factory (ADF), DBT Labs, Snowflake, and Databricks.
Build reusable jobs and a configuration-driven integration framework to streamline development and support scalability.
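The "configuration-based integration framework" above can be pictured as data-driven job generation: each source is described declaratively, and one generic job interprets the configuration. This is only a minimal Python sketch of the idea; the source names, paths, and fields are illustrative assumptions, not details from the posting.

```python
# Sketch of a configuration-driven integration framework: each source is
# described by a config entry, and a single generic routine turns the
# configs into concrete ingestion steps. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class SourceConfig:
    name: str          # logical source name, e.g. "salesforce_accounts"
    source_type: str   # e.g. "adls", "salesforce", "sharepoint"
    landing_path: str  # target path in the ADLS landing zone

def build_ingestion_plan(configs: list[SourceConfig]) -> list[dict]:
    """Expand declarative source configs into concrete ingestion steps."""
    return [
        {
            "job": f"ingest_{cfg.name}",
            "reader": cfg.source_type,
            "target": cfg.landing_path,
        }
        for cfg in configs
    ]

configs = [
    SourceConfig("salesforce_accounts", "salesforce", "abfss://landing/salesforce/accounts"),
    SourceConfig("partner_wasde", "adls", "abfss://landing/partner/wasde"),
]
plan = build_ingestion_plan(configs)
```

Adding a new source then becomes a one-line configuration change rather than a new hand-written pipeline, which is what makes the framework reusable and scalable.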
Manage data ingestion for structured and unstructured data:
Landing/Lakehouse: ADLS.
Sources: ADLS, Salesforce, SharePoint Document Libraries.
Partner Data: DHS, IHME, WASDE, etc.
Implement and optimize ELT processes, source-to-target mapping, and transformation logic in tools such as DBT Labs, Azure Data Factory, Databricks Notebooks, and SnowSQL.
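Source-to-target mapping of the kind described above is often expressed as data: each entry pairs a source column with a target column and an optional transform, mirroring the logic that would live in DBT models or Databricks notebooks. The column names and transforms below are hypothetical, purely to illustrate the pattern.

```python
# Sketch of source-to-target mapping as a declarative table.
# Keys are source columns; values are (target column, optional transform).
MAPPING = {
    "cust_nm": ("customer_name", str.strip),
    "ord_amt": ("order_amount", float),
    "ord_dt":  ("order_date", None),  # pass through unchanged
}

def apply_mapping(row: dict) -> dict:
    """Rename and transform one source record into its target shape."""
    out = {}
    for src_col, (tgt_col, transform) in MAPPING.items():
        value = row.get(src_col)
        out[tgt_col] = transform(value) if transform and value is not None else value
    return out

row = {"cust_nm": "  Acme Corp ", "ord_amt": "19.99", "ord_dt": "2024-01-01"}
target_row = apply_mapping(row)
```

Keeping the mapping declarative makes the transformation logic easy to review, test, and later export into a lineage tool.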
Collaborate with data scientists, analysts, data engineers, report developers, and infrastructure engineers for end-to-end support.
Co-develop CI/CD best practices, automation, and pipelines with infrastructure engineers for code deployments using GitHub Actions.
Automate the capture of source-to-target mappings, data pipelines, and data lineage in Collibra.
Required Experience:
Hands-on experience building pipelines with ADF, Snowflake, Databricks, and DBT Labs.
Expertise in Azure Cloud, with integration experience involving Databricks, Snowflake, and ADLS Gen2.
Proficient in data warehousing and lakehouse concepts, including ELT processes, Delta Tables, and External Tables for structured/unstructured data.
Experience with Databricks Unity Catalog and data-sharing technologies.
Strong skills in CI/CD tools (Azure DevOps, GitHub Actions) and version control systems (GitHub).
Proven cross-functional collaboration and technical support experience for data scientists, report developers, and analysts.