What are the responsibilities and job description for the Sr Data Engineer - Azure position at Inabia Software & Consulting Inc.?
Client is looking for 10 years of experience.
Title: Sr Data Engineer - Azure
Duration: Contract
Location: Bellevue, WA (Hybrid)
Locals Required
Job Description:
We operate in an Azure Databricks Lakehouse. We’ll need a person with:
* Azure experience – ADF for orchestration, ADLS for storage, Azure DevOps for CI/CD
* Databricks experience – all compute/ETL leverages Databricks and is programmed leveraging Spark (PySpark, SparkSQL)
* PowerShell experience – this is our scripting language of choice
* SQL proficiency – it’s used everywhere (TSQL, PostgreSQL)
* Proficiency with parquet and delta formats (see the PySpark sketch after this list)
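A minimal sketch of the kind of Databricks ETL described above: read parquet from ADLS, transform with PySpark, and write the result as a Delta table. The storage path, table name, and columns are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: parquet in, Delta out (placeholder names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Read raw parquet from an ADLS container (hypothetical path).
raw = spark.read.parquet("abfss://raw@examplestorage.dfs.core.windows.net/sales/")

# Light transformation in PySpark.
daily = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .groupBy("order_date", "region")
       .agg(F.sum("amount").alias("total_amount"))
)

# Persist as a Delta table, partitioned by date.
(
    daily.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .saveAsTable("analytics.daily_sales")
)
```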
Additionally – they will need experience in:
* SDLC CI/CD – we follow a standard deployment process (dev, test, prod) that includes peer-reviewed code. They need to be comfortable with standard DevOps practices.
* Should have a deep understanding of indexes and partitioning.
* Should be proficient in optimizing code for performance (able to read a Spark DAG and determine where the cost-based optimizer (CBO) is spending the most resources)
* Should be proficient in writing code in a manner that it can run repeatedly and produce the same state, i.e. idempotent code (we have a custom SQL Deployment framework) – see the sketch after this list
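A hypothetical sketch of the idempotency requirement: running this load twice leaves the target table in the same state. The table and column names are illustrative only and this does not reflect the client's custom SQL Deployment framework.

```python
# Idempotent load sketch on Databricks: safe to rerun without changing the end state.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("idempotent-load").getOrCreate()

# Creating the target only if it is missing keeps reruns safe.
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.customers (
        customer_id BIGINT,
        email       STRING,
        updated_at  TIMESTAMP
    ) USING DELTA
""")

# MERGE turns the load into an upsert, so replaying the same source batch
# neither duplicates rows nor alters the final state.
spark.sql("""
    MERGE INTO analytics.customers AS tgt
    USING staging.customers_batch AS src
    ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```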