What are the responsibilities and job description for the Data Engineer position at Robert Half?
Job Description
We are offering a permanent, onsite Data Engineer position in Draper, Utah. The individual in this role will be primarily responsible for developing and optimizing data pipelines, designing ETL / ELT processes, and ensuring data accessibility. The position requires close collaboration with cross-functional teams to troubleshoot issues and refine data models. The work schedule is Monday through Friday, 9 AM to 5 PM.
Responsibilities :
- Develop and maintain data pipelines that seamlessly integrate data from Microsoft Azure, Salesforce, and external systems.
- Design, optimize, and implement ETL / ELT processes to keep data structured, normalized, and easily accessible.
- Collaborate with cross-functional teams to troubleshoot challenges and refine data models.
- Mentor other developers, sharing best practices and continuous improvement strategies for our data infrastructure.
- Support business intelligence and reporting needs by ensuring the data infrastructure is robust and efficient.
- Apply your expertise in Python for scripting and data processing tasks (a minimal illustrative ETL sketch follows this list).
- Utilize ETL / ELT tools and frameworks to manage and transform data.
- Work with relational databases such as PostgreSQL, MySQL, SQL Server, and NoSQL databases like MongoDB.
- Implement Data Warehousing Solutions like Snowflake or Redshift and big data frameworks like Hadoop and Apache Spark.
- Employ containerization and orchestration tools like Kubernetes in your work.
- Work within an Agile development framework.
- Understand and apply data governance and security frameworks, such as the NIST Cybersecurity Framework (CSF) 2.0.
- Build and maintain CI / CD pipelines using GitHub or similar platforms.
- Display excellent communication and collaboration skills to work effectively across teams.
- Demonstrate your ability to work with large datasets and complex business logic.
- Apply your strong problem-solving skills, with the ability to debug and optimize data processes.
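To give applicants a concrete sense of the pipeline work described above, here is a minimal ETL sketch in Python. It is illustrative only: the endpoint, connection string, column names, and table name are hypothetical placeholders, not part of the role's actual stack.

```python
import requests
import pandas as pd
from sqlalchemy import create_engine

# Extract: pull records from a hypothetical REST endpoint (standing in
# for a Salesforce export or another external system).
resp = requests.get("https://api.example.com/v1/orders", timeout=30)
resp.raise_for_status()
records = resp.json()

# Transform: flatten nested JSON, normalize column names, and drop
# duplicates so downstream tables stay clean.
df = pd.json_normalize(records)
df.columns = [c.lower().replace(".", "_") for c in df.columns]
df = df.drop_duplicates(subset=["id"])  # assumes a hypothetical "id" key

# Load: append the cleaned frame to a PostgreSQL staging table.
engine = create_engine("postgresql://user:password@localhost:5432/warehouse")
df.to_sql("stg_orders", engine, if_exists="append", index=False)
```

In practice a pipeline like this would run under a scheduler or orchestrator rather than as a one-off script, but the extract, transform, and load stages shown here are the same.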
Requirements :
- Minimum of 2 years of experience as a Data Engineer or in a similar role
- Proficient in ETL (Extract, Transform, Load) processes and methodologies
- Strong coding skills in Python
- Experience with Continuous Integration / Continuous Delivery (CI / CD) processes
- Familiarity with NoSQL databases
- Proficiency in using Azure Data Factory
- Experience in data integration and data warehousing is essential (an illustrative incremental-load sketch follows this list)
- Ability to work in a team and independently
- Excellent problem-solving skills and attention to detail
- Strong communication skills, both written and verbal
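For candidates weighing the data integration and warehousing requirement, the sketch below shows one common pattern: an incremental load that upserts changed rows instead of reloading a table in full. Everything here is hypothetical (the connection details, the dim_customer table, and its columns), and the actual warehouse could be Snowflake or Redshift rather than PostgreSQL.

```python
import psycopg2
from psycopg2.extras import execute_values

# Hypothetical batch of changed rows arriving from an upstream extract.
rows = [
    (1, "alice@example.com", "2024-01-05"),
    (2, "bob@example.com", "2024-01-06"),
]

conn = psycopg2.connect("dbname=warehouse user=etl host=localhost")
with conn, conn.cursor() as cur:
    # Upsert: insert new keys, update existing ones in place. This keeps
    # the table current without a full reload on every run. Assumes a
    # unique constraint on customer_id.
    execute_values(
        cur,
        """
        INSERT INTO dim_customer (customer_id, email, updated_at)
        VALUES %s
        ON CONFLICT (customer_id)
        DO UPDATE SET email = EXCLUDED.email,
                      updated_at = EXCLUDED.updated_at
        """,
        rows,
    )
conn.close()
```

The upsert keeps loads idempotent: re-running the same batch leaves the table in the same state, which is the property most incremental warehouse pipelines are built around.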