What are the responsibilities and job description for the Data Engineer 4 #: 25-09369 position at HireTalent - Staffing & Recruiting Firm?
Job Title: Data Engineer 4
Job Location: Remote
Job Duration: 2 years on W2
Job Description
Responsibilities
The Senior Data Engineer will collaborate with product owners, developers, database architects, data analysts, visual developers, and data scientists on data initiatives, and will ensure that optimal data delivery and architecture remain consistent across ongoing projects.
Must be self-directed and comfortable supporting the data needs of the product roadmap.
The right candidate will be excited by the prospect of optimizing and building integrated and aggregated data objects to architect and support our next generation of products and data initiatives.
Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing for greater scalability.
Provide comprehensive documentation and knowledge transfer to Production Support.
Work with Production Support to analyze and fix production issues.
Participate in an Agile/Scrum methodology to deliver high-quality software releases every 2 weeks; refine and plan stories during sprints and deliver them on time.
Analyze requirement documents and source-to-target mappings.
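The source-to-target mapping work described above can be sketched in plain Python. The field names and casts below are hypothetical examples, not this employer's actual schema:

```python
# Illustrative source-to-target mapping: rename source fields into the
# target schema and cast values to the target types. All names here are
# made-up examples for illustration only.
SOURCE_TO_TARGET = {
    "cust_id": ("customer_id", int),
    "ord_amt": ("order_amount", float),
    "ord_dt": ("order_date", str),
}

def map_record(source_row: dict) -> dict:
    """Apply the source-to-target mapping to one raw record."""
    target = {}
    for src_field, (tgt_field, cast) in SOURCE_TO_TARGET.items():
        if src_field in source_row:
            target[tgt_field] = cast(source_row[src_field])
    return target

raw = {"cust_id": "42", "ord_amt": "19.99", "ord_dt": "2024-01-15"}
print(map_record(raw))
# {'customer_id': 42, 'order_amount': 19.99, 'order_date': '2024-01-15'}
```

In a real pipeline the mapping document from the business side would drive a transform like this at scale (e.g., as Spark column expressions rather than per-row Python).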
Must Have Skills
5+ years of experience designing, developing and supporting complex data pipelines.
5+ years of Spark experience in batch and streaming modes
5+ years of advanced SQL experience for analyzing and interacting with data
5+ years of experience in Big Data stack environments such as Databricks, AWS EMR, etc.
3+ years of experience in scripting using Python
3+ years of experience working in cloud environments such as AWS.
Strong understanding of solution and technical design.
Experience building scalable, high-performance cloud data lake solutions
Experience with relational SQL & tools like Snowflake
Awareness of data warehouse concepts
Performance tuning with large datasets
Experience with source control tools such as GitHub and related dev processes
Experience with workflow scheduling tools like Airflow or Databricks Workflow
Strong problem solving and analytical mindset
Able to influence and communicate effectively, both verbally and in writing, with team members and business stakeholders
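As context for the batch-and-streaming Spark requirement above: the two modes differ in when aggregates are computed — a batch job sees the complete dataset at once, while a streaming job updates results incrementally per event. A minimal pure-Python sketch of that distinction (in practice this would use Spark's DataFrame and Structured Streaming APIs; this only illustrates the concept):

```python
from collections import defaultdict

# Sample (key, value) events; in a real job these would come from files
# (batch) or a source like Kafka (streaming).
events = [("a", 1), ("b", 2), ("a", 3), ("b", 4)]

def batch_sum(events):
    """Batch mode: compute totals over the complete dataset in one pass."""
    totals = defaultdict(int)
    for key, value in events:
        totals[key] += value
    return dict(totals)

def streaming_sums(events):
    """Streaming mode: emit an updated running-total snapshot per event."""
    totals = defaultdict(int)
    snapshots = []
    for key, value in events:
        totals[key] += value
        snapshots.append(dict(totals))
    return snapshots

print(batch_sum(events))           # {'a': 4, 'b': 6}
print(streaming_sums(events)[-1])  # final snapshot equals the batch result
```

The final streaming snapshot converging to the batch answer is the same guarantee Spark's "complete" output mode provides for streaming aggregations.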
Good to Have Skills
Experience building streaming solutions using Spark Structured Streaming and Kafka.
Experience and knowledge of Databricks.
Experience in Semantic modelling and cube solutions like AAS or AtScale.