What are the responsibilities and job description for the Senior Data Engineer position at Pronix Inc?
Job Details
Hello,
Role: Senior Data Engineer
Location: McLean, VA (Hybrid) Local Candidates Only
Duration: 12 Months
Job Type: Contract to Hire
Visas: USC
Key Skills: Python, R, SQL, Spark, AWS, Snowflake
Candidate's Impact:
* As a Senior Data Engineer, you are expected to work under limited direction, independently determining and developing approaches to solutions. Understand requirements and design for the build and deployment.
* Build and maintain efficient data pipelines and ETL processes for large-scale data integration within a Scrum/agile development process.
* Collaborate with data analysts and other engineers to define data needs and build solutions that drive business insights.
* Develop, test, and deploy code for data extraction, transformation, and loading processes (ETL/ELT).
* Develop and maintain automated test scripts using Pytest to ensure high-quality software releases.
* Mentor junior resources and consultants, bringing them up to speed on the technologies and products the team works on, and represent and demonstrate your team's work to customers.
* Apply coaching and mentoring skills to help teams continuously grow and improve in delivering high-quality software solutions.
* Support line-of-business areas with more advanced statistical and quantitative analysis.
* Run models, look for exceptions, and take corrective action where necessary.
* Use technology tools to conduct analysis; apply techniques such as SQL querying and macro development to extract data for populating models.
* Release Support: Work closely with internal and external teams on the release process, ensuring smooth and error-free releases to production.
* Interact directly with customers to gather feedback, troubleshoot issues, and ensure that the product meets their expectations.
Qualifications:
* Bachelor's degree in computer science or a related discipline; an advanced degree is preferred.
* 5 to 7 years of experience in data engineering with strong proficiency in the Python programming language (nice to have: Scala, Java).
* Strong proficiency in Spark SQL, SQL, Gremlin, GraphQL, and database management (Snowflake cloud-based warehousing).
* Strong experience with data processing frameworks, DataFrames, Apache Spark, and graph databases.
* Experience with the AWS cloud platform (EMR, EKS, Lambda).
* Experience writing statistical and/or optimization programs to develop models and algorithms.
* Programming languages may include, but are not limited to, Python and R.
* Solid understanding of SDLC practices, including development, testing, and release management.
* Experience with version control systems such as Git and Bitbucket.
* Experience with RESTful API design and development.
* Familiarity with PySpark for large-scale data processing and analysis.
If you're interested, please share your resume to or call me on