What are the responsibilities and job description for the Sr. Data Engineer – Snowflake & AWS position at tsworks?
The position is a Contract-to-Hire role based in Durham, NC, with a hybrid work model requiring three days per week in our client’s office. You will work on internal and client projects, collaborating with teams and stakeholders as needed.
About tsworks:
tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions, and thought leadership that drive exceptional user and customer experiences. Demonstrating this commitment, we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, Retail (including ecommerce and omnichannel), and Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements.
About This Role
We are seeking an experienced, driven, and motivated Sr. Data Engineer to design and deliver robust data environments for our clients. The ideal candidate will have strong hands-on experience with data engineering patterns on the AWS cloud, data lakehouse architecture using AWS and Snowflake, dimensional modelling, and data governance, as well as experience providing data to support business intelligence and analytics across large data ecosystems.
We prefer candidates with Snowflake, AWS, and Fabric certifications.
· Position: Senior Data Engineer
· Experience: 10 Years
· Location: Multiple Client Locations in USA
Required Qualifications
· Strong proficiency in AWS data services such as S3, Glue and the Glue Data Catalog, EMR, Athena, Redshift, DynamoDB, and QuickSight.
· Strong hands-on experience building data lakehouse solutions on Snowflake, using features such as streams, tasks, dynamic tables, data masking, and data exchange (a minimal streams-and-tasks sketch follows this list).
· Hands-on experience with orchestration tools such as Apache Airflow and data governance products such as Collibra.
· Expertise in DevOps and CI/CD implementation.
· Excellent communication skills.
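To illustrate the kind of Snowflake work this role involves, here is a minimal sketch of change capture with a stream and a scheduled task, written with snowflake-connector-python. The connection parameters, warehouse, schemas, and table names (TRANSFORM_WH, RAW.ORDERS, ANALYTICS.ORDERS_CURATED) are hypothetical placeholders, not details of any client environment.

    import snowflake.connector

    # Placeholder credentials and objects; replace with real account details.
    conn = snowflake.connector.connect(
        account="your_account",
        user="your_user",
        password="your_password",
        warehouse="TRANSFORM_WH",
        database="DEMO_DB",
    )
    cur = conn.cursor()

    # Capture row-level changes on the raw table with a stream.
    cur.execute("CREATE STREAM IF NOT EXISTS RAW.ORDERS_STREAM ON TABLE RAW.ORDERS")

    # A task that loads new rows into the curated table every 15 minutes,
    # but only when the stream actually has data.
    cur.execute("""
        CREATE TASK IF NOT EXISTS ANALYTICS.LOAD_ORDERS_CURATED
          WAREHOUSE = TRANSFORM_WH
          SCHEDULE = '15 MINUTE'
          WHEN SYSTEM$STREAM_HAS_DATA('RAW.ORDERS_STREAM')
        AS
          INSERT INTO ANALYTICS.ORDERS_CURATED
          SELECT order_id, customer_id, amount, updated_at
          FROM RAW.ORDERS_STREAM
    """)

    # Tasks are created suspended; resume to start the schedule.
    cur.execute("ALTER TASK ANALYTICS.LOAD_ORDERS_CURATED RESUME")
    conn.close()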
In This Role, You Will
· Design, implement, and manage scalable and efficient data architecture on the AWS cloud platform.
· Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
· Perform complex data transformations and processing using PySpark (on AWS Glue, EMR, or Databricks), Snowflake's data processing capabilities, or other relevant tools (a minimal PySpark sketch follows this list).
· Build and maintain data lake solutions using open table formats such as Apache Hudi, Delta Lake, or Apache Iceberg.
· Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
· Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
· Integrate data from various sources, both internal and external, ensuring data quality and consistency.
· Ensure data models are designed for scalability, reusability, and flexibility.
· Implement data quality checks, validations, and monitoring processes to ensure data accuracy and integrity across AWS and Snowflake environments.
· Adhere to data governance standards and best practices to maintain data security and compliance.
· Tune and optimize performance across the AWS and Snowflake platforms.
· Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights.
· Provide guidance and mentorship to junior team members to enhance their technical skills.
· Maintain comprehensive documentation for data pipelines, processes, and architecture within both AWS and Snowflake environments, including best practices, standards, and procedures.
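As a rough illustration of the transformation work described above, the following is a minimal PySpark sketch of a batch curation step. The S3 paths and column names (order_id, amount, order_ts) are hypothetical, and on Glue or EMR the Spark session is normally provided by the job runtime rather than created by hand.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders_curation").getOrCreate()

    # Read raw Parquet files from a hypothetical landing zone.
    raw = spark.read.parquet("s3://example-landing-bucket/orders/")

    # Basic cleansing and a derived column, typical of a curation step.
    curated = (
        raw.dropDuplicates(["order_id"])
           .filter(F.col("amount") > 0)
           .withColumn("order_date", F.to_date("order_ts"))
    )

    # Write the curated layer partitioned by date for efficient downstream queries.
    curated.write.mode("overwrite").partitionBy("order_date").parquet(
        "s3://example-curated-bucket/orders/"
    )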
Skills & Knowledge
· Bachelor's degree in Computer Science, Engineering, or a related field.
· 10 years of experience in Information Technology designing, developing, and executing solutions.
· 4 years of hands-on experience designing and executing data solutions on the AWS and Snowflake cloud platforms as a Data Engineer.
· Strong proficiency in AWS services such as Glue, EMR, and Athena, as well as Databricks, and with file formats such as Parquet and Avro.
· Hands-on experience in data modelling and in building batch and real-time pipelines using Python, Java, or JavaScript, as well as experience working with RESTful APIs, is required.
· Hands-on experience handling real-time data streams from Kafka or Kinesis is required (a minimal Kinesis polling sketch follows this list).
· Expertise in DevOps and CI/CD implementation.
· Hands-on experience with SQL and NoSQL databases.
· Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
· Knowledge of data quality, governance, and security best practices.
· Familiarity with machine learning concepts and the integration of ML pipelines into data workflows.
· Hands-on experience working in an Agile setting.
· Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
· Able to articulate, create, and maintain technical and non-technical documentation.
· AWS and Snowflake Certifications are preferred.
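To give a concrete flavor of the real-time streaming skills listed above, here is a minimal polling sketch that reads records from a Kinesis stream with boto3. The stream name ("orders-events"), region, and record format are hypothetical; a production pipeline would more typically use the Kinesis Client Library, Lambda triggers, or Spark Structured Streaming.

    import json
    import time
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    # Read from the first shard only, for illustration.
    shard_id = kinesis.list_shards(StreamName="orders-events")["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName="orders-events",
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while True:
        batch = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in batch["Records"]:
            event = json.loads(record["Data"])   # assumes JSON-encoded events
            print(event)                         # replace with real transform/load logic
        iterator = batch["NextShardIterator"]
        time.sleep(1)                            # simple throttle between polls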