What are the responsibilities and job description for the Sr. Data Engineer position at Argo Group?
The Senior Data Engineer is a full-time, permanent position based on-site in our San Antonio office.
The ideal candidate will have a proven track record of implementing data ingestion and transformation pipelines for large-scale organizations. The candidate should have 8 years of experience and love being a problem solver, translating requirements into data metrics and schemas that are easy to understand. We are seeking someone with deep technical skills across technologies such as Snowflake, Talend, AWS storage integration solutions, and DevOps. The candidate should also have strong interpersonal skills, enabling them to guide people through conversations and gather the information they need.
Who You Are
You're someone who wants to see the impact of your work making a difference every day by driving outcomes and results for stakeholders and solving client needs. Your friends describe you as thorough, analytical, detail-oriented, and someone who gets things done. You are someone with high standards who leads by example and who will take pride in Argo like we do. Most of all, you love solving difficult business and technical challenges to make a difference in the lives of other people.
Qualifications:
Core Skills: SQL • ETL • Talend • Snowflake • Scripting • Git • Batch and streaming technologies.
- 8 years’ experience in Data Warehousing and Data Engineering.
- 3 years’ strong experience with Snowflake and Talend.
- Strong SQL and PL/SQL skills and ability to write queries and data extracts.
- Experience working with different file formats such as Parquet, Avro, and JSON.
- Good understanding of Snowflake database architecture and ability to design and build optimal data processing pipelines.
- Demonstrated skill in designing highly scalable ETL processes that handle complex data transformations and diverse data formats, including data cleansing, data quality assessment, error handling, and monitoring.
- Expertise in building and managing large-volume data processing platforms (both streaming and batch) is a must.
- Design, develop, manage, and monitor complex ETL data pipelines, and support them through all environments.
- Experience with Python/JavaScript or other scripting languages is a plus.
- Proficient with platforms and technologies that support DevOps and the SDLC, leveraging CI/CD principles and best practices.
- Ability to work with developers to build CI/CD pipelines and self-service build tools to automate deployment processes.
- Working knowledge of and experience with orchestration frameworks such as Airflow, UC4, and AWS Step Functions is a plus.
- Knowledge of containerization (Docker/Kubernetes) is a plus.
- Provide support and troubleshooting for data platforms. Must be willing to provide escalated on-call support for complicated or critical incidents.
- Work well within an Agile team, or have sound knowledge of Agile Scrum team practices.
- Manage and prioritize multiple assignments.
- Ability to work individually and as part of a team.
- Provide technical guidance and mentoring for other team members.
- Good communication and cross-functional skills.