What are the responsibilities and job description for the Data Engineer (Remote) position at Authority Brands?
The Data Analytics team at Authority Brands is seeking a skilled Data Engineer with at least 4 years of experience, strong SQL expertise, Python programming skills, and hands-on experience with AWS, including AWS Glue. You will collaborate with our data analytics and development teams to build, maintain, and optimize our data pipelines and infrastructure. We are looking for a candidate who enjoys solving complex data challenges and is eager to contribute to a dynamic team.
Key Responsibilities:
Develop and optimize complex SQL queries to ensure efficient data retrieval, manipulation, and reporting.
Collaborate with data analysts, data scientists, and cross-functional teams to understand their data requirements and deliver solutions.
Own the design, development, and maintenance of ongoing metrics, reports, analyses, and dashboards to drive key business decisions.
Recognize and adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
Analyze and solve problems at their root cause, stepping back to understand the broader business context.
Design, build, and maintain efficient and reliable data pipelines that support the company's data platform.
Leverage Python for ETL processes, automation scripts, and data pipeline development.
Utilize AWS services, including S3, RDS, Redshift, Lambda, and AWS Glue, to design and maintain cloud-based data infrastructure.
Work with AWS Glue for ETL transformations, data cataloging, and automation of data processes.
Monitor and improve the performance of data pipelines and data processing systems.
Qualifications:
Bachelor’s degree in Computer Science, Engineering, or a related field.
Minimum of 4 years of experience in data engineering or a related role.
Strong proficiency in SQL for querying, optimizing, and managing large datasets.
Experience with data modeling and schema design.
Solid programming skills in Python, with experience in building and optimizing ETL pipelines.
Hands-on experience with AWS cloud services, including data-related tools such as S3, Redshift, RDS, Lambda, and AWS Glue.
Familiarity with version control (e.g., Git), CI/CD pipelines, and agile methodologies.
Experience working with large datasets and building scalable solutions.
Strong problem-solving skills and ability to work independently or within a team.
Excellent communication skills with the ability to explain technical concepts to non-technical stakeholders.
Preferred Skills:
Experience with machine learning pipelines or data science frameworks.