What are the responsibilities and job description for the Data Engineer (Middle) ID28109 position at AgileEngine?
AgileEngine is one of the Inc. 5000 fastest-growing companies in the US and a top-3 ranked dev shop according to Clutch. We create award-winning custom software solutions that help companies across 15 industries change the lives of millions.
If you like a challenging environment where you’re working with the best and are encouraged to learn and experiment every day, there’s no better place - guaranteed! :)
What you will do
- Lift and shift ETL pipelines from legacy to new environments (see the sketch after this list);
- Monitor data pipelines, identify bottlenecks, optimize data processing and storage for performance and cost-effectiveness;
- Analyze data sources and build cloud data warehouse and data lake solutions;
- Collaborate effectively with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders.
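For a concrete picture of the lift-and-shift responsibility, here is a minimal sketch, assuming PySpark and hypothetical bucket paths and columns (none of which come from the actual posting): read a legacy extract, re-apply its transformation, and land it in the new S3-backed location.

```python
# Minimal lift-and-shift sketch; paths, columns, and transformation logic
# are illustrative assumptions, not details of the actual project.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy-lift-and-shift").getOrCreate()

# Read a hypothetical CSV export produced by the legacy pipeline.
orders = (
    spark.read.option("header", "true")
    .csv("s3://legacy-bucket/exports/orders/")  # assumed source path
)

# Re-apply the legacy transformation: cast types and drop invalid rows.
cleaned = (
    orders.withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull() & (F.col("amount") > 0))
)

# Land the result in the new environment as partitioned Parquet.
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")  # assumed partition column
    .parquet("s3://new-data-lake/orders/")  # assumed target path
)
```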
Must haves
- 3 years of professional experience in a Data Engineering role;
- Proficiency in programming languages commonly used in data engineering, such as Python, SQL, and optionally Scala, for working with data processing frameworks like Spark and libraries like Pandas;
- Proficiency in designing, deploying, and managing data pipelines using Apache Airflow for workflow orchestration and scheduling (see the DAG sketch after this list);
- Ability to design, develop, and optimize ETL processes to move and transform data from various sources into the data warehouse, ensuring data quality, reliability, and efficiency;
- Knowledge of big data technologies and frameworks such as Apache Spark for processing large volumes of data efficiently;
- Extensive hands-on experience with various AWS services relevant to data engineering, including but not limited to Amazon MWAA, Amazon S3, Amazon RDS, Amazon EMR, AWS Lambda, AWS Glue, Amazon Redshift, AWS Data Pipeline, Amazon DynamoDB;
- Deep understanding and practical experience in building and optimizing cloud data warehousing solutions;
- Ability to monitor data pipelines, identify bottlenecks, and optimize data processing and storage for performance and cost-effectiveness;
- Excellent communication skills to collaborate effectively with cross-functional teams including data scientists, analysts, software engineers, and business stakeholders;
- Bachelor’s degree in computer science/engineering or other technical field, or equivalent experience;
- Upper-intermediate English level.
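As a rough illustration of the Airflow must-have above, here is a minimal sketch of an orchestrated extract-and-load DAG, assuming Airflow 2.x; the DAG id, schedule, and task bodies are placeholder assumptions, not part of the role's actual pipelines.

```python
# Minimal Airflow 2.x DAG sketch; dag_id, schedule, and task logic are
# illustrative assumptions only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull rows from a source system; the return value is
    # pushed to XCom automatically.
    return [{"id": 1, "amount": 42.0}]


def load(ti):
    # Placeholder: read the extracted rows from XCom and load them downstream.
    rows = ti.xcom_pull(task_ids="extract")
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="example_etl",  # assumed name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```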
Nice to haves
- Familiarity with the fintech industry, including an understanding of financial data, regulatory requirements, and business processes specific to the domain;
- Documentation skills to document data pipelines, architecture designs, and best practices for knowledge sharing and future reference;
- GCP services relevant to data engineering;
- Snowflake (see the connector sketch after this list);
- OpenSearch, Elasticsearch;
- Jupyter for data analysis;
- Bitbucket, Bamboo;
- Terraform.
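For the Snowflake nice-to-have, a minimal sketch using the snowflake-connector-python package; the account identifier, credentials, and query are placeholder assumptions and would come from the actual project's configuration.

```python
# Minimal Snowflake query sketch; account, credentials, warehouse, database,
# and table are placeholder assumptions.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345",  # assumed account identifier
    user="etl_user",    # assumed credentials
    password="...",
    warehouse="ETL_WH",
    database="ANALYTICS",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM orders")  # assumed table
    print(cur.fetchone()[0])
finally:
    conn.close()
```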
The benefits of joining us
- Professional growth
- Competitive compensation
- A selection of exciting projects
- Flextime