What are the responsibilities and job description for the Data Engineer position at Agile Dream Team?
At Agile Dream Team, we harness the power of data to drive intelligent business decisions. We are looking for a highly skilled Data Engineer to design, build, and maintain scalable data pipelines and infrastructure for AI-driven applications.
Learn more about us at www.agiledreamteam.com
Role Overview
As a Data Engineer, you will be responsible for building and optimizing data pipelines, ensuring data quality, and enabling real-time and batch processing for analytics and AI models. You will work closely with Data Scientists, AI Engineers, and Software Developers to develop scalable data solutions that support business intelligence and machine learning applications.
Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines for structured and unstructured data (see the batch ETL sketch after this list).
- Build data architectures that support batch and real-time data processing.
- Optimize data storage, retrieval, and performance for analytics and AI applications.
- Work with big data processing frameworks such as Apache Spark, Apache Flink, or Kafka.
- Implement data governance, security, and compliance standards.
- Ensure high data quality through data validation, monitoring, and anomaly detection.
- Automate data ingestion, transformation, and processing for AI/ML workflows.
- Deploy and manage data solutions in AWS, Azure, or GCP cloud environments.
- Collaborate with Data Scientists and AI Engineers to enable data-driven AI models.
- Utilize SQL and NoSQL databases for efficient storage and retrieval of large datasets.
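To give a flavor of the day-to-day work, here is a minimal batch ETL sketch in PySpark. The bucket paths and column names (event_ts, user_id, event_type) are hypothetical, and this is an illustrative outline rather than a prescribed pipeline.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch ETL sketch: read raw JSON events, clean them,
# and write partitioned Parquet for downstream analytics.
# Paths and column names are hypothetical placeholders.
spark = SparkSession.builder.appName("events-etl").getOrCreate()

raw = spark.read.json("s3://raw-bucket/events/")  # extract

cleaned = (
    raw
    .filter(F.col("user_id").isNotNull())                  # basic validation
    .withColumn("event_date", F.to_date("event_ts"))       # derive partition key
    .dropDuplicates(["user_id", "event_ts", "event_type"]) # de-duplicate
)                                                          # transform

(cleaned
 .write
 .mode("overwrite")
 .partitionBy("event_date")              # partition for efficient pruning
 .parquet("s3://curated-bucket/events/"))  # load
```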
Required Skills & Experience
- Proficiency in data engineering frameworks: Apache Spark, Hadoop, Airflow, dbt.
- Strong knowledge of SQL, NoSQL, and data modeling techniques.
- Experience in ETL/ELT pipeline development using Python, Scala, or Java.
- Hands-on experience with cloud data platforms (AWS Redshift, BigQuery, Snowflake, Databricks).
- Expertise in real-time streaming technologies (Apache Kafka, Kinesis, Pulsar); see the consumer sketch after this list.
- Experience with data warehouse and lakehouse architectures.
- Strong understanding of data partitioning, indexing, and performance tuning.
- Familiarity with containerized deployments (Docker, Kubernetes) for data pipelines.
- Experience in CI/CD automation for data workflows.
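On the real-time side, a minimal consumer loop with the confluent-kafka Python client looks like the sketch below. The broker address, group id, and topic name are placeholders, assuming a locally reachable Kafka cluster.

```python
import json
from confluent_kafka import Consumer

# Minimal streaming-ingestion sketch using confluent-kafka.
# Broker address, group id, and topic name are placeholders.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "etl-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(1.0)        # wait up to 1s for a record
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())  # deserialize the payload
        # ... validate / transform / load the event here ...
        print(event)
finally:
    consumer.close()
```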
Preferred Qualifications
- Experience with machine learning data pipelines and feature engineering.
- Hands-on knowledge of data lake technologies (Delta Lake, Iceberg, Hudi); see the Delta Lake sketch after this list.
- Familiarity with Terraform, Ansible, or other infrastructure-as-code (IaC) tools.
- Understanding of graph databases (Neo4j, ArangoDB) and time-series databases.
- Experience in data observability, lineage tracking, and governance.
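For the lakehouse piece, here is a small Delta Lake sketch showing an append with schema evolution and a time-travel read. It assumes the delta-spark package is installed; the table path and sample rows are illustrative only.

```python
from pyspark.sql import SparkSession

# Minimal Delta Lake sketch (assumes the delta-spark package is available).
# Table path and sample data are illustrative placeholders.
spark = (
    SparkSession.builder
    .appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "signup"), (2, "login")], ["user_id", "event_type"])

# Append with schema evolution enabled.
(df.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")
   .save("/tmp/events_delta"))

# Time travel: read the table as of its first version.
first_version = (spark.read.format("delta")
                 .option("versionAsOf", 0)
                 .load("/tmp/events_delta"))
first_version.show()
```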
Why Join Us?
- Work with cutting-edge data technologies in a forward-thinking AI/ML company.
- 100% remote role with a flexible schedule.
- Opportunities for growth and continuous learning in Data Engineering and AI.
- Engage in high-impact data projects that power real-world AI applications.
- Competitive salary.
Ready to shape the future of data engineering? Get to know us and apply today!