What are the responsibilities and job description for the Sr Azure Data Engineer position at ZapCom Group?
Job Details
About Us
Zapcom is a global Product Engineering and Technology Services company, specializing in bespoke, customer-centric solutions across industries like BFSI, e-commerce, retail, travel, transportation, and hospitality. Headquartered in the US, with a presence in India, Europe, Canada, and MENA, we excel in transforming ideas into tangible outcomes using AI, ML, Cloud solutions, and full-stack development.
At Zapcom, we value accountability, ownership, and equality, empowering you to excel. We listen to your aspirations and provide the support needed to achieve them. Our diverse, collaborative culture ensures every voice is heard, driving innovation and business value. With global opportunities and expansion plans, now is the perfect time to join our team. Work on impactful projects that shape the future. Apply today and be part of something extraordinary!
Key Responsibilities:
- Design and implement data ingestion, transformation, and movement using Azure Data Factory (ADF), Azure Synapse, and Azure Data Lake.
- Collaborate with business stakeholders, data scientists, and engineers to build robust data solutions.
- Develop and manage high-volume, real-time, and batch data ingestion pipelines using Azure Data Factory (ADF); see the ADF sketch after this list.
- Implement event-driven architectures for real-time data movement and processing.
- Develop large-scale data processing solutions using Azure Databricks and Apache Spark with PySpark, Scala, or Python; see the PySpark sketch after this list.
- Optimize data partitioning, caching, and indexing for efficient performance.
- Manage complex transformations and aggregations for structured and unstructured datasets.
- Design and implement high-performance data models in Azure Synapse Analytics using dedicated SQL Pools and Spark Pools; see the Synapse DDL sketch after this list.
- Optimize query performance, workload management, and cost efficiency in Synapse Analytics.
- Implement columnstore indexes, partitioning strategies, and data caching to enhance performance.
- Design and manage secure, scalable data lakes using Azure Data Lake Storage Gen2 (ADLS Gen2).
- Implement RBAC (Role-Based Access Control), encryption, and data masking to ensure security; see the ADLS Gen2 ACL sketch after this list.
- Implement Azure Monitor, Log Analytics, and Application Insights for data pipeline monitoring and troubleshooting; see the Log Analytics sketch after this list.
- Optimize cost management, auto-scaling, and performance tuning across Azure services.
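The sketches below illustrate a few of these responsibilities in code. All subscription IDs, resource names, paths, and object IDs are hypothetical placeholders, not details from this posting. First, a minimal sketch of triggering and polling an ADF pipeline run with the azure-mgmt-datafactory SDK, assuming a pipeline named pl_daily_ingest already exists in the factory:

```python
# Minimal sketch: trigger and poll an ADF pipeline run via the Azure SDK.
# Subscription, resource group, factory, and pipeline names are hypothetical.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # assumption: supplied by the caller
RESOURCE_GROUP = "rg-data-platform"     # hypothetical
FACTORY_NAME = "adf-ingestion"          # hypothetical
PIPELINE_NAME = "pl_daily_ingest"       # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, passing a runtime parameter defined on the pipeline.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"ingest_date": "2024-01-01"},
)

# Poll until the run reaches a terminal state.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```

In an event-driven design, the same pipeline would more likely be started by a storage event trigger than by polling code like this; the SDK call is shown only to make the orchestration concrete.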
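Next, a minimal Databricks-style PySpark sketch of a batch transformation with caching and date partitioning; the abfss paths and column names are assumptions:

```python
# Minimal PySpark sketch: batch aggregation with caching and partitioned output.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_aggregation").getOrCreate()

# Read raw data from the data lake (path is a placeholder).
orders = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/orders/")

# Cache a filtered subset that several downstream aggregations reuse.
recent = orders.filter(F.col("order_date") >= "2024-01-01").cache()

# Example aggregation: daily revenue and distinct customers per region.
daily_revenue = (
    recent.groupBy("region", "order_date")
          .agg(F.sum("amount").alias("revenue"),
               F.countDistinct("customer_id").alias("customers"))
)

# Write back partitioned by date so downstream queries can prune partitions.
(daily_revenue.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("abfss://curated@mydatalake.dfs.core.windows.net/daily_revenue/"))
```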
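For the Synapse data-modeling work, a sketch of the dedicated SQL pool DDL behind hash distribution and clustered columnstore indexes, executed here through pyodbc; the server, database, and table names are placeholders:

```python
# Minimal sketch: hash-distributed, columnstore-indexed fact table
# in a Synapse dedicated SQL pool. All names are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"   # placeholder
    "DATABASE=sqlpool01;"                        # placeholder
    "Authentication=ActiveDirectoryInteractive;"
)

ddl = """
CREATE TABLE dbo.FactSales
(
    sale_id      BIGINT         NOT NULL,
    customer_id  INT            NOT NULL,
    sale_date    DATE           NOT NULL,
    amount       DECIMAL(18, 2)
)
WITH
(
    DISTRIBUTION = HASH(customer_id),   -- co-locate rows joined on customer_id
    CLUSTERED COLUMNSTORE INDEX         -- standard choice for large fact tables
);
"""
cursor = conn.cursor()
cursor.execute(ddl)
conn.commit()
```

Hash-distributing on the most common join key avoids data movement at query time, which is usually the dominant cost in dedicated SQL pool workloads.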
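For the data lake security responsibilities, a sketch of tightening access on an ADLS Gen2 directory with POSIX ACLs via the azure-storage-file-datalake package; the storage account, filesystem, and AAD group object ID are dummy values:

```python
# Minimal sketch: grant read+execute on an ADLS Gen2 directory to one AAD
# group and lock out everyone else. All identifiers are placeholders.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",  # placeholder
    credential=DefaultAzureCredential(),
)

fs = service.get_file_system_client("curated")
directory = fs.get_directory_client("finance/transactions")

# POSIX-style ACL: owner full access, a specific AAD group read+execute,
# all other principals denied. The GUID is a hypothetical group object id.
directory.set_access_control(
    acl="user::rwx,group::r-x,other::---,"
        "group:00000000-0000-0000-0000-000000000000:r-x"
)
```

In practice ACLs like this complement, rather than replace, RBAC role assignments scoped at the account or container level.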
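Finally, a sketch of a Log Analytics query for failed ADF activity runs using the azure-monitor-query package, assuming ADF diagnostic logs are routed to the workspace; the workspace ID is a placeholder:

```python
# Minimal sketch: query Log Analytics for failed ADF activity runs.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL over the resource-specific ADF diagnostics table.
query = """
ADFActivityRun
| where Status == 'Failed'
| summarize failures = count() by PipelineName, bin(TimeGenerated, 1h)
| order by failures desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",   # placeholder
    query=query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```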
Key Skills Required:
- 8 years of experience in data engineering, ETL development, and cloud-based data integration.
- Strong experience with Azure Data Factory (ADF): ETL/ELT pipeline orchestration and data movement.
- Azure Databricks: Large-scale data transformation and big data processing with Apache Spark, PySpark, Scala, or Python.
- Azure Synapse Analytics: Data warehousing, SQL Pools, Spark Pools, and performance tuning.
- Azure Data Lake Storage Gen2 (ADLS Gen2): Secure, scalable data lake architecture.
- SQL Server: Advanced T-SQL, stored procedures, indexing, and query optimization; see the T-SQL sketch at the end of this section.
- Power BI, Tableau, or other BI tools for data visualization.
- Experience in the financial services/banking domain.
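To make the T-SQL expectation concrete, a small sketch of a covering index plus a stored procedure, executed from Python; the Orders table, procedure name, and connection details are all hypothetical:

```python
# Minimal sketch: covering index + stored procedure, called via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sqlserver.example.com;DATABASE=Sales;"   # placeholders
    "Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Covering nonclustered index so the lookup below is a seek, not a scan.
cursor.execute("""
CREATE NONCLUSTERED INDEX IX_Orders_CustomerDate
ON dbo.Orders (customer_id, order_date)
INCLUDE (amount);
""")

# Parameterized procedure for a per-customer revenue report.
cursor.execute("""
CREATE PROCEDURE dbo.GetCustomerRevenue
    @customer_id INT, @from_date DATE
AS
BEGIN
    SET NOCOUNT ON;
    SELECT order_date, SUM(amount) AS revenue
    FROM dbo.Orders
    WHERE customer_id = @customer_id AND order_date >= @from_date
    GROUP BY order_date;
END
""")
conn.commit()

# Call the procedure with bound parameters.
for row in cursor.execute(
        "EXEC dbo.GetCustomerRevenue @customer_id=?, @from_date=?",
        42, "2024-01-01"):
    print(row)
```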