What are the responsibilities and job description for the Data Engineer position at SPECTRAFORCE?
Job Title: Data Engineer
Location: Glendale, AZ (Hybrid, 3 days onsite)
Duration: 12 months
Description
We are seeking a highly skilled and motivated Software Engineer specializing in Data Engineering to join our growing team. This critical role will focus on designing, developing, and optimizing our data infrastructure within the Azure cloud environment. The ideal candidate possesses a deep understanding of data engineering principles and extensive hands-on experience with Azure Data Lake, Azure Data Factory, Databricks, and SAP Business Objects. You will play a key role in building and maintaining robust data pipelines, ensuring data quality, and enabling data-driven insights for the business. This position requires U.S. citizenship due to project requirements.
This position is designated as part-time telework per our global telework policy and will require at least three days of in-person attendance per week at the assigned office or project. Weekly in-person schedules will be determined by the individual and their supervisor, in consultation with functional or project leadership.
Roles & Responsibilities:
Design, develop, and optimize scalable and efficient data processing pipelines and architectures within Azure Data Lake and Databricks, leveraging best practices for performance and maintainability.
Implement and manage complex ETL (Extract, Transform, Load) processes to seamlessly integrate data from diverse sources (e.g., databases, APIs, streaming platforms) into Azure Data Lake, ensuring data quality and consistency.
Develop and maintain interactive dashboards and reports using SAP Business Objects and Power BI, translating complex data into actionable business insights. Focus on performance optimization and data accuracy.
Leverage Azure Data Factory for data orchestration, workflow automation, and scheduling, ensuring reliable and timely data delivery.
Implement and maintain Azure Security & Governance policies, including access control, data encryption, and compliance frameworks, to ensure data protection and adherence to industry best practices.
Optimize data storage and retrieval mechanisms within Azure, including performance tuning of Databricks clusters and Azure SQL databases, to improve query performance and scalability.
Collaborate effectively with cross-functional teams (e.g., business analysts, data scientists, product managers) to understand business requirements, translate them into technical solutions, and communicate technical concepts clearly.
Implement data quality checks and validation rules throughout the data pipeline to ensure data accuracy, completeness, and consistency.
Monitor, troubleshoot, and enhance existing data solutions, proactively identifying and resolving performance bottlenecks and data quality issues.
Create and maintain comprehensive technical documentation, including design specifications, data flow diagrams, and operational procedures, to facilitate knowledge sharing and team collaboration.
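As an illustration of the data quality checks and validation rules mentioned in the responsibilities above, the sketch below shows the general idea in plain Python. The column names, rules, and sample rows are hypothetical placeholders; in this role such checks would typically run inside Databricks/PySpark as part of the pipeline rather than as standalone code.

```python
# Minimal sketch of pipeline data-quality validation: each rule checks one
# field and collects a violation message for rows that fail. Field names and
# thresholds are illustrative assumptions, not from the job description.

def validate_rows(rows, rules):
    """Apply each (field, check, message) rule to every row; return violations."""
    violations = []
    for i, row in enumerate(rows):
        for field, check, message in rules:
            if not check(row.get(field)):
                violations.append(f"row {i}: {field} {message}")
    return violations

# Example rules: completeness (non-null) and a combined type/range check.
rules = [
    ("order_id", lambda v: v is not None, "must not be null"),
    ("amount", lambda v: isinstance(v, (int, float)) and v >= 0,
     "must be a non-negative number"),
]

rows = [
    {"order_id": 1, "amount": 19.99},
    {"order_id": None, "amount": -5},
]

problems = validate_rows(rows, rules)
for p in problems:
    print(p)
```

A real implementation would also log these violations and either quarantine the failing records or fail the pipeline run, depending on severity.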
Skills Required
- 4 years of hands-on experience in data engineering, data warehousing, and cloud-based data platforms.
- Deep expertise in Azure Data Lake, Azure Data Factory, Azure Security & Governance, Databricks, and SAP Business Objects.
- Strong proficiency in SQL, including complex query writing, query optimization, and performance tuning.
- Proven experience in developing and maintaining Power BI dashboards and reports.
- Hands-on experience with Azure services such as Azure Synapse Analytics, Azure SQL Database, and Azure Blob Storage.
- Solid understanding of data modeling concepts, ETL processes, and big data frameworks (e.g., Spark).
- Experience in optimizing and managing large-scale datasets in cloud environments.
- Experience developing and maintaining ETL packages using SSIS and reports using SSRS.
- Strong analytical and problem-solving skills with a keen attention to detail.
- Excellent communication and collaboration skills.
- Master's degree in a relevant field.
- Familiarity with machine learning models and data science concepts.
- Understanding of DevOps practices and CI/CD pipelines for data applications.
- Experience with data governance tools and frameworks.
- Experience with other cloud platforms (e.g., AWS, GCP).
Salary: $57