Job Posting for Data Engineer at Initialize
Data Developer (Python/PySpark and Azure Databricks) - Remote (Romania, Poland, or Portugal), 6 months
Skills/experience
- Proficiency in Python, PySpark/SQL, the Databricks Jobs API, DataFrames, and unit/integration/e2e testing.
- Strong understanding of, and experience with, OOP principles and practices (SOLID, DRY).
- Proven ability to write modular, testable, and maintainable code.
- Proven ability to design scalable, cost-effective, and high-performance solutions, including cluster sizing, optimization, and cost estimation.
- Deep understanding of modern data architecture, including the medallion architecture and data warehousing concepts.
- Experience applying development practices such as code reviews, pair programming, logging and monitoring, and documentation.
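As an illustration of the modular, testable transformation code this role emphasizes, here is a minimal, Spark-free sketch in plain Python (standing in for a PySpark DataFrame pipeline; all names and data are illustrative, not from the posting):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reading:
    """Illustrative record type standing in for a DataFrame row."""
    sensor_id: str
    value: float

def clean_readings(rows):
    """Drop invalid rows (single responsibility: validation only)."""
    return [r for r in rows if r.value >= 0]

def average_by_sensor(rows):
    """Aggregate cleaned rows per sensor (a separate, independently testable step)."""
    totals = {}
    for r in rows:
        s, n = totals.get(r.sensor_id, (0.0, 0))
        totals[r.sensor_id] = (s + r.value, n + 1)
    return {k: s / n for k, (s, n) in totals.items()}

# Unit-test-style usage: each step can be asserted in isolation.
raw = [Reading("a", 1.0), Reading("a", 3.0), Reading("b", -5.0)]
cleaned = clean_readings(raw)
assert average_by_sensor(cleaned) == {"a": 2.0}
```

Keeping validation and aggregation as separate pure functions is the same design habit (SOLID, DRY) that makes PySpark jobs easy to unit test without a cluster.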
Desirable
- Experience with core Azure services such as Databricks, Key Vault, Data Factory, Service Bus, Event Hubs, and Data Lake Storage Gen2.
- A DevOps mindset with a focus on automation, CI/CD, Infrastructure as Code, and Pipelines as Code.
- Experience working within Agile Scrum/Kanban workflows.
- Experience building CI/CD and IaC pipelines (YAML, ARM templates, Terraform).
- Nice to have: one or more recent data development certifications.
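For candidates new to the Databricks Jobs API mentioned above, a hedged sketch of triggering a job run via its REST `run-now` endpoint (Jobs API 2.1) is shown below; the workspace URL, token, and job ID are placeholders, and error handling is omitted:

```python
import json
import urllib.request

def build_run_now_request(workspace_url: str, token: str, job_id: int) -> urllib.request.Request:
    """Build a POST request for the Databricks Jobs API 2.1 run-now endpoint."""
    payload = json.dumps({"job_id": job_id}).encode("utf-8")
    return urllib.request.Request(
        url=f"{workspace_url}/api/2.1/jobs/run-now",
        data=payload,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Building the request is a pure step, so it can be unit tested without a workspace.
req = build_run_now_request("https://adb-123.azuredatabricks.net", "dapi-example-token", 42)
# urllib.request.urlopen(req) would actually submit the run; omitted here.
```

Separating request construction from request submission keeps the network boundary thin, which is the same testability concern the posting raises for pipeline code.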
Team player with strong collaboration and communication skills. Good proficiency in English, both written and verbal.
Urgent role: apply now!