What are the responsibilities and job description for the Databricks Engineer @ REMOTE position at Vedasoft Inc?
Job Details
Vedasoft Inc is seeking a skilled Databricks Engineer with expertise in Python and PySpark to design, develop, and optimize data pipelines and workflows. The ideal candidate has experience with Apache Spark, ETL processes, and big data architectures in an Azure environment.
Key Responsibilities:
Develop and maintain scalable data pipelines using Databricks, PySpark, and Python (see the illustrative sketch after this list)
Work with structured and unstructured data in cloud-based environments
Implement data transformation, cleansing, and integration solutions
Collaborate with data engineers, analysts, and stakeholders to meet business requirements
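For illustration only, the sketch below shows the kind of pipeline work described above: reading a source table, cleansing it, applying a simple transformation, and writing a curated Delta output. The table name (raw_orders), column names, and output path are hypothetical placeholders, not part of the role description.

# Illustrative sketch only: a minimal PySpark cleansing/transformation job.
# Table, column, and path names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Read structured source data (on Databricks this would typically be a Delta table).
raw = spark.read.table("raw_orders")

# Basic cleansing: drop duplicates, remove rows missing a key, normalize text columns.
cleaned = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("customer_name", F.trim(F.col("customer_name")))
)

# Simple transformation: derive an order-date column and aggregate daily revenue.
daily_revenue = (
    cleaned.withColumn("order_date", F.to_date("order_ts"))
           .groupBy("order_date")
           .agg(F.sum("amount").alias("total_revenue"))
)

# Write the curated result as Delta, partitioned by date for downstream consumers.
(daily_revenue.write
              .format("delta")
              .mode("overwrite")
              .partitionBy("order_date")
              .save("/mnt/curated/daily_revenue"))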
Requirements:
Strong experience with Databricks, Python, and PySpark
Hands-on experience with Apache Spark and big data processing
Proficiency in SQL and experience working with the Azure cloud platform
Experience with ETL development and performance tuning
Knowledge of data lake, data warehouse, and data modeling principles