What are the responsibilities and job description for the Mid Level Developer (Databricks) position at Canopy One Solutions Inc?
Job Details
Greetings from Canopy One Solutions,
Hope your day is treating you well!
Please glance through the requirement and respond with your finest consultant resumes and contact details.
Project Details:
Role: Mid Level Developer (Databricks)
Location: Seffner, FL
Duration: 6 months / long-term
Type: CTH/W2
Job Description:
- Must have a Databricks certification (Advanced Data Engineer certification or an equivalent Fabric certification)
- The ideal candidate will have experience in data engineering, an understanding of machine learning workflows, and knowledge of modern data architectures.
NOTES:
- If the candidate does not have the Databricks certification but is otherwise qualified, they must agree to obtain it within two months of starting the assignment.
Responsibilities:
- Design, develop, and maintain scalable data pipelines and workflows using Databricks.
- Integrate and process structured and unstructured data using Delta Live Tables and streaming data processing (a pipeline sketch follows this list).
- Support and optimize machine learning workflows by preparing and managing training and inference datasets.
- Collaborate with data scientists and AI teams to support AI/BI Genie integrations.
- Implement Snowflake and Star Schema architectures to optimize data storage and query performance.
- Work on BI reporting pipelines to ensure compatibility with advanced analytics tools.
- Develop and maintain documentation of data processes and architectures.
- Optimize Spark jobs and workflows for performance and cost-efficiency.
- Troubleshoot and resolve data quality and pipeline issues.
- Stay updated on advancements in data engineering, machine learning, and AI technologies.
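For context, here is a minimal sketch of the kind of Delta Live Tables streaming pipeline these responsibilities describe. It is illustrative only: the landing path, table names, columns, and the quality rule are hypothetical, not details from this posting.

```python
import dlt
from pyspark.sql import functions as F

# Bronze layer: ingest raw JSON files as a stream via Auto Loader.
@dlt.table(comment="Raw orders landed from cloud storage.")
def orders_raw():
    return (
        spark.readStream.format("cloudFiles")   # `spark` is provided by the DLT runtime
        .option("cloudFiles.format", "json")
        .load("/mnt/landing/orders")            # hypothetical landing path
    )

# Silver layer: apply a data-quality expectation and light cleanup.
@dlt.table(comment="Cleaned orders ready for downstream ML and BI use.")
@dlt.expect_or_drop("valid_amount", "amount > 0")  # drop rows that fail the rule
def orders_clean():
    return (
        dlt.read_stream("orders_raw")
        .withColumn("order_date", F.to_date("order_ts"))
        .select("order_id", "customer_id", "order_date", "amount")
    )
```

A pipeline like this runs as a Databricks DLT job rather than a standalone script, which is why no SparkSession is created explicitly.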
Requirements:
- Databricks certification (e.g., Databricks Certified Data Engineer Professional).
- 3–7 years of experience in data engineering roles.
- Proficiency in Python, SQL, and Apache Spark.
- Hands-on experience with Delta Live Tables and big data processing frameworks.
- Understanding of machine learning models, pipelines, and AI tools.
- Familiarity with AI/BI Genie or similar tools for business intelligence.
- Experience with data warehouse architecture (e.g., Snowflake, star schema, Kimball methodology); a star-schema join sketch follows this list.
- Knowledge of cloud platforms like AWS, Azure, or Google Cloud Platform.
- Strong problem-solving skills and ability to work in a collaborative environment.
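To illustrate the Spark and star-schema items above, the following is a small PySpark sketch of a BI-style aggregation over a hypothetical star schema. The table and column names are invented, and the broadcast hint is one common way to reduce shuffle cost when joining a large fact table to small dimensions.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-report").getOrCreate()

# Hypothetical star schema: one large fact table, two small dimensions.
fact_sales  = spark.table("sales_fact")
dim_date    = spark.table("date_dim")
dim_product = spark.table("product_dim")

# Broadcasting the small dimensions avoids shuffling the large fact table,
# a typical performance and cost optimization for star-schema joins.
report = (
    fact_sales
    .join(F.broadcast(dim_date), "date_key")
    .join(F.broadcast(dim_product), "product_key")
    .groupBy("fiscal_quarter", "product_category")
    .agg(F.sum("sales_amount").alias("total_sales"))
)

report.show()
```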