What are the responsibilities and job description for the Databricks Platform Admin on AWS position at Intone Networks?
Overview: This role requires a highly skilled Databricks Platform Administrator responsible for the hands-on management, optimization, and maintenance of Databricks environments on AWS. The ideal candidate will have extensive experience in data engineering, programming, and cloud-based integration platforms, ensuring seamless data flow and interoperability between the various systems and applications within our organization.

Primary Responsibilities:
• Design and build large-scale application development projects and programs with a hands-on approach.
• Ensure the technical validity of solutions and actively drive their implementation.
• Develop and maintain detailed business and technical process documentation and training materials; build and code frameworks.
• Review problem logs, identify recurring issues, and implement and automate long-term solutions.
• Perform hands-on development, administration, design, and performance tuning.

Minimum Qualifications:
• 5 years of hands-on experience with a BS or MS in Computer Science, or equivalent education and experience.
• 3 years of hands-on experience in framework development and building integration layers to solve complex business use cases, with a strong emphasis on Databricks and AWS.

Technical Skills:
• Strong hands-on coding skills in Python.
• Extensive hands-on experience with Databricks for developing integration-layer solutions.
• AWS Data Engineer or Machine Learning certification, or equivalent hands-on experience with AWS Cloud services.
• Proficiency in building data frameworks on AWS, including hands-on experience with tools such as AWS Lambda, AWS Glue, Amazon SageMaker, and Amazon Redshift.
• Hands-on experience with cloud-based data warehousing and transformation tools such as Delta Lake tables, dbt, and Fivetran.
• Familiarity with machine learning and open-source machine learning ecosystems.
• Hands-on experience with integration tools and frameworks such as Apache Camel and MuleSoft.
• Solid understanding of API design principles, RESTful services, and message queuing technologies.
• Familiarity with database systems and SQL.
• Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform and AWS CloudFormation.
• Proficiency in setting up and managing Databricks workspaces, including VPC management, security groups, and VPC peering.
• Hands-on experience with CI/CD pipeline management using tools such as AWS CodePipeline, Jenkins, or GitHub Actions.
• Knowledge of monitoring and logging tools such as Amazon CloudWatch, Datadog, or Prometheus.
• Hands-on experience with data ingestion and ETL processes using AWS Glue, Databricks Auto Loader, and Informatica.