What are the responsibilities and job description for the Databricks Architect(AWS/Spark)/Atlanta, GA (Hybrid) position at Radiansys Inc.?
Hi,
We are looking for a Databricks Architect (AWS/Spark) in Atlanta, GA (Hybrid). Anyone interested can share their resume at pkumar@radiansys.com.
Job Title: Data Architect (Databricks/AWS/Spark)
Location: Atlanta, GA (Hybrid)
Employment Type: Full-time only
Must have experience with Databricks, AWS, Apache Spark
Essential functions:
1. Data Architecture & Engineering:
- Design and implement a medallion architecture (raw, silver, and gold layers) to enable efficient data ingestion, processing, and quality management.
- Develop standardized ETL and streaming pipelines using Databricks, Apache Spark, and Apache Airflow, ensuring low-latency data processing.
- Define and enforce data quality and observability frameworks, integrating dashboards and monitoring tools to maintain high data integrity.
- Optimize data pipeline performance and infrastructure costs, identifying bottlenecks and areas for improvement.
2. Technical Leadership & Strategy:
- Lead technical discovery and ongoing development: assess current systems, identify pain points, and define the target-state architecture.
- Provide technical recommendations and a roadmap for implementation, ensuring best practices in data engineering and architecture.
- Guide the selection and implementation of cloud-based data platforms to support scalability, efficiency, and future growth.
- Ensure compliance with security, governance, and regulatory requirements in data handling and processing.
3. Cross-Team Collaboration & Stakeholder Engagement:
- Act as the technical point of contact between engineering teams, business stakeholders, and management.
- Work closely with team members to ensure smooth collaboration and knowledge transfer.
- Translate business requirements into technical solutions, ensuring alignment between data engineering practices and business objectives.
4. Project Delivery & Execution:
- Define best practices, coding standards, and development workflows for data engineering teams.
- Ensure a smooth transition from discovery to implementation, providing hands-on guidance and technical oversight.
- Participate in planning and work closely with the Delivery Manager on timelines, priorities, and other program-related topics.
- Monitor and troubleshoot data pipeline performance, ensuring high availability and reliability of data systems.
Qualifications:
- Cloud provider: AWS
- Programming language: Python
- Frameworks and technologies: AWS Glue, Apache Spark, Apache Kafka, Apache Airflow
- Experience working with on-premises environments is a plus
- Databricks is a MUST
Would be a plus:
- Data Engineering & Architecture: Deep experience with data platforms, specifically using Databricks, Apache Spark, and Apache Airflow.
- Proven capability in designing and implementing medallion architectures (raw, silver, gold).
- ETL Frameworks & Ingestion Patterns: Ability to establish standard ingestion patterns and create a uniform ETL framework for both batch and streaming data.
- Data Quality & Observability: Experience in setting up data quality frameworks, developing dashboards, and implementing observability tools to ensure data integrity.
Regards,
Pinku Kumar
Talent Acquisition – Radiansys Inc.
39510 Paseo Padre Pkwy #110, Fremont, CA 94538
Direct: 510-790-2000, Ext. 1006
Email: pkumar@radiansys.com