What are the responsibilities and job description for the Data Architect position at Saxon Global?
Job Title: Data Architect
Duration: 6-12 months
Location: Onsite 4 days/week, Seattle, Washington
Role Overview:
We are seeking a strategic Data Architect who will play a critical role in shaping and leading data engineering efforts within our organization. The ideal candidate will bring a deep understanding of Databricks, Business Intelligence (BI), and ETL best practices, and will be responsible for designing scalable data solutions that meet both current and future business needs. This is a highly collaborative, leadership-driven role that requires not only technical expertise but also the ability to communicate effectively with stakeholders, guide junior team members, and influence technical direction across the organization.
Key Responsibilities:
- Lead Data Architecture & Engineering: Take ownership of the overall data architecture and engineering efforts, with a specific focus on utilizing Databricks for building and optimizing data pipelines. Ensure best practices in ETL (Extract, Transform, Load) processes and overall data flow.
- Stakeholder Collaboration: Work closely with business stakeholders and technical teams to understand data requirements, align them with business objectives, and ensure solutions are scalable and future-proof.
- Design Scalable Data Pipelines: Design and implement robust, scalable, and efficient data pipelines for analytics, reporting, and AI applications. Ensure data is organized, accessible, and available for decision-makers and end users.
- Optimize Workflows and Frameworks: Enhance data engineering workflows by creating reusable components, streamlining the development process, and optimizing existing systems for better performance and cost-efficiency.
- Integration with DevOps, MLOps, and DataOps: Work closely with DevOps, MLOps, and DataOps teams to ensure seamless integration of data solutions into operational environments and to automate deployment processes.
- Performance and Cost Evaluation: Regularly monitor, evaluate, and optimize the performance of data systems, databases, and storage solutions. Identify opportunities for improvement, particularly around system performance, data processing times, and overall cost-efficiency.
- Mentorship and Team Leadership: Provide mentorship and guidance to junior data engineers, helping to elevate their technical skills and knowledge. Foster a collaborative, high-performance culture within the data engineering team.
- Drive Innovation: Keep abreast of the latest industry trends, tools, and technologies. Research, test, and integrate cutting-edge solutions to continuously improve the organization's data engineering capabilities.
Skills & Experience:
- Databricks Expertise: Proven experience with Databricks and its related tools for data processing, transformations, and building data pipelines. Familiarity with Apache Spark is a plus.
- Business Intelligence (BI) & ETL: Strong background in BI tools and ETL processes, with hands-on experience designing and implementing scalable and efficient data solutions that support business analytics and reporting.
- Data Architecture: Strong understanding of data architecture principles, including how to design and implement scalable, secure, and efficient data systems.
- Scalable Data Pipelines: Demonstrated ability to design, build, and optimize scalable data pipelines, ensuring data accessibility, quality, and performance at all stages.
- Stakeholder Management & Communication: Exceptional communication skills with the ability to work cross-functionally with business stakeholders and technical teams to define requirements and ensure alignment with business objectives.
- Leadership & Mentorship: Proven leadership experience, particularly in mentoring junior team members, guiding technical decisions, and driving best practices within the team.
- DevOps, MLOps, and DataOps: Experience working within a DevOps, MLOps, or DataOps environment, ensuring seamless integration and deployment of data solutions.
- Problem Solving & Optimization: Strong problem-solving skills with the ability to identify inefficiencies in data systems and pipelines and implement cost-effective solutions.
Preferred Additional Skills:
- Experience with cloud platforms like Azure, AWS, or Google Cloud.
- Familiarity with AI/ML solutions, particularly in integrating data pipelines with machine learning models.
- Knowledge of data storage and database management systems, including relational (SQL) and NoSQL databases and data lakes.
Salary: $60