What are the responsibilities and job description for the Data Architect/Data Steward position at VRIO Digital?
The ideal professional for this Cloud Architect role will:
- Have a passion for design, technology, analysis, collaboration, agility, and planning, along with a drive for continuous improvement and innovation.
- Exhibit expertise in managing high-volume data projects that leverage cloud platforms, data warehouse reporting, BI tools, and relational database development.
- Research, identify, and internally champion enabling data management technologies based on business and end-user requirements.
- Seek ways to apply new technology to business processes with a focus on modernizing the approach to data management.
- Consult with technical subject matter experts and develop alternative technical solutions. Advise on options, risks, costs versus benefits, and impact on other business processes and system priorities.
- Demonstrate strong technical leadership skills and the ability to mentor others in related technologies.
Qualifications
- Bachelor's degree in a computer-related field or equivalent professional experience is required.
- A master's degree in computer science, information systems, or a related discipline (or equivalent, extensive related project experience) is preferred.
- 10 years of hands-on software development experience building data platforms with tools and technologies such as Hadoop, Cloudera, Spark, Kafka, relational (SQL) and NoSQL databases, and data pipeline/workflow management tools.
- 6 years of experience working with cloud platforms (at least two of AWS, Azure, and GCP).
- Experience with multi-cloud data platform migrations and hands-on work with AWS, Azure, and/or GCP.
- Experience in Data & Analytics projects is a must.
- Data modeling experience, both relational and dimensional, aligned with consumption requirements (reporting, dashboarding, and analytics).
- Thorough understanding and application of AWS services for cloud data platform and data lake implementation: S3-based data lakes, Amazon EMR, AWS Glue, Amazon Redshift, AWS Lambda, and AWS Step Functions, with file formats such as Parquet, Avro, and Iceberg (see the PySpark sketch after this list).
- Must know the key tenets of architecting and designing solutions on the AWS and Azure clouds.
- Expertise and implementation experience in data-specific areas such as AWS data lakes, data lakehouse architecture, Azure Synapse Analytics, and Azure SQL Data Warehouse.
- Apply technical knowledge to architect and design solutions that meet business and IT needs, create Data & Analytics roadmaps, drive POCs and MVPs, and ensure the long-term technical viability of new deployments, infusing key Data & Analytics technologies where applicable.
- Be the Voice of the Customer to share insights and best practices, connect with the Engineering team to remove key blockers, and drive migration solutions and implementations.
- Familiarity with tools such as dbt and Airflow, and with data test automation (a minimal Airflow-plus-dbt sketch follows this list).
- Must have experience with Python, PySpark, or Scala in Big Data environments.
- Strong skills in authoring SQL queries against Big Data engines such as Hive, Impala, and Presto (illustrated in the SQL sketch after this list).
- Experience working with and extracting value from large, disconnected, and/or unstructured datasets.
- Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency, and workload management.
- Advanced SQL knowledge, including query authoring, plus experience with relational databases and working familiarity with a variety of other database systems.
- Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement (see the reconciliation sketch below).
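
To give a concrete sense of the PySpark and S3 data lake work referenced above, here is a minimal sketch of a curation job. The bucket paths, table layout, and column names (`order_ts`, `is_valid`) are hypothetical, and it assumes a Spark cluster (for example, Amazon EMR) already configured with S3 access.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-curation").getOrCreate()

# Read raw Parquet landed in the data lake (path and columns are hypothetical).
raw = spark.read.parquet("s3://example-datalake/raw/orders/")

# Light curation: derive a date column for partitioning and drop records
# flagged as invalid upstream.
curated = (
    raw.withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("is_valid"))
)

# Write back to the curated zone, partitioned by date for downstream
# consumers such as Redshift Spectrum or Athena.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-datalake/curated/orders/"))
```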
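The Airflow and dbt familiarity called for above might look like the following minimal DAG sketch, which orchestrates a dbt build followed by its data tests. The DAG id, schedule, and dbt project path (`/opt/dbt/project`) are placeholders, and it assumes Airflow 2.x with dbt installed on the worker.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# DAG id, schedule, and project path are placeholders for illustration.
with DAG(
    dag_id="daily_dbt_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Run the dbt models, then the associated data tests, as two explicit
    # steps so a test failure is visible separately from a build failure.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/project && dbt run",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/project && dbt test",
    )

    dbt_run >> dbt_test
```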
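For the SQL-on-Big-Data skills listed above, here is a small sketch that runs an ANSI-style aggregation through Spark SQL against a Hive metastore table; essentially the same query would run on Hive, Impala, or Presto. The `sales.orders` table and its columns are hypothetical.

```python
from pyspark.sql import SparkSession

# enableHiveSupport lets Spark resolve tables from the Hive metastore;
# the database and table names here are hypothetical.
spark = (SparkSession.builder
         .appName("revenue-check")
         .enableHiveSupport()
         .getOrCreate())

# Compare daily order counts against revenue to spot days where the two
# diverge -- the kind of consumption query a dashboard would issue.
daily = spark.sql("""
    SELECT order_date,
           COUNT(*)         AS order_count,
           SUM(order_total) AS revenue
    FROM sales.orders
    WHERE order_date >= DATE '2024-01-01'
    GROUP BY order_date
    ORDER BY order_date
""")

daily.show(truncate=False)
```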
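Finally, the root cause analysis bullet above is the kind of task a simple reconciliation check supports: compare record counts between pipeline zones to localize where data is being dropped. The paths and the 5% tolerance are assumptions for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-reconciliation").getOrCreate()

# Paths are hypothetical; count records in the raw and curated zones.
raw_count = spark.read.parquet("s3://example-datalake/raw/orders/").count()
curated_count = spark.read.parquet("s3://example-datalake/curated/orders/").count()

dropped = raw_count - curated_count
print(f"raw={raw_count} curated={curated_count} dropped={dropped}")

# A large gap points at the curation filters as the place to start a root
# cause analysis, rather than at the upstream source systems.
if raw_count and dropped / raw_count > 0.05:
    raise ValueError(f"More than 5% of raw records dropped ({dropped}/{raw_count})")
```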