What are the responsibilities and job description for the Data Architect – Spark Scala position at Virtusa?
8-10 years of experience in Big Data with strong expertise in Spark and Scala
Mandatory Skills: Big Data, primarily Spark and Scala (see the sketch after this list)
Strong knowledge of HDFS, Hive, and Impala, with working knowledge of Unix, Oracle, shell scripting, Autosys, and DevOps.
Good to Have: Experience in data science (ML models), Agile methodology, and banking domain expertise
Strong Communication Skills
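As a rough illustration of the Spark and Scala work this role centres on, here is a minimal batch-job sketch: it reads a Hive table, aggregates it with the DataFrame API, and writes a date-partitioned summary table back to the metastore (where Impala could also query it). The database, table, and column names (finance_db.transactions, account_id, and so on) are hypothetical placeholders, not part of the actual platform.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object TransactionSummaryJob {
  def main(args: Array[String]): Unit = {
    // enableHiveSupport() lets the job resolve tables through the shared Hive metastore.
    val spark = SparkSession.builder()
      .appName("TransactionSummaryJob")
      .enableHiveSupport()
      .getOrCreate()

    // Hypothetical source table of banking transactions.
    val txns = spark.table("finance_db.transactions")

    // Aggregate settled transactions per account per day using the DataFrame API.
    val dailyTotals = txns
      .filter(col("status") === "SETTLED")
      .groupBy(col("account_id"), col("txn_date"))
      .agg(sum("amount").as("total_amount"), count("*").as("txn_count"))

    // Write a date-partitioned summary table for downstream consumers.
    dailyTotals.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("finance_db.daily_transaction_summary")

    spark.stop()
  }
}

A job like this would typically be packaged with sbt and scheduled through Autosys via spark-submit.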
You will be responsible for bringing many fast-changing, moving parts together into a product, which requires excellent communication and collaboration skills.
You will be responsible for identifying and managing risks and for making sound judgments about the quality and speed of deliverables and deployments to production.
You will be a key player in our data transformation to digitize our business.
You will be responsible for leading development on the TTS Big Data platform.
Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr)
API development and use of JSON/XML/Hypermedia data formats
Docker, Kubernetes, Cloudera/Hortonworks/AWS EMR, S3 (see the streaming sketch below)
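To make the platform stack above concrete, here is a minimal Spark Structured Streaming sketch in the same vein: it consumes events from a Kafka topic and lands them on S3 as Parquet for batch jobs to pick up. The broker addresses, topic name, and bucket paths are placeholders assumed for illustration; a real pipeline's topics, formats, and checkpointing policy would differ.

import org.apache.spark.sql.SparkSession

object PaymentsEventStream {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PaymentsEventStream")
      .getOrCreate()

    // Subscribe to a (hypothetical) Kafka topic of payment events.
    val events = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
      .option("subscribe", "payments-events")
      .load()

    // Kafka delivers keys and values as bytes; keep the value as a raw JSON string.
    val payloads = events.selectExpr("CAST(value AS STRING) AS json_payload")

    // Land raw events on S3 (s3a paths are placeholders); the file sink's
    // delivery guarantees rely on the checkpoint location.
    val query = payloads.writeStream
      .format("parquet")
      .option("path", "s3a://example-bucket/raw/payments/")
      .option("checkpointLocation", "s3a://example-bucket/checkpoints/payments/")
      .start()

    query.awaitTermination()
  }
}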