In this contingent resource assignment, you may: consult on complex initiatives with broad impact and large-scale planning for Software Engineering; and review and analyze complex, multi-faceted, larger-scale, or longer-term Software Engineering challenges that require in-depth evaluation of multiple factors, including intangible or unprecedented ones.
Contribute to the resolution of complex, multi-faceted situations requiring a solid understanding of the function's policies, procedures, and compliance requirements to meet deliverables.
Strategically collaborate and consult with client personnel.
Required Qualifications:
5 years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work or consulting experience, training, military experience, or education.
Design and implement an automated Spark-based framework to facilitate data ingestion, transformation, and consumption.
Implement security protocols such as Kerberos authentication, encryption of data at rest, and data authorization mechanisms such as role-based access control using Apache Ranger.
Design and develop an automated testing framework to perform data validation.
Enhance existing Spark-based frameworks to overcome tool limitations and/or add features based on consumer expectations.
Design and build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, MongoDB, Kafka, and object storage architecture.
Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure.
Collaborate with application partners, Architects, Data Analysts and Modelers to build scalable and performant data solutions.
Effectively work in a hybrid environment where legacy ETL and Data Warehouse applications coexist with new big-data applications.
Support ongoing data management efforts for Development, QA, and Production environments.
Provide tool support and help consumers troubleshoot pipeline issues.
Utilize a thorough understanding of available technology, tools, and existing designs.
Leverage knowledge of industry trends to build best-in-class technology that provides a competitive advantage.
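One of the duties above is building an automated testing framework to perform data validation. As a minimal, hedged sketch of the rule structure such a framework might use (the function name, rule set, and sample rows are illustrative assumptions, not from this posting; in practice the checks would run over Spark DataFrames, with plain Python dicts standing in for rows here):

```python
# Hypothetical rule-driven data-validation sketch. In a Spark-based
# framework these checks would typically be expressed against DataFrame
# columns; plain dicts stand in for rows to keep the sketch self-contained.

def validate_rows(rows, required_fields, non_null_fields):
    """Return a list of (row_index, error_message) for every failed check."""
    errors = []
    for i, row in enumerate(rows):
        # Schema check: every required field must be present.
        for field in required_fields:
            if field not in row:
                errors.append((i, f"missing field: {field}"))
        # Completeness check: listed fields must not be null.
        for field in non_null_fields:
            if row.get(field) is None:
                errors.append((i, f"null value in field: {field}"))
    return errors

rows = [
    {"id": 1, "amount": 10.0},
    {"id": 2, "amount": None},   # fails the non-null check
    {"amount": 3.5},             # fails the required-field check
]
print(validate_rows(rows, required_fields=["id"], non_null_fields=["amount"]))
# → [(1, 'null value in field: amount'), (2, 'missing field: id')]
```

Keeping rules as data (lists of field names) rather than hard-coded conditions is what lets such a framework be extended per pipeline without code changes.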
Required Qualifications:
5 years of software engineering experience
5 years of experience delivering complex, enterprise-wide information technology solutions
5 years of experience delivering ETL, data warehouse, and data analytics capabilities on big-data architectures such as Hadoop
5 years of Apache Spark design and development experience using Scala, Java, or Python, with DataFrames or Resilient Distributed Datasets (RDDs) and Parquet or ORC file formats
6 years of ETL (Extract, Transform, Load) Programming experience
2 years of Kafka or equivalent experience
2 years of experience with NoSQL databases such as Couchbase or MongoDB
5 years of experience working with complex SQL queries and performance tuning
Desired Qualifications:
3 years of Agile experience
2 years of reporting experience, analytics experience or a combination of both
2 years of operational risk or credit risk or compliance domain experience
2 years of experience integrating with RESTful APIs