What are the responsibilities and job description for the Big Data Engineer position at Donato Technologies, Inc.?
Donato Technologies, established in 2012, excels as a comprehensive IT service provider renowned for delivering an exceptional staffing experience and prioritizing the needs of both clients and employees. We specialize in staffing, consulting, software development, and training, catering to small and medium-sized enterprises. While our core strength lies in Information Technology, we also deeply understand and address the unique business requirements of our clients, leveraging IT to effectively meet those needs. Our commitment is to provide high-quality, customized solutions using the optimal combination of technologies.
Job Summary –
- Hands-on Java Spark SME with experience on multiple projects involving data platforms comprising Hadoop, Teradata Data Warehouse, Ab Initio, Informatica, Java Spark (DPL), SSIS, AWS Lake Formation (S3), and Snowflake
- Ability to design, build, and unit test applications on the Spark framework in Java (a brief sketch follows this list)
- Build Java Spark-based applications for both batch and streaming requirements, which requires in-depth knowledge of the Hadoop ecosystem and NoSQL databases as well.
- Develop and execute data pipeline testing processes and validate business rules and policies
- Optimize performance of the built Spark applications in Hadoop using configurations around SparkContext, Spark SQL, DataFrames, and Pair RDDs.
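To illustrate the kind of work described above, here is a minimal sketch of a batch Java Spark job: it reads a raw dataset, applies a simple business-rule filter, and writes curated output, with a couple of tuning settings of the sort mentioned in the last bullet. The bucket path, column names, and configuration values are hypothetical examples, not details from this posting.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

public class BatchPipelineSketch {
    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("batch-pipeline-sketch")
                // Example tuning knobs; real values depend on cluster size and data volume.
                .config("spark.sql.shuffle.partitions", "200")
                .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
                .getOrCreate();

        // Read a batch input (hypothetical S3 path and schema).
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .csv("s3a://example-bucket/raw/orders/");

        // Apply a simple business rule: keep only completed orders.
        Dataset<Row> completed = orders.filter(col("status").equalTo("COMPLETED"));

        // Write the validated output back to the lake in Parquet.
        completed.write().mode("overwrite")
                .parquet("s3a://example-bucket/curated/orders_completed/");

        spark.stop();
    }
}
```

The same structure extends to streaming requirements by swapping the batch read for a structured streaming source, and to pipeline testing by running the transformation logic against small fixture datasets in unit tests.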
Please share your resume at resumes@donatotech.net, or you can reach our team at 469 548 7454.