What are the responsibilities and job description for the Java Developer with Big Data (Fulltime) position at TechIntelli Solutions?
Position: Java Developer with Big Data (Spark)
Location: Austin, TX – Onsite from Day 1 (hybrid, 3 days onsite)
Duration: Full-time
Job Description:
We are looking for a Java Developer with strong Big Data experience to join our team in Austin, TX. The ideal candidate has deep expertise in Java, Apache Spark, and the broader Big Data ecosystem, along with a track record of designing and developing scalable data processing solutions.
Key Responsibilities:
• Develop, optimize, and maintain Big Data pipelines using Apache Spark (an illustrative sketch follows this list).
• Write clean, efficient, and well-tested Java code for large-scale data applications.
• Work with distributed computing frameworks to process and analyze large datasets.
• Optimize performance and scalability of data processing applications.
• Collaborate with cross-functional teams including Data Engineers, DevOps, and Product teams.
• Utilize Hadoop ecosystem tools (HDFS, Hive, Kafka, etc.) for data storage and processing.
• Participate in code reviews, architecture discussions, and performance tuning.
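
To give a flavor of the pipeline work described above, here is a minimal sketch of a Spark batch job written in Java. It is illustrative only, not code from the actual role: it assumes Spark 3.x on the classpath, and the class name, HDFS paths, and column names are hypothetical.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class OrdersPipeline {
    public static void main(String[] args) {
        // Entry point for a batch Spark job, typically launched with spark-submit.
        SparkSession spark = SparkSession.builder()
                .appName("orders-pipeline")
                .getOrCreate();

        // Read raw CSV data from HDFS (the path is hypothetical).
        Dataset<Row> orders = spark.read()
                .option("header", "true")
                .option("inferSchema", "true")
                .csv("hdfs:///data/raw/orders");

        // Keep completed orders and aggregate totals per customer.
        Dataset<Row> totals = orders
                .filter(col("status").equalTo("COMPLETED"))
                .groupBy(col("customer_id"))
                .sum("amount");

        // Write the curated result back to HDFS as Parquet (path is hypothetical).
        totals.write().mode("overwrite").parquet("hdfs:///data/curated/customer_totals");

        spark.stop();
    }
}

A job like this would normally be packaged as a JAR and submitted to the cluster with spark-submit, with performance tuning (partitioning, caching, shuffle settings) layered on top.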
Required Skills:
• 5 years of Java development experience with strong object-oriented programming skills.
• 3 years of experience in Big Data technologies, especially Apache Spark.
• Hands-on experience with Hadoop, HDFS, Hive, and Kafka (see the streaming sketch after this list).
• Expertise in writing optimized SQL queries and working with relational/NoSQL databases.
• Experience with distributed systems, microservices, and cloud environments (AWS/GCP preferred).
• Strong understanding of data structures, algorithms, and parallel computing.
• Experience with CI/CD pipelines, containerization (Docker, Kubernetes), and DevOps practices is a plus.
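
As a rough illustration of the Kafka and streaming skills listed above, the following sketch reads a Kafka topic with Spark Structured Streaming in Java. The broker address, topic name, and checkpoint path are hypothetical, and it assumes the spark-sql-kafka connector is on the classpath.

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class ClickstreamReader {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("clickstream-reader")
                .getOrCreate();

        // Subscribe to a Kafka topic (broker list and topic name are hypothetical).
        Dataset<Row> events = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "broker1:9092")
                .option("subscribe", "clickstream")
                .load();

        // Kafka records arrive as binary key/value pairs; cast the value to a string.
        Dataset<Row> values = events.selectExpr("CAST(value AS STRING) AS json");

        // Console sink for demonstration only; a production job would write to a
        // durable sink such as HDFS or Hive.
        StreamingQuery query = values.writeStream()
                .format("console")
                .option("checkpointLocation", "/tmp/checkpoints/clickstream")
                .start();

        query.awaitTermination();
    }
}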