What are the responsibilities and job description for the Spark Developer position at Photon?
Who are we?
For the past 20 years, we have powered many Digital Experiences for the Fortune 500. Since 1999, we have grown from a handful of people to more than 4,000 team members across the globe, all engaged in various Digital Modernization initiatives. Our current focus and innovation in Digital Hyper Expansion™ offers nearly limitless opportunities for career growth. For a brief one-minute video about us, check out https://youtu.be/uJWBWQZEA6o.
Photon is an equal opportunity employer and values diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status. We bring out the best in each other.
Position: Sr. Scala/Spark Developer
Location: Tampa, FL (onsite)
Key Responsibilities:
- Develop, test, and deploy data processing applications using Apache Spark and Scala (a representative job is sketched after this list).
- Optimize and tune Spark applications for better performance on large-scale data sets.
- Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions.
- Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions.
- Design and implement high-performance data processing and analytics solutions.
- Ensure data integrity, accuracy, and security across all processing tasks.
- Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies.
- Implement version control and CI/CD pipelines for Spark applications.
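To give candidates a concrete feel for the day-to-day work, here is a minimal sketch of the kind of Spark/Scala ETL job described above: it reads from a Hive table, applies basic integrity checks, aggregates, and writes partitioned Parquet to HDFS. All table names, paths, and columns are hypothetical illustrations, not actual project details.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal ETL sketch: extract from Hive, transform, load to HDFS.
// Every table, path, and column name below is a hypothetical example.
object DailyOrdersEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-orders-etl")
      .enableHiveSupport() // assumes a Cloudera/Hive metastore is available
      .getOrCreate()

    // Extract: read a raw orders table registered in the Hive metastore.
    val orders = spark.table("sales.orders_raw")

    // Transform: enforce basic data integrity, then aggregate per day and region.
    val daily = orders
      .filter(col("order_id").isNotNull && col("amount") > 0)
      .groupBy(col("order_date"), col("region"))
      .agg(sum("amount").as("total_amount"), count(lit(1)).as("order_count"))

    // Load: write partitioned Parquet back to HDFS.
    daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///data/curated/daily_orders")

    spark.stop()
  }
}
```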
Required Skills & Experience:
- Minimum 8 years of experience in application development.
- Strong hands-on experience with Apache Spark, Scala, and Spark SQL for distributed data processing.
- Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
- Familiarity with other Big Data technologies, including Apache Flume, Oozie, and NiFi.
- Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data.
- Experience with SQL and NoSQL data stores such as Hive, PostgreSQL, and HBase.
- Knowledge of data warehousing concepts, dimensional modeling, and data lakes.
- Ability to troubleshoot and optimize Spark and Cloudera platform performance (see the tuning sketch after this list).
- Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).
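Performance tuning comes up throughout this role (see the responsibility and skill items above). As a hedged illustration, the sketch below shows two routine Spark optimizations: broadcasting a small dimension table to avoid shuffling the large fact table, and repartitioning on the aggregation key ahead of a wide shuffle. Table and column names are assumptions for illustration only.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Two common Spark tuning techniques; all table/column names are hypothetical.
object JoinTuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("join-tuning-sketch")
      .enableHiveSupport()
      .getOrCreate()

    val facts = spark.table("sales.orders_raw") // large fact table
    val dims  = spark.table("sales.region_dim") // small dimension table

    // Broadcast the small side so the large table is not shuffled for the join.
    val joined = facts.join(broadcast(dims), Seq("region_id"))

    // Repartition on the grouping key so the shuffle produces evenly
    // sized partitions before the wide aggregation.
    val byRegion = joined
      .repartition(col("region_name"))
      .groupBy("region_name")
      .agg(avg("amount").as("avg_amount"))

    byRegion.show(20, truncate = false)
    spark.stop()
  }
}
```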