What are the responsibilities and job description for the Sr. Software Engineer, Data Infrastructure position at CoreWeave?
About the Team
The Data Engineering team builds foundational datasets and analytics services that enable BI and data science across CoreWeave. We seek to democratize insights and foster a culture where data-driven decision-making thrives at every level.
About the Role
We’re looking for a seasoned Software Engineer with an SRE’s skillset to build and scale foundational data storage and processing infrastructure that will power BI and data science initiatives across CoreWeave. This engineer will take ownership of our data lake and clustered computing services, creating a robust ecosystem for batch processing and data science applications. You’ll be instrumental in designing, implementing, and maintaining systems that enable efficient and secure use of big data technologies, developing tools and frameworks to enhance usability across the organization.
Excellent SREs interested in software engineering or data platform work are encouraged to apply.
Responsibilities
- Architect, deploy, and scale data storage and processing infrastructure to support analytics and data science workloads.
- Manage and maintain data lake and clustered computing services, ensuring reliability, security, and scalability.
- Build and optimize frameworks and tools to simplify big data technology usage.
- Collaborate with cross-functional teams to align data infrastructure with business goals and requirements.
- Ensure data governance and security best practices across all platforms.
- Monitor, troubleshoot, and optimize system performance and resource utilization.
Qualifications
- You thrive in a fast-paced, complex work environment and love tackling hard problems.
- 5 years of experience with Kubernetes and Helm, with a deep understanding of container orchestration.
- 7 years of programming experience in C, C#, Java, or Python.
- 5 years of experience scripting in Python or Bash for automation and tooling.
- Strong understanding of data storage technologies and distributed computing.
- Proficiency in security best practices and managing access in complex systems.
- Hands-on experience administering and optimizing clustered computing technologies on Kubernetes (such as Spark, Ray, and Kafka) is preferred.
Our compensation reflects the cost of labor across several US geographic markets. The base pay for this position ranges from $175,000-$205,000. Pay is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.