What are the responsibilities and job description for the Cluster Deployment Software Engineer position at Cerebras Systems?
Cerebras Systems builds the world's largest AI chip, 56 times larger than GPUs. Our novel wafer-scale architecture provides the AI compute power of dozens of GPUs on a single chip, with the programming simplicity of a single device. This approach allows Cerebras to deliver industry-leading training and inference speeds and empowers machine learning users to effortlessly run large-scale ML applications, without the hassle of managing hundreds of GPUs or TPUs.
Cerebras' current customers include global corporations across multiple industries, national labs, and top-tier healthcare systems. In January, we announced a multi-year, multi-million-dollar partnership with Mayo Clinic, underscoring our commitment to transforming AI applications across various fields. In August, we launched Cerebras Inference, the fastest Generative AI inference solution in the world, over 10 times faster than GPU-based hyperscale cloud inference services.
About The Role
The cluster deployment team is responsible for developing software that manages software deployments and ongoing maintenance operations that occur in our global data centers. Our customers and internal teams depend on the reliability, availability, and security of Cerebras’ compute clusters. We’re a group of talented engineers excited to propel the company’s mission of delivering novel AI hardware by providing secure, efficient rollouts of data center infrastructure that supports it.
In this role, you will work on the core data center deployment software stack, automating and orchestrating the software configuration processes for a variety of hardware platforms including Cerebras systems, x86 servers, network switches, and other data center appliances. You will design, implement, and maintain tools and automation scripts—using Python, Ansible, Bash, and specialized CLIs—to manage software deployments and ongoing maintenance operations. You’ll also be expected to raise organizational standards by mentoring junior engineers, improving coding practices, and continuously improving our deployment systems. Join a team of talented engineers and take on challenging scaling and infrastructure management problems as the company grows.
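The orchestration work described above often comes down to staged rollouts: push a new build to a small canary batch first, then to progressively larger waves so a bad build never reaches the whole fleet at once. As a flavor of the kind of tooling involved, here is a minimal, hypothetical Python sketch (the `plan_rollout` helper and host names are illustrative, not part of any Cerebras tool):

```python
# Hypothetical sketch: staged rollout planning for a fleet of hosts.
# A small canary batch goes first, then waves that grow by a fixed
# factor until every host is covered.

def plan_rollout(hosts, canary_size=1, growth_factor=2):
    """Split hosts into deployment waves: a canary batch of
    canary_size, then waves that multiply in size by growth_factor."""
    waves, i, size = [], 0, canary_size
    while i < len(hosts):
        waves.append(hosts[i:i + size])
        i += size
        size *= growth_factor
    return waves

fleet = [f"node{n:02d}" for n in range(10)]
for wave in plan_rollout(fleet):
    print(wave)  # canary of 1, then waves of 2, 4, and the final 3
```

In practice the same idea shows up as Ansible's `serial` keyword on a play, where each batch is deployed and health-checked before the next begins.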
Responsibilities
- Develop, document, and maintain automation tools and scripts to deploy and configure software clusters in data centers, improving deployment efficiency and reducing operational overhead.
- Troubleshoot issues related to software deployments, system performance, and server management across our distributed infrastructure.
- Identify opportunities to automate manual processes, improve system reliability, and optimize scalability.
- Collaborate with cross-functional teams for design review and operations, ensuring robust management of servers, storage, networking, power, and cooling equipment.
Requirements
- 3 years of professional software development experience.
- Proficiency in Python and strong experience with automation frameworks (e.g., Ansible) and Bash scripting.
- Experience designing or architecting software systems.
- Professional experience with one or more of the following:
  - Data center networking (configuration, maintenance, troubleshooting, protocols such as BGP).
  - Kubernetes (Helm chart development, troubleshooting containerized services).
- Solid understanding of Linux server administration and troubleshooting.
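A recurring task behind several of the requirements above is keeping deployed hosts in sync with a desired configuration. As an illustrative sketch only (the `find_drift` helper and node names are hypothetical), a drift check in Python might compare the versions each host reports against the versions a rollout intended:

```python
# Hypothetical sketch: detecting configuration drift after a rollout.
# Compares desired component versions against what each host reports
# and flags every mismatch so an operator can target remediation.

def find_drift(desired, actual):
    """Return {host: {component: (want, have)}} for every component
    that is missing or at the wrong version on a host."""
    drift = {}
    for host, have in actual.items():
        mismatches = {
            comp: (want, have.get(comp))
            for comp, want in desired.items()
            if have.get(comp) != want
        }
        if mismatches:
            drift[host] = mismatches
    return drift

desired = {"runtime": "2.4.1", "agent": "1.9.0"}
actual = {
    "node01": {"runtime": "2.4.1", "agent": "1.9.0"},  # in sync
    "node02": {"runtime": "2.3.7", "agent": "1.9.0"},  # stale runtime
}
print(find_drift(desired, actual))
```

Real deployment stacks typically get this from their configuration-management layer (e.g., an Ansible check-mode run), but the shape of the problem is the same: desired state versus observed state.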
About
People who are serious about software make their own hardware. At Cerebras we have built a breakthrough architecture that is unlocking new opportunities for the AI industry. With dozens of model releases and rapid growth, we’ve reached an inflection point in our business. Members of our team tell us there are five main reasons they joined Cerebras:
- Build a breakthrough AI platform beyond the constraints of the GPU.
- Publish and open source their cutting-edge AI research.
- Work on one of the fastest AI supercomputers in the world.
- Enjoy job stability with startup vitality.
- Thrive in a simple, non-corporate work culture that respects individual beliefs.
Apply today and join us at the forefront of groundbreaking advancements in AI!
Cerebras Systems is committed to creating an equal and diverse environment and is proud to be an equal opportunity employer. We celebrate different backgrounds, perspectives, and skills. We believe inclusive teams build better products and companies. We try every day to build a work environment that empowers people to do their best work through continuous learning, growth and support of those around them.
This website or its third-party tools process personal data. For more details, click here to review our CCPA disclosure notice.