Software Engineer, Large Scale Pre-Training Performance

DeepMind
Mountain View, CA · Full Time
POSTED ON 3/3/2025
AVAILABLE BEFORE 4/28/2025

At Google DeepMind, we value diversity of experience, knowledge, backgrounds and perspectives and harness these qualities to create extraordinary impact. We are committed to equal employment opportunity regardless of sex, race, religion or belief, ethnic or national origin, disability, age, citizenship, marital, domestic or civil partnership status, sexual orientation, gender identity, pregnancy, or related condition (including breastfeeding) or any other basis as protected by applicable law. If you have a disability or additional need that requires accommodation, please do not hesitate to let us know.

 
Snapshot

We are seeking a software engineer to define, drive, and critically contribute to the next generation of state-of-the-art ML models on TPUs. As part of the Pre-Training team, you will co-design the model and implement critical components across model architecture, ML frameworks, custom kernels, and the underlying platform to deliver frontier models with maximum efficiency.
 
About Us
 
Artificial Intelligence could be one of humanity’s most useful inventions. At Google DeepMind, we’re a team of scientists, engineers, machine learning experts and more, working together to advance the state of the art in artificial intelligence. We use our technologies for widespread public benefit and scientific discovery, and collaborate with others on critical challenges, ensuring safety and ethics are the highest priority.
 
The Role

We’re looking for a Software Engineer to redefine efficient training of frontier LLMs at massive scale. This role offers an opportunity to influence the design of frontier LLMs and to drive the effort to ensure efficient training and inference.

Key responsibilities:

  • Taking responsibility for Pre-Training efficiency, optimizing the performance of the latest models on Google’s fleet of hardware accelerators throughout the entire LLM research, training, and deployment lifecycle.

  • Guiding model design to ensure inference efficiency.

  • Substantially improving the performance of LLMs on hardware accelerators by optimizing at all levels, including developing custom kernels when necessary.

  • Collaborating with the compiler, framework, and platform teams to ensure efficient training at the industry’s largest scale.

  • Profiling models to identify performance bottlenecks and opportunities for optimization (a minimal sketch of this workflow follows this list).

  • Developing low-level custom kernels to extract maximum performance from the most critical operators.

  • Collaborating with research teams by enabling new critical operators in advance of their availability in frameworks and compilers.
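As a rough illustration of the profiling workflow named above, the sketch below captures a device trace for a jitted step using JAX's built-in profiler. The toy model, shapes, and trace directory are assumptions for the example, not details from this posting.

```python
import jax
import jax.numpy as jnp

# Toy stand-in for a training step; a real pre-training step would be
# a full model forward/backward pass. (Illustrative assumption.)
@jax.jit
def step(x, w):
    return jnp.tanh(x @ w)

x = jnp.ones((4096, 4096), dtype=jnp.bfloat16)
w = jnp.ones((4096, 4096), dtype=jnp.bfloat16)

# Capture a trace (viewable in TensorBoard or Perfetto) showing which
# ops dominate device time, i.e. where the bottlenecks are.
with jax.profiler.trace("/tmp/jax-trace"):
    step(x, w).block_until_ready()  # block so the trace covers the work
```

The resulting trace is the starting point of the optimization loop this role describes: find the dominant op, improve it (layout, fusion, or a custom kernel), and re-profile.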

About You

You're an engineer looking to redefine efficient training of frontier LLMs at massive scale, and you have:

  • A proven track record of critical contributions to the distributed training of LLMs at the 1e25-FLOP scale on modern GPU/TPU clusters

  • Experience programming hardware accelerators (GPUs/TPUs) via ML frameworks (e.g. JAX, PyTorch) and low-level programming models (e.g. CUDA, OpenCL)

  • Experience leveraging custom kernels and compiler infrastructure to improve performance on hardware (see the kernel sketch after this list)

  • Experience with Python and neural network training (publications, open-source projects, relevant work experience, etc.)
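For a flavor of the custom-kernel work mentioned above, here is a minimal sketch using Pallas, JAX's kernel-authoring extension. It is a deliberately trivial element-wise kernel, a toy under illustrative assumptions rather than anything representative of the operators this role would optimize.

```python
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl

# A trivial element-wise kernel: inputs and outputs are Refs that are
# read and written explicitly, which is the level at which memory
# movement and tiling get tuned on real operators.
def add_kernel(x_ref, y_ref, o_ref):
    o_ref[...] = x_ref[...] + y_ref[...]

def add(x, y):
    return pl.pallas_call(
        add_kernel,
        out_shape=jax.ShapeDtypeStruct(x.shape, x.dtype),
        interpret=True,  # interpreter mode so this toy also runs on CPU
    )(x, y)

x = jnp.arange(8, dtype=jnp.float32)
y = jnp.ones(8, dtype=jnp.float32)
print(add(x, y))  # [1. 2. 3. 4. 5. 6. 7. 8.]
```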

The US base salary range for this full-time position is between $235,000 and $350,000, plus bonus, equity, and benefits. Your recruiter can share more about the specific salary range for your targeted location during the hiring process.

Application deadline: March 12, 2025

Note: In the event your application is successful and an offer of employment is made to you, any offer of employment will be conditional on the results of a background check, performed by a third party acting on our behalf. For more information on how we handle your data, please see our Applicant and Candidate Privacy Policy.
