
Research Scientist, Interpretability

Anthropic
San Francisco, CA · Full Time
Posted on 12/20/2024
Available before 2/20/2025

About the role:

When you see what modern language models are capable of, do you wonder, "How do these things work? How can we trust them?"
 
The Interpretability team at Anthropic is working to reverse-engineer how trained models work because we believe that a mechanistic understanding is the most robust way to make advanced systems safe. We’re looking for researchers and engineers to join our efforts. 
 
People mean many different things by "interpretability". We're focused on mechanistic interpretability, which aims to discover how neural network parameters map to meaningful algorithms. If you're unfamiliar with this type of research, you might be interested in this introductory essay, or Zoom In: An Introduction to Circuits. (For a broader overview of work in this space, one of our team's alumni maintains a helpful reading list.)
 
Some useful analogies might be to think of us as trying to do "biology" or "neuroscience" of neural networks, or as treating neural networks as binary computer programs we're trying to "reverse engineer".
 
A few places to learn more about our work and team at a high level are this introduction to Interpretability from our research lead, Chris Olah; a discussion of our work on the Hard Fork podcast produced by the New York Times; and this blog post (and accompanying video) sharing more about some of the engineering challenges we had to solve to get these results.

Some of our team's notable publications include A Mathematical Framework for Transformer Circuits, In-context Learning and Induction Heads, and Toy Models of Superposition. This work builds on ideas from team members' work prior to Anthropic, such as the original circuits thread, Multimodal Neurons, Activation Atlases, and Building Blocks.

We aim to create a solid foundation for mechanistically understanding neural networks and making them safe (see our vision post). In the short term, we have focused on resolving the issue of "superposition" (see Toy Models of Superposition; Superposition, Memorization, and Double Descent; and our May 2023 update), which causes the computational units of models, like neurons and attention heads, to be individually uninterpretable, and on finding ways to decompose models into more interpretable components. Our recent work finding millions of features in Sonnet, one of our production language models, represents progress in this direction and a stepping stone towards our overall goal of mechanistically understanding neural networks.
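
For readers less familiar with this direction, a minimal sketch may help convey the idea. The snippet below is purely illustrative and is not Anthropic's code or exact method; it shows the general dictionary-learning recipe behind such decompositions: train a sparse autoencoder to reconstruct a model's internal activations from an overcomplete set of sparsely active features. All names and dimensions are hypothetical, and it assumes PyTorch.

    # Illustrative sketch only -- not Anthropic's actual code or method.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        """Decomposes d_model-dim activations into n_features >> d_model
        sparse features, each a candidate interpretable unit."""
        def __init__(self, d_model: int, n_features: int):
            super().__init__()
            self.encoder = nn.Linear(d_model, n_features)
            self.decoder = nn.Linear(n_features, d_model)

        def forward(self, activations: torch.Tensor):
            features = torch.relu(self.encoder(activations))  # mostly zeros
            reconstruction = self.decoder(features)
            return reconstruction, features

    def sae_loss(acts, recon, features, l1_coeff=1e-3):
        # Reconstruction error keeps the features faithful to the model;
        # the L1 penalty pushes most feature activations toward zero.
        mse = ((recon - acts) ** 2).mean()
        return mse + l1_coeff * features.abs().mean()

In a sketch like this, each column of the decoder weight matrix is a candidate "feature" direction, and the inputs that most strongly activate it can then be inspected by hand for an interpretable meaning.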

We often collaborate with teams across Anthropic, such as Alignment Science and Societal Impacts, to use our work to make Anthropic’s models safer. We also have an Interpretability Architectures project that involves collaborating with the Pretraining team. If you would be especially excited to work on a project at the intersection of Interpretability and another team, feel free to note the specific team(s) you’d be interested in collaborating with.

Responsibilities:

  • Develop methods for understanding LLMs by reverse engineering algorithms learned in their weights
  • Design and run robust experiments, both quickly in toy scenarios and at scale in large models
  • Build infrastructure for running experiments and visualizing results
  • Work with colleagues to communicate results internally and publicly

You may be a good fit if you:

  • Have a strong track record of scientific research (in any field) and have done some work on interpretability
  • Enjoy team science – working collaboratively to make big discoveries
  • Are comfortable with messy experimental science. We're inventing the field as we work, and the first textbook is years away
  • View research and engineering as two sides of the same coin. Every team member writes code, designs and runs experiments, and interprets results
  • Can clearly articulate and discuss the motivations behind your work, and teach us what you've learned. You like writing up and communicating your results, even when they're null
Familiarity with Python is required for this role.
