What are the responsibilities and job description for the Team Manager, Alignment Finetuning position at Anthropic?
About the role:
You want to enable and support a team that is developing and implementing new alignment techniques for language models, aimed at improving model values, honesty, and character. As a Team Manager on the Alignment Finetuning team, you'll partner with a Technical Research Lead to drive the execution of critical alignment initiatives, support the growth and development of your team members, and ensure smooth collaboration across Anthropic's research organization.
Our team works on implementing and scaling techniques such as synthetic data generation and training models to assist in model training. You'll help create an environment that enables technical excellence while maintaining focus on our core mission of making AI systems more reliable and aligned with human values.
Note: This role is expected to be based in San Francisco, with at least 3 days per week in office.
Representative projects:
- Partner with the research lead to develop and execute the team’s roadmap
- Build and improve processes for evaluating the effectiveness of the team’s alignment interventions
- Coordinate cross-functional collaboration between Alignment Finetuning and other teams like T&S, Applied Finetuning, and Alignment Science
- Support the development and growth of researchers and engineers working on novel alignment techniques
- Drive recruiting efforts to grow the team while maintaining high standards
You may be a good fit if you:
- Have 5 years of technical experience in software engineering, ML/AI, or a related field
- Have 2 years of experience managing technical teams
- Are an excellent listener and communicator
- Take ownership of your team's overall output and performance
- Have experience supporting and enabling research teams
- Build strong relationships across various stakeholder groups
- Have a demonstrated ability to understand and support technical work
- Care deeply about AI safety and alignment
Strong candidates may also:
- Have experience with ML/AI projects and understanding of fundamental concepts
- Have background working with research organizations
- Have experience managing research or exploratory projects
- Have experience with org design and process improvement
- Have experience recruiting for and managing teams through periods of growth
- Have familiarity with reinforcement learning and language models