What are the responsibilities and job description for the Research Engineer / Research Scientist, Alignment position at OpenAI?
About the Team
The Alignment team at OpenAI is dedicated to ensuring that our AI systems are safe, trustworthy, and consistently aligned with human values, even as they scale in complexity and capability. Our work is at the cutting edge of AI research, focusing on developing methodologies that enable AI to robustly follow human intent across a wide range of scenarios, including those that are adversarial or high-stakes. We concentrate on the most pressing challenges, ensuring our work addresses areas where AI could have the most significant consequences. By focusing on risks that we can quantify and where our efforts can make a tangible difference, we aim to ensure that our models are ready for the complex, real-world environments in which they will be deployed.
The two pillars of our approach are: harnessing improved capabilities for alignment, ensuring that our alignment techniques improve, rather than break, as capabilities grow; and centering humans, by developing mechanisms and interfaces that enable people both to express their intent and to effectively supervise and control AI systems, even in highly complex situations.
About the Role
As a Research Engineer / Research Scientist on the Alignment team, you will be at the forefront of ensuring that our AI systems consistently follow human intent, even in complex and unpredictable scenarios. Your role will involve designing and implementing scalable solutions that keep AI systems aligned as their capabilities grow and that integrate human oversight into AI decision-making.
This role is based in San Francisco, CA. We use a hybrid work model of 3 days in the office per week and offer relocation assistance to new employees.
In this role, you will:
We are seeking research engineers and research scientists to help design and implement experiments for alignment research. Responsibilities may include:
Develop and evaluate alignment capabilities that are subjective, context-dependent, and hard to measure.
Design evaluations to reliably measure risks and alignment with human intent and values.
Build tools and evaluations to study and test model robustness in different situations.
Design experiments to characterize scaling laws for alignment as a function of compute, data, context and action lengths, and the resources available to adversaries.
Design and evaluate new Human-AI-interaction paradigms and scalable oversight methods that redefine how humans interact with, understand, and supervise our models.
Train models to be calibrated on correctness and risk.
Design novel approaches for using AI in alignment research.
You might thrive in this role if you:
Are a team player, willing to take on a variety of tasks that move the team forward.
Have a PhD or equivalent research experience in computer science, computational science, data science, cognitive science, or a similar field.
Have strong engineering skills, particularly in designing and optimizing large-scale machine learning systems (e.g., in PyTorch).
Have a deep understanding of the science behind alignment algorithms and techniques.
Can develop data visualization or data collection interfaces (e.g., TypeScript, Python).
Enjoy fast-paced, collaborative, and cutting-edge research environments.
Want to focus on developing AI models that are trustworthy, safe, and reliable, especially in high-stakes scenarios.