What are the responsibilities and job description for the Member of Technical Staff (Open Role) position at Adaptive ML?
About the team
Adaptive ML is helping companies build singular generative AI experiences by democratizing the use of reinforcement learning. We are building the foundational technologies, tools, and products that allow models to learn directly from user interactions and self-improve based on simple guidelines. Our founders come from diverse backgrounds and previously worked together in creating state-of-the-art open-access large language models. We closed a $20M seed with Index & ICONIQ in early 2024 and are live with our first enterprise customers.
Our Technical Staff develops the foundational technology that powers Adaptive ML in alignment with requests and requirements from our Commercial and Product teams. We are committed to building robust, efficient technology and conducting at-scale, impactful research to drive our roadmap and deliver value to our customers.
About the role
This is an open role describing a generic position on our Technical Staff. If any of the below seems like a fit, please apply!
As a Member of Technical Staff, you will contribute to building the foundational technology that powers Adaptive ML, primarily by working on our internal LLM stack, Adaptive Harmony. We believe that generative AI is best approached as a "big science" combining large-scale engineering with rigorous empirical research. As such, we emphasize scalability and systematic, empirical demonstrations in our approach. We are looking for self-driven, business-minded, and ambitious individuals interested in supporting real-world deployments of a highly technical product. As this is an early role, you will have the opportunity to shape our research efforts and product as we grow.
This role is ideally in-person at our Paris or New York office, but we are also open to fully remote work.
Examples of tasks our Technical Team pursues on a daily basis:
- Develop robust software in Rust, interfacing between easy-to-use Python recipes and high-performance, distributed training code running on hundreds of GPUs;
- Profile and iterate on GPU inference kernels in Triton or CUDA, identifying memory bottlenecks and optimizing latency, and decide how to adequately benchmark an inference service;
- Develop and execute an experiment analyzing nuances between DPO and PPO in a fair and systematic way;
- Build data pipelines to support reinforcement learning from noisy and diverse user interactions across varied tasks;
- Experiment with new ways to combine adapters and steer the behavior of language models;
- Build hardware correctness tests to identify and isolate faulty GPUs at scale.
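As one illustration of the flavor of work above, the DPO-versus-PPO comparison rests on the DPO objective, which scores a preference pair directly from policy and reference log-probabilities. A minimal sketch of that loss for a single pair, in plain Python (the function name and scalar interface are illustrative, not part of Adaptive Harmony):

```python
import math

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss for one preference pair.

    Each argument is a summed sequence log-probability under the policy
    or the frozen reference model; `beta` scales the implicit KL penalty.
    """
    # Margin: how much more the policy prefers the chosen response
    # than the reference does, minus the same quantity for the rejected one.
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    # -log(sigmoid(beta * margin)), written with log1p for numerical stability.
    return math.log1p(math.exp(-beta * margin))

# When policy and reference agree exactly, the margin is 0 and the loss is log(2).
baseline = dpo_loss(-10.0, -12.0, -10.0, -12.0)
# Raising the policy's log-probability of the chosen response lowers the loss.
improved = dpo_loss(-9.0, -12.0, -10.0, -12.0)
```

Unlike PPO, this requires no reward model or rollout loop, which is exactly the kind of trade-off a fair, systematic experiment would quantify.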
Your responsibilities
Responsibilities span general duties alongside engineering-specific and research-specific work.
Nearly all members of our Technical Staff hold a position that is a blend of engineering and research.
Your (ideal) background
The background below suggests a few pointers we believe could be relevant. We welcome applications from candidates with diverse backgrounds; do not hesitate to get in touch if you think you could be a great fit, even if the below doesn't fully describe you.
Benefits