What are the responsibilities and job description for the Research Engineer / Research Scientist (Inference) position at Inflection AI?
Research Engineer / Research Scientist (Inference)
Inflection AI is a public benefit corporation leveraging our world-class large language model to build the first AI platform focused on the needs of the enterprise.
Who we are:
Inflection AI was re-founded in March 2024, and our leadership team has assembled a group of kind, innovative, and collaborative individuals focused on building enterprise AI solutions. We are passionate about what we are building, enjoy working together, and strive to hire people with diverse backgrounds and experience.
Our first product, Pi, is an empathetic, conversational chatbot. Pi is a public instance built on our 350B frontier model and our sophisticated fine-tuning (10M examples), inference, and orchestration platform. We are now applying this same approach to building new systems that directly support the needs of enterprise customers.
Want to work with us? Have questions? Learn more below.
About The Role
As a Member of Technical Staff, Research Engineer on our Inference team, you will be essential to the real-time performance and reliability of our AI systems. Your role is pivotal in optimizing inference pipelines, reducing latency, and translating cutting-edge research into enterprise-ready applications.
This is a good role for you if you:
- Have extensive experience deploying and optimizing large-scale language models for real-time inference.
- Are skilled with performance-enhancing tools and frameworks such as ONNX, TensorRT, or TVM.
- Thrive in fast-paced environments where real-world application performance is paramount.
- Understand the intricate trade-offs between model accuracy, latency, and scalability.
- Are passionate about delivering robust, efficient, and scalable inference solutions that drive our enterprise success.
Responsibilities include:
- Optimizing inference pipelines to maximize model performance and minimize latency in production environments.
- Collaborating with ML researchers and engineers to deploy inference solutions that meet rigorous enterprise standards.
- Integrating and refining tools to streamline the transition from research prototypes to production-ready systems.
- Continuously monitoring and tuning system performance with real-world data to drive improvements.
- Pioneering innovations in model inference that are critical to the success of our AI platform.
Salary: $175,000 - $350,000