What are the responsibilities and job description for the Research Scientist (Test Time Compute) position at Naptha AI?
Job Description
We are seeking an exceptional AI Research Scientist to join Naptha AI at the ground floor, focusing on advancing the state of the art in test-time compute optimization for large language models. In this role, you will research and develop novel approaches to improve inference efficiency, reduce computational requirements, and enhance model performance at deployment. Working directly with our technical team, you will help shape the fundamental architecture of our inference optimization platform.
This role is critical in solving core technical challenges around model compression, efficient inference strategies, and deployment optimization. You will work at the intersection of machine learning, systems optimization, and hardware acceleration to develop practical solutions for real-world model deployment and scaling.
Core Responsibilities
Research & Development
Design and implement novel architectures for efficient model inference
Develop frameworks for model compression and quantization
Research approaches to optimize test-time computation across different hardware
Create efficient protocols for distributed inference and resource management
Implement and test new ideas through rapid prototyping
Stay at the forefront of developments in ML efficiency and inference optimization
Identify and solve key technical challenges in model deployment
Develop novel approaches to model compression and acceleration
Bridge theoretical research with practical implementation
Contribute to the academic community through publications and open source
Help design and implement efficient inference pipelines
Develop scalable solutions for model deployment and serving
Create tools and frameworks for performance monitoring and optimization
Collaborate with engineering team on implementation
Build proofs of concept for new optimization techniques
Work closely with engineering team to implement research findings
Mentor team members on advanced optimization techniques
Contribute to technical strategy and roadmap
Collaborate with external research partners when appropriate
Help evaluate and integrate external research developments
Requirements
Strong background in machine learning and systems optimization
Deep understanding of model compression and efficient inference techniques
Hands-on experience with modern ML frameworks and deployment tools
Experience with ML infrastructure and hardware acceleration
Track record of implementing efficient ML systems
Excellent programming skills (Python required; C++/CUDA a plus)
Strong analytical and problem-solving abilities
PhD in Machine Learning, Computer Science, or Mathematics (or equivalent experience) is a plus
Published research in relevant fields is a plus
Technical Skills
Python programming and ML frameworks (PyTorch, TensorFlow)
Experience with model optimization techniques (quantization, pruning, distillation)
MLOps and efficient model deployment
Hardware acceleration (GPU, TPU optimization)
Version control and collaborative development
Experience with large language models
Interview Process
Initial technical interview
Research presentation
System design discussion
Technical challenge
Team collaboration interview
Benefits
Competitive salary with significant equity stake
Remote-first work environment
Full medical, dental, and vision coverage
Flexible PTO policy
Learning and development budget
Conference and research publication support
Home office setup allowance
About You
Must be comfortable with the ambiguity and rapid iteration typical of a pre-seed startup
Strong bias for practical implementation of research ideas
Passion for advancing the field of efficient ML systems
Interest in open source contribution and community engagement
Naptha AI is committed to building a diverse and inclusive workplace. We are an equal opportunity employer and welcome applications from all qualified candidates regardless of background.