What are the responsibilities and job description for the Senior Distributed Training Research Engineer (AI Labs) position at Krutrim?
Senior Distributed Training Research Engineer (Frontier LLMs)
Location: Palo Alto (CA, US)
Type of Job: Full-time
About Krutrim:
Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI computing infrastructure, an AI Cloud, multilingual and multimodal foundation models, and AI-powered end applications. We are India’s first AI unicorn and built the country’s first foundation model.
Our AI stack empowers consumers, startups, enterprises, and scientists across India and the world to build their own AI applications and models. While we build foundation models across text, voice, and vision for our focus markets, we are also developing AI training and inference platforms that enable AI research and development across industry domains. The platforms Krutrim is building have the potential to impact millions of lives in India, across income and education strata, and across languages.
The team at Krutrim represents a convergence of talent across AI research, applied AI, cloud engineering, and semiconductor design. Our teams operate from three locations: Bangalore, Singapore, and San Francisco.
Job Description:
We are seeking an experienced Senior Generative AI Model Research Engineer to efficiently train frontier, multimodal foundation models. In this critical, hands-on role, you will develop scalable training methodologies for a variety of generative AI models, including large language models and voice/speech, vision, and multimodal foundation models, using cutting-edge techniques and frameworks. You will implement and optimize state-of-the-art neural architectures and robust training and inference infrastructure to take complex models with hundreds of billions to trillions of parameters to production while optimizing for low latency, high throughput, and cost efficiency.
Key Responsibilities:
- Architect Distributed Training Systems: Design and implement highly scalable distributed training pipelines for LLMs and frontier models, leveraging model parallelism (tensor, pipeline, expert) and data parallelism techniques.
- Optimize Performance: Utilize deep knowledge of CUDA, C++, and low-level optimizations to enhance model training speed and efficiency across diverse hardware configurations.
- Implement Novel Techniques: Research and apply cutting-edge optimizations such as FlashAttention to accelerate model training and reduce computational costs.
- Framework Expertise: Demonstrate proficiency in deep learning frameworks such as PyTorch, TensorFlow, and JAX, and tailor them for distributed training scenarios.
- Scale to Hundreds of Billions of Parameters: Work with massive models, ensuring stable and efficient training across distributed resources.
- Evaluate Scaling Laws: Design and conduct experiments to analyze the impact of model size, data, and computational resources on model performance.
- Collaborate: Partner closely with research scientists and engineers to integrate research findings into production-ready training systems.
Qualifications:
Join Krutrim to shape the future of AI and make a significant impact on hundreds of millions of lives across India and the world. If you're passionate about pushing the boundaries of AI and want to work with a team at the forefront of innovation, we want to hear from you!