What are the responsibilities and job description for the Multimodal Research Engineer (AI Labs) position at Krutrim?
Multimodal and Vision AI Research Engineer / Scientist
Location: Palo Alto (US)
Type of Job: Full-time
About Krutrim:
Krutrim is building AI computing for the future. Our envisioned AI computing stack encompasses AI infrastructure, AI Cloud, multilingual and multimodal foundational models, and AI-powered applications. As India’s first AI unicorn, we built the country’s first foundation models in the LLM and VLM domains, empowering consumers, startups, enterprises, and researchers to develop AI applications. We focus on foundational models across text, voice, and vision while developing AI training and inference platforms to drive innovation. Our teams, spanning Bangalore, Singapore, and San Francisco, bring expertise across AI research, applied AI, cloud engineering, and semiconductor design.
Job Description: We are seeking experienced Multimodal and Vision AI Engineers / Scientists to research, develop, optimize, and deploy Vision-Language Models (VLMs), multimodal generative models, diffusion models, and traditional computer vision systems. You will work on foundational models integrating vision, language, and audio, optimize AI architectures, and push the boundaries of multimodal AI research.
Responsibilities:
- Research, design, and train multimodal vision-language models (VLMs), integrating deep learning, transformers, and attention mechanisms.
- Develop and optimize small-scale distilled versions of VLMs for efficient deployment on resource-constrained devices.
- Implement state-of-the-art object detection (YOLO, Faster R-CNN), segmentation (panoptic segmentation), classification (ResNets, Vision Transformers), and image generation (Stable Diffusion, Stable Cascade).
- Train or fine-tune vision models for representation (e.g., Vision Transformers, Q-Former, CLIP, SigLIP), generation, and video representation (e.g., Video Swin Transformer).
- Work with diffusion models and generative models for conditional image generation and multimodal applications.
- Optimize CNN-based architectures for computer vision tasks like recognition, tracking, and feature extraction.
- Implement and optimize audio models for representation (e.g., w2v-BERT) and generation (e.g., HiFi-GAN, SeamlessM4T).
- Innovate with multimodal fusion techniques such as early fusion and deep fusion, along with efficient transformer components such as Mixture-of-Experts (MoE), FlashAttention, multi-query attention (MQA), grouped-query attention (GQA), multi-head latent attention (MLA), and other transformer architectures.
- Advance video analysis, video summarization, and video question-answering models to enhance multimedia understanding.
- Implement optimization techniques like quantization, distillation, sparsity, streaming, and caching for scalable model deployment.
- Integrate and tailor deep learning frameworks and training tooling such as PyTorch, TensorFlow, DeepSpeed, Lightning, Habana, and FSDP.
- Deploy large-scale distributed AI models using MLOps and infrastructure frameworks such as Airflow, MosaicML, Anyscale, Kubeflow, and Terraform.
- Publish research in top-tier conferences (NeurIPS, CVPR, ICCV, ICLR, ICML) and contribute to open-source AI projects.
- Collaborate with engineering teams to productionize research advancements into scalable services and products.
Qualifications: