What are the responsibilities and job description for the Software Engineer, LLM Inference Engine and Product position at WaveForms AI, Inc.?
Job title: Software Engineer, LLM Inference Engine and Product / Member of Technical Staff
Who We Are: WaveForms AI is an audio large language model (LLM) company building the future of audio intelligence through advanced research and products. Our models will transform human-AI interactions, making them more natural, engaging, and immersive.
Role overview: The Software Engineer, LLM Inference Engine and Product will focus on developing and optimizing a real-time inference engine for multimodal LLMs that handle audio and text inputs seamlessly. This role involves leveraging technologies such as LiveKit, RTC engines, WebRTC, and FastAPI to build an efficient, real-time API layer. You will contribute to cutting-edge AI systems that enable smooth user experiences across platforms, including iOS, Android, and desktop.
Key Responsibilities
- Real-time Inference Development: Build and optimize a robust inference engine that supports multimodal LLMs, handling real-time audio and text inputs.
- Technology Integration: Leverage tools like LiveKit, RTC engines, WebRTC, and FastAPI to enable low-latency, real-time communication and inference.
- End-to-End Pipeline Design: Create and maintain the complete inference pipeline, from data ingestion to model serving, ensuring real-time performance.
- Cross-platform Compatibility: Ensure the inference engine operates efficiently across platforms, including mobile (iOS/Android) and desktop.
- Optimization & Performance Tuning: Optimize the inference system to reduce latency, improve throughput, and enhance user experience.
- API Development: Design and maintain scalable APIs to support real-time LLM interaction for diverse applications.
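The real-time responsibilities above all revolve around one pattern: streaming partial model output as audio chunks arrive, rather than waiting for a full utterance. A minimal, framework-agnostic sketch of that loop is below; the `stub_model` and `stream_inference` names are illustrative only (not WaveForms code), and in a production engine the stub would be a batched GPU inference call sitting behind a FastAPI/WebRTC transport layer.

```python
import asyncio
from typing import AsyncIterator


async def stub_model(chunk: bytes) -> str:
    # Placeholder for the real multimodal LLM call; a production engine
    # would run (likely batched) GPU inference here.
    await asyncio.sleep(0)  # yield control, as a real async call would
    return f"partial:{len(chunk)}"


async def stream_inference(chunks: AsyncIterator[bytes]) -> AsyncIterator[str]:
    # Core real-time loop: emit a partial result per incoming audio chunk
    # instead of buffering the whole utterance, which keeps perceived
    # latency low for the end user.
    async for chunk in chunks:
        yield await stub_model(chunk)


async def demo() -> list[str]:
    async def mic() -> AsyncIterator[bytes]:
        # Fake PCM frames standing in for a live microphone feed.
        for frame in (b"\x00" * 320, b"\x00" * 640):
            yield frame

    return [part async for part in stream_inference(mic())]
```

The same async-generator shape plugs directly into a WebSocket or WebRTC data-channel handler, where each yielded string is forwarded to the client as soon as it is produced.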
Required Skills & Qualifications
Minimum Experience