What are the responsibilities and job description for the Staff Software Engineer, ML Acceleration IC position at Stack AV?
About the Role:
The Training and Deployment team, part of the ML Platform organization at Stack AV, owns the platform that the AI team uses to build, optimize, test, and deploy models on our autonomous vehicles. We are seeking an experienced, hands-on engineer for our ML Acceleration team. The ideal candidate has a deep understanding of GPUs and performance optimization, excellent collaboration skills, and the ability to drive technical excellence.
Responsibilities:
- Analyze and profile ML models to identify performance bottlenecks.
- Use open-source tooling to enhance our platform so that ML engineers can profile and optimize their models (e.g., through quantization).
- Automate the export of models to optimized formats (e.g., TensorRT) and their deployment, including transformer-based models such as VLMs.
- Implement optimizations using CUDA, Triton, and custom kernels.
- Collaborate with ML researchers to balance model accuracy and speed.
- Develop and implement efficient model export, optimization, and profiling solutions to enhance performance and streamline deployment of machine learning models across various hardware platforms.
- Collaboration: Collaborate with cross-functional teams to understand data requirements and design appropriate solutions.
- Technology Stack: Stay updated with the latest technologies and trends in ML inference and ML accelerators.
- Performance Optimization: Identify and resolve performance bottlenecks in models.
- Promote Engineering Excellence: Maintain a high bar for engineering excellence in your own work and foster a culture of engineering excellence within the team.
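To give a flavor of the quantization work described above, here is a minimal sketch of symmetric per-tensor int8 quantization in plain Python (the function names are illustrative; in practice this is done with PyTorch or TensorRT tooling, not hand-rolled code):

```python
def quantize_int8(values):
    """Symmetric per-tensor int8 quantization: map floats onto [-127, 127]."""
    scale = max(abs(v) for v in values) / 127.0
    quantized = [max(-127, min(127, round(v / scale))) for v in values]
    return quantized, scale

def dequantize(quantized, scale):
    """Recover approximate float values from int8 codes and the scale."""
    return [q * scale for q in quantized]

# Example: quantize a handful of weights and check the round-trip error.
weights = [0.02, -1.5, 0.75, 3.0]
q, scale = quantize_int8(weights)
recovered = dequantize(q, scale)
max_error = max(abs(w - r) for w, r in zip(weights, recovered))
```

The round-trip error is bounded by half a quantization step (scale / 2), which is the accuracy-versus-speed trade-off the role weighs when collaborating with ML researchers.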
Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- 5 years of experience, including GPU programming and optimization.
- Strong programming skills in C and Python.
- Proven experience in GPU programming and optimization.
- Familiarity with deep learning frameworks, especially PyTorch.
- CUDA programming.
- Triton language for GPU kernels.
- PyTorch optimization techniques.
- TensorRT implementation.
- ONNX model conversion and deployment.
- Custom GPU kernel development.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.