What are the responsibilities and job description for the Software Engineer - ML/LLM Inference position at Alldus?
My client is searching for a talented engineer to work on ML/LLM inference and serving. They specialize in developing next-generation LLM fine-tuning and inference engines.
We are seeking a talented and motivated Software Engineer specializing in Machine Learning (ML) and Large Language Model (LLM) inference to join our dynamic ML Inference team. In this role, you will bridge the gap between AI/ML research and systems programming to build and enhance our next-generation LLM Inference Engine. You will play a crucial role in optimizing the performance, scalability, and efficiency of our LLM serving systems.
Key Responsibilities:
Develop and Enhance Inference Engine:
Design, implement, and optimize the next-generation LLM Inference Engine.
Integrate the latest LLM inference techniques from research to enhance latency and throughput.
Performance Optimization:
Conduct deep performance optimizations across multiple layers of the technology stack, including PyTorch, C++, and CUDA.
Analyze and improve system performance to meet the demands of various use cases.
Customer Collaboration:
Work closely with customers to understand specific performance requirements and optimize solutions accordingly.
Provide technical expertise and support to ensure successful deployment and operation of inference systems.
Technical Leadership:
Define the roadmap and technical vision for the inference stack.
Lead initiatives to drive innovation and maintain the competitive edge of our inference technologies.
Infrastructure Development:
Collaborate with partner teams to build and maintain scalable, multi-replica serving infrastructure.
Ensure the reliability and scalability of LLM serving systems to handle increasing workloads.
Qualifications:
Technical Skills:
Proficiency in systems programming languages such as C++.
Strong experience with machine learning frameworks, particularly PyTorch.
Expertise in GPU programming and CUDA for performance optimization.
Solid understanding of AI/ML concepts, especially related to large language models.
Experience:
Proven experience in developing and optimizing ML/LLM inference systems.
Demonstrated ability to integrate research advancements into production systems.
Experience with performance tuning and profiling across various technology stacks.