What are the responsibilities and job description for the Senior ML Platform Engineer (Serving Infrastructure) position at VLink?
Job Title : ML Engineer
Location : Remote
Employment Type : Full-time or Contract
Duration : Long Term
About VLink : Founded in 2006 and headquartered in Connecticut, VLink is one of the fastest-growing digital technology services and consulting companies. Since its inception, our innovative team members have been solving the most complex business and IT challenges of our global clients.
Job Description :
Role Overview : We're looking for an experienced engineer to build our ML serving infrastructure. You'll create the platforms and systems that enable reliable, scalable model deployment and inference. This role focuses on the runtime infrastructure that powers our production ML capabilities.
Key Responsibilities :
- Design and implement scalable model serving platforms for both batch and real-time inference
- Build model deployment pipelines with automated testing and validation
- Develop monitoring, logging, and alerting systems for ML services
- Create infrastructure for A/B testing and model experimentation (a minimal routing sketch follows this list)
- Implement model versioning and rollback capabilities
- Design efficient scaling and load balancing strategies for ML workloads
- Collaborate with data scientists to optimize model serving performance
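Several of the responsibilities above (A/B experimentation, model versioning, and rollback) meet in version-aware request routing. The sketch below is illustrative only and assumes nothing about VLink's actual stack: the model names, the 90/10 traffic split, and the hash-based bucketing are hypothetical placeholders.

```python
# Minimal sketch of version-aware routing for A/B testing: requests are split
# deterministically by a hashed request ID, so a given caller is sticky to one
# model version, and rollback is a one-line registry/config change.
import hashlib

# Placeholder "models"; in practice these would be loaded serving artifacts.
MODEL_REGISTRY = {
    "v1": lambda features: sum(features),        # champion (stand-in)
    "v2": lambda features: sum(features) * 1.1,  # challenger (stand-in)
}
CHALLENGER_TRAFFIC_PERCENT = 10  # hypothetical rollout fraction


def pick_version(request_id: str) -> str:
    """Deterministic bucket assignment: the same request_id always maps to the same version."""
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "v2" if bucket < CHALLENGER_TRAFFIC_PERCENT else "v1"


def predict(request_id: str, features: list) -> dict:
    version = pick_version(request_id)
    score = MODEL_REGISTRY[version](features)
    # In a real service, latency/error/traffic metrics would be emitted here
    # to feed the monitoring and alerting systems described above.
    return {"model_version": version, "score": score}


if __name__ == "__main__":
    print(predict("user-123", [0.2, 0.5, 0.1]))
```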
Technical Requirements :
Nice to Have :
- TorchServe for PyTorch models
- Model quantization (INT8, FP16) - see the sketch after this list
- Pre/post-processing pipeline optimization
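For the quantization item above, here is a hedged PyTorch sketch of the two precisions the posting names. The toy nn.Sequential model is a stand-in, not a VLink artifact; FP16 conversion is shown only as a dtype change since it is normally served on GPUs, while INT8 dynamic quantization runs on CPU.

```python
# Two common post-training precision reductions for serving: FP16 and dynamic INT8.
import copy

import torch
import torch.nn as nn

# Toy stand-in for a served model (hypothetical, not a VLink model).
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1)).eval()

# FP16: halves weight memory and bandwidth; typically deployed on GPUs with
# Tensor Core support, so only the dtype conversion is shown here.
fp16_model = copy.deepcopy(model).half()

# INT8 dynamic quantization (CPU): Linear weights are stored as int8 and
# activations are quantized on the fly at inference time.
int8_model = torch.ao.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.inference_mode():
    print(int8_model(torch.randn(1, 128)).shape)  # torch.Size([1, 1])
```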
Employment Practices :
EEO, ADA, FMLA Compliant
VLink is an equal opportunity employer. At VLink, we are committed to embracing diversity, multiculturalism, and inclusion. VLink does not discriminate on the basis of race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law. All aspects of employment, including the decision to hire, promote, or discharge, will be decided on the basis of qualifications, merit, performance, and business needs.