What are the responsibilities and job description for the Machine Learning Engineer/ML Ops Engineer- NO C2C position at PDSSOFT INC.?
ML Ops Engineer
Location: San Francisco, CA (Hybrid)
Duration: Long Term
Job Description:
We are looking for an ML Ops Engineer in the Bay Area to assist with the deployment of, and infrastructure for, machine learning models. For some years we have built and maintained our own ML Ops platform on top of Sagemaker, with associated infrastructure in AWS; in the near future we will move to DataRobot for our model development and deployment needs. We also use Airflow in several different areas, specifically the MWAA hosted-Airflow offering from AWS.
This role involves supporting two main areas: ML Ops, and our "Central Airflow" platform, which supports the Data Engineering team.
1) For ML Ops, we support the Data Science team with the development and deployment of ML models. This is currently done in Sagemaker / AWS and will move to DataRobot in the near future. The work involves AWS infrastructure, CI / CD, and Python for the modelling / platform components (see the Sagemaker deployment sketch after this list).
2) For Airflow, there are typically daily requests to create secrets / variables in Airflow, deploy DAGs, review modifications that Data Engineers submit to ensure they follow best practices, and deploy changes to Airflow that support new packages, platforms, and containerized solutions (see the DAG sketch after this list).
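For orientation, here is a minimal sketch of the kind of model-deployment work the ML Ops side involves, using the boto3 SageMaker client. It assumes a model artifact already sits in S3 and that an ECR inference image and execution role exist; every name and ARN below is a hypothetical placeholder, not a detail from this posting.

# Minimal sketch: register a model and stand up a real-time endpoint.
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")

MODEL_NAME = "churn-model-v1"                                             # hypothetical
IMAGE_URI = "123456789012.dkr.ecr.us-west-2.amazonaws.com/churn:latest"   # hypothetical
MODEL_DATA = "s3://example-bucket/models/churn/model.tar.gz"              # hypothetical
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"        # hypothetical

# Register the model (inference image + serialized artifact).
sm.create_model(
    ModelName=MODEL_NAME,
    PrimaryContainer={"Image": IMAGE_URI, "ModelDataUrl": MODEL_DATA},
    ExecutionRoleArn=ROLE_ARN,
)

# Describe how the endpoint should be provisioned.
sm.create_endpoint_config(
    EndpointConfigName=f"{MODEL_NAME}-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": MODEL_NAME,
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
    }],
)

# Create the real-time inference endpoint.
sm.create_endpoint(
    EndpointName=f"{MODEL_NAME}-endpoint",
    EndpointConfigName=f"{MODEL_NAME}-config",
)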
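And a minimal sketch of the Airflow side: a small DAG that reads a centrally managed Airflow Variable at run time. It assumes Airflow 2.4+ (as on recent MWAA versions); the DAG id, variable key, and task logic are hypothetical.

# Minimal sketch: one-task DAG that pulls a centrally managed Variable.
from datetime import datetime

from airflow import DAG
from airflow.models import Variable
from airflow.operators.python import PythonOperator


def export_report():
    # Variables / secrets are created centrally and read here at run time.
    target_bucket = Variable.get("reporting_bucket")  # hypothetical key
    print(f"Exporting report to {target_bucket}")


with DAG(
    dag_id="daily_reporting",          # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="export_report", python_callable=export_report)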
We are looking for the following skills, primarily in Python, Airflow, and AWS:
REQUIRED
Python (4 years)
REST APIs (2 years)
Airflow (3 years)
- DAG authoring
- Deployment / administration
Sagemaker Studio / ML model monitoring
AWS infra / networking (4 years, ideally managed via Terraform or other Infrastructure-as-Code tech)
Terraform (1 year)
BONUS
Python (7 years)
Docker (1 year)
Linux admin (1 year)
Github Actions
Azure