What are the responsibilities and job description for the Senior Data Scientist position at MSys Technologies - USA?
Job Details
Job Title: Senior Data Scientist
Job Location: San Jose, CA (Onsite)
Duration: 3 Months (High possibility of extension)
Job Summary:
We are seeking a Senior Data Scientist with expertise in AI-driven triage optimization, log analysis, and system migrations to support the Client's QA leadership. The goal is to reduce triage time from 5 days to 2-3 days by leveraging AI, automation, and predictive analytics. The ideal candidate must have strong programming expertise in Java, C, and Python, along with experience in Kafka, distributed systems, and event-driven architecture.
Key Responsibilities:
- Triage Optimization: Design and implement AI/ML solutions to accelerate defect triage and root cause analysis using Tracia.
- Log Analysis & Debugging: Perform log collection, debugging, and visualization to identify patterns and reduce resolution time.
- Streaming & Event-Driven Processing: Work with Kafka and other streaming platforms to process, analyze, and categorize real-time logs for faster issue detection.
- Migration Support: Assist in transitioning legacy triage processes to AI-enhanced automated workflows.
- AI/ML Implementation: Build predictive models for log classification, anomaly detection, and failure analysis to improve issue resolution (see the sketch following this list).
- Automation & Workflow Improvement: Develop automated data processing and log analysis pipelines to minimize manual effort.
- Performance Optimization: Optimize defect detection using AI-powered tools and enhance triage accuracy.
- Collaboration: Work closely with QA, DevOps, and Engineering teams to integrate AI-driven solutions into existing workflows.
- Real-time Monitoring & Dashboards: Create real-time dashboards using Tracia for tracking system health and triage efficiency.
- Continuous Improvement: Refine AI models, enhance automation strategies, and implement best practices for faster issue resolution.
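To make the AI/ML Implementation responsibility concrete, here is a minimal sketch of a log-classification baseline using scikit-learn. The log lines, triage categories, and model choice are illustrative assumptions, not details of the Client's actual pipeline or of Tracia.

```python
# Minimal log-classification sketch: TF-IDF features plus a linear model,
# a common lightweight baseline for routing defects to an owning team.
# All log lines and labels below are placeholder data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: raw log lines paired with a triage category.
logs = [
    "ERROR connection timeout while writing to volume vol-17",
    "WARN replication lag exceeded threshold on node-3",
    "ERROR out of memory in worker process during snapshot",
    "INFO checkpoint completed successfully",
]
labels = ["storage", "replication", "memory", "healthy"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    LogisticRegression(max_iter=1000),
)
model.fit(logs, labels)

# Classify a new log line so it can be routed without manual triage.
print(model.predict(["ERROR timeout writing snapshot to volume vol-9"]))
```

In practice such a baseline would be trained on historical triaged defects and paired with anomaly-detection models for previously unseen failure patterns.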
Required Skills & Qualifications:
- 7 years of experience in Data Science, AI/ML, and automation within QA, DevOps, or Software Engineering environments.
- Strong programming skills in Java, C, and Python for automation and log analysis.
- Experience with Kafka and event-driven architecture for log streaming, analysis, and real-time processing (a consumer sketch follows this list).
- Hands-on experience in log tracking, debugging, and visualization.
- Proven expertise in reducing triage time and implementing AI-driven log analysis and defect prediction models.
- Experience with AI/ML frameworks (TensorFlow, PyTorch, Scikit-learn).
- Hands-on experience with log aggregation and monitoring tools (Splunk, ELK Stack, Prometheus, Grafana).
- Expertise in automation frameworks (Selenium, Robot Framework, or custom Python-based automation).
- Deep understanding of DevOps practices, CI/CD pipelines, and cloud platforms (AWS, Azure, Google Cloud Platform).
- Experience in migrations from legacy triage processes to AI-based automation.
- Strong problem-solving and debugging skills, especially in log analysis and system performance optimization.
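As a reference point for the Kafka skill above, the following is a minimal sketch of consuming a real-time log topic with the kafka-python client. The broker address, topic name, and message schema are assumptions for illustration only.

```python
# Minimal Kafka consumer sketch: read JSON log events from a topic and
# flag error-level entries as they arrive, so they can be handed to the
# triage pipeline immediately rather than in a later batch run.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "qa-logs",                            # placeholder topic name
    bootstrap_servers="localhost:9092",   # placeholder broker address
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

for message in consumer:
    event = message.value
    if event.get("level") == "ERROR":
        print(f"Triage candidate: {event.get('message', '')[:120]}")
```

A production version would typically feed flagged events into the classification and anomaly-detection models rather than printing them.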
Preferred Qualifications:
- Knowledge of Client, virtualization, and storage technologies.
- Familiarity with predictive analytics and AI-based root cause analysis.
- Experience working with incident management and observability tools.
- Exposure to distributed computing frameworks (Spark, Flink, or similar).