What are the responsibilities and job description for the Sr. Gen AI Data Engineer position at Virtusa?
Model Development: Develop and implement generative AI models for various applications (text, image, audio, video).
Research & Application: Research and apply cutting-edge techniques in deep learning, NLP, and computer vision.
Data Pipelines: Design and build robust data pipelines for training and evaluating large-scale generative models.
Performance Optimization: Optimize model performance, including reducing latency and improving accuracy.
Cross-Functional Collaboration: Collaborate with cross-functional teams (product, engineering, research) to define and deliver AI-powered solutions.
Architecture Selection: Evaluate and select appropriate generative AI architectures and frameworks.
Model Fine-Tuning: Fine-tune pre-trained models for specific use cases and domains (a minimal illustration appears after this list).
Documentation: Develop and maintain documentation for models, data pipelines, and deployment processes.
Production Monitoring: Monitor model performance in production and implement necessary updates and improvements.
Staying Current: Stay abreast of the latest advancements in generative AI and related fields.
Ethical Considerations: Address ethical considerations and biases in generative AI models.
Security Implementation: Implement security measures to protect sensitive data and models.
Prototyping & Experimentation: Prototype and experiment with new generative AI concepts and applications.
Tool Development: Contribute to the development of internal tools and libraries for generative AI.
Troubleshooting: Troubleshoot and resolve technical issues related to generative AI models and infrastructure.
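For illustration, a domain fine-tuning pass over a pre-trained causal language model might look like the following minimal sketch, assuming the Hugging Face transformers and datasets libraries; the base model name, data file, and hyperparameters are placeholders rather than anything specified in the posting.

```python
# Minimal sketch: fine-tune a pre-trained causal LM on a domain corpus.
# "gpt2" and "domain_corpus.txt" are illustrative placeholders.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import load_dataset

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token          # GPT-2 has no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load a plain-text domain corpus and tokenize it.
dataset = load_dataset("text", data_files={"train": "domain_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

# Causal LM objective (mlm=False), so labels are shifted input tokens.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="finetuned-model",
    per_device_train_batch_size=4,
    num_train_epochs=1,
    learning_rate=5e-5,
    logging_steps=50,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized,
                  data_collator=collator)
trainer.train()
```

In practice the same pattern extends to parameter-efficient approaches (e.g. adapters or LoRA) when full fine-tuning is too costly, but the workflow above captures the basic shape of the responsibility.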
Technical Skills
Programming: Proficiency in Python.
Deep Learning Frameworks: Expertise in TensorFlow and/or PyTorch.
Generative Models: Deep understanding of GANs, VAEs, and transformer models.
NLP: Strong knowledge of natural language processing techniques.
Computer Vision: Familiarity with computer vision concepts and libraries.
Cloud Computing: Experience with cloud platforms (AWS, Google Cloud, Azure) for AI deployment.
Data Processing: Experience with data preprocessing and feature engineering (see the sketch below).
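A typical preprocessing and feature-engineering step of the kind listed above might look like the following minimal sketch, assuming pandas and scikit-learn; the input file and column names are hypothetical.

```python
# Minimal sketch: impute, scale, and encode raw tabular features.
# "training_data.csv" and the column names are hypothetical examples.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.impute import SimpleImputer

df = pd.read_csv("training_data.csv")

numeric_cols = ["prompt_length", "response_latency_ms"]   # hypothetical
categorical_cols = ["domain", "language"]                  # hypothetical

numeric_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="median")),
    ("scale", StandardScaler()),
])
categorical_pipeline = Pipeline([
    ("impute", SimpleImputer(strategy="most_frequent")),
    ("encode", OneHotEncoder(handle_unknown="ignore")),
])

# Apply the numeric and categorical pipelines to their respective columns.
preprocess = ColumnTransformer([
    ("num", numeric_pipeline, numeric_cols),
    ("cat", categorical_pipeline, categorical_cols),
])

features = preprocess.fit_transform(df)
print(features.shape)
```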