What are the responsibilities and job description for the Security Engineer (Offensive AI / GenAI Security) - C2C - Mountain View CA / New York position at VySystems?
Requirements:
Bachelor’s or Master’s degree in Computer Science, Information Security, AI/ML, or a related technical field.
Minimum of 5 years of experience in penetration testing and vulnerability management including substantial exposure to AI/ML or Generative AI specific security testing.
Demonstrable experience in both manual and automated offensive security tactics.
Proficient in developing and implementing security tools and processes for Generative AI security testing environments.
Deep understanding of authentication protocols, data integrity checks, and secure data handling specific to AI/ML models.
Technical fluency in AI technologies, including experience with Generative AI models, Machine Learning techniques, and prompt engineering (OpenAI, Google Gemini, Claude etc.)
Strong programming skills in Python, Perl, Bash, or similar languages, with specific tools development expertise for AI security.
Outstanding communication and presentation skills to effectively share insights and recommendations across technical and non-technical teams.
Critical thinking and advanced problem-solving skills dedicated to the AI security landscape.
Relevant certifications such as OSCP, OSWE OSEP, CRTE, CRTP with added preference for AI-specific security training or credentials.
This role is pivotal in strategically advancing our capabilities in the face of fast-evolving AI/ML technologies and threats. We are eager to welcome a proactive, knowledgeable, and tactical security professional who is passionate about pioneering AI security initiatives.
Bachelor’s or Master’s degree in Computer Science, Information Security, AI/ML, or a related technical field.
Minimum of 5 years of experience in penetration testing and vulnerability management including substantial exposure to AI/ML or Generative AI specific security testing.
Demonstrable experience in both manual and automated offensive security tactics.
Proficient in developing and implementing security tools and processes for Generative AI security testing environments.
Deep understanding of authentication protocols, data integrity checks, and secure data handling specific to AI/ML models.
Technical fluency in AI technologies, including experience with Generative AI models, Machine Learning techniques, and prompt engineering (OpenAI, Google Gemini, Claude etc.)
Strong programming skills in Python, Perl, Bash, or similar languages, with specific tools development expertise for AI security.
Outstanding communication and presentation skills to effectively share insights and recommendations across technical and non-technical teams.
Critical thinking and advanced problem-solving skills dedicated to the AI security landscape.
Relevant certifications such as OSCP, OSWE OSEP, CRTE, CRTP with added preference for AI-specific security training or credentials.
This role is pivotal in strategically advancing our capabilities in the face of fast-evolving AI/ML technologies and threats. We are eager to welcome a proactive, knowledgeable, and tactical security professional who is passionate about pioneering AI security initiatives.