What are the responsibilities and job description for the Security Engineer - Offensive AI / GenAI Security position at E-Solutions?
Role : Security Engineer - Offensive AI / GenAI Security
Location : Mountain View CA / New York (100% Onsite)
Below are the notes based on discussion with client
Primary Skills
Penetration Testing (4-5 years exp) . [Does not want Architect / Staff level candidate]
Exposure to AI Models / ML
Should be able to R&D and evolved as AI Pen Tester .
Key Responsibility :
Day-to-day work will involve Pen Testing .
Execute hands-on penetration testing and security assessments on Generative AI Applications, AI / ML components, Web Applications, Web Services and integrated systems to pinpoint vulnerabilities .
The target applications are mix of AI applications and General applications
Tools :
Typical PenTesting : Manual and Burp Suite
Below is the actual JD :
Job Summary :
We are looking for a seasoned Security Engineer specializing in Offensive AI and Generative AI Security to join our team. With over 5 years of experience in penetration testing, vulnerability management, and AI / ML security practices, the ideal candidate will possess robust expertise in both manual and automated security testing methods tailored for AI-driven systems. This role demands deep technical know-how in developing and utilizing tools for offensive security testing specifically focused on AI and machine learning models.
Responsibilities :
Execute hands-on penetration testing and security assessments on Generative AI Applications, AI / ML components, Web Applications, Web Services and integrated systems to pinpoint vulnerabilities.
Lead the development of security utilities and tools designed to automate offensive security testing of AI-models and Generative AI ecosystems.
Engineer and automate comprehensive security testing procedures for Generative AI platforms using programming skills in Python, Perl, and Bash.
Utilize advanced knowledge of OWASP, SANS25, CVE, and MITRE alongside specific AI-related security frameworks to guide in-depth security assessments and threat modeling.
Collaborate with AI model developers and data scientists to understand AI architectures and develop tailored security practices and tools.
Conduct systematic vulnerability management programs specifically designed around AI and Generative AI technologies, ensuring meticulous execution, reporting, and follow-up remediations.
Develop security assessment methodologies, procedures, and testing suites that are specifically crafted for AI and machine learning environments.
Stay abreast of the latest in security, AI developments, and threats, integrating fresh insights into security strategies and test designs.
Manage and lead security review processes for third-party AI vendors and technology partners, ensuring adherence to our stringent security standards and protocols.
Adopt and adapt existing penetration testing tools, as well as develop proprietary tools necessary for effective Offensive AI security testing.
Work dynamically across various teams including product development and AI development groups, ensuring a complete and unified approach to AI Security.
Document and report on security findings, challenges, and progress in a comprehendible and detailed manner suited for both AI specialists and non-specialist stakeholders.
Identify and drive the implementation of best practices and security solutions for continuous improvement of the organization's AI security posture.
Requirements :
Bachelor’s or Master’s degree in Computer Science, Information Security, AI / ML, or a related technical field.
Minimum of 5 years of experience in penetration testing and vulnerability management including substantial exposure to AI / ML or Generative AI specific security testing.
Demonstrable experience in both manual and automated offensive security tactics.
Proficient in developing and implementing security tools and processes for Generative AI security testing environments.
Deep understanding of authentication protocols, data integrity checks, and secure data handling specific to AI / ML models.
Technical fluency in AI technologies, including experience with Generative AI models, Machine Learning techniques, and prompt engineering (OpenAI, Google Gemini, Claude etc.)
Strong programming skills in Python, Perl, Bash, or similar languages, with specific tools development expertise for AI security.
Outstanding communication and presentation skills to effectively share insights and recommendations across technical and non-technical teams.
Critical thinking and advanced problem-solving skills dedicated to the AI security landscape.
Relevant certifications such as OSCP, OSWE OSEP, CRTE, CRTP with added preference for AI-specific security training or credentials.
This role is pivotal in strategically advancing our capabilities in the face of fast-evolving AI / ML technologies and threats. We are eager to welcome a proactive, knowledgeable, and tactical security professional who is passionate about pioneering AI security initiatives.
Keep a pulse on the job market with advanced job matching technology.
If your compensation planning software is too rigid to deploy winning incentive strategies, it’s time to find an adaptable solution.
Compensation Planning
Enhance your organization's compensation strategy with salary data sets that HR and team managers can use to pay your staff right.
Surveys & Data Sets
What is the career path for a Security Engineer - Offensive AI / GenAI Security?
Sign up to receive alerts about other jobs on the Security Engineer - Offensive AI / GenAI Security career path by checking the boxes next to the positions that interest you.
Not the job you're looking for? Here are some other Security Engineer - Offensive AI / GenAI Security jobs in the Mountain View, CA area that may be a better fit.
We don't have any other Security Engineer - Offensive AI / GenAI Security jobs in the Mountain View, CA area right now.