What are the responsibilities and job description for the Contract position for LLM RAG and Security Engineer Location: Sunnyvale, CA / Austin, TX position at Veah Consulting Services?
Location: Sunnyvale, CA / Austin, TX
Duration: 6 Months
Exp:8 Years
Experienced LLM RAG & Security Engineer who specializes in evaluating LLM applications and Retrieval-Augmented Generation (RAG) systems, with a strong background in Red Team testing to identify security vulnerabilities in LLM RAG applications. The ideal candidate will have hands-on experience with adversarial testing frameworks such as Garak, PyRIT, or Giskard, ensuring the robustness, security, and reliability of AI-driven systems.
Key Responsibilities:
* Develop & Optimize LLM Applications: Design and implement LLM-powered applications using state-of-the-art models, ensuring efficiency and scalability.
* RAG System Development: Build and fine-tune Retrieval-Augmented Generation (RAG) pipelines for enhanced contextual accuracy and retrieval efficiency.
* Red Team Testing & Security Assessments: Conduct adversarial testing to uncover vulnerabilities such as prompt injection, jailbreaks, data leakage, and bias exploitation.
* Testing with Security Tools: Utilize Garak, PyRIT, Giskard, and other adversarial testing frameworks to evaluate LLM security and model robustness.
* Threat Analysis & Risk Mitigation: Identify LLM security risks, propose mitigation strategies, and work closely with engineering teams to implement secure AI solutions.
* Model Fine-tuning & Guardrails: Implement guardrails, prompt filtering, and defensive techniques to enhance the security posture of deployed LLM applications.
* Collaboration with AI & Security Teams: Work alongside ML Engineers and Data Scientists to integrate security best practices into AI pipelines.
* Performance & Compliance Monitoring: Ensure LLM applications meet security, compliance, and ethical AI standards (e.g., GDPR, AI Act).
Required Skills & Experience:
* Strong experience in LLM Evaluation, application development and RAG architecture.
* Hands-on experience in Red Team testing for LLM security vulnerabilities.
* Proficiency with adversarial testing tools like Garak, PyRIT, and Giskard.
* Deep understanding of LLM security risks, including prompt injection, data exfiltration, and model manipulation.
* Solid programming skills in Python, with experience in Hugging Face, LangChain, LlamaIndex, or similar frameworks.
* Knowledge of cybersecurity principles, AI security guidelines, and risk assessment methodologies.
* Experience in ML model evaluation, monitoring, and compliance.
* Strong problem-solving skills with an analytical mindset.
Preferred Qualifications:
* Previous experience in LLM security research or penetration testing of AI models.
* Background in NLP, information retrieval, and AI ethics.
* Familiarity with secure model deployment in cloud environments