MENU
  • Remote Jobs
  • Companies
  • Go Premium
  • Job Alerts
  • Post a Job
  • Log in
  • Sign up
Working Nomads logo Working Nomads
  • Remote Jobs
  • Companies
  • Post Jobs
  • Go Premium
  • Get Free Job Alerts
  • Log in

Senior AI Product Security Researcher

GitLab

Full-time
Anywhere
$124k-$266k per year
researcher
security
python
software engineering
penetration testing
Apply for this position

An overview of this role

We are seeking a Senior AI Product Security Researcher to join our Security Platforms & Architecture Team to conduct cutting-edge security research on GitLab's AI-powered DevSecOps capabilities. As GitLab transforms software development through intelligent collaboration between developers and specialized AI agents, we need security researchers who can proactively identify and validate vulnerabilities before they impact our platform or customers.

In this role, you'll be at the forefront of AI security research, working with GitLab Duo Agent Platform, GitLab Duo Chat, and AI workflows that represent the future of human/AI collaborative development. You'll develop novel testing methodologies for AI agent security, conduct hands-on penetration testing of multi-agent orchestration systems, and translate emerging AI threats into actionable security improvements. Your research will directly influence how we build and secure the next generation of AI-powered DevSecOps tools, ensuring GitLab remains the most secure software factory platform on the market.

This position offers the unique opportunity to shape AI security practices in one of the world's largest DevSecOps platforms, working with engineering teams who are pushing the boundaries of what's possible with AI-assisted software development. You'll have access to cutting-edge AI systems and the freedom to explore creative attack scenarios while contributing to the security of millions of developers worldwide.

What You'll Do

  • Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios

  • Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques

  • Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform

  • Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation

  • Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies

  • Collaborate with AI engineering teams to validate security fixes through iterative testing and verification

  • Contribute to the development of AI security testing frameworks and automated validation tools

  • Partner with Security Architecture to inform architectural improvements based on research findings

  • Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery

What You'll Bring

  • 5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security

  • Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms

  • Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation

  • Proficiency in Python with experience in AI frameworks and security testing tools

  • Experience with offensive security tools and vulnerability discovery methodologies

  • Ability to read and analyze code across multiple languages and codebases

  • Strong analytical and problem-solving skills with creative thinking about attack scenarios

  • Excellent written communication skills for documenting technical findings and creating security advisories

  • Ability to translate technical findings into clear risk assessments and remediation recommendations

Nice to have Qualifications:

  • Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures

  • Published security research or conference presentations on AI security topics

  • Background in software engineering with distributed systems expertise

  • Security certifications such as OSCP, OSCE, GPEN, or similar

  • Experience with GitLab or similar DevSecOps platforms

  • Knowledge of AI agent communication protocols and multi-agent architectures

About the team

Security Researchers are a part of our Security Platforms and Architecture team, who address complex security challenges facing GitLab and its customers to enable GitLab to be the most secure software factory platform on the market. Composed of Security Architecture and Security Research, we focus on systemic product security risks and work cross-functionally to mitigate them while maintaining Engineering's development velocity.

How GitLab will support you

  • Benefits to support your health, finances, and well-being

  • All remote, asynchronous work environment

  • Flexible Paid Time Off

  • Team Member Resource Groups

  • Equity Compensation & Employee Stock Purchase Plan

  • Growth and Development Fund

  • Parental leave

  • Home office support

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

The base salary range for this role’s listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, and alignment with market data. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.

California/Colorado/Hawaii/New Jersey/New York/Washington/DC/Illinois/Minnesota pay range

$124,000—$266,400 USD

Apply for this position
Bookmark Report

About the job

Full-time
Anywhere
$124k-$266k per year
Posted 3 hours ago
researcher
security
python
software engineering
penetration testing

Apply for this position

Bookmark
Report
Enhancv advertisement

30,000+
REMOTE JOBS

Unlock access to our database and
kickstart your remote career
Join Premium

Senior AI Product Security Researcher

GitLab

An overview of this role

We are seeking a Senior AI Product Security Researcher to join our Security Platforms & Architecture Team to conduct cutting-edge security research on GitLab's AI-powered DevSecOps capabilities. As GitLab transforms software development through intelligent collaboration between developers and specialized AI agents, we need security researchers who can proactively identify and validate vulnerabilities before they impact our platform or customers.

In this role, you'll be at the forefront of AI security research, working with GitLab Duo Agent Platform, GitLab Duo Chat, and AI workflows that represent the future of human/AI collaborative development. You'll develop novel testing methodologies for AI agent security, conduct hands-on penetration testing of multi-agent orchestration systems, and translate emerging AI threats into actionable security improvements. Your research will directly influence how we build and secure the next generation of AI-powered DevSecOps tools, ensuring GitLab remains the most secure software factory platform on the market.

This position offers the unique opportunity to shape AI security practices in one of the world's largest DevSecOps platforms, working with engineering teams who are pushing the boundaries of what's possible with AI-assisted software development. You'll have access to cutting-edge AI systems and the freedom to explore creative attack scenarios while contributing to the security of millions of developers worldwide.

What You'll Do

  • Identify and validate security vulnerabilities in GitLab's AI systems through hands-on testing, developing proof-of-concept exploits that demonstrate real-world attack scenarios

  • Execute comprehensive penetration testing targeting AI agent platforms, including prompt injection, jailbreaking, and workflow manipulation techniques

  • Research emerging AI security threats and attack techniques to assess their potential impact on GitLab's AI-powered platform

  • Design and implement testing methodologies and tools for evaluating AI agent security and multi-agent system exploitation

  • Create detailed technical reports and advisories that translate complex findings into actionable remediation strategies

  • Collaborate with AI engineering teams to validate security fixes through iterative testing and verification

  • Contribute to the development of AI security testing frameworks and automated validation tools

  • Partner with Security Architecture to inform architectural improvements based on research findings

  • Share knowledge and mentor team members on AI security testing techniques and vulnerability discovery

What You'll Bring

  • 5+ years of experience in security research, penetration testing, or offensive security roles, with demonstrated expertise in AI/ML security

  • Hands-on experience discovering and exploiting vulnerabilities in AI systems and platforms

  • Strong understanding of AI attack vectors including prompt injection, agent manipulation, and workflow exploitation

  • Proficiency in Python with experience in AI frameworks and security testing tools

  • Experience with offensive security tools and vulnerability discovery methodologies

  • Ability to read and analyze code across multiple languages and codebases

  • Strong analytical and problem-solving skills with creative thinking about attack scenarios

  • Excellent written communication skills for documenting technical findings and creating security advisories

  • Ability to translate technical findings into clear risk assessments and remediation recommendations

Nice to have Qualifications:

  • Direct experience testing AI agent platforms, conversational AI systems, or AI orchestration architectures

  • Published security research or conference presentations on AI security topics

  • Background in software engineering with distributed systems expertise

  • Security certifications such as OSCP, OSCE, GPEN, or similar

  • Experience with GitLab or similar DevSecOps platforms

  • Knowledge of AI agent communication protocols and multi-agent architectures

About the team

Security Researchers are a part of our Security Platforms and Architecture team, who address complex security challenges facing GitLab and its customers to enable GitLab to be the most secure software factory platform on the market. Composed of Security Architecture and Security Research, we focus on systemic product security risks and work cross-functionally to mitigate them while maintaining Engineering's development velocity.

How GitLab will support you

  • Benefits to support your health, finances, and well-being

  • All remote, asynchronous work environment

  • Flexible Paid Time Off

  • Team Member Resource Groups

  • Equity Compensation & Employee Stock Purchase Plan

  • Growth and Development Fund

  • Parental leave

  • Home office support

Please note that we welcome interest from candidates with varying levels of experience; many successful candidates do not meet every single requirement. Additionally, studies have shown that people from underrepresented groups are less likely to apply to a job unless they meet every single qualification. If you're excited about this role, please apply and allow our recruiters to assess your application.

The base salary range for this role’s listed level is currently for residents of listed locations only. Grade level and salary ranges are determined through interviews and a review of education, experience, knowledge, skills, abilities of the applicant, equity with other team members, and alignment with market data. See more information on our benefits and equity. Sales roles are also eligible for incentive pay targeted at up to 100% of the offered base salary.

California/Colorado/Hawaii/New Jersey/New York/Washington/DC/Illinois/Minnesota pay range

$124,000—$266,400 USD

Working Nomads

Post Jobs
Premium Subscription
Sponsorship
Free Job Alerts

Job Skills
API
FAQ
Privacy policy
Terms and conditions
Contact us
About us

Jobs by Category

Remote Administration jobs
Remote Consulting jobs
Remote Customer Success jobs
Remote Development jobs
Remote Design jobs
Remote Education jobs
Remote Finance jobs
Remote Legal jobs
Remote Healthcare jobs
Remote Human Resources jobs
Remote Management jobs
Remote Marketing jobs
Remote Sales jobs
Remote System Administration jobs
Remote Writing jobs

Jobs by Position Type

Remote Full-time jobs
Remote Part-time jobs
Remote Contract jobs

Jobs by Region

Remote jobs Anywhere
Remote jobs North America
Remote jobs Latin America
Remote jobs Europe
Remote jobs Middle East
Remote jobs Africa
Remote jobs APAC

Jobs by Skill

Remote Accounting jobs
Remote Assistant jobs
Remote Copywriting jobs
Remote Cyber Security jobs
Remote Data Analyst jobs
Remote Data Entry jobs
Remote English jobs
Remote Spanish jobs
Remote Project Management jobs
Remote QA jobs
Remote SEO jobs

Jobs by Country

Remote jobs Australia
Remote jobs Argentina
Remote jobs Brazil
Remote jobs Canada
Remote jobs Colombia
Remote jobs France
Remote jobs Germany
Remote jobs Ireland
Remote jobs India
Remote jobs Japan
Remote jobs Mexico
Remote jobs Netherlands
Remote jobs New Zealand
Remote jobs Philippines
Remote jobs Poland
Remote jobs Portugal
Remote jobs Singapore
Remote jobs Spain
Remote jobs UK
Remote jobs USA


Working Nomads curates remote digital jobs from around the web.

© 2025 Working Nomads.