Working Nomads

Abuse Investigator - Child Safety

OpenAI

Full-time
USA
$158k-$425k per year
Tags: investigator, sql, security, analyst, reporting

About the Team

OpenAI’s mission is to ensure that general-purpose artificial intelligence benefits all of humanity. We believe achieving this goal requires real-world deployment and continuous iteration based on how our products are used—and misused—in practice.

The Intelligence and Investigations team supports this mission by identifying, analyzing, and investigating misuse of our products, particularly novel or emerging abuse patterns. Our work enables partner teams to develop data-backed product policies and build scalable safety mitigations. By precisely understanding abuse, we help ensure OpenAI’s products can be used safely to build meaningful, legitimate applications.

About the Role

As a Child Safety Investigator on the Intelligence & Investigations team, you will identify and disrupt actors attempting to use OpenAI’s products to sexually exploit minors both online and in the real world. OpenAI maintains strict prohibitions in this area and reports apparent CSAM and other credible child sexual exploitation threats to the National Center for Missing and Exploited Children (NCMEC), consistent with applicable law and our policies.

This role requires domain-specific expertise, technical fluency, and the ability to operate in ambiguous, high-impact situations. You will conduct in-depth investigations into user behavior, analyze product data, identify emerging threat patterns, and support enforcement actions — including escalations requiring legal review and external reporting.

You will also help develop detection strategies that proactively surface high-risk behavior, especially cases that evade existing safeguards. This role includes responding to time-sensitive escalations. Investigations may involve exposure to sensitive and disturbing material, including sexual or violent content.

In this role, you will:

  • Investigate high-severity child safety violations and disrupt malicious actors in partnership with Policy, Legal, Integrity, Global Affairs, Security, and Engineering teams, including through cross-platform and cross-internet research

  • Support investigations across other high-risk harm areas where child safety concerns intersect

  • Conduct open-source and cross-platform research to contextualize actors and abuse networks

  • Develop detection signals, behavioral heuristics, and tracking strategies to proactively identify high-risk users using tools such as SQL, Databricks, and Python

  • Communicate investigation findings clearly and effectively to internal stakeholders through written briefs, data-backed recommendations, and escalation summaries

  • Develop a deep, working understanding of OpenAI’s products, internal data systems, and enforcement mechanisms

  • Collaborate with engineering and data partners to improve investigative tooling, data quality, and analyst workflows

  • Support time-sensitive escalations and high-priority investigations requiring rapid analysis and sound judgment

  • Represent investigative findings and work externally with the press, governments, NGOs, and law enforcement agencies

  • Participate in a rotating on-call schedule to support timely response to high-priority safety incidents and sensitive investigations
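To make the detection-signals bullet concrete, here is a minimal, hypothetical sketch of the kind of behavioral heuristic an investigator might prototype in SQL and Python. The table name, columns, synthetic data, and threshold are all invented for illustration; real schemas, classifiers, and thresholds would differ.

```python
import sqlite3

# Hypothetical schema: one row per request, with a flag from an upstream
# safety classifier. Names and data are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE request_flags (
        user_id TEXT,
        day     TEXT,
        flagged INTEGER   -- 1 if the request tripped a safety classifier
    )
""")

# Synthetic activity: user "u1" spikes on one day, user "u2" stays quiet.
rows = ([("u1", "2024-01-01", 0)] * 20
        + [("u1", "2024-01-02", 1)] * 15
        + [("u2", "2024-01-01", 1)] * 2)
conn.executemany("INSERT INTO request_flags VALUES (?, ?, ?)", rows)

# Simple heuristic: surface any user with >= 10 flagged requests in a
# single day for manual investigation.
THRESHOLD = 10
high_risk = conn.execute("""
    SELECT user_id, day, SUM(flagged) AS n_flagged
    FROM request_flags
    GROUP BY user_id, day
    HAVING n_flagged >= ?
    ORDER BY n_flagged DESC
""", (THRESHOLD,)).fetchall()

print(high_risk)  # [('u1', '2024-01-02', 15)]
```

In practice a heuristic like this would be one signal among many, tuned against false-positive rates and combined with the open-source and cross-platform research described above.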

You might thrive in this role if you:

  • Have deep expertise in online child safety and child exploitation threats

  • Have familiarity or proficiency with technical investigations, especially using SQL, Python, notebooks, and scripts in a government, law-enforcement, and/or tech-company setting

  • Speak one or more languages in addition to English

  • Have 5+ years of experience tracking threat actors in abuse domains

  • Have worked on time-sensitive escalations involving high-risk harm

  • Have presented analytic findings to senior stakeholders or external partners

  • Have experience scaling and automating processes, especially with language models

About OpenAI

OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity. 

We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.

For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.

Background checks for applicants will be administered in accordance with applicable law, and qualified applicants with arrest or conviction records will be considered for employment consistent with those laws, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act, for US-based candidates. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.

To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.

We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.

OpenAI Global Applicant Privacy Policy

At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.

Apply for this position

About the job

Senior Level
Posted 17 hours ago




Working Nomads curates remote digital jobs from around the web.

© 2026 Working Nomads.