Working Nomads
AI Safety Investigator

Future of Life Institute

Full-time
Anywhere
$90k-$150k per year
investigator
risk management
communication
crisis
Apply for this position

The Future of Life Institute (FLI) is hiring an AI Safety Investigator to bring the spirit of investigative journalism to FLI. The investigator will document safety practices at the industry's most powerful corporations, explain incidents to the general public and help FLI incentivise a race to the top on safety. In this high-impact role, you'll prepare and build out our semiannual AI Safety Index and conduct investigative deep-dives into corporate AI safety practices. The AI Safety Investigator will report to FLI's Head of EU Policy and Research and work closely with our President Max Tegmark.

FLI works to reduce global catastrophic risks from transformative technologies and develop optimistic yet realistic visions of the future.

As AI Safety Investigator, you will:

  Investigate corporate practices

- Build a network of key current and former employees at the largest corporations to understand current policies and approaches.

- Conduct desk research and survey major AI corporations.

- As AI incidents occur, quickly prepare a summary of available public and private information on what went wrong.

  Lead the development of the AI Safety Index

- Take ownership of one of FLI's flagship projects, conducting research against indicators of safety to score and rank AI corporations.

- Find ways to make the Index more robust, relevant, accessible, and accurate.

  Communicate insights to the public

- Compress complex information into concise, structured formats suitable for index metrics.

- Help create attention-grabbing yet informative data visualisations and written media that effectively communicate AI incidents.

- Work with internal and external communication partners to find ways to amplify the key findings to larger or new audiences. 

Required qualifications:

  • Willingness to travel to the San Francisco Bay Area, California, on occasion to follow leads and build relationships.

  • Fluency in English, both written and oral.

Required skills and qualities:

  • Self-directed, with a desire to work independently under minimal supervision.

  • An instinct for identifying and following leads, and good judgement to understand the appropriate next steps to take.

  • High attention to detail and commitment to verifying the truth.

  • Ability to communicate technical concepts in clear and engaging ways.

  • Capacity to rapidly assess AI safety incidents and develop infographics or media briefings.

Preferred attributes:

  • A background in journalism, research or conducting investigations.

  • Strong understanding of AI capabilities, technical safety research, and current risk management approaches (e.g. safety frameworks).

  • Existing network within the AI safety or AI development space. 

$90,000 - $150,000 a year

Exact compensation will vary depending on experience and geography.

Additional benefits include health insurance, 24+ days of PTO per year, paid parental leave, 401(k) matching in the US, and a work-from-home allowance for the purchase of office supplies or equipment.

Application Process

Application Deadline: Thursday 4th September 2025. Applications may be considered on a rolling basis after the deadline if the position has not been filled.

Start Date: We'd like the chosen candidate to start as soon as possible after accepting an offer.

Application Process: Apply by uploading your resume, alongside short answers to the following questions:

- In 250 words or less, please outline your personal view on current AI safety practices at the major AI corporations. You can focus on one corporation if you like, or give your view on the space more generally.

- In 250 words or less, please outline the challenges you anticipate in investigating AI corporations and how you would seek to overcome them.

Please apply via our website. Email applications are not accepted.

FLI aims to be an inclusive organization. We proactively seek job applications from candidates with diverse backgrounds. If you are passionate about FLI’s mission and think you have what it takes to be successful in this role even though you may not check all the boxes, please still apply. We would appreciate the opportunity to consider your application.

Questions may be directed to jobsadmin@futureoflife.org.

About the Future of Life Institute

Founded in 2014, FLI is an independent non-profit working to steer transformative technology towards benefitting life and away from extreme large-scale risks. Our work includes grantmaking, educational outreach, and policy engagement.

Our work has been featured in The Washington Post, Politico, Vox, Forbes, The Guardian, the BBC, and Wired.

Some of our achievements include:

- Pause Giant AI Experiments, an open letter calling for a 6-month pause on the training of AI systems more powerful than GPT-4. The letter has been signed by more than 30,000 people, including Yoshua Bengio, Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, and Andrew Yang.

- The Asilomar AI Principles, one of the earliest and most influential sets of AI governance principles.

- Slaughterbots, a viral video campaign raising awareness about the dangers of lethal autonomous weapons.

- The Future of Life Award, which retrospectively awards unsung heroes who made the world a better place. Past winners include individuals who prevented nuclear wars, helped to eradicate smallpox, and solved the ozone crisis.

- Worldbuild.ai, which imagines flourishing futures with strong AI and works out how to get there.

FLI is a largely virtual organization, with a team of more than 25 people distributed internationally, mostly in Europe and the US. We have four offices: Campbell in California, Brussels in Belgium, London in the UK, and Washington, DC. We meet in person as a full team twice a year.
