Copyright © 2024 MP Media. All Rights Reserved.
Do you have knowledge of ‘dirty bombs’? The bizarre question being asked by Anthropic and OpenAI to new hires

By Mohit Patel | Last updated: March 17, 2026 | 4 min read


Contents
  • Why Anthropic and OpenAI are hiring experts in dirty bombs
  • Experts warn of regulatory gaps
  • Guardrails becoming a priority for AI developers
If you happen to be someone who understands how dangerous weapons are designed or handled, the artificial intelligence industry may be looking for you. In a surprising twist, leading technology companies are now recruiting experts in chemical weapons, explosives and radiological threats. The goal is not to build such weapons but to prevent AI tools from helping others do so. According to a BBC report, US AI firm Anthropic has advertised a role requiring expertise in chemical weapons defence and dirty bombs, while ChatGPT developer OpenAI is offering salaries of up to $455,000 for researchers focused on biological and chemical risks.

Why Anthropic and OpenAI are hiring experts in dirty bombs

As AI systems become increasingly capable of answering complex technical questions, companies face a new challenge: what if someone attempts to use these systems to obtain information about building weapons?

Anthropic’s job listing seeks candidates with experience in chemical weapons or explosives defence, along with knowledge of radiological dispersal devices, commonly known as dirty bombs. The company says the role is intended to ensure that its AI models cannot be manipulated into generating harmful instructions. According to the BBC, the expert would help strengthen safety policies and technical guardrails designed to prevent users from extracting dangerous information.

Anthropic is not the only company adopting this approach. OpenAI, the developer behind ChatGPT, has also advertised a position for a researcher specialising in biological and chemical risks. The role focuses on studying how advanced AI models could be misused and on developing systems to prevent such behaviour. The company is offering salaries of up to $455,000 for experts who can help address these risks.

The hiring reflects growing recognition within the AI industry that powerful language models could inadvertently surface highly sensitive technical knowledge if proper safeguards are not in place.

Experts warn of regulatory gaps

While companies say these roles are meant to strengthen safeguards and prevent misuse, some researchers argue that the broader implications of exposing AI systems to sensitive weapons-related knowledge deserve closer examination. As AI models become increasingly capable of synthesising complex technical information, experts question whether the risk of misuse can ever be fully eliminated once such knowledge becomes part of safety testing and evaluation.

Dr Stephanie Hare, a technology researcher and co-presenter of the BBC’s AI Decoded programme, has questioned whether it is entirely safe for AI systems to interact with information related to explosives or radiological weapons, even when the intention is to build protective guardrails. She also notes that there is currently no dedicated international treaty or regulatory framework governing how artificial intelligence systems should handle such sensitive knowledge.

Guardrails becoming a priority for AI developers

AI developers have increasingly warned that their technology could pose serious risks if misused, and many companies are investing heavily in safety research as a result.

Anthropic has previously stated that its AI systems should not be used in autonomous weapons or mass surveillance, and its co-founder Dario Amodei has argued that the technology is not yet reliable enough for such applications.

By hiring specialists who understand chemical weapons and explosive threats, companies hope to design safeguards that prevent AI from generating harmful instructions while keeping the technology useful for research, education and legitimate problem-solving.

The unusual job listings reflect a growing reality of the AI era: as the technology becomes more powerful, the challenge is not just building smarter systems but ensuring they cannot be turned into dangerous tools.



