OpenAI, Anthropic brief US House Committee over new AI models

Anadolu Agency TECH
Published April 29, 2026

Tech companies OpenAI and Anthropic have briefed US House Homeland Security Committee personnel on their new artificial intelligence (AI) models as well as their implications for cybersecurity, Axios reported on Tuesday.

This marked one of the first briefings that lawmakers have had with the AI giants about the cyber threats posed by their new models, the report said.

Anthropic has held off on the public release of its Mythos Preview model, fearing the damage it could cause by exploiting security flaws, while OpenAI has decided on a tier-based approach for releasing its GPT-5.4-Cyber model.

The report said that OpenAI and Anthropic briefed congressional staff in separate classified sessions last Thursday.

A committee aide described the sessions as "proactive engagement with these companies on recent frontier model developments," particularly regarding their impact on critical infrastructure cybersecurity.

The aide added that the discussions also referenced a recent White House memo that alleged that China is running "industrial-scale" efforts to distill and copy American AI models.

House Homeland Security Chair Andrew Garbarino has been holding private roundtables with tech and AI leaders, the report mentioned, adding that the committee has also held multiple hearings on generative AI's national security risks, including state-backed cyberattacks.

"Productive partnerships between industry and government are essential to help us stay ahead of the evolving threat landscape, ensure the government is prepared to securely harness AI for its defensive capabilities, and support and protect American AI development as adversaries like China seek to gain an advantage by any means," Garbarino told Axios.

Committee members said another briefing last week, on jailbroken AI models (systems altered to bypass safety safeguards), heightened urgency around regulation.

Demonstrations showed members how such tools could enable scenarios like school shootings or bombings.

"What I just saw in there, with just a short amount of time typing in questions, is very scary. These models are very powerful," Rep. August Pfluger said afterward.

"We see how powerful it is, and it should be used for good, but guardrails need to be attached... Congress and the executive branch need to work with our industry partners to make sure that we keep kids safe."