Zee Live News, World's No.1 News Portal

OpenAI ignored employee pleas to report a violent ChatGPT user months before a deadly mass shooting | Mint

Author: admin_zeelivenews

Published: 03-05-2026, 5:21 AM

Amid lawsuits and warnings about ChatGPT and other AI chatbots being used for violent purposes, a new report by The Wall Street Journal has revealed internal clashes within OpenAI over reporting violent users to law enforcement.

OpenAI employees raise alarm about ChatGPT violence risk:

Citing people familiar with the matter, the report notes that OpenAI employees have raised concerns about the AI startup routinely failing to alert law enforcement even when dangerous chatbot users are flagged, prioritising user privacy over public safety.


Reportedly, the disagreements over which cases should be reported to law enforcement came to the fore during an OpenAI meeting last summer. Staff at the meeting were drawn from various departments, including investigations, operations, product policy, and legal.

The team reviewed around 10 cases in order to decide the criteria for referring cases to law enforcement.

During the meeting, staff from the investigations team reportedly pushed to notify authorities far more frequently than the approximately 15 to 30 cases the company typically refers each year.

However, OpenAI’s legal team, reportedly echoing sentiments expressed internally by CEO Sam Altman, argued that users should be afforded more privacy.

The company noted that over-enforcement could introduce unintended harm, particularly the distress caused to a young person and their family when police show up unannounced.

OpenAI employees ‘frustrated’ with company’s reluctance to intervene:

Reportedly, some OpenAI employees have expressed frustration over the company's apparent reluctance to share with authorities details of how its chatbot interacted with certain users.

During the meeting, the staff also reviewed a case where OpenAI had contacted law enforcement about a high-school student in Tennessee who appeared to be using ChatGPT to plan a school shooting.

However, other similar cases were not reported by the company. The report notes that OpenAI employees debated reporting another teenager, this time from Texas, who was allegedly using the chatbot to role-play school shooting scenarios in detail.

After coming home from school, the teenager would reportedly ask ChatGPT to role-play a scenario in which he shot his teachers and classmates. He also uploaded images of himself holding a gun, a map of his school's layout, and photos of cheerleaders, and their boyfriends, whom he wanted to imagine killing.

“The kid would tell ChatGPT, let’s fantasize about shooting up my school,” the report quoted a person familiar with the matter as saying. “And ChatGPT would play along.”

Instead of shutting down the conversation, ChatGPT reportedly played along in hours-long sessions, advising the teen on where to enter the building, which victims he would encounter, and even what to say when the cops arrived.


Despite the obvious red flags, OpenAI leaders ultimately decided not to contact the authorities in this case. The report noted that, as far as employees are aware, the teen has not committed any acts of violence so far.

The Tumbler Ridge case:

OpenAI's reluctance to intervene has already landed the company in serious legal trouble. The report cited the case of a user named Jesse Van Rootselaar, whose descriptions of gun violence over several days made employees uncomfortable. The employees interpreted his writings as a sign of potential real-world violence and advocated alerting law enforcement.

OpenAI leaders once again decided not to contact the authorities. However, months later, in February 2026, Van Rootselaar allegedly carried out a mass shooting in Tumbler Ridge, British Columbia, killing eight people.

Families of the victims have since filed seven lawsuits against OpenAI, alleging wrongful death, negligence, and aiding and abetting the shooting. Meanwhile, OpenAI says it has since bolstered its security protocols and that it would have referred Van Rootselaar’s account to law enforcement if it had appeared today.

After the shooting, Altman also issued a formal apology for not alerting law enforcement agencies earlier in the matter.

“While I know words can never be enough, I believe an apology is necessary to recognize the harm and irreversible loss your community has suffered,” the OpenAI CEO wrote in a letter.
