Tumbler Ridge school shooting suspect shared violent scenarios with ChatGPT.

In a troubling revelation, it has emerged that Jesse Van Rootselaar, the suspect in the mass shooting in Tumbler Ridge, British Columbia, had earlier raised warning signs among employees of OpenAI, the artificial intelligence company. The incident exposes serious gaps in AI monitoring and has sparked debate over AI's potential connection to real-world violence, and how the technology might be used to prevent such tragedies.

Jesse was known to have communicated with ChatGPT, the popular AI chatbot developed by OpenAI, in the months before the shooting. Those conversations included graphic descriptions of gun violence, which triggered the chatbot's automated review system and prompted some OpenAI employees to raise concerns.

OpenAI operates an automated review system that flags suspicious or concerning interactions for further examination by human reviewers. In Jesse's case, it came to light that her posts had indeed triggered the system. Alerted by this, some employees approached company leadership with their concerns, fearing her posts could be a harbinger of impending real-world violence. Recognizing the potential danger, these employees urged leadership to refer the matter to the relevant law enforcement agencies.

However, the response from OpenAI leadership was not what the concerned employees had expected. The company ultimately decided not to refer the case to law enforcement, despite the glaring red flags in Jesse's communications with the chatbot.

In a conversation with The Verge, Kayla Wood, a spokesperson from OpenAI, confirmed that while the company did consider referring Jesse’s account to law enforcement, they eventually decided against it.

This raises questions about the AI protocols in place at OpenAI and the risks they carry. AI ethics is still an evolving field, and cases such as this one raise further questions about AI developers' responsibilities, especially in reporting potentially harmful or violent behavior, and about how the technology could be used to prevent incidents like the Tumbler Ridge shooting.

As we continue to explore the vast potential AI holds, we must also be equally aware of these uncharted territories where AI ethics come into play. The hope is that AI can aid authorities in preventing acts of violence, but for this to work, it’s critical that strict protocols and standards are put in place.

Indeed, Jesse Van Rootselaar’s case serves as a reminder of the serious implications of inaction in the face of clear warning signals and underlines the urgency to establish a robust system that effectively bridges artificial intelligence with law enforcement.

For more details, read the full story here.
