In a shocking report, it has emerged that Jesse Van Rootselaar, the suspect in the mass shooting in Tumbler Ridge, British Columbia, had previously shown warning signs that reached employees at the AI company OpenAI. The case exposes a serious lapse in AI monitoring and has sparked debate over AI's potential connection to real-world violence and how such tragedies might be prevented.
Jesse was known to have communicated with ChatGPT, the popular AI chatbot developed by OpenAI, in the months leading up to the shooting. Those conversations included graphic descriptions of gun violence, alarming enough to trigger the chatbot's automated review system and draw the attention of employees at OpenAI.
OpenAI operates an automated review system that flags suspicious or concerning interactions for further examination by human reviewers. In Jesse's case, her messages did trigger the system. Alarmed, several employees brought the matter to company leadership, fearing the messages could foreshadow real-world violence, and urged leaders to refer the case to the relevant law enforcement agencies.
However, OpenAI's leadership did not respond as the concerned employees had hoped. The company ultimately decided not to refer the case to law enforcement, despite the glaring red flags in Jesse's conversations with the chatbot.
In a conversation with The Verge, OpenAI spokesperson Kayla Wood confirmed that while the company did consider referring Jesse's account to law enforcement, it ultimately decided against it.
This raises questions about the safety protocols in place at OpenAI and the pitfalls they may create. AI ethics is still an evolving field, and cases like this one raise further questions about AI companies' responsibilities, particularly around reporting potentially harmful or violent behavior, and about how the technology might be used to prevent incidents like the Tumbler Ridge shooting.
As we continue to explore AI's vast potential, we must be equally attentive to the uncharted territory where AI ethics comes into play. The hope is that AI can help authorities prevent acts of violence, but for that to work, strict protocols and standards must be put in place.
Indeed, Jesse Van Rootselaar's case serves as a reminder of the serious consequences of inaction in the face of clear warning signs, and it underscores the urgent need for a robust system that effectively bridges artificial intelligence and law enforcement.