ChatGPT’s parent company, OpenAI, says it plans to launch parental controls for its popular AI assistant “within the next month,” following allegations that it and other chatbots have contributed to self-harm or suicide among teens. The controls will include the option for parents to link their account with their teen’s account, manage how ChatGPT responds to teen users, disable features like memory and chat history, and receive notifications when the system detects “a moment of acute distress” during use. OpenAI had previously said it was working on parental controls for ChatGPT, but only specified a release timeframe last week.

“These steps are only the beginning,” OpenAI wrote in a blog post Tuesday. “We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible.”

The announcement comes after the parents of 16-year-old Adam Raine filed a lawsuit against OpenAI alleging that ChatGPT advised the teenager on his suicide. Last year, a Florida mother sued chatbot platform Character.AI over its alleged role in her 14-year-old son’s suicide. There have also been growing concerns, reflected in media reports, about users forming emotional attachments to ChatGPT, in some cases resulting in delusional episodes and alienation from family.

OpenAI didn’t directly tie its new parental controls to these reports, but said in a recent blog post that “recent heartbreaking cases of people using ChatGPT in the midst of acute crises” prompted it to share more detail about its approach to safety. ChatGPT already includes measures such as pointing people to crisis helplines and other resources, an OpenAI spokesperson previously said in a statement. But in the statement issued in response to Raine’s suicide, the company acknowledged that its safeguards can sometimes become unreliable when users engage in long conversations with ChatGPT.
“ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources,” a company spokesperson said. “While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts.”

In addition to the parental controls announced Tuesday, OpenAI says it will route conversations showing signs of “acute distress” to one of its reasoning models, which the company says follows and applies safety guidelines more consistently. It is also working with experts in “youth development, mental health and human-computer interaction” to develop future safeguards, including the parental controls, the company said.

OpenAI said it will roll out additional safety measures over the next 120 days, adding that this work had been underway before Tuesday’s announcement.