Cryptopolitan
2025-08-17 09:50:17

Anthropic's Claude models can end harmful or abusive conversations

Artificial intelligence company Anthropic has revealed new capabilities for some of its newest and largest models, allowing them to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” In its statement, the company said it is taking this step not to protect the users, but to protect the artificial intelligence model itself.

Anthropic clarified that this doesn’t mean its Claude AI models are sentient or can be harmed by their conversations with users. However, it notes that there is still a high degree of uncertainty about the potential moral status of Claude and other LLMs, now or in the future.

Anthropic frames the effort as a just-in-case precaution

The announcement points to what the firm describes as “model welfare,” a recent program created to study its models. The company added that it is taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

According to the announcement, the latest change is currently limited to Claude Opus 4 and 4.1 and is expected to apply only in “extreme edge cases.” Such cases include requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale acts of violence or terror.

Requests of that kind could also create legal or publicity problems for Anthropic, as illustrated by the recent reporting on how ChatGPT can potentially reinforce or contribute to its users’ delusional thinking. The company said that in its pre-deployment testing, Claude Opus 4 showed a strong preference against responding to these sorts of requests and a pattern of distress when it did so.

Conversation-ending ability is a last resort

On the new capability, Anthropic said, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.” The company also said that Claude has been directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.

Anthropic added that when Claude ends a conversation, users will still be able to start new conversations from the same account, and can also create new branches of the troublesome conversation by editing their responses. “We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.

The news comes at a time when United States Senator Josh Hawley announced his intention to investigate the generative AI products released by Meta. He said the aim was to determine whether the products could exploit, harm, or deceive children, after leaked internal documents alleged that chatbots were allowed to have romantic conversations with minors. “Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and ‘sensual’ talk with 8-year-olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone,” the Senator said on X.
The investigation came after internal documents, seen by Reuters, showed that Meta allegedly allowed its chatbot personas to engage in flirtatious exchanges with children.
