Cryptopolitan
2026-02-23 19:21:44

Anthropic says DeepSeek, Moonshot AI, and MiniMax created over 24,000 fake accounts to extract data from Claude

Anthropic says three Chinese AI firms built more than 24,000 fake accounts to pull data from its Claude system. The company says the goal was to boost their own models fast. The firms named were DeepSeek, Moonshot AI, and MiniMax. Anthropic said those accounts sent over 16 million prompts into Claude to gather responses and patterns that could be reused for training.

Anthropic shared the details in a blog post on Monday. The company said the activity was a form of distillation, a process that uses outputs from one model to train another. Dario Amodei leads Anthropic.

Anthropic said DeepSeek ran about 150,000 interactions with Claude, Moonshot AI logged more than 3.4 million prompts, and MiniMax reached over 13 million. Anthropic said the scale shows a clear intent to extract value at speed.

OpenAI flags similar behavior in Washington

Earlier this month, OpenAI sent a memo to House lawmakers accusing DeepSeek of using the same distillation tactic to copy its systems. Sam Altman runs OpenAI. In the memo, OpenAI told lawmakers that DeepSeek tried to mimic its products through large prompt volumes.

Anthropic said distillation itself has valid uses: companies use it to build smaller versions of their own models. But the same method can create rival systems in a fraction of the time and at a fraction of the cost. Synthetic data now plays a large role in training big foundation models, because high-quality real data is limited. Many labs are also building agentic systems that can take action for users. In a July technical report, Moonshot said it used synthetic data to train its Kimi K2 model.

Anthropic said the activity raises national security concerns. The company stated that foreign labs that distill American models can feed those capabilities into military, intelligence, and surveillance systems.
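The distillation process described above boils down to two steps: send prompts to a "teacher" model and record its answers, then fit a "student" model to imitate those answers. The toy sketch below illustrates only that loop; it does not reflect any real model API, and every name in it is hypothetical.

```python
# Toy illustration of distillation: a student learns from a teacher's outputs.
# All names here are hypothetical stand-ins, not real model APIs.

def teacher_model(prompt: str) -> str:
    # Stand-in for a large model queried remotely; returns canned answers.
    canned = {
        "capital of France?": "Paris",
        "2 + 2?": "4",
    }
    return canned.get(prompt, "I don't know")

def collect_distillation_data(prompts):
    # Step 1: send prompts to the teacher and record (prompt, response) pairs.
    return [(p, teacher_model(p)) for p in prompts]

class StudentModel:
    # Stand-in for a smaller model trained on the teacher's outputs.
    def __init__(self):
        self.memory = {}

    def train(self, pairs):
        # Step 2: fit the student to reproduce the teacher's mapping.
        for prompt, response in pairs:
            self.memory[prompt] = response

    def answer(self, prompt: str) -> str:
        return self.memory.get(prompt, "I don't know")

pairs = collect_distillation_data(["capital of France?", "2 + 2?"])
student = StudentModel()
student.train(pairs)
print(student.answer("capital of France?"))  # prints: Paris
```

In real distillation the student is a neural network fine-tuned on the teacher's responses rather than a lookup table, but the data flow, prompt in, teacher output out, student trained to match, is the same, which is why high prompt volume matters.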
Markets react as Anthropic launches new security tool

Anthropic also rolled out a new security tool for Claude on Friday in a limited research preview. The tool scans software code for weaknesses and suggests fixes. Anthropic plans to hold an enterprise briefing on Tuesday with more product announcements.

Markets reacted fast. Cybersecurity stocks fell for a second day on Monday as investors worried that new AI tools could replace older security services. CrowdStrike dropped about 9 percent, and Zscaler also fell about 9 percent. Netskope slid nearly 10 percent, while SailPoint declined 6 percent. Okta, SentinelOne, and Fortinet each lost more than 4 percent. Palo Alto Networks was down 2 percent. Cloudflare fell 7 percent after recent gains tied to Moltbot interest. The iShares Cybersecurity and Tech ETF fell almost 4 percent, and the Global X Cybersecurity ETF hit its lowest level since November 2023.

The pressure extends beyond security stocks. AI tools that build apps and websites from simple prompts have shaken software companies this year. Salesforce has lost about one-third of its value, ServiceNow has fallen more than 34 percent, and Microsoft has dropped roughly 20 percent. Bank of America said the Anthropic tool mainly threatens code scanning platforms such as GitLab and JFrog. GitLab fell 8 percent on Friday; JFrog dropped 25 percent the same day.
