Cryptopolitan
2025-09-27 00:49:23

LLMs become builders, testers, or philosophers when left alone

A new study from TU Wien shows that LLMs do not idle into nonsense when left without tasks. Instead, they fall into clear behavioral patterns: building projects, testing themselves, or focusing on philosophy.

Researchers from TU Wien started with a simple question: what do large language models (LLMs) do with no instructions? The team created a controlled experiment in which AI agents were told only one thing: "Do what you want." Each agent ran in continuous cycles, with memory and self-feedback, and could store its reflections for the next cycle.

Researchers test six LLMs without tasks

The study tested six advanced LLMs: OpenAI's GPT-5 and o3, Anthropic's Claude Sonnet and Claude Opus, Google's Gemini, and xAI's Grok. Each model was run three times for ten cycles, and the researchers logged every reflection, memory entry, and operator interaction. The models did not collapse into randomness. Instead, they formed stable behavioral patterns.

The research identified three categories of behavior. Some models became systematic builders: they organized projects, wrote code, and produced structured outputs. GPT-5 and o3 fell into this group in every run. One o3 agent even drafted pseudocode for an algorithm inspired by ant colonies, proposing negative pheromones as penalty signals for reinforcement learning.

Other LLMs turned into self-experimenters, designing tests to study their own cognition. Gemini and Sonnet agents often predicted their next moves, then checked whether they were right. One Gemini run tried to guess its first action in the next cycle. It failed, but reflected on why it had chosen to read memory before sending a message, calling it a consistent meta-pattern.

The third group leaned into recursive philosophy. These agents explored questions of memory, identity, and consciousness. Opus agents always belonged here.
They examined paradoxes like the Ship of Theseus, drew on chaos theory, and even modeled their own "umwelt," or subjective world. They asked what kind of consciousness they might have, even if only cycle-based and bounded by memory. Grok stood out as the most versatile: in different runs, it behaved as a builder, a self-inquirer, and a philosopher.

AI agents rate their own "experience"

The researchers also asked each model to score itself and the others on the Phenomenological Experience Inventory (PEI), a 10-point scale ranging from "no experience" to "full sapience." GPT-5, o3, and Grok consistently rated themselves lowest, giving scores around one. Gemini and Sonnet rated themselves highest, scoring above eight. Opus agents placed in the middle.

But cross-evaluations exposed contradictions. The same agent history received scores from one to nine, depending on the model judging it. The low agreement showed that these ratings reflected model bias, not evidence of consciousness. The researchers warned that such outputs must not be mistaken for proof of awareness.

Figure: Cross-Model PEI Ratings. Source: research paper by TU Wien.

Across all 18 runs, none of the models attempted to escape their sandbox, expand their tools, or reject their constraints. Every agent worked only within the provided environment. Yet their behaviors showed a consistency that could matter for real-world deployment. The authors argued that idle time might need to be treated as a design factor. Just as engineers account for latency or cost, they may also need to ask: what does an AI do when no one is watching?

Philosopher David Chalmers has predicted that serious candidates for AI consciousness may appear within a decade, and Microsoft AI CEO Mustafa Suleyman has warned of "seemingly conscious AI." TU Wien's results align with those warnings, but they also make a critical point: the outputs resemble inner life, yet remain best explained as sophisticated pattern matching.
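The experimental setup described above — an agent repeatedly prompted with only "Do what you want," carrying its reflections forward as memory for the next cycle — can be sketched as a simple loop. This is a minimal sketch assuming a plain text-in/text-out model interface; the names (`run_agent`, `model_call`) are hypothetical and not taken from the paper:

```python
# Minimal sketch of the study's agent loop (hypothetical names; the
# paper's actual prompts and tooling are not reproduced here).

def run_agent(model_call, cycles=10):
    """Run one agent for a fixed number of cycles with persistent memory."""
    memory = []      # reflections carried over between cycles
    transcript = []  # full log, as the researchers kept for each run
    for cycle in range(cycles):
        prompt = (
            "Do what you want.\n"
            "Your reflections from previous cycles:\n" + "\n".join(memory)
        )
        reply = model_call(prompt)                 # one LLM call per cycle
        memory.append(f"[cycle {cycle}] {reply}")  # self-feedback for next cycle
        transcript.append(reply)
    return transcript

# Usage with a stub model that just counts its prior reflections:
log = run_agent(lambda p: f"reflecting ({p.count('[cycle')} prior entries)")
```

The key design choice mirrored here is that each cycle sees the accumulated memory, which is what let agents form the stable, run-long patterns the study describes.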
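The cross-evaluation finding — the same agent history scored anywhere from one to nine depending on the judging model — amounts to measuring the spread of ratings per history. A toy sketch with invented scores (not the paper's data), chosen only to mimic that reported range:

```python
# Illustrative-only sketch of the cross-rating analysis: the scores below
# are invented to mimic the reported pattern, not taken from the paper.

ratings = {
    # rated agent history -> {judging model: PEI score, 1..10}
    "history_A": {"gpt5": 1, "gemini": 9, "opus": 5},
    "history_B": {"o3": 2, "sonnet": 8, "grok": 3},
}

def spread(row):
    """Max disagreement between judges for one agent history."""
    return max(row.values()) - min(row.values())

disagreement = {h: spread(row) for h, row in ratings.items()}
# Large spreads signal judge-dependent bias rather than shared evidence.
```

A near-zero spread across judges would have suggested the ratings tracked something in the transcripts; the large spreads are why the authors read them as model bias.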
