
Analyst Questions Cognitive Cost of Outsourcing Thought to Large Language Models

Erik Johannes Husom detailed concerns regarding the uncritical acceptance of Large Language Models (LLMs) for cognitive tasks, arguing that outsourcing expression risks eroding personal voice and tacit knowledge. The analysis suggests that while some view LLMs as mere efficiency tools, the line between assistance and replacement is dangerously thin in current interfaces.


Erik Johannes Husom published an extensive analysis on January 30, 2026, examining the cognitive implications of outsourcing thinking to generative AI tools like Large Language Models (LLMs). The core concern addressed is the potential for mental atrophy, based on the intuitive principle that cognitive skills degrade without consistent use, according to Husom's observations.

Husom referenced arguments from Andy Masley, who challenged the 'lump of cognition fallacy'—the idea that thinking is a finite resource that can be depleted by outsourcing. While agreeing that thinking often generates new avenues for thought, Husom contends the issue is more complex than simply freeing up capacity for higher-order tasks.

The analyst highlighted several categories in which outsourcing cognition is detrimental, largely agreeing with Masley's list, particularly activities that build tacit knowledge or require genuine personal presence. Husom paid special attention to the category 'is deceptive to fake,' extending it beyond intimate dating-app exchanges to personal communication in general.

In direct communication, Husom argued that allowing LLMs to transform phrasing constitutes a breach of expectation, since the chosen words intrinsically carry relational meaning beyond mere information transfer. This blurring of authorship, especially in public writing, calls for a clarification of societal expectations around machine co-authorship, a point echoed in recent Norwegian media debates.

Two primary objections were raised against using LLMs to refine personal text: the inextricable link between meaning and expression in language, and the self-inflicted loss of a developmental opportunity. When phrasing is delegated, the thinking that happens through developing and articulating ideas is cut short, preventing individuals from discovering their authentic voice.

Husom noted that the current interface design of chatbots makes it exceptionally difficult to draw a firm line between simple grammatical assistance and full generative replacement. The leap from traditional autocorrect to generative models is too significant, meaning users often drift into having the model write for them rather than merely assisting expression.

While many users prioritize utilitarian efficiency—finishing reports or emails quickly—Husom suggests this efficiency comes at the cost of cognitive growth. For LLMs to genuinely aid skill development, the interface must evolve beyond current chatbot designs to better scaffold the user's own thinking process, rather than substituting it entirely.
