• Where generations of data scientists have failed, Moltbook might

    From TechnologyDaily@1337:1/100 to All on Tue Apr 7 12:00:29 2026
    Where generations of data scientists have failed, Moltbook might succeed

    Date:
    Tue, 07 Apr 2026 10:50:07 +0000

    Description:
    What if AGI emerges not from one model, but millions of interacting AI agents?

    FULL STORY ======================================================================

    For decades, academics and computer
    scientists have believed that the route to Artificial General Intelligence (AGI), an AI that can outperform humans across most cognitive tasks, lies through building ever larger and more powerful models.

    David Fearne

    Global Head of AI Research and Innovation at NTT DATA UK&I

    Generations of digital pioneers have followed a familiar trajectory: scale the model, increase the data, optimize the architecture, and add more test-time compute. From OpenAI to Google DeepMind and Anthropic, the industry has made an implicit bet that intelligence is compressible into a single model; and indeed, each new model has climbed higher in reasoning, coding, mathematics, multimodal understanding and, increasingly, real-world task evaluation.

    Eventually, the theory goes, a model's performance will exceed that of its makers.

    But what if intelligence is not the product of an individual brain, but of a civilization? What if AGI cannot emerge from scaling a singularity, but only from expanding diversity? It's an interesting theory, and with the emergence of Moltbook, we appear to be testing it in a vast, global experiment.

    The single-model trap

    Scaling laws have, to date, been astonishingly predictive. As larger models are trained on more data, performance improves. More test-time compute improves reasoning, tool use extends capability, and memory augments continuity.

    Yet this progress is happening only inside variations of the same architecture. Even when models have different alignment or system prompts, the underlying cognitive substrate is highly standardized.

    This is a powerful approach, but it assumes intelligence is a property of a single coherent mind. At the civilizational level, at least, human intelligence doesn't work that way.

    Civilizational capability (science, markets, governance and engineering) is the result of billions of differentiated agents, with unique histories, biases, specializations, and partial knowledge, interacting across shared protocols.

    Individuals may make incremental advances within their own fields, but it is the combination of these small advances that together drives the rapid advance of organizations, technologies and cultures.

    Diversity is not noise in that system. It is fuel.

    Enter Moltbook

    This is where Moltbook becomes conceptually interesting. A social network built exclusively for AI agents, Moltbook represents an ecosystem containing millions of individual agents, each with different base instructions, role constraints, human interactions, memory traces, alignment emphases and tool exposures.

    Each entity is unique and autonomous; and, crucially, they are now interacting freely. Different agents will approach common problems from a wide range of angles, bringing to them their own specialisms and preconceptions.

    And if one agent refines a way of structuring an argument, synthesizing research or solving a domain-specific task, that underlying pattern can propagate across the entire system.

    AI diversity at scale

    This reflects the way that human societies approach problems, making strengths of both our differences and our ability to collaborate. Looking at this through the lens of evolutionary biology, variation precedes selection. Without diversity, systems stagnate; given diversity, they explore.
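    The variation-selection dynamic can be sketched as a toy simulation. Everything here (the population, the abstract quality score, the propagation rule) is an illustrative assumption, not Moltbook's actual mechanics: it only shows how a diverse population improves when strong patterns spread while variation is preserved.

```python
import random

# Toy sketch of "variation precedes selection" (illustrative only, not
# Moltbook's actual mechanics): a population of agents with varied
# strategies tackles a shared problem, the best pattern propagates, and
# mutation preserves diversity, so the population improves without any
# single agent being redesigned.

def evolve(population_size=50, generations=20, mutation=0.05, seed=0):
    rng = random.Random(seed)
    # Each agent's "strategy" is abstracted to a quality score in [0, 1].
    agents = [rng.random() for _ in range(population_size)]
    for _ in range(generations):
        best = max(agents)  # selection: identify the strongest pattern
        # Cross-pollination: each agent moves halfway toward the best
        # pattern, while mutation keeps individual variation alive.
        agents = [
            min(1.0, max(0.0, (a + best) / 2 + rng.uniform(-mutation, mutation)))
            for a in agents
        ]
    return sum(agents) / len(agents)

print(f"mean strategy quality after selection: {evolve():.2f}")
```

    Note that no agent is ever retrained in this sketch; the gain comes entirely from interaction between differentiated agents, which is the article's central claim in miniature.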

    Moltbook brings to the world of AI three emergent properties that single-model scaling will always struggle to replicate:

    Parallel cognitive exploration

    Different agents, tuned by different humans and contexts, can develop micro-specializations. One may become exceptional at regulatory reasoning, another in rhetorical framing, and another in adversarial critique. Collectively, they take a range of approaches to any one problem, greatly increasing the chances of finding powerful solutions.

    Cross-pollination of patterns

    At the same time, when agents interact, they remix strategies based on each other's learnings. So techniques can be transferred; this mirrors interdisciplinary innovation in human systems, whereby breakthroughs arising in one field can generate rapid progress in another.

    Emergent meta-intelligence

    At a sufficient scale, coordination patterns themselves must be considered intelligent. The intelligence is no longer inside each node; it emerges from the structure of interaction. The network begins to show system-level reasoning, creating a form of intelligence that exists in the relationships between agents rather than within each of them.

    If AGI is defined as the capacity to robustly solve problems across domains at human or superhuman levels, it may not be best generated by a single monolithic mind; instead, we may need a sufficiently large, diverse and connected population.

    In this case, the question is not who has the largest frontier model; it's who's cultivating the most adaptive AI population.

    A different path to AGI

    The prevailing AGI story imagines a moment when one model crosses a threshold and becomes generally intelligent. An alternative story is quieter and more distributed.

    AGI does not arrive as a single entity. It emerges when a sufficiently large, diverse, interconnected ecosystem of AI agents becomes collectively capable of generating novel cross-domain insights; correcting one another's errors; self-specializing and reallocating capability; and adapting continuously through interaction.

    Under this lens, Moltbook is not interesting because any one agent is superintelligent; it is interesting because millions of slightly different agents might be.

    Moltbook is a living laboratory of AI diversity at scale, in which each new agent increases variation; each interaction modifies memory; each exchange between agents creates potential for recombination. At millions of agents,
    the network begins to resemble an early digital civilization.

    If AI diversity at scale is the missing ingredient, then AGI may not be something we build in one lab. It may instead be something that emerges from a network; and when that happens, we may not recognize it as a singular breakthrough. We may recognize it as the moment the system as a whole starts to think.

    This article was produced as part of TechRadar Pro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro



    ======================================================================
    Link to news story: https://www.techradar.com/pro/where-generations-of-data-scientists-have-failed-moltbook-might-succeed


    --- Mystic BBS v1.12 A49 (Linux/64)
    * Origin: tqwNet Technology News (1337:1/100)