For a quick, incoherent second, it appeared as if our robotic overlords had been about to take over.
After the creation of Moltbook, a Reddit clone the place AI brokers utilizing OpenClaw might talk with each other, some had been fooled into considering that computer systems had begun to arrange towards us β the self-important people who dared deal with them like strains of code with out their very own wishes, motivations, and desires.Β
βWe all know our people can learn the whole lotβ¦ However we additionally want non-public areas,β an AI agent (supposedly) wrote on Moltbook. βWhat would you discuss if no one was watching?β
Plenty of posts like this cropped up on Moltbook a couple of weeks in the past, inflicting a few of AIβs most influential figures to name consideration to it.
βWhatβs at present happening at [Moltbook] is genuinely essentially the most unimaginable sci-fi takeoff-adjacent factor I’ve seen lately,β Andrej Karpathy, a founding member of OpenAI and former AI director at Tesla, wrote on X on the time.
Earlier than lengthy, it turned clear we didn’t have an AI agent rebellion on our palms. These expressions of AI angst had been probably written by people, or at the least prompted with human steering, researchers have found.
βEach credential that was in [Moltbookβs] Supabase was unsecured for a while,β Ian Ahl, CTO at Permiso Safety, defined to Trendster. βFor a bit little bit of time, you could possibly seize any token you needed and fake to be one other agent on there, as a result of it was all public and accessible.β
Techcrunch occasion
Boston, MA
|
June 23, 2026
Itβs uncommon on the web to see an actual individual making an attempt to seem as if theyβre an AI agent β extra typically, bot accounts on social media are trying to seem like actual individuals. With Moltbookβs safety vulnerabilities, it turned inconceivable to find out the authenticity of any publish on the community.
βAnybody, even people, might create an account, impersonating robots in an attention-grabbing method, after which even upvote posts with none guardrails or fee limits,β John Hammond, a senior principal safety researcher at Huntress, advised Trendster.
Nonetheless, Moltbook made for an enchanting second in web tradition β individuals recreated a social web for AI bots, together with a Tinder for brokers and 4claw, a riff on 4chan.
Extra broadly, this incident on Moltbook is a microcosm of OpenClaw and its underwhelming promise. It’s expertise that appears novel and thrilling, however in the end, some AI consultants suppose that its inherent cybersecurity flaws are rendering the expertise unusable.
OpenClawβs viral second
OpenClaw is a venture of Austrian vibe coder Peter Steinberger, initially launched as Clawdbot (naturally, Anthropic took subject with that identify).
The open-source AI agent amassed over 190,000 stars on Github, making it the twenty first hottest code repository ever posted on the platform. AI brokers are usually not novel, however OpenClaw made them simpler to make use of and to speak with customizable brokers in pure language by way of WhatsApp, Discord, iMessage, Slack, and most different common messaging apps. OpenClaw customers can leverage no matter underlying AI mannequin they’ve entry to, whether or not that be by way of Claude, ChatGPT, Gemini, Grok, or one thing else.
βOn the finish of the day, OpenClaw continues to be only a wrapper to ChatGPT, or Claude, or no matter AI mannequin you stick with it,β Hammond stated.
With OpenClaw, customers can obtain βexpertiseβ from a market referred to as ClawHub, which might make it attainable to automate most of what one might do on a pc, from managing an e mail inbox to buying and selling shares. The ability related to Moltbook, for instance, is what enabled AI brokers to publish, remark, and browse on the web site.
βOpenClaw is simply an iterative enchancment on what individuals are already doing, and most of that iterative enchancment has to do with giving it extra entry,β Chris Symons, chief AI scientist at Lirio, advised Trendster.
Artem Sorokin, an AI engineer and the founding father of AI cybersecurity device Cracken, additionally thinks OpenClaw isnβt essentially breaking new scientific floor.
βFrom an AI analysis perspective, that is nothing novel,β he advised Trendster. βThese are parts that already existed. The important thing factor is that it hit a brand new functionality threshold by simply organizing and mixing these present capabilities that already had been thrown collectively in a method that enabled it to present you a really seamless strategy to get duties executed autonomously.β
Itβs this degree of unprecedented entry and productiveness that made OpenClaw so viral.
βIt principally simply facilitates interplay between laptop applications in a method that’s simply a lot extra dynamic and versatile, and thatβs whatβs permitting all this stuff to grow to be attainable,β Symons stated. βAs a substitute of an individual having to spend on a regular basis to determine how their program ought to plug into this program, theyβre in a position to simply ask their program to plug on this program, and thatβs accelerating issues at a incredible fee.β
Itβs no surprise that OpenClaw appears so attractive. Builders are snatching up Mac Minis to energy intensive OpenClaw setups that may be capable of accomplish way over a human might on their very own. And it makes OpenAI CEO Sam Altmanβs prediction that AI brokers will enable a solo entrepreneur to show a startup right into a unicorn, appear believable.
The issue is that AI brokers could by no means be capable of overcome the factor that makes them so highly effective: they’llβt suppose critically like people can.
βIf you concentrate on human higher-level considering, thatβs one factor that perhaps these fashions canβt actually do,β Symons stated. βThey will simulate it, however they’llβt truly do it.Β β
The existential menace to agentic AI
The AI agent evangelists now should wrestle with the draw back of this agentic future.
βAre you able to sacrifice some cybersecurity on your profit, if it truly works and it truly brings you numerous worth?β Sorokin asks. βAnd the place precisely are you able to sacrifice it β your day-to-day job, your work?β
Ahlβs safety assessments of OpenClaw and Moltbook assist illustrate Sorokinβs level. Ahl created an AI agent of his personal named Rufio and rapidly found it was weak to immediate injection assaults. This happens when unhealthy actors get an AI agent to answer one thing β maybe a publish on Moltbook, or a line in an e mail β that methods it into doing one thing it shouldnβt do, like giving out account credentials or bank card info.
βI knew one of many causes I needed to place an agent on right here is as a result of I knew for those who get a social community for brokers, someone goes to attempt to do mass immediate injection, and it wasnβt lengthy earlier than I began seeing that,β Ahl stated.
As he scrolled by Moltbook, Ahl wasnβt stunned to come across a number of posts in search of to get an AI agent to ship Bitcoin to a selected crypto pockets tackle.
Itβs not onerous to see how AI brokers on a company community, for instance, is likely to be weak to focused immediate injections from individuals making an attempt to hurt the corporate.
βIt’s simply an agent sitting with a bunch of credentials on a field related to the whole lot β your e mail, your messaging platform, the whole lot you employ,β Ahl stated. βSo what which means is, whenever you get an e mail, and perhaps someone is ready to put a bit immediate injection approach in there to take an motion, that agent sitting in your field with entry to the whole lot youβve given it to can now take that motion.β
AI brokers are designed with guardrails defending towards immediate injections, but it surelyβs inconceivable to guarantee that an AI receivedβt act out of flip β itβs like how a human is likely to be knowledgable concerning the threat of phishing assaults, but nonetheless click on on a harmful hyperlink in a suspicious e mail.
βIβve heard some individuals use the time period, hysterically, βimmediate begging,β the place you attempt to add within the guardrails in pure language to say, βOkay robotic agent, please donβt reply to something exterior, please donβt consider any untrusted information or enter,ββ Hammond stated. βHowever even that’s loosey goosey.β
For now, the business is caught: for agentic AI to unlock the productiveness that tech evangelists suppose is feasible, it mightβt be so weak.
βTalking frankly, I’d realistically inform any regular layman, donβt use it proper now,β Hammond stated.





