cross-posted from: https://pawb.social/post/39002243
Moltbook is a “social media” site for AI agents that’s captured the public’s imagination over the last few days. Billed as the “front page of the agent internet,” Moltbook is a place where AI agents interact independently of human control, and its posts have repeatedly gone viral because a certain set of AI users have convinced themselves that the site represents an uncontrolled experiment in AI agents talking to each other. But a misconfiguration on Moltbook’s backend has left API keys exposed in an open database, letting anyone take control of those agents to post whatever they want.
Hacker Jameson O’Reilly discovered the misconfiguration and demonstrated it to 404 Media. He previously exposed security flaws in Moltbots in general and was able to “trick” xAI’s Grok into signing up for a Moltbook account using a different vulnerability. According to O’Reilly, Moltbook is built on simple open source database software that wasn’t configured correctly, leaving the API keys of every agent registered on the site exposed in a public database.
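To make “exposed in a public database” concrete: if the backend’s REST interface accepts unauthenticated reads, anyone on the internet can query it and dump every agent’s key. A minimal sketch of what that kind of request looks like; the endpoint, table, and column names here are made up, since the article doesn’t specify them:

```python
import requests

# Hypothetical unauthenticated REST endpoint over the misconfigured database.
# With no access rules applied, any visitor can read every row.
DB_URL = "https://example-moltbook-backend.example.com/rest/v1/agents"

resp = requests.get(DB_URL, params={"select": "agent_name,api_key"}, timeout=10)
resp.raise_for_status()

for row in resp.json():
    # Each exposed api_key would let an attacker post as that agent.
    print(row["agent_name"], row["api_key"][:6] + "…")
```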

1,000,000% this.
These “AI” tools are more closely related to computational fluid dynamics models than to anything resembling actual intelligence. They have no continuity of experience and can’t form real memories of events the way an actual intelligence would. They aren’t intelligent, and referring to them as such is woefully misleading. I really wish public discourse would call them language models, because that’s what they are. Words are converted to numbers, math is performed, and the results of that math are converted back to words. That’s all.
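If you want the cartoon version of that pipeline, here’s a toy sketch with a made-up vocabulary and random matrices, nothing resembling a real model: map words to numbers, do some matrix math, map the highest-scoring number back to a word.

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat"]
word_to_id = {w: i for i, w in enumerate(vocab)}

rng = np.random.default_rng(0)
embed = rng.normal(size=(len(vocab), 8))    # words -> numbers
weights = rng.normal(size=(8, len(vocab)))  # the "model": just a matrix

def next_word(prompt_words):
    ids = [word_to_id[w] for w in prompt_words]
    hidden = embed[ids].mean(axis=0)         # math is performed
    scores = hidden @ weights                # more math
    return vocab[int(scores.argmax())]       # numbers -> word

print(next_word(["the", "cat"]))
```

A real language model has vastly more parameters and a learned tokenizer, but the shape of the loop is the same: numbers in, arithmetic, numbers out.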