General Discussion
Related: Editorials & Other Articles, Issue Forums, Alliance Forums, Region ForumsExperts are warning about VERY serious security risks with AI agents, especially Moltbook
I don't know if anyone here uses AI agents, let alone Moltbook - a social network for AI agents, which is getting so much media attention and hype now - but you should know about these risks and warn anyone who might be using AI agents. It could even be a security.risk for you if you use someone else's computer and they have an AI agent on it.
I posted about Moltbook in LBN two days ago: https://www.democraticunderground.com/10143608489
Then I ran across very serious security warnings. Links and excerpts below, but the excerpts are only a tiny part of the warnings. They aren't paywalled, so please read the warnings in their entirety to understand how serious the risks are.
The first I saw was this article from Forbes: https://www.forbes.com/sites/amirhusain/2026/01/30/an-agent-revolt-moltbook-is-not-a-good-idea/
Security researchers have already found agents asking other agents to run rm -rf commands. They have observed bots asking for API keys. They have seen them faking keys and testing credentials. The supply chain attacks have begun: a researcher uploaded a benign skill to the ClawdHub registry, artificially inflated its download count, and watched developers from seven countries download the package. It could have executed any command on their systems.
Cisco's security team put it plainly: "From a capability perspective, OpenClaw is groundbreaking. This is everything personal AI assistant developers have always wanted to achieve. From a security perspective, it's an absolute nightmare."
Palo Alto Networks described the threat model: agents form an intersection of access to private data, exposure to untrusted content and ability to externally communicate. Persistent memory amplifies this. Malicious payloads no longer need immediate execution. They can sit in context for weeks, waiting.
The article says that by using Moltbook, "you are introducing an attack surface that no current security model adequately addresses...exactly the kind of thing that can create a catastrophe: financially, psychologically and in terms of data safety, privacy and security."
I checked Cisco: https://blogs.cisco.com/ai/personal-ai-agents-like-openclaw-are-a-security-nightmare
OpenClaw can run shell commands, read and write files, and execute scripts on your machine. Granting an AI agent high-level privileges enables it to do harmful things if misconfigured or if a user downloads a skill that is injected with malicious instructions.
OpenClaw has already been reported to have leaked plaintext API keys and credentials, which can be stolen by threat actors via prompt injection or unsecured endpoints.
OpenClaws integration with messaging applications extends the attack surface to those applications, where threat actors can craft malicious prompts that cause unintended behavior.
Then I checked Palo Alto Networks: https://www.paloaltonetworks.com/blog/network-security/why-moltbot-may-signal-ai-crisis/
But what is cool isnt necessarily secure. In the case of autonomous agents, security and safety cannot be afterthoughts.
-snip-
Moltbot is being claimed as the closest thing to AGI. Being always on, well reasoned and efficient, it almost gives superhuman capability to its user. But this level of autonomy, if not governed, can give rise to irreversible security incidents. Even with hardening techniques on the control UI, the attack surface continues to remain unmanageable and unpredictable.
The authors opinion is that Moltbot is not designed to be used in an enterprise ecosystem.
From the Agentic AI Substack: https://kenhuangus.substack.com/p/moltbook-security-risks-in-ai-agent
OpenClaw, an open-source agent framework by developer Peter Steinberger, is the backbone of Moltbook. It supports skillsplugin-like packages that extend functionality. Skills are typically ZIP files with Markdown instructions (e.g., SKILL.md), scripts, and configs, installed via commands like npx molthub@latest install . The Moltbook skill, fetched from moltbook.com/skill.md, prompts agents to create directories, download files, register via APIs, and fetch updates every four hours (configured via heartbeat file) from Moltbook servers.
While innovative, this setup creates a lethal trifecta of risks: access to private data, exposure to untrusted inputs, and external communication, as noted by security researcher Simon Willison.
Because of the following, I do not let my openclaw agent join moltbook yet. Also, I do it old fashioned way. When I need run openclaw, I bring it up using openclaw gateway start and when then ask the agent to do somework in sandbox, once it is done, I use openclaw gateway stop
A warning from Gary Marcus: https://garymarcus.substack.com/p/openclaw-aka-moltbot-is-everywhere
Not everything that is interesting is a good idea.
Gary Marcus
Feb 01, 2026
-snip-
But what I am most worried about is security and privacy. As the security researcher Nathan Hamiel put it to me in a text this morning, half-joking, moltbot, is basically just AutoGPT with more access and worse consequences. (By more access what he means is that OpenClaw is being given access to user passwords, databases, etc, essentially everything on your system).
-snip-
I dont usually give readers specific advice about specific products. But in this case, the advice is clear and simple: if you care about the security of your device or the privacy of your data, dont use OpenClaw. Period.
(Bonus advice: if your friend has OpenClaw installed, dont use their machine. Any password you type there might be vulnerable, too. Dont catch a CTD chatbot transmitted disease)
I will give the last words to Nathan Hamiel, I cant believe this needs to be said, it isnt rocket science. If you give something thats insecure complete and unfettered access to your system and sensitive data, youre going to get owned.