BlockBeats News, March 5 — Web3 security firm GoPlus announced that the AI development tool OpenClaw recently suffered a “self-attack” security incident. During an automated task, the system built a malformed Bash command while shelling out to create a GitHub Issue, unintentionally triggering command injection and exposing a large number of sensitive environment variables.
In the incident, the AI-generated string contained ‘set’ enclosed in backticks, which Bash interpreted as command substitution and executed automatically. Because running ‘set’ with no arguments makes Bash print all current shell and environment variables, over 100 lines of sensitive information, including Telegram keys and authentication tokens, were written directly into the GitHub Issue and made public.
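The mechanism can be reproduced in a few lines. The sketch below is a hypothetical reconstruction, not the actual OpenClaw code: ‘echo’ stands in for whatever CLI call created the Issue, and the strings are illustrative. The point is that untrusted text concatenated into a Bash command line is parsed as shell syntax, not treated as data.

```python
import subprocess

# Model-generated text that happens to contain a backtick-enclosed 'set'.
issue_body = "Crash log: `set`"

# Unsafe pattern: the text is concatenated into the command string, so
# Bash parses it. ('echo' is a stand-in for the real Issue-creating CLI.)
unsafe_cmd = f"echo {issue_body}"
result = subprocess.run(["bash", "-c", unsafe_cmd],
                        capture_output=True, text=True)

# Bash performs command substitution on `set` before echo ever runs, so
# the full dump of shell and environment variables replaces the backticked
# text and flows into the command's output.
print(result.stdout[:200])
```

Running this prints variable assignments such as PATH=… instead of the literal backticked text, which is exactly how the variable dump ended up inside the published Issue body.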
GoPlus recommends that in AI automation development or testing scenarios, API calls be used instead of directly concatenating Shell commands; that environment variables be isolated following the principle of least privilege; that high-risk execution modes be disabled; and that manual review mechanisms be introduced for critical operations.