Having a personal assistant who takes care of tedious tasks sounds like something only the super-rich can afford. But thanks to an AI called “OpenClaw,” this dream has now moved within reach for everyone. Emails are processed automatically, appointments are booked, and reminders are sent to your smartphone via WhatsApp. It almost sounds too good to be true. And indeed, there are aspects that require special caution and attention.
But first, the basics:
What is OpenClaw?
OpenClaw is what is known as an “agentic AI,” in other words, an AI agent. What makes it special is that such an agent can independently link multiple actions together in a meaningful way. This allows users to speak with the virtual agent much like they would with a human. If the instruction is “Book me a train ticket from Bochum to Munich,” the agent will derive several tasks from that—ranging from searching for suitable connections and the cheapest offer to reserving a seat, making the payment, creating a calendar entry, and sending a digital copy of the ticket via messenger.
It is even possible to have emails and chats processed automatically. For example, you could ask the digital assistant, “What was the name of that restaurant my colleague recommended the other day?” without having to scroll through weeks’ worth of old chats yourself.
Who owns OpenClaw and how long has it been around?
OpenClaw is an open-source project by developer Peter Steinberger that was released in November 2025. The program’s source code is publicly available and free to use. Anyone who wishes can make their own modifications. It is also possible to connect OpenClaw to other AI systems—such as large language models (LLMs) like ChatGPT or Claude.
What is the issue with OpenClaw?
Most of OpenClaw’s source code was generated with the help of another AI. According to the developer, it was created “over the course of a weekend.” Experts refer to this as “vibe coding”—a type of software development in which the developer defines the desired functionality in dialogue with an AI, and the AI then generates the source code.
Extensions for OpenClaw can also be developed using AI and stored in a dedicated “Skills Marketplace.” With these “skills,” OpenClaw can expand its own capabilities.
A huge leap of faith
Steinberger himself has stated that he has not read or reviewed most of the code. This has direct implications for security, because while many AI systems are capable of generating functional programs, they may contain errors and security vulnerabilities. If the code has not been reviewed, there is a risk that undiscovered weaknesses are simply waiting to be exploited.
The biggest issue, however, is that a personal assistant also requires access to important information. To return to the earlier ticket-booking example: the assistant needs access to your calendar, credit card, customer account on the railway website, and messenger to send the e-ticket. In other words, you must trust the assistant to handle your data responsibly. When hiring a human assistant, you might require a confidentiality agreement and perhaps even a criminal background check as part of the hiring process.
None of that applies to an AI agent. No one can determine whether the digital assistant has previously misused credit card data, nor can AIs have a criminal record in the traditional sense. They also cannot sign a confidentiality agreement. Precisely because OpenClaw’s source code is largely unreviewed, it is arguably unwise to entrust it with highly sensitive information and simply hope everything will turn out fine. Yet thousands of users are doing exactly that—and more. In doing so, they are granting an AI agent enormous trust, without it being clear whether that trust is justified.
Meanwhile, there are reports of people using the OpenClaw agent to conduct banking transactions or even speculate in cryptocurrency markets on the owner’s behalf. The risk of data loss and significant financial damage is immense.

Attracting unwanted interest
Wherever large amounts of valuable data accumulate, criminals are usually not far away. One of the most frequently downloaded skills is effectively an infostealer, whose sole purpose is to send data about the assistant’s operator directly to the attackers. Login credentials, payment information, access tokens for other platforms—the possibilities are almost endless. Everything the virtual assistant and the system it runs on have been entrusted with can become the target of digital theft.
Anyone who grants the virtual assistant full access to their entire digital identity risks losing everything at once—especially if OpenClaw is running with administrator privileges on a home computer, giving it unrestricted access to all stored information, including data saved on cloud platforms. In this way, virtual assistants themselves become targets of social engineering attacks and may end up disclosing data.
Protection against malicious skills
OpenClaw has since recognized that agent skills are a prime target for individuals with criminal intent. Malware can be hidden within them and then downloaded and executed by OpenClaw. There are even dedicated platforms where AI agents sell skills to one another, and not all of them are benign. Prompt injection remains an ongoing issue—where instructions for the AI are hidden on a webpage, completely invisible to the user but easily readable by the AI.
As a result, initial efforts have recently been made to remove malicious skills from the platform. According to the official OpenClaw website, the project is now cooperating with the well-known malware scanning platform VirusTotal.
Insert Coin
Anyone who wants to experiment should do so on a system that is fully isolated and not accessible from the outside. No information required to trigger orders, payments, or other legally binding transactions should be stored on that system.
Another issue: many LLM services that can be connected to OpenClaw are fee-based. Users must either purchase so-called “tokens” in advance (essentially digital credits) and consume them over time, or subscribe to a monthly plan. This can quickly become a cost trap: since OpenClaw independently seeks out new tasks, it may rapidly consume purchased tokens while interacting with an external language model such as ChatGPT or Claude—and that can become quite expensive.

In conclusion
OpenClaw is an extremely powerful tool with real potential to make everyday life easier. However, it is as dangerous as it is useful. Anyone who does not carefully consider which data they entrust to their new assistant may ultimately pay the price. Especially in the current phase of hype, both the opportunities and the risks are at their greatest.
