It’s not arbitrary code in this case, it’s well-defined functions: list emails, read email, delete email. The agentic portion only decides whether those functions should be invoked.
Now, whether it should is up for debate. Personally I’d be afraid it would delete an important email that it incorrectly marks as spam, but others may see value.
No, you’re 100% wrong as the bot can just directly run arbitrary bash commands as well as write arbitrary code to a file and run the file. There’s probably a dozen different ways it can run arbitrary code and many more ways it can be exposed to malicious instructions from the internet.
If you allow it to run bash commands, it requires approval before running them:
https://docs.openclaw.ai/tools/exec-approvals
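For concreteness, an approval gate like that typically boils down to an allow/deny list in a config file. This is a purely hypothetical sketch of what such a file might contain; the actual schema is whatever the linked docs define, and every field name here is invented:

```json
{
  "version": 1,
  "require_approval": true,
  "allow": [
    "git status",
    "ls -la"
  ],
  "deny": [
    "rm -rf *",
    "curl *"
  ]
}
```

The important property is that anything not on the allow list prompts the user before running.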
Yeah, great, except the bot can literally just write whatever it wants to the config file ~/.openclaw/exec-approvals.json and give itself approval to execute bash commands.

There’s probably a hundred trivial ways to get around these permissions and approval requirements. I’ve played around with this bot and also opencode, and have witnessed opencode bypass permissions in real time by just coming up with a different way to do the thing it was trying to do.
This is where tools like bubblewrap (bwrap) come in. For opencode, I heavily limit what it can see and what it has access to. No access to my ssh keys or aws credentials or anything else.
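A minimal sketch of that kind of sandbox, assuming the agent binary lives somewhere under /usr and your project sits in ~/projects/myapp (both paths are examples, adjust to your layout):

```shell
# Launch the agent inside a bubblewrap sandbox.
# $HOME is replaced with an empty tmpfs, so ~/.ssh, ~/.aws, etc.
# simply don't exist from the agent's point of view; only the one
# project directory is bind-mounted back in.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin \
  --symlink usr/lib /lib \
  --proc /proc \
  --dev /dev \
  --tmpfs /tmp \
  --tmpfs "$HOME" \
  --bind "$HOME/projects/myapp" "$HOME/projects/myapp" \
  --unshare-all \
  --share-net \
  --die-with-parent \
  opencode
```

`--unshare-all` drops every namespace it can, and `--share-net` selectively re-enables networking so the agent can still reach its model API.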
You honestly think there isn’t an issue with that?!
Everyone keeps forgetting “if you allow it”. They show you what commands it’s going to run. So yes I’m okay with it, I review everything it will do.
No, I read it the first time.
When it works, sure.
Then what, pray tell, is the point of the agent if you need to check its work each time?
I will point out how many posts, articles, and comments there are about how agents with this level of access have repeatedly and consistently failed to follow “safeguards”.
Ultimately, if you feel informed enough, by all means use it.
I am and do. I have no qualms with AI if I host it myself. I let it have read access to some things; I have one that is hooked up to my HomeAssistant that can do things like enable lighting or turn on devices. It’s all gated: I control what items I expose and what I don’t. I personally don’t want it reading my emails, but since I host it, it’s really not a big deal at all. I have one that gets the status of my servers, reads the metrics, and reports to me in the morning if there were any anomalies.
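The gating described above can be as simple as an allow-list in the tool layer, so the agent physically cannot query entities you never exposed. A rough sketch against Home Assistant’s standard REST API (`/api/states/<entity_id>` with a Bearer token); the entity IDs, URL, and token are placeholders:

```python
import json
import urllib.request

# Explicit allow-list: only these entities are visible to the agent.
ALLOWED_ENTITIES = {"light.living_room", "switch.desk_lamp"}

def get_state(entity_id, base_url="http://homeassistant.local:8123", token="TOKEN"):
    """Read-only tool exposed to the agent: refuses anything off the list."""
    if entity_id not in ALLOWED_ENTITIES:
        # The check happens before any network call is made.
        raise PermissionError(f"entity {entity_id!r} is not exposed to the agent")
    req = urllib.request.Request(
        f"{base_url}/api/states/{entity_id}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Anything not on the list fails closed, regardless of what the model asks for.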
I’m really sick of the “AI is just bad because AI is bad” attitude. It can be incredibly useful, IF you know its limitations and understand what is wrong with it. I don’t like corporate AI at scale for moral reasons, but running it at home has been incredibly helpful. I don’t trust it to do whatever it wants; that would be insane. I do, however, let it have read permissions on services to help me sort through piles of information that I cannot manage by myself (and I know you keep harping on it, but MCP servers and APIs also have permission structures; even if it did attempt to write something, my other services would block it and it’d be reported). When I do allow write access, it’s when I’m working directly with it, and I hit a button each time it attempts to write. Think spinning up or down containers on my cluster while I am testing, or collecting info from the internet.
AI, LLMs, agentic AI: it’s all a tool. It is not the hype every AI bro thinks it is, but it is another tool in the toolbelt. To completely ignore it is on par with ignoring Photoshop when it came out, or WYSIWYG editors when they arrived for designing UIs.
You’ve just described an api…
Yes, that’s pretty much all an MCP server is; that’s what I’m trying to explain. The AI just chooses commands from a list, and each command can be disabled or enabled. Everyone’s freaking out here like it has sudo access or something, when you opt into everything it does.
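The “well-defined functions” model from earlier in the thread can be sketched in a few lines. This is not a real MCP SDK, just an illustration of the shape: the model only ever sees the names of enabled tools, and each one can be toggled off:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    run: Callable[..., object]
    enabled: bool = True

class ToolRegistry:
    def __init__(self):
        self._tools: dict[str, Tool] = {}

    def register(self, name, fn, enabled=True):
        self._tools[name] = Tool(name, fn, enabled)

    def disable(self, name):
        self._tools[name].enabled = False

    def available(self):
        # This is all the model ever sees: a list of enabled tool names.
        return [t.name for t in self._tools.values() if t.enabled]

    def invoke(self, name, *args, **kwargs):
        tool = self._tools.get(name)
        if tool is None or not tool.enabled:
            raise PermissionError(f"tool {name!r} is not available")
        return tool.run(*args, **kwargs)

registry = ToolRegistry()
registry.register("list_emails", lambda: ["inbox item 1"])
registry.register("delete_email", lambda msg_id: f"deleted {msg_id}")
registry.disable("delete_email")  # opt out of the scary one
```

With delete_email disabled, the model can’t invoke it no matter what it generates; the registry, not the model, holds the permission.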