Hacker News | speakingmoistly's comments

Given that it's a pretty niche thing, a link or post context wouldn't hurt.

That being said, I would assume that like most DIY things, part of the purpose of it is the creative journey that leads you to assembling yours. You learn things along the way and have an interesting artifact to show for it once you're done.



I haven't read the actual announcement yet, but I would bet dollars to donuts that the check is really "run the plan through another LLM whose system prompt is prefixed with 'you are great at safety checks'".
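For what it's worth, the "LLM-as-judge" pattern I'm speculating about is roughly this. Everything here is hypothetical: `call_llm` is a stand-in for whatever chat-completion client the vendor actually uses, and the prompt and verdict format are made up for illustration.

```python
# Hypothetical sketch of an LLM-as-judge safety check: feed the agent's
# plan to a second model primed as a safety reviewer, parse a verdict.

SAFETY_SYSTEM_PROMPT = (
    "You are great at safety checks. Review the following plan and "
    "answer APPROVE or REJECT on the first line, then a short reason."
)

def call_llm(system: str, user: str) -> str:
    # Stub: a real implementation would call an actual model endpoint
    # with `system` as the system prompt and `user` as the message.
    return "REJECT\nThe plan deletes files outside the workspace."

def check_plan(plan: str) -> tuple[bool, str]:
    reply = call_llm(SAFETY_SYSTEM_PROMPT, plan)
    verdict, _, reason = reply.partition("\n")
    return verdict.strip() == "APPROVE", reason.strip()

approved, reason = check_plan("Step 1: rm -rf outside the sandbox")
print(approved, reason)  # → False The plan deletes files outside the workspace.
```

The whole "check" reduces to one extra model call plus string parsing, which is why I'd be surprised if there were much more to it.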

> Maybe the root issue is that Anthropic is operating at a loss on their subscriptions

I'm pretty sure all of the LLM subscription businesses operate at a loss on their fixed-price subscriptions [1]. The price needs to be low enough for regular consumers to consider "sane" (most people won't pay more than $20 monthly; it fits in with the rest of the subscriptions people have been trained to tolerate), but usage quotas probably can't (citation needed, but it feels like a reasonable take) be constrained to an amount that would allow for real profit while remaining high enough to be useful (or at least non-obstructive).

[1] https://xcancel.com/sama/status/1876104315296968813
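A back-of-the-envelope sketch of why the flat-rate math is hard. Every number below is an assumption for illustration only, not any vendor's actual cost structure:

```python
# Assumed, illustrative numbers -- not real vendor economics.
price_per_month = 20.00          # the "sane" consumer price point
cost_per_million_tokens = 10.00  # assumed blended inference cost
tokens_per_session = 50_000      # assumed heavy-ish coding session

cost_per_session = tokens_per_session / 1_000_000 * cost_per_million_tokens
breakeven_sessions = price_per_month / cost_per_session

print(f"cost per session: ${cost_per_session:.2f}")      # → $0.50
print(f"sessions before $20 is spent: {breakeven_sessions:.0f}")  # → 40
```

Under these made-up numbers, a subscriber who runs more than 40 sessions a month is already underwater for the vendor, and power users blow past that easily, which is the squeeze I mean.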


The people selling those services have worked really hard to get everyone else to think that more AI usage inevitably leads to more success (and inversely, that too little of it is a signal that you're being left behind), but I have yet to see this actually backed up by anything else than marketing.

LLMs remain a tool: they amplify what's there (good and bad), but the real predictor of success is the humans using them. Think of it as baking: adding LLMs to your toolbox is a lot like moving from mixing dough by hand to having a stand mixer. You'll be able to mix a lot more dough with the mixer, maybe even have your chefs and apprentices do something other than mixing dough for hours. Even if you deck out your kitchen with the latest machines, though, your bread will only be as good as the know-how of your people and the recipes you use. The same applies to software.

One of the key selling points of LLMs has been that they make execution cheaper (in software-land, at least). This is arguable, but even assuming they do, execution still isn't the bottleneck they'd like you to think it is. In my experience, deciding what to build, collecting insights on what's out there and how the rubber meets the road with users, and building organizational alignment on the path ahead remains the hardest, most tedious part of the process. LLMs may help with some of it, but in the end, this is still people-driven, and regardless of how many tokens or fancy models you have at your disposal, that's what will decide if your solution to a problem stands out against your competition.


"You get a raffle entry with every 10 minutes of active usage."

I'll definitely be deterred from reading further once I get a whiff of it being AI-generated (or too heavily AI-assisted), and I'd be less likely to read from whoever published it again. When it comes to translation, it depends more on the quality of the text (I always value a human translation over an AI / automated one, and a low-quality translation would turn me away).

The equation may also be a bit different if there's disclosure. If an author is upfront about AI assistance that would materially affect the text, I might read further (provided that it's not just "AI wrote this and I barely proof-read it").

Ultimately, I'd be less likely to spend time and effort consuming something I have the feeling wasn't really the product of someone else's time and effort.


The better place to flag this would probably be their issue tracker. Since the marketplace release went out only five hours ago, it's not unlikely that GitHub releases are cut manually and lag behind (I see plenty of this in projects that publish container images: the updated image comes through Renovate hours before anything shows up on the release page).


Thanks, I followed their security.md to contact them. Appreciate the insight on a possible standard lack of synchronous versions.


> Appreciate the insight on a possible standard lack of synchronous versions.

Looking closer at the commit and release history, it looks like poor release hygiene, really. Commits hint at a 0.44.0 release that doesn't show up in the tags, and the changelog file included with the source (both in the extension you pull down and in the repository) isn't kept up to date.

The absence of a verifiable link between the marketplace artifacts and the underlying code should probably give people pause about the trustworthiness of the extension. I bet a good chunk of what's on that marketplace is in that situation.


I hope their foray into compute works out better than their shoemaking. Bought a pair, didn't even last a season before starting to fall apart (which would still be very on brand for the general AI sector).


Yet another exhibit to add to the "why do we give so much airtime to these folks and their hot takes?" pile.

I'm sure people like Karp would love to see the humanities gone, because that would mean the ethical and social impact of the work they advocate for gets less attention.

