Hacker News | catlifeonmars's comments

> Everything changed in the past 6 months and coding LLMs went from being OK-ish to insanely good. People also got better at using them.

I hear this literally every 6 months :)


It hasn't been true forever, but it has been true over the last 18 months or so.

Can’t you just use Anthropic models through bedrock?

This article has “why stabbing yourself with a screwdriver is bad” vibes.

Yes. It really makes no sense to take a screwdriver instead of a knife.

> I’m certainly not endorsing it, do think it’s pretty problematic, and I’m glad it’s getting some visibility. But I do take some issue with the alarmist framing of what’s going on.

Speaking as someone who shares the same lack of surprise, perhaps some alarm is warranted. Just because it’s ubiquitous doesn’t mean it’s OK. This feels very much like the frog in boiling water to me.

Why do you think the alarmist framing is unwarranted?


I do think a degree of alarm is appropriate.

But it’s critical to sound the correct alarm.

To me, it seems like the authors pulled the fire alarm for a single building when in reality there’s a tornado bearing down.

And by doing so, everyone is scrambling about a fire instead of the response a tornado siren would cause.

They’re both dangerous and worthy of an immediate reaction, but the confusion and misdirection this causes seems deeply problematic.

When people realize the fire wasn’t real, they start to question the validity of the alarm. The tornado is still out there.

I realize this analogy is a bit stretched.

As someone who has spent quite a lot of time steeped in security/privacy research, the stuff described in the article has been happening pervasively across the industry.

People absolutely should be alarmed. Many of us have been alarmed for quite some time. Raising the alarm by saying “LinkedIn is searching your computer” isn’t it.


I think this is a great analogy. I read quite a bit of the site and it's wildly blown out of proportion and severely lacking in context.

How many phone apps do you think are trying to detect what else is installed on your phone? I was part of an acquisition of a company with a very large mobile user base and our new parent was shocked we weren't trying to passively collect device information like this. They for sure were.

And on the flip side, as others have done well to point out, there are a LOT of legitimate reasons to fingerprint users for anti-fraud/abuse and I am 100% convinced that we're all better off for this.

Maybe that’s all this story is about, maybe not, but this article leaves out an incredible amount of complexity.


What’s going on here in main.tsx? (Linter comments elided for conciseness)

    // Exit if we detect node debugging or inspection
    if ("external" !== 'ant' && isBeingDebugged()) {
        process.exit(1);
    }
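For context, a check like this usually bails out when a debugger is attached. A minimal sketch of how an `isBeingDebugged()` helper *might* be implemented in Node (this is a guess at the helper's body, not the actual code from main.tsx):

```typescript
// Hypothetical sketch of a debugger-detection helper in Node.
// Assumes detection via the built-in inspector API plus launch flags.
import inspector from "node:inspector";

function isBeingDebugged(): boolean {
  // inspector.url() returns a WebSocket URL only when an inspector session
  // is active; it is undefined in a normal run.
  if (inspector.url() !== undefined) {
    return true;
  }
  // Also catch processes launched with --inspect or --inspect-brk.
  return process.execArgv.some((arg) => arg.startsWith("--inspect"));
}
```

The `"external" !== 'ant'` comparison in the snippet looks like a build-time constant that was substituted at bundle time, leaving behind an always-true guard.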


Anecdata, but I don’t find myself making major changes to code during review often. That speaks to a deeper problem (miscommunication of requirements, author skill, overly pedantic reviewers, etc).

Also, I don’t have time to wait around for a review to work on other parts of the same codebase.


I think it’s really context dependent. I haven’t found LLMs to increase my productivity in coding in my field because the quality of the output matters much more than the quantity. I don’t think it’s the same across the board though, and there are plenty of domains where code generation is a force multiplier. Sometimes you need a chainsaw and sometimes you need a scalpel and in my own experience I have found that using coding agents as scalpels is not a very efficient use of my time. shrug

I’ve had both experiences, sometimes at the exact same company.

Maybe I’m just getting extremely lucky, but I don’t use AI to code at work and I’m still keeping up with my peers who are all Clauded up. I do a lot of green field network appliance design and implementation and have not really felt the pressure in that space.

I do use Claude code at home maybe a couple hours a week, mostly for code base exploration. Still haven’t figured out how to fully vibe code: the generated code just annoys me and the agents are too chatty. (Insert old man shaking fist at cloud).


We're witnessing a divergence between Coders and Clauders, with the latter dominating the market at a lower cost of labor + subscription fee to the almighty AI providers. Coders may be called in, hopefully with better remuneration, to review and debug the massive amount of code being generated. Either that or they will also be replaced by specially trained/prompted language models doing the review.

> + subscription

With how much some people spend on tokens that they've shared on here, and concerns about raising prices, I've kind of been wondering if we're actually heading to a point where seniors who don't use AI are going to be cheaper than juniors who do.


In the future Claude will run a tight ship on dissenters. If your monthly quota doesn't exceed 10k worth of tokens, your employer will be notified and you will be flagged as a "dissenter". Your lease will be cancelled, because who would trust someone ignorant enough to not use LLMs in their daily life, and you'll be barred from the field for life, for clanker companies will proclaim that anyone who doesn't use LLM-assisted coding should be culled.

And executives will get millions in bonuses for figuring it out, and the remaining programmers, probably one or two, will crane their necks over who's the best prompter and how everyone else was dumber than them for not figuring it out.


ai skeptic fanfic evolves in fascinating ways every day

Take it a step further: AI generated AI skeptic fanfic :D

Yeah, the AI productivity gains are a myth in my experience.

> the generated code just annoys me and the agents are too chatty

I’ve eyerolled way less with Codex CLI and the GPT models than with Claude.


I haven’t spent much time with Codex+GPT, will definitely give it another look

I wonder if you could run multiple models and average out the timestamps, kind of like how atomic clocks are used together and not separately
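The atomic-clock analogy suggests an ensemble: ask several models independently and combine their answers, so one badly-off estimate doesn't dominate. A minimal sketch, assuming each model's answer has been reduced to a numeric timestamp (the function name and the idea of using the median rather than the mean are my own choices, not anything from the thread):

```typescript
// Hypothetical sketch: combine timestamp estimates from several models.
// The median is used instead of the mean because it is robust to a single
// wildly wrong estimate, similar in spirit to how ensemble clocks discard
// outliers before averaging.
function combineTimestamps(estimates: number[]): number {
  if (estimates.length === 0) {
    throw new Error("no estimates to combine");
  }
  const sorted = [...estimates].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // Odd count: middle element; even count: mean of the two middle elements.
  return sorted.length % 2 === 1
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}
```

With three estimates where one model is far off, e.g. `combineTimestamps([1700000000, 1700000002, 1700009999])`, the outlier is ignored entirely, whereas a plain mean would be pulled toward it.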

