Hacker News | jbethune's comments

This was a bit word-salad-y, but I share the same basic concern. What worries me more is the tendency toward greater and greater cognitive off-loading to LLMs. My sister told me a story the other day about how she caught her plumber using ChatGPT on his phone to fix an issue with her bathroom. I just think it's good for humans to know how to do stuff.

I've always wanted to be more of a handyman but never knew where to start. I used LLMs to put together a toolkit and then used them to fix various stuff around the house. I'm at the point where I'm comfortable with beginner projects and moving on to intermediate ones, and I feel like the quality of my work beats that of hired help at my level of competence. So... I'm glad I could off-load some cognition to LLMs and get to the actually useful parts.

Same. I find LLMs are pretty good at helping me understand things I didn't know that I didn't know, but really needed to. That has never been easy to get in a timely manner through more classic means of research.

That, and it's good for going down rabbit holes on a topic that, before, you'd have to reserve for when you accidentally found a resource that perfectly answered that niche line of questioning, or when you somehow ended up in conversation with someone who has expertise in the thing you were wondering about.


Even in this audience there is a significant portion that doesn't have a clue about the limits and pitfalls of AI. For people outside tech it is obviously worse, most likely the vast majority. Just because you succeed with it doesn't mean everybody does.

your sister offloaded to her plumber.

her plumber offloaded to chatgpt.

"i just think it's good for humans to know how to do stuff."

are we talking about your sister or her plumber?


The plumber, obviously. Not everyone needs to know how to be a plumber, but a plumber should know how to be a plumber.

I'm a software engineer and know how to be a software engineer, yet I find LLMs quite helpful. Why should a plumber be any different?

Because if a plumber moves fast and breaks things, I've got shit all over the place.

That, and also the plumber loses their license. So perhaps the solution is professional licensing for software engineers.

I feel like a licensing process for software engineers would

A) test lots of skills that are common but not universal. I'm thinking javascript trivia here, where I don't write any javascript in my professional capacity as a software engineer; but there are many people who think Software Engineer == Javascript Programmer

B) shine too much of a light on the fact that this industry is full of people who demand high salaries but can't program their way out of a paper bag


I think that's coming regardless. AI just might be the perfect storm to bring the timeline in considerably.

Engineer is a protected title in Canada, after all.

the question was rhetorical. but, since you responded – do you think that there are limits to who can or should use ai? if the plumber's use of ChatGPT improved outcomes, isn't that preferable?

some knowledge is likely "cached" in the plumber. maybe he doesn't ask the same question twice. i'm sympathetic to the plumber, but i think your concerns of erosion of knowledge or skill are worth pushing on further.


> do you think that there are limits to who can or should use ai?

I don't think there should be imposed limits, but there is probably a point at which expertise atrophies from depending on AI too much.

> if the plumber's use of ChatGPT improved outcomes, isn't that preferable?

In the short term sure, and maybe even in the long term for the customer. I think the risk to the plumber is losing some of their expertise by outsourcing to AI. But who knows, maybe the plumber has excellent memory and only accumulates knowledge each time they use AI.

Some of the article is lost in the plumber example. I doubt plumbers are spending much time exploring new ways of solving problems, and they might even benefit from having a narrower range of outcomes. Other fields that require both expertise and novel solutions will be at a disadvantage if they become more homogenized by depending on AI. Not only is the range of solutions reduced, but getting there is faster, so people end up at a local maximum. Maybe they get stuck there, maybe not, but that's the risk I see.

You don't imagine any long term risks by outsourcing expertise to AI?


I do, but again, it was a rhetorical question; a paradoxical thought experiment.

Which part of being a plumber? Did the house have something non-typical installed? Would you rather have them take an additional 30 minutes looking through their technical manual?

Without further knowledge of what was going on it's hard to say why they used ChatGPT.


> Would you rather have them take an additional 30 minutes looking up their technical manual?

Yes


You know that plumbers charge by the hour, right?

Water damage is generally more expensive than a half hour of a plumber's time

How do you know ChatGPT is referencing the right information if you need to look it up in a manual?

The issue here is that the sister could have used ChatGPT herself, so why bother hiring the plumber? The plumber has provided less value than was expected. But make no mistake: the value the sister was looking for was to have someone else deal with it, and there's a price that the sister was willing to pay for the service of having someone else deal with it.

In the comments of this HN post, there is a dead comment from someone who posted an LLM's summary of another comment. It's dead because it offers very little/no value: that summary could be obtained directly from ChatGPT by anyone who wants a summary.

The sister offloaded plumbing to the plumber under the economic principle of comparative advantage. The plumber undermines the value they provide by outsourcing yet again. What value is provided by the middle man who does nothing but proxy the issue? Is the person who does this really a plumber? Is a plumber merely someone who has plumbing tools like wrenches and pipe tape?

That the plumber also wanted to outsource it is the concern: right now, the plumber is able to make money because of the difference between what is charged to deal with a problem and what it costs them to deal with it. Knowledge and experience have become a commodity, which we probably can't do anything about, but along with that come all the drawbacks (and advantages) of things, and humans, being commoditized.


This is assuming that ChatGPT had everything needed to do the work. If the plumber was asking specific questions, based on their previous experience and knowledge about what needed to be done, the sister might not have been able to get the same result from her use of ChatGPT that the plumber received.

Experts look things up all the time, because no one can hold all the knowledge of a field in their head. Being an expert means being able to know what to look up and how to use the information retrieved from looking something up.

In the plumber example, ChatGPT is going to tell them to do things using the terminology that plumbers know, and tell them to do tasks that plumbers know how to do. The sister would have to continually look up more and more things about how to do basic plumbing tasks, rather than just looking up particular novelties.


Yes, this is why I mentioned comparative advantage.

So you are saying that a plumber does not in fact need to know how to be a plumber?

Sure, but.. I've been coding for 40 years and I don't know everything. To me, a LOT depends on what the plumber asked chatgpt about. For example: building codes in that city, to figure out what his options are - like, is he allowed to just put in any old toilet, or is there a gpf restriction? What's the replacement part number for faucet XYZ's gasket? Those seem reasonable.

"how do I fix a clogged toilet?" would be bad..


I keep coming back to a prompt I sent a while ago about tossing a chopped pepper into a recipe for baked ziti. I had a recipe that I followed fairly tightly, with slight changes each time to see how they would work out. Instead of prompting "when should I add chopped bell pepper?", the small change to "what are my options for when to add chopped bell pepper?" opened up a variety of methods I could try when returning to that recipe, and let me decide what I liked best based on the outcome.

The first prompt style is, I think, a way society drifts incidentally toward something less interesting, with less variety in solutions. The second, I think, lets people still exercise their potential to try a variety of things and keep that variety.


>like, is he allowed to just put in any old toilet, or is there a gpf restriction?

And if the LLM gets that wrong? It's his job to know the codes or how to go to a reliable resource to find out the correct codes.


Hopefully he would be using the LLM as an enhanced search engine that can point him to relevant authoritative sources that he can use to fact-check its output. I have done that in the past to some effect.

People don’t understand they need to do that.

Maybe he just needs a reminder and he'll have an "oh yeah" moment when he reads the output; maybe he'll ask it for primary sources. There's a lot of bad faith going around.

Presumably in his jurisdiction he should know what official resources to consult. But the point about it depending on his question is definitely fair.

Would you prefer to have:

The plumber who turned up leave without fixing the problem,

The plumber fix something they didn't know how to do by looking up the answer, or

The plumber attempt to fix something they didn't know how to do?

While it's great to have the plumber who knows how to do everything, they are rare and in high demand, so cost way more than you can afford.


I would prefer to have a plumber with some kind of reference that doesn't just make shit up 10% of the time -- plumbing mistakes are insanely costly (I once owned a house that was destroyed by a plumbing mistake made by a previous owner).

Did she also catch him with a wrench?

I mean, yes, but LLMs have been making me more cognitively active. I've learned how to do more stuff than I would have without them, and it's a decent multiplier, not some rounding error.

Obviously you can have a plumber that knows his stuff and one that doesn't. The good one can check some details and will recognize BS. If you already have the bad one, it's probably better if he uses an LLM than if he doesn't.


Great work. I like the approach of creating a schema to work with the OpenAPI spec.

I would second this. It would be very helpful. I had a fat-finger issue a while ago that this would have saved me from.

Local urbex and exploration of "haikyo" (abandoned ruins) areas. Easy for me since I am in Tokyo and it's super walkable. I have taken to just getting on the train, getting off at random stations, and walking in a random direction for a couple of kilometers. Every now and then I run into interesting abandoned buildings or neat shrines. Also makes for good exercise.

Saved. Very useful. Normally I just dig around the GitHub UI to see what I can glean from contributor graphs and issues, but these git commands are a pretty elegant solution as well.
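(The linked article's exact commands aren't reproduced in this thread; as a sketch, the kind of contributor-graph digging described above can be done with standard git invocations like these, run against a throwaway repo here so the example is self-contained:)

```shell
# Set up a throwaway repo so the commands below have something to run against.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name="Alice" -c user.email="alice@example.com" \
    commit -q --allow-empty -m "initial commit"

# Commit counts per author across all branches -- a rough
# contributor graph without touching the web UI.
git shortlog -sn --all

# Recent activity: hash, author, date, and subject per commit.
git log -n 5 --pretty='%h %an %ad %s' --date=short
```

`git shortlog -sn` is also handy scoped to a path (`git shortlog -sn -- src/`) to see who actually maintains a subsystem.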

Anecdotally I can say this is true both in education and software development. The diversity of approaches and writing styles among junior developers used to be fun to observe and mentor. Now with everyone using AI coding agents there is a same-y-ness to people's work that makes it harder to see what the writer actually knows or doesn't know. A friend of mine who teaches high school English has said the same thing about student work.

Congratulations on the launch! Will definitely test this out.

I think this is a model issue. I have heard similar complaints from team members about Opus. I'm using other models via Cursor and not having problems.

Forked. Very cool. I appreciate the simplicity and documentation.

This is super cool. Wish I had this back in my sysadmin days.

