Hacker News | danny_codes's comments

That’s not how printing money works

This HSR line makes a good deal of sense if it could be built. Your analysis of flying is wildly incorrect. The train will be faster door to door than the average flight.

But the main benefit will come from massive secondary economic growth. Cities along the line will experience dramatic increases in demand and economic activity. Cities at the terminus will have a massively increased share of workers in the commute threshold.

Further, the HSR line serves as a backbone for future lines, tying the state together.

The plan is very good on paper. It’s getting creamed in implementation because CA is not politically aligned on this project. Half the state doesn’t want it to succeed at all.


I don’t think the tech or the route is remotely the problem. This is purely a matter of political will.

Not sure if you’ve read Abundance. But the basic idea is that rich, developed countries have onerous processes in place to satisfy many needs, which is antithetical to building stuff.

For example, CA requires strict analysis and studies. CA has myriad legislation to protect private property. It has restrictions on what can be purchased, from whom, and from whence labor can be sourced. Together, this vast web of limitations makes big projects like HSR extremely expensive and unwieldy.

It’s not that the scope or ambition of the project is a problem in itself. It’s that the mega project comes along with many requirements aside from just building it.

Effectively, CA is working at cross-purposes with itself.

The resolution is actually very simple. You just exempt your mega-project from all the legislation constraining it.

If CA wanted to they could simply change the law. Skip labor-sourcing laws, skip community feedback, skip permitting and approval (aside from safety), skip domestic parts requirements, and apply eminent domain with no feedback process.

We don’t do that for political reasons. This isn’t a technical problem.


> We don’t do that for political reasons.

We don't do that because of all the reasons those laws were put into place.


Not so; Wikipedia is perfectly free.

It’s subjective, because it’s art. There’s no right answer.

If you like listening to AI generated content, then that’s fine! I’m glad you found something you enjoy.

For me, I consume art because I want to understand other people. For example, when I go to an art museum I want to emotionally connect with the artist: to feel what they were feeling, or understand an idea they’re conveying. I have little desire to emotionally connect with stochastic token sampling. It seems a vapid way to spend time.


You still assume the artist in those examples is real. It could be a team, a ghost artist, etc. - yeah, it's less likely than with music, but still. The connection itself is quite difficult too, given the ease with which someone could plagiarize others' work - sure, they may have mechanical skill, but did they really invest in the painting, or was it ripped off from others' ideas?

I suspect your connection to real artists won't be impacted. This, like the music example, just highlights our assumptions.

I'm not defending this AI garbage, FWIW; I just don't think it's as interesting as most people make it out to be. I adore music, and I connect with the songs I connect with. I don't typically think about the possible ghostwriters, teams of writers, ghost players, etc. The music either speaks to me or it doesn't.

Though I'm not trying to connect to the musician as a person. However, as I was illustrating - if I really wanted to connect with musicians at face value, that ship sailed many, many years ago. Far before AI.

There are ways to mitigate this, but that balance will always be there - it was before AI, and it will be after. It's an evolution. Not an enjoyable one perhaps, but it is nonetheless.


This seems like a wildly unlikely risk. Innovations in this space are just mathematical ideas, easy to write down in a paper and replicate.

It’s much more likely that performance will plateau and open weights will catch up asymptotically


Mathematical ideas are very difficult to protect. But models can also be improved with brute force improvements in size. Imagine Mythos is a 32 trillion parameter model for example. That could be very difficult to replicate even though everybody knows exactly how it works.

> It’s much more likely that performance will plateau and open weights will catch up asymptotically

I really don't think so. This almost never structurally happens.

I think it'll be more like Linux on the Desktop.

Or Ubuntu on the smartphone.

Or Firefox.

We'll have open weights, but 99% of everything will go through hyperscalers.


> I really don't think so. This almost never structurally happens.

> I think it'll be more like Linux on the Desktop.

I think it will be Linux on the server, or the one that runs your watch, your phone, the radio or infotainment system in your car, maybe your thermostat, a bunch of medical devices and military devices, running in space shuttles and space stations and... You get the point. It's on everything.


The smartphone is the most important piece of infrastructure in the modern world, yet we have basically two vendors.

Unless something dramatically changes, that's the world we're in for.

Chinese foundation model providers are releasing fewer weights as they "catch up", not more. There's little incentive for anyone to dump on the market if they can't collect the proceeds.


> The smartphone is the most important piece of infrastructure in the modern world, yet we have basically two vendors.

> Unless something dramatically changes, that's the world we're in for.

If you limit LLM use to cellphones, maybe, but that seems awfully silly right now. And why would you, when there are so many B2B and B2C tools and products for it to go in? No reason to consider the market that constrained, IMO.


> There's little incentive for anyone to dump on the market if they can't collect the proceeds.

Foreign state actors are not lacking incentives when the entire US economy is propped up by overvalued and overhyped AI. Like dumping a model with Opus 4.6-level brains, at a fraction of the price, running on non-Nvidia hardware.


I doubt people using LLMs aggressively today and not understanding what the LLM is doing or why it works (or doesn’t) are positioning themselves for success. How long can one learn nothing before they fall behind those who kept learning?

It’ll be interesting to see


This is a tired, weak, and pathetic argument. Opposition to technology is very reasonable if that technology is doing more harm than good.

In the case of present-day LLMs, the vast majority of the public finds them to be more harmful than beneficial.

Why accept a decreasing quality of life instead of sensible regulation?


> the vast majority of the public finds them to be more harmful than beneficial.

Examples of ridiculous and incorrect beliefs once held by majorities:

- Spontaneous generation

- "Miasma" causes disease

- Earth is at the centre of the universe

- The heart is the seat of thought and the brain is useless

- Cold weather causes colds

Don't trust "the vast majority" to get anything right, ever.


Examples of reasonable beliefs held by the public:

Killing is bad. Kids should be protected.

I mean, you have a point; it’s just not particularly useful or helpful for the conversation.


"Won't somebody think of the children" is constantly used sarcastically in order to dismiss the concerns of people who want to ban something they claim is harmful to children. This is often a completely justified rejoinder - many regulatory policies that thoughtless people argue for in the name of children's safety are counterproductive, disproportional, or otherwise harmful.

I understand your point and clearly see that LLMs cannot be compared to audio ... but ...

Back when I was a kid, music, audio and sound systems had high quality as a standard.

Nowadays people mostly listen to music with Bluetooth headphones, which recompress an already compressed audio signal to send it in low quality. Also, it is more and more difficult to find decent stereos that play music in good quality. Either you pay very high prices for overpriced "audiophile" equipment, or you are stuck with cheap Chinese MP3 players.

Yet society and markets have spoken. Sometimes society is happy to accept marginally worse products in exchange for price and convenience.


What would that sensible regulation look like?

Perhaps your vantage point from industry is in fact myopic. We all have our own biases.

That's a flippant reply.

Programming is a practical skill, and its most common expression is industrial or commercial, not academic proofs of concept. The post addresses students who will enter industry; that's the focus of the professor's own post.

And I sympathize with many of the points being made here. However, the point about refactoring code is somewhat odd and detached from the real-life constraints of programming in the wild.

Like, sure, in the ivory tower, you can confine yourself to nicely bounded problems and tidy little toy POCs. You can survive doing those things, because the selective pressures allow for it. I love those things, personally. They help me understand the nature of the thing. And in an academic setting, you can refine and refactor the hell out of those things to your heart's content (not that there is necessarily an objective end point to refactoring; code organization is subject to goals and constraints which can shift around).

But the reality of software in a commercial setting is not the tidy one you can expect in an academic setting. It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it? Whether you should refactor something is not just a question of whether it suits your conceptual tastes or even whether it is more maintainable. Unlike algorithms and principles and even techniques, software is not eternal. It is ephemeral. Its shelf life is bounded. It is a piece of a larger business process. You're not refining some theory or some grasp of a Platonic ideal. You're mostly just putting into place plumbing to get something done. Whether you should refactor something, when you should refactor something, is a matter of prudential judgement, which is to say, of practical reason.

So, in light of that, these are actually quite absurd things to say given the difference between the privilege of academia and the gritty reality of industrial and commercial software development. If we forced our professor into the world of industry, he would quickly lose his job, or he would quickly learn that some of his strange idealism is silly and detached from the reality his students will face.


  > It's messy, subject to commercial pressures, to a hierarchy of values that doesn't place "refactoring" at the top of the list. And why would it?
Probably because it's a good way to be more profitable.

Code that's easier to understand is easier to maintain, add new features to, fix bugs in, onboard new engineers onto, etc.

Code that's well written executes faster (saving computational costs), scales better, is more robust with higher uptime, reduces bandwidth, and so on.

The thing is the business people will never understand this. Why would they? They're not programmers. They're not in the weeds. But that's what your job is as an engineer. To find all these invisible costs.

I'm pretty confident the industry is spending billions unnecessarily. Hell, I'm sure Google alone is wasting over $100m/yr because of this.

Don't be penny wise and pound foolish. You're smarter than that. I know everyone here is smarter than that. So don't fall for the trap.


Save us the patronizing tone.

I am well aware of stupidity in industry. However, I am also wise enough to recognize the opposite error. (I myself have academic tendencies and a background aligned with that. I have chosen jobs that paid less, because the subject matter was more interesting to me. I'm not some vulgar, money-chasing techbro here.) The via media demands that we recognize the distinction between general truths and practical realities. As I wrote elsewhere in this thread, yes, properly refactored code is easier to maintain, easier to read, easier to change, and theoretically, commercially preferable. It also makes programming more satisfying, helping retention. But that describes a feature of such code. It doesn't tell us what the right course of action is in a particular situation. The notion that refactoring is unconditionally the right course of action when code is not in some ideal state is simply wrong. It really does depend on the situation. Sometimes, refactoring is the wrong thing to do.

I'm not making some outrageous claim here. This follows from basic truths about the nature of what it means to be practical, and if industry is anything, it is practical.


The professor is obviously not advising naive absolutism. He’s saying care deeply about your craft, and good judgement will follow from that.

Actually caring is what gives someone the itch to go back and improve things, versus happily calling it a day once minimum acceptable value has been delivered. The rampant enshittification of basically everything should make it clear which disposition is in short supply.

> Have the courage to go slowly, especially when everyone else is telling you that you need to go fast and cut corners.

The advice is aimed at students who haven’t yet decided which type they want to be. In fact it’s directly telling them to think for themselves and not blindly listen to you or anyone else here making the same case.


  > Save us the patronizing tone.
If you come out swinging, you can't get mad when others swing back. You're not a victim; you're an instigator. You called danny_codes flippant for suggesting there are different biases. You called it absurd. You escalated it. And then you escalated it again.

  > It doesn't tell us what the right course of action is in a particular situation.
That's because there is never an objectively correct course of action. There is no optimal solution. In fact, there can't be when the situation evolves. The objective isn't even defined, let alone well defined. I don't understand your point because no one was suggesting it was always the right answer. Don't strawman here. Of course it depends on the situation, that's true about almost everything. It doesn't need to be said explicitly because it's so well understood. Don't inject absolute qualifiers into statements that don't have them.

  > I'm not making some outrageous claim here.
Your current claim? No. To be frank, you didn't claim much. But your prior claim? Yes. Yes, you were. You were creating strawmen then just as you are now.

  >> Unlike algorithms and principles and even techniques, software is not eternal. 
Not even algorithms are eternal. But I'm going to assume you mean the types of algorithms you see in textbooks, because interpreting "algorithms" by its actual definition makes your comment weird, since all programs are algorithms.

  >> [Software] is ephemeral. Its shelf life is bounded.
And this is going to be something nearly everyone is already going to assume. It doesn't need to be stated. It doesn't need to be differentiated because it is already the working assumption.

  >> You're not refining some theory or some grasp of a Platonic ideal
And this is the real strawman. You've made a wild assumption about what others are claiming. There is such a wide range of viewpoints between "the way things are done now" and "chasing perfection." Anyone who thinks perfection exists in code is incredibly naive. You and I both know this, and so does anyone working in industry or academia (save maybe some juniors). There's a huge difference between saying "this isn't good enough" and "it's not good until it's perfect." If someone talks about climbing a mountain, you can't respond by saying it is impossible to climb to the moon.

  >> Whether you should refactor something, when you should refactor something, is a matter of prudential judgement, which is to say, of practical reason.
Whether you should do anything is a matter of prudential judgement. It's wild to say this while accusing people of chasing perfection. You think people are just yoloing their way to perfection?! Seriously? The article and thread context is literally asking that people use more prudential judgement. To not be myopic. And you have the audacity to say "think about it". What do you think we're doing here?

Completely fair - but at least my PoV comes from having actually worked as a SWE, you know? I feel like the best understanding this fellow can have is purely secondhand from watching the success / failures of his students.

I also think I get doubly upset from advice like this because it’s given and marketed to impressionable young students. Even agreeing with all the moral points he’s made, I truly think this advice would set up a new grad for failure and have them focusing on the wrong skills for this market.

The bit about ignoring trends feels too head-in-the-sand for my liking :/


Fads come and go in industry. This version of LLMs will come and go as well, as will the coding languages and paradigms we used before (and, presuming you want your code to actually run, still do with some decent frequency).

Will LLMs in their current ergonomics have staying power? Perhaps. Nobody can predict the future. But I don’t think it’s a given in the least.


Automatic coding systems have way too much economic value to be considered a "fad". I don't think you need to be Nostradamus to predict that we're never going back to manual coding. Sure, the systems will evolve and improve, but they're certainly not going anywhere.

> Automatic coding systems have way too much economic value to be considered a "fad".

Which is why they very carefully worded it more as 'LLMs in their current form', twice.


Yes, if you stake out an argument carefully enough, you can make its perimeter infinite and its area zero.

How do you know they didn't? My college professor was formerly at NASA, where this stuff is important.

I recognize not everyone's work is [as] important, but we should still strive for excellence (and safety.)


One check of their LinkedIn.

When I started studying CS, the "industry" thought students should be taught COBOL, and maybe some PL/I and Fortran, because obviously that was what the market wanted.

I worked at a FAANG in a senior role for around 6 years and I completely agree with the article. (I left before LLM/agent use became widespread, but I would have flamed out anyway if it was forced upon me.)

It's scary just how quickly the past has been buried: Decades of accumulated insight on best practices, all discarded in service of the new electric Christ.

This hit very close to home. I'm a 44-year-old developer, with a Software Engineering Bachelor's and a CompSci MPhil and PhD. All my life I spearheaded "best practices" and code quality (from Fred Brooks, Joel Spolsky, Martin Fowler, etc...).

But since LLMs arrived... things have become crazy. The layer of "obscurity" that permeates code writing seems to make a lot of those "standards" moot or just not really pragmatically possible to follow.


The blacksmith's lament.

Buddy... The whole point of the post is that he wants his students to question whether "succeeding in this market" is really the right choice.

It's really not though.

The point is to decide what success is for yourself. Learn everything you can about the thing you might decide to automate. But think before you automate, and about how you do so, because it could cause more harm than good.


I was writing a bit of a lengthy reply, but yeah, this is the whole point really.

Making that money, getting that job title, being at that company, working on that project -- are these success?

Or is success simply doing the best job possible when writing code?


The irony is that writing the best code possible is now a recipe for unemployment.

The right choice is rather to strive for perfection - and be unemployed?

To me it was actually not clear what his point was.

"Above all, be motivated by love instead of fear."

Sounds great. But not that practical.


Why isn't it practical? In my life, I've encountered many SWEs who have changed careers. I've met them working as rangers in national parks, in real estate, as grocery store butchers, and as yak ranchers. Yet I've never once encountered a SWE who was doing something non-technical and decided to switch.

Purely anecdotal, I know. But still, I prefer to think that all those people discovered this practical advice and are far happier for it. I've never met one that regretted their decision.


Oh, I would consider becoming a park ranger as well, but as a European, I also did not have to go deep into debt to become a SWE.

And a professor should take that into account and give practical advice. In the real world, solving Haskell challenges (of which the prof is a fan) is unfortunately not that useful. People have real needs for working software that solves their real pain points, not for worship of code quality.

Some projects obviously need better code quality (airplanes, medical equipment...) - but not all of them. And if you want to have sacred code when coding a crude throwaway app... you won't get enough money for that. And positions for academics are limited.


Microslop!

It’s astonishing how bad their software is now. I guess 20 years of outsourcing and bean-counting will do that.

