
Show me a graph of your javelin skill doubling every six months and I'll start asking myself if you'll be the next champion

I could easily make that graph a reality and sustain that pace for a couple years, considering I'm starting from 0 javelin skill.

You could also nerf your performance at random times and then get good at it again, and extend the illusion for longer.

It is a simple mathematical fact that if you get married one year and have twins the next, your household will contain over a million people within 20 years.
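The extrapolation in the joke can be checked directly: treat the marriage and the twins as the first doublings, then (absurdly) assume the household keeps doubling every year for 20 years. A minimal sketch:

```python
# Naive doubling extrapolation: start from one person and double
# the household size once per year for 20 years.
household = 1
for year in range(20):
    household *= 2

print(household)  # 2**20 = 1048576, i.e. "over a million people"
```

Which is exactly the point: a briefly observed doubling rate tells you nothing about whether the trend can be sustained.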


It didn't use to be regulated. How do you explain the fact that not everyone used to be an opiate addict?

Because we regulated it when it got bad. Other countries have had opioid epidemics and they've had to intervene. China is a very famous example, because the British didn't like the crackdown as it affected other trade.

We regulated it as a means to police certain communities.

Hell, Billie Holiday suffered from addiction, but that bastard Harry Anslinger ordered doctors not to treat her and not to provide her with methadone. She died with police stationed at her hospital room door. The FBI also harassed her over songs about lynching.

Nixon is literally on tape saying to go after weed and heroin more harshly as an excuse to arrest more anti-war protesters and civil rights protesters.

I'd argue "cracking down" has done little to control drug abuse and has primarily been a method for selective policing. Particularly in the States.


But how do you explain the fact that, before it got bad, it wasn't /already/ bad? You've got to have some model for why there was a change, if the rate wasn't previously at 100% and the claim is that the number "should be 100% unless it was regulated".

> Barely passing body-text contrast in dark themes

This has been killing me recently. Apparently I need slightly higher contrast than some people, and these vibe-coded UIs are basically unreadable to my eyes.


Yeah, when I'm writing code I try to avoid zeros and ones, since those are the most common bits, making them essentially noise


Do you? What's the technical detail here? Why can't you get the model's prediction, even for that first token?


I mean, mathematically you need at least one vector to propagate through the network, don't you? That would be a one-hot encoding of the starting token. It's actually interesting to think about what happens if you make that vector zero everywhere.

In the matmul, a zero input vector just zeroes out every product, so the output is zero too. Older models would still have bias vectors, but I think recent models don't use those anymore. So the logits would all be zero, and softmax over all-zero logits would give a uniform distribution over the vocabulary (not zero probability for each token, since the probabilities have to sum to 1), if I'm not mistaken.
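A minimal sketch of that last step, with a hypothetical 4-token vocabulary and 8-dim hidden state (the weight values are arbitrary, since they are multiplied by zero anyway). With an all-zero input and no bias, every logit is 0, and softmax over equal logits is uniform:

```python
import math

# Toy unembedding matrix: 4 vocab tokens x 8 hidden dims.
# The actual values don't matter here; zero input kills them all.
W = [[0.3, -1.2, 0.7, 0.1, -0.5, 2.0, -0.9, 0.4],
     [1.1, 0.2, -0.8, 0.6, -1.4, 0.9, 0.3, -0.2],
     [-0.6, 0.5, 1.3, -0.7, 0.8, -1.1, 0.2, 0.9],
     [0.4, -0.3, 0.1, 1.2, -0.9, 0.7, -1.5, 0.6]]
h = [0.0] * 8  # all-zero hidden state

logits = [sum(w * x for w, x in zip(row, h)) for row in W]  # [0, 0, 0, 0]
exps = [math.exp(z) for z in logits]
probs = [e / sum(exps) for e in exps]

print(probs)  # uniform: [0.25, 0.25, 0.25, 0.25]
```

In practice a real model would also apply layer norm, which is ill-defined on an all-zero vector, so the uniform-logits picture only holds for this simplified bias-free linear view.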



OpenAI wants to not be responsible for "accidents" that kill more than 100 people, despite some critics arguing that their current actions are likely to cause such harms.


Why are people dangerous? You can just not listen to them.


Do you have locks on your doors?


Why isn't the AI story believable? It seems to me that AI is getting more and more productive


Sure, but the low-hanging fruit is mostly picked, so what else is driving the idea of _job replacement_ if the next branch up the tree is 3-5 years out? I've seen very little beyond tooling empowering existing employees: no major jump in productivity, and nothing close to job replacement (for technical roles). Often it's still accruing various forms of technical debt, other debts, or complexity. Unless these are the 1% of non-technical roles, it doesn't make much sense except as their own internal projection for this year in terms of the broader economy. Maybe because they have such a large ship to turn, they need to plan 2-3 years out? I don't get it; I still see people hiring technical writers on a daily basis, even. So what's getting cut there?


Is there any quantitative evidence for AI increasing productivity? Other than AI influencer blog posts and pre-IPO marketing from AI companies?


What exactly would that evidence look like, for you?

It definitely increases some types of productivity (Opus one-shot a visualization that would have likely taken me at least a day to write before, for work) - although I would have never written this visualization before LLMs (because the effort was not worth it). So I guess it's Jevons Paradox in action somewhat.

In order to observe the productivity increases, you need a measure at a scale where the productivity would really matter (the same way that when a benchmark is saturated, like the AIME, it stops telling us anything useful about model improvement).


> What exactly would that evidence look like, for you?

https://fred.stlouisfed.org/series/MFPPBS https://fred.stlouisfed.org/series/OPHNFB

Productivity is by definition real output (usually inflation adjusted dollars) per unit of input. That could be per hour worked, or per representative unit of capital + labor mix.

I would accept an increase in the slope of either of these lines as evidence of a net productivity increase due to artificial intelligence (unless there were some other plausible cause of productivity growth speed up, which at present there is not).
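The proposed test is essentially a before/after comparison of the slope of a productivity index. A minimal sketch with made-up numbers (not actual FRED data), using 2022 as a hypothetical adoption year:

```python
# Hypothetical productivity index values by year (illustrative only).
index = {2019: 100.0, 2020: 101.5, 2021: 103.1, 2022: 104.6,
         2023: 106.2, 2024: 107.8}

def cagr(start_year, end_year):
    """Compound annual growth rate of the index between two years."""
    ratio = index[end_year] / index[start_year]
    return ratio ** (1 / (end_year - start_year)) - 1

before = cagr(2019, 2022)  # pre-adoption trend
after = cagr(2022, 2024)   # post-adoption trend

# The claim would only be supported if `after` clearly exceeds `before`.
print(f"before: {before:.2%}, after: {after:.2%}")
```

With these numbers the two rates are essentially identical (about 1.5% per year), which is what "no slope change" looks like; the argument is that the real series so far look like this, not like an inflection.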


There are two sides to this that I see:

First, I'd expect the trajectory of any new technology that purports to be the next big revolution in computing to follow an adoption and productivity pattern similar to that of desktop computing, such as the 1995-2005 period[0]. There has been no indication of such an increase since 2022[1] or 2023[2]. Even the most generous estimate, from Anthropic itself in 2025, says the following:

>Extrapolating these estimates out suggests current-generation AI models could increase US labor productivity growth by 1.8% annually over the next decade[3]

Which not only assumes the best-case scenarios, but would still fail to eclipse the peak productivity gains of computer adoption over a similar period, 1995-2005, at around 2-2.5% annually.

Second is cost. The actual cost of these tools is multiples more expensive than it was to adopt computing en masse, especially since 1995. So any increase in productivity they are having is not driving overall costs down relative to the gains, in large part because you aren't seeing any substantial YoY productivity growth after adopting these AI tools. Computing had a different trend, as not only did it get cheaper over time, the relative cost was outweighed by the YoY increase of productivity.

[0]: https://www.cbo.gov/sites/default/files/110th-congress-2007-...

[1]: First year where mass-market LLM tools started to show up, particularly in the software field (GitHub Copilot launched in 2021, for instance)

[2]: First year where GPT-4 showed up and really blew up awareness of LLMs

[3]: https://www.anthropic.com/research/estimating-productivity-g...


Well, you would think that if there were increased productivity, there would be at least a couple of studies, some clear artifacts, or increased quality of software being shipped.

Except all we have is "trust me bro, I'm 100x more productive" Twitter/blog posts, blatant pre-IPO AI company marketing disguised as blog posts, studies that show AI decreases productivity, increased outages, more CVEs, anecdotes without proof, and not a whole lot of shipped software.


If that's the case, I feel like you couldn't actually be using them or paying attention. I'm a big proponent and use LLMs for code and hardware projects constantly, but Gemini Pro and ChatGPT 5.2 are both at probably the worst state we've seen. Six months ago I was worried, but at this point I have started finding other ways to get answers. Going back to the stone tablets of googling and looking at Stack Overflow or Reddit.

I still use them but find that more of the time is spent arguing with it and correcting problems with it than actually getting any useful product.


> I still use them but find that more of the time is spent arguing with it and correcting problems with it than actually getting any useful product.

I feel the same. They're better at some things, yes, but also worse at others. And for me, they're worse at my really important use cases. I could spend a month typing prompts into Codex or Antigravity and still be left holding the bag. Just yesterday I had a fresh prompt and Gemini bombed super hard on some basic work, insisting the problem was X when it wasn't. I don't know. I was super bullish, but now I'm feeling far from sold on it.


AI is definitely able to sling out more and more lines of code, yes. Whether those LOC are productive...?


Tomorrow's Calc app will have 30mil lines of code and 1000 npm dependencies!


and 2+2 will output 4 almost all the time... just like a human would.


What's your point?

