Hacker News | skerit's comments

I'll believe in this miracle model when I see it.

It seems so incredibly insane to me that people from the late 80s (people working with computers! Reporting on them!) would look at their current technology stack and basically go: "I have no idea whatsoever what else we can do with these things; we've reached the end."

The lack of imagination is just disturbing.


On the other end, you have people who have no idea how insanely fast computers are today, and how little computing power is "really" needed for most things that computer users do - or how much you can do with one average machine ("Oh no, 1000 requests per second - let's erect another Rube Goldberg machine to handle that!").

The 80s and 90s were filled with new things computers could do - spreadsheets, wysiwyg word processors, games - things that simply were impossible before (or not done).

In the 2000s through now we've mostly had improvements - 4k Youtube is much better than realplayer, but it's still just "online video". AI is definitely a "new" thing and it's somewhat awoken a similar spirit to the 80s/90s - but not the same breadth. Dad bringing home a computer because he wants to do spreadsheets and you finding it can run DooM or even play music.


I was born in 1981 and my first computer was an Amstrad 1512 IBM XT clone. I then had a 386SX-16 in 1991 and a 486DX2-66 in 1994.

Anyway, a while ago I was reading an article by a guy who lived through the same era I did, laughing at modern developers he had asked to size a machine to add all the integers from 1 to 100. Setting aside that the 7-year-old Gauss found the closed form of that sum (the triangular-number formula) in about ten minutes and got the correct result of 5050 without any of the arithmetic busywork, it's totally insane what some of the answers involved… with some involving the term "Big Data" (yes, it was that era of hype, before "Crypto" and "AI") and some even (allegedly) mentioning 'clusters'. I really wish I could find you a link.
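For reference, the closed form is trivial to check with a throwaway sketch, no cluster required:

```python
n = 100
closed_form = n * (n + 1) // 2      # Gauss's triangular-number formula
brute_force = sum(range(1, n + 1))  # the naive loop, still microseconds
print(closed_form, brute_force)     # 5050 5050
```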


The first 80286-based system (the IBM PC AT), the first 80386 (the Compaq Deskpro 386), and the 80486 all had people writing about their suitability as servers, the consensus implying that normal people didn't need them.

The Pentium was the first one, I think, for which this didn't happen, because by then it had turned out that people need a computer that can do what they're currently doing, only faster, much more often than they need servers.


That's not so different from today, wherein:

All we really have to look forward to in the future of increasing-performance personal computing is doing the same things as yesterday, but doing them faster.

The future after today will probably turn out more interesting than that, of course, but we can't know that until it happens.

And the future after 1988 certainly turned out to be a very interesting time in computing -- but they had no idea what was in store. Perhaps you can use your time machine to go back and let them know?


Um. That never happened. No-one ever felt that. Not a soul.

Everyone - everyone knew it was the start of a revolution.


I'm sure it'll be better than Opus 4.6, but so much of this seems like hype. Escaping its sandbox, having to do "brain scans" because it's "hiding its true intent", bla bla bla.

If it manages to work on my java project for an entire day without me having to say "fix FQN" 5 times a day I'll be surprised.


I have the 20x plan and use it together with my husband. Four days into our weekly usage window, we're only at 54% (and we both use it the entire day).

I have no idea how people are hitting the limits so fast.


I retired my 5700 XT a few years ago. Wasn't there some kind of hardware problem with it? It kept locking up my Linux kernel.

Still using my RX 5700 XT. The amdgpu driver had a major issue resuming from suspend a few months ago[0], but other than that, I'm not aware of (nor have I experienced) any stability issues. Maybe you had a bad card.

0: https://gitlab.freedesktop.org/drm/amd/-/issues/4531


I don't like to spend much on hardware, so I bought a 5700 XT a few years ago and run a "steam machine" of sorts. I've never had any Linux-related problems.

Neat. Is it a single under-trained token in GPT-5.2? Or is something else going on?


Perhaps. The word does have its own token: " geschniegelt" (geschniegelt with a space in front of it) is token 192786 in the tokenizer that GPT-5 apparently uses.

https://raw.githubusercontent.com/niieani/gpt-tokenizer/refs...


Isn't giving this word its own token deeply wasteful, when some more common strings take multiple tokens?

Indeed, how do they deal with Chinese? Are some ideograms multiple tokens?
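If I understand byte-level BPE (the GPT family's approach) correctly, the tokenizer starts from UTF-8 bytes, so a character's worst case is one token per byte; merges learned from frequent text then collapse common sequences into single tokens. A CJK ideogram is three bytes, so a rare one can cost up to three tokens while a frequent one may merge into a single token. The byte counts themselves are easy to check:

```python
# UTF-8 byte counts: the worst-case token cost for a byte-level BPE
# tokenizer with no merges covering the character.
for ch in ["a", "é", "中"]:
    print(repr(ch), len(ch.encode("utf-8")), "byte(s)")
# 'a' is 1 byte, 'é' is 2 bytes, '中' is 3 bytes
```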


It simply means the tokenizer's training corpus may have included a massive amount of German literature or accidentally oversampled a web page where that word was frequently repeated. Look up "glitch tokens" to learn more.


Based on their tokenizer tool[1], for GPT 5.x "geschniegelt" is tokenized into three tokens:

  (ges)(chn)(iegelt)
[1]: https://platform.openai.com/tokenizer


It's a single token in the most common usage, that is, with a space in front of it.

"This word is geschniegelt" is [2500, 2195, 382, 192786]

Last token here is " geschniegelt"


Maybe this is why? Most of the training data has the single-token version, so the three-token version was undertrained?
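That would match how BPE training works: merges are learned greedily from pair frequencies, so an oversampled string collapses into a single token. A toy character-level sketch of the merge rule (not OpenAI's actual tokenizer, just an illustration of the principle):

```python
from collections import Counter

def learn_bpe(corpus, num_merges):
    """Toy byte-pair encoding: repeatedly merge the most frequent
    adjacent pair of tokens, starting from single characters."""
    tokens = list(corpus)
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        out, i = [], 0
        while i < len(tokens):
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == (a, b):
                out.append(a + b)   # apply the merge, left to right
                i += 2
            else:
                out.append(tokens[i])
                i += 1
        tokens = out
    return tokens

# A string oversampled in the corpus collapses into a single token,
# leading space included, just like " geschniegelt" apparently did.
toks = learn_bpe(" geschniegelt" * 50, 11)
print(set(toks))
```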


I've been running my own (admittedly naïve) experiments of new, wacky ideas for both LLMs (well, SLMs) and for Image-Super-Resolution models.

I'm just trying different kinds of attention mechanisms, different configurations of the network, adding loops... all kinds of wacky ideas. And the really weird thing is that 99% of the ideas I try work at all.


Do we also have stills of all the episodes? Or only audio?


There are production stills that are used like a slide show and combined with the recorded audio. Certain episodes have been reconstructed using animation such that the basic scene blocking and events are played out alongside the recorded audio.


This is kind of what LoopLM is doing, no? https://arxiv.org/abs/2510.25741


Thanks, this is cool.


Gemini might be great at benchmarks, but it's terrible at actual agentic coding, so Anthropic seems like the more logical choice.


The particulars don't matter. OpenAI will never do this.

