It comes across as so incredibly insane to me that people from the late 80s (people working with computers! Reporting on them!) would look at their current technology stack and basically go: "I have no idea whatsoever what else we can do with these things; we've reached the end."
On the other end, you have people who have no idea how insanely fast computers are today, how little computing power is "really" needed for most things that computer users do, or how much you can do with one average machine ("Oh no, 1000 requests per second - let's erect another Rube Goldberg machine to handle that!").
The 80s and 90s were filled with new things computers could do - spreadsheets, WYSIWYG word processors, games - things that simply were impossible before (or at least not done).
In the 2000s through now we've mostly had improvements - 4K YouTube is much better than RealPlayer, but it's still just "online video". AI is definitely a "new" thing, and it has somewhat reawakened a spirit similar to the 80s/90s - but not with the same breadth. Think of Dad bringing home a computer because he wants to do spreadsheets, and you finding out it can run Doom or even play music.
I was born in 1981 and my first computer was an Amstrad 1512 IBM XT clone. I then had a 386SX-16 in 1991 and a 486DX2-66 in 1994.
Anyway, a while ago I was reading an article by a guy who lived through the same era I did, in which he laughed at modern developers he had asked to size a machine to add all the integers from 1 to 100. Setting aside that seven-year-old Gauss supposedly found the closed form of that sum (the triangular number formula) in about ten minutes and got the correct result of 5050 without any of the arithmetic busywork, it's totally insane what some of the answers involved... with some involving the terms "Big Data" (yes, it was that era of hype, before "Crypto" and "AI") and some even (allegedly) mentioning 'clusters'. I really wish I could find you a link.
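For scale, the entire "workload" fits in a single expression on any machine made in the last few decades. A quick illustrative sketch (Python, purely for the sake of example):

    # Gauss's closed form: sum of 1..n == n * (n + 1) / 2
    n = 100
    closed_form = n * (n + 1) // 2       # 5050, no loop needed at all
    brute_force = sum(range(1, n + 1))   # 5050, still takes microseconds
    assert closed_form == brute_force == 5050

No "Big Data", no clusters.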
The first 80286-based system (the IBM PC AT), the first 80386 system (the Compaq Deskpro 386), and the first 80486 systems all had people writing about their suitability as servers, with the implied consensus being that normal people didn't need them.
The Pentium is the first one, I think, where this didn't happen, because by then it had turned out that people need a computer that can do what they're already doing, but faster, much more often than they need servers.
All we really have to look forward to in the future of increasing-performance personal computing is doing the same things as yesterday, but doing them faster.
The future after today will probably turn out more interesting than that, of course, but we can't know that until it happens.
And the future after 1988 certainly turned out to be a very interesting time in computing -- but they had no idea what was in store. Perhaps you can use your time machine to go back and let them know?
I'm sure it'll be better than Opus 4.6, but so much of this seems like hype. Escaping its sandbox, having to do "brain scans" because it's "hiding its true intent", bla bla bla.
If it manages to work on my Java project for an entire day without me having to say "fix FQN" 5 times, I'll be surprised.
I have the 20x plan and use it together with my husband. Four days into our weekly usage window and we're only at 54% (and we both use it the entire day).
I have no idea how people are hitting the limits so fast.
Still using my RX 5700 XT. The amdgpu driver had a major issue resuming from suspend a few months ago[0], but other than that, I'm not aware of (nor have I experienced) any stability issues. Maybe you had a bad card.
I don't like to spend much on hardware, so I bought a 5700 XT a few years ago and run a "Steam machine" of sorts. Never had any Linux-related problems.
Perhaps. The word does have its own token: " geschniegelt" (geschniegelt with a space in front of it) is token 192786 in the tokenizer that GPT-5 apparently uses.
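If anyone wants to check that themselves, something like this works with the tiktoken library; I'm assuming o200k_base is the relevant encoding here, which may or may not be exactly what GPT-5 uses:

    import tiktoken

    # Assumption: o200k_base is the right vocabulary; GPT-5's actual
    # tokenizer may differ.
    enc = tiktoken.get_encoding("o200k_base")
    ids = enc.encode(" geschniegelt")
    print(ids)                             # a single id if the word really has its own token
    print([enc.decode([i]) for i in ids])  # shows how the string gets split, if at all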
It probably just means the tokenizer's training corpus included a massive amount of German text, or accidentally oversampled a web page where that word was repeated over and over. Look up "glitch tokens" to learn more.
I've been running my own (admittedly naïve) experiments with new, wacky ideas for both LLMs (well, SLMs) and for image-super-resolution models.
I'm just trying different kinds of attention mechanisms, different configurations of the network, adding loops... all kinds of wacky ideas. And the really weird thing is that 99% of the ideas I try work at all.
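To give a flavour of what I mean by swapping attention mechanisms, here's a toy PyTorch sketch (illustrative only, not my actual experiments; the linear-attention variant and every name in it are made up for the example):

    import torch
    import torch.nn as nn

    class LinearAttention(nn.Module):
        """A linearized attention variant: softmax is applied to q and k
        separately instead of to the full q @ k^T matrix."""
        def __init__(self, dim, heads=4):
            super().__init__()
            self.heads = heads
            self.qkv = nn.Linear(dim, dim * 3, bias=False)
            self.out = nn.Linear(dim, dim)

        def forward(self, x):
            b, t, d = x.shape
            h = self.heads
            q, k, v = self.qkv(x).chunk(3, dim=-1)
            # (b, t, d) -> (b, h, t, d // h)
            q, k, v = (z.view(b, t, h, d // h).transpose(1, 2) for z in (q, k, v))
            q, k = q.softmax(dim=-1), k.softmax(dim=-2)
            ctx = k.transpose(-2, -1) @ v              # (b, h, d//h, d//h)
            y = (q @ ctx).transpose(1, 2).reshape(b, t, d)
            return self.out(y)

    class Block(nn.Module):
        """A tiny pre-norm transformer block with a pluggable attention module."""
        def __init__(self, dim, attn: nn.Module):
            super().__init__()
            self.attn = attn                           # swap in whatever mechanism you're testing
            self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
            self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

        def forward(self, x):
            x = x + self.attn(self.norm1(x))
            return x + self.mlp(self.norm2(x))

    x = torch.randn(2, 16, 64)
    print(Block(64, LinearAttention(64))(x).shape)     # torch.Size([2, 16, 64])

The point is just that the attention module is a drop-in argument, so each wacky idea only touches one class.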
There are production stills that are used like a slide show and combined with the recorded audio.
Certain episodes have been reconstructed using animation such that the basic scene blocking and events are played out alongside the recorded audio.