I was required to use Cursor for my job when I first started, but once I figured out how to use the command line version of Codex, I kind of stopped seeing the point. It just kind of seemed like a bloated, overpriced wrapper around what I could do with the included ChatGPT membership I already had for work.
Maybe I was missing something, but I do not understand how it is worth sixty billion dollars.
The premature optimization that bothers me most is optimizing stuff that really doesn't matter.
For example, in Java I usually use ConcurrentHashMap, even in contexts where a regular HashMap would be fine. My reasoning is simple: I might want to use it in a multithreaded context eventually, and the performance differences really aren't that much for most things; uncontested locks in Java are nearly free.
I've gotten pull requests rejected because regular HashMaps are "faster", and then the comments on the PR end up with people bickering about when to use it.
In that case, does it actually matter? Even if HashMap is technically "faster", it's not much faster, and maybe instead we should focus on the thing that's likely to actually make a noticeable difference, like the forty extra separate blocking calls to PostgreSQL or web requests?
So that's the premature optimization that I think is evil. I think it's perfectly fine at the algorithm level to optimize early.
I can fully understand "bickering" about someone sprinkling their favourite data type over a codebase which consistently used a standard data type before. The argument that it might be multithreaded in the future does not hold if the rest of the codebase clearly was not written with that in mind. That could even be counterproductive, should someone get the misguided idea that it was ready for it.
Make a (very) good argument, and suggest a realistic path to change the whole codebase, but don't create inconsistency just because it is "better". It is not.
I don’t expose it at the boundaries, just within functions. Outside of the function, I take the Map interface in and/or return a Map out.
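A minimal Java sketch of that pattern (the class and method names here are hypothetical, just for illustration): the ConcurrentHashMap stays an implementation detail inside the function, and callers only ever see the plain `Map` interface.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class WordCounts {
    // Hypothetical example: count word occurrences. The ConcurrentHashMap
    // is created and used only inside this method; the signature exposes
    // nothing but the plain Map interface.
    static Map<String, Integer> countWords(Iterable<String> words) {
        Map<String, Integer> counts = new ConcurrentHashMap<>();
        for (String w : words) {
            // merge is atomic on ConcurrentHashMap, so this same code
            // stays correct if the loop is later parallelized
            counts.merge(w, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        Map<String, Integer> c = countWords(List.of("a", "b", "a"));
        System.out.println(c.get("a") + " " + c.get("b")); // prints "2 1"
    }
}
```

The caller can't tell (and shouldn't care) which Map implementation came back, which is the point being made about interfaces below.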
You are communicating with future readers of the code. The presence of ConcurrentHashMap will lead future engineers into believing the code is threadsafe. This isn't true, and believing it is dangerous.
It’s actually a lot better. That’s literally the whole point of interfaces and polymorphism: to make it so the outside does not care about the implementation.
Locks are cheap performance-wise (or at least they can be) but they’re easy to screw up and they can be difficult to performance test.
ConcurrentHashMap has the advantage of hiding the locking from me and, more importantly, of being correct. It can still be used through the same Map interface, so if it's eventually used downstream somewhere, methods like `compute` will work and be thread-safe without my having to manage mutexes.
The argument I am making is that it is literally no extra work to use the ConcurrentHashMap, and in my benchmarks with JMH, it doesn't perform significantly worse in a single-threaded context. It seems silly to use a regular HashMap just to save a nanosecond in most cases.
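For a rough feel for the gap, here is a crude single-threaded sketch (not a substitute for JMH, which handles JIT warmup, dead-code elimination, and GC noise properly); the class name and the million-entry workload are just assumptions for illustration:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class MapTiming {
    // Crude timing helper: fill a map with n entries and return elapsed
    // nanoseconds. Both implementations are exercised through the same
    // Map interface, so the call site is identical.
    static long fill(Map<Integer, Integer> map, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            map.put(i, i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 1_000_000;
        long plain = fill(new HashMap<>(), n);
        long concurrent = fill(new ConcurrentHashMap<>(), n);
        // Exact numbers vary by machine and run; the point is only that
        // the single-threaded gap tends to be small.
        System.out.printf("HashMap: %d ms, ConcurrentHashMap: %d ms%n",
                plain / 1_000_000, concurrent / 1_000_000);
    }
}
```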
Terminology doesn't bother me nearly as much as people defining custom operators.
I used to think it was cute that you could make custom operators in Haskell, but as I've worked more with the language, I wish the community would just accept that "words" are actually a pretty useful tool.
What I also find a bit annoying is that most people just make shit up about stuff that is "faster". Instead of measuring and/or looking at the compiled bytecode/assembly, people just repeat tribal knowledge about stuff that is "faster" with no justification. I find that this is especially common amongst senior-level people at BigCos.
When I was working in .NET land, someone kept telling me that "switch statements are faster" than their equivalent "if" statements, so I wrote a very straightforward test comparing both, and used dotPeek to show that they compile to exactly the same thing. The person still insisted that switch is "faster", I guess because he had a professor tell him this one time (probably with more appropriate context) and took whatever the professor said as gospel.
I've seen a lot of requests to optimize code where we can measure that the optimal version saves a few nanoseconds. I just deleted some 'optimal code' that took a lot of mutexes and so was only faster when there was no contention, but in real-world multi-writer situations the easy code wins. (Shared memory vs local socket for IPC.)
I don't write a lot of super low level stuff, so maybe things are different there, but at least in the normal user space level I've found it pretty rare that explicit mutexes ever beat the performance of an (in my opinion) easier design using queues and/or something like ZeroMQ.
Generally I've found that the penalty, even without contention, is pretty minimal, and it almost always wins under contention.
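In Java terms, the queue-based design described above might look like the following sketch (the class name and the tiny producer/consumer workload are my own assumptions): a bounded blocking queue carries the data, so user code never touches an explicit mutex.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueIpcSketch {
    // Producer pushes 1..5 and then a sentinel; the consumer sums values
    // until it sees the sentinel. The bounded queue does all the blocking
    // for us: no explicit locks appear anywhere in this code.
    static int pipeline() throws InterruptedException {
        BlockingQueue<Integer> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 5; i++) {
                    queue.put(i); // blocks if the queue is full
                }
                queue.put(-1); // sentinel: no more work
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int sum = 0;
        for (int v = queue.take(); v != -1; v = queue.take()) {
            sum += v; // take() blocks until the producer has data ready
        }
        producer.join();
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(pipeline()); // prints 15
    }
}
```

The same shape scales out to ZeroMQ or local sockets when the producer and consumer live in different processes.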
To be fair, the code in question was written many years ago, before anyone I knew had heard of ZeroMQ (it existed but wasn't well known). It would be possible to optimize the mutexes out, I'm sure, but the big problem wasn't speed; it was the complexity of code that I now maintain. Since local sockets are easier and faster, I'm safe deleting the more complex code that should never have been written.
I dunno, my wife has had the same iPhone 11 Pro Max since 2020. She had to get the battery replaced once at an Apple store, which I believe cost $99, and it took like thirty minutes and it wasn't that hard.
I'll admit it's a little annoying that I have to pay a hundred bucks to get the battery replaced, but the phone is otherwise fine and still gets updates, so I don't know that I buy that it's "planned obsolescence".
It's planned obsolescence through price. Your wife paid >50% of the phone's value just to replace the battery. Many people won't think that's worth it. It could have been a $30 user replaceable battery.
But that's not what the regulation is saying, is it?
It says
* replaceable with 'commercially available tools' (which means: Apple could just sell you an 'iPhone battery replacement tool kit' for 1000 Euros)
* has exemptions for high-cycle / long-lived batteries
* ... nothing about the price of the battery (which can be 1000 Euros)
* ... or that the battery/the battery's form factor can't be trademarked, essentially locking you into 'Apple batteries' and preventing aftermarket ones.
Also, I'd rather have a less bulky phone with fewer mechanical parts that can break than a more user-maintainable one. Because of 'high-security' software (think: banking apps, or - I assume - the soon-to-be-released EUId wallet), the thing is basically worthless after four years anyway and needs replacement.
I'd wager that ... nothing at all will change in 2027.
I've had recruiters reach out about jobs because of my fairly high Hacker News karma. This isn't speculation on my end; they actually told me that.
I agree it's idiotic; I'm quite confident that it wouldn't be that hard to cheat this system, and even if there were absolutely no way to cheat it, it's not like Hacker News points translate to smartness; my most upvoted posts have basically nothing to do with software engineering.
Honestly I think the thing that most humans appreciate is effort. Using AI tools is not inherently "bad", but these very-literally mass produced AI songs are almost by definition low-effort and as a result pretty bland and unlikeable.
Digital music has always been fine to me, as long as the song being produced feels like it took a human some amount of effort.
Yeah I agree with that nuance, as I personally enjoy making AI covers of songs I like in genres that I can't produce myself (old vintage blues covers of 80s new wave songs if you must know). It's a fair amount of work prompting and curating (and editing in some cases). I think they are cool and have shared a few, but they do tend to get lumped in with "ai slop" and some people take offense.
I think a lot of people make an assumption that problems like this are fixed-size; that by making getting a song easier, that's the end of the line.
In my mind the better mindset is to think that the problems are not fixed size, and instead these tools can allow for bigger and cooler projects, and/or projects that wouldn't be possible (or at least would be infeasible) without some kind of technological assistance.
AI tools can be used to create slop that is either "bad" or extremely bland at an effectively-infinite speed. It could also be used to make some really cool and interesting stuff if a person is really willing to spend time and effort to make it cool. Usually this requires more than just "prompting" though.
The difference between this and what we are seeing in this article is you aren't sitting down, grinding out dozens/hundreds of these, then spamming them with little to no regard for anyone else for profit. You do it at small scale, for yourself/friends, and clearly care about the results. You are trying to make something intentionally.
Not in my book. I know of a lot of low/no-effort attempts to spam "content creation" channels, with no curation, etc., that I'd call slop before this. I'm trying to use AI to generate something that did not/would not exist otherwise. It's admittedly probably better because it's using human-written lyrics for the covers (and memories?), but to be honest, 80s new wave lyrics can be pretty hokey. "Any AI = slop" is probably more a belief system than an objective measure.
I don't think what you're doing, or at least what you described, is inherently slop. If you're actively putting in effort to make something you think is cool and to make something you're actively proud of (or at least something that you genuinely want people to enjoy), I don't think that's "slop", or at least I don't think it's bad.
It's certainly different than those low-effort channels that mass upload hundreds of videos a day because they're able to automate the entire video-making process; those are completely soulless, again almost by definition. Those exist to just try and effectively skim revenue from adsense (or subscriber revenue in the case of Deezer), and making something that people will actually "enjoy" isn't the purpose.
Of course, this isn't a new problem; I remember a few years ago (before generative AI became viable for this stuff), there were "tutorials" on the best way to upload hours and hours of noise or silent music to Spotify to extract revenue, and of course let's not forget the infamous "Elsagate" stuff that plagued YouTube. AI has maybe accelerated the problem but it certainly wasn't the first thing to create "slop".
I'm hardly the first person to make this point, but AI is a tool. Tools can be good or bad; if AI is a tool that you can use to actively help you be more creative then I don't think that's bad. If you're just generating something to pad a resume or extract ad revenue, that's slop.
I dunno; there have always been shit videos on YouTube, obviously, but there used to be a sort of natural filter: videos with nice transitions, decent narration, and dialog that was more or less grammatically correct, which made it so that I mostly watched videos I enjoyed.
Now that AI has cargo-culted these traits I'm getting a lot of recommendations of videos that will initially seem "ok", and then I realize after about a minute that the narration will have some weirdness, and the script will have a lot of the typical ChatGPT "tells", and of course the video comes off as pretty low effort after that.
My YouTube recommendations have become increasingly useless, which honestly might be a good thing because it's made it so that I have less desire to use YouTube.
The weirdness is creeping into regular YouTube content too. For example, I like to watch Ryan Hall's stream during extreme weather (tornado season in the US). In his forecast videos he has to start with something weird to prove to the audience they're not watching a fake AI-generated channel, like eating a banana or an apple while talking and waving the fruit around. It was very strange until I realized what he was doing. He also started wearing a suit, which is very out of character for him; that must also confuse AI trained on his previous videos.
The first is AI-generated content. This can start with nothing more than an idea. Some of it is uniquely-presented stuff that's actually kind of interesting: I got sucked into a nice Ken Burns-style narrated documentary about the rise and fall of Baldwin Piano a few weeks ago. It was a little wordy, but it worked. It took a while before a very glaring error in diction made me rewind for a double-take, note that no human would ever make that mistake while narrating, and then burn the channel from my feed.
The second problem is very different: Cloning individual people and channels. When a person (or nearly as likely, a bot) elects to use a bot to clone someone else's style, persona, and everything else then that's... that's very unsettling.
---
The first problem? It's whatever. I don't like it, but there may come a time when I accept it. At this point it's mostly harmless and really guilty of nothing more than wasting some of my time now and then.
The second problem? It can be reprehensible.
And it's particularly bad with a channel like Ryan Hall. I don't have any idea of how he is as a person (never meet your heroes), but I like to presume that he's generally a swell guy. And moreover: He's important.
When the weather turns iffy, I put his stream on and it's mostly just background noise. I usually give it very little attention.
But when he mentions the name of the small city I live in then that means that shit is just about to get very real here -- very soon. That's astoundingly useful to me, and the safety of the people I care about.
I also find a lot of value in obvious parody. It can be fun, and it can make people think. The music of Weird Al or There I Ruined It, the crazy stories in The Onion, the memes. That's all good. But this Ryan Hall business? It's bad.
So, there's definitely a line.
And I don't know where the line should be drawn. But using bots to deceive and thereby dilute the value of the content of Ryan Hall's channel is definitely on the wrong side of that line.
Maybe this will become The Year of the Self Hosted.
Stuff where I don't particularly care about privacy I've kept on the cloud (e.g. my blog, which is public anyway and as such is probably training bots regardless), but stuff that I don't want used to train their models and/or sold to advertisers I have moved to be self-hosted on my own network.