It's ridiculous to me that they're concerned about the trustworthiness of AI-generated code when their code quality is so low. They don't even have automated tests and ignore attempts to add them.[0, 1, 2, 3]
Last I checked, there's little validity checking in the code, so it's possible to broadcast nonsense values (like GPS coordinates outside of Earth's bounds) and the code happily accepts it.
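The kind of bounds check being described is tiny. Here's a minimal sketch (names are illustrative, not from the MeshCore codebase) of rejecting advertised positions that lie outside Earth's coordinate range before accepting them from an untrusted peer:

```python
def is_valid_position(lat: float, lon: float) -> bool:
    """Latitude must be in [-90, 90], longitude in [-180, 180]."""
    return -90.0 <= lat <= 90.0 and -180.0 <= lon <= 180.0

assert is_valid_position(47.6, -122.3)    # a plausible position: accept
assert not is_valid_position(91.0, 0.0)   # latitude off the planet: reject
assert not is_valid_position(0.0, 200.0)  # impossible longitude: reject
```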
And that's fine if they're just like a scrappy upstart doing their best, but it annoys me to be so high and mighty about their code quality when they don't invest in it.
I really want to like MeshCore but I feel like its stewardship makes it hard. The main two people I know running it are Scott Powell and Liam Cottle, both of whom are trying to build businesses on closed-source layers on top of the firmware. I don't think there's anything wrong with an open-core business model (I ran such a business myself), but it creates perverse incentives where the core maintainers try to suppress information about the open-source alternatives and push their own closed-source paid products.
Also, MeshCore's recommended broadcast settings for the US are illegal.[4] I emailed Liam and Scott about this months ago, and they ignored me.
Wow, #4 is frustrating. (Disclosure: am a ham, but not one of the uptight ones. I'm not personally offended when someone breaks the rules, and I'm not gonna run off and call the FCC or something. But I am concerned when they don't seem to know or care why.)
First, I don't know if their interpretation of the rules is correct. For the sake of argument, I'll assume it is. More importantly, most other people in that thread seem to be going along with the idea that it is correct. This is how it reads to me:
Submitter: We're violating the rules and should make this change.
Replier 1: That change would be inconvenient in Seattle so we're not doing it.
Replier 2: It wouldn't work well in Boston, either, so it's a no-go.
Part of me wants to shake them. This isn't 'Nam. There are rules. Whatever you think about the FCC regulations, they're not voluntary, and they certainly don't have an opt-out for "it wouldn't work as well that way". To a first approximation, everyone else using the public airwaves is more or less following the law. If following the law makes your project not work as well, that's your problem. It's on you to fix your project so that it's legal to use.
I'm not one of those old hams who gets hyper cranky about this stuff, but I do understand how they come to be that way. The only reason we can use the spectrum at all is that people are mostly using it legally so that their work isn't interfering with everyone else trying to use the same public resource.
This is also a loaded question. The only specifics they've offered are that he simply used Claude Code. Um... OK? Do the tests pass? Did his changes add any security flaws or untested regressions?
What do you mean? Is the non-existent millionth floor of the Empire State Building still part of the Empire State Building?
Also, I'm assuming we're in agreement that software should not accept invalid GPS coordinates from untrusted peers regardless of semantics about whether or not they're within Earth's bounds.
I can't read that without assuming the real intent is to deliver bombs accurately, but the startup pitching it knows that'd get bad press, and the investors all know exactly what it really means...
Meshtastic/MeshCore have nothing valuable to offer in terms of delivering bombs accurately. Moreover, militaries already have access to much more robust radio messaging hardware and protocols for data and location transmission.
The main reason both Meshtastic and MeshCore have location data as a part of the protocol is because they emerged from the Ham community which has always taken its role in search and rescue seriously, and because it also appealed early on for other off-grid uses like hiking.
> It's ridiculous to me that they're concerned about the trustworthiness of AI-generated code when their code quality is so low.
Agreed, but at least it's somewhat sensibly structured. AI? Good lord you'll end up with a slopaghetti mess.
> They don't even have automated tests and ignore attempts to add them.[0, 1, 2, 3]
Two people, 540 issues and 270 PRs open at the moment. Not wanting to be that guy... but do the math. The reviewer team is small as hell and after this drama (which probably kept both of them busy with BS) they'll likely be even less willing to trust others.
If you want to stand a better chance at getting your code into other people's hands, go and contact the person behind the Evo fork. IIRC he's part of Hansemesh, Germany's biggest regional MC.
I have heard indirectly multiple times now that the only two ways to get a PR of interest merged are to either gather enough people to Like the issue on GitHub or to join the Discord and ask.
I don't understand what you mean. What separates this from other fingerprinting techniques your company monetizes?
No software wants to be fingerprinted. If it did, it would offer an API with a stable identifier. All fingerprinting is exploiting unintended behavior of the target software or hardware.
It makes sense to me, they're likely not trying to actually fingerprint Tor users. Those users will likely ignore ads, have JS disabled, etc. the real audience is people on the web using normal tooling.
They can just flag all Tor users as high risk. They don't strictly need to fingerprint them when it's generally fine for websites to just block signups for Tor users or require further identification via phone number or something.
You want fingerprinting to identify low risk users to skip the inconvenient security checks.
Most users seem to not care about ad tech/tracking as much as technical users. Even further, most seem to want to enable more tracking to [protect the children or whatever the reason is] pretty regularly (at least in opinion polls about various legislation). Tor users are not at all like that, and could be harmed in a very different way... so I think it's fair to frame them differently, even if I'd personally say people should want to treat both as similar offenses, because neither should be seen as okay in my eyes.
> Most users seem to not care about ad tech/tracking
I don't think this is true.
Most people don't understand that they're being tracked. The ones that do generally don't understand to what extent.
You tend to get one of two responses: surprise or apathy. When people say "what are you going to do?", they don't mean "I don't care"; they mean "I feel powerless to do anything about it, so I'll convince myself not to care or think about it". Honestly, the interpretation is fairly similar for when people say "but my data isn't useful" or "so what, they sell me ads (I use an ad blocker)". Those responses are mental defenses to reduce cognitive overload.
If you don't buy my belief, then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast." The number of people that are going to be okay with that will plummet as soon as you change it from "Meta" to "some guy named Mark". You'll still get nervous jokes of "you're wasting money, I'm boring", but do you think they wouldn't get upset if you actually hired a PI to do that?
The problem is people don't actually understand what's being recorded and what can be done with that information. If they did, they'd be outraged, because we're well beyond what 1984 proposed. In 1984 the government wasn't always watching. The premise was more about a country-wide Panopticon: the government could be watching at any time. We're well past that. Not only can the government and corporations do that, but they can look up historical records, and some data is always being recorded.
So the reason I don't buy the argument is because 1984 is so well known. If people didn't care, no one would know about that book. The problem is people still think we're headed towards 1984 and don't realize we're 20 years into that world.
> As soon as you change it from "Meta" to "some guy named Mark".
There is a huge difference between those.
If someone hires a PI to follow me, they are spending something like $10,000/week on that. Which means their expected value is more than that, or the PI will never pay for itself. Where will this value come from? Likely from me; after all, it's me they are tracking. So I am really worried, as I am about to lose a huge amount of money (or something else valuable).
On the other hand, if a store installs a whole bunch of cameras so I am tracked anytime I am in there, then it probably costs them only a few cents to track me. So I really don't worry much about losing anything valuable.
> If you don't buy my belief then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person.
This is exactly what I was saying - if you look at the polls, people actually tend to support things like the UK's Online Safety Act. Explaining it more does not usually result in a change of that. The difference with a PI is you're asking about them individually instead of everyone - of course they trust themselves, they just want everyone surveilled for that same feeling of confidence.
> If you don't buy my belief then reframe the question to make things more apparent. Instead of asking people how they feel about Google or Meta tracking them, ask how they feel about the government or some random person. "Would you be okay if I hired a PI to follow you around all day? They'll record who you talk to, when, how long, where you go, what you do, what you say, when you sleep, and everything down to what you ate for breakfast."
Yes and no, because people will still think that when it's done at scale it's different from some stalker following YOU explicitly rather than just following everybody. Also, the mental model is "they just want to sell me something, but I can just ignore it and not buy if I'm not really interested". And going down this second rabbit hole especially opens a whole world about consumerism that not many people are comfortable with.
At the same time there are people that are totally against consumerism that should be more informed and care more about tracking and privacy; with those people it's probably easier to have that conversation.
Some good counterpoints. But you're suggesting more people would be okay with the 'PI following them' hypothetical than GP suggests, simply with the knowledge that others are subject to the same degree of surveillance?
I'm not so sure that counterpoint in particular holds. I think to say the "number of people that are going to be okay with that will [still] plummet" is an understatement. I'd go so far as to say no one, at least no rational person, would be okay with a "record [of] who you talk to, when, how long, where you go, what you do, what you say, when you sleep", etc., just because of the scale.
Let me focus it from a slightly different side: my belief - from observing the world around me - is that a physical privacy violation is perceived differently from a software one because of the side effects: you gaze out of your window and see the same car with some guy in it parked there, you see the same car following you when you are going to the mall, etc. There is a similar side effect with online tracking, which is the typical "ad in my Instagram feed for something I searched for last week in Google", and there are people that are "scared" by this. But since it's just about buying things, well hey, I might actually tap on that Instagram ad!
I see some success by telling people "what if it was our government doing the same thing to us, even by extorting private companies? What if that same government, or the next one, just hates you for whatever reason?"
I take your point about the 'abstract' nature of online privacy. But another angle might be suggesting to those that are ambivalent on the issue that the pervasive (and for all intents and purposes, permanent) recordkeeping nature of 'software surveillance' should be much scarier than some guy sitting outside. I mean, at the very least, even with some guy sitting outside, you'd still have privacy inside.
But again, I hear you. Most people unfortunately have come to view the issue as being just about targeted advertising (which some go so far as to espouse as a good thing).
This is a lot of text to say that people don't recognize digital tracking as a threat, even when it is explained to them. Which is basically exactly what parent post you replied to said.
My read of the comment is that it's almost never actually fully explained to them, and that they would almost certainly care if they actually understood what was happening. That's my experience. Once you explain that it's more information than a private investigator tailing you all day or stealing your phone could gather, people usually wise up to the fact that they actually don't like it.
> Most users seem to not care about ad tech/tracking as much as technical users.
Part of the problem is the misconception that the data being collected is only being used to determine which ads to show them. Companies love to frame it that way because ultimately people don't actually care that much about which ads they get shown. The more people get educated on the real world/offline uses of the data they're handing over the more they'll start to care about the tracking being done.
This is definitely a point that should be emphasized more in this discussion. Even still, where it ultimately falls flat (currently) is the lack of hard proof to show people that it's truly happening.
Also, the degree to which some are more comfortable with the personal privacy/'feeling of personal safety' tradeoff notwithstanding, the examples that do get media traction are predictably extremes that the average person doesn't feel applies to them.
In my experience those users express a mix of surprise and irritation when they get ads about something they did minutes or hours before, but they accept that's the way things are.
I joke that I'm a no-app person, because I install very few apps and I use anti-tracking tech on my phone that's even hard to explain or recommend to non-technical friends. I use Firefox with uMatrix and uBlock Origin, and Blockada. uMatrix is effective but breaks so many sites unless one invests time in playing with the matrix. Blockada breaks many important apps (banking) unless one understands whitelisting.
No, it is not. I'm talking in the context of OP, which refers to a fingerprinting "vulnerability", specifically using the word "vulnerability" to describe it.
Instead of trying to convince by assertion, maybe you could try offering an actual objection to the argument raised up-thread?
On what basis do you claim that software developers, who did not establish a means for third parties to get a stable identifier, nevertheless intended that fingerprinting techniques should work?
TBF the idea that any and all fingerprinting falls under the umbrella of exploiting a vulnerability was also presented as an assertion. At least personally I think it's a rather absurd notion.
Certainly you can exploit what I would consider a vulnerability to obtain information useful for fingerprinting. But you can also assemble readily available information and I don't think that doing so is an exploit though in most cases it probably qualifies as an unfortunate oversight on the part of the software developer.
You haven’t made an actual argument. You’ve made a repeated assertion that you feel so religiously about that you simultaneously can’t justify it and get very abrasive when someone asks you to back it up.
1) wanting functionality that isn't provided and working around that
and
2) restoring such functionality in the face of countermeasures
The absence of functionality isn't a clear signal of intent, while a countermeasure against said functionality is.
And then there is the distinction between the intent of the software publisher and the intent of the user. There is a big ethical difference between "Mozilla doesn't want advertisers tracking their users" and "those users don't want to be tracked". If these guys want to draw the line at "if there is a signal from the user that they want privacy, we won't track them", I think that's reasonable.
The presence of the "Do Not Track" header was a pretty clear indicator of the intent of the user. Fingerprinting persisted exactly in the face of such countermeasures.
Even if the intent is clear I don't think the act of reading an available field qualifies as exploiting a vulnerability. IMO you need to actually work around a technical measure intended to stop you for it to qualify as an exploit.
Sure, my wording isn't perfect. I don't have a watertight definition ready to go. To my mind the spirit of the thing is that (for example) if a site has an HTTP endpoint that accepts arbitrary SQL queries and blindly runs them, then sending your own custom query doesn't qualify as an exploit any more than scraping publicly accessible pages does. Whereas if you have to cleverly craft an SQL query in a way that exploits string escapes to work around the restrictions that the backend has in place, then that's technically an exploit (although an incredibly minor one against a piece of software whose developer has put on a display of utter incompetence).
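The distinction above can be made concrete with a toy example (table name and data are made up, using an in-memory SQLite database):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

# Case 1: the backend runs whatever SQL you send it. Sending your own
# query is just using the (terribly designed) interface, not an exploit.
rows = conn.execute("SELECT name FROM users").fetchall()

# Case 2: the backend builds a "restricted" lookup by string
# concatenation. Crafting input that escapes the string literal and
# changes the query's meaning works around an intended restriction.
user_input = "nobody' OR '1'='1"
query = "SELECT name FROM users WHERE name = '" + user_input + "'"
leaked = conn.execute(query).fetchall()  # matches every row, not zero rows
```

In case 2 the lookup for a nonexistent user returns the whole table, which is the string-escape trick the comment describes; the fix, of course, is a parameterized query rather than concatenation.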
The point isn't my precise wording but the underlying concept that making use of freely provided information isn't exploiting anything even if both the user and the developer are unhappy about the end result. Security boundaries are not defined post hoc by regret.
Side channels that enable intended behavior, versus a flat-out bug like the above, though the line can often be muddied by perspective.
An example that comes to mind that I've seen is an anonymous app that allows for blocking users; you can programmatically block users, query all posts, and diff the sets to identify stable identities. However, the ability to block users is desired by the app developers; they just may not have intended this behavior, but there's no immediate solution to this. This is different than 'user_id' simply being returned in the API for no reason, which is a vulnerability. Then there's maybe a case of the user_id being returned in the API for some reason that MIGHT be important too, but that could be implemented another way more sensibly; this leans more towards vulnerability.
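The block-and-diff technique described above is easy to sketch. This is a hypothetical model of such an app (the post IDs, user names, and `feed_after_blocking` helper are all invented for illustration), where each feature is intended but their combination deanonymizes:

```python
# Feed as seen with no one blocked.
all_posts = {"p1", "p2", "p3", "p4"}

def feed_after_blocking(user):
    # Stand-in for re-querying the feed after blocking `user`:
    # the app hides that user's posts, as the developers intended.
    posts_by_user = {"alice": {"p2", "p4"}}
    return all_posts - posts_by_user.get(user, set())

# Diffing the two feeds attributes the "anonymous" posts to the
# blocked user, even though no user_id was ever exposed.
alices_posts = all_posts - feed_after_blocking("alice")
# alices_posts == {"p2", "p4"}
```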
Ultimately most fingerprinting technologies use features that are intended behavior; Canvas/font rendering is useful for some web features (and the web target means you have to support a LOT of use cases), IP address/cookies/useragent obviously are useful, etc (though there's some case to be made about Google's pushing for these features as an advertising company!).
> Ultimately most fingerprinting technologies use features that are intended behavior
Strong disagree.
> IP address/cookies/useragent obviously are useful
Cookies are an intended tracking behavior. IP Address, as a routing address, is debatable.
> Canvas/font rendering is useful for some web features
These two are actually wonderful examples of taking web features and using them as a _side channel_ in an unintended way to derive information that can be used to track people. A better argument would be things like Language and Timezone which you could argue "The browser clearly makes these available and intends to provide this information without restriction." Using side channels to determine what fonts a user has installed... well there's an API for doing just that[0] and we (Firefox) haven't implemented it for a reason.
n.b. I am Firefox's tech lead on anti-fingerprinting so I'm kind of biased =)
The thing is, technology is either enabling something or not. The exploration space might be huge, but once an exploit is found, the exploitation code / strategy / plan can trivially proceed and be shared worldwide. So you have to deal with this when you design and patch systems.
Example: preserving paths in URLs. Safari ITP aggressively removes “utm_” and other well-known query-string parameters, even in links clicked from email. Well, it is trivial to embed the value in a path instead, so that first-party websites can still track attribution, e.g. for campaign performance or email verification links. In theory, Apple and Mozilla could actually play a cat-and-mouse game with links across all their users and remove high-entropy path segments, or confuse websites so much that they give up on all attribution. Browser makers, email client makers, or messenger makers could argue that users don’t want attribution of their link clicks tracked silently without their permission. They could then say that if users really wanted, they could manually enter a code (assisted by the OS or browser) into a website, or simply grant interactive permission to be tracked after clicking a link; otherwise the website would receive dummy results and break. Where is the line, after all?
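The path trick above is trivial to demonstrate (the URLs and path layout here are invented, not from any real site):

```python
from urllib.parse import urlsplit

# A tracking parameter in the query string is what ITP strips:
stripped = "https://example.com/page?utm_source=mail"

# The same signal embedded as a path segment survives untouched,
# and the first party can recover it server-side:
embedded = "https://example.com/c/mail/page"
campaign = urlsplit(embedded).path.split("/")[2]
# campaign == "mail"
```

No high-entropy query string remains for the browser to filter on, which is why removing it would require the far more invasive path-rewriting the comment imagines.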
A vulnerability is distinct from unintended behavior.
Unintended identification is less than ideal but frankly is just the nature of doing business and any number of niceties are lost by aggressively avoiding fingerprinting.
In software intentionally optimized to avoid any fingerprinting, however, it is a vulnerability.
The distinction being that fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy but in something like Tor Browser that fingerprinting can be life or death for a whistleblower, etc. It's the distinction between an annoyance and an execution.
> fingerprinting in general is a less than ideal side effect that gives you a minor loss in privacy
In what way is collecting a record of a person's browsing history a "minor loss" of privacy? For many people, tracking everywhere they go online would easily expose the most sensitive personal information they have.
I think HN needs a refresher on responsible disclosure, and on the fact that even vulnerability scanners engage in this practice, for obvious reasons: it benefits both parties. One party gains exposure, and the other gets their bug squashed without the bug wreaking havoc while they try to squash it.
Logically, they are doing correlation via publicly available information - maybe better than others can - and an identifier would hurt their business since competition could use it as well.
>You rarely see anything more than a couple hundred bucks a month. There are notable exceptions, but unfortunately a lot of those notable exceptions are scammy, spammy business models.
I suspect this is largely sampling bias.
I host meetups for indie founders, and several attendees earn their living through solo businesses. When I go to conferences like Microconf, I meet lots more.
The problem with measuring financial success by who posts about it on HN is:
* The more someone is making at their solo business, the less they want to blab about it and attract competitors.
* The people earning at the low end are more desperate for people to see what they're doing so they can pick up new customers, so they're more likely to talk about their work.
* The more successful founders are busier and spend less time posting on HN.
> The more someone is making at their solo business, the less they want to blab about it and attract competitors.
Exactly! And this is why every time I see someone selling a course while bragging about making a lot of money, I know for sure they are _not_ making money.
"It is inconceivable that anyone will divulge a truly effective get-rich scheme for the price of a book. There is ample opportunity to use wealth in this world, and neither I nor my friends, nor anyone else I have ever met, has so much of it that they are interested in putting themselves at a disadvantage by sharing their secrets."
Victor Niederhoffer, The Education of a Speculator
There are a lot of people out there hawking a lot of different schemes that they say will get you where you want to go. “Build relationships,” they’ll say, or “work smarter, not harder,” or “first, decide what it is you really want.” And it sort of seems like they must know what they’re talking about, because aren’t they successful themselves? Don’t they have trophy spouses and expensive haircuts and mansions on the coast? Surely they’re in possession of some secret system for achieving one’s dreams!
Well, yeah, they’ve got a system. Their system is selling hope to schmucks like you. Their seminars, their self-help books, their crazy diets and exercise plans? That stuff doesn’t help you. It helps them.
It's possible that's where I found it originally, but my memory suggests to me that I found it somewhere else, on a blog, and that the continuation was different.
I have always felt that online courses on how to sell online courses are an underserved market... You do not hear too much about those, not that I have looked.
A family member of mine has done really well out of Amazon fba. She took someone's course and that got her going. I did the same course but really struggled to get going. I gave up. It wasn't for me.
But yeah I'm guessing the guy selling the course makes more off that than his fba business
> I host meetups for indie founders, and several attendees earn their living through solo businesses. When I go to conferences like Microconf, I meet lots more.
> It for sure is but it's being used to refute an affirmative assertion, not make its own assertion.
To refute an assertion you need to claim the negation of that assertion, which is an assertion in itself, as every negation can be rewritten to become an affirmation, and vice versa.
This is not true. Producing a counter example to an assertion refutes the assertion.
While you can certainly argue that such a counter example entails the negation of the original assertion, that is not the same thing as claiming the negation of the assertion.
Putting forth an argument or demonstrating a counterexample is not the same as asserting all of the logical consequences of that argument.
The second statement doesn't actually imply that all cats are black, but it does refute that all cats are white. It doesn't make its own claim about all cats; it just adds an anecdote that doesn't conform to the first statement.
I don't stand to gain anything by exaggerating the results of the meetup attendees. Unless you think I'm trying to recruit people on HN to come to my meetups and meet successful founders?
I know over a dozen 1-2 person SaaS, not including my own. Some of them have hired some help now but they are still more on the "lifestyle business" side. They are in many different spaces, and founders from around the world. I am not a big networker, but this is my niche and it's big enough that I just know a slice.
I don't know any. I know a couple of people who had ideas that became bigger startups, but only 1 who was a friend rather than via networking. And I know a few people who did try the small saas or other small software based business but they all failed and now have jobs.
Ok? My point wasn't that they don't exist. I was just pointing out that anecdata from one person who is deeply into a group where it's a thing invites strong counterpoints. And I thought my example worth mentioning because my group is into it, but nobody amongst it has made a success of it.
This is such a middlebrow dismissal. Like yeah, people are speculating based on personal experience and knowledge, so what? If you have a different viewpoint or something specific you'd like to see data on, call it out and engage in the discussion. Don't just be the "data or GTFO" guy, because that's a super bland and pointless take.
> I host meetups for indie founders, and several attendees earn their living through solo businesses. When I go to conferences like Microconf, I meet lots more.
I'm not claiming that all indie founders are successful. I'm disputing the claim that almost all indie founders are struggling by saying I regularly meet indie founders who are successful. Not like driving exotic cars successful, but making a good living, in some cases with income on par with mid-to-senior FAANG dev jobs.
When I worked at two different FAANG companies, both legal orientation sessions taught this specific scenario as an example of something that's not attorney-client privileged.
If you email your lawyer to ask legal questions, that's privileged communication.
If you just cc a lawyer on a thread while you talk to other people, adding the lawyer doesn't make the conversation privileged or protected.
> 5 out of 1000+ reports to be valid is statistically worse than running a fuzzer on the codebase.
Carlini said "hundreds" of crashes, not 1000+.
It's not that only 5 were true positives and the rest were false positives. 5 were true positives and Carlini doesn't have bandwidth to review the rest. Presumably he's reviewed more than 5 and some were not worth reporting, but we don't know what that number is. It's almost certainly not hundreds.
Keep in mind that Carlini's not a dedicated security engineer for Linux. He's seeing what's possible with LLMs and his team is simultaneously exploring the Linux kernel, Firefox,[0] GhostScript, OpenSC,[1] and probably lots of others that they can't disclose because they're not yet fixed.
I don't understand this critique. Carlini did use Claude Code directly. Claude Code used the Claude Opus 4.6 model, but I don't know why you'd consider it inaccurate to say Claude Code found it.
GPT 5.4 might be capable of finding it as well, but the article never made any claims about whether non-Anthropic models could find it.
If I wrote about achieving 10k QPS with a Go server, is the article misleading unless I enumerate every other technology that could have achieved the same thing?
Also, he did compare with earlier versions that, before 4.5, were dramatically worse at finding the same problems. There's even a graph. That seems to pretty solidly support the idea that this is "gain of function" as it were...
According to Willy Tarreau[0] and Greg Kroah-Hartman[1], this trend has recently significantly reversed, at least from the reports they've been seeing on the Linux kernel. Before that broader transition, the creator of curl, Daniel Stenberg, also found the reports generated by LLM-powered but more sophisticated vuln research tools useful,[2] and the guy who actually ran those tools found "They have low false positive rates."[3]
Additionally, the talk by the guy who found the vuln discussed in TFA made no mention of what the false positive rate was, or of having to sift through mostly-slop reports (or of whether he was only checking them out of courtesy). He also said he found only several hundred, iirc, not "thousands." All he said was:
"I have so many bugs in the Linux kernel that I can’t report because I haven’t validated them yet… I’m not going to send [the Linux kernel maintainers] potential slop, but this means I now have several hundred crashes that they haven’t seen because I haven’t had time to check them." (TFA)
He quite evidently didn't have to sift through thousands, or spend months, to find this one, either.
No, they haven't. Read the AI slop you posted carefully.
It's a policy update that enables maintainers to ignore low effort "contributions" that come from untrusted people in order to reduce reviewing workload.
A powertool that needs discretion and good judgement to be used well is being restricted to people with a track record of displaying good judgement. I see nothing wrong here.
AI enables volume, which is a problem. But it is also a useful tool. Does it increase review burden? Yes. Is it excessively wasteful energy wise? Yes. Should we avoid it? Probably no. We have to be pragmatic, and learn to use the tools responsibly.
I never said anything is wrong with the policy. Or with the tool use for that matter.
This whole chain was one person saying “AI is creating such a burden that projects are having to ban it”, someone else being willfully obtuse and saying “nuh uh, they’re actually still letting a very restricted set of people use it”, and now an increasingly tangential series of comments.
I feel like you're still failing to grasp the point.
The only difference is that before AI the number of low effort PRs was limited by the number of people who are both lazy and know enough programming, which is a small set because a person is very unlikely to be both.
Now it's limited to people who are lazy and can run ollama with a 5M model, which is a much larger set.
It's not an AI code problem by itself. AI can make good enough code.
It's a denial of service by the lazy against the reviewers, which is a very very different problem.
No one is missing your point. The issue is that you are responding to a point no one made.
The grounding premise of this comment chain was “AI submitted patches being more of a burden than a boon”. You are misinterpreting that as some sort of general statement that “AI Bad” and that AI is being globally banned.
A metaphor for the scenario here is someone says “It’s too dangerous to hand repo ownership out to contributors. Projects aren’t doing that anymore.” And someone else comes in to say “That’s not true! There are still repo owners. They are just limiting it to a select group now!” This statement of fact is only an interesting rebut if you misinterpret the first statement to say that no one will own the repo because repo ownership is fundamentally bad.
> It's a denial of service by the lazy against the reviewers, which is a very very different problem.
And it is AI enabling this behavior. Which was the premise above.
Yes, but technically no different than "good contributions from humans are still accepted, AI slop can fuck off".
Since the onus falls on those "people with a track record for useful contributions" to verify, design tastefully, test and ensure those contributions are good enough to submit - not on the AI they happen to be using.
If it fell on the AI they're using, then any random guy using the same AI would be accepted.
Same. Codex and Claude Code on the latest models are really good at finding bugs, and really good at fixing them in my experience. Much better than 50% in the latter case and much faster than I am.
I have so many bugs in the Linux kernel that I can’t
report because I haven’t validated them yet… I’m not going
to send [the Linux kernel maintainers] potential slop,
but this means I now have several hundred crashes that they
haven’t seen because I haven’t had time to check them.
—Nicholas Carlini, speaking at [un]prompted 2026
The article quote was being given as the supposed source for "Claude Code also found one thousand false positive bugs, which developers spent three months to rule out", so should substantiate that claim - which it doesn't.
If the claim was instead just "a good portion of the hundreds more potential bugs it found might be false positives", then sure.
Worth noting that the Blackberry-style devices are also closed source and the hardware and software is way worse than Blackberry was 22 years ago.[0]
[0] https://mtlynch.io/first-impressions-of-meshcore/#this-is-no...