This non-determinism would not and did not cause replays to diverge (the PRNG seed was most likely stored and would reproduce exactly the same results).
Looks like CyberNews has edited the article since I first saw it; it used to look quite suspicious and untrustworthy, and it now has more info. It still doesn't say exactly what a record is, or how many uniques there are.
I presume the database exists, but some of the details don't add up. IDMerit say "IDMERIT’s systems and security infrastructure have never been compromised", "there has never been a data breach or exfiltration from [our partners'] systems during, before, or after this event" and "IDMerit does not own, control or store customer data". But Cybernews says that they "promptly secured the database" after being notified. Cybernews also didn't give the reason why they thought this had to do with IDMerit (unless I missed it). I can't quite make head or tail of it.
It's a weird article. For one, the researcher says "they believe" the data belongs to IDMerit but apparently isn't sure. IDMerit denies that it owns the data, or that any of its partners do. And there are very few details about where or how they found this database. Is it possibly some kind of hoax or ransom attempt? Or are there really just billions of unaccounted-for databases of private data sitting all over the Internet?
The Cybernews article does have some screenshots showing names like “idmb2c” … also that IDMerit was contacted in November and the ports were closed a day later.
- IDMerit asked the security researcher for proof, the researcher asked for money first, so IDMerit balked
- IDMerit basically says they have no proof they were hacked, so they weren't
- The researcher is a freelancer... for CyberNews...
Even if somebody followed up with IDMerit, they will likely say they are not affected. At this point, the security researcher is probably the only person who could prove whether or not they were vulnerable. If they don't come forward, we can only assume they weren't vulnerable, but we don't know. This is a good lesson for responsible disclosure in the future.
...also, this is yet another example of why we need a regulated Software Building Code, with penalties for not conforming to it. If somebody is found to be hosting a public Mongo instance with no authentication, it should be reported to a state or federal agency, so that real penalties can be applied, the way they are for other code violations. And they shouldn't have been allowed to launch with that in the first place. It shouldn't be up to random "security researchers" to police businesses.
So this is a good thing even for coreutils itself, they will slowly find all of these untested bits and specify behaviour more clearly and add tests (hopefully).
Man, if I had a nickel for every time some old Linux utility ignored a command-line flag, I'd have a lot of nickels. I'd have even more nickels if I got one each time some utility parsed command-line flags wrong.
I have automated a lot of things by executing other utilities as subprocesses, and it's absolutely crazy how many utilities handle CLI flags in a way that seems correct but really isn't.
This doesn't look like a bug, that is, something overlooked in the logic. This seems like a deliberately introduced regression. Accepting an option and ignoring it is a deliberate action, and not crashing with an error message when an unsupported option is passed must be a deliberate, and wrong, decision.
It certainly doesn't look intentional to me: it looks like at some point someone added "-r" as a valid option, but until this surfaced as a bug, no one actually implemented anything for it (and the logic happens to fall through to using the current date).
It's wrong (and coreutils get it right) but I don't see why it would have to be deliberate. It could easily just not occur to someone that the code needs to be tested with invalid options, or that it needs to handle invalid options by aborting rather than ignoring. (That in turn would depend on the crate they're using for argument parsing, I imagine.)
Could parsing the `-r` be added without noticing it somehow?
If it was added in bulk, with many other still unsupported option names, why does the program not crash loudly if any such option is used?
A fencepost error is a bug. A double-free is a bug. Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.
At least from what I can find, here's the original version of the changed snippet [0]:
    let date_source = if let Some(date) = matches.value_of(OPT_DATE) {
        DateSource::Custom(date.into())
    } else if let Some(file) = matches.value_of(OPT_FILE) {
        DateSource::File(file.into())
    } else {
        DateSource::Now
    };
And after `-r` support was added (among other changes) [1]:
    let date_source = if let Some(date) = matches.get_one::<String>(OPT_DATE) {
        DateSource::Human(date.into())
    } else if let Some(file) = matches.get_one::<String>(OPT_FILE) {
        match file.as_ref() {
            "-" => DateSource::Stdin,
            _ => DateSource::File(file.into()),
        }
    } else if let Some(file) = matches.get_one::<String>(OPT_REFERENCE) {
        DateSource::FileMtime(file.into())
    } else {
        DateSource::Now
    };
Still the same fallback. I'm not sure one can tell just from looking at the code (and without knowing more about the context, in my case) whether the choice of fallback was intentional and handling the flag was simply forgotten.
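To make the failure mode concrete, here's a hypothetical reduction of the pattern (not the actual uutils code; the names mirror the snippet above for illustration): an option can be accepted by the parser while the dispatch chain never checks for it, so execution silently falls through to the default instead of erroring out.

```rust
#[derive(Debug, PartialEq)]
enum DateSource {
    Custom(String),
    File(String),
    Now,
}

// `_reference` stands in for a parsed "-r" value that is never consulted.
fn pick_source(date: Option<&str>, file: Option<&str>, _reference: Option<&str>) -> DateSource {
    if let Some(d) = date {
        DateSource::Custom(d.to_string())
    } else if let Some(f) = file {
        DateSource::File(f.to_string())
    } else {
        DateSource::Now // "-r somefile" lands here, same as no arguments at all
    }
}

fn main() {
    // Passing only a reference file behaves exactly like passing nothing:
    assert_eq!(pick_source(None, None, Some("somefile")), DateSource::Now);
    println!("-r fell through to DateSource::Now");
}
```

Nothing here is a logic error in the narrow sense; the bug only exists in the gap between what the parser accepts and what the dispatch chain handles.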
> Accepting an unsupported option and silently ignoring it is not, it takes a deliberate and obviously wrong action.
No, it doesn't. For example, you could have code that recognizes that something "is an option", and silently discards anything that isn't on the recognized list.
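A hypothetical sketch of that lenient style (not any real parser): everything starting with '-' is treated as "an option", and anything not on the recognized list is silently discarded rather than rejected with an error.

```rust
// A deliberately lenient flag filter: unknown flags vanish without any
// diagnostic, which is exactly the behavior being debated above.
fn parse_flags(args: &[&str]) -> Vec<String> {
    const KNOWN: &[&str] = &["-d", "-f"]; // note: no "-r"
    args.iter()
        .filter(|a| a.starts_with('-'))
        .filter(|a| KNOWN.contains(*a)) // unknown flags are dropped here
        .map(|a| a.to_string())
        .collect()
}

fn main() {
    // "-r" is accepted on the command line but never surfaces:
    let flags = parse_flags(&["-d", "-r", "file.txt"]);
    assert_eq!(flags, vec!["-d".to_string()]);
    println!("{:?}", flags);
}
```

No single line of this is "deliberately wrong"; the silent ignoring emerges from a filtering strategy someone could write without ever thinking about unsupported options.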
I would say that Canonical is more at fault in this case.
I'm frankly appalled that an essential feature such as system updates didn't have an automated test that would catch this issue immediately after uutils was integrated.
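For what it's worth, a smoke test that would catch this particular regression could be as small as the following sketch (my own, assuming GNU-style `touch -d` and `date -r`): create a file with a known old mtime and check that `date -r` reports that time rather than the current one.

```sh
set -e
tmp=$(mktemp)
# Give the file a well-known mtime in the past.
touch -d '2000-01-01 00:00:00 UTC' "$tmp"
# "date -r FILE" must print the file's mtime, not the current time.
year=$(date -u -r "$tmp" +%Y)
if [ "$year" = "2000" ]; then
    echo "OK: date -r used the reference file's mtime"
else
    echo "FAIL: date -r ignored the reference file (got $year)"
    exit 1
fi
rm -f "$tmp"
```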
Never mind the fact that this entire replacement of coreutils is being done for purely financial and political rather than technical reasons, and that they're willing to treat their users as guinea pigs. Despicable.
What surprises me is that the job seems rushed. The implementation is incomplete. Testing seems patchy. Things are released seemingly in a hurry, as if meeting a particular deadline were more important to the engineers or managers of a particular department than the quality of the product as a whole.
This feels like a large corporation, in the bad sense.
Yes, it was basically gone for a decade or more. There’s no shared code. Though I’m sure they may have looked at the old code for inspiration for some of the Win32 stuff.