Unsurprisingly, it's similar in Scottish Gaelic and Welsh. Mandarin and Cantonese (and probably other Sinitic languages) don't either, with much the same arrangement as the Celtic languages.
There are many yes-like words, but not exactly "yes". There's "it is", "it's correct", and "it's good", each of which may be used, depending on context, to mean yes.
Amazing! I wonder about the universality of head nods (for yes) and head shakes (for no), and whether languages that lack a "yes" or "no" might still use nods.
> There are several exceptions: in Greece, Iran, Turkey, Bulgaria,[2] Albania, and Sicily a single nod of the head up (not down) indicates a "no".
Though for Bulgaria at least it seems it's shifting: many people who lived abroad at some point (myself included) have picked up the correct/more universal form. Which makes it worse: now no one really knows whether you mean yes or no...
There's also the head bobble, common in south India, which looks a lot like an ongoing head shake but generally means something like ongoing agreement.
Referring to it as "ongoing agreement" reminds me of something I think of as the "confessional no." When addicts talk about their addiction, there's often a rhythmic shaking of the head from side to side like a continual "no," but that speeds up and slows down with some relationship to the grammar of the sentences used.
You can probably make yourself do it by saying out loud: "When I was young, I thought I understood everything." It's like an ongoing disagreement.
What's being argued here is like saying "French" and "français" are completely unrelated words because they sound different, despite being somewhat similar in sound and having a clear shared origin and meaning.
You can go pretty much anywhere on earth and say "huh? uh? ummm?" in response to something that you don't understand and people will get that you don't understand. Say another word, like "Great", and they might not know that you're affirming whatever they said.
And "yes" doesn't exist in all languages. Many languages require the speaker to repeat back part of the statement or the verb to affirm something.
Different languages have different ways of recording/formalizing what are fundamentally the same onomatopoeic inspirations, though.
In English, frogs croak; in Japanese, they kerokero. It’s clear that the same sound is being portrayed there, just broken up differently due to English having an alphabet where Japanese has a syllabary.
Neither language’s encoding from the source “sound of a frog croaking” to a word is lossless, because humans can’t make that exact sound with their vocal cords; and so none of our systems of talking about sounds, be they written languages, spoken languages, or formalisms like IPA, can contain a good representation of the required movements. But both languages above get close enough that we can recognize the sound being referred to from its per-language encoding when we hear it spoken, and from there, know what the word is, even without speaking the relevant language.
The idea here is that there is a thing humans can communicate by making the sound of their language’s equivalent word for “huh?”, which speakers of all other human languages can equally recognize as having the same meaning as their own language’s word “huh?”, without understanding anything about the language being spoken. That communicability is what effectively makes the words “the same” word.
It’s not really that the word (written symbol; exact spoken phonemes) for “huh?” is the same between languages. It’s the abstract sound pattern that’s the same. Just like a frog’s croak, as an abstract sound pattern, is the same as any other frog’s croak, and unlike anything that’s not a frog’s croak. The difference is that all humans can make a “huh?” noise; and so we do get perfect mutual intelligibility out of it. We don’t manage to represent the “sound that ‘huh?’ makes” well in human languages; but when we say it, we still say it the “right” (mutually-intelligible) way nevertheless.
Wiktionary about yea says

> From Middle English ye, ȝea, ya, ȝa, from Old English ġēa, iā (“yea, yes”), from Proto-Germanic *ja (“yes, thus, so”), from Proto-Indo-European *yē (“already”)

Wiktionary about yes says

> From Middle English yes, yis, from Old English ġēse, ġīse, ġȳse, ġīese (“yes, of course, so be it”), equivalent to ġēa (“yes, so”) + sī(e) (“may it be”), from Proto-Indo-European *yē (“already”). Compare yea.
So "yes", seeming so simple, was once a compound word.
Latin had a few ways to affirm or negate. Both "ita" (thus, so) and the more forceful "ita vero" (thus indeed) could be used for yes, but you could also affirm by repeating the verb from the question, with modification. "Have you eaten?" -> "I have eaten."
For negation you could do the same verb-repeating, but add "non" to negate.
The argument is that the word for "huh" in all languages can be traced to some common ancestor language. "Eh" and "huh" are very similar phonetically, and one could readily evolve into the other, or both could evolve from some common ancestor that's neither.
FWIW I've just verified this on a DS1019+ running the latest DSM. My certificate disappeared after rebooting, and the default Synology one (which I had deleted) reappeared.
> Why would an issuer ever let a client decide what algo to use?
JWT, like SAML, is designed to support separate identity providers and service providers. In the spirit of generality, this means the identity provider(s) could be from a different vendor, operated by a different organization. E.g., you could let users access their account on your service based on a token issued by Google. But that means Google chooses the algorithm, not you!
And it's a standard, so you don't have to write any code of your own. Just import the right middleware for your framework and you're set!
So the temptation is there for library authors to support all the defined algorithms, and just enable everything by default to be as compatible as possible - after all, you can just look at the header to see which algorithm to use!
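The flip side is that a verifying service can, and should, pin its own allowlist instead of honoring the header. A minimal sketch using the PyJWT library (the RS256 choice and the key handling are illustrative assumptions, not anything the spec mandates):

    import jwt  # the PyJWT library: pip install PyJWT

    def verify(token: str, public_key: str) -> dict:
        # algorithms= is an explicit allowlist chosen by the verifier;
        # a token whose header claims "none", "HS256", etc. raises
        # InvalidAlgorithmError instead of being trusted.
        return jwt.decode(token, public_key, algorithms=["RS256"])

Recent PyJWT releases make the algorithms argument mandatory for exactly this reason.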
EDIT: the secondary spec describing the algorithms is at least clear on the use of none; I missed that at first:

> Implementations that support Unsecured JWSs MUST NOT accept such objects as valid unless the application specifies that it is acceptable for a specific object to not be integrity protected. Implementations MUST NOT accept Unsecured JWSs by default.
Still, my point about it missing from RFC 7515 stands.
---- Original comment ---------
The standard says you should support NONE as an algorithm and that you should use the algorithm the client sends you, all the while completely failing to mention the issues with that, both in its Security Considerations section (which mentions even more "obvious" things, like "use keys with high entropy") and in the description of the algorithm for decoding a token (which initial implementers probably relied upon to arrive at a "correct" implementation). Sorry, that is a failure of the spec as well, in my book.
If you spec something with risks, at least mark the critical parts clearly with "point away from foot".
A better standard IMHO would have suggested the API for the decode functions, making it clear that the algorithm used should be whitelisted.
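Something like this hypothetical API, say (only HS256 is implemented to keep the sketch short, and all the names are made up rather than taken from any spec or library):

    import base64, hashlib, hmac, json

    def _b64url_decode(part: str) -> bytes:
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

    def decode(token: str, key: bytes, allowed_algs: set) -> dict:
        header_b64, payload_b64, sig_b64 = token.split(".")
        alg = json.loads(_b64url_decode(header_b64)).get("alg")
        # The caller's whitelist is consulted before any crypto runs,
        # so "none" or an unexpected algorithm never gets this far.
        if alg not in allowed_algs:
            raise ValueError(f"algorithm {alg!r} not in caller's whitelist")
        if alg == "HS256":
            signing_input = f"{header_b64}.{payload_b64}".encode()
            expected = hmac.new(key, signing_input, hashlib.sha256).digest()
            if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
                raise ValueError("bad signature")
        else:
            raise NotImplementedError(alg)  # sketch: other algs omitted
        return json.loads(_b64url_decode(payload_b64))

A call then looks like decode(token, b"secret-key", {"HS256"}), with the whitelist impossible to omit.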
I don't think the spec is meant to be read as saying you must let clients forge tokens by accepting tokens they issued themselves, without an algo or signature.

If you issue tokens with "none", then you will have to accept them when clients send them back. This is obviously a very bad idea, but that's all the spec says. If the issuer chooses to be insecure, that is a valid choice.
If you issue tokens with a specific algo, and clients send them back with a different or none header, you know they have been forged.
The spec allows issuers to decide whether to use none, it doesn't say you must trust none tokens if you know you didn't issue them.
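In code, that reading might look like this (a PyJWT sketch; RS256 stands in for whatever algorithm the issuer actually signs with):

    import jwt  # PyJWT

    ISSUED_ALG = "RS256"  # the only algorithm this issuer ever signs with

    def check(token: str, public_key: str) -> dict:
        # Peek at the header without trusting it: any other alg,
        # including "none", means this can't be a token we issued.
        if jwt.get_unverified_header(token).get("alg") != ISSUED_ALG:
            raise ValueError("not a token we issued: treat as forged")
        return jwt.decode(token, public_key, algorithms=[ISSUED_ALG])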
And the spec doesn't spell it out, so initial library implementations forgot to include things like "let the user specify which algos to accept". And if common libraries provide simple APIs, users expect those APIs to still provide good security.

A standard promoted as "the standard for secure tokens" should not aim for "you can use the pieces to build a correctly behaving system" or "the spec allows secure implementations"; it should aim for "if you use this and follow some spelled-out basic rules, you get fool-proof secure tokens", and it should make wrong usage as hard as possible.
The spec doesn't govern what applications can and cannot accept; it governs what contents are valid in tokens. "None" is valid, which means my parser library will accept it; it doesn't mean my application must accept the token as valid.

Example: the fact that my service has an HTTP stack which must parse a Cookie header doesn't mean my app must accept its contents as valid. There's a lot of confusion in this thread about which components should/must do what things.
I guess I'm missing something here because it seems like the spec includes an ability that everyone here is saying nobody should ever use. Seems useless, by definition!