
> Tokio’s dominance is function coloring at ecosystem scale

That isn't function colouring, but rather plain incompatible APIs/runtime. You could have the equivalent with non-async ecosystems.


What it really is: LLM-generated puffery.

How does Docker add context-switch overhead? It's the same processes running on the same kernel.

You're adding all of the other supporting processes within the container, which otherwise wouldn't need to be replicated.

It depends, you could have an application with something like

    FROM scratch
    COPY my-static-binary /my-static-binary
    # exec form, since a scratch image has no shell for the shell form
    ENTRYPOINT ["/my-static-binary"]
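
To build and run it, something like (the image tag is just illustrative):

    docker build -t my-static-app .
    docker run --rm my-static-app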

Having multiple processes inside one container is a bit of an anti-pattern imo


Sidecars? Not in a simple app.

What does Ed448 mitigate that Ed25519 doesn't?

The simplified answer: larger keys that demand far more effort to break, much as RSA-4096 does vs RSA-2048.

The predicted timelines for quantum computing advances (and the requirements for practical attacks) have shrunk dramatically in the past 15 years. What used to be a no-later-than-2035 recommendation for getting off e.g. RSA-2048 in good time is today no-later-than-2030. Recommendations for 256-bit curves for ECDSA/ECDH were supplanted by 384-bit curves years ago.

In the absolutely ground-shaking event that a future application of quantum computation somehow manages to cut Ed448's equivalent security of ~224 bits in half, exploring even a small portion of a 112-bit keyspace would still cost more electrical energy than we can possibly provide.


The whole point is that RSA and ECDH can't be made safe against quantum computers by making the keys bigger. The quantum speedup is exponential, so breaking a 4096-bit key is only polynomially harder than a 2048-bit key, not astronomically harder as it is classically. The "halve the effective key size" rule is true in principle in general (but not in practice, as the article points out), but for some algorithms it's much worse.
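
A very rough back-of-the-envelope for a b-bit RSA modulus (constants approximate; quantum gate counts depend on the circuit construction used):

    classical (GNFS):  exp(1.92 * (b ln 2)^(1/3) * (ln(b ln 2))^(2/3))   [subexponential in b]
    Shor (quantum):    ~2b qubits, O(b^3) gates                          [polynomial in b]

    b: 2048 -> 4096 costs Shor ~2x the qubits and ~8x the gates,
    while the classical work factor grows by roughly 12 orders of magnitude.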

Just to be clear, I'm not advocating for Ed448 for the KEX - we already have ML-KEM and SNTRUP in OpenSSH and everyone should start using those. I'm advocating for Ed448 DSA ("SSH pubkey").
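
On the KEX side, opting in looks something like this (algorithm names as shipped in recent OpenSSH; mlkem768x25519-sha256 needs 9.9+ on both client and server):

    ssh -o KexAlgorithms=mlkem768x25519-sha256,sntrup761x25519-sha512@openssh.com example.com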

Looking at the PR discussed, it's 34 commits! I'd probably ignore that too as a maintainer. The PR description isn't particularly motivating, "Cleans up the implementation", "see #6735 for the actual motivation".

Fair call-out, although a couple of things to point out. I'm used to a squash-merge workflow, which I think makes reviews easier, since the reviewer can more easily see what changed after their comment. Many of the commits are merge commits. If you actually look at the timeline of the original PR, you'll see that it also started with a smaller scope, but as time passed I went through the "while at it, let me also fix this" loop that I mentioned in the article.

The point of the article is: there is a feature that people would like, there is someone who wants to add it, and the appropriate time (and a lot more) has been spent getting it merged, yet the feature is nowhere to be found. That's the two-way street I'm trying to get across. I wish I hadn't even been able to open the PR; I wish the maintainer would use more automation to triage feature requests and potential contributors, with agreed-upon plans and agreed-upon timelines, so that both sides' time could be used much more effectively.

As far as PR descriptions etc. go, I asked multiple times what the best route to merging would be; if that went through better descriptions, I was happy to write them. As you can see, I wasn't aware of the "no conventional commits" rule, so in my next PRs I used the correct approach, but that should be completely automatable. Yes, I should have spent more time studying Jellyfin's conventions, but I shouldn't have to. Not because it's unfair to me, but simply because there are more contributors than maintainers, so maintainers should not rely on desired behavior from contributors; they should enforce that behavior as much as possible.


Many of those are "Merge branch 'master' into armanc/subtitle-sync-refactor". Rebasing the PR on top of master would bring that down to like 15 or something.
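
Something like the following would do it (assuming the usual origin remote; force-with-lease because the branch is already pushed):

    git fetch origin
    git rebase origin/master
    git push --force-with-lease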

Fair enough. A 15-commit PR is still pretty long-winded.

Isn't Nix just reinventing what Vesta did for software reproducibility decades earlier? https://vesta.sourceforge.net/


Are you saying Bram hasn't worked on VCS problems much? https://web.archive.org/web/20071213090008/http://codeville.... is 20 years old.


It looks like Firecracker already supports ACPI vmgenid, which will trigger the Linux RNG to reseed? https://github.com/firecracker-microvm/firecracker/blob/main...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

So that just (!) leaves userspace PRNGs.
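
The distinction in one line (assuming a Linux guest with the vmgenid driver loaded): a fresh read from the kernel RNG is safe after a clone, while randomness cached in a userspace PRNG before the clone is not.

    # safe post-clone: the kernel reseeds when the VM generation ID changes
    head -c 32 /dev/urandom | od -An -tx1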


0BSD doesn't.


Does that page even say which RISC-V CPUs being used are slow? I couldn't see it, which makes it seem like pointless complaining.


> RISC-V builders have four or eight cores with 8, 16 or 32 GB of RAM (depending on a board).

Which boards are used specifically should not matter much. There's not much available.

Except for the Milk-V Pioneer, which has 64 cores and 128 GB of RAM. But that's an older architecture and it's expensive.


Intriguing work! Does it panic on any bad inputs? That's better than the memory unsafety of libxml2, but still a DoS concern for some servers.

