Not really a meaningful comparison, because you haven't defined how much of a temperature difference versus how much of a water-flow difference we're talking about here. But for most people, at least if they take the most basic level of care not to use water at, say, 85 degrees or 110 degrees, then no, that's just not true.
There's a lot of folklore out there that's lingered from the early 2000s espresso community where it was widely believed that temperature was the holy grail control parameter, but now with modern instrumentation and temperature probes, it's been pretty much debunked. Temperature stability throughout a shot makes almost zero perceivable difference in taste.
It takes brew temperature swings of around 5 degrees Celsius before people can start to notice any difference better than random chance, and close to a 10 degree difference before it gets into 'ruining a shot' territory.
Meanwhile, very small differences in puck preparation, including micron-level differences in grind size or sub-gram differences in coffee dose, have profound effects on flow rate, which has a very strong effect on extraction level, which in turn produces immediately recognizable differences in flavour that a trained palate can reliably detect. And this is before we even start talking about channeling, which has an enormous effect on the coffee.
Manual control of the applied pressure can and does allow skilled people to compensate for those differences in flow rate, and combined with very basic attention to brew temperature, does help shot consistency.
So much BS that I don't even want to go further into the discussion, sorry. I say this as someone who has made several thousand espressos on an E61 group machine. I'll let you have your own opinion, but anyone who has made more than a few espressos will immediately notice if and when the temperature has drifted. Pressure? I've made espressos at 6 bars and at 9 bars. It makes almost no impact whatsoever. You're right, though, that 5 degrees Celsius is probably about the right minimum before the espresso starts to change in taste, and there are remarkably many machines which cannot sustain the temperature under shot-after-shot workloads.
A good machine will always need time to heat up, since temperature stability, which is very important for getting good espresso shots, correlates with the thermal mass of the device. For most machines this means 30-45 minutes or so, no matter what the manufacturer says. In practice this isn't much of a problem once you put the machine on a smart plug programmed to turn it on before you get up in the morning.
Secondly, adjustable pressure is an almost completely unnecessary feature, so I'm not sure why you chose to point it out as a major differentiator. 9 bars is just fine. The PID for adjusting temperature falls into a similar category: while it sounds cool on paper, in reality you won't touch it 99% of the time. There are many prosumer machines which offer neither of those and are still perfectly fine machines.
Do you know what it is that allows such a quick warm-up? A small boiler? A saturated group? Maybe my comment is more applicable to HX machines, then, although I don't quite get how it's possible to warm up so quickly: the machine is still a 20 kg piece and you can't beat physics with such a large thermal mass.
250ml brew boiler with a small saturated group, and a separate 1.6L steam boiler. It's definitely aimed at a home environment but will easily keep up with making several drinks in a row. Compared to previous HX machines I've owned, it heats up much quicker and is more stable. I think not having a massive lump of brass like an E61 style machine helps a lot.
They have separate tanks, with a much smaller volume, for keeping water heated to temperature, and they don't use saturated groups, so the total hot mass is much smaller.
I don't think we are. We will not be able to keep pace with code-production velocity, and I anticipate that focus will shift strongly to testing and validation.
The problem jj is trying to solve is not entirely clear to me, but I guess there are enough people who can't find their way with Git that switching to jj probably looks appealing to them. Or at least that's my first impression without going too deep into the documentation.
I wouldn't say it's that people are not able to find their way with Git. I was a competent Git user and would carefully squash my commits. It's just easier and nicer with Jujutsu.
The way all changes (except those in the ignore file) are automatically incorporated into the current commit means I don't have to faff about with stash when I need to pivot and then try to remember which commit to pop against. I can just hop around the tree with gay abandon. That alone is invaluable.
Then add in the fact that a change to history gets rippled down to the descendant commits. And the fact that conflicts are recorded in the history and can be dealt with at your leisure. Or the fact that `jj undo` is a thing.
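As a sketch of what that stash-free pivoting looks like (hypothetical session, assuming a recent jj is installed; the repo, file, and change names are made up):

```shell
# Hypothetical session: pivot between changes with no stash, then undo.
export JJ_USER=demo JJ_EMAIL=demo@example.com   # jj wants an identity configured
cd "$(mktemp -d)" && jj git init scratch && cd scratch

echo wip > feature.txt                  # working-copy edits are already a commit
jj describe -m "feature work"

jj new 'root()' -m hotfix               # pivot: no stash, just start a new change
echo fix > hotfix.txt

jj edit 'description("feature work")'   # hop back; feature.txt is right there
jj undo                                 # changed my mind: undo the jj edit
```

After the `jj undo`, the working copy is back on the hotfix change. Nothing was lost along the way, because every intermediate state was a real commit.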
There must be some kind of split in how people work or something. I’ve never had the desire to jump around the git tree. I never squash commits. I basically never stash changes. All the things that people say jj makes easier are things I never even want to do. Not because they’re not easy with git, but because it sounds hard to keep straight in my head.
Maybe. Different organisations work at different paces and with different contention rates. If you're on a small team and being tugged about less, then you might not find value in this stuff.
But I frequently have cases where I have some changes I'm making to repo `acme`. I'll put a PR up for review and then I'll start on a second PR for the same repo. I stack these commits on top of my previous PR. If I then notice a problem in the earlier work I can easily fix it and have the changes ripple down to the later PR. Or if somebody else merges something in against `main` it's really easy using `jj rebase` to move my commits against the new version of `main`. With a single `jj rebase` I can move all of my stacked PRs over to the new version of `main` and have every stacked PR updated.
Yes, exactly that. In Jujutsu you don't have branches the way you do in Git. You have branches in the sense that the tree can fork, and you can place a "bookmark" against any revision in that tree. (When exporting to a Git repo, those bookmarks are mapped to Git branch heads.)
So yeah if I have revision `a` with two children `b` and `c`, and even if those children have their own children, a change to `a` will get rippled down to `b` and `c` and any further children. It's a bit like Git rerere if you've used it, except you're not forced to fix every conflict immediately.
Any conflicts along the way are marked on those revisions, you just fix the earliest conflicts first and quite often that'll ripple down and fix everything up. Or maybe there'll be a second conflict later down the stack of commits and you'll just fix that one the same way.
To fix a conflict you typically create a new revision off the conflict (effectively forking the tree at that point) using `jj new c` (let's call the result `cxy`), fix the conflict in that revision, and then `jj squash` `cxy` back into `c`. This, again, gets rippled down, fixing up all of the descendant commits.
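A minimal sketch of that conflict-fix loop (hypothetical session, assuming a recent jj; here the conflicted revision is called `b` rather than `c`, and the file contents are made up):

```shell
# Hypothetical session: create a conflict, fork a fix-up, squash it back.
export JJ_USER=demo JJ_EMAIL=demo@example.com   # jj wants an identity configured
cd "$(mktemp -d)" && jj git init demo && cd demo

echo one > f.txt && jj describe -m a
jj new -m b && echo two > f.txt                  # b rewrites f.txt

jj edit 'description("a")' && echo ONE > f.txt   # rewrite a; b's rebase conflicts

jj new 'description("b")'    # fork a fix-up revision off the conflict ("cxy")
echo TWO > f.txt             # resolve the conflicted file by hand
jj squash                    # fold the fix back into b; descendants follow
```

After the squash, the conflict marker on `b` is gone, and anything downstream of `b` would be rebased onto the fixed version automatically.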
Yes, I understand that, but what I'm saying is that the problem definition isn't completely clear to me. I'm not saying there is none, just that it may not be obvious on a first read.
You're being condescending for no valid reason, and what you're saying is not correct. Models moved past "plumbing" tasks and well into engineering territory a generation or two ago. There's plenty of evidence. We see models perfectly capable of reasoning about kernel code, yet you're convinced that game engines are somehow more special. Why? There are plenty of examples where AI is successfully applied to hard engineering tasks (database kernels), and where it became obvious that the models are almost perfectly capable of reasoning about that, to be honest, quite difficult code. I think you should reevaluate your stance and become more humble.
Link me the research on the hard engineering tasks they've done on database kernels, I'd love to see it, sounds interesting.
As long as people comment, "Only bad/stupid engineers hand-write code because LLMs are better in every way," and that's objectively not true in various engineering circles, I'll keep trolling them and being just as hyperbolic in the inverse because it amuses me. Don't take things too seriously on the internet; you'll have a bad time ;)
I am an experienced C++ developer and I know what happens in this particular case, but this type of minutiae is only interesting to developers who have never had an actually hard problem to solve, so it's a red flag to me as well. 10 years ago I would have thought differently, but today I do not. High-performing teams do not care about this stuff.