3. The history of Tech is people from other fields, usually career dropouts, deciding they could have a more comfortable life if they gave up their passion and did something easy, in a field with too little history to have its own academic standards.
I've never had that problem. Maybe you shouldn't set up much swap space; I would expect the system to run well enough to save your work and shut down for most common leaks.
Swap usually makes it worse. Without swap there is some chance that the Linux OOM killer does something useful and saves the system; with swap, you get a frozen system that never manages to kill anything because it is too busy swapping. You can wait 5, 10, or 15 minutes, but the system never recovers without a reboot.
That only works if the system is accessing mostly anonymous pages. If the load is touching plenty of mmapped code/data pages, it can still thrash those even with swap disabled. I've seen systems hang for 10+ minutes without recovering even though swap was already disabled.
The Linux kernel OOM killer only acts once there's nothing left that can be discarded, which is often far too late to save the system. You need a user-space OOM killer like earlyoom if you want to keep the system responsive.
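The core idea behind a user-space killer like earlyoom can be sketched in a few lines. This is an illustrative toy, not earlyoom's actual code; the 10% watermark, the SIGTERM choice, and the polling interval are all assumptions:

```python
import os
import signal
import time

LOW_WATERMARK = 10.0  # percent MemAvailable; illustrative threshold

def mem_available_percent(meminfo_text):
    """Parse MemTotal/MemAvailable (in kB) out of /proc/meminfo text."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("MemTotal", "MemAvailable"):
            fields[key] = int(rest.split()[0])
    return 100.0 * fields["MemAvailable"] / fields["MemTotal"]

def biggest_process():
    """Return (pid, rss_kb) of the process with the largest resident set."""
    best = (None, 0)
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/status") as f:
                for line in f:
                    if line.startswith("VmRSS:"):
                        rss_kb = int(line.split()[1])
                        if rss_kb > best[1]:
                            best = (int(pid), rss_kb)
        except OSError:
            continue  # process exited, or we lack permission
    return best

def watch(poll_seconds=1.0):
    """SIGTERM the biggest process before the kernel OOM killer kicks in."""
    while True:
        with open("/proc/meminfo") as f:
            if mem_available_percent(f.read()) < LOW_WATERMARK:
                pid, _ = biggest_process()
                if pid is not None:
                    os.kill(pid, signal.SIGTERM)
        time.sleep(poll_seconds)
```

The point of acting early is exactly the one made above: by terminating a process while there is still reclaimable memory, the system never enters the thrashing regime where even the kernel's own killer can't make progress.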
Not a long time. With swap enabled, when a process consumes too much memory your system goes from perfect performance, to a lagging cursor, to everything frozen so you can't even switch to a TTY, all within 5-10 seconds.
Without swap, the system lags for a couple of seconds, the OOM killer frees up memory, and you're good to go again. The only slowdown is re-reading any pages that were evicted from the file cache, but those come back quickly after the OOM killer does its thing.
What if the thing you kill is in the critical path to saving your work? And if it isn't, I don't really understand why you would be swapping it in a lot.
I would view the OOM solution as a compute-as-cattle thing, but here we are talking about a user desktop, where the user can take the best action for themselves once they realize there's a problem.
This sounds nothing like the gradual leak problem described. An OOM killer is great once you actually run out of resources; removing virtual resources so that you always run out and play roulette is using it as a blunt hammer.
You have only about 10 seconds between the system getting slower and the system locking up completely. If you manage to hit the Magic SysRq combination (Alt+SysRq+F triggers the OOM killer) in time, that can save the system, but you have to be quick.
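For this to work, the SysRq interface has to be enabled first; a minimal sketch, assuming a distro that reads drop-ins from /etc/sysctl.d/ (the file name is arbitrary, and the value 1 enables all SysRq functions, including the F/OOM-kill one):

```shell
# Enable all Magic SysRq functions; Alt+SysRq+F then invokes the
# kernel OOM killer on demand, even on a badly thrashing system.
echo "kernel.sysrq = 1" | sudo tee /etc/sysctl.d/90-sysrq.conf
sudo sysctl --system
```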
My previous Linux laptop had 8GB of RAM (2014–2017 or so, I think). I never even got round to setting up swap, and I only ran into problems two or three times total, when running two or three Firefox instances plus Chromium, a VM taking 2GB, and a bunch of other things. Some of the biggest consumers, namely browsers, actually notice if they’re using too much of your RAM and adjust how they work so they don’t ask for so much RAM, at what I believe is a fairly slight performance cost.
On my current laptop, with my current habits, I’m consuming a lot more memory, so the technique I use to avoid OOM is simply having 40GB of RAM. (As it happens, I do have swap set up at present, because certain circumstances meant I wanted to hibernate occasionally; I should disable it again now that I no longer need to hibernate.)
> some of the biggest consumers, namely browsers, actually notice if they’re using too much of your RAM and adjust how they work so they don’t ask for so much RAM
Maybe there are reasons, but it's nowhere near good enough -- I frequently run out of RAM and hit OOM kills (I prefer not to deal with Linux swap), usually requiring a reboot. I really wish I could just set an upper limit on, e.g., Firefox's RAM usage -- 8GB, say -- instead of its insistence on using all of the unused RAM minus a couple to several hundred MB, which doesn't leave enough room for memory-usage spikes. There might be a way to set this buried somewhere in the config parameters, but I could never find it. It must be technically possible to set some limit, because otherwise the browser couldn't maintain a fairly consistent usage just below total system RAM.
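Absent a Firefox-side knob, one workaround is to cap the browser from the outside with a cgroup; a sketch, assuming systemd with cgroup v2 (the 8G figure and the swap setting are just illustrative):

```shell
# Launch Firefox in a transient user scope capped at 8 GB of RAM;
# MemorySwapMax=0 keeps it from spilling into swap instead.
systemd-run --user --scope -p MemoryMax=8G -p MemorySwapMax=0 firefox
```

With a hard cap like this, exceeding the limit triggers the OOM killer inside that scope only, so the browser dies (or sheds tabs) instead of taking the whole desktop down.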
I had to look a few times to convince myself they weren't at least using the tenure of ex-employees. But I'm not sure they could actually search LinkedIn that effectively for anything but current employees without hitting monetized APIs, etc.
I worked in such a ghost kitchen in the 1990s (we didn't have a license to sell food on premises).
You want a gas conveyor oven, which takes a lot of space itself and needs ~3 meters in front for assembly and 2 meters behind it. The walk-in fridge and freezer are at least another shipping container in size. Then there's clean-up and other prep space.
For us, wages were more than rent, and probably more than our competitors' rents. 2/3rds of wages would be drivers at slow times, going to something like 4/5ths during the Super Bowl.
Well, the article brings up iodine overdose from medications popular at the time, but you pretty much can't get too much iodine from iodized salt without having consumed way too much salt first.
I think it's been described as "as good as dead", so technically not, but it's rare that anyone gets their other work reduced to anywhere near 80% unless their side project is no longer really a side project.
If it's difficult to keep people programming 2 hours a day, then losing 8 hours a week to a side project naturally produced fuller projects than the meeting-encased ones.
This is unrelated to the thread. The point was obfuscating the fact that the profit comes from a specific copyrighted work. I.e., selling 600 versions of a game, each with a different default skin, hides no income from tax authorities.
You're referencing profit instead of revenue, which is what the thread was referring to. Again, you can't hide revenue as easily: splitting it up 600 ways doesn't solve the problem that game XYZ sold 10 million copies through the 600 subsidiary distributors. Nintendo could hide the profit that way, by saying the subsidiaries took 100% of the revenue as costs, but the total revenue can't be hidden. It's either a reported sale or not; there's not really a way to fudge that.
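A toy calculation (all numbers invented) makes the distinction concrete: the revenue figures sum to the same total no matter how the sales are split, while reported profit can be pushed to zero by booking intercompany costs:

```python
# Assumed figures: 10M copies at $60, split across 600 subsidiaries.
unit_price = 60.0
copies_per_subsidiary = [10_000_000 // 600] * 600

# Revenue: each sale is reported somewhere, so the split changes nothing;
# the 600 per-subsidiary figures still add up to the same total.
total_revenue = sum(n * unit_price for n in copies_per_subsidiary)

# Profit: each subsidiary can book "costs" (e.g. licensing fees paid to
# a low-tax parent) equal to its revenue, wiping out taxable profit.
intercompany_costs = [n * unit_price for n in copies_per_subsidiary]
total_reported_profit = total_revenue - sum(intercompany_costs)
```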
When a country defaults, nothing violent happens internationally. Their missed payments degrade their currency from hard to soft, and their trade becomes mostly limited to the hard currency they earn from exports.
I think this would be particularly brutal if unrelated international trade were still in dollars and everyone wanted to cash out.
Economy size is very interesting in itself and is relevant to some decisions, but per capita has always seemed more important to me, i.e. the Italy and Russia comparison as similar-size economies is an interesting one.
> but per capita has always seemed more important to me
> i.e. the Italy and Russia comparison as similar-size economies
Every time I hear "hur-dur, economy smaller than Italy's" (mostly on Reddit, ofc) I remember a funny bit of trivia: there are more pensioners in Russia (40M+) than people in the entire 15-65 age range in Italy (36M).
So yes, economy size without a tie to population size and social, geographic, and other measures is misleading at best.