Reading the article, it seems to boil down to the following two observations:
1. ARM64 is actually less "smart" than x64. While Intel's Core i9 tries to
be clever by aggressive boosting and throttling, Snapdragon just delivers
steady and consistent performance. This lack of variability makes it easier
for the OS to schedule tasks.
2. It is possible that the ARM build is more efficient than the x64 build,
because Windows has less historical clutter on ARM than x64.
So, has CPU throttling become too smart to the point it hurts?
It should be noted that this is a server OS, but it has been tested on a desktop x86 CPU.
The x86 server CPUs, like AMD Epyc or Intel Xeon, have a lower range within which the clock frequency may vary, and their policies for changing the clock frequency are less aggressive than for desktop CPUs. As a result, they provide more constant and predictable performance, which favors multi-threaded workloads. Desktop CPUs, in contrast, tune their clock-frequency control algorithms for the best single-thread performance, even if that hurts multi-threaded performance.
> The x86 server CPUs, like AMD Epyc or Intel Xeon, have a lower range within which the clock frequency may vary and their policies for changing the clock frequency are less aggressive than for desktop CPUs
Probably we need to compare Xeon/EPYC with something like AWS Graviton or Ampere Altra to get an accurate picture here. That said, I think "Windows Server works fast on Snapdragon" is both crazy and fascinating; I wasn't even sure if that was possible.
There is actually a clear, concise and actionable answer to this question:
- Hide under the nearest table or desk (if you are at home or in the office).
- Grab the nearest pole or handrail (if you are on a train).
The basic idea is that the most common cause of death in an earthquake is being crushed by falling objects, so you should use every second to minimize the risk.
Here are a few common mistakes:
- Do not attempt to stop furniture from falling (you'll get crushed by it)
- Do not try to run outside (you'll get hurt by falling walls)
- Do not try to turn off the gas (most systems have automatic shutoffs)
- And for Catfish's sake, do not use your precious 45 seconds to open social media.
There was a similar case in Japan recently: alt.ai
This company purported to sell an AI transcription service. It raised capital from notable local VCs and went public in Oct 2023.
It turned out that more than 90% of its sales were fake. The CXOs were arrested and the company was liquidated last month.
Personally I never get the appeal of going public on fake sales. By design, the amount you need to fake grows bigger and bigger over time. So the collapse is inevitable.
The Bank of France "transported" their reserve by selling the gold held in New York, and subsequently buying the same amount on the European market.
They opted to do so because it's just more efficient; it takes a lot of effort to physically move 129 tonnes of gold, after all. And as a side effect of this relocation project, they ended up recording a capital gain. It's a nothingburger.
For context, in 2025H1, 480 tons were moved from CH to the US (I assume originating from the UK after being recast).
My guess is that the choice to sell rather than transport was also driven by the (at the time) price divergence between the US and European markets (arbitrage, plus not having to pay for transport and refining).
PE is a very broad practice. It's kinda hard to make a blanket argument for it (it's like asking "Is software good for society?" Yes, maybe?).
So here are some positive things that I think PE funds can contribute:
1) Private equity serves as an exit path for small business builders. Suppose that you have built a small, profitable trucking company. Now you are old and want to retire. Your kids have no interest in the business, and have already built careers elsewhere that don't involve managing a fleet of Super Greats. Oftentimes, PE funds are the only realistic buyers of your business.
2) At a more subtle level, PE can supply better management. For example, a supermarket owner I know accepted capital from a PE fund specifically to acquire better talent (his remark: "very talented people are rarely excited to operate a rural food & beverage shop").
3) PE-backed companies are, arguably, structurally better than their public counterparts. The cliche is that many public firms are run like third-world fiefdoms (the board is focused on empire building; the executives spend lavishly on perks). Most of these concerns vanish once every director is given a shared, transparent objective set by the deal structure. (As Henry Kravis often remarks, PE is mostly about alignment of interests.)
I'll sketch a few points to illustrate the inner workings here:
- It's hard to buy a decent company at 5x EBITDA today. A typical EBITDA multiple nowadays is like 10x-15x.
(e.g. EQT bought SUSE for $3B in 2023, and the adjusted EBITDA was $240M, which implies 12x EBITDA)
- Debts are tranched. Banks typically get a senior slice, often secured by real assets (a.k.a. collateral), so they can recoup the money even when the company goes straight into a ditch. The real risk lies in the junior loans ("mezzanine"), which demand very high yields to compensate for that risk.
- In a typical PE deal, most profits are earned at exit, not via dividends en route. So managers have incentive to make the target company (look) better for the next buyer, rather than neglecting it.
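To make the exit-driven economics above concrete, here's a toy leveraged-buyout calculation; every number (entry multiple, debt share, growth rate, debt paydown) is hypothetical, chosen only to show why most of the profit shows up at exit:

```python
# Toy LBO sketch: buy a company at an EBITDA multiple, part-finance with
# debt, grow EBITDA and pay down debt, then sell at the same multiple.
# All figures are hypothetical, for illustration only.

ebitda_entry = 100.0   # EBITDA at purchase ($M)
multiple = 12.0        # entry and exit multiple (within the 10x-15x range)
debt_share = 0.6       # fraction of the purchase price financed with debt

enterprise_value = ebitda_entry * multiple      # 1200
debt = enterprise_value * debt_share            # 720
equity_in = enterprise_value - debt             # 480

# Over 5 years: EBITDA grows 5%/yr, half the debt is repaid from cash flow.
years = 5
ebitda_exit = ebitda_entry * 1.05 ** years
debt_exit = debt * 0.5

exit_value = ebitda_exit * multiple
equity_out = exit_value - debt_exit             # equity holders keep the rest

multiple_on_equity = equity_out / equity_in     # "MOIC"
irr = multiple_on_equity ** (1 / years) - 1
print(f"equity in: {equity_in:.0f}, equity out: {equity_out:.0f}, "
      f"MOIC: {multiple_on_equity:.2f}x, IRR: {irr:.1%}")
```

Even with modest EBITDA growth, the combination of leverage and debt paydown multiplies the equity several times over by exit, which is where essentially all of the return is realized.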
A more fundamental reason why the situation you describe rarely happens is that PE fund managers treat their operation as an ongoing business. Lenders are gonna be really pissed if they lose their money, so fund managers try to avoid that scenario to keep the credit flowing for their next deal.
I think the "Miscellaneous Ramblings" on the final page really illustrates the color of his personality:
Section 13.3: Miscellaneous Ramblings
And, of course, thanks to everyone else. If you contributed to the development of xv in some way, and I
somehow forgot to put you in the big list, my humble apologies. Documentation and careful record-
keeping are not my strong suits. “Heck,” why do you think it takes me a year and a half to come up with a
minor new release? Because, while I love to add new features to the code, I dread documenting the dumb
things. Besides, we all know that writing the documentation is the hardest part of any program.
Particularly when the good folks at id Software insisted upon releasing DOOM II...
And finally, thanks to all the folks who’ve written in from hundreds of sites world-wide. You’re the ones
who’ve made xv a real success. (Well, that’s not actually true. My love of nifty user-interfaces, all the
wonderful code I’ve gotten from the folks listed above, and the fact that xv actually serves a useful purpose
(albeit “displaying pictures of naked women”) are the things that have made xv a real success. You folks
who’ve written in have given me a way to measure how successful xv is.) But I digress. Thanks!
By the way, when I last counted (in October 1992), xv was in use at 180 different Universities, and dozens
of businesses, government agencies, and the like, in 27 countries on 6 of the 7 continents. Since then, I’ve
received messages from hundreds of new sites. And xv has been spotted in Antarctica, bringing the total to
7 of 7 continents, and allowing me to claim that xv is, in fact, truly global software. That’s probably a
good thing. Does anybody know if there’s a Unix workstation in the Space Shuttle?... :-)
I listened to the podcast linked in the article, and my understanding of the timeline is:
- The owner originally had two dogs. Both disappeared from her backyard one day. One dog returned home. The other vanished without a trace.
- Eleven years later, a random girl found the missing dog outside. She befriended the dog and brought him home. She talked with her parents and contacted ACCT Philly, who in turn found the original owner through a microchip.
Does this make sense? To me, this story managed to be a rare mix of heartwarming, insightful and frustrating.
Eleven years seems like a very long time to be a Philly street dog - kinda makes you wonder if it wasn't adopted by somebody in the interim before ending up with the girl somehow.
It fetches the number of mispredicted branches from Linux's perf
subsystem, which in turn gathers the metrics from the CPU's PMU
(Performance Monitoring Unit) interface.
Internally, Python holds a string as an array of uint32. A UTF-8 representation is created on demand from it (and cached). So pansa2 is basically correct [1].
IMO, while this may not be optimal, it's far better than the more arcane choice made by other systems. For example, due to reasons only Microsoft can understand, Windows is stuck with UTF-16.
[1] Actually it's more intelligent than that. For example, Python automatically uses uint8 instead of uint32 for ASCII strings.
There is no caching of a "utf-8 representation". You may check for example:
>>> x = '日本語'*100000000
>>> import time
>>> t = time.time(); y = x.encode(); time.time() - t # takes nontrivial time
>>> t = time.time(); y = x.encode(); time.time() - t # not cached; not any faster
Generally, the only reason this would happen implicitly is for I/O; actual operations on the string operate directly on the internal representation.
Python uses either 8, 16 or 32 bits per character according to the maximum code point found in the string; uint8 is thus used for all strings representable in Latin-1, not just "ASCII". (It does have other optimizations for ASCII strings.)
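The width switching described above can be observed from pure Python with `sys.getsizeof`; the exact byte counts vary across CPython versions, so only the relative growth between the strings matters here:

```python
import sys

# Four strings of equal length whose widest code point differs.
ascii_s  = 'a' * 1000   # ASCII                       -> 1 byte/char
latin1_s = 'é' * 1000   # fits in Latin-1             -> 1 byte/char
bmp_s    = '日' * 1000   # fits in the BMP             -> 2 bytes/char
astral_s = '😀' * 1000   # needs a supplementary plane -> 4 bytes/char

for name, s in [('ascii', ascii_s), ('latin1', latin1_s),
                ('bmp', bmp_s), ('astral', astral_s)]:
    print(name, sys.getsizeof(s))

# Per-character storage grows with the widest code point in the string.
assert sys.getsizeof(bmp_s) > sys.getsizeof(latin1_s)
assert sys.getsizeof(astral_s) > sys.getsizeof(bmp_s)
```

The gap between the BMP and supplementary-plane strings is roughly 2 bytes per character, matching the 16-bit vs 32-bit representations.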
The reason for Windows being stuck with UTF-16 is quite easy to understand: backwards compatibility. Those APIs were introduced before the supplementary Unicode planes existed, when "UTF-16" could be equated with UCS-2; the surrogate-pair logic was then bolted on top of that. Basically the same thing happened in Java.
> There is no caching of a "utf-8 representation".
No, there certainly is. This is documented in the official API documentation:
UTF-8 representation is created on demand and cached in the Unicode object.
https://docs.python.org/3/c-api/unicode.html#unicode-objects
In particular, Python's Unicode object (PyUnicodeObject) contains a field named utf8. This field is populated when PyUnicode_AsUTF8AndSize() is first called and reused thereafter. You can check the exact code I'm talking about here:
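As a CPython-specific sketch (not something you'd do in production code), the caching can even be observed from Python by driving that same C API through ctypes — the same buffer pointer comes back on every call:

```python
import ctypes

# PyUnicode_AsUTF8AndSize returns a pointer to the UTF-8 buffer that is
# stored in the unicode object's `utf8` field on first call.
func = ctypes.pythonapi.PyUnicode_AsUTF8AndSize
func.restype = ctypes.c_void_p
func.argtypes = [ctypes.py_object, ctypes.POINTER(ctypes.c_ssize_t)]

s = '日本語' * 10
size = ctypes.c_ssize_t()

p1 = func(s, ctypes.byref(size))
p2 = func(s, ctypes.byref(size))

# Identical pointers: the second call reused the cached buffer.
assert p1 == p2
assert size.value == len(s.encode('utf-8'))
```

Note this is a different path from `str.encode()`, which builds a fresh `bytes` object each time — which is why the timing experiment above shows no speedup.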
The C API may provide for it, but I'm not seeing a way to access that from Python. This sort of thing is provided for people writing C extensions who need to interface to other C code.
(And the code search seems to be broken; it can't find me the definition of `unicode_fill_utf8` although I'm sure it's obvious enough.)