Just use ssh from Cygwin. DLL hell was rarely a problem, just always install everything via setup.exe.
The single biggest problem it has is slow forking. I learned to write my scripts in pure bash as much as possible, or as a composition of streaming executables, and to avoid launching an executable per line of input or similar.
As a dependency of a shipping Windows application that needs to cleanly coexist side-by-side with existing Cygwin installations and optionally support silent install/upgrade/uninstall through mechanisms like SCCM, Intune, and Group Policy?
Not so much.
I do use the setup program to build the self-contained Cygwin root that's ultimately bundled into my program's MSI package and installed as a subdirectory of its Program Files directory, however.
I've never had a problem installing from setup, but some tools that were (maybe still are; it's a long time since I've needed anything not in the main repo) ported to Windows using the Cygwin DLLs were distributed with their own versions of those DLLs, and could clobber the versions you had otherwise installed (and have their versions clobbered when you fixed that).
> slow forking
There isn't much that can be done about that: starting up and tearing down a process on Windows is a much more resource-intensive operation than on most other OSs, because a lot is going on by default that on other OSs a process opts into only if it needs it, by interacting with GUI libraries and such. This is why threads were much more popular on Windows: while they are faster than forking on other OSs too (especially, of course, when data needs to be shared between tasks, because IPC is a lot more expensive than simply sharing in-process memory), the difference is not as stark as it is under Windows, so the potential difficulties of threaded development weren't always worth the effort.
Cygwin can't do anything about the cost of forking processes, unfortunately.
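To make the gap visible, here's a quick sketch of my own (arbitrary iteration count, not from the comment above) you can run in both a Cygwin shell and a Linux shell:

```shell
# Rough micro-benchmark sketch: 200 external process spawns vs 200 calls
# to a shell builtin. On Linux both loops finish almost instantly; under
# Cygwin the first loop is dramatically slower, because every /bin/true
# pays the full Windows process creation/teardown cost.
time for i in $(seq 200); do /bin/true; done   # one fork+exec per iteration
time for i in $(seq 200); do :; done           # ':' is a builtin, no fork
```

The second loop is a rough baseline for the shell's own overhead; the difference between the two timings is approximately the per-process cost.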
Slow forking is only the second biggest problem IMO. The biggest is the lack of proper signals. There's a bunch of software out there that just isn't architected to work well without non-cooperative preemption.
What Cygwin provides is a fake, cooperative emulation of signals. It isn't preemptive (unless someone got a kernel driver approved while I wasn't looking?), so many things either work poorly or not at all. Stop-the-world GC algorithms are a good example. Coroutine implementations also have to be cooperative.
If you're curious, I believe the issue was discussed at length in the Go GitHub issues years ago. Also on the mailing lists of many other languages.
Cygwin bash isn't slow either. The problem is that a typical bash script isn't a series of bash operations; it's a series of command-line program executions.
For example, someone might do something like this (completely ignoring the need to quote, in the interest of illustrating the actual issue, forking):
for x in *; do
    new_name=$(echo $x | sed 's/old/new/')
    mv $x $new_name
done
Instead of something like this:
for x in *; do
    echo $x
done | sed -r 's|(.*)old(.*)|mv \1old\2 \1new\2|' | grep '^mv ' | bash
This avoids a sed invocation per loop and eliminates self-renames, but it's harder to work with.
Of course, the code as written is completely unusable in the presence of spaces or other weird characters in filenames, so do not use it as-is.
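For what it's worth, a variant that is both space-safe and fork-free is possible with bash's built-in `${var/pattern/replacement}` substitution (my addition, not from the comment above):

```shell
# Rename files containing "old" to "new" with no per-file subprocesses:
# ${x/old/new} is expanded by bash itself, so the loop body only forks
# for mv, and quoting keeps spaces and glob characters in names safe.
for x in *old*; do
    [ -e "$x" ] || continue          # glob matched nothing: skip
    mv -- "$x" "${x/old/new}"        # replace first occurrence of "old"
done
```

The `*old*` glob also sidesteps the self-rename problem, since files whose names don't contain "old" are never visited.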
Cygwin implements a POSIX API on Win32 with a smattering of Nt* calls to improve compatibility but there's a lot of hoop jumping and hackery to get the right semantics. Fork isn't copy on write, for one thing.
I was a Cygwin user from about 1999 to 2022 or so, then spent a little time on WSL2 (it's what I still use on my laptop), but I've been fully Linux on the desktop since last year.
Ha, that tracks my own usage and timeline almost precisely, although I was using Cygwin and WSL2 in parallel for a while. Lots of complaints about Cygwin speed here, but NTFS filesystem access is actually a lot faster under Cygwin than under WSL2!
You come with a belief, then you wonder why other people don't have the belief. The belief was exogenous for you. Why do you believe the belief is not exogenous for others?
I guess you never talk to coworkers about your weekend. That's on the job. I see you mention the water cooler; how dare you talk there?
There's an element of revisionism to this perspective. It used to be thought that integration with the global economy would gradually bring more alignment with Western values as well.
The idea was that a rising middle class would demand more say in running the country, that elites would need to become accountable to the people (ideally via democracy), and that geopolitical competition would be positive-sum.
That idea was only faintly present during the Clinton and Bush years; by the time Obama was in office, I think it was clear it was never going to happen. The book covers the period from 2016 on, long after that neocon dream.
The idea lingered for longer than that. China under Hu Jintao wasn't exactly friendly to the West, but it was Xi who really set China on its present course: building a multi-polar world, making real noise about reunification with Taiwan, etc.
This new direction didn't become clear to both sides of the aisle in the US until a year or two into Xi's tenure. Had someone other than Xi been chosen, we would likely have a very different China today.
Not sure which exact model you're talking about, but I've run the 30B and the 3.5 32B models and both can get some things done and can waste tons of time getting some things completely wrong.
They're fun to mess around with to figure out what they can and can't do, but they're certainly not tools I can count on the way I count on Codex.
I've tasted Madeira-style wine from Crimea, IIRC from the 1800s. It was in an Enomatic machine at Hedonism Wines in London. I couldn't say it was appreciably better than something more modern.
On the other hand, I've had a glass of tawny port from the 1970s that Chez Bruce had on the menu at one point that kicked off a fruitless search into every barrel aged fortified wine I could get my hands on to recapture. It had a caramel taste that lingered for 30 minutes or more after the last sip. Amazing stuff.
Yes, but this is intentional, and that's what's out of line. The main content stays the same but exercises and case studies are rotated out to force an upgrade.
The business strategy class I took in college in Ireland used the same book for two or three years, even though the book was reshuffled every year, just to spread the financial burden on students a bit.
I exported all my email with Google Takeout, and Claude Code was able to write me a threaded email viewer local web app with basic search (chained ripgrep) in about 10 minutes, for any time I need to search archived emails.
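As a sketch of what a "chained" search can look like, assuming each message was exported to its own file under a hypothetical mail/ directory (shown with GNU grep; `rg -l0 … | xargs -0 rg -l …` would be the faster ripgrep equivalent):

```shell
# Two-pass ("chained") search: the first pass lists message files that
# mention "invoice"; the second narrows that list to files that also
# mention "2023". -Z/-0 keep filenames with spaces safe; -r skips the
# second grep entirely when the first pass finds nothing (GNU tools).
grep -rlZ 'invoice' mail/ | xargs -0r grep -l '2023'
```

Each added term is just another `| xargs -0r grep -l …` stage, which is what makes the approach easy for a code assistant to generate.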
The lack of robotics mention somewhat undermines this article.
I don't think it's intrinsically wrong, we are in a late stage of a transformation. Software is eating the world and AI is (so far) most profitably an automation of software.
There is plenty of money to be made along the way. I don't really buy the article's seeming confusion about where the money is going to come from. Anthropic is making billions and signing up prodigious amounts of recurring revenue every month.
>> At the early stage of a surge, investment tends to be patchy and not fully understood—the sector exists but it is not completely legible yet.
He says this in the context that AI clearly doesn’t fit this pattern, as the investment has been enormous.
I feel like he, and everyone else, has a scale problem due to the tendency to equate AI with LLMs. The investment is patchy and not fully understood. I really don't think we've seen anything more than the pre-tremors at this point, as the scale of the change is just as incomprehensible to the world at large now as it was when the steam engine was just a slightly better way of getting water out of a mine than a donkey.
Anthropic today, who next week? If locally run models ever get to the point where they can reliably solve... 85% of what the frontier cloud models can do, I think many would be willing to accept slightly less problem solving ability and just run the thing locally.
All hypothetical, but if compute + AI research continues at pace, in 5 years we should see extremely good local models.
As a user of local models, it's well above 85% already. I use frontier models at work and local models for home use because my day to day tasks are well within what DeepSeek can handle.
The question is whether robotics will look like some small number of platforms needing little development to adapt to different scenarios, or a million types of machines that are each highly fit for purpose.
Because the first situation won't create that many jobs. The second one might.
I expect hybrids. Something general has to be adaptable for what will be an expensive capital purchase.
The human form factor (torso up, anyway) is probably easier to bootstrap on a general basis, keyed off of human data. But I don't like the failure modes of bipedal robots: imagine a robot flailing around trying to regain balance in any setting with humans around.