First, I think sorting PRs by "recently updated" can be a good proxy for "does anyone care about this PR"... If it's being bumped and reviewed, it is.
But also definitely start setting up linting rules / labels to indicate how healthy or close something is to being merged.
The goal is to limit work-in-progress, and focus on getting PRs that are easy to merge over the finish line.
Edit: and yeah, a weekly review cadence goes a long way to triage tickets and PRs and get some initial feedback to developers. I also like the "next review on this date" proposal to push certain problematic PRs to a slower cadence so they're not occupying too much bandwidth.
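The triage above can be sketched with the GitHub CLI, assuming the repo lives on GitHub (the `gh` flags and search qualifiers below are standard, but the label name is made up for illustration):

```shell
# Surface least-recently-updated open PRs first, so stale ones
# come up during the weekly review (sort:updated-asc is normal
# GitHub search syntax passed through --search).
gh pr list --state open --search "sort:updated-asc"

# Label-driven health triage: list PRs already marked as close to
# done ("ready-to-merge" is a hypothetical label, use your own).
gh pr list --label "ready-to-merge"
```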
Well "Outlook (new)" finally stopped OOM-ing on my very normal-sized inbox, so I went back to using it over Outlook Classic... Can't say I notice a difference much these days.
(Not a residential inbox, the "I work in IT" sized inbox with all the email alerts about jobs failing...)
One mental model I have with LLMs is that they have been the subject of extreme evolutionary selection forces that are entirely the result of human preferences.
Any LLM that wasn't sufficiently likable and helpful in the first two minutes was deleted, not iterated on further, or subjected to so much retraining (sorry, "backpropagation") that it's no longer the model it started out as.
So it's going to say whatever it "thinks" you want it to say, because that's how it was "raised".
Fully agree. I wonder how this will show up in the long term. Will every business/CEO just do more of what they wanted to do anyway, but now supported by AI/LLMs?
The possibilities in "dangerous" fields are a bit more frightening. A general is much more likely to ask ChatGPT "Do you think this war is a good idea / should I drop a bomb?" than to use it as an actually helpful tool, where you might ask "What are 5 hidden points in favor of / against bombing that one likely has missed?"
The more you use AI as a strict tool that can be wrong, the safer. Unfortunately I'm not sure if that helps if the guy bombing your city (or even your president) is using AI poorly, and their decisions affect you.
> Will every business/CEO just do more of what they wanted to do anyway, but now supported by AI/LLMs?
Arguably, it already worked that way. The best way to climb the ranks of a "dictatorial" organization (a repressive government or an average large business) is to always say yes. Adopt what the people up above want you to use, say, and think. Don't question anything. Find silver linings in their most deranged ideas to show your loyalty.

The rich and powerful who occupy the top ranks of these structures often hate being challenged, even when that's irrational for their own well-being. Whenever you see a country or a company making a massive mistake, you can often trace it back to this. Humans hate being challenged, and the rich can insulate themselves even further from the real world.
What's worrying me is the opposite - that this power is more available now. Instead of requiring a team of people and an asset cushion that lets you act irrationally, now you just need to have a phone in your pocket. People get addicted to LLMs because they can provide endless, varied validation for just about anything. Even if someone is aware of their own biases, it's not a given that they'll always counteract the validation.
This. Make sure the 'active' flag (or deleted_at timestamp) is part of most indexes and you're probably going to see very small impacts on reads.
It then turns into a slowly-growing problem if you never ever clean up the soft-deleted records, but just being able to gain auditability nearly immediately is usually well worth kicking the can down the road.
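One way to keep reads fast under the soft-delete pattern above is a partial index that only covers live rows, so piling-up tombstones never bloat the index. A minimal sketch in SQLite (the `users` table and column names are made up for illustration; Postgres supports the same `WHERE` clause on indexes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (
    id INTEGER PRIMARY KEY,
    email TEXT NOT NULL,
    deleted_at TEXT  -- NULL means the row is live
);
-- Partial index: only live rows are indexed, so queries that
-- filter on deleted_at IS NULL stay fast as tombstones accumulate.
CREATE INDEX idx_users_email_live
    ON users(email) WHERE deleted_at IS NULL;
""")

conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
# Soft delete: the row stays around for auditing, but drops out
# of the partial index.
conn.execute(
    "UPDATE users SET deleted_at = datetime('now') "
    "WHERE email = 'a@example.com'"
)

live = conn.execute(
    "SELECT COUNT(*) FROM users WHERE deleted_at IS NULL"
).fetchone()[0]
total = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(live, total)  # prints: 0 1
```

The soft-deleted row is still queryable for audits (`total` is 1), while live-row lookups see a clean index.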
So how do we upgrade the heat rejection system of this planet?
Clearly removing CO2 is expensive, but can we just paint some of the desert with paint that reflects in the infrared window? Or make clouds, as Neal Stephenson describes in his novel Termination Shock?
Which is still slightly useful - I've got two Dell Wyse 5070s, fanless, and being able to load each with 16 GB of DDR3 RAM for a song made them an obvious upgrade over a Raspberry Pi 4 that was cramped for RAM.
I should probably sort through some old boxes and eBay the stuff I've saved for no particular reason. It's not (I hope!) going to get any more valuable than it already isn't, and I'm not realistically going to build Frankenstein DDR1/2/3 systems rather than use a more modern, low-power Pi/SBC or NUC for the purpose, even if I have to buy the latter!
If you're okay with DDR3-like memory bandwidth you can get that cheaply on a modern system by getting Intel Optane NVMe/PCIe media (solid state storage much like NAND, but wearout-resistant well beyond even the best SLC NAND) and setting it up as swap. If you're either memory-bandwidth bound (common for local AI, not so much otherwise) or not OK with the power reqs of Optane, you're going to need actual expensive DRAM.
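Setting Optane up as swap, as described above, is a couple of standard commands. A sketch, assuming the Optane drive shows up as `/dev/nvme1n1` (substitute your actual device, and double-check it holds no data first):

```shell
# Format the whole Optane namespace as swap and enable it at a
# higher priority than any existing swap devices.
sudo mkswap /dev/nvme1n1
sudo swapon --priority 10 /dev/nvme1n1

# Optane latency is far closer to DRAM than NAND, so biasing the
# kernel toward swapping more readily is less painful than usual.
sudo sysctl vm.swappiness=100
```

For persistence you'd add a matching line to `/etc/fstab` and the sysctl to `/etc/sysctl.d/`.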
I tried the Zed editor and it picked up Ollama with almost no fiddling, so that has allowed me to run Qwen3.5:9B just by tweaking the Ollama settings (which had a few dumb defaults, I thought, like assuming I wanted to run 3 LLMs in parallel, initially disabling Flash Attention, and having a very short context window...).
Having a second pair of "eyes" to read a log error and dig into relevant code is super handy for getting ideas flowing.
Terraform Industries is working on that: burstable synthetic methane generation using cheap catalysts that you can afford to idle, generating methane only when electricity is cheap.
I personally have used Qwen2.5-coder:14B for "live, talking rubber duck" sorts of things.
"I am learning Elixir, can you explain this code to me?" (And then I can also ask follow-up questions.)
"Here is a bunch of logs. Given that the symptom is that the system fails to process a message, what log messages jump out as suspicious for dropping a message?"
"Here is the code I want to test. <code> Here are the existing tests. <test code> What is one additional test you would add?"
"I am learning Elixir. Here is some code that fails to compile, here is the error message, can you walk me through what I did wrong?"
I haven't gotten much value out of "review this code", but maybe I'll have to try prompting for "persona: brief rude senior" as mentioned elsewhere.
Yeah, but if the problem you're solving is rare for most practitioners (effectively theoretical until it actually happens), then people won't switch until they get bitten by that particular problem.