Hacker News new | past | comments | ask | show | jobs | submit | _fizz_buzz_'s comments | login

What hardware do you use? I want to experiment with running models locally.

OP’s Qwen3.6 27B at Q6 weighs in north of 20GB on Hugging Face, so it should run on an Apple Silicon machine with 32GB RAM. Smaller models work unreasonably well even on my M1/64GB MacBook.

I am getting 10 tok/sec with Qwen3.5 27B (thinking, Q4, 18GB) on an M4/32GB Mac Mini. It’s slow.

For a 9B (much smaller, non-thinking) I am getting 30 tok/sec, which is fast enough for regular use if you need something from the training data (like how to use grep, or Hemingway’s favorite cocktail).

I’m using LMStudio, which is very easy and free (beer).


Not who you asked, but I've got a Framework desktop (Strix Halo) with 128GB RAM. On Linux, up to about 112GB can be allocated to the GPU. I can run Qwen3.5-122B (4-bit quant) quite easily on this box. I find qwen3-coder-next (80B params, MoE) runs quite well at about 36 tok/sec. Qwen3.5-27b is a bit slower at about 24 tok/sec, but that's a dense model.
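
For anyone who wants to script against a locally running model rather than use the chat UI: LM Studio exposes an OpenAI-compatible HTTP server (by default at http://localhost:1234/v1). A minimal sketch, assuming a model is already loaded — the model name below is whatever you loaded locally, not a fixed identifier:

```python
import json

# Default endpoint of LM Studio's local OpenAI-compatible server.
BASE_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(model: str, prompt: str, temperature: float = 0.7) -> dict:
    """Build an OpenAI-style chat-completions payload for the local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
    }

# "qwen3.5-27b" is an illustrative model name; use whatever you loaded.
payload = build_chat_request("qwen3.5-27b", "How do I use grep -r?")
print(json.dumps(payload, indent=2))
# Send it with e.g.: requests.post(BASE_URL, json=payload, timeout=120)
```

The same payload works against llama.cpp's server or Ollama's OpenAI-compatible endpoint, only the base URL changes.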

Yeah, taking the spice list as the starting point works much better, imo. I also prepopulate the CLAUDE.md file with some information like the pinout/pinmux of the MCU, otherwise Claude might run in circles targeting the wrong pin (to be fair, that also happens to me, lol).

Spicelib really just makes calls to the selected SPICE engine (in my case ngspice). In this setup, spicelib's main job is to parse the raw SPICE data and provide a unified interface regardless of which engine is selected. But to answer the question: the path to the SPICE model must currently be set explicitly.
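
To make the "unified interface over a netlist" idea concrete, here is a toy sketch — not spicelib's actual API, just an illustration of the kind of parsing such a front-end does, classifying SPICE element lines by their leading letter:

```python
# Toy illustration of netlist parsing (NOT spicelib's real API).
# SPICE identifies element type by the first letter of the name.
ELEMENT_TYPES = {"R": "resistor", "C": "capacitor", "L": "inductor",
                 "V": "voltage source", "X": "subcircuit call"}

def parse_element(line: str) -> dict:
    """Split one SPICE netlist line into name, nodes and value/model."""
    name, *rest = line.split()
    return {
        "name": name,
        "type": ELEMENT_TYPES.get(name[0].upper(), "other"),
        "nodes": rest[:2],   # two-terminal assumption for this sketch
        "value": rest[2:],
    }

netlist = ["R1 in out 1k", "C1 out 0 100n", "V1 in 0 DC 5"]
for line in netlist:
    print(parse_element(line))
```

A real library additionally handles multi-terminal devices, continuation lines, and the binary/ASCII raw output files each engine writes.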

Oh, I remember seeing Jumperless a while ago, but completely forgot about it. Combining this with something like Jumperless does sound interesting. What does your setup look like? Does Claude tell you: "try a 1k resistor in parallel here"?

It's just measurements for now. But sourcing ideas from the model could be interesting!

I haven't tried it with Codex yet. But my approach is currently a little bit different. I draw the circuit myself, which I am usually faster at than describing the circuit in plain English. Then I give Claude the SPICE netlist as my prompt. The biggest help for me is that I (and Claude) can very quickly verify that my SPICE model and my hardware are doing the same thing. And for embedded programming, Claude automatically gets feedback from the scope and can correct itself. I do want to try out other models. But it is true, Claude does like to congratulate itself ;)

Claude can absolutely correct itself, change the source code on the MCU, and adapt. However, it also makes mistakes, such as claiming it matched the simulation when it obviously didn't. Or it might make dubious decisions, e.g. bit-banging a pin instead of using the dedicated UART peripheral. So, I don't let it build completely by itself.

I have a feature request: I built an MCP server, but now it has over 60 tools. In most sessions I really don't need most of them. I suppose I could split this into several servers. But it would be nice to give the user more power here: let me choose which tools should be loaded, or let me group tools together so a group can be loaded as a unit. Not sure if that makes sense …
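
The grouping idea could look something like this — a hypothetical sketch, not part of any real MCP SDK, where the client opts into named groups and only those tools are advertised for the session:

```python
# Hypothetical tool-group filter for an MCP-style server.
# All names here are made up for illustration.
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    group: str

REGISTRY = [
    Tool("scope_read", "measurement"),
    Tool("scope_trigger", "measurement"),
    Tool("spice_run", "simulation"),
    Tool("spice_plot", "simulation"),
    Tool("flash_firmware", "embedded"),
]

def tools_for_session(enabled_groups: set[str]) -> list[Tool]:
    """Advertise only the tools whose group the user enabled."""
    return [t for t in REGISTRY if t.group in enabled_groups]

print([t.name for t in tools_for_session({"simulation"})])
```

This keeps the tool list small per session without having to maintain several separate servers.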

If only 100 people in the world are using this language, who are you even merging code with, lol.


Some of the other people?


Can agents not check out different branches and then work on them? That's what people do, too. I have a hard time understanding what problem is even being solved here.


Yes, this is the obvious solution. Multiple agents working on multiple features should use feature branches.

Can’t believe how this whole AI movement seems to want to reinvent software engineering, poorly.


Their goal is not to give us a better tool, it's to get us to think our old tools are rubbish so we give them money instead.


To be entirely fair, while Git is getting better, the tooling UI/UX is still designed with the expectation that someone has read the Git book and understands exactly how it works.

Which should be a basic skill for anyone dealing with code, but Git hasn't been just a programmer's tool for a long time now, so a better UI is welcome.


Has that ever been achieved in software/dev industry?


Claude can use worktrees, so if you have a system with, say, 10 agents, each one can use a worktree per session. No need to clone the repo 10 times or work on branches. Worktreeees.
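
A minimal sketch of the worktree-per-agent setup, driven from Python so it is safe to run anywhere (it builds a throwaway repo first; the agent names are illustrative):

```python
import pathlib
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given directory, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

# Throwaway repo so the example doesn't touch a real project.
repo = pathlib.Path(tempfile.mkdtemp()) / "repo"
repo.mkdir()
git("init", "-q", cwd=repo)
git("-c", "user.email=a@example.com", "-c", "user.name=a",
    "commit", "--allow-empty", "-m", "init", cwd=repo)

# One worktree + branch per agent, all sharing the single clone's object store.
for agent in ("agent-1", "agent-2"):
    git("worktree", "add", "-q", str(repo.parent / agent), "-b", agent, cwd=repo)

out = subprocess.run(["git", "worktree", "list"], cwd=repo,
                     capture_output=True, text=True).stdout
print(out)
```

Each worktree is an independent checkout, so agents can edit and commit concurrently without stepping on each other's working directory.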


The grid is not unstable, and it was also not unstable the last few days when prices were negative. Germany has one of the most stable grids in the world. Negative prices are good when you are buying electricity and bad when you are selling, but of course, in a functioning market there generally shouldn't be too many days with negative prices. It does mean that there isn't enough storage on the grid currently.


They are working on that https://battery-charts.de/


Sure, and negative prices send a strong signal to the market to hurry up with adding storage. So this will probably be a temporary situation: in the future there will be very few days with negative prices, but there will also be fewer days with very high prices.


Now what we need is a cheap grid interconnect for home users running solar panels that automatically starts charging a battery when grid prices go negative, to absorb that extra power.
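
The control logic for such an interconnect is simple enough to sketch. A toy version with made-up prices — charge whenever the spot price goes negative (you are effectively paid to consume) and the battery has room:

```python
# Toy controller: charge the home battery whenever the spot price is
# negative. Prices and state-of-charge values are illustrative.
def charge_decision(price_eur_per_mwh: float, soc: float,
                    soc_max: float = 1.0) -> str:
    """Decide what to do for one price interval, given state of charge."""
    if price_eur_per_mwh < 0 and soc < soc_max:
        return "charge"
    if price_eur_per_mwh < 0:
        return "idle (battery full)"
    return "idle"

hourly_prices = [42.0, 5.0, -12.5, -3.0, 60.0]  # EUR/MWh, made up
soc = 0.8
for p in hourly_prices:
    print(p, charge_decision(p, soc))
```

A real device would read prices from the grid operator's or a market API and also consider forecasted solar output, but the trigger condition is the same.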


That already exists


Yeah, curtailment is a simple way to deal with instability. I wonder who chooses which power plants have to curtail their output. The Bundesnetzagentur?


The plants that are willing to supply at the most negative price are the ones that will not be curtailed. So, market forces. Basically, at such points, power plants are paying for the privilege of being allowed to supply power. This is dominated by restart costs and is therefore often paid by classic "baseload" plants such as nuclear ones, i.e. they accept losing money during one part of the day/week so that they can make money during a different part of the day/week.
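
This merit-order idea can be sketched in a few lines — a simplification with equally sized, made-up plants: each plant bids the price at which it is still willing to supply, and when supply exceeds demand, the plants with the highest bids are curtailed first:

```python
# Merit-order curtailment sketch. Bids in EUR/MWh; a negative bid means
# the plant will pay to keep running (e.g. to avoid restart costs).
# All plants are the same size here — a deliberate simplification.
def curtail(bids: dict[str, float], demand_mw: float,
            size_mw: float = 100.0) -> list[str]:
    """Return the plants that keep running, cheapest bids first."""
    running = []
    for plant, bid in sorted(bids.items(), key=lambda kv: kv[1]):
        if len(running) * size_mw >= demand_mw:
            break  # demand covered; everyone else is curtailed
        running.append(plant)
    return running

bids = {"nuclear": -30.0, "wind": -5.0, "gas_peaker": 40.0}
print(curtail(bids, demand_mw=200.0))  # the highest bidder is curtailed
```

With demand at 200 MW and 100 MW plants, only the two cheapest bids clear, and the gas peaker is curtailed, matching the merit-order description above.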


So, the economic incentives favor peaker plants and storage (natural gas plants, peaker coal plants, batteries, pumped storage).


Yes and no, maybe ;) The economic incentives are designed to always provide enough power, and no more, at the cheapest possible price for that time slot. The market (if free enough) searches for that point over time. One possible solution is peaker plants (this made financial sense in the fuel-burning age); another may be overbuilding (e.g. like a home backup or off-site generator sized for peak load/demand, not the average demand). All constrained by what is physically possible on the grid.

Peaker plants gamble that there are going to be peaks (sure, they financially plan for them, but they are not guaranteed to make their profits).

Within the peaker-plant category, the storage options differ from the spinning options because the incentives are slightly different. Specifically, battery storage is not just a peaker-plant exercise, it is also a grid-connection optimization exercise. Grid connections limit how much power one can sell from a generator. A battery system can be placed on the grid, or between the grid and the generator. Placed between grid and generator, it lets the generator run at its optimal speed more often, and sell more, because it can guarantee a wider range of output for a longer amount of time.

Some of the first battery storage systems were sold to gas peaker plants because they gave the plants more time to react, i.e. they could idle their gas turbines at a more efficient level, or even shut them off and start them on demand.

