
All right guys, this is your time: what consumer device do you use for local LLM inference? GPU-poor answers only.


An AMD Ryzen AI Max+ 395 - I use the one from frame.work (https://frame.work/de/en/desktop) with 128 GB of unified RAM, and it can run a 120b model (gpt-oss:120b) just fine.

See Wendell's review here: https://www.youtube.com/watch?v=L-xgMQ-7lW0

There are other mini-PC manufacturers; the mainboard is the important part.


Wow, that's quite beefy.



