I've not had that issue with Codex. I was testing a public API I work on for issues; Codex was happy to attempt to break it, but it did refuse to create a script that would automate the issue it found.
This can result in some funny interactions. I don't know if Claude will say anything, but I've had some models act "surprised" when I commented on something in their thinking, or even deny saying anything about it until I insisted that I could see their reasoning output.
I sometimes play about with local models via Ollama/ComfyUI, and more recently ACE-Step to generate music.
That's short bursts of heat, 5-10 minutes during the render. I would not be happy with that for multiple hours a day; I'm sure it would have a negative effect on battery health.
I'm using the Cursor CLI. When I just ran it with its built-in setup it scored 10/16 tokens, but I also have my own custom CLI tool that does tasks for my job, and when I used that it scored 15/16. It missed the token on the Content Negotiation test.
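For anyone unfamiliar, content negotiation just means the API serves different representations of the same resource depending on the Accept header. A minimal sketch of the kind of check such a test might make (the endpoint URL here is hypothetical, not the API I was testing):

    # Hypothetical endpoint; the point is that the same resource should
    # honour different Accept headers.
    import requests

    url = "https://api.example.com/items/1"

    # Ask for JSON
    json_resp = requests.get(url, headers={"Accept": "application/json"})
    print(json_resp.headers.get("Content-Type"))  # expect application/json

    # Ask for XML; a conforming API either returns XML or 406 Not Acceptable
    xml_resp = requests.get(url, headers={"Accept": "application/xml"})
    print(xml_resp.status_code, xml_resp.headers.get("Content-Type"))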
I like Ollama, mostly because the CLI is pretty nice. Its desktop app makes some odd choices, though: if a model supports tools, the UI should give me the "search" option, but it only shows up for cloud models.
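The model side of tool support is there for local models too; you can pass tool definitions through the Ollama API yourself. A rough sketch, assuming the ollama Python client and a tool-capable model already pulled (llama3.1 used here as an example, and the web_search tool is just a stub definition):

    # Pass a tool definition to a local model via the Ollama chat API.
    import ollama

    tools = [{
        "type": "function",
        "function": {
            "name": "web_search",  # stub tool for illustration
            "description": "Search the web for a query",
            "parameters": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        },
    }]

    response = ollama.chat(
        model="llama3.1",
        messages=[{"role": "user", "content": "What's in the news today?"}],
        tools=tools,
    )

    # If the model decided to call the tool, the request shows up in the
    # message's tool_calls field.
    print(response["message"])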
I ran LM Studio for a while, but I don't really use local models that much other than to mess about.