Hacker News | qskousen's comments

My father just closed his local small shop, as it was no longer paying the bills. It's harder and harder to compete with the internet.

I've experimented with this on diffusion models using a safetensors-to-GGUF tool I wrote. Even with relatively few sample images (~10k, still enough to keep my 3090 spinning for days straight), the benefits are quite noticeable: a smaller file with overall better results.


Wendell at Level1Techs built one for Steve of Gamers Nexus! It was interesting. I was going to link the video but couldn't find it, sorry.


This is pretty interesting. Based on the blog post, it seems like they are using a technique similar to what I have been using to generate "layer sensitivity" data in my (still pretty beta) ggufy project, which is aimed more at diffusion (image) models. https://github.com/qskousen/ggufy
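For anyone curious what a layer-sensitivity pass might look like in principle, here is a minimal toy sketch. To be clear, this is not the actual ggufy code: the tiny forward pass, the simulated 8-bit quantizer, and the MSE scoring are all simplified stand-ins I made up for illustration. The idea is just to quantize one layer at a time and measure how much the final output drifts from full precision.

```python
import numpy as np

def quantize_q8(w):
    """Simulate symmetric 8-bit quantization of a weight tensor."""
    scale = np.abs(w).max() / 127.0 or 1.0
    q = np.round(w / scale).clip(-127, 127)
    return q * scale

def layer_sensitivity(layers, x):
    """Score each layer by quantizing only that layer and measuring
    the MSE of the final output against the full-precision baseline.
    Higher score = more sensitive = keep at higher precision."""
    def forward(ws, h):
        for w in ws:
            h = np.tanh(h @ w)  # toy stand-in for the real network
        return h

    baseline = forward(layers, x)
    scores = []
    for i in range(len(layers)):
        perturbed = list(layers)
        perturbed[i] = quantize_q8(layers[i])
        out = forward(perturbed, x)
        scores.append(float(np.mean((out - baseline) ** 2)))
    return scores
```

In a real tool you would run the calibration images through the actual model, but the per-layer "perturb and compare" loop is the same shape: the least sensitive layers can be pushed to smaller quant types with little quality loss.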


Just yesterday I watched this video: https://m.youtube.com/watch?v=7bSzp-QildA. I am not a graphics programmer, but from what I understood, I think he talks about doing what you are describing with Vulkan.


I've done it with a 6800XT, which should be similar. It's a little trickier than with an Nvidia card (because everything is designed for CUDA) but doable.


It seems like you are saying the AI features don't work without a GPU, if I understood correctly, but my install is on a server with no GPU and the object search and facial recognition features work fine. Generating the embeddings is probably slower, but I don't have any comparison to make.


My number one reason for moving away from LXD in production after this change is that LXD is only available through snap, whose forced updates caused multiple downtimes in our cluster.


Exactly. And depending on whether you install it with snap or another package manager, like pacman on Arch, it'll actually use different folders for configs, so if you are writing automation to, say, manage remotes without relying on the CLI, you'll have to account for that. Better to just use Incus whenever possible.
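As a rough illustration of accounting for that, a lookup along these lines works. Note the exact paths here are assumptions from memory (snap installs keep the client config under `~/snap/lxd/common/config`, native packages under `~/.config/lxc`) and may differ by version, so verify on your own system:

```python
from pathlib import Path

def lxd_client_config_dir() -> Path:
    """Return the first existing LXD client config directory,
    checking the snap location before the conventional one.
    Paths are assumptions; verify against your install."""
    home = Path.home()
    candidates = [
        home / "snap" / "lxd" / "common" / "config",  # snap install
        home / ".config" / "lxc",                     # native package
    ]
    for c in candidates:
        if c.is_dir():
            return c
    return candidates[-1]  # fall back to the conventional location
```

Automation that reads or writes `config.yml` directly can then build on this instead of hard-coding one path, though with Incus you can skip the dance entirely.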


The guy who forked LXD to create Incus had already started providing RPM packages for LXD, at least on openSUSE but probably other distros too.


It does - the inference speed is much slower than a consumer video card. The draw for the Spark and systems like it is the massive amount of memory available to the GPU.


I had been unemployed for a year and worked a lot on DiffKeep (https://github.com/DiffKeep/DiffKeep), a cross-platform management program for AI-generated images. Fortunately / unfortunately I got a job and haven't been able to dedicate much time to it lately.

