Perz1val's comments | Hacker News

Ok, single file, blah, blah. Realistically how do you sync that and how do you resolve conflicts? What happens if two devices add a password while offline, then go online?

I actually was a Bitwarden user at first, but in practice I don't change emails/passwords all that often. It's not like my work files and documents, which change every hour or every day and need constant syncing to the drive. And the chance that I add or change passwords on two devices at nearly the same time is even smaller.

So gradually I stopped feeling like I needed syncing that much and switched to KeePass. I made up my mind that I'll only change the database from my computer and rclone push it to whatever cloud I like (I'm using Koofr for that, since it's friendly to rclone); on any other device I just rclone pull it when needed. If I change something on another device (like my phone), I just note it there locally and update the database later.
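In practice the whole sync boils down to two rclone copy calls. Here's a rough Python sketch of the push/pull steps; the database path and the "koofr:vault" remote folder are just placeholders I'm making up for illustration:

    import os
    import subprocess

    DB = os.path.expanduser("~/passwords.kdbx")  # example path, use whatever you like
    REMOTE = "koofr:vault"                       # example rclone remote:folder

    def push():
        # Run on the one computer where the database actually gets edited.
        subprocess.run(["rclone", "copy", DB, REMOTE], check=True)

    def pull():
        # Run on any other device to fetch the latest copy when needed.
        subprocess.run(["rclone", "copy", REMOTE + "/passwords.kdbx",
                        os.path.dirname(DB)], check=True)

A plain rclone copy like this just overwrites whatever differs; it only stays safe because I treat one machine as the single writer.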

But ofc if someone needs to change their data/password frequently then Bitwarden is clearly the better choice.


Should have switched the model to Haiku

Also unrelated, but the growing number of Linux gamers supports my personal observation that on the spectrum of computer literacy, gamers sit just below power users and programmers. We're gradually seeing less and less technical groups migrate over to Linux, and now it's the gamers' turn. Well, that's kind of obvious to everybody except Microsoft, apparently.


It does not matter; you (Rust devs) won't use anything else either way, and other people just don't care.


But it can't: we see models get larger and larger, and larger models perform better. <Thinking> made such huge improvements because it gives the language model more text to process. Cavemanising (lossily compressing) the output compresses the input as well, since that output gets fed back in as context.


But some tokens are not really needed? This is probably bad because it's mismatched with the training set, but if you trained a model on a dataset with all prepositions removed (or whatever caveman speak amounts to), would you see a performance degradation compared to the same model trained on the same dataset without the caveman translation?
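For what it's worth, here's a toy Python sketch of the kind of preprocessing I mean: stripping prepositions and other function words before the model ever sees the text. The stopword list is just a small made-up sample, not what the article actually does:

    # Toy "cavemanise" pass: drop common function words from the text.
    STOPWORDS = {
        "the", "a", "an", "of", "to", "in", "on", "for", "with", "that",
        "is", "are", "was", "were", "be", "been", "it", "and", "or",
    }

    def cavemanise(text: str) -> str:
        return " ".join(w for w in text.split() if w.lower() not in STOPWORDS)

    print(cavemanise("The model was trained on a dataset that removes all of the prepositions"))
    # -> "model trained dataset removes all prepositions"

The question is whether a model trained end to end on text like that output loses anything, or whether the dropped tokens really were redundant.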


I think good, less thinking for you, more thinking you will do


I'm not sure if you're being sarcastic or not, but I did find the caveman examples harder to read than their verbose counterparts.

The verbose ones I could speed read and consume at a familiar pace... almost on autopilot.

Caveman speak no familiar no convention, me no know first time. Need think hard understand. Slower. Good thing?


That was my point. You (and I) tend to read verbose text and not argue with it; our brains are spoonfed reasoning chains, and they seem to make sense. Caveman speak breaks that, so we have to actually think; there is no "thinking" done for us.

I'd have mined the copied libraries with something that makes it possible to later change the terms and extract fees, since it's expected that nobody reads the terms for a service like that.


I'll never touch any git wrapper, because they've lied to me before and I can use git already. Everything that was there to be sped up has already been made into zsh functions.


Nvidia can also say no; they won't have any choice but to yield or not have AI at all.


It reads like a joke without a punchline

