I haven't heard anything about this, but I really wish it was there by default. I don't think the way it works right now fits anyone's expectations of what the lockfile is supposed to do; the whole point of storing the resolved versions in a file is to, well, lock them, and implicitly updating them every time you build doesn't do that.
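For what it's worth, Cargo does have an opt-in for the strict behavior: the global `--locked` flag makes a command fail instead of silently rewriting Cargo.lock, and `--frozen` additionally forbids network access. A minimal sketch of how this might look in a CI step (the guard at the top is just so the snippet degrades gracefully where cargo isn't installed):

```shell
# Illustrative CI fragment: skip gracefully if cargo is unavailable.
command -v cargo >/dev/null 2>&1 || exit 0

# Fail instead of silently updating Cargo.lock when it is out of date.
# --locked is a global flag, so it works with build, test, etc.
cargo build --locked
cargo test --locked

# --frozen implies --locked and also disallows network access,
# which is useful for reproducible/offline CI builds.
cargo build --frozen
```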
Since you're here, and you happened to indirectly allude to something that seems to have become increasingly common in the Rust world nowadays, I can't help but be curious about your thoughts on libraries checking their lockfiles into version control. It's not totally clear to me exactly when or why it became widespread, but it used to be relatively rare for me to see in open source libraries in the first few post-1.0 years of Rust, whereas at this point I think it's more common for me to see than not.
Do you think it's an actively bad practice, completely benign, or something in between where it makes sense in some cases but probably should still be avoided in others? Offhand, the only variable I can think of that might influence a different choice is closed-source packages being reused within a company (especially if trying to interface with other package management systems, which I saw firsthand when working at AWS but I'm guessing is something other large companies would also run into), but I'm curious if there are other nuances I haven't thought of.
For researchers who notice new releases as soon as they are published and discover malice based on that alone, I agree, and every step of that can be automated to some level of effectiveness.
But for researchers who aren't sufficiently effective until the first victim starts shouting that something went sideways, the malicious actor would be wise to simply ensure no victim is aware until well after the cooldown period, implementing novel obfuscation that evades static analysis and the like.
Novel obfuscation, with a novel idea, is hard to invent. Novel obfuscation, where it is only new to that codebase, is easy(ier) to flag as suspicious.
While bad actors would be wise to ensure low-cooldown users are unaware, I would not say they can "simply" ensure that.
Code with any obfuscation that evades static analysis should become more suspicious in general. That's a win for users.
A longer window of time for outside researchers is a win for users -- unless the release fixes existing problems.
What we need is to let the user easily move from implicitly trusting only the publisher to incorporating third parties. Any of those can be compromised, but users would be better served when a malicious release must either (1) compromise multiple independent parties or (2) compromise the publisher with an exploit undetectable during cooldown.
Any individual user can independently do that now, but it's so incredibly time-consuming that only large organizations even attempt it.
> Now we've got the best map app on the planet. We learned about persistence, and we did exactly the right thing having made the mistake
Does anyone here actually use Apple Maps? (Not counting when it opens by accident because Apple opens it instead of Google Maps when you click on an address in Calendar)
I do most of the time. It works well enough, and more importantly the reviews aggregate from sources that aren't Google, which is a benefit for someone trying to de-Google. Yelp, Tripadvisor, etc.
Also, Google Maps has started pushing more and more of the AI stuff in the app… which I find undesirable. Apple Maps has a much cleaner UI nowadays.
I’ve used it for years, including 3 separate cross-country trips (in the US), and it has been incredibly reliable. Its traffic data is very current, and it has proactively rerouted me multiple times because of this data as well as weather data.
The pain points experienced early on have all cleared up for me. I understand this is not everyone’s experience (I’ve heard complaints from those not in the US), but it’s hard for me to justify switching away from it in its current state.
I liked the idea of the new layout with the agent thread on the left; it goes hand-in-hand with having multiple threads that are easy to switch between and run concurrently. But I switched back because my file tree disappeared and I couldn't easily see how to add it back.
In the new layout, the project panel and git panel are just moved to the right side, so that the agent panel could be on the left, and you could still view both at the same time.
Out of curiosity, how many concurrent users could you get with a hosting setup at that price? If let's say 10% of those 1000 users were using it at the same time would it handle it? What about 30% or 100%?
You made a good point that I didn't think through fully. It's the concurrent-user aspect that heavily impacts things. Currently, you'd probably need quite a bit more investment, to the point of having a mini data center, to do what I'm proposing.
However, we've been seeing such advances in context compression and in the capabilities of smaller models that I don't think it'd be too far off to see something like what I'm talking about within the next 5 years.
His opinion is subjective obviously, but as someone who's switched to iPhone I still miss the apps I gave up, specifically AntennaPod and OpenTracks. I believe the iPhone ecosystem has the advantage that it has higher-paying users, which is attractive to developers, but really it's just a matter of there being different market segments, and depending on who you are (or who you aspire to be) you might fit better into one than the other.