I wonder if vibe-coding dev-ops will follow the path blazed by virtual machine managers vs. bare-metal servers. If a bare-metal server crashes, you have to go out and nurse it back to health like a rancher's calf. If a VM crashes, you take it out into the pasture and shoot it (and spin up another VM).
In the vibe-coded world, if a bug is found (or a relied-upon API is deprecated, or a dependency is found to suffer a security vulnerability, or a vendor changes, etc.), do we simply kill the codebase and vibe-code up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
> In the vibe coded world, if a bug is found [...] do we simply kill the codebase and vibe-code up a fresh one de novo from the same prompts as the original, adding only knowledge of the recent failure mode?
That sounds like a horrible plan. LLMs are non-deterministic (practically speaking; I know they can be run with temperature=0 locally, but that's not really relevant to the way anyone is writing code with them now).
Feeding the same spec in with some changes to deal with the one bug you discovered and regenerating all the code is likely to create a system that has new bugs (unrelated to the one you fixed by amending the spec) that may not have existed the last go-around.
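The temperature=0 point above can be illustrated with a toy sketch (pure Python, hypothetical function names; not any particular LLM library's API): scaling logits by a temperature before softmax broadens or sharpens the token distribution, and at temperature zero it collapses to a one-hot argmax, which is why greedy decoding is repeatable while ordinary sampling is not.

```python
import math

def token_distribution(logits, temperature):
    """Turn raw logits into a probability distribution at a given temperature.

    As temperature -> 0, all probability mass collapses onto the argmax
    token, which is why temperature=0 (greedy) decoding is deterministic.
    """
    if temperature == 0:
        # Greedy decoding: always pick the highest-logit token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

# Toy logits for three candidate tokens.
logits = [2.0, 1.0, 0.5]
print(token_distribution(logits, 1.0))  # broad distribution: sampling varies
print(token_distribution(logits, 0.1))  # nearly one-hot
print(token_distribution(logits, 0))    # exactly one-hot: repeatable output
```

Since hosted models are typically run with nonzero temperature (and batching effects add further nondeterminism), regenerating a whole codebase from the same spec really is a fresh draw, not a replay.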
Are you wondering if in the future AI will take a spec in natural language and convert it into thousands or millions of lines of code every time a bug is surfaced?
Treating quality investigative reporting like the scarce resource that it is: as one of the most well-known, can you shed any light on why Reuters would dedicate resources to commissioning investigative reporters to unmask Banksy (in a world where all things Epstein represent an unending source of investigative opportunities in the public interest)?
I'm all ears:
1. Why you feel unmasking Banksy was in the public interest
2. Whether you feel all other public-interest priorities had been served by investigative reporting prior to commissioning his unmasking
I have no idea, nor do I care, whether unmasking Banksy, specifically, was in the public interest. My only point is that it's not limited to topics that you consider important.
As for your #2, that seems reminiscent of "why are we going to space when there are so many problems here on Earth."
Sure, disinformation narratives get seeded all the time to inoculate the population against any narrative that a vested interest deems counter to its agenda, by rendering that narrative into an anti-meme.
• This person who suspects a research-related origin of covid is not a published, experienced virologist. Conclusion to draw: only virologists funded by research grants have the credibility to sound off publicly on covid origins. 'Research-related covid origin' becomes an anti-meme.
• This person who asserted 'X' is an antisemite. Conclusion to draw: only people who accept 'not-X' are not antisemites. 'X' becomes an anti-meme.
• This person who saw [an unexplained craft in the sky / in a hangar] has Y derogatory items in their reputation. Conclusion to draw: only people with derogatory reputations see UFOs. '[Unexplained craft]' becomes an anti-meme.
Hm, all good examples! In these cases the memetic component doesn't suppress knowledge of itself, but rather works to suppress knowledge of something else. Most propaganda or "submarine articles" could be seen through this lens. It also seems to be a specific case of the "memetic/anti-memetic duality" the other commenter mentioned, where in practice anti-memes have a memetic component that allows them to spread and an anti-memetic component that tries to suppress information.
Well, I think we’ve all seen the clickbait-y headlines declaring that X phenomenon has been ‘DEBUNKED’, and those headlines are definitely engineered to spread (and benefit from performance-metrics feedback).
To go further, Eric Weinstein became known for coining the term ‘pre-bunked’ narratives. This was a version of memetic inoculation where the debunking had to get out ahead of the inconvenient narrative requiring debunking. A good and (by now) pretty uncontested example of this was Peter Daszak’s actions throughout the first half of 2020: The Lancet Letter (aka Calisher et al., The Lancet, 2020), which he organized (with Nobel signatories, no less), provided a massive pre-bunk at a time when few in the public were seriously countenancing any pandemic origin, much less a research-related one.
I'm not worked up at all about the auto-pen. But presidents should not be pardoning friends and family (although friends seem to get pardoned quite frequently). If a president feels it's important to do so, that president should wait until they are an ex-president and petition the next person in power.
So I know you're making a joke/statement about how the post-9/11 volunteers got royally screwed over and are (understandably) disillusioned with war, but I just wanted to add something here.
This change increases the maximum enlistment age. The maximum reenlistment age is something else entirely. To reenlist, you need to be able to complete 20 years of service by age 62. So if you joined at 18 and did 8 years, then you can technically reenlist up to age 50 (12 remaining years puts you at 62). Not that you would or should, but you can.
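The arithmetic above can be sketched as a quick check (a toy function, assuming the 20-years-by-62 rule exactly as stated):

```python
def latest_reenlistment_age(years_served, total_required=20, age_cap=62):
    """Latest age at which you can reenlist and still complete
    `total_required` years of service by `age_cap`."""
    remaining = total_required - years_served
    return age_cap - remaining

# Joined at 18, served 8 years: 12 years remain, so 62 - 12 = 50.
print(latest_reenlistment_age(8))  # → 50
```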
With all the posts lately about Karpathy's autoresearch, it remains unclear to me whether this name is intended to convey that this LLM codebase should be useful for research across all domains (molecular biology, aircraft control, sociology, WW2 history, etc.), or whether it is intended only to discover new LLM capabilities.