Someone here mentioned a while ago that the labs deliberately haven't tried to train these characteristics out of their models, because leaving them in makes it easier to identify, and therefore exclude, LLM-generated text from their training corpus.
But it's odd that these characteristics are the same across models from different labs. I find it hard to believe that researchers across competing companies are coordinating on something like that.
> I don’t care if you make an honest mistake. Hell, I don’t even care if you make a careless mistake, as long as you fix yourself. Everyone messes up - it’s how you act afterwards that matters.
You're not the one in control of their employment status and workplace reputation.
It looks like Slidev is designed for presentations about software development, judging from its feature set. Quarto is more general-purpose. (That's not to say Quarto can't support the same features, but currently it doesn't.)
I'm not affiliated with Slidev. I was just curious.
From the Claude docs:

> Planning is most useful when you're uncertain about the approach, when the change modifies multiple files, or when you're unfamiliar with the code being modified.

If none of that applies, skip the plan.
Isn't PPPoE the opposite: carrying a non-Ethernet protocol (PPP) over Ethernet?