Who said that it should be a compile-time error? That’s just a convention, and definitely not a bad one. No one is going to like having to pass a proof that `a ≥ b` with every `a - b` invocation. Given that this proof would most likely be an implicit argument, that would be a really annoying thing to use.
On the other hand, array indices by default do require such a proof, i.e., this code produces a compile time error:
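A minimal sketch of what that looks like in Lean 4 (the exact error wording varies between versions, and `head`/`head'` are hypothetical names):

```lean
def head (xs : Array Nat) : Nat :=
  xs[0]
-- error: failed to prove index is valid
-- (Lean asks for a proof of 0 < xs.size)

-- Supplying the proof as a hypothesis makes it compile;
-- the index-validity side goal is discharged from `h`:
def head' (xs : Array Nat) (h : 0 < xs.size) : Nat :=
  xs[0]
```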
>It definitely is a bad convention because it's highly surprising.
You know that `Nat` represents non-negative numbers, and you see that `1 - 2` does not produce a compile error. What value do you expect then? What’s so surprising about choosing zero as a default value here? Do you expect it to panic or what?
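For reference, the behavior under discussion (truncated, or "saturating", subtraction on `Nat`):

```lean
#eval (1 - 2 : Nat)   -- 0: subtraction on Nat truncates at zero
#eval (2 - 1 : Nat)   -- 1: behaves as usual when the result fits
#eval (1 - 2 : Int)   -- -1: Int subtraction is the ordinary one
```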
I would expect it to require a proof that 1 - 2 is non-negative. That's kind of the raison d'être of Lean, isn't it?
The reason they don't do that is because Lean treats proofs as manually generated explicit objects, unlike other languages like Dafny (IIRC) where they are implicit. Requiring explicit proofs for every subtraction was presumably seen as too onerous.
Which is fine... BUT they then should have said "so we're going to define a more convenient operator which is LIKE subtraction but isn't actually standard subtraction, and therefore we won't use the standard subtraction notation for it".
If they had used something like `1 -_ 2`, that would be much less surprising, because you'd think "oh right, it's the special saturating subtraction".
Similarly, `UInt8.ofNat` should have been `UInt8.ofNatWrapping` or similar.
>I would expect it to require a proof that 1 - 2 is non-negative. That's kind of the raison d'etre of Lean isn't it?
The reason is to be able to write mathematical proofs, including proofs about your code, but not to attach proofs to every single function. This definition of subtraction does not prevent you from reasoning about it and requiring `a ≥ b` in the proofs/code for which this is really important.
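Concretely, nothing stops you from stating `b ≤ a` exactly where it matters; the standard-library lemma `Nat.sub_add_cancel` does just that:

```lean
-- Truncated subtraction behaves like true subtraction
-- precisely when the hypothesis b ≤ a is available:
example (a b : Nat) (h : b ≤ a) : a - b + b = a :=
  Nat.sub_add_cancel h

-- Without the hypothesis, only the truncated behavior holds:
example : (1 : Nat) - 2 = 0 := rfl
```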
>Requiring explicit proofs for every subtraction was presumably seen as too onerous.
Lean can deduce proofs implicitly as well; it’s just not a very reliable mechanism. Imagine your code breaking after an update because Lean suddenly can’t deduce `a ≥ b` for you automatically anymore.
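For what it's worth, decision procedures like `omega` can often discharge such side conditions automatically; the fragility lies in relying on that:

```lean
-- omega understands truncated Nat subtraction, so goals
-- like these are often dischargeable automatically:
example (a b : Nat) (h : b < a) : a - b ≥ 1 := by omega
example (a b : Nat) (h : b ≤ a) : a - b + b = a := by omega
```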
>Which is fine... BUT they then should have said "so we're going to define a more convenient operator which is LIKE subtraction but isn't actually standard subtraction, and therefore we won't use the standard subtraction notation for it".
What is a "standard" subtraction over natural numbers at all? Under addition, the natural numbers form a monoid but not a group, so subtraction is not totally defined on them in the first place.
> What is a standard subtraction over natural numbers at all?
If you need something that is always defined then you have to use a non-standard subtraction (i.e. saturating subtraction). In other words the `-` operator should not work for Nat. It should require you to use `-_`.
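This is easy enough to sketch in Lean 4 with a dedicated notation. The conventional symbol for the saturating operation ("monus") is `∸`; the definition below is hypothetical, not part of the standard library:

```lean
-- Hypothetical dedicated notation for saturating subtraction,
-- leaving `-` free to mean "subtraction with a proof obligation":
infixl:65 " ∸ " => Nat.sub

#eval 1 ∸ 2  -- 0, but now the notation itself flags the truncation
```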
Unfortunately, Lean’s distribution went from roughly 15 MiB in the days of Lean 3 to more than 2.5 GiB unpacked today, for no good reason. This is too much. Even v4.0.0-m1 was a 90 MB archive. It looks like Lean’s authors no longer care about this.
Lean 3 was the least bloated theorem prover among Lean, Coq, and Agda, and Lean 4 is the most bloated of the three. This is very sad.
Personally, I stopped using Lean after the last update broke unification in a strange way again.
No, it’s still linked dynamically, and its kernel is still in C++ (see https://github.com/leanprover/lean4/tree/master/src/kernel; this part of the codebase has hardly changed since Lean 3). Almost all the space in the package (more than 2.5 GiB) is taken up by .olean/.ilean/.ir files, roughly 1 GiB of which is generated from the code of Lean’s frontend itself (i.e., the parser, elaborator, core tactics, and so on) and another 1 GiB from the standard library. As you might guess, these files are IR, essentially a compiled Lean environment (something like a Lisp image), so Lean can load them straight away without recompiling and rechecking everything.
There were some proposals, like compressing all the .olean files, but (as far as I know) none of them were implemented — and even if some had been, their effect would evidently have been negated by further growth anyway.
So instead of using programming languages designed specifically to express algorithms and data structures effectively, we are going to use a natural language like English that is clearly not expressive enough for this? It’s like rewriting a paper about sheaf cohomology in plain English without any mathematical notation and expecting it to be accessible to everyone.
:) Not exactly. We'll use English to get a kinda description, then test and debug to make that functional, then cycling the functionality with users to nail down what is actually needed. Which won't be written down anywhere. Like before. Except with autocomplete that tries to predict a page or two of code at a time. Often pretty accurately.
I do not think you are saying the same thing here :). No one doubts we can put "make a todo app" into English, and that you can test with users. But that's different from articulating, in English alone, the precise layout and architecture of the MVC that makes the app possible.
English is fine, but I am personally a lot faster in my mind and fingers and IDE with a language suited for this stuff. AI guys just want to be teachers deep down I think :).
I saw this DSL on HN yesterday, and its syntax is total garbage — some stupid mixture of different PLs. Are you seriously OK with it, to the point of posting it here? I don’t even want to look through the source code knowing what garbage it is at the surface level.
describe "hello, world"
  it "calls the route"
    let world = "world"
    when calling GET /hello/{{world}}
    then status is 200
    and selector `p` text equals "hello, {{world}}"
What don't you like about it? I think it's interesting
It’s funny to see that even nowadays only a few people understand Windows 8’s UI, while the majority in these comments just blindly shit on it. Not surprising, though, since there are so many happy users of crap UIs like KDE around.
Sadly, this clone looks really bad, just like the millions of WP8-like launchers compared to the actual WP8.
I feel the same, because it seems that the only desktop-ready OS under the GPL today is GNU/Linux, and it feels too bloated nowadays (not to mention that Linux is effectively stuck under GPLv2). Something like FreeBSD feels much lighter and better while still being desktop-ready. It looks like the folks from Hyperbola think the same, which is why they are doing HyperbolaBSD.
Btw, there’s some progress in GNU Hurd, but it is still far from desktop-ready.
There needs to be a new rule in technical discussion communities that outlaws bland comments that just spew "too bloated" and "feels much lighter". It is completely useless fluff description text.
No, it’s just you having some strange prejudices about these words (probably driven by blind faith in some overhyped technologies), so go overregulate your preferred echo chamber instead.
They now provide at least somewhat working x86_64 images. It’s funny, of course, for a project started in the 90s to get x86_64 support only in the 2020s, but it’s still progress in relative terms.
IANAL, but you can’t actually just relicense code, even if it’s under a BSD-like license. What you can do is release that code in binary form without providing the source.
Correctness of the kernel and consistency of the theory implemented in it are different things. Gödel’s theorems prevent you from proving the latter, but not the former.
Kevin Buzzard even wrote a blog post about a similar question about division by zero: https://xenaproject.wordpress.com/2020/07/05/division-by-zer...