Can you link any evidence of federal agents pulling Occupy protesters off the streets into unmarked cars? I cannot find any, and the linked article does not discuss them.
Well, the compiler narrows the type automatically as far as it can go, so a `switch` on the `.type` property will narrow the value down to the specific case in the union. Practically speaking it's not all that different from, e.g., Haskell, except that you're manually tagging the different cases. You get exhaustiveness checks by adding an `assertExhaustive(value: never)` call in your `default:` case, which will only type-check if you've handled all the cases.
For development purposes this is all essentially the same as proper sum types, _with worse ergonomics_.
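A minimal sketch of the pattern described above (the `Shape` type and `assertExhaustive` helper are illustrative names, not from any library):

```typescript
type Shape =
  | { type: "circle"; radius: number }
  | { type: "square"; side: number };

// Only type-checks if every case above has been handled, so adding a
// new variant to Shape turns the default branch into a compile error.
function assertExhaustive(value: never): never {
  throw new Error(`Unhandled case: ${JSON.stringify(value)}`);
}

function area(shape: Shape): number {
  switch (shape.type) {
    case "circle":
      return Math.PI * shape.radius ** 2; // narrowed to the circle case
    case "square":
      return shape.side ** 2; // narrowed to the square case
    default:
      return assertExhaustive(shape);
  }
}
```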
My point was mostly that, practically speaking, you can have a full-featured `Result<T, E>` type that is certainly good enough. What TypeScript lacks is what most halfway-mainstream languages lack: higher-kinded types, etc., and the good ergonomics that follow, such as type classes. These are also not a given in more esteemed type system circles: OCaml doesn't have them either (and most features that are "coming soon" in OCaml border on vaporware).
Most nice type system features you can get through combinations of 2-3 distinct type system features in TypeScript. In terms of end results, the only things that differentiate TypeScript from languages with more traditional type systems are that you have to use `TSLint` to absolutely eradicate `any` from usage completely (whereas a better type system doesn't have it at all), and that the ergonomics for functional programming in JS just aren't that great.
It isn't the TS part of the syntax that's the issue; it's that anyone who's used an ML descendant in the last 40+ years will have noticed that (automatic) currying is a massive boon to functional programming (and on top of that, C-style syntax is just about the worst for FP, with all the noise that comes with merely calling functions). Even something as small as not being able to define operators is pretty disruptive to nice code. People like to rag on languages that let you do this, but there are many patterns in FP that absolutely are best encoded via well-known and generally accepted operators.
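To illustrate the ergonomics gap: currying and composition are perfectly possible in TypeScript, but every step is explicit where an ML-family language would give you `add 1 2` and an operator like `.` for free (names here are made up for the example):

```typescript
// Manual currying: each level of partial application is spelled out.
const add = (a: number) => (b: number) => a + b;
const increment = add(1); // partial application

// Without definable operators, composition needs a named helper too.
const compose = <A, B, C>(f: (b: B) => C, g: (a: A) => B) =>
  (a: A) => f(g(a));

const double = (n: number) => n * 2;
const incThenDouble = compose(double, increment);

console.log(increment(41));     // 42
console.log(incThenDouble(20)); // 42
```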
I think the Go language has taken a position along the lines of "reusability and abstraction are overrated". I certainly think there is some truth to that; I am really enjoying working with Go on smaller projects, and it is this philosophy that has made understanding the code much easier.
But I wonder how it really scales on a large code base like this? Some of the best projects I've worked on leverage reusability more effectively to create a sort of vocabulary. They're far more concise, there is rarely more than one source of truth, and they're far easier to change and improve. Does this hold true for 540,000 lines of Go code?
> But I wonder how it really scales on a large code base like this? Some of the best projects I've worked on leverage reusability more effectively to create a sort of vocabulary. They're far more concise, there is rarely more than one source of truth, and they're far easier to change and improve. Does this hold true for 540,000 lines of Go code?
Doesn't this article speak to this? It mentions Juju has over a million lines.
Yeah, but there is no comparison to the same project done in Lisp, Haskell, Java, etc.
All the author is doing is relating their success using Go, which is great, but there is no comparison to how it would have fared in another language, except for his previous frustrations with C# on other projects, I guess.
It would be ridiculous to expect Canonical (and even more so, the author alone) to rewrite Juju in another language as a simple comparison. Even if it were feasible, the comparison would be polluted by the experience gained by building the application initially (or you could rebuild with a completely new team of developers, but then you're introducing a whole new set of variables). The best we can reasonably do is compare applications, aggregating on language.
And Go cannot express abstract functions and methods.
I cannot write a function that deals with arbitrary values.
Something I’d do in Java with `public static <T extends IIncrementable> T increment(T val) { val.increment(); return val; }` is impossible to do in Go.
You cannot abstract over types, so you end up writing metric fucktons of duplicated code. I’ve tried porting some of my Java code, and it quickly grew to the point where some classes were duplicated hundreds of times, once for every type. Fixing bugs became a nightmare.
My system is usually designed so that I provide a function that generates values of type T, a function to filter Ts (T -> boolean), and a way to display Ts.
So, the library now gets these functions, and does all the filtering on other threads.
So either I have to replicate the entire async code for every type, or I can’t keep type safety.
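In pre-generics Go, the untyped version of that pipeline looks roughly like this (a sketch; `Filter` and the closures are made-up names). Note that the element type is only checked by a runtime type assertion, not by the compiler:

```go
package main

import "fmt"

// Filter keeps the values for which keep returns true. Everything is
// interface{}, so the compiler cannot check that the generator and the
// predicate agree on the element type.
func Filter(gen func() []interface{}, keep func(interface{}) bool) []interface{} {
	var out []interface{}
	for _, v := range gen() {
		if keep(v) {
			out = append(out, v)
		}
	}
	return out
}

func main() {
	gen := func() []interface{} { return []interface{}{1, 2, 3, 4} }
	keep := func(v interface{}) bool {
		n, ok := v.(int) // runtime type assertion, not a compile-time check
		return ok && n%2 == 0
	}
	fmt.Println(Filter(gen, keep)) // [2 4]
}
```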
Don't keep type safety then. Think of Go as half-way between Python and Haskell in that respect. Types are great when they're useful, but they're not required.
If you’re serious, that’s... the worst solution I’ve heard yet.
That’s the same mistake C made with `void*` and Java made with `Object` (and corrected with generics in 1.5), just called `interface{}` this time.
And while with Python and Java, even if I circumvent the type system, I can still use annotations (see typed Python, or JetBrains’ @Contract, Google’s @IntRange, etc.), with Go I have nothing of that sort.
Types are useful because they provide safety; if your type system has to be turned off, then I’m losing all that safety and might as well code in PHP (although they also saw that mistake, and are fixing it now, with 7.0 and later).
To actually answer the question - yes it holds true, yes it scales (in my experience). There are a lot of abstractions that we built for Juju. And we absolutely tried to ensure there was only a single source of truth for everything. It would have been totally unworkable if we couldn't reuse logic etc. We may not always have chosen the best abstraction, but that's a problem in any language.
Importantly for me, this diet drastically reduces cravings and blunts the impact of hunger, making it far easier to keep to versus other diets. But it can be quite restrictive: you'll likely find that eating out severely limits your options, and it is tricky to keep to if you're traveling.
The main downside to using Rust for web apps, at least in my opinion, is having to learn and use a type system that includes affine types: lifetimes, borrowing, etc. But if you've already put in the effort to get to grips with this part of the language, it's not as big a drawback. Then you get all the other positives of the language, like its strong type system, speed, sensible package manager, lack of GC pauses, etc.
I'm in full agreement with this. The learning curve (and the compilation speed, I guess, though that's improving quickly) is the main drawback to writing web apps in Rust. If you've already learned it, though, then it's basically just down to the typical static vs. dynamic typing tradeoff, with Rust being like any other statically typed language.
> Then you get all the other positives of the language, like [...] lack of GC pauses etc.
One thing I've been curious about: is it the case that there are no pauses, or that you know when they happen?
I know one of the concerns with GC is that it can just kind of happen whenever and pause whatever else you've got going on. It also has to do extra work to track down what memory is reachable or not.
Now, is Rust's RAII-style approach actually faster, or do you just have more control over it? It doesn't have to trace reachable memory from the roots like a GC, but it still has to do some work, right?
It's both faster and you have more control over it; the two are linked in this case. This system lets the compiler do the hard work of figuring out when it's OK to deallocate some piece of memory, unlike a GC, which figures it out at runtime. Once you've determined that some memory can be deallocated, the actual process is very quick: usually just updating a list of free blocks in the memory allocator, and maybe some statistics.
Rust does also support reference counting when you want it, which can bring its own set of performance issues in certain circumstances. Think general slowness from constantly incrementing and decrementing reference counts (atomically, in the case of `Arc`) as you clone reference-counted data between functions, rather than the pauses seen with garbage collection. But how bad that is depends greatly on how and where you're using reference-counted values, and the lifetime/borrow system makes it easier to avoid some of the more egregious cases here.
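The bookkeeping is observable directly via `Rc::strong_count` (`Rc` is the non-atomic variant; `Arc` does the same with atomic operations):

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // increments the count; no deep copy
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b); // decrements the count
    assert_eq!(Rc::strong_count(&a), 1);

    // When the count hits zero the allocation is freed immediately,
    // so there is no pause, just per-clone/per-drop bookkeeping.
    println!("final count: {}", Rc::strong_count(&a));
}
```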
Additionally, some people are doing work on GCs that work with Rust. The two links below have some good descriptions of the progress and challenges related to that.
> In many cases, that up-front cost is paid back many fold later, when other programmers (or you yourself, after you’ve forgotten the details of the code) come and try to read and work with the code. Go optimizes more for understanding the code later than for building big things quickly.
I believe this sounds attractive in part because programmers are so excited by the idea of solving problems, and it's true that a specialized implementation can always beat a general one. But it does not sound particularly sustainable.
A language that doesn't provide adequate abstractions often requires its users to develop those missing abstractions over and over again, with the attendant possibility of bugs and naive implementations. As both a Rust and Go user, I do find the talk of generics a little overplayed: Go has great facilities for code reuse that often mean you won't miss generics as much as you might think. But I also see Go projects growing in size quicker than I can appreciate. The increased volume of code, each piece specialized for its particular purpose, is harder to fit in my head at one time. The cost of changing the code becomes more severe, with each instance of an abstraction different enough to warrant specialized attention, rather than a fix in a single place.
In every organization I've worked at, as the size of a code base grows, so does the cost of change. A language that doesn't scale well is only going to exacerbate that problem. Perhaps Go seems great now because we've yet to really see any projects written in it at this scale.
> The reality is, that when OOP is correctly executed
Can you find anyone who agrees on what "correctly executed OOP" is? I don't think there is a consensus on this idea, nor any particularly objective criteria for how to achieve good OOP.
> For example, primitive types are not objects (no, auto-boxing doesn't make them objects).
Does the presence of inline assembler make C++ a machine language? I don't think so. Similarly, I don't think the presence of primitives in Java makes it less of an OO language. They're a leaky abstraction meant to improve the language's performance, which arguably makes the language less ergonomic to use.