Hacker News | astn's comments

I've got a nice analytical solution for the quadratic bezier: https://astiopin.github.io/2019/01/04/qbez-parabola.html


Yeah, sometimes the compiler unrolls too much and an innocent-looking one-liner can be compiled into a monstrosity like this:

https://godbolt.org/z/aKtko5


Are there no compilers which attempt to look at that code, decide "that looks like 1<<(n-2) when n>=2", and replace the code entirely?

There must be so many examples of bubble sort where quicksort would be better, and other code patterns which can be identified and replaced with something orders of magnitude faster.


Is that more performant?


>It requires knowing your foreground and background colors at render time, which is an unnatural requirement when you want to do GPU-accelerated compositing

Most Apple hardware supports ARB_blend_func_extended or even EXT_framebuffer_fetch, which lets you implement subpixel blending with ease. Otherwise you just have to fall back to traditional alpha blending and render each glyph in three passes, one per color channel, which is not that hard either.



This site looks great. Unfortunately, it is the exact opposite of low-traffic.


>It could've been a word processor and it would still be terrible.

Like vim or emacs?

>window panes are very much a standardized UI part which has a completely non-standard interaction design in Blender. For no reason.

It's called "tiled window management". Lots of people find it more comfortable to use than the classic overlapping windows.


> It's called "tiled window management". Lots of people find it more comfortable to use than the classic overlapping windows.

True, but when speaking about "lots", we have to keep in mind that tiled window management for the general public died together with Windows 1.0. That probably puts the percentage of the public who's ever used tiled window management around 0.001% of the total PC using population :)


> True, but when speaking about "lots", we have to keep in mind that tiled window management for the general public died together with Windows 1.0.

And came back with Windows 8 :-)

> That probably puts the percentage of the public who's ever used tiled window management around 0.001% of the total PC using population :)


Well, the awesome success of Windows 8 kind of reinforces my point :-)


But the market share of Windows 8.1 is currently 8.40%.

https://www.netmarketshare.com/operating-system-market-share...


Defenders of open source, rally around the OS-outcast - for he who is condemned by the public, be he as horrible and atrocious as can be, is our brethren in spirit.

You know how bad it is if Microsoft's lowest point is your high point.


> Like vim or emacs?

Yes. There are reasons the popularity of those is waning.

> It's called "tiled window management". Lots of people find it more comfortable to use than the classic overlapping windows.

It's got nothing to do with tiling. There's plenty of other software out there that's meant to be used with tiled windows. The interactions in Blender are unique to Blender, non-intuitive and non-discoverable.


> There's plenty of other software out there that's meant to be used with tiled windows.

There is a tiling window manager INSIDE Blender used to control its window frames. It works pretty much like any other tiling manager. You immediately grasp the idea if you've ever used one.


>I don't know why they would put out a few hundred gigabytes of fake files.

Looks like you can check it now. The keys went public:

pre-commitment 3: UK FCO f33a6de5c627e3270ed3e02f62cd0c857467a780cf6123d2172d80d02a072f74

pre-commitment 2: Ecuador eae5c9b064ed649ba468f0800abf8b56ae5cfe355b93b1ce90a1b92a48a9ab72

pre-commitment 1: John Kerry 4bb96075acadc3d80b5ac872874c3037a386f4f595fe99e687439aabd0219809

https://twitter.com/wikileaks


I speculate that these are only SHA256 hashes of keys or documents that will be released in the future. It's pre-commitment. I didn't try decrypting one of the files with a key yet, though. I'm busy with other things (like browsing HN, aaah).


The tweets are just hashes of the insurance files to confirm their validity, not the decryption keys (yet).


Those tweets came out 7 hours before the Internet link message.


"Draw the first letter of the alphabet please".

'a'

No. The right answer is 'A'. We will stop here because it's obvious that you don't have the necessary skills.


If the HN crowd allowed for levity, this would certainly be an instance of the oft-celebrated "birth of a meme".


In C++ lambdas have a limited size (about 3 machine words). If its context is larger than that, the data may be allocated on the heap (if the compiler can't optimize that allocation out).


What you're talking about is std::function, a wrapper around raw lambda objects that performs dynamic dispatch. Since any lambda can be wrapped in std::function yet std::function, like all C++ types, must have a fixed size, it uses the heap as you say.

If you stick to the raw lambdas, however, the compiler knows the size and doesn't need to use the heap. Only problem is that their type can't be named. In C++, you can work around this with auto.

Without getting too much into it, Rust has a similar distinction between raw lambdas and Box<Fn(...)>. This library uses the former.


I thought in C++ the lambda gets created on the stack like any other object.


They do, but to actually use it, to pass it anywhere, you have to either wrap it in some other object (std::function, as comex pointed out) or use a template.


Java.


That's the short answer, but to elaborate, it probably has more to do with the culture surrounding the language than the overhead of the JVM, although that also plays a role. What I mean by "culture" is the widespread notion (cargo-cult?) among Java programmers that adding more classes and abstraction is always better, almost like a "best practice", leading to "enterprise" monstrosities with deep inheritance hierarchies and ridiculous amounts of indirection to accomplish the simplest of tasks. The low consideration given to memory management in general (there's a GC, but that doesn't mean you should abandon all thought about memory allocation --- an analogy I like to use is how it's possible to get as good fuel economy with an automatic transmission as a manual, with the right technique) also contributes to the bloat.

The JVM itself has a certain amount of unavoidable overhead, but even if it was e.g. 10x slower than native code at best, I don't think that's the main problem. I've used systems that were more than 10x slower in benchmark-terms and had less than 1/10th the memory, yet felt much more responsive and performant. The problem is the culture that encourages this massive resource waste and selfish conservation of developer's time --- at the expense of everyone else.


>The JVM itself has a certain amount of unavoidable overhead

That is not a "certain amount of overhead" but an inherent incompatibility with modern hardware. With Java, writing cache-friendly code is extremely difficult: boxing and indirections are encouraged, while primitive types are cumbersome and value types are possible only through direct byte manipulation. Memory overhead is enormous. A simple collection like a hashmap of short strings can have up to 75% overhead.


Didn't want to say it outright, but that's my theory too. Having a VM manage everything in a resource/power constrained environment is a crazy idea in the first place. Oracle's JVM is competitive because of heroic engineering, in spite of Java's design -- not because of it. And it still has tradeoffs, like insane memory usage. ART/Dalvik are operating under different constraints, which probably contributes significantly to Android's handicap.

That can't be the whole story though, because C# and VB.NET both (seem to) perform decently under a managed runtime on Windows Phone. Wonder how big of a role the CLR has in typical WP apps and the WP core, as opposed to unmanaged C/C++ code.


.NET is AOT compiled to native code since Windows Phone 8.

On Windows Phone 8.x, it uses MDIL (Machine Dependent Intermediate Language) meaning native code with symbolic names for the on-device linker.

On Windows Phone 10 onwards, it makes use of .NET Native.

Both are based on Visual C++'s backend, which is far more battle-tested than ART.

.NET also supports value types.

Also, the XAML layouts are compiled to binary, not interpreted on load like on Android (aka inflated).

Windows Phone also only supports asynchronous code, and graphics and sound APIs must be hardware accelerated.


The Go authors had a pretty good article on what's wrong with Java's performance: pointers everywhere. Every last little thing that isn't a primitive type is a pointer. Everywhere, in every bit of code.

That means a "new Object()" takes up 16 bytes (8 bytes for the object, 8 for the pointer to it). That means you fill a cache line by allocating 4 objects, or 2 objects containing a single reference, or ...

So in Java you should never program a line drawing loop by using 2 vectors, because 2 vectors, each with 2 32-bit ints, take up 8*2 (2 pointers to the objects you're using) + 8*2 (overhead for the objects) + 4*2 (the actual data) = 40 bytes of data. No way you can fit that in registers and still use registers to actually calculate things. So instead you should use 4 ints and just forget about the objects, and even that will only work if you never call any functions.

Same loop in C/C++/Pascal/Go/... using structs takes 8 bytes (they don't keep structs on the heap), which, if necessary, fits in 1 register (granted, in practice we're talking 2 registers, but still).

People might reply to this with benchmarks, but if you actually analyse the java code where java beats or is comparable with C/C++ you're going to see zero object allocations. You're not even going to see them using bool in the extreme cases, rather they'll bitshift into ints to effectively generate packed bools (certainly in SAT benchmarks). This is not realistic java code, which would have been way slower.

Java's memory model is the main culprit at this point in time. Java can do incredible tricks with programs, and actually exposes them, enabling lots of language creativity on the JVM. But there's a pretty sizeable cost in speed and memory usage.


> People might reply to this with benchmarks, but if you actually analyse the java code where java beats or is comparable with C/C++ you're going to see zero object allocations.

I've noticed that tends to be true in general for benchmarks of high-level languages which show them performing as well as or sometimes even better than C/C++ --- the code performs so well because it's essentially using none of the other language features that most code in the language does. I touch upon this in my other comment here about culture: the language theoretically allows you to write quite efficient code, but it doesn't look "idiomatic" or perhaps isn't a "best practice", so it's discouraged and isn't done. The entire dogma of avoiding any optimisation compounds this problem even more, since once programmers finally realise they have performance issues, they've already created such complex and inefficient code that it's even harder to do any optimisation on.

On the other hand, idiomatic C tends to be written in a simple and straightforward style that is naturally quite efficient already. C++ is similar, although templates, OOP, and all the other new features can lead to inefficient code if not used in moderation.

I suppose the ultimate example of what could be called "intrinsically efficient" is assembly language. With Asm, every instruction, every byte you can save from typing is one the machine also doesn't have to execute, so you're basically forced to optimise as you write. There's certainly no desire to overengineer things, simply because of the extreme tedium and futility of doing so. With no IDE to help you generate classes and autocomplete indirections, it really changes your perspective of what constitutes efficient code.


Even in assembly one might suffer from macro opcodes and the respective microcode expansion, for instruction sets that don't map 1:1 to what the CPU does.


> "new Object()" takes up 16 bytes ....

Only in JVMs and AOT compilers that don't do escape analysis.

Also don't forget Smalltalk, which also only does references, ran on the Alto, Dolphin, and Dorado workstations.

For example the Dorado was:

- 128-512 kB

- 606x808 pixels

- 4 74181 CPUs

So how does that compare to a beefy Android device?

Also, J2ME and Embedded Java are running quite well on many embedded platforms, in a few hundred KB, steering soft real-time systems like robots and missile radar controls.

So yes, Java might not offer all the memory control features that other GC enabled languages do, going back to Algol 68, Mesa/Cedar, Eiffel, Modula-3, ....

But given the performance of commercial JVM vendors, I would say Google has a lot of blame as well.

EDIT: Forgot to add that when Java 10 comes out with value types and reified generics (according to the roadmap) this will become a moot point, except of course for Android Java given Google's unwillingness to provide support for the real thing.


Maybe I'm unclear on some of the details here, but here goes

https://docs.oracle.com/javase/specs/jls/se8/html/jls-12.htm...

"Whenever a new class instance is created, memory space is allocated for it with room for all the instance variables declared in the class type and all the instance variables declared in each superclass of the class type, including all the instance variables that may be hidden (§8.3)."

Doesn't seem to allow for escape analysis eliminating the object. Plus escape analysis wouldn't really save you. These are class instances, you pretty much have to declare them before the scope you use them in, if you're using them in the condition of a while loop (which would be the way to use them).

I seem to have this experience in practice: if you have a value type and loop over it, creating a "dummy" instance of it outside the loop and then erasing and resetting its inner state on every loop iteration is far faster than creating an instance inside the loop. So I don't think escape analysis optimizes this case.


Because JIT compilers are able to bend the rules if proven correct.

For example, if you declare something like

    final class Point {
        final int x, y;

        // ...
    }
The Azul JIT compiler will transform it into a struct, just like in C, via their "StructuredArray and ObjectLayout" optimizations.

IBM J9 also does similar optimizations via packed objects, as they call it.

JIT compilers also remove locks and synchronized blocks if heuristics prove they are never needed in the dataflow.

In any case, by Java 10, real value types are expected to be part of the language.


Even if you're right. The issue with coding to optimizations is that it's really, really brittle. You change the position of a variable and suddenly your application runs 10x slower.

Why? Because it just went from O(N) with no allocations to O(N^2), because it has to constantly extend and walk a list (the free memory list, which is lower-bounded by the number of iterations of the loop) on every iteration of the loop (malloc is O(N), so calling malloc in a loop automatically increases its complexity, and compilers change stack allocations into mallocs due to optimizations).

I've seen this happen many times. And then, after 2 weeks of searching, you find the cause: someone changed 3 * i into i * 3 in a method, which caused autoboxing to suddenly occur.

Counting on compiler optimizations to save your ass is incredibly, incredibly brittle.


> Even if you're right.

I can provide some presentations from the Java Language Summit.

> Counting on compiler optimizations to save your ass is incredibly, incredibly brittle.

I agree.

Ada, Delphi, Modula-3, Oberon or Eiffel could have been in Java's place with the right stewardship, but sadly that wasn't to be.

I also don't see it getting replaced anytime soon, hence why I welcome the idea of eventually getting value types and proper AOT on the reference JDK while keeping the huge set of libraries that we have available.

In any case, I am both a language geek and a polyglot developer, so I have fun discussing these kinds of subjects without being a language zealot.


C# has value types, right?


And Windows Phone has .NET.

The problem is that Dalvik never had a GC and JIT support that could compare with other implementations.

Even ART seems to still have lots of optimization opportunities to explore.


Dalvik is so terrible even v8 outperforms it.


If it was that easy, Google would have moved away from Java years ago...


From their Google I/O presentations and their attitude towards NDK users vs how other mobile teams deal with their devs, I would say Java runs strong among whoever calls the shots on the Android team.

Even if some of the code looks like it was written by devs recovering from years of exposure to Hungarian notation.


The NDK is not part of their priorities indeed (although to be fair, it seems that things are slowly getting better, with a team dedicated to integrating CLion).

They are smart engineers, though, and I have no doubt that if C++ had been the best choice for the platform, we would not be writing apps in Java...

tbh, I am really tired of the simplistic "because Java" argument with nothing to back it up...

I have no love for the language (although I think it gets more flak than it deserves), but I have spent a lot of time working on the performance of Android apps, and none of the issues I have fixed would have been any different in another language.


I would also have used Java if Oracle hadn't dropped the ball on mobile support, as if they couldn't provide JIT and AOT compilers.

So given that I enjoy C++ when coding on my own, that is what I end up using for hobby coding across mobile platforms. But the NDK and JNI wrapping take the fun out of it.


I am curious: what kind of mobile apps are you working on in your free time?

By design, the NDK can only access a very small part of the platform APIs.

It is not an issue if you are making something where you are supposed to use the NDK (like a drawing app or a game), but for a 'traditional' app, that's another matter.


Very basic games or for the business logic with the UI done in Java and XAML - C++/CX.

But if Clojure properly supported Android and CoreCLR, I would probably use it instead.

As things are, I might go Xamarin in the future.


If you want multiplatform support (or just really like C#/VS), Xamarin can be a good choice for a hobby app.

Otherwise, Kotlin borrows many niceties from C# and allows you to directly use the platform APIs.


This could be done much more simply by masking the boat with the stencil buffer.

