Hacker News | _rlh's comments

“It’s a problem that only Go can solve”

I had this discussion a decade ago and concluded that a reasonably fair scheduler could be built on top of the Go runtime scheduler by gating the work presented to it. The case can be made that the application is the proper, if not the only, place to do this. Performance aside, if you encounter a runtime limitation, filing an issue is how the Go community moves forward.


Go's allocator draws from the Hoard work, as do most modern alloc/free implementations. Similar implementations in the C/C++/Rust flavor do not seem to "inevitably lead to memory fragmentation issues". Perhaps this fragmentation concern is a myth carried over from earlier malloc/free or GC algorithms.


Fred Brooks discussed this in the punningly titled "The Mythical Man-Month". Most of the graybeards have read it; ask to borrow it, it will make their day. The punchline was that on the IBM 360 they stopped fixing bugs once a fix caused as many or more bugs than the bug it fixed, which soon became all bugs.

Well aware of Brooks. When the loop variable semantics were changed, the Go team did an analysis showing that many more bugs were fixed than created by the change.


It was a memory model / two-word atomicity problem. The mutator uses two writes, one for the type and one for the value, to create the interface. The GC concurrently reads the two words of the interface to see whether the value is a pointer or not. This is a race that was considered too expensive / complicated to fix.


Go's defragmentation techniques, and why they work, are discussed in the Hoard papers and have proven their value not only in Go but in most malloc implementations.

There is a relationship between cache locality, moving colocated objects onto different cache lines to control fragmentation, value types, and interior pointers. It may be subtle, but cache optimization is really important for performance and is not ignored by Go, either in the language spec or in the runtime implementation.


Still one of the best ideas in the field in recent years. I will note that it also works for non-moving GCs, and if they are precise, like Go's, they can also update pointers and eliminate the redundant page table entries.


Just for fun, set the Java heap to 0.4 GiB, or use GOGC to set the Go heap to 1.7 GiB. If Go is faster, then try some other sizes and draw a graph to see what the lines look like.


I think you are confusing memory management with the memory model. Memory management is about garbage collection, RC, malloc/free, and allocation. A memory model is about what a program may observe when it reads and writes shared mutable memory. I'm not an Erlang programmer, but in general the Actor concurrency model does not support shared mutable memory. Don't be clever.


Actually, Go has a reputation for having solved many runtime problems, including the GC tail latency problem.


This thread reads eerily like the threads about Go's low-latency GC from 2015: how 10 ms isn't good enough, how throughput will be impacted, and on and on. Three years later, Go treats any 500-microsecond pause as a bug while continuing to focus on throughput. Shenandoah is being put together by some very, very smart people, and I'm optimistic that the only thing standing between Java and the "500 microseconds is a bug" level is engineering hours and resources. More kudos for this achievement are in order.



