Tail call optimization in ECMAScript 6 (2ality.com)
62 points by shawndumas on July 1, 2015 | hide | past | favorite | 13 comments


Guess ES6 takes some wind out of this guy's rant: http://journal.stuffwithstuff.com/2013/07/18/javascript-isnt...

He argues that JS only has 3 of the 9 essential elements of Scheme.

ES6 gains tail call elimination and lexical block scope. Though I guess it arguably gives up "minimalism." So now it's 4/9. If it gets macros (which there has been talk of happening in the future) it'll be 5/9.


It still doesn't seem like Scheme even if it gets those features. Lots of languages have those features, and we don't consider them Scheme.


How does one feature detect whether a browser supports TCO? It's an invisible feature. Without it, tail-calling code will seem to work, but could explode the JS stack.


Good question. One hacky way of detecting TCO could be to cause a stack overflow:

  'use strict'; // ES6 proper tail calls apply only in strict mode
  var supportsTco = false;
  try {
    // The call must be in tail position, which requires `return`
    (function rec(n) { return n ? rec(n - 1) : 0; })(20000);
    supportsTco = true;
  }
  catch (e) { /* Ignore stack overflow error */ }
This works for me on Firefox and Chrome (by "works" I mean neither implements TCO yet, and the code correctly detects that). But it's not bullet-proof: it would fail on a JS runtime without TCO but with a larger stack size limit, or on a runtime that recognizes this specific pattern of recursion and rewrites it into a loop (or eliminates the innocuous function call entirely).

A side-question: does anybody else feel that stack overflows just don't belong in high level languages? I mean, most high level languages remove the distinction between stack-allocated values and heap-allocated ones: it's all references. It doesn't make sense to worry about whether a Ruby (or JS, or Python, etc) object lives on the heap or on the stack. Yet you still have to worry about the function call stack in these languages, and they all have tiny limits (on the order of thousands of stack frames) compared to the heap size limit.

Wouldn't it be possible for a language to make the stack grow without an arbitrary limit (besides the limit of physical memory)? And if so, why do these high level languages that focus on ease of use not implement the stack that way?
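To illustrate the asymmetry with a toy sketch of my own (not from any spec or implementation): the same computation overflows when its depth lives on the call stack, but not when the "stack" is an ordinary heap array.

```javascript
// Toy sketch: recursion depth on the call stack vs. on the heap.
function sumRecursive(n) {
  // Each call consumes a stack frame; large n throws RangeError in most engines.
  return n === 0 ? 0 : n + sumRecursive(n - 1);
}

function sumExplicitStack(n) {
  // The "frames" are heap-allocated array slots, bounded only by memory.
  const stack = [];
  while (n > 0) stack.push(n--);
  let total = 0;
  while (stack.length) total += stack.pop();
  return total;
}

sumExplicitStack(1000000); // 500000500000, no overflow
```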


> One hacky way of detecting TCO could be to cause a stack overflow

You can do this to test your own browser, but it's rather pointless in practice unless you can do it at runtime without crashing the browser or causing it to become unresponsive. Otherwise, you'll have to manually keep track of which (browser, version) combination supports it and which ones don't.

> Wouldn't it be possible for a language to make the stack grow without an arbitrary limit (besides the limit of physical memory)?

Yes, some languages already do this. For example, you'll never get a literal "stack overflow" error in Go[0]. (You'll never get one in Java either - it'll be an OutOfMemoryException or whatever they call it)

[0] http://dave.cheney.net/2013/06/02/why-is-a-goroutines-stack-...


> [...] it's rather pointless in practice unless you can do it at runtime without crashing the browser or causing it to become unresponsive.

The snippet I posted doesn't crash the browser and seems to take around 1 or 2 milliseconds to run. Considering usual page load times, and if detecting whether TCO works is important for an application, it doesn't seem like an unreasonable solution (albeit pretty ugly hehe).

Thanks for the pointers about languages that don't have this limitation. Unfortunately, it doesn't seem to be the case for Java. This simple recursive function throws a StackOverflowError in less than 100,000 iterations: http://ideone.com/Tf4n3I, and it's not because the program is consuming all physical memory:

  /usr/bin/time -v java StackOverflowExample
  Exception in thread "main" java.lang.StackOverflowError
          at StackOverflowExample.rec(StackOverflowExample.java:7)
          at StackOverflowExample.rec(StackOverflowExample.java:8)
          at StackOverflowExample.rec(StackOverflowExample.java:8)
          ...
  Command exited with non-zero status 1
          Command being timed: "java StackOverflowExample"
          User time (seconds): 0.13
          System time (seconds): 0.01
          ...
          Maximum resident set size (kbytes): 27000
          ...


I would use two mutually recursive functions rather than a single recursive function, as an implementation might support TCE for direct self-recursion but not for the general case.

I’d also up the number. That code runs fine in Safari for 20,000, I had to use 200,000 to get it to break.
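A sketch of that variant (same hack as the earlier snippet, with names of my own): two mutually recursive functions, so an engine that only optimizes direct self-recursion won't slip through, plus the higher iteration count.

```javascript
'use strict'; // ES6 proper tail calls are strict-mode only
var supportsTce = false;
try {
  var isOdd  = function (n) { return n === 0 ? false : isEven(n - 1); };
  var isEven = function (n) { return n === 0 ? true  : isOdd(n - 1); };
  isEven(200000); // overflows the stack unless tail calls are eliminated
  supportsTce = true;
} catch (e) { /* stack overflow: no tail call elimination */ }
```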


While it's nice that TCO is in the ES6 specification, the fact is that not one JavaScript engine implements it. Not one. The Babel transpiler implements a rather limited form of TCO, but hey, that's better than nothing. Meanwhile, an alternative to TCO is to use a trampoline, which I've blogged about here: https://taylodl.wordpress.com/2013/06/07/functional-javascri...


I've always wondered... what does a tail call recursion look like in a debugger?

Essentially, it is just a loop, but is it worth providing some special indication of tail recursion?


It is straightforward to provide the type of stack trace information people want for debugging in the presence of tail call elimination (not optimization ;) ) http://www.brinckerhoff.org/clements/papers/cf-toplas04.pdf

You can even try it out in Racket.

Edit: to clarify, my parenthetical was meant to say tail call optimization is not an optimization but a matter of correctness (so best to call it elimination instead), not to make a distinction between the two.

Also, that isn't quite the right paper but I don't have time to find the one I meant right now...


How does it work? There have been bugs I could not diagnose because the caller was optimized out in the stack trace.

I took a look at the paper. It's very dense but seems to be a technique for annotating stack frames with permissions? That sounds interesting but is no help for debugging.


Yes, the paper is about showing that you can do the equivalent of annotating stack frames for security in a language with tail calls. The same ideas work for debugging, though.

Basically, the key idea is that you can manage the debugging information yourself. So for simple tail recursion, you can have the first call push a marker on the stack and subsequent calls update a count on the marker (if you care about the depth). For more complicated interactions, you can record whatever debugging information you need, even a full stack by associating the information with a cell under the top-most stack frame. (Of course, this debugging information may grow with the depth of recursion, depending on whether you try to keep the full stack or not, but will still be more space efficient than keeping around all of the stack frames.)

The main point is that keeping track of your execution context doesn't have to be done by walking the call stack, you just need to have a protocol for maintaining whatever state you want on entry/exit.
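In JavaScript terms, the idea might look like this toy sketch (the names are mine): the function maintains a single marker object as its own debug state, so the information stays O(1) even when TCE drops the individual frames.

```javascript
// Toy sketch: O(1) self-maintained debug state for a tail-recursive loop.
const marker = { callee: 'countDown', calls: 0 };

function countDown(n) {
  marker.calls += 1;       // update the marker instead of relying on frames
  if (n === 0) return marker.calls;
  return countDown(n - 1); // tail call: under TCE this adds no frame
}

countDown(1000); // 1001 calls recorded (n = 1000 down to 0)
```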


Depends on the implementation.

Support for TCO means that an unbounded number of tail calls can be active at a time. A normal call pushes a frame onto the stack; most implementations don't push a frame for a tail call. However, when debugging is turned on, one could keep, say, the 10 most recent frames corresponding to tail calls. That would give you the best of both worlds.
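As a toy sketch of that scheme (entirely hypothetical, not how any engine does it): keep a small ring buffer of the most recent tail-call "frames" by hand.

```javascript
// Hypothetical sketch: retain only the last 10 tail-call frames for debugging.
const LAST_N = 10;
const recentTailFrames = [];

function recordTailCall(name, arg) {
  recentTailFrames.push({ name, arg });
  if (recentTailFrames.length > LAST_N) recentTailFrames.shift();
}

function count(n) {
  recordTailCall('count', n);
  if (n === 0) return n;
  return count(n - 1); // the real frame could be dropped; the buffer remains
}

count(100); // recentTailFrames now holds the frames for n = 9 .. 0
```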



