Goto and Tail Call
Even though our Async APIs provide some control structures, e.g. sequence and loop, they are not enough to model more complex imperative algorithms.
One solution is to express algorithms as flowcharts with GOTOs, then program the GOTOs as tail calls. See Guy Steele, "Lambda: The Ultimate GOTO".
For example, consider this fictitious algorithm in pseudo code:

```
int f0(int x)
{
    int y;
G1:
    y = f1(x);
G2:
    x = f2(y);
    if (x < y) goto G1;
G3:
    y = f3(x);
    if (y < 0) goto G2;
    return y;
}
```
Don't mind what it does or how; we just want to convert it to code in a straightforward way.
It can be programmed in ordinary Java code as

```java
class F0
{
    // state variables
    int x;   // init by constructor
    int y;

    F0(int x){ this.x = x; }

    int g1()
    {
        y = f1(x);
        return g2();      // "goto G2"
    }
    int g2()
    {
        x = f2(y);
        if (x < y)
            return g1();  // "goto G1"
        return g3();
    }
    int g3()
    {
        y = f3(x);
        if (y < 0)
            return g2();  // "goto G2"
        return y;
    }
}

int f0(int x)
{
    return new F0(x).g1();
}
```
Notice how each GOTO is converted to a tail call.
This program works, except that it's prone to stack overflow if the recursion becomes too deep, because Java does not yet support tail call optimization.
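To see why that matters, here is a minimal, self-contained sketch (not from the original example; `countDown` is an illustrative name) showing that even a call in tail position still consumes a stack frame on the JVM:

```java
// Illustrative only: the JVM does not eliminate tail calls,
// so each recursive call below still pushes a stack frame.
static int countDown(int n)
{
    if (n == 0) return 0;
    return countDown(n - 1);   // tail position, but still a real call frame
}

// countDown(10_000_000) will typically throw StackOverflowError
```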
Now suppose `f1`, `f2`, `f3` are async functions

```java
Async<Integer> f1(int x){ ... }
```

We can program the flowchart in a similar way with our Async APIs:
```java
class F0
{
    int x;
    int y;

    Async<Integer> g1()
    {
        return f1(x).then(v -> {
            y = v;
            return g2();            // tail call
        });
    }
    Async<Integer> g2()
    {
        return f2(y).then(v -> {
            x = v;
            if (x < y)
                return g1();        // tail call
            return g3();            // tail call
        });
    }
    Async<Integer> g3()
    {
        return f3(x).then(v -> {
            y = v;
            if (y < 0)
                return g2();        // tail call
            return Async.success(y);
        });
    }
}
```
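For symmetry with the synchronous version, the async flowchart can be wrapped the same way; this is a small sketch, assuming `F0` is given the same constructor that initializes `x`:

```java
// hypothetical wrapper, mirroring the synchronous f0(int) above
Async<Integer> f0(int x)
{
    return new F0(x).g1();   // start at G1, as in the synchronous version
}
```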
Our Async library performs tail call optimization, so this program is safe regardless of the depth of recursion.
If an async action `A1` is defined as `asyncX.then( v -> A2 )`, where `A2` is another async action, then `A2` is a tail call of `A1`. The completion of `A1` is identical to the completion of `A2`: at the same time, with the same result. This gives us the opportunity to internally optimize the two actions as one.
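As a minimal sketch of this pattern (the names `a1`, `a2`, and the `String` result type are illustrative, not from the original):

```java
Async<String> a2(int v)
{
    return Async.success("value=" + v);   // some async action A2
}

Async<String> a1(Async<Integer> asyncX)
{
    return asyncX.then(v -> a2(v));       // A2 is a tail call of A1;
                                          // a1's action completes exactly when a2's does
}
```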
In the previous example, action `g2` may make a tail call to either `g1` or `g3`, depending on the execution path.
Let's see another example. Define the async action `pause(n)` recursively as "sleep 1 second, then `pause(n-1)`".
```java
Async<Void> pause(int n)
{
    if (n <= 0)
        return Async.VOID;

    return Fiber
        .sleep(Duration.ofSeconds(1))
        .then(v -> pause(n - 1));   // tail call
}
```
There's a chain of tail calls `pause(n) -> pause(n-1) -> ... -> pause(x)`, where `pause(x)` is the one that is currently sleeping. We do tail call optimization to remove the intermediary nodes, keeping only `pause(n) -> pause(x)`, a much shorter chain. Therefore the memory consumption of `pause(n)` is bounded, no matter how large `n` is.
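For instance, a call with a very large `n` stays within bounded memory (a hypothetical usage, just to illustrate the point):

```java
// Sets up one million recursive steps, yet memory stays bounded,
// because the intermediary tail calls are collapsed away.
Async<Void> longPause = pause(1_000_000);
```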
In the following demonstration, `forEach()` is implemented with tail recursion:
```java
public static <T> Async<Void> forEach(AsyncIterator<T> iter, ConsumerX<T> action)
{
    return forEachR(iter, action)
        .catch_(End.class, e -> null);   // the control exception to end the loop
}

private static <T> Async<Void> forEachR(AsyncIterator<T> iter, ConsumerX<T> action)
{
    return iter.next()
        .map(t -> {
            action.accept(t);
            return null;
        })
        .then(v -> forEachR(iter, action));   // tail recursion
    // recursion ends when `iter.next()` or `action.accept()` fails.
}
```
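A call site might look like the following sketch (hypothetical; it assumes `lines` is an `AsyncIterator<String>` obtained elsewhere):

```java
// Print every item; completes successfully when `lines` signals End,
// and fails if an action throws.
Async<Void> done = forEach(lines, line -> System.out.println(line));
```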
In contrast, the following implementation is not tail recursion; it may cause OutOfMemoryError. The recursive call is not the last action of `forEachX`: each level adds a `catch_` on top of it, so the chain of pending actions grows with every iteration.
```java
public static <T> Async<Void> forEachX(AsyncIterator<T> iter, ConsumerX<T> action)
{
    return iter.next()
        .map(t -> {
            action.accept(t);
            return null;
        })
        .then(v -> forEachX(iter, action))   // recursion, but not at the tail!
        .catch_(End.class, end -> null);
}
```