From Imperative to Pure-Functional and Back Again: Monads vs. Scoped Continuations

  • This post accompanies this video and won’t make too much sense without it

Last month I gave a talk at the Curry On conference, a new conference co-located with the academic programming-language conference ECOOP. Curry On is intended to bridge the gap between academia and industry. My academic interests do not include programming languages, which I view as the sub-discipline of computer science that has consistently over-promised and under-delivered more than any other (with the possible exception of AI). I am more interested in algorithms than abstractions, and programming language research is mostly concerned with the latter. Nevertheless, as a developer, I must use the abstractions provided by the programming languages I choose to use, and it was with some alarm that I had noted a flow of certain abstractions from academic languages to the mainstream that, in some cases, make a bad fit and mostly cause pain. As an example, I’d like you to ponder the fact that many, many more people now use monads in Java than in Haskell.

In my talk I made the case that the core abstraction of imperative programming is the blocking thread. Once you take it away, you lose most other imperative abstractions like control flow and exception handling (requiring them to be re-implemented in libraries), and most of the advantages imperative languages bring like post-mortem debugging, profiling, and automatic backpressure. It also makes code harder to write and read. Asynchronous programming is, I claim, anathema to imperative languages, whether or not you use monads to ease its pain. The mismatch between async and imperative is fundamental. All the while, we can reach for an abstraction just as powerful as monads – if not more so – which is a natural fit for imperative languages, meshing perfectly with their structure and abilities.

If you haven’t yet, now would be a good time to watch the talk:

In my talk, I claimed that just as monads are an uber-abstraction of pure-functional programming, continuations are the uber-abstraction of imperative programming, and I introduced an abstraction that I called “scoped continuations”, which is little more than delimited continuations with a special sauce (I have no idea if the concept has been discussed elsewhere; if it has, I’d love to know its proper name [see the addendum at the bottom of the post]).

As I’d thought of the idea not long before giving the talk, I was unprepared when presenting scoped continuations, and, as I’ve since given the topic some more consideration, I’d like to continue the discussion of the idea. I made three claims:

  1. Scoped continuations fit naturally with imperative code
  2. Scoped continuations are as powerful as monads
  3. Scoped continuations compose better than monads

I think I made the case for point #1, as scoped continuations let you keep imperative control flow, and they preserve the stack context, which is essential for post-mortem debugging and profiling. I was much more vague when it came to #2, noting intuitively the connection between monads and continuations and providing some examples, but stopping short of a proof, and a member of the audience rightly called me out for that.

Round One: Chaining – Delimited Continuations vs. Monads

After the talk, I spoke with Julian Arni, who showed me a blog post, The Mother of All Monads, by Dan Piponi. The relevant Reddit discussion led me to this 1994 proof by Andrzej Filinski1 that delimited continuations (called partial or composable continuations in Filinski’s paper) can represent any monadic composition. He says:

We show that any monad whose unit and extension operations are expressible as purely functional terms can be embedded in a call-by-value language with “composable continuations”…

… It is somewhat remarkable that monads have had no comparable impact on “impure” functional programming. Perhaps the main reason is that… the monadic framework is already built into the semantic core of eager functional languages with effects, and need not be expressed explicitly. “Impure” constructs, both linguistic (e.g., updatable state, exceptions, or first-class continuations) and external to the language (I/O, OS interface, etc.), all obey a monadic discipline. The only aspect that would seem missing is the ability for programmers to use their own, application-specific monadic abstractions – such as nondeterminism or parsers – with the same ease and naturality as built-in effects.

… In the following, we will show that… a language… with first-class continuations is already “monadically complete” in the sense that any program expressible in the somewhat contorted monadic style can also be written in direct style.

I don’t have the necessary background to follow Filinski’s paper, but, if I’m not mistaken, the difficulty in the proof stems from the fact that the transformation from the monadic form to continuations (what he calls “direct style”) is not a simple mathematical mapping of the monadic functions or the monadic composer (what Haskell calls bind), but requires a deeper transformation of their source-code representation. I will, however, present a specific implementation of delimited continuations in a way that, hopefully, explains the intuition behind the monad-continuation similarity.

A delimited continuation captures a section of the call-stack. It lets us pause a computation and later resume it. Let’s look at a delimited continuation API in Java:

public class Continuation<T> implements Runnable, Serializable, Cloneable {
   public Continuation(Callable<T> target) { ... }
   public T run() { ... }
   public boolean isDone() { ... }
   public T getResult() { ... }

   public static Continuation<?> suspend(Consumer<Continuation<?>> ccc) { ... }
}

The suspend method (which works like Scheme’s shift) pauses the current continuation (provided we’re running inside one), and calls the (optionally) provided callback ccc (the name ccc is an acronym for Called with Current Continuation, which is a play on Scheme’s call-cc). The run function (which corresponds to Scheme’s reset) executes the continuation until it suspends or terminates. So, for example:
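Since the JVM has no built-in continuations (Quasar obtains them via bytecode instrumentation), the run/suspend semantics described above can be simulated, purely for illustration, with one dedicated thread per continuation and a pair of rendezvous queues. The following ToyContinuation is my own sketch; it shares nothing with the real implementation beyond the method names:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.SynchronousQueue;

// A toy Continuation simulating suspend/run with a dedicated thread.
// This only illustrates the semantics; Quasar uses instrumentation, not threads.
class ToyContinuation<T> {
    private final SynchronousQueue<Object> toCont = new SynchronousQueue<>();
    private final SynchronousQueue<Object> fromCont = new SynchronousQueue<>();
    private static final ThreadLocal<ToyContinuation<?>> CURRENT = new ThreadLocal<>();
    private volatile boolean done = false;
    private T result;

    public ToyContinuation(Callable<T> target) {
        Thread t = new Thread(() -> {
            try {
                toCont.take();       // wait for the first run()
                CURRENT.set(this);
                result = target.call();
            } catch (Exception e) {
                throw new RuntimeException(e);
            } finally {
                done = true;
                try { fromCont.put(new Object()); } catch (InterruptedException ignored) {}
            }
        });
        t.start();
    }

    public void run() {
        if (done) return;
        try {
            toCont.put(new Object()); // resume the continuation...
            fromCont.take();          // ...and block until it suspends or finishes
        } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public boolean isDone() { return done; }
    public T getResult() { return result; }

    public static void suspend() {
        ToyContinuation<?> c = CURRENT.get();
        try {
            c.fromCont.put(new Object()); // yield control back to run()
            c.toCont.take();              // block until resumed
        } catch (InterruptedException e) { throw new RuntimeException(e); }
    }

    public static void main(String[] args) {
        ToyContinuation<Integer> c = new ToyContinuation<>(() -> {
            System.out.println("Pausing...");
            suspend();
            return 3;
        });
        c.run(); // prints "Pausing..." and returns when the body suspends
        c.run(); // resumes past the suspend; the body returns 3
        System.out.println(c.getResult()); // prints "3"
    }
}
```

The rendezvous queues ensure that exactly one of the two threads runs at a time, which is what makes this a (very expensive) continuation rather than ordinary concurrency.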

class Foo {
    static int foo() {
        bar();
        return 3;
    }

    static void bar() {
        System.out.println("Pausing...");
        Continuation.suspend(null);
    }

    public static void main(String[] args) {
        Continuation<Integer> c = new Continuation<>(Foo::foo);
        c.run(); // prints "Pausing..."
        c.run(); // resumes past the suspend point
        System.out.println(c.getResult()); // prints "3"
    }
}

Because suspend returns the continuation and passes it to a callback, we can extend the Continuation class and add some internal fields to yield a ValuedContinuation:

public class ValuedContinuation<T, Out, In> extends Continuation<T> {
    private Out pauseOut;
    private In pauseIn;
    private RuntimeException pauseInException;

    public void run(In in) { ... }
    public void run(RuntimeException e) { ... }
    public Out getPauseValue() { ... }

    public static <Out, In> In pause(Out value) {...}
    public static      <In> In pause(Consumer<ValuedContinuation<?, ?, In>> ccc) {...}
    public static   <V, In> In pause(V x, BiConsumer<V, ValuedContinuation<?, ?, In>> ccc) {...}
}

ValuedContinuation lets us pass values in and out of the continuation. If we call pause(3), the value 3 will be returned by getPauseValue, and if we resume the continuation with run(5), the value 5 will be returned by pause. run(new RuntimeException()) would cause pause to throw that exception. For example:

ValuedContinuation<Void, Integer, Integer> c = new ValuedContinuation<>(() -> {
            int x = pause(5);
            x = pause(x + 10);
            x = pause(x * 100);
            return null;
        });

while (!c.isDone()) {
    c.run(3); // always resume with the value 3
    if (!c.isDone())
        System.out.println(c.getPauseValue()); // prints: 5, 13, 300
}

Now we’re in a position to understand the intuition behind the claim that continuations can express any monad: our monadic composer (or bind) would be the callback, ccc, passed to pause; the code following each pause is the next monadic function in the monadic sequence, and calling run is applying the next monadic function in the chain.

The difference is that monadic functions trampoline back to the enclosing composer (bind), while here we call the composer (our ccc) inside our continuation. As I claim in the talk, the advantage continuations have in imperative languages is that they interact well with all imperative concepts like imperative control flow and exceptions and preserve the stack context which is important for debugging and profiling.
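To make the trampolining difference concrete, here is a toy contrast of my own (not from the talk): a computation that may fail, written once in monadic style with Java’s Optional, where every step routes through the composer (flatMap), and once in direct imperative style, where ordinary control flow and the runtime’s stack unwinding do the same job:

```java
import java.util.Optional;

class MaybeStyles {
    // Monadic style: each step trampolines back to the composer (flatMap/bind).
    static Optional<Integer> monadic(Optional<Integer> x) {
        return x.flatMap(a -> parse("2").flatMap(b -> Optional.of(a + b)));
    }

    static Optional<Integer> parse(String s) {
        try { return Optional.of(Integer.parseInt(s)); }
        catch (NumberFormatException e) { return Optional.empty(); }
    }

    // Direct style: the "composer" is the language itself.
    // Failure is an exception unwinding the stack; no explicit chaining needed.
    static int direct(int a) {
        int b = Integer.parseInt("2"); // may throw NumberFormatException
        return a + b;
    }
}
```

In the monadic version the stack at each step belongs to flatMap, not to the logical computation; in the direct version the stack tells the whole story, which is exactly the debugging and profiling advantage claimed above.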

Before we move on, let’s take a look at an example that makes use of the ccc callback. It is an example of the “future monad” in continuation form. Suppose we have an asynchronous service:

interface AsyncHandler<T> {
    void success(T result);
    void failure(RuntimeException error);
}

interface AsyncService<T> {
    void submit(AsyncHandler<T> callback);
}

We can then define this method:

static <T> Consumer<ValuedContinuation<?, ?, T>> await(AsyncService<T> service) {
    return c -> {
        service.submit(new AsyncHandler<T>() {
            public void success(T result) {
                c.run(result);
            }

            public void failure(RuntimeException error) {
                c.run(error);
            }
        });
    };
}
which we’ll use in code running inside a continuation like so:

String y = pause(await(service));

The above pauses the continuation until the service request completes, and then resumes it with the result.
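For contrast, the mainstream Java rendering of the same flow is the future monad: the rest of the computation becomes an explicit callback composed with thenApply/thenCompose, and the stack context at the call site is lost. A minimal sketch (the service call here stands in for the hypothetical AsyncService above, adapted to CompletableFuture):

```java
import java.util.concurrent.CompletableFuture;

class AwaitVsCompose {
    // Monadic composition: the continuation of the computation is an explicit
    // callback handed to the composer, rather than a blocked stack that resumes.
    static CompletableFuture<String> monadicStyle(CompletableFuture<String> serviceCall) {
        return serviceCall.thenApply(result -> "got: " + result);
    }
}
```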

Round Two: Composing – Scoped Continuations vs. Monad Transformers

In the talk I also claimed that monads are hard to compose2, even in pure-functional languages, which are a great fit for monads. Composing monads (i.e. writing monadic code that uses exceptions and IO and produces a sequence) requires the use of monad transformers, which are quite difficult to understand, as they make use of very high-order functions to form a brain-teasing chain of lambdish indirection.

To create easily-composable continuations, in my talk I introduced scoped continuations, which are a variant of delimited continuations. Scoped continuations are nested continuations where at any level, code is free to suspend any of its enclosing continuations. The idea is very similar to nested try/catch blocks, where, depending on the exception type, execution jumps to the catch block at the appropriate nesting scope.
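The try/catch analogy can be made literal in plain Java (my illustration, not part of the prototype): the type thrown selects which enclosing handler, i.e. which “scope”, the stack unwinds to, exactly as suspend(scope, ...) selects which enclosing continuation pauses:

```java
class ScopeAnalogy {
    static class AScope extends RuntimeException {}
    static class BScope extends RuntimeException {}

    static String whoCatches(RuntimeException e) {
        try {                   // outer scope, like an enclosing ACont
            try {               // inner scope, like a nested BCont
                throw e;
            } catch (BScope b) {
                return "inner"; // unwinds only to the nearest B handler
            }
        } catch (AScope a) {
            return "outer";     // skips the inner handler entirely
        }
    }
}
```

Throwing a BScope lands in the inner handler; throwing an AScope bypasses it and lands in the outer one, just as nested scoped continuations suspend at the matching nesting level.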

To test how well the idea works in practice, I’ve implemented a scoped continuation prototype in Java and Clojure. You can find code using scoped continuations in the cont branch of Quasar and Pulsar, respectively, here and here.

To implement continuations I used Quasar’s instrumentation, which was quite straightforward (while scoped continuations may some day find their way into upstream Quasar, this won’t happen soon, as we first need to make instrumentation completely transparent and hands-off, which we hope to do when Java 9 is released). The hard part was supporting cloning of nested continuations (needed for the nondeterministic continuation introduced below) in an environment where references to continuations may exist not only on the stack, but also on the heap. I tried three different approaches, and I’m not too pleased with any of them.

For scoped continuations, we need to change the Continuation (and similarly ValuedContinuation) class slightly:

public class Continuation<S extends Suspend, T> implements Runnable, Serializable, Cloneable {
   public Continuation(Class<S> scope, Callable<T> target) { ... } // <-- scope
   public T run() { ... }
   public boolean isDone() { ... }
   public T getResult() { ... }

   public static Continuation<?> suspend(S scope, Consumer<Continuation<?>> ccc) { ... } // <-- scope
}

Scopes are global names. In Java, I’ve chosen to represent a scope just like exception scopes are represented: as a class name (in the current implementation, scopes are classes extending Suspend which is an exception type).

Scoped continuations are defined and used so:

class ACont<T> extends ValuedContinuation<AScope, T> {
    public ACont(Callable<T> target) {
        // ...
    }

    public static AScope A = new AScope();
}

// similarly BCont, and then:

static void foo() {
    Continuation<Void> c = new ACont(() -> {
        // ...
        Continuation<Void> c2 = new BCont(() -> {
            // ...
            suspend(B, ...); // suspends the enclosing BCont
            // ...
            suspend(A, ...); // suspends the enclosing ACont
            // ...
        });
        // ...
    });
    // ...
}

In Clojure, scopes are global symbols, and scoped continuations can be defined so:

(cont A (fn []
          ; ....
          (cont B (fn []
                    ; ....
                    (pause B ...)
                    ; ...
                    (pause A ...)
                    ; ...
                    ))
          ; ...
          ))

The idea of scoped continuations is that suspending any enclosing continuation scope is comparable to a monadic function returning to any enclosing composer (bind). But in the case of scoped continuations, we don’t need monad transformers to transform either the composer or the chained monadic functions.

To get a feel for what such compositions would look like in real code, I implemented two continuation types: CoIterable – which, like Python generators, generates an Iterable with a continuation and corresponds to Haskell’s list monad – and Ambiguity – which implements nondeterministic computations with backtracking à la Scheme’s amb and corresponds to Haskell’s amb monad.

In isolation, CoIterable is used like this:

Iterable<Integer> range(int from, int to) {
    return new CoIterable<>(() -> {
        for (int i = from; i < to; i++)
            produce(i);
    });
}
For examples of CoIterable operators like flatMap, map, and filter, see here, and note the extra flexibility continuations give us over monads. Since monadic functions trampoline back to the composer, the filter and map operations must be implemented in terms of the single flat-mapping composer, while with continuations, we have the freedom to choose our own composition rule from within the continuation, and can implement filter and map independently of flatMap for better performance.
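That constraint – every derived operation must route through the single composer – can be seen with Java streams, whose flatMap is the list monad’s bind. In a pure monadic setting, filter must be built out of it, as in this sketch:

```java
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

class FilterViaFlatMap {
    // In the list monad, filter is derived from the composer (flatMap/bind):
    // each element maps to a one-element or zero-element stream.
    static List<Integer> evensViaFlatMap(List<Integer> xs) {
        return xs.stream()
                 .flatMap(x -> x % 2 == 0 ? Stream.of(x) : Stream.<Integer>empty())
                 .collect(Collectors.toList());
    }
}
```

A continuation-based CoIterable, by contrast, is free to implement filter directly, simply not resuming production for rejected elements, with no intermediate single-element collections.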

And here’s an example of Ambiguity used in isolation:

Ambiguity<Integer> amb = solve(() -> {
        int a = amb(1, 2, 3); // a is either 1, 2, or 3
        int b = amb(2, 3, 4); // b is either 2, 3, or 4

        assertThat(b < a);    // ... but we know that b < a
        return b;
    });

amb.run(); // returns 2 as that's the only possible solution for b

And now, let’s see how the two compose seamlessly:

Ambiguity<Integer> amb = solve(() -> {
    Iterable<Integer> a = iterable(() -> {
        produce(amb(2, 1)); // pauses on Ambiguity and CoIterable
        produce(amb(3, 10));
    });

    int sum = 0;
    for (int x : a) { // using imperative loops on purpose; functional would work, too
        sum += x;
        assertThat(x % 2 == 0); // we assert that all elements are even
    }
    return sum;
});

amb.run(); // returns 12

Note how the continuation a suspends both on the Ambiguity and on the CoIterable scopes. It creates a list whose first element is either 2 or 1, and whose second element is either 3 or 10, yielding four possible lists: (2, 3), (2, 10), (1, 3) and (1, 10). Later, we assert that all elements must be even, which means that the only valid list for a is (2, 10), and the only possible value for sum is 12.
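The reasoning in that paragraph can be checked by brute force in plain Java, independently of the continuation machinery: enumerate the four candidate lists and apply the all-even constraint.

```java
class AmbCheck {
    // Enumerate all lists (first ∈ {2, 1}, second ∈ {3, 10}), keep the ones
    // whose elements are all even, and sum the single surviving candidate.
    static int solve() {
        for (int first : new int[]{2, 1})
            for (int second : new int[]{3, 10})
                if (first % 2 == 0 && second % 2 == 0)
                    return first + second; // only (2, 10) survives: 12
        throw new IllegalStateException("no solution");
    }
}
```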

As a final example (more examples may be found in the tests here and here; Clojure examples can be found here) let’s complicate things further with another level of nesting:

Fiber<Integer> f = new Fiber<>(() -> {
    Ambiguity<Integer> amb = solve(() -> {
        Iterable<Integer> a = iterable(() -> {
            produce(amb(2, 1));
            sleep(20); // pauses on the Fiber scope
            produce(amb(3, 10));
        });

        int sum = 0;
        for (int x : a) {
            sum += x;
            assertThat(x % 2 == 0);
        }
        return sum;
    });
    return amb.run();
});

f.get(); // returns 12

We’ve now nested the whole thing inside a fiber – Quasar’s lightweight thread implementation – which is little more than a continuation scheduled by Java’s ForkJoin scheduler. Now, the nested code inside a pauses on three different scopes without breaking a sweat and without transformers of any sort.

But what about type safety?

Haskell has a very rich type system, which monads use to great effect. By looking at a (monadic) function’s signature, you can immediately tell which monad type it can “live” in, and you cannot use it anywhere outside that monad. It turns out, scoped continuations can be just as safely-typed without losing any of their desirable properties. For that, we need a (simple) type system that lets us declare:

void foo() suspends A, B

Which means that foo may suspend continuations in both A and B scopes, and can therefore only be called in code that’s within both scopes. The Continuation class would then be defined as (in pseudo-Java):

public class Continuation<S extends Suspend, T> implements Runnable, Serializable, Cloneable {
   public Continuation(Class<S> scope, [Callable<T> suspends S|Others] target) { ... }
   public T run() suspends Others { ... }

   public static Continuation<?> suspend(S scope, Consumer<Continuation<?>> ccc) suspends S
}

So the continuation can run any target code that may suspend on the parameterized S scope, and possibly on other scopes, and the run method swallows the S scope but still suspends on the other scopes.

As it turns out, we already have such a type system – almost: Java’s checked exceptions. If we had made Suspend (from which all scopes descend) a checked exception type, we could have used Java’s throws just like suspends in the pseudo-Java above. The reason I haven’t done so is that Java’s type system doesn’t let you capture multiple checked exception types as I did with Others above, which would mean we’d need explicit cases for each scope arity (functions that suspend one scope, two scopes, etc.), which might make things cumbersome.
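The “almost” can be demonstrated with today’s checked exceptions. In this sketch of mine, Suspend is a stand-in checked type (not Quasar’s class): throws tracks scopes exactly like the proposed suspends clause, and a run method for one scope “swallows” that scope while still declaring the others.

```java
class CheckedScopes {
    static class Suspend extends Exception {}
    static class AScope extends Suspend {}
    static class BScope extends Suspend {}

    // "void foo() suspends A, B" in today's Java:
    static void foo() throws AScope, BScope {
        // may suspend on either enclosing scope
    }

    // run() for the A scope swallows A but must still declare B explicitly --
    // this is where Java lacks a way to abstract over "all the other scopes":
    static void runA() throws BScope {
        try {
            foo();
        } catch (AScope a) {
            // handle a suspension on A here
        }
    }

    static boolean demo() {
        try {
            runA(); // compiles: A is swallowed, B must be handled by us
            return true;
        } catch (Suspend s) {
            return false;
        }
    }
}
```

The compiler enforces exactly the desired property: calling foo() outside a context that handles (or declares) both scopes is a compile-time error.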

Then, we could also improve ValuedContinuation’s type safety by parameterizing the scope, so that we’d have:

void foo() suspends CoIterableScope<Integer>

Which would only let foo be called within a CoIterable that produces a sequence of Integers (rather than, say, Strings). Unfortunately we can’t quite do that, either, as Java does not currently allow generic exception types.

To be continued?

I hope that by discussing scoped continuations in greater depth I’ve been able to explain the idea better than the hand-waving I used in my talk, and I’m glad to have found Filinski’s proof (which is probably well known in PL circles).

I hope my talk has convinced you that monads have no place in imperative languages (except for parallel computations, maybe), and if not, I’d love to hear why not. I also believe that scoped continuations compose better than monads even in PFP languages (and also that monads are, in general, not a very good way to model effects, but that’s a whole other discussion).

Finally, while I strongly believe imperative languages should have some form of lightweight threads (AKA fibers, AKA user-mode threads, sort-of AKA green-threads), and threads (of any kind) are nothing but continuations scheduled by an appropriate scheduler, I do not necessarily think that imperative languages should directly expose scoped continuations as an abstraction. After all, abstractions exist to increase code reuse, assist in code maintenance, and help verification: in short they exist to reduce the cost of development, and – at least from a non-research perspective – that is the only metric by which they are judged3. I think continuations are the elegant imperative counterpart to PFP’s elegant monads, but I’m not yet convinced of their utility in practice.

If you’d like to know more about continuations, this is the history of their development which gives credit to all the right people.

Addendum 1

Since first publishing this blog post, I have managed to find a reference to scoped continuations in a 1993 paper by Philip Wadler called Monads and composable continuations, where he refers to them simply as “composable continuations with multiple levels”. As Wadler showed that delimited continuations are expressible by monads, and Filinski showed (a year later) that monads are expressible as delimited continuations, it stands to reason that the two are duals. Nevertheless, even as duals, each is more suitable to a particular programming style, and there is little doubt that continuations are more appropriate for impure, call-by-value languages (imperative and functional-imperative). Wadler concludes his paper by saying:

One goal of composable continuations with multiple levels was to be able to factor different effects into different levels. Danvy and Filinski claim it is relatively easy to combine different effects uniformly in this way. Monads are also intended to factor effects in a way which eases their combination. However, there is no uniform rule for combining any two monads. This paper has used monads to shine some light on composable continuations. Will composable continuations shed light on the problem of combining monads?

Addendum 2

In an online discussion, a reader commented that I have misunderstood monads by talking about what they look like instead of what they are. I think that this is no more than a difference in interpretation, so I’d like to clarify:

As it has been proven (I think) that any effect can be modeled by monads, you could say that all effects are monadic, but just like the mathematician in the famous joke, that’s absolutely true yet absolutely useless (depending on your point-of-view, I guess).

From a mathematical point of view, whenever two things are isomorphic they are the “same”. But from a programming point of view, the two can be very different, as abstractions are psychological interactions with the programmer’s mind, and two isomorphic mathematical concepts can psychologically interact very differently with the programmer. Therefore, if I don’t have to “think in monads” when working with an abstraction, then the abstraction is not a monad, even if an isomorphism exists between them.

According to the mathematical interpretation, being “against monads” is as nonsensical as being against the number 1. In my interpretation, however, representing the number 1 in Arabic numerals, in Church numerals, or in set-theory numerals are very much psychologically different, and hence substantially different in programming languages, as programming languages are first and foremost a kind of human language. In a programming language, abstractions are defined (and measured) both by mathematical as well as psychological (or economic) properties.

I’m an “algorithmist”, not an “abstractionist” (and, unfortunately, I think those two CS perspectives are often at odds), so I measure the usefulness of an abstraction only by the change in cost it introduces to writing and maintaining my algorithms. To me, monads are a design pattern rather than a mathematical object expressed in some particular notation.

  1. I then found this post, which says that Filinski’s proof does not extend to monads that take advantage of lazy (call-by-name) evaluation.
  2. For example, try to compose Java streams with CompletableFutures. It ain’t easy.
  3. See this HN discussion on the subject.

Ron Pressler

Ron Pressler is the founder of Parallel Universe, a company building a JVM-based server-side stack for easily writing high-performance, low latency applications that work in harmony with modern hardware architecture.