Implementing algebraic effects through typeclass prolog

Monads are a popular way to model effects in purely functional programming languages, but they come with problems. For instance, to compose two effects, we need a way to compose their associated monads. Since there isn't really a way to define a product on the set of all monads, the usual solution is monad transformers (a monad transformer is a thing that takes a monad and returns a monad). These are annoying to use (at least for me) and inflexible. An alternative that some (mostly research) languages are starting to adopt is algebraic effects.

Algebraic effects are much easier to compose and reason about. They can be seen as the type-safe incarnation of delimited continuations (we used delimited continuations to implement async and await syntax in a previous post). While they mostly exist as special features in languages that few people use, we can actually implement them in Haskell through a few language extensions and some simple typeclass metaprogramming acrobatics (while rewriting this, I realized that I basically rewrote the core of freer from first principles; that makes it easier to motivate this post, I suppose).

1. Obligatory pragmas

We will be writing a lot of type level code and consequently, we will need to give the compiler a lot of LANGUAGE pragmas.

In particular, DataKinds is what lets us construct a type-level list of kind [* -> *]. This is useful because we want to use the familiar list structure, but at the type level.

{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE FunctionalDependencies #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE PolyKinds #-}
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TypeApplications #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

2. Imports

At some point, we are going to express the constraint "every element of that type-level list of type constructors is a functor". This entails both being able to pattern match on types (using type families) and being able to produce the kind of thing we can put to the left of a => in a type signature. Those things are constraints, and the Haskell compiler will try to solve them at compile time.

import GHC.Exts (Constraint)
import Data.Proxy

3. Recursive types as the fixpoint of a functor

One important fact about types that hold an infinite number of values is that most of the time, they are an instance of what is called an inductive data type. In Haskell, an inductive data type can be constructed by taking the fixpoint of a positive type constructor called its generator. This means (approximately) that we apply the type constructor to some other type an infinite number of times. Let's say we have the following type constructor:

data N n = S n | Z

In type algebra (not an actual term people use, but the universe of types behaves, under isomorphism, like a semiring: it has all the things you expect from the natural numbers, namely addition, multiplication, distributivity and the right units), we can try and see what happens when we apply it to \( \top \) an infinite number of times.

\begin{equation*}
\begin{aligned}
N^{\infty}(\top)
&= Z + S(N^{\infty-1}(\top)) \\
&= Z + S(Z) + S(N^{\infty-2}(\top)) \\
&= \dots \\
&= Z + S(Z) + S(S(Z)) + S(S(S(Z))) + \dots \\
\end{aligned}
\end{equation*}

If we look at the unrolling closely, the result of this infinite type series is precisely the natural numbers.

In Haskell, one can codify this idea of looping a type constructor unto itself an infinite number of times through the Fix type constructor:

data Fix f = In (f (Fix f))
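To make this concrete, here is a small self-contained sketch (the names Nat, toInt and the numerals are mine) of the naturals as the fixpoint of N:

```haskell
{-# LANGUAGE DeriveFunctor #-}

data N n = S n | Z
  deriving Functor

data Fix f = In (f (Fix f))

-- the naturals are the fixpoint of N
type Nat = Fix N

zero, one, two :: Nat
zero = In Z
one  = In (S zero)
two  = In (S one)

-- recover an ordinary Int by counting the S constructors
toInt :: Nat -> Int
toInt (In Z)     = 0
toInt (In (S n)) = 1 + toInt n
```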

Obviously, nobody would want to work with Fix N instead of Nat, but the former has richer structure that can be exploited to produce extra generic code. In the following sections, we are going to take the idea of a functor fix point and extend it to create a nice API for algebraic effects.

4. Syntax trees as fixpoint of sum of functors

Let's think about the idea of an abstract syntax tree. This will be a tree that has a couple of different node types. Some of these nodes are going to have children, some won't. Some of these nodes are going to contain extra information, some won't. We can represent a specific node type, its number of children and the extra stuff it contains through a one-argument type constructor. Let's define the type constructors associated to the node types of the trees of arithmetic expressions.

data Plus s = Plus s s
  deriving (Functor, Show)

data Zero s = Zero
  deriving (Functor, Show)

data Times s = Times s s
  deriving (Functor, Show)

data One s = One
  deriving (Functor, Show)

Once we have the building blocks of our expression tree, we still need to assemble them! Currently, we only have a bunch of one-level expression trees with holes. The first step will be to merge them into a single type constructor (with many value constructors), the second step will be to take its fixpoint, which will close the loop and give us a closed type.

It is straightforward to define a :+: operator on one-argument type constructors.

data (a :+: b) x = InL (a x) | InR (b x)
  deriving (Functor, Show)
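For instance, a value of (Plus :+: Zero) x picks a side with InL or InR (this snippet re-declares the constructors above so it stands alone; the names left and right are mine):

```haskell
{-# LANGUAGE DeriveFunctor #-}
{-# LANGUAGE TypeOperators #-}

data Plus s = Plus s s
  deriving (Functor, Show)

data Zero s = Zero
  deriving (Functor, Show)

data (a :+: b) x = InL (a x) | InR (b x)
  deriving (Functor, Show)

-- one-level trees whose node is either a Plus or a Zero
left, right :: (Plus :+: Zero) Int
left  = InL (Plus 1 2)
right = InR Zero
```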

Then, to get the actual type of arithmetic expression trees, we would simply have to do this:

type Expr = Fix (Plus :+: Zero :+: Times :+: One)

However, this is not the best we can do. While this way of summing pairs of functors together until there is only one left is intuitive, it lacks the uniformity (and polymorphism-friendliness) of a real "sum all the functors in this list" operator.

5. The Sum constructor

If we have a list of type constructors fs :: [* -> *], we can sum them all in one shot through what looks a lot like the definition of the list type. An element of ((the sum of the constructors) applied to a type) is either an element of the first constructor applied to the type, or an element of ((the sum of the remaining constructors) applied to the type). (I wish English had better ways of making precedence explicit.)

data Sum (fs :: [* -> *]) (x :: *) where
  Head :: f x -> Sum (f ': fs) x
  Tail :: Sum fs x -> Sum (f ': fs) x

One advantage of this representation over the :+: one is the fact that taking the sum of an empty list automatically gives us the empty type (notice there is no constructor that produces a Sum '[] x) while it would have been necessary to use a special empty type in the case of the binary constructor addition.

Now let's Fix our Sum and see it in action!

zeroPlusOne :: Fix (Sum [Zero, Plus, One])
zeroPlusOne = In $ Tail $ Head $ Plus (In $ Head $ Zero) (In $ Tail $ Tail $ Head $ One)

Actually using this way to write the terms would be pretty awful. We will borrow the technique presented in the paper Data types à la carte[1] to implement smart constructors that let us express the same terms, but in a more concise way.

6. Free monads

Recall the definition of a monad: "A monad \( T \) is a monoid in the category of endofunctors". In simpler English (I'm not sure there is an even simpler English available to express this), this means that we have two natural transformations

\begin{equation*}
\begin{aligned}
\eta : I \to T \\
\mu : TT \to T \\
\end{aligned}
\end{equation*}

Haskell's monad type class presents \( \eta \) as the polymorphic function pure and \( \mu \) as >>=. (The more categorical translation of \( \mu \) would be join, but working with it would make for less efficient programs. We will still work with the join formulation in the next section, though, because the >>=-based free monad is less intuitive to understand.) For a type constructor \( T \), having an implementation of those two functions that respects the monad identities (recall it is "just" a monoid: the identities are basically associativity of \( \mu \) and the left and right unit properties of \( \eta \)) is basically what is meant by a monad.

An important construction that we will use later is the free monad. We call this monad "Free" because it doesn't add new equalities (or computational behavior) to the functor we use to create it. Moreover, the way we construct it is pretty much identical to the way we construct the free monoid over an alphabet. The latter construction is more intuitive, so let's go over free monoids first.

A monoid is a set \( M \) with a neutral element \( e \in M \) and a "multiplication" \(
\cdot : M \times M \to M \).

Examples of monoids are:

  1. Natural numbers, zero, addition.
  2. Positive natural numbers, one, multiplication.
  3. Functions \( \mathbb{R} \to \mathbb{R} \), the identity function \( f(x) = x \), composition.

The free monoid on an alphabet set \( A \) is the "monoid with the least equalities" that we can construct from that set. We do this by starting with our alphabet \( A \) and forcing the necessary equalities to be true (by adding elements and quotienting them) until the monoid axioms are respected.

  1. Let the set be \( A \).
  2. Add to \( A \) a fresh element \( e \) that is different from all elements we have until now. This will be our unit element.
  3. For each pair \( x, y \in A \), add a fresh element \( xy \). This new element will let us interpret the multiplication.
  4. For each element \( x \), quotient the set until \( ex = x = xe \).
  5. For all elements \( x, y, z \), quotient the set until \( (xy)z = x(yz) \).

It turns out that what we end up with is precisely the list of elements of our original alphabet set. The unit \( e \) is the empty list and multiplication is concatenation of lists. In other words, the elements of the free monoid with alphabet \( A \) are:

\[ \text{for some $n \in \mathbb{N}$}, A^n\]

where \( A^n \) is the n'th power (using the cartesian product) of \( A \).
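In Haskell terms, the free monoid over a type a is just [a]: the unit is the empty list, multiplication is concatenation, and a letter embeds as a one-element word. A quick sketch (the names unit, mult and embed are mine):

```haskell
-- the free monoid over a: words made of letters drawn from a
unit :: [a]
unit = []

mult :: [a] -> [a] -> [a]
mult = (++)

-- embed a letter as a one-letter word
embed :: a -> [a]
embed x = [x]
```

The monoid laws hold on the nose: mult unit w == w == mult w unit, and mult is associative.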

Now let's repeat the same trick for monads. If the elements of the free monoid are powers of the alphabet set, then the elements of the free monad are powers of the alphabet functor! (The reason the free monoid is the "n-fold cartesian product of \( A \) with itself" while the free monad is the "n-fold composition of \( T \) with itself" is probably because \( \times \) and \( \circ \) are, respectively, the products within the monoidal categories \( \mathbf{Set} \) and \( [C, C] \) where the two instances of generalized monoids live (see the nLab page on the topic). I'm not actually sure it is the case that "the free monoid generated by \( x \) in the monoidal category \( (C, \otimes, 1) \) is the n-fold \( \otimes \)-product of \( x \) with itself", but that might be interesting to prove, I guess.)

\[ \text{Free}(T) = \text{for some $n \in \mathbb{N}$}, T^n = T \circ \dots \circ T \]

To make \( \text{Free}(T) \) into a monad, we need:

  1. A natural transformation \( \eta : I \to \text{Free}(T) \)
  2. A natural transformation \( \mu : \text{Free}(T) \circ \text{Free}(T) \to \text{Free}(T) \)

The free monad Free f makes f into a monad, because a term of type Free f x is inhabited by terms of shape \( f^n(x) \) for some \( n \in \mathbb{N} \). \( f^0(x) = x \) is Return and the Join operation works because \( f^n(f^m(x)) = f^{n+m}(x) \).

data Free f x = Return x | Join (f (Free f x))
  deriving Functor
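As a sketch, here are \( \eta \) and \( \mu \) for Free f (I name them eta and mu, and use Maybe as a throwaway example functor; leaf is also my name): \( \eta \) is just Return, and \( \mu \) grafts an inner tree onto the leaves of the outer one, exploiting \( f^n(f^m(x)) = f^{n+m}(x) \).

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Free f x = Return x | Join (f (Free f x))
  deriving Functor

-- eta: a pure value is a zero-layer tree (f^0 x = x)
eta :: x -> Free f x
eta = Return

-- mu: fuse a tree of trees into one tree (f^n (f^m x) = f^(n+m) x)
mu :: Functor f => Free f (Free f x) -> Free f x
mu (Return inner) = inner
mu (Join fx)      = Join (fmap mu fx)

-- extract the single leaf of a Free Maybe tree, for observation
leaf :: Free Maybe x -> x
leaf (Return x)      = x
leaf (Join (Just k)) = leaf k
leaf (Join Nothing)  = error "no leaf"
```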

7. Applications of the free monad

While the previous section explains the categorical "what?" of free monads, the operational "why?" is still probably unclear. Earlier I wrote that the idea of a "free" monad/monoid is to take a carrier and make it into a monad/monoid without endowing it with any computational behavior. For monoids that makes sense: the way to make a set a monoid without actually defining a multiplication operation on it is to consider lists of its elements. Likewise, the way to make a functor into a monad is to consider the trees (with holes) generated by iterating that functor (recall, taking the fixpoint of a functor/type constructor turns it into a tree).

Another way of seeing Free F x is to say "abstract syntax trees with node type F and variables from x".

We can see this by trying to construct terms of Free Plus String:

xPlusY :: Free Plus String
xPlusY = Join $ Plus (Return "x") (Return "y")

Another consequence of this "syntax tree with variables" view is that fmap'ing a function that produces another tree, then applying join (this is the definition of >>=), gives you substitution.
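A self-contained sketch of that substitution reading (bind here is >>= written by direct recursion; subst and vars are my names):

```haskell
{-# LANGUAGE DeriveFunctor #-}

data Plus s = Plus s s
  deriving Functor

data Free f x = Return x | Join (f (Free f x))
  deriving Functor

-- map a tree-producing function over the leaves, flattening as we go
bind :: Functor f => Free f x -> (x -> Free f y) -> Free f y
bind (Return x) k = k x
bind (Join fx)  k = Join (fmap (`bind` k) fx)

xPlusY :: Free Plus String
xPlusY = Join $ Plus (Return "x") (Return "y")

-- substitute y := x + x, leaving every other variable alone
subst :: Free Plus String
subst = xPlusY `bind` \v ->
  if v == "y" then Join (Plus (Return "x") (Return "x")) else Return v

-- read the variables off the leaves, left to right
vars :: Free Plus x -> [x]
vars (Return x)        = [x]
vars (Join (Plus a b)) = vars a ++ vars b
```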

Finally, the free monad lets us build expressions while deferring the decision of how to interpret the >>= operator. This will be useful when we implement effect handlers.

8. Effect type constructor

Now that we have a way to take a sum of a list of functors and we understand what taking the free monad of the result means, let's package both of those ideas into a single datatype.

data Eff (fs :: [* -> *]) (x :: *) where
  Pure :: x -> Eff fs x
  Impure :: Sum fs (Eff fs x) -> Eff fs x

-- To unwrap empty effects into pure values
unwrapEff :: Eff '[] x -> x
unwrapEff (Impure _) = error "impossible"
unwrapEff (Pure x) = x

However, this structure would be very annoying to work with directly. For one, you have to insert a bunch of Tails every time you want to construct a term, and again any time you change the ordering of the effects in the Sum. We need something better. Let's leverage the compiler's type class resolution mechanism to automatically generate smart constructors that add the appropriate number of Tails to any term from a functor in the [* -> *] list.

9. Type class prolog

Before we go and perform programming language magic tricks, let's think a bit about what "type class" and "prolog" mean.

Prolog is a programming language where we program by stating derivation rules (derivation rules as in the type of stuff you see in my post on formal logic and type systems) as "facts". After enumerating all the facts, we give a goal – a judgment (with variables) that we want to prove – to a proof search engine and watch it intelligently explore the space of proofs until it finds values for the variables that make the statement true. As an example, let's look at append.

append :: [x] -> [x] -> [x]
append [] ys = ys
append (x:xs) ys = x:(append xs ys)

In Haskell, we would define the function by pattern matching and recursion (this doesn't say much, because we always do that). When a program wants to compute append x y, it first determines if x is an empty list or not, then evaluates the body of the appropriate clause. In Prolog, we would instead enumerate the conditions under which appending X and Y equals Z.

append([], Y, Y).
append([H|X], Y, [H|Z]) :- append(X, Y, Z).

Here, instead of having a two-argument function, we have a three-place relation. Here is what the clauses say:

first clause
appending the empty list to any list Y equals the same list Y.
second clause
If appending X to Y equals Z, then appending [H|X] to Y equals [H|Z].

The fact that in Prolog the result variable (Z) is explicit lets us do two things:

Express nondeterminism
We can have different ways for a thing to be true.
Any argument can be the input
By asking to solve append("hello", Y, "hello, world"), we get a procedure to strip a prefix from a string.

Type classes are Haskell's mechanism for having arguments that are autofilled by the compiler (this is more often described as a "mechanism for ad-hoc polymorphism", but ad-hoc polymorphism really is just "fill this argument with a value that is consistent with the type"). Out of the box, typeclasses don't let us do crazy stuff (typeclass resolution guarantees termination), but with a bunch of extensions, we end up with a general purpose logic programming language that is executed at Haskell's compile time. Here is what the append function looks like at the typeclass level:

-- The functional dependency a b -> c tells the compiler
-- that the first two lists determine the third
class Append (a :: [k]) (b :: [k]) (c :: [k]) | a b -> c where
  appendProxy :: Proxy a -> Proxy b -> Proxy c
  appendProxy _ _ = Proxy

instance Append '[] a a
instance Append t m r => Append (h':t) m (h':r)

Now, you can execute the following in a REPL to see what Haskell's constraint solver found for c.

:set -XDataKinds
:t appendProxy (Proxy :: Proxy '[1, 2]) (Proxy :: Proxy '[1, 2])

While in theory we can ask Haskell to infer the value of a type variable (like we do for variables in Prolog), we are mostly interested in generating proofs of statements we assert to be true. The proof will be presented to us by the compiler as implementations of certain "witness functions" that will be useful later.

10. Inclusion relation between a constructor and a list of constructors

The first step in implementing our smart constructors for Eff is going to be the "this functor is an element of this functor list" type class, which will be witnessed by functions that transform an (element of a) functor into (an element of) the sum of functors (it is very similar to the \( \prec \) typeclass used in [1]).

class (e :: * -> *) :∈: (es :: [* -> *]) where
  inject :: e x -> Sum es x

The goal is that if we have a :: F x, we also have inject a :: Sum [..., F, ...] x.

Here are the two ways a functor can be an element of a list of functors.

First, it can be the first functor in the list. When that is the case, Head is the constructor that embeds it into the sum of functors.

instance {-# OVERLAPPING #-} f :∈: (f ': fs) where
  inject = Head

Otherwise, the functor can be an element of the list if it is an element of the tail of that list.

instance f :∈: fs => f :∈: (g':fs) where
  inject = Tail . inject

Now, this typeclass makes using the Sum construction a lot easier. There is no need to add a bunch of Tail calls like we did earlier with zeroPlusOne. We can just call inject and it will automatically apply the right chain of Heads and Tails.

zeroPlusOne' :: Fix (Sum [Zero, Plus, One])
zeroPlusOne' = In $ inject $ Plus (In $ inject $ Zero) (In $ inject $ One)

Here is another example:

data A s = A s
data B s = B s

test1 :: Sum '[A, B] Int
test1 = inject $ B 12

11. Subset relation

One way of seeing the smart constructor we just built is as a way to "raise" a term that comes from some list of constructors (remember, Eff '[Zero, Plus, One] x means "syntax tree where each node is either a Zero, a Plus, a One or a variable with a name from x"). A similar kind of operation we might (will) want to do is raising an expression from a Sum to a Sum of a superset of the functors. We are going to proceed in a similar way as we did for :∈:.

class (l :: [* -> *]) :⊆: (r :: [* -> *]) where
  permute :: Sum l x -> Sum r x

First, the empty set is a subset of everything, because it is possible to transform the sum of no functors (it is always uninhabited) into the sum of anything. This is the base case of our type class recursion.

instance '[] :⊆: fs where
  -- no constructors for Sum '[] x
  permute _ = error "impossible"

For the recursive step, if the head of the LHS is a member of the RHS and the tail of the LHS set is a subset of the RHS, then the LHS is a subset of the RHS.

instance (f :∈: gs, fs :⊆: gs) => (f ': fs) :⊆: gs where
  permute (Head h) = inject h
  permute (Tail t) = permute t

Here is an example of how we might use this relation:

data C s = C s

example :: Sum '[A, B] Int
example = inject $ B 12

example' :: Sum '[B, C, A] Int
example' = permute example

Because of the way we set up our derivation rules, whenever all the functors in a set are known, the instance search algorithm will automatically generate the necessary code to rearrange the order of a functor sum.

The name I gave to the function witnessing the relation is not random: we can also use it to reorder the functors in our sum. Think of how complicated it would be to do that manually!

example'' :: Sum '[A, B, C] Int
example'' = permute example'

12. Show for Eff fs x

Now that we have built smart constructors for Sums of functors, we are almost ready to attack Eff itself. However, to make our lives easier while debugging, we will start by implementing Show for Eff. As you will see, this is not trivial, so it will make for a good exercise.

First of all, why can't we just add deriving Show to the definition of Eff? Simply put, the deriving facility isn't compatible with weird constructors. Remember, Eff takes:

  1. A type-level list of type constructors
  2. A type

to produce a closed type at the end. It is not obvious a priori, but a term of type Eff something should satisfy Show precisely when all type constructors in the list of effects satisfy PreservesShow.

We say that a type constructor * -> * "preserves show" when applying it to a type that satisfies Show produces another type that satisfies Show. To state that at the level of typeclasses, we will need to introduce a quantifier in a spot where we could never put quantifiers before!

class (forall x. Show x => Show (f x)) => PreservesShow f

Here we define a typeclass with no function witnessing it (we will reuse show) that says: "whenever (whenever a type is showable, f of that type is showable), f preserves show". Furthermore, we know that Sum fs preserves show whenever all of the elements of fs preserve it too. We codify this using yet another weird Haskell feature: type families.

type family AllPreserveShow (fs :: [* -> *]) :: Constraint where
  AllPreserveShow (f ': fs) = (PreservesShow f, AllPreserveShow fs)
  AllPreserveShow '[] = ()

Type families are a way to define functions that operate at the level of types. Here, however, we use one to turn a list of type constructors into a constraint.

The above code says: "AllPreserveShow applied to the empty list gives the empty, trivially satisfied constraint and when applied to a head and a tail, returns the conjunction of PreservesShow head with AllPreserveShow tail".
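Here is a standalone sketch of both pieces at work (the Maybe and [] instances and the names pshow and demo are mine):

```haskell
{-# LANGUAGE ConstraintKinds #-}
{-# LANGUAGE DataKinds #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE QuantifiedConstraints #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeOperators #-}

import GHC.Exts (Constraint)

class (forall x. Show x => Show (f x)) => PreservesShow f
instance PreservesShow Maybe
instance PreservesShow []

type family AllPreserveShow (fs :: [* -> *]) :: Constraint where
  AllPreserveShow (f ': fs) = (PreservesShow f, AllPreserveShow fs)
  AllPreserveShow '[] = ()

-- the quantified superclass constraint is what lets us call show here
pshow :: (PreservesShow f, Show x) => f x -> String
pshow = show

-- the type family turns the whole list into a single constraint
demo :: AllPreserveShow '[Maybe, []] => String
demo = pshow (Just 'a') ++ pshow [True]
```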

Once we have stated all necessary definitions, we can quite easily show that PreservesShow (Sum fs) whenever AllPreserveShow fs.

instance (AllPreserveShow fs, Show x) => Show (Sum fs x) where
  show (Head h) = show h
  show (Tail t) = show t

-- therefore
instance AllPreserveShow fs => PreservesShow (Sum fs) where

We also do the same thing for Eff fs x. Because it uses Sum fs under the hood, it also preserves show whenever Sum fs preserves show.

instance (AllPreserveShow fs, Show x) => Show (Eff fs x) where
  show (Pure x) = show x
  show (Impure t) = show t

-- therefore
instance AllPreserveShow fs => PreservesShow (Eff fs) where

13. Eff fs is a monad

We're on the last stretch. The next goal is to implement the Monad type class for Eff fs, but as was the case for Show in the last section, it only holds under certain conditions, i.e. every type constructor in fs is actually a Functor. Up until now, I have been using the terms "type constructor" and "functor" pretty much interchangeably, but that ends now. For everything to work well, we actually need the type constructors we use to be functors. This is not an ergonomics loss, however, because the Functor instance definitions can be generated with a deriving clause (via the DeriveFunctor extension).

We define a type family to express "all constructors in this list are functors" like earlier. This is simpler than AllPreserveShow because Functor is already a class on * -> *.

type family AllFunctors (fs :: [* -> *]) :: Constraint where
  AllFunctors (f ': fs) = (Functor f, AllFunctors fs)
  AllFunctors '[] = ()

Again, a sum of * -> *s is a functor if and only if they all are.

instance AllFunctors fs => Functor (Sum fs) where
  fmap f (Head h) = Head $ fmap f h
  fmap f (Tail t) = Tail $ fmap f t

By the same reasoning as with Show, Eff fs is a functor whenever all of fs are.

instance AllFunctors fs => Functor (Eff fs) where
  fmap f (Pure x) = Pure $ f x
  fmap f (Impure fx) = Impure $ (fmap . fmap) f fx

Now for the most important definition in this entire post. The purpose of everything we did so far was to get us closer to this:

effJoin :: AllFunctors fs => Eff fs (Eff fs x) -> Eff fs x
effJoin (Pure x) = x
effJoin (Impure fx) = Impure $ (fmap effJoin) fx

This function takes a syntax tree (with nodes generated from fs) that has a second syntax tree as variables and fuses the two trees together. This corresponds to the \( \mu : TT \to T \) natural transformation we talked about earlier and we can use it to derive the equivalent >>= (bind) operator which is so emblematic that it's in Haskell's logo itself.

[image: haskell-logo.png]

effBind :: AllFunctors fs => Eff fs x -> (x -> Eff fs y) -> Eff fs y
effBind fx f = effJoin $ fmap f fx

Finally, from that operator, we can quickly define the Applicative, then Monad instances for Eff fs.

instance (AllFunctors fs) => Applicative (Eff fs) where
  pure = Pure
  ff <*> fx = effBind ff (\f -> fmap f fx)

instance (AllFunctors fs) => Monad (Eff fs) where
  (>>=) = effBind

14. Why do we need Monad (Eff fs)?

I hear you ask: "At the beginning, you presented algebraic effects as an alternative to monads, so why are you making Eff fs a monad?"

The reason algebraic effects are interesting is because they compose very well. Monads, in their normal form, don't. What we do here is compose the effects before turning them into a monad. This way, we only have to think about a single "monad constructor": Eff.

15. Lift a term into an effect

Now that we have built ourselves an ontology of typeclasses, functors and monads, we need utilities to actually use the algebraic effects. The first one of them will be this simple effInj function which will let us turn a naked effect into a fully-clothed effectful value.

effInj :: (Functor f, f :∈: fs) => f x -> Eff fs x
effInj e = Impure $ inject $ fmap Pure e

16. Example

Here is how to use it. Let's say we define two effects, one for asking for a string and one for asking for an integer. The function inside GetInt or GetString is the continuation (you can read more about continuations in my last post) to which the Int or String will be passed. Here, we use id, because the rest of the do-block will automatically be added as the continuation through the monadic operations.

data GetString a = GetString (String -> a)
  deriving Functor

data GetInt a = GetInt (Int -> a)
  deriving Functor

type MyEff = [GetString, GetInt]

comp :: Eff MyEff String
comp = do
  int <- effInj $ GetInt id
  str <- effInj $ GetString id
  return $ str ++ show int

17. Permuting effects

Here is a utility function to embed an effectful value into a bigger effect space.

effPerm :: forall fs gs x. (AllFunctors fs, fs :⊆: gs) => Eff fs x -> Eff gs x
effPerm (Pure x) = Pure x
effPerm (Impure sx) = Impure $ permute @fs @gs $ fmap (effPerm @fs @gs) sx

18. Handlers

The idea of a handler is what makes algebraic effects ergonomic. It lets us easily handle one of the effects using the other effects in the set.

handle :: forall fs g gs x. (Functor g, AllFunctors gs, AllFunctors fs, fs :⊆: (g ': gs))
  => (forall x. g x -> Eff gs x) -> Eff fs x -> Eff gs x
handle trans (Pure x) = (Pure x)
handle trans (Impure x) = case permute @fs @(g ': gs) x of
  Head h -> let val = fmap (handle trans) h
                val' = trans val
            in effJoin val'
  Tail t -> let val = fmap (handle @fs @g @gs trans) t
            in Impure val

You use handle a bit like you use try / catch blocks in other programming languages. Whenever the effect you want to eliminate is used, your handler must interpret it using the remaining effects in the set.

comp' :: Eff '[GetString] String
comp' = handle func comp
  where func :: GetInt x -> Eff '[GetString] x
        func (GetInt f) = return $ f 0

19. More advanced handler

Let's say you want an effect GenSym :: (Int -> x) -> GenSym x that generates unique integers. We need some way to remember the last number we gave out. A Handler f fs lets us handle an f effect using fs and, at the same time, provide the next Handler f fs to use.

data Handler (f :: * -> *) (fs :: [* -> *]) where
  Handler :: (forall x. f x -> Eff fs (x, Handler f fs)) -> Handler f fs

Now we need a handleFold function.

handleFold :: forall fs g gs x. (Functor g, AllFunctors gs, AllFunctors fs, fs :⊆: (g ': gs))
  => Handler g gs -> Eff fs x -> Eff gs x
handleFold _ (Pure x) = Pure x
handleFold (Handler fold) (Impure x) = case permute @fs @(g ': gs) x of
  Tail t -> Impure $ fmap (handleFold (Handler fold)) t
  Head h -> do
    (val, fold') <- fold h
    handleFold fold' val

Example:

data GenSym x = GenSym (Int -> x)
  deriving Functor

syms :: Eff '[GenSym] [Int]
syms = do
  x <- effInj $ GenSym id
  y <- effInj $ GenSym id
  z <- effInj $ GenSym id
  return [x, y, z+1]

symsEvaled :: Eff '[] [Int]
symsEvaled = handleFold (fh 0) syms
  where fh :: Int -> Handler GenSym '[]
        fh k = Handler $ \(GenSym cont) -> return (cont k, fh (k+1))

Running unwrapEff symsEvaled yields [0, 1, 3].

20. References

[1]
W. Swierstra, “Data types à la carte,” J. funct. program., vol. 18, no. 4, pp. 423–436, Jul. 2008, doi: 10.1017/S0956796808006758.


Date: 2024-05-29 Wed 00:00

Author: Justin Veilleux
