Writing a Ray Tracer in Haskell (Part 1)

I started making a ray tracer in Haskell for fun so I’m going to blog about it. A ray tracer is a piece of software that generates an image of a 3D scene by simulating the way light moves around the scene and enters a camera. Since light travels in straight lines until it interacts with matter, this is done by tracing straight light rays around the scene.

This isn’t going to be a tutorial, but I’ll go over some of the quirks of writing a ray-tracer in a pure functional language where you don’t have access to mutable state.

I used two books for reference, Ray Tracing from the Ground Up by Kevin Suffern, and Physically Based Rendering by Matt Pharr and Greg Humphreys.

Why write a ray tracer in Haskell?

There were a lot of things I didn’t know how to do in Haskell before starting this project, such as how to load a mesh from a file into a Haskell-style algebraic data type (ADT). Writing a ray tracer is a great way to learn.

Also, I thought that, in principle, a ray tracer should be easy to express in Haskell, because every operation in a ray tracer is a well-defined function derived from the rendering equation. For example, when shading a surface point, we take a few inputs, including the incoming light direction, the surface and light colors, and the surface normal, and produce an outgoing light ray direction. There’s no need to modify existing state, or even to store the old state of the ray for later use; the previous ray can simply be discarded.

The basics: ray intersections

I started out writing the data types to represent shapes and rays. Like so:

data Shape = Plane (Point V3 Double) (V3 Double)
           | Sphere (Point V3 Double) Double
           | AABB (M44 Double) (V3 Double) (V3 Double)
           | Triangle (Point V3 Double) (Point V3 Double) (Point V3 Double) (V3 Double)
           | Disk (Point V3 Double) (V3 Double) Double
           | Rectangle (Point V3 Double) (V3 Double) (V3 Double) (V3 Double)
           deriving (Show, Eq)

data Ray = Ray { rayOrigin :: Point V3 Double
               , rayDirection :: V3 Double
               } deriving (Show, Eq)

In order to store the location where a light ray hits an object, I created an intersection data structure:

data Intersection = Intersection { intersectionPoint :: Point V3 Double
                                 , intersectionNormal :: V3 Double
                                 , tMin :: Double
                                 } deriving (Show, Eq)

Then performing a ray-shape intersection is as simple as it would be in any other language: take a ray and a shape and return an intersection. In this case, we return a Maybe Intersection, because it’s possible that the ray misses the object entirely, in which case the result is Nothing. Here’s an example of how that works with a ray and a flat plane:

rayIntersection :: Ray -> Shape -> Maybe Intersection

rayIntersection (Ray {rayOrigin = ro, rayDirection = rd}) (Plane planePoint planeNormal) =
    let denominator = rd `dot` planeNormal
    in if (denominator > -rayEpsilon) && (denominator < rayEpsilon)
       then Nothing
       else let t = (planePoint .-. ro) `dot` (planeNormal ^/ denominator)
            in if t <= rayEpsilon
               then Nothing
               else Just (Intersection { intersectionPoint = ro .+^ (rd ^* t)
                                       , intersectionNormal = planeNormal
                                       , tMin = t })
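To make that logic concrete without pulling in the linear package, here is a self-contained sketch of the same plane test using plain tuples. The names (planeHit and the vector helpers) are illustrative, not the project’s actual code:

```haskell
-- A simplified, self-contained sketch of ray-plane intersection.
type V3d = (Double, Double, Double)

dot3 :: V3d -> V3d -> Double
dot3 (x1, y1, z1) (x2, y2, z2) = x1 * x2 + y1 * y2 + z1 * z2

sub3 :: V3d -> V3d -> V3d
sub3 (x1, y1, z1) (x2, y2, z2) = (x1 - x2, y1 - y2, z1 - z2)

add3 :: V3d -> V3d -> V3d
add3 (x1, y1, z1) (x2, y2, z2) = (x1 + x2, y1 + y2, z1 + z2)

scale3 :: Double -> V3d -> V3d
scale3 s (x, y, z) = (s * x, s * y, s * z)

rayEpsilon :: Double
rayEpsilon = 1e-6

-- Returns the hit point and the ray parameter t, or Nothing on a miss.
planeHit :: V3d -> V3d -> V3d -> V3d -> Maybe (V3d, Double)
planeHit ro rd planePoint planeNormal =
    let denominator = rd `dot3` planeNormal
    in if abs denominator < rayEpsilon
       then Nothing  -- ray is parallel to the plane
       else let t = (planePoint `sub3` ro) `dot3` planeNormal / denominator
            in if t <= rayEpsilon
               then Nothing  -- intersection is behind the ray origin
               else Just (ro `add3` scale3 t rd, t)
```

The structure is identical to the real function: a parallel-ray check on the denominator, then a positivity check on t before building the hit record.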

Lights, cameras, materials, shaders

There are a few other things you need to represent a scene, including lights:

data Light = EnvironmentLight (Color Double)
           | PointLight (Point V3 Double) (Color Double)
           | DirectionalLight (V3 Double) (Color Double)
           | DiskLight (Point V3 Double) (V3 Double) Double (Color Double) -- Point, normal, and radius
           | SphereLight (Point V3 Double) Double (Color Double) -- Point and radius
           | RectangleLight (Point V3 Double) (V3 Double) (V3 Double) (Color Double) -- Point and two edge vectors
           deriving (Show, Eq)


data Camera = Camera (Point V3 Double) (V3 Double) (V3 Double) -- Origin, look, and up
            deriving (Show, Eq)

and materials:

data Material = ColorMaterial (Color Double) -- Color (no shading)
              | MatteMaterial (Color Double) Double -- Diffuse, kD
              | PlasticMaterial (Color Double) Double (Color Double) Double Double -- Diffuse, kD, Specular, kS, kExp
              deriving (Show, Eq)

In order to tie all of these systems together, I created a data type to represent an object, which holds a shape, a material, and a shader: a function from a ShadePoint to a Color.

data Object = Object Shape Material (ShadePoint -> Color Double)

A ShadePoint contains everything you need to shade a surface point of an object: the material of the object, the surface normal, the incoming light ray direction, and the outgoing light ray direction:

data ShadePoint = ShadePoint Material (V3 Double) (V3 Double) (V3 Double)
                deriving (Show, Eq)

Here’s an example of a diffuse shader (diffuseF is a helper function):

diffuseF :: Color Double
         -> Double
         -> Color Double
diffuseF diffuse kD =
    let invPi = 1.0 / pi
    in diffuse ^* (kD * invPi)

lambertShader :: ShadePoint -> Color Double
lambertShader (ShadePoint (ColorMaterial color) normal wIn wOut) = color
lambertShader (ShadePoint (MatteMaterial diffuse kD) normal wIn wOut) = diffuseF diffuse kD
lambertShader (ShadePoint (PlasticMaterial diffuse kD _ _ _) normal wIn wOut) = diffuseF diffuse kD


If you’re going to trace rays, there’s no point tracing against a single triangle. To make a meaningful render, you need to trace a scene like this:

data Scene = ListScene [Object]
           | KDScene KDTree
           deriving (Show)

I started out with the most basic kind of scene, just a list of objects and made a scene with a KD-tree accelerator later. I’ll talk about that in a later post, but for now let’s look at how to trace a list scene.

The approach to tracing a list scene is to simply test every ray for intersections against every object. This produces a very slow, but correct, render.

Here’s the entire code for the tracer (Don’t worry about the LowDiscrepancySequence for now, I’ll describe that in a later post):

traceRays :: (LowDiscrepancySequence s)
          => Scene
          -> Color Double
          -> Ray
          -> s
          -> ((TraceResult, Ray), s)

traceRays (ListScene objects) bgColor ray gen =
    ((foldl' (\traceResult@(TraceResult (Intersection {tMin = traceTMin}) material shader)
               (Object shape objectMaterial objectShader) ->
                  case rayIntersection ray shape of
                      Nothing -> traceResult
                      Just objectIntersection@(Intersection {tMin = tm}) ->
                          if tm < traceTMin
                          then TraceResult objectIntersection objectMaterial objectShader
                          else traceResult)
             (emptyTraceResult bgColor) objects, ray), gen)

That’s it, a single foldl’. You take the minimum intersection point and that’s the point you shade. The shadow rays are cast in a separate function which is pretty much the same.
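Stripped of the shading details, the closest-hit fold boils down to the following sketch, where each intersection is represented only by its t value (closestHit is a hypothetical name, not the project’s actual code):

```haskell
import Data.List (foldl')

-- Keep the nearest hit (smallest t) seen so far; Nothing means a miss.
closestHit :: [Maybe Double] -> Maybe Double
closestHit = foldl' step Nothing
  where
    step acc Nothing            = acc                    -- this object was missed
    step Nothing (Just t)       = Just t                 -- first hit found
    step (Just best) (Just t)   = Just (min best t)      -- keep the nearer hit
```

The real fold carries along the material and shader of the nearest object instead of a bare t, but the shape of the computation is the same.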

Lessons learned

I was surprised by a few things when I started developing the ray tracer, and I think these lessons are relevant to any project which involves large amounts of data and computation:

  1. The tracer is very slow compared to C++. It renders approximately 36,000 rays per second on four cores of a Core i5-8250U, tracing a mesh with 16,300 faces. This is a lot slower than I was hoping for, but still faster than it would be in a scripting language like Python. I didn’t spend much time on optimization; switching away from plain ADTs and using stricter evaluation would probably help a lot, but those changes aren’t obvious to me as a Haskell beginner.
  2. The code is really small. 1052 lines of code for a ray tracer with random sampling, a KD-tree accelerator, and mesh loading is incredibly concise. I like this because it makes it feasible for me to develop a large project like this on my own in my spare time. It might also be important if your work involves fast prototyping too.
  3. Using generic types makes computations even slower. My original code used the generic Floating typeclass. Removing the typeclass and specializing the code to use Doubles resulted in a 10x speedup!
  4. If you don’t use strict folds when working with lots of data, your program will crash. If you use foldl or foldr over enough data, your program will run out of memory from all of the thunks it allocates. Use foldl’ to avoid this.
  5. Dealing with I/O was actually pretty easy. There are only a couple of files which actually deal with I/O in the project and the rest of the code is 100% pure.
  6. Adding multithreading was annoying, but ultimately required very few changes. It took a lot of digging to figure out how to get the tracer to run on multiple cores, but in the end I just used a parListChunk and it worked.
  7. Modifying the code is super simple. Because everything is pure, adding a feature like random numbers to the program takes a fraction of the time it normally would. Iterating on a pure functional program is super fast.
  8. Lambdas are the most natural way to express shaders. In a ray tracer, when a ray hits an object, you need to determine how the light will reflect off of the surface and what color the resulting ray will be. This is done using a function called a shader. In most languages, shaders are bound to a surface using an ID and accessed using function pointers. In Haskell, you can simply store the shader as a lambda inside the object!
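Lesson 4 is easy to demonstrate in isolation. In the sketch below, a lazy foldl would build a million-deep chain of unevaluated (+) thunks before anything is forced, while foldl' evaluates the accumulator at every step and runs in constant space:

```haskell
import Data.List (foldl')

-- foldl' forces the accumulator at each step, so this sum runs in
-- constant space instead of building a chain of (+) thunks.
total :: Integer
total = foldl' (+) 0 [1 .. 1000000]
```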

You can find the source code for this project under an MIT license at https://github.com/WhatTheFunctional/HaskellTracer.


A Brief Introduction to the λ-Calculus (Part 2)

In the second part of this series on λ-calculus, we’ll explore how to perform recursion in λ-calculus and how to represent numbers in λ-calculus using Peano arithmetic. At the end of this post, I’ll show how to code an algorithm to compute the nth number in the Fibonacci sequence using λ-calculus.

A word of warning, this post contains some challenging concepts. Take it slow and make sure you understand each step before moving on to the next one.

Self application

To begin with, we’ll look at an interesting behavior that happens when you call the self_apply function with itself as an argument.

Recall that self_apply is defined as:

self_apply = λi.(i i)

Applying self_apply to itself:

(self_apply self_apply)
=> (λi.(i i) λi.(i i))
=> (λi.(i i) λj.(j j)) [α-conversion]
=> (λj.(j j) λj.(j j)) [β-reduction]
=> (self_apply self_apply)

Notice that applying α-conversion and β-reduction to the statement (self_apply self_apply) results in (self_apply self_apply). No matter how many operations we apply to this expression, we get the same expression back.

If you had a machine which could choose which operation to apply to a given λ-calculus expression and apply it automatically, this expression would cause an infinitely repeating sequence of operations. We’ll take advantage of this behavior soon.


Recall from part 1 that we can’t perform recursion in λ-calculus by using function names inside function definitions. Let’s temporarily lift that restriction to imagine an example of what we’d like to achieve:

WARNING: What follows is not real λ-calculus! It is only presented for illustrative purposes.

f = λi.(f (G i))

Repeatedly expanding the definition of such a function would look like this:

(f x)
=> (λi.(f (G i)) x)
=> (λi.(λi'.(f (G i')) (G i)) x)
=> (λi.(λi'.(λi''.(f (G i'')) (G i')) (G i)) x)

I had to rename variables as part of these pseudo-operations because this isn’t actually valid λ-calculus.

Let’s stop the recursion there by deleting the f and see what would result if we could terminate this iteration. Again, this is not a real λ-calculus operation:

(λi.(λi'.(λi''.(G i'') (G i')) (G i)) x)
=> (λi'.(λi''.(G i'') (G i')) (G x))
=> (λi''.(G i'') (G (G x)))
=> (G (G (G x)))

As you can see, this would build up a series of calls to G applied to x.

This illustrates the kind of behavior we want, repeatedly expanding f, resulting in a series of recursive operations.

We can achieve this kind of recursion without referring to the function name in its function body by taking the next iteration of f as an argument, f’, to the function f. We’re back to real λ-calculus from now on:

f = λf'.λi.λc.(c i (f' f' (G i)))

Notice that we need to pass a copy of the f function to f as its first argument to use for further recursion. We use a selector function, c, to terminate the recursion by discarding the (f’ f’ (G i)) expression if c is true. In the case where c is false we continue the recursion by selecting the (f’ f’ (G i)) expression. We have two instances of f’ in the recursion expression so that further recursive calls to f can re-use the copy of f we already have.

When c is false, f returns a function, (f’ f’ (G i)), which takes a selector function as an argument to terminate the iteration. This way we can specify when to terminate the function by passing it a series of false and true values.

We can call the function recursively to duplicate G a finite number of times in front of x as follows. We need to call f with itself as an argument followed by its intended argument, x, and a string of Boolean selector functions:

(f f x false false true)
=> (λf'.λi.λc.(c i (f' f' (G i))) f x false false true)
=> (λi.λc.(c i (f f (G i))) x false false true)
=> (λc.(c x (f f (G x))) false false true)
=> ((false x (f f (G x))) false true)
=> ((λf'.λi.λc.(c i (f' f' (G i))) f (G x)) false true)
=> ((λc.(c (G x) (f f (G (G x))))) false true)
=> ((false (G x) (f f (G (G x)))) true)
=> ((λf'.λi.λc.(c i (f' f' (G i))) f (G (G x))) true)
=> ((λi.λc.(c i (f f (G i))) (G (G x))) true)
=> ((λc.(c (G (G x)) (f f (G (G (G x)))))) true)
=> (true (G (G x)) (f f (G (G (G x)))))
=> (G (G x))

Notice how we use the false function to continue the recursion and the true function to terminate the recursion.


This kind of manual recursive function call is fine for performing recursion when we know the number of iterations we need ahead of time, but what about if we don’t know the number of iterations ahead of time? We need an automatic mechanism for performing recursion which can terminate when we encounter a terminating condition.

We saw a function, self_apply, which performs an automatic infinite operation sequence in the first section of this article. Notice in that section how the body (i i) causes the infinite repetition? Let’s modify self_apply by adding in a conditional function as an argument that can conditionally discard the repetition (i i):

half_recur = λf.λi.(f (i i))

If we pass false to this function, the (i i) self application is not evaluated:

half_recur false F G 
=> λf.λi.(f (i i)) false F G
=> λi.(false (i i)) F G
=> (false (F F)) G
=> (λfirst.λsecond.second (F F)) G
=> λsecond.second G
=> G

false terminates the automatic self application!

If we apply this function to true, we get the self apply behavior, (F F), as a result.

Applying half_recur to a function, F, we get another function:

half_recur F
=> λf.λi.(f (i i)) F
=> λi.(F (i i))

By applying this function to itself, we get the famous Y-combinator, also known as the fixed point combinator, discovered by Haskell Curry:

Y = λf.(λi.(f (i i)) λi.(f (i i)))

The Y-combinator has a very useful behavior when it’s applied to a function:

(Y F)
=> (λf.(λi.(f (i i)) λi.(f (i i))) F)
=> (λi.(F (i i)) λi.(F (i i)))
=> (λi.(F (i i)) λi'.(F (i' i'))) [α-conversion]
=> (F (λi'.(F (i' i')) λi'.(F (i' i'))))
=> (F (λi.(F (i i)) λi.(F (i i)))) [α-conversion]
== (F (Y F))

The Y-combinator duplicates the function it’s passed as a prefix to itself!

This behavior allows us to perform general recursion.

Notice that this is not necessarily an infinite recursion. If F is a conditional function, a false condition would discard the Y-combinator and terminate the recursion!

As a personal note, I think that Curry’s discovery of the Y-combinator was an act of sheer genius. It is a remarkable mechanism.
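As an aside, the Y-combinator can even be written in Haskell, although the type checker rejects the direct self-application (i i). Wrapping the self-applied function in a newtype works around this. The following is only a sketch connecting the idea back to a real language; the names are mine, not standard library functions:

```haskell
-- Direct self-application (i i) is ill-typed in Haskell, but wrapping
-- the self-applied function in a newtype makes it expressible.
newtype Rec a = Rec { unRec :: Rec a -> a }

y :: (a -> a) -> a
y f = g (Rec g)
  where g h = f (unRec h h)

-- Recursion without a named recursive definition:
factorial :: Integer -> Integer
factorial = y (\fact n -> if n == 0 then 1 else n * fact (n - 1))
```

Laziness is what lets y terminate here: f is only unfolded as far as each call actually demands.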


In the late 19th century, Giuseppe Peano created a set of axioms for the natural numbers which we can use to perform arithmetic in λ-calculus. (I’m going to skip over the undecidability of Peano arithmetic for the purposes of this article.)

Peano’s axioms use a “successor” function, as well as defining the number 0 as a natural number. We’ll use the following two functions for zero and successor:

zero = identity == λi.i
successor = λn.λs.((s false) n)

This kind of numbering system, described in Greg Michaelson’s book, is an alternative to Church Numerals. Michaelson’s formulation of Peano’s arithmetic makes it easier for us to perform comparisons on numbers.

Any natural number can be created by applying the successor function a number of times to zero. For example:

one = (successor zero)
two = (successor (successor zero))
three = (successor (successor (successor zero)))

Every time we apply successor to a number, it creates a pair of arguments false and n to a function s. We can define the natural numbers like so:

one = successor zero
=> λn.λs.((s false) n) zero
=> λs.((s false) zero)

two = successor one == successor (successor zero)
=> λn.λs.((s false) n) λn.λs.((s false) n) zero
=> λn.λs.((s false) n) λs.((s false) zero)
=> λn.λs.((s false) n) λs'.((s' false) zero)
=> λs.((s false) λs'.((s' false) zero))

and so on...

Notice that all numbers are functions which take a selector function as an argument.

We choose to define numbers this way because in the case where we apply a number to true, the result will be false, unless the number is zero, in which case we’ll get true as a result:

zero true
=> λi.i true
=> true

one true
=> λs.((s false) zero) true
=> ((true false) zero)
=> ((λfirst.λsecond.first false) zero)
=> (λsecond.false zero)
=> false

This is because true has the same behavior as select_first and the first argument which will be passed to the selector for any non-zero number will always be false because of how we defined successor (successor always puts false as the first argument to the selector).

This behavior allows us to create an is_zero function:

is_zero = λn.(n true)

is_zero applied to zero will evaluate to true, and if applied to any other natural number, it will evaluate to false.

Since we defined natural numbers as recursive applications of the successor function, we can discard one application of the successor function to get the predecessor of a number. We’ve previously seen a function which will discard the first argument in a pair of functions, the select_second function (a.k.a false).

Here’s what happens when we apply a non-zero natural number (successor X) to false:

((successor X) false)
=> ((λn.λs.((s false) n) X) false)
=> (λs.((s false) X) false)
=> ((false false) X)
=> ((λfirst.λsecond.second false) X)
=> (λsecond.second X)
=> X

By applying the number to false, we’ve stripped the successor function prefix from X!

We can define a predecessor function like this:

predecessor = λn.((is_zero n) n (n false))

We need to handle zero as a special case, so we use is_zero in this function.


In order to add two numbers to one another, we’ll take the approach of incrementing one number while decrementing the other:

add_iter = λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y)))

We can then define the add function by calling this increment/decrement function recursively with the Y-combinator:

add = Y add_iter
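Before stepping through the λ-calculus reduction, here is the same increment/decrement algorithm as a Haskell sketch, with an ordinary ADT standing in for the function-encoded numbers and named recursion standing in for the Y-combinator (all names here are illustrative):

```haskell
-- An ADT stand-in for the function-encoded Peano numbers.
data Nat = Zero | Succ Nat deriving (Show, Eq)

isZero :: Nat -> Bool
isZero Zero = True
isZero _    = False

predecessor :: Nat -> Nat
predecessor Zero     = Zero  -- zero is its own predecessor, as above
predecessor (Succ n) = n

-- Increment x while decrementing y, exactly like add_iter.
add :: Nat -> Nat -> Nat
add x y = if isZero y then x else add (Succ x) (predecessor y)
```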

Let’s try adding 1+1.

This next section is pretty long, but it demonstrates that addition can work just by applying the four operations of λ-calculus:

add one one
=> (Y add_iter) one one
=> λf.(λi.(f (i i)) λi.(f (i i))) add_iter one one
=> (λi.(add_iter (i i)) λi’.(add_iter (i’ i’))) one one
=> add_iter (λi’.(add_iter (i’ i’)) λi’.(add_iter (i’ i’))) one one

Replacing (λi’.(add_iter (i’ i’)) λi’.(add_iter (i’ i’))) by (Y add_iter)

=> add_iter (Y add_iter) one one
=> λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y))) (Y add_iter) one one
=> λx.λy.((is_zero y) x ((Y add_iter) (successor x) (predecessor y))) one one
=> λy.((is_zero y) one ((Y add_iter) (successor one) (predecessor y))) one
=> ((is_zero one) one ((Y add_iter) (successor one) (predecessor one)))
=> (false one ((Y add_iter) (successor one) (predecessor one)))
=> ((Y add_iter) (successor one) (predecessor one))

Replacing successor one by two and predecessor one by zero

=> (Y add_iter) two zero
=> (λf.(λi.(f (i i)) λi.(f (i i))) add_iter) two zero
=> (λi.(add_iter (i i)) λi.(add_iter (i i))) two zero
=> (λi.(add_iter (i i)) λi’.(add_iter (i’ i’))) two zero [α-conversion]
=> add_iter (λi’.(add_iter (i’ i’)) λi’.(add_iter (i’ i’))) two zero

Replacing (λi’.(add_iter (i’ i’)) λi’.(add_iter (i’ i’))) by (Y add_iter)

=> add_iter (Y add_iter) two zero
=> λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y))) (Y add_iter) two zero
=> λx.λy.((is_zero y) x ((Y add_iter) (successor x) (predecessor y))) two zero
=> λy.((is_zero y) two ((Y add_iter) (successor two) (predecessor y))) zero
=> ((is_zero zero) two ((Y add_iter) (successor two) (predecessor zero)))
=> (true two ((Y add_iter) (successor two) (predecessor zero)))
=> two

Yes, I know that took a lot of work; but it is a valid arithmetic procedure which is performed using nothing but functions!


Now that we have numbers, arithmetic, and recursion, it’s trivial to define a function to calculate the nth number in the Fibonacci sequence:

fib_iter = λfib_iter'.
           λn.((is_zero n)
               zero
               ((is_zero (predecessor n))
                one
                (add (fib_iter' (predecessor n))
                     (fib_iter' (predecessor (predecessor n))))))

fib = Y fib_iter
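For reference, the same recursion shape written as a Haskell sketch, with machine integers standing in for the Peano numbers and the base cases fib 0 = 0 and fib 1 = 1:

```haskell
-- Naive Fibonacci with the same two base cases and double recursion.
fib :: Integer -> Integer
fib n
  | n == 0    = 0
  | n == 1    = 1
  | otherwise = fib (n - 1) + fib (n - 2)
```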


That’s essentially all we need to use λ-calculus to perform general computations on the natural numbers. As Countess Ada Lovelace famously realized, any machine which can perform general numerical computations is sufficiently powerful to perform computations on any kind of data.

There’s a lot more to λ-calculus than what I’ve discussed in these posts, but this should be enough to give you the tools you need to continue exploring λ-calculus.


An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson

Lambda Calculus at Wikipedia

The Lambda Calculus at Stanford Encyclopedia of Philosophy

Haskell theoretical foundations – Lambda calculus

Normal, Applicative and Lazy Evaluation

A Brief Introduction to the λ-Calculus (Part 1)

In this post, I’ll be discussing the untyped λ-calculus (lambda calculus). λ-calculus forms the basis of all functional programming languages and is one of the three theoretical models of computing, the other two being Turing Machines and Recursive Function Theory.

Alonzo Church created λ-calculus in the 1930s as a formal system of mathematical logic for computation based on function abstraction and application. Church envisioned a simple language which contains only functions. λ-calculus doesn’t even have Boolean values or numbers. We’ll explore how to represent Boolean values using only functions below, and we’ll cover representing numbers using only functions in a later post.

The most important distinction between imperative languages and functional languages is how they perform abstraction. In an imperative language, abstraction is performed by assigning names to variables which can change value over time. This is similar to how the parts of a Turing Machine can change over time; e.g. the tape of the machine can move. In functional programming languages, abstraction is performed by assigning names to values and functions which never change and computing new values by applying functions to values. In λ-calculus, functions are immutable values which we can name.

For example, here is the identity function:

λi.i
The function starts with the λ symbol, followed by a name, representing the argument of the function. A period separates the argument of the function from the body of the function. The body of the function can be any λ-calculus expression. Names in λ-calculus can be any string of characters except spaces, parentheses, . and λ.

For convenience, we can give names to our functions (technically this is an extension of the λ-calculus). Let’s define a few example functions so you can get used to the syntax:

identity = λi.i
self_application = λi.ii
apply = λx.(λy.xy)

Don’t worry too much about what these do just yet, you’ll probably be able to figure out what they do after you learn about β-reduction below.

Notice that the body of a function can also contain other functions. Functions are first class in λ-calculus, just like in any other functional programming language!

In the identity function above, i is used to form an abstraction. In the function, i can refer to anything until the function is specialized by application. For example, suppose we have a name, G, we could apply the identity function to the name G to specialize the function. We indicate that we want to apply the function by placing it in front of the argument G in parentheses:

(λi.i G)

In order to actually apply the function, we replace the instances of i in the function body with the argument, G. In this case, we get the result G:

=> G

The process we performed above is an operation. We use => to indicate that an operation was performed on (λi.i G) to compute G. We could also write the steps above like this:

(λi.i G) => G

Operations advance an algorithm to its next step, eventually resulting in a solution. In imperative languages, there are many operations, each of which alter the value of variables or the program counter. In λ-calculus there are only four operations:

  • λ-abstraction (lambda abstraction)
  • β-reduction (beta reduction)
  • α-conversion (alpha conversion)
  • η-conversion (eta conversion)

It’s important to note that naming functions is not an operation. Function names are assigned statically and cannot be used inside their own function definitions. They are simply aliases for functions and can’t be used for recursion.

Operations in the λ-calculus


λ-abstraction is simply the introduction of a lambda function. For example, we could create a new function using the λ-abstraction operation:

λx.xy
Also, notice that there are two names, x and y, in the body of the function but only x appears in the function argument. This is a valid λ-calculus expression.

We say that the name x is bound and the name y is free. A function which has no free variables is called a combinator and a function with at least one free variable is called a closure.

Sometimes the parentheses are excluded from nested functions like this:

apply = λx.λy.xy

In this case, you can always add parentheses by following the rule that lambda abstractions are right-associative.


β-reduction is the process of applying a function to a value, as we saw above:

(λi.ij G) => Gj

For convenience, the parentheses are often excluded. If they are not used, you can always add them by using the rule that function applications are left-associative.


α-conversion allows us to rename an argument to avoid a name collision. To do this, we choose a different name for the argument and replace the old name with the new one wherever it exists in the function body:

λi.ij => λp.pj

For example, the two instances of j in this expression refer to different values, so we need to rename them using α-conversion before applying β-reduction:

(λi.iji λj.j) => (λi.iji λx.x) => ((λx.x j) λx.x) => (j λx.x)


η-conversion is used when the argument of a function only appears as the last term of the function body. In this case, the function can be simplified to remove the argument:

λi.RWCi => RWC

Applicative versus normal order reduction

Unlike in imperative languages, in λ-calculus the order in which these operations are performed is undefined. For example, when a function has an argument which contains an expression that could be simplified by β-reduction, we can choose which β-reduction to apply first. Consider the following expression:

(λi.ij (λx.xy G))

Here we apply β-reduction to the argument first. This is called applicative order reduction:

(λi.ij (λx.xy G)) => (λi.ij Gy) => Gyj

Here we apply β-reduction to the left-most function first. This is called normal order reduction:

(λi.ij (λx.xy G)) => (λx.xy G)j => Gyj

Church and Rosser showed that every evaluation order which terminates produces the same value for a given λ-calculus expression.

This property of λ-calculus and functional programming languages in general is called execution order independence. Execution order independence enables the parallel execution of many lambda calculus expressions.

Making decisions

In order to model general computation, we need a way to choose from two alternatives. In order to do this, we introduce the select_first and select_second functions:

select_first = λfirst.λsecond.first
select_second = λfirst.λsecond.second

select_first consumes one argument, and then a second, and discards the second argument:

select_first A B
=> ((λfirst.λsecond.first A) B)
=> (λsecond.A B)
=> A

select_second chooses the second argument:

select_second A B
=> ((λfirst.λsecond.second A) B)
=> (λsecond.second B)
=> B

Consider the structure of an if-then statement in an imperative language.

if condition then A else B

If the condition is true, then we want to evaluate the first expression, A, and if the condition is false, then we want to evaluate the second expression, B.

But select_first has the behavior of evaluating the first expression!

select_first A B => A

We can rename select_first to true:

true A B => A

This performs a behavior equivalent to:

if true then A else B

We can similarly use select_second to represent false:

false A B => B

This performs a behavior equivalent to:

if false then A else B


Then the whole if-then statement can be expressed as a function cond that takes another function, c, which chooses either the expression a or the expression b:

cond = λa.λb.λc.((c a) b)

Let’s evaluate cond with true as an argument:

cond A B true
=> λa.λb.λc.((c a) b) A B λfirst.λsecond.first
=> λb.λc.((c A) b) B λfirst.λsecond.first
=> λc.((c A) B) λfirst.λsecond.first
=> ((λfirst.λsecond.first A) B)
=> (λsecond.A B)
=> A

It’s easy to see how passing false to cond will produce the desired behavior of evaluating to B.


Not can be expressed as:

if condition then false else true

So we can simply apply false and true to cond to get a definition for not:

not = λx.(((cond false) true) x)

Applying not to true, we get:

not true
=> λx.(((cond false) true) x) true
=> (((cond false) true) true)
=> λa.λb.λc.((c a) b) false true true
=> λb.λc.((c false) b) true true
=> λc.((c false) true) true
=> ((true false) true)
=> ((λfirst.λsecond.first false) true)
=> (λsecond.false true)
=> false


And can be expressed as:

if A then B else false

If A is true then the expression is true if B is true, otherwise it’s false. If A is false, then it doesn’t matter what value B is, the expression is false.

and = λx.λy.((x y) false)

Applying true and true to this function we get:

and true true
=> ((λx.λy.((x y) false) true) true)
=> (λy.((true y) false) true)
=> ((true true) false)
=> ((λfirst.λsecond.first true) false)
=> (λsecond.true false)
=> true


Or can be expressed as:

if A then true else B

If A is true then it doesn’t matter what B is, the expression is true. If A is false, then the expression is true if B is true, otherwise it’s false.

or = λx.λy.(((cond true) y) x)

Let’s apply false and true to or:

or false true
=> ((λx.λy.(((cond true) y) x) false) true)
=> (λy.(((cond true) y) false) true)
=> (((cond true) true) false)
=> (((λa.λb.λc.((c a) b) true) true) false)
=> ((λb.λc.((c true) b) true) false)
=> (λc.((c true) true) false)
=> ((false true) true)
=> ((λfirst.λsecond.second true) true)
=> (λsecond.second true)
=> true
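The Boolean encodings above translate almost directly into Haskell. This is a sketch for experimentation; notL, andL, and orL carry an L suffix purely to avoid clashing with the Prelude, and toBool is a helper I've added to convert a Church boolean into a native Bool for inspection:

```haskell
-- Church booleans as selector functions.
true  x _ = x
false _ y = y

-- if-then-else: cond a b c selects a or b depending on c.
cond a b c = c a b

notL x   = cond false true x
andL x y = x y false
orL  x y = x true y

-- Convert a Church boolean into a native Bool for inspection.
toBool b = b True False
```

Note that Haskell’s type system forces both branches of a selector to share a type, a restriction the untyped λ-calculus doesn’t have.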


At this point, you have a good enough grasp of the concepts of λ-calculus to make simple, non-recursive functions which act on Boolean values.

Next time we’ll cover how to represent numbers in the λ-calculus and how to perform recursion using the Y-combinator!

You can continue reading part 2 of this series here.

A side note on computability

A function which can be evaluated in the λ-calculus in a finite number of operations is called λ-computable. All λ-computable functions are computable on a Turing Machine and all Turing-computable functions are λ-computable (see the Church-Turing thesis). As a result, Turing Machines and the λ-calculus are equivalent in terms of which functions they can evaluate. Because of this equivalence, any problem which can be solved with an imperative programming language can also be solved with a functional programming language.

Formal syntax of λ-calculus

<expression> := <name> | <function> | <application>
<function> := λ<name>.<expression>
<application> := (<expression> <expression>)


An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson

Lambda Calculus at Wikipedia

The Lambda Calculus at Stanford Encyclopedia of Philosophy

Haskell theoretical foundations – Lambda calculus

Normal, Applicative and Lazy Evaluation

Making a Haskell Interface for the Rosie Pattern Language

I discovered the Rosie Pattern Language at Dr. Jamie Jennings’ talk about it at Strange Loop 2018. The Rosie Pattern Language (RPL) is a DSL for parsing strings which is more convenient and easier to work with than regular expressions. RPL is a Lua library with a C Foreign Function Interface (FFI). It also has Python and Go interfaces, but I noticed that there wasn’t a way to call it from Haskell.

I met Dr. Jennings after the talk and offered to add a Haskell interface for RPL, which I started working on after Strange Loop. I hadn’t used Haskell’s C FFI before, so I thought I’d write a post about it. The documentation for the FFI is a little sparse, so I’m going to walk through how I built the interface step by step.

Making a FFI Project

In order to build a project using the Haskell FFI, you need to link to C .o files when you call ghc. I modified the Makefile used by the RPL Go interface to link to the .o files when building the Rosie.hs module:

LUA_FILES=$(wildcard ../../../submodules/rosie-lpeg/src/*.o)
LUA_EXTRA_FILES=$(wildcard ../../../submodules/lua-cjson/*.o)
LIB_LUA_FILES=$(wildcard ../liblua/*.o)
Rosie: Rosie.hs ../binaries/$(ROSIE_OBJECT_FILE_NAME)
    ghc --make -main-is Rosie -o Rosie Rosie.hs ../binaries/$(ROSIE_OBJECT_FILE_NAME) $(LUA_FILES) $(LIB_LUA_FILES) $(LUA_EXTRA_FILES)

As you can see, all I had to do to call ghc with the FFI was add the --make flag, specify a main module using -main-is (in this case Rosie), and list all of the .o files I wanted to link.

You can see the full Makefile I wrote here: https://gitlab.com/lemms/rosie/blob/master/src/librosie/haskell/Makefile

Building a Rosie Haskell module

The first thing you need to add when you use the C FFI is the Foreign Function Interface language extension:

{-# LANGUAGE ForeignFunctionInterface #-}

I made a module called Rosie which exports some RPL data types and all of the functions exported by the RPL C library:

module Rosie (RosieStr(..),
              main) where

All of the functions starting with cRosie are direct calls to the RPL C interface.

rosieStringLength, rosieStringChars, and unsafeNewEmptyRosieString are helper functions I wrote to work with RosieString data in Haskell.

The RPL C functions are very low level and support only manual memory management, so you need to remember to free the memory you allocated when you’re done with it. The Rosie module lets you call these unsafe allocation functions, but I also added newRosieString and newRosieEngine to give the Rosie module automatic memory management.

Managing foreign data structures in Haskell

Haskell can only refer to C data structures by pointer. In general you will have a handle to a foreign data structure which looks like this:

Ptr RosieEngine

In order to get the Ptr data type, you need the following import:

import Foreign.Ptr

If you don’t need to access the internals of a C data structure, as in this RosieEngine example, you can simply specify an empty value constructor:

data RosieEngine = RosieEngine

The C struct engine contains references to Lua interface libraries, and it’s difficult to determine the memory layout of those Lua internals. Instead of opening that can of worms, I simply forbade access to the internals of the RosieEngine.

If you want internal access to the C data structures from Haskell, you need to specify the internal components using the record syntax:

data RosieStr = RosieStr
                { len :: Word32
                , ptr :: CString
                } deriving (Show, Eq)

CString is a C style string data structure which you can access by importing:

import Foreign.C

and Word32 is a 32 bit unsigned integer which you can import using:

import Data.Word (Word32)

Note: CString is just a type alias to Ptr CChar.

In order to read and write from a RosieStr in the C code, I had to make RosieStr an instance of the Storable typeclass:

instance Storable RosieStr where
    alignment _ = 8
    sizeOf _    = 16
    peek ptr    = RosieStr
        <$> peekByteOff ptr 0
        <*> peekByteOff ptr 8
    poke ptr (RosieStr l p) = do
        pokeByteOff ptr 0 l
        pokeByteOff ptr 8 p

alignment specifies the alignment of the record in memory: it should be the largest alignment of any field in the record. In my case, I have a Word32, which is 4 bytes, and a Ptr CChar, a pointer which on a 64 bit machine is 8 bytes in size, so alignment = 8. (For power-of-two field sizes like these, this is the same as the least common multiple of the field sizes.)

peek and poke specify how a RosieStr is read from and written to memory. They use peekByteOff and pokeByteOff to read and write bytes at a pointer plus an offset. Each field must be placed at an offset that is a multiple of its alignment. len is at offset 0, which is fine because 0 is a multiple of 4 (the size of Word32). ptr can’t be at offset 4, even though that would give a tightly packed data structure, because 4 isn’t a multiple of 8 (the alignment of Ptr), so we leave 4 bytes of padding and put ptr at an offset of 8 bytes.
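The offset arithmetic used for peekByteOff and pokeByteOff generalizes: each field lands at the next multiple of its alignment past the previous field. Here's a small standalone sketch of that rule (my own illustration, not part of the Rosie module):

```haskell
-- Round an offset up to the next multiple of an alignment
alignTo :: Int -> Int -> Int
alignTo align offset = ((offset + align - 1) `div` align) * align

-- Given (size, alignment) pairs for each field in declaration order,
-- return the byte offset of every field
fieldOffsets :: [(Int, Int)] -> [Int]
fieldOffsets = go 0
  where
    go _ [] = []
    go off ((size, align) : rest) =
        let off' = alignTo align off
        in off' : go (off' + size) rest
```

For RosieStr, fieldOffsets [(4, 4), (8, 8)] gives [0, 8], matching the offsets used in the Storable instance above.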

In some cases, we may need to create a new RosieStr which is empty. I made a helper function for this which uses the new function from the Foreign.Marshal.Utils library:

import Foreign.Marshal.Utils

unsafeNewEmptyRosieString :: IO (Ptr RosieStr)
unsafeNewEmptyRosieString = new (RosieStr { len = 0, ptr = nullPtr })

I made a similar data structure for pattern matches called RosieMatch.

Calling foreign functions from Haskell

Now that we’ve covered how to make a foreign data structure, let’s look at how to call foreign functions. You can import a foreign function from a linked .o file using the foreign import ccall statement:

foreign import ccall "rosie_new_string_ptr" cRosieNewString :: CString -> Word32 -> IO (Ptr RosieStr)

This statement imports the C function rosie_new_string_ptr as cRosieNewString (renamed to match Haskell’s function naming conventions). It has the type CString -> Word32 -> IO (Ptr RosieStr). You can call cRosieNewString with a CString and a Word32 length to create a new RosieStr on the heap.

Note: There is a function in the RPL C interface called rosie_new_string which returns a Rosie string by value. Haskell doesn’t support foreign functions which return foreign structs by value, so I couldn’t add that to the Haskell interface.

It’s very important for type safety that this function evaluates to a value in the IO monad! Foreign functions can have uncontrolled side-effects which must be captured or the guarantees provided by Haskell will be broken. The RPL interface functions can do unsafe things like leak memory and modify their inputs. The IO monad captures these side effects appropriately.

All I had to do to import all of the RPL functions is add a foreign import ccall statement for each of them.
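For example, based on the wrapper names that appear later in this post, a few of the other imports follow the same shape (the exact C signatures here are my assumption from context; check librosie's header for the real ones):

```haskell
-- Assumed signatures, inferred from how the wrappers are used below
foreign import ccall "rosie_new"
    cRosieNew :: Ptr RosieStr -> IO (Ptr RosieEngine)

foreign import ccall "rosie_free_string_ptr"
    cRosieStringFree :: Ptr RosieStr -> IO ()

foreign import ccall "rosie_finalize"
    cRosieFinalize :: Ptr RosieEngine -> IO ()
```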

At this point we have everything we need to call Rosie Pattern Language from Haskell!

Making a safer RPL interface

It’s still possible to leak memory with cRosieNewString, cRosieNew, and unsafeNewEmptyRosieString because the user has to manually call cRosieStringFree, cRosieFinalize, and free to free the memory they allocated for each.

Instead of relying on these manual memory allocation functions, I made newRosieString and newRosieEngine to allocate ForeignPtr managed memory pointers. Let’s look at newRosieString:

newRosieString :: String -> IO (ForeignPtr RosieStr)
newRosieString s = do
    let l = fromIntegral (length s)
    pRString <- withCString s (\cString -> cRosieNewString cString l)
    newForeignPtr ptrRosieFreeStringPtr pRString

newRosieString takes a Haskell string, and extracts its length using (length s). It then calls withCString with the Haskell string s and a lambda function. withCString converts the Haskell string s into a CString and passes it as the argument to the lambda function. The lambda function calls cRosieNewString to make a new Ptr RosieString. Finally, I create a new ForeignPtr RosieStr by calling:

newForeignPtr ptrRosieFreeStringPtr pRString

newForeignPtr takes a pointer to a function for freeing the Ptr RosieStr and the Ptr RosieStr itself and evaluates to a ForeignPtr RosieStr which will garbage collect itself when all references to it are unreachable.

In order to get a function pointer to free the Rosie string, I used:

foreign import ccall "&rosie_free_string_ptr" ptrRosieFreeStringPtr :: FunPtr (Ptr RosieStr -> IO ())

Importing a foreign function with a & prefix imports the function as a function pointer.

You need the following import for newForeignPtr and ForeignPtr:

import Foreign.ForeignPtr

Finally, let’s look at newRosieEngine:

foreign import ccall "&rosie_finalize" ptrRosieFinalize :: FunPtr (Ptr RosieEngine -> IO ())

newRosieEngine :: ForeignPtr RosieStr -> IO (ForeignPtr RosieEngine)
newRosieEngine messages = 
    withForeignPtr messages (\m -> do
        engine <- cRosieNew m
        newForeignPtr ptrRosieFinalize engine)

newRosieEngine uses withForeignPtr instead of withCString. It converts its first argument from a ForeignPtr RosieStr to a Ptr RosieStr and passes it as the argument to the lambda. The lambda constructs a new Ptr RosieEngine by calling cRosieNew and makes a memory managed ForeignPtr RosieEngine using newForeignPtr.

Right now, in order to call Rosie functions, you need to convert your Haskell data structures to Ptrs and ForeignPtrs using the with and withForeignPtr functions. My plan is to add some functional helper functions to make this process less verbose.

You can see the full Rosie module here: https://gitlab.com/lemms/rosie/blob/master/src/librosie/haskell/Rosie.hs


Rosie Pattern Language

Rosie Pattern Language GitLab

Rosie Pattern Language IBM





Getting Started with Clojure on Windows

I’ve been taking a break from writing blog posts because I’ve been feeling pretty burned out the past few months, but I’m trying to get back to functional programming following my experiences at Strange Loop 2018.

I heard a lot of good things about Clojure at Strange Loop, so I’m going to start learning the language. In this post I’ll be discussing how to get Clojure working on Windows because developing for the language is only officially supported on Linux and Mac.

I’m also going to discuss how to get Clojure working on Windows Subsystem for Linux (WSL).

Clojure on Windows

If you don’t have Windows Powershell, I recommend getting that first. It’s generally better than the Windows Command Line.

Also, you need to install Java with a version greater than 1.6 because Clojure runs on the JVM. You can check your Java version with the following command:

java -version

Getting Leiningen

In order to install Clojure on Windows, you must first install Leiningen. I tried using the lein script on the Leiningen website, but it didn’t work for me at the time of writing. Instead, I used Chocolatey to perform the installation. (More information can be found about this at https://chocolatey.org/packages/lein)

First, run Powershell as administrator by right-clicking the Powershell icon, then run the following command to install Chocolatey:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Then run the Leiningen installation using Chocolatey:

choco install lein

If the Leiningen installation fails, you can try running:

lein self-install

When I tried this on my machine, the self installer failed to download the Leiningen jar file. If this happens to you, you can download the latest jar file here: https://github.com/technomancy/leiningen/releases

Rename the standalone .zip file extension to .jar and place it in your %HOME%/.lein/self-installs/ directory.

After restarting the Powershell, you should be able to run the following command to create a Clojure project:

lein new app my-app

You can run the app by changing to the app directory and calling:

lein run

You can run a REPL using the following command:

lein repl

You can find more information about working with Leiningen here: https://www.braveclojure.com/getting-started/

Clojure on WSL

Getting Windows Subsystem for Linux (WSL)

WSL is a convenient way to cross compile for Linux if you have a Windows 10 machine.

You can find instructions for installing WSL here: https://docs.microsoft.com/en-us/windows/wsl/install-win10

Once you’ve restarted your computer, you can install Ubuntu 18.04 LTS by searching in Windows Store for Ubuntu:


Getting Clojure on WSL

Check your Java version with:

java -version

You should have a Java version greater than 1.6 to run Clojure.

Once you’ve installed Ubuntu 18.04 LTS on WSL, you can run the following command to install Leiningen:

sudo apt-get install leiningen

and you’re ready to get started with Clojure development!






A thought experiment: Category Theory and Quantum Computing

This week I’m taking a break from my regular Haskell posts. A few weeks ago I posted about High Level Quantum Assembly using Haskell and that got me thinking about what a high level quantum computing language might look like. This week I’m going to attempt to perform a thought experiment and imagine what a hypothetical high-level quantum computing language might look like using some very basic category theory and type theory.

Properties of quantum gates

I’m going to massively simplify quantum computing, because I don’t really understand the physics behind it. Basic low-level quantum computing involves two components, qubits and quantum gates which act on them.

There are plenty of interesting articles on quantum gates including Demystifying Quantum Gates — One Qubit At A Time by Jason Roell, Quantum Gates and Circuits: The Crash Course by Anita Ramanan, and this excellent introductory talk Quantum Computing for Computer Scientists by Andrew Helwer. I’m not going to discuss low-level quantum gates in detail; in fact that would be counter-productive because in order to create a high-level quantum computing language, we need to be able to forget about the details of how qubits work. What we want is to generalize the objects of quantum computing so that we don’t need to worry about these details any more.

Before we start generalizing, let’s examine the qualities of qubits and gates.

Qubits are represented by vectors with two components. The two components represent two orthogonal dimensions in some Hilbert space, written in Dirac notation as |0_k> and |1_k>, where k is the index of the qubit in a system of multiple qubits. The multiple qubits’ vectors are stacked on one another to produce a single vector with 2k elements. In addition, when you have a system containing multiple entangled qubits, you are operating on the tensor product of all of the qubits in the system. The tensor product of k entangled qubits with one another produces a vector which contains 2^k elements.

For example:

[a, b] ⊗ [c, d] = [a * c,  a * d, b * c, b * d]
[a, b] ⊗ [c, d] ⊗ [e, f] = [a * c,  a * d, b * c, b * d] ⊗ [e, f]
    = [a * c * e, a * c * f, a * d * e, a * d * f,
       b * c * e, b * c * f, b * d * e, b * d * f]

Note that the state of any qubit system is ultimately represented as a vector.
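This product is easy to compute directly. Here's a one-function Haskell sketch (my own illustration, not from any quantum library) that reproduces the two-qubit example above:

```haskell
-- Kronecker (tensor) product of two state vectors:
-- every component of the first scales the whole second vector
tensor :: Num a => [a] -> [a] -> [a]
tensor xs ys = [x * y | x <- xs, y <- ys]
```

For example, tensor [a, b] [c, d] yields [a*c, a*d, b*c, b*d], and nesting calls builds the 2^k-element vector for k qubits.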

When a qubit in a qubit set is measured, its superposition is “collapsed”, which forces it to assume a value of |0> or |1>. The likelihood of the qubit assuming a |0> or |1> value is based on the value of the qubit’s vector before the measurement. Again, I’m not sure exactly what this means physically, but I do understand that this operation is non-reversible, which distinguishes it from other operations on qubits.

Quantum gates act on qubits, performing operations which can change the phase of a single qubit or multiple qubits. I have no clue how this happens physically, but the effect of this operation on a qubit can be entirely captured by a unitary matrix. For example, the SWAP operation has the following matrix:

[1, 0, 0, 0
 0, 0, 1, 0
 0, 1, 0, 0
 0, 0, 0, 1]

Since this is how quantum gates operate, we can model quantum systems as matrix multiplications applied to vectors. Specifically, “A gate which acts on k qubits is represented by a 2^k x 2^k unitary matrix.” [Wikipedia]

A quantum category

Since quantum gates are equivalent to matrix multiplications on qubit state vectors, we can rely on the properties of matrix multiplication to create an abstraction.

Given two matrices, M and N, which are applied to a vector v in sequence, NMv, there exists a matrix NM which produces an identical result. The matrix NM is called the composition of M and N. Since the effect of quantum gates can be modeled by a unitary matrix, then equivalently, for every two quantum gates M and N, there exists a gate NM, which is the composition of M and N. In other words, quantum gates are composable.

Furthermore, since matrix multiplication is associative, quantum gate applications are associative. Therefore, for quantum gates M, N, and O and a qubit state vector v, O(NM)v == (ON)Mv. (Please let me know if this is not the case, I haven’t seen anything in my brief literature review which contradicts this statement).

In addition, for every vector v, there is an identity matrix I, such that Iv == v. Equivalently, there is a quantum identity gate; if you don’t apply a gate, you get the same qubit state vector you started with.
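Since gates are just matrices, composition, associativity, and identity can all be checked concretely. Here's a toy Haskell sketch (my own example code, not from any quantum library) using the SWAP matrix from above:

```haskell
import Data.List (transpose)

-- Applying a gate is multiplying its matrix by the state vector
applyGate :: Num a => [[a]] -> [a] -> [a]
applyGate m v = [sum (zipWith (*) row v) | row <- m]

-- Composing gates is matrix multiplication:
-- matMul n m acts like applying m first, then n
matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul n m = [[sum (zipWith (*) row col) | col <- transpose m] | row <- n]

-- The SWAP gate from the text
swapGate :: [[Double]]
swapGate = [ [1, 0, 0, 0]
           , [0, 0, 1, 0]
           , [0, 1, 0, 0]
           , [0, 0, 0, 1] ]
```

applyGate swapGate [a, b, c, d] swaps the two middle amplitudes, and matMul swapGate swapGate is the identity matrix, since SWAP is its own inverse.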

Since quantum gates are composable, associative, and have an identity, quantum gates form a category! Since we have a category, we can use category theory to describe a model for abstract quantum operations! Let’s specialize this category with types, to create a type theoretic model for quantum operations. We’ll start by creating a category called Quantum with two value constructors, Measured and Super, which represent the value of a qubit in its measured state and its superposition state.

data Quantum = Measured Bool | Super Bool


Now we can define operations on the value of a qubit which go from a measured qubit to a superposition qubit. For example, we could apply a Hadamard gate to Measured Bool to create a Super Bool:


We could also apply a Hadamard gate to a Super Bool to produce another Super Bool:


Here’s the type of the Hadamard function:

hadamard :: Quantum Bool -> Quantum Bool

In fact, we can apply all quantum gates to Measured Bool or Super Bool, with the requirement that the codomain of the gate functions must be the Super Bool type.

We can apply the constraint that all operations on the Quantum meta-type must be reversible, so that we preserve the quantum properties of the system. There is one exception to this constraint, the measure function:


This breaks our rule. How can we make everything consistent? The answer is that since Measured Bool is really just the classical type Bool, we can move it out of the Quantum metatype:


Now every function in Quantum can be reversible! We change our definition of Quantum like this:

data Quantum = Super Bool

There’s no real reason to restrict ourselves to the Bool type. It’s possible to represent other types such as Bitset and Int with classical types, so we can imagine representing a Bitset or an Int as a collection of qubits. A Super Int could simply be a superposition of all possible Int values. What would we need a Super Int for? I have no clue; but it’s technically possible to have one, so why not?

In fact we can represent all classical pure types using qubits, so let’s generalize the diagram above with the set of all pure types, T. Let’s rename the Super value constructor to Quantum too:

data Quantum a = Quantum a


We need to define a measure function for all types in Quantum T, but that detail is left as an exercise for the reader.

This simplifies our definition of Quantum functions; all functions in the Quantum category are now reversible.

For example, hadamard still has the same type:

hadamard :: Quantum Bool -> Quantum Bool

but now we only need one version of H, rather than two:


A quantum Applicative Functor

There’s one problem with our Quantum category; we can no longer move any classical data into it! Let’s fix that by making an Applicative Functor for our category.

To start with, let’s make Quantum an instance of Functor:

instance Functor Quantum where
    fmap f (Quantum a) = Quantum (f (measure a))

Now we can take any classical function, f, and apply it to any Quantum data a, by measuring it first. Note that by definition fmap must involve a measurement of the superposition, collapsing the superposition. For example, if we wanted to apply the classical not function to the result of calling hadamard on a Quantum Bool, we could do the following:

hadamardNot :: Quantum Bool -> Quantum Bool
hadamardNot x = fmap not (hadamard x)

If we have a list of Quantum Bool and we want to hadamardNot each of the elements, we can use regular Haskell to do this:

hadamardNotList :: [Quantum Bool] -> [Quantum Bool]
hadamardNotList x = fmap hadamardNot x

Next, let’s make Quantum an instance of Applicative:

instance Applicative Quantum where
    pure x = Quantum x
    (Quantum f) <*> (Quantum x) = Quantum (f (measure x))

Note that apply (<*>) by definition must also involve a measurement of the superposition, collapsing the superposition.

Now we can use pure to take classical data or functions from T into Quantum T:


For example, we could move a Bool into Quantum, call hadamard on it, and apply a classical not function to it like this:

let qnot = (Quantum not)
in qnot <*> (hadamard (pure True))

This would have the effect of moving the True value into a quantum register, applying the H gate, measuring the result and taking the not of that result. A useless operation, but I’m sure more useful computations exist.

Note that it’s still possible to make functions which reside entirely in the Quantum category, so we could define a function bell:

bell :: (Quantum Bool, Quantum Bool) -> (Quantum Bool, Quantum Bool)
bell (x, y) = cnot (hadamard x) y

Functions in Quantum which don’t involve fmap, pure, <*>, or measure are reversible.

At this point, it’s pretty easy to imagine compound quantum data types, for example a binary tree of qubits could be defined like this:

data QubitTree = Leaf | Node (Quantum Bool) QubitTree QubitTree

You could imagine other kinds of data structures, for example a graph G = (V, E), where V is a set of vertices, each of which contains a qubit, and E, the set of edges, represent entangled qubit pairs. Each qubit would be entangled with all of its neighbors on the graph.

Or you could move a compound data structure into the Quantum Applicative Functor like this:

makeQuantumList :: [a] -> Quantum [a]
makeQuantumList x = pure x

A quantum Monad

The next obvious step is to make Quantum an instance of Monad, which is quite simple:

instance Monad Quantum where
    return x = Quantum x
    x >>= f = f (measure x)

So we can chain functions which generate a Quantum value from a classical value using bind. Again, by definition, a bind (>>=) must also involve a measurement of the superposition, collapsing the superposition. I don’t even have an example of a function which might take a classical value and evaluate to a superposition, so I’m just going to pretend that there are two of them called foo and bar:

foo :: String -> Quantum Int
bar :: Int -> Quantum Float

We could chain these operations one after another using bind:

return "Quantum" >>= foo >>= bar

This is an extremely useless operation, but maybe someone will figure out how to make the Quantum Monad useful.

Again, it’s important to note that functions in Quantum which don’t involve return and bind are reversible.

There is a possible extension of the Quantum category where you can preserve the reversibility of operations even in the presence of measure, fmap, apply, pure, bind and return, by introducing another typeclass Measured. The measure, fmap, apply, pure, bind and return operations would take a Quantum value to a Measured value, but that complicates things significantly, so I don’t really want to go into detail about it.

Strongly-typed quantum computing?

It looks like I just ended up adding quantum computations to Haskell without actually inventing a new language after all. This was an interesting thought experiment, but I’m still not sure if it’s useful. At least it’s a fun way to spend a weekend!

P.S. Please cite this article if you build upon the ideas described here.

Google Sheets and Haskell

This week, I’m playing with some web programming in Haskell. I don’t have much experience with accessing web services in my day job, so I decided to learn about them by making a little flash-card app which accesses the Google Sheets API to retrieve flash cards.

Haskell OAuth2?

Although the Google Sheets API doesn’t have official support for Haskell, it is built upon the OAuth2 API, as described in Using OAuth 2.0 for Web Server Applications.

Haskell has an interface to OAuth 2.0 called hoauth2. Unfortunately, the documentation for hoauth2 is so sparse that I couldn’t figure out how to use it (the only documentation they have is a single web-app built with the WAI framework). Since I want to make a command line application and not an app which you interact with in a browser, I used Haskell’s HTTP client package and Google’s authorization URLs directly.

Making a Google API project

Next I’ll cover what you need to do to enable the Google API for your project by describing how I enabled Sheets for my flash card application.

First open https://console.developers.google.com/ and create a new project:



You should see your project name appear in the upper left of the page:


Click on Enable APIs and Services:


Search for the API you want to enable:


Click Enable to enable the API for your project:


Next, you’ll have to create credentials for your project:


Add credentials to your project. I’m creating a CLI tool that accesses application data:


Name your OAuth 2.0 client:


Set up the consent screen:


Finally, note down your client ID and download the credentials file:


The file will be called client_id.json. This will be the token you’ll use to verify your app with the Google API.

OAuth Authorization

Google has a great explanation for how to use OAuth2 with Mobile and Desktop Applications. Unfortunately, they don’t have a Haskell API, so we need to modify their suggestions to work with the http-client.

First, we need to import the HTTP client modules and ByteString, which is used to read from an HTTP message:

import qualified Data.ByteString.Char8 as C

import Network.HTTP.Client
import Network.HTTP.Client.TLS
import Network.HTTP.Types.Status

In addition, some requests must use the Data.Text format for strings:

import qualified Data.Text as T

We’ll also need to request permission from the user to access their sheets. This is done by opening a web browser with a page which the user can use to generate a token to access their account. We can open a web browser using the Web.Browser library:

import Web.Browser

Finally, we need a JSON parser to decode the JSON responses. I used the Aeson library for this:

import Data.Aeson
import Data.Aeson.Types
import qualified Data.Map as M

The main function of this application is runFlashCardsMaybe:

runFlashCardsMaybe :: MaybeT IO ()
runFlashCardsMaybe = do lift $ putStrLn "Running flash cards"
                        args <- lift $ getArgs
                        if length args < 4
                        then lift $ putStrLn "Usage: GoogleSheetsDemo-exe <client_id> <client_secret> <spreadsheet_id> <rows_to_read>"
                        else let clientID = args !! 0
                                 clientSecret = args !! 1
                                 spreadSheetID = args !! 2
                                 rowsToRead = args !! 3
                             in do connection <- setupConnection clientID clientSecret
                                   flashCards <- getFlashCards spreadSheetID rowsToRead connection
                                   doFlashCards flashCards

MaybeT is a Monad Transformer, which means that it adds Maybe functionality to the IO Monad. I haven’t covered Monad Transformers yet in my blog, but for now, you can think of MaybeT IO a as an IO (Maybe a) with more convenient plumbing.
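To see MaybeT IO in isolation, here's a minimal standalone sketch (the safeDiv example is hypothetical, not from the flash-card app): a failing step short-circuits the rest of the do-block to Nothing, which is exactly how runFlashCardsMaybe bails out if any stage fails.

```haskell
import Control.Monad.Trans.Maybe (MaybeT(..), runMaybeT)

-- Division that fails (as Nothing) on a zero divisor
safeDiv :: Int -> Int -> MaybeT IO Int
safeDiv _ 0 = MaybeT (return Nothing)  -- failure short-circuits the chain
safeDiv x y = return (x `div` y)

demo :: IO (Maybe Int)
demo = runMaybeT $ do
    a <- safeDiv 10 2   -- succeeds with 5
    safeDiv a 0         -- fails, so demo yields Nothing overall
```

runMaybeT unwraps the transformer, giving back a plain IO (Maybe a) you can inspect.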

The function gets the arguments passed via the CLI for the clientID and clientSecret which we got in the previous step, the user’s spreadsheet ID and the number of rows to read from the spreadsheet.

There are three steps to the application, setupConnection, getFlashCards, and doFlashCards, in that order.

The first part of connecting to Google API is setting up a connection. Here’s the function I used to connect:

setupConnection :: String -> String -> MaybeT IO Connection
setupConnection clientID clientSecret
    = do manager <- lift $ newManager tlsManagerSettings
         lift $ openBrowser ("https://accounts.google.com/o/oauth2/v2/auth?" ++
                             "scope=https://www.googleapis.com/auth/spreadsheets&" ++
                             "response_type=code&" ++
                             "state=security_token%3D138r5719ru3e1%26url%3Doauth2.example.com/token&" ++
                             "redirect_uri=urn:ietf:wg:oauth:2.0:oob&" ++
                             "client_id=" ++ clientID)
         lift $ putStrLn "Please enter authorization code:"
         lift $ hFlush stdout
         authCode <- lift $ getLine
         initialRequest <- lift $ parseRequest "https://www.googleapis.com/oauth2/v4/token"
         let pairs = fmap (\(x, y) -> (C.pack x, C.pack y))
                          [("code", authCode),
                           ("client_id", clientID),
                           ("client_secret", clientSecret),
                           ("redirect_uri", "urn:ietf:wg:oauth:2.0:oob"),
                           ("grant_type", "authorization_code")]
             request = urlEncodedBody pairs initialRequest
         response <- lift $ httpLbs request manager
         if responseStatus response == status200
         then do let body = responseBody response
                 do bodyData <- MaybeT $ return $ (decode body :: Maybe AuthResponse)
                    MaybeT $ return $ createConnection manager bodyData
         else MaybeT $ return $ Nothing

setupConnection takes the clientID and clientSecret that we made previously as arguments. There’s a lot going on here, so let’s break it down. First you have to make a connection manager, in this case we want a manager that supports TLS:

manager <- lift $ newManager tlsManagerSettings

The reason we have to call lift $ newManager is because the function evaluates to a MaybeT IO metatype. lift transports the newManager function from the IO monad into the MaybeT IO monad.

The next step is to ask the user for an access token by opening a standard URL in a web browser. This URL can be found in OAuth 2.0 for Mobile & Desktop Apps under the Sample Authorization URLs heading as the copy-paste sample:

lift $ openBrowser ("https://accounts.google.com/o/oauth2/v2/auth?" ++
                    "scope=https://www.googleapis.com/auth/spreadsheets&" ++
                    "response_type=code&" ++
                    "state=security_token%3D138r5719ru3e1%26url%3Doauth2.example.com/token&" ++
                    "redirect_uri=urn:ietf:wg:oauth:2.0:oob&" ++
                    "client_id=" ++ clientID)

Other methods of authorization are available for web and mobile applications.

The user’s browser will open a page like this:


Next we request the authorization code from the user:

lift $ putStrLn "Please enter authorization code:"
lift $ hFlush stdout
authCode <- lift $ getLine

Then we need to build a request for an authorization token from Google’s OAuth2 server:

initialRequest <- lift $ parseRequest "https://www.googleapis.com/oauth2/v4/token"
let pairs = fmap (\(x, y) -> (C.pack x, C.pack y))
            [("code", authCode),
             ("client_id", clientID),
             ("client_secret", clientSecret),
             ("redirect_uri", "urn:ietf:wg:oauth:2.0:oob"),
             ("grant_type", "authorization_code")]
    request = urlEncodedBody pairs initialRequest

The request body consists of a set of key-value pairs encoded as ByteStrings. C.pack converts a String to a ByteString, so we can map a tuple conversion lambda over the list of key-value pairs. urlEncodedBody, a function from the HTTP client library, URL-encodes the pairs into the request body and sets the request method to POST, which is what the token endpoint expects.
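The pack mapping can be seen in isolation in this small sketch (the "4/fake-auth-code" value is a made-up placeholder, not a real authorization code):

```haskell
import qualified Data.ByteString.Char8 as C

-- Hypothetical stand-ins for the real authCode / grant_type pairs.
pairs :: [(C.ByteString, C.ByteString)]
pairs = fmap (\(x, y) -> (C.pack x, C.pack y))
             [("code", "4/fake-auth-code"),
              ("grant_type", "authorization_code")]

main :: IO ()
main = print pairs
-- prints: [("code","4/fake-auth-code"),("grant_type","authorization_code")]
```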

Then we call the request using the httpLbs function with the TLS manager and check the response:

response <- lift $ httpLbs request manager
if responseStatus response == status200
then do let body = responseBody response
        bodyData <- MaybeT $ return (decode body :: Maybe AuthResponse)
        MaybeT $ return $ createConnection manager bodyData
else MaybeT $ return Nothing

If the response is 200 OK, we need to parse the response message. I used the Aeson library to decode the response with the type AuthResponse:

data AuthResponse = AuthResponse {accessToken :: T.Text,
                                  tokenType :: T.Text,
                                  expiresIn :: Int,
                                  refreshToken :: T.Text}

instance FromJSON AuthResponse where
    parseJSON (Object v) = AuthResponse
                           <$> v .: T.pack "access_token"
                           <*> v .: T.pack "token_type"
                           <*> v .: T.pack "expires_in"
                           <*> v .: T.pack "refresh_token"
    parseJSON invalid = typeMismatch "AuthResponse" invalid
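For reference, a successful token response from the OAuth2 endpoint is JSON of roughly this shape; the field names are what the FromJSON instance above matches on, but the values here are placeholders, not real tokens:

```json
{
  "access_token": "ya29.placeholder",
  "token_type": "Bearer",
  "expires_in": 3600,
  "refresh_token": "1/placeholder"
}
```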

Once we have the accessToken, we can create an authorized connection to Google Sheets:

data Connection = Connection Manager AuthResponse

createConnection :: Manager -> AuthResponse -> Maybe Connection
createConnection manager authResponse = Just $ Connection manager authResponse

The next step after getting an authorized connection to Google Sheets is to get the flash cards out of the sheet:

getFlashCards :: String -> String -> Connection -> MaybeT IO [[T.Text]]
getFlashCards spreadSheetID rowsToRead (Connection manager (AuthResponse {accessToken = thisAccessToken,
                                                                          tokenType = thisTokenType,
                                                                          expiresIn = thisExpiresIn,
                                                                          refreshToken = thisRefreshToken}))
    = do rowsRequest <- parseRequest ("GET https://sheets.googleapis.com/v4/spreadsheets/" ++
                                      spreadSheetID ++
                                      "/values/Sheet1!A1:B" ++ rowsToRead ++ "?access_token=" ++
                                      (T.unpack thisAccessToken))
         rowsResponse <- lift $ httpLbs rowsRequest manager
         maybeRowsResponse <- return (decode (responseBody rowsResponse) :: Maybe RowsResponse)
         MaybeT $ return $ fmap getValues maybeRowsResponse

Again, there’s a lot going on here, so let’s break it down one function call at a time. First, we need to get the rows out of the sheet. This is achieved using a GET command, as specified in Reading and Writing Values using the Google Sheets API under the Reading a single range heading:

rowsRequest <- parseRequest ("GET https://sheets.googleapis.com/v4/spreadsheets/" ++
                             spreadSheetID ++
                             "/values/Sheet1!A1:B" ++ rowsToRead ++ "?access_token=" ++
                             (T.unpack thisAccessToken))

The message requests a set of values from A1 to BN where N is the number of rows to read. The A column contains the front of the flash card and the B column contains the back of the flash card. We also have to pass the access token using “?access_token=” ++ (T.unpack thisAccessToken). T.unpack converts a Data.Text string to a String.
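Concretely, for a hypothetical spreadsheet ID, row count, and token (all placeholders made up for illustration), the assembled request URL looks like this:

```haskell
-- abc123 and ya29.placeholder are fabricated example values.
url :: String
url = "https://sheets.googleapis.com/v4/spreadsheets/" ++ spreadSheetID
      ++ "/values/Sheet1!A1:B" ++ rowsToRead
      ++ "?access_token=" ++ accessTokenStr
  where
    spreadSheetID  = "abc123"
    rowsToRead     = "10"
    accessTokenStr = "ya29.placeholder"

main :: IO ()
main = putStrLn url
-- prints: https://sheets.googleapis.com/v4/spreadsheets/abc123/values/Sheet1!A1:B10?access_token=ya29.placeholder
```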

Next, we send the request using the httpLbs function and the TLS manager:

rowsResponse <- lift $ httpLbs rowsRequest manager

After this, we need to parse the response body, which contains the rows which were read from the user’s spreadsheet:

maybeRowsResponse <- return (decode (responseBody rowsResponse) :: Maybe RowsResponse)
MaybeT $ return $ fmap getValues maybeRowsResponse

Again, we use the Aeson library to parse the response. The RowsResponse type contains the data for the rows in its values field:

data RowsResponse = RowsResponse {range :: T.Text,
                                  majorDimension :: T.Text,
                                  values :: [[T.Text]]}

instance FromJSON RowsResponse where
    parseJSON (Object v) = RowsResponse
                           <$> v .: T.pack "range"
                           <*> v .: T.pack "majorDimension"
                           <*> v .: T.pack "values"
    parseJSON invalid = typeMismatch "RowsResponse" invalid

getValues :: RowsResponse -> [[T.Text]]
getValues (RowsResponse {values = thisValues}) = thisValues
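As an aside, Haskell record syntax already generates an accessor function for every field, so values itself has type RowsResponse -> [[T.Text]] and getValues is just a named synonym for it. A self-contained sketch with a simplified stand-in record (String fields instead of Data.Text, names invented for the example):

```haskell
-- Simplified stand-in for RowsResponse.
data Rows = Rows { rowsRange :: String, rowsValues :: [[String]] }

-- Pattern-matching accessor, mirroring getValues above.
getRowsValues :: Rows -> [[String]]
getRowsValues (Rows { rowsValues = v }) = v

main :: IO ()
main = do
  let r = Rows { rowsRange = "Sheet1!A1:B1", rowsValues = [["front", "back"]] }
  -- The generated accessor and the hand-written one agree.
  print (rowsValues r == getRowsValues r)   -- prints: True
```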

Once the values are parsed into a [[T.Text]], it’s simple to run an interactive flash card test on the command line: print the front of the “flash card”, wait for the user to press enter once they’ve guessed what’s on the back, and then show the back of the “flash card”:

doFlashCards :: [[T.Text]] -> MaybeT IO ()
doFlashCards [] = return ()
doFlashCards (row : rows)
    = do lift $ putStrLn $ T.unpack (row !! 0)
         lift $ hFlush stdout
         _ <- lift getLine
         lift $ putStrLn $ T.unpack (row !! 1)
         lift $ hFlush stdout
         _ <- lift getLine
         doFlashCards rows

The source code for this post is available at Google Sheets Demo.


Haskell HTTP Client Documentation

OAuth 2.0 for Mobile and Desktop Applications in the Google API

Introduction to the Google Sheets API

Reading and Writing Values using the Google Sheets API