Writing a Ray Tracer in Haskell (Part 1)

I started making a ray tracer in Haskell for fun so I’m going to blog about it. A ray tracer is a piece of software that generates an image of a 3D scene by simulating the way light moves around the scene and enters a camera. Since light travels in straight lines until it interacts with matter, this is done by tracing straight light rays around the scene.

This isn’t going to be a tutorial, but I’ll go over some of the quirks of writing a ray-tracer in a pure functional language where you don’t have access to mutable state.

I used two books for reference: Ray Tracing from the Ground Up by Kevin Suffern, and Physically Based Rendering by Matt Pharr and Greg Humphreys.

Why write a ray tracer in Haskell?

There were a lot of things I didn’t know how to do in Haskell before starting this project, such as how to load a mesh from a file into a Haskell-style algebraic data type (ADT). Writing a ray tracer is a great way to learn.

Also, I thought that, in principle, a ray tracer should be easier to express in Haskell because all of the operations in a ray tracer are well-defined functions derived from the rendering equation. For example, when shading a surface point, we take a few inputs, including the incoming light direction, the surface and light colors, and the surface normal, and produce an outgoing light ray direction. There’s no need to modify existing state, or even store the old state of the ray for later use; the previous ray can simply be discarded.

The basics: ray intersections

I started out writing the data types to represent shapes and rays. Like so:

data Shape = Plane (Point V3 Double) (V3 Double)
           | Sphere (Point V3 Double) Double
           | AABB (M44 Double) (V3 Double) (V3 Double)
           | Triangle (Point V3 Double) (Point V3 Double) (Point V3 Double) (V3 Double)
           | Disk (Point V3 Double) (V3 Double) Double
           | Rectangle (Point V3 Double) (V3 Double) (V3 Double) (V3 Double)
           deriving (Show, Eq)

data Ray = Ray { rayOrigin    :: Point V3 Double
               , rayDirection :: V3 Double
               } deriving (Show, Eq)

In order to store the location where a light ray hits an object, I created an intersection data structure:

data Intersection = Intersection { intersectionPoint  :: Point V3 Double
                                 , intersectionNormal :: V3 Double
                                 , tMin               :: Double
                                 } deriving (Show, Eq)

Then performing a ray-shape intersection is as simple as it would be in any other language: just take a ray and a shape and return an intersection. In this case, we’ll return a Maybe Intersection, because it’s possible that the ray misses the object entirely, in which case you get a result of Nothing. Here’s an example of how that works with a ray and a flat plane:

rayIntersection :: Ray -> Shape -> Maybe Intersection

rayIntersection (Ray {rayOrigin = ro, rayDirection = rd}) (Plane planePoint planeNormal) =
    let denominator = rd `dot` planeNormal
    in if (denominator > -rayEpsilon) && (denominator < rayEpsilon)
       then Nothing
       else let t = (planePoint .-. ro) `dot` (planeNormal ^/ denominator)
            in if t <= rayEpsilon
               then Nothing
               else Just (Intersection { intersectionPoint = ro .+^ (rd ^* t)
                                       , intersectionNormal = planeNormal
                                       , tMin = t })
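
For comparison, here’s a sketch of what the sphere case might look like under the same conventions. This is my own simplified version, not the project’s code: it only takes the nearer root of the quadratic and ignores rays that start inside the sphere; normalize is the unit-length function from the linear package.

rayIntersection (Ray {rayOrigin = ro, rayDirection = rd}) (Sphere center radius) =
    let oc = ro .-. center -- Vector from the sphere center to the ray origin
        a = rd `dot` rd
        b = 2.0 * (oc `dot` rd)
        c = (oc `dot` oc) - radius * radius
        discriminant = b * b - 4.0 * a * c
    in if discriminant < 0.0
       then Nothing -- The ray misses the sphere entirely
       else let t = ((-b) - sqrt discriminant) / (2.0 * a) -- Nearer of the two roots
            in if t <= rayEpsilon
               then Nothing
               else let p = ro .+^ (rd ^* t)
                    in Just (Intersection { intersectionPoint = p
                                          , intersectionNormal = normalize (p .-. center)
                                          , tMin = t })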

Lights, cameras, materials, shaders

There are a few other things you need to represent a scene, including lights:

data Light = EnvironmentLight (Color Double)
           | PointLight (Point V3 Double) (Color Double)
           | DirectionalLight (V3 Double) (Color Double)
           | DiskLight (Point V3 Double) (V3 Double) Double (Color Double) -- Point, normal, and radius
           | SphereLight (Point V3 Double) Double (Color Double) -- Point and radius
           | RectangleLight (Point V3 Double) (V3 Double) (V3 Double) (Color Double) -- Point and two edge vectors
           deriving (Show, Eq)

cameras:

data Camera = Camera (Point V3 Double) (V3 Double) (V3 Double) -- Origin, look, and up
            deriving (Show, Eq)

and materials:

data Material = ColorMaterial (Color Double) -- Color (No shading)
              | MatteMaterial (Color Double) Double -- Diffuse, kD
              | PlasticMaterial (Color Double) Double (Color Double) Double Double -- Diffuse, kD, Specular, kS, kExp
              deriving (Show, Eq)

In order to tie all of these systems together, I created a data type to represent an object, which has a shape, a material, and a function, called a shader, from a ShadePoint to a Color.

data Object = Object Shape Material (ShadePoint -> Color Double)

A ShadePoint contains everything you need to shade a surface point of an object: the material of the object, the surface normal, the incoming light ray direction, and the outgoing light ray direction:

data ShadePoint = ShadePoint Material (V3 Double) (V3 Double) (V3 Double)
                deriving (Show, Eq)

Here’s an example of a diffuse shader (diffuseF is a helper function):

diffuseF :: Color Double
         -> Double
         -> Color Double
diffuseF diffuse kD =
    let invPi = 1.0 / pi
    in diffuse ^* (kD * invPi)

lambertShader :: ShadePoint -> Color Double
lambertShader (ShadePoint (ColorMaterial color) normal wIn wOut) = color
lambertShader (ShadePoint (MatteMaterial diffuse kD) normal wIn wOut) = diffuseF diffuse kD
lambertShader (ShadePoint (PlasticMaterial diffuse kD _ _ _) normal wIn wOut) = diffuseF diffuse kD

Tracing

If you’re going to trace rays, there’s no point tracing against a single triangle. To make a meaningful render, you need to trace a scene like this:

data Scene = ListScene [Object]
           | KDScene KDTree
           deriving (Show)

I started out with the most basic kind of scene, just a list of objects, and made a scene with a KD-tree accelerator later. I’ll talk about that in a later post, but for now let’s look at how to trace a list scene.

The approach to tracing a list scene is simply to test every ray for intersection against every object. This produces a very slow, but correct, render.

Here’s the entire code for the tracer (don’t worry about the LowDiscrepancySequence for now; I’ll describe that in a later post):

traceRays :: (LowDiscrepancySequence s)
          => Scene
          -> Color Double
          -> Ray
          -> s
          -> ((TraceResult, Ray), s)

traceRays (ListScene objects) bgColor ray gen =
    ((foldl' (\traceResult@(TraceResult (Intersection {tMin = traceTMin}) material shader)
              (Object shape objectMaterial objectShader) ->
                  case rayIntersection ray shape of
                      Nothing -> traceResult
                      Just objectIntersection@(Intersection {tMin = tm}) ->
                          if tm < traceTMin
                          then TraceResult objectIntersection objectMaterial objectShader
                          else traceResult)
             (emptyTraceResult bgColor) objects, ray), gen)

That’s it: a single foldl'. You take the minimum intersection point, and that’s the point you shade. The shadow rays are cast in a separate function which is pretty much the same, sketched below.
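
Here’s a hedged sketch of what that shadow test could look like for a list scene (inShadow and lightDistance are names of my choosing, not the project’s): the shadow ray points from the surface toward the light, and any intersection closer than the light means the point is occluded.

inShadow :: Scene -> Ray -> Double -> Bool
inShadow (ListScene objects) shadowRay lightDistance =
    any (\(Object shape _ _) ->
            case rayIntersection shadowRay shape of
                Nothing -> False
                Just (Intersection {tMin = t}) -> t < lightDistance)
        objects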

Lessons learned

I was surprised by a few things when I started developing the ray tracer, and I think these lessons are relevant to any project which involves large amounts of data and computation:

  1. The tracer is very slow compared to C++. The tracer renders at approximately 36000 rays per second on a Core i5-8250U on 4 cores tracing a mesh with 16300 faces. This is a lot slower than I was hoping for, but still faster than it would be if written in a scripting language like Python. I didn’t spend much time on optimization, so it’s definitely not performant yet. Switching from ADTs to other types and using stricter evaluation would probably help a lot, but those changes are not obvious to me as a Haskell beginner.
  2. The code is really small. 1052 lines of code for a ray tracer with random sampling, a KD-tree accelerator, and mesh loading is incredibly concise. I like this because it makes it feasible for me to develop a large project like this on my own in my spare time. It might also be important if your work involves fast prototyping too.
  3. Using generic types makes computations even slower. My original code used the generic Floating typeclass. Removing the typeclass and specializing the code to use Doubles resulted in a 10x speedup!
  4. If you don’t use strict folds when working with lots of data, your program will crash. If you use foldl or foldr over enough data, your program will run out of memory from all of the thunks it allocates. Use foldl' to avoid this.
  5. Dealing with I/O was actually pretty easy. There are only a couple of files which actually deal with I/O in the project and the rest of the code is 100% pure.
  6. Adding multithreading was annoying, but ultimately required very few changes. It took a lot of digging to figure out how to get the tracer to run on multiple cores, but in the end I just used a parListChunk and it worked (see the sketch after this list).
  7. Modifying the code is super simple. Because everything is pure, adding a feature like random numbers to the program takes a fraction of the time it normally would. Iterating on a pure functional program is super fast.
  8. Lambdas are the most natural way to express shaders. In a ray tracer, when a ray hits an object, you need to determine how the light will reflect off of the surface and what color the resulting ray will be. This is done using a function called a shader. In most languages, shaders are bound to a surface using an ID and accessed using function pointers. In Haskell, you can simply store the shader as a lambda inside the object!
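
For reference, here’s a minimal sketch of the parListChunk pattern from point 6 (renderAll, traceOne, and the chunk size of 64 are hypothetical, not the project’s actual code):

import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- Trace every ray, evaluating the resulting colors in parallel in chunks of 64
-- (this assumes an NFData instance exists for Color Double):
renderAll :: (Ray -> Color Double) -> [Ray] -> [Color Double]
renderAll traceOne rays = map traceOne rays `using` parListChunk 64 rdeepseq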

You can find the source code for this project under an MIT license at https://github.com/WhatTheFunctional/HaskellTracer.


A Brief Introduction to the λ-Calculus (Part 2)

In the second part of this series on λ-calculus, we’ll explore how to perform recursion in λ-calculus and how to represent numbers in λ-calculus using Peano arithmetic. At the end of this post, I’ll show how to code an algorithm to compute the nth number in the Fibonacci sequence using λ-calculus.

A word of warning, this post contains some challenging concepts. Take it slow and make sure you understand each step before moving on to the next one.

Self application

To begin with, we’ll look at an interesting behavior that happens when you call the self_apply function with itself as an argument.

Recall that self_apply is defined as:

self_apply = λi.(i i)

Applying self_apply to itself:

(self_apply self_apply)
=> (λi.(i i) λi.(i i))
=> (λi.(i i) λj.(j j)) [α-conversion]
=> (λj.(j j) λj.(j j)) [β-reduction]
=> (self_apply self_apply)

Notice that applying α-conversion and β-reduction to the expression (self_apply self_apply) results in (self_apply self_apply). No matter how many operations are applied to this expression, we get the same expression as a result.

If you had a machine which could choose which operation to apply to a given λ-calculus expression and apply it automatically, this expression would cause an infinite, self-repeating sequence of operations. We’ll take advantage of this behavior soon.

Recursion

Recall from part 1 that we can’t perform recursion in λ-calculus by using function names inside function definitions. Let’s temporarily lift that restriction to imagine an example of what we’d like to achieve:

WARNING: What follows is not real λ-calculus! It is only presented for illustrative purposes.

f = λi.(f (G i))

Repeatedly expanding the definition of such a function would look like this:

(f x)
=> (λi.(f (G i)) x)
=> (λi.(λi'.(f (G i')) (G i)) x)
=> (λi.(λi'.(λi''.(f (G i'')) (G i')) (G i)) x)

I had to rename variables as part of these pseudo-operations because this isn’t actually valid λ-calculus.

Let’s stop the recursion there by deleting the f and see what would result if we could terminate this iteration. Again, this is not a real λ-calculus operation:

(λi.(λi'.(λi''.(G i'') (G i')) (G i)) x)
=> (λi'.(λi''.(G i'') (G i')) (G x))
=> (λi''.(G i'') (G (G x)))
=> (G (G (G x)))

As you can see, this would produce a chain of applications of G in front of x.

This illustrates the kind of behavior we want, repeatedly expanding f, resulting in a series of recursive operations.

We can achieve this kind of recursion without referring to the function name in its function body by taking the next iteration of f as an argument, f’, to the function f. We’re back to real λ-calculus from now on:

f = λf'.λi.λc.(c i (f' f' (G i)))

Notice that we need to pass a copy of the f function to f as its first argument to use for further recursion. We use a selector function, c, to terminate the recursion by discarding the (f’ f’ (G i)) expression if c is true. In the case where c is false we continue the recursion by selecting the (f’ f’ (G i)) expression. We have two instances of f’ in the recursion expression so that further recursive calls to f can re-use the copy of f we already have.

When c is false, f returns a function, (f’ f’ (G i)), which takes a selector function as an argument to terminate the iteration. This way we can specify when to terminate the function by passing it a series of false and true values.

We can call the function recursively to duplicate G a finite number of times in front of x as follows. We need to call f with itself as an argument followed by its intended argument, x, and a string of Boolean selector functions:

(f f x false false true)
=> (λf'.λi.λc.(c i (f' f' (G i))) f x false false true)
=> (λi.λc.(c i (f f (G i))) x false false true)
=> (λc.(c x (f f (G x))) false false true)
=> ((false x (f f (G x))) false true)
=> ((λf'.λi.λc.(c i (f' f' (G i))) f (G x)) false true)
=> (λc.(c (G x) (f f (G (G x)))) false true)
=> ((false (G x) (f f (G (G x)))) true)
=> ((λf'.λi.λc.(c i (f' f' (G i))) f (G (G x))) true)
=> ((λi.λc.(c i (f f (G i))) (G (G x))) true)
=> ((λc.(c (G (G x)) (f f (G (G (G x)))))) true)
=> (true (G (G x)) (f f (G (G (G x)))))
=> (G (G x))

Notice how we use the false function to continue the recursion and the true function to terminate the recursion.

Y

This kind of manual recursive function call is fine for performing recursion when we know the number of iterations we need ahead of time, but what if we don’t know the number of iterations ahead of time? We need an automatic mechanism for performing recursion which can terminate when we encounter a terminating condition.

We saw a function, self_apply, which performs an automatic infinite operation sequence in the first section of this article. Notice how the body (i i) causes the infinite repetition. Let’s modify self_apply by adding a conditional function as an argument which can discard the self application (i i):

half_recur = λf.λi.(f (i i))

If we pass false to this function, the (i i) self application is not evaluated:

half_recur false F G 
=> λf.λi.(f (i i)) false F G
=> λi.(false (i i)) F G
=> (false (F F)) G
=> (λfirst.λsecond.second (F F)) G
=> λsecond.second G
=> G

false terminates the automatic self application!

If we apply this function to true, we get the self apply behavior, (F F), as a result.

Applying half_recur to a function, F, we get another function:

half_recur F
=> λf.λi.(f (i i)) F
=> λi.(F (i i))

By applying this function to itself, we get the famous Y-combinator, also known as the fixed point combinator, discovered by Haskell Curry:

Y = λf.(λi.(f (i i)) λi.(f (i i)))

The Y-combinator has a very useful behavior when it’s applied to a function:

(Y F)
=> (λf.(λi.(f (i i)) λi.(f (i i))) F)
=> (λi.(F (i i)) λi.(F (i i)))
=> (λi.(F (i i)) λi'.(F (i' i'))) [α-conversion]
=> (F (λi'.(F (i' i')) λi'.(F (i' i'))))
=> (F (λi.(F (i i)) λi.(F (i i)))) [α-conversion]
== (F (Y F))

The Y-combinator duplicates the function it’s passed as a prefix to itself!

This behavior allows us to perform general recursion.
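
As an aside for readers coming from Haskell: this is the behavior of the fix function from Data.Function, which satisfies fix f = f (fix f). Haskell reaches the fixed point through laziness rather than self-application (the (i i) trick doesn’t typecheck in plain Haskell), but the recursive pattern is the same. A minimal sketch, with a factorial of my own choosing standing in for F:

import Data.Function (fix)

-- fix f = f (fix f), the Haskell analogue of (Y F) == (F (Y F))
factorial :: Integer -> Integer
factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))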

Notice that this is not necessarily an infinite recursion. If F is a conditional function, a false condition would discard the Y-combinator and terminate the recursion!

As a personal note, I think that Curry’s discovery of the Y-combinator was an act of sheer genius. It is a remarkable mechanism.

Numbers

In the late 19th century, Giuseppe Peano created a set of axioms for natural numbers which we can use to perform arithmetic in λ-calculus. (I’m going to skip over the undecidability of Peano’s arithmetic for the purpose of this article).

Peano’s axioms use a “successor” function, as well as defining the number 0 as a natural number. We’ll use the following two functions for zero and successor:

zero = identity == λi.i
successor = λn.λs.((s false) n)

This kind of numbering system, described in Greg Michaelson’s book, is an alternative to Church Numerals. Michaelson’s formulation of Peano’s arithmetic makes it easier for us to perform comparisons on numbers.

Any natural number can be created by applying the successor function a number of times to zero. For example:

one = (successor zero)
two = (successor (successor zero))
three = (successor (successor (successor zero)))

Every time we apply successor to a number, it creates a function which passes a pair of arguments, false and the previous number n, to a selector function s. We can define the natural numbers like so:

one = successor zero
=> λn.λs.((s false) n) zero
=> λs.((s false) zero)

two = successor one == successor (successor zero)
=> λn.λs.((s false) n) (λn.λs.((s false) n) zero)
=> λn.λs.((s false) n) λs.((s false) zero)
=> λn.λs.((s false) n) λs'.((s' false) zero)
=> λs.((s false) λs'.((s' false) zero))

and so on...

Notice that all numbers are functions which take a selector function as an argument.

We choose to define numbers this way because in the case where we apply a number to true, the result will be false, unless the number is zero, in which case we’ll get true as a result:

zero true
=> λi.i true
=> true

one true
=> λs.((s false) zero) true
=> ((true false) zero)
=> ((λfirst.λsecond.first false) zero)
=> (λsecond.false zero)
=> false

This is because true has the same behavior as select_first, and, because of how we defined successor, the first argument passed to the selector for any non-zero number will always be false (successor always puts false as the first argument to the selector).

This behavior allows us to create an is_zero function:

is_zero = λn.(n true)

is_zero applied to zero will evaluate to true, and if applied to any other natural number, it will evaluate to false.

Since we defined natural numbers as recursive applications of the successor function, we can discard one application of the successor function to get the predecessor of a number. We’ve previously seen a function which will discard the first argument in a pair of functions, the select_second function (a.k.a false).

Here’s what happens when we apply a non-zero natural number (successor X) to false:

((successor X) false)
=> ((λn.λs.((s false) n) X) false)
=> (λs.((s false) X) false)
=> ((false false) X)
=> ((λfirst.λsecond.second false) X)
=> (λsecond.second X)
=> X

By applying the number to false, we’ve stripped the successor function prefix from X!

We can define a predecessor function like this:

predecessor = λn.((is_zero n) n (n false))

We need to handle zero as a special case, so we use is_zero in this function.
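
To make the structure concrete, here’s a hypothetical Haskell analogue of this encoding (Nat and all of the names are mine; an algebraic data type stands in for the selector trick):

data Nat = Zero | Succ Nat deriving Show

isZero :: Nat -> Bool
isZero Zero = True
isZero _    = False

-- As in the λ-calculus version, zero is treated as its own predecessor.
predecessor :: Nat -> Nat
predecessor Zero     = Zero
predecessor (Succ n) = n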

Addition

In order to add two numbers to one another, we’ll take the approach of incrementing one number while decrementing the other:

add_iter = λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y)))

We can then define the add function by calling this increment/decrement function recursively with the Y-combinator:

add = Y add_iter

Let’s try adding 1+1.

This next section is pretty long, but it demonstrates that addition can work just by applying the four operations of λ-calculus:

add one one
=> (Y add_iter) one one
=> λf.(λi.(f (i i)) λi.(f (i i))) add_iter one one
=> (λi.(add_iter (i i)) λi'.(add_iter (i' i'))) one one
=> add_iter (λi'.(add_iter (i' i')) λi'.(add_iter (i' i'))) one one

Replacing (λi'.(add_iter (i' i')) λi'.(add_iter (i' i'))) by (Y add_iter)

=> add_iter (Y add_iter) one one
=> λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y))) (Y add_iter) one one
=> λx.λy.((is_zero y) x ((Y add_iter) (successor x) (predecessor y))) one one
=> λy.((is_zero y) one ((Y add_iter) (successor one) (predecessor y))) one
=> ((is_zero one) one ((Y add_iter) (successor one) (predecessor one)))
=> (false one ((Y add_iter) (successor one) (predecessor one)))
=> ((Y add_iter) (successor one) (predecessor one))

Replacing successor one by two and predecessor one by zero

=> (Y add_iter) two zero
=> (λf.(λi.(f (i i)) λi.(f (i i))) add_iter) two zero
=> (λi.(add_iter (i i)) λi.(add_iter (i i))) two zero
=> (λi.(add_iter (i i)) λi'.(add_iter (i' i'))) two zero [α-conversion]
=> add_iter (λi'.(add_iter (i' i')) λi'.(add_iter (i' i'))) two zero

Replacing (λi'.(add_iter (i' i')) λi'.(add_iter (i' i'))) by (Y add_iter)

=> add_iter (Y add_iter) two zero
=> λadd_iter'.λx.λy.((is_zero y) x (add_iter' (successor x) (predecessor y))) (Y add_iter) two zero
=> λx.λy.((is_zero y) x ((Y add_iter) (successor x) (predecessor y))) two zero
=> λy.((is_zero y) two ((Y add_iter) (successor two) (predecessor y))) zero
=> ((is_zero zero) two ((Y add_iter) (successor two) (predecessor zero)))
=> (true two ((Y add_iter) (successor two) (predecessor zero)))
=> two

Yes, I know that took a lot of work, but it is a valid arithmetic procedure performed using nothing but functions!

Fibonacci

Now that we have numbers, arithmetic, and recursion, it’s trivial to define a function to calculate the nth number in the Fibonacci sequence:

fib_iter = λfib_iter'.
               λn.((is_zero n)
                   zero
                   ((is_zero (predecessor n))
                    one
                    (add
                     (fib_iter' (predecessor n))
                     (fib_iter' (predecessor (predecessor n))))))

fib = Y fib_iter
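
Continuing the hypothetical Haskell analogue from the Numbers section (again, all names are mine), the same definitions translate almost word for word; Haskell’s built-in recursion plays the role of the Y-combinator:

add :: Nat -> Nat -> Nat
add x y = if isZero y then x else add (Succ x) (predecessor y)

fib :: Nat -> Nat
fib Zero            = Zero
fib (Succ Zero)     = Succ Zero
fib (Succ (Succ n)) = add (fib (Succ n)) (fib n)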

Conclusion

That’s essentially all we need to use λ-calculus to perform general computations on the natural numbers. As Countess Ada Lovelace famously realized, any machine which can perform general numerical computations is sufficiently powerful to perform computations on any kind of data.

There’s a lot more to λ-calculus than what I’ve discussed in these posts, but this should be enough to give you the tools you need to continue exploring λ-calculus.

References

An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson

Lambda Calculus at Wikipedia

The Lambda Calculus at Stanford Encyclopedia of Philosophy

Haskell theoretical foundations – Lambda calculus

Normal, Applicative and Lazy Evaluation

A Brief Introduction to the λ-Calculus (Part 1)

In this post, I’ll be discussing the untyped λ-calculus (lambda calculus). λ-calculus forms the basis of all functional programming languages and is one of the three theoretical models of computing, the other two being Turing Machines and Recursive Function Theory.

Alonzo Church created λ-calculus in the 1930s as a formal system of mathematical logic for computation based on function abstraction and application. Church envisioned a simple language which contains only functions. λ-calculus doesn’t even have Boolean values or numbers. We’ll explore how to represent Boolean values using only functions below, and we’ll cover representing numbers using only functions in a later post.

The most important distinction between imperative languages and functional languages is how they perform abstraction. In an imperative language, abstraction is performed by assigning names to variables which can change value over time. This is similar to how the parts of a Turing Machine can change over time; e.g. the tape of the machine can move. In functional programming languages, abstraction is performed by assigning names to values and functions which never change and computing new values by applying functions to values. In λ-calculus, functions are immutable values which we can name.

For example, here is the identity function:

λi.i

The function starts with the λ symbol, followed by a name, representing the argument of the function. A period separates the argument of the function from the body of the function. The body of the function can be any λ-calculus expression. Names in λ-calculus can be any string of characters except spaces, parentheses, . and λ.

For convenience, we can give names to our functions (technically this is an extension of the λ-calculus). Let’s define a few example functions so you can get used to the syntax:

identity = λi.i
self_application = λi.ii
apply = λx.(λy.xy)

Don’t worry too much about what these do just yet, you’ll probably be able to figure out what they do after you learn about β-reduction below.

Notice that the body of a function can also contain other functions. Functions are first class in λ-calculus, just like in any other functional programming language!

In the identity function above, i is used to form an abstraction. In the function, i can refer to anything until the function is specialized by application. For example, suppose we have a name, G, we could apply the identity function to the name G to specialize the function. We indicate that we want to apply the function by placing it in front of the argument G in parentheses:

(λi.i G)

In order to actually apply the value, we replace the instances of i in the function body with the argument, G. In this case, we get the result G:

=> G

The process we performed above is an operation. We use => to indicate that an operation was performed on (λi.i G) to compute G. We could also write the steps above like this:

(λi.i G) => G

Operations advance an algorithm to its next step, eventually resulting in a solution. In imperative languages, there are many operations, each of which alter the value of variables or the program counter. In λ-calculus there are only four operations:

  • λ-abstraction (lambda abstraction)
  • β-reduction (beta reduction)
  • α-conversion (alpha conversion)
  • η-conversion (eta conversion)

It’s important to note that naming functions is not an operation. Function names are assigned statically and cannot be used inside their own function definitions. They are simply an alias for the function and can’t be used for recursion.

Operations in the λ-calculus

λ-abstraction

λ-abstraction is simply the introduction of a lambda function. For example, we could create a new function using the λ-abstraction operation:

λx.xy

Also, notice that there are two names, x and y, in the body of the function but only x appears in the function argument. This is a valid λ-calculus expression.

We say that the name x is bound and the name y is free. A function which has no free variables is called a combinator and a function with at least one free variable is called a closure.

Sometimes the parentheses are excluded from nested functions like this:

apply = λx.λy.xy

In this case, you can always add parentheses by following the rule that lambda abstractions are right-associative.

β-reduction

β-reduction is the process of applying a function to a value, as we saw above:

(λi.ij G) => Gj

For convenience, the parentheses are often excluded. If they are not used, you can always add them by using the rule that function applications are left-associative.

α-conversion

α-conversion allows us to rename an argument to avoid a name collision. To do this, we choose a different name for the argument and replace the old name with the new one wherever it exists in the function body:

λi.ij => λp.pj

For example, the two instances of j in this expression refer to different values, so we need to rename them using α-conversion before applying β-reduction:

(λi.iji λj.j) => (λi.iji λx.x) => (λx.x jλx.x) => jλx.x

η-conversion

η-conversion is used when the argument of a function only appears as the last term of the function body. In this case, the function can be simplified to remove the argument:

λi.RWCi => RWC

Applicative versus normal order reduction

Unlike in imperative languages, in λ-calculus, the order in which these operations are performed is undefined. For example when a function has an argument which contains an expression which could be simplified by β-reduction, we can choose which β-reduction to apply first. Consider the following expression:

(λi.ij (λx.xy G))

Here we apply β-reduction to the argument first. This is called applicative order reduction:

(λi.ij (λx.xy G)) => (λi.ij Gy) => Gyj

Here we apply β-reduction to the left-most function first. This is called normal order reduction:

(λi.ij (λx.xy G)) => (λx.xy G)j => Gyj

Church and Rosser showed that all evaluation orders of an expression in λ-calculus which terminate result in the same value.

This property of λ-calculus and functional programming languages in general is called execution order independence. Execution order independence enables the parallel execution of many lambda calculus expressions.

Making decisions

In order to model general computation, we need a way to choose from two alternatives. In order to do this, we introduce the select_first and select_second functions:

select_first = λfirst.λsecond.first
select_second = λfirst.λsecond.second

select_first consumes one argument, and then a second, and discards the second argument:

select_first A B
=> ((λfirst.λsecond.first A) B)
=> (λsecond.A B)
=> A

select_second chooses the second argument:

select_second A B
=> ((λfirst.λsecond.second A) B)
=> (λsecond.second B)
=> B

Consider the structure of an if-then statement in an imperative language.

if condition then A else B

If the condition is true, then we want to evaluate the first expression, A, and if the condition is false, then we want to evaluate the second expression, B.

But select_first has the behavior of evaluating the first expression!

select_first A B => A

We can rename select_first to true:

true A B => A

This performs a behavior equivalent to:

if true then A else B

We can similarly use select_second to represent false:

false A B => B

This performs a behavior equivalent to:

if false then A else B
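
As a hypothetical Haskell aside (the names are mine, and the RankNTypes extension is needed for the polymorphic alias), the same encoding can be written as ordinary functions:

{-# LANGUAGE RankNTypes #-}

type ChurchBool = forall a. a -> a -> a

churchTrue, churchFalse :: ChurchBool
churchTrue  first _second = first    -- select_first
churchFalse _first second = second   -- select_second

-- Convert back to an ordinary Bool for inspection:
toBool :: ChurchBool -> Bool
toBool b = b True False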

Cond

Then the whole if-then statement can be expressed as a function cond that takes another function, c, which chooses either the expression a or the expression b:

cond = λa.λb.λc.((c a) b)

Let’s evaluate cond with true as an argument:

cond A B true
=> λa.λb.λc.((c a) b) A B λfirst.λsecond.first
=> λb.λc.((c A) b) B λfirst.λsecond.first
=> λc.((c A) B) λfirst.λsecond.first
=> ((λfirst.λsecond.first A) B)
=> (λsecond.A B)
=> A

It’s easy to see how passing false to cond will produce the desired behavior of evaluating to B.

Not

Not can be expressed as:

if condition then false else true

So we can simply apply false and true to cond to get a definition for not:

not = λx.(((cond false) true) x)

Applying not to true, we get:

not true
=> λx.(((cond false) true) x) true
=> (((cond false) true) true)
=> λa.λb.λc.((c a) b) false true true
=> λb.λc.((c false) b) true true
=> λc.((c false) true) true
=> ((true false) true)
=> ((λfirst.λsecond.first false) true)
=> (λsecond.false true)
=> false

And

And can be expressed as:

if A then B else false

If A is true then the expression is true if B is true, otherwise it’s false. If A is false, then it doesn’t matter what value B is, the expression is false.

and = λx.λy.((x y) false)

Applying true and true to this function we get:

and true true
=> ((λx.λy.((x y) false) true) true)
=> (λy.((true y) false) true)
=> ((true true) false)
=> ((λfirst.λsecond.first true) false)
=> (λsecond.true false)
=> true

Or

Or can be expressed as:

if A then true else B

If A is true then it doesn’t matter what B is, the expression is true. If A is false, then the expression is true if B is true, otherwise it’s false.

or = λx.λy.(((cond true) y) x)

Let’s apply false and true to or:

or false true
=> ((λx.λy.(((cond true) y) x) false) true)
=> (λy.(((cond true) y) false) true)
=> (((cond true) true) false)
=> (((λa.λb.λc.((c a) b) true) true) false)
=> ((λb.λc.((c true) b) true) false)
=> (λc.((c true) true) false)
=> ((false true) true)
=> ((λfirst.λsecond.second true) true)
=> (λsecond.second true)
=> true

Conclusion

At this point, you have a good enough grasp of the concepts of λ-calculus to make simple, non-recursive functions which act on Boolean values.

Next time we’ll cover how to represent numbers in the λ-calculus and how to perform recursion using the Y-combinator!

You can continue reading part 2 of this series here.

A side note on computability

A function which can be evaluated in the λ-calculus with a finite number of operations is called λ-computable. All λ-computable functions are computable on a Turing Machine and all Turing computable functions are λ-computable (See Church-Turing thesis). As a result, Turing Machines and the λ-calculus are equivalent in terms of what kinds of functions they can evaluate. Because of this equivalence, any problem which can be solved efficiently with an imperative programming language can also be solved efficiently with a functional programming language.

Formal syntax of λ-calculus

<expression> := <name> | <function> | <application>
<function> := λ<name>.<expression>
<application> := (<expression> <expression>)

References

An Introduction to Functional Programming Through Lambda Calculus by Greg Michaelson

Lambda Calculus at Wikipedia

The Lambda Calculus at Stanford Encyclopedia of Philosophy

Haskell theoretical foundations – Lambda calculus

Normal, Applicative and Lazy Evaluation

Making a Haskell Interface for the Rosie Pattern Language

I discovered the Rosie Pattern Language at Dr. Jamie Jennings’ talk about it at Strange Loop 2018. The Rosie Pattern Language (RPL) is a DSL for parsing strings which is more convenient and easier to work with than regular expressions. RPL is a Lua library with a C Foreign Function Interface (FFI). It also has Python and Go interfaces, but I noticed that there wasn’t a way to call it from Haskell.

I met Dr. Jennings after the talk and offered to add a Haskell interface for RPL, which I started working on after Strange Loop. I haven’t used Haskell’s C FFI before, so I thought I’d write a post about it. The documentation for the FFI is a little sparse, so I’m going to walk through how I built the interface step by step.

Making a FFI Project

In order to build a project using the Haskell FFI, you need to link to C .o files when you call ghc. I modified the Makefile used by the RPL Go interface to link to the .o when building a Rosie.hs module:

LUA_FILES=$(wildcard ../../../submodules/rosie-lpeg/src/*.o)
LUA_EXTRA_FILES=$(wildcard ../../../submodules/lua-cjson/*.o)
LIB_LUA_FILES=$(wildcard ../liblua/*.o)
Rosie: Rosie.hs ../binaries/$(ROSIE_OBJECT_FILE_NAME)
    ghc --make -main-is Rosie -o Rosie Rosie.hs ../binaries/$(ROSIE_OBJECT_FILE_NAME) $(LUA_FILES) $(LIB_LUA_FILES) $(LUA_EXTRA_FILES)

As you can see, all I had to do to call ghc with the FFI was to add the --make flag, specify a main module using -main-is (in this case Rosie), and list all of the .o files I wanted to link to.

You can see the full Makefile I wrote here: https://gitlab.com/lemms/rosie/blob/master/src/librosie/haskell/Makefile

Building a Rosie Haskell module

The first thing you need to add when you use the C FFI is the Foreign Function Interface language extension:

{-# LANGUAGE ForeignFunctionInterface #-}

I made a module called Rosie which exports some RPL data types and all of the functions exported by the RPL C library:

module Rosie (RosieStr(..),
              RosieMatch(..),
              RosieEngine(..),
              rosieStringLength,
              rosieStringChars,
              unsafeNewEmptyRosieString,
              cRosieNewString,
              ...
              cRosieParseBlock,
              newRosieString,
              newRosieEngine,
              main) where

All of the functions starting with cRosie are direct calls to the RPL C interface.

rosieStringLength, rosieStringChars, and unsafeNewEmptyRosieString are helper functions I wrote to work with RosieString data in Haskell.

The RPL C functions are very low level, only supporting manual memory management, so you need to remember to free the memory you allocate when you’re done with it. The Rosie module lets you call these unsafe allocations, but I also added newRosieString and newRosieEngine in order to add automatic memory management to the Rosie module.

Managing foreign data structures in Haskell

Haskell can only refer to C data structures by pointer. In general you will have a handle to a foreign data structure which looks like this:

Ptr RosieEngine

In order to get the Ptr data type, you need the following import:

import Foreign.Ptr

If you don’t need to access the internals of a C data structure, as in this RosieEngine example, you can simply specify an empty value constructor:

data RosieEngine = RosieEngine

The C struct engine contains references to Lua interface libraries. It’s difficult to determine the memory layout for the Lua interface. Instead of opening that can of worms, I just forbade access to the internals of the RosieEngine.

If you want internal access to the C data structures from Haskell, you need to specify the internal components using the record syntax:

data RosieStr = RosieStr
                { len :: Word32
                , ptr :: CString
                } deriving (Show, Eq)

CString is a C style string data structure which you can access by importing:

import Foreign.C

and Word32 is a 32 bit unsigned integer which you can import using:

import Data.Word (Word32)

Note: CString is just a type alias to Ptr CChar.

In order to read and write from a RosieStr in the C code, I had to make RosieStr an instance of the Storable typeclass:

instance Storable RosieStr where
    alignment _ = 8
    sizeOf _    = 16
    peek ptr    = RosieStr
        <$> peekByteOff ptr 0
        <*> peekByteOff ptr 8
    poke ptr (RosieStr l p) = do
        pokeByteOff ptr 0 l
        pokeByteOff ptr 8 p

alignment is used to specify the alignment of each item in the record in memory. alignment is the least common multiple of the sizes of all data types in the record. In my case, I have a Word32, which is 4 bytes and a Ptr CChar, a pointer which on a 64 bit machine is 8 bytes in size; so alignment = 8, the lowest common multiple of 4 and 8.

peek and poke specify how a RosieStr is read from and written to memory. They use peekByteOff and pokeByteOff to read and write bytes from a pointer and offset in memory. Each element must be aligned to its size in memory. len is at offset 0, which is fine because 0 is a multiple of 4 (the size of Word32). ptr can’t be at offset 4, even though that would result in a tightly packed data structure because 4 isn’t a multiple of 8 (the size of Ptr), so we need to leave 4 bytes of empty space in the data structure and put ptr at an offset of 8 bytes.

In some cases, we may need to create a new RosieStr which is empty. I made a helper function for this which uses the new function from the Foreign.Marshal.Utils library:

import Foreign.Marshal.Utils

unsafeNewEmptyRosieString :: IO (Ptr RosieStr)
unsafeNewEmptyRosieString = new (RosieStr { len = 0, ptr = nullPtr })

I made a similar data structure for pattern matches called RosieMatch.

Calling foreign functions from Haskell

Now that we’ve covered how to make a foreign data structure, let’s look at how to call foreign functions. You can import a foreign function from a linked .o file using the foreign import ccall statement:

foreign import ccall "rosie_new_string_ptr" cRosieNewString :: CString -> Word32 -> IO (Ptr RosieStr)

This statement imports the C function rosie_new_string_ptr as cRosieNewString (which I renamed to match Haskell’s function naming conventions). It has the type CString -> Word32 -> IO (Ptr RosieStr). You can call cRosieNewString with a CString and a Word32 length to create a new RosieStr on the heap.

Note: There is a function in the RPL C interface called rosie_new_string which returns a Rosie string by value. Haskell doesn’t support foreign functions which return foreign structs by value, so I couldn’t add that to the Haskell interface.

It’s very important for type safety that this function evaluates to a value in the IO monad! Foreign functions can have uncontrolled side-effects which must be captured or the guarantees provided by Haskell will be broken. The RPL interface functions can do unsafe things like leak memory and modify their inputs. The IO monad captures these side effects appropriately.

All I had to do to import all of the RPL functions was add a foreign import ccall statement for each of them.

At this point we have everything we need to call Rosie Pattern Language from Haskell!

Making a safer RPL interface

It’s still possible to leak memory with cRosieNewString, cRosieNew, and unsafeNewEmptyRosieString because the user has to manually call cRosieStringFree, cRosieFinalize, and free to free the memory they allocated for each.

Instead of relying on these manual memory allocation functions, I made newRosieString and newRosieEngine to allocate ForeignPtr managed memory pointers. Let’s look at newRosieString:

newRosieString :: String -> IO (ForeignPtr RosieStr)
newRosieString s = do
    let l = fromIntegral (length s)
    pRString <- withCString s (\cString -> cRosieNewString cString l)
    newForeignPtr ptrRosieFreeStringPtr pRString

newRosieString takes a Haskell string, and extracts its length using (length s). It then calls withCString with the Haskell string s and a lambda function. withCString converts the Haskell string s into a CString and passes it as the argument to the lambda function. The lambda function calls cRosieNewString to make a new Ptr RosieString. Finally, I create a new ForeignPtr RosieStr by calling:

newForeignPtr ptrRosieFreeStringPtr pRString

newForeignPtr takes a pointer to a function for freeing the Ptr RosieStr and the Ptr RosieStr itself and evaluates to a ForeignPtr RosieStr which will garbage collect itself when all references to it are unreachable.

In order to get a function pointer to free the Rosie string, I used:

foreign import ccall "&rosie_free_string_ptr" ptrRosieFreeStringPtr :: FunPtr (Ptr RosieStr -> IO ())

Importing a foreign function with a & prefix imports the function as a function pointer.

You need the following import for newForeignPtr and ForeignPtr:

import Foreign.ForeignPtr

Finally, let’s look at newRosieEngine:

foreign import ccall "&rosie_finalize" ptrRosieFinalize :: FunPtr (Ptr RosieEngine -> IO ())

newRosieEngine :: ForeignPtr RosieStr -> IO (ForeignPtr RosieEngine)
newRosieEngine messages = 
    withForeignPtr messages (\m -> do
        engine <- cRosieNew m
        newForeignPtr ptrRosieFinalize engine)

newRosieEngine uses withForeignPtr instead of withCString. It converts its first argument from a ForeignPtr RosieStr to a Ptr RosieStr and passes it as the argument to the lambda. The lambda constructs a new Ptr RosieEngine by calling cRosieNew and makes a memory managed ForeignPtr RosieEngine using newForeignPtr.

Right now, in order to call Rosie functions, you need to convert your Haskell data structures to Ptrs and ForeignPtrs using the with and withForeignPtr functions. My plan is to add some functional helper functions to make this process less verbose.
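
As an example of the current state of the interface, here’s a hypothetical sketch (printRosieLength is my name, not part of the module) which builds a managed Rosie string and reads its length field back through the Storable instance above; peek comes from Foreign.Storable:

printRosieLength :: String -> IO ()
printRosieLength s = do
    fp <- newRosieString s
    withForeignPtr fp (\p -> do
        rstr <- peek p -- Uses the Storable instance for RosieStr
        print (len rstr))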

You can see the full Rosie module here: https://gitlab.com/lemms/rosie/blob/master/src/librosie/haskell/Rosie.hs

References

Rosie Pattern Language

Rosie Pattern Language GitLab

Rosie Pattern Language IBM

https://wiki.haskell.org/Foreign_Function_Interface

https://wiki.haskell.org/GHC/Using_the_FFI

http://book.realworldhaskell.org/read/interfacing-with-c-the-ffi.html

https://wiki.haskell.org/FFI_complete_examples

Getting Started with Clojure on Windows

I’ve been taking a break from writing blog posts because I’ve been feeling pretty burned out the past few months, but I’m trying to get back to functional programming following my experiences at Strange Loop 2018.

I heard a lot of good things about Clojure at Strange Loop, so I’m going to start learning the language. In this post I’ll be discussing how to get Clojure working on Windows because developing for the language is only officially supported on Linux and Mac.

I’m also going to discuss how to get Clojure working on Windows Subsystem for Linux (WSL).

Clojure on Windows

If you don’t have Windows Powershell, I recommend getting that first. It’s generally better than the Windows Command Line.

Also, you need to install Java with a version greater than 1.6 because Clojure runs on the JVM. You can check your Java version with the following command:

java -version

Getting Leiningen

In order to install Clojure on Windows, you must first install Leiningen. I tried using the lein script on the Leiningen website, but it didn’t work for me at the time of writing. Instead, I used Chocolatey to perform the installation. (More information about this can be found at https://chocolatey.org/packages/lein)

First, run Powershell as administrator by right-clicking on the Powershell icon, then run the following command to install Chocolatey:

Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))

Then run the Leiningen installation using Chocolatey:

choco install lein

It’s possible that the Leiningen installation may fail, in which case you can try running:

lein self-install

When I tried this on my machine, the self installer failed to download the Leiningen jar file. If this happens to you, you can download the latest jar file here: https://github.com/technomancy/leiningen/releases

Rename the standalone .zip file extension to .jar and place it in your %HOME%/.lein/self-installs/ directory.

After restarting Powershell, you should be able to run the following command to create a Clojure project:

lein new app my-app

You can run the app by changing to the app directory and calling:

lein run

You can run a REPL using the following command:

lein repl

You can find more information about working with Leiningen here: https://www.braveclojure.com/getting-started/

Clojure on WSL

Getting Windows Subsystem for Linux (WSL)

WSL is a convenient way to cross compile for Linux if you have a Windows 10 machine.

You can find instructions for installing WSL here: https://docs.microsoft.com/en-us/windows/wsl/install-win10

Once you’ve restarted your computer, you can install Ubuntu 18.04 LTS by searching in Windows Store for Ubuntu:

[Image: UbuntuWSL]

Getting Clojure on WSL

Check your Java version with:

java -version

You should have a Java version greater than 1.6 to run Clojure.

Once you’ve installed Ubuntu 18.04 LTS on WSL, you can run the following command to install Leiningen:

sudo apt-get install leiningen

and you’re ready to get started with Clojure development!

References

https://clojure.org/

https://leiningen.org/

https://chocolatey.org/

https://www.braveclojure.com/

A thought experiment: Category Theory and Quantum Computing

This week I’m taking a break from my regular Haskell posts. A few weeks ago I posted about High Level Quantum Assembly using Haskell and that got me thinking about what a high level quantum computing language might look like. This week I’m going to attempt to perform a thought experiment and imagine what a hypothetical high-level quantum computing language might look like using some very basic category theory and type theory.

Properties of quantum gates

I’m going to massively simplify quantum computing, because I don’t really understand the physics behind it. Basic low-level quantum computing involves two components, qubits and quantum gates which act on them.

There are plenty of interesting articles on quantum gates including Demystifying Quantum Gates — One Qubit At A Time by Jason Roell, Quantum Gates and Circuits: The Crash Course by Anita Ramanan, and this excellent introductory talk Quantum Computing for Computer Scientists by Andrew Helwer. I’m not going to discuss low-level quantum gates in detail; in fact that would be counter-productive because in order to create a high-level quantum computing language, we need to be able to forget about the details of how qubits work. What we want is to generalize the objects of quantum computing so that we don’t need to worry about these details any more.

Before we start generalizing, let’s examine the qualities of qubits and gates.

Qubits are represented by vectors with two components. The two components represent two orthogonal dimensions in some Hilbert space, written in Dirac notation as |0_k> and |1_k>, where k represents the index of the qubit in a system of multiple qubits. The vectors of multiple unentangled qubits are stacked on one another to produce a single vector with 2k elements. In addition, when you have a system containing multiple entangled qubits, you are operating on the tensor product of all of the qubits in the system. The tensor product of k entangled qubits with one another produces a vector which contains 2^k elements.

For example:

[a, b] ⊗ [c, d] = [a * c,  a * d, b * c, b * d]
[a, b] ⊗ [c, d] ⊗ [e, f] = [a * c,  a * d, b * c, b * d] ⊗ [e, f]
    = [a * c * e, a * d * e, b * c * e, b * d * e,
       a * c * f, a * d * f, b * c * f, b * d * f]

Note that the state of any qubit system is ultimately represented as a vector.
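
Here’s a sketch of that tensor (Kronecker) product on plain Haskell lists (the helper is my own, and real amplitudes would be complex numbers rather than any plain Num):

tensor :: Num a => [a] -> [a] -> [a]
tensor xs ys = [x * y | x <- xs, y <- ys]

-- tensor [a, b] [c, d] == [a*c, a*d, b*c, b*d]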

When a qubit in a qubit set is measured, its superposition is “collapsed”, which forces it to assume a value of |0> or |1>. The likelihood of the qubit assuming a |0> or |1> value is based on the value of the qubit’s vector before the measurement. Again, I’m not sure exactly what this means physically, but I do understand that this operation is non-reversible, which distinguishes it from other operations on qubits.

Quantum gates act on qubits, performing operations which can change the phase of a single qubit or multiple qubits. I have no clue how this happens physically, but the effect of this operation on a qubit can be entirely captured by a unitary matrix. For example, the SWAP operation has the following matrix:

[1, 0, 0, 0
 0, 0, 1, 0
 0, 1, 0, 0
 0, 0, 0, 1]

Since this is how quantum gates operate, we can model quantum systems as matrix multiplications applied to vectors. Specifically, “A gate which acts on k qubits is represented by a 2^k x 2^k unitary matrix.” [Wikipedia]
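
To make that concrete, here’s a sketch of gate application as a matrix-vector product on plain lists (my own helpers, using real entries for simplicity; a real implementation would use complex amplitudes and a linear algebra library):

type Amplitudes = [Double]
type Gate = [[Double]] -- Row-major unitary matrix

applyGate :: Gate -> Amplitudes -> Amplitudes
applyGate m v = [sum (zipWith (*) row v) | row <- m]

-- Applying swapGate to the two-qubit state [a*c, a*d, b*c, b*d]
-- exchanges the middle two amplitudes, swapping the qubits.
swapGate :: Gate
swapGate = [ [1, 0, 0, 0]
           , [0, 0, 1, 0]
           , [0, 1, 0, 0]
           , [0, 0, 0, 1] ]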

A quantum category

Since quantum gates are equivalent to matrix multiplications on qubit state vectors, we can rely on the properties of matrix multiplication to create an abstraction.

Given two matrices, M and N, which are applied to a vector v in sequence, NMv, there exists a matrix NM which produces an identical result. The matrix NM is called the composition of M and N. Since the effect of quantum gates can be modeled by a unitary matrix, then equivalently, for every two quantum gates M and N, there exists a gate NM, which is the composition of M and N. In other words, quantum gates are composable.

Furthermore, since matrix multiplication is associative, quantum gate applications are associative. Therefore, for quantum gates M, N, and O and a qubit state vector v, O(NM)v == (ON)Mv. (Please let me know if this is not the case, I haven’t seen anything in my brief literature review which contradicts this statement).

In addition, for every vector v, there is an identity matrix I, such that Iv == v. Equivalently, there is a quantum identity gate; if you don’t apply a gate, you get the same qubit state vector you started with.

Since quantum gates are composable, associative, and have an identity, quantum gates form a category! Since we have a category, we can use category theory to describe a model for abstract quantum operations! Let’s specialize this category with types, to create a type theoretic model for quantum operations. We’ll start by creating a category called Quantum with two value constructors, Measured Bool and Super Bool, which represent the value of a qubit in its measured state and its superposition state.

data Quantum = Measured Bool | Super Bool

[Image: Quantum1.png]

Now we can define operations on the value of a qubit which go from a measured qubit to a superposition qubit. For example, we could apply a Hadamard gate to Measured Bool to create a Super Bool:

[Image: Quantum2.png]

We could also apply a Hadamard gate to a Super Bool to produce another Super Bool:

[Image: Quantum3.png]

Here’s the type of the Hadamard function:

hadamard :: Quantum Bool -> Quantum Bool

In fact, we can apply all quantum gates to Measured Bool or Super Bool, with the requirement that the codomain of the gate functions must be the Super Bool type.

We can apply the constraint that all operations on the Quantum meta-type must be reversible, so that we preserve the quantum properties of the system. There is one exception to this constraint, the measure function:

[Image: Quantum4.png]

This breaks our rule. How can we make everything consistent? The answer is that since Measured Bool is really just the classical type Bool, we can move it out of the Quantum metatype:

[Image: Quantum5.png]

Now every function in Quantum can be reversible! We change our definition of Quantum like this:

data Quantum = Super Bool

There’s no real reason to restrict ourselves to the Bool type. It’s possible to represent other types such as Bitset and Int with classical types, so we can imagine representing a Bitset or an Int as a collection of qubits. A Super Int could simply be a superposition of all possible Int values. What would we need a Super Int for? I have no clue; but it’s technically possible to have one, so why not?

In fact we can represent all classical pure types using qubits, so let’s generalize the diagram above with the set of all pure types, T. Let’s rename the Super value constructor to Quantum too:

data Quantum a = Quantum a

[Image: Quantum6.png]

We need to define a measure function for all types in Quantum T, but that detail is left as an exercise for the reader.

This simplifies our definition of Quantum functions; all functions in the Quantum category are now reversible.

For example, hadamard still has the same type:

hadamard :: Quantum Bool -> Quantum Bool

but now we only need one version of H, rather than two:

[Image: Quantum7.png]

A quantum Applicative Functor

There’s one problem with our Quantum category; we can no longer move any classical data into it! Let’s fix that by making an Applicative Functor for our category.

To start with, let’s make Quantum an instance of Functor:

instance Functor Quantum where
    fmap f (Quantum a) = Quantum (f (measure a))

Now we can take any classical function, f, and apply it to any Quantum data a, by measuring it first. Note that by definition fmap must involve a measurement of the superposition, collapsing the superposition. For example, if we wanted to apply the classical not function to the result of calling hadamard on a Quantum Bool, we could do the following:

hadamardNot :: Quantum Bool -> Quantum Bool
hadamardNot x = fmap not (hadamard x)

Suppose we have a list of Quantum Bool, and we want to hadamardNot each of the elements, we can now use regular Haskell to do this:

hadamardNotList :: [Quantum Bool] -> [Quantum Bool]
hadamardNotList x = fmap hadamardNot x

Next, let’s make Quantum an instance of Applicative:

instance Applicative Quantum where
    pure x = Quantum x
    (Quantum f) <*> (Quantum x) = Quantum (f (measure x))

Note that apply (<*>) by definition must also involve a measurement of the superposition, collapsing the superposition.

Now we can use pure to take classical data or functions from T into Quantum T:

[Image: Quantum8.png]

For example, we could move a Bool into Quantum, call hadamard on it, and apply a classical not function to it like this:

let qnot = (Quantum not)
in qnot <*> (hadamard (pure True))

This would have the effect of moving the True value into a quantum register, applying the H gate, measuring the result and taking the not of that result. A useless operation, but I’m sure more useful computations exist.

Note that it’s still possible to make functions which reside entirely in the Quantum category, so we could define a function bell:

bell :: (Quantum Bool, Quantum Bool) -> (Quantum Bool, Quantum Bool)
bell (x, y) = cnot (hadamard x) y

Functions in Quantum which don’t involve fmap, pure, <*>, or measure are reversible.

At this point, it’s pretty easy to imagine compound quantum data types, for example a binary tree of qubits could be defined like this:

data QubitTree = Leaf | Node (Quantum Bool) QubitTree QubitTree

You could imagine other kinds of data structures, for example a graph G = (V, E), where V is a set of vertices, each of which contains a qubit, and E, the set of edges, represent entangled qubit pairs. Each qubit would be entangled with all of its neighbors on the graph.

Or you could move a compound data structure into the Quantum Applicative Functor like this:

makeQuantumList :: [a] -> Quantum [a]
makeQuantumList x = pure x

A quantum Monad

The next obvious step is to make Quantum an instance of Monad, which is quite simple:

instance Monad Quantum where
    return x = Quantum x
    x >>= f = f (measure x)

So we can chain functions which generate a Quantum value from a classical value using bind. Again, by definition, a bind (>>=) must also involve a measurement of the superposition, collapsing the superposition. I don’t even have an example of a function which might take a classical value and evaluate to a superposition, so I’m just going to pretend that there are two of them called foo and bar:

foo :: String -> Quantum Int
bar :: Int -> Quantum Float

We could chain these operations one after another using bind:

return "Quantum" >>= foo >>= bar

This is an extremely useless operation, but maybe someone will figure out how to make the Quantum Monad useful.
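
Since Quantum is now a Monad, the same chain can also be written with do notation:

fooBar :: Quantum Float
fooBar = do s <- return "Quantum"
            i <- foo s
            bar i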

Again, it’s important to note that functions in Quantum which don’t involve return and bind are reversible.

There is a possible extension of the Quantum category where you can preserve the reversibility of operations even in the presence of measure, fmap, apply, pure, bind, and return, by introducing another typeclass, Measured, and having those operations take a Quantum value to a Measured value. That complicates things significantly, though, so I don’t really want to go into detail about it.

Strongly-typed quantum computing?

It looks like I just ended up adding quantum computations to Haskell without actually inventing a new language after all. This was an interesting thought experiment, but I’m still not sure if it’s useful. At least it’s a fun way to spend a weekend!

P.S. Please cite this article if you build upon the ideas described here.

Google Sheets and Haskell

This week, I’m playing with some web programming in Haskell. I don’t have much experience with accessing web services in my day job, so I decided to learn about them by making a little flash-card app which accesses the Google Sheets API to retrieve flash cards.

Haskell OAuth2?

Although the Google Sheets API doesn’t have official support for Haskell, it is built upon the OAuth2 API, as described in Using OAuth 2.0 for Web Server Applications.

Haskell has an interface to OAuth 2.0 called hoauth2. Unfortunately, the documentation for hoauth2 is so sparse that I couldn’t figure out how to use it (the only documentation they have is a single web-app built with the WAI framework). Since I want to make a command line application and not an app which you interact with in a browser, I used Haskell’s HTTP client package and Google’s authorization URLs directly.

Making a Google API project

Next I’ll cover what you need to do to enable the Google API for your project by describing how I enabled Sheets for my flash card application.

First open https://console.developers.google.com/ and create a new project:

0_create.png

1_new_project.png

You should see your project name appear in the upper left of the page:

2_title.png

Click on Enable APIs and Services:

3_enable_apis.png

Search for the API you want to enable:

4_sheets_api.png

Click Enable to enable the API for your project:

5_enable.png

Next, you’ll have to create credentials for your project:

6_credentials.png

Add credentials to your project. I’m creating a CLI tool that accesses application data:

7_add_credentials.png

Name your OAuth 2.0 client:

8_id.png

Set up the consent screen:

9_consent.png

Finally, note down your client ID and download the credentials file:

10_get_credentials.png

The file will be called client_id.json. This will be the token you’ll use to verify your app with the Google API.

OAuth Authorization

Google has a great explanation for how to use OAuth2 with Mobile and Desktop Applications. Unfortunately, they don’t have a Haskell API, so we need to adapt their suggestions to work with the http-client package.

First, we need to import the HTTP client modules and ByteString, which is used to read from an HTTP message:

import qualified Data.ByteString.Char8 as C

import Network.HTTP.Client
import Network.HTTP.Client.TLS
import Network.HTTP.Types.Status

In addition, some requests must use the Data.Text format for strings:

import qualified Data.Text as T

We’ll also need to request permission from the user to access their sheets. This is done by opening a web browser with a page which the user can use to generate a token to access their account. We can open a web browser using the Web.Browser library:

import Web.Browser

Finally, we need a JSON parser to decode the JSON responses. I used the Aeson library for this:

import Data.Aeson
import Data.Aeson.Types
import qualified Data.Map as M

The main function of this application is runFlashCardsMaybe:

runFlashCardsMaybe :: MaybeT IO ()
runFlashCardsMaybe = do lift $ putStrLn "Running flash cards"
                        args <- lift $ getArgs
                        if length args < 4
                        then lift $ putStrLn "Usage: GoogleSheetsDemo-exe <client_id> <client_secret> <spreadsheet_id> <rows_to_read>"
                        else let clientID = args !! 0
                                 clientSecret = args !! 1
                                 spreadSheetID = args !! 2
                                 rowsToRead = args !! 3
                             in do connection <- setupConnection clientID clientSecret
                                   flashCards <- getFlashCards spreadSheetID rowsToRead connection
                                   doFlashCards flashCards

MaybeT is a monad transformer: it adds Maybe functionality to the IO monad. I haven’t covered monad transformers yet in my blog, but for now, you can think of MaybeT IO as a layered monad which combines the behavior of Maybe and IO.
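
Concretely, the transformers library defines MaybeT (roughly) like this:

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

So runMaybeT unwraps the whole computation back into IO (Maybe ()). The program’s main isn’t shown in this post, but presumably it runs the transformer stack something like this:

main :: IO ()
main = runMaybeT runFlashCardsMaybe >> return ()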

The function reads the arguments passed via the CLI: the clientID and clientSecret which we got in the previous step, the user’s spreadsheet ID, and the number of rows to read from the spreadsheet.

There are three steps to the application, setupConnection, getFlashCards, and doFlashCards, in that order.

The first part of connecting to Google API is setting up a connection. Here’s the function I used to connect:

setupConnection :: String -> String -> MaybeT IO Connection
setupConnection clientID clientSecret
    = do manager <- lift $ newManager tlsManagerSettings
         lift $ openBrowser ("https://accounts.google.com/o/oauth2/v2/auth?" ++
                             "scope=https://www.googleapis.com/auth/spreadsheets&" ++
                             "response_type=code&" ++
                             "state=security_token%3D138r5719ru3e1%26url%3Doauth2.example.com/token&" ++
                             "redirect_uri=urn:ietf:wg:oauth:2.0:oob&" ++
                             "client_id=" ++ clientID)
         lift $ putStrLn "Please enter authorization code:"
         lift $ hFlush stdout
         authCode <- lift $ getLine
         initialRequest <- lift $ parseRequest "https://www.googleapis.com/oauth2/v4/token"
         let pairs = fmap (\(x, y) -> (C.pack x, C.pack y))
                          [("code", authCode),
                           ("client_id", clientID),
                           ("client_secret", clientSecret),
                           ("redirect_uri", "urn:ietf:wg:oauth:2.0:oob"),
                           ("grant_type", "authorization_code")]
             request = urlEncodedBody pairs initialRequest
         response <- lift $ httpLbs request manager
         if responseStatus response == status200
         then do let body = responseBody response
                 do bodyData <- MaybeT $ return $ (decode body :: Maybe AuthResponse)
                    MaybeT $ return $ createConnection manager bodyData
         else MaybeT $ return $ Nothing

setupConnection takes the clientID and clientSecret that we made previously as arguments. There’s a lot going on here, so let’s break it down. First you have to make a connection manager; in this case we want a manager that supports TLS:

manager <- lift $ newManager tlsManagerSettings

The reason we have to call lift $ newManager is that setupConnection runs in the MaybeT IO monad, while newManager is an IO action. lift transports the newManager action from the IO monad into the MaybeT IO monad.
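
In other words, at this call site lift is used at the specialized type:

lift :: IO a -> MaybeT IO a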

The next step is to ask the user for an access token by opening a standard URL in a web browser. This URL can be found in OAuth 2.0 for Mobile & Desktop Apps under the Sample Authorization URLs heading as the copy-paste sample:

lift $ openBrowser ("https://accounts.google.com/o/oauth2/v2/auth?" ++
                    "scope=https://www.googleapis.com/auth/spreadsheets&" ++
                    "response_type=code&" ++
                    "state=security_token%3D138r5719ru3e1%26url%3Doauth2.example.com/token&" ++
                    "redirect_uri=urn:ietf:wg:oauth:2.0:oob&" ++
                    "client_id=" ++ clientID)

Other methods of authorization are available for web and mobile applications.

The user’s browser will open a page like this:

installedresult

Next we request the authorization code from the user:

lift $ putStrLn "Please enter authorization code:"
lift $ hFlush stdout
authCode <- lift $ getLine

Then we need to build a request for an authorization token from Google’s OAuth2 server:

initialRequest <- lift $ parseRequest "https://www.googleapis.com/oauth2/v4/token"
let pairs = fmap (\(x, y) -> (C.pack x, C.pack y))
            [("code", authCode),
             ("client_id", clientID),
             ("client_secret", clientSecret),
             ("redirect_uri", "urn:ietf:wg:oauth:2.0:oob"),
             ("grant_type", "authorization_code")]
    request = urlEncodedBody pairs initialRequest

The request consists of a set of key-value pairs encoded as ByteStrings. C.pack converts a String to a ByteString, so we can map a tuple-conversion lambda over the list of key-value pairs. The request is then built by urlEncodedBody, a function in the HTTP client library which encodes the pairs into the request body and sets the request method to POST.

Then we call the request using the httpLbs function with the TLS manager and check the response:

response <- lift $ httpLbs request manager
if responseStatus response == status200
then do let body = responseBody response
        do bodyData <- MaybeT $ return $ (decode body :: Maybe AuthResponse)
           MaybeT $ return $ createConnection manager bodyData
else MaybeT $ return $ Nothing

If the response is 200 OK, we need to parse the response message. I used the Aeson library to decode the response with the type AuthResponse:

data AuthResponse = AuthResponse {accessToken :: T.Text,
                                  tokenType :: T.Text,
                                  expiresIn :: Int,
                                  refreshToken :: T.Text}

instance FromJSON AuthResponse where
    parseJSON (Object v) = AuthResponse
                           <$> v .: T.pack "access_token"
                           <*> v .: T.pack "token_type"
                           <*> v .: T.pack "expires_in"
                           <*> v .: T.pack "refresh_token"
    parseJSON invalid = typeMismatch "AuthResponse" invalid
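
For reference, the JSON returned by the token endpoint is shaped like this (the field names are the ones the parser above expects; the values here are placeholders):

{
    "access_token": "ya29.a0...",
    "token_type": "Bearer",
    "expires_in": 3600,
    "refresh_token": "1//0e..."
}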

Once we have the accessToken, we can create an authorized connection to Google Sheets:

data Connection = Connection Manager AuthResponse

createConnection :: Manager -> AuthResponse -> Maybe Connection
createConnection manager authResponse = Just $ Connection manager authResponse

The next step after getting an authorized connection to Google Sheets is to get the flash cards out of the sheet:

getFlashCards :: String -> String -> Connection -> MaybeT IO [[T.Text]]
getFlashCards spreadSheetID rowsToRead (Connection manager (AuthResponse {accessToken = thisAccessToken,
                                                                          tokenType = thisTokenType,
                                                                          expiresIn = thisExpiresIn,
                                                                          refreshToken = thisRefreshToken}))
    = do rowsRequest <- parseRequest ("GET https://sheets.googleapis.com/v4/spreadsheets/" ++
                                      spreadSheetID ++
                                      "/values/Sheet1!A1:B" ++ rowsToRead ++ "?access_token=" ++
                                      (T.unpack thisAccessToken))
         rowsResponse <- lift $ httpLbs rowsRequest manager
         maybeRowsResponse <- return (decode (responseBody rowsResponse) :: Maybe RowsResponse)
         MaybeT $ return $ fmap getValues maybeRowsResponse

Again, there’s a lot going on here, so let’s break it down one function call at a time. First, we need to get the rows out of the sheet. This is achieved using a GET command, as specified in Reading and Writing Values using the Google Sheets API under the Reading a single range heading:

rowsRequest <- parseRequest ("GET https://sheets.googleapis.com/v4/spreadsheets/" ++
                             spreadSheetID ++
                             "/values/Sheet1!A1:B" ++ rowsToRead ++ "?access_token=" ++
                             (T.unpack thisAccessToken))

The message requests a set of values from A1 to BN where N is the number of rows to read. The A column contains the front of the flash card and the B column contains the back of the flash card. We also have to pass the access token using “?access_token=” ++ (T.unpack thisAccessToken). T.unpack converts a Data.Text string to a String.

Next, we send the request using the httpLbs function and the TLS manager:

rowsResponse <- lift $ httpLbs rowsRequest manager

After this, we need to parse the response body, which contains the rows which were read from the user’s spreadsheet:

maybeRowsResponse <- return (decode (responseBody rowsResponse) :: Maybe RowsResponse)
MaybeT $ return $ fmap getValues maybeRowsResponse

Again, we use the Aeson library to parse the response. The RowsResponse type contains the data for the rows in its values field:

data RowsResponse = RowsResponse {range :: T.Text,
                                  majorDimension :: T.Text,
                                  values :: [[T.Text]]}

instance FromJSON RowsResponse where
    parseJSON (Object v) = RowsResponse
                           <$> v .: T.pack "range"
                           <*> v .: T.pack "majorDimension"
                           <*> v .: T.pack "values"
    parseJSON invalid = typeMismatch "RowsResponse" invalid

getValues :: RowsResponse -> [[T.Text]]
getValues (RowsResponse {values = thisValues}) = thisValues
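
For reference, the response body being decoded here looks something like this (placeholder values):

{
    "range": "Sheet1!A1:B2",
    "majorDimension": "ROWS",
    "values": [["front 1", "back 1"], ["front 2", "back 2"]]
}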

Once the values are parsed into a [[T.Text]], it’s simple to run an interactive flash card test on the command line: print the front of the “flash card”, wait for the user to press enter after they make a guess about what’s on the back, and then show the back of the “flash card”:

doFlashCards :: [[T.Text]] -> MaybeT IO ()
doFlashCards [] = lift $ return ()
doFlashCards (row : rows)
    = do lift $ putStrLn $ T.unpack (row !! 0)
         lift $ hFlush stdout
         lift getLine
         lift $ putStrLn $ T.unpack (row !! 1)
         lift $ hFlush stdout
         lift getLine
         doFlashCards rows

The source code for this post is available at Google Sheets Demo.

Resources:

Haskell HTTP Client Documentation

OAuth 2.0 for Mobile and Desktop Applications in the Google API

Introduction to the Google Sheets API

Reading and Writing Values using the Google Sheets API

High Level Quantum Assembly using Haskell

I recently watched an interesting video on the Computerphile YouTube channel by Robert Smith of Rigetti Computing about their quantum computer instruction set. You can read about their instruction set here.

I don’t know anything about quantum physics or quantum computers but after reading their paper, I realized that it comes down to executing a set of assembly instructions with two kinds of registers, either quantum registers or classical registers. That seems easy enough to play around with in Haskell!

Quantum registers

Let’s make some qubit registers:

data Quantum = QubitRegister Int |
               MetaQubitRegister String

QubitRegisters are indexed by an integer. We’ll also need a MetaQubitRegister for circuit macros; I’ll talk about those soon.

Since this is a high level assembler, its output will be low level assembly. In order to generate the low level instruction for a qubit register, we’ll simply make Quantum an instance of the Show typeclass:

instance Show Quantum where
   show (QubitRegister i) = show i
   show (MetaQubitRegister s) = s

Quantum registers are simply printed out as integers. For example, here’s how you would call the CNot instruction on registers 5 and 3:

CNOT 5 3

All Quantum registers hold a single Qubit.

Classical registers are handled similarly:

data Classical a = Register Int |
                   Range Int Int |
                   RealConstant a |
                   ComplexConstant (Complex a) |
                   MetaRegister String

Classical arguments in Quil can include classical numeric constants, like RealConstant and ComplexConstant. These are used as arguments to some of the Quil instructions.

Here’s how Classical is an instance of Show:

instance (Floating a, Show a, Ord a) => Show (Classical a) where
    show (Register i) = "[" ++ (show i) ++ "]"
    show (Range i j) = "[" ++ (show i) ++ "-" ++ (show j) ++ "]"
    show (RealConstant r) = show r
    show (ComplexConstant (p :+ q))
        | q >= 0 = (show p) ++ "+" ++ (show q) ++ "i"
        | otherwise = (show p) ++ (show q) ++ "i"
    show (MetaRegister s) = s

Classical registers are printed out in square brackets. Here’s an example of the Classical register 5 used in Quil’s not instruction:

NOT [5]

Classical constants can also be used as parameters of quantum instructions. In the following example, the Classical complex constant 0.9009688679-0.4338837391i is used as an argument to the instruction RX along with the quantum register 3:

RX(0.9009688679-0.4338837391i) 3

I’m going to assume that Classical “registers” also hold only a single bit, but that multiple “registers” in a Range can hold integers, real numbers, and complex numbers using some standard format.

Here’s another example of RX using the Range [64-127]:

RX([64-127]) 3

Notice how Classical and Quantum are separate types; this is useful for type checking the arguments to Quil assembly instructions.

Quantum instructions

Next up, we need some assembly instructions. The Quil instruction set is split into instructions which operate on Qubits and instructions which operate on regular bits.

Note: In the following sections, q represents a quantum register and c represents a classical register or constant.

Pauli gates

I q
X q
Y q
Z q

Hadamard gate

H q

Phase gates

PHASE(c) q
S q
T q

Controlled-phase gates

CPHASE00(c) q q
CPHASE01(c) q q
CPHASE10(c) q q
CPHASE(c) q q

Cartesian rotation gates

RX(c) q
RY(c) q
RZ(c) q

Controlled-X gates

CNOT q q
CCNOT q q q

Swap gates

PSWAP(c) q q
SWAP q q
ISWAP q q
CSWAP q q q

There are also two special instructions which control the quantum state: MEASURE, which reads a qubit and optionally writes its value into a classical register, and RESET, which somehow resets all of the qubits.

MEASURE q
MEASURE q c
RESET

I have no idea what any of these do, but we can make a type to represent these pretty easily:

data Instruction a
    = PauliI Quantum | --Quantum instructions
      PauliX Quantum | 
      PauliY Quantum | 
      PauliZ Quantum |
      Hadamard Quantum |
      Phase (Classical a) Quantum |
      PhaseS Quantum |
      PhaseT Quantum |
      CPhase00 (Classical a) Quantum Quantum |
      CPhase01 (Classical a) Quantum Quantum |
      CPhase10 (Classical a) Quantum Quantum |
      CPhase (Classical a) Quantum Quantum |
      RX (Classical a) Quantum |
      RY (Classical a) Quantum |
      RZ (Classical a) Quantum |
      CNot Quantum Quantum |
      CCNot Quantum Quantum Quantum |
      PSwap (Classical a) Quantum Quantum |
      Swap Quantum Quantum |
      ISwap Quantum Quantum |
      CSwap Quantum Quantum Quantum |
      Measure Quantum |
      MeasureOut Quantum (Classical a) |
      Reset |

We’ll add the classical instructions to the Instruction type too:

      Halt | --Classical instructions
      Jump String |
      JumpWhen String (Classical a) |
      JumpUnless String (Classical a) |
      Label String |
      Nop |
      IFalse (Classical a) |
      ITrue (Classical a) |
      INot (Classical a) |
      IAnd (Classical a) (Classical a) |
      IOr (Classical a) (Classical a) |
      Move (Classical a) (Classical a) |
      Exchange (Classical a) (Classical a) |
      Pragma String |

Finally, the Quil assembly language supports macros called “circuits”. I’ll explain how I made a type to represent a circuit below, but for now let’s add an instruction to define a circuit and to call a circuit:

      DefCircuit (Circuit a) |
      CallCircuit (Circuit a) [Either Quantum (Classical a)]

I’m using the Either type here to enable circuits to be called with a list of either Quantum or Classical arguments.

Assembling quantum instructions

Now all we need to do to assemble a program for a Quil machine is to make Instruction an instance of the Show typeclass. Since we’ve already defined how to show Quantum and Classical registers, we can output these using show, concatenating them with the instruction strings. For example, here’s how to define show for rotation gates:

instance (Floating a, Show a, Ord a) => Show (Instruction a) where
...
    show (RX c q) = "RX(" ++ (show c) ++ ") " ++ (show q)
    show (RY c q) = "RY(" ++ (show c) ++ ") " ++ (show q)
    show (RZ c q) = "RZ(" ++ (show c) ++ ") " ++ (show q)
...

Now we can print an RX gate instruction by calling putStrLn like this:

putStrLn (show $ RX (ComplexConstant (0.9009688679 :+ (-0.4338837391))) (QubitRegister 1))

The call above will print the following assembly instruction to the terminal:

RX(0.9009688679-0.4338837391i) 1

The rest of the Quil instructions are just as easily shown, except for macros:

instance (Floating a, Show a, Ord a) => Show (Instruction a) where
...
    show (DefCircuit c) = case showDefCircuit c of
                          Left e -> e
                          Right c -> c
    show (CallCircuit c arguments) = case showCallCircuit c arguments of
                                     Left e -> e
                                     Right c -> c

These instructions are shown by showDefCircuit and showCallCircuit. It’s possible to get a type-mismatch in these functions, so I’m using Either to track whether there was an error.

Quantum circuits

In order to define a circuit, we need a Circuit type:

data Circuit a = Circuit String [Either Quantum (Classical a)] [Instruction a]

A Circuit has a name String, a list of Either Quantum or Classical parameters, and a list of Instructions.

For convenience I created a type synonym to hold the current text definition of the circuit:

type CircuitText = String

showDefCircuit prints the circuit definition:

showDefCircuit :: (Floating a, Show a, Ord a) => Circuit a -> Either String CircuitText
showDefCircuit (Circuit name _ []) = Left ("Error (showDefCircuit): No instructions in circuit " ++ name)
showDefCircuit (Circuit name parameters instructions) = (Right ("DEFCIRCUIT " ++ (fmap toUpper name))) >>= (defCircuitParameters parameters instructions)

It shows DEFCIRCUIT followed by the circuit name and then calls defCircuitParameters. Notice that if there are no instructions in the circuit, an error will be displayed. Since we’re using the Either monad, we use >>= bind to pass the current result to the next function, or abort if an error was found.

Here’s the definition of defCircuitParameters:

defCircuitParameters :: (Floating a, Show a, Ord a) => [Either Quantum (Classical a)] -> [Instruction a] -> CircuitText -> Either String CircuitText
defCircuitParameters [] instructions circuitText = (Right (circuitText ++ ":")) >>= (defCircuitInstructions instructions)
defCircuitParameters (Left r@(MetaQubitRegister _) : parameters) instructions circuitText = (Right (circuitText ++ " " ++ (show r))) >>= (defCircuitParameters parameters instructions)
defCircuitParameters (Right r@(MetaRegister _) : parameters) instructions circuitText = (Right (circuitText ++ " " ++ (show r))) >>= (defCircuitParameters parameters instructions)
defCircuitParameters p _ _ = Left ("Error (defCircuitParameters): Type mismatch for parameter " ++ (show p))

Circuit parameters are shown in order, followed by a : character and a newline.

Notice that only Left r@(MetaQubitRegister _) and Right r@(MetaRegister _) are pattern matched. This ensures that the function will evaluate to a Left error if we pass anything other than a meta-register in as a circuit parameter. For example, the following will result in an error message:

showDefCircuit (Circuit "foo" [Left (QubitRegister 3)] [...])

because QubitRegister 3 is a literal qubit register, not a placeholder for a qubit register.

Finally, defCircuitInstructions is called. Since there are a lot of instructions and they’re basically all the same format, I’ll just show how the RX instruction is defined:

defCircuitInstructions :: (Floating a, Show a, Ord a) => [Instruction a] -> CircuitText -> Either String CircuitText
...
defCircuitInstructions (instruction@(RX (RealConstant _) (MetaQubitRegister _)) : instructions) circuitText = circuitInstruction (instruction : instructions) circuitText
defCircuitInstructions (instruction@(RX (ComplexConstant _) (MetaQubitRegister _)) : instructions) circuitText = circuitInstruction (instruction : instructions) circuitText
defCircuitInstructions (instruction@(RX (MetaRegister _) (MetaQubitRegister _)) : instructions) circuitText = circuitInstruction (instruction : instructions) circuitText

Since RX can take a constant or a register as an argument, defCircuitInstructions pattern matches against either a RealConstant or a ComplexConstant, in addition to a MetaRegister, for RX. The second parameter of RX is a qubit register, so we ensure that a MetaQubitRegister was used as an argument to RX in the Circuit definition.

Again, if any literal registers are passed into the circuit definition, a type mismatch error is produced when the circuit is assembled.

defCircuitInstructions just calls circuitInstruction to show the instruction:

circuitInstruction :: (Floating a, Show a, Ord a) => [Instruction a] -> CircuitText -> Either String CircuitText
circuitInstruction (instruction : instructions) circuitText = (Right (circuitText ++ "\n " ++ (show instruction))) >>= (defCircuitInstructions instructions)

showCallCircuit is similar:

showCallCircuit :: (Floating a, Show a, Ord a) => Circuit a -> [Either Quantum (Classical a)] -> Either String CircuitText
showCallCircuit (Circuit name _ _) [] = Right name
showCallCircuit (Circuit name parameters _) arguments = (Right name) >>= callCircuitArguments parameters arguments

callCircuitArguments shows the arguments to the circuit call:

callCircuitArguments :: (Floating a, Show a, Ord a) => [Either Quantum (Classical a)] -> [Either Quantum (Classical a)] -> String -> Either String String
callCircuitArguments [] [] circuitText = Right circuitText
callCircuitArguments (Left (MetaQubitRegister _) : parameters) (Left q@(QubitRegister _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show q))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Left (MetaQubitRegister _) : parameters) (Left q@(MetaQubitRegister _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show q))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Right (RealConstant _) : parameters) (Right c@(RealConstant _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show c))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Right (ComplexConstant _) : parameters) (Right c@(ComplexConstant _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show c))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Right (MetaRegister _) : parameters) (Right r@(Register _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show r))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Right (MetaRegister _) : parameters) (Right r@(Range _ _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show r))) >>= callCircuitArguments parameters arguments
callCircuitArguments (Right (MetaRegister _) : parameters) (Right r@(MetaRegister _) : arguments) circuitText = (Right (circuitText ++ " " ++ (show r))) >>= callCircuitArguments parameters arguments
callCircuitArguments _ (a : arguments) _ = Left ("Error (callCircuitArguments): Type mismatch for argument " ++ (show a))

Again, pattern matching ensures the correctness of this code. If the Circuit has a Left (MetaQubitRegister _) as its first parameter, then the corresponding argument must be a Left q@(QubitRegister _). If you try to pass a classical register as the first argument instead, an error message results.

Building a circuit

Now that we know how a Circuit definition and call are printed, it’s useful to look at a definition of a simple circuit. Here’s how to define the BELL circuit which is defined in the Quil paper:

testCircuit :: (Floating a, Show a, Ord a) => Circuit a
testCircuit = let parameters@[Left a, Left b] = [Left (MetaQubitRegister "a"), Left (MetaQubitRegister "b")]
                  instructions = [(Hadamard a),
                                  (CNot a b)]
              in Circuit "BELL" parameters instructions
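
Showing this circuit with show (DefCircuit testCircuit) produces the following Quil:

DEFCIRCUIT BELL a b:
    H a
    CNOT a b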

There are two benefits to using a function like this to build a Circuit.

Firstly, we know that the parameters in the DEFCIRCUIT line must be the same parameters used in the calls to H and CNOT; it’s impossible to accidentally use an undefined parameter name in the circuit. For example, we know for sure that this code won’t be produced by show DefCircuit:

DEFCIRCUIT BELL a b:
    H x
    CNOT a b

Because “x” isn’t in the parameters list.

Secondly, the parameters a and b are type-checked as MetaQubitRegisters, so it’s impossible to accidentally pass a classical register or constant into the BELL circuit.

Conclusion

With all of the above we can define a silly quantum program which certainly isn’t going to do anything practical, but will compile correctly:

import Data.Complex
import Register
import Instruction

bellCircuit :: (Floating a, Show a, Ord a) => Circuit a
bellCircuit = let parameters@[Left a, Left b] = [Left (MetaQubitRegister "a"), Left (MetaQubitRegister "b")]
                  instructions = [(Hadamard a),
                                  (CNot a b)]
              in Circuit "BELL" parameters instructions

testRXCircuit :: (Floating a, Show a, Ord a) => Circuit a
testRXCircuit = let parameters@[Left a] = [Left (MetaQubitRegister "a")]
                    instructions = [(RX (ComplexConstant (5.0 :+ 10.0)) a)]
                in Circuit "TESTRX" parameters instructions

compile :: IO ()
compile = putStrLn "Compiling quantum executable" >>
          putStrLn (show $ CNot (QubitRegister 0) (QubitRegister 1)) >>
          putStrLn (show $ PSwap (ComplexConstant (5.0 :+ (-3.2))) (QubitRegister 0) (QubitRegister 1)) >>
          putStrLn (show $ Measure (QubitRegister 4)) >>
          putStrLn (show $ MeasureOut (QubitRegister 4) (Register 5)) >>
          putStrLn (show $ DefCircuit bellCircuit) >>
          putStrLn (show $ CallCircuit bellCircuit [Left (QubitRegister 5), Left (QubitRegister 3)]) >>
          putStrLn (show $ DefCircuit testRXCircuit)

The output of the compile function is this Quil assembly:

CNOT 0 1
PSWAP(5.0-3.2i) 0 1
MEASURE 4
MEASURE 4 [5]
DEFCIRCUIT BELL a b:
    H a
    CNOT a b

BELL 5 3
DEFCIRCUIT TESTRX a:
    RX(5.0+10.0i) a

Since we’re still working with Haskell, we can benefit from all of our usual features like maps, folds, and monads! For example, this expression:

 foldl (\a x -> a >> putStrLn (show x))
       (return ())
       (fmap (\x -> CallCircuit bellCircuit x)
             (fmap (\(x, y) -> [(Left (QubitRegister x)), (Left (QubitRegister y))])
                   (zip [0..5] [1..6])))

prints the following instructions:

BELL 0 1
BELL 1 2
BELL 2 3
BELL 3 4
BELL 4 5
BELL 5 6
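
Incidentally, the fold over IO actions above could also be written more directly with mapM_:

mapM_ (putStrLn . show . CallCircuit bellCircuit)
      (fmap (\(x, y) -> [Left (QubitRegister x), Left (QubitRegister y)])
            (zip [0..5] [1..6]))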

As one final example, here’s a high level circuit with a nested if statement:

hadamardRotateXYZ :: (Floating a, Show a, Ord a) => Complex a -> Circuit a
hadamardRotateXYZ rotation
    = let parameters@[Left a, Left b, Right r, Left z] = [Left (MetaQubitRegister "a"), Left (MetaQubitRegister "b"), Right (MetaRegister "r"), Left (MetaQubitRegister "z")]
          instructions = [(Hadamard a),
                          (MeasureOut a r)] ++
                          (ifC "HROTXYZTHEN0" "HROTXYZEND0"
                               r
                               [(RX (ComplexConstant rotation) z)]
                               ([(Hadamard b),
                                 (MeasureOut b r)] ++
                                (ifC "HROTXYZTHEN1" "HROTXYZEND1"
                                     r
                                     [(RY (ComplexConstant rotation) z)]
                                     [(RZ (ComplexConstant rotation) z)])))
      in Circuit "HROTXYZ" parameters instructions

which produces the following assembly code:

DEFCIRCUIT HROTXYZ a b r z:
    H a
    MEASURE a r
    JUMP-WHEN @HROTXYZTHEN0 r
    H b
    MEASURE b r
    JUMP-WHEN @HROTXYZTHEN1 r
    RZ(0.70710678118+0.70710678118i) z
    JUMP @HROTXYZEND1
    LABEL @HROTXYZTHEN1
    RY(0.70710678118+0.70710678118i) z
    LABEL @HROTXYZEND1
    JUMP @HROTXYZEND0
    LABEL @HROTXYZTHEN0
    RX(0.70710678118+0.70710678118i) z
    LABEL @HROTXYZEND0
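
The ifC helper isn’t shown in this post, but a minimal sketch consistent with the output above might look like this (the JumpWhen, Jump, and Label constructors come from the Instruction type defined earlier):

--Expand a condition register and then/else instruction lists into Quil's
--jump/label sequence. The else branch is laid out first, matching the
--assembly above.
ifC :: String -> String -> Classical a -> [Instruction a] -> [Instruction a] -> [Instruction a]
ifC thenLabel endLabel r thenInstructions elseInstructions
    = [JumpWhen thenLabel r] ++
      elseInstructions ++
      [Jump endLabel, Label thenLabel] ++
      thenInstructions ++
      [Label endLabel]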

One notable deficiency of the HROTXYZ circuit above is that it includes unnecessary jumps. This could be solved by using an optimizing compiler and a full high level language.

High level quantum assembly in Haskell was a pretty fun weekend project and probably the best example I’ve ever written of how Haskell’s built in type checking can be useful in a specialized problem domain.

The source code for this project is available at https://github.com/WhatTheFunctional/Hasquil.

Some time in future I’d like to define a functional quantum high level language, but I’ve never written a compiler for a functional language, so I’ll have to learn how to do that first.

Making an Ecosystem Simulation in Haskell (Part 3)

In this final part of my series on making an ecosystem simulation using Haskell, I’ll discuss how I represented the behaviors of creatures and how they interact with the world.

Updating the world

The updateWorld function is called by simulateWorld, which we discussed last time.

updateWorld :: RandomGen g => WorldState g -> WorldState g
updateWorld worldState@(WorldState {iteration = thisIteration,
                                    io = thisIO,
                                    grid = thisGrid})
    = foldr resetCreature (foldr updateCreature newWorldState coordinates) coordinates
          where newWorldState = setIO (thisIO >> putStrLn ("Iteration: " ++ (show thisIteration))) worldState
                coordinates = (,) <$> [1 .. (nrows thisGrid)] <*> [1 .. (ncols thisGrid)]

updateWorld takes the existing WorldState and calls updateCreature at every coordinate with foldr, followed by calling resetCreature on every coordinate.

Each Creature has a Bool which stores whether it has acted on this iteration. resetCreature sets the active Bool to False for each creature so they can act again next iteration.

resetCreature :: RandomGen g => (Int, Int) -> WorldState g -> WorldState g
resetCreature (i, j) worldState@(WorldState {grid = thisGrid})
    | coordinatesAreInGrid (i, j) thisGrid &&
      creature /= Empty
          = unsafeSetCreature (setCreatureActed False creature) i j worldState
    | otherwise = worldState
        where creature = unsafeGet i j thisGrid

updateCreature is responsible for performing all of the behaviors for each creature in the world.

updateCreature :: RandomGen g => (Int, Int) -> WorldState g -> WorldState g
updateCreature (i, j) worldState@(WorldState {grid = thisGrid})
    | coordinatesAreInGrid (i, j) thisGrid &&
      creature /= Empty &&
      not (creatureHasActed creature)
          = let (Location _ _ _ newWorldState)
                = execState (state creatureDeath >>
                             state chooseBehavior >>
                             state performBehavior) (Location (incrementHunger $ incrementLifetime creature) i j worldState)
            in newWorldState
    | otherwise = worldState
        where creature = unsafeGet i j thisGrid

It achieves this by calling three state processors on each creature after incrementing the Creature‘s hunger and lifetime values. The three state processors are creatureDeath, chooseBehavior, and performBehavior.

The life and death of a virtual creature

creatureDeath, chooseBehavior, and performBehavior are state processors which have a Location as both their state and value. State processors are explained in my previous post.

creatureDeath is responsible for ensuring that a creature dies if it has exceeded its lifetime total or if its hunger is over its starvation threshold. It does this by calling lifetimeDeath and hungerDeath which will evaluate to Empty if the creature is dead.

creatureDeath :: RandomGen g => Location g -> (Location g, Location g)
creatureDeath location@(Location creature i j worldState@(WorldState {io = thisIO}))
    = (newLocation, newLocation)
        where newCreature = lifetimeDeath $ hungerDeath creature
              newIO = thisIO >>
                      putStr ((show i) ++ ", " ++ (show j) ++ " (" ++ (show creature) ++ "): ") >>
                      putStrLn ("Lifetime: " ++ (show (getLifetime creature)) ++ " Hunger: " ++ (show (getHunger creature)) ++ " State: " ++ (show (getState creature)))
              newLocation = Location newCreature i j (setIO newIO $ unsafeSetCreature newCreature i j worldState)
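
lifetimeDeath and hungerDeath aren’t shown here (they live in the repo), but a minimal sketch of hungerDeath might look like this, assuming a hypothetical per-species starvation threshold:

--Sketch: a Rabbit becomes Empty once its hunger passes a starvation
--threshold. rabbitStarvationThreshold is a made-up name for illustration.
hungerDeath :: Creature -> Creature
hungerDeath creature@(Rabbit _ h _ _)
    | h > rabbitStarvationThreshold = Empty
    | otherwise = creature
hungerDeath creature = creature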

chooseBehavior acts as the brain of a Creature; it implements a finite state machine (FSM) using pattern matching. Here are the states for a Rabbit:

chooseBehavior :: RandomGen g => Location g -> (Location g, Location g)
...
--Rabbit
chooseBehavior location@(Location creature@(Rabbit l h Wander a) i j _)
    | length predators > 0 = let newLocation = unsafeSetCreatureInLocation (Rabbit l h Flee a) i j location
                             in (newLocation, newLocation)
    | h > 0 = let newLocation = unsafeSetCreatureInLocation (Rabbit l h Graze a) i j location
              in (newLocation, newLocation)
    | otherwise = (location, location)
        where predators = searchFor predatorSearch (getSearchDistance creature) location
chooseBehavior location@(Location creature@(Rabbit l h Graze a) i j _)
    | length predators > 0 = let newLocation = unsafeSetCreatureInLocation (Rabbit l h Flee a) i j location
                             in (newLocation, newLocation)
    | h <= 0 = let newLocation = unsafeSetCreatureInLocation (Rabbit l h Wander a) i j location
               in (newLocation, newLocation)
    | otherwise = (location, location)
        where predators = searchFor predatorSearch (getSearchDistance creature) location
chooseBehavior location@(Location creature@(Rabbit l h Flee a) i j worldState)
    | length predators == 0 = let newLocation = unsafeSetCreatureInLocation (Rabbit l h Wander a) i j location
                              in (newLocation, newLocation)
    | otherwise = (location, location)
        where predators = searchFor predatorSearch (getSearchDistance creature) location
...

As you can see, each pattern for chooseBehavior matches a Rabbit in a different state. The first function matches a Rabbit in the Wander state, the second matches a Rabbit in the Graze state, and the final pattern matches a Rabbit in the Flee state.

The first thing which happens in each state is that the searchFor function is called with the predatorSearch function as a parameter. searchFor predatorSearch evaluates to a list of all predators in visual range.

Then guards are evaluated for each state:

If the number of predators is greater than 0, the Rabbit is set to the Flee state.

The Rabbit’s hunger, h, is also tested: the Rabbit chooses to Graze or Wander depending on whether it’s hungry.

If the Rabbit is in the Flee state and no longer detects any predators, it returns to the Wander state.

Fox and Wolf also have pattern matched FSMs.

performBehavior is responsible for updating the WorldState based on the Creature‘s current state.

performBehavior :: RandomGen g => (Location g) -> (Location g, Location g)
performBehavior location@(Location Empty _ _ _) = (location, location)
performBehavior location@(Location (Rabbit _ _ Wander _) _ _ _) = runState (state wander >> state reproduce) location
performBehavior location@(Location (Rabbit _ _ Graze _) _ _ _) = graze location
performBehavior location@(Location (Rabbit _ _ Flee _) _ _ _) = flee location
performBehavior location@(Location (Fox _ _ Wander _) _ _ _) = runState (state wander >> state reproduce) location
performBehavior location@(Location (Fox _ _ Hunt _) _ _ _) = runState (state hunt >> state consume) location
performBehavior location@(Location (Fox _ _ Flee _) _ _ _) = flee location
performBehavior location@(Location (Wolf _ _ Wander _) _ _ _) = runState (state wander >> state reproduce) location
performBehavior location@(Location (Wolf _ _ Hunt _) _ _ _) = runState (state hunt >> state consume) location

The function calls the behaviors: wander, reproduce, graze, flee, and hunt based on the pattern matched Creature state.

Each of the behaviors acts in a similar manner, so I’m only going to cover the wander behavior in detail.

wander :: RandomGen g => Location g -> (Location g, Location g)
wander location@(Location creature i j worldState@(WorldState {generator = thisGenerator}))
    | null mates = moveCreature (neighborCoordinates (i, j) randomNumber) (Location newCreature i j newWorldState)
    | fst (mates !! randomNumber) < i = moveCreature (i - 1, j) (Location newCreature i j newWorldState)
    | fst (mates !! randomNumber) > i = moveCreature (i + 1, j) (Location newCreature i j newWorldState)
    | snd (mates !! randomNumber) < j = moveCreature (i, j - 1) (Location newCreature i j newWorldState)
    | snd (mates !! randomNumber) > j = moveCreature (i, j + 1) (Location newCreature i j newWorldState)
    | otherwise = let newLocation = Location newCreature i j newWorldState in (newLocation, newLocation)
        where mates = searchFor (mateSearch i j) (getSearchDistance creature) location
              (randomNumber, newGenerator) = if null mates
                                             then randomR (0 :: Int, 3 :: Int) thisGenerator
                                             else randomR (0 :: Int, ((length mates) - 1) :: Int) thisGenerator
              newCreature = setCreatureActed True creature
              newWorldState = unsafeSetCreature newCreature i j $ setGenerator newGenerator worldState

wander causes the Creature to move randomly around the world. The searchFor (mateSearch i j) function is called first and evaluates to a list of all potential mates in the region. If a mate is found, the Creature moves one cell towards a random mate in the region. If no mate is found, the Creature moves one cell in a random direction.
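
neighborCoordinates isn’t shown here; presumably it maps a random number in [0, 3] to one of the four neighboring cells, something like this sketch:

--Sketch: pick one of the four cardinal neighbors based on a random number.
neighborCoordinates :: (Int, Int) -> Int -> (Int, Int)
neighborCoordinates (i, j) 0 = (i - 1, j)
neighborCoordinates (i, j) 1 = (i + 1, j)
neighborCoordinates (i, j) 2 = (i, j - 1)
neighborCoordinates (i, j) _ = (i, j + 1)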

A living world

So, that’s how you make a set of creatures in a virtual world in Haskell. The State monad made the whole process a lot simpler, because it’s not necessary to manually track all of the state in every function call. In the end, it comes down to binding one State processor to another to define a set of processes which the creatures should follow.

The source code for this project is available at https://github.com/WhatTheFunctional/EcosystemSimulation.

Making an Ecosystem Simulation in Haskell (Part 2)

Last time I discussed how to work with the Haskell Stack and random number generators using a simple Ecosystem Simulation as an example. This week I’ll continue using the Ecosystem Simulation example to discuss how to work with the State monad.

State in Haskell

Since Haskell is a (mostly) pure functional language and each function is referentially transparent by default, you can’t just keep the state of a simulation in a variable and update it as the simulation evolves over time. Each call to your simulation must take in the current state and produce a new state as an output.

The simplest way to do this is to pass the old state as an argument to each function, but this results in a lot of extra code and turns your code into a chain of nested function calls like this:

function3 (state0, state1, state2)
    | (recursionComplete state0 state1 state2) = (state0, state1, state2)
    | otherwise = function3 (
                      function2 (
                          function1 (
                              function0 (state0, state1, state2))))

This code calls function0, function1, function2, and function3 until the recursionComplete function evaluates to True. As you can see, the functions are listed in reverse order compared to the order in which they execute. Also note that this function would increase in size as the size of the tuple of state objects increases.

Ideally, we want something that looks more like an imperative program, where the functions are listed in order, like this recursive example using Python:

def function3(stateRecord):
    if recursionComplete(stateRecord):
        return stateRecord
    else:
        function0(stateRecord)
        function1(stateRecord)
        function2(stateRecord)
        return function3(stateRecord)

Notice how function0, function1, and function2 don’t return a value? Since Python is not a pure functional language, these functions can mutate the stateRecord variable in place (depending on how they’re written; more information about that here).

Haskell doesn’t allow mutation of bindings, so we have to deal with passing constant values between function calls somehow. Fortunately, Haskell provides a solution to this problem which results in code which is relatively compact, and where function calls are written in the same order as they evaluate.

The State monad

For the purpose of this post, let’s use this definition of a State monad from the understanding monads article:

newtype State s a = State {runState :: s -> (a, s)}

In practice, the State monad is actually defined using a monad transformer. I’ll be covering monad transformers in a later post when I’ve actually learned how they work.

You can import the State monad using the command:

import Control.Monad.State

As you can see above, the State monad actually wraps a function called runState which takes a state, of type s, and evaluates to a tuple, (a, s), where a is the type of the result of the computation. So the State monad doesn’t represent a state itself, but rather a state processor which performs a stateful computation. In order to make a state processor, you can use the state function with another function as an argument:

incrementIntFunction :: Int -> (Int, Int)
incrementIntFunction i = (i, i + 1)

incrementInt :: State Int Int
incrementInt = state incrementIntFunction

If you call runState with a State and an initial value, you get the result of the state processor, as well as a new state. For example, consider applying incrementInt to the initial value 4:

runState incrementInt 4 --Evaluates to (4, 5)

I like to think of the second value in the result tuple as some side-car data which is carried around by the computation.

Of course, the State monad is an instance of the Monad typeclass (you can read more about monads in my post Modeling Generalized Behaviors and Imprisoning Side Effects):

instance Monad (State s) where
    return :: a -> State s a
    return x = state (\s -> (x, s))

    (>>=) :: State s a -> (a -> State s b) -> State s b
    p >>= k = state (\s0 ->
                     let (x, s1) = runState p s0
                     in runState (k x) s1)

    (>>) :: State s a -> State s b -> State s b
    p0 >> p1 = state (\s0 ->
                       let (_, s1) = runState p0 s0
                       in runState p1 s1)

Let’s examine what this means, starting with the return function. In the case of the State monad, return x evaluates to a state processor that takes any state and produces the value x. For example:

runState (return 5) "InitialState" --Evaluates to (5, "InitialState")

return 5 evaluates to a state processor which produces the value 5.

The State monad’s bind operation takes a state processor and a function which takes the output of the state processor and produces another state processor. It evaluates to a new state processor.

The >>= bind operation creates a new state processor which takes a state, s0, applies the first state processor, p, to that state using runState, producing a new value, x, and state s1. It then passes the new value into the function k, producing a new state processor (k x) which is applied to the new state s1.

The effect of all this is that the state processor p is run and its output is fed into the state processor created by k. This allows us to chain state processors together in a manner similar to an imperative program. For example:

runState (incrementInt >>= (\x -> state (\s -> (show (x * x), s + 1)))) 5 --Evaluates to ("25", 7)

This statement first runs incrementInt with an initial state of 5, which produces the tuple (5, 6), and then the value, 5, is passed in as x to the state processor state (\s -> (show (x * x), s + 1)), which squares it and produces the tuple (“25”, 7).

The >> operator is easier to understand: it simply runs the first state processor and uses the resulting state as the input to the second state processor. For example:

runState (incrementInt >>
          incrementInt) 5 --Evaluates to (6, 7)
runState (incrementInt >>
          (state (\x -> (show x, x)))) 5 --Evaluates to ("6", 6)

Notice how >> allows us to specify state processors in the same order as they are evaluated. It’s also useful that we don’t have to explicitly pass the current state to each state processor.

Now we can reproduce the function3 example in a way that’s sort of similar to the imperative implementation:

data StateRecord = StateRecord {state0 :: TypeA,
                                state1 :: TypeB,
                                state2 :: TypeC}

function0 :: State StateRecord StateRecord
...

function1 :: State StateRecord StateRecord
...

function2 :: State StateRecord StateRecord
...

function3 :: State StateRecord StateRecord
function3 = state (\s ->
                   if recursionComplete s
                   then (s, s)
                   else runState (function0 >>
                                  function1 >>
                                  function2 >>
                                  function3) s)

I admit that it’s not quite as nice looking as the Python example, but the functions are at least listed in the same order they are executed, and the state is not explicitly passed between the functions. If no conditionals are necessary, a state processor can simply consist of a chain of state processors interspersed with >> binds. With the use of do notation, function3 can look a lot like imperative code:

function3 :: State StateRecord StateRecord
function3 = state (\s ->
                   if recursionComplete s
                   then (s, s)
                   else runState (do function0
                                     function1
                                     function2
                                     function3) s)

I’m not a big fan of do notation in general, but I will concede that it makes a lot of sense to use it with the State monad.

Bear in mind that although this looks like imperative code, it’s still completely referentially transparent and, other than the side-car state data, it’s still side-effect free.

There are a few helper functions which are convenient to use with the State monad:

put :: s -> State s ()
put newState = state (\_ -> ((), newState))

get :: State s s
get = state (\s -> (s, s))

put is used to insert a state into a stateful computation, ignoring whatever the state was before.

get is used to transfer the state into the value of our state processing, so it can be used in further computations.
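
For example:

runState (put 10 >> get) 5 --Evaluates to (10, 10)
runState (get >>= (\s -> put (s + 1))) 5 --Evaluates to ((), 6)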

evalState :: State s a -> s -> a
evalState p s = fst (runState p s)

execState :: State s a -> s -> s
execState p s = snd (runState p s)

evalState is used to get the value from a state processor.

execState is used to get the state from a state processor.
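
Using incrementInt from earlier:

evalState incrementInt 4 --Evaluates to 4
execState incrementInt 4 --Evaluates to 5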

The state of the world

So how does this all work in a larger program with more state? In order to find out, I used the State monad to simulate a little world populated with virtual animals.

The entire ecosystem is modeled using a WorldState record type in src/World.hs:

data WorldState g = WorldState {iteration :: Int,
                                io :: IO (),
                                generator :: g,
                                grid :: Matrix Creature}

The WorldState consists of an iteration count, an IO stream, a random number generator, and a grid of Creatures.

The WorldState is created using the function makeWorld:

makeWorld :: RandomGen g => Int -> IO () -> g -> Matrix Creature -> Maybe (WorldState g)
makeWorld thisIteration thisIO thisGenerator thisGrid
    = Just (WorldState {iteration = thisIteration,
                        io = thisIO,
                        generator = thisGenerator,
                        grid = thisGrid})

I added a call to makeWorld using the grid which I populated previously in runSimulation:

runSimulation :: IO ()
runSimulation = let width = 30
                    height = 30
                    initialGrid = initGrid width height
                    generator = mkStdGen 126590563
                    (initialCount, newGenerator) = randomR (10 :: Int, floor ((fromIntegral (width * height)) * 0.1)) generator
                    initialCoordinates = take initialCount (shuffle' ((,) <$> [1..width] <*> [1..height]) (width * height) newGenerator)
                    initialPopulation = unfoldr (generatePopulation 70 25) (initialCoordinates, newGenerator)
                    iGrid = populateGrid initialPopulation initialGrid
                in putStrLn ("Population simulation with " ++ (show initialCount) ++ " creatures.\n") >>
                   performIO (evalState simulate (iGrid >>= makeWorld 0 (return ()) newGenerator))

Note that I used the >>= bind to bind the result of the populateGrid function, which is a Maybe (Matrix Creature), to the makeWorld function. The result is a Maybe (WorldState g), which is used as the initial state of the simulation.

I call evalState on the simulate state processor, which performs the simulation on the world. Then I call performIO on the Maybe (WorldState g) to print the state of the world to standard output.

Before we dive into the simulate state processor, let’s quickly look at the definition of performIO:

performIO :: RandomGen g => Maybe (WorldState g) -> IO ()
performIO Nothing = return ()
performIO (Just (WorldState {io = thisIO})) = thisIO

performIO simply evaluates to the IO () field of the WorldState record.

This is how I defined the simulate state processor:

simulate :: RandomGen g => State (Maybe (WorldState g)) (Maybe (WorldState g))
simulate = maybeStep simulateWorld >>
           maybeStep printWorld >>
           maybeStep waitWorld >>
           maybeStep incrementWorld >>=
           (\worldState -> 
             case worldState of
             Nothing -> get
             Just (WorldState {iteration = thisIteration})
                -> if thisIteration > 1000
                   then get
                   else simulate)

The simulator consists of a chain of sub-state processors, chained together with >> binds. The simulation first simulates the world, then prints it, waits for user input, and increments the iteration counter. Then it checks whether thisIteration is greater than 1000, in which case I call get to terminate the simulation and move the state into the value of the state processor. If thisIteration is <= 1000, the simulation continues recursively.

maybeStep converts a function which takes a WorldState g and produces a Maybe (WorldState g) into a state processor which acts on a Maybe (WorldState g):

maybeStep :: RandomGen g => (WorldState g -> (Maybe (WorldState g))) -> State (Maybe (WorldState g)) (Maybe (WorldState g))
maybeStep updateFunction = state (\worldState -> let newWorldState = worldState >>= updateFunction --worldState has type Maybe (WorldState g)
                                                 in (newWorldState, newWorldState))

simulateWorld updates the state of the world:

simulateWorld :: RandomGen g => WorldState g -> Maybe (WorldState g)
simulateWorld worldState = Just (updateWorld worldState)

printWorld adds a printGrid call to the IO stream of the WorldState:

printWorld :: RandomGen g => WorldState g -> Maybe (WorldState g)
printWorld worldState@(WorldState {io = thisIO, grid = thisGrid})
    = Just (setIO (thisIO >> printGrid thisGrid) worldState)

waitWorld adds some print-outs and a getChar to the IO stream of the WorldState:

waitWorld :: RandomGen g => WorldState g -> Maybe (WorldState g)
waitWorld worldState@(WorldState {io = thisIO})
    = Just (setIO (thisIO >> putStrLn "-----" >> hFlush stdout >> getChar >> return ()) worldState)

incrementWorld increments the current iteration of the WorldState:

incrementWorld :: RandomGen g => WorldState g -> Maybe (WorldState g)
incrementWorld worldState = Just (incrementIteration worldState)

Working with the State monad made it easy to define the stages of the simulation, and adjust the order in which operations are performed. It also made it much easier to write concise function definitions when a sequence of operations needs to be performed, like in the simulate function. Compared to my method of handling state in Making a Text Adventure in Haskell, this is much more readable and easier to debug.

We’ll cover the updateWorld function, which updates the creatures in the world, in my next post.

The code for this simulation is available at Ecosystem Simulation.

Resources:

State Monad

Haskell/Understanding monads/State