Functional Programming with Haskell

COMP 211, the first computer science course I ever signed up for, was taught in the Racket programming language. If you haven't heard of it, you're in the majority. You may have heard of functional programming or Lisp; Racket is one of the more obscure members of the Lisp family of functional languages.

As a college freshman, I had certainly never heard of Racket. All of my programming experience (limited as it was) involved imperative languages, and the professor had his hands full teaching us the basics. At the end of the semester I moved on to 300-level courses taught in Java. The department retired COMP 211 and my knowledge of Racket began to fade.

From time to time, though, I would come across a discussion of functional programming on the internet. The postings were full of mysterious words like “functor” and “monad”, and I couldn't help but be intrigued. What are monads? They sound pretty cool. I want to be in the monad club.

So functional programming has been in the back of my mind for a while. This spring, I finally sat down to learn it, settling on the Haskell language. I was prepared to slog through lots of esotericism, but I found Haskell quite accessible. In fact, I recommend it: It's given me a fresh perspective on the day-to-day problems we tackle at work, and though I'm hardly an expert, I'm certain it's made me a better developer.

Here are highlights of my experience. It turns out that monads have been thoroughly covered by other people—we've got blog posts about blog posts about blog posts about monads—but I've put together a walkthrough of two other Haskell features that are powerful and easy to understand.

Why Haskell?

Actually, I'm going to start with a discussion of Java, which is more widely known and gives us some useful context. This is going to be old hat for lots of you, but bear with me!

The Java language is rooted in imperative, object-oriented programming. Code is expressed as a sequence of commands, and the commands are bundled into methods that can be individually invoked. Pieces of program state are bundled into objects. Methods are attached to objects, accept objects as input, and return objects as output, and in this way the objects can interact with each other.

As a quick illustration, let's tackle an example problem. Given a list of strings, each representing an integer, we'd like to parse the integers out of the strings and put them in another list, but only if they're greater than zero. An imperative solution might look like this:

List<Integer> parsePositiveInts(List<String> strings)
{
    List<Integer> ints = new ArrayList<>();
    Iterator<String> iter = strings.iterator();

    while (iter.hasNext())
    {
        String string = iter.next();
        int i = Integer.parseInt(string);

        if (i > 0)
        {
            ints.add(i);
        }
    }

    return ints;
}

We've written a single method that accepts a List object containing strings and returns another List containing integers. We first call the iterator() method of the input list, retrieving an Iterator object used to iterate over the list's contents. With repeated calls to the iterator's next() method, we retrieve individual strings, converting each one to an integer. The integers are conditionally added to a new ArrayList using its add() method.

It's a trivial example, but similar logic appears time after time in real applications. We often find ourselves applying a transformation to each element in a collection or filtering the contents of a collection based on various criteria. In the example above, our transformation and our filtering predicate are embedded in the boilerplate iteration code. What if we could write the transformation and the predicate without the boilerplate?

Functional programming is a programming paradigm that attempts to solve this problem. It's built around higher-order functions: functions that accept or return other functions. The poster children, available across many languages, are map and filter. map applies a transformation to a collection or stream of elements, while filter discards elements based on a predicate. That's exactly what we wanted to do in the example above.

With the 2014 release of Java 8, Oracle added some functionally-inspired features to the language. They formalized the concept of a “functional interface”, which is any ordinary Java interface with exactly one abstract method. An implementation of a functional interface represents that one method's behavior, nothing more and nothing less. Several new functional interfaces were introduced in the java.util.function package.

Among those interfaces is ToIntFunction, which represents a transformation from an object to an integer primitive. Our integer-parsing transformation could be implemented like so:

ToIntFunction<String> parseInt = new ToIntFunction<String>()
{
    @Override
    public int applyAsInt(String s)
    {
        return Integer.parseInt(s);
    }
};

Our integer predicate could be implemented as an IntPredicate:

IntPredicate isPositive = new IntPredicate()
{
    @Override
    public boolean test(int i)
    {
        return i > 0;
    }
};

All of this had been possible before—indeed, some libraries had included functional interfaces, described as such, for years—but the version 8 release was a categorical endorsement of functions as first-class entities. To this end, it introduced concise syntax for declaring anonymous functional interface implementations:

ToIntFunction<String> parseInt = Integer::parseInt;
IntPredicate isPositive = i -> i > 0;

The new syntax was complemented by a slew of standard library additions written in the functional style. The Stream interface introduced in version 8 allows streams of objects to be transformed, filtered, and generally manipulated every which way. When we replace the boilerplate in our example code with a stream, it becomes shorter, clearer, and more obviously correct.

List<Integer> parsePositiveInts(List<String> strings)
{
    return strings.stream()
            .mapToInt(Integer::parseInt)
            .filter(i -> i > 0)
            .boxed().collect(toList());
}

This is much nicer than the imperative approach, but there are still some pitfalls to be aware of. The Stream documentation recommends that user-provided functions be “non-interfering and stateless”, meaning they should not perturb program state (in particular, the stream's source) and they should not be affected by program state. Violating this assumption of statelessness can lead to confusing or nondeterministic behavior.

class StatefulPredicate implements IntPredicate
{
    private int maxObservedInt;
    private boolean observedAny;

    @Override
    public boolean test(int i)
    {
        if (!observedAny || i > maxObservedInt)
        {
            maxObservedInt = i;
            observedAny = true;
            return true;
        }
        return false;
    }
}

List<Integer> parseAscendingInts(List<String> strings)
{
    return strings.stream()
            .mapToInt(Integer::parseInt)
            .filter(new StatefulPredicate())
            .boxed().collect(toList());
}

Above is an example of a stateful predicate that only accepts integers greater than those it has already seen. In the method parseAscendingInts, we've replaced our old predicate with the new, stateful one. Passing in the list ["1", "2"] will give us [1, 2] as before. However, passing ["2", "1"] will yield [2] because the first integer is greater than the second. The output of this code depends on the order in which integers are passed to the predicate, and that's often problematic. If the input type is changed from List to Set, the input iteration order may not be well-defined. If we call strings.parallelStream() instead of strings.stream(), the result becomes nondeterministic.

We've run into trouble because it's tempting to think of a Predicate as a static set of criteria that a subject might satisfy (or not). But in reality, we have nothing more than a method that accepts an object and returns a boolean, and it's free to return whichever boolean it likes. Two consecutive invocations of a predicate on a particular subject can produce two conflicting answers, and it's not generally possible to catch the problem in advance.

Clearly, it's easier to reason about higher-order functions like filter when we don't have to worry about state, but Java has no way of requiring well-behaved functional interface implementations.

Haskell Can Fix That

Haskell tackles the problem head-on by declaring that all functions are stateless, full stop. The result of a Haskell function must depend only on its input. Because the language restricts our ability to mutate state, the compiler finds itself with an unusual degree of freedom: Stateless computations can be delayed or reordered with no effect on the correctness of the overall program. In fact, Haskell will avoid doing any computation until it absolutely has to (e.g. when a result must be printed to standard output).

Sounds promising! So what does it look like? Let's take a crash course in Haskell, starting with a variable definition:

foo :: Int
foo = 2

Here's the equivalent in Java:

int foo;
foo = 2;

Just as we can combine declaration and assignment in Java (int foo = 2), we can combine them in Haskell (foo = 2 :: Int), but the Haskell declaration is usually kept separate by convention.

On the Java side of things, we could follow our assignment with a second assignment foo = 3 that overwrites the initial value. When we try the same thing in Haskell, the compiler puts its foot down, giving us the error “Multiple declarations of ‘foo’”. Changing the value of foo would constitute a change in our program's state, and we've agreed not to do that. There's an obvious sticking point here: If we can't mutate values, how do we accomplish anything? Going down that path is interesting, but complicated enough to merit a post of its own. Besides, it turns out we can do quite a lot without state.
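
For the record, here's the sort of reassignment the compiler rejects (a contrived sketch; the error is the one quoted above):

foo :: Int
foo = 2
foo = 3  -- error: Multiple declarations of ‘foo’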

Haskell is statically typed, but includes powerful type inference, so not every variable requires a type declaration. We can define a second variable bar = foo + 1 and the compiler will know that it has type Int, the same as foo.

Lots of types we've seen in Java have analogues in Haskell:

myBoolean :: Bool
myBoolean = True

myChar :: Char
myChar = 'a'

myString :: String
myString = "hello"

There are also some types that we haven't seen before.

Lists

A list is a homogeneous, ordered collection of zero or more items. In contrast to Java, all of the items must be of the same type. For example, you can't have a list containing both a String and an Int. The type of a list of Int is written [Int].

Lists are defined recursively. A list is either the empty list, denoted [], or an item x prepended to another list xs using a colon: x:xs. Note that this implies a singly-linked list implementation.

When defining multi-element lists, we can use some syntactic sugar, separating the elements with commas and wrapping the whole thing in square brackets. [1,2,3,4] is equivalent to 1:2:3:4:[]. Strings are actually another example of syntactic sugar—behind the scenes, they're just lists of characters. The String type is an alias for [Char] and the string "hey" is equivalent to 'h':'e':'y':[].
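
Putting that together, here are the same values written with and without the sugar (myInts and greeting are names made up for this example):

myInts :: [Int]
myInts = 1:2:3:4:[]        -- identical to [1,2,3,4]

greeting :: String
greeting = 'h':'e':'y':[]  -- identical to "hey", since String is just [Char]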

Tuples

A tuple is a heterogeneous, ordered collection of a fixed number of items. Whereas lists can contain any number of items of a single type, tuples contain a specific number of items with specific but potentially different types. For example, the type (Int, String) describes a 2-tuple, also referred to as a pair. The first element of the pair will be an integer and the second a string. We can create a tuple by separating elements with commas and wrapping the whole thing in parentheses, as in (4, "oy").

The 0-tuple is called the unit, and both the type and value are written ().
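
As a quick illustration (myPair is a made-up name), here's the pair from above with its type spelled out:

myPair :: (Int, String)
myPair = (4, "oy")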

Functions

Functions are essential in Haskell (as in any functional language). Since a Haskell function can't affect or be affected by program state, it must return the same output given the same inputs, making it closer to a function in the mathematical sense of the word. Similarly, the evaluation of a Haskell function can't cause any observable side effects such as performing I/O. Functions with these properties are known as pure functions.

To get a feeling for how functions work in Haskell, let's look at an example. The head function accepts a list and returns the first element of that list. (Taking the head of an empty list will cause your program to explode at run time, so it's important to only pass lists that are known to be nonempty.) head has type [a] -> a. The a here is a type variable, which is sort of a placeholder that gets replaced with a concrete type when the function is used in context. The -> indicates a function which accepts the type on the left and returns the type on the right. So the overall type [a] -> a tells us that head accepts a list of any type and returns a single element of that type, as expected.

Functions are applied using prefix notation. We write the function name followed by a space and then the parameters:

head [1,2,3,4]

The above expression evaluates to 1.

Function application is left-associative. If we have a list of lists of numbers and we want to retrieve the first number from the first inner list, writing head head [[1]] won't work. We're applying the function head to itself, which doesn't make any sense! We have to use parentheses to apply head to the inner list: head (head [[1]]).

A few functions are special in that they use infix notation by default. We've actually seen an example already in the : function that's used to build lists. Given an element and a list of elements of the same type, : prepends the first element to the list, so it has type a -> [a] -> [a]. Other examples of infix functions include the familiar arithmetic functions +, -, *, and /; also the equality and inequality functions ==, /=, <=, >=, <, and >.

The arithmetic functions have the precedence you would expect, so 1 + 2 * 3 yields 7 and not 9. Prefix function application has a higher precedence than any infix applications, meaning head [1] + head [2] is equivalent to (head [1]) + (head [2]).

Let's look at some more examples of functions that operate on lists and tuples.

  • tail is the counterpart to head, accepting a list and returning everything except the first element. Empty lists will cause a run time failure here too. It has type [a] -> [a].
  • fst and snd accept a 2-tuple and return the first and second tuple elements respectively. fst has type (a, b) -> a and snd type (a, b) -> b.
  • zip takes two lists and combines their elements pairwise into tuples, stopping when it hits the end of the shorter list. For example, zip [1,2,3] "abc" will give us [(1, 'a'), (2, 'b'), (3, 'c')]. The type of zip is [a] -> [b] -> [(a, b)]. All of these functions are shown in action below.
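
Here's each of those in action; the comment after each expression shows the value it evaluates to:

tail [1,2,3,4]      -- [2,3,4]
fst (4, "oy")       -- 4
snd (4, "oy")       -- "oy"
zip [1,2,3] "abc"   -- [(1,'a'),(2,'b'),(3,'c')]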

Finally, we can write functions of our own. The syntax is similar to function application. Check out this groundbreaking code:

increment :: Int -> Int
increment x = x + 1

We can use our new function to add one to any integer. increment 2 will evaluate to 3.

With the preliminaries covered, let's move on to the cool stuff.

Standout Feature #1: Pattern Matching

In many languages, defining a function parameter involves a type and a name and nothing else. The parameter can assume any of the possible values of that type, and it's up to the function implementation to figure out what sort of input it's dealing with.

In Haskell, we provide a type and also one or more patterns that the parameter value might match. Suppose our parameter type is [Int]. The simplest pattern is just a variable name: foo used as a pattern will match any input list, assigning it the name foo. But instead of a variable name, we can also write [], a pattern that matches only the empty list. We can even write [a,b,c], which will match any list containing three elements and assign the names a, b, and c to those elements. When a pattern matches, it's used to deconstruct the input value and assign variable names to its components.

This is best illustrated with another example. Here's a function that accepts a list and returns a string describing how long the list was.

describeLength :: [a] -> String
describeLength [] = "An empty list!"
describeLength [a] = "One element."
describeLength [a, b] = "Two elements."
describeLength xs = "Three or more elements."

Without pattern matching, this would involve a bunch of if-statements, but the patterns let us clean things up considerably. As you can see, we get to provide a different function implementation for each set of patterns. Patterns are matched from top to bottom, so the most general pattern should come last. If we moved our last line before the empty-list case, our function would always return "Three or more elements."

What happens if none of the patterns match? We get a run time failure. It's bad practice to write functions that are undefined for some inputs, so the last pattern in a function declaration is often a catch-all.

In our example, we don't really care what the list elements are, only that they exist. We can make this explicit by using an underscore in our pattern, which does the same thing as a variable name pattern-wise but doesn't extract the matched component.

describeLength [] = "An empty list!"
describeLength [_] = "One element."
describeLength [_, _] = "Two elements."
describeLength _ = "Three or more elements."
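
A few sample calls, with the result of each shown as a comment (remember that a String is just a list of Chars, so it matches these patterns too):

describeLength []          -- "An empty list!"
describeLength [True]      -- "One element."
describeLength "hi"        -- "Two elements."
describeLength [1,2,3,4]   -- "Three or more elements."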

Just to make sure we've got a handle on this, let's work through a second example. We're going to write a function that reverses lists. Because lists are singly-linked, the best approach is to make one pass over the original list and construct the reversed version as we go. Each time we visit an element, we'll add that element to the front of the growing reversed list. I like to visualize this as two stacks of books: imagine we're shifting books one at a time from the first stack to the second. It's not a perfect analogy because the first stack of books is immutable, but hopefully you get the idea.

The implementation below consists of two functions, but the first one does all the work. We can see from its type [a] -> [a] -> [a] that it accepts two lists and returns a third. The first parameter is the input list, or what's left of it. These are the elements that we have yet to transfer to the reversed list. The second parameter acc is the reversed list that we're constructing as we go. (The name "acc" is short for "accumulator", since we're accumulating a result.)

In an imperative language, we might loop over the input list, assigning each element in turn to a shared variable. But functions in Haskell are stateless, and the language has no concept of a loop. Instead, we turn to recursion.

myReverseHelper :: [a] -> [a] -> [a]
myReverseHelper [] acc = acc
myReverseHelper (x:xs) acc = myReverseHelper xs (x:acc)

myReverse :: [a] -> [a]
myReverse xs = myReverseHelper xs []

The first line of myReverseHelper defines our base case, using pattern matching to identify when the input list is empty. If we're out of input, we must be finished constructing the reversed list, so our result is simply the accumulator.

The second line handles the recursive case. We know the input list is not empty; therefore, it must have the form x:xs, where x is the first element of the list and xs is the remainder. We use pattern matching to extract both components, then immediately recurse, passing xs in place of the original input and x:acc as the accumulator value. For each x in the original list, x is prepended to the result.

Lastly, we wrap myReverseHelper in a nicer interface by defining myReverse, which just delegates to the helper function with an empty list as the second parameter.
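
To see how the two parameters interact, it helps to trace the evaluation by hand. Each line below rewrites the expression above it:

myReverse [1,2,3]
  = myReverseHelper [1,2,3] []
  = myReverseHelper [2,3] [1]
  = myReverseHelper [3] [2,1]
  = myReverseHelper [] [3,2,1]
  = [3,2,1]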

Standout Feature #2: Partial Function Application

We made a big deal out of higher-order functions earlier, so let's see what our friends map and filter look like in Haskell.

  • map takes a function and a list and applies the function to each list element, returning a new list with the results. It has type (a -> b) -> [a] -> [b]. Note the parentheses—this is not the same type as a -> b -> [a] -> [b]. The first type represents a two-parameter function (where the first parameter is itself a function) and the second represents a three-parameter function.
  • filter takes a predicate and a list and removes any elements that don't satisfy the predicate, returning a new list. It has type (a -> Bool) -> [a] -> [a]. Both functions are demonstrated below.
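
For instance, using the increment function from earlier and the standard even predicate from the Prelude:

map increment [1,2,3]   -- [2,3,4]
filter even [1,2,3,4]   -- [2,4]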

There's one more thing we need to recreate the Java example, and that's a way to parse integers. We'll use the read function, which, without going into too much detail, is for parsing the string representation of things. read determines what kind of thing it should be parsing based on the (possibly inferred) result type.
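
For example, given a result type to aim for, read turns a string into a value of that type:

read "42" :: Int   -- 42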

Here's a Haskell version of parsePositiveInts:

isPositive :: Int -> Bool
isPositive x = x > 0

parsePositiveInts :: [String] -> [Int]
parsePositiveInts strings = filter isPositive (map read strings)
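
A quick sanity check, mirroring the behavior of the Java version:

parsePositiveInts ["3", "-1", "7"]   -- [3,7]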

One thing that jumps out is that our predicate definition is kind of verbose. In Java, we just wrote x -> x > 0 and let the compiler figure out all the types for us. We can also omit the type here (i.e. remove the first line) and the compiler will infer it. In fact, we can do even better.

Let's take a close look at the type of the map function, which is (a -> b) -> [a] -> [b]. The return type [b] is separated from the parameters by a -> token, but interestingly, there's also a -> between the first and second parameters. What should we make of that? Do both usages mean the same thing? We know that -> indicates a function, and we can tell it isn't left-associative, since (a -> b) -> [a] -> [b] and a -> b -> [a] -> [b] are different types.

It turns out that -> is right-associative, and (a -> b) -> [a] -> [b] is equivalent to (a -> b) -> ([a] -> [b]). In other words, map isn't really a two-parameter function, but a single-parameter function that returns another function. Recall that function application is left-associative. In the expression below, we're mapping the increment function we wrote earlier over a list:

map increment [1,2,3]

Because map is secretly a single-parameter function, what this actually does is apply map to increment, and then apply the resulting function to the list. We could put parentheses around map increment without changing the meaning of the overall expression.

That raises the question of whether map increment can stand on its own. It turns out it can!

mapIncrement :: [Int] -> [Int]
mapIncrement = map increment

If we stop after the first parameter, we get a function that accepts a list of integers and returns a new list with every integer incremented. Neat! Every “multi-parameter” function in Haskell can be partially applied by supplying only some of its arguments, and that leads to extremely concise code.
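
Applying our partially applied function works exactly as you'd hope:

mapIncrement [1,2,3]   -- [2,3,4]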

Infix functions can be partially applied as well. Returning to our integer-parsing example, we can move the predicate inside the body of the main function.

parsePositiveInts :: [String] -> [Int]
parsePositiveInts strings = filter (> 0) (map read strings)
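
Here (> 0) is the > function with its right-hand argument already supplied, and it behaves just like our isPositive from before:

(> 0) 5      -- True
(> 0) (-3)   -- False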

Finally, there's a third higher-order function that allows partial application to really shine. The . function (that's a period character) is an infix function that performs function composition. Its type, which is (b -> c) -> (a -> b) -> a -> c, tells us all we need to know about its behavior: Given a function that turns Bs into Cs and a function that turns As into Bs, it smashes the two together to produce a function that turns As into Cs.
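
As a tiny example (incrementHead is a name invented here), composing our increment function with head gives a function that bumps up the first element of a list:

incrementHead :: [Int] -> Int
incrementHead = increment . head

incrementHead [5,10]   -- 6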

Function composition lets us do this:

parsePositiveInts :: [String] -> [Int]
parsePositiveInts = filter (> 0) . map read

In its most essential form, our example function is the composition of two list transformations. This way of writing functions is known as point-free style because we've omitted the parameter, or “point”, on which the function operates. In doing so, we've focused more on the high-level behavior of the function and less on how that behavior is implemented.

Wrapping Up

I won't pretend that I've started coding exclusively in Haskell. At work, the majority of our code and knowledge is grounded in Java, and a sudden switch to Haskell would be roughly equivalent to addressing my coworkers in Latin. Having said that, a big part of the value of functional programming is in the way it teaches you to approach problems. I've found myself replacing big chunks of imperative code with stream expressions and cleaning up careless state management where I might not have noticed it before.

As a next step, I highly recommend Miran Lipovača's Learn You a Haskell for Great Good, available at learnyouahaskell.com. It's well-paced and includes lots of practical examples, not to mention some excellent drawings. If you just want to evaluate some expressions in your browser, there's a demo available on the front page of haskell.org. Try it out!