Lambda Calculus and Combinators

This page is inspired by the talk A Flock of Functions by Gabriel Lebec.

Combinators are functions (in the sense of the lambda calculus) that are defined in terms of nothing but their arguments.

Functions in the sense of the lambda calculus can be thought of as a subset of what is nowadays called closures in programming languages like JavaScript or Python.

Closures in modern programming languages bring all the additional semantics of the respective programming language with them (evaluation order, identifiers, variables, loops, if/else, data types, multiple parameters...)

Functions in the lambda calculus are much simpler:

  1. They take only a single argument (multiple arguments must be constructed by currying).
  2. The only operation that can happen inside a function is applying functions to arguments (calling functions with arguments).
  3. A function cannot have a name.
  4. Combinators (a special case of functions) may not even access any identifier that is not passed in as an argument.

The goal of this document is to explore how we can use only combinators to construct a whole lot of arithmetic functions.

We will see that in theory all of computable math can be constructed only from combinators alone.

A few examples

For now, let's look at a few examples of what counts as a lambda function, what counts as a combinator, and what does not.

The following is not a lambda calculus function because it makes use of JavaScript semantics like if/else, return, >, the number 20, and multiple arguments:
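Since the original snippet is not included here, a hypothetical example matching this description (the name pickLarger is my own label):

```javascript
// NOT a lambda calculus function: it uses if/else, return, the >
// operator, the literal number 20, and two parameters at once.
const pickLarger = (a, b) => {
  if (a > 20) {
    return a;
  } else {
    return b;
  }
};
```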

The following is a lambda calculus function, since the only thing it does is call other functions. But it is still not a combinator, because it accesses the alert and sqrt identifiers, which are assumed to be defined somewhere else but are not passed in as arguments:
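A sketch of what such a function could look like (announceRoot is my own label; alert and sqrt are deliberately left as free identifiers):

```javascript
// Only function application inside, but NOT a combinator:
// alert and sqrt are free identifiers, not arguments.
const announceRoot = x => alert(sqrt(x));
```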

To make it a combinator it would have to look like this:
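A sketch, keeping the hypothetical name from above:

```javascript
// Now a combinator: every identifier used is one of its own parameters.
// But it takes 3 arguments at once.
const announceRoot = (alert, sqrt, x) => alert(sqrt(x));
```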

But now it is not a classical lambda function anymore, because it takes three arguments instead of exactly one.

Let's try again:
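The curried sketch:

```javascript
// Exactly one parameter per function, only applications inside:
// both a lambda calculus function and a combinator.
const announceRoot = alert => sqrt => x => alert(sqrt(x));
```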

Now we have a function that is both a lambda function (it consists of only single-parameter functions and function applications) and a combinator (it accesses no identifiers other than its arguments).

Some common real world combinators

Now you might wonder how useful a function can be that takes only a single argument and can express nothing but function application.

Below you can see some more examples of combinators. Some of them you might have already seen as helper functions in your own code.

Note that we assign them to named JavaScript constants, but these names are not part of the function definitions themselves; they only serve as labels for us to refer to them from the outside.

Apply is a combinator that takes two arguments and calls the first argument (assuming it is a function) with the second argument as its parameter.

Identity takes one argument and simply returns it.

Compose takes three arguments, calls the second argument with the third as its parameter, then uses the result as the parameter to call the first argument.
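The three combinators described so far might be written as:

```javascript
const Apply = f => x => f(x);           // call f with x
const Identity = x => x;                // return the argument unchanged
const Compose = a => b => c => a(b(c)); // apply b to c, then a to the result
```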

Note that since in the lambda calculus nothing but functions exist, we implicitly assume that all parameters passed to a function are themselves functions. This is why we can simply write b(c) without even checking what b is.

BinaryCompose takes four arguments, assumes the second to be a binary function, calls it with the third and fourth arguments, then passes the result as the parameter to the first argument.

When we say b is a binary function, what we mean is that b is curried, i.e. that applying b to a single argument yields another function that can then be applied to another argument.

Flip takes three arguments, assumes the first to be a binary function, and calls it with the two other arguments, but in flipped order.

Const takes two arguments but always returns the first one in the end. Why is it called Const? Because once the first parameter is provided, the result is already determined and cannot be changed by the second parameter.
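These three might be written as:

```javascript
const BinaryCompose = a => b => c => d => a(b(c)(d)); // a after binary b
const Flip = f => a => b => f(b)(a);                  // swap f's arguments
const Const = a => b => a;                            // always the first one
```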

On takes four arguments; the first is assumed to be a binary function, the second a unary function. It calls the unary function with the third and with the fourth argument, then passes both results to the binary function.

S takes three arguments, the first one being a binary function and the second a unary function. It applies the unary function to the third argument and then passes both the original third argument and the result from the unary function to the binary function.
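Sketches of these two:

```javascript
const On = b => u => c => d => b(u(c))(u(d)); // run u on both, combine with b
const S = a => b => c => a(c)(b(c));          // share c between a and b(c)
```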

You can come up with many more combinators. See this page for a longer list.

Instead of explicitly defining each combinator, all combinators can be composed from a small base set of given combinators. Only two or three combinators are enough to build all others (for example S, Const and Identity are sufficient).

For example the Identity combinator can be constructed from only the S and Const combinators:
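A sketch: S applied to Const twice behaves exactly like Identity (the second argument given to S is never actually used, so any combinator works in that position):

```javascript
const S = a => b => c => a(c)(b(c));
const Const = a => b => a;
const Identity = x => x;

const IdentityFromSK = S(Const)(Const); // S(K)(K)(x) = K(x)(K(x)) = x
const IdentityFromSKS = S(Const)(S);    // the second argument is discarded anyway
```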

All three of the above functions should simply return their argument. Let's try:

Natural Numbers

Zero and One

A function (let's call it NatZero) that takes f and x as arguments and just returns x can be used to represent the concept of the natural number zero.

To understand what is meant by "the concept of zero" it is helpful to ask: if f and x are passed to the NatZero function, how often will f be called? f will not be called at all, i.e. zero times, since the function just returns x.

Additionally we can think of a function (let's call it NatOne) that takes two arguments f and x and simply calls f with x as the parameter. This function can be used to represent the concept of the number one. In this case the "concept of one" is expressed by the fact that the parameter f is called exactly one time.
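Sketches of the two definitions:

```javascript
const NatZero = f => x => x;    // f is applied zero times
const NatOne  = f => x => f(x); // f is applied exactly once
```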

How could we represent the number two following the same idea?


The concept of two can be constructed by defining a function that takes an argument and applies/calls it twice:

To see our TryNatTwo function in action, let's first define a helper function that simply prints out its parameter and then returns it. Note that this is not a lambda function or combinator in and of itself, but simply a utility that lets us observe how often a function is called.

Now let's call our TryNatTwo function with our PrintAndReturn function and some string. We would expect PrintAndReturn to be executed two times, resulting in two lines being printed:
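A sketch of TryNatTwo together with the helper and the call:

```javascript
const TryNatTwo = f => x => f(f(x)); // applies its argument twice

// Helper (not a combinator): print the parameter, then return it.
const PrintAndReturn = x => {
  console.log(x);
  return x;
};

TryNatTwo(PrintAndReturn)("hello"); // prints "hello" on two lines
```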

Did it work?

Would the following also be a possible definition for the concept of two?
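A sketch consistent with the discussion below (f is used twice, but two statements are chained with a semicolon):

```javascript
// Also calls f two times, but the body is no longer a lambda expression.
const TryAnotherNatTwo = f => x => {
  f(x);
  return f(x);
};
```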

Indeed, one could try to come up with other ways to represent "two-ness". In the above definition of TryAnotherNatTwo the parameter f is used two times as well. The issue with this definition is that it is no longer a lambda expression, since it makes use of the semicolon (;) to chain two statements.


Now that we have two function definitions that represent the concept of the number zero and the concept of the number one respectively, we can ask if it is possible to define a function that increments the number of times a given function f is called by one.

In other words this function we would like to define takes three arguments: a function n, a function f and some value x.

We expect the argument n to be a binary function that itself encodes the concept of a number. For now think of n as being either NatZero or NatOne as defined above.

Our new function should now call f one more time than n itself would do.

To achieve this we first call the function n with f and x as parameters (resulting in f being called as many times as n itself would call it).

Afterwards we pass the result from n to f once again. This leads to f being called one more time than n would call it.

We call this new function OpNatSucc (for successor) because, when passed a number-representing function, it increases the represented number (i.e. how many times the argument f is called) by one.

By applying the OpNatSucc function repeatedly we can conceptualize numbers greater than one.

Let's try to use our constructed numbers:
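A sketch of OpNatSucc and some numbers built with it (toJsNumber is an observation helper of my own, written in plain JavaScript):

```javascript
const NatZero = f => x => x;
const NatOne = f => x => f(x);

// Call f as often as n would, then once more.
const OpNatSucc = n => f => x => f(n(f)(x));

const NatTwo = OpNatSucc(NatOne);
const NatThree = OpNatSucc(NatTwo);

// Observation helper (plain JavaScript, not lambda calculus):
const toJsNumber = n => n(c => c + 1)(0);
```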


The next task is to define a function (let's call it OpNatPlus) that takes two arguments a and b (each representing a number in the sense described above) and two further arguments f and x, and then calls the function f as many times as a and b combined would. In other words, this function represents the addition of two natural numbers. Such a function can be defined as follows: First pass f and x to a (resulting in f being called as many times as a represents). Then pass the result and f to b (resulting in f being called as many times as b represents). In total f is called "a+b" times.

Alternatively we can think of a+b as applying the successor function b times on a:
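Both variants sketched (the definitions from earlier sections are repeated for completeness):

```javascript
const NatZero = f => x => x;
const NatOne = f => x => f(x);
const OpNatSucc = n => f => x => f(n(f)(x));

// a+b: let a apply f to x first, then let b continue from the result.
const OpNatPlus = a => b => f => x => b(f)(a(f)(x));

// Alternative: apply the successor function b times to a.
const OpNatPlusAlt = a => b => b(OpNatSucc)(a);
```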

By making use of the addition we can construct new functions representing larger numbers.


Multiplication can be implemented in a similar way: by passing the function b (with f already filled in) itself to a, b(f) is called a times. Therefore f will be called a*b times in total.

Alternatively we can think of a*b as adding b "a times" to zero:
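Both variants sketched:

```javascript
const NatZero = f => x => x;
const NatOne = f => x => f(x);
const OpNatSucc = n => f => x => f(n(f)(x));
const OpNatPlus = a => b => f => x => b(f)(a(f)(x));

// a*b: b(f) applies f b times; a repeats that a times.
const OpNatMult = a => b => f => a(b(f));

// Alternative: add b to zero, a times.
const OpNatMultAlt = a => b => a(OpNatPlus(b))(NatZero);
```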

Functions representing even larger numbers can be easily constructed.


Similar to multiplication, the concept of exponentiation can be constructed. This time a (the base) is passed to b (the exponent) to construct the concept of "multiplying by a, b times", and only then is f passed to the result.

Alternatively we can think of a to the power of b as multiplying one by a, b times:
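Both variants sketched:

```javascript
const NatZero = f => x => x;
const NatOne = f => x => f(x);
const OpNatSucc = n => f => x => f(n(f)(x));
const OpNatMult = a => b => f => a(b(f));

// a^b: pass the base a to the exponent b, only then pass f to the result.
const OpNatPow = a => b => b(a);

// Alternative: multiply one by a, b times.
const OpNatPowAlt = a => b => b(OpNatMult(a))(NatOne);
```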

Construct even larger numbers:

Data Structure


Not only numbers can be represented, but also more complex data structures. The simplest example is a tuple (or pair), encoded as a function that takes three arguments: a left value, a right value, and a selector function to which both are passed.

To extract only the right or only the left value from a tuple we can define two functions that take two arguments each but return only one of them:
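Sketches of the pair and the two selectors:

```javascript
const StrucPair = l => r => s => s(l)(r); // store l and r, hand both to s

const SelectLeft = a => b => a;  // keep only the left value
const SelectRight = a => b => b; // keep only the right value
```

A pair is read by passing a selector to it, e.g. `StrucPair(1)(2)(SelectLeft)` yields the left value.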

Next we can define a function that takes two arguments f and x, applies f to x but stores both the original x and the result of f(x) into a tuple

Next we can define a function that takes a function f and a tuple p. The right value of the tuple is extracted and passed to the function f. Then a new tuple is constructed with the previous right value now on the left side and the result of f(...) on the right side:

A few example tuples can be constructed: a tuple (0,0) and a tuple (0,1).

A tuple (1,2) can be constructed by applying the shift and successor functions to (0,1). The original value 1 is moved to the left side and the successor of 1 (i.e. 2) is put onto the right side of the new tuple:
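Sketches of both helper functions and the example tuples (OpStrucPairFanout is my own label for the first helper):

```javascript
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));

// Apply f to x, but keep both the original x and the result f(x):
const OpStrucPairFanout = f => x => StrucPair(x)(f(x));

// Shift: the previous right value moves left, f(right) becomes the new right.
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));

const PairZeroZero = StrucPair(NatZero)(NatZero);
const PairZeroOne = OpStrucPairFanout(OpNatSucc)(NatZero);   // (0, 1)
const PairOneTwo = OpStrucPairShift(OpNatSucc)(PairZeroOne); // (1, 2)
```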

Natural Numbers Continued...


The predecessor of a natural number n (i.e. n-1) can be thought of as applying the successor function to zero one time less than n, i.e. (n-1) times.

To get the predecessor of a number n you can count from zero up to n but stop one step early. To do so the OpStrucPairShift function from above can be used to remember the previous argument that the successor function was called with.

In other words: construct the tuple (0,0) and increment the right side n times, but in each step shift the not-yet-incremented value to the left side of the tuple. Finally we keep only the left side of the tuple to yield the predecessor.


Now that the predecessor function allows us to decrement a value we can apply it multiple times (b times) to subtract b from a:
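Sketches of predecessor and subtraction (earlier definitions repeated for completeness):

```javascript
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));

// n-1: shift-increment the pair (0,0) n times, then keep the left side.
const OpNatPred = n =>
  n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);

// a-b: apply the predecessor b times to a (bottoming out at zero).
const OpNatMinus = a => b => b(OpNatPred)(a);
```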

And make use of it to construct some more large numbers:

Boolean Values: True and False

The boolean values true and false can also be represented as functions. Boolean values can be thought of as the choice between two options. So each boolean value can be thought of as a function that takes two arguments but only returns one or the other.

The BoolTrue function returns only the first argument. The BoolFalse function returns only the second argument.


Logical negation takes an argument c (which is assumed to be one of the two functions representing a boolean) and inverts its meaning (true becomes false and false becomes true). This can simply be done by flipping the order of the two arguments passed to the boolean:
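Sketches of the booleans and negation:

```javascript
const BoolTrue = a => b => a;  // choose the first option
const BoolFalse = a => b => b; // choose the second option

// Negation: pass the two choices to the boolean in swapped order.
const OpBoolNot = c => a => b => c(b)(a);
```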


The logical AND function takes two arguments, each of them being a function representing a boolean. If both arguments are true then true should be returned. Otherwise false should be returned.

In other words: if the argument a is false, a should be returned, otherwise whatever b is should be returned. This choice can be implemented by passing exactly these two consequences as parameters to the a function.


Similarly, the logical OR function takes two arguments. If the first argument is true, true should be returned. Otherwise, whatever the second argument is should be returned. As with the logical AND function, this choice is implemented by simply passing the two possible choices to the first argument.

Remember that for this to work the two parameters a and b must be assumed to be one of the functions representing true or false as defined above.
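The two connectives sketched:

```javascript
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;

// If a is false, return a itself; otherwise whatever b is:
const OpBoolAnd = a => b => a(b)(a);
// If a is true, return a itself; otherwise whatever b is:
const OpBoolOr = a => b => a(a)(b);
```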


The boolean NAND function simply first applies AND and then negates the result:

The boolean NOR function simply first applies OR and then negates the result:

The boolean XOR function can be defined in terms of OR, NOT and AND:
a XOR b ⇔ ((a or b) and (not(a and b)))
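The three derived connectives sketched together:

```javascript
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolNot = c => a => b => c(b)(a);
const OpBoolAnd = a => b => a(b)(a);
const OpBoolOr = a => b => a(a)(b);

const OpBoolNand = a => b => OpBoolNot(OpBoolAnd(a)(b));
const OpBoolNor = a => b => OpBoolNot(OpBoolOr(a)(b));
// a XOR b = (a or b) and not(a and b)
const OpBoolXor = a => b =>
  OpBoolAnd(OpBoolOr(a)(b))(OpBoolNot(OpBoolAnd(a)(b)));
```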

De Morgan's law can be applied to get an alternative definition.

Natural Numbers part 3: Predicates

Now that we have prepared functions to encode boolean logic we can make use of it to implement some predicates on our natural numbers.


First we can implement a check that tells us if a given natural number is zero. This can be done by taking the boolean true value and passing it through the logicalAnd(..., false) function n times. As soon as the logicalAnd(..., false) function is called one or more times, the result will be false. The result will only be true if logicalAnd is called zero times, i.e. if n is zero.
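A sketch of the check:

```javascript
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolAnd = a => b => a(b)(a);
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));

// Send true through logicalAnd(..., false) n times:
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);
```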

Greater or equal to

Next we can implement a function that checks if a number a is greater than or equal to a number b. This can be done by subtracting a from b and checking if the result is zero.

Note that the way we implemented the predecessor function, the predecessor of 0 is still 0 by definition. This results in (b-a) being zero even if a is actually larger than b.


Now that we have the greater-or-equal predicate implemented, we can apply it in both directions to check if two numbers are actually equal:
(a = b) ⇔ (a >= b && b >= a)

Less than (or equal)

And together with our logic functions we can also negate the checks and implement greaterThan, lessOrEqual and lessThan:
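The whole family of predicates sketched (the prelude repeats earlier definitions):

```javascript
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolNot = c => a => b => c(b)(a);
const OpBoolAnd = a => b => a(b)(a);
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));
const OpNatPred = n => n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);
const OpNatMinus = a => b => b(OpNatPred)(a);
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);

// a >= b exactly when b-a (with truncating subtraction) is zero:
const OpNatGeq = a => b => OpNatIsZero(OpNatMinus(b)(a));
const OpNatEq = a => b => OpBoolAnd(OpNatGeq(a)(b))(OpNatGeq(b)(a));
const OpNatGt = a => b => OpBoolNot(OpNatGeq(b)(a));
const OpNatLeq = a => b => OpNatGeq(b)(a);
const OpNatLt = a => b => OpBoolNot(OpNatGeq(a)(b));
```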

Data Structures: Linked Lists

Similar to a pair/tuple construction above a list of elements can be constructed.

Pairs could be nested in a head-tail structure like (A, (B, (C, (D, END)))) to represent a list with four elements. But note that the right field of the innermost pair needs to be filled with some special marker denoting the end of the list. This is because, being a pair, it needs to have a right field by construction, but the right field must be something that is not a pair itself, otherwise it would not be the innermost pair.

At the same time it must not be an element of the list itself, because the elements are to be placed only into the left fields of the pairs.

So to construct a list we need to embed a choice (i.e. a boolean) at each nesting level to tell us if the list continues. For the end of the list we will simply use the false value:

Notice that the definition of StrucListEnd is the same as BoolFalse. We could reuse the BoolFalse definition but for better legibility we define a new name. Remember that the names are only for us humans to read in the first place.

To construct a new non-empty list we can wrap an existing list together with a new element. StrucListCons takes four parameters:

  1. h (head) is the new element to prepend.
  2. t (tail) is the already constructed list.
  3. c and e are functions to read the elements back out/process the elements of the list.

c is a binary function that is called with two arguments: the head of the list and the already processed tail. e is the value to return if the list is empty.

In other words: in order to access an already constructed list, it has to be called with two arguments: 1) a binary function that acts as a reducer/folder to compress the list back down to a single value and 2) a default value that should be returned if the list is empty.

From this perspective we can see that a list is simply a nested structure of binary function calls in which the binary function is not determined yet. It can be filled in at the end, while the list's elements are prefilled arguments at each nesting level.

Example List

Let's construct a simple example list:
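Sketches of the constructors and an example list (plain JS numbers are used as elements for clarity):

```javascript
const StrucListEnd = c => e => e; // same definition as BoolFalse
// Prepend h to t. Reading back out: c reduces, e is the empty default.
const StrucListCons = h => t => c => e => c(h)(t(c)(e));

// Example: a three element list
const ExampleList =
  StrucListCons(1)(StrucListCons(2)(StrucListCons(3)(StrucListEnd)));
```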

A list's length can be computed by using zero as the default value and calling the successor function for each recursion step (i.e. for each list element).

Notice that a list as described above expects a binary function to process its elements, but the successor is a unary function (taking only one argument). To determine the length of a list the actual elements do not matter. This is why Const is used: Const(OpNatSucc) discards the element argument and applies OpNatSucc only to the accumulator.


Map is a higher-order function that takes a list l and a function f. It applies f to each element in l and collects the results in a new list of the same length as the original.

It can be implemented by using the empty list as default value and calling StrucListCons at each step to construct a new list of the same length. Additionally each element is passed through the mapping function f.

The reduce function does not need to be implemented at all because a list is exactly its own reduce function:

A list can be filtered by using a predicate p to decide for each step if the StrucListCons function or the Identity function should be applied for the reduction:


Two lists can be concatenated by simply using the second list as default value and StrucListCons as binary function to reduce the first list:
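The list operations sketched together (toArray is my own plain-JavaScript observation helper):

```javascript
const Identity = x => x;
const Const = a => b => a;
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const StrucListEnd = c => e => e;
const StrucListCons = h => t => c => e => c(h)(t(c)(e));

// Length: one successor per element; Const discards the element itself.
const OpStrucListLength = l => l(Const(OpNatSucc))(NatZero);

// Map: rebuild the list, passing every head through f.
const OpStrucListMap = f => l =>
  l(h => t => StrucListCons(f(h))(t))(StrucListEnd);

// Reduce: a list already is its own reduce function.
const OpStrucListReduce = c => e => l => l(c)(e);

// Filter: keep the head (Cons) or drop it (Identity), decided by p(h).
const OpStrucListFilter = p => l =>
  l(h => t => p(h)(StrucListCons(h))(Identity)(t))(StrucListEnd);

// Concat: reduce list a with Cons, using list b as the default value.
const OpStrucListConcat = a => b => a(StrucListCons)(b);

// Observation helper (plain JavaScript):
const toArray = l => l(h => acc => [h, ...acc])([]);
```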


Next we want to reverse a list. A simple way to reverse a list (head, tail) is to concatenate the reversed tail with the list containing only the head. In other words the order of head and tail must be flipped and the tail itself needs to be reversed.

First lets define a helper function that constructs a list of length 1 from a single element:

Now we can use StrucListSingleton to convert the head element of the list into a list itself and then concatenate it onto the previous tail list:
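Both sketched:

```javascript
const StrucListEnd = c => e => e;
const StrucListCons = h => t => c => e => c(h)(t(c)(e));
const OpStrucListConcat = a => b => a(StrucListCons)(b);

const StrucListSingleton = x => StrucListCons(x)(StrucListEnd);

// Reverse: append each head behind its (already reversed) tail.
const OpStrucListReverse = l =>
  l(h => t => OpStrucListConcat(t)(StrucListSingleton(h)))(StrucListEnd);
```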

Notice that we do not explicitly reverse the tail. Instead, the tail we are given is already reversed. This is a result of the order in which the elements of the list are processed: the way our list is constructed, the elements can only be processed from right to left by the binary function. The second argument of the binary function is the accumulator of the reduction, not the remaining part of the list.


To access the head of the list we just need to use a binary function that always returns the left argument (i.e. the current head) as the reducer. But notice that the way the list is implemented it still processes all the nested pairs from the inside out (i.e. from right to left):


To access the tail of a list we can again make use of the shifting-pair trick to remember the previous argument of a repeated function call. But this time we want to remember the right argument of a binary function call, so we need to implement a binary version of our shift function:

Now we can use the binary shifter to restore the latest argument to our binary reducer call:
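Head, the binary shifter, and tail sketched (using the end marker as the default for an empty list is my own choice):

```javascript
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const StrucListEnd = c => e => e;
const StrucListCons = h => t => c => e => c(h)(t(c)(e));

// Head: a reducer that always keeps its left (current head) argument.
const OpStrucListHead = l => l(a => b => a)(StrucListEnd);

// Binary shift: remember the previous accumulator next to the new one.
const OpStrucPairShiftBinary = f => h => p =>
  StrucPair(p(SelectRight))(f(h)(p(SelectRight)));

// Tail: build pairs (list without current head, full list), keep the left.
const OpStrucListTail = l =>
  l(OpStrucPairShiftBinary(StrucListCons))
   (StrucPair(StrucListEnd)(StrucListEnd))
   (SelectLeft);
```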

Notice that the list is still processed from the inside out. To access the tail of a list we must walk through all the elements and construct a completely new list, only without the first element.

Lazy Lists

An alternative way to implement a list is to evaluate the tail lazily. For this alternative list construction we define a new list end marker and a new binary function to prepend to the list:

The end marker is the same as before. But notice how the function application t(c)(e), which processes the tail of the list, is wrapped inside a new lambda cc => ee => t(cc)(ee) in order to defer its evaluation. This allows the reducer function to decide if the tail of the list should actually be processed.

Constructing a list works as before:

This deferred evaluation allows the head and the tail of a list to be accessed without walking down the whole list and processing it from the end:
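Sketches of the lazy constructors, an example list, and the accessors:

```javascript
const StrucLazyListEnd = c => e => e;
// The tail reduction t(c)(e) is wrapped in a lambda, deferring it:
const StrucLazyListCons = h => t => c => e => c(h)(cc => ee => t(cc)(ee));

// Constructing a list works exactly as before:
const ExampleLazyList =
  StrucLazyListCons(1)(StrucLazyListCons(2)(StrucLazyListEnd));

// Head and tail can now be read without processing the whole list:
const OpStrucLazyListHead = l => l(h => t => h)(StrucLazyListEnd);
const OpStrucLazyListTail = l => l(h => t => t)(StrucLazyListEnd);
```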

Notice that now the tail is not the accumulator containing the already processed rest of the list, but instead a function that can be called to process the rest of the list.


In order to process such a lazy list, the reducer function needs to recursively walk down the list on its own. Such a reducer needs to be defined recursively. Typically a recursive function is one that references itself inside its definition by its own name.

For example, in natural language: to calculate the sum of all the elements in a list, add the current head to the sum of the tail of the list.

A reduce function for the lazy list would look something like this:
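A sketch, summing a lazy list of plain JavaScript numbers:

```javascript
const StrucLazyListEnd = c => e => e;
const StrucLazyListCons = h => t => c => e => c(h)(cc => ee => t(cc)(ee));

// A recursive reducer: add the head to the sum of the tail. It refers to
// itself by its JavaScript name, so it is not a combinator.
const ExampleOpStrucLazyListReduce = h => tail =>
  h + tail(ExampleOpStrucLazyListReduce)(0);

const ExampleLazyList = StrucLazyListCons(1)(
  StrucLazyListCons(2)(StrucLazyListCons(3)(StrucLazyListEnd)));

const sum = ExampleLazyList(ExampleOpStrucLazyListReduce)(0);
```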

Notice how the reduce function itself (ExampleOpStrucLazyListReduce) is passed down to the tail of the list in the tail(ExampleOpStrucLazyListReduce) term. This only works because we stored the function in the JavaScript variable named ExampleOpStrucLazyListReduce. But strictly speaking this is no longer a combinator, because the function accesses something that is not provided as an argument, namely itself.

In all the other cases above in which we made use of a previously defined function by its name, this was done only for readability. For example our definition of OpStrucPairShiftBinary uses the definition of StrucPair. Technically we could just as well inline the respective definition.

But a recursive function definition cannot simply be inlined. Try it and notice how the recursive reference reoccurs one level deeper. Inlining a recursive definition would lead to an infinite syntactic recursion.

When restricting ourselves to the use of combinators, it is not possible to reference the function's name inside its body, since a combinator is only allowed to reference its arguments, and inlining the recursive part is not possible.

M Combinator

At first glance one might think that combinators do not allow for any recursion. But this is not the case! It is actually possible to construct recursive combinators.

To see how, take a look at the following function:
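The function in question might be sketched as:

```javascript
const M = f => f(f); // apply the argument to itself
const Identity = x => x;

M(Identity)(42); // Identity(Identity) is still Identity, so this is 42
```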

The M combinator takes a function as argument and applies it to itself.

The argument could be the Identity function. The identity function applied to itself is still the identity function

We could also try to pass the NatFive function to the M combinator:

The result is 3125 == 5^5, because the function NatFive (which calls its argument 5 times) is applied to itself.
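This can be checked with a sketch:

```javascript
const M = f => f(f);
const NatFive = f => x => f(f(f(f(f(x)))));

// NatFive applied to itself calls its argument 5^5 times:
const result = M(NatFive)(c => c + 1)(0);
```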

But what happens if we apply the M combinator to itself?

This will result in a stack overflow, since calling M with itself as parameter causes an infinite recursion: M(M) -> M(M) -> M(M) -> ...

So, albeit not very useful, we can conclude that at least the M combinator allows us to construct a recursive structure.

But still, we did so by first naming the combinator M and then applying it to itself. Could we cause the same recursive overflow without assigning any name to a function? Yes, we can, by simply substituting M with its definition syntactically:

This will also cause a stack overflow.
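A sketch, wrapped in try/catch so the overflow can be observed safely:

```javascript
let overflowed = false;
try {
  (x => x(x))(x => x(x)); // M applied to itself, with M inlined
} catch (err) {
  overflowed = err instanceof RangeError; // "Maximum call stack size exceeded"
}
```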

A recursion that never ends is not useful. We need to be able to introduce some condition that triggers a base case and causes the recursion to halt.

The following function takes an additional function f as argument and wraps the x(x) call into a closure to make the evaluation lazy, like we did with the lazy list. This lazily evaluated call to x(x) is then passed as argument to f. This allows f to decide at each recursive step whether the next call to x(x) should actually happen or not:
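A sketch, demonstrated with plain JavaScript numbers:

```javascript
// f receives the deferred self-application a => x(x)(a) and can choose
// to call it (recurse) or ignore it (halt).
const haltingRecursion = f =>
  (x => f(a => x(x)(a)))
  (x => f(a => x(x)(a)));

// Example: sum 1..n, halting at 0.
const sumTo = haltingRecursion(rec => n => n <= 0 ? 0 : n + rec(n - 1));
```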

Y Combinator

The definition of the haltingRecursion function above consists of two identical parts "(x => f(a => x(x)(a)))" calling each other. By making use of the M combinator this can be written even shorter:

The common name for this function is the Y combinator.

This Y combinator can now be used to construct any recursive function without requiring the function to be named. For example a recursive function that counts up to ten:

Now that we can define recursive functions we can for example implement a recursive factorial(n):
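Both examples sketched (plain JavaScript numbers are used for brevity):

```javascript
const M = f => f(f);
const Y = f => M(x => f(a => x(x)(a)));

// A recursive function that counts up to ten:
const countToTen = Y(rec => n => n >= 10 ? n : rec(n + 1));

// A recursive factorial:
const factorial = Y(rec => n => n <= 1 ? 1 : n * rec(n - 1));
```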

Lazy Lists Continued

Finally we can implement more interesting functions for processing the elements in our lazy list structure:
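One hypothetical example of such a function, built with the Y combinator: recursively collecting a lazy list into a plain JavaScript array.

```javascript
const M = f => f(f);
const Y = f => M(x => f(a => x(x)(a)));
const StrucLazyListEnd = c => e => e;
const StrucLazyListCons = h => t => c => e => c(h)(cc => ee => t(cc)(ee));

// Walk the list recursively; the deferred tail behaves like a list itself.
const OpStrucLazyListToArray =
  Y(rec => l => l(h => tail => [h, ...rec(tail)])([]));
```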

Natural Numbers part 4


Now that we can construct recursive functions, we can also construct an iterative natural number division operation:

For the division we repeatedly subtract b from a until a is less than b, and count how often this was possible.


The remainder of a natural number division (a mod b) can be determined by subtracting b from a as often as possible and then returning whatever remains.
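Division and remainder sketched together. The branches are wrapped in () => ... thunks because JavaScript evaluates eagerly; this is an implementation detail of running lambda terms in JavaScript, not part of the calculus itself. The prelude repeats earlier definitions.

```javascript
const M = f => f(f);
const Y = f => M(x => f(a => x(x)(a)));
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolAnd = a => b => a(b)(a);
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));
const OpNatPred = n => n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);
const OpNatMinus = a => b => b(OpNatPred)(a);
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);
const OpNatGeq = a => b => OpNatIsZero(OpNatMinus(b)(a));

// Division (assumes b > 0): count how often b fits into a.
const OpNatDiv = Y(rec => a => b =>
  OpNatGeq(a)(b)
    (() => OpNatSucc(rec(OpNatMinus(a)(b))(b)))
    (() => NatZero)());

// Remainder: subtract b as often as possible, return what is left.
const OpNatMod = Y(rec => a => b =>
  OpNatGeq(a)(b)
    (() => rec(OpNatMinus(a)(b))(b))
    (() => a)());

const NatTwo = OpNatSucc(OpNatSucc(NatZero));
const NatSeven = OpNatSucc(OpNatSucc(OpNatSucc(OpNatSucc(OpNatSucc(NatTwo)))));
```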

Square Root

Not all natural numbers have square roots that are also natural. While working with lambda functions and combinators we cannot simply throw an error when a function cannot yield a correct result. This is also why our definition of subtraction returns 0 if the subtrahend is larger than the minuend.

For a square root function we can define a low and a high square root instead. The low square root of n is the largest natural number whose square is not greater than n. The high square root of n is the smallest natural number whose square is not less than n.

Now we can check if a natural number is an actual square number. It is a square number if its high square root and its low square root are the same.
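A sketch of the two roots and the check, counting the candidate k upwards (the prelude repeats earlier definitions):

```javascript
const M = f => f(f);
const Y = f => M(x => f(a => x(x)(a)));
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolAnd = a => b => a(b)(a);
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpNatMult = a => b => f => a(b(f));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));
const OpNatPred = n => n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);
const OpNatMinus = a => b => b(OpNatPred)(a);
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);
const OpNatGeq = a => b => OpNatIsZero(OpNatMinus(b)(a));
const OpNatLeq = a => b => OpNatGeq(b)(a);
const OpNatEq = a => b => OpBoolAnd(OpNatGeq(a)(b))(OpNatGeq(b)(a));

// Low root: count k up as long as (k+1)^2 still fits into n.
const OpNatSqrtLow = n => Y(rec => k =>
  OpNatLeq(OpNatMult(OpNatSucc(k))(OpNatSucc(k)))(n)
    (() => rec(OpNatSucc(k)))
    (() => k)())(NatZero);

// High root: the low root, plus one if n is not a perfect square.
const OpNatSqrtHigh = n => (k =>
  OpNatEq(OpNatMult(k)(k))(n)(k)(OpNatSucc(k)))(OpNatSqrtLow(n));

const OpNatIsSquare = n => OpNatEq(OpNatSqrtLow(n))(OpNatSqrtHigh(n));

const NatThree = OpNatSucc(OpNatSucc(OpNatSucc(NatZero)));
const NatNine = OpNatMult(NatThree)(NatThree);
const NatTen = OpNatSucc(NatNine);
```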

Notice how this approach is similar to our construction of natural number equality: First define subtraction to return 0 even if the second number is larger (since we have no negative numbers yet), then define the less-or-equal predicate in terms of subtraction, then define equality as symmetry of less-or-equal.

Integers, Negative Numbers

Up until now we have constructed natural numbers, booleans, tuples, and two kinds of lists.

You may have noticed that in reality the natural numbers are closed neither under division nor under subtraction. That is, one natural number subtracted from another does not always give a natural number as the result. Likewise, a natural number divided by another natural number does not always yield a natural number; there may also be a remainder.

In our implementations, the subtraction falls back to returning 0 if a larger number is subtracted from a smaller one, and the division returns 0 as well if the divisor is greater than the dividend.

Now you may ask if it is possible to construct more than just the natural numbers.

What about integers including negative numbers? How could we mark a number as negative? One possibility would be to construct integers as a tuple with a boolean as the left element and a natural number as the right element. The boolean would indicate if the integer is negative or positive. Next we would need to re-implement the corresponding versions for addition, subtraction, multiplication, and exponentiation for integers.

Another solution would be to store integers not as a single natural numbers combined with a sign but instead as a tuple of two natural numbers.

A tuple (a,b) would represent the result of the calculation "a-b". So a positive integer 5 would be represented as (5,0) and a negative 5 would be represented as (0,5).

An advantage of this strategy is that the calculations are quite easy to implement and that this strategy generalizes well to other classes of numbers like fractions or complex numbers. A disadvantage is that a single integer no longer has a unique representation: (5,0) and (6,1) both represent the integer 5, since 5-0 = 6-1.

But actually, the way our natural numbers are constructed as compositions of function applications, their representation is not unique anyway. Notice how OpNatSucc(NatOne) and OpNatPlus(NatOne)(NatOne) are two different calculations that both represent the natural number two.

So let's implement the "pair of naturals" strategy:

Convert natural to integer

First let's define a function to convert a natural number into an integer. Note that even though the naturals are a proper subset of the integers, we represent the integer 5 and the natural 5 in two different ways.

So from our perspective they are not actually a subset but merely isomorphic to a subset:
natToInt(n) = (n,0)

For converting an integer back to a natural number we could provide an abs() function: abs(a-b) = if(a>b, a-b, b-a)

Successor and Predecessor for Integers

For the successor function we can use the definition for natural numbers and apply it only to the first element of the tuple.

For the predecessor function we increase the second element of the tuple.

Notice how the definition of the predecessor for natural numbers was rather complicated. With the integer encoding it is quite simple.


For negation the elements of the tuple must simply be swapped, changing (a-b) to (b-a).
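Conversion, abs, successor, predecessor, and negation sketched together (intToJs is my own plain-JavaScript observation helper; the prelude repeats earlier definitions):

```javascript
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolAnd = a => b => a(b)(a);
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));
const OpNatPred = n => n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);
const OpNatMinus = a => b => b(OpNatPred)(a);
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);
const OpNatGeq = a => b => OpNatIsZero(OpNatMinus(b)(a));

// An integer (a, b) stands for the value a-b:
const OpIntFromNat = n => StrucPair(n)(NatZero);

// abs as a natural number: larger component minus smaller component.
const OpIntAbsNat = i =>
  OpNatGeq(i(SelectLeft))(i(SelectRight))
    (OpNatMinus(i(SelectLeft))(i(SelectRight)))
    (OpNatMinus(i(SelectRight))(i(SelectLeft)));

const OpIntSucc = i => StrucPair(OpNatSucc(i(SelectLeft)))(i(SelectRight));
const OpIntPred = i => StrucPair(i(SelectLeft))(OpNatSucc(i(SelectRight)));
// Negation: swap the components, turning a-b into b-a.
const OpIntNegate = i => StrucPair(i(SelectRight))(i(SelectLeft));

// Observation helper (plain JavaScript):
const intToJs = i =>
  i(SelectLeft)(c => c + 1)(0) - i(SelectRight)(c => c + 1)(0);
```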

Integer Addition

Addition of two integers can be done by simply adding the positive and negative components pairwise.

Notice how this component-wise operation is similar to how you calculate with fractions by hand. We will see later that fractions are also just pairs/tuples of numbers, with one of them representing a multiplier and the other one a divisor.

Integer Subtraction

Subtracting two integers is the same as negating the second one and adding the result to the first.

Integer Multiplication

To derive the correct implementation for integer multiplication we can write down each integer as its corresponding subtraction and then algebraically expand the parentheses.

(a1-a2)*(b1-b2) = (a1*b1 + a2*b2) - (a1*b2 + a2*b1)

Notice that a1, a2, b1, and b2 are not integers themselves but natural numbers. So the multiplications on the right can be implemented in terms of our already defined natural number multiplication.
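The three arithmetic operations sketched together:

```javascript
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpNatPlus = a => b => f => x => b(f)(a(f)(x));
const OpNatMult = a => b => f => a(b(f));

// (a1-a2)+(b1-b2) = (a1+b1)-(a2+b2): add the components pairwise.
const OpIntPlus = a => b => StrucPair
  (OpNatPlus(a(SelectLeft))(b(SelectLeft)))
  (OpNatPlus(a(SelectRight))(b(SelectRight)));

const OpIntNegate = i => StrucPair(i(SelectRight))(i(SelectLeft));
const OpIntMinus = a => b => OpIntPlus(a)(OpIntNegate(b));

// (a1-a2)*(b1-b2) = (a1*b1 + a2*b2) - (a1*b2 + a2*b1)
const OpIntMult = a => b => StrucPair
  (OpNatPlus(OpNatMult(a(SelectLeft))(b(SelectLeft)))
            (OpNatMult(a(SelectRight))(b(SelectRight))))
  (OpNatPlus(OpNatMult(a(SelectLeft))(b(SelectRight)))
            (OpNatMult(a(SelectRight))(b(SelectLeft))));

// Observation helper (plain JavaScript):
const intToJs = i =>
  i(SelectLeft)(c => c + 1)(0) - i(SelectRight)(c => c + 1)(0);

const IntPlusTwo = StrucPair(OpNatSucc(OpNatSucc(NatZero)))(NatZero);
const IntMinusThree = StrucPair(NatZero)(OpNatSucc(OpNatSucc(OpNatSucc(NatZero))));
```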

Integer Predicates

We can check if an integer is zero by checking if both components are equal.

An integer is positive if the left component is greater than or equal to the right component.

As with natural numbers, two integers are equal if subtracting them yields zero. Notice that the special case of the second number being greater does not occur for integers.

Integer Division

For dividing integers we can simply divide their absolute values using our natural number division and then negate the result in case exactly one of the integers had been negative. Remember a negative times a negative is a positive.

Integer Square Root

For integer square roots we do the same trick as with natural number square roots:

Then to check if an integer is an actual square number we additionally check if it is positive:
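Predicates, division, and square roots sketched together. Taking the roots on the absolute value is my reading of "the same trick"; the long prelude repeats earlier definitions.

```javascript
const M = f => f(f);
const Y = f => M(x => f(a => x(x)(a)));
const BoolTrue = a => b => a;
const BoolFalse = a => b => b;
const OpBoolNot = c => a => b => c(b)(a);
const OpBoolAnd = a => b => a(b)(a);
const OpBoolXor = a => b => a(OpBoolNot(b))(b);
const StrucPair = l => r => s => s(l)(r);
const SelectLeft = a => b => a;
const SelectRight = a => b => b;
const NatZero = f => x => x;
const OpNatSucc = n => f => x => f(n(f)(x));
const OpNatPlus = a => b => f => x => b(f)(a(f)(x));
const OpNatMult = a => b => f => a(b(f));
const OpStrucPairShift = f => p => StrucPair(p(SelectRight))(f(p(SelectRight)));
const OpNatPred = n => n(OpStrucPairShift(OpNatSucc))(StrucPair(NatZero)(NatZero))(SelectLeft);
const OpNatMinus = a => b => b(OpNatPred)(a);
const OpNatIsZero = n => n(b => OpBoolAnd(b)(BoolFalse))(BoolTrue);
const OpNatGeq = a => b => OpNatIsZero(OpNatMinus(b)(a));
const OpNatLeq = a => b => OpNatGeq(b)(a);
const OpNatEq = a => b => OpBoolAnd(OpNatGeq(a)(b))(OpNatGeq(b)(a));
const OpNatDiv = Y(rec => a => b => OpNatGeq(a)(b)(() => OpNatSucc(rec(OpNatMinus(a)(b))(b)))(() => NatZero)());
const OpNatSqrtLow = n => Y(rec => k => OpNatLeq(OpNatMult(OpNatSucc(k))(OpNatSucc(k)))(n)(() => rec(OpNatSucc(k)))(() => k)())(NatZero);
const OpNatSqrtHigh = n => (k => OpNatEq(OpNatMult(k)(k))(n)(k)(OpNatSucc(k)))(OpNatSqrtLow(n));
const OpNatIsSquare = n => OpNatEq(OpNatSqrtLow(n))(OpNatSqrtHigh(n));
const OpIntFromNat = n => StrucPair(n)(NatZero);
const OpIntAbsNat = i => OpNatGeq(i(SelectLeft))(i(SelectRight))(OpNatMinus(i(SelectLeft))(i(SelectRight)))(OpNatMinus(i(SelectRight))(i(SelectLeft)));
const OpIntNegate = i => StrucPair(i(SelectRight))(i(SelectLeft));
const OpIntPlus = a => b => StrucPair(OpNatPlus(a(SelectLeft))(b(SelectLeft)))(OpNatPlus(a(SelectRight))(b(SelectRight)));
const OpIntMinus = a => b => OpIntPlus(a)(OpIntNegate(b));

// Predicates:
const OpIntIsZero = i => OpNatEq(i(SelectLeft))(i(SelectRight));
const OpIntIsPositive = i => OpNatGeq(i(SelectLeft))(i(SelectRight));
const OpIntEq = a => b => OpIntIsZero(OpIntMinus(a)(b));

// Division: divide the absolute values, negate if exactly one was negative.
const OpIntDiv = a => b => (q =>
  OpBoolXor(OpIntIsPositive(a))(OpIntIsPositive(b))(OpIntNegate(q))(q))
  (OpIntFromNat(OpNatDiv(OpIntAbsNat(a))(OpIntAbsNat(b))));

// Square roots on the absolute value; a square must also be positive.
const OpIntSqrtLow = i => OpIntFromNat(OpNatSqrtLow(OpIntAbsNat(i)));
const OpIntSqrtHigh = i => OpIntFromNat(OpNatSqrtHigh(OpIntAbsNat(i)));
const OpIntIsSquare = i =>
  OpBoolAnd(OpIntIsPositive(i))(OpNatIsSquare(OpIntAbsNat(i)));

// Observation helper (plain JavaScript):
const intToJs = i =>
  i(SelectLeft)(c => c + 1)(0) - i(SelectRight)(c => c + 1)(0);
```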


We have already constructed natural numbers and integers. During the construction of integers it has already been shown how new classes of numbers can be defined by composing existing numbers into tuples and delegating most of the new operations back to the already defined operations on the underlying numbers.


Quotients can be constructed in a similar way to the integers. This time each quotient is a pair of an integer and a natural number. The first number represents a multiplier and the second number represents a divisor. A tuple (a,b) encodes the calculation a divided by b.

We could also construct quotients as a pair of two integers (instead of one integer and one natural number). But using a natural number as a denominator has the advantage that the sign of the number is encoded in exactly one place: the numerator.

We can define four convenient constructors to convert natural numbers and integers into quotients:

Accessing the parts

Given a quotient we can extract either the numerator or the denominator, or we can apply the encoded division operation.

Note that applying the division operator results in an integer even if the numbers cannot be perfectly divided. The whole point of quotients is to represent numbers that cannot be perfectly divided by not carrying out the division but only representing the calculation.


The typical predicates can again be defined in terms of existing predicates. A quotient is zero if the numerator is zero. A quotient is equal to 1 if the numerator and denominator are equal. A quotient is positive if the numerator is positive. Remember that we use only natural numbers as denominators.
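The first two predicates, sketched with plain Church naturals as numerators (signs are ignored here for brevity; the full construction uses the integer encoding for the numerator, and all names are assumptions):

```javascript
const TRUE   = a => b => a;
const FALSE  = a => b => b;
const AND    = p => q => p(q)(p);
const PAIR   = a => b => f => f(a)(b);
const FST    = p => p(TRUE);
const SND    = p => p(FALSE);
const ZERO   = f => x => x;
const SUCC   = n => f => x => f(n(f)(x));
const PRED   = n => f => x => n(g => h => h(g(f)))(_ => x)(u => u);
const SUB    = m => n => n(PRED)(m);
const ISZERO = n => n(_ => FALSE)(TRUE);
const LEQ    = m => n => ISZERO(SUB(m)(n));
const NAT_EQ = m => n => AND(LEQ(m)(n))(LEQ(n)(m));

const Q_IS_ZERO = q => ISZERO(FST(q));         // numerator is zero
const Q_IS_ONE  = q => NAT_EQ(FST(q))(SND(q)); // numerator equals denominator

const toBool = b => b(true)(false);
const ONE = SUCC(ZERO), TWO = SUCC(ONE);
```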

Unary Operations

Negation and absolute value operations can be defined by applying the respective integer operations to the numerator.

To invert a quotient we need to construct a new quotient with numerator and denominator swapped. The denominator is a natural number and needs to be converted into an integer to act as numerator.
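A sketch of negation and inversion; inversion is shown only for non-negative numerators of the shape (a, 0), and all names are assumptions of this sketch:

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));

// Integers as pairs (a, b) encoding a - b; negation swaps the components.
const INT_NEG      = z => PAIR(SND(z))(FST(z));
const INT_FROM_NAT = n => PAIR(n)(ZERO);

// Quotient negation negates the numerator and keeps the denominator.
const Q_NEG = q => PAIR(INT_NEG(FST(q)))(SND(q));

// Inversion (non-negative numerators only in this sketch): swap the
// numerator magnitude and denominator, re-wrapping the old denominator
// as an integer numerator.
const Q_INV = q => PAIR(INT_FROM_NAT(SND(q)))(FST(FST(q)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO);

const q = PAIR(INT_FROM_NAT(TWO))(THREE); // encodes 2/3
```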

Changing the denominator

A typical operation on quotients is to change the denominator without changing the value the quotient represents. This can be done by multiplying both numerator and denominator with the same number.
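Sketched with naturals-only quotients (signs ignored; `Q_SCALE` is an assumed name):

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const MULT = m => n => f => m(n(f));

// Multiply both components by the same factor k: the encoded
// value k*a / k*b stays equal to a / b.
const Q_SCALE = k => q => PAIR(MULT(k)(FST(q)))(MULT(k)(SND(q)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO);

const r = Q_SCALE(THREE)(PAIR(ONE)(TWO)); // 1/2 scaled by 3 → 3/6
```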

Quotient Addition

Two quotients can be added together by summing their numerators if both of them have the same denominator. In case the two quotients do not already have the same denominator, their denominators can first be changed to a common multiple.

We can combine the two cases into a single step: multiply the denominators to form the denominator of the result, and sum the numerators, each multiplied by the denominator of the other quotient.
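This single-step rule, a/b + c/d = (a*d + c*b) / (b*d), sketched with naturals-only quotients (signs ignored; `Q_ADD` is an assumed name):

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const ADD  = m => n => f => x => m(f)(n(f)(x));
const MULT = m => n => f => m(n(f));

// a/b + c/d = (a*d + c*b) / (b*d)
const Q_ADD = p => q =>
  PAIR(ADD(MULT(FST(p))(SND(q)))(MULT(FST(q))(SND(p))))
      (MULT(SND(p))(SND(q)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO);

const r = Q_ADD(PAIR(ONE)(TWO))(PAIR(ONE)(THREE)); // 1/2 + 1/3 → 5/6
```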

Quotient Subtraction

Subtracting two quotients works the same as adding them, only with the numerators being subtracted instead of added:
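The same rule with the numerators subtracted, a/b - c/d = (a*d - c*b) / (b*d), again sketched with naturals-only quotients (so the subtraction is the truncated natural one; the full construction uses integer subtraction, and `Q_SUB` is an assumed name):

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const MULT = m => n => f => m(n(f));
const PRED = n => f => x => n(g => h => h(g(f)))(_ => x)(u => u);
const SUB  = m => n => n(PRED)(m); // truncated natural subtraction

// a/b - c/d = (a*d - c*b) / (b*d)
const Q_SUB = p => q =>
  PAIR(SUB(MULT(FST(p))(SND(q)))(MULT(FST(q))(SND(p))))
      (MULT(SND(p))(SND(q)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO);

const r = Q_SUB(PAIR(ONE)(TWO))(PAIR(ONE)(THREE)); // 1/2 - 1/3 → 1/6
```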

Quotient Multiplication

For multiplying quotients the numerators and denominators can be multiplied pair-wise.
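Pair-wise multiplication, a/b * c/d = (a*c) / (b*d), sketched with naturals-only quotients (`Q_MUL` is an assumed name):

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const MULT = m => n => f => m(n(f));

// a/b * c/d = (a*c) / (b*d)
const Q_MUL = p => q =>
  PAIR(MULT(FST(p))(FST(q)))(MULT(SND(p))(SND(q)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO), FOUR = SUCC(THREE);

const r = Q_MUL(PAIR(TWO)(THREE))(PAIR(THREE)(FOUR)); // 2/3 * 3/4 → 6/12
```

Note that the result is not reduced: 6/12 and 1/2 are different encodings of the same value.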

Quotient Division

To divide one quotient by another, the second quotient can be inverted and the two then multiplied.
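Sketched with naturals-only quotients, where inversion is a plain swap of the components (`Q_DIV`, `Q_INV` and `Q_MUL` are assumed names):

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const MULT = m => n => f => m(n(f));

const Q_INV = q => PAIR(SND(q))(FST(q)); // swap numerator and denominator
const Q_MUL = p => q =>
  PAIR(MULT(FST(p))(FST(q)))(MULT(SND(p))(SND(q)));

// divide by multiplying with the inverse
const Q_DIV = p => q => Q_MUL(p)(Q_INV(q));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO), FOUR = SUCC(THREE);

const r = Q_DIV(PAIR(ONE)(TWO))(PAIR(THREE)(FOUR)); // (1/2) / (3/4) → 4/6
```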

Quotient Equality

Two quotients are equal if their numerators are equal after changing both their denominators to a common multiple.
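Using the product of the denominators as the common multiple, this amounts to cross-multiplication: a/b = c/d iff a*d = c*b. Sketched with naturals-only quotients (`Q_EQ` and `NAT_EQ` are assumed names):

```javascript
const TRUE   = a => b => a;
const FALSE  = a => b => b;
const AND    = p => q => p(q)(p);
const PAIR   = a => b => f => f(a)(b);
const FST    = p => p(TRUE);
const SND    = p => p(FALSE);
const ZERO   = f => x => x;
const SUCC   = n => f => x => f(n(f)(x));
const MULT   = m => n => f => m(n(f));
const PRED   = n => f => x => n(g => h => h(g(f)))(_ => x)(u => u);
const SUB    = m => n => n(PRED)(m);
const ISZERO = n => n(_ => FALSE)(TRUE);
const LEQ    = m => n => ISZERO(SUB(m)(n));
const NAT_EQ = m => n => AND(LEQ(m)(n))(LEQ(n)(m));

// a/b = c/d  iff  a*d = c*b
const Q_EQ = p => q =>
  NAT_EQ(MULT(FST(p))(SND(q)))(MULT(FST(q))(SND(p)));

const toBool = b => b(true)(false);
const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO), FOUR = SUCC(THREE);
```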

Quotient Square Root

As with natural numbers and integers we can define operations to check for exact square roots. This works by simply delegating to the integer and natural number variants for the numerator and denominator separately, since sqrt(a/b) = sqrt(a)/sqrt(b).

But now that we have defined quotients, we can even calculate fractional square roots that we could not calculate before. There exist various iterative methods to determine the square root of a number. Many of them work by making an initial guess and then refining it. A simple variant is implemented below:
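One such method is Heron's method: repeatedly replace a guess g with the average of g and x/g. Here it is sketched on plain JavaScript pairs `[numerator, denominator]`; the combinator version performs exactly the same steps with the quotient operations defined in this section (all names here are illustrative):

```javascript
// rational arithmetic on plain pairs [num, den]
const qAdd = ([a, b], [c, d]) => [a * d + c * b, b * d];
const qMul = ([a, b], [c, d]) => [a * c, b * d];
const qDiv = ([a, b], [c, d]) => [a * d, b * c];
const HALF = [1, 2];

// Heron's method: refine an initial guess of 1 a fixed number of times,
// each step averaging the guess with x divided by the guess.
const qSqrt = (x, steps) => {
  let guess = [1, 1];
  for (let i = 0; i < steps; i++) {
    guess = qMul(HALF, qAdd(guess, qDiv(x, guess)));
  }
  return guess;
};

const [n, d] = qSqrt([2, 1], 5); // a close rational approximation of sqrt(2)
```

Because the intermediate fractions are never reduced, the components grow quickly; a real implementation would reduce by the greatest common divisor after each step.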

Instead of iterating a fixed number of times, one could also refine the result until the error is lower than a given tolerance.

Complex Numbers

Similar to integers and quotients, complex numbers can be defined as a pair of numbers, this time a pair of integers. We could either define the pair (a,b) to represent the complex number a + j*b (Cartesian) or to represent the number a * e^(j*b) (polar).

For the definitions below the Cartesian representation is used.
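As a minimal sketch of the Cartesian encoding, with component magnitudes shown as plain Church naturals (the full construction uses integers, and the names `RE`, `IM`, `CPX` and `CPX_ADD` are assumptions). Addition of Cartesian complex numbers is component-wise:

```javascript
const PAIR = a => b => f => f(a)(b);
const FST  = p => p(a => b => a);
const SND  = p => p(a => b => b);
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const ADD  = m => n => f => x => m(f)(n(f)(x));

// (a, b) encodes a + j*b
const CPX = a => b => PAIR(a)(b);
const RE  = FST;
const IM  = SND;

// (a + j*b) + (c + j*d) = (a + c) + j*(b + d)
const CPX_ADD = x => y =>
  CPX(ADD(RE(x))(RE(y)))(ADD(IM(x))(IM(y)));

const toJS = n => n(x => x + 1)(0);
const ONE = SUCC(ZERO), TWO = SUCC(ONE);

const z = CPX_ADD(CPX(ONE)(TWO))(CPX(TWO)(ONE)); // (1 + 2j) + (2 + 1j)
```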

Not yet defined

Up until now we have seen how the lambda calculus and combinators can be used to define number systems, control flow, data structures and algorithms.

Some functions, operations, structures and numbers have been left out: logarithms, nth roots, iterative summation, sets, binary trees. You may try to implement some of them yourself.


In the beginning we constructed a natural number n as a function that takes two arguments and applies the first argument n times to the second. This allowed us to use our natural numbers as part of our calculations, but we could never display an actual readable number as a result.

To make the results of our calculations readable we can provide an interpretation that converts our numbers constructed from functions into JavaScript values that can be written to the screen.

Converting to JavaScript

The following functions can be used to convert our combinator constructions into native JavaScript values:

Notice how EvalNatToJS converts our natural number encoding into a JavaScript number by passing 0 as the second argument and the JavaScript function (n) => n + 1 as the first argument. When applied to a natural number n, this adds 1 to 0 n times.
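A sketch of EvalNatToJS following that description, together with an analogous converter for Church booleans (the name `EvalBoolToJS` is an assumption of this sketch):

```javascript
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const TRUE  = a => b => a;
const FALSE = a => b => b;

// apply (+1) to 0 exactly n times → the JS number n
const EvalNatToJS = n => n(x => x + 1)(0);

// pick between the JS values true and false
const EvalBoolToJS = b => b(true)(false);

EvalNatToJS(SUCC(SUCC(ZERO))); // → 2
```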

Test Suite

Test Cases

The test suite below converts some combinator encodings back into JavaScript values to check that everything works as expected.
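A minimal sketch of what such test cases can look like, using console.assert (the arithmetic combinators are the Church-numeral operations from earlier in the text):

```javascript
const ZERO = f => x => x;
const SUCC = n => f => x => f(n(f)(x));
const ADD  = m => n => f => x => m(f)(n(f)(x));
const MULT = m => n => f => m(n(f));
const EvalNatToJS = n => n(x => x + 1)(0);

const ONE = SUCC(ZERO), TWO = SUCC(ONE), THREE = SUCC(TWO);

console.assert(EvalNatToJS(ADD(TWO)(THREE)) === 5, "2 + 3 should be 5");
console.assert(EvalNatToJS(MULT(TWO)(THREE)) === 6, "2 * 3 should be 6");
```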


This page is inspired by the talk A Flock of functions by Gabriel Lebec

Further resources about combinatory logic can be found here: