I was reading a classic Math Overflow thread the other day about jokes “in the sense of Littlewood.” The original joke was the Euler product formula $\sum_{n > 0} n^{-s} = \prod_{p\;\text{prime}} \left(1 - p^{-s}\right)^{-1},$ but there are several other cute entries.

As a CS major who has fallen under Klaus’s influence,^{1} one that I found particularly funny is the following: given a (not necessarily commutative) ring where $1-ab$ is invertible, $(1-ba)^{-1}$ also exists, and is given by the formula $(1-ba)^{-1} = 1 + b(1-ab)^{-1}a.$ The joke is to derive this using a neat trick with a power series, but as a comment points out, the result actually makes a lot of sense if you think of these things as regular expressions over $\Sigma = \{a,b\}$, or more generally as Kleene algebras.

In a way, you can think of $(1-u)^{-1}$ as $u^{\star}$ (i.e. Kleene star): $\frac{1}{1-u} = \sum_{k=0}^{\infty} u^k = u^{\star},$ to play a little fast and loose with notation. Likewise, one can think of $1$ as the empty string $\varepsilon$, since they form the identities for multiplication and string concatenation respectively.^{2} Then the result is remarkably obvious: $(ba)^{\star} = \varepsilon + b(ab)^{\star}a.$ In my opinion, this is rather funny.
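Incidentally, once you have the formula in hand, it can be checked directly with no power series in sight: $(1-ba)\left(1 + b(1-ab)^{-1}a\right) = 1 + b(1-ab)^{-1}a - ba - bab(1-ab)^{-1}a = 1 - ba + b(1-ab)(1-ab)^{-1}a = 1 - ba + ba = 1,$ and an entirely symmetric computation handles multiplication on the other side. Of course, verifying the answer is much easier than guessing it, which is what makes the series trick a good joke.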

This “joke” exploits the “convergence” of a geometric series in a setting where such convergence has no obvious meaning: abstract algebra and regular expressions.^{3} Paul Halmos once remarked on the utility of exactly this trick in an expository article,^{4} calling the geometric series, along with quotient structures and eigenvalues (and more generally fixed points), an “element of mathematics” that keeps cropping up in odd places where one least expects it. In fact, he used the very same example with $(1-ba)^{-1}$.

In a similar spirit, I propose the following joke in the realm of type theory, based on things that I picked up from 98-317 (Hype for Types). We can impose some algebraic structure on types in programming languages; for instance, the type of pairs whose components come from types $\tau_1$ and $\tau_2$ is analogous to their Cartesian product $\tau_1 \times \tau_2$.^{5} I would assume that this is what inspires the Standard ML syntax for tuples:
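```
val _ : string * bool = ("foo", true)
```

The product sign appears right in the type annotation: `string * bool`.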

Similarly, given types $\tau_1$ and $\tau_2$, we can interpret the type “something from $\tau_1$ or something from $\tau_2$” as their *tagged union*, or with a more algebraic flavor, as their sum $\tau_1 + \tau_2$. In Standard ML:

```
datatype ('t1, 't2) sum = Left of 't1
                        | Right of 't2

val _ : (string, bool) sum = Left "foo"
val _ : (string, bool) sum = Right false
```

An intuitive way to think about this is that if $\tau_1$ is inhabited by $|\tau_1|$ values and $\tau_2$ is inhabited by $|\tau_2|$ values, then $\tau_1 \times \tau_2$ is inhabited by $|\tau_1| \times |\tau_2|$ values, and $\tau_1 + \tau_2$ is inhabited by $|\tau_1| + |\tau_2|$ values.
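To make this concrete with small types (reusing the `sum` datatype from above): `bool` has two inhabitants and `unit` has one, so we can enumerate every value of the corresponding product and sum types:

```
(* bool * unit has 2 * 1 = 2 values: *)
val _ : bool * unit = (true, ())
val _ : bool * unit = (false, ())

(* (bool, unit) sum has 2 + 1 = 3 values: *)
val _ : (bool, unit) sum = Left true
val _ : (bool, unit) sum = Left false
val _ : (bool, unit) sum = Right ()
```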

With these preliminaries aside, let’s try to express the type of a list algebraically. An $\alpha\;\mathsf{list}$ (i.e. a list whose elements are of type $\alpha$) is really just either an empty list, or a singleton list, or a pair of elements, or a triple of elements, and so on. Essentially, a list is a sum of products:

```
datatype 'a list = Empty of unit
                 | Single of 'a
                 | Pair of 'a * 'a
                 | Triple of 'a * 'a * 'a
                 | Quadruple of 'a * 'a * 'a * 'a
                 | ...
```

And now using our representation of product types and sum types, we can do the very curious thing of thinking of a list as a *power series*: $\alpha\;\mathsf{list} = 1 + \alpha + \alpha^2 + \alpha^3 + \alpha^4 + \cdots = \sum_{k=0}^{\infty} \alpha^k.$ Now let’s step back for a moment and apply the usual trick of rewriting the geometric series in closed form: $\alpha\;\mathsf{list} = \sum_{k=0}^{\infty} \alpha^k = \frac{1}{1-\alpha}.$ Multiplying both sides by $1 - \alpha$ and rearranging, we see that this is the same as saying $\alpha\;\mathsf{list} = 1 + \alpha \times (\alpha\;\mathsf{list}).$ But let’s write this out in code:
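```
datatype 'a list = Nil of unit
                 | Cons of 'a * 'a list
```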

This is precisely the canonical recursive definition of a list!

As a side note, Scott Aaronson has a blog post containing a few funny proofs, although this sense of humor is slightly different from the one above. Actually, I think I was asked a few of the questions in his blog post in a complexity theory class!

Klaus Sutner teaches computational discrete math at CMU, and he is notably enthusiastic about the role of abstract algebra in computation.↩

Or more formally and with less notational abuse, $\mathcal{L}(1) = \{\varepsilon\}$. Notice that this plays well with the Kleene star idea: if $0$ is the regular expression $\varnothing$ (the identities for addition and set union respectively), we see that $(1-0)^{-1} = 1$, and likewise $\varnothing^\star = \varepsilon$.↩

Though funnily enough, I recently stumbled across a blog post on the *permanence of identities*, which uses category theory and model theory to justify such manipulations.↩

Halmos, P.R. “Does mathematics have elements?”. *The Mathematical Intelligencer* 3, 147–153 (1981).↩

As an aside, this also explains the name of the `unit` type, which is inhabited by the single value `()`: it acts as the multiplicative unit.↩