The nice thing about having a “thoughts” section is that unlike a technical blog, I don’t feel pressured to make my posts particularly technically impressive. I can just write about random stuff that popped up in my life that’s kind of short and silly.

I’ve always had a tenuous working relationship with monads; I know just enough to use them for basic things and get through the monad homework for a little type theory survey course that I took, but I wouldn’t say that I have any great understanding. So it’s always pleasant when I find that I can sprinkle in a little bit of monadic thinking into my code. I am under the impression that they make one look terribly clever.

Anyway, wanting some relaxation after a somewhat taxing final exam, I decided to write a tiny bit of Haskell to solve a problem.^{1} The problem is the $d$-sum problem:

Fix an integer $d > 0$. Given a list of $n$ integers, are there $d$ (not necessarily distinct) elements that sum to $0$?

Assuming the integers are bounded by $n^{\text{poly}(d)}$ in magnitude, there’s a very obvious $\tilde{O}(n^d)$ strategy (the $\text{poly}(\log n)$ factors may arise from arithmetic operations), which is to just try every possible combination of $d$ elements.^{2} Here’s a short snippet which does the slightly more general task of seeing if any $d$ elements sum to a specified `target` value:

```
{- computes all combinations of k elements of a list, with replacement -}
combinations :: (Integral t) => [a] -> t -> [[a]]
combinations list 0 = []
combinations list 1 = map return list
combinations list k =
  combinations list (k-1) >>= \elt -> map (:elt) list
{- solves the d-sum problem inefficiently -}
dSum :: (Integral t) => [t] -> t -> t -> Bool
dSum list d target =
  any (== target) $ map sum $ combinations list d
```
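Just to see `dSum` itself in action on a couple of tiny inputs (the definitions are repeated below so the snippet stands alone; the example numbers are arbitrary):

```
{- same definitions as above, repeated so this compiles standalone -}
combinations :: (Integral t) => [a] -> t -> [[a]]
combinations list 0 = []
combinations list 1 = map return list
combinations list k =
  combinations list (k-1) >>= \elt -> map (:elt) list

dSum :: (Integral t) => [t] -> t -> t -> Bool
dSum list d target =
  any (== target) $ map sum $ combinations list d

main :: IO ()
main = do
  print $ dSum [1, 3, -4] 3 0 -- True: 1 + 3 + (-4) == 0
  print $ dSum [2, 4] 2 5     -- False: every 2-sum of evens is even
```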

I think the use of the `return` and `>>=` operators of the list monad makes the code a little prettier, and at least on toy problems, Haskell is always a joy to read. (Though of course, `(:[])` is a little shorter than `return` for lists, but I like the longer word.) But then I looked at this and thought: hey, wouldn’t it be nice to make it even *more* monadic? Here’s a more general variant of the same `combinations` function:

```
import Control.Monad
import Control.Applicative
import Data.Functor

combinations' ::
  (Integral t,
   Monad m,
   Alternative m,
   Monoid (m (m a))) => m a -> t -> m (m a)
combinations' list 0 = mempty
combinations' list 1 = fmap return list
combinations' list k =
  combinations' list (k-1) >>= (list <&>) . (flip $ (<|>) . return)
```

These behave identically, as one would expect, when you give them a real list:

```
λ> combinations' [1,2,3] 3 == combinations [1,2,3] 3
True
λ> (length $ combinations (take 5 [1..]) 4) == 5^4
True
```

I spent way too much time looking up the right `<&>`, `<|>`, `<*>`, etc. operators to do what I wanted.
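For the record (and for future me), here’s what those operators do when specialized to plain lists — a small standalone check:

```
import Control.Applicative ((<|>))
import Data.Functor ((<&>))

main :: IO ()
main = do
  -- (<&>) is flip fmap: apply a function to every element
  print $ [1, 2, 3] <&> (* 10)        -- [10,20,30]
  -- (<|>) on lists is just concatenation
  print $ [1, 2] <|> [3 :: Int]       -- [1,2,3]
  -- (<*>) applies each function on the left to each value on the right
  print $ [(+ 1), (* 2)] <*> [10, 20] -- [11,21,20,40]
```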

In fact, coming up with (though not implementing) an efficient solution was part of a homework problem that I had. I have a bad history with implementing solutions to homework problems; once, Evan and I were working in Gates at 1:00 am, all because I decided, for fun, to try to actually implement a certain dynamic programming algorithm for 15-251, and then I couldn’t figure out a bug in my code. I eventually realized what the bug was several months later, when working on a similar problem for another class.↩

In fact, there is a much better solution. One part of the aforementioned homework problem was to find an $\tilde{O}(n^{\lceil d/2 \rceil})$ solution, and the second was to prove that, assuming the exponential time hypothesis for 3-SAT, there is a lower bound of $\Omega(n^{\alpha d})$ for some constant $\alpha > 0$. Unfortunately, since this is a homework problem, I won’t be able to share the more efficient solution, but you can try it out on your own if you’d like.↩