I want to note at the outset that, while I do have a master’s degree in linguistics and have spent a greater-than-normal amount of time thinking about verbs in my life, it’s been a long time since I graduated, and the science of language is by no means settled. Some of what I’m about to gesture at is still under dispute, and since this is a highly nontechnical post, I’m going to gloss over a lot of details and conflate some things, and this post will reflect the overall perspective and biases of the model I worked in when I was still doing linguistics work.

One of the confusions that I think motivated some of the initial responses to my tweet is that non-linguists tend to think of language as primarily a *set of words*, with *subsets* that include the different sorts of words we’re all familiar with: nouns, verbs, adjectives, and whatnot. On this model, verbs are a set of words, further divided (presumably) into subsets based perhaps on whether they are transitive or intransitive or regular or irregular in their past tense or whatever.

But, at least since Chomsky’s Syntactic Structures was published – or Saussure, if you prefer! – most linguists do not think about language this way. There are certainly words in a language, and they generally have meaning, but the meanings of individual words do not make a language; grammar makes a language. Grammar is perhaps a bad word to use for this, because it’s been tainted for most of us by primary school education, in much the same way as “math” has. So call it syntax instead. What is important is that there is an underlying structure to language, rules that we are not really conscious of but that guide us in putting sentences together and constructing and understanding meaning.

Verbs are more than an arbitrary set of words. It doesn’t matter what we call these words; what matters is *why* we group them together. If we insist on thinking of this in set theory terms, what I want is not an extensional definition of the set but an *intensional* one: what are the criteria for inclusion in that set?

How do we know when a word is a verb? We “verb” words, especially in English, all the time; how do we know it became a verb? Having worked on languages that I don’t speak and that don’t have an established tradition of Latin-based grammar study, I am painfully aware that you can’t just ask everyone in the world to tell you if something is a verb or to list the verbs in their language. For this kind of research, it really matters whether we know some criteria by which we can begin to talk about the sets and classes of words in a new language.

What makes a verb a verb? There isn’t a very satisfying explanation in terms of what those words *mean*, a point to which I’ll return. Could there be a language without verbs? I’m not really going to address that here, but it’s fun to think about, isn’t it? The roles we associate (for most of us, unconsciously) with verbs would have to be filled by something else [narrator voice: or would they?].

I apparently really miss my old gig of overanalyzing verbs, but as I was driving that day I was still at a point where Haskell was fresh and new to me and I was also thinking about typeclasses a great deal. And it occurred to me that we might ask similar questions about monads: how do we know when a type is a monad? What makes a monad a monad?

Now, monads do all share some things in common. They all take one type argument, for example. But not all types that take one type argument are monads, so that isn’t enough. And monads like `Maybe` and `List` and `Either a` are semantically quite distinct. On the other side of the coin, `Validation a` and `Either a` are *isomorphic* and yet `Validation a` isn’t a monad. So we have some rules about what can be included in the typeclass `Monad`: the type has to have a certain structure (taking one argument) and it has to be able to *do* a specific thing. For `Monad` it has to be a type for which we can write a lawful implementation of the `>>=` (“bind”) function. You can actually write that function for `Validation a`, but it violates some laws so we don’t include it in the class of monads.
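To make that concrete, here is a sketch using a local re-definition of the type (mirroring the shape of the one in the `validation` package; `bindV` and `apViaBind` are our own illustrative names). We *can* write a bind, but the monad it would induce disagrees with the error-accumulating `Applicative` that makes `Validation` worth using, violating the law that `<*>` must agree with the `<*>` derived from bind.

```haskell
-- A local re-definition mirroring the shape of the Validation type.
data Validation e a = Failure e | Success a
  deriving (Eq, Show)

instance Functor (Validation e) where
  fmap _ (Failure e) = Failure e
  fmap f (Success a) = Success (f a)

-- The Applicative that makes Validation useful: it *accumulates* errors.
instance Semigroup e => Applicative (Validation e) where
  pure = Success
  Failure e1 <*> Failure e2 = Failure (e1 <> e2) -- keep both errors
  Failure e  <*> Success _  = Failure e
  Success _  <*> Failure e  = Failure e
  Success f  <*> Success a  = Success (f a)

-- We can write a bind, but it must short-circuit on the first error,
-- exactly like Either's bind:
bindV :: Validation e a -> (a -> Validation e b) -> Validation e b
bindV (Failure e) _ = Failure e
bindV (Success a) f = f a

-- The <*> a lawful Monad would force on us: it disagrees with the
-- accumulating one whenever both arguments are Failures.
apViaBind :: Validation e (a -> b) -> Validation e a -> Validation e b
apViaBind vf va = vf `bindV` \f -> va `bindV` \a -> Success (f a)
```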

In other words, we know a monad by what it *can do* in a program. Monad is a *class*: a type is a monad iff it meets these conditions. Things that are not sufficient for knowing whether a type is a monad:

- its semantic content.
- (extensional) membership in a platonic Set of All Monads – ok, fine, this *might* work if we could *know* what all is included in it!
- its structure. Its arity is a clue that we *might* have a monad, but it doesn’t always work out – necessary, but not sufficient.

Verbs are more different from each other, perhaps, than monads are. We typically learn, at least in my American schools, that verbs express “action” and later we amend this to include “states of being”, “sensing”, or even “linking”. There are transitive and intransitive verbs, some taking no objects and some taking multiple objects. I’ve already pointed out that `Maybe` and `List` are semantically *very* different, but consider this small set of English verbs:

- ‘rain’ – Plausibly an “action”, but there is no plausible agent. In English we have to give sentences a subject, but the ‘it’ in “it’s raining” isn’t really an *agent* of that action.
- ‘seem’ – Not an action; in many uses, no plausible agent. Consider: “It seems like it might rain.”
- ‘run’ – Definitely an action, usually a plausible agent; no objects (intransitive).
- ‘give’ – Typically an action, no problem with agency here. Deluxe transitivity: there is both an object that is given and an object to whom that object is given. But not always!

These are very different in their arities, in their structures, in their meanings. What do they have in common? They all play the same role(s) in a clause. They get marked in certain ways – this is really language dependent, but typically includes:

- tense (present, past, future)
- aspect (e.g., ongoing or continuous actions vs finished ones)
- agreement with subjects and sometimes objects (such as the third person singular ‘-s’ in English; English is unfortunately very poor in this type of marking, but some languages are richer in inflection)
- mood (indicative, imperative, subjunctive)

They form the head of the verb phrase and have an argument structure – that is, they require subjects, objects, other phrases possibly indicating directionality or recipients of an action, e.g., ‘give x *to him*’. On some interpretations, all the relationships between all the other things in a clause are determined or caused or at least indicated by the verb (again, some languages mark their verbs with suffixes and such much more than English does, but English does a lot of this relationship-giving by our relatively fixed word order), sort of like a function determines the relationship between its parameters.
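Since this post keeps one foot in Haskell, the argument-structure idea can be caricatured with types (all of these names are illustrative, not a serious linguistic model): the verb determines how many and what kinds of arguments its clause needs, much as a constructor’s signature does.

```haskell
-- Illustrative only: thematic roles as types.
data Agent     = Agent String     deriving Show
data Theme     = Theme String     deriving Show
data Recipient = Recipient String deriving Show

-- Each verb fixes an argument structure, like a constructor signature:
data Clause
  = Give Agent Theme Recipient -- ditransitive: 'give x to him'
  | Run Agent                  -- intransitive: agent only
  | Rain                       -- no arguments at all: "it's raining"
  deriving Show
```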

It’s not wrong to say there is a set of verbs in a given language, but it’s also not exactly wrong to say `Monad` is a set – it’s a class, a collection of sets where membership is defined by some property that all its members share. `Monad` class membership is clearly defined, so that we have criteria for including new types in the class and we can always monad new types. Always be monading, that’s the Haskell way.

I verbed ‘monad’. How do you know I verbed ‘monad’ or that I’m verbing ‘verb’?

Because we have rules verbs must follow and roles that verbs fill. I’ve just added ‘monad’ to the set of verbs by making it perform certain functions. I’ve written an instance for it, so to speak.

Contents

I started writing this post because, for whatever reason, I keep forgetting what the difference is between a *ring* and a *group*, which is funny to me because I never forget the difference between a *semiring* and a *semigroup* – although other people do, because it’s quite easy to forget! So, I wanted a fast reference to the kinds of algebraic structures that I am most often dealing with in one way or another, usually because I’m writing Haskell (which has some reliance on terminology and structure from abstract algebra and category theory) or I’m trying to read a book about category theory and they keep talking about “groups.” Wikipedia, of course, defines all these structures, and that’s fine, but what I need in those times is more of a refresher than an in-depth explanation.

So, that’s the intent of this post – to be that fast reference to the kinds of algebraic structures I care about. There will be some mathematical language and notation that may not be familiar to everyone, so don’t freak out if you see something you’re not familiar with – it’s okay. Some of this is stuff I learned while writing this, because I want to know more about lattices than I currently do! There is a glossary at the end, so if some terminology is unfamiliar or you’ve (as I have, many times) forgotten what “idempotent” means, try checking there.

This sort of became longer than I originally intended, but I think the different sections might be useful at different times and perhaps to different audiences, so I kept it all. Hopefully the table of contents will still make it useful as a quick reference. If you like this style, this is a good preview of what you can expect from the main Joy of Haskell book, tentatively titled *The Haskell Desk Reference*, currently being written.

I had some help writing this and fact-checking it from Daniel Brice. I also had some help writing about complements and duality from my friend, Alex Feldman-Crough – thank you for making ‘involution’ make sense. Therefore, I will stop talking about myself as if I wrote it alone and shift to using “we” but, sorry, it felt weird to write this intro as if I were multiple people. Our hope is that this reference will be helpful to people who may be learning Haskell or PureScript and encountering a lot of new vocabulary.

This list is organized by dependencies, more or less, rather than alphabetically. The focus is on the subsets of the group-like, ring-like, and lattice-like structures, but is not all-inclusive. We may add to this as time goes on and new structures grab our fancy.

Some symbols used in this section:

- * : A generic binary operation. These do not necessarily mean *multiplication (as of integers)*. Where we must distinguish between two binary operations, we typically use `+` and ⋅. The binary operations in the lattice-like structures have their own symbols, though.
- × : The Cartesian product. Often this represents the Cartesian product of `A` and `B`, i.e., the set of all ordered pairs `(a, b)` with *a* ∈ *A* and *b* ∈ *B*. We are primarily concerned here with operations over a single set, though, so it’s more typically *A* × *A* in this post.
- *u* : An identity element or unit.
- ′ : An inverse.
- ∀ : “For all”; universal quantification.
- ∈ : “in”; set membership. Thus, ∀*x* ∈ *A* can be read “for every `x` in `A`…”, meaning the thing we’re about to assert must be true for every element `x` that is in the set `A`.

For definitions of terms, see glossary.

A set with a (closed) binary operation.

(*A*, * )

**Structure:**

- * : *A* × *A* → *A*

A magma where the operation is associative.

(*A*, * )

**Structure:**

- * : *A* × *A* → *A*

**Laws:**

- ∀*x*, *y*, *z* ∈ *A*; *x* * (*y* * *z*) = (*x* * *y*) * *z*

A semigroup with an identity element.

(*A*, *, *u*)

**Structure:**

- * : *A* × *A* → *A*
- *u* : *A*

**Laws:**

- (*A*, * ) is a semigroup
- ∀*x* ∈ *A*; *x* * *u* = *u* * *x* = *x*

A monoid that has inverses relative to the operation.

(*A*, *, *u*, ′)

**Structure:**

- * : *A* × *A* → *A*
- *u* : *A*
- ′ : *A* → *A*

**Laws:**

- (*A*, *, *u*) is a monoid
- ∀*x* ∈ *A*, ∃*x*′; *x* * *x*′ = *x*′ * *x* = *u*

A group where the operation is also commutative. You may also see the term *abelian* applied to semigroups and monoids whose operations are commutative.

**Laws:**

- ∀*x*, *y* ∈ *A*; *x* * *y* = *y* * *x*

This is in addition to the laws for semigroup, monoid, or group (whichever abelian structure you’re dealing with) that already pertain.

A group whose operation is idempotent. Strictly speaking, an idempotent group is necessarily trivial – that is, it is necessarily a group with only one element. As with *abelian*, *idempotent* may apply to semigroups or monoids as well when the operation is idempotent.

**Laws:**

- ∀*x* ∈ *A*; *x* * *x* = *x*

Two monoids over the same set whose monoid structures are compatible, in the sense that

- one operation (called multiplication) distributes over the other (called addition) and

- the additive identity is a multiplicative annihilator.

(*A*, +, ⋅, 0, 1)

**Structure:**

- + : *A* × *A* → *A*
- ⋅ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*

**Laws:**

- (*A*, +, 0) is a monoid
- (*A*, ⋅, 1) is a monoid
- ∀*x* ∈ *A*; 0 ⋅ *x* = *x* ⋅ 0 = 0
- ∀*x*, *y*, *z* ∈ *A*; *x* ⋅ (*y* + *z*) = *x* ⋅ *y* + *x* ⋅ *z*
- ∀*x*, *y*, *z* ∈ *A*; (*x* + *y*) ⋅ *z* = *x* ⋅ *z* + *y* ⋅ *z*

A quasiring with additive inverses.

(*A*, +, ⋅, 0, 1, − )

**Structure:**

- + : *A* × *A* → *A*
- ⋅ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*
- − : *A* → *A*

**Laws:**

- (*A*, +, 0, − ) is a group

A quasiring with commutative addition. Alternatively, a ring without inverses, hence also sometimes called a *rig*, i.e., a ring without *n*egatives.

Quasiring (*A*, +, ⋅, 0, 1)

**Laws:**

- (*A*, +, 0) is abelian

A ring without *i*dentities.

If we may speak frankly, the *rig* and *rng* nomenclatures are abominations. Nevertheless, you may see them sometimes, but we will speak of them no more.

A quasiring that is both a nearring and a semiring. Alternatively, an abelian group plus a monoid (over the same set) where the monoid operation is distributive over the group operation.

Nearring (*A*, +, ⋅, 0, 1, − )

**Laws:**

- (*A*, +, 0, − ) is an abelian group

Rings, semirings, and the lot can also be commutative rings, commutative semirings, etc., but, unlike the group-like structures, they are not usually described as, e.g., “abelian rings.”

A ring with multiplicative inverses.

(*A*, +, ⋅, 0, 1, −, ′)

**Structure:**

- + : *A* × *A* → *A*
- ⋅ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*
- − : *A* → *A*
- ′ : *A* \ {0} → *A* \ {0}

The notation *A* \ {0}, sometimes alternatively given as *A* − {0}, means “everything in *A* except 0.”

**Laws:**

- (*A*, +, ⋅, 0, 1, − ) is a ring
- (*A* \ {0}, ⋅, 1, ′) is a group

A division algebra with commutative multiplication.

Division algebra (*A*, +, ⋅, 0, 1, −, ′)

**Laws:**

- (*A*, +, ⋅, 0, 1) is commutative

There are some cool things about lattices that aren’t covered here, but lattice-like structures will be getting their very own followup post at some point in the (hopefully near) future.

A magma where the operation is commutative, associative, and idempotent. It could refer to either of the abelian semigroups of a lattice: ∨, often called *join* or Boolean *or*, and ∧, often called *meet* or Boolean *and*. We define the term here for reasons related to its usage in Haskell, but it is best understood in the context of lattices, rather than independently.

Two idempotent abelian semigroups over the same set whose semigroup structures are compatible, in the sense that the operations satisfy absorption laws. Interestingly, it’s sort of *two semilattices* in the same way that a semiring is *two monoids*, with laws tying them together (distributivity in the case of semirings, absorption laws in the case of lattices).

(*A*, ∨, ∧ )

**Structure:**

- ∨ : *A* × *A* → *A*
- ∧ : *A* × *A* → *A*

**Laws:**

- (*A*, ∨ ) is an idempotent abelian semigroup
- (*A*, ∧ ) is an idempotent abelian semigroup
- ∀*x*, *y* ∈ *A*; *x* ∨ (*x* ∧ *y*) = *x*
- ∀*x*, *y* ∈ *A*; *x* ∧ (*x* ∨ *y*) = *x*

The last pair of laws is called *absorption*. Since absorption laws are unique to lattices, we discuss them here instead of in the glossary. The absorption laws link a pair of semilattices in a kind of distributive relationship, so that a lattice is not just any two semilattices that happen to be over the same set, but only semilattices that are linked in this way. In particular, the absorption laws ensure that the two semilattices are *duals* of each other. It can take a moment to see what this means, so let’s pause and look at concrete examples.

Consider a Boolean lattice with two elements, `True` and `False`, where `||` corresponds to ∨ and `&&` corresponds to ∧.

But it’s important to note that these hold *for all* *x* and *y* in the set. So, if we swap them, the absorption laws still hold.

Positive integers form a lattice under the operations `min` and `max`, and we can see the absorption law in action here, too.
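Both examples can be spot-checked directly; a small sketch (function names are ours):

```haskell
-- Absorption in the Boolean lattice, with || as join and && as meet:
boolAbsorption :: Bool -> Bool -> Bool
boolAbsorption x y =
  (x || (x && y)) == x && (x && (x || y)) == x

-- Absorption in the (max, min) lattice on integers:
intAbsorption :: Int -> Int -> Bool
intAbsorption x y =
  max x (min x y) == x && min x (max x y) == x
```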

The absorption laws are sometimes called a special case of identity, and they’re also related to idempotence in that the idempotence laws can be derived from the absorption laws taken together.

A lattice whose semigroup structures are monoids.

(*A*, ∨, ∧, 0, 1)

**Structure:**

- ∨ : *A* × *A* → *A*
- ∧ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*

**Laws:**

- (*A*, ∨, ∧ ) is a lattice
- (*A*, ∨, 0) is a monoid
- (*A*, ∧, 1) is a monoid

A bounded lattice where each element has a complement.

(*A*, ∨, ∧, 0, 1, ′)

**Structure:**

- ∨ : *A* × *A* → *A*
- ∧ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*
- ′ : *A* → *A*

**Laws:**

- (*A*, ∨, ∧, 0, 1) is a bounded lattice
- ∀*x* ∈ *A*; *x* ∨ *x*′ = 1
- ∀*x* ∈ *A*; *x* ∧ *x*′ = 0

*Nota bene*: Although ′ defines a particular choice of complements (i.e., each element *x* ∈ *A* has exactly one corresponding *x*′ ∈ *A*), there may additionally be other elements *y* ∈ *A* such that *x* ∨ *y* = 1 and *x* ∧ *y* = 0. In particular, there may be other suitable ′ functions, and *x*″ is not necessarily *x*.

A lattice where the operations distribute over each other.

Lattice (*A*, ∨, ∧ )

**Laws:**

- ∀*x*, *y*, *z* ∈ *A*; *x* ∧ (*y* ∨ *z*) = (*x* ∧ *y*) ∨ (*x* ∧ *z*)
- ∀*x*, *y*, *z* ∈ *A*; *x* ∨ (*y* ∧ *z*) = (*x* ∨ *y*) ∧ (*x* ∨ *z*)

Strictly speaking, the second law can be derived from the first law and the lattice laws, and as such is redundant. Every totally ordered set, such as the real numbers and subsets of the reals including the naturals and integers, forms a distributive lattice with `max` as ∨ (join) and `min` as ∧ (meet).
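For instance, meet distributing over join can be spot-checked for integers (a small sketch; the function name is ours):

```haskell
-- With max as join and min as meet on a totally ordered set,
-- meet distributes over join:
distributes :: Int -> Int -> Int -> Bool
distributes x y z =
  min x (max y z) == max (min x y) (min x z)
```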

A bounded, distributive lattice with an implication operation.

(*A*, ∨, ∧, 0, 1, ⇒ )

**Structure:**

- ∨ : *A* × *A* → *A*
- ∧ : *A* × *A* → *A*
- 0 : *A*
- 1 : *A*
- ⇒ : *A* × *A* → *A*

**Laws:**

- (*A*, ∨, ∧, 0, 1) is a bounded, distributive lattice.
- ∀*x* ∈ *A*; *x* ⇒ *x* = 1
- ∀*x*, *y* ∈ *A*; *x* ∧ (*x* ⇒ *y*) = *x* ∧ *y*
- ∀*x*, *y* ∈ *A*; *y* ∧ (*x* ⇒ *y*) = *y*
- ∀*x*, *y*, *z* ∈ *A*; *x* ⇒ (*y* ∧ *z*) = (*x* ⇒ *y*) ∧ (*x* ⇒ *z*)

A complemented Heyting algebra.

Complemented lattice (*A*, ∨, ∧, 0, 1, ′)

**Laws:**

(*A*, ∨, ∧, 0, 1, ⇒ ) is a Heyting algebra where ⇒ : *A* × *A* → *A* is given by *x* ⇒ *y* = *x*′ ∨ *y*.

The typeclass system of Haskell more or less corresponds to algebraic structures, with types as the sets. The typeclass definition gives the most general possible form of the operations over the sets, and the `instance` declarations define the implementations of those operations for the specified type (set).

Not all of the above structures are well represented in Haskell, but some are, and a couple of them (semigroups and monoids) are super important. We give those typeclass definitions, a representative `instance` or two, and links to documentation where appropriate.

One important thing to note before we get started is, perhaps, somewhat disappointing: the compiler does not enforce the laws of the algebraic typeclasses. The only thing standing between you and a law-breaking ring or monoid is … well … you, and your willingness to test your instances. That isn’t really different from the situation in mathematics, where it would be on you to prove that such-and-such upholds the ring laws or whatever, but some people come to Haskell expecting that the compiler won’t *let you* write a bad instance, and that is absolutely not the case. We like type checking and inference a lot and are grateful for the problems they do help with, but it’s important to be aware of their limitations as well!
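Nothing stops us from testing laws ourselves, though. A hand-rolled check of the monoid laws might look like this (a sketch; real projects would reach for a property-testing library such as QuickCheck or hedgehog):

```haskell
-- Checks left identity, right identity, and associativity
-- for three sample values of any Monoid with decidable equality.
monoidLaws :: (Eq a, Monoid a) => a -> a -> a -> Bool
monoidLaws x y z =
  (mempty <> x) == x
  && (x <> mempty) == x
  && ((x <> y) <> z) == (x <> (y <> z))
```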

The `Semigroup` class in Haskell is defined as follows:
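The class itself is tiny. Shown here as a runnable re-definition (the version in `base` also carries `sconcat` and `stimes`, which have default implementations):

```haskell
-- Hide the real class so this re-definition compiles standalone.
import Prelude hiding (Semigroup(..))

-- The heart of the class: one associative binary operation.
class Semigroup a where
  (<>) :: a -> a -> a
```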

Many sets form semigroups under more than one operation, so in Haskell, to preserve the unique relationship between a type and a typeclass, we use a named type wrapper, called a `newtype`, which identifies which semigroup we’re talking about. This parallels the situation in math where a set can’t be a group (or a ring, etc.). Instead, a group has to be a pair `(G, *)` where `G` is a set and `* : (G, G) -> G` and the group axioms hold. `G` is not the group: the group is the pairing of the set `G` with the structure function `*`. In Haskell, we use `newtype`s to pair a set (in this case, the type being wrapped) with some structure functions.

```
-- the Max semigroup is only for orderable sets
instance Ord a => Semigroup (Max a) where
  (<>) = coerce (max :: a -> a -> a)

-- the NonEmpty semigroup is concatenation of nonempty lists
instance Semigroup (NonEmpty a) where
  (a :| as) <> ~(b :| bs) = a :| (as ++ b : bs)
```

You can find more about `Semigroup` over on Type Classes.

In modern Haskell, `Semigroup` is a superclass of `Monoid`. That is, since monoids are semigroups with the additional requirement that there be an identity element, semigroup is in some sense the weaker algebra, and there are more of them than there are monoids. What this means is if we want a `Monoid`, we have to first have a `Semigroup`; the binary operation comes from the `Semigroup` instance. Then we define the identity element for that type and operation in our `Monoid` instance – in the `Monoid` class it’s called `mempty`.

Again, many sets form monoids under more than one operation, so we use `newtype`s in Haskell to tell them apart.

```
instance Num a => Semigroup (Sum a) where
  (<>) = coerce ((+) :: a -> a -> a)

instance Num a => Monoid (Sum a) where
  mempty = Sum 0

instance Num a => Semigroup (Product a) where
  (<>) = coerce ((*) :: a -> a -> a)

instance Num a => Monoid (Product a) where
  mempty = Product 1
```

Julie has also written extensively about `Monoid` over on Type Classes, about JavaScript and monoidal folds, and also given talks about these wonderful structures.

It is perhaps worth pointing out that the `Alternative` and `MonadPlus` typeclasses in Haskell are *also* monoids. The difference between them and the `Monoid` class is that `Monoid` is a typeclass for concrete types, whereas `Alternative` and `MonadPlus` are for type constructors, that is, parameterized types. We can make this more precise. If `f` is an `Alternative`, then for all `a`, `f a` is a monoid under `<|>`, with identity `empty`. We encode this fact in Haskell via the `Alt` newtype and its `Monoid` instance.
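For example, `Alt` from `Data.Monoid` lets us fold with `<|>`; for `Maybe`, that means “first `Just` wins” (a small sketch; `firstJust` is our own name):

```haskell
import Data.Monoid (Alt(..))

-- mconcat on Alt Maybe combines with <|> (identity: empty, i.e. Nothing),
-- so this returns the first Just in the list, if any.
firstJust :: [Maybe a] -> Maybe a
firstJust = getAlt . mconcat . map Alt
```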

We don’t exactly have a `Ring` typeclass in standard Haskell; what we have instead is the `Num` class, and it’s sort of like a ring. It’s a big typeclass, so this is a simplified version with the functions you’d expect a ring to have.
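A runnable sketch of that simplified, ring-flavored core (the real class in `base` also has `(-)`, `abs`, and `signum`, with defaults relating them):

```haskell
-- Hide the real class so this re-definition compiles standalone.
import Prelude hiding (Num(..))

-- The ring-shaped part of Num: addition, multiplication,
-- additive inverse, and the map from the initial ring of integers.
class Num a where
  (+)         :: a -> a -> a
  (*)         :: a -> a -> a
  negate      :: a -> a
  fromInteger :: Integer -> a
```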

Perhaps surprisingly, `fromInteger` has everything to do with rings and rightly belongs in the definition of any typeclass for rings. This is because the ring of integers is an initial object in the category of rings, i.e., for every ring `A`, there is always one and only one ring homomorphism (i.e., ring-structure-compatible function) from the ring of integers to the ring `A`. This is beyond the scope of the blog post, at least for now, but we mention it here so that someday Julie can come back to it.
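We can at least observe the homomorphism property concretely (a sketch using `Rational` as the target ring, to avoid floating-point noise; `preservesRing` is our own name):

```haskell
-- fromInteger preserves the ring structure: +, *, 0, and 1.
preservesRing :: Integer -> Integer -> Bool
preservesRing a b =
  fromInteger (a + b) == (fromInteger a + fromInteger b :: Rational)
  && fromInteger (a * b) == (fromInteger a * fromInteger b :: Rational)
  && (fromInteger 0 :: Rational) == 0
  && (fromInteger 1 :: Rational) == 1
```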

Comments in the source code say:

> The Haskell Report defines no laws for ‘Num’. However, ‘(+)’ and ‘(*)’ are customarily expected to define a ring

and then give the properties a ring is expected to have; however, those laws are rarely mentioned with regard to `Num`, and we suspect many people do not even think of `Num` as a ring that should have laws. Let us speak no more of this.

It’s a shame that `Semiring` is not in the standard library (yet?). It is in the standard PureScript library, and we really admire PureScript for that, among other things. However, we have some decent implementations of it in libraries, for example the `semirings` package.

That `Semiring` definition looks like this:

```
class Semiring a where
  plus  :: a -> a -> a -- commutative operation
  zero  :: a           -- identity for `plus`
  times :: a -> a -> a -- associative operation
  one   :: a           -- identity for `times`
```

It also provides several instances, including this one for `Maybe` – notice the difference between the `plus` and `times` cases when one input is `Nothing`:

```
instance Semiring a => Semiring (Maybe a) where
  zero = Nothing
  one  = Just one
  plus Nothing y = y
  plus x Nothing = x
  plus (Just x) (Just y) = Just (plus x y)
  times Nothing _ = Nothing
  times _ Nothing = Nothing
  times (Just x) (Just y) = Just (times x y)
```
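To see that asymmetry run, here `Nothing` acts as the identity for `plus` but as the annihilator for `times` (a sketch with local copies of the class and instances, so it runs without the package):

```haskell
-- Local copy of the Semiring class and instances for demonstration.
class Semiring a where
  plus  :: a -> a -> a
  zero  :: a
  times :: a -> a -> a
  one   :: a

instance Semiring Int where
  plus = (+); zero = 0; times = (*); one = 1

instance Semiring a => Semiring (Maybe a) where
  zero = Nothing
  one  = Just one
  plus Nothing y = y            -- Nothing is the additive identity
  plus x Nothing = x
  plus (Just x) (Just y) = Just (plus x y)
  times Nothing _ = Nothing     -- Nothing annihilates multiplication
  times _ Nothing = Nothing
  times (Just x) (Just y) = Just (times x y)
```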

There is more about semirings on Type Classes.

These structures are interesting, but we have not yet written much about them or used the `lattices` package. We notice that that package defines two semilattice classes and then a `Lattice` class that is constrained by *both*. We note that the `Lattice` class has no new functions in it; you can use it as a constraint on other things when you have two semilattices (the meet and the join) and the absorption law holds.

```
class JoinSemiLattice a where
  (\/) :: a -> a -> a

class MeetSemiLattice a where
  (/\) :: a -> a -> a

class (JoinSemiLattice a, MeetSemiLattice a) => Lattice a where
```

And the instances for `Bool` are defined:

```
instance JoinSemiLattice Bool where
  (\/) = (||)

instance MeetSemiLattice Bool where
  (/\) = (&&)

instance Lattice Bool where
```

The absorption law does hold for the `Bool` lattice, so it looks like we’re all good here!

This first section gives definitions of some common terminology when talking about the laws and properties of these structures.

Some of these will probably be familiar to most people from high school math, others may not be.

- **Absorption**: See lattices.
- **Annihilator**: We have to include this term because it is such a metal thing to call zeroes. Annihilation is a property of some structures, such that there is an element of a set that always *annihilates* the other input to a binary operation, sort of the opposite of an identity element (see below). If `(S, *)` is a set `S` with a binary operation `*` on it, the annihilator, or zero element, is an element `z` such that for all `a` in `S`, `z * a = a * z = z`. In the monoid of integer multiplication, the annihilator is zero, while in the monoid of set intersection, the annihilator is the empty set; notice that the monoids of integer addition and set union *do not have annihilators*.
- **Associativity**: Associativity may be familiar from elementary arithmetic, even if the name isn’t. For example, you may recall that `2 * (3 * 4)` and `(2 * 3) * 4` always evaluate to the same result, even though you simplify the parts in parentheses first, so the parentheses change the order in which you evaluate the expression. When the result never depends on the order of simplification, we say that a binary operation is *associative*. More formally, an operation `*` on a set `S` is *associative* when for all `x`, `y`, and `z` in `S`, `x * (y * z) = (x * y) * z`.
- **Binary operation**: A *binary operation* `*` on a set `S` is a function `* : (S, S) -> S`. Notice that `*` maps `(S, S)` back into `S`. Because of an historical quirk, this fact is sometimes called *closure* (see below). In Haskell, that looks like a type signature such as `a -> a -> a` because Haskell is curried by default. All functions in Haskell are actually unary functions, taking one input and returning one result (which may itself be a function). The final parameter of a Haskell type signature is the return type; all others are input types.
- **Closed**: By definition, a binary operation over a set implies that the operation is *closed*, that is, for all `a`, `b` in set `S`, the result of the binary operation `a * b` is also an element in `S`. This coincides exactly with the definition of a function `(S, S) -> S` (see above). It is also sometimes called the property of *closure*. While this is definitionally a property of binary operations and, thus, not independently important, we mention it here because it comes up in the Haskell literature.
- **Commutativity**: Commutativity is not the same as associativity, although most commutative operations are also associative. The commutative property of some binary operations holds that changing the order of the inputs does not affect the result. More formally, an operation `*` on a set `S` is *commutative* when for all `x` and `y` in `S`, `x * y = y * x`.
- **Complement**: You may have learned about *complements* in geometry or with sets: two angles are complementary when they add up to 90 degrees, and two subsets of a set `S` – let’s call the subsets `A` and `B` – are complements when *A* ∪ *B* = *S* and *A* ∩ *B* = ∅ (where ∪ is for *union* and ∩ is for *intersection*). Simply put, a complement is what you combine with something to make it “whole”. In a complemented lattice, every element `a` has a complement `b` satisfying *a* ∨ *b* = 1 and *a* ∧ *b* = 0, where 1 and 0 are the greatest and least elements of the set, respectively. Complements need not be unique, except in distributive lattices.
- **Distributivity**: The distributive property in arithmetic states that multiplication distributes over addition such that `2 * (1 + 3) = (2 * 1) + (2 * 3)`. Some algebraic structures generalize this with their own distributive law. Suppose we have a set `S` with two binary operations, `<>` and `><`. We say `><` *distributes over* `<>` when, for all `x`, `y`, and `z` in `S`, `x >< (y <> z) = (x >< y) <> (x >< z)` (left distributive) *and* `(y <> z) >< x = (y >< x) <> (z >< x)` (right distributive). Note that if `><` is commutative and left distributive, it follows that it is also right distributive (and therefore distributive).
- **Dual**: This principle can also be somewhat tricky to understand, and discussions of what it means tend to get into the mathematical weeds quickly. Roughly, for our purposes (but perhaps not all purposes), it’s a “mirror-like” relationship between operations such that one “reflects” the other. Somewhat more formally, it means that there is a mapping between `A` and `B` that *involutes*, so `f(A) = B` and `f(B) = A`. Understanding duality is important because if you prove things in `f(A)`, you can prove things about `B`, and if you prove things about `f(B)`, then you can prove them about `A`. So, `A` and `B` are related, but it’s a bit more complicated than a standard function mapping. An involution is a function that equals its inverse, so applying it to itself gives the identity; that is, if `f(A) = B` and `f(B) = A`, then `f(f(A)) = A`. Some examples:
  - In Haskell, sum types and product types are dual (as are products and coproducts in category theory). You can demonstrate this by implementing `f :: (a, b) -> (Either (a -> c) (b -> c) -> c)` (mapping a product type to a sum) and `f' :: (Either a b) -> ((a -> c, b -> c) -> c)` (mapping a sum type to a product) and trying it out.
  - In classical logic, universal quantification (“for all `x` in `A`…”) and existential quantification (“there exists an `x` in `A`…”) are dual because ∃*x* : ¬*P*(*x*) and ¬∀*x* : *P*(*x*) are equivalent for all predicates `P`: if there exists an `x` for which `P` does not hold, then it is not the case that `P` holds for all `x` (but the converse does not hold constructively).

**Idempotence**: The idempotence we care about for our algebraic structures is a property of some binary operations under which applying the operation multiple times doesn’t change the result after the first application. It can be a bit tricky to understand, so let’s first consider idempotence with regard to unary functions to get a sense of the meaning. Consider a device with separate buttons for turning the device on and off; pushing the *on* button doesn’t turn it “more on”, so the *on* button is idempotent (and so is the *off* button). Similarly, taking the absolute value of an integer is an idempotent unary function; you can keep taking the absolute value of a number, and after the first time, the answer won’t change.

We say an element of a set is idempotent with respect to some operation `*` if `x * x = x`. We say an operation is idempotent if every element in the set is idempotent with respect to the operation. Both the annihilator and identity elements, if present in a given structure, are idempotent elements. For the natural numbers under multiplication, both `1` and `0` are idempotent; for the naturals under addition, only `0` is. Hence neither addition nor multiplication of the natural numbers is itself idempotent. The set operations of union and intersection, however, are both idempotent operations.

**Identity**: An identity element is an element of a set that is neutral with respect to some binary operation on that set; that is, it leaves any other element of that set unchanged when combined with it. An identity value is unique with respect to the given set and operation. More formally, for a set `S` with a binary operation `*` on it, `x` is the identity value when `x * a = a * x = a` for all `a` in `S`. In Haskell, `mempty` is a *return-type polymorphic* identity value for monoids and `empty` is the same but for `Alternative`s, but identity values are also often called `one` and `zero` on analogy with the identities for multiplication and addition, respectively. Often, the identity called “zero” will also be an annihilator for the dual operation; e.g., the empty set, the empty list (the identity of concatenation and an annihilator of `zip`), `False`, and the like play both roles in their respective structures.

**Invertibility**: This is also familiar to many of us from basic arithmetic, even if the name is not. Zero can serve as an identity element for addition of the integers or of the natural (“counting”) numbers, assuming we include zero in those. The set of integers includes numbers that we can add to each other to get back to zero, e.g., `(-3) + 3 = 0`; there aren’t any such natural numbers because the set of natural numbers does not include negatives. This property of the integers under addition is *invertibility*. Given a binary operation `*` on `S` with identity `e`, an element `b` in `S` is said to be an *inverse* of an element `a` in `S` if `a * b = e = b * a`, in which case `a` (as well as `b`, simply by the symmetry in the definition) is said to be *invertible* in `S` relative to `*`. If every element of `S` is invertible in `S` relative to `*`, then we say `S` has inverses relative to `*`.

**Unit**: The idea of being a unit is related to invertibility. A *unit* is an element of a ring that has a multiplicative inverse. The number `1` is its own multiplicative inverse, as is `(-1)`, because `(-1) * (-1) = 1`. In Haskell, there is a type called “unit”, written `()` (an empty tuple, if you will); while types in Haskell do not form a ring, the unit type plays the same role in the semiring of types as the number 1 plays in the semiring of natural numbers (the zero is represented by the `Void` type, which has no values).
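To make a couple of these ideas concrete in Haskell, here is a small sketch (the binding names are invented for illustration; the wrappers come from `Data.Monoid`):

```
import Data.Monoid (Product (..), Sum (..))

-- mempty is the identity: combining with it changes nothing.
additive, multiplicative :: Bool
additive       = Sum (5 :: Int) <> mempty == Sum 5          -- identity is Sum 0
multiplicative = Product (5 :: Int) <> mempty == Product 5  -- identity is Product 1

-- The empty list is the identity for (++) but an annihilator for zip:
listIdentity, listAnnihilator :: Bool
listIdentity    = ([1, 2, 3] ++ []) == [1, 2, 3 :: Int]
listAnnihilator = null (zip [1, 2, 3 :: Int] ([] :: [Int]))
```

All four of these evaluate to `True`.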

You may sometimes hear about *left* or *right* associativity or identity. For example, exponentiation is only *right-associative*. That is, in a chain of such operations, they group for evaluation purposes from the right.

This is more of a convention than a property of the function, though, and it is often preferable to use parentheses to make associativity explicit when it is one-sided (i.e., when something is right-associative but not fully *associative*). We call something *associative* when it is both left- and right-associative. We call something *distributive* when it is both left- and right-distributive. We call something an *identity* if it is both a left and a right identity.
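In Haskell, for instance, the exponentiation operator `(^)` is declared `infixr 8`, so a chain groups from the right unless you add parentheses (the binding names here are invented for illustration):

```
-- (^) is infixr, so 2 ^ 3 ^ 2 parses as 2 ^ (3 ^ 2):
groupedRight, groupedLeft :: Integer
groupedRight = 2 ^ 3 ^ 2    -- 2 ^ 9 = 512
groupedLeft  = (2 ^ 3) ^ 2  -- 8 ^ 2 = 64
```

The two results differ, which is exactly why explicit parentheses are kind to your readers.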

It was while writing my first book that my Twitter account became attached to my real name. Since we self-published that book and did our own marketing, it was necessary. It had always been my personal account, for my life, and learning Haskell was part of my life, so I didn’t immediately see anything wrong with blending the two.

Increasingly, this presents health problems for me. Part of the problem is undoubtedly that … I don’t seem to be a nobody anymore, and I was unprepared for it and don’t … really like it. It seems to be a truism that if a Twitter account is getting popular, they must be enjoying it and want it to be that way. Maybe it’s true in general, but it’s not true of me. I am no longer free to ignore, mute, or block whomever I please; it upsets people, they complain, they use it against me in the Reddit rumor mills once in a while, “evidence” of my unreasonableness. I have to fact-check jokes before I make them (obviously, I do not mean by this that I am literally *being forced*). Replying to anyone means that roughly 7200 people will potentially see it and think that I also have the time and energy to argue with them about it, when they (probably? hopefully?) wouldn’t think so if they overheard me and a friend talking at a coffee shop or whatever.

Also, since 2016, when my former business broke up, I have gone out of my way to give talks, lead workshops, and write publicly so that people could have a sense of my voice when I’m on my own, of what I know and don’t know, of my capabilities. For much of that time, I’ve been a single mother. When someone is on IRC or Reddit saying I don’t do my own work, I guess once in a while I think that someone will speak up for me and say, “naw, she’s put out a ton of work of her own, given lots of solo talks and so forth,” because the evidence *is* out there. But people tend not to, and the people who are inclined to stereotyped views of women won’t go looking for evidence that contradicts their biases. Now, how many such people there are and how much of an impact it all has on me is debatable; it arguably has less now than it used to, but the couple of years I spent directly fighting that perception were exhausting, and part of my health problem now is from that burnout and fatigue.

Furthermore – and this is difficult to explain, because few people have ever been in a situation like this – *my story* and my willingness to *learn Haskell in public* while writing a monumental book about it was the main marketing strategy of that book. But I no longer have any part of that book; I do not gain from the book being well-marketed. To the extent that some of my current writing could be seen to be in competition with it, it could be said that I am being used against myself. At any rate, when my personal Twitter account became inextricably linked with that book, some part of me was lost; I no longer own that part of myself. As I say, it’s difficult to explain. Many people have started businesses with a partner and then had a bitter falling out and broken up the company, and it sucks; however, I would say that it is atypical for that company to continue using your personal story to promote itself. It’s gotten somewhat better now, but for a while it felt a bit like when people in the *Golden Compass* books are separated from their daemons.

And then the last episode was just too much.^{1}

If charity and assumptions of good faith are owed to participants in a conversation, then they are owed to all equally; if the word *sexism* sets you off in such a way that you can no longer exercise charity and assume good faith, then you need to remove yourself from the conversation. Use of that word (or ‘racism’ or any similar word that upsets you) does not suddenly render an argument invalid – it may be a bad argument for other reasons, but that ain’t it. “It’s inflammatory!” is a statement about how it makes you feel; I’m not invalidating those feelings, because grappling with the ways in which we fall short of our ideals as a fair and reasonable meritocracy of sorts is hard and upsetting, but it’s not really a counter-argument. Also, you should consider reading about what Liam Kofi Bright calls the Informal Omega Inconsistency, because some of you are doing that. In a conversation on Twitter, the awesome Jon Purdy and I tried to formalize it, if you are interested.

My struggles with depression, PTSD, and suicidal ideation have been part of my personal story on Twitter for a very long time, and it is *beyond inappropriate* to dogpile me for calling out behavior **you admit was bad** because the man who did it “has mental illness.” Give some thought to whom you have sympathy for and why; I am no less deserving of it than he is.

“Virtue signaling” may be a real thing, but the term is overused. If I tweet a complaint about mistreatment that had a material impact on me personally, what exactly is the peer group or tribe that I am allegedly signaling to? If a man tweets a complaint about a former employer who took something of value from him (say, his time or his IP), he is not typically accused of virtue signaling (nor, as I have been, of ruining his reputation by harping on it). Entertain the idea that, yes, sexist comments and malicious rumors that reinforce stereotypes do have material impacts on people. The idea that I gain any kind of support or well-being or material benefit from tweeting about sexism is, excuse my language, *fucking ludicrous*. And furthermore, you did not complain about this back when you thought I was on “your side”, whatever that is. I’m in the rare position of having the public perception of my beliefs flip so dramatically. Note, I did not say my beliefs have flipped, though a few have changed, as they should if you are listening to people outside your bubble – it’s mostly your perception based on what group you think I belong to.

It is OK to admit you do not understand a topic and seek to learn about it, even on Twitter, though not everyone will have the time or patience or ability to teach. It is less OK to start arguing with someone about a topic you do not understand, rather than seeking to understand it. This is just as true of a topic like *misogyny* as it is of algebra or category theory.

These things were just too much, when we are already marinating in a culture where rape “jokes” are avidly defended but “racism against Germans” (reported by men, of course) is treated as a breach requiring an apology.

No, a lot of things have just been too much, but I’ve found ways to cope and carry on, and I won’t anymore. Lots of days on Twitter were fun and I learned things, and then the bad times would come and they were truly bad. I have a private Twitter account and have invited a few people to it, but I speak much more freely there, to friends or at least people I think will treat me reasonably if I engage with them, and I will continue to keep it very limited (though the fact that I haven’t invited you yet doesn’t mean you wouldn’t be welcome, so please do not take it personally – I feel sort of intrusive asking people to follow a second account). From time to time I will use my main Twitter account to post things of interest to the Haskell and math communities, primarily, but I will check notifications and DMs and the like infrequently. Twitter isn’t worth my health; it’s not even really worth sacrificing the time with my kids. As for traveling to speak at Haskell conferences and all that, I don’t know yet. I just don’t know. We’ll see how recovery goes.

At any rate, I am gradually recovering and I continue to write about Haskell and sometimes mathy things and linguistic things and I should start blogging more, while probably keeping my interactions on social media somewhat limited, at least for a while longer.

With a bit of luck and rest, I hope to be at Zurihac next year; Jasper and the crew are doing wonderful things for Haskell, in my view, and I have always loved being there.

Incidentally, that last episode started because a thread from some Australian Haskellers – all men – kept popping up in my timeline, discussing why there aren’t more women in functional programming and what steps might be taken to make it more welcoming for women who might want to be included. Chris Martin has been one of the few people who has forced himself to look at everything I see on Twitter or wherever as part of my day-to-day interactions with the Haskell community. He has forced himself to not look away, and furthermore to hear me out about how it isn’t

*one thing* or *one person*; it’s a whole systemic problem for me, and I’m not alone. Despite the fact that some women – *obviously including me* for the past several years – are in the community, many others who would like to be *are not*.↩︎

I gave the opening keynote at C∘mp∘se :: Melbourne, and the organizers were kind enough to give me a lot of freedom in my choice of topic. I often find myself very frustrated by the way programmers talk about metaphor, so I chose a topic that would let me give an entirely different view of metaphors – metaphors the way linguists and cognitive scientists talk about them, metaphors as the crucial backbone of everyday thought and of abstractions in mathematics and elsewhere.

I drew from a lot of sources in preparing this talk; citations are given here where appropriate and a complete reference list is given at the end if you would like to read more. Given the breadth of the material covered in a 45-minute talk, I had to breeze through some of it; perhaps next time I return to Australia I can explore some topics in more depth, such as how our search for *closure* (in mathematics) has been an important motivation for developing more and more kinds of numbers. At any rate, I hope for now, this will give an overview, enough that the conversation about metaphors might shift just a bit and we might find new ways to empathize with learners who are struggling to *see* the abstractions.

You can watch the talk here or keep reading below. Or both!

Welcome!

The first two images come from the book *Which One Doesn’t Belong?* by Christopher Danielson. The beauty of this book is that on each page, for each collection of images, you can argue that *any* of the four is the one that doesn’t belong. This book is not about getting the right answer; it’s about making good mathematical argumentation and talking about properties of the figures. (It’s a wonderful book and I cannot recommend it highly enough.)

Which one doesn’t belong here? The top right is the only quadrilateral. The lower right is the only one with curved lines. The lower left is the only one that is not a closed figure. The top left is the only one that is an actual triangle. You might be able to come up with more reasons why any of the four doesn’t belong.

We can see in three of the four figures some notion of “triangularity”; usually even children who have yet to learn a formal definition of a triangle can see it. *Triangularity* is a notion we came up with and then formalized by looking around at the irregular and somewhat noisy input from the real world and deciding on ways to group things according to their properties.

Kids will often ask, and so should you: What properties matter when we talk about these things?

In some contexts, the properties that matter are very obvious. In the image above, if we are in a context where what we care about is color or size, for example, then we can easily pick out which one doesn’t fit the others along those criteria. But the answer isn’t always so obvious.

Now which one doesn’t belong? Like the images from the book, there isn’t a single right answer. You can make an argument that each one in turn doesn’t belong: the top right because it’s the only product, the bottom right because it doesn’t involve numbers at all or is the only one dealing with a two-valued set, the bottom left because it’s the only one that involves a collection of things or a type constructor (in Haskell terms), the top left because it’s the only one that doesn’t have a sensible notion of an identity value or the one that is most purely identifiable as an operation of “choosing”.

However, the context matters. We can rewrite all four of those using the same operator, called `mappend` but here written in its infix notation `<>`, in Haskell:

```
λ> Product 4 <> Product 3
Product {getProduct = 12}
λ> Min 5 <> Min 9
Min {getMin = 5}
λ> [1, 2, 3] <> [4, 5, 6]
[1,2,3,4,5,6]
λ> Any True <> Any False
Any {getAny = True}
```

The `mappend` operator is the `Semigroup` operator. So, we can see each of these operations as being different from all the rest, as we saw in the previous slide, but we can also group them all under the notion of *semigroup*. A semigroup is a set along with an associative binary operation (such as `min`, `||`, `++`, or `*`); you can also think of it as a monoid minus the identity element. We use newtype wrappers (`Min`, `Product`, and `Any`) to indicate which semigroup we mean, because many types form semigroups under multiple operations; we use one operator for all of these in Haskell, but indicate which operation is relevant by renaming the underlying type (e.g., `Integer` or `Bool`).
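To see how a newtype picks out the operation, here is a sketch of a homemade wrapper (the name `Smallest` is invented here to avoid clashing with the library’s `Min`):

```
-- A newtype that selects the "take the smaller one" semigroup
-- for any type with an ordering:
newtype Smallest a = Smallest { getSmallest :: a }
  deriving (Eq, Show)

instance Ord a => Semigroup (Smallest a) where
  Smallest x <> Smallest y = Smallest (min x y)
```

Now `Smallest 5 <> Smallest 9` evaluates to `Smallest {getSmallest = 5}`, just like the library’s `Min` – the wrapper, not the underlying number type, determines which operation `<>` means.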

The point is, depending on how we look at these equations and what properties matter to us at a given time, we can see them each as being different or see them as all the same. They are different, and yet they are also the same at some level of abstraction. Abstraction looks at the ways things are the same and ignores ways in which they are different. Abstraction allows us to formalize things and make sure they are all very proper and law-abiding, and it also sometimes provides us new layers of what we can consider “concrete” in order to abstract and generalize further, as we’ll see.

This process relies on our ability to analogize and, more importantly, on metaphor. Analogy and metaphor are not quite the same thing, but both come under suspicion from programmers who like to believe themselves rational folks who value clear, precise statements of what a thing *is*, not what it’s *like*.

So you see a lot of statements like this one in this business. This was taken from a paper about designing interfaces, but I don’t want to particularly call out or shame this author, because this is an extremely common view of metaphor. But I’m here to tell you that without metaphors, mathematics and computers simply wouldn’t exist.

She was the single artificer of the world

In which she sang. And when she sang, the sea,

Whatever self it had, became the self

That was her song, for she was the maker.

– Wallace Stevens, The Idea of Order at Key West

“The Idea of Order at Key West” has always been my favorite poem, and the poet here hits the nail on the head. The ocean, and indeed the world, makes noise; we make order – we humans make music from the noise of the physical world.

It’s become clear in the past few decades that we come into the world with some innate abilities for dealing with the raw inputs we begin responding to even before birth. We share some of these with animals, and it’s clear that they are evolved aspects of our embodied minds. The case for an innate grammar of one kind or another is complex and really out of the scope of this talk; I refer you to Ray Jackendoff’s *Patterns in the Mind* for a readable, mostly nontechnical overview of the arguments and evidence. But if you’ve ever tried to listen to the rapid natural speech of native speakers in a language you do not speak at all, you have some sense of how hard it is even to discern word boundaries, yet babies couldn’t learn language if they were unable to do this.

Further research indicates that we have some innate arithmetic ability. For example, we (and some animals) seem able to accurately gauge how many objects are in a collection without counting them; when we’re infants we can perhaps do this for collections up to about three objects, but this increases as we get older, up to about seven objects for most people. This ability is known as *subitizing*. Babies (and, again, some animals) also seem to understand addition and subtraction up to about three; for details about the relevant research, I’ll refer you to George Lakoff and Rafael Nuñez’s *Where Mathematics Comes From*.

And finally there is significant evidence that we have innate abilities to analogize and thence make metaphors. Here, I’ll refer you to Douglas Hofstadter’s book *Surfaces and Essences* as well as nearly any book with George Lakoff as one of the authors.

Let’s cover a couple of these in more depth.

If I ask you how many red hearts are here, you do not have to count them. Very young children might, but adults can easily subitize this, especially when they are put in a familiar pattern such as this. Greg Tang writes math books for children specifically encouraging this subitizing ability; the abacus is also designed around this principle, as are dice, the patterns on playing cards and dominoes, and the ways we group the zeroes by threes in large numbers.

Analogy is what allows categorization to happen.

Analogy is the perception of common essence between two things.

– Douglas Hofstadter, “Analogy as the Core of Cognition”

I mentioned that there is some difference between analogy and metaphor. Analogy is the similarity between two things. Metaphor is the linguistic expression of this similarity but, more fundamentally, metaphor is when we structure our understanding of one thing – usually something abstract or at least not something we can directly experience through the senses – in terms of another.

Analogy is what lets us see this friendly owl and this vicious killer and pay attention to what they have in common: they are winged, they have feathers, they lay eggs. In this case, one of these can’t even fly, although most winged and feathered creatures can!

We then organize all the data the world provides about winged, feathered, egg-laying, usually flying creatures into a group and call it *BIRD*. Bird is now a category of thing that exists in your mind and in your mental lexicon of your language.

Some birds are more prototypical birds, some are less. Generally the more “bird-like” features the bird has, the more prototypical it is and the more likely you are to think of it on one of those tests they give you where they ask you to name the first instance of a category that comes to mind. Chickens are less prototypical than robins, penguins are even less prototypical, and the Australian velociraptor from the earlier slide even less so (for most people).

Analogy and being able to find the *essence* of things and organize them accordingly is, indeed, the core of our cognition.

The essence of metaphor is understanding and experiencing one kind of thing in terms of another.

– George Lakoff and Mark Johnson, Metaphors We Live By

The word “metaphor” is often used to refer to the mere linguistic expression of analogy; however, over the past few decades, linguists and cognitive scientists have come to realize metaphor is much more.

*Conceptual metaphor* is how we structure our understanding of nearly everything we can’t directly experience. For example, we structure our understanding of time in terms of spatial relationships. We think of times as being “ahead” or “behind” us, as if we were standing on a physical timeline. One interesting thing here is that, while structuring understanding of time in terms of space is universal, the orientation is *not* universal: in some cultures, the future is “ahead” and in some it is “behind”. An important thing to understand is that the inferences about what it means for something to be “ahead” of you versus “behind” you hold for those cultures’ interpretations of time, mapped directly from the spatial inferences.

We’re going to get a little hand-wavy here, mostly to stay out of the weeds of technical detail and the various areas that are still poorly understood and up for dispute, but we can think of conceptual metaphor in the brain as something like this. We have a group of neurons that are interconnected and correspond to a concrete concept such as *UP*. Up-ness is a well understood, universal, concrete notion that we understand by being bodies in the world. So we have this little network of neurons related to that concept.

On the right we have the target frame, the thing we’re trying to understand, to grapple with, to talk about. Over time as we experience some target concept, such as “more” or “happy”, and analogize aspects of the source concept *UP* to aspects of the target, we form connections between “happy” and “up (physical)”. The little neural network for “happy” has most of the same structure as the network for “up” (although over time it might gain mappings from some other source, some other concept that we use to partially structure the concept of “happy”). And so the concept of *HAPPY* becomes irrevocably structured in terms of *UP*. This does find expression in our language, and it’s important to note that once we have this metaphor, poets might get creative about ways to express it, but the metaphor *never* breaks – *HAPPY* is never *down*.

This is how George Lakoff defines conceptual metaphor. By “grounded” we mean there is a source frame, generally something more concrete or better understood. By “inference preserving” we mean that the inferences we can make about the source concept hold for the target concept. So when we structure some concept in terms of *UP*-*DOWN* the continuum of relative up-ness and down-ness will hold for that concept as it does for the physical relationship.

But the spatial continuum represented by *UP*-*DOWN* is *everywhere*. It’s one of the most fundamental concepts we have, and so we structure *so many* things in these terms. Quantity is slightly less concrete than spatial relationships; consciousness is a lot less concrete. But we can imagine how each of those mappings might have come about: as the quantity of something in a container becomes “more” it also “rises”; unconscious animals are not typically upright.

Things like “rational” and “virtuous” mapping to *UP* may result from an extension process: conscious is up, humans are *more conscious* than other animals therefore the *more human* attributes, such as reason and virtue, are *more up*. To a certain extent, we’re telling “just so” stories here about how these things come about, but we are certain that the mappings exist, however they came to be. They show up consistently in our language usage, are extremely common cross-culturally, and are further revealed as physically *real* in our brains by the various methods of the cognitive scientist.

Importantly, these relationships also preserve the inferences of their mappings. Your mood can be raised and lifted; even if you are already happy, your spirits can be further lifted and you may become joyful or even *elated* (whose root means “raised”). And if you get *depressed* enough, you may reach a *nadir*.

One thing I like to do sometimes is think about how we can say things like “more concrete” and think of the state of “being concrete” as increasing. Even though “concrete” is *down* relative to “abstract”, we can prioritize the up-ness of the *quantity* relationship if we were going to draw a visualization of “more vs. less concrete”. Think about it. Seriously, stuff like this has kept me awake nights.

OK but what does this have to do with mathematics?!?! I’m referencing mathematics in my title and it’s hard to see how we get from birds and happiness to the *serious stuff* of mathematics, right? Let’s go.

Mathematics starts from a variety of human activities, disentangles from them a number of notions which are generic and not arbitrary, then formalizes these notions and their manifold interrelations.

– Saunders Mac Lane, “Mathematical Models”

We’ll take our starting cue from a man who surely understood abstraction and precision, Saunders Mac Lane, one of the inventors of category theory. He found himself frustrated at the poor state of philosophy of mathematics and tried to explain why in this paper, eventually expanding it into a book called *Mathematics Form and Function*. What he saw was that none of the dominant schools of thought adequately explained mathematics. Mathematics starts off in concrete activities, aided by our innate abilities (see above), “disentangles *from them* a number of notions which are generic and not arbitrary”, and then formalizes and makes precise those notions (and their “manifold interrelations”).

This *disentangling* is the process of settling on things like “triangularity”, “semigroup”, “bird” by picking out properties that matter. We, even advanced mathematicians, often discover which properties matter by a process of experimentation, trial and error. We think, well, what if this property mattered? And we see where it leads us. If it doesn’t lead to anything we can usefully generalize – whether to understand the world better or to find new planes of abstraction and new relationships – we abandon it. You can see this process happen naturally in children if you let them go through it, and mathematicians like Mac Lane, William Thurston, and Eugenia Cheng sometimes describe working through similar processes.

Now we’ll look in more detail at how this happens for math. Mathematical metaphors come in basically three types. *Grounding metaphors* are those that ground abstract concepts in some human activity. There are a handful of grounding metaphors for mathematics; Mac Lane gives more than Lakoff and Nuñez do, and we’ll stick with the latter’s characterization for our purposes. *Linking metaphors* link two areas of mathematics (or some other field, but our concern is maths) by analogy with each other.

The last kind, *extraneous metaphors*, covers things like “maybe monads are like burritos”; typically when we think of metaphor as a poetic device, it’s the extraneous metaphors we’re thinking of. Extraneous metaphors can sometimes be useful for explaining new concepts, but they are not integral to how the concepts came to be in the first place; the development of monads in category theory didn’t have anything to do with burritos (arguably it had something to do with containers, but not in the way programmers typically mean that).

The four grounding metaphors of arithmetic are:

**Object collections**: This is often one of the first ways we work with kids to teach them arithmetic. You have a group of three candies and a group of two candies (note that both of those collections can be subitized rather than counted) – now look, we have a group of five! You can count them! Every time it will be five, and you can subtract the one from the other and have the two collections you started with! This metaphor does not lend itself well to a notion of zero (a collection of zero is … not a collection, not at this concrete level) and certainly not to negative numbers. It also becomes difficult to use with large numbers, although some math manipulatives have bars of 10 and blocks of ten 10s for a “hundred block”, which helps with subitizing arithmetic.

**Object construction**: A next step is thinking of numbers as constructed objects rather than collections. We can think of a single object being built out of parts, like you might build a single figure out of Lego bricks. We can see the figure get bigger; we can see that a single object made of five objects consists of five single bricks, or of a group of two bricks plus three bricks. We can get to fractions from here, among other things (such as literal cake cutting).

Notice in both of these metaphors, numbers are related to multidimensional tactile objects.

**Measuring stick**: Measuring is also a natural human activity, whether we’re doing it with a body part such as a “foot” or with a stick, formalized or not. Now numbers can be seen as one-dimensional segments, or as points on a line. This notion gives us a very clear way to conceptualize zero, as the starting point of our measurement.

**Motion along a path**: We’ll be talking about this one in more detail, as it makes the *inference-preserving* nature of conceptual metaphor especially clear, in my opinion. We can conceive of numbers as a path that we can walk along. It has a natural zero. It includes a natural conception of negative numbers, at least when coupled with the concept of rotation in space, or of starting at zero and walking “the other way”. A path has a certain topology; it might even go on to the horizon, as far as the eye can see, helping us to visualize an idea of “infinity.”

With these two metaphors, especially measuring things, we start to get ideas about irrational numbers. If we already understand the Pythagorean theorem as a result of measuring triangles and formalizing those properties, then at some point we encounter a triangle whose hypotenuse is √2. And we have this metaphor that any line segment corresponds to a number, so √2 must be a number now.

The combination of these four metaphors gives us a lot to work with – numbers as multidimensional tactile objects, numbers as spatial relationships, and so on. However different the metaphors are, doing arithmetic physically in one of these source frames behaves the same as arithmetic in any of the others. If you put a collection of two objects together with a collection of three, you get five; if you take two steps along a path and then take three more, you have taken five steps. From here, we formalize and make precise the *laws of arithmetic*.

Let’s look at the source frame *motion along a path* in a little more detail. This is closely related to the measuring stick idea, but it includes the most natural source of a “zero” point of any of the metaphors – the start of the path. Moving along a path also includes the idea of having many points between any two points, as well as the idea that we might extend a path indefinitely. We might also return to the start and go the other direction, so there are points that are in some sense “opposite” the points on the original path – the negative numbers.

Furthermore, if you have traveled from the start point to some point A, then you have been at every point between the two points, and this inference is preserved when we map the idea to the number line.

Do these look familiar? These diagrams of function composition show motion along a path and rely on our intuitions about real paths (at least, “as the crow flies” or idealized paths) to demonstrate an important property of composition. If you have a path from A to B and a path from B to C, then you have a path from A to C, although you might not have a direct path (it might literally be a path that goes from A to B and then to C).

So, what are numbers? We just don’t know.

There isn’t a single answer; what a number is depends on what branch of mathematics we’re talking about and what we want to do with it. We have numbers that violate certain properties we expect numbers to have. How did we end up here, and how are they all numbers?

We ended up here in part because we have two basic “shapes” of numbers – as points or segments along some kind of line and as potentially three-dimensional collections or groups or … *sets* of matter. These concepts are qualitatively different; they are different *in kind*. And yet we can also identify and formalize ways in which they behave the same or ways in which they are *isomorphic* to one another even if they are not *the same*.

To ask how 3 and `3 :: Float` and π are all easily thought of as *numbers* is to ask the same question as how sparrows and penguins and emus are all *birds*. They don’t all have all the properties of birdness, but they have *enough* of them.

We extend all our metaphors. Once we have conceptualized something in terms of another thing, we can extend that conceptualization. So if we can lay out a horizontal number line, then, sure, why not? We can also have a vertical one that crosses it at the zero point. Now, among other things, we can conceptualize numbers as points in two-dimensional space.

Set-theoretic ways of constructing numbers can also be extended in various ways. Again, these are different *in kind* and useful for different *types* of mathematics.

We have structured the abstract concept *number* in terms of two different *types* of grounding concepts and thus created a very complex target concept.

As Aristotle said in the *Rhetoric*, “Ordinary words convey only what we know already; it is from *metaphor* that we can best get hold of something fresh.” I’m not certain we quite have a hold of what a number is yet, but we have done well without having a single absolute answer.

By repeated application of analogy and linking metaphor, we are perfectly able to conceive of all these things as *numbers*.

I want to say just a brief bit about *closure* here. Part of the reason why we continually link new concepts to the concept of a *number* is we have an intuition from the real world that arithmetic operations in general should be *closed* – that is, that they should return an entity of the same kind as their inputs. When we add two collections of objects, we get another collection; when we walk two sections of a path, we have a path. But as we expanded our conceptions of numbers to include irrational numbers and negative numbers, some operations were no longer closed. By considering whatever entity those operations did return as another “number” – no matter how irrational, no matter how much it didn’t *look* like other numbers – by expanding the set of numbers, we are able to preserve closure.

But incidentally, now that we have this concept of *number*, and all the ways we understand numbers are by analogy to *things in the world*, we decide – via metaphor again – that *NUMBERS ARE REAL THINGS.* The real things are real; quantities and points along a path are all things we understand intuitively from our experience in the world. But numbers, like language, are not things that exist independent of our minds.

This is one of the things Mac Lane argues against in his philosophy of mathematics, but many mathematicians believe in the Platonic existence of numbers that we discovered rather than invented. Well, as with natural language, it’s a bit of a false dichotomy: we discover properties of the real world and we invent formalizations and idealizations of them. In turn, we may discover new links between them and thus invent new branches of mathematics, new categories.

Next we’ll look more at the idea of *linking metaphors*. We’ll start first with one of the grounding metaphors of our concepts of sets.

Once we conceptualize the idea of container, we can visualize (mentally or on paper) situations that we are extremely unlikely to see in real life by analogy with ones that we do. In the container inside the pitcher, it’s very clear that the object inside the container is also inside the pitcher. And if we make that a two-dimensional drawing of the same idea, we transfer that intuition easily.

But the situation in the bottom diagram is probably not something we’ve experienced in the real world, and it doesn’t need to be as long as we can draw it. The drawing is something we can experience through our senses and linked back to the “real” containers via metaphor. Now we have a folk conception of set theory and Boolean logic.

We initially rely on our intuitions about containers to talk about sets and build up set theory, and then formalize certain properties and make them precise. Once we have, we notice that many set operations behave the same as arithmetic with integers. We can link them analogically.

George Boole came along and decided we could also conceptualize propositional logic in terms of arithmetic, from which we could link all three areas of mathematics.

In doing so, Boole paved the way for Claude Shannon to provide us a new concrete grounding for understanding logical operations: circuits.

Anyway, linking three already abstract concepts (more abstract than the original grounding metaphors of arithmetic, anyway) gave us a new abstraction.

The concept *monoid* arose from the pattern of similarities among arithmetic, Boolean arithmetic, and set operations. Set theory, arithmetic, and Boolean logic are not concrete enough to constitute sources of *grounding metaphors* in Lakoff and Nuñez’s terms, and yet they are less abstract than the concept *monoid*.

Creating that new word, that new concept, via metaphor, and then formalizing it allows us to structure entirely new experiences and concepts in terms of our understanding of what monoids are in general and how they should behave.
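The shared pattern can be sketched in a few lines of Haskell (the function names `sumAll`, `anyTrue`, and `unionAll` are mine, purely for illustration): each pairing is a set with an associative operation and an identity element, and the same fold expresses all three.

```haskell
import qualified Data.Set as Set

-- Three instances of one pattern:
--   integers with (+) and 0
--   Booleans with (||) and False
--   sets with union and the empty set
sumAll :: [Integer] -> Integer
sumAll = foldr (+) 0

anyTrue :: [Bool] -> Bool
anyTrue = foldr (||) False

unionAll :: [Set.Set Int] -> Set.Set Int
unionAll = foldr Set.union Set.empty
```

Naming that pattern *monoid* is what lets us write the fold once, for every instance of the pattern, instead of once per instance.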

Like all the Western world, I have lived in the shadow of Plato, but research increasingly suggests that this metaphorical process is the source of all mathematics – indeed, something like this process appears to be the engine of most human thought.

Does this mean mathematics isn’t *true* or *real*? Certainly not.

There is a notion developed independently (I believe) by several thinkers that we may call *conjectivity* or *intersubjectivity* to play off the historical dichotomy of objective-subjective. Subjective truth is wholly specific to the subject, the person thinking the thoughts or experiencing the stimulus. Objective truth is totally independent of the human mind and is true whether or not it is known or knowable by any human mind. Historically, the pendulum has swung between these two poles and there has been tension between them.

The notion of *conjectivity* (Deirdre McCloskey’s word) or *intersubjectivity* (Habermas’s word), or “institutional facts” as I believe John Searle calls them, says some knowledge is not quite subjective and not quite objective. There are some things that cannot exist independently of the human mind and yet we can still make statements about and judge these statements true or false independently of any particular subjective mind.

Human language is one such thing. Language does not exist outside of an embodied mind; it is not an objective fact about the world outside of the mind. Yet there are statements we can make about it whose truth value does not depend on the interpretation of any one particular mind. Money is another such thing; we invented money, it would not exist without us, and yet there are things we can say about it whose truth value is independent of any particular subject.

I believe mathematics is another such thing. It is clearly related to what we really experience as physical bodies in the world. As evolved, embodied minds experiencing certain aspects of the world in similar ways across all human cultures, there is something like objective reality to what we experience, and our brains seem to have evolved to pick out the same sorts of properties as important and make the same varieties of analogies. We make discoveries in this world and we invent formalizations and words for those properties, and those give us a springboard to discover new analogies. And so on.

A mathematics based on conceptual metaphor, like language, is a product of our minds, not of some reality external to us. Numbers and shapes and infinity are not literally existing objects in a realm of pure forms. Thus, even if that realm exists – and you may still believe in it, as one believes in a god – our mathematics is not part of it.

Finally, we’ll pay very brief attention to extraneous metaphors. You might be familiar with some.

If a monad is a burrito, is a sushi burrito a monad transformer?

Do you interact with a computer like you do your static desktop? People like Brenda Laurel, Seymour Papert, and Bret Victor would very much like to reimagine computer interfaces through different metaphors, but they will be metaphors nonetheless.

Still, these are extraneous to the development of mathematics and programming. Such metaphors may be useful in certain contexts, perhaps more or less depending on how many aspects of the source map to the target.

Teaching without these extraneous metaphors can depend, instead, on building up intuitions in the same ways that these were originally invented. I’ve quoted this in several talks, but one of Chris Martin’s math teachers, Jean Bellissard, said, “It’s a succession of trivialities. The problem with mathematics is the accumulation of trivialities.” Well, in this view, for teaching, it’s the accumulation of layers of metaphors, each of them pretty understandable by itself, but at any point if you fail to make one analogical leap, you might get lost forever.

Well, not forever – math will always welcome you back – but for a while.

So, in summary, mathematics is unreasonably effective in the natural sciences, because our brains are embodied in the natural world and we are unreasonably good at finding (and formalizing) the *properties that matter*.

- *Which One Doesn’t Belong* by Christopher Danielson
- *Patterns in the Mind* by Ray Jackendoff
- *Metaphors We Live By* by George Lakoff and Mark Johnson
- *Surfaces and Essences* by Douglas Hofstadter
- *Where Mathematics Comes From* by George Lakoff and Rafael E. Nuñez
- *Mathematical Models: A Sketch for the Philosophy of Mathematics* by Saunders Mac Lane (1981)
- *What we talk about when we talk about monads* by Tomas Petricek
- *Mindstorms* by Seymour Papert
- *Computers as Theatre* by Brenda Laurel
- *Magic Ink* by Bret Victor
- *Of Subjects and Object* by Adam Gurri

We begin in the Haskell community. We are a relatively small community, but we’re maybe a little chatty, maybe have a little too much free time waiting for something to compile. Hence we are engaged in near constant internecine war over build tools and the like.

Suddenly, something brings Haskell to the attention of non-Haskellers. It might be that an article about contravariance gets retweeted way outside of the audience it was intended for (people who are pretty comfortable reading Haskell type signatures) and suddenly a few non-Haskellers are *pissed*. More frequently it’s that some famous dude with a large platform decides he’s figured out what *monads really are* and shares his opinion with his thousands of followers.

Hardened by the ongoing Versioning Wars, Haskellers are heavily armored and have a bit of a siege mentality already. Now galvanized against a common enemy, Haskellers come together.

So a guy has decided monads are X, usually something pipe-like. A famous guy with a bunch of followers. He’s not particularly a Haskeller, and the first thing Haskellers wonder is why he cares what a monad is, since he doesn’t write in a language that comfortably supports that abstraction.

And I don’t know how they reacted the first time it ever happened. Maybe, as is written in the annals of the Great Lisp Wars, Haskellers inherited the brittle argumentative nature of their parenthesized forebears and immediately attacked. Perhaps we are all still paying for their sins.

But what happens now is some of us try to ignore the famous dude being wrong and go about our business, but several people – generally, the same crowd who are most engaged in the Stack and Versioning Wars – show up in this guy’s mentions to tell him how wrong he is.

For a long time this baffled me. Why do you care that someone is wrong on the internet? But now I know, because now I’ve seen the aftermath enough times.

There are a few things that happen after this:

The immediate effect is the hasty generalization from “this group of Haskellers is dogpiling me about my wrongness” to “all Haskellers are assholes.”

Later some less famous dude will say he’s thought about learning Haskell, and some Haskellers will show up to encourage him, but then he’ll claim that he’s put off learning it because of all the jargon. Why do Haskellers use the word “monad” anyway? THEY’RE JUST PIPES, he’ll insist, he knows this, he learned it from the famous guy.

Haskellers reply that the word “pipe” or “computation expression” or “marshmallow” or whatever is misleading, that monad is the mathematical term so, while we understand it can be intimidating at first, we feel it’s best to stick with the name for the sake of research. And pipes are a sort of OK, not entirely wrong, first way of thinking about monads, but not sufficient for understanding them well, and so using an unfamiliar word suggests that there is something to be learned, that these aren’t *just pipes*.

Sometimes one of the famous guys will come back and ask if we think we’re smarter than Oleg because smug references to Oleg constitute an automatic win in online arguments.

It doesn’t even really matter who Oleg is, don’t worry about it. Bringing up Oleg is, for the current purposes, like sounding a trumpet in battle.

(I am sorry, I hope the real Oleg doesn’t read this, I’m sure he’s a nice enough fellow.)

In the next round, we get third-tier dudes asking what the hell is wrong with Haskellers that they can’t explain monads since MONADS ARE JUST PIPES.

*snicker snort* *Haskell isn’t a practical language anyway, look at these dorks.*

Nevermind that plenty of people (including me) *have* explained monads, many times over – for non-Haskellers, for JavaScripters, for beginning Haskellers, with and without physical analogies to containers and burritos, with and without in-depth descriptions of the Monad Laws.

And then the famous dudes and the less famous dudes will decide that we are assholes for correcting them, assholes for using a word they don’t like when one of several misleading words is readily available, that Haskellers just like to be misunderstood because we prefer to lord it over people, using the very word as a gatekeeping tool.

And if you, the Haskeller, fight back against *any* of this, you are in the wrong. But you know how they say that it takes an order of magnitude more energy to fight bullshit than to produce it? Yeah, the famous guy with the pipes has already moved on with his life and you can’t ever refute the bullshit, because you are not famous.

You are not famous and people will believe the famous guy and his appeal to the Great Oleg and only think you’re an asshole for trying to refute it.

And then it will happen all over again in a few weeks. Fortunately, Haskellers stay fit for battle by debating the moral correctness of package revision policies.

Contents

- Intro to Haskell
- A typical specimen
- Validation time!
- In a bind
- Look at the types
- Why do Haskellers care about this?
- A Note on Terminology
- A Note on Learning Haskell
- Further reading:

This post is an experiment I decided to attempt after conversations with Ben Lesh and some other folks. I will assume as little knowledge of Haskell as I possibly can here. Later we’ll talk about some tools we have in Haskell to make the pattern more conceptually compact.

I hope to make this accessible to as many people as possible, and I’d love to hear from you if you think there are things I could add or clarify in order to do so.

If you can already read Haskell at least a little, go ahead and skip this section. I will annotate the code in the examples, but if you’ve never read Haskell at all, then this section may be helpful to you.

All data in Haskell is (statically) typed. Types may be concrete, such as `Integer` or `Bool`, but there are also *type constructors*^{1}. Type constructors must be applied to a type argument in order to become a concrete type and have concrete values – the same way a function would get applied to an argument and then evaluated. So, we have a type, called `Maybe`, that looks like this:
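This is the definition from the Haskell Prelude:

```haskell
data Maybe a = Nothing
             | Just a
```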

This datatype says that a value of type `Maybe a` is constructed by applying `Maybe` to another type; `a` is a variable, so it could be almost any other type that we apply it to. We could have a `Maybe Integer` or a `Maybe String`, for example. It also says that we have *either* a `Nothing` value, in the case where there was no `a` that we could construct a `Maybe a` value from, *or* (this is an exclusive disjunction, known as a *sum type*) a `Just a` value, where the `a` has to be the same type as the `a` of `Maybe a`.

If we’re constructing a `Maybe String` value, then we can either return a `Nothing` (where there is no `String`) – a kind of null or error value – or a `Just "string"`. We use this type very often in cases where there is a possibility of not having a value to return from some computation – a `String` might appear, on which we can perform some future computation, or it might not, in which case we have `Nothing`.

Next let’s look at `case` expressions, a common way of pattern matching on values to effect different outcomes based on the matched value. We’ll start with this one, which can remind me how bloody old I am:

```
function xs =
  case (xs == "Julie") of
    -- ^ equality function returns a Bool value
    True -> (xs ++ " is 43.")
    False -> "How old are you?"
```

When this function is applied to an argument that is equal to the `String` “Julie”, it will match on the `True` (because `==` reduces to a `Bool`) and concatenate “Julie” with " is 43." Given any other `String` argument, it will match on the `False`.

Note this doesn’t include any means of printing any of our strings to the screen; if you want to play with it in the REPL, you can, as GHCi always runs an implicit `print` on the result of each expression you enter.

A `case` expression in general looks like this:

```
function =
  case exp of
    value1 -> result1
    value2 -> result2
    -- ... (the pattern matches should
    -- be exhaustive)
```

The values are the patterns we’re matching on. They must all be of the same type – the same type as the result type of `exp`; since `Bool` is the result type of `==`, we had the values `True` and `False` in our previous example. They should cover all possible values of that type.

When such a function is called:

- `exp` is evaluated (ignoring Haskell’s actual evaluation strategy), which means it will be reduced to some value;
- the result value is matched against `value1`, `value2`, and so on down;
- the first value it matches is chosen and that branch is followed;
- the result of matching on that value is the result of the whole `case` expression.

A consequence of the focus on taxonomies in 17th and 18th century was the creation of museums, which present the studied objects neatly organized according to the taxonomy. A computer scientist of such alternative way of thinking might follow similar methods. Rather than finding mathematical abstractions and presenting abstract mathematical structures, she would build (online and interactive?) museums to present typical specimen as they appear in interesting situations in the real-world. – Tomas Petricek, Thinking the Unthinkable

OK, let’s say we need to validate some passwords. We’ll start by picking some criteria for our users’ passwords: we’ll first strip off any leading whitespace, we’ll only allow alphabetic characters (no special characters, numbers, or spaces) and we’ll have a maximum length of 15 characters because we want our customers to choose unsafe passwords.

We’ll write each of our functions discretely so we can consider each problem separately. First, let’s strip any leading whitespace off the input:

```
import Data.Char (isSpace)

stripSpacePwd :: String -> Maybe String
stripSpacePwd "" = Nothing
-- this first step gives us an "error"
-- if the input is an empty string
-- and also provides a base case for the
-- recursion in the case expression
stripSpacePwd (x:xs) =
  case (isSpace x) of
    True -> stripSpacePwd xs
    -- the recursive call strips off as many
    -- leading whitespace characters as there are
    False -> Just (x:xs)
```

This `(x:xs)` construction is how we deconstruct lists (Strings, in this case) to pattern match on them element by element; the `x` refers to the head of the list and the `xs` to the rest of it. We test each `x` in the string to see if it is whitespace; if there is no leading whitespace, we return the entire string (wrapped in this `Just` constructor) – the head, `x`, consed onto the rest of the list, `xs`. If there is whitespace, we call the function again on the tail of the list (`xs`), in case there is more than one leading whitespace character. If you give it a string of all whitespace, it’ll hit that base case and return `Nothing` – we have no password to validate. Otherwise it will stop when it reaches a character that isn’t whitespace, follow the `False` branch, and return the password.

Next let’s make sure we have only alphabetic characters. This one is less complex because we don’t have to (manually) recurse, but otherwise the pattern is the same:

```
import Data.Char (isAlpha)

checkAlpha :: String -> Maybe String
checkAlpha "" = Nothing
checkAlpha xs =
  case (all isAlpha xs) of
    False -> Nothing
    True -> Just xs
```

We’re again returning a `Maybe String` so that we have the possibility of returning `Nothing`. (In a “real” program, that could allow us to match on the `Nothing` to return error statements to the user, for example. We’ll see other ways of handling this in later posts.) We used `isAlpha`, which checks each character to see that it’s an alphabetic character, and `all`, which recursively checks each item in a list for us and returns `True` only when it’s `True` for all the elements.

Finally, we’ll add a length checker:

```
validateLength :: String -> Maybe String
validateLength s =
  case (length s > 15) of
    True -> Nothing
    False -> Just s
```

We had decided on a maximum length of 15 characters, for purely evil reasons no doubt, so it takes the input string and checks to see if its length is longer than 15; if it is, we get a `Nothing`, and if it’s not, we get our password.

Now what we need to do is compose these somehow so that all of them are applied to the same input string and a failure at any juncture gives us an overall failure.

We could write one long function that nests all the various case expressions that we’re using:

```
makePassword :: String -> Maybe String
makePassword xs =
  case stripSpacePwd xs of
    Nothing -> Nothing
    Just xs' ->
      case checkAlpha xs' of
        Nothing -> Nothing
        Just xs'' ->
          case validateLength xs'' of
            Nothing -> Nothing
            Just xs''' -> Just xs'''
```

This is valid Haskell, but these can get quite long and hard to read and think about, especially if we need to add more steps later (or remove some). And you sometimes have to rename arguments to avoid shadowing (that’s what `xs'` and `xs''` are: new names).

We might initially be tempted to try just composing them in some way, perhaps:

```
makePasswd :: String -> Maybe String
makePasswd xs = (validateLength . checkAlpha . stripSpacePwd) xs
-- or
makePasswd :: String -> Maybe String
makePasswd xs = validateLength (checkAlpha (stripSpacePwd xs))
```

Unfortunately, the compiler will reject both of those and chastise you with intimidating type errors!

The reason is that each of those functions returns a `Maybe String` – not a `String` – but they each only accept a `String` as their argument.

At the risk of appearing quite not smart, I’ll admit that when I was first learning Haskell, I used to sometimes write this all out on paper to trace the flow of the types through the nested or composed function applications.

What we need is something that will allow us to chain together functions that take a `String` and return a `Maybe String`.

Conveniently, Haskell has an operator that does this: `>>=`. It’s so important and beloved by Haskellers that it’s part of the Haskell logo. It’s called *bind*, and we can chain our validation functions together with it like this:

```
makePassword :: String -> Maybe String
makePassword xs = stripSpacePwd xs
  >>= checkAlpha
  >>= validateLength
```

The result of `stripSpacePwd` will affect the whole rest of the computation. If it’s a `Nothing`, nothing else will get evaluated. If it’s a `Just String`, then we will pass that `Just String` to the next function, even though it needs a `String` as its argument, like magic.

(It’s not magic, though.)
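To see the short-circuiting for yourself, here is the whole pipeline as one self-contained module (the validators repeated from earlier so it loads on its own), with the results noted in comments:

```haskell
import Data.Char (isAlpha, isSpace)

stripSpacePwd :: String -> Maybe String
stripSpacePwd "" = Nothing
stripSpacePwd (x:xs) =
  case (isSpace x) of
    True  -> stripSpacePwd xs
    False -> Just (x:xs)

checkAlpha :: String -> Maybe String
checkAlpha "" = Nothing
checkAlpha xs =
  case (all isAlpha xs) of
    False -> Nothing
    True  -> Just xs

validateLength :: String -> Maybe String
validateLength s =
  case (length s > 15) of
    True  -> Nothing
    False -> Just s

makePassword :: String -> Maybe String
makePassword xs = stripSpacePwd xs
  >>= checkAlpha
  >>= validateLength

-- makePassword "  julie"  evaluates to Just "julie"
-- makePassword "julie123" evaluates to Nothing (digits fail checkAlpha)
-- makePassword "   "      evaluates to Nothing (all-whitespace input)
```

A failure at any step produces `Nothing`, and every later step passes that `Nothing` along untouched.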

If you’ve never looked at Haskell before, this part might be somewhat opaque for you, but explaining all this in detail requires explaining almost all of Haskell. We’re only here for the gist, not the crunchy details.

Let’s look at how `>>=` works. The (not quite complete) type signature for this operator looks like this:
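Eliding the typeclass constraint for the moment:

```haskell
(>>=) :: m a -> (a -> m b) -> m b
```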

When the `m` type constructor that we’re talking about is `Maybe`, the type looks like this:

```
(>>=) @Maybe :: Maybe a -> (a -> Maybe b) -> Maybe b
-- you can do this in your REPL by turning on the
-- language extension TypeApplications
```

The complete type of `>>=` looks like this:

```
(>>=) :: Monad m => m a -> (a -> m b) -> m b
--       |________|
--        this part
-- tells us that whatever type `m` is,
-- it has to be a monad
```

*Ahhh, the M word.*

Our new friend `>>=` is the primary operation of the `Monad` typeclass, so very literally this constraint (`Monad m =>`) says that whatever type `m` is, it must be a type that is a monad: a type that has an implementation of this function `>>=` written for it.

A monad is a type constructor (a type like `Maybe` that can take a type argument, not a concrete type like `Bool`) together with a (valid, lawful) implementation of the `>>=` operation. So you’ll hear sentences like “`Maybe` is a monad,” meaning it’s a type that has such an implementation of `>>=`. And this is why you hear people talk about wrapping things “in a monad” or about containers and burritos and whatnot – we even have a function that does nothing but wrap a value up so it can be used in such a computation (it’s called `pure` now and lives in the `Applicative` typeclass instead of in `Monad`, but why is a long story).
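For concreteness, `Maybe`’s implementation of `>>=` is just a two-case pattern match. Written as a standalone function (named `bindMaybe` here so it doesn’t collide with the real instance), it’s equivalent to what the standard library does:

```haskell
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing   -- a Nothing short-circuits the chain
bindMaybe (Just x) f = f x       -- a Just feeds its value to the next function
```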

I’m not going to go too much into typeclasses here, how they work and how we leverage them to good effect in Haskell. So what else can we say about this? That by recognizing a common pattern and giving it a name, we can gain some intuition about other times we might see it and what to expect when we use it.

In particular, Haskellers like the opportunity to reason *algebraically* about things. Let’s talk about what that means for a moment.

An algebra is a set together with some operation(s) that can be defined over that set. In this case, `>>=` is the symbol for an operation that can be defined over many (not all) sets – think of types as sets.

So, a monad is an algebra, or algebraic structure, that has at least two components:

- a set, or type, such as `Maybe`;
- a bind operation defined over it.

It also has some laws, or else it wouldn’t be a proper algebra, and we talk a lot about the monad laws in Haskell, and we can (and should) property check our `>>=` implementations to make sure they behave lawfully. *BUT* the Haskell compiler doesn’t enforce laws, so a monad in Haskell is perhaps slightly less imposing than a monad in mathematics.

Reasoning algebraically means reasoning about code in terms of sets (types) and the operations we can define over those sets, without having to think too much about the details of each and every set that we could ever make. Can this type that we have here be used in sequential computations where the performance of the next computation depends in some way on the result of the one before it? Cool, we might have a monad then, and recognizing that might give us some extra power to reason about and understand and predict what our code will do.

Typeclasses remove yet another layer of detail to think about and let us generalize even more. How well that works is a matter of some debate, but in part we do it for the same reasons that mathematicians talk about groups and sets and very general things like that: abstracting away some details allows us to focus on and consider only the parts we care about at a certain time, and sometimes allows us to see connections we never noticed before.

It is, and understanding why can be helpful. But it’s not the right place to *start* understanding monads, unless you already understand some category theory, and while I admire people who do, they are distinctly not my intended audience for this post. I suspect they would ostracize me for being so hand-wavy about all this.

Someday, I’ll try to write a beginner-friendly post about what it means that it’s a monoid in the category of endofunctors. If you know what a monoid is, and know that “endofunctors” for Haskell purposes just means “functors,” and understand that by “functors” we mean type constructors (not `fmap` itself), then perhaps the *monoidness* of `>>=` (and also the `Applicative` operation `<*>`) will begin to become apparent. Perhaps Ken’s Twitter thread will help, too. If you don’t already understand those things, then it might not, and that’s OK; it takes time to build up and internalize all the concepts.

If you’re trying to learn Haskell and don’t already know category theory, it is perfectly fine to use `>>=` when you need it (or `do` syntax, which is syntactic sugar over this and looks more imperative^{2}) and not worry any more about it. Since all user input and every `main` action in Haskell is handled with monads, you sort of have to be able to use them without understanding them deeply for a while. If you have a series of computations that should be performed sequentially such that each new one depends on the successful result of the one before it, you may want `>>=` to chain them together (which requires them to be wrapped in a [monad] type constructor such as `Maybe`).
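As a sketch of that pattern (the validation functions here are hypothetical, not from the post): each step returns a `Maybe`, and `>>=` runs the next step only if the previous one succeeded.

```haskell
import Data.Char (isAlphaNum)

-- Hypothetical checks, each of which can fail:
checkLength :: String -> Maybe String
checkLength s = if length s <= 20 then Just s else Nothing

checkChars :: String -> Maybe String
checkChars s = if all isAlphaNum s then Just s else Nothing

-- (>>=) sequences them: if checkLength gives Nothing, checkChars never runs.
validate :: String -> Maybe String
validate s = checkLength s >>= checkChars
```

In GHCi, `validate "hunter2"` yields `Just "hunter2"`, while `validate "no way!"` yields `Nothing` because the second check fails.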

Since you generally want side effects to be sequenced, and we use the `IO` type to constrain side-effecting code, `IO` is a sort of canonical monad. `IO`, which is the obligatory type constructor of all `main` actions and all side-effecting code, is a monad, so a lot of the code you write that does anything involving `IO` is already wrapped in such a constructor and, hence, monadic, but understanding the actual implementation of this is beyond unnecessary for writing real working programs in Haskell.

`do` syntax? I don’t use `do` syntax when I’m trying to teach monads, even though it is meant to allow the writing of monadic code in a nice imperative style. For teaching purposes, I don’t like the fact that it hides the composition-like piping of arguments between functions. I once said it “hides the functors”; it makes it harder for me to follow the flow of types through the function applications, and so I don’t particularly like it when I’m teaching people about functors and monads. It’s cool to start using `do` to effect monadic operations without understanding how `>>=` works, though; we’ve all been there.
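For what it’s worth, here is a tiny illustration of the sugar, using `Maybe` rather than `IO` (the functions are my own, not from the post): the `do` version and the `>>=` version are the same program.

```haskell
-- Adds the contents of two Maybe values; fails if either is Nothing.
addDo :: Maybe Int -> Maybe Int -> Maybe Int
addDo mx my = do
  x <- mx
  y <- my
  return (x + y)

-- The same thing, desugared into explicit binds:
addBind :: Maybe Int -> Maybe Int -> Maybe Int
addBind mx my = mx >>= \x -> my >>= \y -> return (x + y)
```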

*Monad* can refer to a few things. One is a typeclass that (mostly) corresponds to an algebraic structure (a set plus some law-abiding operations defined for that set) of the same name; to form a complete algebra, in Haskell at least, though, you need three things:

- the `class` declaration, which defines the operation *with maximal generality*;
- a type that can implement that operation; and
- a typeclass `instance` that binds the type with the typeclass declaration and defines the operation(s) specifically for that type.

Usually, the phrase “X is a monad” tells you that X is a type constructor with an `instance` of `Monad`.
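As a sketch of those three pieces (using a made-up class name so it doesn’t collide with the real `Monad` from the Prelude):

```haskell
-- 1. the class declaration: the operation, stated with maximal generality
class Chain m where
  chain :: m a -> (a -> m b) -> m b

-- 2. a type that can implement the operation: Maybe, from the Prelude

-- 3. the instance that binds the type to the typeclass and defines the
--    operation specifically for Maybe
instance Chain Maybe where
  chain Nothing  _ = Nothing
  chain (Just x) f = f x
```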

This is why I don’t prefer saying that `Maybe` *is* an instance of `Monad` but, rather, that it *has* an instance, because an `instance` declaration is a specific piece of code that has to exist or else the type has no legitimate implementation of the function. If no `instance` exists, no function exists for that set, so we have an incomplete algebra.

Incidentally, Haskellers do this with the names of (some, but not all) other typeclasses, too, so the type `Maybe` *is* a monoid, a functor, a monad, and so on, because it is a type (set) that has [monoidal, functorial, monadic] operations defined over it.
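A few concrete examples of what that buys you; nothing here is exotic, these all come from the standard instances for `Maybe`:

```haskell
-- Maybe has functorial, monoidal, and monadic operations defined for it:
functorial :: Maybe Int
functorial = fmap (+ 1) (Just 2)         -- Just 3

monoidal :: Maybe String
monoidal = Just "ab" <> Just "cd"        -- Just "abcd" (contents must be a Semigroup)

monadic :: Maybe Int
monadic = Just 3 >>= \x -> Just (x * 2)  -- Just 6
```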

One of the reasons I hesitated so long to publish this post is that people who don’t have much interest in learning Haskell or who are just at the beginning of learning Haskell seem, contrary to the best advice on the internet, to always want to know straightaway what a monad is. It’s like monads have taken on some outsized mythical status. But the monad is really a sort of small thing. It’s a common enough programming task, chaining together sequences of functions that we want to behave in a predictable manner. Monad is a means of simplifying that (in some way; it doesn’t seem like a simplification when it’s new to you, but by giving us certain intuitions about how this pattern should behave – the infamous monad laws! – and being highly composable, they do remove some complexity, as the right abstraction should).

Everything we do in Haskell, even `IO`, can be done without monads, but not as easily or well. Monads let us do those things more easily, more consistently. I know when I was learning Haskell I had built it up in my mind that it would be this huge, difficult-to-understand thing, and it’s sort of anticlimactic when you find out what it really is: instead of nesting case expressions or something like that, we’ll just chain stuff together with an operator. Cool.
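To make the anticlimax concrete, here is a toy example (my own, not from the post): the nested-case version and the `>>=` version do exactly the same thing.

```haskell
-- halve fails on odd numbers
halve :: Int -> Maybe Int
halve n = if even n then Just (n `div` 2) else Nothing

-- with nested case expressions
halveTwiceNested :: Int -> Maybe Int
halveTwiceNested n =
  case halve n of
    Nothing -> Nothing
    Just m  -> halve m

-- the same chain, with (>>=)
halveTwice :: Int -> Maybe Int
halveTwice n = halve n >>= halve
```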

I wrote a course that starts in just about the same place as this post but continues refactoring the password validation code with different types and ends up demonstrating one of the differences between `Applicative` and `Monad`. It’s not free, but it is available now.

A Gentle Intro to Monads … Maybe? – code in this one is all JavaScript.

A Guide to FP Lingo for JavaScripters – what the heck, it’s like the JavaScripters outnumber us Haskellers. :)

Adjacency – There is an interesting point being made here about adjacency in Haskell vs adjacency in `do`-blocks, but he rather undersells the fact that monadic binding (and, thus, `do` syntax) isn’t just for `IO`.


I have been homeschooling my two kids for nearly all of their lives. It was an easy decision to make for my older son because he’d learned to read very early and was well ahead of what we could expect from the local public school. It was an easy decision to make for my younger son due to his extreme social anxiety when he was little (it’s gotten better, thanks for asking).

They both went to a local public preschool when they were 3 and 4 for a few hours a week, so they both have some experience with public school. My older son also went to public school for first grade. They have respectfully requested that they never be required to go back. My older son will be attending an Atlanta private academy made just for homeschoolers two days a week this semester and he’s excited about that, but it’s small and individualized and so it gives him just enough socializing without cutting too much into the time he wants to spend, oh, doing dangerous chemistry experiments, for example.

I do work basically full time now, from home, and we somehow keep making it work. I believe this is the result of me guiding them toward becoming independent learners early, so that as soon as both of them could read independently, I no longer had to hover over them “teaching” all day. It’s a good thing for all of us, as far as I can tell.

However, a lot of my philosophy of school is premised on my kids having a lot of open-ended discussions with me (mainly me). My younger kid isn’t very talkative, while my older one is extremely talkative, so that’s taken different shapes for different kids. The important goal is to keep them curious, engaged, reasoning, trying to articulate difficult things (including their feelings about things), to whatever extent is appropriate for their age and personality while always gently pushing them (very gently) to extend themselves. I often answer questions with questions about what they think and just guide them to use what they know to find an answer. It takes a lot of patience, time, and trust – your kids have to trust you not to get angry, shut them down, or laugh at their reasoning. Most kids have a ton of reasons not to trust their parents and, therefore, not to engage in these kinds of conversations with them. I started preparing for teaching them this way when they were babies, basically, developing this trust, particularly that they could trust me when they made mistakes or felt namelessly frustrated by something.

Anyway, so I get a lot of questions about how I did it, what books and resources I’ve used to guide me, what books and resources I’ve used with them. So I thought I’d just put that information here, for reference. I can update periodically as we discover new things or I remember something else we used that I’d forgotten. My priorities tend to be with making things as enjoyable (with some subjects, it’s really about pain minimization rather than actual enjoyment) as possible for the kids and guiding them toward becoming independent learners. Since I started working again a few years ago, I’ve also had to emphasize materials that require little in the way of advanced preparation from me, but the things we did more of when they were littler and I was working much less required huge inputs of prep time from me.

Many of the recommendations below, by the way, would make excellent donations to libraries, Toys for Tots, book drives, and the like, because education shouldn’t be unequally distributed.

Although I was a trained and experienced teacher before I ever had kids, none of that experience or training was in early childhood education, so I still had a lot to learn. Here are things I read that I feel I learned from and that influenced how I’ve approached teaching them:

Tending the Heart of Virtue by Vigen Guroian

The Well-Trained Mind by Susan Wise Bauer – Ms. Bauer produces a lot of the books we have used in homeschooling over the years. We have used her history books, although now we typically learn history from other books and only buy her supplemental activity books for history, as they have arts and crafts and cooking projects to help make history more fun and real to kids. There are other books that serve a similar purpose, such as the “Hands-On History” series, and we’ve used some of those, too.

The Discovery of the Child (and many other books) by Maria Montessori

Raising Cain: Protecting the Emotional Life of Boys by Dan Kindlon and Michael Thompson – I really felt like this book was important to understanding my sons. I do not know of a similar book about the emotional life of girls, but I hope there is one.

Project-Based Homeschooling by Lori Pickert, an ongoing source of inspiration.

The entire philosophies behind Natural Math. See especially Bright, Brave, Open Minds; Moebius Noodles; and Avoid Hard Work!. They also sometimes offer courses – sometimes they meet online, sometimes they are just a packet of guided materials for you and your kid to work through at your own pace – and I highly recommend these. I wrote about our experiences with them before.

the Great Books Academy. This is where I’ve gotten part of my reading lists for my kids, although they seem to be making the lists harder to access, so I may need to start keeping my own. I also have added, e.g., contemporary fiction that I know to be excellent and worth reading and more in line with my kids’ interests and/or to match up with something we’re learning in history at the time. But kids *can* enjoy classic literature, unabridged and unadulterated, if they learn to read well.

CS Unplugged. I want to teach my kids how to think about computation; that’s important. It’s not really important that they learn any particular programming language or set of contemporary computing tools. God willing, they will have access to tools that we haven’t even dreamed of yet. I also give my older son (the younger one is still a little too young for them) the BubbleSort zines and strongly recommend them.

What you read to them – indeed, what media you bring into the house, thus giving tacit endorsement to – matters so much. I aimed to find books and resources that would feed their curiosity and imagination in ways that would lead them to want to know more, learn more, think about things more.

We do a lot of crossover with math and art because it helps kids visualize what they’re thinking about mathematically – visualize and make real. There are maths that don’t translate easily or well to this, of course, but I’d encourage you to do it as often as you can.

Moebius Noodles by Marina Kopylova. Appropriate for toddlers (maybe even earlier), still fun for older kids. This has been a very important book for us. Anno’s Math Games is in a similar vein.

The whole *Sir Cumference and the Dragon of Pi* series by Cindy Neuschwander. This has some of the best and most understandable explanations of math concepts for very young children that I have seen, and they’re fun to read.

Which One Doesn’t Belong by Christopher Danielson. See also this site which builds on the idea. These lead to great discussions that lead your kids to verbalize their reasoning processes and are so good at teaching them that, sometimes, there isn’t *one* correct way to solve a problem. My kids have never been afraid of math, and I think things like this are a big part of why. Socks are like Pants, Cats are like Dogs by Malke Rosenfeld and Gordon Hamilton is along similar lines and also good.

Math + Art = Fun by Robin A. Ward. This is also good to use in conjunction with trips to art museums. Like, you might not see the exact art works that are in the book, but you can use the mindset to talk to your kids about art.

Greg Tang books and games. Kakooma is wonderful, especially.

Camp Logic by Mark Saul and Sian Zelbo. This book is fantastic at teaching kids about logic and reasoning skills. It is extremely important to help kids talk through reasoning processes, even if they got the wrong answer. It’s by finding where they went wrong in reasoning that they learn to get the right answer.

The ultimate boss of logic, Raymond Smullyan, wrote several books of puzzles that are quite appropriate to talk through with kids (probably not before they’re 8 or so, though). We started with Alice in Puzzle-land. We very often have my older son read the puzzle aloud while I’m driving and then the three of us talk it through. It’s a fantastic way to kill driving time.

Lauren Ipsum by Carlos Bueno. My kids loved this and it’s a very fun way to introduce computer science topics. We read it at bedtime (a chapter per night or something) and it kept the kids up late trying to figure it all out. Great fun; we’re planning to re-read it later this year.

Good read-aloud story books for early math:

One Grain of Rice by Demi. This book is gorgeous, as are all Demi’s books, and will impress your children with the awesome power of doubling.

Math Curse by Jon Scieszka. This book is just so fun to read and, while it’s ridiculous in places, it’s so fun to talk to your kids about some of the more absurd “math questions”. We have read it countless times and each time we have a different discussion.

How Much is a Million? by David M. Schwartz.

The Pythagorean Theorem for Babies by Fred Carlson. It’s a short, simple, visual proof of the Pythagorean theorem. And then you can buy your kids biographies of Pythagoras!

Stories to Solve: Folktales from Around the World by George Shannon.

We do a fair number of math craft projects (and sometimes math cooking projects, like Sierpinski Cookies). You can search around for ideas for something related to what your kids are learning. I will be honest here: I mostly shy away from sites oriented toward “fun” activities for public school teachers, because they have a different goal in teaching than I do.

If you really want to teach them a programming language and tool and *also* teach them math, I definitely recommend Chris Smith’s codeworld. It’s a subset of Haskell in an accessible playground, and he’s put a lot of work into developing it into a useful tool for teaching kids math through programming.

Nowadays my kids go through the Khan Academy’s math video courses for their grade levels and supplement a couple of times a week with practice on Front Row, too.

I did have my kids memorize multiplication tables, because reasoning has mental cost that is wasted if you have to reason through how to multiply 9 by 9 each time you do it, while as far as we can tell there aren’t any drawbacks in mental ability to memorizing. It can be a drag if that’s the only way you teach math, but it’s not for us because we only view memorizing as a way to do this particular thing more efficiently to free up mental space for reasoning about more interesting things. It’s very similar to the ways in which jargon is information compression.

While there are some very fine textbook-like books about science for little kids (and you should, if you are able, equip your budding scientists with science sets, bug study devices, and beginner microscopes), I want to focus here on books you might not think of as science education materials but really are.

11 Experiments that Failed by Jenny Offill. Seriously, this is how I taught my kids about how to run experiments and keep track of what your hypothesis, conditions, and conclusions were.

For various ages, all the books by Steve Jenkins. These books are incredible; we love them all. My kids’ favorites of these have varied at different ages, but definitely Bones; the Actual Size books; and Living Color have been long-time favorites. Also, Just a Second is such a cool exploration of units of time, and What Do You Do With a Tail Like That? is great fun for bedtime story time.

For various ages, but especially preschool to about 2nd grade, the books by Nicola Davies. My kids’ favorites were White Owl, Barn Owl; Bat Loves the Night; and Poop. Poop is a very wonderful book. Related: my kids love Who Pooped In the Park? by Gary Robson an absolutely obnoxious amount. Talking about poop is just really great with little kids because they are totally fascinated by it (ok, possibly not all children are, I don’t know).

For preschoolers especially, all the books by Dianna Hutts Aston. They are beautiful, calming, yet fascinating. My own kids’ favorites were An Egg is Quiet and A Rock is Lively. Because, hey, rocks and eggs are cool as heck.

Also check out the Magic School Bus subscription science kits and the Tinker Crates for older kids. I will never forget the time, thanks to the Magic School Bus kits, we had petri dishes of toe fungus growing all over the laundry room, and neither will your kids.

If you’re not familiar with Snap Circuits already, I can also highly recommend those. Younger kids might not read the booklets to learn about the circuits on their own, but if you do the activities with them, you can, and then you can talk about it with them while you do it. The same company also makes beginner soldering kits that are great fun; my older son built this radio and it’s very cool. The instructions are good, but adult supervision is needed when they have, y’know, a soldering iron.

Oh, I wasn’t aware of these until later than I wished I was, so let me add this: K’Nex makes some kits oriented towards teaching about simple machines and geometry and those have been *so great*. Search for K’Nex simple machines kits.

Finally, the science list would not be complete without mention of They Might Be Giants’ Here Comes Science CD/video. The videos are great, so if you can get those (I think they are mostly available on YouTube now), do it. Your kids will be singing about the elements and how their circulatory system works in no time at all, and the pair of songs about the sun is a great example of how we used to think one thing (the sun is a mass of incandescent gas) but now we’ve had to revise that a bit (the sun is a miasma of incandescent plasma). There are a couple of songs on it that we don’t care for a great deal, but that’s ok.

This is a very broad topic, and I don’t necessarily have great answers about some of them.

I mentioned history above, but we haven’t been very happy with any of our particular history courses. My kids have been very disgruntled that so many of them are just litanies of kings and wars and dates, and I agree that I also do not find that type of history study very enjoyable. We really enjoyed one of The Great Courses called The Medieval World narrated by Professor Dorsey Armstrong because it talks so much about the daily life, culture, music, and food of the medieval world rather than just whoever was the king or whatever at the time. But these are hit or miss (and, yeah, they’re meant for adults so there are sometimes references to adult topics, although in my experience, these are kept factual and not graphic – her talking about The Canterbury Tales was the most bawdy part of that course, I think).

The Zen Ties series and other books by Jon J. Muth. These books are gorgeous and absolutely fantastic for talking with kids about ethics and the meaning of life. I can’t recommend them highly enough.

Cookies by Amy Krouse Rosenthal. We have a lot of her books and have enjoyed them, but this one in particular is helpful for teaching kids meanings that they can relate to for some difficult words.

Pretend Soup and other cookbooks for kids by Mollie Katzen. These are geared for maximum independent cooking by preschoolers and up (I think Honest Pretzels is aimed at grade schoolers). The recipes require very little adult help because they are written twice: once in words and once entirely in illustrations so even kids with limited reading skills can figure them out. My kids have greatly enjoyed cooking from these books. My older son is now moving up to a Mark Bittman cookbook, and they are responsible for cooking dinner at least once a week.

Jon Scieszka’s Time Warp Trio series is good for readers who are ready for chapter books slightly more difficult than Magic Tree House. My sons found them very engaging and they often corresponded to history lessons we were having. The same author has a series called Guys Read that are collected short stories and nonfiction aimed at boys that my boys both love. Oh, and he also wrote one of the greatest, most ridiculous read-aloud books of all time, The Stinky Cheese Man and Other Fairly Stupid Tales. I still can’t get through the Stinky Cheese Man story without laughing out loud, despite many readings.

Philosophy for Kids by David A. White. I think this book has some flaws but so far it’s the best one I’ve found for talking with kids about philosophical questions. Not everyone will prefer teaching their kids that there is a great deal of debate about what constitutes “right” and “wrong” or “justice” but for those of us who do, this book is an approachable starting point.

How Artists See (series) by Colleen Carroll. These were our first books for studying art, I think.

Rosemary Sutcliff’s translation-slash-abridgements of The Iliad and The Odyssey are astonishingly good. They are abridged to be more appropriate for kids, but without being “dumbed down” or diminished.

The anthologies of mythology that Donna Jo Napoli and Christina Balit have been doing are also so good. We have several. My older son loves especially The Arabian Nights; my younger son’s favorite is Atlantis which, oh, I guess it’s only by Christina Balit.

I might make more reading lists of great books-ish fiction for various ages, as I have often felt like it was difficult to come up with ideas for what my kids should be reading next.

(for my kids at least)

Easy Grammar by Wanda C. Phillips. By far the least painful and most effective method we’ve found. Each lesson is short and to the point, and I think her method of starting from prepositions is brilliant. You can get right to the heart of a sentence so quickly by breaking it down as subject plus verb plus prepositional phrases.

Handwriting Without Tears. Again, it just fits the bill for short, focused lessons that are also systematic enough that kids learn with a minimum of struggle. Both of my boys hated handwriting practice but don’t mind these books. I make them learn cursive, too, because I think it’s good for them, and it allows them to read letters from their great-grandmother, which is not a bad thing at all.

First Language Lessons and Writing with Ease by Susan Wise Bauer again. My boys also hate writing exercises. In fact, my older son hated this more than anything when he was in public school, and I couldn’t understand why. He loves to read, and he always has stuff to say. So I backed up to what classical education teaches us about this: that many children need to practice the plain mechanics of putting words in sentences and paragraphs on paper, and also fill their heads with great words and ideas, before they may feel comfortable writing independently. Writing lessons went, for us, from the absolute worst part of the day to … tolerable. This is tolerable for the children, and they are making great progress, progress they made under no other system. Yes, we do copybook and dictation exercises. They are short. It’s like when you tell yourself you can go to the gym for just 15 minutes so that makes it seem tolerable and then you find yourself actually exercising for half an hour or whatever. These books are like that for my sons. I also extremely appreciate that, if you buy the workbooks, this requires very little in the way of daily preparation for me, the busy mom-teacher.

Oxford Book of American Children’s Poems, although other books of classic children’s poetry would work fine. We memorize poetry. What does this have to do with writing, you ask? And why even do it?

One of the hard things about learning to write well is learning what “well” means. Filling your head with words and sentence structures you love helps so much toward that end. I was really doubtful of how well my kids would take to this before we started, but they love it. They love being out in the forest for a hike and hearing birds chattering and then all three of us, as a family, start reciting No Shop Does the Bird Use by Elizabeth Coatsworth.

Teach your Child to Read in 100 Easy Lessons – To be quite honest, I did not believe this book and method would work, but it did, wonderfully. It made teaching the kids to read so painless. I really had no idea how you go about teaching kids to read and was so grateful to find this book.

Listen to audiobooks with your kids, too, and talk about them with them. It’s a great way to spend driving time. Most recently, my kids loved Tim Curry’s performance of A Christmas Carol and have asked that listening to it every year be incorporated into our Christmas traditions.

Once they can read, the important thing for a while is to make sure they’re practicing, which will be easiest if you keep books around that are a) level-appropriate (basically, their level plus just a smidgen more challenging – too challenging and they will get frustrated; too easy and they will not improve) and b) books they enjoy. My older son loved Magic Tree House books; my younger son dislikes them very strongly but loves loves loves all the Roald Dahl books. Both of these suit the purposes of giving a 7-8 year old child the right balance between just challenging enough and enjoyment. (In general, and I know this is unpopular, but I advise against letting your kids read trash. We all know there are trash books out there – formulaic plots, wooden characters, dialogue that, seriously, no one would ever say – and letting your kids read them exclusively is like only feeding them potato chips. Potato chips are fine sometimes; they’re not fine as your only diet.)

As their reading skills and confidence increase, you need to keep slightly increasing the difficulty. Never force your kid to finish a book they hate. Try to talk calmly with them about why they hate it; try to get them to articulate it. Let them know it’s ok to hate it, even if the reason they hate it is because it’s too hard for them right now. That is totally legit.

I do not take the unschooling approach of not telling my kids what to read. I want them to experience genres they wouldn’t have otherwise considered reading. My older son didn’t want to read Little House on the Prairie or Charlotte’s Web at first, because he thought they were “girl” books; he ended up loving both of them and went on to read the entire Little House series on his own. It’s legit to start reading a book and find it’s genuinely not for you, although, again, you should try to get your kids to articulate why (not in a confrontational spirit, but because it’s good for their verbal reasoning skills to do this). I hated The Wind in the Willows and also Little Women; my older son has read both and was relieved to find that I’d hated them as much as he did, but we were able to talk about why. However, he likes Dickens and I don’t, and, again, we talk about why.

A final thing: both of my kids expressed fear as they grew increasingly able to read on their own that I would stop reading to them at night and so we would lose that special time and shared conversation. I suspect a lot of kids actually have this fear and won’t express it or perhaps don’t even realize it (it took a lot of gentle conversation before my kids told me). So, they are 8 and 12 now and we still read together every night. We have read the entire Bunnicula series, the entire Series of Unfortunate Events series, the entire His Dark Materials series. I love it; they love it. Consider relieving your kids’ anxiety by showing that you enjoy sharing this with them (your kids may, of course, be different than mine, but don’t assume they don’t have this anxiety just because you didn’t).

From the youngest ages, I have taken my kids to museums and aquariums and national parks just as often as we can. They often have free days for families or whatever (we used to be extremely poor, well below poverty line, so I’m very aware these can be hard to afford), so you can always try to schedule accordingly.

- At National Parks and Monuments, ask for the Junior Ranger packets even if your kids are too young to fill them out by themselves. Use it to talk to them about things you’re seeing, about the history and geography and wildlife of the place.
- Art museums and many history museums also have packets (sometimes things you can take with you, sometimes things you have to return) to guide kids through the exhibits. The Art Museum of Eastern Idaho used to have these great ones that even asked the kids to try to draw copies of some of the art work. They also had a large play area for kids full of exploratory toys, many of which we couldn’t even have afforded or made room for in our house. My kids loved it there. The Boise Art Museum has very thorough packets for each exhibit that ask a lot of great open-ended questions that encourage discussion and real engagement with the artwork. We’re finding the Atlanta area museums and attractions also offer a great number of such services, and we’re delighted by it. One reason we moved here is how great a place it is for homeschoolers.
- Get a nature journal of some kind. When your kids are really little, you can write down their observations for them; as they get older, encourage them to keep their own, even if much of it is drawings. Encourage them to notice the seasonal changes, what wildlife the park attracts (even urban parks often have some). Get them a pack of travel colored pencils or whatever drawing tool they like and take it in a backpack with some binoculars each time you have time to take them to a park – even if you only spend 15 minutes or so a week, or even every couple of weeks, doing this. It has led to some of my children’s very happiest memories, and some of my very happiest memories of them.
- If you can, even in a windowbox or a single planter on the patio, take up gardening with your kids. Even better, take up foraging (this is harder to do than gardening, though, but I’ve done it even in somewhat urban areas) which will encourage them (and you) to be very aware of their surroundings and to learn more about the plants they see everyday. Gardening can be high or low commitment, but it’s so good for kids to learn where food comes from because it’s so easy for them to engage with food. And then it can lead to conversations about food chains, ecosystems, worms (big fun) and compost, so many things.

We also play a lot of games as a family. There are a ton of great ones available for math skills, and the Cranium types of games for various skill building. My kids really loved Hi Ho Cherry-O for early addition skills, and now we play Prime Climb for a little more advanced arithmetic. When they were little, we played a lot of reptile and bug bingo and (somewhat tediously to the adult) read the information on the backs of each card as we played.

I don’t know about game recommendations, though. I’m not sure it matters so much what game you’re playing as that you’re playing with your kids. My older son loves chess and cribbage and nearly all other card games. My younger son is much more picky and used to refuse to play games at all, so it took a while to find ones he’d play with us. He really loves this Case of the Missing Mummy game. They have a knotting game they both enjoy, except the younger one is often upset that his older brother can beat him pretty handily at it.

---

However, I also sometimes write short posts without many headings, so I wanted to automatically generate tables of contents only on posts where I needed them and not on all posts by default. It was relatively easy, with some googling for help, of course, to figure out how to enable the TOC generation, but it took me some time to figure out how to enable it *only* on certain posts. I thought I should write it down for future me or for anyone else who might want to do the same thing.

Note: I’m on hakyll-4.10 and stack resolver lts-10.3. I am fairly certain this would have to be written slightly differently for older versions of hakyll. UPDATE: Have updated to hakyll-4.12 and lts-12.13 without needing to change this.

I knew from reading the Hakyll docs that what I wanted was a helper function like this to turn on some pandoc options:

```
withToc :: WriterOptions
withToc = defaultHakyllWriterOptions
  { writerTableOfContents = True
  , writerTOCDepth = 2
  , writerTemplate = Just "Contents\n$toc$\n$body$"
  }
```

That enables the TOC generation all right, but it isn’t conditional on having, oh, a certain length or certain types of headings that would generate the TOC, so on every post, even if there were no headings, that *Contents* heading was showing up. So the trouble was figuring out how to make it conditional.

In the general post html template, there’s a conditional that handles the author byline: if an author is listed in the post metadata, then it will put a byline on the rendered post; if that field is missing from the metadata, it does nothing. My first efforts to make the appearance of the TOC conditional were, therefore, centered around that: I added a `withtoc` field to my metadata (that’s also where the title and tags go, in their own fields).
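For context, the byline conditional in a default Hakyll `templates/post.html` looks roughly like this (a sketch from memory; your template may differ):

```
Posted on $date$
$if(author)$
    by $author$
$endif$
```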

After playing around with it directly in the html template, I figured out what the problem was: I hadn’t told the post compiler to look for that field in the metadata and know what to do with it.

```
postCompiler :: Compiler (Item String)
postCompiler = do
  tags <- buildTags postsGlob (fromCapture "tags/*.html")
  ident <- getUnderlying                        -- these are the five lines
  toc <- getMetadataField ident "withtoc"       -- that I added to this
  let writerSettings = case toc of              -- function today
        Just _  -> withToc                      -- in order to make my TOC
        Nothing -> defaultHakyllWriterOptions   -- conditional
  pandocCompilerWith defaultHakyllReaderOptions writerSettings
    >>= saveSnapshot "content"
    >>= loadAndApplyTemplate "templates/post.html" (postCtxWithTags tags)
    >>= loadAndApplyTemplate "templates/default.html" (postCtxWithTags tags)
    >>= relativizeUrls
```

(Many Hakyll configurations, including the default initial configuration, I believe, will have this as part of the larger `main` rather than split off into its own function. I have started decomposing that `main` block in my own site code because I find it so much easier to think about the parts separately and then combine them at the end, but ymmv. If you have a more standard Hakyll `site.hs`, then you’d need to add this to the post compiler in your `main`, wherever you `match` on the `posts` or `postsGlob` or something like that and specify the compiler instructions.)

When I had added the tags to my blog posts, I had to modify this `postCompiler` function, as you can see in the first line after the `do`, so it would know what to do with the data in the `tags` field. I did basically the same thing to make a `writerSettings` that can be conditional on the appearance of the `withtoc` field: when that field is present now, it will compile the post with my special `withToc` writer options; when that field isn’t present, it will just use the defaults. I suspect there are other ways to accomplish this same thing, but this all works and so we’re calling it good.

The final thing I changed was adding html directly into my Haskell file to tell it to add a header when it does generate a TOC and allow me to style it. Not everyone has a header on their TOCs (the Hakyll tutorials, for example, are bulleted but don’t have a header). I also wanted to add some `<div>`s so I could style it. Anyway, so I had to change the last line of my `withToc` function as below:

```
withToc :: WriterOptions
withToc = defaultHakyllWriterOptions
  { writerTableOfContents = True
  , writerTOCDepth = 2
  , writerTemplate = Just "\n<div class=\"toc\"><div class=\"header\">Contents</div>\n$toc$\n</div>\n$body$"
  }
```

That gave me the heading “Contents” inside some `<div>`s with classes so that I could spend the rest of my day messing with CSS.

And now if you look at posts that are long enough to have headings in them, I have a lovely table of contents up at the top (and, thanks to Chris Martin, it should even be mobile-responsive).

---

There were a lot of things that bothered me during the writing of the Haskell book, itches I didn’t get to scratch. I’m a person with a need to *understand* things, so I kept reading and pursuing those curiosities down a lot of winding garden paths until I felt I had reached better understanding.

I came around to the belief that a lot of the confusion I was having was related to just a couple of issues, and that most of those come about from the understandable urge to make Haskell seem *normal* to people who know other programming languages. We start doing things with strings and lists right off the bat (despite the fact that many of the `Prelude` functions for lists are partial functions and thus *unsafe*, which seems contradictory to our desire to get people to use Haskell because it’s safer and more correct). We have people write Fibonacci and factorial functions to understand recursion; we then get into `map` and folds and other standard library functions that have recursion built in.

From that foundation, we then go on to teach `Functor` by saying `fmap` is a generalization of `map` – which seems (to me at least) to imply a recursive nature to `fmap` that isn’t there, that makes people think about containers (like lists) when they think and talk about functors.
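To make that concrete, here’s a tiny sketch (the `halve` function is just an invented example) of `fmap` on `Maybe`, where there is no recursion and nothing container-like to traverse:

```haskell
-- fmap on Maybe: no recursion, no collection of elements to walk through;
-- a Maybe holds at most one value.
halve :: Int -> Int
halve n = n `div` 2

main :: IO ()
main = do
  print (fmap halve (Just 84))  -- Just 42
  print (fmap halve Nothing)    -- Nothing
```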

All of this is actually a lot of novelty – potential infinities, nonstrictness and bottom, typeclasses, algebraic operations with odd names like functors, and … why do we even want to generalize this thing, what does that give us? And we tend to present all this *together*, as if typeclasses were just a regular way of doing things and didn’t need their own justification.

And I think in that urge to make Haskell not seem like a radical novelty (although there are languages more radical than Haskell), we try to pretend that thinking about abstractions and infinities is the normal way people think.

So, what I wanted to do is separate some novelties from other novelties and take them one by one. You can learn what a functor is without learning about `map` or about typeclasses. Then later you can combine an understanding of typeclasses and an understanding of functors and see what a `Functor` typeclass constraint, for example, on a function buys us.

One of the things we like about Haskell is the ability to destructure a problem into relatively small portions that we can reason about individually and then compose them predictably into a larger program. Yet we (and I do mean to include myself) don’t usually teach Haskell that way; we teach by *refinement* rather than *construction*.

Anyway, so I had an idea for how to do this as a book, or perhaps a series of short books, that would motivate and explain the novelties of Haskell somewhat independently from other novelties. I do not know if this is a good idea or if it will be broadly appealing to other people, but I do know I’ve learned a ton just thinking about it and writing an outline of how to do it.

And I did manage to sell Chris Martin on the idea and now we’re starting to structure this into a series of video courses (which may, if it goes well, turn into the book(s) I originally envisioned).

---

Someone left some pickled onions in one of the boxes. Yeah, like the kind of thing you put in martinis; they’re not really, in some senses, *food*. A friend of mine spotted them and made a comment about it on his show on college radio and there was a bit of a brouhaha about it.

The thing is, food banks experience this all the time: people donating “food” no one wants to eat, or boxes and boxes of canned corn and nothing else. Many charities experience similar: there’s a hurricane and everyone wants to send in their used socks and moth-eaten baby clothes regardless of what the people in need *need*.

Experienced programmers frequently urge other programmers, especially beginners and junior devs, to ask more questions. Typically, they attribute unwillingness to ask questions to fear – fear of looking stupid in front of someone more experienced, most commonly.

I am on record as disliking this kind of discourse. It’s a self-aggrandizing narrative to assume that someone who doesn’t do what you do *must be fearful* (because, look, it means you’re the brave one). But, in fairness, there are different kinds of fears.

I have been yelled at (literally, in some cases) by programmers for not asking good enough questions, for not framing the question correctly, for not providing enough code for the question to be answerable, for not knowing the right terminology so that my question could be precise, and so on. That’s why I wrote this, essentially saying, “ask, but be careful how.”

But that isn’t my real fear.

The reason I do not ask questions on Stack Overflow or IRC or Reddit is different – fear, if you like, of having my time wasted and being left in greater confusion than I started off in. And I think that if you, well-intentioned as you are, want to really encourage beginners to ask more questions and seek help, you need to look around and see how many programmers are leaving pickled onions in the food drive boxes.

I think the most common pickled-onion answer I see is not people being overly rude or condescending – it’s people just answering whatever question they wish was asked. Maybe they didn’t read it carefully; maybe it wasn’t a well formed question; maybe they just felt like talking about this tangentially related thing.

To be fair, we might get this in Haskell more than in some communities. The Haskell subreddit has really improved, and I hear the IRC channels have, too, but it used to be that you’d ask what you thought was a reasonable beginner question and someone would tell you to start by looking into Generics or maybe read some Saunders Mac Lane and then all would become clear to you. And you hate to be rude because this person took some of their valuable time to try to help you but you don’t know what just happened and you still don’t have an answer to your question.

Programmers are, in some sense, trying to do *charity* by helping the poor lost souls who have turned to the internet in a time of need. And so it feels bad to criticize them for doing it badly, like it feels bad to tell people not to send their moth-eaten baby clothes to the Puerto Ricans who have no power or clean water.

But contributing “help” like that imposes costs: it costs the food bank (or someone) to shuffle those pickled onions around and ends up doing no one any good. And sorting through the pickled-onion answers imposes costs on the learner, too.
