A Brief Guide to A Few Algebraic Structures


Intro

I started writing this post because, for whatever reason, I keep forgetting what the difference is between a ring and a group, which is funny to me because I never forget the difference between a semiring and a semigroup – although other people do, because it’s quite easy to forget! So, I wanted a fast reference to the kinds of algebraic structures that I am most often dealing with in one way or another, usually because I’m writing Haskell (which has some reliance on terminology and structure from abstract algebra and category theory) or I’m trying to read a book about category theory and they keep talking about “groups.” Wikipedia, of course, defines all these structures, and that’s fine, but what I need in those times is more of a refresher than an in-depth explanation.

So, that’s the intent of this post – to be that fast reference to the kinds of algebraic structures I care about. There will be some mathematical language and notation that may not be familiar to everyone, so don’t freak out if you see something you’re not familiar with – it’s okay. Some of this is stuff I learned while writing this, because I want to know more about lattices than I currently do! There is a glossary at the end, so if some terminology is unfamiliar or you’ve (as I have, many times) forgotten what “idempotent” means, try checking there.

This sort of became longer than I originally intended, but I think the different sections might be useful at different times and perhaps to different audiences, so I kept it all. Hopefully the table of contents will still make it useful as a quick reference. If you like this style, this is a good preview of what you can expect from the main Joy of Haskell book, tentatively titled The Haskell Desk Reference, currently being written.

I had some help writing this and fact-checking it from Daniel Brice. I also had some help writing about complements and duality from my friend, Alex Feldman-Crough – thank you for making ‘involution’ make sense. Therefore, I will stop talking about myself as if I wrote it alone and shift to using “we” but, sorry, it felt weird to write this intro as if I were multiple people. Our hope is that this reference will be helpful to people who may be learning Haskell or PureScript and encountering a lot of new vocabulary.

Algebraic structures, briefly described

This list is organized by dependencies, more or less, rather than alphabetically. The focus is on the subsets of the group-like, ring-like, and lattice-like structures, but is not all-inclusive. We may add to this as time goes on and new structures grab our fancy.

Some symbols used in this section:

  • *: A generic binary operation. It does not necessarily mean multiplication (as with integers). Where we must distinguish between two binary operations, we typically use + and ⋅. The binary operations in the lattice-like structures have their own symbols, though.
  • ×: The Cartesian product. Often this represents the Cartesian product of A and B, i.e., the set of all ordered pairs (a, b) with a ∈ A and b ∈ B. We are primarily concerned here with operations over a single set, though, so it’s more typically A × A in this post.
  • u: An identity element or unit.
  • ′: An inverse.
  • ∀: “For all”; universal quantification.
  • ∈: “in”; set membership. Thus, ∀x ∈ A can be read “for every x in A…”, meaning the thing we’re about to assert must be true for every element x that is in the set A.

For definitions of terms, see glossary.

Group-like Structures

Magma

A set with a (closed) binary operation.

(A,*)

Structure:

  •  *  : A × A → A

Semigroup

A magma where the operation is associative.

(A,*)

Structure:

  •  *  : A × A → A

Laws:

  • ∀x, y, z ∈ A; x * (y*z) = (x*y) * z
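
For example, list concatenation in Haskell is associative; a quick GHCi check (our illustration, not part of the definition):

λ> [1,2] ++ ([3] ++ [4])
[1,2,3,4]

λ> ([1,2] ++ [3]) ++ [4]
[1,2,3,4]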

Monoid

A semigroup with an identity element.

(A,*,u)

Structure:

  •  *  : A × A → A
  • u : A

Laws:

  • (A,*) is a semigroup
  • ∀x ∈ A; x * u = u * x = x

Group

A monoid that has inverses relative to the operation.

(A,*,u,′)

Structure:

  •  *  : A × A → A
  • u : A
  • ′ : A → A

Laws:

  • (A,*,u) is a monoid
  • ∀x ∈ A, ∃x′; x * x′ = x′ * x = u

Abelian group

A group where the operation is also commutative. You may also see the term abelian applied to semigroups and monoids whose operations are commutative.

Laws:

  • ∀x, y ∈ A; x * y = y * x

This is in addition to the laws for semigroup, monoid, or group (whichever abelian structure you’re dealing with) that already pertain.

Idempotent group

A group whose operation is idempotent. Strictly speaking, an idempotent group is necessarily trivial – that is, it is necessarily a group with only one element. As with abelian, idempotent may apply to semigroups or monoids as well when the operation is idempotent.

Laws:

  • ∀x ∈ A; x * x = x

Ring-like Structures

Quasiring

Two monoids over the same set whose monoid structures are compatible, in the sense that

  1. one operation (called multiplication) distributes over the other (called addition) and
  2. the additive identity is a multiplicative annihilator.

(A,+,⋅,0,1)

Structure:

  •  +  : A × A → A
  •  ⋅  : A × A → A
  • 0 : A
  • 1 : A

Laws:

  • (A,+,0) is a monoid
  • (A,⋅,1) is a monoid
  • ∀x ∈ A; 0 ⋅ x = x ⋅ 0 = 0
  • ∀x, y, z ∈ A; x ⋅ (y+z) = x ⋅ y + x ⋅ z
  • ∀x, y, z ∈ A; (x+y) ⋅ z = x ⋅ z + y ⋅ z
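
The natural numbers under ordinary addition and multiplication satisfy these laws (they form a semiring, in fact, as defined below); a quick GHCi check of annihilation and left distributivity:

λ> 0 * 7
0

λ> 3 * (4 + 5) == 3 * 4 + 3 * 5
True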

Nearring

A quasiring with additive inverses.

(A,+,⋅,0,1,−)

Structure:

  •  +  : A × A → A
  •  ⋅  : A × A → A
  • 0 : A
  • 1 : A
  •  −  : A → A

Laws:

  • (A,+,0,−) is a group

Semiring

A quasiring with commutative addition. Alternatively, a ring without inverses, hence also sometimes called a rig, i.e., a ring without negatives.

Quasiring (A,+,⋅,0,1)

Laws:

  • (A,+,0) is an abelian monoid

Rng

A ring without a multiplicative identity.

If we may speak frankly, the rig and rng nomenclatures are abominations. Nevertheless, you may see them sometimes, but we will speak of them no more.

Ring

A quasiring that is both a nearring and a semiring. Alternatively, an abelian group plus a monoid (over the same set) where the monoid operation distributes over the group operation.

Nearring (A,+,⋅,0,1,−)

Laws:

  • (A,+,0,−) is an abelian group

Rings, semirings, and the lot can also be commutative rings, commutative semirings, etc., but, unlike the group-like structures, they are not usually described as, e.g., “abelian rings.”

Division algebra

A ring with multiplicative inverses for all nonzero elements.

(A,+,⋅,0,1,−,′)

Structure:

  •  +  : A × A → A
  •  ⋅  : A × A → A
  • 0 : A
  • 1 : A
  •  −  : A → A
  • ′ : A \ {0} → A \ {0}

The notation A \ {0}, sometimes alternatively given as A − {0}, means “everything in A except 0.”

Laws:

  • (A,+,⋅,0,1,−) is a ring
  • (A \ {0}, ⋅, 1, ′) is a group

Field

A division algebra with commutative multiplication.

Division algebra (A,+,⋅,0,1,−,′)

Laws:

  • ∀x, y ∈ A; x ⋅ y = y ⋅ x (multiplication is commutative)

Lattice-like Structures

There are some cool things about lattices that aren’t covered here, but lattice-like structures will be getting their very own followup post at some point in the (hopefully near) future.

Semilattice

A magma where the operation is commutative, associative, and idempotent. It could refer to either of the abelian semigroups of a lattice: ∨, often called join or Boolean or, and ∧, often called meet or Boolean and. We define the term here for reasons related to its usage in Haskell, but it is best understood in the context of lattices, rather than independently.

Lattice

Two idempotent abelian semigroups over the same set whose semigroup structures are compatible, in the sense that the operations satisfy absorption laws. Interestingly, it’s sort of two semilattices in the same way that a semiring is two monoids, with laws tying them together (distributivity in the case of semirings, absorption laws in the case of lattices).

(A,∨,∧)

Structure:

  •  ∨  : A × A → A
  •  ∧  : A × A → A

Laws:

  • (A,∨) is an idempotent abelian semigroup
  • (A,∧) is an idempotent abelian semigroup
  • ∀x, y ∈ A; x ∨ (x ∧ y) = x
  • ∀x, y ∈ A; x ∧ (x ∨ y) = x

The last pair of laws is called absorption. Since absorption laws are unique to lattices, we discuss them here instead of in the glossary. The absorption laws link a pair of semilattices in a kind of distributive relationship, so that a lattice is not just any two semilattices that happen to be over the same set, but only semilattices that are linked in this way. In particular, the absorption laws ensure that the two semilattices are duals of each other. It can take a moment to see what this means, so let’s pause and look at concrete examples.

Consider a Boolean lattice with two elements, True and False, where || corresponds to ∨ and && corresponds to ∧:

λ> False || (False && True)
False

λ> False && (False || True)
False

But it’s important to note that these hold for all x and y in the set. So, if we swap them, the absorption laws still hold:

λ> True || (True && False)
True

λ> True && (True || False) 
True

Positive integers form a lattice under the operations min and max, and we can see the absorption law in action here, too.

λ> 5 `min` (5 `max` 9)
5

λ> 5 `max` (5 `min` 9)
5

The absorption laws are sometimes called a special case of identity, and they’re also related to idempotence in that the idempotence laws can be derived from the absorption laws taken together: for example, x ∨ x = x ∨ (x ∧ (x ∨ x)) = x, using one absorption law to rewrite the inner x and the other to collapse the whole expression.

Bounded lattice

A lattice whose semigroup structures are monoids.

(A,∨,∧,0,1)

Structure:

  •  ∨  : A × A → A
  •  ∧  : A × A → A
  • 0 : A
  • 1 : A

Laws:

  • (A,∨,∧) is a lattice
  • (A,∨,0) is a monoid
  • (A,∧,1) is a monoid

Complemented lattice

A bounded lattice where each element has a complement.

(A,∨,∧,0,1,′)

Structure:

  •  ∨  : A × A → A
  •  ∧  : A × A → A
  • 0 : A
  • 1 : A
  • ′ : A → A

Laws:

  • (A,∨,∧,0,1) is a bounded lattice
  • ∀x ∈ A; x ∨ x′ = 1
  • ∀x ∈ A; x ∧ x′ = 0

Nota bene: Although ′ defines a particular choice of complements (i.e., each element x ∈ A has exactly one corresponding x′ ∈ A), there may additionally be other elements y ∈ A such that x ∨ y = 1 and x ∧ y = 0. In particular, there may be other suitable functions ′, and x′′ is not necessarily x.

Distributive lattice

A lattice where the operations distribute over each other.

Lattice (A,∨,∧)

Laws:

  • ∀x, y, z ∈ A; x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z)
  • ∀x, y, z ∈ A; x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z)

Strictly speaking, the second law can be derived from the first law and the lattice laws, and as such is redundant. Every totally ordered set – such as the real numbers, or subsets of the reals including the naturals and the integers – forms a distributive lattice with max as ∨ (join) and min as ∧ (meet).

Heyting algebra

A bounded, distributive lattice with an implication operation.

(A,∨,∧,0,1,⇒)

Structure:

  •  ∨  : A × A → A
  •  ∧  : A × A → A
  • 0 : A
  • 1 : A
  •  ⇒  : A × A → A

Laws:

  • (A,∨,∧,0,1) is a bounded, distributive lattice.
  • ∀x ∈ A; x ⇒ x = 1
  • ∀x, y ∈ A; x ∧ (x ⇒ y) = x ∧ y
  • ∀x, y ∈ A; y ∧ (x ⇒ y) = y
  • ∀x, y, z ∈ A; x ⇒ (y ∧ z) = (x ⇒ y) ∧ (x ⇒ z)

Boolean algebra

A complemented Heyting algebra.

Complemented lattice (A,∨,∧,0,1,′)

Laws:

  • (A,∨,∧,0,1,⇒) is a Heyting algebra, where ⇒ : A × A → A is given by x ⇒ y = x′ ∨ y.
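
For the two-element Boolean algebra, the complement is negation and the join is disjunction, so the derived implication can be written directly in Haskell. A minimal sketch (the name implies is ours):

implies :: Bool -> Bool -> Bool
implies x y = not x || y

λ> True `implies` False
False

λ> False `implies` False
True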

Some algebra in Haskell

The typeclass system of Haskell more or less corresponds to algebraic structures, with types as the sets. The typeclass definition gives the most general possible form of the operations over the sets, and the instance declarations define the implementations of those operations for the specified type (set).

Not all of the above structures are well represented in Haskell, but some are, and a couple of them (semigroups and monoids) are super important. We give those typeclass definitions, a representative instance or two, and links to documentation where appropriate.

One important thing to note before we get started, which is perhaps somewhat disappointing: the compiler does not enforce the laws of the algebraic typeclasses. The only thing standing between you and a law-breaking ring or monoid is … well … you, and your willingness to test your instances. That isn’t really different from the situation in mathematics, where it would be on you to prove that such-and-such upholds the ring laws or whatever, but some people come to Haskell expecting that the compiler won’t let you write a bad instance, and that is absolutely not the case. We like type checking and inference a lot and are grateful for the problems they do help with, but it’s important to be aware of their limitations as well!

Semigroup

The Semigroup class in Haskell is defined as follows:

class Semigroup a where
  -- | An associative operation.
  (<>) :: a -> a -> a

Many sets form semigroups under more than one operation, so in Haskell, to preserve the unique relationship between a type and a typeclass, we use a named type wrapper, called a newtype, which identifies which semigroup we’re talking about. This parallels the situation in math where a set can’t be a group (or a ring, etc). Instead, a group has to be a pair (G, *) where G is a set and * : (G, G) -> G and the group axioms hold. G is not the group: the group is the pairing of the set G with the structure function *. In Haskell, we use newtypes to pair a set (in this case, the type being wrapped) with some structure functions.

-- the Max semigroup is only for orderable sets
instance Ord a => Semigroup (Max a) where
  (<>) = coerce (max :: a -> a -> a)

-- the NonEmpty semigroup is concatenation of nonempty lists
instance Semigroup (NonEmpty a) where
  (a :| as) <> ~(b :| bs) = a :| (as ++ b : bs)
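
For example, in GHCi (the Max and NonEmpty types live in Data.Semigroup and Data.List.NonEmpty, respectively):

λ> Max 3 <> Max 5
Max {getMax = 5}

λ> (1 :| [2]) <> (3 :| [4])
1 :| [2,3,4]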

You can find more about Semigroup over on Type Classes.

Monoid

In modern Haskell, Semigroup is a superclass of Monoid. That is, since monoids are semigroups with the additional requirement that there be an identity element, semigroup is in some sense the weaker algebra and there are more of them than there are monoids. What this means is if we want a Monoid, we have to first have a Semigroup; the binary operation comes from the Semigroup instance. Then we define the identity element for that type and operation in our Monoid instance – in the Monoid class it’s called mempty.

class Semigroup a => Monoid a where
  -- | Identity of '<>'
  mempty  :: a

Again, many sets form monoids under more than one operation, so we use newtypes in Haskell to tell them apart.

instance Num a => Semigroup (Sum a) where
  (<>) = coerce ((+) :: a -> a -> a)

instance Num a => Monoid (Sum a) where
  mempty = Sum 0

instance Num a => Semigroup (Product a) where
  (<>) = coerce ((*) :: a -> a -> a)

instance Num a => Monoid (Product a) where
  mempty = Product 1
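
In GHCi, for example:

λ> Sum 3 <> Sum 4 <> mempty
Sum {getSum = 7}

λ> mconcat [Product 2, Product 3, Product 4]
Product {getProduct = 24}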

Julie has also written extensively about Monoid over on Type Classes, about JavaScript and monoidal folds, and also given talks about these wonderful structures.

It is perhaps worth pointing out that the Alternative and MonadPlus typeclasses in Haskell are also monoids. The difference between them and the Monoid class is that Monoid is a typeclass for concrete types, whereas Alternative and MonadPlus are for type constructors, that is, parameterized types. We can make this more precise. If f is an Alternative, then for all a, f a is a monoid under <|>, with identity empty. We encode this fact in Haskell via the Alt newtype and its Monoid instance.
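
For example, the Alternative instance for Maybe keeps the first Just, and the Alt newtype exposes that same operation as a Monoid (in GHCi, after importing Control.Applicative and Data.Monoid):

λ> Nothing <|> Just 2 <|> Just 3
Just 2

λ> getAlt (Alt Nothing <> Alt (Just 2))
Just 2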

Ring

We don’t exactly have a Ring typeclass in standard Haskell; what we have instead is the Num class and it’s sort of like a ring. It’s a big typeclass, so this is a simplified version with the functions you’d expect a ring to have.

class  Num a  where
  (+), (-), (*)       :: a -> a -> a
  negate              :: a -> a
  fromInteger         :: Integer -> a

Perhaps surprisingly, fromInteger has everything to do with rings and rightly belongs in the definition of any typeclass for rings. This is because the ring of integers is an initial object in the category of rings, i.e., for every ring A, there is always one and only one ring homomorphism (i.e., ring-structure-compatible function) from the ring of integers to the ring A. This is beyond the scope of this blog post, at least for now, but we mention it here so that someday Julie can come back to it.
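
Concretely, fromInteger is the function that interprets integer literals at whatever Num type is wanted:

λ> fromInteger 7 :: Double
7.0

λ> fromInteger 7 :: Rational
7 % 1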

Comments in the source code say:

The Haskell Report defines no laws for ‘Num’. However, ‘(+)’ and ‘(*)’ are customarily expected to define a ring

and then give the properties a ring is expected to have; however, those laws are rarely mentioned with regard to Num, and we suspect many people do not even think of Num as a ring that should have laws. Let us speak no more of this.

Semiring

It’s a shame that Semiring is not in the standard library (yet?). It is in the standard PureScript library, and we really admire PureScript for that, among other things. However, we have some decent implementations of it in libraries, for example this semirings package.

That Semiring definition looks like this:

class Semiring a where
  plus  :: a -> a -> a -- commutative operation
  zero  :: a           -- identity for `plus`
  times :: a -> a -> a -- associative operation
  one   :: a           -- identity for `times`

It also provides several instances, including this one for Maybe – notice the difference between the plus and times cases when one input is Nothing:

instance Semiring a => Semiring (Maybe a) where
  zero  = Nothing
  one   = Just one

  plus Nothing y = y
  plus x Nothing = x
  plus (Just x) (Just y) = Just (plus x y)

  times Nothing _ = Nothing
  times _ Nothing = Nothing
  times (Just x) (Just y) = Just (times x y)
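
In GHCi, with that package’s Data.Semiring module in scope, the asymmetry looks like this:

λ> plus Nothing (Just 2) :: Maybe Integer
Just 2

λ> times Nothing (Just 2) :: Maybe Integer
Nothing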

There is more about semirings on Type Classes.

Lattice and semilattice

These structures are interesting, but we have not yet written much about them or used the lattices package. That package defines two semilattice classes and then a Lattice class that is constrained by both. Note that the Lattice class has no new functions in it; you can use it as a constraint on other things when you have two semilattices (the meet and the join) and the absorption laws hold.

class JoinSemiLattice a where
    (\/) :: a -> a -> a

class MeetSemiLattice a where
    (/\) :: a -> a -> a

class (JoinSemiLattice a, MeetSemiLattice a) => Lattice a where

And the instances for Bool are defined:

instance JoinSemiLattice Bool where
    (\/) = (||)

instance MeetSemiLattice Bool where
    (/\) = (&&)

instance Lattice Bool where

The absorption law does hold for the Bool lattice, so it looks like we’re all good here!
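
We can check that ourselves in GHCi after importing Algebra.Lattice from that package:

λ> True \/ (True /\ False)
True

λ> False /\ (False \/ True)
False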

A little glossary

This first section gives definitions of some common terminology when talking about the laws and properties of these structures.

Some of these will probably be familiar to most people from high school math, others may not be.

  • Absorption : See lattices.

  • Annihilator : We have to include this term because it is such a metal thing to call zeroes. Annihilation is a property of some structures, such that there is an element of a set that always annihilates the other input to a binary operation, sort of the opposite of an identity element (see below). If (S, *) is a set S with a binary operation * on it, the annihilator, or zero element, is an element z such that for all a in S, z * a = a * z = z. In the monoid of integer multiplication, the annihilator is zero, while in the monoid of set intersection, the annihilator is the empty set; notice that the monoids of integer addition and set union do not have annihilators.

  • Associativity : Associativity may be familiar from elementary arithmetic, even if the name isn’t. For example, you may recall that 2 * (3 * 4) and (2 * 3) * 4 always evaluate to the same result, even though you simplify the parts in parentheses first so the parentheses change the order in which you evaluate the expression. When the result never depends on the order of simplification, we say that a binary operation is associative. More formally, an operation * on a set S is associative when for all x, y, and z in S, x * (y * z) = (x * y) * z.

  • Binary operation : A binary operation * on a set S is a function * : (S, S) -> S. Notice that * maps (S, S) back into S. Because of a historical quirk, this fact is sometimes called closure (see below). In Haskell, that looks like a type signature such as a -> a -> a because Haskell is curried by default. All functions in Haskell are … actually unary functions, taking one input and returning one result (which may itself be a function). The final type in a Haskell type signature is the return type; all the others are input types.

  • Closed : By definition, a binary operation over a set is closed; that is, for all a, b in the set S, the result of the binary operation a * b is also an element of S. This coincides exactly with the definition of a function (S, S) -> S (see above) and is also sometimes called the property of closure. While this is definitionally a property of binary operations and, thus, not independently important, we mention it here because it comes up in the Haskell literature.

  • Commutativity : Commutativity is not the same as associativity, although most commutative operations are also associative. The commutative property of some binary operations holds that changing the order of the inputs does not affect the result. More formally, an operation * on a set S is commutative when for all x and y in S, x * y = y * x.

  • Complement : You may have learned about complements in geometry or with sets: two angles are complementary when they add up to 90 degrees; and two subsets of a set S – let’s call the subsets A and B – are complements when A ∪ B = S and A ∩ B = ∅ (where ∪ is union and ∩ is intersection). Simply put, a complement is what you combine with something to make it “whole”. In a complemented lattice, every element a has a complement b satisfying a ∨ b = 1 and a ∧ b = 0, where 1 and 0 are the greatest and least elements of the set, respectively. Complements need not be unique, except in distributive lattices.

  • Distributivity : The distributive property in arithmetic states that multiplication distributes over addition such that 2 * (1 + 3) = (2 * 1) + (2 * 3). Some algebraic structures generalize this with their own distributive law. Suppose we have a set S with two binary operations, <> and ><. We say >< distributes over <> when

    • for all x, y, and z in S,
    • x >< (y <> z) = (x >< y) <> (x >< z) (left distributive) and
    • (y <> z) >< x = (y >< x) <> (z >< x) (right distributive).

    Note that if >< is commutative and left distributive, it follows that it is also right distributive (and therefore distributive).

  • Dual : This principle can also be somewhat tricky to understand, and discussions of what it means tend to get into the mathematical weeds quickly. Roughly, for our purposes (but perhaps not all purposes), it’s a “mirror-like” relationship between operations such that one “reflects” the other. Somewhat more formally, it means that there is a mapping between A and B that involutes, so f(A) = B and f(B) = A. An involution is a function that equals its own inverse, so applying it twice gives the identity; that is, if f(A) = B and f(B) = A, then f(f(A)) = A. Understanding duality is important because anything you prove about A transfers, through the mapping, to a corresponding result about B, and vice versa. So, A and B are related, but it’s a bit more complicated than a standard function mapping. Some examples:

    • In Haskell, sum types and product types are dual (as are products and coproducts in category theory). You can demonstrate this by implementing f :: (a, b) -> (Either (a -> c) (b -> c) -> c) (mapping a product type to a sum) and f' :: (Either a b) -> ((a -> c, b -> c) -> c) (mapping a sum type to a product) and trying them out; a sketch follows at the end of this entry.

    • In classical logic, universal quantification (“for all x in A…”) and existential quantification (“there exists an x in A…”) are dual because ∃x : ¬P(x) and ¬∀x : P(x) are equivalent for all predicates P: if there exists an x for which P does not hold, then it is not the case that P holds for all x (but the converse does not hold constructively).
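
    Here is a minimal sketch of the two mappings from the sum/product example above, using the signatures as given:

    f :: (a, b) -> (Either (a -> c) (b -> c) -> c)
    f (a, b) = \e -> case e of
      Left g  -> g a
      Right h -> h b

    f' :: Either a b -> ((a -> c, b -> c) -> c)
    f' e = \(g, h) -> case e of
      Left a  -> g a
      Right b -> h b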

  • Idempotence : The idempotence we care about for our algebraic structures is a property of some binary operations under which applying the operation multiple times doesn’t change the result after the first application. However, it can be a bit tricky to understand, so let’s consider idempotence with regard to unary functions to get a sense of the meaning first. Consider a device where there are separate buttons for turning the device on and off; pushing the on button doesn’t turn it “more on”, so the on button is idempotent (and so is the off button). Similarly, taking the absolute value of integers is an idempotent unary function; you can keep taking the absolute value of a number and after the first time, the answer won’t change.

    We say an element of a set is idempotent with respect to some operation * if x * x = x. We say an operation is idempotent if every element in the set is idempotent with respect to the operation. Both the annihilator and identity elements, if present in a given structure, are idempotent elements. For the natural numbers under multiplication, both 1 and 0 are idempotent; for the naturals under addition, only 0 is. Hence neither addition nor multiplication of the natural numbers is itself idempotent. Furthermore, the set operations of union and intersection are both idempotent operations.
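
    A quick GHCi check of the absolute-value example:

    λ> abs (-5)
    5

    λ> abs (abs (-5))
    5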

  • Identity : An identity element is an element of a set that is neutral with respect to some binary operation on that set; that is, it leaves any other element of that set unchanged when combined with it. An identity value is unique with respect to the given set and operation. More formally, for a set S with a binary operation * on it, x is the identity value when x * a = a * x = a for all a in S. In Haskell, mempty is a return-type polymorphic identity value for monoids, and empty is the same but for Alternatives; identity values are also often called one and zero, by analogy with the identities for multiplication and addition, respectively. Often, the identity called “zero” is also an annihilator for the dual operation, as the empty set, the empty list (identity of concatenation, annihilator of zip), and False are in their respective structures.

  • Invertibility : This is also familiar to many of us from basic arithmetic, even if the name is not. Zero can serve as an identity element for addition of integers or of the natural (“counting”) numbers assuming we include zero in those. The set of integers includes numbers that we can add to each other to get back to zero, e.g., (-3) + 3 = 0; there aren’t any such natural numbers because the set of natural numbers does not include negatives. This property of the integers under addition is invertibility. Given a binary operation * on S with identity e, an element b in S is said to be an inverse of an element a in S if a * b = e = b * a, in which case a (as well as b, simply by the symmetry in the definition) is said to be invertible in S relative to *. If every element of S is invertible in S relative to *, then we say S has inverses relative to *.

  • Unit : The idea of being a unit is related to invertibility. A unit is an element of a ring that has a multiplicative inverse. The number 1 is its own multiplicative inverse, as is (-1), because (-1) * (-1) = 1. In Haskell, there is a type called “unit”, written () (an empty tuple, if you will); while types in Haskell do not form a ring, the unit type plays the same role in the semiring of types as the number 1 plays in the semiring of natural numbers (the zero is represented by the Void type, which has no values).

Left and right

You may sometimes hear about left or right associativity or identity. For example, exponentiation is only right-associative. That is, in a chain of such operations, they group for evaluation purposes from the right.

λ> 2^3^2
512
-- is the same as
λ> 2^(3^2)
512
-- but not
λ> (2^3)^2
64

This is more of a convention than a property of the function, though, and it is often preferable to use parentheses to make associativity explicit when it is one-sided (i.e., when something is right associative but not associative). We call something associative when it is both left- and right-associative. We call something distributive when it is both left- and right-distributive. We call something an identity if it is both a left and right identity.

If you like my writing, consider buying me coffee or check out Type Classes, where I teach and write about Haskell and Nix.
