From the point of view of modern mathematics, the algebra you learn in high school is better called the "algebra of real numbers." The entities you deal with are real numbers like 3, -5, 2/3, $\sqrt{2}$, or $\pi$, as well as symbols like $x$ or $y$ that stand for variable or unknown real numbers. Specifically, you study operations on real numbers like $+$, $-$, $\times$, $\div$, powers, and roots, as well as functions like $\sin(x)$, $e^x$, or $x^2+2x$ that take real numbers and return other real numbers.

The goal of this article is to show you some *other* kinds of entities you can do algebra with, other than real numbers. While numbers have addition, multiplication, and so on, other kinds of objects come with their own operations that make sense. Once you've got objects and operations, you can pose problems that really feel algebraic, like "solve for $x$." I'll start with a familiar example, and then cover some of the most important recurring examples you'll run across in modern algebra.

This example isn't particularly interesting to mathematicians on its own, but will be familiar to any computer programmer. Instead of numbers, let's take our set of objects to be F# strings. One operation you can do on strings is concatenating them. F# uses the "$+$" operator to concatenate strings because, well, doesn't it just look right?

```
> "hot" + "dog";;
val it : string = "hotdog"
```

Like the set of real numbers, the set of possible strings is infinite (but in either case they can't all fit on your finite computer). You can use symbols as placeholders for strings being added together, like
$$s + \texttt{"dog"}$$
This statement means the concatenation of a yet-to-be-specified string $s$ with the string `"dog"`. I put `"dog"` in quotes and use a different font for $s$ to indicate that $s$ stands *in place* of a string rather than being a string itself.

An algebraic problem involving strings and the operation of concatenation might be solving for $s$ in the following equation:

$$s + \texttt{"dog"} = \texttt{"hotdog"}.$$
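Whether such an equation has a solution can be checked mechanically. Here's a minimal sketch; the helper name `solvePrefix` is my own invention, not a standard library function:

```
let solvePrefix (suffix:string) (whole:string) =
    if whole.EndsWith(suffix)
    then Some (whole.Substring(0, whole.Length - suffix.Length))
    else None

solvePrefix "dog" "hotdog" // Some "hot"
```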
Sometimes there's a solution to equations like this, but sometimes there's not. For instance:
$$\texttt{"c"} + s = \texttt{"fsharp"}$$
doesn't have a solution for $s$; there's no string you can append to `"c"` to get `"fsharp"`. As another difference between real numbers and strings, the sums $x + y$ and $y + x$ are always equal if $x$ and $y$ are real numbers. If $x$ and $y$ represent strings, then $x + y$ and $y + x$ are usually different. For instance, `"hot" + "dog"` isn't equal to `"dog" + "hot"`. In algebra jargon, we would therefore say that addition of real numbers is **commutative** while concatenation of strings is **non-commutative**.
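These claims are easy to spot-check in the F# REPL:

```
"hot" + "dog" = "dog" + "hot" // false: string concatenation is non-commutative
3.0 + 5.0 = 5.0 + 3.0         // true: real addition is commutative
```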

For the rest of this article, I'll talk about various other kinds of entities and operations, focusing on those that are important in modern algebra. In each case I'll show you how to implement the operation in F#.

- Entities: A finite set of $n$ integers $\{0,1,2,\ldots,n-1\}$.
- Operations: $+$, $-$, $\times$, $\div$ with modified definitions.

Modular arithmetic works like regular integer arithmetic except that the notion of equality is replaced. The first step is to choose a positive integer and "assert" that it is the same as zero. Whatever positive integer $n$ we choose, we write $n \equiv 0$ and say we are working "mod $n$."

For instance, if we are working "mod $5$", then we would write $0 \equiv 5$, or often $0 \equiv 5 \pmod{5}$ to be clear. The consequence of making zero equivalent to 5 is that some other pairs of distinct integers become equivalent as well. For instance, if you start with $0 \equiv 5 \pmod{5}$ and add $2$ to both sides, you get $2 \equiv 7 \pmod{5}$. In fact, there's a whole infinite family of equivalent numbers: $\ldots, -8, -3, 2, 7, 12, 17, \ldots$. In a world where adding $5$ is equivalent to adding zero, any two integers that differ by a multiple of $5$ are equivalent.
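To see this family concretely, you can filter a range of integers for those that differ from $2$ by a multiple of $5$, using F#'s built-in `%` operator:

```
[-8 .. 12] |> List.filter (fun k -> (k - 2) % 5 = 0)
// [-8; -3; 2; 7; 12]
```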

Working mod 5, there are only 5 distinct families of numbers, which are the families equivalent to $0$, $1$, $2$, $3$, or $4$. To find out what family a number belongs to, you can find its remainder when divided by $5$. For instance:

```
> 1024 % 5;;
val it : int = 4
```

so $1024 \equiv 4 \pmod{5}$.

Working mod $n$, the result of any arithmetic operation can be written as an integer in the set $\{0,1,2,\ldots,n-1\}$. For instance $10 + 10 \equiv 6 \pmod{7}$ because $10+10 = 20$ and $20 - 7 - 7 = 6$. For that reason, you can think of modular arithmetic as working with a finite subset of the integers.

To implement modular arithmetic in F#, you basically need to do the integer arithmetic and then report all results between $0$ and $n-1$. One catch is that the `%` operator in F# can return negative results if the input is negative. The trick in the `modulo` function below ensures we always get a result in the right range.

```
let modulo n x = (x % n + n) % n
let plus n x y = modulo n (x + y)
let minus n x y = modulo n (x - y)
let times n x y = modulo n (x * y)
```
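For example, the built-in `%` and the `modulo` function defined above disagree on negative inputs:

```
-3 % 5         // -3 with F#'s built-in operator
modulo 5 (-3)  // 2, always in the range 0..4
```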

There are a few different ways to represent modular arithmetic in F# code, but I prefer this one using higher-order functions. If you want to do some extended computations in, say, mod 7, you could locally overload the arithmetic operators, for instance:

```
let (+) = plus 7
10 + 10 // returns 6
```

As for a real *algebraic* insight into modular arithmetic, division works much differently than it does for integers. For instance, what integer $x$ solves $3 \times x = 4$? There is none; the only answer is $x = 4/3$, which is not an integer. However, if you're working mod 5, then the equation $3 \cdot x \equiv 4 \pmod{5}$ does have a solution, namely $x=3$. Generally speaking, a quotient like $a/b$ may or may not exist mod $n$, even when $b \not\equiv 0$. But if $n$ is a prime number, division by any non-zero number always works.

In other words, if you're working in any of mod 2, mod 3, mod 5, mod 7, mod 11, and so on, division by a non-zero number always has a result. Here's an inefficient but correct way to do the division $x/y$ mod $n$: check each number $z$ from $0$ to $n-1$ and hope that one of them satisfies $y \cdot z \equiv x \pmod{n}$.

```
let divide n x y =
    [0..n-1]
    |> List.tryFind (fun z -> times n y z = modulo n x)
```

When $n$ is prime and $y \neq 0$, you'll always get an answer. For instance:

```
> divide 37 26 8;;
val it : int option = Some 31
```

Otherwise the results are hit or miss: `divide 8 5 3` successfully returns `Some 7` while `divide 8 5 2` returns `None`. Can you see why?

The set of integers $\{0,\ldots,n-1\}$, equipped with arithmetic operations mod $n$, is denoted $\mathbb{Z}/n\mathbb{Z}$ or, if $n$ is prime, $\mathbb{F}_n$. I'll explain these notations later.

In a functional-first language like F#, functions are "first-class" objects. The same is true in mathematics, and there are a lot of contexts where functions are the central objects of study. In both programming and math, there's an important operation you can use to combine two existing functions to get a new one: composition.

Mathematically speaking, the composition of two functions $f(x)$ and $g(x)$ is a new function, written $g \circ f$ and defined by $(g\circ f)(x) = g(f(x))$. For that composition to make sense, the outputs of $f$ need to be valid inputs to $g$. In F#, the composition of a function `f : 'T -> 'U` and a function `g : 'U -> 'V` is written `f >> g`. (I like this notation better than the math notation because the symbol points in the direction that data flows: out of $f$ and into $g$.) Notice the output type of `f` and the input type of `g` need to match, and F# checks this property of any composition you write at compile time.

The set of allowed inputs to a function is called its **domain** and the set its outputs belong to is called its **codomain**. For instance, the set of real numbers is both the domain and the codomain of the function $f(x) = x^2$. In math it's common to study families of functions where the domain and the codomain of each are the same set, i.e. having type `'T -> 'T`. That means you can compose them together in any order. For instance, here are some functions that all have the signature `int -> int`:

```
let f x = x * x
let g x = x - 1
let h x = x + 3
let e x = x
```

Any composition of these functions, like `g >> f >> h` or `f >> e` or `f >> f >> f`, is valid, as the F# type system will confirm.

There are some algebraic observations you can make here. For starters, any function composed with `e` will be unchanged. The compositions `e >> f` and `f >> e` are indistinguishable from `f`, returning the square of their inputs. An object like this, which has no effect on other objects under a given operation, is called an **identity**. F# actually comes with a generic identity function called `id`, named for this reason.

Another observation is that `h >> g >> g >> g` is the same as the identity function on integers. In other words, three applications of `g` "undo" an application of `h`. The math term for this situation is that $g \circ g \circ g$ is the **inverse function** of $h$.
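You can verify this at the REPL. Using the definitions of `g` and `h` above:

```
let g x = x - 1
let h x = x + 3

let undoH = g >> g >> g
(h >> undoH) 10 // 10
(undoH >> h) 10 // 10
```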

Intuitively, a permutation is a rearrangement of a collection of objects. For example, from different permutations of the four letters of "star" you can get "rats" or "tars" or "arts". Two permutations of the F# list `[1;2;3]` are `[2;3;1]` and `[3;1;2]`.

You can think of any permutation on $n$ objects as a function $\sigma(i)$ that takes an index and returns a new index. (The convention in math is to use 1-based indices.) Say you want to specify a permutation to get the string `"ARTS"` by rearranging the letters of the string `"STAR"`. The first letter of `"ARTS"` is `"A"`, which comes from the third letter of `"STAR"`, so $\sigma(1) = 3$. The second letter of `"ARTS"` is `"R"`, which comes from the fourth letter of `"STAR"`, so $\sigma(2) = 4$. Likewise, the third and fourth letters of `"ARTS"` come from the second and first letters of `"STAR"`, so $\sigma(3) = 2$ and $\sigma(4) = 1$. Putting this data in a table, we have:
$$\begin{array}{c|cccc}
i & 1 & 2 & 3 & 4 \\ \hline
\sigma(i) & 3 & 4 & 2 & 1
\end{array}$$
How about another permutation $\tau$, given by the following table?
$$\begin{array}{c|cccc}
i & 1 & 2 & 3 & 4 \\ \hline
\tau(i) & 4 & 3 & 2 & 1
\end{array}$$
How does $\tau$ permute the string `"STAR"`? The values of $\tau(1)$, $\tau(2)$, $\tau(3)$, and $\tau(4)$ are 4, 3, 2, and 1 respectively. Therefore, we build the result by taking the fourth, third, second, and first letters of `"STAR"`. The result is `"RATS"`. (Note: you may have seen a different recipe for permuting a string or list based on a permutation function $\sigma$. I like this one because the permutation of the string "1234" agrees with the way the numbers 1, 2, 3, and 4 would be permuted by $\sigma$.)

Basically, we're thinking of a permutation as rearranging the numbers $1$ to $n=4$ here. A compact way to record a permutation $\sigma$ of $n$ numbers is as the list $\sigma(1), \sigma(2), \ldots, \sigma(n)$. That's the same as just keeping the second row of the table above.

In F#, we can represent a permutation on $n$ objects as an array of $n$ numbers. This code defines `Permutation` as an alias for `int array`. The function `permuteString` applies a permutation to a given string, like we did above.

```
open System
type Permutation = int array

let permuteString (p:Permutation) (s:string) =
    p
    |> Array.map (fun i -> s.[i-1])
    |> String

permuteString [|4;3;2;1|] "STAR" // returns "RATS"
```

Because permutations are functions, they can be composed. The composition of $\tau \circ \sigma$ means "apply $\sigma$ and then apply $\tau$". In our notation, composition of permutations is

```
let composePermutations (p1:Permutation) (p2:Permutation) =
    Array.map (fun i -> p1.[i-1]) p2
```

Note that `p1` and `p2` need to have the same length to be composable.

There are a bunch of algebraic statements we can make about permutations, but I'll draw attention to a few which parallel earlier results. Composition of permutations is *non-commutative* in general. That is, while some pairs of permutations have the same composition regardless of order,

```
let s,t = [|2;1;3;4|], [|1;2;4;3|]
composePermutations s t // [|2; 1; 4; 3|]
composePermutations t s // [|2; 1; 4; 3|] same
```

other pairs give different results depending on the order:
```
let p,q = [|1;3;2|], [|2;1;3|]
composePermutations p q // [|3; 1; 2|]
composePermutations q p // [|2; 3; 1|] different
```

As another algebraic result, there's such a thing as an **identity** permutation. On the set $\{1,\ldots,n\}$, the identity function that sends every number to itself is a permutation. In our F# convention it would be represented by `[|1..n|]`. You can check that the permutation `[|1;2;3;4|]` is an identity -- any other permutation on four objects that you compose it with will be unchanged.

Finally, any permutation can be "undone". In other words, any rearrangement of $n$ objects can be returned to the initial arrangement by some other permutation. In the terminology from before, every permutation has an *inverse*. You can see that `[|2;3;1;5;4|]` is the inverse of `[|3;1;2;5;4|]`:

`composePermutations [|2;3;1;5;4|] [|3;1;2;5;4|] // [|1;2;3;4;5|]`

Because the result of composing these two permutations is the identity permutation, applying one and then the other has no effect on the target set.
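If you'd rather compute the inverse directly than guess it, here's a short sketch using the `Permutation` alias from above; the function name `inversePermutation` is my own:

```
// If p sends index i to p.[i-1], the inverse must send p.[i-1] back to i.
let inversePermutation (p:Permutation) =
    let inv = Array.zeroCreate p.Length
    p |> Array.iteri (fun idx target -> inv.[target-1] <- idx + 1)
    inv

inversePermutation [|3;1;2;5;4|] // [|2;3;1;5;4|]
```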

There are a lot of operations involving vectors and matrices; in fact, the whole subject of **linear algebra** is devoted to studying these operations. I'll cover just a few of the most important ones and how to code them up in F#.

A simple example of vectors is points on the 2D coordinate plane, which can be represented by ordered pairs of real numbers. A starting point to model these in F# could be a type definition:

`type Vec2D = float * float`

The key operations on 2D vectors are addition and scalar multiplication. **Vector addition** combines two vectors to give a new one (i.e. `Vec2D * Vec2D -> Vec2D`) and **scalar multiplication** combines a real number (scalar) with a vector to give a new vector (`float * Vec2D -> Vec2D`). To add 2D vectors, you just add the corresponding components:

`let add2 ((u1,u2):Vec2D) ((v1,v2):Vec2D) = (u1 + v1, u2 + v2)`

and to multiply a 2D vector by a real number you multiply all of its coordinates by the real number.

`let scale2 (s:float) ((v1,v2):Vec2D) = (s * v1, s * v2)`

This representation is nice because type safety gives you "dimension safety". A 3D vector, representing a point in 3D space, would be represented by an ordered triple of floats. The F# compiler will ensure you never accidentally try to add a 2D vector with a 3D vector.

The compromise here is that you need to write a new function to add vectors in each number of dimensions, for instance `add2`, `add3`, `add4`, and so on. I'm not aware of a way to write an F# function which is generic over tuples of different lengths. If you'd rather write your vector arithmetic functions once, you can always use a list or array instead of a tuple.

```
type Vector = float list
let add (u:Vector) (v:Vector) = List.map2 (+) u v
let scale (s:float) (v:Vector) = List.map ((*) s) v
```
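With these list-based definitions, the operations work in any dimension:

```
add [1.0; 2.0] [3.0; 4.0]  // [4.0; 6.0]
scale 2.0 [1.0; 2.0; 3.0]  // [2.0; 4.0; 6.0]
```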

There's an important class of functions that map vectors to other vectors called **linear transformations**. The most important feature of a linear transformation is the way it affects vector sums and scalar multiples. If $u+v$ is a sum of two vectors and $L$ is a linear transformation, then $L(u+v) = L(u) + L(v)$ -- the transformation of the sum is the sum of the transformed vectors. Likewise if $sv$ is a scalar multiple, then $L(sv) = sL(v)$. The implication of these two properties is that vector arithmetic "looks the same" before or after the transformation is applied.
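You can spot-check both properties on a concrete linear transformation, say $L(x, y) = (2x + y,\ x)$ on pairs of floats. This is just a sketch; `L`, `addV`, and `scaleV` are names I made up for the check:

```
let L (x, y) = (2.0 * x + y, x)
let addV (a, b) (c, d) = (a + c, b + d)
let scaleV s (a, b) = (s * a, s * b)
let u, v = (1.0, 2.0), (3.0, 4.0)

L (addV u v) = addV (L u) (L v)     // true: L(u+v) = L(u) + L(v)
L (scaleV 5.0 v) = scaleV 5.0 (L v) // true: L(sv) = sL(v)
```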

What's more, the instructions for any linear transformation can be written down compactly. Every linear transformation produces the entries of the output vector as weighted sums of the entries of the input vector. For instance, if $A$ is a transformation that maps 4D vectors to 3D vectors, then $A$ could act on a vector $(x_1,x_2,x_3,x_4)$ to produce another vector $(y_1,y_2,y_3)$. If we know $A$ is *linear*, then we're guaranteed we can write the effect of $A$ as:
$$\begin{array}{rcl}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + a_{14}x_4 &=& y_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + a_{24}x_4 &=& y_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 + a_{34}x_4 &=& y_3
\end{array}$$
for some 12 numbers $a_{ij}$. This operation is more compactly written in **matrix notation** as follows:
$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
\begin{bmatrix}
x_1 \\
x_2 \\
x_3 \\
x_4
\end{bmatrix}
=
\begin{bmatrix}
y_1 \\
y_2 \\
y_3 \\
\end{bmatrix}$$

This is called **matrix multiplication** of the two matrices on the left to produce the one on the right. Even though we use the term "multiplication", you can still think of it as applying a function to a vector, producing a new vector.

Two linear transformations $A$ and $B$ can be composed to get $A \circ B$ provided, as usual, that the codomain of $B$ matches the domain of $A$. For instance, the domain of $A$ above is the set of four-dimensional vectors with real number entries, denoted $\mathbb{R}^4$, while the codomain is its three-dimensional analogue, $\mathbb{R}^3$. A compatible linear transformation $B$ would need to have codomain $\mathbb{R}^4$, or in other words it needs to output 4-dimensional vectors. If its arguments are, for instance, 2D vectors then it will be represented by a matrix with two columns and four rows. The composition $A\circ B$ will then have domain $\mathbb{R}^2$ and codomain $\mathbb{R}^3$, and it acts on a vector $(v_1,v_2)$ as follows: $$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \end{bmatrix} \begin{bmatrix} b_{11} & b_{12} \\ b_{21} & b_{22} \\ b_{31} & b_{32} \\ b_{41} & b_{42} \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix}.$$

Just as the numbers $a_{ij}$ define the operation of $A$, the eight numbers $b_{ij}$ define the operation of $B$. If you expand the operation of both linear transformations $A$ and $B$, you get the following:
$$\begin{array}{rcl}
(a_{11}b_{11} + a_{12}b_{21} + a_{13}b_{31} + a_{14}b_{41}) v_1 + (a_{11}b_{12} + a_{12}b_{22} + a_{13}b_{32} + a_{14}b_{42}) v_2 &=& y_1\\
(a_{21}b_{11} + a_{22}b_{21} + a_{23}b_{31} + a_{24}b_{41}) v_1 + (a_{21}b_{12} + a_{22}b_{22} + a_{23}b_{32} + a_{24}b_{42}) v_2 &=& y_2\\
(a_{31}b_{11} + a_{32}b_{21} + a_{33}b_{31} + a_{34}b_{41}) v_1 + (a_{31}b_{12} + a_{32}b_{22} + a_{33}b_{32} + a_{34}b_{42}) v_2 &=& y_3
\end{array}
$$
which *also* has the form of a linear transformation. This is clearer if you rewrite it as:
$$\begin{array}{rcl}
c_{11}v_1 + c_{12}v_2 &=& y_1\\
c_{21}v_1 + c_{22}v_2 &=& y_2\\
c_{31}v_1 + c_{32}v_2 &=& y_3
\end{array}$$
where $c_{ij} = (a_{i1}b_{1j} + a_{i2}b_{2j} + a_{i3}b_{3j} + a_{i4}b_{4j}) = \sum_{k=1}^4 a_{ik}b_{kj}.$ In matrix notation,
$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
\begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32} \\
b_{41} & b_{42}
\end{bmatrix}
\begin{bmatrix}
v_1 \\
v_2
\end{bmatrix}
=
\begin{bmatrix}
c_{11}&c_{12}\\
c_{21}&c_{22}\\
c_{31}&c_{32}
\end{bmatrix}
\begin{bmatrix}
v_1 \\
v_2
\end{bmatrix}
=
\begin{bmatrix}
y_1 \\
y_2 \\
y_3 \\
\end{bmatrix}$$
which makes it clear that the matrix of entries $c_{ij}$, which we can call $C$, represents the composition of $A$ and $B$; it's called their **matrix product**. Leaving the vectors out, we write:
$$\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14} \\
a_{21} & a_{22} & a_{23} & a_{24} \\
a_{31} & a_{32} & a_{33} & a_{34}
\end{bmatrix}
\begin{bmatrix}
b_{11} & b_{12} \\
b_{21} & b_{22} \\
b_{31} & b_{32} \\
b_{41} & b_{42}
\end{bmatrix}
=
\begin{bmatrix}
c_{11}&c_{12}\\
c_{21}&c_{22}\\
c_{31}&c_{32}
\end{bmatrix}$$

The product "$AB$" of two matrices is the matrix of the composition of the corresponding linear transformations $A \circ B$.

There are libraries that can do matrix multiplication for you, but it's also quick to implement from scratch in F#. Modeling matrices as arrays of arrays of floats (each inner array is one row of the matrix), you can code matrix multiplication as follows.

```
type Matrix = float array array

let matrixMultiply (a:Matrix) (b:Matrix) =
    [| for aRow in a ->
        [| for bCol in Array.transpose b ->
            Array.map2 (*) aRow bCol |> Array.sum |] |]
```
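As a quick check, here's `matrixMultiply` applied to a small product you can verify by hand (a 2-by-2 matrix times the column-swapping matrix):

```
let a    = [| [|1.0; 2.0|]; [|3.0; 4.0|] |]
let swap = [| [|0.0; 1.0|]; [|1.0; 0.0|] |]
matrixMultiply a swap // [|[|2.0; 1.0|]; [|4.0; 3.0|]|]
```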

Fixing some number of dimensions $n$, linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^n$ are represented by $n$-by-$n$ matrices. They can be composed in any order you want, since the domains and codomains are all $\mathbb{R}^n$. For any $\mathbb{R}^n$, there is an **identity transformation** represented by the $n$-by-$n$ **identity matrix**, which consists of zeroes everywhere except for $n$ ones on the diagonal. For instance, in $\mathbb{R}^3$ it is
$$I = \begin{bmatrix}
1&0&0\\0&1&0\\0&0&1
\end{bmatrix}$$
You can check that multiplying $I$ by any matrix $A$ leaves that matrix unchanged: $IA = AI = A$. For some matrices $A$ (but not all!) there is an **inverse matrix** $A^{-1}$ representing the linear transformation which is the inverse of the one represented by $A$. In that case, the matrix product of $A$ and $A^{-1}$ is $I$.
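Here's a numerical check with `matrixMultiply` from above, using a hand-picked invertible matrix and its inverse (F#'s `=` compares arrays structurally):

```
let i2   = [| [|1.0; 0.0|]; [|0.0; 1.0|] |]
let m    = [| [|2.0; 0.0|]; [|0.0; 4.0|] |]
let mInv = [| [|0.5; 0.0|]; [|0.0; 0.25|] |]
matrixMultiply m mInv = i2 // true
```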

As one last comment about vectors and matrices, it's possible to use other objects besides real numbers as entries. For instance, you can use complex numbers, integers, or even integers with modular arithmetic. We can write a revised, generic version of `matrixMultiply` where you specify your own type of entry and supply your own multiplication and addition operations.

```
type Matrix<'T> = 'T array array

let matrixMultiply<'T> (add:'T->'T->'T) (mult:'T->'T->'T) (a:'T Matrix) (b:'T Matrix) =
    [| for aRow in a ->
        [| for bCol in Array.transpose b ->
            Array.map2 mult aRow bCol |> Array.reduce add |] |]
```

Mathematicians have their own notation for generics. The set of $n$-by-$n$ matrices with entries coming from a set $R$ is denoted $M_n(R)$. When $R = \mathbb{R}$, the real numbers, we have $M_n(\mathbb{R})$, the $n$-by-$n$ real matrices with ordinary addition and multiplication. If instead $R = \mathbb{Z}$, the set of integers, the set $M_n(\mathbb{Z})$ is the subset of these matrices with *integer* entries.

If we take entries from $\mathbb{F}_5$, which is the set $\{0,1,2,3,4\}$ endowed with mod 5 arithmetic, we have a new set of matrices $M_n(\mathbb{F}_5)$. With the operations `(plus 5)` and `(times 5)` from above defining mod 5 addition and multiplication, we can multiply matrices from $M_n(\mathbb{F}_5)$. For instance, we can see that the following pair of matrices are each other's inverses.
$$\begin{bmatrix}1&2\\3&4\end{bmatrix},\quad \begin{bmatrix}3&1\\4&2\end{bmatrix}$$

```
let mat1 = [| [|1;2|];
              [|3;4|] |]
let mat2 = [| [|3;1|];
              [|4;2|] |]

// returns the identity matrix [|[|1;0|]; [|0;1|]|]
matrixMultiply (plus 5) (times 5) mat1 mat2
```

These matrices are *not* inverses if we consider them matrices of integers, with ordinary addition and multiplication. In fact, neither matrix would have an inverse at all if we were working in $M_n(\mathbb{Z})$ instead of $M_n(\mathbb{F}_5)$.