Two proofs that complex matrices have eigenvalues

Today I will briefly discuss two proofs that every square matrix T over the complex numbers (or more generally, over an algebraically closed field) has an eigenvalue. Notice that this is equivalent to finding a complex number \lambda such that T - \lambda I has nontrivial kernel. The first proof uses facts about “linear dependence” and the second uses determinants and the characteristic polynomial. The first proof is drawn from Axler’s textbook [1]; the second is the standard proof.

Proof by linear dependence

Let p(x) = a_nx^n + \dots a_1x + a_0 be a polynomial with complex coefficients. If T is a linear map, p(T) = a_nT^n + \dots a_1 T + a_0I. We think of this as “p evaluated at T”.

Exercise: Show (pq)(T) = p(T)q(T).

Proof: Let n = \dim V and pick a nonzero vector v \in V. Consider the sequence of vectors v, Tv, T^2v, \dots T^nv. This is a set of n+1 vectors, so they must be linearly dependent. Thus there exist constants a_0, \dots a_n \in \mathbb{C}, not all zero, such that a_nT^nv + a_{n-1}T^{n-1}v + \dots a_1Tv + a_0v = (a_nT^n + a_{n-1}T^{n-1} + \dots a_1T + a_0I)v = 0. Moreover, not all of a_1, \dots a_n are zero: otherwise the relation reads a_0v = 0, and since v \neq 0 this forces a_0 = 0 too.

Define p(x) = a_nx^n + \dots a_1x + a_0. By the previous remark, p is a nonconstant polynomial, say of degree m. Then we can factor p(x) = c(x-\lambda_1)\dots (x-\lambda_m) with c \neq 0. By the Exercise, this implies c(T - \lambda_1I)\dots (T-\lambda_mI)v = 0. Since v \neq 0, at least one of the maps T - \lambda_iI has a nontrivial kernel, so T has an eigenvalue. \square
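
For a concrete illustration (my own toy example, not from [1]): take T = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix} and v = (1, 0)^T. Then Tv = (0, 1)^T and T^2v = (-1, 0)^T, so T^2v + v = 0. Here p(x) = x^2 + 1 = (x - i)(x + i), so (T - iI)(T + iI)v = 0, and indeed \pm i are the eigenvalues of T.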

Proof by the characteristic polynomial

Proof: We want to show that there exists some \lambda such that T - \lambda I has nontrivial kernel: in other words, that T - \lambda I is singular. A matrix is singular if and only if its determinant is zero. So, let \chi_T(x) = \det(T - xI); this is a polynomial in x, called the characteristic polynomial of T. Now, \chi_T is a nonconstant polynomial, and every nonconstant polynomial has a complex root, say \lambda. Then \det(T - \lambda I) = 0, so T - \lambda I is singular, and T has an eigenvalue. \square
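
To make this concrete in the 2 \times 2 case: for T = \begin{pmatrix} a & b \\ c & d \end{pmatrix} we get \chi_T(x) = (a - x)(d - x) - bc = x^2 - (a + d)x + (ad - bc), and the eigenvalues of T are exactly the roots of this quadratic.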

Thoughts

To me, the determinant-based proof seems more straightforward, although it requires more machinery. Also, the determinant-based proof is “constructive”, in that we can actually find all the eigenvalues by factoring the characteristic polynomial. On the subject of determinant-based vs determinant-free approaches to linear algebra, see Axler’s article “Down with Determinants!” [3].

There is a similar situation for the problem of showing that the sum (or product) of two algebraic numbers is algebraic. Here there is a non-constructive proof using “linear dependence” (which I attempted to describe in a previous post) and a constructive proof using the characteristic polynomial (which will hopefully be the subject of a future blog post). A further advantage of the determinant-based proof is that it can be used more generally to show that the sum and product of integral elements over a ring are integral. In this more general context, we no longer have linear dependence available.

References

  1. Sheldon Axler. Linear Algebra Done Right. Springer, 2017.
  2. Evan Chen. An Infinitely Large Napkin. Available online.
  3. Sheldon Axler. Down with Determinants! The American Mathematical Monthly, 102(2), 139, 1995. doi:10.2307/2975348. Available online.

Hilbert’s Basis Theorem

Here is a proof of Hilbert’s Basis Theorem (if R is a noetherian ring, then the polynomial ring R[x] is noetherian) that I thought of last night.

Let R be a noetherian ring. Consider an ideal J in R[x]. Let I_i be the ideal in R generated by the leading coefficients of the polynomials of degree i in J. Notice that I_i \subseteq I_{i+1}: if P \in J has degree i, then xP \in J has degree i+1 and the same leading coefficient. Thus we have an ascending chain I_0 \subseteq I_1 \subseteq I_2 \subseteq \dots, which must stabilize, since R is noetherian. Suppose it stabilizes at i = n, so I_n = I_{n+1} = \dots.

Now for each I_i (for i = 0, \dots n) choose a finite set of generators (which we can do since R is noetherian). For each generator, choose a representative polynomial of degree i in J with that leading coefficient. This gives us a finite collection of polynomials: define J_i to be the ideal of R[x] generated by these polynomials.

Let J' = J_0 + J_1 + \dots + J_n. I claim J = J'. Assume for the sake of contradiction that there is a polynomial P of minimal degree (say i) which is in J but not in J'. If i \leq n, then the leading coefficient of P lies in I_i, so some R-linear combination P' of the chosen degree-i representatives lies in J' and has the same degree and leading coefficient as P. Then P - P' is in J but not in J' and has degree smaller than i: contradiction. If i > n, then since I_i = I_n there is similarly an element P' of J_n of degree n with the same leading coefficient as P. Thus P - x^{i-n}P' is in J but not in J' and has degree smaller than i: contradiction.

Thus J = J', so J is generated by the finitely many chosen representative polynomials. Since J was an arbitrary ideal, this proves R[x] is noetherian.
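
As a toy illustration of the construction (my own example): take R = \mathbb{Z} and J = (2, x) \subseteq \mathbb{Z}[x]. The constants in J are exactly the even integers, so I_0 = (2), while for i \geq 1 every integer a occurs as the leading coefficient of a degree-i element (for instance ax^i \in J), so I_1 = I_2 = \dots = (1) and the chain stabilizes at n = 1. Choosing the representative 2 for I_0 and x for I_1 gives J' = (2) + (x) = J, as the proof predicts.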

Algebro-geometric proof of Cayley-Hamilton

Here is a sketch of a proof of the Cayley-Hamilton theorem via classical algebraic geometry.

The set of n x n matrices over an algebraically closed field can be identified with the affine space \mathbb{A}^{n^2}. Let V be the subset of matrices that satisfy their own characteristic polynomial. We will prove that V is in fact all of \mathbb{A}^{n^2}. Since affine space is irreducible, it suffices to show that V is closed and V contains a non-empty open set.

Fix a matrix M. First, observe that the coefficients of the characteristic polynomial \chi_M are polynomials in the entries of M. In particular, the entries of the matrix \chi_M(M) are polynomials in the entries of M, so the condition that a matrix satisfy its own characteristic polynomial amounts to a collection of polynomials in the entries of M vanishing. This establishes that V is closed.

Let U be the set of matrices that have n distinct eigenvalues. A matrix has n distinct eigenvalues if and only if its characteristic polynomial has no repeated roots. This occurs if and only if the discriminant of the characteristic polynomial is nonzero. The discriminant is a polynomial in the coefficients of the characteristic polynomial, and hence a polynomial in the entries of M. Thus the condition that a matrix have n distinct eigenvalues amounts to a polynomial in the entries of M not vanishing. Thus U is open.

Finally, we have to show that U is non-empty and that U \subseteq V. It is non-empty because, for instance, a diagonal matrix with n distinct diagonal entries lies in U. For the containment, it is easy to check that a diagonal matrix satisfies its own characteristic polynomial, and every matrix in U is diagonalizable; the general case then follows from the fact that the determinant, and thus the characteristic polynomial, is basis-invariant.
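
To spell out the diagonal case: if M = \text{diag}(\lambda_1, \dots \lambda_n), then \chi_M(x) = (\lambda_1 - x)\dots(\lambda_n - x), so \chi_M(M) = (\lambda_1 I - M)\dots(\lambda_n I - M). Applied to the standard basis vector e_i, the i-th factor already gives zero (and the factors, being diagonal, commute), so \chi_M(M) = 0.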

I learned this from https://aergus.net/blog/posts/using-zariski-topology-to-prove-the-cayley-hamilton-theorem.html/

On algebraic numbers

A complex number is algebraic if it is the root of some polynomial P(x) with rational coefficients. \sqrt{2} is algebraic (e.g. the polynomial x^2 -2); i is algebraic (e.g. the polynomial x^2 + 1); \pi and e are not. A complex number that is not algebraic is called transcendental.

Is the sum of algebraic numbers always algebraic? What about the product of algebraic numbers? For example, given that \sqrt{2} and \sqrt{3} are algebraic, how do we know \alpha = \sqrt{2} + \sqrt{3} is also algebraic?

We can try to repeatedly square the equation x = \sqrt{2} + \sqrt{3}. This gives us x^2 = 2 + 3 + 2\sqrt{6}. Then isolating the radical, we have x^2 - 5 = 2\sqrt{6}. Squaring again, we get x^4 - 10x^2 + 25 = 24, so \alpha is a root of x^4 - 10x^2 + 1. This is in fact the unique monic polynomial of minimum degree that has \alpha as a root (called the minimal polynomial of \alpha) which shows \alpha is algebraic. But a sum like \sqrt{2} + \sqrt{3} + \sqrt{5} + i would probably have a minimal polynomial of much greater degree, and it would be much more complicated to construct this polynomial and verify the number is algebraic.
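
If you want to experiment with such computations, a computer algebra system can find these polynomials for you. Here is a quick sketch in Python (an aside of my own, assuming the sympy library is installed):

    from sympy import I, sqrt, symbols, minimal_polynomial

    x = symbols('x')
    # Recovers the polynomial found by hand above: x**4 - 10*x**2 + 1
    print(minimal_polynomial(sqrt(2) + sqrt(3), x))
    # The same command handles messier combinations, at the cost of a
    # much higher-degree answer.
    print(minimal_polynomial(sqrt(2) + sqrt(3) + sqrt(5) + I, x))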

It is also increasingly difficult to construct polynomials for numbers like \sqrt{2} + \sqrt[3]{3}. And apparently there also exist algebraic numbers that cannot be expressed in terms of radicals at all. This further complicates the problem.

In fact, the sum and product of algebraic numbers are algebraic – indeed, any number which can be obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. This means that a number like

    \[\frac{\sqrt{2} + i\sqrt[3]{25} - 2 + \sqrt{3}}{15 - 3i + 4e^{5i\pi / 12}}\]

is algebraic – there is some polynomial which has it as a root. But we will prove this result non-constructively; that is, we will prove that such a number must be algebraic, without providing an explicit process to actually obtain the polynomial. To establish this result, we will try to look for a deeper structure in the algebraic numbers, which is elucidated, perhaps surprisingly, using the tools of linear algebra.

Definition: Let S be a set of complex numbers that contains 0. We say S is an abelian group if for all x, y \in S, x + y \in S and x - y \in S. In other words, S is closed under addition and subtraction.

Some examples: \mathbb{Z}, \mathbb{Q}, \mathbb{R}, \mathbb{C}, the Gaussian integers \mathbb{Z}[i] (i.e. the set of expressions a+bi where a and b are integers)

Definition: An abelian group S is a field if it contains 1 and for all x, y \in S, xy \in S, and if y \neq 0, x/y \in S. In other words, S is closed under multiplication and division.

We generally use the letters k, F, E for arbitrary fields. Of the previous examples, only \mathbb{Q}, \mathbb{R}, \mathbb{C} are fields.

Exercise: Show that if k is a field, \mathbb{Q} \subseteq k.

Exercise: Show that the set \{a + b\sqrt{2} \ | \ a, b \in \mathbb{Q} \} is a field. We call this field \mathbb{Q}(\sqrt{2}). (Hint: rationalize the denominator).

Exercise: Describe the smallest field which contains both \sqrt{2} and \sqrt{3} (this is called \mathbb{Q}(\sqrt{2}, \sqrt{3})).

Generally if k is a field and x_1, \dots x_n some complex numbers, we will denote the smallest field that contains all of k and the elements x_1, \dots x_n as k(x_1, \dots x_n).

Definition: Let k be a field. A k-vector space is an abelian group V such that if c \in k, x \in V, then cx \in V. Intuitively, the elements of V are closed under scaling by k. (We also sometimes use the phrase “V is a vector space over k”.)

Examples: \mathbb{Q}(\sqrt{2}, \sqrt{3}) and \mathbb{Q}(\sqrt{2}) are both \mathbb{Q}-vector spaces. In fact, \mathbb{Q}(\sqrt{2}, \sqrt{3}) is also a \mathbb{Q}(\sqrt{2})-vector space.

With this language in place, we can state the main goal of this post as follows:

Theorem: If \alpha_1, \dots \alpha_n are algebraic numbers and x \in \mathbb{Q}(\alpha_1, \dots \alpha_n), then x is algebraic.

Note how this encompasses our previous claim that any number obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. For example, we can deduce the giant fraction above is algebraic by applying the theorem to \mathbb{Q}(\sqrt{2}, i, \sqrt[3]{25}, \sqrt{3}, e^{5i\pi / 12}).

Definition: A field extension of a field k is just a field F that contains k. We will write this as “F \supseteq k is a field extension” (or, for short, F/k).

Exercise: Show that if F \supseteq k is a field extension, then F is a k-vector space.

Consider a field like \mathbb{Q}(\sqrt{2}). Not only is it a field, but it is also a \mathbb{Q}-vector space – its elements can be scaled by rational numbers. We want to adapt concepts from ordinary linear algebra to this setting. We want to think of \mathbb{Q}(\sqrt{2}) as a two-dimensional vector space, with basis 1 and \sqrt{2} – every element can be uniquely written as a \mathbb{Q}-linear combination of 1 and \sqrt{2}. Similarly, we would like to think of \mathbb{Q}(\sqrt{2}, \sqrt{3}) as a \mathbb{Q}-vector space with basis 1, \sqrt{2}, \sqrt{3}, \sqrt{6}. Note that if we regard \mathbb{Q}(\sqrt{2}, \sqrt{3}) as a \mathbb{Q}(\sqrt{2})-vector space, its basis is 1, \sqrt{3}. This is because if an element is written as a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}, where the coefficients are rational, we can rewrite it as (a + b\sqrt{2})1 + (c + d\sqrt{2})\sqrt{3}, now with coefficients in \mathbb{Q}(\sqrt{2}).

On the other hand, consider \mathbb{Q}(\pi). Since \pi is not algebraic, no linear combination of the numbers 1, \pi, \pi^2, \pi^3, \dots with rational coefficients, not all zero, equals 0. This makes them linearly independent – and makes \mathbb{Q}(\pi) an infinite-dimensional vector space.

This finite-versus-infinite dimensionality difference will lie at the crux of our argument. Here is a rough summary of the argument to prove the Theorem:

  1. If we add an algebraic number to a field, we get a finite-dimensional vector space over that field.
  2. By repeatedly adding algebraic numbers to \mathbb{Q}, the resulting field will still be a finite-dimensional \mathbb{Q}-vector space.
  3. Any element of a field which is a finite-dimensional \mathbb{Q}-vector space must be algebraic.

From here on, we will assume that the reader has some familiarity with ideas from linear algebra, such as the notions of linear combination, linear independence, and basis. We will only sketch proofs (maybe I will add details later). We would like to warn the reader that the proof sketches have many gaps and may be difficult to follow.

Definition: A field extension F \supseteq k is finite if there is a finite set of elements e_1, \dots e_n \in F such that every element of F can be represented as c_1e_1 + \dots c_ne_n, where all the c_i \in k. We say the elements e_1, \dots e_n generate (or span) F over k. If, furthermore, every element can be represented uniquely in this form, then we say e_1, \dots e_n is a k-basis (or just basis) for F.

Examples: \mathbb{Q}(\sqrt{2})/\mathbb{Q} and \mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q} are finite extensions, while \mathbb{Q}(\pi)/\mathbb{Q} is not.

Exercise: Show \mathbb{C}/\mathbb{Q} is not a finite extension. (Hint: \mathbb{C} is uncountable)

Proposition 1: Every finite field extension F/k has a basis.

Proof sketch: This is a special case of a famous theorem in linear algebra. Suppose e_1, \dots e_n generate F over k. Start with the last element: if e_n can be represented as a linear combination of e_1, \dots e_{n-1}, remove it from the list; the remaining elements still generate F. Now repeat the process with e_{n-1}, and so on, until no more elements can be removed. Then (it can be checked that) the remaining elements are linearly independent, and (it can be checked that) they form a k-basis for F. \square

For the next theorem, keep in mind the example of the extensions \mathbb{Q}(\sqrt{2})/\mathbb{Q} and \mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}(\sqrt{2}).

Proposition 2: Suppose F \supseteq k and E \supseteq F are finite field extensions. Then E \supseteq k is a finite extension.

Proof: By Proposition 1, both these field extensions have bases. Label the F-basis of E as e_1, \dots e_n, and the k-basis of F as e'_1, \dots e'_m. Then every element of E can be uniquely represented as c_1e_1 + \dots + c_ne_n, for c_i \in F. Furthermore, each c_i can be written uniquely as c_i = a_{i, 1}e'_1 + \dots + a_{i, m}e'_m for a_{i, j} \in k. Plugging these in for the c_i shows that the mn elements of the form e_ie'_j form a k-basis for E. \square

Proposition 3: If \alpha is algebraic, then k(\alpha) \supseteq k is a finite extension.

Proof sketch: First we will show that every element of k(\alpha) is a polynomial in \alpha with coefficients in k. k(\alpha) consists of all numbers that can be obtained by repeatedly adding, subtracting, multiplying and dividing \alpha and elements of k. By performing some algebraic manipulations and combining fractions, we can see that every element can be written in the form p(\alpha)/q(\alpha), where p and q are polynomials with coefficients in k and q(\alpha) \neq 0.

Now I claim that for any polynomial q with coefficients in k where q(\alpha) \neq 0, there exists a polynomial r with coefficients in k such that r(\alpha) = 1/q(\alpha). Let m be the minimal polynomial of \alpha over k (the monic polynomial of least degree with coefficients in k having \alpha as a root). Since m is irreducible and does not divide q (as q(\alpha) \neq 0), q and m are relatively prime as polynomials. So there exist polynomials r and s such that rq + sm = 1. Plugging \alpha into this equation gives us r(\alpha)q(\alpha) + s(\alpha)m(\alpha) = 1. Since m(\alpha) = 0, r(\alpha) = 1/q(\alpha), as desired.
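
For example (my own illustration): take k = \mathbb{Q} and \alpha = \sqrt[3]{2}, with minimal polynomial m(x) = x^3 - 2, and let q(x) = x + 1. Since (x + 1)(x^2 - x + 1) - (x^3 - 2) = 3, we have \frac{1}{3}(x^2 - x + 1)q(x) - \frac{1}{3}m(x) = 1, and therefore 1/(\alpha + 1) = \frac{1}{3}(\alpha^2 - \alpha + 1).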

Thus any element in k(\alpha), written as p(\alpha)/q(\alpha), can be written as p(\alpha)r(\alpha) for an appropriate polynomial r; in other words, every element of k(\alpha) is a polynomial in \alpha with coefficients in k. Now, suppose the minimal polynomial of \alpha is a_nx^n + \dots a_1x + a_0. Then \alpha^n = \frac{1}{a_n}(-a_0 - a_1\alpha - \dots - a_{n-1}\alpha^{n-1}). So \alpha^n (and, inductively, all higher powers of \alpha) can be written as k-linear combinations of 1, \alpha, \dots \alpha^{n-1}. Then it can be verified that every element in k(\alpha) can be written uniquely as a k-linear combination of 1, \alpha, \dots \alpha^{n-1}. This establishes that k(\alpha)/k is a finite extension; in particular, it has the basis 1, \alpha, \dots \alpha^{n-1}. \square

Proposition 4: Suppose k \supseteq \mathbb{Q} is a finite extension. Then any x \in k is algebraic.

Proof sketch: Say k has dimension n as a \mathbb{Q}-vector space, and consider the n+1 elements 1, x, x^2, \dots x^n. They cannot be linearly independent, because in a finite-dimensional vector space of dimension n any set with at least n+1 elements must be linearly dependent. So some linear combination of them, with rational coefficients not all zero, equals zero: a_nx^n + \dots a_1x + a_0 = 0. Letting the a_i be the coefficients of a (nonzero) polynomial P, we have P(x) = 0, so x is algebraic. \square

We have developed enough theory now to prove the result.

Proof of main Theorem: Let \alpha_1, \alpha_2, \dots \alpha_n be some collection of algebraic numbers. Then by Proposition 3, each step in the chain \mathbb{Q} \subseteq \mathbb{Q}(\alpha_1) \subseteq \mathbb{Q}(\alpha_1, \alpha_2) \subseteq \dots \subseteq \mathbb{Q}(\alpha_1, \dots \alpha_n) is a finite field extension (at each step we adjoin an algebraic number to a field). Applying Proposition 2 repeatedly gives us that \mathbb{Q}(\alpha_1, \dots \alpha_n) \supseteq \mathbb{Q} is a finite extension. Then by Proposition 4, any element of \mathbb{Q}(\alpha_1, \dots \alpha_n) is algebraic. \square

This theorem essentially says that given a collection of algebraic numbers, by repeatedly adding, subtracting, multiplying and dividing them, we can only obtain algebraic numbers. But what about radicals? For example, we have now established that \sqrt{2} + \sqrt{3} + \sqrt{5} is algebraic. Is \sqrt{\sqrt{2} + \sqrt{3} + \sqrt{5}} algebraic as well?

In fact, we can generalize our result to the following: any number obtained by repeatedly adding, subtracting, multiplying, dividing algebraic numbers, as well as taking m-th roots (for positive integers m) will be algebraic.

This is not too hard now that we have the main theorem. It suffices to show that if x is algebraic, then \sqrt[m]{x} is algebraic. If x is algebraic, then there exist rational constants a_0, \dots a_n, not all zero, such that a_nx^n + \dots a_1x + a_0 = 0. This can be rewritten as a_n(\sqrt[m]{x})^{mn} + \dots a_1(\sqrt[m]{x})^m + a_0 = 0, so \sqrt[m]{x} is a root of a nonzero polynomial with rational coefficients. Thus \sqrt[m]{x} is algebraic.


Acknowledgements: Thanks to Anton Cao and Nagaganesh Jaladanki for reviewing this article.

“Locally of finite type” is a local property

Previously we showed that morphisms locally of finite type are preserved under base change. We can use this to show that every morphism which is locally of finite type satisfies the following property:

(*) Given a morphism of schemes p: X \to Y, the preimage of any affine \text{Spec }A \subset Y can be covered by affines such that the corresponding ring maps are of finite type.

Alternatively, if we define a morphism locally of finite type to be one that satisfies (*), then what we are saying is that such a property can be checked on a cover; we can replace “any affine” with “an affine in a cover of affines”.

Let’s try to prove (*). First, we base change to \text{Spec }A. Since the morphism p^{-1}(\text{Spec }A) \to \text{Spec }A is also locally of finite type, we can cover \text{Spec }A by affines \text{Spec }A_i such that their preimages can be covered by the spectra of finitely-generated A_i-algebras B_{ik}. However, we don’t know that these are finitely-generated A-algebras! To fix this, we base change to even smaller affines. Cover each \text{Spec }A_i by basic open sets \text{Spec }A[f^{-1}] of \text{Spec }A. This gives us a cover of each \text{Spec }B_{ik} by open sets of the form \text{Spec }(A[f^{-1}] \otimes_{A_i} B_{ik}). Since A_i \to B_{ik} is of finite type, A[f^{-1}] \to A[f^{-1}] \otimes_{A_i} B_{ik} is of finite type. Since A \to A[f^{-1}] is clearly of finite type, A \to A[f^{-1}] \otimes_{A_i} B_{ik} is of finite type, giving us the desired cover of p^{-1}(\text{Spec }A). The following diagram may be illustrative (every square is a pullback).
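
As a side remark on the two “finite type” facts used here: the map A \to A[f^{-1}] is of finite type because of the presentation

    \[A[f^{-1}] \cong A[y]/(fy - 1),\]

which exhibits A[f^{-1}] as an A-algebra generated by a single element; and the composite A \to A[f^{-1}] \to A[f^{-1}] \otimes_{A_i} B_{ik} is of finite type because a composition of finite type ring maps is of finite type (a generating set for the composite is obtained by combining generating sets for the two maps).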

Group schemes and graded rings

In this post we will describe how an action of the multiplicative group scheme \mathbb{G}_m on \text{Spec }R defines a \mathbb{Z}-grading of R. A future post may describe how this relates to projective schemes. (I will do all of this using diagrams, but there may be some easier way using the functor of points.) All this was taught to me by Mark Haiman in Math 256B (Algebraic Geometry) at UC Berkeley.

Fix a field k; we will work in the category of k-schemes. Thus R will be a k-algebra, and we will establish a graded k-algebra structure on R. However, none of our arguments change if we just let k be \mathbb{Z}. A group scheme is a group object in the category of k-schemes. A precise definition can be found here. Most importantly, group schemes can act on other schemes. The definition of a group scheme action can be found here. Note that all definitions are given by diagrams (or functor of points). For example, we specify the “identity element” of a group scheme by a map \text{Spec }k \to G, rather than selecting some point in the underlying topological space.

\mathbb{G}_m is defined as \text{Spec }k[s, t]/(st-1) = \text{Spec }k[t, t^{-1}]. (for shorthand, we will write k[t, t^{-1}] as k[t^\pm]). As a variety, it can be thought of as k^*, the “punctured affine line”. Its group operation is given by a map \mathbb{G}_m \times_k \mathbb{G}_m \to \mathbb{G}_m which corresponds to the k-algebra map \mu: k[t^\pm] \to k[t^\pm] \otimes_k k[t^\pm] \cong k[t^\pm, u^\pm] defined by t \mapsto tu. The identity is given by a map \text{Spec }k \to \mathbb{G}_m corresponding to i: k[t^\pm] \to k defined by t \mapsto 1.
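
As a quick sanity check (an aside of my own): applying \mu twice in either order sends t to the product of three independent units, t \mapsto tu \mapsto tuv in k[t^\pm, u^\pm, v^\pm]; this is the diagrammatic expression of the fact that multiplication of units is associative.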

Suppose \mathbb{G}_m acts on \text{Spec }R. The action map \mathbb{G}_m \times_k \text{Spec }R \to \text{Spec }R corresponds to a k-algebra map \phi: R \to R \otimes_k k[t^\pm] \cong R[t^\pm] such that the following diagrams commute:

Associativity:

\xymatrix{R\ar[r]^{\phi}\ar[d]^{\phi} & {R[t^\pm]} \ar[d]^{id_R\otimes \mu}\\{R[u^\pm]} \ar[r]^{\phi \otimes id_{k[u^\pm]}} & {R[t^\pm, u^\pm]}}

Identity:

\xymatrix{R\ar[r]^{\phi}\ar[d]^{id_R} & {R[t^\pm]} \ar[dl]^{i}\\ R}

For r \in R, write \phi(r) = \sum_{-\infty}^{\infty} r_it^i \in R[t^\pm], where almost all the r_i are zero. Then the first diagram implies that

(*) writing \phi(r) = \sum_i r_it^i, we have \phi(r_i) = r_it^i for each i.

This is because, along the top and right arrows, we have r \mapsto \sum_i r_it^i \mapsto \sum_i r_it^iu^i, while along the left and bottom arrows we have r \mapsto \sum_i r_iu^i \mapsto \sum_i \phi(r_i)u^i; comparing coefficients of u^i gives \phi(r_i) = r_it^i. Furthermore, the second diagram says that

(**) for all r, \sum r_i = r.

Therefore, letting R[t^\pm]_d stand for the degree d homogeneous component of R[t^\pm] (so that it consists of R-multiples of t^d), let R_d := \phi^{-1}(R[t^\pm]_d). By (*), each r_i lies in R_i, and by (**), r = \sum_i r_i; so the R_i together span R. Furthermore, the sum is direct: note that if s \in R_d, then combining \phi(s) \in R[t^\pm]_d with (**) gives \phi(s) = st^d; so if \sum_i s_i = 0 with s_i \in R_i, applying \phi yields \sum_i s_it^i = 0, and hence every s_i = 0. Thus R = \bigoplus_i R_i.

It remains to show that R_mR_n \subseteq R_{m+n}. But this is easy: if r_m \in R_m and r_n \in R_n, then (using that \phi(r_m) = r_mt^m and \phi(r_n) = r_nt^n) we get \phi(r_mr_n) = \phi(r_m)\phi(r_n) = r_mr_nt^{m+n} \in R[t^\pm]_{m+n}, so r_mr_n \in R_{m+n} as desired.
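
The basic example to keep in mind (not spelled out above, but standard): let R = k[x, y] and let \mathbb{G}_m act on \mathbb{A}^2 = \text{Spec }R by scaling, so that \phi(x) = xt and \phi(y) = yt. Then \phi(x^py^q) = x^py^qt^{p+q}, so R_d = \phi^{-1}(R[t^\pm]_d) consists exactly of the homogeneous polynomials of total degree d, and we recover the usual grading. Choosing instead \phi(x) = xt^a and \phi(y) = yt^b gives a weighted grading.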

Odd and even permutations

A standard part of the theory of permutations is the classification of permutations into “odd” and “even” types. In this post I will develop the theory of odd and even permutations, focusing on adjacent transpositions.

Let X be a finite ordered set. A permutation of X is a rearrangement of X; formally, it is a bijection \sigma: X \to X. For simplicity, we will label the elements of X as 1, 2, \dots n, and we will represent a permutation \sigma by the list \sigma(1)\sigma(2)\dots\sigma(n). Thus if X has five elements, then the permutation that reverses X can be written as 54321.

The set of permutations of X naturally forms a group. That is, given two permutations \sigma_1 and \sigma_2, we can form their product \sigma_2 \cdot \sigma_1 (or \sigma_2\sigma_1 for short), which is defined by performing \sigma_1 first and then \sigma_2, following the convention for composition of functions. For example, the product of the permutations 13425 \cdot 21345 is 31425.

Here are some standard permutations. The identity permutation is the permutation that does nothing. A transposition is a permutation that swaps two elements. If a transposition swaps a and b, we will denote it as (a \ b). An adjacent transposition is a transposition that swaps two adjacent elements (so it is denoted (a \ a+1)).

An important property of a permutation is its inversion number. An inversion of a permutation \sigma is a pair i, j \in \{1, 2, \dots n\} where i < j such that \sigma(i) > \sigma(j). In other words, it is a pair of elements whose relative positions have been switched by the permutation. The inversion number of \sigma is the size of its set of inversions. Thus the inversion number is at most {n \choose 2} (this occurs when the permutation reverses X). A permutation has inversion number 0 if and only if it is the identity permutation. Also, a permutation has inversion number 1 if and only if it is an adjacent transposition.
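
As a quick computational aside (a sketch of my own in Python, using the list representation \sigma(1)\sigma(2)\dots\sigma(n) from above):

    # Represent a permutation of {1, ..., n} by the list [sigma(1), ..., sigma(n)].
    def inversions(perm):
        """Count the pairs of positions i < j with perm[i] > perm[j]."""
        n = len(perm)
        return sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])

    def compose(s2, s1):
        """The product s2 * s1: perform s1 first, then s2."""
        return [s2[s1[i] - 1] for i in range(len(s1))]

    print(inversions([5, 4, 3, 2, 1]))   # 10, which is 5 choose 2
    print(inversions([2, 1, 3, 4, 5]))   # 1
    print(inversions([1, 3, 4, 2, 5]))   # 2
    print(inversions(compose([1, 3, 4, 2, 5], [2, 1, 3, 4, 5])))   # 3, the inversion number of 31425

Note that 3 has the same parity as 1 + 2, the sum of the inversion numbers of the two factors; this is the phenomenon made precise below.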

A permutation is called odd if its inversion number is odd, and even if its inversion number is even. We would like to show that the product of odd and even permutations behaves like addition of odd and even numbers. To show this, we will use the following lemma:

Every permutation is a product of adjacent transpositions.

This can be proved by using insertion sort, an algorithm which effectively writes any permutation as a product of adjacent transpositions.

Here is how it works. Consider, for example, the permutation \sigma given by 23154. Let us try to sort this list. Insertion sort first moves 1 to the front by repeated swaps of adjacent entries, first swapping the second and third positions and then the first and second positions. Then we get 12354. Then insertion sort would swap the fourth and fifth positions, and finish with 12345. Now, swapping the entries in positions i and i+1 of the list for a permutation \tau produces the list for \tau \cdot (i \ i+1). So the sorting process tells us that \sigma(2 \ 3)(1 \ 2)(4 \ 5) is the identity, and hence \sigma = (4 \ 5)(1 \ 2)(2 \ 3). Using this process, insertion sort can write any permutation as a product of adjacent transpositions.

We need one more lemma:

If \sigma is a permutation and \rho an adjacent transposition, then \sigma and \rho\sigma have opposite parity. That is, one is odd and one is even.

Here is why. Performing \sigma and then swapping the two values a and a+1 (wherever they appear in the list) produces \rho\sigma. The only possible change to the set of inversions is the pair of positions holding a and a+1, since a and a+1 haven’t moved relative to any other element. So, the only possible change is that this pair was an inversion and no longer is, or it wasn’t an inversion and now is. So the inversion number has either been increased or decreased by one – so the parity has been switched.

So, if a permutation \sigma is written as a product of n adjacent transpositions, then its parity is equal to the parity of n. This establishes that the product of odd and even permutations behaves like addition of odd and even numbers (formally, what we are saying is that “parity” defines a homomorphism from the permutation group to \mathbb{Z}/2\mathbb{Z}).

Another interesting fact is that every transposition is odd. We can see this (without needing to count inversions) by noticing that we can obtain any transposition (a \ b) (with a < b) by repeatedly swapping adjacent elements till the element in place a is moved to place b, and then repeatedly swapping the other direction till the element originally in place b is moved to place a. If we write this out we see that (a \ b) = (a \ a+1)(a+1 \ a+2) \dots (b-2 \ b-1)(b-1 \ b)(b-2 \ b-1) \dots (a \ a+1), which has an odd number of terms.
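
For instance, (1 \ 3) = (1 \ 2)(2 \ 3)(1 \ 2): reading right to left, 1 \mapsto 2 \mapsto 3 \mapsto 3, while 3 \mapsto 3 \mapsto 2 \mapsto 1 and 2 \mapsto 1 \mapsto 1 \mapsto 2.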

Thus, although a permutation can be written as a product of transpositions in many ways, the number of transpositions used is either always odd or always even. Since the identity permutation is clearly even, we have the following remarkable fact: given any list, it is impossible to return to the original list after performing an odd number of swaps.

Acknowledgements: Thanks to Anton Cao for suggestions. Thanks to Jessica Meng for reminding me the correct convention for composition order.