# Trace is the derivative of determinant

A question I always had when learning linear algebra is, “what does the trace of a matrix mean conceptually?” For example, the determinant of a matrix is, roughly speaking, the factor by which the matrix expands the volume. The conceptual meaning of trace is not as straightforward, but one way to think about it is

trace is the derivative of determinant at the identity.

Roughly you can think of this in the following way. If you start at the identity matrix $I$ and move a tiny step in the direction of a matrix $A$, say to $I + \epsilon A$ where $\epsilon$ is a tiny number, then the determinant changes approximately by $\epsilon$ times $\operatorname{tr}(A)$. In other words, $\det(I + \epsilon A) \approx 1 + \epsilon \operatorname{tr}(A)$. Here $I$ stands for the identity matrix.
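
As a quick sanity check of this slogan, here is a small numerical experiment (a sketch using NumPy; the particular matrix and step size are arbitrary choices of mine):

```python
import numpy as np

# det(I + eps*A) should be approximately 1 + eps*trace(A) for small eps.
A = np.array([[2.0, -1.0, 0.5],
              [0.0,  3.0, 1.0],
              [4.0, -2.0, 1.5]])
eps = 1e-6

lhs = np.linalg.det(np.eye(3) + eps * A)
rhs = 1 + eps * np.trace(A)
print(lhs, rhs)          # both approximately 1.0000065
print(abs(lhs - rhs))    # the error is of order eps**2
```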

One can be very precise about what it means to take the “derivative” of the determinant, so let me do some setup. Let $K$ be either $\mathbb{R}$ or $\mathbb{C}$ (so we are working with real or complex Lie groups; but of course, everything makes sense for algebraic groups over arbitrary fields). Then there is a morphism of Lie groups called the determinant, $\det : GL_n(K) \to K^\times$, given by sending a matrix to its determinant. Since we are restricting to invertible matrices, the determinant is nonzero. To check that this is really a morphism of Lie groups (i.e. both a smooth map and a homomorphism of groups), note that the determinant is a polynomial map in the entries of the matrix (and therefore smooth) and is a group homomorphism by the property that $\det(AB) = \det(A)\det(B)$.

Now, given any smooth map of manifolds $f : M \to N$ which maps a point $p \in M$ to $f(p) \in N$, there is an induced linear map from the tangent space of $M$ at $p$ to the tangent space of $N$ at $f(p)$, called the derivative of $f$ at $p$. In particular, if $f$ is a Lie group homomorphism, then it maps the identity point to the identity point, and the derivative at the identity is furthermore a homomorphism of Lie algebras. What this means is that, in addition to being a linear map, it preserves the bracket pairing.

In the case of $GL_n(K)$, the Lie algebra at the identity matrix is called $\mathfrak{gl}_n(K)$. We can think of it as consisting of all $n \times n$ matrices, and the bracket operation is defined by $[A, B] = AB - BA$. The Lie algebra of $K^\times$ at $1$ consists of the elements of $K$; since $K^\times$ is abelian, the bracket is trivial.

The main claim, which I will prove subsequently, is that this map $\mathfrak{gl}_n(K) \to K$, the derivative of the determinant at the identity, is actually the trace. That is, it sends a matrix to its trace, the sum of the entries on the diagonal. Note that since it is a homomorphism of Lie algebras, it preserves the bracket, and we recover the familiar property of trace $\operatorname{tr}(AB - BA) = 0$, so $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.

We can find the derivative of a smooth map on $GL_n(K)$ directly, since it is an open subset of a vector space. Let $A$ be a matrix; then the derivative of $\det$ at the identity evaluated at $A$ is
$$\left.\frac{d}{dt}\right|_{t=0} \det(I + tA).$$

$\det(I + tA)$ is a polynomial in $t$, and the number we’re looking for is the coefficient of the $t$ term.

We have
$$\det(I + tA) = \det\begin{pmatrix} 1 + ta_{11} & ta_{12} & \cdots & ta_{1n} \\ ta_{21} & 1 + ta_{22} & \cdots & ta_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ ta_{n1} & ta_{n2} & \cdots & 1 + ta_{nn} \end{pmatrix}.$$

Just to get a concrete idea of what this expands to, let’s look at the case $n = 2$. Then
$$\det(I + tA) = \det\begin{pmatrix} 1 + ta_{11} & ta_{12} \\ ta_{21} & 1 + ta_{22} \end{pmatrix} = 1 + t(a_{11} + a_{22}) + t^2(a_{11}a_{22} - a_{12}a_{21}).$$

When $n = 3$,
$$\det(I + tA) = 1 + t(a_{11} + a_{22} + a_{33}) + t^2\bigl(a_{11}a_{22} - a_{12}a_{21} + a_{11}a_{33} - a_{13}a_{31} + a_{22}a_{33} - a_{23}a_{32}\bigr) + t^3\det(A).$$

In particular, the coefficient of $t$ is $a_{11} + a_{22} + \cdots + a_{nn} = \operatorname{tr}(A)$. (In fact, see if you can convince yourself that the coefficient of $t^n$ is $\det(A)$.)
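
Here is the same computation done symbolically (a sketch using SymPy; nothing here is specific to $n = 3$): expand $\det(I + tA)$ for a general $3 \times 3$ matrix and read off the coefficients of $t$ and $t^n$.

```python
import sympy as sp

n = 3
t = sp.symbols('t')
A = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'a{i+1}{j+1}'))

p = sp.Poly(sp.expand((sp.eye(n) + t * A).det()), t)   # det(I + tA) as a polynomial in t

print(sp.simplify(p.coeff_monomial(t) - A.trace()))    # 0: the coefficient of t is tr(A)
print(sp.simplify(p.coeff_monomial(t**n) - A.det()))   # 0: the coefficient of t^n is det(A)
```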

See some discussion of the meaning of trace.

Acknowledgements: Thanks to Ben Wormleighton for originally telling me the slogan “trace is the derivative of determinant”, and for teaching me about Lie groups and Lie algebras.

To add: discussion of Jacobi’s formula, exponential map

# On quotient rings

In this post I will talk about how to compute the product and tensor product of quotient rings $R/I$ and $R/J$. This sort of thing is usually left as an exercise (especially the first Corollary) and not proved in full generality in algebra courses, although it is not hard.

In all that follows, $R$ is a commutative ring with identity and $I$ and $J$ are ideals of $R$.

Lemma: If $I \subseteq J$, there is a natural map $R/I \to R/J$.

Proposition: The natural map $R/I \otimes_R R/J \to R/(I + J)$, sending $\overline{a} \otimes \overline{b} \mapsto \overline{ab}$, is an isomorphism of $R$-algebras.

Proof: To see surjectivity, notice that $1$ generates $R/(I+J)$ as an $R$-module, and $1 \otimes 1 \mapsto 1$, since the map is a ring homomorphism. To see injectivity, notice that every element
$$\sum_k \overline{a_k} \otimes \overline{b_k}$$
is equal to the pure tensor $\overline{c} \otimes \overline{1}$, where $c = \sum_k a_k b_k$. If this element maps to $0$, then $c \in I + J$. So $c = x + y$, with $x \in I$ and $y \in J$. Then $\overline{c} \otimes \overline{1} = \overline{y} \otimes \overline{1} = \overline{1} \otimes \overline{y} = 0$, since $y \in J$.

Corollary: $\mathbb{Z}/m\mathbb{Z} \otimes_{\mathbb{Z}} \mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/\gcd(m,n)\mathbb{Z}$.

Proposition (Chinese Remainder Theorem): The natural map $R/(I \cap J) \to R/I \times R/J$ is injective. If $I + J = R$, it is also surjective, and thus an isomorphism.

Proof: To see injectivity, notice that if $r$ maps to $(0, 0)$, then $r \in I$ and $r \in J$, so $r \in I \cap J$, so $\overline{r} = 0$ in $R/(I \cap J)$. To see surjectivity, note that $I + J = R$ implies there exist $x \in I$, $y \in J$ such that $x + y = 1$. Now take any $(\overline{a}, \overline{b}) \in R/I \times R/J$, and set $r = ay + bx$. Then $r \equiv a \pmod{I}$ (since $ay = a(1 - x)$ and $bx \in I$) and similarly $r \equiv b \pmod{J}$, so $\overline{r} \mapsto (\overline{a}, \overline{b})$.

Corollary: If $\gcd(m, n) = 1$, then $\mathbb{Z}/mn\mathbb{Z} \cong \mathbb{Z}/m\mathbb{Z} \times \mathbb{Z}/n\mathbb{Z}$. In particular, by applying this repeatedly we have $\mathbb{Z}/n\mathbb{Z} \cong \mathbb{Z}/p_1^{e_1}\mathbb{Z} \times \cdots \times \mathbb{Z}/p_k^{e_k}\mathbb{Z}$, where $n = p_1^{e_1}\cdots p_k^{e_k}$ is the prime factorization of $n$.
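
The surjectivity argument above is completely constructive. Here is a small Python sketch of it for $R = \mathbb{Z}$ (the moduli and residues below are arbitrary choices of mine): write $1 = x + y$ with $x \in (m)$ and $y \in (n)$ using the extended Euclidean algorithm, and then $r = ay + bx$ has the prescribed residues.

```python
from math import gcd

def ext_gcd(m, n):
    # Extended Euclidean algorithm: returns (u, v) with u*m + v*n == gcd(m, n).
    if n == 0:
        return 1, 0
    u, v = ext_gcd(n, m % n)
    return v, u - (m // n) * v

def crt_pair(a, m, b, n):
    """Find r with r = a (mod m) and r = b (mod n), assuming gcd(m, n) == 1."""
    assert gcd(m, n) == 1
    u, v = ext_gcd(m, n)
    x, y = u * m, v * n      # x lies in the ideal (m), y lies in (n), and x + y == 1
    r = a * y + b * x        # r = a (mod m) since y = 1 - x, and r = b (mod n) since x = 1 - y
    return r % (m * n)

r = crt_pair(2, 7, 5, 9)
print(r, r % 7, r % 9)       # 23 2 5
```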

Also, note the following fact:

Proposition: If $I + J = R$, then $I \cap J = IJ$.

Proof: Clearly $IJ \subseteq I \cap J$. To go the other way, note that $I + J = R$ means that there exist $x \in I$ and $y \in J$ such that $x + y = 1$. So, consider an element $c \in I \cap J$. Then we have $c = cx + cy$. Since $c \in J$ and $x \in I$, $cx \in IJ$, and since $c \in I$ and $y \in J$, $cy \in IJ$. So $c \in IJ$.

Remark: One can also think of this in terms of Tor: $\operatorname{Tor}_1^R(R/I, R/J) \cong (I \cap J)/IJ$, and when $I + J = R$ this Tor group vanishes.

# Determinant of transpose

An important fact in linear algebra is that, given a square matrix $A = (a_{ij})$, $\det(A) = \det(A^T)$, where $A^T$ is the transpose of $A$. Here I will prove this statement via explicit computation, and I will try to do this as cleanly as possible. We may define the determinant of an $n \times n$ matrix $A$ by
$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{\sigma(i)\,i}.$$

Here $S_n$ is the set of permutations of the set $\{1, 2, \ldots, n\}$, and $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$. This formula is derived from the definition of the determinant via exterior algebra. One can check by hand that this gives the familiar expressions for the determinant when $n = 2$ or $3$.

Now, since $(A^T)_{ij} = a_{ji}$, we have
$$\det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n (A^T)_{\sigma(i)\,i} = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^n a_{i\,\sigma(i)}.$$

The crucial observation here is that we may rearrange the product inside the summation so that the second indices are increasing. Let $\tau = \sigma^{-1}$. Then the product inside the summation is
$$\prod_{i=1}^n a_{i\,\sigma(i)} = \prod_{j=1}^n a_{\tau(j)\,j}.$$

Combining this with the fact that $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\tau)$, our expression simplifies to
$$\det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\tau) \prod_{j=1}^n a_{\tau(j)\,j}.$$

Noticing that this is the same sum if we replace all the $\sigma$s with $\tau$s (as $\sigma$ ranges over all of $S_n$, so does $\tau = \sigma^{-1}$), we see that this equals $\sum_{\tau \in S_n} \operatorname{sgn}(\tau) \prod_{j=1}^n a_{\tau(j)\,j} = \det(A)$. So $\det(A^T) = \det(A)$.
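
As a sanity check, here is a short Python sketch that implements the permutation-sum formula directly (with the sign computed by counting inversions) and confirms numerically that it gives the same value for a matrix and its transpose.

```python
import itertools
import random
from math import prod

def sign(perm):
    # (-1) raised to the number of inversions of the permutation.
    inv = sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
              if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det(A):
    # Sum over all permutations sigma of sgn(sigma) * prod_i A[sigma(i)][i].
    n = len(A)
    return sum(sign(p) * prod(A[p[i]][i] for i in range(n))
               for p in itertools.permutations(range(n)))

n = 4
A = [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]
At = [[A[i][j] for i in range(n)] for j in range(n)]   # the transpose of A
print(det(A), det(At))                                  # the two values agree
```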

I wonder if there is a more conceptual proof of this? (By “conceptual”, I mean a proof based on exterior algebra, bilinear pairings, etc…)

# A lemma to identify dvrs

In this post I will prove a commutative algebra lemma, which is proved on page 7 of Serre’s Local Fields. This lemma is useful if you find a ring in the wild and want to know that it’s a discrete valuation ring.

Proposition: Let $A$ be a noetherian local domain with maximal ideal $\mathfrak{m} = (\pi)$, where $\pi$ is not nilpotent. Then $\bigcap_{n \geq 0} \mathfrak{m}^n = 0$.

Proof: Suppose $x \in \bigcap_{n \geq 0} \mathfrak{m}^n$. Then we can write $x = a_n \pi^n$ for all $n$. So $a_n \pi^n = a_{n+1}\pi^{n+1}$ for all $n$. Since $A$ is a domain (and $\pi \neq 0$), $a_n = a_{n+1}\pi$.

Therefore consider the ascending chain $(a_1) \subseteq (a_2) \subseteq (a_3) \subseteq \cdots$. This eventually stabilizes for high enough $n$ since $A$ is noetherian, so for some $n$, $a_{n+1} = c\,a_n$ for some $c \in A$. Thus $a_{n+1} = c\,a_{n+1}\pi$, so $a_{n+1}(1 - c\pi) = 0$. But $1 - c\pi$ is a unit (since $\pi$ lies in the maximal ideal of the local ring $A$), so $a_{n+1} = 0$, so $x = a_{n+1}\pi^{n+1} = 0$.

This statement holds more generally even if $A$ is not assumed to be a domain, but the proof is more complicated (though still along the same lines).

Proposition: Let $A$ be a noetherian local ring with maximal ideal $\mathfrak{m} = (\pi)$, where $\pi$ is not nilpotent. Then $\bigcap_{n \geq 0} \mathfrak{m}^n = 0$.

Proof: Let $N$ be the ideal of elements that kill some power of $\pi$. We will use the variable $z$ (with subscripts) to refer to elements of $N$. Since $A$ is noetherian, $N$ must be finitely generated, so all elements of $N$ kill $\pi^k$ for some fixed $k$.

Now suppose $x \in \bigcap_{n \geq 0} \mathfrak{m}^n$, say $x = a_n\pi^n$ for each $n$. Then $a_n\pi^n = a_{n+1}\pi^{n+1}$, so $(a_n - a_{n+1}\pi)\pi^n = 0$. Thus $a_n = a_{n+1}\pi + z_n$ for some $z_n \in N$.

Consider the ascending chain $(a_1, N) \subseteq (a_2, N) \subseteq (a_3, N) \subseteq \cdots$. Since $A$ is noetherian it must eventually stabilize, so for some $n$, $a_{n+1}$ can be written as $a_{n+1} = c\,a_n + z$ with $z \in N$. But recall that $a_n = a_{n+1}\pi + z_n$. So $a_{n+1} = c\,a_{n+1}\pi + c z_n + z$, so $a_{n+1}(1 - c\pi) = c z_n + z \in N$. $1 - c\pi$ is a unit since $\pi \in \mathfrak{m}$ and $A$ is local, so $a_{n+1} \in N$. If we force $n$ to be large enough to surpass $k$, then $a_{n+1}\pi^{n+1} = 0$, so $x = 0$.

# More on algebraic numbers

A complex number is algebraic if it is the root of some polynomial with rational coefficients. $\sqrt{2}$ is algebraic (e.g. it is a root of the polynomial $x^2 - 2$); $i$ is algebraic (e.g. the polynomial $x^2 + 1$); $\pi$ and $e$ are not. (A complex number that is not algebraic is called transcendental.)

Previously, I wrote some blog posts (see here and here) which sketched a proof of the fact that the sum and product of algebraic numbers is also algebraic (and more). This is not an obvious fact, and proving it requires some amount of field theory and linear algebra. Nevertheless, the ideas in the proof lead the way to a better understanding of the structure of the algebraic numbers and towards the theorems of Galois theory. In those posts, I tried to introduce the minimum algebraic machinery necessary in order to state and prove the main result; I don’t think I entirely succeeded.

However, there is a more direct approach, one which also allows us to find a polynomial that has $\alpha + \beta$ (or $\alpha\beta$) as a root, for algebraic numbers $\alpha$ and $\beta$. That is the subject of this post. Instead of trying to formally prove the result, I will illustrate the approach for a specific example: showing $\sqrt{2} + \sqrt{3}$ is algebraic.

This post will assume familiarity with the characteristic polynomial of a matrix, and not much more. (In particular, none of the algebra from the previous posts)

## A case study

Define the set $F = \{a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6} : a, b, c, d \in \mathbb{Q}\}$. We will think of this as a four-dimensional vector space, where the scalars are elements of $\mathbb{Q}$, and the basis is $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$. Every element can be uniquely expressed as $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}$, for $a, b, c, d \in \mathbb{Q}$.

We’re trying to prove $\sqrt{2} + \sqrt{3}$ is algebraic. Consider the linear transformation $T$ on $F$ defined as “multiply by $\sqrt{2} + \sqrt{3}$“. In other words, consider the linear map $T$ which maps $v \mapsto (\sqrt{2} + \sqrt{3})\,v$. This is definitely a linear map, since it satisfies $T(v + w) = T(v) + T(w)$ and $T(cv) = cT(v)$ for rational $c$. In particular, we should be able to represent it by a matrix.

What is the matrix of $T$? Well, $T(1) = \sqrt{2} + \sqrt{3}$, $T(\sqrt{2}) = 2 + \sqrt{6}$, $T(\sqrt{3}) = 3 + \sqrt{6}$, and $T(\sqrt{6}) = 3\sqrt{2} + 2\sqrt{3}$. Thus we can represent $T$ by the matrix

$$M = \begin{pmatrix} 0 & 2 & 3 & 0 \\ 1 & 0 & 0 & 3 \\ 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 0 \end{pmatrix},$$
whose columns give the coordinates of $T(1)$, $T(\sqrt{2})$, $T(\sqrt{3})$, $T(\sqrt{6})$ in the basis $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$.

Now, the characteristic polynomial of this matrix, which is defined as $\det(xI - M)$, is $x^4 - 10x^2 + 1$, which has $\sqrt{2} + \sqrt{3}$ as a root. Thus $\sqrt{2} + \sqrt{3}$ is indeed algebraic.
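
Here is the computation carried out in SymPy (a sketch; the matrix is exactly the one above, written with respect to the basis $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$):

```python
import sympy as sp

x = sp.symbols('x')

# Columns give the coordinates of (sqrt(2)+sqrt(3)) * v for each basis vector v
# in the basis (1, sqrt(2), sqrt(3), sqrt(6)).
M = sp.Matrix([
    [0, 2, 3, 0],
    [1, 0, 0, 3],
    [1, 0, 0, 2],
    [0, 1, 1, 0],
])

p = M.charpoly(x).as_expr()
print(p)                                                  # x**4 - 10*x**2 + 1
print(sp.expand(p.subs(x, sp.sqrt(2) + sp.sqrt(3))))      # 0, so sqrt(2)+sqrt(3) is a root
```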

## Why it works

The basic reason is the Cayley-Hamilton theorem. It tells us that the matrix $M$ should satisfy its characteristic polynomial $p$: that is, $p(M)$ is the zero matrix. But the matrix we get when plugging $M$ into $p$ should correspond to multiplication by $p(\sqrt{2} + \sqrt{3})$; thus $p(\sqrt{2} + \sqrt{3}) = 0$.

Note that I chose $\sqrt{2} + \sqrt{3}$ randomly. I could have chosen any element of $F$ and used this method to find a polynomial with rational coefficients having that element as a root.

At the end of the day, to prove that such a method always works requires the field theory we have glossed over: what the analogue of $F$ is for general algebraic numbers, why it is finite-dimensional, etc. This constructive method, which assumes the Cayley-Hamilton theorem, only replaces the non-constructive “linear dependence” argument in Proposition 4 of the original post.

# Two proofs complex matrices have eigenvalues

Today I will briefly discuss two proofs that every square matrix over the complex numbers (or more generally, over an algebraically closed field) has an eigenvalue. Notice that this is equivalent to finding a complex number $\lambda$ such that $A - \lambda I$ has nontrivial kernel. The first proof uses facts about “linear dependence” and the second uses determinants and the characteristic polynomial. The first proof is drawn from Axler’s textbook [1]; the second is the standard proof.

## Proof by linear dependence

Let $p(x) = a_0 + a_1x + \cdots + a_mx^m$ be a polynomial with complex coefficients. If $T$ is a linear map, we write $p(T) = a_0I + a_1T + \cdots + a_mT^m$. We think of this as “$p$ evaluated at $T$”.

Exercise: Show that if $p(x) = q(x)r(x)$, then $p(T) = q(T)r(T)$.

Proof: Pick any nonzero vector $v \in \mathbb{C}^n$. Consider the sequence of vectors $v, Tv, T^2v, \ldots, T^nv$. This is a set of $n + 1$ vectors in an $n$-dimensional space, so they must be linearly dependent. Thus there exist constants $a_0, a_1, \ldots, a_n$, not all zero, such that $a_0v + a_1Tv + \cdots + a_nT^nv = 0$.

Define $p(x) = a_0 + a_1x + \cdots + a_nx^n$. Then, since every nonzero complex polynomial splits into linear factors, we can factor
$$p(x) = c(x - \lambda_1)(x - \lambda_2)\cdots(x - \lambda_m).$$
By the Exercise, this implies
$$0 = p(T)v = c(T - \lambda_1I)(T - \lambda_2I)\cdots(T - \lambda_mI)v.$$
So, at least one of the maps $T - \lambda_iI$ has a nontrivial kernel, so $T$ has an eigenvalue.
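
This argument is easy to try out numerically. Here is a rough sketch with NumPy (the matrix, the vector, and the use of the SVD to find the linear dependence are all my own choices): build $v, Tv, \ldots, T^nv$, extract a dependence relation, and check that one of the roots of the resulting polynomial is an eigenvalue of $T$.

```python
import numpy as np

np.random.seed(0)
n = 4
T = np.random.randn(n, n)
v = np.random.randn(n)

# Columns are v, Tv, T^2 v, ..., T^n v: n+1 vectors in an n-dimensional space.
K = np.column_stack([np.linalg.matrix_power(T, k) @ v for k in range(n + 1)])

# A vector a with K @ a = 0 gives the dependence a_0 v + a_1 Tv + ... + a_n T^n v = 0;
# the last right singular vector of K spans (numerically) its null space.
a = np.linalg.svd(K)[2][-1]

roots = np.roots(a[::-1])            # roots of p(x) = a_0 + a_1 x + ... + a_n x^n
eigs = np.linalg.eigvals(T)
print(min(min(abs(eigs - r)) for r in roots))   # essentially 0: some root is an eigenvalue
```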

## Proof by the characteristic polynomial

Proof: We want to show that there exists some $\lambda \in \mathbb{C}$ such that $A - \lambda I$ has nontrivial kernel: in other words, that $A - \lambda I$ is singular. A matrix is singular if and only if its determinant is zero. So, let $p(\lambda) = \det(A - \lambda I)$; this is a polynomial in $\lambda$, called the characteristic polynomial of $A$. Now, every nonconstant polynomial has a complex root, say $\lambda_0$. This implies $\det(A - \lambda_0I) = 0$, so $A$ has an eigenvalue.
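
And here is the determinant-based proof as an algorithm, sketched with NumPy and SymPy (the random matrix is an arbitrary choice of mine): compute the characteristic polynomial, find its complex roots, and compare them with the eigenvalues reported by `numpy.linalg.eigvals`.

```python
import numpy as np
import sympy as sp

np.random.seed(1)
A = np.random.randn(4, 4)

lam = sp.symbols('lambda')
# SymPy's charpoly computes det(lambda*I - A), which has the same roots as det(A - lambda*I).
coeffs = [float(c) for c in sp.Matrix(A).charpoly(lam).all_coeffs()]

roots = np.sort_complex(np.roots(coeffs))       # roots of the characteristic polynomial
eigs = np.sort_complex(np.linalg.eigvals(A))    # eigenvalues computed directly
print(np.max(np.abs(roots - eigs)))             # close to 0
```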

## Thoughts

To me, it seems like the determinant-based proof is more straightforward, although it requires more machinery. Also, the determinant-based proof is “constructive”, in that we can actually find all the eigenvalues by factoring the characteristic polynomial. On the subject of determinant-based vs determinant-free approaches to linear algebra, see Axler’s article “Down With Determinants!” [3].

There is a similar situation for the problem of showing that the sum (or product) of two algebraic numbers is algebraic. Here there is a non-constructive proof using “linear dependence” (which I attempted to describe in a previous post) and a constructive proof using the characteristic polynomial (which will hopefully be the subject of a future blog post). A further advantage of the determinant-based proof is that it can be used more generally to show that the sum and product of integral elements over a ring are integral. In this more general context, we no longer have linear dependence available.

## References

1. Sheldon Axler, Linear algebra done right. Springer 2017
2. Evan Chen, An Infinitely Large Napkin, available online
3. Sheldon Axler. Down with Determinants! The American Mathematical Monthly, 102(2), 139, 1995. doi:10.2307/2975348, available online

# Hilbert’s Basis Theorem

Here is a proof of Hilbert’s Basis Theorem (if $R$ is a noetherian ring, then the polynomial ring $R[x]$ is noetherian) I thought of last night.

Let $R$ be a noetherian ring. Consider an ideal $I$ in $R[x]$. Let $L_n$ be the ideal in $R$ generated by the leading coefficients of the polynomials of degree $n$ in $I$. Notice that $L_n \subseteq L_{n+1}$, since if $f \in I$ has degree $n$, then $xf \in I$ has degree $n+1$, and it has the same leading coefficient. Thus we have an ascending chain $L_0 \subseteq L_1 \subseteq L_2 \subseteq \cdots$, which must terminate, since $R$ is noetherian. Suppose it terminates at $L_N$, so $L_N = L_{N+1} = \cdots$.

Now for each $n \leq N$ choose a finite set of generators of $L_n$ (which we can do since $R$ is noetherian). For each generator, choose a representative polynomial of degree $n$ in $I$ with that leading coefficient. This gives us a finite collection of polynomials: define $J$ to be the ideal of $R[x]$ generated by these polynomials.

Clearly $J \subseteq I$. I claim $I = J$. Assume for the sake of contradiction that there is a polynomial $f$ of minimal degree (say $n$) which is in $I$ but not in $J$. If $n \geq N$, the leading coefficient of $f$ lies in $L_n = L_N$, so there is an element $g$ of $J$ of degree $n$ with the same leading coefficient (take an $R$-linear combination of the chosen representatives, multiplied by suitable powers of $x$); then $f - g$ is not in $J$ but has degree smaller than $n$: contradiction. If $f$ is of degree $n < N$, then likewise there is an element $g$ of $J$ of degree $n$ which has the same leading coefficient as $f$. Thus $f - g$ is of degree smaller than $n$ but is not in $J$: contradiction.

Thus $I = J$. Since $I$ is therefore finitely generated, and $I$ was an arbitrary ideal, this proves $R[x]$ is noetherian.

# Algebro-geometric proof of Cayley-Hamilton

Here is a sketch of a proof of the Cayley-Hamilton theorem via classical algebraic geometry.

The set of $n \times n$ matrices over an algebraically closed field $k$ can be identified with the affine space $\mathbb{A}^{n^2}$. Let $V$ be the subset of matrices that satisfy their own characteristic polynomial. We will prove that $V$ is in fact all of $\mathbb{A}^{n^2}$. Since affine space is irreducible, it suffices to show that $V$ is closed and contains a non-empty open set.

Fix a matrix $A$. First, observe that the coefficients of the characteristic polynomial of $A$ are polynomials in the entries of $A$. In particular, the condition that a matrix satisfy its own characteristic polynomial amounts to a collection of polynomials in the entries of $A$ vanishing (one for each of the $n^2$ entries of the resulting matrix). This establishes that $V$ is closed.

Let $U$ be the set of matrices that have $n$ distinct eigenvalues. A matrix has distinct eigenvalues if and only if its characteristic polynomial has no double roots when it splits. This occurs if and only if the discriminant of the characteristic polynomial is nonzero. The discriminant is a polynomial in the coefficients of the characteristic polynomial, which are in turn polynomials in the entries of the matrix. Thus the condition that a matrix have distinct eigenvalues amounts to a polynomial in the entries of the matrix not vanishing. Thus $U$ is open, and it is non-empty (consider a diagonal matrix with distinct diagonal entries).

Finally, we have to show $U \subseteq V$. It is easy to check this for a diagonal matrix. Since a matrix with distinct eigenvalues is diagonalizable, the general result follows from the fact that the determinant, and thus the characteristic polynomial, is basis-invariant.
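
For a concrete sanity check of the theorem itself (not of the algebro-geometric argument), here is a short SymPy sketch that plugs a matrix into its own characteristic polynomial; the particular matrix is an arbitrary choice of mine.

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix([[1, 2, 0],
               [3, -1, 4],
               [0, 5, 2]])

coeffs = A.charpoly(x).all_coeffs()     # [1, c_2, c_1, c_0], highest degree first
n = A.shape[0]

# Evaluate p(A) = A^3 + c_2*A^2 + c_1*A + c_0*I; Cayley-Hamilton says this is zero.
pA = sp.zeros(n, n)
for k, c in enumerate(coeffs):
    pA += c * A**(len(coeffs) - 1 - k)
print(pA)                               # the 3x3 zero matrix
```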

# On algebraic numbers

A complex number is algebraic if it is the root of some polynomial with rational coefficients. $\sqrt{2}$ is algebraic (e.g. it is a root of the polynomial $x^2 - 2$); $i$ is algebraic (e.g. the polynomial $x^2 + 1$); $\pi$ and $e$ are not. A complex number that is not algebraic is called transcendental.

Is the sum of algebraic numbers always algebraic? What about the product of algebraic numbers? For example, given that $\sqrt{2}$ and $\sqrt{3}$ are algebraic, how do we know $\sqrt{2} + \sqrt{3}$ is also algebraic?

We can try to repeatedly square the equation $x = \sqrt{2} + \sqrt{3}$. This gives us $x^2 = 5 + 2\sqrt{6}$. Then isolating the radical, we have $x^2 - 5 = 2\sqrt{6}$. Squaring again, we get $x^4 - 10x^2 + 25 = 24$, so $\sqrt{2} + \sqrt{3}$ is a root of $x^4 - 10x^2 + 1$. This is in fact the unique monic polynomial of minimum degree that has $\sqrt{2} + \sqrt{3}$ as a root (called the minimal polynomial of $\sqrt{2} + \sqrt{3}$), which shows $\sqrt{2} + \sqrt{3}$ is algebraic. But a sum involving more complicated radicals would probably have a minimal polynomial of much greater degree, and it would be much more complicated to construct this polynomial and verify the number is algebraic.

It is also increasingly difficult to construct polynomials for numbers involving nested radicals. And, apparently, there also exist algebraic numbers that cannot be expressed in terms of radicals at all. This further complicates the problem.

In fact, the sum and product of algebraic numbers is algebraic – indeed, any number which can be obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. This means that even a giant fraction built up out of nested sums, products, and quotients of radicals is algebraic – there is some polynomial which has it as a root. But we will prove this result non-constructively; that is, we will prove that such a number must be algebraic, without providing an explicit process to actually obtain the polynomial. To establish this result, we will try to look for a deeper structure in the algebraic numbers, which is elucidated, perhaps surprisingly, using the tools of linear algebra.

Definition: Let $G$ be a set of complex numbers that contains $0$. We say $G$ is an abelian group if for all $a, b \in G$, $a + b \in G$ and $a - b \in G$. In other words, $G$ is closed under addition and subtraction.

Some examples: $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$, the Gaussian integers (i.e. the set of expressions $a + bi$ where $a$ and $b$ are integers).

Definition: An abelian group $K$ is a field if it contains $1$ and for all $a, b \in K$, $ab \in K$, and if $b \neq 0$, $a/b \in K$. In other words, $K$ is closed under multiplication and division.

We generally use the letters $K$, $L$, $M$ for arbitrary fields. Of the previous examples, only $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$ are fields.

Exercise: Show that if $K$ is a field, then $\mathbb{Q} \subseteq K$.

Exercise: Show that the set $\{a + b\sqrt{2} : a, b \in \mathbb{Q}\}$ is a field. We call this field $\mathbb{Q}(\sqrt{2})$. (Hint: rationalize the denominator.)

Exercise: Describe the smallest field which contains both $\sqrt{2}$ and $\sqrt{3}$ (this is called $\mathbb{Q}(\sqrt{2}, \sqrt{3})$).

Generally, if $K$ is a field and $\alpha_1, \ldots, \alpha_n$ are some complex numbers, we will denote the smallest field that contains all of $K$ and the elements $\alpha_1, \ldots, \alpha_n$ by $K(\alpha_1, \ldots, \alpha_n)$.

Definition: Let $K$ be a field. A $K$-vector space is an abelian group $V$ such that if $c \in K$ and $v \in V$, then $cv \in V$. Intuitively, the elements of $V$ are closed under scaling by $K$. (We also sometimes use the phrase “$V$ is a vector space over $K$“.)

Examples: $\mathbb{R}$ and $\mathbb{C}$ are both $\mathbb{Q}$-vector spaces. In fact, $\mathbb{C}$ is also an $\mathbb{R}$-vector space.

With this language in place, we can state the main goal of this post as follows:

Theorem: If $\alpha_1, \ldots, \alpha_n$ are algebraic numbers and $\beta \in \mathbb{Q}(\alpha_1, \ldots, \alpha_n)$, then $\beta$ is algebraic.

Note how this encompasses our previous claim that any number obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. For example, we can deduce that a giant fraction built out of radicals is algebraic by applying the theorem to the algebraic numbers appearing in it.

Definition: A field extension of a field $K$ is just a field $L$ that contains $K$. We will write this as “$L/K$ is a field extension”.

Exercise: If $L/K$ is a field extension, then $L$ is a $K$-vector space.

Consider a field like $\mathbb{Q}(\sqrt{2})$. Not only is it a field, but it is also a $\mathbb{Q}$-vector space – its elements can be scaled by rational numbers. We want to adapt concepts from ordinary linear algebra to this setting. We want to think of $\mathbb{Q}(\sqrt{2})$ as a two-dimensional vector space, with basis $1$ and $\sqrt{2}$ – every element can be uniquely written as a $\mathbb{Q}$-linear combination of $1$ and $\sqrt{2}$. Similarly, we would like to think of $\mathbb{C}$ as an $\mathbb{R}$-vector space with basis $1, i$. Note that if we regard $\mathbb{Q}(\sqrt{2})$ as a $\mathbb{Q}(\sqrt{2})$-vector space, its basis is just $1$. This is because if an element is written as $a + b\sqrt{2}$, where the coefficients $a, b$ are rational, we can rewrite it as $(a + b\sqrt{2}) \cdot 1$, now with coefficients in $\mathbb{Q}(\sqrt{2})$.

On the other hand, consider $\mathbb{Q}(\pi)$. Since $\pi$ is not algebraic, no linear combination (with rational coefficients) of the numbers $1, \pi, \pi^2, \pi^3, \ldots$ equals $0$ without all the coefficients being zero. This makes them linearly independent – and makes $\mathbb{Q}(\pi)$ an infinite-dimensional $\mathbb{Q}$-vector space.

This finite-versus-infinite dimensionality difference will lie at the crux of our argument. Here is a rough summary of the argument to prove the Theorem:

1. If we add an algebraic number to a field, we get a finite-dimensional vector space over that field.
2. By repeatedly adding algebraic numbers to $\mathbb{Q}$, the resulting field will still be a finite-dimensional $\mathbb{Q}$-vector space.
3. Any element of a field which is a finite-dimensional $\mathbb{Q}$-vector space must be algebraic.

From here on, we will assume that the reader has some familiarity with ideas from linear algebra, such as the notions of linear combination, linear independence, and basis. We will only sketch proofs (maybe I will add details later). We would like to warn the reader that the proof sketches have many gaps and may be difficult to follow.

Definition: A field extension $L/K$ is finite if there is a finite set of elements $v_1, \ldots, v_n \in L$ such that every element of $L$ can be represented as $c_1v_1 + \cdots + c_nv_n$, where all the $c_i \in K$. We say the elements $v_1, \ldots, v_n$ generate (aka span) $L$ over $K$. If, furthermore, every element can be represented uniquely in this form, then we say $v_1, \ldots, v_n$ is a $K$-basis (or just a basis) for $L$.

Examples: $\mathbb{Q}(\sqrt{2})/\mathbb{Q}$ and $\mathbb{C}/\mathbb{R}$ are finite extensions, while $\mathbb{R}/\mathbb{Q}$ is not.

Exercise: Show $\mathbb{R}/\mathbb{Q}$ is not a finite extension. (Hint: $\mathbb{R}$ is uncountable.)

Proposition 1: Every finite field extension has a basis.

Proof sketch: This is a special case of a famous theorem in linear algebra (every finite spanning set contains a basis). Assume $v_1, \ldots, v_n$ generate $L$ over $K$. Start with the last element. If $v_n$ can be represented as a $K$-linear combination of $v_1, \ldots, v_{n-1}$, remove it from the list. If we removed $v_n$, we now have $v_1, \ldots, v_{n-1}$, and we can repeat the process with $v_{n-1}$. Continue this process until we cannot remove any more elements. Then (it can be checked that) we obtain a set whose elements are linearly independent. Then (it can be checked that) these form a $K$-basis for $L$.

For the next theorem, keep in mind the example of the extensions $\mathbb{Q}(\sqrt{2})/\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}(\sqrt{2})$.

Proposition 2: Suppose $L/K$ and $M/L$ are finite field extensions. Then $M/K$ is a finite extension.

Proof: By Proposition 1, both these field extensions have bases. Label the $K$-basis of $L$ as $v_1, \ldots, v_n$, and the $L$-basis for $M$ as $w_1, \ldots, w_m$. Then every element of $M$ can be uniquely represented as $c_1w_1 + \cdots + c_mw_m$, for $c_i \in L$. Furthermore, each $c_i$ can be written uniquely as $c_i = a_{i1}v_1 + \cdots + a_{in}v_n$ for $a_{ij} \in K$. Plugging these in for the $c_i$ shows that the elements of the form $v_jw_i$ form a $K$-basis for $M$.

Proposition 3: If $\alpha$ is algebraic and $K$ is a field, then $K(\alpha)/K$ is a finite extension.

Proof sketch: Suppose $\alpha$ is algebraic. First we will show every element in $K(\alpha)$ is a polynomial in $\alpha$ with coefficients in $K$. $K(\alpha)$ consists of all numbers that can be obtained by repeatedly adding, subtracting, multiplying and dividing $\alpha$ and elements of $K$. By performing some algebraic manipulations and combining fractions, we can see that every element can be written in the form $f(\alpha)/g(\alpha)$, where $f$ and $g$ are polynomials with coefficients in $K$ and $g(\alpha) \neq 0$.

Now I claim that for any polynomial $g$ with $g(\alpha) \neq 0$, there exists a polynomial $h$ such that $1/g(\alpha) = h(\alpha)$. Let $m$ be the minimal polynomial of $\alpha$ over $K$. Since $g(\alpha) \neq 0$ and $m$ is irreducible, $g$ and $m$ are relatively prime as polynomials. So there exist polynomials $u$ and $v$ such that $ug + vm = 1$. Plugging $\alpha$ into the polynomials gives us $u(\alpha)g(\alpha) + v(\alpha)m(\alpha) = 1$. Since $m(\alpha) = 0$, we get $u(\alpha)g(\alpha) = 1$, so $h = u$ works, as desired.

Thus any element in $K(\alpha)$, written as $f(\alpha)/g(\alpha)$, can be written as $f(\alpha)h(\alpha)$ for an appropriate polynomial $h$; in other words, every element of $K(\alpha)$ is a polynomial in $\alpha$ with coefficients in $K$. Now, suppose the minimal polynomial of $\alpha$ over $K$ is $m(x) = x^d + c_{d-1}x^{d-1} + \cdots + c_0$. Then $\alpha^d = -(c_{d-1}\alpha^{d-1} + \cdots + c_0)$. So $\alpha^d$ (and all higher powers of $\alpha$) can be written as linear combinations of $1, \alpha, \ldots, \alpha^{d-1}$ with coefficients in $K$. Then it can be verified that every element in $K(\alpha)$ can be written uniquely as a $K$-linear combination of $1, \alpha, \ldots, \alpha^{d-1}$. This establishes that $K(\alpha)/K$ is a finite extension; in particular, it has the basis $1, \alpha, \ldots, \alpha^{d-1}$.
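
The step “there exist polynomials $u$ and $v$ such that $ug + vm = 1$” is the extended Euclidean algorithm for polynomials, so this part of the proof can be carried out explicitly. Here is a sketch in SymPy for $K = \mathbb{Q}$ and $\alpha = \sqrt{2} + \sqrt{3}$ (the polynomial $g$ is an arbitrary choice of mine):

```python
import sympy as sp

x = sp.symbols('x')
alpha = sp.sqrt(2) + sp.sqrt(3)

m = x**4 - 10*x**2 + 1          # minimal polynomial of alpha over Q
g = x**2 + x + 1                # some polynomial with g(alpha) != 0

u, v, one = sp.gcdex(g, m, x)   # u*g + v*m = gcd(g, m) = 1
print(one)                      # 1

# Since m(alpha) = 0, u(alpha) is the inverse of g(alpha), a polynomial expression in alpha.
print(sp.simplify(sp.expand(u.subs(x, alpha) * g.subs(x, alpha) - 1)))   # 0
```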

Proposition 4: Suppose $L/\mathbb{Q}$ is a finite extension. Then any $\beta \in L$ is algebraic.

Proof sketch: Consider the elements $1, \beta, \beta^2, \beta^3, \ldots$. They cannot all be linearly independent over $\mathbb{Q}$. This is because in a finite-dimensional vector space of dimension $n$, any set with at least $n + 1$ elements must be linearly dependent. So there must be some finite linear combination of them which equals zero. But this simply means that for some rational constants $a_0, \ldots, a_m$, not all zero, and positive integer $m$, $a_0 + a_1\beta + \cdots + a_m\beta^m = 0$. Letting the $a_i$ be the coefficients of a polynomial $p(x) = a_0 + a_1x + \cdots + a_mx^m$, we have $p(\beta) = 0$, so $\beta$ is algebraic.

We have developed enough theory now to prove the result.

Proof of main Theorem: Let $\alpha_1, \ldots, \alpha_n$ be some collection of algebraic numbers. Then by Proposition 3, $\mathbb{Q}(\alpha_1)/\mathbb{Q}$, $\mathbb{Q}(\alpha_1, \alpha_2)/\mathbb{Q}(\alpha_1)$, $\ldots$, $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)/\mathbb{Q}(\alpha_1, \ldots, \alpha_{n-1})$ are all finite field extensions. Applying Proposition 2 repeatedly gives us that $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)/\mathbb{Q}$ is a finite extension. Then by Proposition 4, any element of $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)$ is algebraic.

This theorem essentially says that given a collection of algebraic numbers, by repeatedly adding, subtracting, multiplying and dividing them, we can only obtain algebraic numbers. But what about radicals? For example, we have now established that $\sqrt{2} + \sqrt{3}$ is algebraic. Is $\sqrt{\sqrt{2} + \sqrt{3}}$ algebraic as well?

In fact, we can generalize our result to the following: any number obtained by repeatedly adding, subtracting, multiplying, dividing algebraic numbers, as well as taking $n$-th roots (for positive integers $n$), will be algebraic.

This is not too hard now that we have the main theorem. It suffices to show that if $\alpha$ is algebraic, then $\sqrt[n]{\alpha}$ is algebraic. If $\alpha$ is algebraic, then there exist rational constants $a_0, \ldots, a_m$, not all zero, such that $a_0 + a_1\alpha + \cdots + a_m\alpha^m = 0$. This can be rewritten as $a_0 + a_1(\sqrt[n]{\alpha})^n + a_2(\sqrt[n]{\alpha})^{2n} + \cdots + a_m(\sqrt[n]{\alpha})^{mn} = 0$, so $\sqrt[n]{\alpha}$ is a root of the polynomial $a_0 + a_1x^n + a_2x^{2n} + \cdots + a_mx^{mn}$. Thus $\sqrt[n]{\alpha}$ is algebraic.

# Odd and even permutations

A standard part of the theory of permutations is the classification of permutations into “odd” and “even” types. In this post I will develop the theory of odd and even permutations, focusing on adjacent transpositions.

Let $X$ be a finite ordered set. A permutation of $X$ is a rearrangement of $X$; formally, it is a bijection $\sigma : X \to X$. For simplicity, we will label the elements of $X$ as $1, 2, \ldots, n$, and we will represent a permutation $\sigma$ by the list $\sigma(1), \sigma(2), \ldots, \sigma(n)$. Thus if $X$ has five elements, then the permutation that reverses $X$ can be written as $5, 4, 3, 2, 1$.

The set of permutations of $X$ naturally forms a group. That is, given two permutations $\sigma$ and $\tau$, we can form their product $\sigma \circ \tau$ (or $\sigma\tau$ for short), which is defined by performing $\tau$ first and then $\sigma$, following the convention for composition of functions. For example, the product of the permutations $2, 3, 1$ and $2, 1, 3$ is $3, 2, 1$.

Here are some standard permutations. The identity permutation is the permutation that does nothing. A transposition is a permutation that swaps two elements and leaves everything else fixed. If a transposition swaps $i$ and $j$, we will denote it as $(i\ j)$. An adjacent transposition is a transposition that swaps two adjacent elements (so it is of the form $(i\ \ i+1)$).

An important property of a permutation is its inversion number. An inversion of a permutation $\sigma$ is a pair $(i, j)$ with $i < j$ such that $\sigma(i) > \sigma(j)$. In other words, it is a pair of elements whose relative positions have been switched by the permutation. The inversion number of $\sigma$ is the size of its set of inversions. Thus the inversion number is at most $\binom{n}{2} = \frac{n(n-1)}{2}$ (this occurs when the permutation reverses $X$). A permutation has inversion number $0$ if and only if it is the identity permutation. Also, a permutation has inversion number $1$ if and only if it is an adjacent transposition.

A permutation is called odd if its inversion number is odd, and even if its inversion number is even. We would like to show that the product of odd and even permutations behaves like addition of odd and even numbers. To show this, we will use the following lemma:

Every permutation is a product of adjacent transpositions.

This can be proved by using insertion sort, an algorithm which effectively writes any permutation as a product of adjacent transpositions.

Here is how it works. Consider, for example, the permutation $\sigma$ given by the list $2, 3, 1, 5, 4$. Let us try to sort this list. Insertion sort first moves $1$ to the front by repeated swaps, first swapping the second and third positions and then the first and second positions. Then we get $1, 2, 3, 5, 4$. Then insertion sort would swap the fourth and fifth positions, and then finish with $1, 2, 3, 4, 5$. So, if we do these transpositions in reverse, we get our original permutation. That is, $\sigma = (4\ 5)(1\ 2)(2\ 3)$. We can see that using this process, insertion sort can write any permutation as a product of adjacent transpositions.
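
Here is a small Python sketch of this procedure (the permutation is the same one as in the example above): insertion-sort the list, record the adjacent swaps, and compare the number of swaps with the inversion number.

```python
def adjacent_swaps(perm):
    """Insertion sort perm (one-line notation), recording each adjacent swap performed."""
    a = list(perm)
    swaps = []
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            swaps.append((j, j + 1))        # 1-indexed positions that were swapped
            j -= 1
    return swaps

def inversion_number(perm):
    return sum(1 for i in range(len(perm)) for j in range(i + 1, len(perm))
               if perm[i] > perm[j])

perm = [2, 3, 1, 5, 4]
swaps = adjacent_swaps(perm)
print(swaps)                                # [(2, 3), (1, 2), (4, 5)]
print(len(swaps), inversion_number(perm))   # 3 3 -- one swap per inversion
```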

We need one more lemma:

If $\sigma$ is a permutation and $\tau$ an adjacent transposition, then $\sigma$ and $\sigma\tau$ have opposite parity. That is, one is odd and one is even.

Here is why. After performing $\sigma$, if we swap two adjacent elements $a$ and $b$, then the only change to the set of inversions involves the pair $\{a, b\}$, since $a$ and $b$ haven’t moved relative to any other element. So, the only possible change is that $\{a, b\}$ was an inversion and no longer is, or it wasn’t an inversion and now is. So the inversion number has either been increased or decreased by one – so the parity has been switched.

So, if a permutation is written as a product of $k$ adjacent transpositions, then its parity is equal to the parity of $k$. This establishes that the product of odd and even permutations behaves like addition of odd and even numbers (formally, what we are saying is that “parity” defines a homomorphism from the permutation group to $\mathbb{Z}/2\mathbb{Z}$).

Another interesting fact is that every transposition is odd. We can see this (without needing to count inversions) by noticing that we can obtain any transposition $(i\ j)$ (with $i < j$) by repeatedly swapping adjacent elements till the element in place $i$ is moved to place $j$, and then repeatedly swapping in the other direction till the element originally in place $j$ is moved to place $i$. If we write this out we see that
$$(i\ j) = (i\ \ i+1)(i+1\ \ i+2)\cdots(j-1\ \ j)\cdots(i+1\ \ i+2)(i\ \ i+1),$$
which has $2(j - i) - 1$ terms, an odd number.

Thus any given permutation can be written either only as a product of an odd number of transpositions, or only as a product of an even number of transpositions. Since the identity permutation is clearly even, we have the following remarkable fact: given any list, it is impossible, after performing an odd number of swaps, to obtain the original list.

Acknowledgements: Thanks to Anton Cao for suggestions. Thanks to Jessica Meng for reminding me the correct convention for composition order.