*trace is the derivative of determinant at the identity.*

Roughly you can think of this in the following way. If you start at the identity matrix and move a tiny step in the direction of $A$, say to $I + \epsilon A$ where $\epsilon$ is a tiny number, then the determinant changes approximately by $\epsilon$ times $\operatorname{tr}(A)$. In other words, $\det(I + \epsilon A) \approx 1 + \epsilon \operatorname{tr}(A)$. Here $I$ stands for the identity matrix.
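This approximation is easy to test numerically. Here is a quick illustration (not part of the original post), using a hand-coded $3 \times 3$ determinant:

```python
# Sanity check: for a sample 3x3 matrix A and small eps,
# det(I + eps*A) should differ from 1 by roughly eps * tr(A).

def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[2.0, -1.0, 0.5],
     [3.0,  1.0, 4.0],
     [0.0,  2.0, -2.0]]
trace_A = A[0][0] + A[1][1] + A[2][2]   # = 1.0

eps = 1e-6
I_plus = [[(1.0 if r == c else 0.0) + eps*A[r][c] for c in range(3)]
          for r in range(3)]
approx = 1 + eps*trace_A
print(det3(I_plus), approx)  # both approximately 1.000001
```

The discrepancy between the two printed values is of order $\epsilon^2$, as the higher-order terms predict.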

One can be very precise about what it means to take the “derivative” of the determinant, so let me do some setup. Let $K$ be either $\mathbb{R}$ or $\mathbb{C}$ (so we are working with real or complex Lie groups; but of course, everything makes sense for algebraic groups over arbitrary fields). Then there is a morphism of Lie groups called the *determinant*, $\det : GL_n(K) \to K^\times$, given by sending a matrix to its determinant. Since we are restricting to invertible matrices, the determinants are nonzero. To check that this is really a morphism of Lie groups (i.e. both a smooth map and a homomorphism of groups), note that the determinant map is a polynomial map in the entries of the matrix (and therefore smooth) and is a group homomorphism by the property that $\det(AB) = \det(A)\det(B)$.

Now, given any smooth map of manifolds $f : M \to N$ which maps a point $p$ to $q = f(p)$, there is an induced linear map $df_p$ from the tangent space of $M$ at $p$ to the tangent space of $N$ at $q$, called the derivative of $f$ at $p$. In particular, if $f$ is a Lie group homomorphism, then it maps the identity point to the identity point, and the derivative at the identity is furthermore a homomorphism of Lie algebras. What this means is that, in addition to being a linear map, it preserves the bracket pairing.

In the case of $GL_n(K)$, the Lie algebra at the identity matrix is called $\mathfrak{gl}_n(K)$. We can think of it as consisting of all $n \times n$ matrices over $K$, and the bracket operation is defined by $[A, B] = AB - BA$. The Lie algebra of $K^\times$ at $1$ consists of the elements of $K$; since $K^\times$ is abelian, the bracket is trivial.

The main claim, which I will prove subsequently, is that this map $d(\det)_I : \mathfrak{gl}_n(K) \to K$, the derivative of the determinant at the identity, is actually the trace. That is, it sends a matrix to its trace, the sum of the entries on the diagonal. Note that since it is a homomorphism of Lie algebras, it preserves the bracket, and we recover the familiar property of trace: $\operatorname{tr}(AB - BA) = 0$, so $\operatorname{tr}(AB) = \operatorname{tr}(BA)$.
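The bracket-preservation property is easy to check on an example (illustrative code, not from the original post): the trace of any bracket $AB - BA$ is zero, which is exactly what it means for $\operatorname{tr}$ to be a homomorphism into the abelian Lie algebra $K$.

```python
# Check tr(AB - BA) = 0 for a pair of sample 3x3 integer matrices.

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k]*Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

A = [[1, 2, 0], [0, -1, 3], [4, 0, 2]]
B = [[0, 1, 1], [2, 0, 0], [1, 5, -2]]

AB, BA = matmul(A, B), matmul(B, A)
bracket = [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]
print(trace(bracket))  # 0
```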

We can find the derivative of a smooth map on $GL_n(K)$ directly, since it is an open subset of a vector space. Let $A$ be a matrix; then the derivative at the identity evaluated at $A$ is

$$d(\det)_I(A) = \frac{d}{dt}\bigg|_{t=0} \det(I + tA).$$

Now, $\det(I + tA)$ is a polynomial in $t$, and the number we’re looking for is the coefficient of the $t$ term.

We have to expand $\det(I + tA)$, where the diagonal entries of $I + tA$ are $1 + ta_{ii}$ and the off-diagonal entries are $ta_{ij}$. Just to get a concrete idea of what this expands to, let’s look when $n = 2$. Then

$$\det(I + tA) = \det\begin{pmatrix} 1 + ta_{11} & ta_{12} \\ ta_{21} & 1 + ta_{22} \end{pmatrix} = 1 + (a_{11} + a_{22})t + \det(A)\,t^2.$$

When $n = 3$,

$$\det(I + tA) = 1 + (a_{11} + a_{22} + a_{33})t + c_2 t^2 + \det(A)\,t^3,$$

where $c_2$ is the sum of the three principal $2 \times 2$ minors of $A$. In particular, the coefficient of $t$ is $\operatorname{tr}(A)$. (In fact, see if you can convince yourself that for general $n$ the coefficient of $t^n$ is $\det(A)$.)
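These coefficients can also be extracted programmatically. The sketch below (an illustration, not part of the original post) expands $\det(I + tA)$ exactly as a polynomial in $t$ using the permutation (Leibniz) formula for the determinant, storing polynomials as coefficient lists:

```python
# Expand det(I + tA) exactly as a polynomial in t via the Leibniz formula.
from itertools import permutations

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [0]*(len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i+j] += a*b
    return out

def sign(perm):
    """Sign of a permutation via counting inversions."""
    inv = sum(1 for i in range(len(perm))
                for j in range(i+1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def det_I_plus_tA(A):
    n = len(A)
    total = [0]*(n+1)
    for perm in permutations(range(n)):
        term = [1]
        for i in range(n):
            # entry (i, perm(i)) of I + tA is delta_{i,perm(i)} + t*a_{i,perm(i)}
            term = poly_mul(term, [1 if perm[i] == i else 0, A[i][perm[i]]])
        for k, c in enumerate(term):
            total[k] += sign(perm)*c
    return total  # coefficients of t^0, t^1, ..., t^n

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
coeffs = det_I_plus_tA(A)
print(coeffs)  # [1, 16, -12, -3]: 16 = tr(A), -3 = det(A)
```

The constant term is always $1$, the linear coefficient is the trace, and the top coefficient is the determinant.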

*See some **discussion** of the meaning of trace.*

**Acknowledgements:** Thanks to Ben Wormleighton for originally telling me the slogan “trace is the derivative of determinant”, and for teaching me about Lie groups and Lie algebras.

*To add: discussion of Jacobi’s formula, exponential map*

In all that follows, $R$ is a commutative ring with identity, and $I$ and $J$ are ideals of $R$.

**Lemma:** If $I \subseteq J$, there is a natural map $R/I \to R/J$.

**Proposition:** The natural map $R/I \otimes_R R/J \to R/(I + J)$ is an isomorphism of $R$-algebras.

*Proof:* To see surjectivity, notice that $1 \otimes 1$ generates $R/I \otimes_R R/J$ as an $R$-module, and $1 \otimes 1 \mapsto 1$, since the map is a ring homomorphism. To see injectivity, notice that every element

$$\sum_k a_k \otimes b_k$$

is equal to the pure tensor $\left(\sum_k a_k b_k\right)(1 \otimes 1)$, where the products $a_k b_k$ are taken in $R$. If $\sum_k a_k b_k \in I + J$, then $\sum_k a_k b_k = i + j$ for some $i \in I$ and $j \in J$. So $i(1 \otimes 1) = i \otimes 1 = 0$, and $j(1 \otimes 1) = 1 \otimes j = 0$. Then $\left(\sum_k a_k b_k\right)(1 \otimes 1) = (i + j)(1 \otimes 1) = 0$.

**Corollary:** $\mathbb{Z}/m \otimes_{\mathbb{Z}} \mathbb{Z}/n \cong \mathbb{Z}/\gcd(m, n)$.

**Proposition (Chinese Remainder Theorem):** The natural map $R/(I \cap J) \to R/I \times R/J$ is injective. If $I + J = R$, it is also surjective, and thus an isomorphism.

*Proof:* To see injectivity, notice that if $r \mapsto (0, 0)$, then $r \in I$ and $r \in J$, so $r \in I \cap J$, so $r = 0$ in $R/(I \cap J)$. To see surjectivity, note that $I + J = R$ implies there exist $i \in I$, $j \in J$ such that $i + j = 1$. Consider any $(a, b) \in R/I \times R/J$, and set $r = aj + bi$. Then $r = a(1 - i) + bi \equiv a \pmod{I}$ and $r = aj + b(1 - j) \equiv b \pmod{J}$, so $r \mapsto (a, b)$.

**Corollary:** If $I + J = R$, then $R/(I \cap J) \cong R/I \times R/J$. In particular, by applying this repeatedly we have $R/(I_1 \cap \cdots \cap I_n) \cong R/I_1 \times \cdots \times R/I_n$ when the ideals $I_1, \ldots, I_n$ are pairwise comaximal.
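A concrete instance of the corollary, with $R = \mathbb{Z}$, $I = (8)$, $J = (15)$ (an illustration, not from the original post): since $\gcd(8, 15) = 1$ we have $I + J = R$, and the map $\mathbb{Z}/120 \to \mathbb{Z}/8 \times \mathbb{Z}/15$ is a bijection.

```python
# The reduction map Z/120 -> Z/8 x Z/15 hits 120 distinct pairs,
# so it is injective, hence bijective by counting.
m, n = 8, 15
images = {(x % m, x % n) for x in range(m*n)}
print(len(images))  # 120
```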

Also, note the following fact:

**Proposition:** If $I + J = R$, then $I \cap J = IJ$.

*Proof:* Clearly $IJ \subseteq I \cap J$. To go the other way, note that $I + J = R$ means that there exist $i \in I$ and $j \in J$ such that $i + j = 1$. So, consider an element $r \in I \cap J$. Then we have $r = ri + rj$. Since $r \in J$, $ri \in IJ$, and since $r \in I$, $rj \in IJ$. So $r \in IJ$.
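For ideals of $\mathbb{Z}$ this is a familiar fact about lcm's (an illustration, not from the original post): $(a) \cap (b) = (\operatorname{lcm}(a, b))$ and $(a)(b) = (ab)$, so the proposition says $\operatorname{lcm}(a, b) = ab$ exactly when $\gcd(a, b) = 1$.

```python
# Compare (a) ∩ (b) = (lcm) with (a)(b) = (ab) for coprime and non-coprime a, b.
from math import gcd

def lcm(a, b):
    return a*b // gcd(a, b)

print(lcm(8, 15), 8*15)  # coprime: intersection = product
print(lcm(4, 6), 4*6)    # not coprime: 12 != 24
```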

**Remark:** One can also think of this in terms of Tor: $\operatorname{Tor}_1^R(R/I, R/J) \cong (I \cap J)/IJ$, and when $I + J = R$ this Tor group vanishes.

Probably the most famous open problem in number theory is the Riemann hypothesis. In addition to being worth a million dollars, it is a deep and fundamental problem that has remained intractable since it was first proposed by Bernhard Riemann, in 1859.

The Riemann hypothesis springs out of the field of analytic number theory, which applies complex analysis to problems in number theory, often studying the distribution of prime numbers. The Riemann hypothesis itself has significant implications for the distribution of primes and implies an asymptotic statement about their density (for a precise statement, see here). But the Riemann hypothesis is usually formulated in the language of complex analysis, as a statement about a complex-analytic function, the Riemann zeta function, and its zeroes. This formulation is succinct and elegant, and allows the problem to be subsumed into the larger study of the largely conjectural theory of L-functions.

This broader theory allows one to create analogues of the Riemann zeta function and Riemann hypothesis in other contexts. Often these “alternative Riemann hypotheses” are even harder than the original Riemann hypothesis, but there is a famous case where this is fortunately not true.

In the 1940’s, André Weil proved an analogue of the Riemann hypothesis: not for the Riemann zeta function, but for a different zeta function. Here’s one way to describe it: very roughly speaking, the Riemann zeta function is based on the field of rational numbers $\mathbb{Q}$ (it can be defined as an Euler product over the primes of $\mathbb{Z}$). Our zeta function will be constructed analogously, but instead be based on the field $\mathbb{F}_q(t)$ (the field of rational functions with coefficients in the finite field $\mathbb{F}_q$). So instead of the *number field* $\mathbb{Q}$, we have swapped it out and replaced it with a *function field*.

Actually, what Weil proved, and what we will prove today, is the analogue of the Riemann hypothesis for global function fields. This work represents the greatest progress we have towards the original Riemann hypothesis, and serves as tantalizing evidence for it.

There is a general pattern in number theory which looks something like the following: start with a problem in number theory. Adapt the problem from the number field setting to the function field setting. Then interpret the function field as the function field of a curve (usually), and then use techniques of algebraic geometry (for example, $\mathbb{F}_q(t)$ is the function field of a line over $\mathbb{F}_q$). That is exactly what we will do here: it will therefore look less like complex analysis and more like algebraic geometry.

Let $C$ be a smooth projective curve over a finite field $\mathbb{F}_q$. Let $|C|$ be the set of closed points of $C$. Then the zeta function of $C$ is defined by

$$Z(C, T) = \prod_{P \in |C|} \frac{1}{1 - T^{\deg P}}.$$

Here, we are using $T$ as a change of variables: if we plug in $q^{-s}$ for $T$, then we obtain an exactly analogous zeta function to the Riemann zeta function, except with respect to the function field of $C$ instead of the field $\mathbb{Q}$. There are three important properties that we would like $Z(C, T)$ to have: (1) rationality, (2) satisfies a functional equation, and (3) satisfies an analogue of the Riemann hypothesis. Part (3) was proved by André Weil in the 1940’s; parts (1) and (2) were proved much earlier. In this post, I will present a proof of the analogue of the Riemann hypothesis assuming (1) and (2), along the lines of Weil’s original proof using intersection theory. All this material and much more is in an expository paper by James Milne called “The Riemann Hypothesis over Finite Fields: From Weil to the Present Day”. A useful reference is Appendix C in Hartshorne’s *Algebraic Geometry*; some material also comes from section V.1 on surfaces.

Let $g$ be the genus of $C$. Then (1) says that $Z(C, T)$ is a rational function of $T$. The specific functional equation of (2) is the following:

$$Z\left(C, \frac{1}{qT}\right) = \pm q^{1 - g} T^{2 - 2g} Z(C, T).$$

It turns out that we can write $Z(C, T)$ out explicitly: there exist constants $\alpha_1, \ldots, \alpha_{2g}$ such that

$$Z(C, T) = \frac{\prod_{i=1}^{2g} (1 - \alpha_i T)}{(1 - T)(1 - qT)},$$

and the functional equation implies that the constants can be rearranged if necessary so that $\alpha_i \alpha_{g+i} = q$ for $i = 1, \ldots, g$.

Now, the analogue of the Riemann hypothesis states the following:

$$|\alpha_i| = \sqrt{q} \quad \text{for all } i.$$

(To see the connection between this statement and the ordinary Riemann hypothesis, check out this blog post by Anton Hilado.)

Notice that, assuming rationality and the functional equation, the Riemann hypothesis will follow from simply the inequality $|\alpha_i| \leq \sqrt{q}$ for all $i$: since the $\alpha_i$ pair off so that $\alpha_i \alpha_{g+i} = q$, if one of a pair had absolute value less than $\sqrt{q}$, its partner would have absolute value greater than $\sqrt{q}$.

We will prove the Riemann hypothesis via the **Hasse-Weil inequality**, which is an inequality that puts an explicit bound on $N = \#C(\mathbb{F}_q)$, the number of points of $C$ with coordinates in $\mathbb{F}_q$. The Hasse-Weil inequality states that

$$|N - (q + 1)| \leq 2g\sqrt{q},$$

which is actually a pretty good bound. Why does the Hasse-Weil inequality imply the Riemann hypothesis? Well, if we take the logarithm of $Z(C, T)$ and use the power series for $\log(1 - x)$, regrouping terms gives us

$$N_n = q^n + 1 - \sum_{i=1}^{2g} \alpha_i^n,$$

where $N_n = \#C(\mathbb{F}_{q^n})$; so, applying the Hasse-Weil inequality to $C$ over each field $\mathbb{F}_{q^n}$,

$$\left| \sum_{i=1}^{2g} \alpha_i^n \right| \leq 2g\, q^{n/2}.$$

In other words,

$$\sum_{i=1}^{2g} \left( \frac{\alpha_i}{\sqrt{q}} \right)^n$$

is bounded as $n$ varies.

Letting $n \to \infty$, we see that if some $|\alpha_i| > \sqrt{q}$, the terms of largest absolute value would dominate and the sum would be unbounded. So

$$|\alpha_i| \leq \sqrt{q} \quad \text{for all } i,$$

as desired. (Check that this works even if the $\alpha_i$ are not distinct.)
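The Hasse-Weil inequality itself is easy to test numerically in the simplest interesting case, genus 1. The sketch below (illustrative code, not from the original post) counts points on the arbitrarily chosen elliptic curve $y^2 = x^3 + 2x + 3$ over $\mathbb{F}_{101}$ (one can check its discriminant is nonzero mod 101, so it is smooth) and checks $|N - (q + 1)| \leq 2\sqrt{q}$:

```python
# Count points on y^2 = x^3 + a*x + b over F_p (affine points plus the
# single point at infinity) and verify the Hasse-Weil bound with g = 1.

def count_points(a, b, p):
    sq_counts = {}
    for y in range(p):
        r = y * y % p
        sq_counts[r] = sq_counts.get(r, 0) + 1
    affine = sum(sq_counts.get((x**3 + a*x + b) % p, 0) for x in range(p))
    return affine + 1

p = 101
N = count_points(2, 3, p)
print(N, (N - (p + 1))**2 <= 4*p)  # Hasse-Weil bound holds: prints True
```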

We will prove the Hasse-Weil inequality using intersection theory. First, we will consider $C$ as a curve over the algebraic closure $\overline{\mathbb{F}}_q$. Then there is the Frobenius map $F : C \to C$. If we embed $C$ into projective space, then $F$ sends $[x_0 : \cdots : x_n] \mapsto [x_0^q : \cdots : x_n^q]$. We can interpret $N$ as the size of the set of fixed points of $F$. Our plan then is to use inequalities from intersection theory to bound the intersection of $\Gamma_F$ (the graph of Frobenius) and $\Delta$ (the diagonal) in $C \times C$.

First, let us set up the intersection theory we need. This material is from Chapter V.1 of Hartshorne, on surfaces.

**Intersection pairing on a surface:** Let $X$ be a (smooth projective) surface. There exists a symmetric bilinear pairing on divisors (where the product of divisors $C$ and $D$ is denoted $C.D$) such that if $C, D$ are smooth curves intersecting transversely, then $C.D = \#(C \cap D)$.

Furthermore, another theorem we’ll need is the **Hodge index theorem**:

Let $H$ be an ample divisor on $X$ and $D$ a nonzero divisor with $D.H = 0$. Then $D^2 < 0$. ($D^2$ denotes $D.D$.)

Now let us begin with some general setup. Let $C$ and $C'$ be two curves, and let $X = C \times C'$. Identify $C$ with the horizontal fiber $C \times \{\mathrm{pt}\}$ and $C'$ with the vertical fiber $\{\mathrm{pt}\} \times C'$. Notice that $C^2 = (C')^2 = 0$ and $C.C' = 1$. Thus $(C + C')^2 = 2$.

Let $D$ be a divisor on $X$. Let $a = D.C$ and $b = D.C'$; also, let $E = D - aC' - bC$, so that $E.(C + C') = 0$ (expand it out). The Hodge index theorem implies then that $E^2 \leq 0$. Expanding this out yields $D^2 \leq 2ab$. This fundamental inequality is called the **Castelnuovo-Severi inequality**. We may define $\operatorname{def}(D) = 2ab - D^2 \geq 0$.

Next, let us prove the following inequality: if $D$ and $D'$ are divisors, with $a = D.C$, $b = D.C'$, $a' = D'.C$, and $b' = D'.C'$, then

$$|D.D' - ab' - a'b| \leq \sqrt{\operatorname{def}(D)\operatorname{def}(D')}.$$

*Proof (fill in details):* Expand out $\operatorname{def}(mD + nD') \geq 0$ for integers $m, n$; this is a quadratic form in $(m, n)$. We can let $m/n$ become arbitrarily close to the real value minimizing this quadratic form, so its discriminant must be $\leq 0$, yielding the inequality.

Here’s another lemma we will need: Consider a map $f : C \to C'$. If $\Gamma_f$ is the graph of $f$ on $C \times C'$, then $\Gamma_f^2 = \deg(f)(2 - 2g')$ (where $g'$ is the genus of $C'$).

*Proof (fill in details):* Rearrange the adjunction formula.

Now we have what we need: we will do intersection theory on $C \times C$. The Frobenius map is a map of degree $q$, so $\Gamma_F^2 = q(2 - 2g)$. We might as well think of $\Delta$ as the graph of the identity map, so $\Delta^2 = 2 - 2g$. Finally, $\Gamma_F$ meets the two fibers in $q$ and $1$ points respectively, so $\operatorname{def}(\Gamma_F) = 2q - q(2 - 2g) = 2qg$, and similarly $\operatorname{def}(\Delta) = 2 - (2 - 2g) = 2g$. Plugging this into the inequality, with $\Gamma_F.\Delta = N$ counting the fixed points of $F$, we get

$$|N - q - 1| \leq \sqrt{(2qg)(2g)},$$

yielding the Hasse-Weil inequality

$$|N - (q + 1)| \leq 2g\sqrt{q}.$$

This proves the Riemann hypothesis for function fields, or equivalently, the Riemann hypothesis for curves over finite fields.

After Weil proved this result, he speculated whether analogous statements were true for not only *curves* over finite fields, but *higher-dimensional algebraic varieties* over finite fields. He proposed as conjectures that the zeta functions for such varieties should also satisfy (1) rationality, (2) a functional equation, and (3) an analogue of the Riemann hypothesis.

Weil also speculated a connection with algebraic topology. In our work above, the genus was crucial. But the genus can alternatively be defined topologically, by taking the equations that define the curve, looking at the locus they cut out when graphed over the complex numbers, and counting how many holes the resulting shape has. Weil suggested that for arbitrary varieties, topological Betti numbers should play this role: that is, the zeta function of the variety over the finite field should be closely connected with the topology of the analogous variety over the complex numbers.

There’s an interesting blog post that discusses this idea, in our context of curves. But the rest is history. The story of the Weil conjectures is one of the most famous in all of mathematics: the effort to prove them revolutionized algebraic geometry and number theory forever. The key innovation was the theory of Ă©tale cohomology, which is an analogue of classical singular cohomology for algebraic varieties over arbitrary fields.

$$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{\sigma(1), 1} a_{\sigma(2), 2} \cdots a_{\sigma(n), n}$$

Here $S_n$ is the set of permutations of the set $\{1, 2, \ldots, n\}$, and $\operatorname{sgn}(\sigma)$ is the sign of the permutation $\sigma$. This formula is derived from the definition of the determinant via exterior algebra. One can check by hand that this gives the familiar expressions for the determinant when $n = 2, 3$.

Now, since $(A^T)_{ij} = a_{ji}$, we have

$$\det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma)\, a_{1, \sigma(1)} a_{2, \sigma(2)} \cdots a_{n, \sigma(n)}.$$

The crucial observation here is that we may rearrange the product inside the summation so that the second indices are increasing. Let $\tau = \sigma^{-1}$. Then the product inside the summation is

$$a_{\tau(1), 1} a_{\tau(2), 2} \cdots a_{\tau(n), n}.$$

Combining this with the fact that $\operatorname{sgn}(\sigma) = \operatorname{sgn}(\sigma^{-1})$, our expression simplifies to

$$\det(A^T) = \sum_{\sigma \in S_n} \operatorname{sgn}(\tau)\, a_{\tau(1), 1} a_{\tau(2), 2} \cdots a_{\tau(n), n}.$$

Noticing that the sum is the same sum if we replace all $\tau$s with $\sigma$s (as $\sigma$ ranges over all of $S_n$, so does $\tau = \sigma^{-1}$), we see that this equals $\det(A)$. So $\det(A^T) = \det(A)$.
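The identity is easy to watch in action (illustrative code, not from the original post): compute the determinant directly from the permutation formula, and confirm that transposing the matrix does not change it.

```python
# Determinant via the Leibniz (permutation) formula, in exact integer
# arithmetic, using the "second indices increasing" convention from the proof.
from itertools import permutations

def sgn(perm):
    inv = sum(1 for i in range(len(perm))
                for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

def leibniz_det(A):
    n = len(A)
    total = 0
    for perm in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= A[perm[i]][i]   # factor a_{perm(i), i}
        total += sgn(perm) * prod
    return total

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
At = [list(row) for row in zip(*A)]
print(leibniz_det(A), leibniz_det(At))  # -3 -3
```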

I wonder if there is a more conceptual proof of this? (By “conceptual”, I mean a proof based on exterior algebra, bilinear pairings, etc…)

Recall that an absolute value on a field $K$ is a function $|\cdot| : K \to \mathbb{R}_{\geq 0}$ satisfying the following axioms:

- $|x| = 0$ if and only if $x = 0$
- $|xy| = |x|\,|y|$
- $|x + y| \leq |x| + |y|$ (triangle inequality)

for all $x, y \in K$.

Here is an intuitive, analogous definition for the Archimedean property:

**Definition:** The absolute value $|\cdot|$ is Archimedean if, for all $x, y \in K$ with $x \neq 0$, we have $|nx| > |y|$ for some natural number $n$.

Clearly the standard absolute value (which is defined on $\mathbb{R}$ and $\mathbb{C}$, and therefore on $\mathbb{Q}$) is Archimedean. But wait: since we assumed $x \neq 0$, we can divide both sides by $|x|$ to obtain $|n| > |y/x|$. In other words, we can write the definition equivalently as:

**Equivalent Definition:** The absolute value $|\cdot|$ is Archimedean if, for all $z \in K$, $|n| > |z|$ for some natural number $n$.

Here $z$ takes the place of $y/x$. The important thing here is that $z$ can be *any* element of $K$. So what this is saying is that, given any element of the field, there is some *natural number* that beats it.

Now, let us assume that the absolute value is *nontrivial*. (The trivial absolute value has $|x| = 1$ for all nonzero $x$.) Thus, for some $x$, $|x| \neq 1$. So, either $|x| > 1$ or $|1/x| > 1$. Thus by taking arbitrarily high powers of $x$ or $1/x$, we can obtain arbitrarily high absolute values. So we can reformulate the definition as follows:

**Equivalent Definition:** $|\cdot|$ is Archimedean if the set $\{|n| : n \in \mathbb{N}\}$ contains arbitrarily large elements.

In other words, the set $\{|n| : n \in \mathbb{N}\}$ is unbounded. So, $|\cdot|$ is non-Archimedean if the sequence $|1|, |2|, |3|, \ldots$ is bounded. However, if any $|n| > 1$, then taking arbitrarily high powers of $n$ can give us arbitrarily high absolute values. So:

**Equivalent Definition:** $|\cdot|$ is non-Archimedean if $|n| \leq 1$ for all $n \in \mathbb{N}$.

Finally, I will present another very useful characterization of the (non)Archimedean property.

**Theorem/Equivalent Definition:** $|\cdot|$ is non-Archimedean if and only if it satisfies the ultrametric inequality: $|x + y| \leq \max(|x|, |y|)$ for all $x, y \in K$.

*Proof*: (to be added)
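Even without the proof, both characterizations are easy to test on the standard non-Archimedean example, the $p$-adic absolute value $|x|_p = p^{-v}$, where $p^v$ is the exact power of $p$ dividing $x$ (a sketch, not from the original post):

```python
# The 5-adic absolute value on Q: every integer has |n|_5 <= 1,
# and the ultrametric inequality holds.
from fractions import Fraction

def padic_abs(x, p):
    x = Fraction(x)
    if x == 0:
        return 0.0
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

print(max(padic_abs(n, 5) for n in range(1, 200)))  # 1.0: integers never exceed 1
print(padic_abs(Fraction(1, 5), 5))                 # 5.0
```

So the natural numbers are bounded in the 5-adic absolute value, exactly the non-Archimedean condition above.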

Let $K$ be an ordered field. We say $K$ is Archimedean if, for all $x, y \in K$ where $x, y > 0$, there exists a natural number $n$ such that $nx > y$.

An example of a non-Archimedean number system is the hyperreal numbers. Hyperreal numbers are an enlargement of the real numbers that also contain “infinite” and “infinitesimal” quantities. The hyperreal numbers are used to give an alternative formulation of calculus in the subject of non-standard analysis, where instead of using limits, one computes with actual infinitesimals.

More familiar examples of non-Archimedean fields are function fields. For example, consider the field of rational functions (over $\mathbb{R}$), denoted $\mathbb{R}(x)$. We can order rational functions by declaring that

$$f > g \quad \text{if} \quad f(t) > g(t) \text{ as } t \to \infty$$

(that is, for all sufficiently large $t$). In other words, we order rational functions by looking at their asymptotic behavior. One can check that this satisfies the axioms, making $\mathbb{R}(x)$ an ordered field.

**Exercise:** Show that a rational function

$$f = \frac{a_n x^n + \cdots + a_1 x + a_0}{b_m x^m + \cdots + b_1 x + b_0} \qquad (a_n, b_m \neq 0)$$

is positive with respect to the order (i.e. $f > 0$) if and only if $a_n b_m > 0$.

Now one can see that the field of rational functions is clearly not Archimedean. For example, if we consider the constant function $1$, no matter how many times we add it to itself, it will never surpass $x$: the constant function $n$ is eventually surpassed by $x$, no matter how great $n$ is.

**Exercise:** Define the degree of a rational function to be the degree of its numerator minus the degree of its denominator. For rational functions $f, g > 0$, show there exists a natural number $n$ such that $nf > g$ if and only if $\deg f \geq \deg g$.

Thus the basic idea of the Archimedean property is at the core of asymptotic analysis. In defining big-O notation, we write $f = O(g)$ if some multiple of $g$ surpasses $|f|$ as the argument goes off to infinity.

In the next post, I will discuss the Archimedean property for valued fields (as opposed to ordered fields), and how this applies to number theory.

Sometimes a cubic form can be factored. In this case, we are lucky: it factors as a product of three linear forms,

but in general we will not be so lucky. It is pretty rare for a random cubic form to be factorable. From the perspective of projective algebraic geometry, a homogeneous cubic form $F(x, y, z)$ cuts out an algebraic curve in the projective plane:

$$\{ [x : y : z] \in \mathbb{P}^2 : F(x, y, z) = 0 \},$$

and a cubic that factors into three linear forms will cut out three lines: a “degenerate” plane cubic. Such a collection of three lines is called a “triangle”.

The space of homogeneous cubic forms in three variables is a 10-dimensional vector space with basis $x^3$, $x^2y$, $x^2z$, $xy^2$, $xyz$, $xz^2$, $y^3$, $y^2z$, $yz^2$, $z^3$. However, given a cubic form $F$, the scaled form $\lambda F$ (for $\lambda \neq 0$) corresponds to the same curve. Furthermore, if all the coefficients are zero, then the form doesn’t correspond to a curve at all. Thus the space of plane cubics is nine-dimensional projective space $\mathbb{P}^9$.

In this post we will prove the following enumerative result:

A general three-dimensional family of plane cubics contains exactly 15 triangles.

By a three-dimensional family (aka “web”), we mean some embedded copy of $\mathbb{P}^3$ in this $\mathbb{P}^9$ of plane cubics. This geometric result corresponds to the following purely algebraic fact:

A general four-dimensional subspace of the ten-dimensional space of cubic forms contains exactly 15 forms which factor into three linear forms.

(I should probably say something about the term “general”. The statement “a general $x \in X$ satisfies property $P$” means that the subset of $X$ for which $P(x)$ holds is dense in $X$.)

This material is drawn from *3264 And All That* by Eisenbud and Harris.

Let us consider the space of (ordered) triples of lines, or rather of nonzero linear forms up to scaling. This is $\mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2$. We can construct a morphism

$$t : \mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2 \to \mathbb{P}^9$$

which sends a triple of linear forms $(l_1, l_2, l_3)$ to their product $l_1 l_2 l_3$. This map is (in general) 6 to 1, since there are 6 permutations of three (distinct) linear forms.

We will do intersection theory in $\mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2$: specifically, we will pull back the class of a $\mathbb{P}^3$ in $\mathbb{P}^9$ to $\mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2$, and count how many points it consists of (in other words, how many triples of linear forms correspond to cubics contained in a general web). Then we will divide this number by 6, to count the number of triangles in the family.

The morphism $t$ induces a map of Chow rings:

$$t^* : A(\mathbb{P}^9) \to A(\mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2).$$

Now, $A(\mathbb{P}^9) = \mathbb{Z}[h]/(h^{10})$ and $A(\mathbb{P}^2 \times \mathbb{P}^2 \times \mathbb{P}^2) = \mathbb{Z}[a_1, a_2, a_3]/(a_1^3, a_2^3, a_3^3)$, where $h$ and the $a_i$ are the hyperplane classes. The class of any $\mathbb{P}^3$ in $\mathbb{P}^9$ is $h^6$. Furthermore, $t^*(h) = a_1 + a_2 + a_3$. So, $t^*(h^6) = (a_1 + a_2 + a_3)^6$. If we expand this out, removing every term that has any variable to a power of three or greater, we see that every term except the monomial term $a_1^2 a_2^2 a_3^2$ vanishes, and its coefficient is the multinomial coefficient $\binom{6}{2, 2, 2} = 90$. This is the number of ordered triples of linear forms which correspond to cubic forms contained in a general web. Dividing by six, since six ordered triples correspond to one unordered triple (i.e. one distinct triangle), we obtain our answer of $90/6 = 15$.
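The coefficient extraction can be checked by brute force (an illustration, not from the original post): expand $(a_1 + a_2 + a_3)^6$ as a sum over sequences in $\{1, 2, 3\}^6$ and count how many give the monomial $a_1^2 a_2^2 a_3^2$.

```python
# Count the sequences of length 6 over {1,2,3} with each symbol appearing
# exactly twice: this is the coefficient of a1^2 a2^2 a3^2 in (a1+a2+a3)^6.
from itertools import product

count = sum(1 for seq in product((1, 2, 3), repeat=6)
            if seq.count(1) == 2 and seq.count(2) == 2 and seq.count(3) == 2)
print(count, count // 6)  # 90 15
```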

*(I will add more details to this…)*

Proposition: Let $R$ be a noetherian local domain with maximal ideal $\mathfrak{m}$, where $x \in \mathfrak{m}$ and $x$ is not nilpotent. Then $\bigcap_{n \geq 1} (x^n) = (0)$.

*Proof:* Suppose $y \in \bigcap_{n \geq 1} (x^n)$. Then $y = a_n x^n$ for all $n$, for some $a_n \in R$. So $a_n x^n = a_{n+1} x^{n+1}$ for all $n$. Since $R$ is a domain (and $x^n \neq 0$), $a_n = a_{n+1} x$.

Therefore consider the ascending chain $(a_1) \subseteq (a_2) \subseteq \cdots$. This eventually stabilizes for high enough $n$ since $R$ is noetherian, so for some $n$, $(a_{n+1}) = (a_n)$. Thus $a_{n+1} = b a_n = b x a_{n+1}$, so $a_{n+1}(1 - bx) = 0$. But $1 - bx$ is a unit (since $x \in \mathfrak{m}$ and $R$ is local), so $a_{n+1} = 0$, so $y = a_{n+1} x^{n+1} = 0$.

This theorem holds more generally even if $R$ is not assumed to be a domain, but the proof is more complicated (though still along the same lines).

Proposition: Let $R$ be a noetherian local ring with maximal ideal $\mathfrak{m}$, where $x \in \mathfrak{m}$ and $x$ is not nilpotent. Then $\bigcap_{n \geq 1} (x^n) = (0)$.

*Proof:* Let $I$ be the ideal of elements that kill some power of $x$. We will use barred variables $\bar{a}$ to refer to elements of $R/I$. Since $R$ is noetherian, $I$ must be finitely generated, so all elements of $I$ kill $x^N$ for some fixed $N$.

Now suppose $y \in \bigcap_{n \geq 1} (x^n)$, and write $y = a_n x^n$ for each $n$. Then $a_n x^n = a_{n+1} x^{n+1}$, so $(a_n - a_{n+1} x) x^n = 0$. Thus $\bar{a}_n = \bar{a}_{n+1} \bar{x}$ in $R/I$.

Consider the ascending chain $(\bar{a}_1) \subseteq (\bar{a}_2) \subseteq \cdots$ in $R/I$. Since $R/I$ is noetherian it must eventually stabilize, so for some $n$, $\bar{a}_{n+1}$ can be written as $\bar{b} \bar{a}_n$. But recall that $\bar{a}_n = \bar{a}_{n+1} \bar{x}$. So $\bar{a}_{n+1} = \bar{b} \bar{x} \bar{a}_{n+1}$, so $\bar{a}_{n+1} (\bar{1} - \bar{b}\bar{x}) = \bar{0}$. $1 - bx$ is a unit since $x \in \mathfrak{m}$ and $R$ is local, so $\bar{a}_{n+1} = \bar{0}$; that is, $a_{n+1} \in I$. If we force $n + 1$ to be large enough to surpass $N$, then $a_{n+1} x^{n+1} = 0$, so $y = 0$.

Previously, I wrote some blog posts (see here and here) which sketched a proof of the fact that the sum and product of algebraic numbers are also algebraic (and more). This is not an obvious fact, and to prove this requires some amount of field theory and linear algebra. Nevertheless, the ideas in the proof lead the way to a better understanding of the structure of the algebraic numbers and towards the theorems of Galois theory. In that post, I tried to introduce the minimum algebraic machinery necessary in order to state and prove the main result; I don’t think I entirely succeeded.

However, there is a more direct approach, one which also allows us to find a polynomial that has $\alpha + \beta$ (or $\alpha\beta$) as a root, for algebraic numbers $\alpha$ and $\beta$. That is the subject of this post. Instead of trying to formally prove the result, I will illustrate the approach for a specific example: showing $\sqrt{2} + \sqrt{3}$ is algebraic.

This post will assume familiarity with the characteristic polynomial of a matrix, and not much more. (In particular, it uses none of the algebra from the previous posts.)

Define the set $\mathbb{Q}(\sqrt{2}, \sqrt{3}) = \{a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6} : a, b, c, d \in \mathbb{Q}\}$. We will think of this as a four-dimensional vector space, where the scalars are elements of $\mathbb{Q}$, and the basis is $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$. Every element can be *uniquely* expressed as $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}$, for $a, b, c, d \in \mathbb{Q}$.

We’re trying to prove $\sqrt{2} + \sqrt{3}$ is algebraic. Consider the linear transformation on $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ defined as “multiply by $\sqrt{2} + \sqrt{3}$”. In other words, consider the linear map $T$ which maps $v \mapsto (\sqrt{2} + \sqrt{3})v$. This is definitely a linear map, since it satisfies $T(v + w) = T(v) + T(w)$ and $T(cv) = cT(v)$. In particular, we should be able to represent it by a matrix.

What is the matrix of $T$? Well, $T(1) = \sqrt{2} + \sqrt{3}$, $T(\sqrt{2}) = 2 + \sqrt{6}$, $T(\sqrt{3}) = 3 + \sqrt{6}$, and $T(\sqrt{6}) = 3\sqrt{2} + 2\sqrt{3}$. Thus we can represent $T$ by the matrix

$$M = \begin{pmatrix} 0 & 2 & 3 & 0 \\ 1 & 0 & 0 & 3 \\ 1 & 0 & 0 & 2 \\ 0 & 1 & 1 & 0 \end{pmatrix}.$$

Now, the characteristic polynomial of this matrix, which is defined as $p(t) = \det(tI - M)$, is $p(t) = t^4 - 10t^2 + 1$, which has $\sqrt{2} + \sqrt{3}$ as a root. Thus $\sqrt{2} + \sqrt{3}$ is indeed algebraic.

Why does this work? The basic reason is the Cayley-Hamilton theorem. It tells us that $M$ should satisfy the characteristic polynomial: $p(M)$ is the zero matrix. But the matrix $p(M)$ should correspond to multiplication by $p(\sqrt{2} + \sqrt{3})$; thus $p(\sqrt{2} + \sqrt{3}) = 0$.
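Both claims can be double-checked with exact integer arithmetic (illustrative code, not from the original post): the matrix $M$ above satisfies $M^4 - 10M^2 + I = 0$, and $\sqrt{2} + \sqrt{3}$ is (numerically) a root of $t^4 - 10t^2 + 1$.

```python
# Verify the Cayley-Hamilton identity for M and the root numerically.
from math import sqrt

M = [[0, 2, 3, 0],
     [1, 0, 0, 3],
     [1, 0, 0, 2],
     [0, 1, 1, 0]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

M2 = matmul(M, M)
M4 = matmul(M2, M2)
I4 = [[int(i == j) for j in range(4)] for i in range(4)]
CH = [[M4[i][j] - 10*M2[i][j] + I4[i][j] for j in range(4)] for i in range(4)]
print(CH)  # the zero matrix

alpha = sqrt(2) + sqrt(3)
print(alpha**4 - 10*alpha**2 + 1)  # approximately 0
```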

Note that I chose $\sqrt{2} + \sqrt{3}$ randomly. I could have chosen any element of $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ and used this method to find a polynomial with rational coefficients having that element as a root.

At the end of the day, to prove that such a method always works requires the field theory we have glossed over: what $\mathbb{Q}(\alpha, \beta)$ is in general, why it is finite-dimensional, etc. This constructive method, which assumes the Cayley-Hamilton theorem, only replaces the non-constructive “linear dependence” argument in Proposition 4 of the original post.

Let $p(z) = a_n z^n + \cdots + a_1 z + a_0$ be a polynomial with complex coefficients. If $T$ is a linear map, define $p(T) = a_n T^n + \cdots + a_1 T + a_0 I$. We think of this as “$p$ evaluated at $T$”.

**Exercise**: Show that if $p(z) = q(z)r(z)$ as polynomials, then $p(T) = q(T)r(T)$.

**Theorem:** Every linear map $T$ on a nonzero finite-dimensional complex vector space has an eigenvalue.

*Proof*: Pick a random nonzero vector $v$. Consider the sequence of vectors $v, Tv, T^2v, \ldots, T^nv$, where $n$ is the dimension of the space. This is a set of $n + 1$ vectors, so they must be linearly dependent. Thus there exist constants $a_0, a_1, \ldots, a_n$, not all zero, such that $a_n T^n v + \cdots + a_1 T v + a_0 v = 0$.

Define $p(z) = a_n z^n + \cdots + a_1 z + a_0$. Then, we can factor $p(z) = c(z - \lambda_1)\cdots(z - \lambda_m)$ over $\mathbb{C}$. By the Exercise, this implies $c(T - \lambda_1 I)\cdots(T - \lambda_m I)v = p(T)v = 0$. So, at least one of the maps $T - \lambda_i I$ has a nontrivial kernel, so $T$ has an eigenvalue.

*Proof*: We want to show that there exists some $\lambda$ such that $T - \lambda I$ has nontrivial kernel: in other words, that $T - \lambda I$ is singular. A matrix is singular if and only if its determinant is zero. So, let $p(\lambda) = \det(T - \lambda I)$; this is a polynomial in $\lambda$, called the characteristic polynomial of $T$. Now, every nonconstant complex polynomial has a complex root, say $\lambda_0$. This implies $\det(T - \lambda_0 I) = 0$, so $T$ has an eigenvalue.
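The determinant-based proof really is a computation. Here is a sketch for a $2 \times 2$ example (illustrative, not from the original post): the characteristic polynomial is $\lambda^2 - \operatorname{tr}(T)\lambda + \det(T)$, and its complex roots are eigenvalues.

```python
# Find the eigenvalues of a 2x2 real matrix by solving its characteristic
# polynomial, then confirm det(T - lambda*I) vanishes at each root.
import cmath

T = [[2.0, 1.0], [-5.0, 0.0]]
tr = T[0][0] + T[1][1]
det = T[0][0]*T[1][1] - T[0][1]*T[1][0]
disc = cmath.sqrt(tr*tr - 4*det)
eigenvalues = [(tr + disc)/2, (tr - disc)/2]

for lam in eigenvalues:
    singular_det = (T[0][0] - lam)*(T[1][1] - lam) - T[0][1]*T[1][0]
    print(lam, abs(singular_det))  # det(T - lam*I) is approximately 0
```

Note that this matrix has no real eigenvalues; the roots $1 \pm 2i$ exist only over $\mathbb{C}$, which is exactly why the fundamental theorem of algebra enters the proof.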

To me, it seems like the determinant-based proof is more straightforward, although it requires more machinery. Also, the determinant-based proof is “constructive”, in that we can actually find all the eigenvalues by factoring the characteristic polynomial. On the subject of determinant-based vs determinant-free approaches to linear algebra, see Axler’s article “Down With Determinants!” [3].

There is a similar situation for the problem of showing that the sum (or product) of two algebraic numbers is algebraic. Here there is a non-constructive proof using “linear dependence” (which I attempted to describe in a previous post) and a constructive proof using the characteristic polynomial (which will hopefully be the subject of a future blog post). A further advantage of the determinant-based proof is that it can be used more generally to show that the sum and product of integral elements over a ring are integral. In this more general context, we no longer have linear dependence available.