# Hilbert’s Basis Theorem

Here is a proof of Hilbert’s Basis Theorem I thought of last night.

Let $A$ be a noetherian ring; we will show that the polynomial ring $A[x]$ is also noetherian. Consider an ideal $I$ in $A[x]$. Let $I_d$ be the ideal in $A$ generated by the leading coefficients of the polynomials of degree $d$ in $I$. Notice that $I_d \subseteq I_{d+1}$, since if $f \in I$ has degree $d$, then $xf \in I$, and it has the same leading coefficient. Thus we have an ascending chain $I_0 \subseteq I_1 \subseteq I_2 \subseteq \cdots$, which must terminate, since $A$ is noetherian. Suppose it terminates at $I_N$, so $I_N = I_{N+1} = \cdots$.

Now for each $d \le N$ choose a finite set of generators of $I_d$ (which we can do since $A$ is noetherian). For each generator, choose a representative polynomial of degree $d$ in $I$ with that leading coefficient. This gives us a finite collection of polynomials: define $J$ to be the ideal of $A[x]$ generated by these polynomials.

I claim $I = J$ (clearly $J \subseteq I$). Assume for the sake of contradiction that there is a polynomial $f$ of minimal degree (say $d$) which is in $I$ but not in $J$. If $d \le N$, the leading coefficient of $f$ lies in $I_d$, so there is an element $g$ of $J$ of degree $d$ with the same leading coefficient; then $f - g$ is in $I$ but not in $J$, and has degree smaller than $d$: contradiction. If $f$ is of degree $d > N$, then since $I_d = I_N$, there is an element $g$ of $J$ of degree $N$ which has the same leading coefficient as $f$. Thus $f - x^{d-N}g$ is of degree smaller than $d$ but is in $I$ and not in $J$: contradiction.

Thus $I = J$. Since an arbitrary ideal $I \subseteq A[x]$ is therefore finitely generated, this proves $A[x]$ is noetherian.

# Algebro-geometric proof of Cayley-Hamilton

Here is a sketch of a proof of the Cayley-Hamilton theorem via classical algebraic geometry.

The set of $n \times n$ matrices over an algebraically closed field $k$ can be identified with the affine space $\mathbb{A}^{n^2}$. Let $X$ be the subset of matrices that satisfy their own characteristic polynomial. We will prove that $X$ is in fact all of $\mathbb{A}^{n^2}$. Since affine space is irreducible, it suffices to show that $X$ is closed and contains a non-empty open set (in an irreducible space, every non-empty open set is dense).

Fix a matrix $A$. First, observe that the coefficients of the characteristic polynomial $p_A(t) = \det(tI - A)$ are polynomials in the entries of $A$. In particular, the condition that a matrix satisfy its own characteristic polynomial, $p_A(A) = 0$, amounts to a collection of polynomials in the entries of $A$ vanishing (one for each of the $n^2$ entries of $p_A(A)$). This establishes that $X$ is closed.

Let $U$ be the set of matrices that have $n$ distinct eigenvalues. A matrix has $n$ distinct eigenvalues if and only if its characteristic polynomial has no double roots when it splits. This occurs if and only if the discriminant of the characteristic polynomial is nonzero. The discriminant is a polynomial in the coefficients of the characteristic polynomial, and hence in the entries of the matrix. Thus the condition that a matrix have distinct eigenvalues amounts to a polynomial in the entries of the matrix not vanishing. Thus $U$ is open.

Finally, we have to show $U \subseteq X$ (note $U$ is non-empty: take a diagonal matrix with distinct diagonal entries). A matrix with distinct eigenvalues is diagonalizable, and it is easy to check the claim for a diagonal matrix $D$: $p_D(D)$ is diagonal with $i$-th entry $p_D(\lambda_i) = 0$. The general result follows from the fact that the determinant, and thus the characteristic polynomial, is basis-invariant: if $A = PDP^{-1}$, then $p_A(A) = P\,p_D(D)\,P^{-1} = 0$.
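As a quick computational sanity check (not part of the proof), here is a small Python snippet, with helper names of my own choosing, verifying that a $2 \times 2$ matrix satisfies its characteristic polynomial $t^2 - \operatorname{tr}(A)\,t + \det(A)$:

```python
# Sanity check of Cayley-Hamilton for 2x2 matrices: the characteristic
# polynomial is p_A(t) = t^2 - tr(A) t + det(A), and p_A(A) should be zero.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def char_poly_at_A(A):
    """Evaluate A^2 - tr(A)*A + det(A)*I for a 2x2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    A2 = mat_mul(A, A)
    return [[A2[i][j] - tr * A[i][j] + (det if i == j else 0)
             for j in range(2)] for i in range(2)]

print(char_poly_at_A([[1, 2], [3, 4]]))  # [[0, 0], [0, 0]]
```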

# On algebraic numbers

A complex number is algebraic if it is the root of some polynomial with rational coefficients. $\sqrt{2}$ is algebraic (e.g. a root of the polynomial $x^2 - 2$); $i$ is algebraic (e.g. a root of the polynomial $x^2 + 1$); $\pi$ and $e$ are not. A complex number that is not algebraic is called transcendental.

Is the sum of algebraic numbers always algebraic? What about the product of algebraic numbers? For example, given that $\sqrt{2}$ and $\sqrt{3}$ are algebraic, how do we know $\sqrt{2} + \sqrt{3}$ is also algebraic?

We can try to repeatedly square the equation $x = \sqrt{2} + \sqrt{3}$. This gives us $x^2 = 5 + 2\sqrt{6}$. Then isolating the radical, we have $x^2 - 5 = 2\sqrt{6}$. Squaring again, we get $x^4 - 10x^2 + 25 = 24$, so $\sqrt{2} + \sqrt{3}$ is a root of $x^4 - 10x^2 + 1$. This is in fact the unique monic polynomial of minimum degree that has $\sqrt{2} + \sqrt{3}$ as a root (called the minimal polynomial of $\sqrt{2} + \sqrt{3}$), which shows $\sqrt{2} + \sqrt{3}$ is algebraic. But a longer sum of radicals would have a minimal polynomial of much greater degree, and it would be much more complicated to construct this polynomial and verify the number is algebraic.
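As a quick numerical check (floating point, so approximate rather than a proof), we can confirm in Python that $\sqrt{2} + \sqrt{3}$ is a root of $x^4 - 10x^2 + 1$:

```python
import math

# Numerical check: sqrt(2) + sqrt(3) should be a root of x^4 - 10x^2 + 1.
x = math.sqrt(2) + math.sqrt(3)
print(abs(x**4 - 10 * x**2 + 1) < 1e-9)  # True
```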

It is also increasingly difficult to construct polynomials for nested radicals like $\sqrt{1 + \sqrt[3]{2}}$. And apparently there also exist algebraic numbers that cannot be expressed in radicals at all. This further complicates the problem.

In fact, the sum and product of algebraic numbers are algebraic; indeed, any number which can be obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. This means that a number like (to pick an arbitrary illustrative example)

$$\frac{\sqrt{2} + 17\sqrt{3} - \tfrac{4}{5}i}{6 - \sqrt{5} + \sqrt{7}}$$

is algebraic: there is some polynomial which has it as a root. But we will prove this result non-constructively; that is, we will prove that such a number must be algebraic, without providing an explicit process to actually obtain the polynomial. To establish this result, we will look for a deeper structure in the algebraic numbers, which is elucidated, perhaps surprisingly, using the tools of linear algebra.

Definition: Let $G$ be a set of complex numbers that contains $0$. We say $G$ is an abelian group if for all $a, b \in G$, $a + b \in G$ and $-a \in G$. In other words, $G$ is closed under addition and subtraction.

Some examples: $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$, the Gaussian integers (i.e. the set of expressions $a + bi$ where $a$ and $b$ are integers).

Definition: An abelian group $K$ is a field if it contains $1$ and for all $a, b \in K$, $ab \in K$, and if $a \neq 0$, $1/a \in K$. In other words, $K$ is closed under multiplication and division.

We generally use the letters $K$, $L$, $M$ for arbitrary fields. Of the previous examples, only $\mathbb{Q}$, $\mathbb{R}$, $\mathbb{C}$ are fields.

Exercise: Show that if $K$ is a field, $\mathbb{Q} \subseteq K$.

Exercise: Show that the set $\{a + b\sqrt{2} : a, b \in \mathbb{Q}\}$ is a field. We call this field $\mathbb{Q}(\sqrt{2})$. (Hint: rationalize the denominator.)
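For readers who like to compute: here is a short Python sketch of exact arithmetic in $\mathbb{Q}(\sqrt{2})$, representing $a + b\sqrt{2}$ as a pair of `Fraction`s; the `inv` helper (a name chosen here for illustration) implements the rationalize-the-denominator hint.

```python
from fractions import Fraction

# Elements of Q(sqrt(2)) represented exactly as pairs (a, b) meaning a + b*sqrt(2).

def mul(x, y):
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def inv(x):
    # Rationalize the denominator: 1/(a + b*sqrt2) = (a - b*sqrt2)/(a^2 - 2b^2).
    a, b = x
    n = a * a - 2 * b * b   # never zero for nonzero (a, b), since sqrt(2) is irrational
    return (a / n, -b / n)

x = (Fraction(1), Fraction(1))   # the element 1 + sqrt(2)
print(mul(x, inv(x)))            # (Fraction(1, 1), Fraction(0, 1)), i.e. the element 1
```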

Exercise: Describe the smallest field which contains both $\sqrt{2}$ and $\sqrt{3}$ (this is called $\mathbb{Q}(\sqrt{2}, \sqrt{3})$).

Generally, if $K$ is a field and $\alpha_1, \ldots, \alpha_n$ are some complex numbers, we will denote the smallest field that contains all of $K$ and the elements $\alpha_1, \ldots, \alpha_n$ as $K(\alpha_1, \ldots, \alpha_n)$.

Definition: Let $K$ be a field. A $K$-vector space is an abelian group $V$ such that if $c \in K$ and $v \in V$, then $cv \in V$. Intuitively, the elements of $V$ are closed under scaling by $K$. (We also sometimes use the phrase "$V$ is a vector space over $K$".)

Examples: $\mathbb{R}$ and $\mathbb{Q}(\sqrt{2})$ are both $\mathbb{Q}$-vector spaces. In fact, by the exercise above, every field is a $\mathbb{Q}$-vector space.

With this language in place, we can state the main goal of this post as follows:

Theorem: If $\alpha_1, \ldots, \alpha_n$ are algebraic numbers and $\beta \in \mathbb{Q}(\alpha_1, \ldots, \alpha_n)$, then $\beta$ is algebraic.

Note how this encompasses our previous claim that any number obtained by adding, subtracting, multiplying and dividing algebraic numbers is algebraic. For example, we can deduce that the giant fraction above is algebraic by applying the theorem to the field generated by the algebraic numbers appearing in it.

Definition: A field extension of a field $K$ is just a field $L$ that contains $K$. We will write this as "$L/K$ is a field extension".

Exercise: If $L/K$ is a field extension, then $L$ is a $K$-vector space.

Consider a field like $\mathbb{Q}(\sqrt{2})$. Not only is it a field, but it is also a $\mathbb{Q}$-vector space: its elements can be scaled by rational numbers. We want to adapt concepts from ordinary linear algebra to this setting. We want to think of $\mathbb{Q}(\sqrt{2})$ as a two-dimensional vector space, with basis $1$ and $\sqrt{2}$: every element can be uniquely written as a $\mathbb{Q}$-linear combination of $1$ and $\sqrt{2}$. Similarly, we would like to think of $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ as a $\mathbb{Q}$-vector space with basis $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$. Note that if we regard $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ as a $\mathbb{Q}(\sqrt{2})$-vector space, its basis is $1, \sqrt{3}$. This is because if an element is written as $a + b\sqrt{2} + c\sqrt{3} + d\sqrt{6}$, where the coefficients are rational, we can rewrite it as $(a + b\sqrt{2}) + (c + d\sqrt{2})\sqrt{3}$, now with coefficients in $\mathbb{Q}(\sqrt{2})$.

On the other hand, consider $\mathbb{Q}(\pi)$. Since $\pi$ is not algebraic, no linear combination (with rational coefficients) of the numbers $1, \pi, \pi^2, \pi^3, \ldots$ equals $0$ without all the coefficients being zero. This makes them linearly independent, and makes $\mathbb{Q}(\pi)$ an infinite-dimensional $\mathbb{Q}$-vector space.

This finite-versus-infinite dimensionality difference will lie at the crux of our argument. Here is a rough summary of the argument to prove the Theorem:

1. If we add an algebraic number $\alpha$ to a field $K$, we get a field $K(\alpha)$ that is a finite-dimensional vector space over $K$.
2. By repeatedly adding algebraic numbers to $\mathbb{Q}$, the resulting field will still be a finite-dimensional $\mathbb{Q}$-vector space.
3. Any element of a field which is a finite-dimensional $\mathbb{Q}$-vector space must be algebraic.

From here on, we will assume that the reader has some familiarity with ideas from linear algebra, such as the notions of linear combination, linear independence, and basis. We will only sketch proofs (maybe I will add details later). We would like to warn the reader that the proof sketches have many gaps and may be difficult to follow.

Definition: A field extension $L/K$ is finite if there is a finite set of elements $\gamma_1, \ldots, \gamma_n \in L$ such that every element of $L$ can be represented as $c_1\gamma_1 + \cdots + c_n\gamma_n$, where all the $c_i \in K$. We say the elements $\gamma_i$ generate (aka span) $L$ over $K$. If, furthermore, every element can be represented uniquely in this form, then we say $\gamma_1, \ldots, \gamma_n$ is a $K$-basis (or just basis) for $L$.

Examples: $\mathbb{Q}(\sqrt{2})/\mathbb{Q}$ and $\mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}$ are finite extensions, while $\mathbb{Q}(\pi)/\mathbb{Q}$ is not.

Exercise: Show $\mathbb{R}/\mathbb{Q}$ is not a finite extension. (Hint: $\mathbb{R}$ is uncountable.)

Proposition 1: Every finite field extension has a basis.

Proof sketch: This is a special case of a famous theorem in linear algebra: every finite spanning set contains a basis. Assume $\gamma_1, \ldots, \gamma_n$ generate $L$. Start with the last element. If $\gamma_n$ can be represented as a linear combination of $\gamma_1, \ldots, \gamma_{n-1}$, remove it from the list. If we removed $\gamma_n$, we now have $\gamma_1, \ldots, \gamma_{n-1}$, and we can repeat the process with $\gamma_{n-1}$. Continue this process until we cannot remove any more elements. Then (it can be checked that) we obtain a set whose elements are linearly independent. Then (it can be checked that) these form a $K$-basis for $L$.

For the next theorem, keep in mind the example of the extensions $\mathbb{Q}(\sqrt{2}, \sqrt{3})/\mathbb{Q}(\sqrt{2})$ and $\mathbb{Q}(\sqrt{2})/\mathbb{Q}$.

Proposition 2: Suppose $L/K$ and $M/L$ are finite field extensions. Then $M/K$ is a finite extension.

Proof: By Proposition 1, both these field extensions have bases. Label the $K$-basis of $L$ as $\alpha_1, \ldots, \alpha_m$, and the $L$-basis of $M$ as $\beta_1, \ldots, \beta_n$. Then every element of $M$ can be uniquely represented as $c_1\beta_1 + \cdots + c_n\beta_n$, for $c_j \in L$. Furthermore, each $c_j$ can be written uniquely as $c_j = d_{1j}\alpha_1 + \cdots + d_{mj}\alpha_m$ for $d_{ij} \in K$. Plugging these in for the $c_j$ shows that the elements of the form $\alpha_i\beta_j$ form a $K$-basis for $M$.

Proposition 3: If $\alpha$ is algebraic and $K$ is a field, then $K(\alpha)/K$ is a finite extension.

Proof sketch: Suppose $\alpha$ is algebraic. First we will show every element in $K(\alpha)$ is a polynomial in $\alpha$ with coefficients in $K$. $K(\alpha)$ consists of all numbers that can be obtained by repeatedly adding, subtracting, multiplying and dividing $\alpha$ and elements of $K$. By performing some algebraic manipulations and combining fractions, we can see that every element can be written in the form $p(\alpha)/q(\alpha)$, where $p$ and $q$ are polynomials with coefficients in $K$ and $q(\alpha) \neq 0$.

Now I claim that for any polynomial $q$ where $q(\alpha) \neq 0$, there exists a polynomial $r$ such that $1/q(\alpha) = r(\alpha)$. Let $m$ be the minimal polynomial of $\alpha$ over $K$. Since $m$ is irreducible and $m$ does not divide $q$ (as $q(\alpha) \neq 0$), $m$ and $q$ are relatively prime as polynomials. So there exist polynomials $s$ and $t$ such that $sm + tq = 1$. Plugging $\alpha$ into the polynomials gives us $s(\alpha)m(\alpha) + t(\alpha)q(\alpha) = 1$. Since $m(\alpha) = 0$, we get $t(\alpha)q(\alpha) = 1$, so $r = t$ works, as desired.

Thus any element in $K(\alpha)$, written as $p(\alpha)/q(\alpha)$, can be written as $p(\alpha)r(\alpha)$ for an appropriate polynomial $r$; in other words, every element of $K(\alpha)$ is a polynomial in $\alpha$ with coefficients in $K$. Now, suppose the minimal polynomial of $\alpha$ is $m(x) = x^d + c_{d-1}x^{d-1} + \cdots + c_0$. Then $\alpha^d = -(c_{d-1}\alpha^{d-1} + \cdots + c_0)$. So $\alpha^d$ (and, inductively, all higher powers of $\alpha$) can be written as $K$-linear combinations of $1, \alpha, \ldots, \alpha^{d-1}$. Then it can be verified that every element in $K(\alpha)$ can be written uniquely as a $K$-linear combination of $1, \alpha, \ldots, \alpha^{d-1}$. This establishes that $K(\alpha)/K$ is a finite extension; in particular, it has the basis $1, \alpha, \ldots, \alpha^{d-1}$.

Proposition 4: Suppose $L/\mathbb{Q}$ is a finite extension. Then any $\beta \in L$ is algebraic.

Proof sketch: Let $n$ be the dimension of $L$ as a $\mathbb{Q}$-vector space, and consider the elements $1, \beta, \beta^2, \ldots, \beta^n$. They cannot all be linearly independent, because in a finite-dimensional vector space of dimension $n$, any set with at least $n + 1$ elements must be linearly dependent. So there must be some linear combination of them which equals zero: $c_0 + c_1\beta + \cdots + c_k\beta^k = 0$, with rational $c_i$ not all zero. Letting the $c_i$ be the coefficients of a polynomial $p(x) = c_0 + c_1x + \cdots + c_kx^k$, we have $p(\beta) = 0$, so $\beta$ is algebraic.
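This argument becomes effective once a basis is known. The following Python sketch (helper names are mine) carries it out for $\beta = \sqrt{2} + \sqrt{3}$: it expresses the powers $1, \beta, \ldots, \beta^4$ exactly in the basis $1, \sqrt{2}, \sqrt{3}, \sqrt{6}$ and solves for the linear dependence, recovering the polynomial $x^4 - 10x^2 + 1$ from earlier in the post.

```python
from fractions import Fraction

# The powers 1, b, b^2, b^3, b^4 of b = sqrt(2) + sqrt(3) live in the
# 4-dimensional Q-vector space with basis {1, sqrt2, sqrt3, sqrt6}, so they
# must be linearly dependent; the dependence gives a polynomial with root b.

def mul(x, y):
    # Multiply two elements written in coordinates (a, b, c, d) = a + b*sqrt2 + c*sqrt3 + d*sqrt6.
    a1, b1, c1, d1 = x
    a2, b2, c2, d2 = y
    return (a1*a2 + 2*b1*b2 + 3*c1*c2 + 6*d1*d2,
            a1*b2 + b1*a2 + 3*(c1*d2 + d1*c2),
            a1*c2 + c1*a2 + 2*(b1*d2 + d1*b2),
            a1*d2 + d1*a2 + b1*c2 + c1*b2)

beta = (0, 1, 1, 0)                      # sqrt(2) + sqrt(3)
powers = [(1, 0, 0, 0)]                  # beta^0
for _ in range(4):
    powers.append(mul(powers[-1], beta))

# Solve c_0 + c_1*b + c_2*b^2 + c_3*b^3 = -b^4 coordinate-by-coordinate
# via Gauss-Jordan elimination over the rationals.
rows = [[Fraction(powers[k][i]) for k in range(4)] + [-Fraction(powers[4][i])]
        for i in range(4)]
for col in range(4):
    pivot = next(r for r in range(col, 4) if rows[r][col] != 0)
    rows[col], rows[pivot] = rows[pivot], rows[col]
    piv = rows[col][col]
    rows[col] = [v / piv for v in rows[col]]
    for r in range(4):
        if r != col:
            factor = rows[r][col]
            rows[r] = [u - factor * v for u, v in zip(rows[r], rows[col])]

coeffs = [rows[i][4] for i in range(4)] + [Fraction(1)]
print(coeffs)  # coefficients of 1 + 0x - 10x^2 + 0x^3 + x^4
```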

We have developed enough theory now to prove the result.

Proof of main Theorem: Let $\alpha_1, \ldots, \alpha_n$ be some collection of algebraic numbers. Then by Proposition 3, $\mathbb{Q}(\alpha_1)/\mathbb{Q}$, $\mathbb{Q}(\alpha_1, \alpha_2)/\mathbb{Q}(\alpha_1)$, ..., $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)/\mathbb{Q}(\alpha_1, \ldots, \alpha_{n-1})$ are all finite field extensions. Applying Proposition 2 repeatedly gives us that $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)/\mathbb{Q}$ is a finite extension. Then by Proposition 4, any element of $\mathbb{Q}(\alpha_1, \ldots, \alpha_n)$ is algebraic.

This theorem essentially says that given a collection of algebraic numbers, by repeatedly adding, subtracting, multiplying and dividing them, we can only obtain algebraic numbers. But what about radicals? For example, we have now established that $\sqrt{2} + \sqrt{3}$ is algebraic. Is $\sqrt{\sqrt{2} + \sqrt{3}}$ algebraic as well?

In fact, we can generalize our result to the following: any number obtained by repeatedly adding, subtracting, multiplying, dividing algebraic numbers, as well as taking $n$-th roots (for positive integers $n$), will be algebraic.

This is not too hard now that we have the main theorem. It suffices to show that if $\alpha$ is algebraic, then $\sqrt[n]{\alpha}$ is algebraic. If $\alpha$ is algebraic, then there exist rational constants $c_0, \ldots, c_k$, not all zero, such that $c_k\alpha^k + \cdots + c_1\alpha + c_0 = 0$. This can be rewritten as $c_k(\sqrt[n]{\alpha})^{nk} + \cdots + c_1(\sqrt[n]{\alpha})^n + c_0 = 0$. Thus $\sqrt[n]{\alpha}$ is a root of $p(x^n)$, where $p(x) = c_kx^k + \cdots + c_0$, so $\sqrt[n]{\alpha}$ is algebraic.
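The substitution $x \mapsto x^n$ just spreads out the coefficients of $p$, which is easy to carry out (and check numerically) in Python; the function name below is my own.

```python
import math

# If p(alpha) = 0, then q(x) = p(x^n) has alpha^(1/n) as a root, since
# q(alpha^(1/n)) = p(alpha) = 0. Building q just spreads out p's coefficients.

def nth_root_poly(coeffs, n):
    """coeffs[i] is the coefficient of x^i in p; return the coefficients of p(x^n)."""
    q = [0] * (n * (len(coeffs) - 1) + 1)
    for i, c in enumerate(coeffs):
        q[n * i] = c
    return q

# p = x^4 - 10x^2 + 1 has root sqrt(2) + sqrt(3), so q = p(x^2) has root
# sqrt(sqrt(2) + sqrt(3)).
q = nth_root_poly([1, 0, -10, 0, 1], 2)
r = math.sqrt(math.sqrt(2) + math.sqrt(3))
print(abs(sum(c * r**i for i, c in enumerate(q))) < 1e-9)  # True
```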

# “Locally of finite type” is a local property

Previously we showed that morphisms locally of finite type are preserved under base change. We can use this to show that

$(\star)$ Given a morphism of schemes $f : X \to Y$ locally of finite type, the preimage of any affine $\operatorname{Spec} B \subseteq Y$ can be covered by affines $\operatorname{Spec} A_i$ such that the corresponding ring maps $B \to A_i$ are of finite type.

Alternatively, if we define a morphism locally of finite type to be one that satisfies $(\star)$, then what we are saying is that such a property can be checked on a cover; we can replace "any affine" with "an affine in a cover of affines".

Let's try to prove $(\star)$. First, we base change to $\operatorname{Spec} B$. Since the morphism $f^{-1}(\operatorname{Spec} B) \to \operatorname{Spec} B$ is also locally of finite type, we can cover $\operatorname{Spec} B$ by affines $\operatorname{Spec} B_i$ such that their preimages can be covered by the spectra of finitely-generated $B_i$-algebras $A_{ij}$. However, we don't know if these are finitely-generated $B$-algebras! To fix this, we base change to even smaller affines. Cover each $\operatorname{Spec} B_i$ by basic open sets $\operatorname{Spec} B_g$ (with $g \in B$). This gives us a cover of each $\operatorname{Spec} A_{ij}$ by basic open sets of the form $\operatorname{Spec} (A_{ij})_g$. Since $B_i \to A_{ij}$ is of finite type, $B_g \to (A_{ij})_g$ is of finite type. Since $B \to B_g$ is clearly of finite type, the composite $B \to (A_{ij})_g$ is of finite type, giving us the desired cover of $f^{-1}(\operatorname{Spec} B)$. The following diagram may be illustrative (every square is a pullback).

# Group schemes and graded rings

In this post we will describe how an action of the multiplicative group scheme $\mathbb{G}_m$ on $\operatorname{Spec} R$ defines a $\mathbb{Z}$-grading of $R$. A future post may describe how this relates to projective schemes. (I will do all of this using diagrams, but there may be some easier way using the functor of points.) All this was taught to me by Mark Haiman in Math 256B (Algebraic Geometry) at UC Berkeley.

Fix a field $k$; we will work in the category of $k$-schemes. Thus $R$ will be a $k$-algebra, and we will establish a graded $k$-algebra structure on $R$. However, none of our arguments change if we just let $k$ be $\mathbb{Z}$. A group scheme is a group object in the category of $k$-schemes. A precise definition can be found here. Most importantly, group schemes can act on other schemes. The definition of a group scheme action can be found here. Note that all definitions are given by diagrams (or the functor of points). For example, we specify the "identity element" of a group scheme $G$ by a map $e : \operatorname{Spec} k \to G$, rather than selecting some point in the underlying topological space.

$\mathbb{G}_m$ is defined as $\operatorname{Spec} k[t, t^{-1}]$ (for shorthand, we will write $k[t, t^{-1}]$ as $k[t^{\pm 1}]$). As a variety, it can be thought of as $\mathbb{A}^1 \setminus \{0\}$, the "punctured affine line". Its group operation is given by a map $\mu : \mathbb{G}_m \times \mathbb{G}_m \to \mathbb{G}_m$ which corresponds to the $k$-algebra map $k[t^{\pm 1}] \to k[t^{\pm 1}] \otimes_k k[t^{\pm 1}]$ defined by $t \mapsto t \otimes t$. The identity is given by a map $\operatorname{Spec} k \to \mathbb{G}_m$ corresponding to the map $k[t^{\pm 1}] \to k$ defined by $t \mapsto 1$.

Suppose $\mathbb{G}_m$ acts on $X = \operatorname{Spec} R$. The action map $\mathbb{G}_m \times X \to X$ corresponds to a $k$-algebra map $\alpha : R \to R \otimes_k k[t^{\pm 1}] = R[t^{\pm 1}]$ such that the following diagrams commute:

Associativity:

$$\begin{array}{ccc}
R & \xrightarrow{\;\alpha\;} & R[t_2^{\pm 1}] \\
{\scriptstyle \alpha}\big\downarrow & & \big\downarrow{\scriptstyle \alpha \otimes \mathrm{id}} \\
R[t^{\pm 1}] & \xrightarrow{\;t \mapsto t_1 t_2\;} & R[t_1^{\pm 1}, t_2^{\pm 1}]
\end{array}$$

Identity:

$$\begin{array}{ccc}
R & \xrightarrow{\;\alpha\;} & R[t^{\pm 1}] \\
& {\scriptstyle \mathrm{id}}\searrow & \big\downarrow{\scriptstyle t \mapsto 1} \\
& & R
\end{array}$$

For $r \in R$, write $\alpha(r) = \sum_d a_d(r)\,t^d$, where almost all the $a_d(r)$ are zero. Then the first diagram implies that

$\alpha(a_d(r)) = a_d(r)\,t^d$ (i.e. the polynomial $\alpha(a_d(r))$ is just a single monomial, of degree $d$).

This is because, along the top and right arrows, we have $r \mapsto \sum_d a_d(r)\,t_2^d \mapsto \sum_{d,e} a_e(a_d(r))\,t_1^e t_2^d$, and along the left and bottom arrows we have $r \mapsto \sum_d a_d(r)\,t^d \mapsto \sum_d a_d(r)\,t_1^d t_2^d$; comparing coefficients of $t_2^d$ gives the claim. Furthermore, the second diagram says that

for all $r \in R$, $\sum_d a_d(r) = r$.

Therefore, letting $T_d$ stand for the degree $d$ homogeneous component of $R[t^{\pm 1}]$ (so that it consists of multiples of $t^d$), let $R_d = \alpha^{-1}(T_d)$. Since the $T_d$ intersect only in $0$, the $R_d$ intersect only in $0$ as well. Furthermore, for an arbitrary element $r \in R$, we have $r = \sum_d a_d(r)$ by the second diagram, and by the first diagram, each $a_d(r) \in R_d$. Thus $R = \bigoplus_d R_d$ as a direct sum.

It remains to show that $R_d R_e \subseteq R_{d+e}$. But this is easy: note first that if $r \in R_d$, the second diagram forces $\alpha(r) = r\,t^d$. So if $r \in R_d$ and $s \in R_e$, then $\alpha(rs) = \alpha(r)\alpha(s) = rs\,t^{d+e}$, so $rs \in R_{d+e}$ as desired.

# Odd and even permutations

A standard part of the theory of permutations is the classification of permutations into "odd" and "even" types. In this post I will develop the theory of odd and even permutations, focusing on the role of adjacent transpositions.

Let $X$ be a finite ordered set. A permutation of $X$ is a rearrangement of $X$; formally, it is a bijection $\sigma : X \to X$. For simplicity, we will label the elements of $X$ as $1, 2, \ldots, n$, and we will represent a permutation $\sigma$ by the list $[\sigma(1), \sigma(2), \ldots, \sigma(n)]$. Thus if $X$ has five elements, then the permutation that reverses $X$ can be written as $[5, 4, 3, 2, 1]$.

The set of permutations of $X$ naturally forms a group. That is, given two permutations $\sigma$ and $\tau$, we can form their product $\sigma \circ \tau$ (or $\sigma\tau$ for short), which is defined by performing $\tau$ first and then $\sigma$, following the convention for composition of functions. For example, the product of the permutations $[2, 1, 3]$ and $[1, 3, 2]$ is $[2, 3, 1]$.

Here are some standard permutations. The identity permutation is the permutation that does nothing. A transposition is a permutation that swaps two elements. If a transposition swaps $i$ and $j$, we will denote it as $(i\ j)$. An adjacent transposition is a transposition that swaps two adjacent elements (so it is denoted $(i\ i{+}1)$).

An important property of a permutation is its inversion number. An inversion of a permutation $\sigma$ is a pair $(i, j)$ where $i < j$ such that $\sigma(i) > \sigma(j)$. In other words, it is a pair of elements whose relative order has been switched by the permutation. The inversion number of $\sigma$ is the size of its set of inversions. Thus the inversion number is at most $\binom{n}{2}$ (this occurs when the permutation reverses $X$). A permutation has inversion number $0$ if and only if it is the identity permutation. Also, a permutation has inversion number $1$ if and only if it is an adjacent transposition.
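Counting inversions is straightforward to do by brute force; this Python sketch (function names are mine) checks the facts just stated on small examples.

```python
from itertools import combinations

# Inversion number and parity of a permutation in list form
# (perm[i] is the image of i+1, matching the notation in this post).

def inversions(perm):
    return sum(1 for i, j in combinations(range(len(perm)), 2)
               if perm[i] > perm[j])

def parity(perm):
    return "even" if inversions(perm) % 2 == 0 else "odd"

print(inversions([5, 4, 3, 2, 1]))  # 10, the maximum binom(5, 2), for the reversal
print(inversions([1, 2, 3, 4, 5]))  # 0, the identity
print(inversions([1, 3, 2, 4, 5]))  # 1, an adjacent transposition
print(parity([5, 4, 3, 2, 1]))      # even
```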

A permutation is called odd if its inversion number is odd, and even if its inversion number is even. We would like to show that the product of odd and even permutations behaves like addition of odd and even numbers. To show this, we will use the following lemma:

Every permutation is a product of adjacent transpositions.

This can be proved by using insertion sort, an algorithm which effectively writes any permutation as a product of adjacent transpositions.

Here is how it works. Consider, for example, the permutation $\sigma$ given by $[2, 3, 1, 5, 4]$. Let us try to sort this list. Insertion sort first moves $1$ to the front by repeated swaps, first swapping the second and third positions and then the first and second positions. Then we get $[1, 2, 3, 5, 4]$. Then insertion sort would swap the fourth and fifth positions, and then finish with $[1, 2, 3, 4, 5]$. So, if we do these transpositions in reverse, we get our original permutation. That is, $\sigma = (4\ 5)(1\ 2)(2\ 3)$. We can see that using this process, insertion sort can write any permutation as a product of adjacent transpositions.
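This insertion-sort procedure is easy to mechanize. The following Python sketch (names are mine) records the adjacent swaps performed while sorting and reverses them, recovering a decomposition of the permutation:

```python
# Record the adjacent swaps performed by insertion sort, then reverse them to
# get a decomposition of the permutation into adjacent transpositions.

def adjacent_transpositions(perm):
    lst = list(perm)
    swaps = []
    for i in range(1, len(lst)):
        j = i
        while j > 0 and lst[j - 1] > lst[j]:
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            swaps.append((j, j + 1))   # swapped positions j and j+1 (1-indexed)
            j -= 1
    return swaps[::-1]                 # undoing the sort rebuilds the permutation

print(adjacent_transpositions([2, 3, 1, 5, 4]))  # [(4, 5), (1, 2), (2, 3)]
```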

We need one more lemma:

If $\sigma$ is a permutation and $\tau$ an adjacent transposition, then $\sigma\tau$ and $\sigma$ have opposite parity. That is, one is odd and one is even.

Here is why. The list for $\sigma\tau$ is obtained from the list for $\sigma$ by swapping two adjacent entries $a$ and $b$; the only possible change to the set of inversions involves the pair $\{a, b\}$, since neither $a$ nor $b$ has moved relative to any other element. So, the only possible change is that $\{a, b\}$ was an inversion and no longer is, or it wasn't an inversion and now is. So the inversion number has either been increased or decreased by one, and hence the parity has been switched.

So, if a permutation $\sigma$ is written as a product of $k$ adjacent transpositions, then its parity is equal to the parity of $k$. This establishes that the product of odd and even permutations behaves like addition of odd and even numbers (formally, what we are saying is that "parity" defines a homomorphism from the permutation group to $\mathbb{Z}/2\mathbb{Z}$).

Another interesting fact is that every transposition is odd. We can see this (without needing to count inversions) by noticing that we can obtain any transposition $(i\ j)$ (with $i < j$) by repeatedly swapping adjacent elements till the element in place $i$ is moved to place $j$, and then repeatedly swapping in the other direction till the element originally in place $j$ is moved to place $i$. If we write this out (with $s_k$ denoting the adjacent transposition $(k\ k{+}1)$) we see that $(i\ j) = s_i s_{i+1} \cdots s_{j-2}\, s_{j-1}\, s_{j-2} \cdots s_{i+1} s_i$, which has $2(j - i) - 1$ terms, an odd number.
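We can verify this "up and back down" factorization by brute force in Python (helper names are mine):

```python
# Verify that the transposition (i j) equals a product of 2*(j - i) - 1
# adjacent transpositions, using 1-indexed list form for permutations.

def compose(p, q):
    # (p q)(k) = p(q(k))
    return [p[q[k] - 1] for k in range(len(p))]

def adjacent(n, i):
    p = list(range(1, n + 1))
    p[i - 1], p[i] = p[i], p[i - 1]   # the adjacent transposition (i i+1)
    return p

def up_and_down(n, i, j):
    # s_i s_{i+1} ... s_{j-2} s_{j-1} s_{j-2} ... s_{i+1} s_i
    seq = list(range(i, j)) + list(range(j - 2, i - 1, -1))
    result = list(range(1, n + 1))
    for k in seq:
        result = compose(result, adjacent(n, k))
    return result, len(seq)

perm, length = up_and_down(5, 2, 5)
print(perm)    # [1, 5, 3, 4, 2], i.e. the transposition (2 5)
print(length)  # 5 == 2*(5 - 2) - 1, an odd number
```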

Thus every permutation can be written either only as a product of an odd number of transpositions, or only as a product of an even number of transpositions. Since the identity permutation is clearly even, we have the following remarkable fact: given any list, it is impossible, after performing an odd number of swaps, to obtain the original list.

Acknowledgements: Thanks to Anton Cao for suggestions. Thanks to Jessica Meng for reminding me the correct convention for composition order.