wu :: forums (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi)
riddles >> putnam exam (pure math) >> Conjugacy in a Representation
(Message started by: william wu on Mar 10th, 2004, 10:36am)

Title: Conjugacy in a Representation
Post by william wu on Mar 10th, 2004, 10:36am
Let

[rho] : G [to] GL(n,[bbr])


be a homomorphism from a group G to the general linear group (the group of invertible n[times]n matrices with real entries).

Let  

[chi](g) = tr([rho](g))


where g [in] G, and tr() is the trace function, which returns the sum of the diagonal entries of its matrix argument.


Prove

[chi](gxg^-1) = [chi](x)


where g,x [in] G.
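As a concrete sanity check (not a proof), one can test the claim on the permutation representation of S_3: [rho]([sigma]) is the 3[times]3 permutation matrix of [sigma], so [chi]([sigma]) is the number of fixed points of [sigma]. The helper names below are ad hoc, not from the thread:

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples s with s[i] = s(i).
def compose(s, t):
    # (s o t)(i) = s(t(i))
    return tuple(s[t[i]] for i in range(len(t)))

def inverse(s):
    inv = [0] * len(s)
    for i, si in enumerate(s):
        inv[si] = i
    return tuple(inv)

def chi(s):
    # trace of the permutation matrix of s = number of fixed points
    return sum(1 for i, si in enumerate(s) if si == i)

S3 = list(permutations(range(3)))
for g in S3:
    for x in S3:
        conj = compose(compose(g, x), inverse(g))   # g x g^-1
        assert chi(conj) == chi(x)
```

Conjugate permutations share a cycle type, so they fix the same number of points, which is exactly the trace identity in this representation.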

Title: Re: Conjugacy in a Representation
Post by Pietro K.C. on Mar 10th, 2004, 5:35pm
Don't look in the hidden portions unless you want hints!

This problem is rather straightforwardly solvable using [hide]standard results like the invariance of the tr() operator under similarity transformations[/hide], which in itself [hide]is not that hard to demonstrate[/hide].

Is there some really clever way to solve this, that does not require what I said above?

Title: Re: Conjugacy in a Representation
Post by Icarus on Mar 10th, 2004, 8:04pm
You can probably get it from [hide] tr(A) = d/dt|0 det(I + tA).[/hide]

It also drops out very sweetly from the theory of the exterior product. But developing that theory is a significant effort.

[e]I had the sign wrong on the formula originally[/e]

Title: Re: Conjugacy in a Representation
Post by william wu on Mar 10th, 2004, 8:15pm
You can solve it using the definition of homomorphism, and the following somewhat simpler fact about matrix trace: ::[hide]tr(AB) = tr(BA)[/hide]::
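A quick numeric check of that hidden fact (not a proof), with matrices as plain nested lists; matmul and trace are throwaway helpers:

```python
def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

A = [[1, 2], [3, 4]]
B = [[0, 5], [6, 7]]
assert trace(matmul(A, B)) == trace(matmul(B, A))   # both equal 55
```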

Title: Re: Conjugacy in a Representation
Post by Icarus on Mar 11th, 2004, 6:00pm
True. I still prefer my formula though, because mine can be taken as the definition of trace, whereas yours is just a property of it.

I'm not going to bother with hiding, so if anyone wants to figure this out themselves, they had best stop reading here.




Note: gl(n, [bbr]) = the algebra of all n[times]n real matrices. GL(n, [bbr]) = the multiplicative group of all invertible matrices in gl(n, [bbr]).

Lemma: for A [in] gl(n, [bbr]), B [in] GL(n, [bbr]), tr(BAB^-1) = tr(A).
Proof: I offer two proofs. Using William's formula, this is trivial:
tr(BAB^-1) = tr(B^-1BA) = tr(A).
Using the defining formula tr(A) = d/dt|0 det(I + tA), it is not much harder:
det(I + tBAB^-1) = det(B(I + tA)B^-1) = det(B)det(I + tA)det(B)^-1 = det(I + tA).
Differentiating and evaluating at t = 0 gives tr(BAB^-1) = tr(A).
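The lemma is also easy to confirm numerically; here is a sketch in exact rational arithmetic using the 2[times]2 adjugate inverse formula (all names are local helpers):

```python
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

def inv2(B):
    # inverse of a 2x2 matrix via the adjugate formula
    det = B[0][0] * B[1][1] - B[0][1] * B[1][0]
    return [[ B[1][1] / det, -B[0][1] / det],
            [-B[1][0] / det,  B[0][0] / det]]

A = [[F(1), F(2)], [F(3), F(4)]]
B = [[F(2), F(1)], [F(1), F(1)]]   # det(B) = 1, so B is invertible
assert trace(matmul(matmul(B, A), inv2(B))) == trace(A)   # both equal 5
```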

With the lemma, the rest is obvious.
[chi](gxg^-1) = tr([rho](gxg^-1)) = tr([rho](g)[rho](x)[rho](g^-1)) = tr([rho](g)[rho](x)[rho](g)^-1) = tr([rho](x)) = [chi](x).



Some facts about determinants, traces, and the other characteristic invariants of linear maps:

Let V be an n-dimensional vector space (over any field [bbf]). An alternating k-linear function on V is a map f : V^k [to] [bbf] such that
   (i) f is linear in each of its parameters: for all i [le] k, (v_1, ..., v_k) [in] V^k, v_i' [in] V, and a, b [in] [bbf],
f(v_1, ..., v_{i-1}, av_i + bv_i', v_{i+1}, ..., v_k) = af(v_1, ..., v_i, ..., v_k) + bf(v_1, ..., v_i', ..., v_k).

   (ii) exchanging any two parameters in f negates the value: for all i, j,
f(..., v_i, ..., v_j, ...) = -f(..., v_j, ..., v_i, ...).
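For a concrete instance of this definition, the Leibniz sum over permutations gives an alternating n-linear function on integer row vectors; the sketch below checks property (i) in the first slot and property (ii) for a swap (alt and sign are ad hoc names):

```python
from itertools import permutations
from math import prod

def sign(p):
    # parity of a permutation, computed by cycle-sorting
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def alt(*vectors):
    # the Leibniz alternating n-linear form: signed sum over permutations
    n = len(vectors)
    return sum(sign(p) * prod(vectors[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

u, v, w = (1, 0, 2), (0, 1, 5), (3, 4, 1)
assert alt(u, v, w) == -alt(v, u, w)   # (ii): swapping two slots negates
assert alt(w, v, w) == 0               # a repeated argument kills the value
a, b = 2, 3
lin = tuple(a * u[i] + b * w[i] for i in range(3))
assert alt(lin, v, w) == a * alt(u, v, w) + b * alt(w, v, w)   # (i)
```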



It is easy to see that if f and g are alternating k-linear maps, then so is af + bg for any a, b [in] [bbf]. Hence the set A(k, V) of alternating k-linear maps forms a vector space. Somewhat harder, but still straightforward, is this result:

Lemma: If V is an n-dimensional vector space, then A(n, V) is 1-dimensional.

Proof: try it yourself.

To make the reading easier, let me introduce this notational short-hand: if v [in] V^n, v = (v_1, ..., v_n), and L : V [to] V is a linear map, then define
Lv = (Lv_1, ..., Lv_n).


An easier property of alternating k-linear maps is:

Lemma: If V, W are both vector spaces over [bbf], and if L : V [to] W is a linear map and k [in] [bbn], then L induces a map L : A(k, W) [to] A(k, V) by: for all f [in] A(k, W),
Lf(v) = f(Lv).

Further, L : A(k, W) [to] A(k, V) is linear.

Proof: this one should be obvious.

Now we put it together:
Let L be a linear map from an n-dimensional vector space V to itself. L induces a linear map L : A(n, V) [to] A(n, V). But A(n, V) is one-dimensional. The only linear maps on one-dimensional vector spaces are multiplication by a scalar. I.e., there is some d [in] [bbf] such that Lf = df for all f [in] A(n, V).

Definition: the "determinant" of a linear map L : V [to] V, where dimension of V = n, is the unique scalar det(L) = d such that Lf = df for all f [in] A(n, V).
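This definition can be exercised directly: take f to be the Leibniz alternating form on [bbr]^3, push the standard basis through a matrix L, and read off the scalar. Everything below is an illustrative sketch, not part of the argument:

```python
from itertools import permutations
from math import prod

def sign(p):
    # parity of a permutation, computed by cycle-sorting
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def f(*vectors):
    # a nonzero element of A(n, V): the Leibniz alternating form
    n = len(vectors)
    return sum(sign(p) * prod(vectors[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def apply(L, v):
    # matrix-vector product Lv
    n = len(v)
    return tuple(sum(L[i][j] * v[j] for j in range(n)) for i in range(n))

L = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
# (Lf)(e_1, ..., e_n) = f(Le_1, ..., Le_n) = det(L) * f(e_1, ..., e_n)
det_L = f(*[apply(L, ei) for ei in e]) // f(*e)
```

Here f(*e) = 1, so det_L is exactly the scalar by which the induced map multiplies f.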

The characteristic polynomial of L is det(tI - L) = t^n - c_1t^{n-1} + c_2t^{n-2} - ... + (-1)^nc_n. Call c_k the "kth characteristic invariant of L".
In particular, the nth invariant is the determinant: setting t = 0 gives det(-L) = (-1)^n det(L) = (-1)^n c_n.
The 1st invariant is called the "trace", denoted by tr(L).
Equivalently, tr(L) = d/dt|0 det(I + tL), since det(I + tL) = 1 + tr(L)t + ... + det(L)t^n.
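These relations can be checked exactly for a small example by expanding det(tI - L) with polynomial entries (polynomials as coefficient lists, constant term first; all helpers are ad hoc):

```python
from itertools import permutations

def padd(p, q):
    # add two coefficient lists
    m = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0)
            for i in range(m)]

def pmul(p, q):
    # multiply two coefficient lists
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def sign(p):
    # parity of a permutation, computed by cycle-sorting
    s, p = 1, list(p)
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

def charpoly(L):
    # Leibniz expansion of det(tI - L); entry (i, j) is the degree-<=1
    # polynomial t*delta_ij - L[i][j], stored as [-L[i][j], delta_ij]
    n = len(L)
    M = [[[-L[i][j], 1 if i == j else 0] for j in range(n)] for i in range(n)]
    total = [0]
    for p in permutations(range(n)):
        term = [sign(p)]
        for i in range(n):
            term = pmul(term, M[i][p[i]])
        total = padd(total, term)
    return total   # [c_0, c_1, ..., 1], constant term first

L = [[2, 1, 0], [0, 3, 1], [1, 0, 1]]
coeffs = charpoly(L)            # det(tI - L) = t^3 - 6t^2 + 11t - 7
trace_L = -coeffs[2]            # -(coefficient of t^(n-1)) = tr(L) = 6
det_L = (-1) ** 3 * coeffs[0]   # (-1)^n * constant term = det(L) = 7
```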


Why go to this trouble? Note that I have defined the determinant and trace of an arbitrary linear map without once bringing in a basis or talking about matrices and their components. In this form it is obvious that the determinant is a fundamental property of the linear map itself: its value is independent of the basis one uses to evaluate it in the usual fashion. Also, the basic properties of determinants and traces are considerably less ugly to prove this way than by chasing indices through the usual coordinate formula. For example:

Theorem: Let L, H : V [to] V be linear maps. Then det(LH) = det(L)det(H).

Proof: For v [in] V^n, f [in] A(n, V), ((LH)f)(v) = f(LHv) = (Lf)(Hv) = (H(Lf))(v).
Hence (LH)f = H(Lf) = H(det(L)f) = det(L)Hf = det(L)det(H)f.
So, det(LH) = det(L)det(H).

Clearly much nicer than the proof you commonly see!
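And the theorem itself is cheap to spot-check numerically (det2 is just the usual 2[times]2 formula, used only for this check):

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

L = [[1, 2], [3, 4]]
H = [[0, 1], [1, 1]]
assert det2(matmul(L, H)) == det2(L) * det2(H)   # 2 == (-2) * (-1)
```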

The proof of William's exchange formula, tr(LH) = tr(HL), is only a little harder. If at least one of the maps, say L, is invertible, it follows from the lemma at the top of the post (proved there by the second means): tr(LHL^-1) = tr(H). Now use HL in place of H:
tr(HL) = tr(L(HL)L^-1) = tr(LH).

But the result also holds when neither map is invertible. I leave that proof as a challenge (i.e., prove it by the definitions and methods shown here, rather than by choosing a basis and converting everything to matrices).
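For what it is worth, here is a numeric spot-check with two singular (rank-1) matrices, where the invertibility argument does not apply:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

L = [[1, 2], [2, 4]]   # det(L) = 0
H = [[3, 6], [1, 2]]   # det(H) = 0
assert trace(matmul(L, H)) == trace(matmul(H, L))   # both equal 25
```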



Powered by YaBB 1 Gold - SP 1.4!
Forum software copyright © 2000-2004 Yet another Bulletin Board