Topic: Sum of Matrices (9/3/2002) (Read 1181 times)

william wu
wu::riddles Administrator
Posts: 1291

Sum of Matrices (9/3/2002)
« on: Sep 3rd, 2002, 6:43pm »
Suppose a set of m real n-by-n matrices has a nonsingular (invertible) sum. Show that some subset of at most n of those matrices also has a nonsingular sum. (Not an easy problem.)
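Before attacking it, one can at least test the claim numerically. The following Python sketch (using numpy; the sizes n = 3 and m = 7 and the use of rank-one summands are arbitrary choices) draws random matrices whose total sum is nonsingular and then brute-forces a subset of size at most n with a nonsingular sum. This only gathers evidence, of course, not a proof.

Code:
import itertools
import numpy as np

def find_small_nonsingular_subset(matrices, n):
    # search all subsets of size <= n for one whose sum is nonsingular
    for size in range(1, n + 1):
        for subset in itertools.combinations(range(len(matrices)), size):
            s = sum(matrices[i] for i in subset)
            if abs(np.linalg.det(s)) > 1e-9:      # numerically nonsingular
                return subset
    return None

rng = np.random.default_rng(0)
n, m = 3, 7
while True:
    # random rank-one summands; retry until their total sum is nonsingular
    mats = [np.outer(rng.standard_normal(n), rng.standard_normal(n)) for _ in range(m)]
    if abs(np.linalg.det(sum(mats))) > 1e-9:
        break
print(find_small_nonsingular_subset(mats, n))     # some subset of size <= n, e.g. (0, 1, 2)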
« Last Edit: Sep 16th, 2002, 1:57pm by william wu »
[ wu ] : http://wuriddles.com / http://forums.wuriddles.com
Pietro K.C.
Full Member
Posts: 213

Re: Sum of Matrices
« Reply #1 on: Sep 6th, 2002, 12:18pm »
Still working on it... haven't had a chance to really get going yet.

For the case n = 2, let us represent a matrix by (v1, v2), where v1 and v2 are 2-dimensional column vectors. For instance, if a matrix A is given by

  A = [ a  b ]
      [ c  d ]

then v1 = (a, c) and v2 = (b, d).

We induct on m, the number of matrices in the sum. For m = 1, the matrix itself is nonsingular, so the theorem is true. Now suppose that the m matrices A1, ..., Am sum to a nonsingular matrix N, so that

  (v(11), v(21)) + ... + (v(1m), v(2m)) = (n1, n2),

where n1 is not a multiple of n2 and neither one is zero.

Let Am be the matrix of greatest rank among A1, ..., Am (relabeling if necessary). If its rank is two, then it alone is a nonsingular matrix, and we are done. If its rank is zero, all of A1, ..., Am are the zero matrix and therefore cannot sum to N. We may also assume that none of A1, ..., Am is the zero matrix, since dropping a zero matrix reduces the problem to fewer matrices, which the induction hypothesis takes care of.

So let Am's rank be one, and without loss of generality let v(1m) be nonzero. Let c be the real number such that v(2m) = c*v(1m). If there were no other matrix Ak such that v(1k) is not a multiple of v(1m), the sum A1 + ... + Am could not be nonsingular; hence there is at least one such matrix. If Ak + Am is nonsingular, we are done. It is singular if and only if v(2k) = c*v(1k) for the same c as above (using that Ak also has rank at most one).

We now divide the set {A1, ..., A(m-1)} into two categories: the matrices Ai for which v(1i) is a multiple of v(1m), and those for which it isn't. As established above, the matrices in the second category have the form (v(1i), c*v(1i)), and those in the first have the form (k*v(1m), t*v(1m)). If there is some Ai in the first category with t/k different from c, then this Ai summed with some Aj in the second category is nonsingular, as is easily checked.

If there is not, let {(k1*v(1m), t1*v(1m)), ..., (ks*v(1m), ts*v(1m))} be the first category and {(w1, c*w1), ..., (wr, c*wr)} be the second. The sum of both categories plus Am is the matrix

  ( (1 + k1 + ... + ks)*v(1m) + (w1 + ... + wr),  (c + t1 + ... + ts)*v(1m) + c*(w1 + ... + wr) ).

Multiplying the first column by c yields the second, so this matrix is singular, contradicting the fact that it is the nonsingular sum N. Hence at least one of our suppositions is wrong, and all of them were needed for there to be no pair of matrices with a nonsingular sum. So such a pair exists, as we wished to prove.

I know this proof is still clumsy, and worse yet, is just for n = 2, but maybe it will help someone figure out a better way.
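For what it's worth, the n = 2 statement also survives a randomized test. The sketch below (Python with numpy; the choice of five rank-one summands and 1000 trials is arbitrary) draws random rank-one 2x2 matrices and, whenever their total sum is nonsingular, checks that some single matrix or some pair already has a nonsingular sum.

Code:
import itertools
import numpy as np

rng = np.random.default_rng(1)

def has_small_nonsingular_sum(mats):
    # True if some single matrix or some pair of them has a nonsingular sum
    for size in (1, 2):
        for idx in itertools.combinations(range(len(mats)), size):
            if abs(np.linalg.det(sum(mats[i] for i in idx))) > 1e-9:
                return True
    return False

for trial in range(1000):
    # five random rank-one 2x2 matrices (column vector times row vector)
    mats = [np.outer(rng.standard_normal(2), rng.standard_normal(2)) for _ in range(5)]
    if abs(np.linalg.det(sum(mats))) < 1e-9:
        continue                      # total sum singular: hypothesis not met, skip
    assert has_small_nonsingular_sum(mats)
print("no counterexample found in 1000 trials")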
« Last Edit: Sep 6th, 2002, 12:27pm by Pietro K.C. »
"I always wondered about the meaning of life. So I looked it up in the dictionary under 'L' and there it was --- the meaning of life. It was not what I expected." (Dogbert)
James Fingas
Uberpuzzler
Posts: 949

Re: Sum of Matrices
« Reply #2 on: Sep 9th, 2002, 12:19pm »
Pietro: sorry about using a very different notation! I have a solution which doesn't work. But I'm not sure if it's a useful non-working solution, or a (more common) useless non-working solution.

REPLACEMENT VECTORS

We are going to choose at most n of the m matrices to make up a rank-n sum. We will do this by examining the given invertible sum row by row, and choosing matrices to maintain the essential linear independence of the rows. Call the m matrices A1 .. Am, and let M = A1 + ... + Am. We will build a matrix S, which is the sum of B1 .. Br, where r <= n and all the B's are chosen from the A's.

1) Look at the rows of M, call them m_1 .. m_n. Start with the first row. We know that M is invertible, so all rows of M are linearly independent. Find the part of m_1 which is orthogonal to all of m_2 .. m_n, and call this part mhat_1.

2) Now examine the A matrices and find out which ones contribute to mhat_1. Every A matrix whose first row has a component in the direction of mhat_1 is a candidate. Pick any one of these and call it B1; it will become part of the S matrix. Call the first row of this matrix t_1.

3) Next, do the same for row two, except that mhat_2 is generated to be orthogonal to t_1 and m_3 .. m_n. Doing so, we obtain t_2 and decide on our second matrix, B2, to include in the sum. Note that if B2 = B1, there is no problem, as it is already included in the solution.

4) Repeat for every row, generating mhat_i so that it is orthogonal to t_1 .. t_{i-1} and m_{i+1} .. m_n.

5) Now we have a series of vectors t_1 .. t_n which are linearly independent, one in each of the n rows, and we have the set of at most n matrices, B1 .. Br, which we collected to form S.

6) The only problem is that when we form the sum S = B1 + ... + Br, we cannot guarantee that S has linearly independent rows. There are two problems.

First, row S_i is not guaranteed to have any component in the direction mhat_i. Even though we introduce a component in direction mhat_i by including B_i in the sum S, it could be annihilated by the addition of B_{i+1} .. B_r. I believe this deficiency can be overcome through judicious choices in step 2, but there is a further problem.

Second, even if the rows S_1 .. S_n all have the required components in the directions mhat_1 .. mhat_n respectively, they still might not be linearly independent. For example, for n = 3, we could have:

  mhat_1 = [1,0,0]   mhat_2 = [0,1,0]   mhat_3 = [0,0,1]
  S_1 = [1,1,0]      S_2 = [1,1,0]      S_3 = [0,0,1]

In this case, the S matrix is not invertible: even though S_1 points partly in direction mhat_1, it still has a component in direction mhat_2 large enough to make it equal to S_2.

So there's my non-working solution. Maybe it will give somebody an idea?
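In case it helps to experiment with this, here is a rough Python/numpy sketch of the selection procedure above (the helper names are mine, and the orthogonalization assumes the rows it is handed are independent, which holds generically). It only illustrates steps 1 through 5; as step 6 explains, the resulting S can still fail to be invertible, and summing the chosen matrices and checking the determinant makes that failure easy to observe on examples.

Code:
import numpy as np

def orthogonal_part(v, others):
    # component of v orthogonal to the span of the row vectors in `others`
    if not others:
        return v
    Q, _ = np.linalg.qr(np.array(others).T)   # orthonormal basis of that span
    return v - Q @ (Q.T @ v)

def replacement_vector_selection(mats):
    # row-by-row selection heuristic; returns indices of the chosen B matrices
    M = sum(mats)
    n = M.shape[0]
    chosen, t_rows = [], []
    for i in range(n):
        mhat = orthogonal_part(M[i], t_rows + [M[j] for j in range(i + 1, n)])
        for k, A in enumerate(mats):
            if abs(A[i] @ mhat) > 1e-9:       # A's row i contributes along mhat_i
                if k not in chosen:
                    chosen.append(k)
                t_rows.append(A[i])
                break
    return chosen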
Doc, I'm addicted to advice! What should I do?
Pietro K.C.
Full Member
Posts: 213

Re: Sum of Matrices
« Reply #3 on: Sep 9th, 2002, 11:00pm »
VERY beautiful idea about the mhats!!! Too bad it's 3am and I just saw your post... got class at 8am, so this'll have to wait a bit.

(13:50) I'm back. I thought today of the following theorem, which, once stated, is easily proved by contradiction: if, to an nxn matrix of rank n, we add another matrix of rank k, the rank of the result cannot be less than n - k. I think it can be generalized, but I'm not in the mood to do it; the main idea seems to be captured in the statement above. What interests me now is the problem itself, which maybe the theorem will help solve. By the way, the mhats do seem promising, but I'm still not quite sure how to fix the proof.
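A quick randomized check of that statement, for whatever reassurance it is worth (a numpy sketch; the sizes n = 5 and k = 2 are arbitrary choices):

Code:
import numpy as np

rng = np.random.default_rng(2)
n, k = 5, 2
for _ in range(1000):
    full_rank = rng.standard_normal((n, n))                               # rank n almost surely
    low_rank = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank <= k
    assert np.linalg.matrix_rank(full_rank + low_rank) >= n - k
print("rank(full + low) >= n - k held in every trial")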
« Last Edit: Sep 10th, 2002, 9:57am by Pietro K.C. »
"I always wondered about the meaning of life. So I looked it up in the dictionary under 'L' and there it was --- the meaning of life. It was not what I expected." (Dogbert)
James Fingas
Uberpuzzler
Posts: 949

Re: Sum of Matrices
« Reply #4 on: Sep 11th, 2002, 10:20am »
I've come up with another proof that doesn't work! Please, hold your applause...

Induction on rank:

1) We generalize the problem to say: for any m nxn matrices that sum to a rank-p matrix, it is possible to find a subset of size <= p of them which sums to a matrix of rank >= p. (Although this proof doesn't work, the statement may still be true.)

2) For p = 1 it is easy to show. Given m nxn matrices that sum to a rank-1 matrix, at least one of them must have nonzero rank. Take any such matrix, and we have solved the problem.

3) Now for the inductive step. Given m matrices that sum to a rank-(p+1) matrix, we must show that it is possible to find a subset of them, of size <= p+1, which sums to a matrix of rank >= p+1.

4) A little exploration of the problem: we know that various combinations of the m matrices will sum to matrices of various ranks. Form all possible subset sums (there are 2^m of them), and list them in order of decreasing rank.

5) Take the subset sum with the largest rank less than or equal to p. Call that rank r, and let 'A' be the set of matrices used to make the sum. By the induction hypothesis, we can find a subset of A (call it B) such that the B matrices sum to a matrix of rank >= r, with |B| <= r.

6) Since the rank of the sum of B is >= r, there are two possibilities: the sum has rank r, or it has rank > p. It is not possible for this sum to have a rank strictly between r and p+1, because r is the largest possible rank <= p for the sum of any subset of the original m matrices. If the rank is > p, then the inductive step holds.

7) If the rank is equal to r, consider adding one more matrix to the set B, to make a set B'. The sum of B' will either have rank <= r, or rank > p. If some choice of B' gives a sum of rank > p, the inductive step holds. If every possible B' sums to rank <= r, then surely that would violate our assumption that all m matrices sum to a matrix of rank p+1. Or so I thought! However, sometimes every possible B' has a sum of rank <= r. Take m = 3, p = 2 and

  A1 = [ 1 0 0 ]   A2 = [ -1 0 0 ]   A3 = [ -1 0 0 ]
       [ 0 0 0 ]        [  0 1 0 ]        [  0 0 1 ]
       [ 0 0 0 ]        [  0 0 0 ]        [  0 0 0 ]

Here's the sum:

  [ -1 0 0 ]
  [  0 1 1 ]
  [  0 0 0 ]

We must find a subset of size <= 2 whose sum has rank >= 2. So we list all the subsets and choose one with rank 1: the subset consisting of just the first matrix. The set B is equal to the set A with no additional work. However, when we make B', we find that neither matrix 2 nor matrix 3 can add any rank to the set B.

Now this example is trivial, but it disproves my proof.

Pietro, I think what you're saying about the ranks is true. I think that in general:

  |rank(A) - rank(B)| <= rank(A+B) <= rank(A) + rank(B)
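To see the dead end concretely, here is a small numpy check that lists the rank of every nonempty subset sum for that example: starting from {A1}, adding either remaining matrix gains no rank, even though all three together have rank 2.

Code:
import itertools
import numpy as np

A1 = np.array([[1, 0, 0], [0, 0, 0], [0, 0, 0]])
A2 = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 0]])
A3 = np.array([[-1, 0, 0], [0, 0, 1], [0, 0, 0]])
mats = [A1, A2, A3]

for size in range(1, 4):
    for idx in itertools.combinations(range(3), size):
        s = sum(mats[i] for i in idx)
        print(idx, "rank", np.linalg.matrix_rank(s))
# (0,) has rank 1, (0, 1) and (0, 2) still have rank 1, but (0, 1, 2) has rank 2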
Doc, I'm addicted to advice! What should I do?
James Fingas
Uberpuzzler
Posts: 949

Re: Sum of Matrices
« Reply #5 on: Sep 11th, 2002, 10:26am »
I just had a cool thought.

EQUIVALENT PROBLEM: Suppose a set of m real n-by-n matrices sums to the nxn identity matrix. Show that some subset of at most n of these also has a non-singular sum.

All I did was multiply all the matrices by the inverse of their sum! Maybe this problem is easier...

ADDED: Here's another thing. In the reformulated problem, the subset you use has a non-singular sum exactly when the sum of the matrices you don't use does not have 1 as an eigenvalue:

  (I - sum of the matrices you don't use) v != 0  for every nonzero v.

This is the eigenvalue equation with lambda = 1.
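The reduction is one line in numpy (a sketch, with random matrices standing in for the given ones; left-multiplying every summand by the inverse of the total sum preserves which subsets have singular sums, since det(S_inv X) = det(S_inv) det(X)):

Code:
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 6                                      # arbitrary sizes for the example
mats = [rng.standard_normal((n, n)) for _ in range(m)]
S = sum(mats)                                    # nonsingular almost surely
S_inv = np.linalg.inv(S)
normalized = [S_inv @ A for A in mats]           # subsets keep their (non)singularity
print(np.allclose(sum(normalized), np.eye(n)))   # True: the new matrices sum to I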
« Last Edit: Sep 11th, 2002, 12:36pm by James Fingas »
Doc, I'm addicted to advice! What should I do?
ecoist
Senior Riddler
Posts: 405

Re: Sum of Matrices (9/3/2002)
« Reply #6 on: Jan 19th, 2007, 3:27pm »
Here's a proof. Let Ai, i = 1,...,m, be m nxn matrices and assume S = A1 + ... + Am is nonsingular. Form the n x nm block matrix B = [A1 ... Am]. If B has rank less than n, then there is a nonzero 1 x n row vector v^t such that v^t B = 0. This implies that also v^t S = 0, whence S is singular, contrary to our assumption. Thus B has rank n. Hence there are n columns of B which are linearly independent. These columns are contained in at most n of the Ai. The corresponding block matrix B', consisting of those at most n Ai, thus has rank n; and so the corresponding sum, S', of these matrices is nonsingular.

A similar argument shows: if a sum of m nxn matrices has rank k, then there exist at most k of the summands whose sum has rank k.
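A numpy sketch of the construction (the column-picking helper is mine): it builds the block matrix B, selects n linearly independent columns, and reports which summands they come from. Whether those summands must then have a nonsingular sum is exactly the step questioned in the next reply.

Code:
import numpy as np

def independent_columns(B, n):
    # greedily pick n linearly independent columns of B
    chosen = []
    for j in range(B.shape[1]):
        if np.linalg.matrix_rank(B[:, chosen + [j]]) == len(chosen) + 1:
            chosen.append(j)
        if len(chosen) == n:
            break
    return chosen

rng = np.random.default_rng(4)
n, m = 3, 5                                 # arbitrary sizes
mats = [rng.standard_normal((n, n)) for _ in range(m)]
B = np.hstack(mats)                         # the n x nm block matrix [A1 ... Am]
cols = independent_columns(B, n)
blocks = sorted({j // n for j in cols})     # which Ai those columns came from
print(cols, "-> summands", blocks)          # at most n distinct summands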
« Last Edit: Jan 20th, 2007, 6:18am by ecoist »
Eigenray
wu::riddles Moderator Uberpuzzler
Posts: 1948

Re: Sum of Matrices (9/3/2002)
« Reply #7 on: Jan 20th, 2007, 11:44am »
on Jan 19th, 2007, 3:27pm, ecoist wrote:

  "The corresponding block matrix B', consisting of those at most n Ai, thus has rank n; and so the corresponding sum, S', of these matrices is nonsingular."

This doesn't follow. For example, if

  B = [ 1 0 | -1 0 | 1 0 ]
      [ 0 0 |  0 1 | 0 1 ],

then most choices of 2 matrices don't work.

Quote:

  "If a sum of m nxn matrices has rank k, then there exist at most k of the summands whose sum has rank k."

This is equivalent to the given problem, since if the sum has rank >= k, then we may apply the given problem to a non-singular kxk minor of the sum, and obtain <= k matrices whose sum has rank >= k.
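Concretely, checking that example with numpy: columns 1 and 4 of B are linearly independent and come from the first two blocks, yet those two blocks sum to a singular matrix, and only one of the three pairs has a nonsingular sum.

Code:
import itertools
import numpy as np

A1 = np.array([[1, 0], [0, 0]])
A2 = np.array([[-1, 0], [0, 1]])
A3 = np.array([[1, 0], [0, 1]])
mats = [A1, A2, A3]
B = np.hstack(mats)                               # [A1 | A2 | A3]

print(np.linalg.matrix_rank(B[:, [0, 3]]))        # 2: these columns are independent
print(np.linalg.det(A1 + A2))                     # 0.0: but A1 + A2 is singular
for i, j in itertools.combinations(range(3), 2):  # only A1 + A3 is nonsingular
    print((i, j), round(np.linalg.det(mats[i] + mats[j]), 3))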
ecoist
Senior Riddler
Posts: 405

Re: Sum of Matrices (9/3/2002)
« Reply #8 on: Jan 21st, 2007, 10:47am »
Ha, ha! I am a cretin! At least the others knew they had an incomplete, or incorrect, proof! Sigh. Not many arrows left in my quiver.
Eigenray
wu::riddles Moderator Uberpuzzler
Posts: 1948

Re: Sum of Matrices (9/3/2002)
« Reply #9 on: Jan 24th, 2007, 10:01am »
This problem has been bothering me the last few days.

Suppose m > n, and let A1,...,Am be given nxn matrices. For a subset S of [m] = {1,...,m}, define the polynomial (in n^2 m variables)

  f_S = det( sum_{i in S} A_i ).

The claim is then that if, at some point, f_S vanishes for all proper subsets S of [m], then f_[m] vanishes as well. By the Nullstellensatz, this is equivalent to the statement that some power of f_[m] lies in the ideal generated by {f_S : S a proper subset of [m]}. In fact, we have

  sum_S (-1)^|S| f_S = 0,

where the sum ranges over all subsets S of [m]. I've never seen this remarkable identity before. For example, it says that for 2x2 matrices,

  det(A+B+C) = det(A+B) + det(A+C) + det(B+C) - det(A) - det(B) - det(C),

or for 3x3 matrices, that

  det(A+B+C+D) + det(A+B) + det(A+C) + det(A+D) + det(B+C) + det(B+D) + det(C+D)
    = det(A+B+C) + det(A+B+D) + det(A+C+D) + det(B+C+D) + det(A) + det(B) + det(C) + det(D),

an equality of cubics in 36 variables with 384 terms.

Expanding out the formula for the determinant, we need to show that

  sum_S (-1)^|S| sum_p sign(p) prod_i ( sum_{j in S} A_j(i, p(i)) ) = 0.

To do this, fix a permutation p in S_n, and let X_{i,j} = A_j(i, p(i)). By interchanging the order of summation, it suffices to show that

  sum_S (-1)^|S| prod_i ( sum_{j in S} X_{i,j} ) = 0.

The LHS above is a sum of monomials of the form X(J) = X_{1,j(1)} * X_{2,j(2)} * ... * X_{n,j(n)}. If T = {j(1),...,j(n)} is the set of indices appearing in J, then X(J) occurs, with coefficient (-1)^|S|, once for each subset S of [m] containing T. Since |T| <= n < m, the total coefficient is thus

  sum_{r=0}^{m-|T|} (-1)^{|T|+r} C(m-|T|, r) = (-1)^|T| (1-1)^{m-|T|} = 0,

and we are done.
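The identity is easy to confirm numerically. A short numpy sketch (sizes chosen arbitrarily) checks that the alternating sum over all subsets vanishes, up to rounding, whenever m > n.

Code:
import itertools
import numpy as np

def alternating_sum(mats):
    # sum over all subsets S of (-1)^|S| * det(sum of the matrices in S)
    m = len(mats)
    n = mats[0].shape[0]
    total = 0.0
    for size in range(m + 1):
        for S in itertools.combinations(range(m), size):
            part = sum((mats[i] for i in S), np.zeros((n, n)))
            total += (-1) ** size * np.linalg.det(part)
    return total

rng = np.random.default_rng(5)
for n, m in [(2, 3), (3, 4), (3, 6)]:     # any m > n should give (numerically) zero
    mats = [rng.standard_normal((n, n)) for _ in range(m)]
    print(n, m, round(alternating_sum(mats), 10))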
Obob
Senior Riddler
Posts: 489

Re: Sum of Matrices (9/3/2002)
« Reply #10 on: Jan 24th, 2007, 4:44pm »
Nice proof, Eigenray! I just noticed one technicality, which is that we had better be proving the result for complex matrices in order to use the Nullstellensatz. We can then specialize to the real case, if we so desire. We might as well then state the result for matrices with entries in an arbitrary field, and then the proof just relies on passing to the algebraic closure of the field.

That really is a nice determinant identity. At least at a superficial level, it looks identical to the inclusion-exclusion principle.
« Last Edit: Jan 24th, 2007, 4:45pm by Obob »
Eigenray
wu::riddles Moderator Uberpuzzler
Posts: 1948

Re: Sum of Matrices (9/3/2002)
« Reply #11 on: Jan 25th, 2007, 4:08am »
The Nullstellensatz isn't necessary for the proof; I only mentioned it by way of motivation. The problem resisted any linear algebra I could throw at it. It was only when I had the idea to think algebro-geometrically (solving the problem is equivalent to certain polynomial identities in the determinants of the various subset sums, which isn't otherwise obvious!) that I decided to try to find such an identity. After discovering the n=2 case (with Groebner bases, using Mathematica), the generalization is clear (but still surprising).

I did leave out a step though. To be precise, what I really showed is that if all subsets of at most k matrices have a singular sum, where k >= n, then every subset of k+1 matrices also has a singular sum. Taking k = n, n+1, ..., m-1 then gives the result.

As for the identity, I believe the n=1 case is equivalent to inclusion-exclusion. In fact, there's probably a slick way to prove it by inducting on n, using inclusion-exclusion and/or expansion by minors of something.
« Last Edit: Jan 25th, 2007, 4:34am by Eigenray »