Topic: Chess Markov Chain (Read 5112 times) 

Earendil
Newbie
Posts: 46


Chess Markov Chain
« on: Apr 16th, 2005, 2:05pm »

You have an 8x8 chess board and a King on A1 at turn 1. Every turn the King moves to one of his normal movement squares with equal probability. What is the probability that after an infinite number of turns the King is on one of the central squares (D4, D5, E4, E5)?





Grimbal
wu::riddles Moderator Uberpuzzler
Posts: 7523


Re: Chess Markov Chain
« Reply #1 on: Apr 16th, 2005, 3:11pm »

I find 32/420 ≈ 7.6%. Why? hidden: Consider the probabilities not of the cells but of the moves between neighbouring cells. Each cell has as many moves in as out, and the probability of the moves in is redistributed evenly among the moves out; this makes the probabilities converge until they are all the same. There are 420 possible moves, and 32 of them end in the 4 central cells.
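Grimbal's move count is easy to double-check with a short script (a sketch added for illustration, not part of the original post; the 0-indexed coordinates and the `neighbors` helper are mine, with (3,3)..(4,4) standing for D4, D5, E4, E5):

```python
# Count king moves on an 8x8 board: each ordered pair of adjacent
# squares is one possible move, so the number of moves out of a
# square is just its degree in the king graph.
def neighbors(r, c):
    """Yield the king-move neighbors of square (r, c), 0-indexed."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8:
                yield r + dr, c + dc

degree = {(r, c): len(list(neighbors(r, c))) for r in range(8) for c in range(8)}
total_moves = sum(degree.values())               # all ordered moves: 420
central = [(3, 3), (3, 4), (4, 3), (4, 4)]       # D4, D5, E4, E5
central_moves = sum(degree[s] for s in central)  # 8 moves end in each: 32

print(total_moves, central_moves, central_moves / total_moves)
```

The corners contribute 4 x 3 moves, the other edge squares 24 x 5, and the interior squares 36 x 8, for 12 + 120 + 288 = 420 in total.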





Deedlit
Senior Riddler
Posts: 476


Re: Chess Markov Chain
« Reply #2 on: Apr 17th, 2005, 9:11am »

While this is clearly the answer, I think a complete solution requires a proof that the process converges to this distribution (which is what I assume Earendil meant by the somewhat nonsensical "infinite number of turns"). There's probably some basic result that says this Markov chain has to converge to a stable distribution, but I don't know much about Markov chains. So we (probably) need to show that this is the only stable distribution. We know that for each square, the probability of each outgoing move is equal. Also, we know the outgoing moves and incoming moves have to cancel out. So if we let P(x) be the probability of a single outgoing move from x, each P(x) is a weighted average of P(y) over all neighboring squares y. If any two adjacent squares x_{0} and x_{1} have different values of P, with P(x_{1}) the larger, then x_{1} must have a neighbor x_{2} with a still larger value of P, x_{2} must have a larger neighbor x_{3}, and so on to infinity. But this is absurd, since we only have 64 squares. So Grimbal's is the only stable distribution.





towr
wu::riddles Moderator Uberpuzzler
Some people are average, some are just mean.
Posts: 13730


Re: Chess Markov Chain
« Reply #3 on: Apr 17th, 2005, 11:49am »

Another approach is to show that the nth power of the transition matrix converges (to a nonzero matrix) as n goes to infinity. (I'll have to dig up my numerical math book before I can work that out further, though.)
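That convergence can be watched numerically. A minimal pure-Python sketch (mine, not towr's) that builds the row-stochastic transition matrix and iterates the distribution from the A1 start; since the king graph contains triangles the chain is aperiodic, so the iteration settles instead of oscillating:

```python
# Iterate p <- p P for the king random walk, starting with all mass
# on A1 (square (0, 0) in this 0-indexed sketch), and read off the
# limiting probability of the 4 central squares.
N = 8
squares = [(r, c) for r in range(N) for c in range(N)]
idx = {s: i for i, s in enumerate(squares)}

def neighbors(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < N and 0 <= c + dc < N]

# Row-stochastic matrix: P[i][j] = 1/deg(i) if j is a king move from i.
P = [[0.0] * 64 for _ in range(64)]
for s in squares:
    nbrs = neighbors(*s)
    for t in nbrs:
        P[idx[s]][idx[t]] = 1.0 / len(nbrs)

p = [0.0] * 64
p[idx[(0, 0)]] = 1.0                # King starts on A1
for _ in range(2000):               # one matrix-vector product per turn
    p = [sum(p[i] * P[i][j] for i in range(64)) for j in range(64)]

central = sum(p[idx[s]] for s in [(3, 3), (3, 4), (4, 3), (4, 4)])
print(central)                      # close to 32/420
```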

« Last Edit: Apr 17th, 2005, 11:53am by towr »



iyerkri
Newbie
Posts: 17


Re: Chess Markov Chain
« Reply #4 on: Apr 18th, 2005, 7:21am »

As the number of states (64) is finite, and as each state is accessible from every other state, the Markov chain is irreducible. A finite irreducible Markov chain is positive recurrent and has a unique stationary distribution. So in the current problem the probability tends to a finite limit. Thus what remains is solving 64 simultaneous equations in 64 variables, if one goes by the brute-force method; at least one can reduce it to 16 equations using symmetry. ~kris
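The brute-force route is mechanical enough to script. A sketch (added for illustration; the indexing and Gaussian elimination are mine) that solves the 64 balance equations exactly with rationals, dropping one redundant balance equation in favor of the normalization sum(pi) = 1:

```python
from fractions import Fraction

# Solve pi P = pi, sum(pi) = 1 for the king walk as an exact 64x64
# linear system.  One balance equation is redundant (they sum to 0),
# so the last row is replaced by the normalization condition.
N = 8
squares = [(r, c) for r in range(N) for c in range(N)]
idx = {s: i for i, s in enumerate(squares)}

def neighbors(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < N and 0 <= c + dc < N]

n = 64
A = [[Fraction(0)] * n for _ in range(n)]
b = [Fraction(0)] * n
for s in squares:                       # balance: sum_i pi_i P[i][j] = pi_j
    nbrs = neighbors(*s)
    w = Fraction(1, len(nbrs))
    for t in nbrs:
        if idx[t] < n - 1:
            A[idx[t]][idx[s]] += w
for j in range(n - 1):
    A[j][j] -= 1
A[n - 1] = [Fraction(1)] * n            # normalization row
b[n - 1] = Fraction(1)

# Plain Gauss-Jordan elimination with nonzero pivoting.
for col in range(n):
    piv = next(r for r in range(col, n) if A[r][col] != 0)
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [x - f * y for x, y in zip(A[r], A[col])]
            b[r] -= f * b[col]
pi = [b[r] / A[r][r] for r in range(n)]

central = sum(pi[idx[s]] for s in [(3, 3), (3, 4), (4, 3), (4, 4)])
print(central)                          # exactly 8/105 = 32/420
```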





Deedlit
Senior Riddler
Posts: 476


Re: Chess Markov Chain
« Reply #5 on: Apr 18th, 2005, 12:09pm »

on Apr 18th, 2005, 7:21am, kris wrote: "As the number of states (64) is finite, and as each state is accessible from every other state, the Markov chain is irreducible. A finite irreducible Markov chain is positive recurrent and has a unique stationary distribution."

Neat stuff! In fact, any positive recurrent Markov chain has a unique stationary distribution of the form u(j) = k / W(j,j), where W(j,j) is the mean return time for j. More generally, any invariant signed measure is of the above form. (I did some digging.) So, for a Markov chain like this one, where the possible transitions are symmetric and the probability is distributed uniformly among the possible transitions, the mean return time for a state is inversely proportional to the number of possible transitions from that state! Very cool indeed.

Quote: "So in the current problem the probability tends to a finite limit. Thus"

Well, that doesn't follow from what you said before. For instance, a finite irreducible Markov chain could just be moving around a ring. But a finite, irreducible, aperiodic Markov chain does converge to a unique limiting distribution (regardless of the initial distribution), by the Perron-Frobenius theorem.
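The ring counterexample is easy to see concretely. A small sketch (my illustration, not from the post) of a random walk on a 4-cycle, which is irreducible but has period 2, so its distribution oscillates forever instead of converging to the uniform stationary distribution:

```python
# Random walk on a 4-cycle (states 0-1-2-3-0).  The stationary
# distribution is uniform (1/4 each), but starting from state 0 the
# distribution alternates between the even and odd states and never
# converges.
P = [[0.0, 0.5, 0.0, 0.5],
     [0.5, 0.0, 0.5, 0.0],
     [0.0, 0.5, 0.0, 0.5],
     [0.5, 0.0, 0.5, 0.0]]
p = [1.0, 0.0, 0.0, 0.0]            # all mass on state 0
hist = []
for _ in range(6):
    p = [sum(p[i] * P[i][j] for i in range(4)) for j in range(4)]
    hist.append(p)

# Odd steps: mass on states 1 and 3; even steps: mass on 0 and 2.
print(hist[-2], hist[-1])
```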





iyerkri
Newbie
Posts: 17


Re: Chess Markov Chain
« Reply #6 on: Apr 18th, 2005, 12:37pm »

Yeah... I should have included that case too; missed that one. However, in this current problem we have an aperiodic chain, so that won't cause any problem. Another point I missed: using further symmetry, one can actually reduce it to 10 equations in 10 variables. I haven't actually written or solved the equations, though. ~kris
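The count of 10 comes from the board's 8 symmetries (rotations and reflections). A quick check (my sketch, not kris's) that the 64 squares fall into exactly 10 orbits, one unknown per orbit:

```python
# Group the 8x8 squares into orbits under the dihedral group of the
# board (4 rotations and 4 reflections); each orbit contributes one
# unknown to the symmetry-reduced stationary system.
def orbit(r, c):
    pts = set()
    for _ in range(4):
        r, c = c, 7 - r        # rotate the board 90 degrees
        pts.add((r, c))
        pts.add((c, r))        # reflect across the main diagonal
    return frozenset(pts)

orbits = {orbit(r, c) for r in range(8) for c in range(8)}
print(len(orbits))             # 10 symmetry classes
```

Burnside's lemma gives the same answer: (64 + 0 + 0 + 0 + 0 + 0 + 8 + 8) / 8 = 10, since only the two diagonal reflections fix any squares.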





SWF
Uberpuzzler
Posts: 879


Re: Chess Markov Chain
« Reply #7 on: Apr 18th, 2005, 9:52pm »

This problem is very similar to one that was posted in this forum a couple of years ago, Knight Circuit Time, except that one was for a knight moving at random instead of a king. It was solved in a similar way with a 10 by 10 matrix.





Deedlit
Senior Riddler
Posts: 476


Re: Chess Markov Chain
« Reply #8 on: Apr 19th, 2005, 3:09am »

on Apr 18th, 2005, 12:37pm, kris wrote: "yeah...should have included that case too...missed that one. however, in this current problem, we have an aperiodic chain, so that won't cause any problem."

Yeah, but it would be nice to have an elementary proof of convergence. It seems like there ought to be one. I noticed the previous thread also talked about the relationship between the stationary probability and the mean return time. The proof is nowhere near as hard as William's GSI suggested, though: if the Markov chain approaches a limiting distribution, then the density of times at which the process is in state i converges to the limiting probability p_{i}. It follows that the nth visit to state i occurs at a time asymptotic to n/p_{i}, and thus the mean return time is 1/p_{i}. So I guess it wasn't as cool as I first thought.
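The reciprocal relationship is easy to test empirically. A simulation sketch (mine; the seed and step count are arbitrary choices) estimating the mean return time to the corner A1, which by the 1/p rule should be near 420/3 = 140:

```python
import random

# Estimate the mean return time to the corner A1 by simulating the
# king's random walk.  The stationary probability of A1 is 3/420,
# so the mean return time should be close to 420/3 = 140.
random.seed(12345)                 # fixed seed for reproducibility

def neighbors(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < 8 and 0 <= c + dc < 8]

nbr = {(r, c): neighbors(r, c) for r in range(8) for c in range(8)}

pos = (0, 0)                       # start on A1
returns, last_visit = [], 0
for t in range(1, 2_000_000):
    pos = random.choice(nbr[pos])
    if pos == (0, 0):
        returns.append(t - last_visit)
        last_visit = t

mean_return = sum(returns) / len(returns)
print(mean_return)                 # close to 140
```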





Deedlit
Senior Riddler
Posts: 476


Re: Chess Markov Chain
« Reply #9 on: Apr 19th, 2005, 3:27pm »

Here's a proof of convergence that doesn't invoke Perron-Frobenius.

First, we use the fact that if the Markov chain is aperiodic, then P^{n} has all positive entries for all sufficiently large powers n. We have the basic fact that if gcd(a_{1}, ..., a_{i}) = 1, then there is a k such that every n > k is representable as a sum of the a_{i}. (This isn't hard to prove.) So for any state i there is a number f(i) such that for all n >= f(i) there are paths from i to i of length n. Letting p(i,j) be the length of some path from i to j, there are paths from i to j of every length f(i) + p(i,j) or more. Letting n be the maximum of f(i) + p(i,j) over all pairs (i,j), we have that Q = P^{n} has all positive entries.

Next, we claim that for Q an m x m stochastic matrix with positive entries and any vector v whose entries sum to 0, we have [lim]_{i} Q^{i}v = 0. Let m(v) be the maximum absolute value of an entry in v, and let a be the smallest entry of Q. We claim that m(Qv) <= (1-a) m(v). Suppose WLOG that m(Qv) is the absolute value of a positive entry (Qv)_{i}. Since v sums to 0, at least one of its entries is negative, say v_{j}. Then (Qv)_{i} = [sum]_{k} Q_{i,k} v_{k} < [sum]_{k != j} Q_{i,k} v_{k} <= [sum]_{k != j} Q_{i,k} m(v) = (1 - Q_{i,j}) m(v) <= (1-a) m(v), and so Q^{i}v goes to 0.

With Q = P^{n}, we have [lim]_{i} P^{in + j} v = [lim]_{i} Q^{i} (P^{j}v) = 0 for each j = 0, ..., n-1, since P^{j}v can be regarded as just another such vector. So [lim]_{i} P^{i} v = 0. Since we have a stationary distribution s, we can write any distribution v as a sum s + z, where z is a vector with sum 0, and then [lim]_{i} P^{i} v = [lim]_{i} (P^{i} s + P^{i} z) = s.
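Both ingredients can be sanity-checked numerically for the king chain. A sketch (mine; the exponent 7 is my choice, relying on the board's king-move diameter being 7 and every king-graph edge lying in a triangle, so P^7 is already strictly positive):

```python
# Check that Q = P^7 for the king walk has all positive entries, and
# that the contraction bound m(Qv) <= (1 - a) m(v) holds for a vector
# v whose entries sum to zero, where a is the smallest entry of Q.
N = 8
squares = [(r, c) for r in range(N) for c in range(N)]
idx = {s: i for i, s in enumerate(squares)}

def neighbors(r, c):
    return [(r + dr, c + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0) and 0 <= r + dr < N and 0 <= c + dc < N]

P = [[0.0] * 64 for _ in range(64)]
for s in squares:
    nbrs = neighbors(*s)
    for t in nbrs:
        P[idx[s]][idx[t]] = 1.0 / len(nbrs)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(64)) for j in range(64)]
            for i in range(64)]

Q = P
for _ in range(6):
    Q = matmul(Q, P)                     # Q = P^7

a = min(min(row) for row in Q)
print(a > 0)                             # all entries positive

v = [1.0] * 32 + [-1.0] * 32             # an arbitrary zero-sum vector
m = lambda u: max(abs(x) for x in u)
Qv = [sum(Q[i][k] * v[k] for k in range(64)) for i in range(64)]
print(m(Qv), (1 - a) * m(v))             # first value is smaller
```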

« Last Edit: Apr 19th, 2005, 3:31pm by Deedlit »



