wu :: forums (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi)
riddles >> hard >> randomness
(Message started by: BenVitale on Feb 11th, 2008, 4:21pm)

Title: randomness
Post by BenVitale on Feb 11th, 2008, 4:21pm
Gregory J. Chaitin stated, "Although randomness can be precisely defined and can even be measured, a given number cannot be proved to be random. This enigma establishes a limit to what is possible in mathematics. "

Does there exist a mathematical definition of randomness, and if so, what is it? Furthermore, if one wishes to prove that a given sequence of, say, integers is random, what exactly would one have to show?

Finally, is it possible that a precise (mathematical) definition of randomness is impossible?


Title: Re: randomness
Post by Hippo on Feb 12th, 2008, 1:03am
There is the notion of "Kolmogorov complexity".
Fix a programming language. Each computation starts from a code and some initial data; for each such pair (code, data), measure its total length.
Each such pair generates one sequence.

Measure the complexity of a sequence by the size of the minimal (code, data) pair generating it. Random sequences are those for which, in the optimal pair, the code is empty.
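
A rough, hand-wavy illustration of the idea (not a real measure of Kolmogorov complexity, which is uncomputable): an off-the-shelf compressor gives a crude upper bound on the length of the shortest description. The Python sketch below, using only the standard zlib and os modules, compares a highly patterned string with raw OS noise; the helper name compressed_size is just for illustration.

[code]
import os
import zlib

def compressed_size(data: bytes) -> int:
    # Length of the zlib-compressed data: a crude, computable upper bound
    # on the (uncomputable) Kolmogorov complexity of the input.
    return len(zlib.compress(data, 9))

patterned = b"01" * 5000        # highly regular 10000-byte string
noise = os.urandom(10000)       # 10000 bytes from the OS entropy source

print(len(patterned), compressed_size(patterned))   # shrinks to a few dozen bytes
print(len(noise), compressed_size(noise))           # stays close to 10000 bytes
[/code]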

Title: Re: randomness
Post by Grimbal on Feb 12th, 2008, 2:05am
[obsolete]... where the shortest program is as long as the sequence it generates.[/obsolete]

While that definition works well in practice, it doesn't help to prove randomness of a sequence because that definition depends on a specific programming language.  For every sequence you can design a language that generates that sequence in a single instruction.  So every sequence would be non-random in some programming language.  That means that the definition is not universal.

Title: Re: randomness
Post by Hippo on Feb 12th, 2008, 2:13am
Yes: I think Kolmogorov uses Turing machines with explicit restrictions, so the model is fixed and rather "natural", but I agree this condition is not testable by a computer (the halting problem is involved).

Title: Re: randomness
Post by towr on Feb 12th, 2008, 2:36am
Entropy is a good way to measure randomness.
And there are tons of statistical tests you can use (e.g. chi-square test)
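
For concreteness, here is a minimal sketch of both ideas, assuming the sequence is given as a string of 0's and 1's: the empirical (Shannon) entropy of the symbol frequencies, and a plain chi-square goodness-of-fit test against the uniform distribution (3.841 is the 5% critical value for one degree of freedom). The function names are made up for the example.

[code]
import math
from collections import Counter

def entropy_bits(seq):
    # Empirical Shannon entropy of the symbol distribution, in bits per symbol.
    counts = Counter(seq)
    n = len(seq)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def chi_square_uniform(seq, symbols="01"):
    # Chi-square statistic against the hypothesis that each symbol is equally
    # likely; large values suggest the frequencies are not uniform.
    counts = Counter(seq)
    expected = len(seq) / len(symbols)
    return sum((counts.get(s, 0) - expected) ** 2 / expected for s in symbols)

seq = "0110100110010110" * 100
print(entropy_bits(seq))         # close to 1.0 for balanced 0/1 frequencies
print(chi_square_uniform(seq))   # compare against 3.841 (5% level, 1 d.o.f.)
[/code]

Note that this only looks at single-symbol frequencies; the strictly alternating sequence 010101... passes it perfectly, which is exactly the kind of limitation discussed later in the thread.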

Title: Re: randomness
Post by Hippo on Feb 13th, 2008, 8:17am
... yes, these tests conjecture some kind of nonrandomness and statistically prove that the conjecture holds with high probability.

... they cannot guarantee randomness!

Title: Re: randomness
Post by towr on Feb 13th, 2008, 8:50am

on 02/13/08 at 08:17:05, Hippo wrote:
... yes, these tests conjecture some kind of nonrandomness and statistically prove that the conjecture holds with high probability.

... they cannot guarantee randomness!
No amount of measures can guarantee randomness. There is no way to tell a random sequence apart from a well-crafted sequence made to look random.
A computer's pseudo-random number generator is not random, yet pretty much any measure of randomness you can unleash on it would say it produces random numbers.

Title: Re: randomness
Post by Hippo on Feb 13th, 2008, 10:27am

on 02/13/08 at 08:50:16, towr wrote:
No amount of measures can guarantee randomness. There is no way to tell a random sequence apart from a well-crafted sequence made to look random.
A computer's pseudo-random number generator is not random, yet pretty much any measure of randomness you can unleash on it would say it produces random numbers.


This again corresponds to conclusions from Kolmogorov ... there are cases when we can prove the sequence is not random, but proving that it is random is uncomputable.

Probably a statistical property that proves nonrandomness can be used for "wavelet-style" packing, so there is a close correspondence (for long enough sequences).

Title: Re: randomness
Post by pex on Feb 13th, 2008, 10:45am

on 02/13/08 at 10:27:35, Hippo wrote:
... there are cases when we can prove the sequence is not random ...

Can we? I do not think so; we can never be completely sure that any given sequence is not the result of some random process.

Title: Re: randomness
Post by towr on Feb 13th, 2008, 1:01pm
A sequence of twenty consecutive heads might not seem random, but it should occur once every 2^20 runs; just the same as any 20-coin sequence.
And similar things can be said for any kind of sequence.

Title: Re: randomness
Post by Eigenray on Feb 13th, 2008, 4:03pm

on 02/13/08 at 08:50:16, towr wrote:
A computer's pseudo-random number generator is not random, yet pretty much any measure of randomness you can unleash on it would say it produces random numbers.

Except, of course, for [link=http://en.wikipedia.org/wiki/Algorithmically_random_sequence]algorithmic randomness[/link].  There are several 'natural' ways to define algorithmically random that all end up being equivalent, which makes this a good definition (much like there are several equivalent ways to define an algorithm or computable function).

Of course, for any sequence, there exists a function whose output is that sequence, but this function will almost always be non-computable.  And if a sequence is computable, then it's not algorithmically random.  But there are of course non-computable sequences which aren't algorithmically random too: let X = (x_i) be an infinite binary sequence with x_(2i) = 0 for all i.  There are uncountably many such sequences, so most of them are non-computable.  But using the [link=http://en.wikipedia.org/wiki/Algorithmically_random_sequence#Interpretations_of_the_definitions]null cover characterization[/link], we can see that X is not algorithmically random, because for any n, X lies in the union U_n of 2^n intervals each of length 2^(-2n), and the intersection of the U_n has measure 0.

Title: Re: randomness
Post by towr on Feb 14th, 2008, 12:05am

on 02/13/08 at 16:03:02, Eigenray wrote:
Of course, for any sequence, there exists a function whose output is that sequence, but this function will almost always be non-computable.
If you have a finite sequence, isn't it very simple to fit a polynomial through all the points? (i.e. such that the sequence is f(0), f(1), f(2), etc.)

Title: Re: randomness
Post by Grimbal on Feb 14th, 2008, 2:01am
It is not the data that is random, but the process that generates it.  Once you generate a random sequence, it becomes just data.  Even though it might be meaningless to you.

A truly random process generates all outcomes with equal probability.  Getting Shakespeare's complete works is just as likely as any other outcome.  It wouldn't be truly random if Shakespeare's works and other suspicious outcomes, like all zeros, would be excluded as "non-random".  So no sequence is more random than another one.  You cannot test randomness, just like if I give you a number between 1 and 6, you cannot tell whether I used a perfect die to generate it.

But, well, some data are suspicious.  Some small subset of all possible outcomes can be explained by a simple non-random process.  By declaring all these suspicious cases as "generated by a non-random process" and all others as "generated by a random process", you have a good probability of being right ... provided you can recognize the simple process.  Encrypted or compressed data can look very random, even though it usually is not.

Title: Re: randomness
Post by Eigenray on Feb 14th, 2008, 5:04am

on 02/14/08 at 00:05:49, towr wrote:
If you have a finite sequence, isn't it very simple to fit a polynomial through all the points? (i.e. such that the sequence is f(0), f(1), f(2), etc.)

Yes but almost all sequences are infinite  :)
Besides, randomness only makes sense in the limit.  It doesn't make much sense to ask if a single number, or finite sequence of numbers, is random.  Unless these numbers are real valued, but then you can think of them as infinite binary sequences.  The binary expansion of any computable number would be non-algorithmically random, of course, but there are only countably many such numbers.

Title: Re: randomness
Post by pex on Feb 14th, 2008, 5:24am
In my opinion, randomness is a property of the process that generates a sequence of numbers, not of the numbers themselves. Thus, given a sequence of numbers (be it finite or infinite, even ignoring the problem of how one can be given an infinite sequence if there is no deterministic rule for generating it), there is no way to tell whether the numbers come from a random process or not - unless one is shown the generating mechanism, of course.

Title: Re: randomness
Post by BenVitale on Feb 14th, 2008, 11:57am
Kolmogorov complexity remained more of a curiosity than a practical mathematical tool. Right?

Title: Re: randomness
Post by BenVitale on Feb 14th, 2008, 12:06pm
Didn't Paul Vitanyi, an information theorist at the Centre for Mathematics and Computer Science and the University of Amsterdam, both in the Netherlands, and his long-time collaborator Ming Li at the University of Waterloo in Canada, make huge progress in using Kolmogorov complexity?

I am referring to the famous geometric problem posed in the first half of the last century by the German mathematician Hans Heilbronn.

The problem being : If a number of pebbles (n) are placed inside a square and triangles are drawn between them, what arrangement of pebbles makes the smallest triangle as large as possible?

Didn't they show that the area of the smallest triangle is proportional to 1/n^3?

I've tried to search for their result on the net, but couldn't find it. Could anyone provide a link?

Title: Re: randomness
Post by Eigenray on Feb 14th, 2008, 2:02pm
I believe you're referring to the [link=http://mathworld.wolfram.com/HeilbronnTriangleProblem.html]Heilbronn triangle problem[/link].  Note that the largest possible value for the area of the smallest triangle is > 1/(8n^2), because taking a prime p between n and 2n, no three of the p points (x, x^2) mod p are collinear.

What [link=http://homepages.cwi.nl/~paulv/kolmcompl.html]Vitanyi[/link] et al. [link=http://citeseer.ist.psu.edu/261646.html]showed[/link] is that if the points are chosen uniformly at random, then the expected area of the smallest triangle is bounded above and below by C/n^3.

Here's the idea: Fix n, and an integer δ > 0.  There are C(K^2, n) ways to pick n points from a KxK grid.  So most configurations X can't be described in much fewer than log C(K^2, n) bits (log = log_2).  Specifically, at least the fraction (1 - 2^(-δ)) of them satisfy

(*)   C(X|n,K) ≥ log C(K^2, n) - δ,

where C(X|n,K) is the length of the smallest program that outputs X given n and K.  Call the configuration X incompressible if (*) holds.

Here's one way to describe a configuration X.  Let PQR be the triangle of smallest area A, suppose L = PQ is the longest side, and let H = 2A/L.  Then we can specify all points other than R in C(K^2, n-1) ways, and then {P,Q} in C(n-1, 2) ways.  Let g be the gcd of the coordinates of Q-P.  Then 2A = |(Q-P) x (Q-R)| is divisible by g, so HL = 2A = fg for an integer f.  Specify this integer f with log(f) bits.  This determines H.  Now the point R lies on one of two line segments parallel to PQ, at distance H from it.  Since PQ was the longest side, the lengths of these segments are < PQ, and the two of them together have no more than 2g points.  So finally we can specify R with an extra log(2g) bits.  Since 2fg = 4A, we therefore have

C(X|n,K) ≤ log[ C(K^2, n-1) · C(n-1, 2) · 4A ] + c,

where c is an absolute constant (the length of a program which can reverse the steps in the above paragraph).  Now we can combine the two inequalities to get that if X is incompressible, then

A/(K-1)^2 ≥ 2^(-c-1-δ) · (K^2 - n + 1)/(K-1)^2 · 1/[n(n-1)(n-2)] ≥ c'/(2^δ n^3)

for K sufficiently large, where c' is again an absolute constant.  Since X is incompressible with probability at least 1 - 2^(-δ), the lower bound on the expected value of the smallest area follows.

For the upper bound, the idea is: if the area A of the smallest triangle is large, then any two points rule out a large area where there can be no third point.  Then one can pick n/2 points such that, once specified, the other n/2 points lie in a small region, so they can be specified efficiently.  If A is large enough, then this shows that X is compressible.
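
The C/n^3 behaviour for random points is easy to see empirically. Here is a hedged Monte Carlo sketch (brute force over all triples, so only practical for small n): it drops n uniform points in the unit square, finds the smallest triangle area, and averages n^3 times that area over many trials; the result should hover around a constant.

[code]
import random
from itertools import combinations

def smallest_triangle_area(points):
    # Brute-force minimum over all triples: area = |cross product| / 2.
    best = float("inf")
    for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3):
        area = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax)) / 2.0
        best = min(best, area)
    return best

def scaled_expected_area(n, trials=200, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        total += smallest_triangle_area(pts)
    return (total / trials) * n ** 3   # should stay roughly constant as n grows

for n in (8, 12, 16):
    print(n, scaled_expected_area(n))
[/code]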

Title: Re: randomness
Post by Hippo on Feb 14th, 2008, 4:49pm

on 02/13/08 at 10:45:54, pex wrote:
Can we? I do not think so; we can never be completely sure that any given sequence is not the result of some random process.


You lost the context ... two posts earlier I said "... with high probability".

Title: Re: randomness
Post by JiNbOtAk on Feb 14th, 2008, 7:52pm

on 02/14/08 at 02:01:25, Grimbal wrote:
 Getting Shakespeare's complete works is just as likely as any other outcome.


I was reminded of a time in high school when we had an essay assignment (of about 150 words) to complete. A really weird thing happened: two of my classmates submitted essays that were identical, right down to the grammar mistakes! The teacher, of course, assumed that one of them had copied from the other, but one of them is my best friend, and he wrote his essay together with me, independently, so it was truly a weird experience. Even the two who wrote it did not seem to understand how it came to be. So I guess, in selecting random words to form an essay (from a select group), there is some probability of two people choosing exactly the same 150 words.


Title: Re: randomness
Post by temporary on Feb 14th, 2008, 8:41pm
You can only have randomness in an infinite sequence without pattern or function, but you would need to see every one of the infinite numbers to know it is random.

Title: Re: randomness
Post by Christine on Mar 3rd, 2008, 12:41am
I don't understand the idea that a book is probably not a completely random string, and that it can be compressed significantly. How could you compress it? I'm not really sure about this. Say you take Shakespeare's Hamlet. How could you compress it? You cannot create a small program that will be able to recreate all of Hamlet, unless you actually include the text in the program.  I've read about the infinite monkey theorem; see http://en.wikipedia.org/wiki/Infinite_monkey_theorem

I guess I might be wrong or just confused about the whole thing, but I don't see how. What do you guys think?

Title: Re: randomness
Post by towr on Mar 3rd, 2008, 1:04am

on 03/03/08 at 00:41:00, Christine wrote:
I don't understand the idea that a book is probably not a completely random string, and that it can be compressed significantly. How could you compress it? I'm not really sure about this. Say you take Shakespeare's Hamlet. How could you compress it? You cannot create a small program that will be able to recreate all of Hamlet, unless you actually include the text in the program.  I've read about the infinite monkey theorem; see http://en.wikipedia.org/wiki/Infinite_monkey_theorem
Nevertheless, I wrote a program that did exactly that, compressing Hamlet to a fraction of its size. :P

Consider what it means to not be random*: it means that there are significant patterns in the string. Take another example: suppose you throw a coin 5 times and it comes up heads every time. You can write HHHHH to describe it, or 5H; the latter is only 40% of the size, by exploiting the pattern.
In the case of a text like Hamlet, you can use the fact that most words repeat. So you can use a dictionary and replace them with a shorter index (which tells you which word in the dictionary it is).

Now, if you have a totally random string, then you can't have an algorithm that is guaranteed to compress it, because there aren't enough shorter strings to uniquely identify strings of that size. But texts are never random strings (otherwise we couldn't understand them), so they're compressible as long as you know in what way they aren't random. The more you know, the better you can compress them.
But the same algorithms that compress texts well would probably inflate random strings. Consider again a series of coin flips, HHTHT: using the same kind of encoding as before, we get 2H1T1H1T, 60% longer than before, rather than shorter. This is because the pattern we try to exploit simply isn't there in this case.



*) I'm using the colloquial sense of random here; as has been said, 'random' doesn't really apply to sequences but to the process that generates them. But some days it just doesn't pay to say "this sequence is, with very high likelihood, generated by a process that is significantly random" instead of "this sequence is random".
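
To make the coin-flip example concrete, here is a minimal run-length encoder along the lines of the 5H / 2H1T1H1T illustration above; a sketch, not a serious compressor. It shrinks strings with long runs and inflates strings that have no runs to exploit.

[code]
from itertools import groupby

def run_length_encode(s: str) -> str:
    # Replace each maximal run of a character by "<count><char>", e.g. "HHHHH" -> "5H".
    return "".join(f"{len(list(group))}{ch}" for ch, group in groupby(s))

print(run_length_encode("HHHHH"))   # "5H": 40% of the original length
print(run_length_encode("HHTHT"))   # "2H1T1H1T": 60% longer than the original
[/code]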

Title: Re: randomness
Post by tiber13 on Mar 3rd, 2008, 4:49pm
But I AM the product of those monkeys.....

Title: Re: randomness
Post by BenVitale on Mar 17th, 2008, 8:42am
I need to go back to this thread if you don't mind. I'm considering doing some work with randomness.

Eigenray wrote,
"Note that the largest possible value for the area of the smallest triangle is > 1/(8n^2), because taking a prime p between n and 2n, no three of the p points (x,x^2) mod p are collinear."

I don't understand this. Could you please elaborate?

Title: Re: randomness
Post by Eigenray on Mar 17th, 2008, 12:47pm

on 03/17/08 at 08:42:02, BenVitale wrote:
"Note that the largest possible value for the area of the smallest triangle is > 1/(8n^2), because taking a prime p between n and 2n, no three of the p points (x,x^2) mod p are collinear."


Suppose (x, x^2), (y, y^2), and (z, z^2) are collinear mod p.  This gives
det [ [ y-x, y^2-x^2 ] , [ z-x, z^2-x^2 ] ] = 0 mod p.
But as a function of z, this is a degree 2 polynomial, so over the field Z/p it has at most 2 solutions.  These solutions must be z = x and z = y.  So there is no third point.

Now pick n < p < 2n (Bertrand), and form a p x p grid.  From the determinant form (or Pick's theorem), any triangle formed by three points on an integral grid has area a multiple of 1/2.  So for the points (1,1), (2,4), (3,9), ..., (n, n^2) mod p, each triangle has area at least 1/2, since none have area 0.  Scaling the whole thing down to fit in a unit square, each triangle has area at least (1/2)(1/p^2) ≥ 1/(8n^2).
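
A small sanity check of this construction, assuming nothing beyond the post above: for a prime p, place the points (x, x^2 mod p) on a p x p grid, verify that the smallest triangle has area at least 1/2 (i.e. no three points are collinear), and compare the scaled-down area with the 1/(8n^2) bound, here with n = p for simplicity.

[code]
from itertools import combinations

def parabola_points(p):
    # The construction from the post: points (x, x^2 mod p) for x = 0..p-1.
    return [(x, (x * x) % p) for x in range(p)]

def min_triangle_area(points):
    best = float("inf")
    for (ax, ay), (bx, by), (cx, cy) in combinations(points, 3):
        twice_area = abs((bx - ax) * (cy - ay) - (by - ay) * (cx - ax))
        best = min(best, twice_area / 2.0)
    return best

p = 13                               # small prime, keeps the triple loop cheap
area = min_triangle_area(parabola_points(p))
print(area >= 0.5)                   # True: no triangle degenerates to area 0
print(area / p**2, 1 / (8 * p**2))   # scaled area vs the 1/(8n^2) bound
[/code]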

Title: Re: randomness
Post by BenVitale on Mar 17th, 2008, 4:15pm
Suppose (x,x^2), (y,y^2), and (z,z^2) are collinear mod p. This gives det [ [ y-x, y^2-x^2 ] , [ z-x, z^2-x^2 ] ] = 0 mod p

Can you give an example of this? I think there are some concepts here which I'm unfamiliar with.

Title: Re: randomness
Post by Eigenray on Mar 17th, 2008, 4:58pm
Three points A, B, C in the plane are collinear iff the vectors B-A and C-A are linearly dependent, which happens iff the matrix whose rows are B-A and C-A has determinant 0.
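
A tiny numeric check of that criterion (a throwaway sketch; the function name is made up):

[code]
def collinear(a, b, c):
    # Points A, B, C are collinear iff the 2x2 matrix with rows B-A, C-A has determinant 0.
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax) == 0

print(collinear((0, 0), (1, 1), (2, 2)))   # True: all three lie on y = x
print(collinear((0, 0), (1, 1), (2, 3)))   # False
[/code]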

Title: Re: randomness
Post by BenVitale on Apr 10th, 2008, 8:47pm
I have a new problem, and i would appreciate your insights:

Let u(n) be the Möbius function and let M(x) = sum_{n <= x} u(n). It can be shown that the claim that M(x) = O(x^(1/2 + e)) for every e > 0 is equivalent to the Riemann Hypothesis.

See the Wikipedia section on the growth rate of the Möbius function for more details:

http://en.wikipedia.org/wiki/Riemann_hypothesis

Now it can be shown that if K(x) represents the number of heads minus the number of tails obtained from a coin that is flipped x times, then we also have K(x) = O(x^(1/2 + e)) for every e > 0. Unfortunately, the proof for this case requires us to assume that the coin flips are random. This raises the following interesting questions.

1. Is there a fundamental difference between the Riemann Hypothesis, and the coin flipping problem?

2. Do we have to prove some sort of randomness condition for the prime numbers in order to prove the Riemann Hypothesis?

3. Is such a proof even possible?
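
As a quick numerical illustration of the comparison above (a sketch only; it proves nothing): compute M(x) with a simple Möbius sieve, simulate a coin-flip walk K(x), and look at how both compare to sqrt(x).

[code]
import random

def mobius_sieve(limit):
    # mu[n] for 1 <= n <= limit: sign from the number of distinct prime factors,
    # zeroed out for numbers divisible by a square.
    mu = [1] * (limit + 1)
    is_prime = [True] * (limit + 1)
    for p in range(2, limit + 1):
        if is_prime[p]:
            for m in range(2 * p, limit + 1, p):
                is_prime[m] = False
            for m in range(p, limit + 1, p):
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0
    return mu

limit = 10**5
mu = mobius_sieve(limit)
M = 0                        # Mertens function M(x) = sum of mu(n) for n <= x
K = 0                        # coin-flip walk: +1 for heads, -1 for tails
rng = random.Random(42)
for x in range(1, limit + 1):
    M += mu[x]
    K += rng.choice((1, -1))
    if x in (10**3, 10**4, 10**5):
        print(x, M, K, round(abs(M) / x**0.5, 3), round(abs(K) / x**0.5, 3))
[/code]

Both |M(x)|/sqrt(x) and |K(x)|/sqrt(x) stay of modest size here, but of course a finite table says nothing about the Riemann Hypothesis.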

Title: Re: randomness
Post by BenVitale on Apr 11th, 2008, 11:34am
We know that Bernhard Riemann found a vital clue. He discovered that the secrets of the primes are locked inside something called the zeta function.

Riemann worked out that if the zeros really do lie on the critical line, then the primes stray from the 1/ln(x) distribution exactly as much as a bunch of coin tosses stray from the 50:50 law. This is a startling conclusion. The primes aren't just unpredictable; they really do behave as if each prime number were picked at random with probability 1/ln(x) -- almost as if they were chosen with a weighted coin. So to some extent the primes are tamed, because we can make statistical predictions about them, just as we can about coin tosses.

Title: Re: randomness
Post by BenVitale on Apr 29th, 2008, 12:31pm
Can we develop a test to determine the randomness of certain observationally deterministic "random" sequences? For example, can we test the sequence x_n generated by x_n = (x_(n-1))^2 mod k, where x_0 is fixed and k is the product of two very large primes? (This is the Blum Blum Shub generator -- it is supposed to be a kind of deterministic random number generator.)
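
For reference, here is a minimal Blum Blum Shub sketch, with the caveat that the primes below are toy-sized placeholders (real use needs very large primes p and q, both congruent to 3 mod 4, and a seed coprime to k):

[code]
def blum_blum_shub_bits(seed, p, q, count):
    # Iterate x_n = (x_{n-1})^2 mod k and emit the least significant bit of each state.
    # p and q should be primes congruent to 3 mod 4; here they are toy-sized.
    k = p * q
    x = seed % k
    bits = []
    for _ in range(count):
        x = (x * x) % k
        bits.append(x & 1)
    return bits

# Toy parameters only -- far too small for any cryptographic use.
print(blum_blum_shub_bits(seed=3, p=11, q=23, count=20))
[/code]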

- Can deterministic random sequences really exist?

- Is there a fundamental difference between deterministic and non-deterministic random sequences?

- If deterministic random sequences are possible, then is it possible to ask math questions associated with such sequences which have non-deterministic answers (ie there is no 'yes or no' solution to the question)?

Title: Re: randomness
Post by towr on Apr 29th, 2008, 1:06pm

on 04/29/08 at 12:31:20, BenVitale wrote:
- Can deterministic random sequences really exist?
As such, no. If you know all the variables involved, then the next value is known with complete certainty. However, if underlying variables are unknown, it may be impossible to tell where in the sequence you are, or indeed if it matches any deterministic function at all.
There is no randomness, but there is a lack of knowledge that makes predictions about the sequence equally infeasible.

Title: Re: randomness
Post by BenVitale on Apr 29th, 2008, 2:29pm
Thanks for answering my first question.

What do you think of the following claim:



Quote:
Claim: Pseudorandom numbers are not really random at all. They are the result of deterministic mathematical formulae.



I don't agree with the claim that just because a sequence is given by some formula it must necessarily be non-random. Consider the following example:

Suppose we have two sets A and B. Now let us suppose that an integer N is in set A if it satisfies property X and that an integer M is in set B if it satisfies property Y. Now let us define a sequence by the following rule.

Let X_n represent the n^th positive integer that is in both sets A and B. That is, the n^th positive integer possessing both properties X and Y.

Question: If there is no relationship between the integers that satisfy property X and the integers that satisfy property Y, then will X_n necessarily have to be a random sequence?

Most pseudo-random sequences cannot be described exactly using the above scenario; however, the reason they may be random is due to something similar. For example, the Möbius function could be a deterministically random sequence, because there is no relationship between the size of n and whether n is the product of an odd or an even number of prime factors.



Title: Re: randomness
Post by towr on Apr 30th, 2008, 1:51am

on 04/29/08 at 14:29:07, BenVitale wrote:
What do you think of the following claim:
"Claim: Pseudorandom numbers are not really random at all. They are the result of deterministic mathematical formulae."

I don't agree with the claim that just because a sequence is given by some formula it must necessarily be non-random.
Anything that's 'given' is non-random. Given complete knowledge of the current state, you know with complete certainty the next state; there is nothing random about it. For there to be randomness, there must be multiple states you can reach given the current state, without complete certainty as to which you will end up in.


Quote:
Consider the following example:

Suppose we have two sets A and B. Now let us suppose that an integer N is in set A if it satisfies property X and that an integer M is in set B if it satisfies property Y. Now let us define a sequence by the following rule.

Let X_n represent the n^th positive integer that is in both sets A and B. That is, the n^th positive integer possessing both properties X and Y.

Question: If there is no relationship between the integers that satisfy property X and the integers that satisfy property Y, then will X_n necessarily have to be a random sequence?
No. There is nothing random about it; every X_n is set in stone from the moment X and Y are defined, for all n, for all eternity. If you don't know X and Y the sequence may seem random, but appearances aren't everything in life.
A random process is never perfectly predictable; deterministic processes are, given complete knowledge. Randomness and determinism are by definition antithetical.

Title: Re: randomness
Post by BenVitale on Apr 30th, 2008, 8:04pm
Let us hypothetically suppose that there do exist high-quality PRNGs (pseudorandom number generators) that are indistinguishable from a truly random sequence when the algorithms that generate them are unknown. If we compare such a sequence to, say, a sequence that was determined by coin flips that have already taken place, then what exactly can we say is the difference between the two, if indeed there is no mathematical difference?

To me it makes no sense that we still must consider them as different just because one of them could be predicted if only we had the right information?

PS: This debate sorta feels like the one Einstein had with Bohr back when they were debating the validity of Quantum Physics, and Einstein was arguing that just because we lack information regarding the photon doesn't mean that the photon doesn't have that information, and thus QM must be a deterministic process.

Title: Re: randomness
Post by towr on May 1st, 2008, 1:07am

on 04/30/08 at 20:04:48, BenVitale wrote:
Let us hypothetically suppose that there do exist high-quality PRNGs (pseudorandom number generators) that are indistinguishable from a truly random sequence when the algorithms that generate them are unknown. If we compare such a sequence to, say, a sequence that was determined by coin flips that have already taken place, then what exactly can we say is the difference between the two, if indeed there is no mathematical difference?
Here we come to the distinction that it's really processes that are random, not sequences.
Any sequence may have been generated by a random process. If you write each number in binary and concatenate them all together -- say you get K bits -- then flipping a coin had a 1/2^K probability of generating that same sequence. Of course, from that point onwards the sequences have a high probability of diverging if either generating process is random.


Quote:
To me it makes no sense that we still must consider them as different just because one of them could be predicted if only we had the right information?
They're distinct logical possibilities.
And as far as PRNGs on computers go, due to limited resources they inevitably repeat themselves with a fixed period (which may be unimaginably long, but still). And if you start them from the same seed, they always give the same sequence.
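
The "same seed, same sequence" point is easy to demonstrate with any standard PRNG; a minimal Python sketch (the Mersenne Twister behind the random module, not a cryptographic generator):

[code]
import random

def draw(seed, count=5):
    rng = random.Random(seed)        # the seed fixes the entire future sequence
    return [rng.randint(0, 9) for _ in range(count)]

print(draw(12345))
print(draw(12345))   # identical to the line above: same seed, same sequence
print(draw(54321))   # different seed, different (but equally determined) sequence
[/code]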


Quote:
PS: This debate sorta feels like the one Einstein had with Bohr back when they were debating the validity of Quantum Physics, and Einstein was arguing that just because we lack information regarding the photon doesn't mean that the photon doesn't have that information, and thus QM must be a deterministic process.
The EPR experiment is generally considered to have ruled out (local) hidden variables.
However, if QM really were deterministic after all, then quantum computing would be a bust. You couldn't use superposition to calculate all possibilities at once, because in reality you'd only be working through one possibility.

Title: Re: randomness
Post by BenVitale on May 9th, 2008, 9:04am
I visited "Exploring Randomness" by G. J. Chaitin, IBM Research:

http://www.cs.auckland.ac.nz/~.....index.html

Does Chaitin really consider that many problems in mathematics (e.g. the Riemann Hypothesis) are just probabilistic accidents and don't have any proofs?

Title: Re: randomness
Post by BenVitale on May 9th, 2008, 10:24am
If we consider the sequence 0, 1, 0, 1, 0, 1, ... (0 at even places, 1 at odd ones), is it random? It does have one property of a random sequence of 0's and 1's: as N -> oo, the proportion of 0's in the first N terms approaches 50%.
But now you may say: no, this sequence is not random, because if we consider consecutive pairs of terms, the pairs 0,0 and 1,1 never appear, whereas in a random sequence each of the four pairs should appear in a 25% proportion.

The same objection can be raised against the Möbius sequence: although the prime number theorem does tell us that the proportion of 1's equals the proportion of -1's, if you consider pairs of terms of the form Mu(n), Mu(2n), they can never be equal. In a truly random sequence taking finitely many values (e.g. 0, 1, -1), the terms a(n) and a(2n) should be equal infinitely many times (i.e. the probability of this event is 1).

A similar objection can be raised against any pseudorandom sequence: some randomness properties hold and others don't. A truly random sequence must satisfy every property you would expect of a random sequence.
Of course, when it comes to finite sequences, there is no mathematical distinction between random and pseudorandom, unless you consider infinite families of finite sequences.
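
The pair-frequency objection is easy to check mechanically; a small sketch, assuming the sequences are given as 0/1 strings: count overlapping adjacent pairs and compare with the 25% each that a fair random source would give.

[code]
from collections import Counter
import random

def pair_frequencies(s):
    # Relative frequencies of overlapping adjacent pairs; ~25% each for a fair 0/1 source.
    pairs = Counter(s[i:i + 2] for i in range(len(s) - 1))
    total = sum(pairs.values())
    return {p: round(c / total, 3) for p, c in sorted(pairs.items())}

alternating = "01" * 5000
rng = random.Random(7)
pseudo = "".join(rng.choice("01") for _ in range(10000))

print(pair_frequencies(alternating))   # only "01" and "10" ever occur
print(pair_frequencies(pseudo))        # all four pairs come out near 0.25
[/code]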

Title: Re: randomness
Post by BenVitale on May 9th, 2008, 11:19am
I have "randomness" on my mind. Sorry for posting these different messages w/o waiting for a reply first. My next questions are:

What properties of randomness must a sequence have in order for a math question regarding that sequence to be considered non-deterministic? Or, perhaps more generally, is it possible to ask questions regarding sequences that aren't completely random which have non-deterministic answers?

What conditions need to be met in order for a math problem to be considered non-deterministic?


Title: Re: randomness
Post by BenVitale on May 20th, 2008, 11:09am
Suppose we have a sequence of numbers where X% of the numbers are randomly generated, while (100-X)% of them are not randomly generated (e.g. they are determined by some specific formula).

Without trying to figure out what this formula is, is there a way to determine X?

Title: Re: randomness
Post by pex on May 20th, 2008, 2:28pm

on 05/20/08 at 11:09:14, BenVitale wrote:
Suppose we have a sequence of numbers where X% of the numbers are randomly generated, while (100-X)% of them are not randomly generated (e.g. they are determined by some specific formula).

Without trying to figure out what this formula is, is there a way to determine X?

I'd answer no.

From the point of view of someone who only observes the sequence, isn't a number coming from some unknown (and probably intricate) deterministic process pretty much the same as a number coming from some random process? If yes, how are we going to tell these numbers apart?

Title: Re: randomness
Post by rmsgrey on May 21st, 2008, 4:56am
If you know what the formula is (or rather, what values it would produce for each entry), then you can derive an approximate value for X - either by assuming that the matching values are all generated by the formula (suitable when there's a large range of possible numbers, so a low probability of a coincidence), or by assuming that the non-matching values represent (N-1)/N of the random values, where N is the number of possible values, and calculating X on that basis.
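
The second approach can be written out as a small estimator, under the stated assumptions (you know what the formula would have produced at each position, and the random values are uniform over N possibilities): random entries disagree with the formula with probability (N-1)/N, so the mismatch rate is about X·(N-1)/N and we can solve for X (written here as a fraction rather than a percentage). The example data below are made up.

[code]
import random

def estimate_random_fraction(observed, formula_values, n_possible):
    # Fraction of positions where the observed value disagrees with the formula.
    mismatches = sum(o != f for o, f in zip(observed, formula_values))
    mismatch_rate = mismatches / len(observed)
    # mismatch_rate ~= X * (N-1)/N, so solve for X and clamp to [0, 1].
    x = mismatch_rate * n_possible / (n_possible - 1)
    return max(0.0, min(1.0, x))

# Hypothetical example: formula is f(i) = i mod 10; 30% of entries replaced by uniform digits.
rng = random.Random(0)
formula = [i % 10 for i in range(10000)]
observed = [rng.randrange(10) if rng.random() < 0.3 else v for v in formula]
print(estimate_random_fraction(observed, formula, 10))   # close to 0.3
[/code]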


