wu :: forums (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi)
riddles >> easy >> Interesting Number
(Message started by: Sameer on Apr 15th, 2004, 4:39pm)

Title: Interesting Number
Post by Sameer on Apr 15th, 2004, 4:39pm
OK, this one has been bothering me for a while:

Consider a number x = n*frac(n*sqrt(2)), where sqrt() stands for square root and frac() for fractional part, and n is any positive integer. What is the largest real number smaller than any x? For what values of n?

Title: Re: Interesting Number
Post by John_Gaughan on Apr 15th, 2004, 9:10pm
irrational fractions... I ate a bee and now my mouth tastes like burning!

Is x [smiley=bbr.gif] or [smiley=bbz.gif]?

Title: Re: Interesting Number
Post by evergreena3 on Apr 15th, 2004, 9:18pm
John,

I thought it was the poisonous berries that tasted like burning.  Darn, that is the last time I trust a monkey-butler to do anything!


EverGreenA3

Title: Re: Interesting Number
Post by towr on Apr 16th, 2004, 12:52am
hmmm.. Maybe we could do something with the continued fraction?

Of course since every x is a real number, there can't really be said to be a largest real number smaller than any x. (just like there isn't a largest number smaller than 1, which would be 0.999... :P)

Title: Re: Interesting Number
Post by Barukh on Apr 16th, 2004, 4:12am

on 04/16/04 at 00:52:04, towr wrote:
Of course since every x is a real number, there can't really be said to be a largest real number smaller than any x.

I think in this case this may be said. I would say the answer is [smiley=square.gif][hide]0[/hide][smiley=square.gif], although I haven't proved it yet. The approach is [hide]to use the fact that the sequence {frac(n*sqrt(2))} is uniformly distributed in (0,1).[/hide]

Title: Re: Interesting Number
Post by John_Gaughan on Apr 16th, 2004, 5:55am
When I first read it, the problem bugged me because you can get arbitrarily close to a real number x. As Barukh hinted at, you can make just about any number because the stuff in parens is uniformly distributed on (0,1), which effectively scales the result to (0,n). So we have a number x on the interval (0,n), and we are trying to find the next smaller [smiley=bbr.gif] than x. Like towr said, this is not possible.

Title: Re: Interesting Number
Post by John_Gaughan on Apr 16th, 2004, 6:00am
I wonder if this problem reduces to maximizing frac(n*sqrt(2)) so it is arbitrarily close to 1... I think zero might be a good choice too, but that is too obvious :-)

Title: Re: Interesting Number
Post by asterix on Apr 16th, 2004, 7:16am
I tried running a simple program to check it out. For some reason, whenever x < 1, it always lands close to either .35 or .70. As n gets larger, I believe those two values are converging to some limits, but I'm not sure what they are or why. Here are the answers that hit .35
n; x
5; .355339
29; .3536059
169; .3535549
985; .3535534
5741; .35355339
33461; .353553390566
195025; .353553388
1136689; .35355332
6625109; .3535523280188499
Each time, n increases by a factor of about 5.828427. Figure out what that number has to do with the square root of 2, and you'll find the answer.

Title: Re: Interesting Number
Post by asterix on Apr 16th, 2004, 7:17am
I'm sorry, I was reading the problem wrong. I was looking for the maximum number smaller than the smallest x. I'll try again.

Title: Re: Interesting Number
Post by asterix on Apr 16th, 2004, 7:37am
No, I think I was right after all. The next n comes at 38613965 and x is .3534395944.
Then n= 225058681; x=.3537. So for the first time x increases. Then n goes beyond the capabilities of my program.

Title: Re: Interesting Number
Post by asterix on Apr 16th, 2004, 8:07am
And of course I know where the 5.828 comes from. It's the inverse of the fractional portion of the square root of two, squared.

Title: Re: Interesting Number
Post by Sameer on Apr 16th, 2004, 9:42am
Hmm, I actually agree that you cannot find a number greater or smaller than a real number. So let's reword it to ask for a bound for x (on the lower side, probably expressible in terms of sqrt(2)).

asterix is on the right track... Barukh you are right about that frac thing, I think it is Weyl's distribution or something.

Title: Re: Interesting Number
Post by Icarus on Apr 16th, 2004, 3:46pm
The wording "Largest real number smaller than every x" would be alright. The only problem in the original is the need to interpret "any" to mean "every". To be more exact, we are looking for the greatest lower bound of the set
{ n(n[surd]2 - [lfloor]n[surd]2[rfloor]) | n [in] [bbn] }

Title: Re: Interesting Number
Post by asterix on Apr 16th, 2004, 3:57pm
I think I can tell you where the .353553 comes from.
It's 1/(2*sqrt(2)). But if that's supposed to be the lower bound, then a couple of the answers come in just a hair lower than they're supposed to. And I still don't know why that bound exists.

Title: Re: Interesting Number
Post by towr on Apr 17th, 2004, 6:33am
I assume you calculate those numbers by computer or calculator, so beware of rounding error..

Title: Re: Interesting Number
Post by Barukh on Apr 17th, 2004, 8:38am
That's what happens when one is too fast and tries to guess without a proof - my conjecture is wrong  >:(

Asterix, you are right - the lower bound is 1/sqrt(8), and it's exact (really nice job, indeed!)
towr, you are right - the solution has something to do with continued fractions.

I believe I've got a proof for this problem and some generalizations - I will try to summarize it in a separate post. Also, I will write about the computational aspect of the problem.

A wonderful puzzle, Sameer!  :D

Title: Re: Interesting Number
Post by Icarus on Apr 17th, 2004, 9:20am
Is 8^(-1/2) the actual greatest lower bound, or is it the lowest accumulation point?

Title: Re: Interesting Number
Post by Sir Col on Apr 17th, 2004, 10:48am
I've not found a connection yet, but the continued fraction for 1/sqrt(8) is [0; 2, (1, 4)] (the pair 1, 4 repeating).

So the sequence of convergents would be:
1/2, 1/3, 5/14, 6/17, 29/82, 35/99, 169/478, ...

After the 2nd convergent all further terms are given by pn/qn = (a*pn-1+pn-2)/(a*qn-1+qn-2), where a alternates between 4 and 1; equivalently, taking every other convergent, pn = 6pn-2–pn-4 (and likewise for qn).

Trying to link this with the sequence that Asterix found I noticed that his list of optimal values of n: 5, 29, 169, 985, 5741, ..., is given by the second order recurrence relation, un=6un-1–un-2.

For example, the next term, u6=6*5741–985=33461.

Not sure how to connect these yet?!

Title: Re: Interesting Number
Post by Barukh on Apr 18th, 2004, 9:29am
Here's the solution I came up with.

[smiley=blacksquare.gif]
Instead of the function f(n) given in the problem, let's look at another one: F(n) = f(n)·[sqrt]2 = n[sqrt]2·{n[sqrt]2} ({x} means fractional part).

For every n, find the smallest positive integer r such that

2n^2 – r = m^2, a perfect square   (*).

Then we have F(n) = [sqrt](m^2 + r)·{[sqrt](m^2 + r)} = m^2 + r – m[sqrt](m^2 + r) (why?).

I then needed the following two statements to proceed (I left them invisible if somebody wants to try them as exercises):

1. For any n and the corresponding r, F(n) > r/2.
[smiley=square.gif] [hide]
It is clear that m^2 + r < (m + r/2m)^2, so F(n) > m^2 + r – m(m + r/2m) = r/2.
[/hide] [smiley=square.gif]

2. If, for a fixed r, there are infinitely many n's, then F(n) may be made as close to r/2 as we please.
[smiley=square.gif] [hide]
Because of the way r is chosen, we have r <= 2m. Also, if there are infinitely many n's, there must be infinitely many m's, as large as we please, and the claim follows from: F(n) < (m + r/2m)·{m + r/2m} = (m + r/2m)·r/2m = r/2 + (r/2m)^2.
[/hide] [smiley=square.gif]

Now it becomes clear that to find the greatest lower bound for F(n), we need to find the smallest possible r, and hope it will give infinitely many values of n. If we look closer at equation (*), we recognize in it the general Pell equation for d=2, which was discussed in the x^2 - 61y^2 = 1 (http://www.ocf.berkeley.edu/~wwu/cgi-bin/yabb/YaBB.cgi?board=riddles_medium;action=display;num=1080900827) thread. It is known that infinite sequences for certain r are to be found among the convergents of the continued fraction (CF) of [sqrt]2 – as towr conjectured. In this case, the smallest value r=1 is attainable, because the CF for [sqrt]2 has an odd period. The numbers 5, 29, 169, 985, 5741 etc. found by asterix are just the denominators of the 3rd, 5th, 7th, 9th etc. convergents of [sqrt]2 – the ones corresponding to r=1 – and there are infinitely many of them.

So we get that F(n) has greatest lower bound (glb) 1/2, and therefore f(n) has glb 1/(2[sqrt]2). This glb – according to Statement 2 above – is exact, answering Icarus's question.
[smiley=blacksquare.gif]

It took me too long to write this one, I'll consider the computational aspect in a separate post…

Title: Re: Interesting Number
Post by Barukh on Apr 19th, 2004, 5:54am
OK, the computational aspect...   ;D

This problem shows the limitations of the floating point operations executed on modern processors. In fact, an erroneous result I got in one of my simulations was one of the reasons for my false first conjecture.

Consider the following C code compiled for an x86 processor:


Code:
#include <math.h>

/* f(n) = n * frac(n * sqrt(2)), computed in plain double precision */
double f(unsigned n)
{
    double r = (double)n * sqrt(2.0);
    return (double)n * (r - floor(r));
}


Run it with n = 93222358. You will get a surprising 0.000... If, instead, you evaluate it using the Microsoft Calculator (MC), you will get 93222357.646446609… The reason is that double precision provides only about 15 correct significant decimal digits, while MC works with at least 32 (according to its help topic).

The same thing, IMHO, happened with asterix’s estimates. For large n, the error is too big to show the right convergence. The following are the relevant values of F(n) computed with MC (they differ from asterix’s values by a [sqrt]2 factor, but are in a way more spectacular):

1136689    0.500000000000048..
6625109    0.5000000000000014..
38613965   0.500000000000000045..

By the way, this raises the following interesting question: how does MC handle its level of accuracy (32 digits)?  ???

Title: Re: Interesting Number
Post by towr on Apr 19th, 2004, 6:10am
You can use long doubles in C/C++

And in many math programs you can set the precision at any arbitrary number.. (though if you set it too high you may lack the necessary memory)

Title: Re: Interesting Number
Post by Barukh on Apr 22nd, 2004, 5:24am

on 04/19/04 at 06:10:42, towr wrote:
You can use long doubles in C/C++

Unfortunately, that won't help much - at most 3 additional digits (and some compilers won't do even that).


Quote:
And in many math programs you can set the precision at any arbitrary number.. (though if you set it too high you may lack the necessary memory)

Have you got any particular package to recommend?

Title: Re: Interesting Number
Post by towr on Apr 22nd, 2004, 5:54am

on 04/22/04 at 05:24:27, Barukh wrote:
Unfortunately, that won't help much - at most 3 additional digits (and some compilers won't do even that).
Are you sure? It should double the precision afaik..
[edit]I've tried it out on all servers/computers I have available, and for floats, doubles, and long doubles they give 31, 62, and 72 digits of precision respectively. However, they won't all print all of the bits. (To find out how many bits of precision they actually used, I took a fraction (1/271) and repeatedly multiplied by two till there was no fractional part left.)[/edit]


Quote:
Have you got any particular package to recommend?
I use Derive at home, but Mathematica, Maple and Matlab are probably all better.. None of them is free though (unless you're into pirating software).
I also have Maxima, which is an open source program, but I haven't used it much, so I can't really say how good it is.

Title: Re: Interesting Number
Post by Sameer on Apr 22nd, 2004, 7:13am
OK, so if you are really good at programming you can make your own data structure, for example a linked list holding however many digits of precision you want. Writing a simple header file for this data structure will let you use as much precision as you want.

Title: Re: Interesting Number
Post by towr on Apr 22nd, 2004, 7:42am
There are ready-made packages for arbitrary precision computing available. So you don't have to reinvent the wheel unless you want to.

for instance there's the GNU multiple precision arithmetic library (http://www.gnu.org/software/gmp/manual/html_mono/gmp.html)

Title: Re: Interesting Number
Post by Barukh on Apr 22nd, 2004, 9:03am

on 04/22/04 at 05:54:22, towr wrote:
Are you sure? It should double the precision afaik..

I've tried it out on all servers/computers I have available, and for floats, doubles, long doubles they all give 31, 62, and 72 digits precision respectively. However, they won't all print all of the bits.

Are these binary or decimal digits? If binary, then the gain for long double is 10 binary ≈ 3 decimal digits – as I said.

Also, could you please specify what hardware you are using (I mean, the processor)? All I said is relevant for the x86 processor.


on 04/22/04 at 07:42:33, towr wrote:
for instance there's the GNU multiple precision arithmetic library (http://www.gnu.org/software/gmp/manual/html_mono/gmp.html)

I was looking for something like that – I like to use them in my own programs.


Title: Re: Interesting Number
Post by asterix on Apr 22nd, 2004, 9:51am
My own figures came from a very old powerbasic program, using "extended precision" 80 bit floating point math. So, roughly how many digits of my answers would have actually been correct?

Title: Re: Interesting Number
Post by towr on Apr 22nd, 2004, 10:19am

on 04/22/04 at 09:03:21, Barukh wrote:
Are these binary or decimal digits? If binary, then addition for long double is 10 binary ~ 3 decimal digits – as I said.
Binary, I thought you meant binary also.. But 3 decimal digits is worth the trouble if you use them right..


Quote:
Also, could you please specify what hardware are you using (I mean, the processor). All I said is relevant for X86 processor.
I only know what my own computer has; it's a Celeron 333MHz. The high performance cluster is probably Pentiums, but I don't know about the servers.. (I'd have tried a Cray too, but it doesn't have a C compiler)


on 04/22/04 at 09:51:15, asterix wrote:
My own figures came from a very old powerbasic program, using "extended precision" 80 bit floating point math. So, roughly how many digits of my answers would have actually been correct?
That depends on how you do your calculations. But as 80 bits is about 24 decimals, certainly no more than that. (You can easily lose over half of that precision in certain calculations, though.)

Title: Re: Interesting Number
Post by asterix on Apr 22nd, 2004, 10:56am
Okay, counting back from the largest intermediate figure I was working with, when n was 6625109, the first digit of my answer required 15 digits of precision, and the error that dropped it below the expected answer showed up at 20 digits of precision.

Title: Re: Interesting Number
Post by Barukh on Apr 23rd, 2004, 3:47am
It seems the discussion has smoothly transitioned to the computational aspect – does that mean everybody is convinced by the theoretical part?  ;)


on 04/22/04 at 10:19:39, towr wrote:
…But as 80 bits is about 24 decimals, certainly no more than that.

In fact, it’s less than that. 80 bits is the whole storage for the extended-precision double (which I believe is the best that is done for long doubles); it comprises a sign bit, the exponent (15 bits) and the mantissa (64 bits). The latter defines the accuracy: about 19 decimal digits guaranteed.

For a simple double, the exponent is 11 bits and the mantissa 53; for a float – 8 and 24 respectively.


on 04/22/04 at 05:54:22, towr wrote:
To find out how many bits precision they actually used, I took a fraction (1/271) and repeatedly multiplied by two till there was no fractional part left

You cannot use this method to determine the precision: when multiplying/dividing by 2, the mantissa does not change, only the exponent. The result you get actually depends on the denominator you use (try, for instance, 19 instead of 271).

Instead, take a periodic fraction you know the period of, and print as many digits as possible. Here’s what I got for 1/7 (compiled with gcc):

float: 0.142857149243354797363281  
double: 0.142857142857142849212693
long double: 0.142857142857142857140921  

that is, 8, 16 and 20 decimal places.

Title: Re: Interesting Number
Post by towr on Apr 23rd, 2004, 4:01am

on 04/23/04 at 03:47:51, Barukh wrote:
You cannot use this method to determine the precision: when multipying/dividing by 2, the mantissa does not change, only the exponent.
Depends on how you do it. You can easily remove the exponent, and keep the mantissa between 0 and 1, multiply by two, and if it's greater than 1 subtract 1. Repeat till you get zero and you'll know the precision.

printing as many digits as you can doesn't work if the compiler won't let you do that..

Title: Re: Interesting Number
Post by Barukh on Apr 23rd, 2004, 6:54am

on 04/23/04 at 04:01:29, towr wrote:
You can easily remove the exponent, and keep the mantissa between 0 and 1, multiply by two, and if it's greater than 1 subtract 1. Repeat till you get zero and you'll know the precision.

In my last post, I described the internal representation of a floating-point number (that is, inside the processor). The programmer cannot play with it – and neither can the compiler. That doesn’t mean you cannot make the exponent equal to 0, but that certainly doesn't happen for numbers from the interval (0,1).

To make it work your way, you would need to take the number 2^(-big_number)·(1 – 2^(-another_big_number)).


Quote:
printing as many digits as you can doesn't work if the compiler won't let you do that..

??? I’ve never seen a compiler that won’t let you print. The only two cases I’ve seen were: it prints garbage at the end, or it prints zeros.

Title: Re: Interesting Number
Post by towr on Apr 24th, 2004, 5:29am
Look, if my method worked as you say, I'd have gotten a number in the range of 300 (2^10). So obviously it does something right. The one mistake I did make is that if the mantissa doesn't start and end with a 1 (binary), the count will be off by a few digits.

Here's an example for 4 bits precision

mantissa   exponent
1xy1        0000        {<1, => *2}
1xy1        0001        {>1, => -1}
xy10        0000        {*2}
xy10        0001        {-x}
y100        0000        {*2}
y100        0001        {-y}
1000        0000        {*2}
1000        0001        {-1}
0000        0000        { ==0 }
so 4 bits, because we have to repeat the operation 4 times..

Also one of the compilers didn't let me print as many digits as I wanted, so that's not some myth..



Powered by YaBB 1 Gold - SP 1.4!
Forum software copyright © 2000-2004 Yet another Bulletin Board