wu :: forums » riddles » general problem-solving / chatting / whatever
Topic: Randomized Algorithms  (Read 4903 times)
william wu (wu::riddles Administrator)
Randomized Algorithms
« on: Sep 28th, 2002, 6:48pm »

So, I'm currently taking a randomized algorithms course, and we just learned the LazySelect algorithm, which selects the kth smallest element in a list in linear time. My question is: does there exist a deterministic algorithm for finding the kth smallest element in a list in linear time? (Simply sorting the list and then counting up to the kth element has a running time of n log n, which isn't as good as linear.) Is there a reason to believe that such a linear-time algorithm does not exist?
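 
For concreteness, here is a minimal randomized quickselect sketch in Python (not LazySelect itself, just the simplest randomized selection routine, included purely as an illustration): expected linear time, but quadratic in an unlucky run.
 
Code:

import random

def quickselect(a, k):
    # Return the k-th smallest element of a (k counts from 1).
    # Expected running time is linear; a very unlucky run is quadratic.
    a = list(a)
    while True:
        pivot = random.choice(a)                  # the only random choice
        lows = [x for x in a if x < pivot]
        highs = [x for x in a if x > pivot]
        n_equal = len(a) - len(lows) - len(highs)
        if k <= len(lows):
            a = lows
        elif k <= len(lows) + n_equal:
            return pivot
        else:
            k -= len(lows) + n_equal
            a = highs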
 
More generally, I'm very impressed by these randomized algorithms, but I don't understand why they can beat so many deterministic ones ... Dr. Karp is just presenting randomized algorithms and proving their slick running times, but we haven't gotten into the essence of why randomness can be so powerful. Intuitively, we'd want to make all our programs deterministic. But then some guy (who?) decided, hey, what if we just made them nondeterministic? It seems like a tremendous leap of genius. What was that guy thinking?
 
Suppose I propose the following conjecture:
 
For every nondeterministic algorithm, there exists a deterministic one that can achieve the same results in the same running time.
 
Do we already know of a reason for why this conjecture could not be true?
 
 
P.S. Here are some lecture notes about the LazySelect algorithm if you're interested: www.cs.berkeley.edu/~jfc/cs174/lecs/lec6/lec6.pdf
« Last Edit: Sep 28th, 2002, 7:31pm by william wu »


[ wu ] : http://wuriddles.com / http://forums.wuriddles.com
Pietro K.C. (Full Member)
Re: Randomized Algorithms
« Reply #1 on: Sep 28th, 2002, 8:55pm »

  For the more philosophical question first, here's what I think.
 
   You have the impression that nondeterminism (ND) is powerful compared to determinism (D). Another way to look at this is that D is not powerful; call the opposite of powerful "feeble", so let's say D is feeble compared to ND. Now why might this be?
 
   Let's see how you examine D algorithms. Given a D algorithm A, you usually try to calculate, for a given input size n, what is the longest that the program will run. Worst-case analysis, that's what most of elementary complexity theory is about. So when you say that quicksort is O(n²), you mean that, for all input sizes larger than some n₀, at least one input of that size will cause quicksort to run for at least C·n² steps, for some real constant C which does not depend on n. This is all very elementary, and has been hammered on our heads (does this expression exist in English?) countless times in countless CS courses, but I think it is the heart of the matter.
 
   Because, when you evaluate ND complexity, you are talking about average-case running time. That means you examine the possible paths towards an answer, calculate their respective running times and probabilities of actually happening, and find the expected running time. You no doubt have noticed that there always exists a sequence of random choices that causes the algorithm to run forever, so the worst case is not even defined here.
 
   If you do something similar to D algorithms, like suppose all inputs of size n are equally likely, and then compute the expected running time by averaging, they turn out to be much better! It just so happens that the average-case running time of quicksort is not O(n²), but O(n log n).
 
   So this powerful-feeble comparison might be considered unfair, although I feel that it is downright inconsistent. Randomized algorithms do NOT give mathematical certainty of their correctness; they do not take into account ALL the information that is contained in an input string, just try to guess at a small relevant set. Hence the smaller running time.
 
   But they really are a good idea, although maybe it wasn't such a leap of genius. Maybe someone, somewhere really just needed correct answers MOST of the time, and the trouble of a D algorithm (and the time required to take ALL the information into account) seemed pointless to them. I also don't know who invented ND.
 
   As for your conjecture, I have a feeling that it is false, and further, that a simple example will disprove it. Maybe there will be some O(n) randomized sorting algorithm, or something like that. The conjecture seems too strong; if you loosened it a bit, like saying that every problem solvable by an average-P algorithm is also solvable by one in P...
 
   I'm still thinking (haven't had this sort of ND in university yet). :(
 
   Now for the more easily-answered question.
 
   Yes, there does in fact exist a deterministic linear-time algorithm for finding the k-th smallest element in a list. It's actually a bit complicated; it involves pivoting, like in quicksort, and also some statistics. The lecture notes in the link actually mention this - I think any of your teachers will be able to specify the details, which are a bit lengthy, but nevertheless beautiful and surprising.
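 
   A rough Python sketch of that deterministic approach (the median-of-medians idea; this is my own illustration of the shape of it, not the exact algorithm from the notes) looks like this:
 
Code:

def select(a, k):
    # Deterministic selection: return the k-th smallest element of a
    # (k counts from 1). Worst-case linear time, by the standard argument.
    a = list(a)
    if len(a) <= 5:
        return sorted(a)[k - 1]
    # median of each group of 5 elements ...
    groups = [sorted(a[i:i + 5]) for i in range(0, len(a), 5)]
    medians = [g[len(g) // 2] for g in groups]
    # ... then recursively take the median of those medians as the pivot
    pivot = select(medians, (len(medians) + 1) // 2)
    lows = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    if k <= len(lows):
        return select(lows, k)
    if k <= len(lows) + len(equal):
        return pivot
    highs = [x for x in a if x > pivot]
    return select(highs, k - len(lows) - len(equal))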

"I always wondered about the meaning of life. So I looked it up in the dictionary under 'L' and there it was --- the meaning of life. It was not what I expected." (Dogbert)
TimMann (Senior Riddler)
Re: Randomized Algorithms
« Reply #2 on: Oct 12th, 2002, 2:26pm »

A lot of good stuff in that answer, but I have some corrections and additions.
 
A terminology point: don't use the phrase "non-deterministic" for randomized algorithms. A non-deterministic algorithm is the kind that a non-deterministic Turing machine runs: try all possible answers in parallel and return when you've shown that one is correct; or equivalently, magically guess the right answer in 0 time, then charge yourself only for the time it takes to verify that the answer is right. Obviously ND Turing machines aren't physically realizable; they're used in complexity theory to define the class NP.
 
on Sep 28th, 2002, 8:55pm, Pietro K.C. wrote:
Randomized algorithms do NOT give mathematical certainty of their correctness; they do not take into account ALL the information that is contained in an input string, just try to guess at a small relevant set. Hence the smaller running time.

That's incorrect. It's true that some randomized algorithms have a nonzero probability of being wrong: for example, randomized primality testing is like that. If the algorithm says "nonprime", it is certainly correct; if it says "prime", there is some probability that it is wrong, but you can make the probability as small as you like (though never 0) by running more rounds of the algorithm. Other randomized algorithms always give a correct answer, but may have a very large (even unbounded) worst-case running time, as you allude to elsewhere in your post. There's also at least one other type that I'll get to below.
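 
To make that one-sided error concrete, here is a sketch of the standard randomized primality test (Miller-Rabin) in Python; this particular code is just an illustration, not something from the course. A "composite" answer is always correct, and each extra round cuts the probability of a wrong "prime" answer by at least a factor of four.
 
Code:

import random

def is_probably_prime(n, rounds=20):
    # Miller-Rabin: "False" is certain; "True" is wrong with
    # probability at most 4**(-rounds) for composite n.
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2**s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # a witness: n is definitely composite
    return True               # probably prime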
 
The real power of randomization, and the justification for evaluating randomized algorithms by the average-case rather than worst-case running time, is that the running time of a randomized algorithm depends only on how lucky you were in the random choices you made while running the algorithm, not on the input data. (Well, with some algorithms the input data may be a factor, but then you evaluate the algorithm according to the average-case running time on a worst-case input.)
 
Average-case analysis is of limited value when applied to a deterministic algorithm, because you may not know the distribution of inputs that the algorithm will see in a given application. For instance, there are some sorting algorithms for which already-sorted data is a worst case. In a real-world application of sorting, it's quite possible that you'll often be sorting data that some helpful person already sorted once before. Or if you have an adversary who is trying to make your system appear as slow as possible and who has control over the inputs it's fed, he can choose to feed in a worst-case input every time. Or your algorithm might be used as part of a larger system, and just as a result of Murphy's Law, it may turn out that the part of the system that supplies input to your algorithm was inadvertently designed with the property that the outputs it generates tend to be much worse than average-case inputs for your algorithm.
 
But with a randomized algorithm, there is no way to force bad performance by feeding in specific input data. So the average performance is a useful measure of the goodness of the algorithm even if you know nothing about the distribution of inputs it will receive.
 
This leads to one very simple kind of randomized algorithm, whose structure is basically (1) randomly perturb the input data, (2) apply a deterministic algorithm, (3) undo step 1 (if needed). As a trivial example, you might randomly shuffle your input data before sorting it. This kind of algorithm is useful exactly when you have a deterministic algorithm with good average case performance on random inputs, but bad worst-case performance, and you don't know what the distribution of input data will be. Adding in step (1) ensures that the input to step (2) is random, so its average-case performance becomes a useful measure.
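 
As a deliberately naive Python sketch of that structure (my own toy example, not a recommendation): quicksort with a fixed first-element pivot has already-sorted input as its worst case, and a shuffle in front removes that weakness.
 
Code:

import random

def quicksort(a):
    # Deterministic quicksort with first-element pivot: already-sorted
    # input drives it to its quadratic worst case.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    return (quicksort([x for x in rest if x < pivot])
            + [pivot]
            + quicksort([x for x in rest if x >= pivot]))

def randomized_quicksort(a):
    # Step (1): randomly perturb the input.  Step (2): run the deterministic
    # algorithm.  Step (3) is a no-op here, since sorting discards input order.
    a = list(a)
    random.shuffle(a)
    return quicksort(a)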
« Last Edit: Oct 12th, 2002, 2:35pm by TimMann »

http://tim-mann.org/
Pietro K.C. (Full Member)
Re: Randomized Algorithms
« Reply #3 on: Oct 13th, 2002, 10:47pm »

Quote:
don't use the phrase "non-deterministic" for randomized algorithms

 
   Yeah, I know, but as Will asked why nondeterminism is more powerful, I kind of got carried away with this little abuse. :)
 
   Sorry if I said too much nonsense; it's just that I've never delved into that subject too deeply (I've only studied the official nondeterminism, not randomness). My opinions are based on the few examples of randomized algorithms (RA's) I've seen here: there is LazySelect, and an RA for 2-SAT somewhere in the CS forum. Well, here's some more nonsense. ;)
 
   I don't understand some of your counterarguments, Tim. You said the RA for primality testing has a nonzero probability of being wrong in saying an input is prime, and you agreed with my qualitative explanation that it tries to guess (in a smart way) at a small (in the CS sense) relevant subset of the total information contained in the encoding of the number.
 
   And you are quite right in saying there are RA's that give 100% assurance of the correctness of their responses. I missed that entirely, and the 2SAT RA is just an example of that.
 
   But even in that case, I would argue that the good average-case running time, which is caused by the sizeable proportion of good running time inputs, is an effect of not taking into account all the information. Like the 2SAT RA, which randomly selects a variable and switches its value: since it is easy (polynomial) to check the truth-value of a boolean formula if we know the values of the variables, it is a good strategy to switch a random one and see what happens. If by chance the RA finds a valid assignment (and that might happen quickly), it did not at any time take into account the information contained in the 2SAT instance's structure (the connections between clauses, etc, which are not "small"). By its construction, even if it runs for very long, it will not "know" any more about the particular instance than it did at first, in that it will not start making cleverer guesses of which variables to switch values. But on the average it's quick because, usually, a whole bunch of different assignments will solve a 2SAT instance.
 
   Note that I'm talking from an "in principle" point of view; of course, the randomize-to-avoid-worst-case example is valid, but I tend to think of it as a technicality rather than a real difference; "in principle", if the only random part of the algorithm is to make the input random to avoid (assumedly improbable) worst cases, I don't see much difference from considering all inputs of a given size as equally likely and calculating average-case performance of a non-R algorithm.
 
   Again, I'm not very familiar with the subject. Do you have any counterexample RA's to what I said, i.e. ones which use randomness neither to enforce average running times nor to guess at relevant subsets of information? I would be interested to hear about the idea behind them. Anyways, thanks for the pointers! :)
 
   One last thing:
 
Quote:
the running time of a randomized algorithm depends only on how lucky you were in the random choices you made while running the algorithm, not on the input data

 
You mean this for a given input size, right?

"I always wondered about the meaning of life. So I looked it up in the dictionary under 'L' and there it was --- the meaning of life. It was not what I expected." (Dogbert)
TimMann (Senior Riddler)
Re: Randomized Algorithms
« Reply #4 on: Oct 17th, 2002, 1:43am »

I'm not sure what to say to clarify my post, because you seem to understand the details OK. Maybe you mistook the main thrust of it, though. My post wasn't designed to disagree with yours and get into a debate. I mostly wanted to respond to William's original questions. Your post did a good job of explaining some things but got others a bit wrong, so rather than starting from scratch, I started from your post, corrected a few points, and filled in some others.
 
My main point was about how and why randomized algorithms are assessed by their average-case performance instead of their worst-case performance, an important point that I had the impression William had not picked up on. (If so, I blame it on his professor getting ahead of himself, explaining how the pretty algorithms work before making sure the students understand how the algorithms are assessed. It wouldn't be the first time that's happened!)
 
The main thing I wanted to correct in your post was the part I quoted, where a couple of things were wrong. You wrote "Randomized algorithms do NOT give mathematical certainty of their correctness". I'm not sure what you actually meant by that, but it sounded like you were saying that all randomized algorithms have a chance of giving the wrong answer. So I pointed out that although there are some randomized algorithms that give an answer that is correct only with some known probability rather than with certainty, not all (not even most) are like that.
 
You also wrote "they do not take into account ALL the information that is contained in an input string, just try to guess at a small relevant set. Hence the smaller running time."  This is also an incorrect generalization. Yes, there are some randomized algorithms that have this sort of structure. I suspect you were thinking they all did because that structure is similar to nondeterministic algorithms: guess an answer out of a very large search space and check it much more rapidly than you could search the entire space, thus getting an enormous speedup over brute force. This works for nondeterministic Turing machines because they can guess all answers at once; it works for practical randomized algorithms if the search space is such that you have a high probability of making a good guess quickly. Randomized primality testing is something like that, and I gather from your post that randomized 2SAT is even more so. Hmm, I suppose simulated annealing falls into this broad category too.
 
But as I pointed out, that is not the only kind of randomized algorithm that exists. There's no sense in trying to salvage your initial generalization by denigrating other uses of randomization as "just a technicality." The idea of shuffling the input or making some random choices during the running of the algorithm, in order to reduce worst case behavior to a known, very low probability regardless of the input data, is an important one too. It may not give such a dramatic improvement in running time, but it's much needed in some applications.
 
I'm most certainly not an expert on randomized algorithms. I studied a few when I was doing my Ph.D. back in the 1980's (in operating systems, not in algorithms!), and I've bumped into a few here and there in my later career. William will probably know a lot more randomized algorithms than I do by the time he finishes the class he's taking! Perhaps he'll be able to tell you whether there are other basic ideas in randomized algorithms besides the one you knew about (which we might call "cut and try") and the other one I pointed out (shuffle the input and/or make random choices while running to avoid worst-case behavior). Even those two basic kinds of idea are enough to lead to a lot of interesting algorithms.
 
(p.s. To answer the "one last thing", yes, I meant for a fixed input size.)
 
(p.p.s. Come to think of it, I know of another use for randomization, but I don't understand it well enough to explain it properly. When trying to detect very weak electronic signals, it's actually possible to do better by adding a small amount of white noise into the signal. For instance, you should be able to hear a very small, quiet sound slightly better if there is also a little bit of white noise in the room (not too much), than if it's absolutely quiet except for the small sound. Someone once explained this to me well enough that I felt like I understood it, but now I don't anymore. I'm not sure whether this phenomenon applies in a useful way to computer hearing and vision algorithms, or if it's only really significant for physical sensing devices.)

http://tim-mann.org/
James Fingas (Uberpuzzler)
Re: Randomized Algorithms
« Reply #5 on: Oct 17th, 2002, 7:12am »

Tim,
 
Regarding the "adding-a-bit-of-white-noise" idea: this intrigued me. Analyzing only in the frequency domain, there is no reason why this should help, but in the amplitude domain, the noise may boost signals over a detection threshold, making them easier to perceive, even if a bit messier.
 
Very interesting!
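 
A toy numeric sketch of that amplitude-domain picture (purely illustrative numbers, nothing to do with a real detector): a sub-threshold signal is never detected on its own, but with a little added noise the detection rate becomes nonzero and tracks the signal level.
 
Code:

import random

rng = random.Random(1)
threshold = 1.0
signal = 0.8          # below threshold: with no noise it is never detected

def detection_rate(noise_std, samples=100000):
    # fraction of samples where signal + noise crosses the threshold
    return sum(signal + rng.gauss(0, noise_std) > threshold
               for _ in range(samples)) / samples

print(detection_rate(0.0))   # 0.0: without noise, nothing ever crosses
print(detection_rate(0.1))   # a few percent of samples cross the threshold
print(detection_rate(0.3))   # more crossings, but also a noisier estimate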

Doc, I'm addicted to advice! What should I do?
TimMann (Senior Riddler)
Re: Randomized Algorithms
« Reply #6 on: Oct 18th, 2002, 1:01am »

Ah, after remembering that I saw something about this in Science News, I've found a reference on their web site. The technical term for the phenomenon is "stochastic resonance." One article about it is at http://www.sciencenews.org/sn_arc98/2_21_98/fob1.htm.
 
A Google search on "stochastic resonance" turns up lots of hits. A short definition is given at http://www.umbrars.com/sr/introsr.htm. Another good link is http://www.sscsd.hpc.mil/sr/SRintro/SRintro.html. Doesn't look like there is likely to be applicability to computer algorithms other than the simulation of these physical processes, but it's interesting.

http://tim-mann.org/
S. Owen (Full Member)
Re: Randomized Algorithms
« Reply #7 on: Oct 18th, 2002, 8:11am »

on Oct 17th, 2002, 1:43am, TimMann wrote:
My main point was about how and why randomized algorithms are assessed by their average-case performance instead of their worst-case performance

That's a great insight: randomized algorithms can offer great average-case performance, possibly at the expense of worse worst-case performance. As it turns out, that's often a worthwhile tradeoff to make (since one can usually bound the worst case to taste), and therein lies the beauty of randomized algorithms.
william wu (wu::riddles Administrator)
Re: Randomized Algorithms
« Reply #8 on: Dec 31st, 2002, 9:58am »

Thanks for the corrections and valuable insights. The course is over, and I think I can mention at least one more type of randomized algorithm, but I want to think about it more carefully before I post it. In the meantime, regarding this:
 
on Oct 17th, 2002, 7:12am, James Fingas wrote:

Regarding the "adding-a-bit-of-white-noise", this intrigued me. Analyzing only in the frequency domain, there is no reason why this should help, but in the amplitude domain, you may help boost signals over a threshold, making them easier to perceive

 
An everyday thing that uses this concept is dithering. The sender adds white noise intentionally, and the receiver subtracts the exact same white noise. This technique can mitigate the undesirable consequences of having a coarse quantizer (levels are few and far between). Dithering was used in laser printers when the grayscale quantization was very coarse, resulting in abrupt jumps in darkness; dithering allowed gradual transitions in darkness, such as for a sunset. I believe it's also used when sending images across the net. Actually I didn't encounter this concept in the RA course; my summer research partner came across it while reading a digital communications book.
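 
Here's a tiny Python sketch of subtractive dither (toy numbers, just to show the idea): both ends know the noise, the quantizer is coarse, and the staircase error turns into something that averages out to the original ramp.
 
Code:

import random

STEP = 1.0                           # coarse quantizer: multiples of 1.0 only

def quantize(x):
    return STEP * round(x / STEP)

def dithered(x, noise):
    # sender adds the noise before quantizing; the receiver subtracts the
    # identical noise afterwards (both share the same pseudorandom stream)
    return quantize(x + noise) - noise

rng = random.Random(0)
ramp = [i / 50 for i in range(100)]                    # a slow 0..2 ramp
plain = [quantize(x) for x in ramp]                    # abrupt jumps: 0, 1, 2
dith = [dithered(x, rng.uniform(-STEP / 2, STEP / 2)) for x in ramp]
# Locally averaged, dith follows the ramp smoothly, while plain sits on a
# staircase; that's the gradual-transition effect.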
 
The tradeoff S. Owen mentioned is indeed fundamental to randomized algorithms. For some RAs, in the worst case you could just be plain wrong by returning a false positive. An example is a randomized algorithm for verifying that two polynomials are identical (see www.cs.berkeley.edu/~dror/cs174/note8.ps if you're interested; the algo uses the Schwartz-Zippel theorem). However, you design the algorithm such that incorrect answers occur with arbitrarily low probability.
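 
A sketch of that kind of identity test in Python (treating the polynomials as black boxes; the degree bound and the size of the sample set are what control the false-positive probability):
 
Code:

import random

def probably_identical(p, q, degree_bound, trials=20):
    # If p != q, then p - q is a nonzero polynomial of degree <= degree_bound,
    # so a random point from a set of size 100*degree_bound + 1 hits a root
    # with probability < 1/100 per trial (the Schwartz-Zippel idea).
    sample_size = 100 * degree_bound + 1
    for _ in range(trials):
        r = random.randrange(sample_size)
        if p(r) != q(r):
            return False      # a disagreement is conclusive: not identical
    return True               # "identical", wrong with probability < 0.01**trials

# e.g. (x + 1)(x - 1) versus x^2 - 1: always reported identical
print(probably_identical(lambda x: (x + 1) * (x - 1), lambda x: x * x - 1, 2))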
 
With other RAs, in the worst case you might just run forever! The simple randomized 2-satisfiability algorithm is an example of this, and I'll repeat it here for convenience. The 2SAT problem: given a conjunctive normal form boolean expression in which every clause has two literals, such as
 
(x1 ∨ ¬x2) ∧ (¬x1 ∨ x3) ∧ (x2 ∨ x3),
 
find a truth assignment that makes the whole expression true. Note that if the whole expression is true, each of the individual OR clauses must be true. Here's an algorithm that solves that problem:
 
Code:

choose an initial truth assignment uniformly at random
// so for n variables, any particular assignment is chosen with probability 2^(-n)

while there exists an unsatisfied clause
     choose some unsatisfied clause C uniformly at random
     pick one of the two variables in C uniformly at random, and flip its truth value

 
Using linear recurrences, one can show that if there exists a solution, the average runtime is n². But there's a tiny probability that every other iteration will undo the progress of the previous iteration, such that there always exists an unsatisfied clause, and thus we're stuck in that while loop forever. Perhaps an easier way to see this is to think of this algorithm as a random walk starting from one end of a line graph with n vertices. At a given iteration, if there are k out of n variables in the current truth assignment which match the solution's assignment, then that's just like being at vertex k on the line graph. Clearly there's a chance that our random walk will never reach the other end of the line, where all n variables match the correct truth assignment. But it's not very likely.
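 
Here's the same walk as runnable Python (my own translation, with one practical change: the loop is capped at roughly 100·n² flips instead of running until it succeeds, so a horribly unlucky run returns None rather than spinning forever):
 
Code:

import random

def random_walk_2sat(n_vars, clauses, max_flips=None):
    # clauses: list of 2-tuples of nonzero ints; literal +i means variable i
    # is true, -i means variable i is false.
    if max_flips is None:
        max_flips = 100 * n_vars * n_vars        # ~100 * n^2 flips
    # initial assignment chosen uniformly at random (each with prob 2^(-n))
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}

    def satisfied(lit):
        return assign[abs(lit)] == (lit > 0)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not (satisfied(c[0]) or satisfied(c[1]))]
        if not unsat:
            return assign                        # all clauses satisfied
        a, b = random.choice(unsat)              # random unsatisfied clause
        lit = random.choice((a, b))              # one of its two variables
        assign[abs(lit)] = not assign[abs(lit)]  # flip it
    return None                                  # gave up (or unsatisfiable)

# e.g. the formula above: (x1 or not x2) and (not x1 or x3) and (x2 or x3)
print(random_walk_2sat(3, [(1, -2), (-1, 3), (2, 3)]))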
 
There's a nonrandomized algorithm which achieves the same running time in the worst case, but it's not as simple.
« Last Edit: Dec 31st, 2002, 10:04am by william wu »


[ wu ] : http://wuriddles.com / http://forums.wuriddles.com