## Tuesday, February 2, 2010

### Analyzing Riffle Shuffles

Hello Everybody,

I am writing this narrative to describe some properties of riffle shuffles that we analysed recently in our Markov Chains class. It is in three parts, separated by rows of asterisks. If you want to skip to the technical material, read the last part, the one that follows the final row of asterisks.

"No, no, no, it's not possible." I checked the board again for the fourth (or fifth) time. But there it was. Sitting on the board was a masterpiece of reasoning which produced that result.

I was still in the strong-denial stage. The three stages of digesting (rather, assimilating) a hard-to-believe truth, one that someone in some corner of the world at some time pointed out, tumbled through my mind.

The result was so awe-inspiring that I immediately decided I was going to blog about it. I told Prof Randall that I was about to blog on the topic. She gave a Dumbledoric smile. But I knew I had undertaken a responsibility. I started searching my mind for the way the lecture began. "No, this is not where I want to start," I told myself. After some pondering, I knew where to start. Enjoy the rest of the post.

**********************************************************************************

It started rather innocuously. Prof Randall was teaching something about hypercubes, and I drifted. I suddenly recalled a movie called Hypercube and my attention was partly swayed. When I came back to myself, I had missed a few sentences. "Damn, not again," I cursed myself. I cannot understand why the word hypercube triggers imagery of sci-fi movies in my brain even during lectures.

But it was okay. I was able to follow the other parts of the lecture, including the hypercube part itself. A few moments later, a doubt grabbed me. I wanted to get it clarified and, lo, I forgot the doubt. This has happened to me before as well. But today was a different day: two hated things happened in the same lecture on the same day. The irony is that I cannot even call it ironic, as it was too moronic.

As it were, I was sitting there, partly distracted by the latest turn of events, when a voice from behind (a question that Prateek Bhakta asked) brought me to my senses and made me attend to the rest of the lecture, which finished with a wonderful demonstration of some curious properties of riffle shuffles. That demonstration is the heart of today's post and forms its last part.

**********************************************************************************

So, what in Merlin's name is a Riffle Shuffle? Well, it is a way to shuffle a deck of cards which, as we will shortly see, randomizes the order of the cards in the deck very quickly. Our analysis here uses parts of the lecture but also draws heavily from other sources, which I will mention towards the end of this post.

Let us begin the tour with a fairly simple question. You have a deck of cards labelled $1, 2, \ldots, n$ from top to bottom on a table. Let us call this initial ordering of cards the identity permutation.

When can you claim that your shuffles have randomized the order of the cards in the deck? When indeed?

Put simply, this question asks "how many shuffles suffice?"

At a certain level you might feel that this question asks you to define randomness, for which we do not have a very comfortable mathematical notion. But still, in some sense, we can answer this question. We rely on what we call the total variation distance to save the day.

Informally, the idea is to estimate the "distance" between two probability distributions over all the possible permutations of cards in the deck.
The total variation distance between two probability distributions $P$ and $Q$ is given by $||P - Q|| = \frac{1}{2} \cdot \sum_{\pi \in S_n} \vert P(\pi) - Q(\pi) \vert$, where $\pi$ is one permutation and $S_n$ is the group of all possible permutations.
You take one of these distributions to be the uniform distribution and the other to be the distribution you generate after $k$ shuffles of your scheme. You hope to develop a scheme for which $k$ is not inordinately large (of course, quantities of the order of $52!$ are an abomination, but so are $200$, $100$, or even $50$ shuffles).
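As a quick sanity check on this definition, here is a small Python sketch (the function name `total_variation` is mine); it evaluates the distance exactly over $S_3$ using rationals:

```python
from fractions import Fraction
from itertools import permutations

def total_variation(p, q):
    # Total variation distance (1/2) * sum_x |P(x) - Q(x)| between two
    # distributions given as dicts mapping outcomes to probabilities.
    support = set(p) | set(q)
    return sum(abs(p.get(x, 0) - q.get(x, 0)) for x in support) / 2

# Sanity checks on S_3 (6 permutations of 3 cards).
uniform = {pi: Fraction(1, 6) for pi in permutations((1, 2, 3))}
point = {(1, 2, 3): Fraction(1)}          # a deck that was never shuffled
print(total_variation(uniform, uniform))  # 0
print(total_variation(point, uniform))    # 5/6
```

The unshuffled deck sits at distance $\frac{5}{6}$ from uniform, which is as far as any single permutation can be for $n = 3$.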

Hmmm... what should we do? What indeed? The first idea might be to invoke what we call the coupon collector. That problem involves throwing an $n$-sided die and asks you to estimate the expected number of throws after which you can claim that every face has turned up at least once. The answer is obtained by summing the expectations of geometric random variables.
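That sum of geometric expectations can be written out directly. A minimal sketch (the function name is mine), using exact rationals:

```python
from fractions import Fraction

def expected_throws(n):
    # Coupon collector: once i faces have been seen, the wait for a new
    # face is geometric with success probability (n - i)/n, so the total
    # expectation is sum_{i=0}^{n-1} n/(n - i) = n * H_n.
    return sum(Fraction(n, n - i) for i in range(n))

print(expected_throws(6), "=", float(expected_throws(6)))  # 147/10 = 14.7
```

So a standard six-sided die takes 14.7 throws on average before all faces have appeared.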

Here, in card shuffling, the analogous idea is to keep removing the top card and inserting it at a uniformly random position in the deck. The stopping rule is to declare yourself done once you have placed the $n^{th}$ card (the original bottom card) someplace in the deck.

The idea here is again intuitively obvious. There are two things you will need a proof for: one, that this really achieves small total variation distance from the uniform distribution; two, that it does so within the stated time bound.

That the first holds can be understood this way: the order of the cards below the original bottom card is always truly random. When $i$ cards are already below the original bottom card, it takes $\frac{n}{i+1}$ shuffles in expectation for the next card to land in the desired position (i.e., anywhere below the original bottom card). Summing over $i$ explains the roughly $n \cdot \log(n)$ moves answer that you obtain if you do shuffling this way.
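Summing those expectations is easy to do exactly. A small sketch (function name mine) that evaluates $\sum_{i=0}^{n-1} \frac{n}{i+1}$ for a real deck:

```python
from fractions import Fraction

def top_to_random_expected(n):
    # With i cards already below the original bottom card, the top card
    # lands among them with probability (i + 1)/n, so the expected wait
    # is n/(i + 1); summing over i = 0, ..., n - 1 gives n * H_n.
    return sum(Fraction(n, i + 1) for i in range(n))

print(float(top_to_random_expected(52)))  # roughly 236 moves for a real deck
```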

But you are not happy with this. With $n = 52$ cards, this works out to more than $200$ shuffles. But we are not out of ideas yet. We will arrive at our target in fewer than $10$ shuffles. Below we show how.

This technique is called the Riffle Shuffle (henceforth RS). It involves cutting the deck of cards into two halves and then interleaving them in some fashion (so that you obtain at most two interleaved increasing sequences), and repeating this process quite a few times. If you label all the cards in the left hand with a $0$ and all those in the right hand with a $1$, the two increasing sequences are the $0$-identified cards and the $1$-identified cards. This method really does arrive at a distribution close to the uniform pretty fast.
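One standard way to make "interleaving in some fashion" precise is the Gilbert-Shannon-Reeds model, which is the model that produces the $Rif$ distribution discussed below: cut at a $Binomial(n, \frac{1}{2})$ position, then repeatedly drop a card from one of the two halves with probability proportional to how many cards that half has left. A sketch under that model (the function name and `rng` parameter are mine):

```python
import random

def riffle_shuffle(deck, rng=random):
    # Gilbert-Shannon-Reeds riffle: cut at a Binomial(n, 1/2) position,
    # then interleave, taking the next card from a half with probability
    # proportional to the number of cards it has left.
    n = len(deck)
    t = sum(rng.randrange(2) for _ in range(n))  # Binomial(n, 1/2) cut
    left, right = deck[:t], deck[t:]
    out, i, j = [], 0, 0
    while i < len(left) or j < len(right):
        a, b = len(left) - i, len(right) - j
        if rng.randrange(a + b) < a:             # pick left with prob a/(a+b)
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out

shuffled = riffle_shuffle(list(range(1, 53)))
print(shuffled[:10])
```

By construction the output is a merge of two increasing runs, exactly as described above.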

Let's ask a simple question first. How many distinct RSs can you do on a deck of $n$ cards? It turns out that if you have $t$ cards in one of your hands, the number of possible non-identity RSs is $n \choose t$ $- 1$ (why?). Summing over all $t$'s and adding the identity back once, you get $2^n - n$ as the total number of possible RSs.
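For small $n$, the $2^n - n$ count can be verified by brute force: enumerate every cut $t$ and every choice of final positions for the top $t$ cards (both halves keep their internal order), and count the distinct outcomes. A sketch (function name mine):

```python
from itertools import combinations

def distinct_riffles(n):
    # Cut off the top t cards, choose which t of the n final positions
    # they occupy, keep both halves in order; collect distinct results.
    seen = set()
    deck = tuple(range(1, n + 1))
    for t in range(n + 1):
        for positions in combinations(range(n), t):
            rest = [i for i in range(n) if i not in positions]
            result = [0] * n
            for card, pos in zip(deck[:t], positions):
                result[pos] = card
            for card, pos in zip(deck[t:], rest):
                result[pos] = card
            seen.add(tuple(result))
    return len(seen)

for n in range(1, 8):
    print(n, distinct_riffles(n), 2 ** n - n)  # the two counts agree
```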

Let's also spare a look at the probability distribution on $S_n$ induced by an RS. It is fairly simple to work out, and I leave it to the reader to think about why it comes out the way it does:

$Rif(\pi) = \frac{n+1}{2^n}$ if $\pi$ is the identity permutation.
$Rif(\pi) = \frac{1}{2^n}$ if $\pi$ consists of exactly two interleaved increasing sequences.
$Rif(\pi) = 0$ otherwise.
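These probabilities can be checked by brute force for small $n$: each of the $2^n$ (cut, interleaving) pairs is equally likely, so tally the resulting permutations. For $n = 4$, the identity should get $\frac{5}{16}$ and every other reachable permutation $\frac{1}{16}$. A sketch (names are mine):

```python
from collections import Counter
from fractions import Fraction
from itertools import combinations

def riffle_distribution(n):
    # Exact single-riffle distribution on S_n: every (cut t, choice of
    # final positions for the top t cards) pair has probability 1/2^n.
    counts = Counter()
    deck = tuple(range(1, n + 1))
    for t in range(n + 1):
        for positions in combinations(range(n), t):
            rest = [i for i in range(n) if i not in positions]
            result = [0] * n
            for card, pos in zip(deck[:t], positions):
                result[pos] = card
            for card, pos in zip(deck[t:], rest):
                result[pos] = card
            counts[tuple(result)] += 1
    return {pi: Fraction(c, 2 ** n) for pi, c in counts.items()}

dist = riffle_distribution(4)
print(dist[(1, 2, 3, 4)])          # 5/16, i.e. (n+1)/2^n for n = 4
print(sorted(set(dist.values())))  # only the values 1/16 and 5/16 occur
```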

We are doing well. Now it's time for the leap. "No, no, no, it's not possible." I checked the board again.

The thing is, you can study what we call the inverse riffle shuffle (henceforth IRS) to analyze the number of riffle shuffles needed to randomize the deck. Both have the same mixing time! And that is amazing. Too amazing.

So, let me define the IRS and tell you about its curious properties, which might help in understanding this beautiful concept. (The material below draws largely from the wonderful Proofs from THE BOOK by Martin Aigner and Günter Ziegler.)

An inverse riffle shuffle takes a subset of the cards in the deck, removes them from the deck, and places them on top of the remaining cards of the deck, while maintaining the relative order in both parts of the deck. Such a move is determined by the subset of the cards: take all subsets with equal probability. Equivalently, assign a label $0$ or $1$ to each card, randomly and independently with probability $\frac{1}{2}$ each, and move the cards labelled $0$ to the top.
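This labelling description translates directly into code. A sketch (names are mine) of one inverse riffle shuffle, together with the exact distribution it induces when all $2^n$ label strings are equally likely:

```python
import random
from collections import Counter
from fractions import Fraction
from itertools import product

def inverse_riffle(deck, rng=random):
    # Label each card 0 or 1 uniformly at random, then move the 0-cards
    # to the top, preserving relative order within each label class.
    labels = [rng.randrange(2) for _ in deck]
    zeros = [c for c, b in zip(deck, labels) if b == 0]
    ones = [c for c, b in zip(deck, labels) if b == 1]
    return zeros + ones

def irs_distribution(n):
    # Exact IRS distribution on S_n: all 2^n label strings equally likely.
    counts = Counter()
    deck = tuple(range(1, n + 1))
    for labels in product((0, 1), repeat=n):
        zeros = tuple(c for c, b in zip(deck, labels) if b == 0)
        ones = tuple(c for c, b in zip(deck, labels) if b == 1)
        counts[zeros + ones] += 1
    return {pi: Fraction(c, 2 ** n) for pi, c in counts.items()}

d = irs_distribution(4)
print(d[(1, 2, 3, 4)])  # 5/16: exactly the labellings 0...01...1 fix the deck
```

Note how the identity again has probability $\frac{n+1}{2^n}$, matching the $Rif$ distribution above.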

That the IRS assigns the same probability to the identity as the RS does is easy to check: you get the identity permutation precisely when all the $0$-cards are already above the $1$-cards, which happens for exactly $n+1$ of the $2^n$ labellings.

So analyzing IRSs is good enough! Also, as informally noted above, the inverse shuffles give a probability distribution $\overline{Rif}$ such that $\overline{Rif}(\pi) = Rif(\pi^{-1})$. Further, we know that the uniform distribution is pretty cool in that it satisfies $U(\pi) = U(\pi^{-1})$.

So the following two quantities are the same:

$||Rif - U|| = ||\overline{Rif} - U||$.

Finally, we note when we should stop the iterations of the IRS so that we can state with confidence that the deck is randomized. Suppose you can do this after $k$ shufflings. Each shuffling, the IRS way, is driven by the bit that you assign to each card. So you can proceed by assuming that you assign a random $k$-bit string to each card up front; a card's fate is then decided by the string it started out with. Two cards might still stay in the same relative order if they started out with the same bit string. This can be avoided if all cards get distinct bit strings. So the problem now reduces to the Birthday Paradox: how large must $k$ be before $n$ random $k$-bit strings are likely to be all distinct? As I said, this was where I went through those stages.
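The birthday-problem quantity here is the probability that $n$ independent uniform $k$-bit strings are all distinct, namely $\prod_{i=0}^{n-1} \frac{2^k - i}{2^k}$. A sketch (function name mine) evaluating it exactly for a real deck:

```python
from fractions import Fraction

def all_distinct_prob(n, k):
    # Probability that n independent uniform k-bit strings are all
    # distinct: product over i of (2^k - i)/2^k.
    p = Fraction(1)
    for i in range(n):
        p *= Fraction(2 ** k - i, 2 ** k)
    return p

for k in (8, 10, 12, 14):
    print(k, float(all_distinct_prob(52, k)))
```

The probability climbs steadily with $k$; once it is close to $1$, every pair of cards has been sorted apart and the deck is randomized.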

I would invite you to show why. For the benefit of others, you can leave the proof as a comment.

Since it's 6 o'clock in the morning and I want to catch some rest, I will not do that now.

Have a Good Day
-Akash

### Some fundamentals of Markov Chains

Hello All,

This is a placeholder post. I will replace it with the contents that should be here soon.

Till then, you might want to see the $7^{th}$ chapter of the Mitzenmacher-Upfal book.