Thursday, June 3, 2010

Farewell, Mr Gardner

Martin Gardner, the first Mathemagician, the first editor of the column called Mathematical Games in Scientific American, and the author of over 70 thought-provoking books on topics mostly related to mathematics, physics, logic and philosophy, passed away on May 22 this year.

Given that he was 95, it had to happen sooner or later, but his passing still came as a blow to me. I am a huge admirer of Gardner's books. I love trying out new puzzles, and to me his books were a godsend at a time when all I could see around me were books by George Summers, Shakuntala Devi and Ravi Narula. I did not find those books very rewarding: I could solve most of their problems when I tried them, and most of the time I did not feel like trying them at all because they were too boring -- to me, they looked like problems of the grocery-bill variety. I was of the belief that good puzzles ought to have some "sort of life" in them. Essentially, what I was looking for were questions that required an Aha! moment, as Mr. Gardner put it.

Then I ran across several of his books - Aha! Insight, Aha! Gotcha, Puzzles and Diversions, Knotted Doughnuts, Penrose Tiles to Trapdoor Ciphers, The Colossal Book of Mathematics and a few others. These books made mathematics look like a sport. I am not saying they made mathematics look easy -- they made mathematics look fun. I, too, joined the group of people whose lives were affected by Gardner. As Conway, Berlekamp and Guy put it in their Winning Ways for Your Mathematical Plays, Gardner truly brought more math to millions than did anyone else.

If you have heard of some good puzzle that you think requires an Aha! moment, chances are that Martin Gardner first popularized it in one of his columns or books.

His passing certainly shocked me, but Martin Gardner has left an undying legacy behind him. You should definitely seek out his books if you have not read them so far. I promise it will be an Aha! experience.

I salute you, Mr. Gardner. Farewell, Sir.

Tuesday, February 2, 2010

Analyzing Riffle Shuffles

Hello Everybody,

I am writing this narrative describing the properties of the riffle shuffle that we analysed in the Markov Chains class recently. It's in three parts, separated by the ******* marks. If you want to skip to the technical material, read the last part - the one that follows the last row of stars.

"No, no, no, it's not possible." I checked the board again for the fourth (or fifth) time. But there it was, sitting on the board: a masterpiece of reasoning which produced that result.

I was still in the strong defiance stage. The three stages of digesting (rather, assimilating) a hard-to-believe truth that someone in some corner of the world once pointed out were marching through my mind:

Strong Defiance, Mild Reluctance....finally Admittance.

The result was so awe-inspiring that I immediately decided I was going to blog about it. I told Prof Randall that I was about to blog on the topic. She gave a Dumbledoric smile. But I knew I had undertaken some responsibility. I started searching my mind for the way the lecture began. "No, this is not where I want to start," I told myself. After some pondering, I knew where to start. Enjoy the rest of the post.

*******
It started rather innocuously. Prof Randall was teaching something about hypercubes - and I drifted. I suddenly recalled a movie called Hypercube and my attention was partly swayed. When I came back to myself, I had missed a few sentences. "Damn, not again," I cursed myself. I cannot understand why the word hypercube triggers an imagery of sci-fi movies in my brain, even in lectures.

But it was okay. I was able to follow the other parts of the lecture, including the hypercube part itself. A few moments later, a doubt grabbed me. I wanted to get it clarified and lo - I forgot what the doubt was. This has happened to me before as well. But today was a different day: two hated things happened in the same lecture. The irony is that I cannot even call it ironic, as it was too moronic.

As it were, I was sitting there, partly distracted by the latest turn of events, when a voice from behind (a question that Prateek Bhakta asked) brought me to my senses and made me attend to the rest of the lecture, which finished with a wonderful demonstration of some curious properties of riffle shuffles. That demonstration is the heart of today's post and forms its last part.

*******
So, what in Merlin's name is a riffle shuffle? Well, it is a way of shuffling a deck of cards which, as we will shortly see, randomizes the order of the cards in the deck very quickly. Our analysis here uses parts of the lecture but also draws heavily from several other sources, which I will mention towards the end of this post.

Let us begin the tour with a fairly simple question. You have a deck of cards labelled $1, 2, \ldots, n$ from top to bottom on a table. Let us call this initial ordering of the cards the identity permutation.

When can you claim that your shuffles have randomized the order of the cards in the deck? When indeed?

Put simply, this question asks "how many shuffles suffice?"

At a certain level, you might feel that this question asks you to define randomness, for which we do not have a very comfortable mathematical notion. But still, in some sense, we can answer it. We rely on what we call the total variation distance to save the day.

Informally, the idea is to estimate the "distance" between two probability distributions over all the possible permutations of the cards in the deck.
The total variation distance between two probability distributions $P$ and $Q$ is given by $||P - Q|| = \frac{1}{2} \cdot \sum_{\pi \in S_n} \vert P(\pi) - Q(\pi) \vert$, where $\pi$ ranges over permutations and $S_n$ is the group of all possible permutations.
You take one of these distributions to be the uniform distribution and the other to be the distribution you generate after $k$ shuffles of your scheme. You hope to develop a scheme for which $k$ is not inordinately large (of course, quantities of the order of $52!$ are an abomination, but so are $200$, $100$ or even $50$ shuffles).
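To make the definition concrete, here is a small sketch of my own (not from the lecture; the function name is made up) that computes the total variation distance exactly for a tiny deck, comparing the uniform distribution against a deck that has not been shuffled at all:

```python
from fractions import Fraction
from itertools import permutations

def total_variation(p, q, n):
    """||P - Q|| = (1/2) * sum over pi in S_n of |P(pi) - Q(pi)|."""
    zero = Fraction(0)
    return sum(abs(p.get(s, zero) - q.get(s, zero))
               for s in permutations(range(n))) / 2

n = 3
uniform = {s: Fraction(1, 6) for s in permutations(range(n))}
unshuffled = {tuple(range(n)): Fraction(1)}  # all mass on the identity
print(total_variation(unshuffled, uniform, n))  # 5/6
```

An unshuffled deck is nearly as far from uniform as possible; a good shuffling scheme drives this number towards $0$ as $k$ grows.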

Hmmmm....what should we do? What indeed? The first idea might be to invoke what we call the coupon collector problem. It involves something like throwing an $n$-sided die and asks you to estimate the expected number of throws after which you can claim that all the faces have turned up at least once. The answer can be obtained by summing the expectations of geometric random variables.

Here, in card shuffling, the idea is to keep removing the top card and inserting it at a random place in the deck. The stopping rule is to declare that you are done when you insert the $n^{th}$ card (the original bottom card) back into the deck.

The intuition here is again fairly obvious, but there are two things you need a proof for: one, that this really achieves small total variation distance from the uniform distribution; two, that it does so within the stated time bound.

That the first holds can be understood this way: the order of the cards below the original bottom card is truly random at all times. When $i$ cards are already below the original bottom card, it takes $\frac{n}{i+1}$ moves in expectation for the next card to land in the desired position (i.e., anywhere below that card). Summing over $i$ explains the roughly $n \log n$ moves you need if you shuffle this way.
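As a quick numeric check (my own sketch, not part of the lecture), summing these expectations gives $n$ times the $n$-th harmonic number, roughly $n \ln n$:

```python
def expected_top_to_random_moves(n):
    # with i cards already below the original bottom card, the next card
    # lands below it with probability (i+1)/n: a wait of n/(i+1) in expectation
    return sum(n / (i + 1) for i in range(n))

print(round(expected_top_to_random_moves(52)))  # about 236 moves for a real deck
```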

But you are not happy with this. With $n = 52$ cards, this works out to more than $200$ moves. We are not out of ideas yet, though. We will arrive at our target in fewer than 10 shuffles. Below we show how.

This technique is called the riffle shuffle (henceforth RS). It involves cutting the deck of cards into two packets and then interleaving them in some fashion (so that you obtain at most two interleaved increasing sequences), and repeating this process a few times. If you label the cards in the left hand all with a $0$ and those in the right hand all with a $1$, the two increasing sequences are the $0$-identified cards and the $1$-identified cards. This method really does arrive at a distribution close to uniform pretty fast.

Let's ask a simple question first. How many distinct RSs can you perform on a deck of $n$ cards? It turns out that if you have $t$ cards in one of your hands, the number of possible RSs other than the identity is ${n \choose t} - 1$ (why?). Adding over all $t$'s, and counting the identity once, you get $2^n - n$ as the total number of possible RSs.
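You can verify the $2^n - n$ count by brute force for small decks. This is a sketch of mine; the enumeration of interleavings uses the standard trick of choosing which positions the left packet occupies:

```python
from itertools import combinations

def riffle_outcomes(n):
    """All distinct orderings reachable by one riffle shuffle of the deck 0..n-1."""
    deck = tuple(range(n))
    outcomes = set()
    for t in range(n + 1):                     # t = size of the left packet
        left, right = deck[:t], deck[t:]
        for pos in combinations(range(n), t):  # slots occupied by the left packet
            slots = set(pos)
            perm, li, ri = [], 0, 0
            for i in range(n):
                if i in slots:
                    perm.append(left[li]); li += 1
                else:
                    perm.append(right[ri]); ri += 1
            outcomes.add(tuple(perm))
    return outcomes

for n in range(1, 9):
    assert len(riffle_outcomes(n)) == 2 ** n - n
```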

Let's also spare a look at the probability distribution on $S_n$ that one RS induces. It is fairly simple to work out, and I leave it to the reader to see why it comes out the way it does:

$Rif(\pi) = \frac{n+1}{2^n}$ if $\pi$ is the identity permutation.
$Rif(\pi) = \frac{1}{2^n}$ if $\pi$ consists of exactly two increasing sequences.
$Rif(\pi) = 0$ otherwise.
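Here is a sketch (mine) that checks this distribution exactly for a small deck, under the standard model in which the cut size $t$ is Binomial$(n, \frac{1}{2})$ and the interleaving is uniform, so every (cut, interleaving) pair has probability $1/2^n$:

```python
from collections import defaultdict
from fractions import Fraction
from itertools import combinations

def riffle_distribution(n):
    """Exact outcome distribution of one riffle shuffle of the deck 0..n-1."""
    deck = tuple(range(n))
    dist = defaultdict(Fraction)
    for t in range(n + 1):                     # P(cut = t) = C(n,t)/2^n
        left, right = deck[:t], deck[t:]
        for pos in combinations(range(n), t):  # each interleaving: 1/C(n,t)
            slots = set(pos)
            perm, li, ri = [], 0, 0
            for i in range(n):
                if i in slots:
                    perm.append(left[li]); li += 1
                else:
                    perm.append(right[ri]); ri += 1
            dist[tuple(perm)] += Fraction(1, 2 ** n)
    return dist

n = 5
dist = riffle_distribution(n)
identity = tuple(range(n))
assert dist[identity] == Fraction(n + 1, 2 ** n)
assert all(p == Fraction(1, 2 ** n) for s, p in dist.items() if s != identity)
```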

We are doing well. Now it's time for the leap. "No, no, no, it's not possible" - I checked the board again.

The thing is, you can study what we call the inverse riffle shuffle (henceforth IRS) to analyze the number of riffle shuffles needed to randomize the deck. Both have the same mixing time! And that is amazing. Too amazing.

So, let me define the IRS and tell you about the curious properties that make this beautiful concept work. (The material below largely draws from the wonderful Proofs from THE BOOK by Martin Aigner and Günter Ziegler.)

An inverse shuffle takes a subset of the cards in the deck, removes them from the deck, and places them on top of the remaining cards - while maintaining the relative order within both parts of the deck. Such a move is determined by the chosen subset of cards: take all subsets with equal probability. Equivalently, assign a label $0$ or $1$ to each card, randomly and independently with probability $\frac{1}{2}$ each, and move the cards labelled $0$ to the top.

That the IRS is really the inverse of the RS can be checked directly: an inverse shuffle undoes a riffle shuffle precisely when all the $0$-cards end up above the $1$-cards.

So analyzing IRSs is good enough! Also, as informally noted above, the inverse shuffles give a probability distribution $Rif(\bar{\pi})$ such that $Rif(\bar{\pi}) = Rif(\pi^{-1})$. Further, the uniform distribution is pretty cool in that it satisfies $U(\pi) = U(\pi^{-1})$.

So, the following two quantities are the same:

$||Rif(\pi) - U|| = ||Rif(\bar{\pi}) - U||$.

Finally, we note when we should stop the iterations of the IRS so that we can state with confidence that the deck is randomized. Suppose you do this after $k$ shuffles. Each shuffle, the IRS way, moves a card according to the bit assigned to it. So you can proceed by assuming that you assign a random $k$-bit string to each card up front; the card's fate is then decided by the string it started out with. Two cards might still remain in the same relative order if they started out with the same bit string. This is avoided precisely when all cards get distinct bit strings. So the problem reduces to the birthday paradox! As I said, this is where I went through those stages:

Strong Defiance, Mild Reluctance....finally Admittance.
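If you want to peek at the numbers before proving anything, here is a sketch of my own of the birthday computation: the probability that all $n$ cards receive pairwise distinct random $k$-bit strings after $k$ inverse shuffles:

```python
from fractions import Fraction

def prob_all_labels_distinct(n, k):
    """P(n cards get pairwise distinct uniform k-bit strings)."""
    m = 2 ** k                   # number of possible k-bit strings
    p = Fraction(1)
    for i in range(n):
        p *= Fraction(m - i, m)  # the (i+1)-th card must avoid the i used strings
    return p

for k in (7, 10, 12, 14):
    print(k, float(prob_all_labels_distinct(52, k)))
```

For $n = 52$ the probability is negligible at $k = 7$ but climbs above $0.9$ by $k = 14$, which is the birthday-paradox flavour of the stopping rule.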

I invite you to show why. For the benefit of others, you can leave the proof as a comment.

Since it's 6 o'clock in the morning and I want to catch some rest, I will not do that now.

Have a Good Day

Some fundamentals of Markov Chains

Hello All,

This is a placeholder post. I will replace it with the contents that should be here soon.

Till then, you might want to see the $7^{th}$ chapter of the Mitzenmacher-Upfal book.

Monday, January 11, 2010

Some snapshots of Markov chains and Monte Carlo Methods

Hello everybody,

Sorry for choosing such a bad name for today's post. The problem is that this was the first lecture on the subject today, and I did not really know what else to call it.

Anyway, this semester, spring 2010, I am enrolled in a course called Markov Chains and Monte Carlo Methods (henceforth MCMC) here at Georgia Tech. Our instructor is the wonderful Professor Dana Randall, and she has an amazing knack for surprising you in each of her lectures. I do not understand how she communicates her lectures this well - which in essence is, in my opinion, the art of explaining things effectively, an art I am really horrible at. I guess to explain this well you have to understand even better.

Anyway, enough belaboring; let me get to the point - but wait, let me get some other things out of the way first.

(The section below definitely needs some rework; I hope I will get around to doing that.)

The exposition in this series of lectures by Prof Randall will be mainly technique driven. That is, we will see the techniques that have evolved while we work with Markov Chains to solve some important problems in Graph Theory, Statistical Physics, Molecular Biology and other curious areas. We will begin our tour with some classical counting problems and see the power of Markov chains as they solve the problem for us by developing appropriate context and motivation.

Last but not least, since I do not trust my memory fully, I may skip some of what the Professor taught in class owing to my RAM (a Memento-style temporary memory - well, not exactly Memento-style), and thus do a poor job of presenting the lecture's material (with high probability); sometimes I may also add a few extra things from my own knowledge that could make the material a little easier to follow (with vanishingly small probability).

Now the lecture.

So, as I said above, we begin the tour with some classical counting problems.

So, let me just mention the following problems. Can you decide in polynomial time whether an arbitrary undirected graph has a perfect matching or not? If you can, can you explicitly find that matching? Going further, can you count the number of perfect matchings in an arbitrary undirected graph? Lastly, can you return a uniformly random perfect matching from the graph (assuming the graph has some finite number, $n$, of perfect matchings, this question asks you to select any one uniformly at random)?

If I tell you how to solve the $3^{rd}$, the first two problems become trivial. Also, if I tell you the answer to the second one, the first becomes trivial. No sweat.

Thus, the first problem is no harder than the second one and the second one is no harder than the third one. What about the other way round?

It turns out that if you have access to an oracle which can answer the first question (viz. whether the graph has a perfect matching or not), you can answer the $2^{nd}$ question as well. And given oracle access to a device that solves the $3^{rd}$ question, you can answer the $4^{th}$ one. We will see in future posts how to do all this (if I get around to writing some).

But let me tell you about the problem of counting perfect matchings in a bipartite graph. It's "cute" in the sense that it has a very nice linear-algebraic interpretation. It turns out that solving this counting problem is as hard as computing the permanent of a matrix. You know the permanent? It's just the determinant with all the "minuses" gone. And surprisingly, computing the permanent is hard, whereas computing the determinant (which involves some zig-zagging through signs) is easy. In fact, computing the permanent is #P-complete.
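To make the "determinant with the minuses gone" concrete, here is a brute-force sketch of mine (fine for tiny matrices; nobody should compute permanents this way at scale):

```python
from itertools import permutations

def permanent(M):
    """The determinant's Leibniz sum with every sign set to +1 (brute force)."""
    n = len(M)
    total = 0
    for sigma in permutations(range(n)):
        term = 1
        for i in range(n):
            term *= M[i][sigma[i]]
        total += term
    return total

# For a 0/1 biadjacency matrix of a bipartite graph, the permanent
# counts its perfect matchings. A 6-cycle on 3+3 vertices has exactly 2:
C6 = [[1, 1, 0],
      [0, 1, 1],
      [1, 0, 1]]
assert permanent(C6) == 2
```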

Thus, given the above information, you can see that counting the number of perfect matchings in an arbitrary general graph must also be, well, #P-complete.

Now let's see how the knowledge that a graph has a perfect matching helps in constructing one. Hmmm... So I tell you that I have a graph which has a perfect matching. What good is this information in finding one?

Pretty good, it turns out. We capture the essence of the above discussion by saying that graphs are self-reducible with respect to the perfect matching property. We will prove this later.

Now I will turn to a special bipartite graph, namely the chessboard. It turns out that on a chessboard, the number of perfect matchings is easily computable. But first, let's turn to the following famous problem. You have an $n \times n$ chessboard. Can you tile it with $2 \times 1$ dominoes?

You can easily see that it's not possible for odd $n$; for even $n$ it's clearly possible. Too trivial. Now I ask: is it always possible to tile an arbitrary connected figure which has an even number of squares? The answer is no. Consider the traditional chessboard with two opposite corners removed. The removed squares are both the same colour, say black. The number of white squares being larger, a domino tiling is no longer possible.
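Counting tilings (equivalently, perfect matchings of the grid graph) is easy to brute-force for tiny boards. This sketch of mine covers the first uncovered square with a domino going right or down, and confirms both a small count and the mutilated-board impossibility:

```python
def count_tilings(w, h, removed=frozenset()):
    """Number of domino tilings of a w x h board with some squares removed."""
    free = frozenset((x, y) for x in range(w) for y in range(h)) - frozenset(removed)

    def rec(free):
        if not free:
            return 1
        x, y = min(free)  # first uncovered square; all lexicographically
        total = 0         # smaller squares are already covered
        for nbr in ((x + 1, y), (x, y + 1)):  # cover it rightwards or downwards
            if nbr in free:
                total += rec(free - {(x, y), nbr})
        return total

    return rec(free)

assert count_tilings(2, 2) == 2
assert count_tilings(4, 4) == 36
# opposite corners of a 4x4 board share a colour: no tiling survives
assert count_tilings(4, 4, removed={(0, 0), (3, 3)}) == 0
```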

Thus, it looks like a sufficient condition for tileability is an elusive beast. Not if Prof Randall has her way. Somewhere in the course of her graduate studies, she made the following cute observation.

Consider the following special "marked" dominoes (along with an unmarked one).

Call these figures (a), (b), (c) and (d). The marked tiles (a), (b) and (c) respectively connect $(x, y)$ to $(x+1, y-1)$, $(x+1, y+1)$ and $(x+2, y)$.

Prof Randall then makes a nice series of observations, beginning with a very simple idea: if you can tile the board with unmarked dominoes only, then you can trivially put marks on them and obtain a marked tiling.

But a mere marked tiling is not what she is happy with. She is looking for a meaningful marking - one which can tell her something about the number of perfect matchings in this kind of graph. In short, she establishes a bijection that counts the domino tilings (equivalently, the perfect matchings). The mental gymnastics she went through for this are not entirely clear to me, but I can try imparting some intuition by "cooking up" the following explanation. Possibly she arrived at the result by some other inventive idea, but the description below should help.

Note: the discussion below uses the words domino and tile interchangeably.

First, let us assume we are on an $n \times n$ grid where $n$ is even. (We will discuss the general case of a region $R \subseteq \mathbb{Z}^2$ later.) Further, to assist our discussion, imagine that the squares are coloured black and white as on a chessboard. Also, think of your dominoes as all being transparent. Now put a red dot over the centre of the half of each domino which lies over a black square. Let us identify this red dot over the black half as the starting point of the tile.

You can now imagine the tiles as a sort of "teleporter" carrying the red dot from one tile to the next. They can carry it "horizontally through" the tile as in (c), or "diagonally" as in (a) and (b).

[I am not providing any diagrams, but should you need one, you can look here on page 9.]

Thus you obtain a triangular lattice whose edges correspond to the directions the "teleporter" carried the red dot in. If you can chalk out a simply connected path for the red dot starting at some left-boundary tile (a source) and ending at some right-boundary tile (a sink), you have basically figured out a tiling for the squares traversed on the original board.

Thus, figuring out a simply connected path on the triangular lattice corresponds to figuring out a tiling for the corresponding portion of the chessboard! Figure out a series of such paths, each beginning at some source $s_i$ and ending at some sink $t_i$, such that all the paths are non-intersecting, and you win.

(If the paths intersect, your tiles overlap - that's why we need them all to be disjoint.) The series of paths from the $s_i$'s to the $t_i$'s is called a routing. Thus, the number of domino tilings of the chessboard is the same as the number of routings! Also notice that the source-sink pairs $(s_i, t_i)$ are respected by all tilings. That is, if one arbitrary routing has the set of sources $(s_1, s_2, \ldots, s_k)$ and the set of sinks $(t_1, t_2, \ldots, t_k)$, all other routings will have the same sets of sources and sinks. It's easy to see why: if it were not so, paths would intersect, leading to pathologies (not to mention other disasters like earthquakes, vampires, stinking socks, filth, etc.).

You might have noticed that, under the assumption that the region is tileable, the number of sources equals the number of sinks. Figure out why on your own. (It's too trivial.)

The same analysis extends to any other tileable region $R \subseteq \mathbb{Z}^2$.

But as of now, we have not said when a region is tileable. Relax, we are coming to that as well. We will do it using some tools from group theory. Our proof will be constructive: if a tiling exists, it will find one.

Allow me to relax a bit. I will resume posting soon.

Sunday, April 12, 2009

A further remark on the orb puzzle

Here is a further remark on the orb puzzle, also called the egg puzzle and (in)famously also known, thanks to interview forums, as the 2-orb or 2-egg puzzle.

Back here in India, the object that people have been using, since time immemorial, is an egg.

NOTE: In what follows, I will be using an egg (and not an orb) as the object of the experiment. The title was chosen as "A further remark on the orb puzzle" as homage to whoever first came up with this remarkable puzzle. Now you can gear yourself up for the further remarks, which to the best of my knowledge have not been collected elsewhere.

If you want to know what the puzzle is and how to solve it, you can read my previous post.

Now that you have settled in (the assumption being that you are aware of the puzzle and its solution), you can start reading what follows.

Somewhere while doing the above question, my thoughts turned to a strange corner. I had solved the problem in its full generality and had also tried cases with different numbers of eggs and floors.

With 3 eggs, it took 9 drops for the number of floors scaled to cross 100 for the first time. With 4 eggs, this number was 8.

With 5 eggs, the number came out to be 7.

I started wondering, could I beat binary search with this method?

I just wanted to see the number of floors you scale when you were allowed 7 drops. Could it exceed 128?

I tried with 6 eggs. Answer? 126.

7 eggs. 127.

Using more than 7 eggs, as expected, gives 127, because the maximum number of drops is 7. And thus, for the following numbers of eggs, I had:

8? Again 127
20? Again 127.

I then had a look at the floors scaled with 6 drops. The answer was 63. And 63 appeared for the first time (with 6 being the fixed number of drops) when I used 6 eggs.

127 appeared (with 7 being the fixed number of drops) for the first time when I allowed myself at least 7 eggs. From there on, it remained 127 regardless of the number of eggs.

Hmmm... something was brewing in the mathematical universe. The number of floors scaled with a certain number of drops had an upper bound which could not be pushed by upping the number of eggs.

What was going on?

I found the answer in a PDF by Umesh Nair. Though he does not address this issue directly, the treatment he invokes for solving the question can be readily extended to solve my problem.

In particular, he makes the interesting observation that if you are allowed a total of 'd drops', then you cannot scale more than 2^d floors.

Once I read this statement, it was pretty obvious!

Can you see why?

The answer lies, as Mr. Nair points out, in the innocuous fact that for a 'particular setting' ('e eggs', 'n floors', the 'break floor' being the b-th floor) there exists a unique sequence of drops which locates the 'break floor'.

Change the 'break floor' and you see a different drop sequence emerge. There is thus a one-to-one mapping between drop sequences and 'particular settings'.

And therefore, the number of drop sequences equals the number of floors you scale for a particular setting.

And obviously, you have scenarios where you do not end up using all of your 'd drops' for a particular setting.

For example, even though in the case of 2 eggs and 100 floors you are allowed a total of 14 drops, if the 'break floor' is 1, you can locate it in just 2 drops (not all 14 drops are needed)!

Therefore, some drop sequences are smaller in size, which means that with a maximum of 'd drops' you really cannot scale unbounded heights as you increase the number of eggs. Moreover, the bound is 2^d (as remarked above) because the shorter drop sequences terminate the entire sub-tree of drops that could have been rooted at that position.

For the 2-egg, 100-floor, first-floor-break setting, the drop sequence 1st-at-14, 2nd-at-1 terminates; you can't have, in this particular case (where 1 is the break floor), any other trials after completing your 2nd drop.

This is a pretty interesting analysis!

Hats off to Mr. Nair for his work on this problem!

He makes a few other remarks as well. For example, he converts the original recurrence (which you can also find in my previous post) into a combinatorial summation.

I will write about that later.

So far, so good.

But that still does not answer why this egg method converges to binary search when the number of floors and the number of eggs are in binary-search tandem.

(That is, the number of floors is 2^e - 1 when the number of eggs is e.) In case you are not sure it does, think a little bit about it; I will not answer it here apart from mentioning that you should be able to work it out from what has been said above.

Call it the tandem-issue.

Intuitively, we would expect someone not familiar with the recursive solution this puzzle boasts of to come up with the binary search strategy, given that the numbers of eggs and floors are in binary-search tandem. (In fact, people usually come up with binary search even when the values are not in that tandem - but that is not what I intend to discuss.)

The fact that one comes up with binary search on an intuitive level, given the tandem, complemented with binary search's information-theoretic optimality, may be reason enough for someone to claim that this settles the tandem issue.

But such reasoning is not complete. We need to answer why, in binary-search-tandem settings, the recurrence becomes as good as binary search itself. We should not invoke reverse reasoning - that is, we should not say that binary search is what we would use in tandem settings and, since it is known to be optimal, that is what the recurrence must do.

We also have the responsibility of answering why our recursive method proceeds the binary-search way. After all, it could have figured out a different strategy requiring the same number of drops.

Then why did it not do so? Why yet another mathematical cataclysm?

Now that we are clear on the problem, we solve it.

We just need to show that $T(d, d) = 2^d - 1$.

Try it on your own. I will answer it soon in this space only.
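In the meantime, here is a quick numeric check of mine, using the recurrence from the previous post ($T(e, d) = T(e, d-1) + T(e-1, d-1) + 1$ with $T(1, d) = d$):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(e, d):
    """Maximum number of floors decidable with e eggs and d drops."""
    if e == 0 or d == 0:
        return 0
    if e == 1:
        return d
    return T(e, d - 1) + T(e - 1, d - 1) + 1

# with as many eggs as drops, the recurrence matches binary search exactly
for d in range(16):
    assert T(d, d) == 2 ** d - 1
```

Of course, a loop over small $d$ is evidence, not a proof; the proof is yours to find.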

For now, it's goodbye.

Friday, April 10, 2009

A few algorithmic puzzles

Hello There,

Today I will be telling you about a few algorithmic puzzles.

The talk basically involves problems with their solutions. So if you are a reader who wants to try these out before peeking at the solutions, beware.

Also, I would say that, unfortunately, one of these puzzles has been made too popular through interview forums and other discussions, which makes the solving techniques used for these kinds of puzzles look futile.

It is my opinion that interview forums should operate a little differently.

Too much digression; now I will come straight to the point.
Below, I present the first of the two questions that I will be tackling in this post.

Prob. 1) Given 2 special eggs and a building 100 floors high, can you, dear reader, find the minimum number of drops in which you can find the height at or above which these 2 eggs start breaking?

Oh yes, the eggs are special in the sense that they won't break unless dropped from or above the threshold height.

Sol.) The key here is to realise that you need to prove the minimality of your solution, for which the most general approach is a recursive one.

In fact, in my opinion, for handling discrete minimization/maximization problems, the best way is to establish a recurrence. Once you have done that, establishing the lower/upper bound is as easy (or hard) as solving the recurrence.

Okay, did you try this problem on your own so far? If yes, did you arrive at the first wrong answer most people do (19)? Or did you arrive at the correct answer, 14?

Whatever you did, reading the explanation for 14 below probably won't hurt. (I will not explain how people arrive at 19.)

Well, the first thing we see is that it's a minimization problem. A good thing about these problems is that they can always be inverted: you can recast them as maximization problems, which may be (and in most cases, in my experience, are) easier to answer.

There is actually a reason for this as well. Maximization problems tie in well with our intuition of "overflow". The analogy is that a bucket is said to hold its 'maximum quantity' when adding any more leads to 'spillage'.

In mathematical terms, you try to show that a set S contains the maximum number of elements with property P when adding any more elements with property P would spill one or more of them. (However, there may be a few subtle issues involved.) More on this later.

A small, but powerful technique.

In this case, the corresponding maximization problem is to determine the maximum height you can scale using 2 eggs and 'd drops'.

Vary 'the number of drops' until the height scaled is greater than or equal to the number of floors in the question.

Well, the above discussion suggests a not-too-hard-to-find recurrence for this problem.

Try one last time before I give it away in its full grandeur.

Let us assume that T(e, d) denotes the height you can scale when you have 'e eggs' and 'd drops'.

Hmm. I have selected the notation. What remains is the notion. As Gauss remarked, it is the notion, not the notation, which is important.

But a notation is also good enough as a starting point. Now - the notion.

Well, it's a recurrence, right? (The notation says so.)

Then the corresponding notion is to set up on its RHS other invocation(s) of the same function. (The notion.)

Clearly, the RHS must involve quantities smaller than the ones you begin with, so that the recursion terminates.

So, what can that be? Well, you pick an intelligent height and go for the first drop. Then you have d-1 drops left. Hold it. You can see your recursion now.

After the first intelligent drop, you can scale a further T(e, d-1) floors above it. The intelligent choice must be placed so that, if the egg breaks, the culprit floor among the floors below is locatable with 'e-1 eggs' and 'd-1 drops'. That is, the intelligent drop is one level above T(e-1, d-1)!

Thus, T(e, d) = T(e, d-1) + [T(e-1, d-1) + 1]

And for recursion termination?

T(1, d) = d.
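The recurrence above is easy to run. This little sketch of mine finds the minimal number of drops and recovers the answer 14 for the question as posed (plus a couple of related values):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(e, d):
    """Height scalable with e eggs and d drops."""
    if e == 0 or d == 0:
        return 0
    if e == 1:
        return d
    return T(e, d - 1) + T(e - 1, d - 1) + 1

def min_drops(e, floors):
    """Smallest d with T(e, d) >= floors."""
    d = 0
    while T(e, d) < floors:
        d += 1
    return d

assert min_drops(2, 100) == 14   # T(2,13) = 91 < 100 <= T(2,14) = 105
assert min_drops(3, 100) == 9
assert min_drops(4, 100) == 8
```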

NOTE: I have not highlighted (maybe not even told you) where that 'overflow analogy' appears in this problem. But rest assured, you can find it.

Will post it later, and then remove this note.

Prob. 2) Now we come to the second problem. It is from a competition called Felicity (organised by IIIT Hyderabad).

The problem asks you to 'maximize' the number of elements in a set S. All of the elements of S come from the set A of natural numbers from 1 to 2000.

But you must be sure never to include, together with a number you already picked, a number that is three times (or one-third) or five times (or one-fifth) of it. These elements, thrice (or one-third) and five times (or one-fifth) the picked number, are called respectively the first kin and the second kin of the number picked.

You need to determine the maximum size the set S can have.

Go on, try it! Below I produce the solution.

Sol.) The key fact here is that this is (indeed) a direct maximization problem. We can try to use the 'overflow analogy', as I demonstrate below. As of now, I have not tried a recursive solution to this problem myself, but I have put the 'overflow analogy' to good use here.

See if you can arrive at the solution using what has been said so far (which, by the way, is no hint - but it's okay to try with the assurance that you may need to invoke this process, no matter how insignificant the 'place of invoking' is).

Well, if you tried and did not get it (or maybe did not try at all), here is the solution in full grandeur.

Ask yourself, when does overflow begin?
The answer is when maximum cardinality is reached, right?
And how exactly do I arrange matters to create it?

It is this question which puts your creativity on trial.
You can begin with a few cases that are simpler to analyze. For example, limit the size of the set A to 10. How big a set S can you create now?

The first choice guys usually come up with is S = {1, 2, X, 4, X, X, 7, 8, 9, X}.
(X indicates the absence of an element.)

Have we achieved true maximality?
Sadly, no.

Hmmm... these guys say the 'overflow analogy' tells them that placing any absent element into the set S would cause a spill, and therefore they claim they have achieved true maximality.

And this is where we change all the rules. This is precisely the subtlety I talked about earlier. Call it the subtle issue.

You can see that you can make S bigger by removing 1 and keeping 3 and 5 in the set. Thus, the notion of 'quantity spillage' in these problems is of a different kind: your bucket holds its maximum quantity only when there is no left-out quantity whose inclusion would spill less than it adds.
For example, consider a second answer: S = {X, 2, 3, 4, 5, X, 7, 8, X, X}. Hell, it's no better than the first one.
This configuration (the second answer) is bad because we can allow the 'overflow' of just one quantity (2) by adding two quantities (6 and 10).

A little experimentation shows true maximality is achieved when S is the set with precisely that modification: S = {X, X, X, 4, 5, 6, 7, 8, 9, 10}.

You can see that adding any other element leads to 'spillage'. Also, notice that no quantity 'spills less than it contributes'. That is, you can't (in this particular example) add two quantities spilling out just one.

Thus, we have achieved true maximality.
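For a set this small, the claim is easy to check by exhaustion. Here is a quick sketch (mine, not from the competition) that tries all 2^10 subsets of {1, ..., 10} and keeps the largest one respecting the kin rules:

```python
def valid(subset):
    """No element may coexist with its triple or its quintuple."""
    return not any(3 * x in subset or 5 * x in subset for x in subset)

# enumerate every subset of {1, ..., 10} via bit masks;
# invalid subsets get key -1 so they can never win
best = max(
    ({x + 1 for x in range(10) if mask >> x & 1} for mask in range(1 << 10)),
    key=lambda s: len(s) if valid(s) else -1,
)
print(len(best))   # the maximum size comes out to 7
```

Several different subsets attain size 7 (the text's {4, ..., 10} among them); the brute force merely confirms that 7 cannot be beaten.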

Similarly, for A = {1, ..., 20} the maximal S is
{1} ∪ {7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}.

But, hold your horses. Do you see it?
See what?
An Aha! insight my friend.

The maximal solution outlined above has contiguous elements! Maybe it comes in 2 or 3 sections, but within each section the elements are contiguous.

We will prove that you can always arrange matters so that this condition holds true.

First, notice that the examples above for 10 and 20 elements have one thing in common:
both of them end at the last element.

So, in general, we can begin with the last element. We create a decreasing run of numbers until we hit n/3: we list all the numbers from n down to floor(n/3) + 1. Call these two numbers Range-1-start and Range-1-end respectively; between them they hold a decreasing sequence of numbers.

Next, we make another sequence starting from Range-2-start = ceiling(Range-1-end/5) - 1 (the largest number whose five-times multiple still falls below the first range) and continue down until we hit floor(Range-2-start/3) + 1.

This way, you go on till you run out of elements.

Creating this sequence for the first 300 numbers goes like this:

first sequence: all numbers from 300 down to 101 (inclusive);
second sequence: all numbers from 20 down to 7 (inclusive; 21 and above are absent, as including them would force us to throw away elements of the first sequence);
third sequence: just the number 1.
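The range construction above is mechanical enough to code. Below is my sketch of it, alongside a brute-force check of true maximality over the conflict graph (edges x–3x and x–5x), whose components stay tiny; the names construct and exact_max are mine:

```python
def construct(n):
    """Pick contiguous ranges from the top down, as described above."""
    chosen, hi = [], n
    while hi >= 1:
        lo = hi // 3 + 1                # Range-k-end = floor(Range-k-start/3) + 1
        chosen.extend(range(lo, hi + 1))
        hi = -(-lo // 5) - 1            # next Range-start = ceil(Range-k-end/5) - 1
    return sorted(chosen)

def exact_max(n):
    """Exact answer: maximum independent set, component by component."""
    adj = {x: set() for x in range(1, n + 1)}
    for x in range(1, n + 1):
        for k in (3, 5):
            if k * x <= n:
                adj[x].add(k * x)
                adj[k * x].add(x)
    seen, total = set(), 0
    for start in range(1, n + 1):
        if start in seen:
            continue
        comp, stack = [], [start]       # flood-fill one conflict component
        seen.add(start)
        while stack:
            v = stack.pop()
            comp.append(v)
            for w in adj[v] - seen:
                seen.add(w)
                stack.append(w)
        best = 0                        # components are tiny, brute-force them
        for mask in range(1 << len(comp)):
            sub = [comp[i] for i in range(len(comp)) if mask >> i & 1]
            if all(w not in adj[v] for v in sub for w in sub):
                best = max(best, len(sub))
        total += best
    return total
```

In my trials the construction matches exact_max for every n up to 30; for n = 10 it reproduces {4, ..., 10}, and for n = 300 it yields exactly the three ranges listed above, 215 elements in all.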

This way, you can achieve maximality for any n. Perhaps you have already proved this for yourself, for it is clear enough.

In case you have not, here is one proof. Including any discarded element will cause the spillage of one or more entries, and in the above construction you never get the situation where more elements can be added at the expense of fewer getting discarded.

Consider the case when A holds 300 elements. The discarded elements lie between 21 and 100 (inclusive) and between 2 and 6 (again inclusive). Putting back 'boundary elements' looks like a dangerous idea; we will analyze it later (it is related to the subtle issue). Putting back a non-boundary element leads to a spillage of two and is definitely wasteful.

Now back to the 'boundary elements' (and their neighbours). Putting one back spills an element from the first sequence, which cannot lead to the subtle issue, since the second-kin of the spilled element is not in either of the two sequences. Putting it back thus leads to exactly one other spillage, so it too is wasteful. And there, finally, you have seen true maximality.

Also, I will help you with an example of one boundary element. In the case of A holding 300 elements, if you add 100 to the set S, you spill 300. Hmm, can the subtle issue arise? Can it lead to the inclusion of two elements at the cost of the one spilled 300?

No, it won't. For after the inclusion of 100, 300's first-kin, you cannot also include 300's second-kin, 60: including it would in turn spill 180 and 12.

This is the general idea behind the proof which can be easily made more rigorous.

NOTE: Some of my friends told me that it was difficult to make the connection between the contiguousness of elements and the maximality of the set S. I agree.

Also, they say that I 'got lucky' by guessing the pattern from 2 examples, something which normally requires many attempts. And they ask: what if the pattern had not held for long? I could have got stuck.

True. But my answer is that I did not stop at 2 trials (the examples for 10 and 20). I tried a total of 21 trials (from |A| = 10 to |A| = 30)!

That's what mathematical problem solving is about. This is what research is about, and this is what interview forums do not do. And yes, this is as far from real research as anything might be, but my primary attack is not on its unimaginable, light-yearish distance from research; it is aimed, once again, at interview forums which do not encourage students to think.

They make you more of an analyst and less of a designer.
And the subject was called 'Design and Analysis of Algorithms'. Absurd!

Enough for now.

There are a few other problems I would like to include in this post later. Right now, I want to rest.


Saturday, March 1, 2008

Escape Velocity

Note: This post is out of context in as much as the big bang theory is concerned... It's just a short story. Hope you enjoy it. (It was authored by Brian N Ball; I try to retell it, though with a few changes, as I was unable to recollect the original version.)

Eric was just back from the summer holidays. His internship session was about to begin and he wanted to start it in all seriousness for this was the first time he would be coming face to face with an industry. And not just any industry. It was the space industry –or rather- the research center for astronomical studies. He had it carefully thought through. The industry which he selected was the one he had yearned for all his life. Though now almost a computer science student (his degree was due in four months) he had never given up upon his ambition of wandering the depths of space, of walking on moon, of racing with his friends on Mars. And now the first stage of this big dream was closing in.

He had a penchant for studies and another penchant for applying his knowledge to diverse fields. And the list of these diverse fields was always topped by things – rather, anything – that had even a remote connection with space; a connection which people would not talk about or normally even think of, as it required – in their words – a clumsy crow-barring of the worst kind. In fact, these people, some of whom were his best friends, often mocked his enthusiasm for connecting everything with space, ridiculing him with their favorite sentence – “Hey Eric! Can you give me your crow-bar? Have to kill a shark.” Others said – “That man is never short of crow-bars. You can take as many as you want and he will never ask you to return ‘em.” But his enthusiasm never dwindled, and in fact he found a way to stop these comments when he got interested in real shark-hunting and therefore bought a crow-bar. From then on, when anyone asked him for a crow-bar he would hand them the real thing – a funny sight – for the person asking would often slink away embarrassed.

His internship was to start tomorrow and he was busy filling forms for the same when he came across a potentially devastating statement saying that people with weak mitral valves are not allowed to become astronauts. He would not have taken the statement too seriously had he not also seen the line – only one in a hundred thousand has a mitral valve which can withstand the acceleration when one is propelled into space. He got worried and talked to his friends about it. They said, “You need not be worried. Why go to space? You are a computer engineer, man. You can analyze the data from the earth itself.” “But no,” said Eric, “being there is different.”

“In which sense?”

“You will never know,” he replied.

"Okay then, let’s hope for the best in the test tomorrow,” they advised. “Or hey, champ! Cheer up. If the medicos open their mouths, you show them your crow-bar.” Saying so, they burst out laughing. Fuming, Eric left the place, convinced that no good would come out of this discussion and that he had better wait for the test, which he would perhaps, if he had any luck (though he knew he had none), pass. The D-day duly arrived. He was tested, and he was sent a letter of regret. How he hated it! The very sight of it took his breath away. He was furious at God, at friends, at parents, at society, for reasons he could not fully understand. Hypnotically, he moved to pick up his belongings, thinking to go out on a hunt. He picked up his crow-bar. Yes, it was time to kill sharks. He was moving past the Proctor-House when he heard the proctor talking to the warden, “Yes, the boy had real bad luck. If only we could help him.”

“You know you cannot. I mean, after all it’s about the body machinery. Right? The boy would do better if he would focus on ridding himself of this weakness”, said the warden.

“And what is that supposed to mean? Should he consider a heart-transplant? Eh” asked the proctor.

“No, no. I actually meant maybe there is some other solution around. Maybe he should kind of break in. There are, I believe, always solutions to our problems lying around us. One just needs to look harder when the situation gets more restrictive,” clarified the warden.

“And what solution do you find?” enquired the proctor.

“Well it’s the boy’s story and he should better think about it” replied the warden.

In the meantime Eric, who had accidentally chanced upon this conversation, determined to do something about his only dream – a space travel. He kept listening, and by the time he finally moved, he had gathered that there was a rocket launch scheduled that day. The rocket would go to the moon, transporting some of the leading astronomers to the satellite. Not that he did not know this earlier; it had been in the papers all the time. But he had forgotten it all after that devastating interview, which seemed to have been a year ago, though hardly two hours had passed since.

But now he was a man with a purpose. He planned it – or rather, he thought he did. He would sneak into the rocket. How he would pull this off he was not sure; but that he would, he was sure. He moved to the launch site as one of the spectators, as they were allowed on the launching grounds. But to move further in, he would need an access code. And there would be extensive checking, with all those cameras and God only knows what other kinds of instruments. Then he hit upon a solution to this dilemma. He would make a run for it once the launch procedures were underway. If he succeeded, he would simply shut himself up in a room where people are normally not supposed to go during the launch – the cleaner room. Every rocket has a cleaner room where oxygen cylinders are stored at a sub-zero temperature. In fact, the temperature is so low that the oxygen is liquefied. This serves a double purpose. When the rocket takes off, the oxygen so released, exposed to low pressure and a considerably higher temperature, rapidly expands under the initiated combustion and supports further combustion. It also provides a proper oxygen supply for the astronauts inside the rocket.

Once he was inside this room when the launch procedures were about to start, Eric thought, he would be safely on the journey to the moon. Safely? He mused. He might die of his mitral valve problem. But it did not matter; he was ready to pay the price. And he might not die after all. So why not try this solution? For once he was inside the cleaner room, he could as well start an oxygen leak. For fear of the cold oxygen, nobody would come inside the room. Yes, this was it – his salvation, his solution. Nobody had ever tried the cleaner room before, so he could rest in peace with the thought that security controls keeping those inside the rocket out of the cleaner room would be lax.

And when the launch hour ticked near – menacingly near, at one hour before launch – he dashed for the rocket, dodging some guards; in fact, dressed as a guard himself (which he had managed after overpowering one, knocking him unconscious in the process).

He ran into the cleaner room, shutting it close behind him. No one noticed. He was glad, though his heart raced at a pace which rivalled the fastest race horses he could imagine. And then, the COUNTDOWN!!!

The rocket was about to take off. “Oh, my mitral valve,” Eric panicked. Suddenly he heard footsteps outside. Did they know he was in? Would they drive him out? Hell, he had not even cared for his life to see this moment. In desperation he crow-barred open one of the oxygen cylinders and was suddenly drenched in an outpouring of cold oxygen. Imagine being exposed to the coldest thing you can imagine; now imagine bathing in a sea of that thing. Even that thought would not give you the shivers which the cold oxygen at -230 degrees Centigrade gave to Eric – shivers that lasted no more than five seconds, for an exposure to such immediate cold can be fatal, and the body shuts itself down in a matter of seconds. But those shivers are so gruesome that you cannot have seen them even in movies or documentaries; they can only be asymptotically imagined. Eric knew the end was close. And before going unconscious he could hear 3… 2… 1… 0… lift-off.

When he opened his eyes he found himself in a place where he felt light and had a strangely happy feeling. Was this heaven? After all, he was dead after bathing in cold oxygen. And he was happy, so hell was ruled out. Therefore, going by logic, it was heaven. Or was it? When he started moving around he could see a cratered surface stretching for miles and miles around him. He asked himself aloud, “Does God leave you in your most cherished place after you die?”

Suddenly he heard a woman: “You are not dead, you fool. But you surely would have been, had I not gone inside the cleaner room in time. We know the story. You are Eric. You wanted to go to the moon, and you are here with us. But I must say, your solution to the mitral valve problem was mind-blowing.”

“Eh?” Eric asked. “I never solved that.”

“Oh… we thought you knew that cold oxygen would slow your heartbeat considerably, so that the escape-velocity acceleration would not hasten it to a heart-bursting rate. In any case, your solution was nice, and you should know that you now have a bigger fan following on earth than we astronauts do. They say the age of dare has not died; it has been restarted through your rash though, I must admit, bold steps.”

“Do I stay?” Eric asked in a fearful voice.

Laughing, the woman replied, “Oh yes! You do. Or what do you think – would we call our mission back once we are here? Though I must say, your going back to earth will involve the same procedure, an exposure to ultra-cold oxygen. That’s quite a way to thwart the escape-velocity acceleration. And it has also inspired a generation to break free of their limitations, as they too rise, breaking those bondages, at an escape velocity… the Adventure Continues.”