Monday, September 29, 2014

Catch the Magical Monkey puzzle

[This is another one of those puzzles that seem impossible at first]

You live in a one-dimensional world, where the unit of time is seconds and everyone is immortal. At time = 0 seconds, a magical monkey appeared at some integer coordinate, and it has been moving at a constant integer speed ever since.

If it was at position $x$ at time $t$ seconds, then it disappears and reappears at position $x + v$ ($v$ could be negative) at time $t+1$ seconds, where it stays for a brief moment, then disappears and reappears at $x+2v$ at $t+2$ seconds, and so on.

Both the initial point of appearance, and the speed of the monkey are not known to you, and you have no record of where the monkey has been so far.

You have access to magic too, and can appear at any point of your choosing. If you appear at a certain point at the same time as the monkey, you can catch him.

In your world, time progresses linearly, and there is no time travel etc.

Can you catch the monkey in a finite amount of time, i.e., catch him eventually rather than be on a chase forever? (Assume the current time is some $T \gt 0$.)

[Solution]

Friday, September 26, 2014

Solution to the "Is the Sum S?" Algorithm Puzzle

[This is a solution to the Is the sum S? algorithm puzzle posted earlier]

A description of the problem:

Given n numbers in the range [-M,M], and a target sum S, also in the range [-M, M], can you figure out in O(n) time, whether those n numbers add up to S, while keeping all your computations within the [-M, M] range? (M is unknown to you). Assume arithmetic operations are O(1).

Solution


Since the numbers are in the range [-M,M], the key observation we use is that adding a positive number to a negative number (or vice versa) will not overflow.

Thus we do the following:

Start with running sum = the negative of the target. Now we try to add the elements of the array to this running sum, hoping to see whether we get a zero in the end.

If the running sum is negative, add a positive number to it. If there are no positive numbers left in the array, then we know we cannot end with a total sum of 0.

If the running sum is positive, add a negative number to it. If there are no negative numbers left in the array, then we know we cannot end with a total sum of 0.

If the running sum is zero, add either a positive or a negative number; it does not matter which.

Here is what a C++ implementation might look like (untested):

bool has_sum(const std::vector<int>& a, int target) {
  // Technically, the negation is a bug 
  // and overflow is possible here.
  // The range of int in C++ is actually 
  // [-M-1, M], not [-M, M].
  // But, we ignore that...
  int sum = -target;

  std::vector<int> positives;
  std::vector<int> negatives;
  int total = 0;

  // Collect the positive and negative numbers.
  for (auto x : a) {
    if (x > 0) {
      positives.push_back(x);
      total++;
    }
    if (x < 0) {
      negatives.push_back(x);
      total++;
    }
  }

  int positives_seen = 0;
  int negatives_seen = 0;

  // Note: no overflow in the + here :-)
  while (positives_seen + negatives_seen < total) {
    if (sum < 0) {
      // If sum is negative, try to add a positive.
      if (positives_seen >= positives.size()) {
        break;
      }
      sum += positives[positives_seen++];
    } else if (sum > 0) {
      // If sum is positive, try to add a negative.
      if (negatives_seen >= negatives.size()) {
        break;
      }
      sum += negatives[negatives_seen++];
    } else {
      // If sum is zero pick a positive.
      if (positives_seen < positives.size()) {
        sum += positives[positives_seen++];
      } else {
        // There is at least one negative left.
        sum += negatives[negatives_seen++];
      }
    }
  }

  return sum == 0;
} 

Wednesday, September 24, 2014

Try the duck

I had posted this hand on bridgebase forums a few years back, as a play problem for the intermediate players.

This was the hand:

IMPS
None 
 North
♠ KQJ97
♥ 93
♦ 853
♣ AQ4

   


 South
♠ 42
♥ AQJ4
♦ AKQ
♣ 6532

You are South, in a contract of 4NT in a team game (don't ask how you got there, this is a play problem). West leads the heart 2, you play the 9, and East plays the K which you win with the A.

[Please stop reading if you want to try and solve this first]

You have 3 heart tricks, 3 diamond tricks and 1 club trick on top. You can get 2 spades for sure and have a good chance of a third spade trick (remember, you are in 4NT).

Now if you play the spade KQJ, then West, with ATxx, can duck twice (or win the second round). You are then one entry short to cash the third spade and will need to rely on the club finesse.

Consider what happens when you play a spade to the 9 after winning the heart Ace. Even if it loses to the T, East cannot attack your dummy entry. This will allow you to lose two spade tricks while keeping the only entry in dummy to cash the third spade. If you play spades from the top, you might have to use up the dummy entry to setup the spades.

Compared to playing spades from the top, this line loses only when East has a singleton T.

Playing to the K first and then to the 9 loses to Tx with East.

[Apologies to Charles Goren for swiping the title from his BOLS tip]

Monday, September 22, 2014

Iterators part three: A General Binary Tree Iterator

[This is part three in a series about iterators]

Introduction

If you have heard of binary trees, you probably know what in-order, pre-order and post-order traversals are. These traversals are usually defined in a recursive fashion.

For instance, in-order is defined: traverse left sub-tree, visit node, traverse right sub-tree.

These kind of definitions lead to easy recursive implementations.

A common interview question is to convert these recursive methods to iterative ones, which can be solved in a general fashion by doing what compilers do: simulate the recursion using a call stack.

A possible followup to that is to add the ability to do the traversal in a lazy fashion: provide a next interface, which will return the visited nodes one by one (with next being reasonably efficient). [One could use yield, if the language provides it, but we will assume that it is unavailable.]

If you search the web, you will probably see different iteration solutions, custom made for in-order, pre-order etc.

A General Lazy Iterator for Binary Trees

In this article we will discuss a general lazy iterator for binary trees, for traversals defined recursively. This includes not just in-order etc., but also crazy traversals like: traverse left sub-tree, traverse right sub-tree, visit node, visit node, traverse left sub-tree.

These general traversals can be represented as a string consisting of the letters "L", "R" and "V".

For instance, in-order traversal is "LVR", L for traverse left sub-tree, V for visit node and R for traverse right sub-tree. Post-order is "LRV" and the crazy traversal defined above is "LRVVL".

So, given a traversal method in the form of a traversal string (like "LVR", containing at least one V), and the tree, we would like to create a lazy iterator, which will allow us to visit the nodes of the given tree in the given traversal order.

If you were to write a recursive method for the traversal, the code might look something like this (python):

# traversal_method is a string like "LVLR".
def traverse(tree, traversal_method):
    if tree is None:
        return
    for method in traversal_method:
        if method == "L":
            traverse(tree.Left, traversal_method)
        if method == "R":
            traverse(tree.Right, traversal_method)
        if method == "V":
            visit(tree.Value)

This recursive method can be converted to an iterative method by maintaining a stack of (tree, traversal_left) pairs.

For example, if the traversal required is "LVR", we first push the (root, "LVR") pair. Once the left sub-tree is traversed because of the "L" (started by pushing the left child), we update that entry to (root, "VR"); we then visit the node, update the entry to (root, "R"), and then traverse the right sub-tree.

Once a node's traversals_left becomes empty, we pop it off the stack, pick up what is on the top, examine what needs to be done and continue.

This is what the iterative version might look like (python):


# traversal_method is a string like "LVLR".
def traverse(tree, traversal_method):
    if tree is None:
        return
    stack = [(tree, traversal_method)]
    while len(stack) > 0:
        node, traversal_left = stack[-1]
        del stack[-1]  # pop
        if node is None:
            continue
        # In python L[-1] is the last element of the list,
        # and L[1:] is the list without the first element.
        if len(traversal_left[1:]) != 0:
            # Push back the rest of this node's traversal.
            stack.append((node, traversal_left[1:]))
        if traversal_left[0] == "L":
            stack.append((node.Left, traversal_method))
        if traversal_left[0] == "R":
            stack.append((node.Right, traversal_method))
        if traversal_left[0] == "V":
            visit(node.Value)

In languages that support yield, an easy way to make this lazy would be to replace visit(node.Value) above with yield node.Value.

Assuming we don't have such a language feature, we want to implement a next interface, which can give us the nodes one at a time, and in a reasonably efficient manner (both worst case and amortized per call).

[We could probably blindly simulate what the C# compiler/python interpreter does behind the scenes for implementing yield, but the result might turn out to be an unreadable mess and we will probably have to spend some effort making it readable, anyway.]

We implement next as two steps.

First step is move, which will do the iteration of pushing/popping nodes into/off the stack, till we get to a node which needs to be visited, resulting in the appropriate node being at the top of the stack.

The second step of next would be to update the top of the stack indicating that a visit was just performed (and returning the tree node to the caller).

We also maintain the invariant that only the move step will change the size of the stack.

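As a sketch of the two-step next described above, assuming tree nodes expose Left, Right and Value as in the earlier Python snippets, a minimal Python Traverser might look like this (the Tree class here is a hypothetical stand-in):

```python
class Tree:
    # Hypothetical node type: Value, Left, Right.
    def __init__(self, value, left=None, right=None):
        self.Value = value
        self.Left = left
        self.Right = right

class Traverser:
    def __init__(self, tree, traversal_method):
        self._method = traversal_method
        self._stack = [(tree, traversal_method)] if tree is not None else []
        self._move()

    def _move(self):
        # Step one: push/pop until the node on top of the
        # stack has "V" as its next pending action.
        while self._stack:
            node, left = self._stack[-1]
            if left[0] == "V":
                return
            self._stack.pop()
            if len(left) > 1:
                self._stack.append((node, left[1:]))
            child = node.Left if left[0] == "L" else node.Right
            if child is not None:
                self._stack.append((child, self._method))

    def has_next(self):
        return len(self._stack) > 0

    def next(self):
        # Step two: record the visit on the top entry,
        # then move to the next visit.
        node, left = self._stack.pop()
        if len(left) > 1:
            self._stack.append((node, left[1:]))
        self._move()
        return node
```

For example, Traverser(root, "LVR") yields the nodes in in-order, one per call to next.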

And here is a sample usage of this iterator (C++):
// Determine if two nodes of a binary search tree
// sum to target.
bool has_sum(Tree *tree, int target) {
  if (tree == NULL) return false;

  Traverser ascending(tree, "LVR");
  Traverser descending(tree, "RVL");

  Tree *left = ascending.next();
  Tree *right = descending.next();

  // Strict < so we never pair a node with itself.
  while (left->Value < right->Value) {
    int sum = left->Value + right->Value;
    if (sum == target) {
      return true;
    }
    if (sum > target) {
      right = descending.next();
    } else {
      left = ascending.next();
    }
  }
  return false;
} 

Analysis of next

If the size of the traversal description string is T [O(1) for most useful cases], and the maximum depth of the tree is H, then next would run in O(TH) time, and would use O(TH) space.

Even though each call to next is O(TH), it is O(T) amortized per node visited (averaging over the whole traversal), as each node and each edge of the tree is touched only O(T) times.

Conclusion

Even though any recursive implementation which returns a list of elements can theoretically be modified into a lazy iterative version, explicitly doing it for a binary tree was an interesting exercise.

This approach can also be generalized to general trees, where instead of "L" and "R", we can use child numbers etc.

Friday, September 19, 2014

Is the Sum S? Algorithm Puzzle.

The algorithm puzzle in this post is an extension of a simple interview question I used to ask some candidates (who wanted to do their coding in C/C++), when I was at Microsoft.

The question started off simply:

Given an array of integers, you want to find out if the sum of all the numbers in that array is S. Write a C/C++ method to do it.

Most candidates would squint at me, probably thinking, is this for real? Once they had it written down, I would ask them the follow up question: Could this give wrong answers?

The answer is yes: the integer addition could overflow, and could give wrong answers.

To rectify that, some candidates would suggest using floats/doubles etc. Then the discussion would go on about how one could implement big integers, coming up with some test-cases etc.

Anyway, here is the puzzle.

Given n numbers in the range [-M,M], and a target sum S, also in the range [-M, M], can you figure out in O(n) time, whether those n numbers add up to S, while keeping all your computations within the [-M, M] range? (M is unknown to you). Assume arithmetic operations are O(1).

Thursday, September 18, 2014

Solution to the Chain of Sets puzzle

[This is a solution to the Chain of Sets puzzle posted earlier]

A brief description of the problem:

Is there an uncountable set $F$ of subsets of $\mathbb{N}$ such that for any distinct $A, B \in F$ either $A \subset B$ or $B \subset A$?

Solution


A solution uses the fact that the dense set of rationals $\mathbb{Q}$ is countable, i.e. there is some numbering of the rationals as $$\mathbb{Q} = \{q_1, q_2, q_3, \dots \}$$
For each real number $r$, let $Q_r$ be the set of rationals strictly less than $r$.

Each such $$Q_r = \{q_{i_1}, q_{i_2}, q_{i_3}, \dots \}$$ can be associated with a subset of naturals (by taking the subscripts) $$N_r = \{i_1, i_2, i_3, \dots\}$$
If $y \lt x$, then we have $N_y \subset N_x$. Since the rationals are dense, there is a rational $q$ with $y \lt q \lt x$; such a $q$ contributes an index to $N_x$ but not to $N_y$, so $N_x \neq N_y$.

So the answer is, yes, there is an uncountable set with the required property: $$F = \{N_r : r \in \mathbb{R}\}$$

Tuesday, September 16, 2014

An Unusual Safety Play

[This hand reminds me of a quote attributed to Bobby Fischer, which says to keep looking even when you have found a good move]

Playing a team game, you are the dealer holding A42, K974, -, KJ8642.

You open 1C, LHO bids 2D alerted as majors. Partner does not hear the alert and bids 2H. RHO bids 4S. Your agreement is that 2H is a limit+ in clubs, so you bid 5C, and this gets doubled by RHO.

LHO leads the DK and you see:



Apparently LHO forgot their system, but that is irrelevant to the hand now. You have to try and make 5C doubled.


You ruff the DK with the C2 and play the club 4, club 9 from LHO, club Q from dummy and RHO wins the CA. RHO returns the DJ, which you ruff.

A safety play looks apt for this hand.

You could try to cater to RHO having 4 clubs, by playing a heart to the dummy and playing a club to the 8.

But there is a better play.

If you play a heart to the A, and RHO discards, you must play another heart!

If you don't, clubs might well be 3-2, and a heart ruff will be your setting trick: LHO will win the C8 with the CT and won't have any trouble returning a heart.

Playing the second heart extracts the last heart from LHO (an unusual form of Dentist's coup), and now even a 3-2 break won't get you down. You play the second heart and if that is not ruffed, play a club to the 8 (there is no problem if your heart gets ruffed).

This is an unusual hand: you make a safety play guarding against an even trump break by playing an extra round of a suit where you know the opponent is void!

Monday, September 15, 2014

Solution to the Heavy Coin puzzle

[This is a solution to the heavy coin puzzle posted earlier]

A brief description of the problem:

You have $96$ coins of weight $H$, $4$ of weight $L$, with $H \gt L$. With two or fewer uses of a two pan balance, find a heavy coin.

Solution


Make three groups, $A$ with $33$ coins, $B$ with $33$ coins and $C$ with $34$ coins.

Weigh $A$ against $B$.

If $A$ is heavier, then $A$ can have at most one light ($L$) coin.

Now we can take two coins from $A$ and weigh them against each other. Pick the heavier one if of different weight, or pick any one if equal.

The case when $B$ is heavier is similar to the above case.

If $A$ and $B$ are of equal weight, then we have the following possible scenarios:

$$\begin{array}{|c|c|c|} \hline
A&B&C\\ \hline
33H&33H&30H+4L\\ \hline
32H+L&32H+L&32H+2L\\ \hline
31H+2L&31H+2L&34H \\ \hline
\end{array}$$


Now move one coin from $A$ to $B$ (call the result $B_a$).

Weigh $B_a$ against $C$.

If $B_a$ is heavier than $C$, then the coin you moved was a heavy coin.

If $C$ is heavier, then any coin in $C$ is a heavy coin.

If $B_a$ and $C$ are equal, then the coin you just moved is a light coin, and the $32$ coins left in $A$ are all heavy.

Sunday, September 14, 2014

Iterators part two: Using the Odometer

[This is part two in a series about iterators.]



Introduction

Like the odometer of a car, the odometer we consider is basically a counter, with one generalization: each position could cycle through a different number of "digits" (each position starts at digit 0).

["Digits" in quotes, because a position could run from, say, 0 to 999, with each number considered a digit in base 1000.]

The general odometer works just like the one in the car.

Every time you need to increment the odometer, you increment the digit in the rightmost position (going back to 0 if you are at the largest possible digit for that position). If the increment results in that position cycling back to 0, you increment the position to the left of it. If that results in cycling back of that position to 0, you move one position left and so on.

Basically, simulating hand addition with carries.

When all the digits reach the maximum value, we cannot increment anymore. An odometer can easily be implemented as an iterator, to provide a has_next, next interface with next incrementing the counter by one.

Here is what an implementation might look like (C++):
#include <vector>

// Represents a generalized odometer.
class Odometer {
public:
  // 0 is the right most position.
  Odometer(const std::vector<int>& max_digits) {
    _max_digits = max_digits;
    for (int i = 0; i < max_digits.size(); i++) {
      _odometer.push_back(0);
    }
    _overflow = false;
  }

  bool has_next() const {
    for (int i = 0; i < _odometer.size(); i++) {
      if (_odometer[i] != 0 ) {
        return true;
      }
    }
    // All zeroes. If next() has ever been called,
    // we have wrapped around and are done.
    return !_overflow;
  }

  // returns the current, but increments before
  // returning.
  std::vector<int> next() {
    _overflow = true;
    std::vector<int> current = _odometer;
    for (int i = 0; i < _odometer.size(); i++) {
      _odometer[i] = (_odometer[i] + 1) % _max_digits[i];
      if (_odometer[i] != 0) {
        break;
      }
    }
    return current;
  }

private:
  bool _overflow;
  std::vector<int> _odometer;
  std::vector<int> _max_digits;
}; 
 
We will now discuss how we can do an iterator-based generation of permutations using an odometer (and briefly, Cartesian products and power-set enumeration).

Generating Permutations

[If you want to try solving the problem of implementing a has_next, next type of iterator for permutations using an odometer, please stop reading]

While the method discussed below is not optimal (in fact, not even asymptotically optimal, unlike Heap's method), and generates permutations in a "weird" order (unlike Narayana Pandita's method of generating in lexicographic order), it is an interesting application of odometers.

Say we want to provide an iterator to generate all the permutations of $1,2, \dots, n$, with a has_next, next interface.

The idea is to use a suitable odometer as a black-box, and map each odometer state to a permutation uniquely.

Since there are $n!$ permutations, an odometer suggests itself:

The odometer has $n-1$ digits, $d_1, d_2, \dots, d_{n-1}$ with $d_1$ being the left-most.

The maximum possible digit for $d_1$ is $n-1$, so $d_1$ cycles through $n$ digits: $\{0,1,2,\dots,n-1\}$.

Maximum digit for $d_2$ is $n-2$ and so on (max for $d_i$ being $n-i$).

This odometer can count up to $n\times(n-1)\times\dots\times 1 = n!$, exactly the number of permutations.

To generate a permutation given the odometer $[d_1, d_2, \dots, d_{n-1}]$, we can use the following method.

Start with $n$ empty circles in a row.

Write $1$ in the $(d_1+1)^{th}$ circle from the left.

Write $2$ in the $(d_2+1)^{th}$ empty circle from the left.

Write $3$ in the $(d_3+1)^{th}$ empty circle from the left.

and so on.

Once you have written $n-1$, there will be one empty circle left, where you write $n$, resulting in a permutation of $1,2,\dots, n$.

Notice that in the resulting permutation, the count of values greater than $i$ that appear to the left of $i$ is precisely $d_i$. In other words, $[d_1, d_2, \dots, d_{n-1}]$ is the inversion table of the permutation, which tells you how many inversions each number has.

It is not difficult to see that each odometer state generates a unique permutation.

If the odometer can be implemented as an iterator with has_next, next, so can the permutation generator.

There is an easy $O(n^2)$ time algorithm to map the odometer state to the permutation. Even $O(n\log n)$ is achievable.
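The circle-filling procedure above can be sketched directly as the easy $O(n^2)$ mapping (the function name is ours, for illustration):

```python
def permutation_from_odometer(digits):
    # digits = [d_1, ..., d_{n-1}]; number i goes into the
    # (d_i + 1)-th empty circle from the left, and n takes
    # the last remaining circle.
    n = len(digits) + 1
    circles = [None] * n
    for number, d in enumerate(digits, start=1):
        empty = [i for i, c in enumerate(circles) if c is None]
        circles[empty[d]] = number
    circles[circles.index(None)] = n
    return circles
```

For $n = 3$, the six odometer states $[d_1, d_2]$ with $d_1 \in \{0,1,2\}$ and $d_2 \in \{0,1\}$ map to the six distinct permutations.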

Generating Cartesian Products


The Cartesian product of sets $A_1, A_2, \dots, A_n$ is the set of tuples $(a_1, a_2, \dots, a_n)$ such that $a_i \in A_i$.

If the size of $A_i$ is $s_i$, then the number of such tuples is $s_1\times s_2\times \dots \times s_n$.

An odometer for this is $[d_1, d_2, \dots, d_n]$ where $d_i$ cycles through $s_i$ digits. Given an odometer state $[d_1, d_2, \dots, d_n]$, we pick the $(d_i+1)^{th}$ element from $A_i$ for the corresponding element of the resulting tuple.

The mapping can be done in $O(n)$ time, assuming the $k^{th}$ element of $A_i$ can be picked in $O(1)$ time.
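As a sketch, we can pair a simplified inline odometer (here the last position is the fastest-changing digit, unlike the C++ class above, where position 0 is the rightmost) with the element-picking step:

```python
def odometer_states(max_digits):
    # Yield every state of a generalized odometer; the last
    # position is the fastest-changing digit here.
    state = [0] * len(max_digits)
    while True:
        yield tuple(state)
        for i in reversed(range(len(state))):
            state[i] = (state[i] + 1) % max_digits[i]
            if state[i] != 0:
                break
        else:
            return  # every position wrapped to 0: done

def cartesian_product(sets):
    # Pick the (d_i + 1)-th element of the i-th set for
    # each odometer state.
    for state in odometer_states([len(s) for s in sets]):
        yield tuple(s[d] for s, d in zip(sets, state))
```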

Generating Subsets



If we want to generate all the subsets of $\{a_1, a_2, \dots, a_n\}$, we can represent each set as a bit-vector $[v_1, v_2, \dots, v_n]$, where $v_i = 1$ iff $a_i$ is in the set.

We can use the odometer $[d_1, d_2, \dots, d_n]$ with each $d_i$ cycling through exactly two digits.

We pick $v_i = d_i$.

This mapping can be done in $O(n)$ time.
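A sketch, using the bits of an integer counter in place of the two-digit-per-position odometer (equivalent, since each $d_i$ is a bit):

```python
def all_subsets(elements):
    # The bits of mask play the role of the odometer digits:
    # bit i is v_i, deciding whether elements[i] is included.
    n = len(elements)
    for mask in range(2 ** n):
        yield [a for i, a in enumerate(elements) if (mask >> i) & 1]
```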

Conclusion


The generalized odometer seems to be a useful tool for implementing has_next, next type of iterators for certain combinatorial objects.

Also, given a number $M$, we can directly construct the odometer state reached after $M$ increments (and vice versa), without having to perform the increments one by one.

This direct mapping allows us to compute the $M^{th}$ permutation, Cartesian product, or subset directly (without having to perform $M$ increments). This would be useful to parallelize the generation and processing of those objects.
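The direct construction is just mixed-radix conversion; a sketch (assuming, as in the C++ class above, that position 0 is the rightmost, fastest-changing digit):

```python
def odometer_from_count(m, max_digits):
    # Mixed-radix decomposition of m; position 0 is the
    # rightmost (fastest-changing) digit.
    digits = []
    for base in max_digits:
        digits.append(m % base)
        m //= base
    return digits

def count_from_odometer(digits, max_digits):
    # Inverse of the above: fold the digits back into a count,
    # most significant (leftmost) position first.
    m = 0
    for d, base in zip(reversed(digits), reversed(max_digits)):
        m = m * base + d
    return m
```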


Friday, September 12, 2014

Chain of Natural Number Sets


[A puzzle which requires knowing what countable/uncountable mean]

Let $\mathbb{N}$ be the set of naturals: $\{1,2,\dots\}$.

Suppose $F$ is a subset of the power-set of $\mathbb{N}$ (i.e. $F$ is a set of subsets of $\mathbb{N}$), such that for any two distinct sets $A, B \in F$, either $A \subset B$ or $B \subset A$.

Basically $F$ is a chain of sets, each containing the previous one.

Is there an $F$ which is uncountable?


[Solution]

Wednesday, September 10, 2014

The Location of the Club Ten

[A simple hand, but nice enough to write about]

Playing a matchpoint event, red vs white, you are South and end up declaring a contract of 4S (East was Dealer).

West leads the HK (either from HKQ, or singleton).

This is what you see:



[The bidding will probably not proceed that way in real life]

Without thinking too much, you win the HA and you play a trump to your Ace and both follow. You draw the last trump and now start thinking.

What is best play to maximize your tricks? (remember, this is matchpoints).


You have a loser in hearts and if West has the CK you have a club loser too, and won't make more than 11 tricks.

So suppose East has the CK.

Now you can try an endplay: eliminate diamonds and play a heart to West. West will have to lead a club. If West has the CT and leads low, then because of the shortage of entries to dummy, you have to play the 9 from dummy (forcing East to cover), and make 3 club tricks, getting you to 12 tricks in total.

This plan fails if East has the KT, because you cannot get back to dummy to trap the K.

There is a better play for 12 tricks.

After drawing trumps, you play three rounds of diamonds ending in the dummy. Now play the CJ. East has to cover this. You win the CA, and only then exit a heart.

West has to give you your twelfth trick. The location of the club Ten is irrelevant.

Monday, September 8, 2014

Iterators part one: Lazy shuffling with the Knuth Shuffle

[This is part one in a series about iterators.]

tl;dr: Generate a random permutation of $1,2,\dots,n$ lazily. Skip to the section titled Lazy Shuffle if you already know how the Knuth Shuffle works.

Introduction

Suppose you are working on something where you need to create a random permutation of $1,2,\dots, n$ and process the random permutation one element at a time, potentially stopping the processing long before you run out of the n elements. How many elements you need to process depends on the elements you see (i.e. you don't know it beforehand).

For example, n could be a billion, and you process the first 1000 elements in some cases, 100 in some etc.

A well-known way to create a random permutation/shuffle is the Knuth Shuffle. In the Knuth shuffle, you are given an array, and the algorithm creates a random, in-place permutation of the array in $O(n)$ time.

The idea of the Knuth shuffle is actually quite simple.

An explanation of Knuth Shuffle


The array consists of two portions, the shuffled portion and the yet to be shuffled portion. Initially the whole array is yet to be shuffled.

The shuffled portion grows from the right to the left (i.e. initially empty, then $(A[n-1])$, then $(A[n-1], A[n-2])$, etc.). At each step you pick a random element from the yet-to-be-shuffled portion and move it to the position that grows the shuffled portion by one. The element displaced from that position moves into the spot vacated by the element you just selected.
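The in-place shuffle described above might be sketched as follows (using Python's random module):

```python
import random

def knuth_shuffle(a):
    # The shuffled portion grows from the right: positions
    # i+1..n-1 are already shuffled. Pick a random element
    # from the yet-to-be-shuffled prefix and swap it in.
    for i in range(len(a) - 1, 0, -1):
        j = random.randrange(i + 1)
        a[i], a[j] = a[j], a[i]
    return a
```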


Lazy Shuffle


We could apply the Knuth shuffle to our problem, by creating an array $A$ of size n, such that $A[i] = i$, performing the Knuth shuffle on it, and process the elements in the order $A[0], A[1], \dots$.

Since n could be a billion, the time and space costs of this could be wasteful.

A practical approach would be to repeatedly generate a random number from $1$ to $n$ and check if you have generated that number before. If you have seen it before, you regenerate. This is a big improvement, but still wasteful, with the potential to run for a long time.

We can actually make the Knuth Shuffle work for us in a lazy fashion, without having to pay the $O(n)$ space or time costs.

Assuming we need to process only $m$ elements ($m$ unknown to us), there is a way to make only $m$ random number calls, and use $O(m)$ space, and run in $O(m)$ expected time.

We can achieve this by simulating the array in the Knuth shuffle by a hashtable.

[We can also use a sparse array implementation of the array, which will use $\Theta(n)$ space, but the running time will still be $O(m)$]

If we are simulating the array $A$, then the key will be $i$, and the value will be $A[i]$. The hashtable will only contain entries for elements that have been touched (read or written to). The first time we try to read $A[i]$, the hashtable will create an entry for it, setting the value to be $i$.

This allows us to provide the has_next, next style iterator for the shuffled elements.

Here is what the code might look like (C++).

#include <unordered_map>
#include <random>  // Assume exists with some API.

// Sample usage:
// LazyShuffle ls(1000000);
// while (ls.has_next()) {
//     int v = ls.next();
//     Status s = process(v);
//     if (s.Done()) break;
// }
class LazyShuffle {
public:
    // generates a random permutation of 0 to n-1.
    LazyShuffle(int n)
        : _to_shuffle(n) {
    }

    bool has_next() {
        return _to_shuffle > 0;
    }

    int next() {
        // Do the Knuth shuffle here.
        // [yet to be shuffled] [shuffled]

        // Generate a random number from
        // 0 to _to_shuffle-1;
        _to_shuffle--;
        int selected = _random.generate(0, _to_shuffle);

        // Now swap this with element which 
        // grows the shuffled part
        swap(selected, _to_shuffle);

        return get(_to_shuffle);
    }

private:
    void set(int idx, int value) {
        _array[idx] = value;
    }

    int get(int idx) {
        if (_array.find(idx) == _array.end()) {
            _array[idx] = idx;
        }
        return _array[idx];
    } 

    void swap(int idx1, int idx2) {
        int val1 = get(idx1);
        int val2 = get(idx2);
        set(idx1, val2);
        set(idx2, val1);
    }

    // std::unordered_map is a hashtable with
    // expected O(1) time operations.
    std::unordered_map<int, int> _array; 

    // The number of elements not shuffled.
    int _to_shuffle;

    // Assume exists
    random::Generator _random;
};


Solution to the sum to fifteen two player game

This is a solution to the sum to fifteen two player game puzzle.

A short description of the puzzle:

Two players alternately pick numbers from $1,2,3,\dots, 9$. The first one to pick some three numbers which sum to 15 is the winner. Does anyone have a winning strategy?

Solution


This puzzle is basically two people playing tic-tac-toe!

Look at this magic square:

$$\begin{array}{|c|c|c|} \hline
4&9&2\\ \hline
3&5&7\\ \hline
8&1&6\\ \hline
\end{array}$$

Each row, column and diagonal sum to 15.

Also notice that all the even numbers are at the corners.

If you pick three numbers, only one (or three) of which appear at a corner, then the sum is even and cannot be 15.

If you pick three numbers, none or two of which are at the corners, then a quick case check shows that to make the sum 15 you have to pick a whole row, column or diagonal.

So, the sum to fifteen puzzle is basically playing tic-tac-toe, which has no winning strategy for either player.
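The correspondence is easy to confirm by brute force; a quick check (using the standard Lo Shu magic square, which the referenced image may show up to rotation or reflection, without affecting the result):

```python
from itertools import combinations

# The eight lines of the magic square: rows, columns, diagonals.
square = [[4, 9, 2], [3, 5, 7], [8, 1, 6]]
lines = [set(row) for row in square]
lines += [{square[r][c] for r in range(3)} for c in range(3)]
lines += [{square[i][i] for i in range(3)},
          {square[i][2 - i] for i in range(3)}]

# All triples from 1..9 summing to 15: exactly the eight lines.
triples = [set(t) for t in combinations(range(1, 10), 3) if sum(t) == 15]
assert len(triples) == 8
assert all(t in lines for t in triples)
```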

Sunday, September 7, 2014

Solution to the warden and 100 prisoners puzzle

Here is a solution to the warden puzzle from a few days ago.

A short description of the problem:

There are 100 prisoners, each with a number from 1 to 100 painted on their heads. Each can see the other 99 painted numbers but not his own, and has to guess his own number based only on the other 99. The goal is to get at least one correct guess.

Solution

The prisoners choose a numbering for themselves: $1, 2, \dots, 100$ (say, ordered by when they got to prison). Suppose the numbers painted on their heads are $h_1, h_2, \dots, h_{100}$.

The solution is based on the following observation:

There is some $1 \le j \le 100$ such that $$h_1 + h_2 + \dots + h_{100} \equiv j \pmod{100}$$ i.e. there is some prisoner (let's call him 'the boss') such that the sum of the painted numbers leaves the same remainder (when divided by 100) as the number chosen by the prisoners for 'the boss', based on the imprisonment date.

Now the strategy is simple: each prisoner assumes he is 'the boss' and makes his guess by computing the sum of the other 99 (modulo 100, resulting in a number between 0 and 99) and subtracting that from his number based on the imprisonment date. If the result is zero or negative, he adds 100 to it, and that is his guess.

At least one will make the correct guess.
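A simulation of the strategy (make_guesses is our hypothetical helper, not from the original post) shows that the boss always guesses correctly:

```python
import random

def make_guesses(heads):
    # Prisoner j (1-based, numbered by imprisonment order)
    # assumes the total of all painted numbers is congruent
    # to j modulo 100, and solves for his own number.
    n = len(heads)
    total = sum(heads)
    guesses = []
    for j in range(1, n + 1):
        others = total - heads[j - 1]
        g = (j - others) % n
        guesses.append(g if g != 0 else n)  # map 0 back to n
    return guesses

random.seed(7)
heads = [random.randint(1, 100) for _ in range(100)]
assert any(g == h for g, h in zip(make_guesses(heads), heads))
```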

Saturday, September 6, 2014

Finding the heavy coin puzzle

[This is a nice puzzle from a Russian mathematics contest for 8th graders]

You have 100 coins. 96 of them have the same weight. The remaining 4 are lighter than the other 96, and those 4 have the same weight.

You are given a pan balance which has two pans, and you can compare if the objects in the left pan are heavier/lighter/equal to the objects in the right pan.

You are allowed to use this balance no more than 2 times.

Can you find a heavy coin?

[Solution]

Wednesday, September 3, 2014

A surprising property of power strings

[This came about trying to come up with a linear time algorithm to determine if a string is a power string]

Definition of power string.

A string $x$ is called a power string, if and only if there is some string $z$, such that $$x = z^m\quad \text{and}\quad m \gt 1$$

In other words, $x$ is $z$ concatenated to itself $m$ times.

For example, $nananana$ is the power string $(na)^4$, while  $nanananananabatman$ is not a power string.

[If you want to try finding a linear time algorithm for the detection of power strings, please stop reading]

The surprising property of power string is given by the following:

Proposition: Power strings are the only strings which are a cyclic shift of themselves.

We ignore the trivial shift (by the full length of the string, equivalently by 0), which rotates every string into itself.

That power strings are cyclic shifts of themselves is easy to see. We show two proofs of the converse here.

Proof 1)

[Even though this proof seems verbose, it is not really difficult]

Suppose \(x\) is a string of length \(n\), over the alphabet \(A\), which is a cyclic shift of itself. You can basically treat \(x\) as a function \(f:\{0,1,2,\dots, n-1\}\to A\).

If \(x\) is a cyclic shift of itself, then for some \(1 \le k \le n-1\), \(f\) satisfies $$f(i+k \mod n) = f(i), \forall i \in \{0,1,2,\dots, n-1\}$$
In order to show that \(x\) is a power string, we need to show that there is a \(p\) such that \(p\) divides \(n\) and \(f(i+p \mod n) = f(i), \forall i \in \{0,1,2,\dots, n-1\}\)

Now if \(n = qk - k_1\) where \(q \gt 1\) and \(0 \le k_1 \lt k\), then we must have that

$$ f(i) = f(i + k \mod n) = f(i+ qk \mod n)$$
$$ = f(i + n + k_1 \mod n) = f(i+ k_1 \mod n)$$
Repeating the argument with \(k_1\) in place of \(k\), we get a sequence \(k = k_0 \gt k_1 \gt k_2 \gt \dots\) such that $$f(i+k_j \mod n) = f(i), \forall i \in \{0,1,2,\dots, n-1\}$$
Since the \(k_j\) form a strictly decreasing sequence of non-negative integers, we must reach \(0\) at some point, say \(k_s = 0\). Then we can choose \(p = k_{s-1}\): from \(n = q k_{s-1} - k_s = q k_{s-1}\), \(p\) indeed divides \(n\).

Proof 2)

We use the Lyndon-Schützenberger theorem, which says that two strings \(x,y\) commute (\(xy = yx\)) iff they are powers of a common string, i.e. \(x = z^r\) and \(y = z^s\) for some string \(z\).

If \(x\) is a non-trivial cyclic shift of itself, then \(x = uv\) (concatenation of non-empty strings \(u\) and \(v\)) such that \(uv = vu\). By the theorem, \(u = z^r\) and \(v = z^s\) for some \(z\), hence \(x = z^{r+s}\) with \(r+s \ge 2\), and \(x\) is a power string.

Linear Time Algorithm

This property allows you to determine if a string is a power string in \(O(n)\) time, using any worst-case linear time string matching algorithm such as Knuth-Morris-Pratt (plain Boyer-Moore is not linear in the worst case). All you need to do is check if \(x\) is a sub-string of \(xx\) with the first and last letters removed from \(xx\).
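For example, a short Python sketch of the resulting test (using the built-in `str.find` for brevity; it does not guarantee a linear worst case, so substitute KMP if the bound matters):

```python
def is_power_string(x: str) -> bool:
    """x is a power string iff x occurs inside (x + x) with the first and
    last characters removed, i.e. x is a non-trivial cyclic shift of itself."""
    if len(x) < 2:
        return False
    return (x + x)[1:-1].find(x) != -1

# The examples from the post:
assert is_power_string("nananana")               # (na)^4
assert not is_power_string("nanananananabatman") # not a power string
```

Removing the first and last characters of \(xx\) rules out the two trivial occurrences of \(x\), so any match found corresponds to a non-trivial cyclic shift.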

Amazing play by Milton Rosenberg

[This hand was given to me by Rajendra Gokhale a few years back, who saw it in the sports magazine Sportsweek in India. I was awed by it then, and still am]

You are South in 1NT (playing Rubber bridge).

West leads the HQ, which you duck, and continues with the HJ and a low heart (you win the third round).




If clubs are 3-2, ducking a club should get you to 7 tricks via 2 spades, 1 heart and 4 clubs.

If clubs are not 3-2, you probably have not much hope.

So duck a club and you are done, right?

No! Milton Rosenberg made the key play of cashing the spade Ace before ducking a club.

If you duck a club without cashing the spade Ace, you get squeezed on the fourth heart.

If you discard a spade from hand, opponents can switch to a club and blockage in spade and club suits will leave you one trick short.

If you discard a diamond, you might end up losing a bunch of diamond tricks.

You can't discard a club.

But, if you play the spade Ace before ducking a club, you can easily throw the small spade on the fourth heart. Opponents can cash two more diamond tricks, but cannot prevent you from taking your seven tricks. Brilliant play made at the table by Milton Rosenberg (during the 1980 Vanderbilt).

[Apparently, this has also appeared in one of Kelsey's book later]

Tuesday, September 2, 2014

Summing up to fifteen, two player game puzzle

[Heard this somewhere, but don't recollect the source]

Alice and Bob (no, this is not a crypto puzzle) are playing a game.

They have 9 coins, each with a unique integer from 1 to 9 written (i.e. all the numbers 1,2,...,9 appear).

They take turns picking coins, picking exactly one coin at their turn, and add the picked coin to their respective collection.

If at any time, any player has three unique coins (from their collection) which sum to 15, that player wins.

Here is an example game:

Alice picks 1
Bob picks 9
Alice picks 8
Bob picks 6
Alice picks 2

At this point, Alice's collection has 1, 2, 8 and Bob's collection has 6,9, and Bob hasn't won because we need three coins to sum to 15.

Alice always goes first.

Does anyone have a winning strategy?

[This has a pretty neat solution.]

Apple or Pecan?


Monday, September 1, 2014

100 prisoners and a warden puzzle

[You have probably heard this before, but this is one of those puzzles which seems impossible at first, and has a simple solution, not requiring anything more than 10th grade mathematics]

A mathematically minded warden with the power to release his prisoners decides to play a game with them.

He tells them:

"Tomorrow, I will assemble all 100 of you in the common room at 10AM. Prior to coming into the common room, each of you, separately in your cells, will have an integer between 1 and 100 (including 1 and 100) painted on your head. The numbers can repeat (e.g. all of you could have the number 27 painted on your heads).

In the common room, you will be able to see the numbers on the other 99 prisoners, but not yours. There will be no communication (direct or indirect) allowed between the prisoners.

Your task is to try and guess your own number solely based on the other 99 numbers you see and no other information. Each of you will have a guard on you at all times, so if you try any funny business, it will be isolation for all of you. You will all, at exactly 10:15AM, whisper your guess into your guard's ear.

Tonight, you can discuss any strategy you like. If any single one of you is able to guess their number correctly, all 100 of you will be released.

If you guess correctly, I will require you to tell me your strategy. You will be released only if I find it mathematically valid."

What should the prisoners do?

[Solution]

Tensor's Test

In the 1990s, there was a tradition among newly joined IITians from Hyderabad of conducting a pre-IIT-JEE examination for the next batch, called the Tensor's test (is it still going on?). IIT-JEE = Joint Entrance Examination for admission to the IITs.

I had the opportunity to conduct the test (with Amit, Rajasekhar and Ramana).

Here are a few math questions from that test.

1) \(f:\mathbb{R}\to\mathbb{R}\) is a continuous function such that \(f(f(x)) = x,\ \forall x \in \mathbb{R}\). Show that, for some \(c \in \mathbb{R}, f(c) = c\).

2) Find all natural numbers \(x, y, x \gt y\), such that \(x^y = y^x\). Hint (given as part of the test): What is the maximum value of \(f(x) = x^{1/x}\)?

3) \(f_n\) is a sequence such that \(f_1 \gt 0\) and $$3f_{n+1} = 2f_n + \frac{A}{f_n^2}$$ for some constant \(A \gt 0\) and all \(n \ge 1\). Show that \(f_{n+1} \le f_n,\ \forall n \gt 1\).

Problem 3 was a particular favourite of mine (at that time) as I had discovered a way to compute \(n^{th}\)-roots (the above sequence converges to \(\sqrt[3]{A}\)), with a nice (in my opinion :-)) elementary proof that the sequence is bounded and monotonic.

Problem 3 basically asks for the monotonicity part of that proof. Only later did I realize that this was just the Newton-Raphson method.
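As an illustration, here is a sketch of the iteration in Python (the helper name and the iteration count are ours); starting from any positive guess, it converges to \(\sqrt[3]{A}\):

```python
def cube_root(A: float, x0: float = 1.0, iters: int = 60) -> float:
    """Iterate 3*f_{n+1} = 2*f_n + A / f_n**2, which is Newton's method
    applied to f(x) = x**3 - A, starting from a positive guess x0."""
    x = x0
    for _ in range(iters):
        x = (2 * x + A / x ** 2) / 3
    return x

# Converges to the cube root: cube_root(27.0) is very close to 3.0.
assert abs(cube_root(27.0) - 3.0) < 1e-9
```

With a positive start and \(A \gt 0\), every iterate stays positive, and from the second term onward the sequence decreases monotonically toward \(\sqrt[3]{A}\), exactly as Problem 3 asserts.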

[If you are interested, the solutions to the above three problems are here: http://ruffnsluff.blogspot.com/p/tensors-test-solutions.html]

Take all the finesses

This is a hand similar in theme to a hand constructed by Paul Lukacs.

Playing Rubber Bridge, you are South and reach the contract of 6S. West leads the HJ, and you see.



You win the HQ, and play the SA, to which RHO discards.

There are three finesse chances: diamond finesse, and the double finesse in clubs.

[You cannot eliminate the diamonds for an endplay]

If you draw trumps, eliminate hearts, and then play a club to the T (or Q), West can put you to a guess by playing back a diamond: you have to guess which finesse to take next (diamond, or play diamond Ace and play club to Q).

Is there a way to take all your finesses? You might want to stop reading if you want to think about it.

The answer is, yes it is possible (otherwise there is nothing to write about, right?)




You can take all your chances by drawing trump, cashing hearts throwing a diamond, and the key play: cash the diamond Ace, before playing a club to the T.

Cashing the diamond Ace allows you to take the diamond finesse risk free: if West returns a low diamond, you put up the Q, and can ruff if it gets covered, allowing you to take the club finesse next.

If West returns anything other than a diamond, you won't need any more finesses.