Friday, July 24, 2020

AKJT98 opposite 32


Your trump suit is AKJT98 opposite 32 in the dummy. You need to make all tricks. How will you play?

In isolation, the percentage play is to take two finesses (without cashing the A or K first). This caters to Qxxx with RHO, which is a higher-percentage holding than four small cards (xxxx) with RHO.


If it were AKJT9 opposite xxx, the correct play is to cash an honour first and then take the finesses. This also caters to a singleton Q offside.

Now consider the following hand:


IMPS
None 
 Dummy
♠ 32
♥ KJ87
♦ AJ
♣ KQJT9

  


 You
♠ AKJT98
♥ 65
♦ KQ2
♣ A2




You are in 6S. Yes, the bidding may be relevant, but assume it is forgotten; all we remember is that the opps never bid.

LHO leads the H2 (3rd or 4th best), you put in the J and RHO wins the A and shifts to a small spade.



How will you play?






You need to bring the trump suit in with no losers. If we looked at the suit in isolation, we would just take two finesses.

Could we potentially cater to a singleton Q with LHO without losing the option of picking up Qxxx with RHO?


Suppose we did the following (You can follow this play more easily on BBO handviewer):

Go up with the SA (assume LHO follows with a low spade).

Play a heart to the K and take the spade finesse. Assume LHO shows out, so RHO started with Qxxx of spades.

Now we need a trump coup against RHO to pick up the Q and for that we need two ruffs in hand and then end up in dummy to play through RHO.

Since RHO could potentially have a doubleton club and throw a club on the last heart:

Now play CA, CK and ruff a heart (we expect RHO to follow).

Diamond to the J and ruff another heart (RHO may or may not follow).

Now a diamond to dummy and play the good clubs through RHO (discarding a diamond from hand if RHO does not ruff).

This line will go down if RHO has Qxxx and either has a singleton/void club, or has three hearts and a doubleton diamond and throws a diamond on the fourth heart. The second case is probably unlikely, as that would give RHO a 4-3-2-4 hand, and LHO with a singleton club might have led it.

So is this trump coup line better than taking two spade finesses?

No!

The likelihood of a club void or singleton with RHO outweighs a specific suit break in spades! The chance of a singleton or void club with RHO is ~8%, and it only goes up once we take the known spade break into consideration.

The specific spade break we are catering to (singleton Q with LHO, xxxx with RHO) is around 2.83%.

[I used Richard Pavlicek's suit break calculator, but might have made a mistake]
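These a priori numbers are easy to check with a few lines of Python (a quick sketch using the hypergeometric split of the 26 unseen cards between the two defenders; it ignores any inferences from the bidding and play):

from math import comb

# A priori chance that RHO holds exactly k of the defenders' m cards
# in a suit: hypergeometric over the 26 unseen cards, 13 per defender.
def rho_split(m, k):
  return comb(m, k) * comb(26 - m, 13 - k) / comb(26, 13)

# Singleton or void club with RHO (the defenders hold 6 clubs).
print(rho_split(6, 0) + rho_split(6, 1))  # ~0.080

# RHO holds exactly Qxxx: RHO gets 4 of the 5 missing spades,
# with the Q among them (4 of the 5 four-card choices).
print(rho_split(5, 4) * 4 / 5)            # ~0.113

# Stiff Q with LHO, i.e. RHO holds xxxx (Q not among RHO's 4 cards).
print(rho_split(5, 4) * 1 / 5)            # ~0.028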

If we changed the hands and lead, for example:


IMPS
None 
 Dummy
♠ AKJT98
♥ 65
♦ KQ2
♣ A2

   


 You
♠ 32
♥ KJ87
♦ AJ
♣ KQJT9

W   N         E   S
                  1NT
P   4H (xfr)  P   4S
P   4NT       P   5C
P   6S        All Pass



You are in 6S. LHO leads the H2, which RHO wins with the A and continues a heart.

Now it might be right to play for the trump coup: with a singleton club, LHO might have led it, and with a 6+ card diamond suit, RHO might have shifted to a diamond (trying to give partner a ruff). Neither happened.

I haven't done the math though.



Sunday, July 19, 2020

From fair to irrational and back

This article is kind of related to the previous post about random numbers.

The main questions we want to answer:

Can we simulate a biased coin with a fair coin? What about the reverse?

We are more interested in the case where the biased coin needs to have an irrational probability.

We will consider the classic and a not so classic approach.

(We will also explicitly look at $\frac{1}{\sqrt{2}}$ and $\frac{1}{e}$ in this post).

From fair to irrational

So suppose we have a random bit generator which generates bit 0 (and 1) with probability $\frac{1}{2}$. Also assume that each call takes constant time.

Can we use this to simulate a bit generator that generates bit 0 with probability $p$ where $p$ is some irrational in $(0, 1)$? 

One condition we will require is that the biased bit generator runs in expected constant time; without that, it is not of much use.

As motivation for the method: say we had a way to generate a uniformly random real number $r$ in $(0, 1)$ and compare it against $p$. We could then have an easy algorithm: return 0 when $r \lt p$ and return 1 when $r \gt p$. This returns $0$ with probability exactly $p$.

The trick is to have a representation of $p$ and that of $r$ which can be compared while we are in the process of generating $r$. The most obvious representation that comes to mind is the base-2 representation!

Base-2 generator


So say we had a base-2 representation of $p = 0.d_1 d_2 d_3 \dots $

To generate $r = 0.r_1 r_2 r_3 \dots $ we can randomly generate the bits $r_1, r_2, \dots$ and compare against $d_1, d_2, \dots$ while we are generating the $r_i$. With probability 1 there will be some finite $k$ where $r_k \neq d_k$ for the first time. We can stop and return bit 1 or bit 0 depending on whether $r_k \gt d_k$ (and so $r \gt p$) or $r_k \lt d_k$ (and so $r \lt p$).

This can be written in code as follows:


# Given a bitstream (the base-2 representation of the bias), generates a
# random bit: probability of 0 = the number represented by the bitstream.
def random_bit_base2(bias_bitstream, fair_random_bit):
  # We generate a random number and if < given number
  # return 0, else return 1.
  for bit in bias_bitstream():
    fair_bit = fair_random_bit()

    # the number represented by the fair_bit seq is different!
    if fair_bit != bit:

      # This basically is returning 0
      # if number formed by fair_bit seq < bitstream seq.
      # else returning 1.
      return fair_bit



The probability of generating a 0 is

$$\frac{d_1}{2} + \frac{1}{2}\times\frac{d_2}{2} + \dots = p$$

The expected number of calls to the fair bit generator is $\sum_{n \ge 0} P(X \gt n) = \sum_{n \ge 0} \frac{1}{2^n} = 2$, where $X$ is the number of calls. [See this math.se post]

The problem with this approach is generating the base-2 representation of the required bias $p$, but for practical purposes (depending on the application) we can choose to only work with the first "few" (eg 128) digits of $p$. Very likely $p$ can be approximated via infinite series etc and the digits placed in a lookup table.

For certain $p$ it is pretty easy to do though. For eg:


Problem 1: Can you write a simple algorithm to lazily generate the base-2 representation of $\frac{1}{\sqrt{2}}$?

Hint: If $x^2 \lt 2^{2n+1} \lt (x+1)^2$ then we only need to consider $2x$ or $2x+1$ for $2^{2n+3}$.
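If you want to peek at a solution, here is one way to implement the hint (a sketch; the invariant is that $x = \lfloor \sqrt{2^{2n+1}} \rfloor$, whose bits are exactly the leading bits of $\frac{1}{\sqrt{2}}$):

def inv_sqrt2_bits():
  # Lazily yields the base-2 digits of 1/sqrt(2) = 0.10110101000001...
  # Invariant: x = floor(sqrt(2^(2n+1))); yield its newest (lowest) bit.
  x, bound = 1, 2  # n = 0: 1^2 < 2 < 2^2
  while True:
    yield x & 1
    bound *= 4     # 2^(2n+1) -> 2^(2n+3)
    x *= 2         # per the hint, the new integer root is 2x or 2x + 1
    if (x + 1) ** 2 < bound:
      x += 1

# Plugging into the earlier generator gives 0 with probability 1/sqrt(2):
# random_bit_base2(inv_sqrt2_bits, lambda: random.getrandbits(1))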


This base-2 version generator is in fact pretty well known (Googling will give similar results here).

There are other approaches like base-n instead of base-2. Choosing the base $n$ will depend on the bias $p$. For eg, with the existence of spigot algorithms for $\pi$, we could use base-16 for say $p = \frac{\pi}{16}$.

We will now discuss the not-so-classic approach.

Alternative to base-n: factorial base


If we assume that instead of a fair bit generator, we had a random integer generator which given input $K$ returned one of $\{0, 1, \dots, K-1\}$ with probability $\frac{1}{K}$ then for constants like $\frac{1}{e}$ we could do something other than base-n.

We can use a representation which uses factorials. For a given $r \in (0, 1)$ there is a representation of $r$ as

$$ r = \frac{a_2}{2!} + \frac{a_3}{3!} + \dots  = \sum_{n=2}^{\infty} \frac{a_n}{n!}$$

And $a_m \in \{0, 1, \dots , m - 1\}$.

This representation (say $(a_2, a_3, \dots)$) is finite only when $r$ is rational (we also need $a_k \neq k-1$ infinitely often, but we can ignore that for our purposes).

For eg.

$$ e - 2 = \sum_{n=2}^{\infty} \frac{1}{n!} = (1, 1, \dots )$$

It can also be shown that to compare $r = (a_2, a_3, \dots) $ against $p = (b_2, b_3, \dots)$ it is enough to find the first $k$ where $a_k \neq b_k$ and compare the "digits".

To generate $a_k$ we just generate a random integer in $\{0, 1, \dots, k-1\}$ using the random integer generator we have.

The code would look something like this:


# Given digits in factorial base,
# i.e. num = \sum_{n=2}^{\infty} d_n/n!,
# generates a random bit that is 0 with probability num.
def random_bit_base_factorial(digit_stream, fair_random_int):
  # We generate a random number and if < given number
  # return 0, else return 1.
  base = 2

  # We generate a random digit (0, ..., base - 1)
  # and then increment the base.
  for digit in digit_stream():
    fair_digit = fair_random_int(base)
    base = base + 1
    if fair_digit < digit:
      return 0
    if fair_digit > digit:
      return 1

Say we need the probability $p = \frac{b_2}{2!} + \frac{b_3}{3!} + \dots $.

The probability of returning 0 is

$$ \frac{b_2}{2} + \frac{1}{2} \times \frac{b_3}{3} + \frac{1}{2} \times \frac{1}{3} \times \frac{b_4}{4} + \dots = \frac{b_2}{2!} + \frac{b_3}{3!} + \dots = p$$

The expected number of fair random number calls is $\sum_{n \ge 1} \frac{1}{n!} = e - 1 \lt 2$.

[It might look like this approach is better than base-2 because it converges faster and  uses fewer random number calls but we need to consider the implementation of the random integer generator, which might be using the random bit generator under the covers.]

This representation has the same problem as base-n: generating the digits of the required probability. For certain constants it is easy to do, though.

For eg:

Problem 2: Show that $$ \frac{1}{e} = \frac{2}{3!} + \frac{4}{5!} + \dots = \sum_{n=1}^{\infty} \frac{2n}{(2n+1)!}$$
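Assuming that identity, the factorial-base digits of $\frac{1}{e}$ are $a_m = m - 1$ for odd $m$ and $0$ for even $m$, and we can wire a digit stream into random_bit_base_factorial and sanity-check the bias empirically (a sketch):

import random

def inv_e_factorial_digits():
  # Factorial-base digits (a_2, a_3, a_4, ...) of 1/e. By Problem 2,
  # 1/e = sum 2n/(2n+1)!, so a_m = m - 1 for odd m, and 0 for even m.
  m = 2
  while True:
    yield m - 1 if m % 2 == 1 else 0
    m += 1

# Fraction of zeros should approach 1/e ~ 0.36788.
trials = 200000
zeros = sum(random_bit_base_factorial(inv_e_factorial_digits,
                                      random.randrange) == 0
            for _ in range(trials))
print(zeros / trials)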


A note regarding the "actual" runtime (applies to both base-n and factorial)


We have conveniently ignored the time taken to generate both the digits of the bias $p$ and the random digits of the generated real number $r$, but that is not a huge problem.

In fact we can prove the following:

Problem 3: If the $n^{th}$ digit generation of the bias $p$ and random number $r$ takes polynomial time in $n$, show that the expected runtime of the biased random bit generator is $\mathcal{O}(1)$.

Hint: Use the generating functions and their derivatives.

So if we hardcode the first say 128 digits in a lookup table and then switch to generating the rest, we will get a very good practical algorithm.

From Irrational back to Fair


This is actually not too hard: if we generate two bits from the biased generator, the probability of getting 0,1 is the same as the probability of getting 1,0.

So the algorithm is simple: call the biased bit generator twice. Return 0 if we get 0,1; return 1 if we get 1,0; try again if we get 0,0 or 1,1. This has finite expected runtime too.
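In code, this is the classic von Neumann trick (a sketch):

def fair_bit(biased_bit):
  # P(0 then 1) = p(1-p) = P(1 then 0), so conditioned on the two
  # draws differing, each order is equally likely.
  while True:
    a, b = biased_bit(), biased_bit()
    if a != b:
      return a  # (0,1) -> 0 and (1,0) -> 1, as described above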



Wednesday, July 15, 2020

Using math as an interviewer.

One of my go-to interview questions is

"Generate a random binary tree".

We are only interested in the structure of the tree, and the number of nodes will be < 1000.

I say that the ideal case is that all trees are generated with the same probability (uniformly random), but explain that any algorithm where every tree has a non-zero probability will do. Two trees are distinct if they are structurally different.

[Generating a random binary tree where each tree is equally likely is not an easy problem (definitely too hard for an interview, though I have had a couple of candidates who did it!) and if you want to try it, here is an old post about it: https://ruffnsluff.blogspot.com/2015/03/generate-random-binary-tree.html]


I make sure to tell them I don't need uniform (all equal probability) because it is a hard problem.

In spite of the explicit mention that it is a hard problem and not what I am looking for, candidates come up with all sorts of algorithms and claim they are uniformly random.

As an interviewer, you need to be able to tell quickly whether the algorithm is truly uniform or not.

I use a "trick" which has worked on almost every candidate's solution so far.

All I need to do is look at the random number generator calls their algorithm is making.

Typically the probabilities they generate are of the form $\frac{1}{n}$, $\frac{1}{2}$, $\frac{1}{3}$, or some similarly limited set.

The number of trees on $n$ nodes is the Catalan number $\frac{\binom{2n}{n}}{n+1}$, which tends to have "weird" prime factors (by Bertrand's postulate, there is always a prime between $n$ and $2n$). For eg, for $n=3$ we expect the probability of each tree to be $\frac{1}{5}$. For $n=4$ the probability of each tree needs to be $\frac{1}{14}$ (7 is the weird prime here).
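A couple of lines to see those numbers:

from math import comb

def catalan(n):
  return comb(2 * n, n) // (n + 1)

print([catalan(n) for n in range(1, 8)])  # 1, 2, 5, 14, 42, 132, 429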

No matter how the algorithm multiplies and adds these probabilities from the limited set, it will fail to get the weird prime factor into the denominator and thus can never produce the probability we require! I don't have to walk through their complicated algorithm and compute the probability of some specific tree.

[Caveat: Here we are assuming that the algorithm always terminates in a finite number of steps, no matter how the random numbers are generated. Section about this added at the end.]

For eg, consider this algorithm to generate a random tree (a runnable sketch follows the steps).

1. Add root to tree.
2. Pick an existing node in the tree randomly that has child space
  2.1 Randomly add a child node to it.
3. Continue till tree has n nodes.
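In runnable form (a sketch; the Node class and function names are just for illustration):

import random

class Node:
  def __init__(self):
    self.left = None
    self.right = None

def grow_random_tree(n):
  # The procedure above: pick a node with child space uniformly at
  # random and attach a new child on a random free side. Every shape
  # is reachable, but as argued below, not with equal probability.
  root = Node()
  has_space = [root]  # nodes with at least one free child slot
  for _ in range(n - 1):
    node = random.choice(has_space)
    free = [s for s in ("left", "right") if getattr(node, s) is None]
    child = Node()
    setattr(node, random.choice(free), child)
    if node.left is not None and node.right is not None:
      has_space.remove(node)
    has_space.append(child)
  return root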


Does this generate the trees uniformly?

The probabilities involved are $\frac{1}{k}$ where $k \le n$. No matter how you combine these, you will not get the weird prime factor (the one between $n$ and $2n$) in the denominator. So this cannot possibly result in a uniform distribution.

Looking at the denominators of the random number calls has been pretty useful when trying to see whether some algorithm or process generates the probabilities we need.


As another example, here is a typical flawed attempt to shuffle an array randomly so that each permutation is equally likely.


for (int i = 0; i < array.length; i++) {
  int r = random(array.length); // random position where ith card goes.
  swap(array, i, r); // Swap a[i] and a[r]
}

Why is this not uniform? The random number generator only generates probabilities of $\frac{1}{n}$, so every outcome has probability $\frac{k}{n^n}$ for some integer $k$. The number of permutations is $n!$, and we can never achieve $\frac{1}{n!}$ no matter how we combine those $\frac{1}{n}$s, because $n!$ does not divide $n^n$ for $n \gt 2$.
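For contrast, the standard fix (Fisher-Yates) draws position i's element from the remaining candidates only, so the probabilities multiply out to $\frac{1}{n} \times \frac{1}{n-1} \times \dots \times 1 = \frac{1}{n!}$, exactly what uniformity demands. A Python sketch:

import random

def fisher_yates(array):
  n = len(array)
  for i in range(n - 1):
    r = random.randrange(i, n)  # note: i..n-1, not 0..n-1
    array[i], array[r] = array[r], array[i]
  return array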

Why do we require that the algorithm always terminates?


Consider the following

do {
  int bit1 = random_bit();
  int bit2 = random_bit();
  int result = bit1 + 2*bit2;
  if (result < 3) return result; //0, 1 or 2.
} while (true);

This generates a number in $\{0, 1, 2\}$, each with probability $\frac{1}{3}$, even though the random number calls only generate probabilities of $\frac{1}{2}$ (the random bit calls).

Why does the denominator argument not work here? Because we now have an infinite series for the probability calculation, and an infinite series can sum to anything (even irrational numbers! See the newer blog post.).

The probability of getting a $0$ is

$$\frac{1}{4} + \frac{1}{4^2} + \frac{1}{4^3} + \dots  = \frac{1}{3}$$

This corresponds to getting a 0 immediately, or a 3 (so the while loop continues) and then a 0, or 3, 3, 0, and so on.

Note that an algorithm that fails to halt only with probability zero is different from an algorithm that always halts!