Monday, October 26, 2020

An exciting slam

This post mainly deals with bidding, which is rare for this blog.

All white, IMPS. You hold AKJTx, AQx, AJ, KJx


Solid partner opens 4H in first seat and RHO passes. You decide slam is odds on, and trot out keycard blackwood.

Partner shows one keycard. You decide to bid 6H and LHO doubles (lead directing). You take this out to 6NT and LHO doubles again.

Will you redouble?

You must!

You expect this partner to have the HK and LHO to have the CA. LHO is unlikely to have the HK (because the double of 6NT would be risky).

If partner has the SQ and HK, 6NT is cold. If partner does not have the SQ, then RHO is very likely to have it! Why? LHO's double of 6H is likely based on a spade void. So if partner has a doubleton spade, 6NT is cold.

So assume partner has a singleton spade. If LHO has DKQ and CA, then you go down 1 at most.

If partner has the CQ and the diamond K, Q are split, then on the expected non-diamond lead you can set up 5 black-suit tricks.

If partner has DQ and LHO has DK and CA (likely based on double of 6NT), then you have a strip squeeze endplay against LHO.

It is very unlikely you will go down more than 1, so you must redouble!

At the table 6NT was redoubled and made when partner had a doubleton spade and HK and LHO indeed had a spade void. 


Our teammates were surprised to win 11 IMPS after coming back with -1100 in 6CX-5!

It was an exciting slam to bid as you played it during the bidding itself (so I kind of lied that this is bidding only :-)).

Friday, September 4, 2020

7-11 polynomial

For what positive integers $n$ is the polynomial

$$n^{11} + n^7 + 1$$

a prime number?


Scroll down for a solution.


Note that if $w \neq 1$ is a cube root of unity, then 

$$w^{11} +w^7 + 1 = w^2 + w + 1 = 0$$

Thus $x^{11} +x^7 + 1$ is divisible by $x^2 + x + 1$ (both primitive cube roots of unity are roots), and for $n \gt 1$ we have $n^{11} + n^7  + 1 > n^2 + n + 1 > 1$, so the value is composite.

Thus $n=1$, which gives the prime $3$, is the only positive integer for which $n^{11} +  n^7 + 1$ is a prime.
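A few lines of Python (not part of the original argument) confirm the divisibility numerically:

```python
# n^2 + n + 1 should divide n^11 + n^7 + 1 for every positive n,
# since both primitive cube roots of unity are roots of x^11 + x^7 + 1.
for n in range(1, 200):
  assert (n**11 + n**7 + 1) % (n**2 + n + 1) == 0

value_at_1 = 1**11 + 1**7 + 1  # equals 3, which is prime
```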

Friday, July 24, 2020

AKJT98 opposite 32


Your trump suit is AKJT98 opposite 32 in the dummy. You need to make all tricks. How will you play?

In isolation, the percentage play is to take two finesses (without cashing the A or K first). This caters to Qxxx with RHO, which is more likely than four small cards with RHO.


If it were AKJT9 opposite xxx, then the correct play is to cash an honour first and then take the finesses. This allows you to also cater to singleton Q offside.

Now consider the following hand:


IMPS
None 
 Dummy
♠ 32
♥ KJ87
♦ AJ
♣ KQJT9

  


 You
♠ AKJT98
♥ 65
♦ KQ2
♣ A2




You are in 6S. Yes, the bidding may be relevant, but assume it is forgotten and all we remember is that the opponents never bid.

LHO leads the H2 (3rd or 4th best), you put in the J and RHO wins the A and shifts to a small spade.



How will you play?






You need to bring the trump suit in with no losers. If we looked at the suit in isolation, we would just take two finesses.

Could we potentially cater to a singleton Q with LHO without losing the option of picking up Qxxx with RHO?


Suppose we did the following (You can follow this play more easily on BBO handviewer):

Go up with the SA (assume LHO follows low spade).

Play heart to K and take the spade finesse. Assume LHO shows out. So RHO started with Qxxx of spades.

Now we need a trump coup against RHO to pick up the Q and for that we need two ruffs in hand and then end up in dummy to play through RHO.

Since RHO could potentially have a doubleton club and throw one on the last heart:

Now play CA, CK and ruff a heart (we expect RHO to follow).

Diamond to the J and ruff another heart (RHO may or may not follow).

Now a diamond to dummy and play the good clubs through RHO (discard diamond if not ruffed).

This line will go down if RHO has Qxxx and either has a singleton/void club, or has three hearts and a doubleton diamond and throws a diamond on the fourth heart. The second case is probably unlikely as that would give RHO a 4=3=2=4 hand, and LHO with a singleton club might have led it.

So is this trump coup line better than taking two spade finesses?

No!

The likelihood of a club void/singleton with RHO outweighs a specific suit break in spades! The chances of a singleton or void club with RHO are ~8%, and this only goes up once we take the known spade break into account.

The specific spade break we are catering to, singleton Q with LHO (Q vs xxxx), is around 2.85%.

[I used Richard Pavlicek's suit break calculator, but might have made a mistake]

If we changed the hands and lead, for example:


IMPS
None 
 Dummy
♠ AKJT98
♥ 65
♦ KQ2
♣ A2

   


 You
♠ 32
♥ KJ87
♦ AJ
♣ KQJT9

W     N     E     S
                  1NT
P     4H*   P     4S
P     4NT   P     5C
P     6S    All pass

(* transfer)



You are in 6S. LHO leads the H2, which RHO wins with the A and continues a heart.

Now it might be right to play for the trump coup, because LHO might have led a singleton club or RHO might have shifted to a diamond (trying to give partner a ruff looking at a 6+ card D suit).

I haven't done the math though.



Sunday, July 19, 2020

From fair to irrational and back

This article is kind of related to the previous post about random numbers.

The main questions we want to answer:

Can we simulate a biased coin with a fair coin? What about the reverse?

We are more interested in the case where the biased coin needs to have an irrational probability.

We will consider the classic and a not so classic approach.

(We will also explicitly look at $\frac{1}{\sqrt{2}}$ and $\frac{1}{e}$ in this post).

From fair to irrational

So suppose we have a random bit generator which generates bit 0 (and 1) with probability $\frac{1}{2}$. Also assume that each call takes constant time.

Can we use this to simulate a bit generator that generates bit 0 with probability $p$ where $p$ is some irrational in $(0, 1)$? 

One condition we will require is that the biased bit generator will run in expected constant time, without that it is not of much use.

For motivation for the method: Say we had a way to generate a real number $r$ in $(0, 1)$ and compare that against $p$, we could then have an easy algorithm to return 0 when $r \lt p$ and return 1 when $r \gt p$. This will return $0$ with probability $p$.

The trick is to have a representation of $p$ and that of $r$ which can be compared while we are in the process of generating $r$. The most obvious representation that comes to mind is the base-2 representation!

Base-2 generator


So say we had a base-2 representation of $p = 0.d_1 d_2 d_3 \dots $

To generate $r = 0.r_1 r_2 r_3 \dots $ we can randomly generate the bits $r_1, r_2, \dots$ and compare against $d_1, d_2, \dots$ as we go. With probability one there will be some finite $k$ where $r_k \neq d_k$ for the first time. We can stop and return bit 0 or bit 1 depending on whether $r_k \gt d_k$ (and so $r \gt p$) or $r_k \lt d_k$ (and so $r \lt p$).

This can be written in code as follows:


# Given bias_bitstream (the base-2 representation of p), generates a
# random bit which is 0 with probability p.
def random_bit_base2(bias_bitstream, fair_random_bit):
  # We generate a random number bit by bit and compare against p:
  # if it is smaller we return 0, else we return 1.
  for bit in bias_bitstream():
    fair_bit = fair_random_bit()

    # The number represented by the fair_bit sequence differs from p here!
    if fair_bit != bit:

      # This is returning 0 exactly when the number formed by the
      # fair_bit sequence < the number formed by bias_bitstream,
      # else returning 1.
      return fair_bit
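A quick way to convince yourself the generator works is to feed it a bias whose digits are easy to produce; here a rational bias $\frac{1}{3} = 0.010101\dots$ in base-2 (the generator is restated so the snippet runs standalone):

```python
import itertools
import random

def random_bit_base2(bias_bitstream, fair_random_bit):
  # Same generator as above, restated for a self-contained demo.
  for bit in bias_bitstream():
    fair_bit = fair_random_bit()
    if fair_bit != bit:
      return fair_bit

def third_bits():
  # base-2 digits of 1/3 = 0.010101...
  return itertools.cycle([0, 1])

rng = random.Random(0)
samples = [random_bit_base2(third_bits, lambda: rng.getrandbits(1))
           for _ in range(100_000)]
zero_freq = samples.count(0) / len(samples)
```

With 100,000 samples, zero_freq should land close to 1/3.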



The probability of generating a 0 is

$$\frac{d_1}{2} + \frac{1}{2}\times\frac{d_2}{2} + \dots = p$$

The expected number of random number calls is $\sum P(X \gt n) = \sum \frac{1}{2^n} = 2$. [See this math.se post]

The problem with this approach is generating the base-2 representation of the required bias $p$, but for practical purposes (depending on the application) we can choose to only work with the first "few" (eg 128) digits of $p$. Very likely $p$ can be approximated via infinite series etc and the digits placed in a lookup table.

For certain $p$ it is pretty easy to do though. For eg:


Problem 1: Can you write a simple algorithm to lazily generate the base-2 representation of $\frac{1}{\sqrt{2}}$?

Hint: If $x^2 \lt 2^{2n+1} \lt (x+1)^2$ then we only need to consider $2x$ or $2x+1$ for $2^{2n+3}$.
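One possible answer to Problem 1, following the hint and using only integer arithmetic (a sketch; the name `inv_sqrt2_bits` is mine):

```python
from itertools import islice

def inv_sqrt2_bits():
  # Lazily yield the base-2 digits of 1/sqrt(2) = 0.1011010..._2.
  # Invariant: x = floor(sqrt(pow2)) where pow2 = 2^(2n+1); since
  # sqrt(2^(2n+1)) = 2^(n+1)/sqrt(2), the last bit of x is the next digit.
  x, pow2 = 1, 2
  while True:
    yield x & 1
    pow2 *= 4          # move to 2^(2n+3)
    x *= 2             # per the hint, the new floor is 2x or 2x + 1
    if (x + 1) ** 2 <= pow2:
      x += 1

bits = list(islice(inv_sqrt2_bits(), 8))  # first 8 digits
```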


This base-2 version generator is in fact pretty well known (Googling will give similar results here).

There are other approaches like base-n instead of base-2. Choosing the base $n$ will depend on the bias $p$. For eg, with the existence of spigot algorithms for $\pi$, we could use base-16 for say $p = \frac{\pi}{16}$.

We will now discuss the not-so-classic approach.

Alternative to base-n: factorial base


If we assume that instead of a fair bit generator, we had a random integer generator which given input $K$ returned one of $\{0, 1, \dots, K-1\}$ with probability $\frac{1}{K}$ then for constants like $\frac{1}{e}$ we could do something other than base-n.

We can use a representation which uses factorials. For a given $r \in (0, 1)$ there is a representation of $r$ as

$$ r = \frac{a_2}{2!} + \frac{a_3}{3!} + \dots  = \sum_{n=2}^{\infty} \frac{a_n}{n!}$$

And $a_m \in \{0, 1, \dots , m - 1\}$.

This representation (say $(a_2, a_3, \dots)$) is finite only when $r$ is rational (we also need $a_k \neq k-1$ infinitely often, but we can ignore that for our purposes).

For eg.

$$ e - 2 = \sum_{n=2}^{\infty} \frac{1}{n!} = (1, 1, \dots )$$

It can also be shown that to compare $r = (a_2, a_3, \dots) $ against $p = (b_2, b_3, \dots)$ it is enough to find the first $k$ where $a_k \neq b_k$ and compare the "digits".

To generate $a_k$ we just generate a random integer in $\{0, 1, \dots, k-1\}$ using the random integer generator we have.

The code would look something like this:


# Given digits in factorial base
# i.e num = \sum_{n=2}^{\infty} d_n/n!
# Generate a random bit with probability num.
def random_bit_base_factorial(digit_stream, fair_random_int):
  # We generate a random number and if < given number
  # return 0, else return 1.
  base = 2

  # We generate a random digit (0, ..., base - 1)
  # and then increment the base.
  for digit in digit_stream():
    fair_digit = fair_random_int(base)
    base = base + 1
    if fair_digit < digit:
      return 0
    if fair_digit > digit:
      return 1

Say we need the probability $p = \frac{b_2}{2!} + \frac{b_3}{3!} + \dots $.

The probability of returning 0 is

$$ \frac{b_2}{2} + \frac{1}{2} \times \frac{b_3}{3} + \frac{1}{2} \times \frac{1}{3} \times \frac{b_4}{4} + \dots = \frac{b_2}{2!} + \frac{b_3}{3!} + \dots = p$$

The expected number of fair random number calls is $\sum \frac{1}{n!} = e - 1 \lt 2$.

[It might look like this approach is better than base-2 because it converges faster and  uses fewer random number calls but we need to consider the implementation of the random integer generator, which might be using the random bit generator under the covers.]

This representation has the same problem as base-n: generating the digits of the required probability. For certain constants it is easy to do, though.

For eg:

Problem 2: Show that $$ \frac{1}{e} = \frac{2}{3!} + \frac{4}{5!} + \dots = \sum_{n=1}^{\infty} \frac{2n}{(2n+1)!}$$
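Taking Problem 2's expansion on faith, the factorial-base digits of $\frac{1}{e}$ are $(0, 2, 0, 4, 0, 6, \dots)$, which can be fed straight into the generator (restated here so the snippet is self-contained):

```python
import random

def random_bit_base_factorial(digit_stream, fair_random_int):
  # Same generator as above, restated for a self-contained demo.
  base = 2
  for digit in digit_stream():
    fair_digit = fair_random_int(base)
    base = base + 1
    if fair_digit < digit:
      return 0
    if fair_digit > digit:
      return 1

def inv_e_digits():
  # Factorial-base digits of 1/e: a_n = 0 for even n, n - 1 for odd n,
  # i.e. 1/e = 2/3! + 4/5! + 6/7! + ...
  n = 2
  while True:
    yield 0 if n % 2 == 0 else n - 1
    n += 1

rng = random.Random(0)
samples = [random_bit_base_factorial(inv_e_digits, rng.randrange)
           for _ in range(100_000)]
zero_freq = samples.count(0) / len(samples)
```

zero_freq should come out close to $\frac{1}{e} \approx 0.3679$.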


A note regarding the "actual" runtime (applies to both base-n and factorial)


We have conveniently ignored the time taken to generate both the digits of the bias $p$ and the random digit of the generated real number $r$ but that is not a huge problem.

In fact we can prove the following:

Problem 3: If the $n^{th}$ digit generation of the bias $p$ and random number $r$ takes polynomial time in $n$, show that the expected runtime of the biased random bit generator is $\mathcal{O}(1)$.

Hint: Use the generating functions and their derivatives.

So if we hardcode the first say 128 digits in a lookup table and then switch to generating the rest, we will get a very good practical algorithm.

From Irrational back to Fair


This is actually not too hard, since if we generate two bits, the probability of getting 0,1 is the same as the probability of getting 1,0. 

So the algorithm is simple: call the biased bit generator twice; return 0 if we get 0,1; return 1 if we get 1,0; continue if we get 0,0 or 1,1. This has finite expected runtime too.
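This pairing trick (von Neumann's extractor) in code, as a sketch with an arbitrary biased coin standing in for the irrational-bias generator:

```python
import random

def fair_bit(biased_bit):
  # Draw pairs until they differ; 0,1 and 1,0 are equally likely,
  # so the first bit of the unequal pair is fair.
  while True:
    b1, b2 = biased_bit(), biased_bit()
    if b1 != b2:
      return b1

rng = random.Random(0)
biased = lambda: 1 if rng.random() < 0.7 else 0  # a 70/30 coin
samples = [fair_bit(biased) for _ in range(100_000)]
zero_freq = samples.count(0) / len(samples)
```

The expected number of biased calls per fair bit is $\frac{1}{p(1-p)}$, finite whenever $0 \lt p \lt 1$.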



Wednesday, July 15, 2020

Using math as an interviewer.

One of my go to interview questions is

"Generate a random binary tree".

We are only interested in the structure of the tree, and the number of nodes will be < 1000.

I say that the ideal case is all trees are generated with the same probability (uniformly random), but explain that any algorithm where the probability of every tree is non-zero will do. Two trees are distinct if they are structurally different.

[Generating a random binary tree where each tree is equally likely is not an easy problem (definitely too hard for an interview, though I have had a couple of candidates who did this!) and if you want to try it here is an old post about it: https://ruffnsluff.blogspot.com/2015/03/generate-random-binary-tree.html]


I make sure to tell them I don't need uniform (all equal probability) because it is a hard problem.

In spite of the explicit mention that it is a hard problem and not what I am looking for, candidates come up with all sorts of algorithms and claim they are uniformly random.

As an interviewer you need to be able to quickly tell if the algorithm is truly generating uniformly random or not.

I use a "trick" which has worked with almost every candidate's solution so far.

All I need to do is look at the random number generator calls their algorithm is making.

Typically the probabilities they generate are of the form $\frac{1}{n}$ or $\frac{1}{2}$ or $\frac{1}{3}$ or some such limited set like that.

The number of trees on $n$ nodes is the Catalan number $ \frac{\binom{2n}{n}}{n+1}$, which tends to have many weird prime factors (see Bertrand's postulate). For eg, for $n=3$ we expect the probability of each tree to be $\frac{1}{5}$. For $n=4$ the probability of each tree needs to be $\frac{1}{14}$ (7 is the weird prime here).
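These numbers are easy to check; a minimal sketch:

```python
from math import comb

def catalan(n):
  # number of binary trees on n nodes
  return comb(2 * n, n) // (n + 1)

# catalan(3) = 5 (prime denominator 5), catalan(4) = 14 = 2 * 7
```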

No matter how the algorithm multiplies and adds these probabilities from the limited set together, it will fail to get the factor of the weird prime in the denominator and thus can never generate the probability we require! I don't have to walk through their complicated algorithm and compute a tree etc.

[Caveat: Here we are assuming that the algorithm always terminates in a finite number of steps, no matter how the random numbers are generated. Section about this added at the end.]

For eg, consider this algorithm to generate a random tree.

1. Add root to tree.
2. Pick an existing node in the tree randomly that has child space
  2.1 Randomly add a child node to it.
3. Continue till tree has n nodes.


Does this generate the trees uniformly?

The probabilities involved are $\frac{1}{k}$ where $k \le n$. No matter how you combine these you will not get the weird prime factor (the one between $n$ and $2n$) in the denominator. So this cannot possibly result in a uniform distribution.

Looking at the denominators of the random number calls like this has been pretty useful when trying to see if some algorithm/process generates the probabilities we need.


As another example, here is a typical flawed algorithm to shuffle an array randomly, so that each permutation is equally likely.


for (int i = 0; i < array.length; i++) {
  int r = random(array.length); // random position where ith card goes.
  swap(array, i, r); // Swap a[i] and a[r]
}

Why is this not uniform? The random number generator only generates probability $\frac{1}{n}$. The number of permutations is $n!$ and we can never achieve $\frac{1}{n!}$ no matter how we combine those $\frac{1}{n}$s (for $n \gt 2$).
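For contrast, the standard Fisher-Yates shuffle draws each position from a shrinking range, so the probabilities multiply out to exactly $\frac{1}{n!}$ (a sketch, not from the original post):

```python
import random
from collections import Counter

def fisher_yates(arr, rng):
  # Position i is filled from positions i..n-1: the probabilities are
  # 1/n, 1/(n-1), ..., 1, whose product is 1/n! (weird primes included).
  for i in range(len(arr) - 1):
    r = rng.randrange(i, len(arr))
    arr[i], arr[r] = arr[r], arr[i]
  return arr

rng = random.Random(0)
counts = Counter(tuple(fisher_yates(list("abc"), rng))
                 for _ in range(60_000))
```

Each of the 6 permutations should show up roughly 10,000 times.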

Why do we require the algorithm always terminate?


Consider the following

do {
  int bit1 = random_bit();
  int bit2 = random_bit();
  int result = bit1 + 2*bit2;
  if (result < 3) return result; //0, 1 or 2.
} while (true);

This generates a number in $\{0, 1, 2\}$ with probability $\frac{1}{3}$ each, even though the random number calls only generate probabilities of $\frac{1}{2}$ (the random bit calls).

Why does the denominator thing not work? Because we now have an infinite series for the probability calculation and that could give any result (even irrational numbers! See newer blog post.).

The probability of getting a $0$ is

$$\frac{1}{4} + \frac{1}{4^2} + \frac{1}{4^3} + \dots  = \frac{1}{3}$$

This corresponds to getting a 0, or a 3 (so the while loop continues) and then a 0, or 3, 3, 0 and so on.

Note that an algorithm that fails to halt only with probability zero is different from an algorithm that always halts!

Monday, June 29, 2020

Fractional parts of harmonic numbers

The $n^{th}$ harmonic number $H_n$ is defined as

$$H_n = \sum_{k=1}^{n} \dfrac{1}{k} = 1 + \dfrac{1}{2} + \dots + \dfrac{1}{n}$$

It is known that $H_n$ is never an integer except for $n=1$.

The problem in this post is to show that we can get arbitrarily close.

i.e.

Show that there is an ascending sequence of integers $n_1 \lt n_2 \lt n_3 \lt \dots $ such that

$$ \lim_{k \to \infty} \{H_{n_k}\} = 0$$

where $\{x\}$ is the fractional part of x. Eg, $\{H_2\} = \frac{1}{2}, \{H_3\} = \frac{5}{6}$
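The examples can be verified with exact rational arithmetic; a small sketch (the function names are mine):

```python
from fractions import Fraction

def harmonic(n):
  # H_n = 1 + 1/2 + ... + 1/n as an exact fraction
  return sum(Fraction(1, k) for k in range(1, n + 1))

def frac_part(x):
  # fractional part {x} of a Fraction x
  return x - (x.numerator // x.denominator)

# frac_part(harmonic(2)) = 1/2, frac_part(harmonic(3)) = 5/6
```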

In general it seems like a difficult problem to estimate the fractional parts of $H_n$. So if you got here by googling for information on that, sorry, this blog post won't be of much help.


[Solution]

Saturday, June 6, 2020

Achieving 1/lcm

Let $d_n$ be the least common multiple of $1,2,\dots, n$. For eg, $d_4 = 12, d_5 = 60, d_9 = 2520$.
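The examples are easy to verify (a quick sketch; `math.lcm` takes multiple arguments in Python 3.9+):

```python
from math import lcm

def d(n):
  # least common multiple of 1, 2, ..., n
  return lcm(*range(1, n + 1))

# d(4) = 12, d(5) = 60, d(9) = 2520
```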

Given $n \ge 1$, show that there exist integers $A_1, A_2, \dots, A_n$ (not necessarily positive) such that

$$ \sum_{k=1}^{n} \dfrac{A_k}{k} = \dfrac{1}{d_n}$$



[Solution]

Saturday, May 9, 2020

War General and Prisoners


A war general has captured N of you and is holding you prisoner.

He has two prison camps which are on different islands (say A and B) and those islands are connected to the main land by a flimsy bridge each.

Getting bored, the general decides to play a game with you.

"
You will all be given N distinct real numbers. Each of you can see the numbers of the other prisoners but not yours.

Once you see the numbers, you each have to independently come to me and tell which island you want to be in for the night (A or B).

Next morning I will start calling out two prisoners at a time starting with the smallest number and calling out in order. These two prisoners need to take the bridge of their island and walk towards the main land at the same time.

The bridges are flimsy, so if two prisoners walk the same bridge at the same time, the bridge will explode and so will the prison camps killing everyone who hasn't been freed yet.

If they take different bridges, they both go free and I will continue calling out two at a time.

You can decide upon a strategy before I assign numbers to you.
"

What is the maximum number of prisoners that can be guaranteed to be freed?


[Solution]

Wednesday, April 29, 2020

Digits in $2^n$

Let $a_n$ = number of odd digits in the base-$10$ expansion of $2^n$ and let $b_n$ = total number of digits in $2^n$.

For eg, $2^8 = 256$ and so $a_8 = 1$ ($5$ is the only odd digit) and $b_8 = 3$ (it is a 3 digit number).
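A small sketch (using the names $a$ and $b$ as in the post) to compute the quantities and eyeball claim A):

```python
def a(n):
  # number of odd digits in the base-10 expansion of 2^n
  return sum(int(c) % 2 for c in str(2 ** n))

def b(n):
  # total number of digits of 2^n
  return len(str(2 ** n))

partial = sum(a(n) / 2 ** n for n in range(1, 60))  # partial sum of A)
```

After ~60 terms, partial is already indistinguishable from $\frac{1}{9}$ in floating point.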

Three problems:

A) Show that $$\sum_{n=1}^{\infty} \dfrac{a_n}{2^n} = \dfrac{1}{9}$$

B) Show that $$S = \sum_{n=1}^{\infty} \dfrac{b_n}{2^n}$$ is an irrational number.

C) Show that $S$ as defined in B) above satisfies $$ S \gt \dfrac{1169}{1023}$$


[Solution]

Monday, February 17, 2020

Sum of reciprocal squares

There are two parts to the problem

1) Show that $\frac{\pi}{4} \gt \sqrt{2-\sqrt{2}}$

2) Show that $\sum_{n=2}^{\infty} \frac{1}{n^2} \gt \frac{3}{5}$

Incidentally, 2) implies 1) and so is the stronger result; 1) has an easier proof.
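Numerically, both inequalities are comfortable (a quick check, not a proof):

```python
import math

# sum_{n>=2} 1/n^2 = pi^2/6 - 1, truncated at 200,000 terms
tail = sum(1 / n**2 for n in range(2, 200_000))
# tail ~ 0.6449 > 3/5, and pi/4 ~ 0.7854 > sqrt(2 - sqrt(2)) ~ 0.7654
```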

[Solution]

Saturday, February 1, 2020

Lion and Lion Tamer

There is a circular cage with a lion and a lion tamer (assume point masses) somewhere inside. Both the lion and the tamer run at the same speed.

Can the lion catch the lion tamer? (In finite time).


Solution (Highlight to view):

This is a pretty hard problem and the surprising answer is no!

It seems like the lion should be able to catch the lion tamer by moving directly towards him, but the lion tamer can escape! This assumes they are moving points in the 2D plane. The lion can get arbitrarily close, but cannot coincide.


This problem was proposed by Richard Rado in the 1920s and was solved by Abram Besicovitch in the 1950s.

A solution appears in the book "A Mathematician's Miscellany" by Littlewood.

I don't have good links, so this might be a starting point: lion and man problem.