Professor Armstrong suggests the following procedure for generating a uniform random permutation:

$$\begin{aligned}
&\textsc{Permute-By-Cyclic}(A) \\
&\quad n = A.length \\
&\quad \text{let } B[1..n] \text{ be a new array} \\
&\quad \mathit{offset} = \textsc{Random}(1, n) \\
&\quad \textbf{for } i = 1 \textbf{ to } n \\
&\qquad \mathit{dest} = i + \mathit{offset} \\
&\qquad \textbf{if } \mathit{dest} > n \\
&\qquad\quad \mathit{dest} = \mathit{dest} - n \\
&\qquad B[\mathit{dest}] = A[i] \\
&\quad \textbf{return } B
\end{aligned}$$

Show that each element \(A[i]\) has a \(1/n\) probability of winding up in any particular position in \(B\). Then show that Professor Armstrong is mistaken by showing that the resulting permutation is not uniformly random.

This exercise illustrates an important principle: having each element equally likely to appear in each position is necessary but not sufficient for a uniform random permutation.

A

Consider a fixed element \(A[i]\) and a fixed position \(j\) in the output array \(B\). We want to find the probability that \(A[i]\) ends up in \(B[j]\).

Element \(A[i]\) is placed at position \(dest\), which the loop computes as \(i + offset\), wrapping around by subtracting \(n\) whenever the sum exceeds \(n\). If we renumber positions \(0\) through \(n-1\) (the code uses \(1\) through \(n\)), this is simply \(dest = (i + offset) \bmod n\).
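As a concrete aid, here is a minimal Python sketch of the procedure using this 0-based convention (the function name `permute_by_cyclic` and the use of `random.randrange` are illustrative choices, not part of the original pseudocode):

```python
import random

def permute_by_cyclic(a):
    """0-based sketch of Permute-By-Cyclic: a cyclic shift by a random offset."""
    n = len(a)
    offset = random.randrange(n)        # plays the role of Random(1, n)
    b = [None] * n
    for i in range(n):
        b[(i + offset) % n] = a[i]      # dest = (i + offset) mod n
    return b

print(permute_by_cyclic([1, 2, 3]))     # one of the three cyclic rotations
```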

For \(A[i]\) to land in position \(j\), we need:

\[(i + offset) \bmod n = j\]

This means \(offset \equiv j - i \pmod{n}\). Since \(offset\) is drawn uniformly from \(\{1, 2, \ldots, n\}\), a set that contains exactly one representative of each residue class modulo \(n\), exactly one value of \(offset\) satisfies this congruence.

Therefore:

\[\Pr\{A[i] \text{ lands in position } j\} = \frac{1}{n}\]

This holds for all \(i\) and \(j\), so each element has a \(1/n\) probability of appearing in each position.
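A quick empirical check supports this: tallying where each element lands over many trials of the `permute_by_cyclic` sketch above, every cell of the resulting \(n \times n\) frequency table hovers around \(1/n\).

```python
from collections import Counter

n, trials = 4, 100_000
counts = Counter()
for _ in range(trials):
    b = permute_by_cyclic(list(range(n)))   # sketch defined above
    for j, x in enumerate(b):
        counts[(x, j)] += 1                 # element x landed at position j

for x in range(n):
    # each row prints roughly [0.25, 0.25, 0.25, 0.25] for n = 4
    print([round(counts[(x, j)] / trials, 3) for j in range(n)])
```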

B

Despite the uniform marginal probabilities, the algorithm produces only \(n\) distinct permutations, not all \(n!\) possible permutations.

The algorithm performs a cyclic rotation of the array by \(offset\) positions. Since there are \(n\) possible values for \(offset\), there are only \(n\) possible outcomes, each with probability \(1/n\).

Consider \(A = \langle 1, 2, 3 \rangle\). The three possible outputs are:

  • \(offset = 1\): \(B = \langle 3, 1, 2 \rangle\) (each element shifted right by 1)
  • \(offset = 2\): \(B = \langle 2, 3, 1 \rangle\) (each element shifted right by 2)
  • \(offset = 3\): \(B = \langle 1, 2, 3 \rangle\) (each element shifted right by 3 \(\equiv\) 0)

The algorithm can only produce these 3 permutations, but there are \(3! = 6\) total permutations. It cannot produce \(\langle 1, 3, 2 \rangle\), \(\langle 2, 1, 3 \rangle\), or \(\langle 3, 2, 1 \rangle\).
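The same deficit can be shown exhaustively. The snippet below (continuing the 0-based sketch, with the offset fixed per run rather than drawn randomly) enumerates every output reachable for \(n = 3\):

```python
from itertools import permutations

def cyclic_output(a, offset):
    # deterministic outcome of the procedure for one fixed offset (0-based)
    n = len(a)
    b = [None] * n
    for i in range(n):
        b[(i + offset) % n] = a[i]
    return tuple(b)

a = (1, 2, 3)
reachable = {cyclic_output(a, off) for off in range(len(a))}
print(sorted(reachable))                          # the 3 cyclic rotations
print(sorted(set(permutations(a)) - reachable))   # the 3 unreachable permutations
```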

For a uniform random permutation, each of the 6 permutations should occur with probability \(1/6\). Instead, three permutations occur with probability \(1/3\), and three permutations never occur.

This exercise demonstrates an important statistical principle: knowing the marginal probabilities (the probability distribution of each individual random variable) doesn’t uniquely determine the joint probability distribution (the probability of combinations of variables).

In this case:

  • Marginal: Each element appears in each position with probability \(1/n\) ✓
  • Joint: Each permutation appears with probability \(1/n!\) ✗

The cyclic structure introduces strong dependencies between positions. If we know where element \(A[1]\) lands, we immediately know where all other elements land, because they maintain their relative cyclic order.
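A tiny computation makes this determinism vivid. Under the hypothetical assumption that we observe only where element \(A[1]\) lands, the snippet below recovers the entire output, because that single observation fixes the offset:

```python
def output_given_first_position(a, j):
    # Observing a[0] at output position j forces offset = j (0-based),
    # which in turn forces every other element's position.
    n = len(a)
    b = [None] * n
    for i in range(n):
        b[(i + j) % n] = a[i]
    return tuple(b)

for j in range(3):
    print(j, output_given_first_position((1, 2, 3), j))
# each possible position of element 1 corresponds to exactly one full output
```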

To verify a permutation algorithm produces uniform random permutations, we must check that:

  1. Each of the \(n!\) permutations is possible
  2. Each permutation occurs with probability \(1/n!\)

Checking only that each element can appear in each position with probability \(1/n\) is insufficient, as this example clearly demonstrates.
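To see both conditions checked in code, the sketch below computes the exact output distribution of Permute-By-Cyclic and, for contrast, of the classic Fisher-Yates shuffle, which is known to be uniform. The helper `uniform_check` and the tape-enumeration approach are illustrative constructions, not part of the exercise.

```python
from fractions import Fraction
from itertools import product
from math import factorial

def uniform_check(dist, n):
    """Both conditions: all n! permutations occur, each with probability 1/n!."""
    target = Fraction(1, factorial(n))
    return len(dist) == factorial(n) and all(p == target for p in dist.values())

n = 3

# Exact distribution of Permute-By-Cyclic: one outcome per offset, each 1/n.
cyclic = {}
for off in range(n):
    perm = tuple((j - off) % n for j in range(n))   # b[j] = a[(j - offset) mod n]
    cyclic[perm] = cyclic.get(perm, Fraction(0)) + Fraction(1, n)

# Exact distribution of Fisher-Yates: enumerate every "tape" of random choices;
# the choice at step i is uniform over {i, ..., n-1}, so each tape has
# probability 1/n!.
fy = {}
for tape in product(*[range(i, n) for i in range(n)]):
    a = list(range(n))
    for i, j in enumerate(tape):
        a[i], a[j] = a[j], a[i]
    fy[tuple(a)] = fy.get(tuple(a), Fraction(0)) + Fraction(1, factorial(n))

print(uniform_check(cyclic, n))   # False: only n permutations, each 1/n
print(uniform_check(fy, n))       # True: all n! permutations, each 1/n!
```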