ETOOBUSY 🚀 minimal blogging for the impatient
AoC 2016/19 – Dynamic Josephus
TL;DR
On with Advent of Code puzzle 19 from 2016: using dynamic programming to attack the Josephus problem variant described in AoC 2016/19 – Halving Josephus.
I was very happy to get past puzzle 19 from the 2016 edition of Advent of Code, but let's admit two facts:
- I didn't demonstrate that the heuristic is actually a rule;
- this wouldn't help in some other, more general case.
I mean… what if I hadn't spotted the heuristic at all? Would I have been stuck with the brute force approach?
Well… we have other cards to play.
A recurrent situation
We are actually in a… recurrent situation, where knowing the solution to the problem for $N$ items might help us find a solution for $N + 1$ items rather easily (i.e. without having to actually place $N + 1$ elves and simulate the full game).
So let's say we know the solution for $N$. It's $i$.
Now, let's consider what we would do to solve the $N + 1$ case. Let's first place all of them:
1 2 3 ... (k-3) (k-2) (k-1) k (k+1) (k+2) ... (N-3) (N-2) (N-1) N (N+1)
Here we are highlighting position $k$, which is the position that we will have to eliminate in this condition. We already know how to find $k$, because of the rules about the odd and even values of $N$.
Now, our first step is to eliminate element $k$, so we're left with:
1 2 3 ... (k-3) (k-2) (k-1) (k+1) (k+2) ... (N-3) (N-2) (N-1) N (N+1)
At this point, we have to move on to the following item, which is the same as taking the initial item and moving it to the end, like this:
2 3 ... (k-3) (k-2) (k-1) (k+1) (k+2) ... (N-3) (N-2) (N-1) N (N+1) 1
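The two steps above (eliminate position $k$, then rotate the first item to the back) can be sketched in Perl with a plain array; the function name and the example circle below are mine, just for illustration:

```perl
#!/usr/bin/env perl
use v5.24;
use warnings;
use experimental 'signatures';

# One reduction step of the game: remove the item "across" the
# circle (1-based position int(n/2) + 1), then rotate the first
# item to the back so that the next player comes first.
sub one_step (@items) {
   my $k = int(@items / 2) + 1;   # position to eliminate, 1-based
   splice @items, $k - 1, 1;      # eliminate it
   push @items, shift @items;     # "move on" to the next item
   return @items;
}

say join ' ', one_step(1 .. 5);   # elf 3 is across from elf 1: "2 4 5 1"
```

Running it on five elves removes elf 3 (the one across from elf 1) and rotates elf 1 to the back, leaving the four-elf circle `2 4 5 1`.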
Now, theoretically, our brute force approach would ask us to just repeat the same steps with this new array of elements.
Array of $N$ elements.
Wait. A. Minute.
We already know what the solution to this case is: it's $i$. Well no, it's the element that is at position $i$. Let's write the positions and the values then:
i> 1 2 3 ... (k-3) (k-2) (k-1) k (k+1) ... (N-3) (N-2) (N-1) N
v> 2 3 4 ... (k-2) (k-1) (k+1) (k+2) (k+3) ... (N-2) (N-1) N 1
We can observe that:

if $i$ was less than $k - 1$, then it is only transformed into $i + 1$ after we "move on" to the next item, i.e. take the first item and put it in the final position;

if $i$ was greater than, or equal to, $k - 1$, then it gets shifted two positions ahead, one due to the elimination of $k$ itself, the other one for "moving on" like in the previous bullet;

the only special case is when $i = N$, because in this case we reset back to $1$.
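To double-check these observations, we can run one elimination-and-rotation step on a concrete circle (7 elves here, chosen arbitrarily) and print each position against the value it holds in the reduced circle:

```perl
#!/usr/bin/env perl
use v5.24;
use warnings;

# Start from 7 elves, eliminate the one across (position 4, i.e.
# int(7/2) + 1), then rotate. The reduced 6-elf circle should show
# values i + 1 up to a point, then i + 2, then 1 at the very end.
my @elves = (1 .. 7);
my $k = int(@elves / 2) + 1;   # 4: position to eliminate
splice @elves, $k - 1, 1;      # drop elf 4
push @elves, shift @elves;     # move on past elf 1

printf "position %d holds value %d\n", $_, $elves[$_ - 1]
   for 1 .. scalar @elves;
```

The output shows positions 1 and 2 holding values 2 and 3 (shift by one), positions 3 through 5 holding 5, 6 and 7 (shift by two), and the last position holding 1 (the reset case).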
Hence, if we call $E(n)$ the winning elf in a game of $n$ elves, we have the following recursive relation:
\[E(n) = \begin{cases} E(n-1) + 1, & \text{if $E(n-1) < \lfloor\frac{n}{2}\rfloor$} \\ E(n-1) + 2, & \text{if $\lfloor\frac{n}{2}\rfloor \le E(n-1) < n - 1$} \\ 1, & \text{if $E(n-1) = n - 1$} \end{cases}\]
A dynamic algorithm
To solve for $N$, then, we need the solution for $N - 1$, hence we need the solution for $N - 2$, and so on down to… $N = 2$, where we already know that the solution is $1$.
Hence, an algorithm to solve for $N$ might be:
- start from $n = 2$ and $E(2) = 1$;
- make a loop where $n$ is increased by one at each iteration and the corresponding value $E(n)$ is calculated from the value of the previous iteration;
- when $n$ lands on $N$… we're done!
This can be coded as follows:
sub josephus_part2_dynamic ($N) {
   my $i = 1;
   my $n = 2;
   my $k = 1;
   while ($n < $N) {
      ++$n;
      $k = int($n / 2);
      $i = $i < $k     ? $i + 1
         : $i < $n - 1 ? $i + 2
         :               1;
   }
   return $i;
}
This approach has a complexity that is $O(n)$, as opposed to the $O(1)$ of the heuristic… but it's a more general one anyway, because it only requires us to figure out the mapping of the different indexes from one iteration to the following.
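As a sanity check for the recurrence, we can compare the dynamic solution against a straightforward $O(n^2)$ brute-force simulation of the game; the brute-force function below is my own sketch, not part of the original solution:

```perl
#!/usr/bin/env perl
use v5.24;
use warnings;
use experimental 'signatures';

# The dynamic solution, as in the post.
sub josephus_part2_dynamic ($N) {
   my $i = 1;
   my $n = 2;
   while ($n < $N) {
      ++$n;
      my $k = int($n / 2);
      $i = $i < $k     ? $i + 1
         : $i < $n - 1 ? $i + 2
         :               1;
   }
   return $i;
}

# Brute force: keep the circle in an array, repeatedly remove the
# elf across from the first one, then rotate the first elf away.
sub josephus_part2_brute ($N) {
   my @elves = (1 .. $N);
   while (@elves > 1) {
      splice @elves, int(@elves / 2), 1;   # elf across the circle
      push @elves, shift @elves;           # next elf's turn
   }
   return $elves[0];
}

for my $n (2 .. 100) {
   my ($fast, $slow) = (josephus_part2_dynamic($n), josephus_part2_brute($n));
   die "mismatch at n=$n: $fast vs $slow" if $fast != $slow;
}
say 'dynamic and brute force agree up to 100';
```

The loop cross-checks the two implementations for every size up to 100, which would have caught a wrong branch in the recurrence right away.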
Conclusion
This was an interesting ride! In particular, it gave me the opportunity to go a bit more in depth with the dynamic programming paradigm… and learn something more on the way 🙂