TL;DR

A couple of initial classes in Raku to help follow Think Bayes with some Raku code.

In the previous post, Think Bayes, we saw that there's a free book available (Think Bayes) with code in Python, which I'd like to follow along using Raku.

Chapter 2, Computational Statistics, introduces the class `Pmf`; the first examples can be followed with this basic implementation:

``````
class Pmf {
    has %.pmf;

    method TWEAK (:%!pmf) {}

    method gist () {
        return gather {
            take '---';
            for %!pmf.keys.sort -> $key {
                take "  «$key» {%!pmf{$key}}";
            }
        }.join("\n");
    }

    method total () { return [+] %!pmf.values }

    method normalize (Numeric:D $sum = 1) {
        my $total = self.total or return;
        my $factor = $sum / $total;
        %!pmf.values »*=» $factor;
        self;
    }

    method set ($key, $value) { %!pmf{$key} = $value; self }

    method increment ($key, $amount) {
        %!pmf{$key} += $amount;
        return self;
    }

    method multiply ($key, $factor) {
        %!pmf{$key} *= $factor;
        return self;
    }

    method probability ($key) { self.P($key) }

    method P ($key) {
        die "no key '$key' in PMF" unless %!pmf{$key}:exists;
        return %!pmf{$key} / self.total;
    }
}
``````

As anticipated, the implementation is basic, but sufficient for following the examples. For instance, the `normalize` method is hardly needed, because normalization is always performed on the fly by the `P` method.
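To see how the pieces fit together, here is a small sketch exercising the class, loosely mirroring the die example from the book's earlier sections; a trimmed-down copy of the class is repeated so the snippet runs standalone, and the die itself is an illustration of mine, not code from the book:

``````
# Trimmed-down copy of the Pmf class above, so this snippet runs standalone.
class Pmf {
    has %.pmf;
    method total () { [+] %!pmf.values }
    method set ($key, $value) { %!pmf{$key} = $value; self }
    method normalize (Numeric:D $sum = 1) {
        my $total = self.total or return;
        %!pmf.values »*=» $sum / $total;
        self;
    }
    method P ($key) { %!pmf{$key} / self.total }
}

my $die = Pmf.new;
$die.set($_, 1) for 1..6;   # six equally likely outcomes
$die.normalize;             # raw counts become probabilities summing to 1
say $die.total;             # 1
say $die.P(3);              # 0.166667
``````

Note that calling `normalize` here is optional: `P` divides by `total` anyway, so `P(3)` returns `1/6` whether the values stored are raw counts or normalized probabilities.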

We can re-create the cookies example from section 2.2:

``````
my $cookie = Pmf.new(pmf => ('Bowl 1', 1, 'Bowl 2', 1).hash);
``````

The initialization assigns the same value to the two hypotheses, i.e. `Bowl 1` and `Bowl 2`. As long as the values are equal, the hypotheses have the same probability (because probabilities are calculated by dividing by the total).

Then we perform the update phase, where we multiply each prior probability by the likelihood that the cookie came from that bowl. At the end, we print out the updated estimate that the cookie indeed comes from `Bowl 1`.
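The update and the final query can be sketched as follows; a trimmed-down copy of the class is repeated so the snippet runs standalone, and the likelihoods are the vanilla-cookie proportions from the book's section 2.2 (30/40 for Bowl 1, 20/40 for Bowl 2):

``````
# Trimmed-down copy of the Pmf class above, so this snippet runs standalone.
class Pmf {
    has %.pmf;
    method total () { [+] %!pmf.values }
    method multiply ($key, $factor) { %!pmf{$key} *= $factor; self }
    method P ($key) { %!pmf{$key} / self.total }
}

# Priors: both bowls equally likely.
my $cookie = Pmf.new(pmf => ('Bowl 1', 1, 'Bowl 2', 1).hash);

# Update: multiply each prior by the likelihood of drawing a vanilla
# cookie from that bowl.
$cookie.multiply('Bowl 1', 3/4);
$cookie.multiply('Bowl 2', 1/2);

# Posterior: 0.75 / (0.75 + 0.5) = 0.6
say $cookie.P('Bowl 1');    # 0.6
``````

Since `P` normalizes on the fly, there is no need to call `normalize` between the update and the query.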