Function std.numeric.jensenShannonDivergence
Computes the Jensen-Shannon divergence between a and b, which is the sum (ai * log(2 * ai / (ai + bi)) + bi * log(2 * bi / (ai + bi))) / 2. The base of the logarithm is 2. The ranges are assumed to contain elements in [0, 1]. Usually the ranges are normalized probability distributions, but this is not required or checked by jensenShannonDivergence. If the inputs are normalized, the result is bounded within [0, 1]. The three-parameter version stops evaluating as soon as the intermediate result is greater than or equal to limit.
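To make the formula concrete, the summation can be transcribed directly. The sketch below is a hypothetical reference implementation (the helper name jsdReference is not part of std.numeric), assuming array inputs of equal length; it reproduces the value from the example further down:

```d
import std.math : isClose, log2;

// Hypothetical reference implementation: a direct transcription of the
// formula (ai * log2(2*ai/(ai+bi)) + bi * log2(2*bi/(ai+bi))) / 2.
// Not the library's implementation; for illustration only.
double jsdReference(const double[] a, const double[] b)
{
    double result = 0;
    foreach (i; 0 .. a.length)
    {
        immutable ai = a[i], bi = b[i];
        // Skip zero coordinates: x * log2(...) is taken as 0 when x == 0.
        if (ai > 0) result += ai * log2(2 * ai / (ai + bi));
        if (bi > 0) result += bi * log2(2 * bi / (ai + bi));
    }
    return result / 2;
}

void main()
{
    double[] p  = [0.0, 0, 0, 1];
    double[] p1 = [0.25, 0.25, 0.25, 0.25];
    assert(jsdReference(p1, p1) == 0);                     // identical inputs
    assert(isClose(jsdReference(p1, p), 0.548795, 1e-4));  // matches the example below
}
```

Note that identical distributions give a divergence of 0, and normalized inputs keep the result within [0, 1], as stated above.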
Prototypes
CommonType!(ElementType!Range1, ElementType!Range2)
jensenShannonDivergence(Range1, Range2)(Range1 a, Range2 b)
if (isInputRange!Range1 && isInputRange!Range2 &&
    is(CommonType!(ElementType!Range1, ElementType!Range2)));

CommonType!(ElementType!Range1, ElementType!Range2)
jensenShannonDivergence(Range1, Range2, F)(Range1 a, Range2 b, F limit)
if (isInputRange!Range1 && isInputRange!Range2 &&
    is(typeof(CommonType!(ElementType!Range1, ElementType!Range2).init >= F.init) : bool));
Example
import std.math : approxEqual;
import std.numeric : jensenShannonDivergence;

double[] p = [ 0.0, 0, 0, 1 ];
assert(jensenShannonDivergence(p, p) == 0);

double[] p1 = [ 0.25, 0.25, 0.25, 0.25 ];
assert(jensenShannonDivergence(p1, p1) == 0);
assert(approxEqual(jensenShannonDivergence(p1, p), 0.548795));

double[] p2 = [ 0.2, 0.2, 0.2, 0.4 ];
assert(approxEqual(jensenShannonDivergence(p1, p2), 0.0186218));
assert(approxEqual(jensenShannonDivergence(p2, p1), 0.0186218));
assert(approxEqual(jensenShannonDivergence(p2, p1, 0.005), 0.00602366));
Authors
Andrei Alexandrescu, Don Clugston, Robert Jacques