dodiscover.ci.kernel_utils.kl_divergence_score

dodiscover.ci.kernel_utils.kl_divergence_score(y_stat_q, y_stat_p, eps)

Compute the f-divergence upper bound on the KL-divergence.

See Definition 4 in [1], where the function is reversed to give an upper bound for the sake of gradient descent.

The KL-divergence can be estimated with the following formula:

\[\hat{D}_{KL}(p \| q) = \frac{1}{n} \sum_{i=1}^n \log L(Y_i^p) - \log \left(\frac{1}{m} \sum_{j=1}^m L(Y_j^q)\right)\]
Parameters:
y_stat_q : array_like of shape (n_samples_q,)

Samples from the distribution Q, the variational class. This corresponds to \(Y_j^q\) samples.

y_stat_p : array_like of shape (n_samples_p,)

Samples from the distribution P, the joint distribution. This corresponds to \(Y_i^p\) samples.

Returns:
metric : float

The KL-divergence score.
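
As a minimal sketch (not the library's implementation), the estimator in the formula above could be computed as follows, assuming y_stat_p and y_stat_q are 1-D NumPy arrays holding the statistics \(L(Y_i^p)\) and \(L(Y_j^q)\); using eps as a floor on the values before taking logarithms is an assumption made here for numerical stability and may differ from how the library uses eps.

import numpy as np

def kl_divergence_score_sketch(y_stat_q, y_stat_p, eps=1e-10):
    # Floor the statistics at eps so the logarithms stay finite
    # (hypothetical use of eps; the library may apply it differently).
    y_stat_p = np.clip(np.asarray(y_stat_p, dtype=float), eps, None)
    y_stat_q = np.clip(np.asarray(y_stat_q, dtype=float), eps, None)

    # First term: (1/n) * sum_i log L(Y_i^p), the mean log-statistic under P.
    term_p = np.log(y_stat_p).mean()

    # Second term: log((1/m) * sum_j L(Y_j^q)), the log of the mean statistic under Q.
    term_q = np.log(y_stat_q.mean())

    # Their difference is the KL-divergence estimate \(\hat{D}_{KL}(p \| q)\).
    return term_p - term_q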