# Kernel estimators
Kernel estimators in xyz are fixed-radius neighborhood methods. They
replace the k-nearest-neighbor logic of KSG with a radius parameter
\(r\), then estimate probabilities from counts inside that radius.
## Implemented classes
- `xyz.KernelTransferEntropy`
- `xyz.KernelPartialTransferEntropy`
- `xyz.KernelSelfEntropy`
## Mathematical idea
For a radius \(r\), the estimator counts how many points fall within the metric ball of radius \(r\) around each sample:

\[
n(i) = \sum_{j \neq i} \Theta\!\left(r - \lVert z_i - z_j \rVert\right),
\]

where \(\Theta\) is the Heaviside step function and \(\lVert \cdot \rVert\) is the chosen metric (e.g. Chebyshev).
The probability of a neighborhood is then approximated from these counts. Conditional entropies are estimated from log ratios of counts in:
- a full space such as \((Y_t, Y_t^-, X_t^-)\),
- and a reduced conditioning space such as \((Y_t^-, X_t^-)\).
In the TE setting, this yields

\[
\mathrm{TE}_{X \to Y} = H(Y_t \mid Y_t^-) - H(Y_t \mid Y_t^-, X_t^-),
\]

with each conditional entropy approximated by fixed-radius pair counts.
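To make the log-count ratios concrete, here is a minimal NumPy sketch of a box-kernel TE estimate with a Chebyshev metric and a single lag. It is illustrative only, not the xyz implementation, and it assumes a shared radius for all spaces so that the ball-volume constants cancel in the entropy sum.

```python
import numpy as np

def box_counts(z, r):
    # Pairwise Chebyshev distances; count neighbors within radius r, self excluded.
    d = np.max(np.abs(z[:, None, :] - z[None, :, :]), axis=-1)
    return (d <= r).sum(axis=1) - 1

def kernel_te(x, y, r=0.5):
    # Lag-1 embedding: predict y[t] from y[t-1], with x[t-1] as the driver.
    yt, ym, xm = y[1:], y[:-1], x[:-1]
    col = lambda *a: np.column_stack(a)
    # TE = H(Y^-,X^-) + H(Y_t,Y^-) - H(Y_t,Y^-,X^-) - H(Y^-); with a box kernel
    # at one radius the volume terms cancel, leaving only log-count ratios.
    n_full = box_counts(col(yt, ym, xm), r)
    n_cond = box_counts(col(ym, xm), r)
    n_pred = box_counts(col(yt, ym), r)
    n_past = box_counts(col(ym), r)
    ok = (n_full > 0) & (n_cond > 0) & (n_pred > 0) & (n_past > 0)
    return np.mean(np.log(n_full[ok]) + np.log(n_past[ok])
                   - np.log(n_cond[ok]) - np.log(n_pred[ok]))

# Toy coupled system: x drives y with lag 1.
rng = np.random.default_rng(0)
n = 800
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.6 * x[t - 1] + 0.1 * rng.standard_normal()

print(kernel_te(x, y))  # coupled direction: clearly positive
print(kernel_te(y, x))  # reverse direction: near zero
```

The coupled direction should come out clearly larger than the reverse; the absolute value is biased at any fixed radius, which is why radius sensitivity matters in practice.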
## Why use kernel estimators
- They provide a direct geometric interpretation through the radius \(r\).
- They are often useful for sensitivity studies when you want to inspect locality explicitly.
- They can be easier to explain to users who think in terms of neighborhoods rather than nearest-neighbor order statistics.
## When to use them
Kernel estimators are helpful when:
- you want a local-scale interpretation of dependence,
- you plan to sweep over neighborhood scales,
- or you want a complementary nonparametric estimate to compare against KSG.
## Typical use cases
- Exploratory analysis where robustness across locality scales matters.
- Comparative studies where both fixed-\(k\) and fixed-\(r\) views are useful.
- Educational settings, because the geometry is intuitive.
## How to use them
```python
import numpy as np

from xyz import KernelPartialTransferEntropy, KernelSelfEntropy, KernelTransferEntropy

data = np.random.randn(1500, 3)

# Transfer entropy from column 0 to column 1 at lag 1
te = KernelTransferEntropy(
    driver_indices=[0],
    target_indices=[1],
    lags=1,
    r=0.5,
    metric="chebyshev",
).fit(data)

# Partial transfer entropy, conditioning on column 2
pte = KernelPartialTransferEntropy(
    driver_indices=[0],
    target_indices=[1],
    conditioning_indices=[2],
    lags=1,
    r=0.5,
).fit(data)

# Self-entropy of column 1 with two lags
se = KernelSelfEntropy(target_indices=[1], lags=2, r=0.5).fit(data)

print(te.transfer_entropy_)
print(pte.transfer_entropy_)
print(se.self_entropy_)
```
## Choosing the radius r
- Too small: counts become sparse and unstable.
- Too large: neighborhoods blur distinct dynamical regimes and bias estimates downward.
- In practice: report a sensitivity range instead of trusting one hand-picked value.
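One quick diagnostic for the sparse-versus-blurred trade-off is the distribution of neighbor counts as a function of \(r\). The NumPy sketch below (illustrative, unrelated to the xyz API) shows how neighborhoods go from mostly empty to nearly global as the radius grows:

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal((1000, 3))  # stand-in for a 3-D embedded state space

for r in [0.05, 0.2, 0.5, 1.0, 2.0]:
    # Pairwise Chebyshev distances; neighbors within r, self excluded.
    d = np.max(np.abs(z[:, None, :] - z[None, :, :]), axis=-1)
    counts = (d <= r).sum(axis=1) - 1
    print(f"r={r:<4}: median neighbors={int(np.median(counts)):5d}, "
          f"empty neighborhoods={(counts == 0).mean():.1%}")
```

Radii where the median count is tiny (or many neighborhoods are empty) are too small to estimate log ratios reliably, while radii where almost every point is a neighbor of every other discard the local structure entirely.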
## Interactive example
The plot below shows how a kernel TE estimate changes as the radius r is
varied in a synthetic system. A stable plateau is usually more convincing than
an isolated spike.