Integral probability metric
In probability theory, integral probability metrics are types of distance functions between probability distributions, defined by how well a class of functions can distinguish the two distributions. Many important statistical distances are integral probability metrics, including the Wasserstein-1 distance and the total variation distance. In addition to theoretical importance, integral probability metrics are widely used in areas of statistics and machine learning.
The name "integral probability metric" was given by German statistician Alfred Müller;[1] the distances had also previously been called "metrics with a ζ-structure."[2]
Definition
Integral probability metrics are distances on the space of distributions over a set $\mathcal{X}$, defined by a class $\mathcal{F}$ of real-valued functions on $\mathcal{X}$ as

$$D_{\mathcal{F}}(P, Q) = \sup_{f \in \mathcal{F}} \left| P f - Q f \right|;$$

here the notation P f refers to the expectation of f under the distribution P, $P f = \mathbb{E}_{X \sim P} f(X)$. The absolute value in the definition is unnecessary, and often omitted, for the usual case where for every $f \in \mathcal{F}$ its negation $-f$ is also in $\mathcal{F}$.
The function f being optimized over is known as the "witness function" or the "critic"; the term "witness" is particularly used if a particular f achieves the supremum, as it "witnesses" the difference in the distributions. These functions try to have large values for samples from P and small (likely negative) values for samples from Q.
The choice of $\mathcal{F}$ determines the particular distance; more than one $\mathcal{F}$ can generate the same distance.[1]
For any choice of $\mathcal{F}$, $D_{\mathcal{F}}$ satisfies all the definitions of a metric except that we may have $D_{\mathcal{F}}(P, Q) = 0$ for some P ≠ Q; this is variously termed a "pseudometric" or a "semimetric" depending on the community. For instance, using the class $\mathcal{F} = \{0\}$ which only contains the zero function, $D_{\mathcal{F}}$ is identically zero. $D_{\mathcal{F}}$ is a metric if and only if $\mathcal{F}$ separates points on the space of probability distributions, i.e. for any P ≠ Q there is some $f \in \mathcal{F}$ such that $P f \neq Q f$.[1]
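As a purely illustrative sketch (not taken from the references below), the following Python snippet approximates the definition above between two samples by replacing the expectations with sample means and the supremum with a maximum over a small, hand-picked finite class of witness functions; the function class used here is hypothetical and chosen only to make the computation concrete.

```python
import numpy as np

def empirical_ipm(x, y, function_class):
    """Plug-in estimate of sup_{f in F} |P f - Q f| using sample means.

    x, y: 1-D arrays of samples from P and Q.
    function_class: iterable of callables standing in for the class F.
    """
    return max(abs(np.mean(f(x)) - np.mean(f(y))) for f in function_class)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, size=1000)   # samples from P
y = rng.normal(loc=0.5, size=1000)   # samples from Q

# A tiny, hand-picked stand-in for F (hypothetical; a real IPM uses an
# infinite class such as all 1-Lipschitz functions).
F = [np.sin, np.cos, np.tanh, lambda t: np.clip(t, -1.0, 1.0)]

print(empirical_ipm(x, y, F))
```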
Examples
- The Wasserstein-1 distance, via its dual representation, takes $\mathcal{F}$ to be the set of 1-Lipschitz functions.
- The related Dudley metric is generated by the set of bounded 1-Lipschitz functions.
- The total variation distance can be generated by $\mathcal{F} = \{\mathbf{1}_A\}$, the set of indicator functions of arbitrary events, or (up to a factor of 2) by the larger class $\{f : \|f\|_\infty \le 1\}$.
- The closely related Radon metric is generated by continuous functions bounded in [-1, 1].
- The Kolmogorov metric used in the Kolmogorov-Smirnov test has a function class of indicator functions of half-lines, $\mathcal{F} = \{\mathbf{1}_{(-\infty, t]} : t \in \mathbb{R}\}$.
- The kernel maximum mean discrepancy (MMD) takes $\mathcal{F}$ to be the unit ball of a reproducing kernel Hilbert space. This distance is particularly easy to estimate from samples, requiring no optimization; see the sketch after this list.
- Variants of generative adversarial networks and classifier-based two-sample tests[3][4] frequently use a "neural net distance"[5][6] where $\mathcal{F}$ is a class of neural networks.
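As an illustrative sketch only (not taken from the cited references), the following Python snippet computes the standard biased empirical estimate of the squared kernel MMD; the Gaussian kernel and its bandwidth are assumptions made for the example.

```python
import numpy as np

def gaussian_kernel(a, b, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of a and the rows of b."""
    sq_dists = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2 * a @ b.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd_squared(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of the squared kernel MMD between two samples."""
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(500, 2))   # samples from P
y = rng.normal(0.5, 1.0, size=(500, 2))   # samples from Q

print(np.sqrt(max(mmd_squared(x, y), 0.0)))  # empirical MMD
```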
Relationship to f-divergences
Unlike f-divergences such as the Kullback–Leibler divergence, which are maximal or infinite whenever the two distributions do not share the same support, integral probability metrics such as the Wasserstein-1 distance can give a meaningful notion of closeness between distributions with differing or even disjoint supports. The total variation distance is the only nontrivial distance that is both an integral probability metric and an f-divergence.[7]
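For example (a standard illustration, not taken from the cited reference), compare the point masses $\delta_0$ and $\delta_\theta$ for $\theta \neq 0$:

$$W_1(\delta_0, \delta_\theta) = \sup_{\|f\|_{\mathrm{Lip}} \le 1} |f(0) - f(\theta)| = |\theta|, \qquad \sup_{A} |\delta_0(A) - \delta_\theta(A)| = 1, \qquad \mathrm{KL}(\delta_0 \parallel \delta_\theta) = \infty,$$

so the Wasserstein-1 distance tends to zero as $\theta \to 0$, while the total variation distance stays maximal and the Kullback–Leibler divergence is infinite for every $\theta \neq 0$.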
Estimation
The estimation of integral probability metrics from finite samples is studied by Sriperumbudur et al.[7]
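As an illustrative sketch (an assumption made for this article, not an estimator from the cited work), in one dimension the Wasserstein-1 IPM between two empirical distributions of equal size reduces to the average gap between sorted samples:

```python
import numpy as np

def empirical_wasserstein1(x, y):
    """Wasserstein-1 distance between the empirical distributions of two
    equal-size 1-D samples: the mean absolute gap between order statistics."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.shape == y.shape, "this shortcut assumes equal sample sizes"
    return np.mean(np.abs(x - y))

rng = np.random.default_rng(0)
print(empirical_wasserstein1(rng.normal(0, 1, 1000), rng.normal(0.5, 1, 1000)))
```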
References
- ^ a b c Müller, Alfred (June 1997). "Integral Probability Metrics and Their Generating Classes of Functions". Advances in Applied Probability. 29 (2): 429–443. doi:10.2307/1428011. JSTOR 1428011. S2CID 124648603.
- ^ Zolotarev, V. M. (January 1984). "Probability Metrics". Theory of Probability & Its Applications. 28 (2): 278–302. doi:10.1137/1128025.
- ^ Kim, Ilmun; Ramdas, Aaditya; Singh, Aarti; Wasserman, Larry (February 2021). "Classification accuracy as a proxy for two-sample testing". The Annals of Statistics. 49 (1). arXiv:1703.00573. doi:10.1214/20-AOS1962. S2CID 17668083.
- ^ Lopez-Paz, David; Oquab, Maxime (2017). "Revisiting Classifier Two-Sample Tests". International Conference on Learning Representations. arXiv:1610.06545.
- ^ Arora, Sanjeev; Ge, Rong; Liang, Yingyu; Ma, Tengyu; Zhang, Yi (2017). "Generalization and Equilibrium in Generative Adversarial Nets (GANs)". International Conference on Machine Learning. arXiv:1703.00573.
- ^ Ji, Kaiyi; Liang, Yingbin (2018). "Minimax Estimation of Neural Net Distance". Advances in Neural Information Processing Systems. arXiv:1811.01054.
- ^ a b Sriperumbudur, Bharath K.; Fukumizu, Kenji; Gretton, Arthur; Schölkopf, Bernhard; Lanckriet, Gert R. G. (2009). "On integral probability metrics, φ-divergences and binary classification". arXiv:0901.2698 [cs.IT].