Random projection

From Wikipedia, the free encyclopedia

In mathematics and statistics, random projection is a technique used to reduce the dimensionality of a set of points which lie in Euclidean space. According to theoretical results, random projection preserves distances well, but empirical results are sparse.[1] Random projection methods have been applied to many natural language tasks under the name random indexing.

Dimensionality reduction

Dimensionality reduction, as the name suggests, is the process of reducing the number of random variables using various mathematical methods from statistics and machine learning. Dimensionality reduction is often used to reduce the problem of managing and manipulating large data sets. Dimensionality reduction techniques generally use linear transformations in determining the intrinsic dimensionality of the manifold as well as extracting its principal directions. For this purpose there are various related techniques, including: principal component analysis, linear discriminant analysis, canonical correlation analysis, discrete cosine transform, random projection, etc.

Random projection is a simple and computationally efficient way to reduce the dimensionality of data by trading a controlled amount of error for faster processing times and smaller model sizes. The dimensions and distribution of random projection matrices are controlled so as to approximately preserve the pairwise distances between any two samples of the dataset.

Method

The core idea behind random projection is given in the Johnson-Lindenstrauss lemma,[2] which states that if points in a vector space are of sufficiently high dimension, then they may be projected into a suitable lower-dimensional space in a way which approximately preserves pairwise distances between the points with high probability.
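
In one common quantitative form (the exact constants vary between sources): for any <math>0 < \varepsilon < 1</math> and any set of <math>m</math> points in <math>\mathbb{R}^d</math>, a target dimension <math>k = O(\varepsilon^{-2}\log m)</math> suffices for a linear map <math>f:\mathbb{R}^d \to \mathbb{R}^k</math> satisfying

<math>(1-\varepsilon)\,\|u-v\|^2 \;\le\; \|f(u)-f(v)\|^2 \;\le\; (1+\varepsilon)\,\|u-v\|^2</math>

for all pairs of points u, v in the set, and a random projection achieves this with high probability.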

In random projection, the original d-dimensional data is projected to a k-dimensional (k << d) subspace using a random <math>k \times d</math>-dimensional matrix R whose columns have unit lengths.[citation needed] Using matrix notation: if <math>X_{d \times N}</math> is the original set of N d-dimensional observations, then <math>X_{k \times N}^{RP} = R_{k \times d} X_{d \times N}</math> is the projection of the data onto a lower k-dimensional subspace. Random projection is computationally simple: forming the random matrix R and projecting the <math>d \times N</math> data matrix X onto k dimensions takes time of order <math>O(dkN)</math>. If the data matrix X is sparse with about c nonzero entries per column, then the complexity of this operation is of order <math>O(ckN)</math>.[3]
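
A minimal illustration of this computation in Python with NumPy; the data is synthetic and the dimensions are chosen only for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    d, N, k = 1000, 500, 50            # original dimension, number of points, target dimension

    X = rng.standard_normal((d, N))    # original d x N data matrix (synthetic)

    # Random k x d matrix R whose d columns are normalized to unit length
    R = rng.standard_normal((k, d))
    R /= np.linalg.norm(R, axis=0)

    X_rp = R @ X                       # projected k x N data; cost is O(d k N)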

Gaussian random projection

The random matrix R can be generated using a Gaussian distribution. The first row is a random unit vector uniformly chosen from the unit sphere <math>S^{d-1}</math>. The second row is a random unit vector from the space orthogonal to the first row, the third row is a random unit vector from the space orthogonal to the first two rows, and so on. In this way of choosing R, the following properties are satisfied:

  • Spherical symmetry: For any orthogonal <math>d \times d</math> matrix A, RA and R have the same distribution.
  • Orthogonality: The rows of R are orthogonal to each other.
  • Normality: The rows of R are unit-length vectors.
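
One way to obtain a matrix with these properties is to orthonormalize a Gaussian matrix with a QR decomposition, as in the following sketch (a minimal example; variable names and sizes are illustrative only, and up to sign conventions this matches the row-by-row construction above):

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 1000, 50

    # QR-orthonormalization of a d x k Gaussian matrix yields k orthonormal
    # directions; transposed, this gives a k x d matrix R with orthonormal rows.
    G = rng.standard_normal((d, k))
    Q, _ = np.linalg.qr(G)             # Q: d x k with orthonormal columns
    R = Q.T                            # k x d projection matrix

    print(np.allclose(R @ R.T, np.eye(k)))   # True: rows are orthonormal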

More computationally efficient random projections

Achlioptas[4] has shown that the Gaussian distribution can be replaced by a much simpler distribution, such as

<math>R_{i,j} = \sqrt{3} \times \begin{cases} +1 & \text{with probability }\frac{1}{6}\\ 0 & \text{with probability }\frac{2}{3}\\ -1 & \text{with probability }\frac{1}{6} \end{cases}</math>

This is efficient for database applications because the computations can be performed using integer arithmetic. Related work on very sparse random projections was later conducted by Li, Hastie and Church.[5]
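
A sketch of generating such a matrix and applying it; the 1/sqrt(k) scaling used here is a common convention for preserving norms in expectation and is not stated explicitly above:

    import numpy as np

    rng = np.random.default_rng(0)
    d, N, k = 1000, 500, 50

    # Entries are sqrt(3) * {+1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6};
    # the {-1, 0, +1} part can be drawn and multiplied using integer arithmetic.
    signs = rng.choice([1, 0, -1], size=(k, d), p=[1/6, 2/3, 1/6])
    R = np.sqrt(3) * signs

    X = rng.standard_normal((d, N))
    X_rp = (R @ X) / np.sqrt(k)        # 1/sqrt(k) scaling preserves norms in expectation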

It was later shown how to use integer arithmetic while making the distribution even sparser, having very few nonzero entries per column, in work on the Sparse JL Transform.[6] This is advantageous because a sparse embedding matrix allows the data to be projected to the lower dimension even faster.
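
The following sketch shows one simple sparse construction in this spirit, with a fixed small number s of nonzero entries of value ±1/sqrt(s) per column; it illustrates the idea rather than reproducing the exact transform from the cited paper:

    import numpy as np
    from scipy.sparse import csc_matrix

    rng = np.random.default_rng(0)
    d, k, s = 10000, 200, 4            # s nonzero entries per column, s << k

    # Each of the d columns gets s nonzero +-1/sqrt(s) entries in distinct random rows.
    rows = np.concatenate([rng.choice(k, size=s, replace=False) for _ in range(d)])
    cols = np.repeat(np.arange(d), s)
    vals = rng.choice([-1.0, 1.0], size=s * d) / np.sqrt(s)
    R = csc_matrix((vals, (rows, cols)), shape=(k, d))

    x = rng.standard_normal(d)
    y = R @ x                          # projecting one vector costs O(s d) instead of O(k d)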

Random projection with quantization

Random projection can be further condensed by quantization (discretization), with 1-bit (sign random projection) or multi-bit quantization. Quantized random projection is the building block of SimHash,[7] the RP tree,[8] and other memory-efficient estimation and learning methods.[9][10]
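
A minimal sketch of 1-bit (sign) random projection: for a Gaussian projection matrix, the fraction of differing bits between two codes estimates the angle between the original vectors divided by pi, which is the property SimHash-style methods rely on. Names and sizes here are illustrative only:

    import numpy as np

    rng = np.random.default_rng(0)
    d, k = 500, 256

    R = rng.standard_normal((k, d))            # Gaussian projection matrix

    def sign_rp(x):
        # 1-bit quantization: keep only the sign of each projected coordinate
        return (R @ x) >= 0

    x = rng.standard_normal(d)
    y = x + 0.3 * rng.standard_normal(d)       # a nearby point

    hamming = np.mean(sign_rp(x) != sign_rp(y))   # fraction of differing bits
    angle_est = np.pi * hamming                   # estimate of the angle between x and y
    angle_true = np.arccos(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))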

Large quasiorthogonal bases

The Johnson-Lindenstrauss lemma states that large sets of vectors in a high-dimensional space can be linearly mapped into a space of much lower (but still high) dimension n with approximate preservation of distances. One explanation of this effect is the exponentially high quasiorthogonal dimension of n-dimensional Euclidean space.[11] There are exponentially large (in dimension n) sets of almost orthogonal vectors (with small inner products) in n-dimensional Euclidean space. This observation is useful in the indexing of high-dimensional data.[12]

Quasiorthogonality of large random sets is important for methods of random approximation in machine learning. In high dimensions, exponentially large numbers of randomly and independently chosen vectors from the equidistribution on a sphere (and from many other distributions) are almost orthogonal with probability close to one.[13] This implies that in order to represent an element of such a high-dimensional space by linear combinations of randomly and independently chosen vectors, it may often be necessary to generate samples of exponentially large length if we use bounded coefficients in the linear combinations. On the other hand, if coefficients with arbitrarily large values are allowed, the number of randomly generated elements that is sufficient for approximation is even less than the dimension of the data space.
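
A quick numerical illustration of this near-orthogonality; the dimension and sample count are chosen only for the example:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 10000, 1000                  # dimension and number of random vectors

    # m independent random directions on the unit sphere in R^n
    V = rng.standard_normal((m, n))
    V /= np.linalg.norm(V, axis=1, keepdims=True)

    # Off-diagonal inner products concentrate around 0 with spread ~ 1/sqrt(n)
    G = V @ V.T
    off_diag = np.abs(G[~np.eye(m, dtype=bool)])
    print(off_diag.max())               # typically well below 0.1 for these sizes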

Implementations

  • RandPro (https://cran.r-project.org/web/packages/RandPro/index.html) - an R package for random projection[14][15]
  • sklearn.random_projection (http://scikit-learn.org/stable/modules/random_projection.html) - a module for random projection from the scikit-learn Python library
  • Weka implementation (http://weka.sourceforge.net/doc.stable/weka/filters/unsupervised/attribute/RandomProjection.html)

See also

References

  1. ^ Bingham, Ella; Mannila, Heikki (2001). "Random projection in dimensionality reduction: Applications to image and text data". KDD-2001: Proceedings of the Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York: Association for Computing Machinery. pp. 245–250. CiteSeerX 10.1.1.24.5135. doi:10.1145/502512.502546.
  2. ^ Johnson, William B.; Lindenstrauss, Joram (1984). "Extensions of Lipschitz mappings into a Hilbert space". Conference in Modern Analysis and Probability (New Haven, Conn., 1982). Contemporary Mathematics. Vol. 26. Providence, RI: American Mathematical Society. pp. 189–206. doi:10.1090/conm/026/737400. ISBN 9780821850305. MR 0737400. S2CID 117819162.
  3. ^ Bingham, Ella; Mannila, Heikki (May 6, 2014). "Random projection in dimensionality reduction: Applications to image and text data" (PDF).
  4. ^ Achlioptas, Dimitris (2001). "Database-friendly random projections". Proceedings of the twentieth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems - PODS '01. pp. 274–281. CiteSeerX 10.1.1.28.6652. doi:10.1145/375551.375608. ISBN 978-1581133615. S2CID 2640788.
  5. ^ Li, Ping; Hastie, Trevor; Church, Kenneth (2006). "Very sparse random projections". Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining. pp. 287–296. doi:10.1145/1150402.1150436. ISBN 1595933395. S2CID 7995734.
  6. ^ Kane, Daniel M.; Nelson, Jelani (2014). "Sparser Johnson-Lindenstrauss Transforms". Journal of the ACM. 61 (1): 1–23. arXiv:1012.1577. doi:10.1145/2559902. MR 3167920. S2CID 7821848.
  7. ^ Charikar, Moses (2002). "Similarity estimation techniques from rounding algorithms". Proceedings of the thirty-fourth annual ACM symposium on Theory of computing. Vol. 1. pp. 380–388. doi:10.1145/509907.509965. ISBN 1581134959. S2CID 4229473.
  8. ^ Freund, Yoav; Dasgupta, Sanjoy; Kabra, Mayank; Verma, Nakul (2007). "Learning the structure of manifolds using random projections". 20th International Conference on Neural Information Processing Systems. 1 (1): 473–480.
  9. ^ Boufounos, Petros; Baraniuk, Richard (2008). "1-bit Compressive Sensing". 42nd Annual Conference on Information Sciences and Systems. 1 (1): 16–21. doi:10.1109/CISS.2008.4558487. S2CID 206563812.
  10. ^ Li, Xiaoyun; Li, Ping (2019). "Generalization error analysis of quantized compressive learning". 33rd International Conference on Neural Information Processing Systems. 1: 15150–15160.
  11. ^ Kainen, Paul C.; Kůrková, Věra (1993), "Quasiorthogonal dimension of Euclidean spaces", Applied Mathematics Letters, 6 (3): 7–10, doi:10.1016/0893-9659(93)90023-G, MR 1347278
  12. ^ Hecht-Nielsen, R. (1994). "Context vectors: General-purpose approximate meaning representations self-organized from raw data". In Zurada, Jacek M.; Marks, Robert Jackson; Robinson, Charles J. (eds.). Computational Intelligence: Imitating Life. IEEE. pp. 43–56. ISBN 978-0-7803-1104-6.
  13. ^ Gorban, Alexander N.; Tyukin, Ivan Y.; Prokhorov, Danil V.; Sofeikov, Konstantin I. (2016). "Approximation with Random Bases: Pro et Contra". Information Sciences. 364–365: 129–145. arXiv:1506.04631. doi:10.1016/j.ins.2015.09.021. S2CID 2239376.
  14. ^ Ravindran, Siddharth (2020). "A Data-Independent Reusable Projection (DIRP) Technique for Dimension Reduction in Big Data Classification Using k-Nearest Neighbor (k-NN)". National Academy Science Letters. 43: 13–21. doi:10.1007/s40009-018-0771-6. S2CID 91946077.
  15. ^ Siddharth, R.; Aghila, G. (July 2020). "RandPro- A practical implementation of random projection-based feature extraction for high dimensional multivariate data analysis in R". SoftwareX. 12: 100629. Bibcode:2020SoftX..1200629S. doi:10.1016/j.softx.2020.100629.

Further reading

  • Ramdas, Aditya. "A Random Introduction To Random Projections" (Report). CiteSeerX 10.1.1.377.2593.