Anomaly detection
In data analysis, anomaly detection (also referred to as outlier detection and sometimes as novelty detection) is generally understood to be the identification of rare items, events or observations which deviate significantly from the majority of the data and do not conform to a well-defined notion of normal behavior.[1] Such examples may arouse suspicions of being generated by a different mechanism,[2] or appear inconsistent with the remainder of that set of data.[3]
Anomaly detection finds application in many domains, including cybersecurity, medicine, machine vision, statistics, neuroscience, law enforcement and financial fraud, to name only a few. Anomalies were initially sought for clear rejection or omission from the data to aid statistical analysis, for example to compute the mean or standard deviation. They were also removed to improve predictions from models such as linear regression, and more recently their removal has aided the performance of machine learning algorithms. However, in many applications anomalies themselves are of interest: they are often the most informative observations in the entire data set, and need to be identified and separated from noise or irrelevant outliers.
Three broad categories of anomaly detection techniques exist.[1] Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involve training a classifier. However, this approach is rarely used in anomaly detection due to the general unavailability of labelled data and the inherent unbalanced nature of the classes. Semi-supervised anomaly detection techniques assume that some portion of the data is labelled. This may be any combination of the normal or anomalous data, but more often than not, the techniques construct a model representing normal behavior from a given normal training data set, and then test the likelihood of a test instance being generated by the model. Unsupervised anomaly detection techniques assume the data is unlabelled and are by far the most commonly used, owing to their broader applicability.
Definition
Many attempts have been made in the statistical and computer science communities to define an anomaly. The most prevalent ones include the following, and can be categorised into three groups: those that are ambiguous, those that are specific to a method with pre-defined thresholds usually chosen empirically, and those that are formally defined:
Ill-defined
- An outlier is an observation which deviates so much from the other observations as to arouse suspicions that it was generated by a different mechanism.[2]
- Anomalies are instances or collections of data that occur very rarely in the data set and whose features differ significantly from most of the data.
- An outlier is an observation (or subset of observations) which appears to be inconsistent with the remainder of that set of data.[3]
- An anomaly is a point or collection of points that is relatively distant from other points in multi-dimensional space of features.
- Anomalies are patterns in data that do not conform to a well-defined notion of normal behaviour.[1]
Specific
- Let T be observations from a univariate Gaussian distribution and O a point from T. Then the z-score for O is greater than a pre-selected threshold if and only if O is an outlier.
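The criterion above translates directly into code. The following is a minimal sketch (the NumPy implementation, the planted anomaly, and the threshold of 3 are illustrative assumptions, not part of the definition):

```python
import numpy as np

def zscore_outliers(t: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Boolean mask marking points of t whose |z-score| exceeds the threshold."""
    z = (t - t.mean()) / t.std()   # standardize each observation
    return np.abs(z) > threshold   # True where a point is flagged as an outlier

# 1000 Gaussian samples plus one planted anomaly at 8.0
rng = np.random.default_rng(0)
t = np.append(rng.normal(0.0, 1.0, 1000), 8.0)
print(np.where(zscore_outliers(t))[0])  # expect index 1000 to be flagged
```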
History
Intrusion detection
The concept of intrusion detection, a critical component of anomaly detection, has evolved significantly over time. Initially, it was a manual process where system administrators would monitor for unusual activities, such as a vacationing user's account being accessed or unexpected printer activity. This approach was not scalable and was soon superseded by the analysis of audit logs and system logs for signs of malicious behavior.[4]
By the late 1970s and early 1980s, the analysis of these logs was primarily used retrospectively to investigate incidents, as the volume of data made it impractical for real-time monitoring. The affordability of digital storage eventually led to audit logs being analyzed online, with specialized programs being developed to sift through the data. These programs, however, were typically run during off-peak hours due to their computational intensity.[4]
The 1990s brought the advent of real-time intrusion detection systems capable of analyzing audit data as it was generated, allowing for immediate detection of and response to attacks. This marked a significant shift towards proactive intrusion detection.[4]
As the field has continued to develop, the focus has shifted to creating solutions that can be efficiently implemented across large and complex network environments, adapting to the ever-growing variety of security threats and the dynamic nature of modern computing infrastructures.[4]
Applications
Anomaly detection is applicable in a wide variety of domains, and is an important subarea of unsupervised machine learning. As such it has applications in cyber-security, intrusion detection, fraud detection, fault detection, system health monitoring, event detection in sensor networks, detecting ecosystem disturbances, defect detection in images using machine vision, medical diagnosis and law enforcement.[5]
Intrusion detection
Anomaly detection was proposed for intrusion detection systems (IDS) by Dorothy Denning in 1986.[6] Anomaly detection for IDS is normally accomplished with thresholds and statistics, but can also be done with soft computing, and inductive learning.[7] Types of features proposed by 1999 included profiles of users, workstations, networks, remote hosts, groups of users, and programs based on frequencies, means, variances, covariances, and standard deviations.[8] The counterpart of anomaly detection in intrusion detection is misuse detection.
Fintech fraud detection
Anomaly detection is vital in fintech for fraud prevention.[9][10]
Preprocessing
Preprocessing data to remove anomalies can be an important step in data analysis, and is done for a number of reasons. Statistics such as the mean and standard deviation are more accurate after the removal of anomalies, and the visualisation of data can also be improved. In supervised learning, removing the anomalous data from the dataset often results in a statistically significant increase in accuracy.[11][12]
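As a minimal illustration of this effect (a sketch: the planted anomalies and the 3-sigma cutoff are arbitrary choices), removing points that fail a z-score test noticeably changes the summary statistics:

```python
import numpy as np

rng = np.random.default_rng(1)
data = np.append(rng.normal(10.0, 2.0, 500), [95.0, -60.0])  # two gross anomalies

z = np.abs((data - data.mean()) / data.std())
clean = data[z <= 3.0]  # drop points more than 3 standard deviations from the mean

print(f"mean  before: {data.mean():7.2f}  after: {clean.mean():7.2f}")
print(f"stdev before: {data.std():7.2f}  after: {clean.std():7.2f}")
```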
Video surveillance
Anomaly detection has become increasingly vital in video surveillance to enhance security and safety.[13][14] With the advent of deep learning technologies, methods using Convolutional Neural Networks (CNNs) and Simple Recurrent Units (SRUs) have shown significant promise in identifying unusual activities or behaviors in video data.[13] These models can process and analyze extensive video feeds in real-time, recognizing patterns that deviate from the norm, which may indicate potential security threats or safety violations.[13]
IT infrastructure
In IT infrastructure management, anomaly detection is crucial for ensuring the smooth operation and reliability of services.[15] Techniques like the IT Infrastructure Library (ITIL) and monitoring frameworks are employed to track and manage system performance and user experience.[15] Detecting anomalies can help identify and pre-empt potential performance degradations or system failures, thus maintaining productivity and business process effectiveness.[15]
IoT systems
Anomaly detection is critical for the security and efficiency of Internet of Things (IoT) systems.[16] It helps in identifying system failures and security breaches in complex networks of IoT devices.[16] The methods must manage real-time data, diverse device types, and scale effectively. Garg et al.[17] have introduced a multi-stage anomaly detection framework that improves upon traditional methods by incorporating spatial clustering, density-based clustering, and locality-sensitive hashing. This tailored approach is designed to better handle the vast and varied nature of IoT data, thereby enhancing security and operational reliability in smart infrastructure and industrial IoT systems.[17]
Petroleum industry
Anomaly detection is crucial in the petroleum industry for monitoring critical machinery.[18] Martí et al. used a novel segmentation algorithm to analyze sensor data for real-time anomaly detection.[18] This approach helps promptly identify and address any irregularities in sensor readings, ensuring the reliability and safety of petroleum operations.[18]
Oil and gas pipeline monitoring
In the oil and gas sector, anomaly detection is not just crucial for maintenance and safety, but also for environmental protection.[19] Aljameel et al. propose an advanced machine learning-based model for detecting minor leaks in oil and gas pipelines, a task traditional methods may miss.[19]
Methods
Many anomaly detection techniques have been proposed in the literature.[1] Their performance usually depends on the data set: some methods are better suited to detecting local outliers, others to global ones. While there is no universally best method, methods that use partial models, such as Isolation Forest, tend to perform well across diverse data sets (much as decision-tree ensembles perform well on most tabular data). However, almost all algorithms require setting non-intuitive parameters that are critical for performance and usually unknown before application, so most methods require some amount of additional user input.
Some popular techniques, broken down into categories, are:
Statistical
Minimum Covariance Determinant[20][21]
Parametric-based
- Z-score
Density
- isolation forest[22][23]
- k-nearest neighbor[24][25][26]
- local outlier factor[27]
- One-class support vector machines[28]
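Several of the detectors listed above ship with common libraries. A minimal sketch using scikit-learn (the toy data, contamination values, and neighbor count are illustrative choices):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 1, (200, 2)),   # dense "normal" cluster
               [[6.0, 6.0], [-7.0, 5.0]]])   # two planted outliers

# Isolation forest: anomalous points are isolated by few random splits
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
print("isolation forest flags:", np.where(iso.predict(X) == -1)[0])

# Local outlier factor: compares each point's density to its neighbors'
lof = LocalOutlierFactor(n_neighbors=20, contamination=0.01)
print("LOF flags:", np.where(lof.fit_predict(X) == -1)[0])
```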
Neural networks
- Replicator neural networks[29]
- Bayesian networks[29]
- Hidden Markov models (HMMs)[29]
- Deep Learning[13]
- autoencoders
- variational autoencoders[30]
- long short-term memory neural networks[31]
- Convolutional Neural Networks (CNNs): CNNs have shown exceptional performance in the unsupervised learning domain for anomaly detection, especially in image and video data analysis.[13] Their ability to automatically and hierarchically learn spatial hierarchies of features from low to high-level patterns makes them particularly suited for detecting visual anomalies. For instance, CNNs can be trained on image datasets to identify atypical patterns indicative of defects or out-of-norm conditions in industrial quality control scenarios.[32]
- Simple Recurrent Units (SRUs): In time-series data, SRUs, a type of recurrent neural network, have been effectively used for anomaly detection by capturing temporal dependencies and sequence anomalies.[13] Unlike traditional RNNs, SRUs are designed to be faster and more parallelizable, offering a better fit for real-time anomaly detection in complex systems such as dynamic financial markets or predictive maintenance in machinery, where identifying temporal irregularities promptly is crucial.[33]
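A pattern shared by the autoencoder-based methods above is to train a network to reconstruct (mostly) normal data and then score new points by reconstruction error. A small PyTorch sketch of that idea (the architecture, training budget, and data are illustrative assumptions, not taken from the cited works):

```python
import torch
import torch.nn as nn

# Tiny autoencoder: compress 8-dimensional inputs to 2 and reconstruct
model = nn.Sequential(
    nn.Linear(8, 2), nn.ReLU(),  # encoder
    nn.Linear(2, 8),             # decoder
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

normal = torch.randn(512, 8)  # stand-in for "normal" training data
for _ in range(200):          # train to reconstruct normal points only
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Score new points by reconstruction error; high error suggests an anomaly
test = torch.cat([torch.randn(4, 8), 10 * torch.ones(1, 8)])
errors = ((model(test) - test) ** 2).mean(dim=1)
print(errors.detach())  # the shifted final point should score highest
```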
Cluster-based
- Clustering: Cluster analysis-based outlier detection[34][35]
- Deviations from association rules and frequent itemsets
- Fuzzy logic-based outlier detection
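For the cluster-analysis-based approach, one simple strategy is to treat points that no cluster claims as outliers. In scikit-learn's DBSCAN, for instance, such noise points receive the label -1 (the toy data and parameters below are illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0, 0.3, (100, 2)),  # cluster A around (0, 0)
               rng.normal(5, 0.3, (100, 2)),  # cluster B around (5, 5)
               [[2.5, 2.5]]])                 # isolated point between them

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print("outlier indices:", np.where(labels == -1)[0])  # DBSCAN marks noise as -1
```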
Ensembles
- Ensemble techniques, using feature bagging,[36][37] score normalization[38][39] and different sources of diversity[40][41]
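The combination step can be as simple as rescaling each detector's scores to a common range and averaging them. A sketch of that idea (min-max normalization and uniform weights are illustrative choices; real ensembles may instead use feature bagging or other diversity sources):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def minmax(s: np.ndarray) -> np.ndarray:
    """Rescale raw scores to [0, 1] so different detectors become comparable."""
    return (s - s.min()) / (s.max() - s.min())

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (300, 2)), [[8.0, 8.0]]])

# Flip signs so that higher always means more anomalous
iso_scores = -IsolationForest(random_state=0).fit(X).score_samples(X)
lof = LocalOutlierFactor(n_neighbors=20).fit(X)
lof_scores = -lof.negative_outlier_factor_

combined = (minmax(iso_scores) + minmax(lof_scores)) / 2
print("most anomalous index:", combined.argmax())  # expect 300
```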
Others
- Histogram-based Outlier Score (HBOS): a fast unsupervised anomaly detection algorithm[42]
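The idea behind HBOS is simple enough to sketch: build an independent histogram for each feature and score each point by the sum of the negative log densities of the bins it falls into. A minimal version (the bin count and toy data are illustrative; the published algorithm also supports dynamic bin widths):

```python
import numpy as np

def hbos_scores(X: np.ndarray, bins: int = 10) -> np.ndarray:
    """Histogram-based outlier score: sum of -log(bin density) across features."""
    scores = np.zeros(len(X))
    for j in range(X.shape[1]):  # features are treated as independent
        hist, edges = np.histogram(X[:, j], bins=bins, density=True)
        idx = np.digitize(X[:, j], edges[1:-1])  # bin index of each point, 0..bins-1
        scores += -np.log(hist[idx] + 1e-12)     # rare bins contribute large scores
    return scores

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (500, 2)), [[6.0, -6.0]]])
print("top outlier:", hbos_scores(X).argmax())  # expect index 500
```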
Anomaly detection in dynamic networks
Dynamic networks, such as those representing financial systems, social media interactions, and transportation infrastructure, are subject to constant change, making anomaly detection within them a complex task. Unlike static graphs, dynamic networks reflect evolving relationships and states, requiring adaptive techniques for anomaly detection.
Types of anomalies in dynamic networks
- Community anomalies
- Compression anomalies
- Decomposition anomalies
- Distance anomalies
- Probabilistic model anomalies
Explainable anomaly detection
Many of the methods discussed above yield only an anomaly score, which can often be explained to users as the point lying in a region of low data density (or of relatively low density compared to its neighbors' densities). In explainable artificial intelligence, users demand methods with higher explainability. Some methods allow for more detailed explanations:
- The Subspace Outlier Degree (SOD)[43] identifies attributes where a sample is normal, and attributes in which the sample deviates from the expected.
- Correlation Outlier Probabilities (COP)[44] compute an error vector of how a sample point deviates from an expected location, which can be interpreted as a counterfactual explanation: the sample would be normal if it were moved to that location.
Software
- ELKI is an open-source Java data mining toolkit that contains several anomaly detection algorithms, as well as index acceleration for them.
- PyOD is an open-source Python library developed specifically for anomaly detection.[45]
- scikit-learn is an open-source Python library that contains some algorithms for unsupervised anomaly detection.
- Wolfram Mathematica provides functionality for unsupervised anomaly detection across multiple data types.[46]
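As an illustration of one such interface, PyOD follows a scikit-learn-style fit/predict API across its detectors. A minimal sketch (the kNN detector, toy data, and contamination value are illustrative choices):

```python
import numpy as np
from pyod.models.knn import KNN  # k-nearest-neighbor detector from PyOD

rng = np.random.default_rng(11)
X_train = rng.normal(0, 1, (400, 3))
X_test = np.vstack([rng.normal(0, 1, (10, 3)), [[7.0, 7.0, 7.0]]])

clf = KNN(contamination=0.05)  # assumed fraction of outliers in the training data
clf.fit(X_train)
print(clf.predict(X_test))            # 0 = inlier, 1 = outlier
print(clf.decision_function(X_test))  # raw outlier scores (higher = more abnormal)
```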
Datasets
- Anomaly detection benchmark data repository with carefully chosen data sets of the Ludwig-Maximilians-Universität München; mirror (archived 2022-03-31 at the Wayback Machine) at the University of São Paulo.
- ODDS: a large collection of publicly available outlier detection datasets with ground truth in different domains.
- Unsupervised Anomaly Detection Benchmark at Harvard Dataverse: datasets for unsupervised anomaly detection with ground truth.
- KMASH Data Repository at Research Data Australia, with more than 12,000 anomaly detection datasets with ground truth.
References
- ^ a b c d Chandola, V.; Banerjee, A.; Kumar, V. (2009). "Anomaly detection: A survey". ACM Computing Surveys. 41 (3): 1–58. doi:10.1145/1541880.1541882. S2CID 207172599.
- ^ a b Hawkins, Douglas M. (1980). Identification of Outliers. Springer. ISBN 978-0-412-21900-9. OCLC 6912274.
- ^ a b Barnett, Vic; Lewis, Toby (1978). Outliers in statistical data. Wiley. ISBN 978-0-471-99599-9. OCLC 1150938591.
- ^ a b c d Kemmerer, R.A.; Vigna, G. (April 2002). "Intrusion detection: a brief history and overview". Computer. 35 (4): supl27–supl30. doi:10.1109/mc.2002.1012428. ISSN 0018-9162.
- ^ Aggarwal, Charu (2017). Outlier Analysis. Springer Publishing Company, Incorporated. ISBN 978-3319475776.
- ^ Denning, D. E. (1987). "An Intrusion-Detection Model" (PDF). IEEE Transactions on Software Engineering. SE-13 (2): 222–232. CiteSeerX 10.1.1.102.5127. doi:10.1109/TSE.1987.232894. S2CID 10028835. Archived (PDF) from the original on June 22, 2015.
- ^ Teng, H. S.; Chen, K.; Lu, S. C. (1990). "Adaptive real-time anomaly detection using inductively generated sequential patterns". Proceedings. 1990 IEEE Computer Society Symposium on Research in Security and Privacy (PDF). pp. 278–284. doi:10.1109/RISP.1990.63857. ISBN 978-0-8186-2060-7. S2CID 35632142.
- ^ Jones, Anita K.; Sielken, Robert S. (2000). "Computer System Intrusion Detection: A Survey". Computer Science Technical Report. Department of Computer Science, University of Virginia: 1–25.
- ^ Stojanović, Branka; Božić, Josip; Hofer-Schmitz, Katharina; Nahrgang, Kai; Weber, Andreas; Badii, Atta; Sundaram, Maheshkumar; Jordan, Elliot; Runevic, Joel (January 2021). "Follow the Trail: Machine Learning for Fraud Detection in Fintech Applications". Sensors. 21 (5): 1594. Bibcode:2021Senso..21.1594S. doi:10.3390/s21051594. ISSN 1424-8220. PMC 7956727. PMID 33668773.
- ^ Ahmed, Mohiuddin; Mahmood, Abdun Naser; Islam, Md. Rafiqul (February 2016). "A survey of anomaly detection techniques in financial domain". Future Generation Computer Systems. 55: 278–288. doi:10.1016/j.future.2015.01.001. ISSN 0167-739X. S2CID 204982937.
- ^ Tomek, Ivan (1976). "An Experiment with the Edited Nearest-Neighbor Rule". IEEE Transactions on Systems, Man, and Cybernetics. 6 (6): 448–452. doi:10.1109/TSMC.1976.4309523.
- ^ Smith, M. R.; Martinez, T. (2011). "Improving classification accuracy by identifying and removing instances that should be misclassified" (PDF). The 2011 International Joint Conference on Neural Networks. p. 2690. CiteSeerX 10.1.1.221.1371. doi:10.1109/IJCNN.2011.6033571. ISBN 978-1-4244-9635-8. S2CID 5809822.
- ^ a b c d e f Qasim, Maryam; Verdu, Elena (2023-06-01). "Video anomaly detection system using deep convolutional and recurrent models". Results in Engineering. 18: 101026. doi:10.1016/j.rineng.2023.101026. ISSN 2590-1230. S2CID 257728239.
- ^ Zhang, Tan; Chowdhery, Aakanksha; Bahl, Paramvir (Victor); Jamieson, Kyle; Banerjee, Suman (2015-09-07). "The Design and Implementation of a Wireless Video Surveillance System". Proceedings of the 21st Annual International Conference on Mobile Computing and Networking. MobiCom '15. New York, NY, USA: Association for Computing Machinery. pp. 426–438. doi:10.1145/2789168.2790123. ISBN 978-1-4503-3619-2. S2CID 12310150.
- ^ a b c Gow, Richard; Rabhi, Fethi A.; Venugopal, Srikumar (2018). "Anomaly Detection in Complex Real World Application Systems". IEEE Transactions on Network and Service Management. 15: 83–96. doi:10.1109/TNSM.2017.2771403. hdl:1959.4/unsworks_73660. S2CID 3883483. Retrieved 2023-11-08.
- ^ a b Chatterjee, Ayan; Ahmed, Bestoun S. (August 2022). "IoT anomaly detection methods and applications: A survey". Internet of Things. 19: 100568. arXiv:2207.09092. doi:10.1016/j.iot.2022.100568. ISSN 2542-6605. S2CID 250644468.
- ^ a b Garg, Sahil; Kaur, Kuljeet; Batra, Shalini; Kaddoum, Georges; Kumar, Neeraj; Boukerche, Azzedine (2020-03-01). "A multi-stage anomaly detection scheme for augmenting the security in IoT-enabled applications". Future Generation Computer Systems. 104: 105–118. doi:10.1016/j.future.2019.09.038. ISSN 0167-739X. S2CID 204077191.
- ^ a b c Martí, Luis; Sanchez-Pi, Nayat; Molina, José Manuel; Garcia, Ana Cristina Bicharra (February 2015). "Anomaly Detection Based on Sensor Data in Petroleum Industry Applications". Sensors. 15 (2): 2774–2797. Bibcode:2015Senso..15.2774M. doi:10.3390/s150202774. ISSN 1424-8220. PMC 4367333. PMID 25633599.
- ^ a b Aljameel, Sumayh S.; Alomari, Dorieh M.; Alismail, Shatha; Khawaher, Fatimah; Alkhudhair, Aljawharah A.; Aljubran, Fatimah; Alzannan, Razan M. (August 2022). "An Anomaly Detection Model for Oil and Gas Pipelines Using Machine Learning". Computation. 10 (8): 138. doi:10.3390/computation10080138. ISSN 2079-3197.
- ^ Hubert, Mia; Debruyne, Michiel; Rousseeuw, Peter J. (2018). "Minimum covariance determinant and extensions". WIREs Computational Statistics. 10 (3). arXiv:1709.07045. doi:10.1002/wics.1421. ISSN 1939-5108. S2CID 67227041.
- ^ Hubert, Mia; Debruyne, Michiel (2010). "Minimum covariance determinant". WIREs Computational Statistics. 2 (1): 36–43. doi:10.1002/wics.61. ISSN 1939-0068. S2CID 123086172.
- ^ Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (December 2008). "Isolation Forest". 2008 Eighth IEEE International Conference on Data Mining. pp. 413–422. doi:10.1109/ICDM.2008.17. ISBN 9780769535029. S2CID 6505449.
- ^ Liu, Fei Tony; Ting, Kai Ming; Zhou, Zhi-Hua (March 2012). "Isolation-Based Anomaly Detection". ACM Transactions on Knowledge Discovery from Data. 6 (1): 1–39. doi:10.1145/2133360.2133363. S2CID 207193045.
- ^ Knorr, E. M.; Ng, R. T.; Tucakov, V. (2000). "Distance-based outliers: Algorithms and applications". The VLDB Journal the International Journal on Very Large Data Bases. 8 (3–4): 237–253. CiteSeerX 10.1.1.43.1842. doi:10.1007/s007780050006. S2CID 11707259.
- ^ Ramaswamy, S.; Rastogi, R.; Shim, K. (2000). Efficient algorithms for mining outliers from large data sets. Proceedings of the 2000 ACM SIGMOD international conference on Management of data – SIGMOD '00. p. 427. doi:10.1145/342009.335437. ISBN 1-58113-217-4.
- ^ Angiulli, F.; Pizzuti, C. (2002). Fast Outlier Detection in High Dimensional Spaces. Principles of Data Mining and Knowledge Discovery. Lecture Notes in Computer Science. Vol. 2431. p. 15. doi:10.1007/3-540-45681-3_2. ISBN 978-3-540-44037-6.
- ^ Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; Sander, J. (2000). LOF: Identifying Density-based Local Outliers (PDF). Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD. pp. 93–104. doi:10.1145/335191.335388. ISBN 1-58113-217-4.
- ^ Schölkopf, B.; Platt, J. C.; Shawe-Taylor, J.; Smola, A. J.; Williamson, R. C. (2001). "Estimating the Support of a High-Dimensional Distribution". Neural Computation. 13 (7): 1443–71. CiteSeerX 10.1.1.4.4106. doi:10.1162/089976601750264965. PMID 11440593. S2CID 2110475.
- ^ a b c Hawkins, Simon; He, Hongxing; Williams, Graham; Baxter, Rohan (2002). "Outlier Detection Using Replicator Neural Networks". Data Warehousing and Knowledge Discovery. Lecture Notes in Computer Science. Vol. 2454. pp. 170–180. CiteSeerX 10.1.1.12.3366. doi:10.1007/3-540-46145-0_17. ISBN 978-3-540-44123-6. S2CID 6436930.
- ^ An, J.; Cho, S. (2015). "Variational autoencoder based anomaly detection using reconstruction probability" (PDF). Special Lecture on IE. 2 (1): 1–18. SNUDM-TR-2015-03.
- ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautman; Agarwal, Puneet (22–24 April 2015). Long Short Term Memory Networks for Anomaly Detection in Time Series. ESANN 2015: 23rd European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning. pp. 89–94. ISBN 978-2-87587-015-5.
- ^ Alzubaidi, Laith; Zhang, Jinglan; Humaidi, Amjad J.; Al-Dujaili, Ayad; Duan, Ye; Al-Shamma, Omran; Santamaría, J.; Fadhel, Mohammed A.; Al-Amidie, Muthana; Farhan, Laith (2021-03-31). "Review of deep learning: concepts, CNN architectures, challenges, applications, future directions". Journal of Big Data. 8 (1): 53. doi:10.1186/s40537-021-00444-8. ISSN 2196-1115. PMC 8010506. PMID 33816053.
- ^ Belay, Mohammed Ayalew; Blakseth, Sindre Stenen; Rasheed, Adil; Salvo Rossi, Pierluigi (January 2023). "Unsupervised Anomaly Detection for IoT-Based Multivariate Time Series: Existing Solutions, Performance Analysis and Future Directions". Sensors. 23 (5): 2844. Bibcode:2023Senso..23.2844B. doi:10.3390/s23052844. ISSN 1424-8220. PMC 10007300. PMID 36905048.
- ^ He, Z.; Xu, X.; Deng, S. (2003). "Discovering cluster-based local outliers". Pattern Recognition Letters. 24 (9–10): 1641–1650. Bibcode:2003PaReL..24.1641H. CiteSeerX 10.1.1.20.4242. doi:10.1016/S0167-8655(03)00003-5.
- ^ Campello, R. J. G. B.; Moulavi, D.; Zimek, A.; Sander, J. (2015). "Hierarchical Density Estimates for Data Clustering, Visualization, and Outlier Detection". ACM Transactions on Knowledge Discovery from Data. 10 (1): 5:1–51. doi:10.1145/2733381. S2CID 2887636.
- ^ Lazarevic, A.; Kumar, V. (2005). "Feature bagging for outlier detection". Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining. pp. 157–166. CiteSeerX 10.1.1.399.425. doi:10.1145/1081870.1081891. ISBN 978-1-59593-135-1. S2CID 2054204.
- ^ Nguyen, H. V.; Ang, H. H.; Gopalkrishnan, V. (2010). Mining Outliers with Ensemble of Heterogeneous Detectors on Random Subspaces. Database Systems for Advanced Applications. Lecture Notes in Computer Science. Vol. 5981. p. 368. doi:10.1007/978-3-642-12026-8_29. ISBN 978-3-642-12025-1.
- ^ Kriegel, H. P.; Kröger, P.; Schubert, E.; Zimek, A. (2011). Interpreting and Unifying Outlier Scores. Proceedings of the 2011 SIAM International Conference on Data Mining. pp. 13–24. CiteSeerX 10.1.1.232.2719. doi:10.1137/1.9781611972818.2. ISBN 978-0-89871-992-5.
- ^ Schubert, E.; Wojdanowski, R.; Zimek, A.; Kriegel, H. P. (2012). On Evaluation of Outlier Rankings and Outlier Scores. Proceedings of the 2012 SIAM International Conference on Data Mining. pp. 1047–1058. doi:10.1137/1.9781611972825.90. ISBN 978-1-61197-232-0.
- ^ Zimek, A.; Campello, R. J. G. B.; Sander, J. R. (2014). "Ensembles for unsupervised outlier detection". ACM SIGKDD Explorations Newsletter. 15: 11–22. doi:10.1145/2594473.2594476. S2CID 8065347.
- ^ Zimek, A.; Campello, R. J. G. B.; Sander, J. R. (2014). Data perturbation for outlier detection ensembles. Proceedings of the 26th International Conference on Scientific and Statistical Database Management – SSDBM '14. p. 1. doi:10.1145/2618243.2618257. ISBN 978-1-4503-2722-0.
- ^ "Histogram-based Outlier Score (HBOS): A fast Unsupervised Anomaly Detection Algorithm". https://www.researchgate.net/publication/231614824_Histogram-based_Outlier_Score_HBOS_A_fast_Unsupervised_Anomaly_Detection_Algorithm.
{{cite web}}
:|access-date=
requires|url=
(help); External link in
(help); Missing or empty|website=
|url=
(help) - ^ Kriegel, H. P.; Kröger, P.; Schubert, E.; Zimek, A. (2009). Outlier Detection in Axis-Parallel Subspaces of High Dimensional Data. Advances in Knowledge Discovery and Data Mining. Lecture Notes in Computer Science. Vol. 5476. p. 831. doi:10.1007/978-3-642-01307-2_86. ISBN 978-3-642-01306-5.
- ^ Kriegel, H. P.; Kroger, P.; Schubert, E.; Zimek, A. (2012). Outlier Detection in Arbitrarily Oriented Subspaces. 2012 IEEE 12th International Conference on Data Mining. p. 379. doi:10.1109/ICDM.2012.21. ISBN 978-1-4673-4649-8.
- ^ Zhao, Yue; Nasrullah, Zain; Li, Zheng (2019). "Pyod: A python toolbox for scalable outlier detection" (PDF). Journal of Machine Learning Research. 20. arXiv:1901.01588.
- ^ "FindAnomalies". Mathematica documentation.