Count-distinct problem

In computer science, the count-distinct problem [1] (also known in applied mathematics as the cardinality estimation problem) is the problem of finding the number of distinct elements in a data stream with repeated elements. This is a well-known problem with numerous applications. The elements might represent IP addresses of packets passing through a router, elements in a large database, motifs in a DNA sequence, or elements of RFID/sensor networks.


Formal Definition

 Instance: A stream of elements x_1, x_2, …, x_s with repetitions, and an integer m. Let n be the number of distinct elements, namely n = |{x_1, x_2, …, x_s}|, and let these elements be {e_1, e_2, …, e_n}.
 Objective: Find an estimate n̂ of n using only m storage units, where m ≪ n.

An example of an instance for the cardinality estimation problem is the stream: a, b, a, c, d, b, d. For this instance, n = |{a, b, c, d}| = 4.


Naive Solution

The naive solution to the problem is as follows:

 Initialize a counter, c, to zero, c ← 0.
 Initialize an efficient dictionary data structure, D, such as a hash table or search tree, in which insertion and membership can be performed quickly.
 For each element x_i, a membership query is issued.
 If x_i is not a member of D (x_i ∉ D):
   Add x_i to D.
   Increase c by one, c ← c + 1.
 Otherwise (x_i ∈ D), do nothing.
 Output n = c.

As long as the number of distinct elements is not too large, D fits in main memory and an exact answer can be retrieved. However, this approach does not scale for bounded storage, or when the computation performed for each element must be minimized. In such cases, several streaming algorithms have been proposed that use a fixed number of storage units.
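
A minimal runnable version of this procedure in Python, with a built-in set serving as the dictionary structure D (an illustrative sketch; any structure with fast insertion and membership queries works):

 def count_distinct_naive(stream):
     """Exact count-distinct: remember every distinct element seen."""
     D = set()  # the dictionary structure D
     c = 0      # the counter c
     for x in stream:
         if x not in D:  # membership query
             D.add(x)
             c += 1      # c <- c + 1
     return c

 # The example stream a, b, a, c, d, b, d has n = 4 distinct elements.
 print(count_distinct_naive(["a", "b", "a", "c", "d", "b", "d"]))  # prints 4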

Streaming Algorithms

To handle the bounded storage constraint, streaming algorithms use randomization to produce a non-exact estimate of the number of distinct elements, n. State-of-the-art estimators hash every element e_j into a low-dimensional data sketch h(e_j), which can be viewed as a random variable (RV). The different techniques can be classified according to the data sketches they store.

Min/max sketches

Min/max sketches [2] [3] store only the minimum/maximum hashed values. Examples of known min/max sketch estimators: Chassaing et al. [4] present a max sketch, which is the minimum-variance unbiased estimator for the problem. The continuous max sketches estimator [5] is the maximum likelihood estimator. The best known estimator is the HyperLogLog algorithm [6], which offers the best tradeoff between precision and storage size.
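
For illustration, here is a compact sketch of the HyperLogLog idea in Python; the bias-correction constant and the small-range (linear counting) correction follow the published analysis, while the SHA-1-based hash and the parameter b = 10 are illustrative choices:

 import hashlib, math, random

 def hll_estimate(stream, b=10):
     """HyperLogLog with m = 2**b registers."""
     m = 1 << b
     M = [0] * m  # one small register per bucket
     for x in stream:
         v = int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:8], "big")
         j = v & (m - 1)                      # low b bits select a register
         w = v >> b                           # remaining 64 - b bits
         rho = (64 - b) - w.bit_length() + 1  # position of the leftmost 1-bit
         M[j] = max(M[j], rho)
     alpha = 0.7213 / (1 + 1.079 / m)         # bias correction, valid for m >= 128
     E = alpha * m * m / sum(2.0 ** -r for r in M)
     zeros = M.count(0)
     if E <= 2.5 * m and zeros:               # small-range (linear counting) correction
         E = m * math.log(m / zeros)
     return E

 stream = [random.randrange(50000) for _ in range(2 * 10**5)]
 print("exact:", len(set(stream)), "estimate:", round(hll_estimate(stream)))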

The intuition behind such estimators is that each sketch carries information about the desired quantity. For example, when every element e_j is associated with a uniform RV, h(e_j) ~ U(0,1), the expected minimum value of h(e_1), h(e_2), …, h(e_n) is 1/(n + 1). The hash function guarantees that h(e_j) is identical for all the appearances of e_j. Thus, the existence of duplicates does not affect the value of the extreme order statistics.
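
This intuition can be checked with a short simulation. The sketch below inverts E[min] = 1/(n + 1) directly; a single minimum is a noisy, method-of-moments estimate, and practical estimators combine many sketches or keep several order statistics, as in the bottom-m sketches below. The SHA-1-based hash is an illustrative stand-in for a uniform hash function:

 import hashlib, random

 def h(x):
     """Map an element to a pseudo-uniform value in (0, 1); duplicates collide by design."""
     d = hashlib.sha1(str(x).encode()).digest()
     return (int.from_bytes(d[:8], "big") + 1) / (2**64 + 1)

 def min_sketch_estimate(stream):
     m = min(h(x) for x in stream)  # duplicates do not change the minimum
     return 1 / m - 1               # invert E[min] = 1/(n + 1)

 stream = [random.randrange(500) for _ in range(10**5)]
 print("exact:", len(set(stream)), "estimate:", round(min_sketch_estimate(stream)))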

There are estimation techniques other than min/max sketches. The first paper on count-distinct estimation, by Flajolet et al. [7], describes a bit pattern sketch. In this case, the elements are hashed into a bit vector and the sketch holds the logical OR of all hashed values.
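
A minimal version of this bit-pattern idea is sketched below; the correction constant 0.77351 comes from the Flajolet–Martin analysis, the hash is an illustrative stand-in, and a single bit vector estimates n only to within roughly a binary order of magnitude (the original paper averages many such sketches):

 import hashlib, random

 def fm_estimate(stream):
     """Flajolet-Martin bit-pattern sketch: OR one bit per element into a bit vector."""
     sketch = 0
     for x in stream:
         v = int.from_bytes(hashlib.sha1(str(x).encode()).digest()[:8], "big")
         rho = (v & -v).bit_length() - 1 if v else 63  # index of the lowest set bit
         sketch |= 1 << rho                            # logical OR of all hashed values
     r = 0
     while sketch & (1 << r):  # index of the lowest zero bit
         r += 1
     return 2**r / 0.77351     # bias correction from the Flajolet-Martin analysis

 stream = [random.randrange(2000) for _ in range(10**5)]
 print("exact:", len(set(stream)), "estimate:", round(fm_estimate(stream)))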

Bottom-m sketches

Bottom-m sketches [8] are a generalization of min sketches, which maintain the m minimal values, where m ≥ 1. See Cosma et al. [2] for a theoretical overview of count-distinct estimation algorithms, and Metwally et al. [9] for a practical overview with comparative simulation results.
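
The following bounded-memory sketch illustrates the bottom-m approach, using the standard estimator n̂ = (m − 1)/h_(m), where h_(m) is the m-th smallest hash value; the SHA-1-based hash and m = 256 are illustrative choices:

 import hashlib, heapq, random

 def h(x):
     d = hashlib.sha1(str(x).encode()).digest()
     return (int.from_bytes(d[:8], "big") + 1) / (2**64 + 1)

 def bottom_m_estimate(stream, m=256):
     """Bottom-m sketch: keep only the m smallest distinct hash values."""
     heap = []        # max-heap (values negated) of the m smallest hashes seen
     members = set()  # hash values currently in the heap, for duplicate detection
     for x in stream:
         v = h(x)
         if v in members:
             continue               # duplicate element: already accounted for
         if len(heap) < m:
             heapq.heappush(heap, -v)
             members.add(v)
         elif v < -heap[0]:         # smaller than the current m-th smallest
             members.discard(-heapq.heappushpop(heap, -v))
             members.add(v)
     if len(heap) < m:
         return len(heap)           # fewer than m distinct elements: the count is exact
     return (m - 1) / (-heap[0])    # invert the m-th smallest order statistic

 stream = [random.randrange(10**4) for _ in range(10**5)]
 print("exact:", len(set(stream)), "estimate:", round(bottom_m_estimate(stream)))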

Weighted Count-Distinct Problem

In its weighted version, each element x_i is associated with a weight w_i, and the goal is to estimate the total sum of weights. Formally,

 Instance: A stream of weighted elements x_1, x_2, …, x_s with repetitions, and an integer m. Let n be the number of distinct elements, namely n = |{x_1, x_2, …, x_s}|, and let these elements be {e_1, e_2, …, e_n}. Finally, let w_j be the weight of e_j.
 Objective: Find an estimate ŵ of w = Σ_{j=1}^{n} w_j using only m storage units, where m ≪ n.

An example of an instance for the weighted problem is: (a, 3), (b, 4), (a, 3), (c, 2), (d, 3), (b, 4), (d, 3). For this instance, e_1 = a, e_2 = b, e_3 = c, e_4 = d, the weights are w_1 = 3, w_2 = 4, w_3 = 2, w_4 = 3, and Σ w_j = 12.

As an application example, x_1, x_2, …, x_s could be IP packets received by a server. Each packet belongs to one of n IP flows e_1, e_2, …, e_n. The weight w_j can be the load imposed by flow e_j on the server. Thus, w = Σ_{j=1}^{n} w_j represents the total load imposed on the server by all the flows to which packets x_1, x_2, …, x_s belong.


Solving the Weighted Count-Distinct Problem

Any extreme order statistics estimator (min/max sketches) for the unweighted problem can be generalized to an estimator for the weighted problem [10]. For example, the weighted estimator proposed by Cohen et al. [5] can be obtained when the continuous max sketches estimator is extended to solve the weighted problem. In particular, the HyperLogLog algorithm [6] can be extended to solve the weighted problem. The extended HyperLogLog algorithm offers the best performance, in terms of statistical accuracy and memory usage, among all other known algorithms for the weighted problem.
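
To convey how such a generalization can work, the sketch below uses the exponential-variable device from Cohen's continuous framework: a uniform hash u = h(e_j) is mapped to t_j = -ln(u)/w_j, which is exponentially distributed with rate w_j, so the minimum over distinct elements is exponential with rate w = Σ w_j. This is a single-sketch, method-of-moments illustration, not the full scheme of [10]:

 import hashlib, math

 def h(x):
     d = hashlib.sha1(str(x).encode()).digest()
     return (int.from_bytes(d[:8], "big") + 1) / (2**64 + 1)

 def weighted_min_sketch_estimate(stream):
     """Estimate the total weight w of the distinct elements from one minimum.

     Each distinct element e_j contributes t_j = -ln(h(e_j)) / w_j ~ Exp(w_j);
     the minimum over distinct elements is Exp(w), and E[min] = 1/w.
     Duplicates carry the same hash and weight, so they do not change the minimum.
     """
     t_min = min(-math.log(h(x)) / w for x, w in stream)
     return 1 / t_min  # noisy for a single sketch; real schemes combine many

 # The weighted example stream from above, as (element, weight) pairs.
 stream = [("a", 3), ("b", 4), ("a", 3), ("c", 2), ("d", 3), ("b", 4), ("d", 3)]
 print("exact w:", 12, "estimate:", round(weighted_min_sketch_estimate(stream), 1))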



References

  1. Ullman, Jeff; Rajaraman, Anand; Leskovec, Jure. "Mining data streams" (PDF).
  2. Cosma, Ioana A.; Clifford, Peter (2011). "A statistical analysis of probabilistic counting algorithms". Scandinavian Journal of Statistics.
  3. Giroire, Frederic; Fusy, Eric (2007). "Estimating the Number of Active Flows in a Data Stream over a Sliding Window" (PDF). ANALCO.
  4. Chassaing, Philippe; Gerin, Lucas (2006). "Efficient estimation of the cardinality of large data sets". Proceedings of the 4th Colloquium on Mathematics and Computer Science.
  5. Cohen, Edith (1997). "Size-estimation framework with applications to transitive closure and reachability". J. Comput. Syst. Sci. Academic Press, Inc.
  6. Flajolet, Philippe; Fusy, Eric; Gandouet, Olivier; Meunier, Frederic (2007). "HyperLogLog: the analysis of a near-optimal cardinality estimation algorithm". Analysis of Algorithms (AofA) 2007.
  7. Flajolet, Philippe; Martin, G. Nigel (1985). "Probabilistic counting algorithms for data base applications". J. Comput. Syst. Sci. Academic Press, Inc.
  8. Cohen, Edith; Kaplan, Haim (2008). "Tighter estimation using bottom k sketches" (PDF). PVLDB.
  9. Metwally, Ahmed; Agrawal, Divyakant; Abbadi, Amr El (2008). "Why go logarithmic if we can go linear? Towards effective distinct counting of search traffic". Proceedings of the 11th International Conference on Extending Database Technology.
  10. Cohen, Reuven; Katzir, Liran; Yehezkel, Aviv (2014). "A Unified Scheme for Generalizing Cardinality Estimators to Sum Aggregation". Information Processing Letters.