Associative array
In computer science, an associative array, map, symbol table, or dictionary is an abstract data type that stores a collection of (key, value) pairs, such that each possible key appears at most once in the collection. In mathematical terms, an associative array is a function with finite domain.[1] It supports 'lookup', 'remove', and 'insert' operations.
The dictionary problem is the classic problem of designing efficient data structures that implement associative arrays.[2] The two major solutions to the dictionary problem are hash tables and search trees.[3][4][5][6] It is sometimes also possible to solve the problem using directly addressed arrays, binary search trees, or other more specialized structures.
Many programming languages include associative arrays as primitive data types, while many other languages provide software libraries that support associative arrays. Content-addressable memory is a form of direct hardware-level support for associative arrays.
Associative arrays have many applications including such fundamental programming patterns as memoization and the decorator pattern.[7]
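Memoization is a direct application of the lookup-or-insert pattern: results of earlier calls are cached under their arguments as keys. A minimal sketch in Python follows; the cache variable and the Fibonacci example are illustrative, not part of the article's sources.

```python
# Memoization with an associative array: cache results keyed by the argument.
fib_cache = {}  # associative array: argument -> previously computed result

def fib(n):
    """Return the n-th Fibonacci number, reusing cached results."""
    if n in fib_cache:          # lookup
        return fib_cache[n]
    result = n if n < 2 else fib(n - 1) + fib(n - 2)
    fib_cache[n] = result       # insert
    return result

print(fib(30))  # 832040; later calls with the same argument hit the cache
```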
The name does not come from the associative property known in mathematics. Rather, it arises from the association of values with keys. It is not to be confused with associative processors.
Operations
In an associative array, the association between a key and a value is often known as a "mapping"; the same word may also be used to refer to the process of creating a new association.
The operations that are usually defined for an associative array are:[3][4][8]
- Insert or put: add a new (key, value) pair to the collection, mapping the key to its new value. Any existing mapping is overwritten. The arguments to this operation are the key and the value.
- Remove or delete: remove a (key, value) pair from the collection, unmapping a given key from its value. The argument to this operation is the key.
- Lookup, find, or get: find the value (if any) that is bound to a given key. The argument to this operation is the key, and the value is returned from the operation. If no value is found, some lookup functions raise an exception, while others return a default value (such as zero, null, or a specific value passed to the constructor).
Associative arrays may also include other operations such as determining the number of mappings or constructing an iterator to loop over all the mappings. For such operations, the order in which the mappings are returned is usually implementation-defined.
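As a concrete illustration, the following sketch exercises these operations with Python's built-in dict, one common realization of the abstract type; the phone-book data is illustrative.

```python
# The basic operations on an associative array, shown with Python's dict type.
phone_book = {}

phone_book["Sally Smart"] = "555-9999"   # insert: map a key to a value
phone_book["Sally Smart"] = "555-0000"   # insert again: existing mapping is overwritten
del phone_book["Sally Smart"]            # remove: unmap the key

phone_book["John Doe"] = "555-1212"
print(phone_book.get("John Doe"))        # lookup: returns "555-1212"
print(phone_book.get("Nobody", "n/a"))   # lookup miss: returns a default instead of failing

print(len(phone_book))                   # number of mappings
for name, number in phone_book.items():  # iterate over all mappings
    print(name, number)
```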
A multimap generalizes an associative array by allowing multiple values to be associated with a single key.[9] A bidirectional map is a related abstract data type in which the mappings operate in both directions: each value must be associated with a unique key, and a second lookup operation takes a value as an argument and looks up the key associated with that value.
Properties
The operations of the associative array should satisfy various properties:[8]

- lookup(k, insert(j, v, D)) = if k == j then v else lookup(k, D)
- lookup(k, new()) = fail, where fail is an exception or default value
- remove(k, insert(j, v, D)) = if k == j then remove(k, D) else insert(j, v, remove(k, D))
- remove(k, new()) = new()

where k and j are keys, v is a value, D is an associative array, and new() creates a new, empty associative array.
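These axioms read insert and remove as producing a new array rather than mutating one in place. A small Python sketch under that reading; the helper functions and the FAIL sentinel are illustrative.

```python
# Purely functional reading of the axioms: each operation returns a new dictionary.
FAIL = object()  # stand-in for the 'fail' result

def new():
    return {}

def insert(j, v, D):
    return {**D, j: v}            # copy D, then map j to v

def remove(k, D):
    return {j: v for j, v in D.items() if j != k}

def lookup(k, D):
    return D.get(k, FAIL)

D = insert("a", 1, insert("b", 2, new()))
assert lookup("a", insert("a", 9, D)) == 9                 # k == j case
assert lookup("a", insert("b", 7, D)) == lookup("a", D)    # k != j case
assert lookup("a", new()) is FAIL
assert remove("a", new()) == new()
```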
Example
Suppose that the set of loans made by a library is represented in a data structure. Each book in a library may be checked out by one patron at a time. However, a single patron may be able to check out multiple books. Therefore, the information about which books are checked out to which patrons may be represented by an associative array, in which the books are the keys and the patrons are the values. Using notation from Python or JSON, the data structure would be:
{
"Pride and Prejudice": "Alice",
"Wuthering Heights": "Alice",
"Great Expectations": "John"
}
A lookup operation on the key "Great Expectations" would return "John". If John returns his book, that would cause a deletion operation, and if Pat checks out a book, that would cause an insertion operation, leading to a different state:
{
"Pride and Prejudice": "Alice",
"The Brothers Karamazov": "Pat",
"Wuthering Heights": "Alice"
}
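The transition between the two states above amounts to one lookup, one deletion, and one insertion. A brief Python rendering; the variable name is illustrative.

```python
loans = {
    "Pride and Prejudice": "Alice",
    "Wuthering Heights": "Alice",
    "Great Expectations": "John",
}

print(loans["Great Expectations"])       # lookup -> "John"
del loans["Great Expectations"]          # John returns his book (deletion)
loans["The Brothers Karamazov"] = "Pat"  # Pat checks out a book (insertion)
print(loans)                             # matches the second state shown above
```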
Implementation
For dictionaries with very few mappings, it may make sense to implement the dictionary using an association list, which is a linked list of mappings. With this implementation, the time to perform the basic dictionary operations is linear in the total number of mappings. However, it is easy to implement and the constant factors in its running time are small.[3][10]
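A minimal association-list sketch in Python, assuming nothing about the keys beyond equality; the class and method names are illustrative.

```python
class AssociationList:
    """Dictionary as a list of (key, value) pairs; lookups are linear searches."""

    def __init__(self):
        self.pairs = []                     # newest pair sits at the front

    def insert(self, key, value):
        self.pairs.insert(0, (key, value))  # O(1): newer pairs shadow older ones

    def lookup(self, key, default=None):
        for k, v in self.pairs:             # linear search, O(n)
            if k == key:
                return v
        return default

    def remove(self, key):
        self.pairs = [(k, v) for k, v in self.pairs if k != key]

phones = AssociationList()
phones.insert("Sally Smart", "555-9999")
print(phones.lookup("Sally Smart"))  # 555-9999
```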
Another very simple implementation technique, usable when the keys are restricted to a narrow range, is direct addressing into an array: the value for a given key k is stored at the array cell A[k], or if there is no mapping for k then the cell stores a special sentinel value that indicates the lack of a mapping. This technique is simple and fast, with each dictionary operation taking constant time. However, the space requirement for this structure is the size of the entire keyspace, making it impractical unless the keyspace is small.[5]
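A sketch of direct addressing for small integer keys, with None as the sentinel for "no mapping"; the key range of 0–99 is an assumption made for the example.

```python
KEYSPACE = 100                 # keys are assumed to lie in range(100)
table = [None] * KEYSPACE      # None is the sentinel meaning "no mapping"

def insert(key, value):
    table[key] = value         # O(1): the key is the array index

def lookup(key):
    return table[key]          # O(1); returns None if the key is unmapped

def remove(key):
    table[key] = None

insert(42, "meaning")
print(lookup(42))   # "meaning"
print(lookup(7))    # None: no mapping for key 7
```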
The two major approaches for implementing dictionaries are a hash table or a search tree.[3][4][5][6]
Hash table implementations
The most frequently used general-purpose implementation of an associative array is with a hash table: an array combined with a hash function that separates each key into a separate "bucket" of the array. The basic idea behind a hash table is that accessing an element of an array via its index is a simple, constant-time operation. Therefore, the average overhead of an operation for a hash table is only the computation of the key's hash, combined with accessing the corresponding bucket within the array. As such, hash tables usually perform in O(1) time, and usually outperform alternative implementations.
Hash tables must be able to handle collisions: the mapping by the hash function of two different keys to the same bucket of the array. The two most widespread approaches to this problem are separate chaining and open addressing.[3][4][5][11] In separate chaining, the array does not store the value itself but stores a pointer to another container, usually an association list, that stores all the values matching the hash. By contrast, in open addressing, if a hash collision is found, the table seeks an empty spot in an array to store the value in a deterministic manner, usually by looking at the next immediate position in the array.
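A compact separate-chaining sketch in Python, using a fixed number of buckets and the built-in hash(); the bucket count and names are illustrative.

```python
class ChainedHashTable:
    """Hash table with separate chaining: each bucket is a small list of (key, value) pairs."""

    def __init__(self, buckets=8):
        self.buckets = [[] for _ in range(buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def insert(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # new key or collision: chain it

    def lookup(self, key, default=None):
        for k, v in self._bucket(key):   # only the one bucket is searched
            if k == key:
                return v
        return default

    def remove(self, key):
        idx = hash(key) % len(self.buckets)
        self.buckets[idx] = [(k, v) for k, v in self.buckets[idx] if k != key]

t = ChainedHashTable()
t.insert("Sally Smart", "555-9999")
print(t.lookup("Sally Smart"))  # 555-9999
```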
Open addressing has a lower cache miss ratio than separate chaining when the table is mostly empty. However, as the table becomes filled with more elements, open addressing's performance degrades exponentially. Additionally, separate chaining uses less memory in most cases, unless the entries are very small (less than four times the size of a pointer).
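For contrast, a minimal open-addressing sketch with linear probing, omitting resizing and deletion for brevity; the fixed table size is an assumption and the sketch assumes the table never fills.

```python
SIZE = 16
slots = [None] * SIZE          # each slot holds a (key, value) pair or None

def probe_insert(key, value):
    i = hash(key) % SIZE
    while slots[i] is not None and slots[i][0] != key:
        i = (i + 1) % SIZE     # collision: look at the next position
    slots[i] = (key, value)

def probe_lookup(key, default=None):
    i = hash(key) % SIZE
    while slots[i] is not None:
        if slots[i][0] == key:
            return slots[i][1]
        i = (i + 1) % SIZE
    return default

probe_insert("John Doe", "555-1212")
print(probe_lookup("John Doe"))  # 555-1212
```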
Tree implementations
Self-balancing binary search trees

Another common approach is to implement an associative array with a self-balancing binary search tree, such as an AVL tree or a red–black tree.[12]
Compared to hash tables, these structures have both strengths and weaknesses. The worst-case performance of self-balancing binary search trees is significantly better than that of a hash table, with a time complexity in big O notation of O(log n). This is in contrast to hash tables, whose worst-case performance involves all elements sharing a single bucket, resulting in O(n) time complexity. In addition, and like all binary search trees, self-balancing binary search trees keep their elements in order. Thus, traversing their elements follows a least-to-greatest pattern, whereas traversing a hash table can result in elements being in seemingly random order. Because they are in order, tree-based maps can also satisfy range queries (finding all values between two bounds), whereas a hash map can only find exact values. However, hash tables have a much better average-case time complexity of O(1) than self-balancing binary search trees, and their worst-case performance is highly unlikely when a good hash function is used.
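The range-query advantage can be illustrated with the Python standard library: a sorted key sequence stands in for a balanced tree's in-order structure, and bisect locates all keys between two bounds without scanning everything. The data and bounds are illustrative.

```python
import bisect

# Sorted keys play the role of a tree's in-order key sequence.
keys = ["Great Expectations", "Pride and Prejudice", "Wuthering Heights"]
values = {"Great Expectations": "John",
          "Pride and Prejudice": "Alice",
          "Wuthering Heights": "Alice"}

lo = bisect.bisect_left(keys, "H")   # index of the first key >= "H"
hi = bisect.bisect_right(keys, "Q")  # index of the first key > "Q"
for k in keys[lo:hi]:                # all keys in the range ["H", "Q"]
    print(k, "->", values[k])        # Pride and Prejudice -> Alice
```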
A self-balancing binary search tree can be used to implement the buckets for a hash table that uses separate chaining. This allows for average-case constant lookup, but assures a worst-case performance of O(log n). However, this introduces extra complexity into the implementation and may cause even worse performance for smaller hash tables, where the time spent inserting into and balancing the tree is greater than the time needed to perform a linear search on all elements of a linked list or similar data structure.[13][14]
Other trees
Associative arrays may also be stored in unbalanced binary search trees or in data structures specialized to a particular type of keys such as radix trees, tries, Judy arrays, or van Emde Boas trees, though the relative performance of these implementations varies. For instance, Judy trees have been found to perform less efficiently than hash tables, while carefully selected hash tables generally perform more efficiently than adaptive radix trees, with potentially greater restrictions on the data types they can handle.[15] The advantages of these alternative structures come from their ability to handle additional associative array operations, such as finding the mapping whose key is the closest to a queried key when the query is absent in the set of mappings.
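A bare-bones trie sketch for string keys, one node per character; the class and function names are illustrative and no balancing or compression is attempted.

```python
class TrieNode:
    """One node per character; has_value marks the end of a stored key."""
    def __init__(self):
        self.children = {}
        self.value = None
        self.has_value = False

def trie_insert(root, key, value):
    node = root
    for ch in key:
        node = node.children.setdefault(ch, TrieNode())
    node.value, node.has_value = value, True

def trie_lookup(root, key, default=None):
    node = root
    for ch in key:
        if ch not in node.children:
            return default
        node = node.children[ch]
    return node.value if node.has_value else default

root = TrieNode()
trie_insert(root, "Sally Smart", "555-9999")
print(trie_lookup(root, "Sally Smart"))   # 555-9999
print(trie_lookup(root, "Sally"))         # None: only a prefix, not a stored key
```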
Comparison
| Underlying data structure | Lookup or removal (average) | Lookup or removal (worst case) | Insertion (average) | Insertion (worst case) | Ordered |
|---|---|---|---|---|---|
| Hash table | O(1) | O(n) | O(1) | O(n) | No |
| Self-balancing binary search tree | O(log n) | O(log n) | O(log n) | O(log n) | Yes |
| Unbalanced binary search tree | O(log n) | O(n) | O(log n) | O(n) | Yes |
| Sequential container of key–value pairs (e.g. association list) | O(n) | O(n) | O(1) | O(1) | No |
Ordered dictionary
The basic definition of a dictionary does not mandate an order. To guarantee a fixed order of enumeration, ordered versions of the associative array are often used. There are two senses of an ordered dictionary:

- The order of enumeration is always deterministic for a given set of keys by sorting. This is the case for tree-based implementations, one representative being the <map> container of C++.[16]
- The order of enumeration is key-independent and is instead based on the order of insertion. This is the case for the "ordered dictionary" in .NET Framework, the LinkedHashMap of Java and Python.[17][18][19]
The latter is more common. Such ordered dictionaries can be implemented using an association list, by overlaying a doubly linked list on top of a normal dictionary, or by moving the actual data out of the sparse (unordered) array and into a dense insertion-ordered one.
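Both senses can be seen from Python: the built-in dict enumerates keys in insertion order (guaranteed since Python 3.7), while sorting the keys gives the key-dependent order that a tree-based map would produce. The data is illustrative.

```python
d = {}
d["Wuthering Heights"] = "Alice"
d["Great Expectations"] = "John"
d["Pride and Prejudice"] = "Alice"

print(list(d))    # insertion order: Wuthering Heights, Great Expectations, Pride and Prejudice
print(sorted(d))  # sorted key order, as a tree-based map would enumerate them
```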
Language support
Associative arrays can be implemented in any programming language as a package and many language systems provide them as part of their standard library. In some languages, they are not only built into the standard system, but have special syntax, often using array-like subscripting.
Built-in syntactic support for associative arrays was introduced in 1969 by SNOBOL4, under the name "table". TMG offered tables with string keys and integer values. MUMPS made multi-dimensional associative arrays, optionally persistent, its key data structure. SETL supported them as one possible implementation of sets and maps. Most modern scripting languages, starting with AWK and including Rexx, Perl, PHP, Tcl, JavaScript, Maple, Python, Ruby, Wolfram Language, Go, and Lua, support associative arrays as a primary container type. In many more languages, they are available as library functions without special syntax.
In Smalltalk, Objective-C, .NET,[20] Python, REALbasic, Swift, VBA and Delphi[21] they are called dictionaries; in Perl, Ruby and Seed7 they are called hashes; in C++, C#, Java, Go, Clojure, Scala, OCaml, Haskell they are called maps (see map (C++), unordered_map (C++), and Map); in Common Lisp and Windows PowerShell, they are called hash tables (since both typically use this implementation); in Maple and Lua, they are called tables. In PHP and R, all arrays can be associative, except that the keys are limited to integers and strings. In JavaScript (see also JSON), all objects behave as associative arrays with string-valued keys, while the Map and WeakMap types take arbitrary objects as keys. In Lua, they are used as the primitive building block for all data structures. In Visual FoxPro, they are called Collections. The D language also supports associative arrays.[22]
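As one instance of the array-like subscript syntax mentioned above, a dictionary literal and lookups in Python; the phone-book data is illustrative.

```python
phone_book = {
    "Sally Smart": "555-9999",
    "John Doe": "555-1212",
    "J. Random Hacker": "553-1337",
}

print(phone_book["Sally Smart"])        # subscript lookup -> "555-9999"
phone_book["Sally Smart"] = "555-0000"  # subscript assignment rebinds the key
```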
Permanent storage
Many programs using associative arrays will need to store that data in a more permanent form, such as a computer file. A common solution to this problem is a generalized concept known as archiving or serialization, which produces a text or binary representation of the original objects that can be written directly to a file. This is most commonly implemented in the underlying object model, like .Net or Cocoa, which includes standard functions that convert the internal data into text. The program can create a complete text representation of any group of objects by calling these methods, which are almost always already implemented in the base associative array class.[23]
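A minimal serialization round trip using Python's standard json module; the file name is illustrative.

```python
import json

phone_book = {"Sally Smart": "555-9999", "John Doe": "555-1212"}

with open("phone_book.json", "w") as f:
    json.dump(phone_book, f)    # archive the associative array as text

with open("phone_book.json") as f:
    restored = json.load(f)     # rebuild it from the file

assert restored == phone_book
```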
For programs that use very large data sets, this sort of individual file storage is not appropriate, and a database management system (DB) is required. Some DB systems natively store associative arrays by serializing the data and then storing that serialized data and the key. Individual arrays can then be loaded or saved from the database using the key to refer to them. These key–value stores have been used for many years and have a history as long as that of the more common relational databases (RDBs), but a lack of standardization, among other reasons, limited their use to certain niche roles. RDBs were used for these roles in most cases, although saving objects to an RDB can be complicated, a problem known as object-relational impedance mismatch.
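The flavor of such a store can be tried with the dbm module from the Python standard library, which exposes a simple on-disk, string-keyed key–value store; the file name is illustrative.

```python
import dbm

# A simple on-disk key-value store from the Python standard library.
with dbm.open("loans.db", "c") as db:  # 'c' creates the file if it does not exist
    db["Great Expectations"] = "John"  # keys and values are stored as bytes
    print(db["Great Expectations"])    # b'John'
```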
After approximately 2010, the need for high-performance databases suitable for cloud computing and more closely matching the internal structure of the programs using them led to a renaissance in the key–value store market. These systems can store and retrieve associative arrays in a native fashion, which can greatly improve performance in common web-related workflows.
See also

- Tuple
- Function (mathematics)

References
1. Collins, Graham; Syme, Donald (1995). "A theory of finite maps". Higher Order Logic Theorem Proving and Its Applications. Lecture Notes in Computer Science. Vol. 971. pp. 122–137. doi:10.1007/3-540-60275-5_61. ISBN 978-3-540-60275-0.
2. Andersson, Arne (1989). "Optimal Bounds on the Dictionary Problem". Proc. Symposium on Optimal Algorithms. Lecture Notes in Computer Science. Vol. 401. Springer Verlag. pp. 106–114. doi:10.1007/3-540-51859-2_10. ISBN 978-3-540-51859-4.
3. Goodrich, Michael T.; Tamassia, Roberto (2006), "9.1 The Map Abstract Data Type", Data Structures & Algorithms in Java (4th ed.), Wiley, pp. 368–371.
4. Mehlhorn, Kurt; Sanders, Peter (2008), "4 Hash Tables and Associative Arrays", Algorithms and Data Structures: The Basic Toolbox (PDF), Springer, pp. 81–98, archived (PDF) from the original on 2014-08-02.
5. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001), "11 Hash Tables", Introduction to Algorithms (2nd ed.), MIT Press and McGraw-Hill, pp. 221–252, ISBN 0-262-03293-7.
6. Dietzfelbinger, M.; Karlin, A.; Mehlhorn, K.; Meyer auf der Heide, F.; Rohnert, H.; Tarjan, R. E. (1994). "Dynamic Perfect Hashing: Upper and Lower Bounds". SIAM J. Comput. 23 (4): 738–761. doi:10.1137/S0097539791194094. Archived 2016-03-04 at the Wayback Machine. http://portal.acm.org/citation.cfm?id=182370
7. Goodrich & Tamassia (2006), pp. 597–599.
8. Black, Paul E.; Stewart, Rob (2 November 2020). "dictionary". Dictionary of Algorithms and Data Structures. Retrieved 26 January 2022.
9. Goodrich & Tamassia (2006), pp. 389–397.
10. "When should I use a hash table instead of an association list?". lisp-faq/part2. 1996-02-20.
11. Klammer, F.; Mazzolini, L. (2006), "Pathfinders for associative maps", Ext. Abstracts GIS-l 2006, GIS-I, pp. 71–74.
12. Adams, Joel; Nyhoff, Larry. "Trees in STL". Quote: "The Standard Template library ... some of its containers -- the set<T>, map<T1, T2>, multiset<T>, and multimap<T1, T2> templates -- are generally built using a special kind of self-balancing binary search tree called a red–black tree."
13. Knuth, Donald (1998). The Art of Computer Programming. Vol. 3: Sorting and Searching (2nd ed.). Addison-Wesley. pp. 513–558. ISBN 0-201-89685-0.
14. Probst, Mark (2010-04-30). "Linear vs Binary Search". Retrieved 2016-11-20.
15. Alvarez, Victor; Richter, Stefan; Chen, Xiao; Dittrich, Jens (April 2015). "A comparison of adaptive radix trees and hash tables". 2015 IEEE 31st International Conference on Data Engineering. Seoul, South Korea: IEEE. pp. 1227–1238. doi:10.1109/ICDE.2015.7113370. ISBN 978-1-4799-7964-6. S2CID 17170456.
16. "std::map". en.cppreference.com.
17. "OrderedDictionary Class (System.Collections.Specialized)". MS Docs.
18. "LinkedHashMap". Oracle Java documentation.
19. "collections — Container datatypes — Python 3.9.0a3 documentation". docs.python.org.
20. "Dictionary<TKey, TValue> Class". MSDN.
21. "System.Generics.Collections.TDictionary - RAD Studio API Documentation". docwiki.embarcadero.com. Retrieved 2017-04-18.
22. "Associative Arrays, the D programming language". Digital Mars.
23. "Archives and Serializations Programming Guide", Apple Inc., 2012.