Least frequently used
Least Frequently Used (LFU) is a type of cache algorithm used to manage memory within a computer. In its standard form, the system keeps track of the number of times a block is referenced in memory, and when the cache is full and requires more room, it purges the item with the lowest reference frequency.
LFU is sometimes combined with a Least Recently Used algorithm and called LRFU.[1]
Implementation
The simplest method to employ an LFU algorithm is to assign a counter to every block that is loaded into the cache. Each time a reference is made to that block, the counter is increased by one. When the cache reaches capacity and has a new block waiting to be inserted, the system searches for the block with the lowest counter and removes it from the cache; in case of a tie (i.e., two or more keys with the same frequency), the least recently used key is invalidated (see the sketch after the list below).[2]
- Ideal LFU: there is a counter for each item in the catalogue
- Practical LFU: there is a counter only for the items stored in the cache; the counter is forgotten if the item is evicted.
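The counting scheme above can be sketched in a few lines of Python (a minimal illustration, not taken from the cited sources; the class name LFUCache and its get/put methods are assumed for the example): every cached block carries a counter, each reference increments it, and on overflow the block with the lowest counter is evicted, with recency breaking ties.

    from collections import OrderedDict

    class LFUCache:
        """Illustrative LFU cache with least-recently-used tie-breaking."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.counts = {}            # block key -> reference counter
            self.data = OrderedDict()   # block key -> value, oldest access first

        def get(self, key):
            if key not in self.data:
                return None
            self.counts[key] += 1       # every reference bumps the counter
            self.data.move_to_end(key)  # record recency for tie-breaking
            return self.data[key]

        def put(self, key, value):
            if key in self.data:
                self.data[key] = value
                self.get(key)           # count the update as a reference
                return
            if len(self.data) >= self.capacity:
                # Evict the block with the lowest counter; because self.data is
                # kept oldest-access-first, min() returns the least recently
                # used block among those tied for the lowest counter.
                victim = min(self.data, key=lambda k: self.counts[k])
                del self.data[victim]
                del self.counts[victim]
            self.counts[key] = 1        # new blocks start with a low counter
            self.data[key] = value

This corresponds to the practical variant above: the counter exists only while the block is cached and is discarded on eviction.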
Problems
While the LFU method may seem like an intuitive approach to memory management, it is not without faults. Consider an item in memory which is referenced repeatedly for a short period of time and is not accessed again for an extended period of time. Because of that burst of accesses, its counter has increased drastically even though the item will not be used again for a long time. This leaves other blocks, which may actually be used more regularly, susceptible to purging simply because they were accessed in a different pattern.[3]
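The hypothetical LFUCache sketched under Implementation shows the effect (a capacity of 2 is assumed for the example): a brief burst of references inflates one block's counter, and a block that is still in steady use is evicted instead.

    cache = LFUCache(2)
    cache.put("burst", "...")
    for _ in range(100):
        cache.get("burst")      # brief burst: counter climbs to 101
    cache.put("steady", "...")
    cache.get("steady")         # modest use so far: counter is 2
    cache.put("new", "...")     # cache full: "steady" is evicted, while the
                                # no-longer-needed "burst" block stays cached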
Moreover, new items that just entered the cache are subject to being removed very soon again, because they start with a low counter, even though they might be used very frequently after that. Due to major issues like these, an explicit LFU system is fairly uncommon; instead, there are hybrids that utilize LFU concepts.[4]
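The same hypothetical sketch also illustrates this second problem (again with a capacity of 2): a newly admitted block starts with a counter of 1 and becomes the next victim, even if it was about to be used heavily.

    cache = LFUCache(2)
    cache.put("a", "...")
    cache.put("b", "...")
    for _ in range(10):
        cache.get("a")          # "a" and "b" build up high counters
        cache.get("b")
    cache.put("c", "...")       # "c" enters with a counter of 1 ...
    cache.put("d", "...")       # ... and is evicted immediately to admit "d"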
See also
References
1. Donghee Lee; Jongmoo Choi; Jong-Hun Kim; S.H. Noh; Sang Lyul Min; Yookun Cho; Chong Sang Kim (December 2001). "LRFU: a spectrum of policies that subsumes the least recently used and least frequently used policies". IEEE Transactions on Computers. 50 (12): 1352–1361. doi:10.1109/TC.2001.970573. S2CID 2636466.
2. Silvano Maffeis. "Cache Management Algorithms for Flexible Filesystems". ACM SIGMETRICS Performance Evaluation Review. 21 (3). CiteSeerX 10.1.1.48.8399.
3. William Stallings (2012). Operating Systems: Internals and Design Principles (7th ed.).
4. B.T. Zivkov; A.J. Smith (1997). "Disk Caching in Large Database and Timeshared Systems". Proceedings Fifth International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems. doi:10.1109/MASCOT.1997.567612.
External links
- An O(1) algorithm for implementing the LFU cache eviction scheme, 16 August 2010, by Ketan Shah, Anirban Mitra and Dhruv Matani