Approximate computing
Revision as of 21:00, 12 June 2023
Approximate computing is an emerging paradigm for energy-efficient and/or high-performance design.[1] It encompasses a broad range of computation techniques that return a possibly inaccurate result rather than a guaranteed accurate one, and that can be used for applications where an approximate result is sufficient.[2] One example of such a situation is a search engine, where no exact answer may exist for a given query and many answers may therefore be acceptable. Similarly, occasional dropping of some frames in a video application can go undetected because of the perceptual limitations of humans. Approximate computing is based on the observation that in many scenarios, although exact computation requires a large amount of resources, allowing bounded approximation can provide disproportionate gains in performance and energy while still achieving acceptable result accuracy. For example, in the k-means clustering algorithm, allowing only a 5% loss in classification accuracy can provide a 50-fold energy saving compared to fully accurate classification.
The key requirement in approximate computing is that approximation can be introduced only in non-critical data, since approximating critical data (e.g., control operations) can lead to disastrous consequences, such as a program crash or erroneous output.
Strategies
Several strategies can be used for performing approximate computing.
- Approximate circuits
- Approximate arithmetic circuits:[3] adders,[4][5] multipliers[6] and other logical circuits can reduce hardware overhead.[7][8][9] For example, an approximate multi-bit adder can ignore the carry chain and thus allow all of its sub-adders to perform the addition operation in parallel.
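The carry-ignoring idea can be sketched in software as follows. This is a minimal illustration, not any of the cited adder designs; the block width and word size are arbitrary choices for the example.

```python
def approximate_add(a: int, b: int, block_bits: int = 4, width: int = 16) -> int:
    """Add two unsigned integers with a carry-ignoring block adder.

    The operands are split into independent sub-adders of block_bits
    each; carries between blocks are discarded, which is what lets all
    blocks compute in parallel in a hardware realization.
    """
    mask = (1 << block_bits) - 1
    result = 0
    for shift in range(0, width, block_bits):
        block_sum = ((a >> shift) & mask) + ((b >> shift) & mask)
        result |= (block_sum & mask) << shift  # drop the inter-block carry
    return result

print(approximate_add(0b0111, 0b0001))  # 8: carry stays inside the block, exact
print(approximate_add(0b1111, 0b0001))  # 0: the carry out of block 0 is lost (exact: 16)
```

Errors occur only when a carry would cross a block boundary, which is why such adders can trade occasional large errors for a much shorter critical path.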
- Approximate storage and memory
- Instead of storing data values exactly, they can be stored approximately, e.g., by truncating the lower bits in floating-point data. Another method is to accept less reliable memory. For this, in DRAM[10] and eDRAM, refresh rates can be lowered or controlled.[11] In SRAM, the supply voltage can be lowered[12] or controlled.[13] Approximate storage can also be applied to reduce MRAM's high write energy consumption.[14] In general, any error detection and correction mechanisms should be disabled.
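Lower-bit truncation can be sketched in software on an IEEE-754 double; the number of dropped bits here is an illustrative choice, not taken from any cited scheme.

```python
import struct

def truncate_mantissa(x: float, drop_bits: int) -> float:
    """Zero the drop_bits least significant mantissa bits of a float64.

    An IEEE-754 double has a 52-bit mantissa; clearing low-order bits
    stores the value approximately, so fewer bits need to be kept.
    """
    # Reinterpret the float's bytes as a 64-bit unsigned integer.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    bits &= ~((1 << drop_bits) - 1)  # clear the low mantissa bits
    return struct.unpack('<d', struct.pack('<Q', bits))[0]

# Dropping 40 of the 52 mantissa bits keeps only 12, bounding the
# relative error at roughly 2**-12 while shrinking the stored payload.
pi_approx = truncate_mantissa(3.141592653589793, 40)
```

Truncation always rounds the magnitude toward zero, so the error is one-sided and bounded by the weight of the highest dropped bit.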
- Software-level approximation
- There are several ways to approximate at the software level. Memoization or fuzzy memoization (the use of a vector database for approximate retrieval from a cache) can be applied. Some iterations of loops can be skipped (termed loop perforation) to achieve a result faster. Some tasks can also be skipped, for example when a run-time condition suggests that they are not going to be useful (task skipping). Monte Carlo algorithms and randomized algorithms trade correctness for execution-time guarantees.[15] The computation can also be reformulated according to paradigms that are easily accelerated on specialized hardware, e.g., a neural processing unit.[16]
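Loop perforation can be sketched as follows; the skip factor and the choice of a mean computation are illustrative, not a specific published scheme.

```python
def mean_perforated(data, skip_factor=2):
    """Approximate the mean by visiting only every skip_factor-th element.

    This is loop perforation: iterations are skipped to trade accuracy
    for speed, which works well when elements contribute similarly.
    """
    sample = data[::skip_factor]
    return sum(sample) / len(sample)

values = list(range(1000))
# The exact mean is 499.5; the perforated loop does half the work
# and returns 499.0 for this data.
print(mean_perforated(values, skip_factor=2))
```

A larger skip factor saves more work at the cost of a noisier estimate, which is exactly the quality-versus-effort knob that perforation exposes.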
- Approximate system
- In an approximate system,[17] different subsystems such as the processor, memory, sensor, and communication modules are synergistically approximated to obtain a much better system-level quality-energy (Q-E) trade-off curve than individual approximations of each subsystem would provide.
Application areas
Approximate computing has been used in a variety of domains where the applications are error-tolerant, such as multimedia processing, machine learning, signal processing, and scientific computing. Therefore, approximate computing is mostly driven by applications that are related to human perception/cognition and have inherent error resilience. Many of these applications are based on statistical or probabilistic computation, so different approximations can be made to better suit the desired objectives.[18] One notable application in machine learning is Google's use of this approach in its Tensor Processing Units (TPUs, custom ASICs).[19]
Derived paradigms
The main issue in approximate computing is identifying the sections of an application that can be approximated. In large-scale applications, it is common for people with expertise in approximate computing techniques to lack sufficient expertise in the application domain (and vice versa). To address this problem, programming paradigms[20] have been proposed. They all have in common a clear role separation between the application programmer and the application domain expert. These approaches allow the most common optimizations and approximate computing techniques to spread.
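The role separation these paradigms aim for can be illustrated with a hypothetical annotation mechanism; the `approximable` decorator and every name below are invented for this sketch and do not correspond to EnerJ or any specific framework.

```python
import functools

_APPROX_ENABLED = True  # flipped by an autotuner or energy manager (not shown)

def approximable(substitute):
    """Mark a function as safe to approximate, naming its cheap substitute.

    The domain expert only declares *what* may be approximated; the
    approximation framework decides *whether* to use the substitute.
    """
    def decorator(exact_fn):
        @functools.wraps(exact_fn)
        def wrapper(*args, **kwargs):
            fn = substitute if _APPROX_ENABLED else exact_fn
            return fn(*args, **kwargs)
        return wrapper
    return decorator

def sampled_mean(xs):
    """Cheap substitute: average only every 10th element."""
    s = xs[::10]
    return sum(s) / len(s)

@approximable(substitute=sampled_mean)
def mean(xs):
    """Exact version, written by the application domain expert."""
    return sum(xs) / len(xs)

print(mean(list(range(100))))  # 45.0 via the substitute (exact: 49.5)
```

The annotation carries the domain expert's knowledge (this result tolerates error) into a form the approximation expert's tooling can exploit, which is the separation of concerns the proposed paradigms share.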
References
- ^ J. Han and M. Orshansky, "Approximate computing: An emerging paradigm for energy-efficient design", in the 18th IEEE European Test Symposium, pp. 1-6, 2013.
- ^ A. Sampson, et al. "EnerJ: Approximate data types for safe and general low-power computation", In ACM SIGPLAN Notices, vol. 46, no. 6, 2011.
- ^ Jiang et al., "Approximate Arithmetic Circuits: A Survey, Characterization, and Recent Applications", Proceedings of the IEEE, vol. 108, no. 12, pp. 2108–2135, 2020.
- ^ J. Echavarria, et al. "FAU: Fast and Error-Optimized Approximate Adder Units on LUT-Based FPGAs", FPT, 2016.
- ^ J. Miao, et al. "Modeling and synthesis of quality-energy optimal approximate adders", ICCAD, 2012
- ^ Rehman, Semeen; El-Harouni, Walaa; Shafique, Muhammad; Kumar, Akash; Henkel, Jörg (2016-11-07). Architectural-space exploration of approximate multipliers. ACM. p. 80. doi:10.1145/2966986.2967005. ISBN 9781450344661. S2CID 5326133.
- ^ S. Venkataramani, et al. "SALSA: systematic logic synthesis of approximate circuits", DAC, 2012.
- ^ J. Miao, et al. "Approximate logic synthesis under general error magnitude and frequency constraints", ICCAD, 2013
- ^ R. Hegde et al. "Energy-efficient signal processing via algorithmic noise-tolerance", ISLPED, 1999.
- ^ Raha, A.; Sutar, S.; Jayakumar, H.; Raghunathan, V. (July 2017). "Quality Configurable Approximate DRAM". IEEE Transactions on Computers. 66 (7): 1172–1187. doi:10.1109/TC.2016.2640296. ISSN 0018-9340.
- ^ Kim, Yongjune; Choi, Won Ho; Guyot, Cyril; Cassuto, Yuval (December 2019). "On the Optimal Refresh Power Allocation for Energy-Efficient Memories". 2019 IEEE Global Communications Conference (GLOBECOM). Waikoloa, HI, USA: IEEE: 1–6. arXiv:1907.01112. doi:10.1109/GLOBECOM38437.2019.9013465. ISBN 978-1-7281-0962-6. S2CID 195776538.
- ^ Frustaci, Fabio; Blaauw, David; Sylvester, Dennis; Alioto, Massimo (June 2016). "Approximate SRAMs With Dynamic Energy-Quality Management". IEEE Transactions on Very Large Scale Integration (VLSI) Systems. 24 (6): 2128–2141. doi:10.1109/TVLSI.2015.2503733. ISSN 1063-8210. S2CID 8051173.
- ^ Kim, Yongjune; Kang, Mingu; Varshney, Lav R.; Shanbhag, Naresh R. (2018). "Generalized Water-filling for Source-aware Energy-efficient SRAMs". IEEE Transactions on Communications. 66 (10): 4826–4841. arXiv:1710.07153. doi:10.1109/TCOMM.2018.2841406. ISSN 0090-6778. S2CID 24512949.
- ^ Kim, Yongjune; Jeon, Yoocharn; Choi, Hyeokjin; Guyot, Cyril; Cassuto, Yuval (2022). "Optimizing Write Fidelity of MRAMs by Alternating Water-filling Algorithm". IEEE Transactions on Communications. 70 (9): 5825–5836. doi:10.1109/TCOMM.2022.3190868. ISSN 0090-6778. S2CID 250565077.
- ^ C. Alippi, Intelligence for Embedded Systems: A Methodological Approach, Springer, 2014, p. 283.
- ^ Esmaeilzadeh, Hadi; Sampson, Adrian; Ceze, Luis; Burger, Doug (2012). Neural acceleration for general-purpose approximate programs. 45th Annual IEEE/ACM International Symposium on Microarchitecture. Vancouver, BC: IEEE. pp. 449–460. doi:10.1109/MICRO.2012.48.
- ^ Raha, Arnab; Raghunathan, Vijay (2017). "Towards Full-System Energy-Accuracy Tradeoffs: A Case Study of An Approximate Smart Camera System". Proceedings of the 54th Annual Design Automation Conference 2017. DAC '17. New York, NY, USA: ACM: 74:1–74:6. doi:10.1145/3061639.3062333. ISBN 9781450349277. S2CID 2503638.
- ^ Liu, Weiqiang; Lombardi, Fabrizio; Schulte, Michael (Dec 2020). "Approximate Computing: From Circuits to Applications". Proceedings of the IEEE. 108 (12): 2103. doi:10.1109/JPROC.2020.3033361.
- ^ Liu, Weiqiang; Lombardi, Fabrizio; Schulte, Michael (Dec 2020). "Approximate Computing: From Circuits to Applications". Proceedings of the IEEE. 108 (12): 2104. doi:10.1109/JPROC.2020.3033361.
- ^ Nguyen, Donald; Lenharth, Andrew; Pingali, Keshav (2013). "A lightweight infrastructure for graph analytics". Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles. ACM: 456–471. doi:10.1145/2517349.2522739. ISBN 9781450323888.