Adaptive system
{{Short description|System that can adapt to the environment}} |
{{Refimprove|date=November 2008}}
==Hierarchy of adaptations: Practopoiesis== |
[[File:Practopoietic cycle of causation.gif|thumb|The feedback loops and poietic interactions in hierarchical adaptations.]]How do various types of adaptation interact in a living system? Practopoiesis,<ref>{{cite web| url = http://www.danko-nikolic.com/practopoiesis/ |title = Practopoiesis}}</ref> a term coined by Danko Nikolić,<ref>{{cite web| url = https://www.researchgate.net/profile/Danko_Nikolic |url-status=dead |archive-url=https://web.archive.org/web/20150723021751/http://www.researchgate.net/profile/Danko_Nikolic |archive-date=2015-07-23 |title=Danko Nikolić (Max Planck Institute for Brain Research, Frankfurt am Main) on ResearchGate - Expertise: Artificial Intelligence, Quantitative Psychology, Cognitive Psychology}}</ref> names a hierarchy of adaptation mechanisms proposed to answer this question. The adaptive hierarchy forms a kind of self-adjusting system in which [[autopoiesis]] of the entire ''organism'' or ''cell'' occurs through a hierarchy of [[allopoiesis|allopoietic]] interactions among its ''components''.<ref name=Nikolic2015>{{cite journal|title=Practopoiesis: Or how life fosters a mind. |author=Danko Nikolić|date=2015|doi=10.1016/j.jtbi.2015.03.003|pmid = 25791287|volume=373|journal=Journal of Theoretical Biology|pages=40–61|arxiv=1402.5332|bibcode=2015JThBi.373...40N|s2cid=12680941}}</ref> This is possible because the components are organized into a [[poiesis|poietic]] hierarchy: the adaptive actions of one component result in the creation of another component. The theory proposes that living systems exhibit a hierarchy of four such adaptive poietic operations:
''[[evolution]]'' (i) → ''[[gene expression]]'' (ii) → ''non gene-involving [[homeostatic]] mechanisms (anapoiesis)'' (iii) → ''final cell function'' (iv) |
Moving up the hierarchy towards higher levels of organization, adaptation becomes faster: evolution is the slowest, gene expression is faster, and the final cell function is the fastest. Practopoiesis challenges the prevailing view in neuroscience by asserting that mental operations primarily occur at the homeostatic, anapoietic level (iii); that is, minds and thought are held to emerge from fast homeostatic mechanisms that poietically control cell function. This contrasts with the widespread assumption that [[thinking]] is synonymous with computations executed at the level of [[neural activity]] (i.e., with the 'final cell function' at level iv).
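The separation of timescales can be illustrated with a small toy simulation. The sketch below is not Nikolić's model; the nested loops, update rules, and names such as <code>gain</code> and <code>meta_rate</code> are illustrative assumptions chosen only to show slower loops parameterizing faster ones.

<syntaxhighlight lang="python">
import random

# Illustrative sketch (not from the cited sources): each slower loop adjusts a
# parameter of the faster loop below it, so slower, more general adaptation
# shapes faster, more specific behaviour.
def run_hierarchy(target=1.0, steps=3000, seed=0):
    rng = random.Random(seed)
    meta_rate = 0.001   # slowest level: how strongly the middle loop adapts (evolution-like)
    gain = 0.0          # middle level: parameter of the fast response (gene-expression-like)
    response = 0.0
    for t in range(steps):
        stimulus = rng.uniform(0.5, 1.5)
        response = gain * stimulus                  # fastest level: immediate "cell function"
        error = target - response
        if t % 10 == 0:                             # middle loop: adapts every 10 steps
            gain += meta_rate * error * stimulus
        if t % 500 == 0 and t > 0:                  # slowest loop: adapts rarely and tunes
            meta_rate *= 1.5 if abs(error) > 0.1 else 0.9   # the middle loop's adaptation rate
    return gain, meta_rate, response

if __name__ == "__main__":
    gain, meta_rate, response = run_hierarchy()
    print(f"final gain={gain:.3f}, meta_rate={meta_rate:.5f}, last response={response:.3f}")
</syntaxhighlight>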
Sharov proposed that only [[eukaryote|eukaryotic]] cells can achieve all four levels of organization.<ref>Sharov, A. A. (2018). "Mind, agency, and biosemiotics." Journal of Cognitive Science, 19(2), 195–228.</ref>
Each slower level contains knowledge that is more general than that of the faster level below it; for example, genes contain more general knowledge than anapoietic mechanisms, which in turn contain more general knowledge than cell functions. This hierarchy of knowledge is what enables the anapoietic level to implement [[concept]]s, which the theory treats as the fundamental ingredients of a mind. Activation of concepts through anapoiesis is suggested to underlie [[ideasthesia]]. Practopoiesis also has implications for understanding the limitations of [[deep learning]].<ref>Nikolić, D. (2017). "Why deep neural nets cannot ever match biological intelligence and what to do about it?" International Journal of Automation and Computing, 14(5), 532–541.</ref>
Empirical tests of practopoiesis require double-loop learning tasks: one needs to assess how the capability to learn itself changes over time, i.e., how the system learns to learn (adapts its own adaptation skills).<ref>El Hady, A. (2016). Closed Loop Neuroscience. Academic Press.</ref><ref>Dong, X., Du, X., & Bao, M. (2020). "Repeated contrast adaptation does not cause habituation of the adapter." Frontiers in Human Neuroscience, 14, 569. https://www.frontiersin.org/articles/10.3389/fnhum.2020.589634/full</ref>
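A double-loop (learning-to-learn) setup can be sketched as follows. This is a hypothetical illustration, not an experimental protocol from the cited works: an inner loop learns each task, while an outer loop adapts the learning rate according to how well the inner loop performed.

<syntaxhighlight lang="python">
import random

# Hypothetical double-loop learning sketch: the inner loop learns a simple
# estimation task; the outer loop adapts the learning rate itself, so one can
# ask how the system's ability to learn changes over repeated exposures.
def inner_loop(target, lr, trials, rng):
    """Single adaptation episode: estimate 'target' by error-driven updates."""
    estimate = 0.0
    for _ in range(trials):
        observation = target + rng.gauss(0, 0.1)   # noisy stimulus
        estimate += lr * (observation - estimate)  # first loop: adapt the estimate
    return abs(target - estimate)                  # residual error of this episode

def outer_loop(episodes=20, trials=30, seed=1):
    rng = random.Random(seed)
    lr = 0.05
    errors = []
    for _ in range(episodes):
        target = rng.uniform(-1, 1)                # a new task each episode
        err = inner_loop(target, lr, trials, rng)
        errors.append(err)
        # second loop: adapt the adapting -- raise lr if learning was too slow,
        # lower it slightly if the episode already converged well
        lr *= 1.2 if err > 0.1 else 0.95
    return errors, lr

if __name__ == "__main__":
    errors, lr = outer_loop()
    print("episode errors:", [round(e, 3) for e in errors])
    print("final learning rate:", round(lr, 3))
</syntaxhighlight>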
It has been proposed that anapoiesis is implemented in the brain by [[metabotropic receptors]] and [[G protein-gated ion channel]]s.<ref>Nikolić, D. (2023). "Where is the mind within the brain? Transient selection of subnetworks by metabotropic receptors and G protein-gated ion channels." Computational Biology and Chemistry, 107820.</ref> These membrane proteins are suggested to transiently select subnetworks and, by doing so, to give rise to cognition.
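Transient subnetwork selection can be illustrated with a toy gating example; the gating scheme and all names below are assumptions for illustration and do not implement the model in the cited paper.

<syntaxhighlight lang="python">
import random

# Illustrative toy: a slow "modulatory" gate vector transiently selects which
# subnetwork of a fixed network processes the input, so the same connectivity
# can implement different functions from moment to moment.
def gated_response(inputs, weights, gate):
    """Output of units whose gate is open; closed units contribute nothing."""
    return [
        sum(w * x for w, x in zip(row, inputs)) if g else 0.0
        for row, g in zip(weights, gate)
    ]

def demo(seed=2):
    rng = random.Random(seed)
    weights = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # fixed connectivity
    inputs = [0.5, -0.2, 0.9]
    gate_a = [1, 1, 0, 0]   # one transient selection of the network
    gate_b = [0, 0, 1, 1]   # a different transient selection of the same network
    print("subnetwork A:", [round(v, 2) for v in gated_response(inputs, weights, gate_a)])
    print("subnetwork B:", [round(v, 2) for v in gated_response(inputs, weights, gate_b)])

if __name__ == "__main__":
    demo()
</syntaxhighlight>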
==Benefit of self-adjusting systems==
In an adaptive system, a parameter changes slowly and has no preferred value. In a self-adjusting system, by contrast, the parameter value “depends on the history of the system dynamics”. One of the most important qualities of ''self-adjusting systems'' is their “[[edge of chaos|adaptation to the edge of chaos]]”, that is, their ability to avoid [[chaos theory|chaos]]. Practically speaking, by heading to the [[edge of chaos]] without going further, a leader may act spontaneously yet without disaster. A March/April 2009 ''Complexity'' article further explains such self-adjusting systems and their practical implications.<ref>Hübler, A. & Wotherspoon, T.: "Self-Adjusting Systems Avoid Chaos". Complexity. 14(4), 8–11. 2008.</ref> Physicists have shown that [[adaptation]] to the [[edge of chaos]] occurs in almost all systems with [[feedback]].<ref>{{cite journal|last1=Wotherspoon|first1=T.|last2=Hubler|first2=A.|title=Adaptation to the edge of chaos with random-wavelet feedback|journal=J Phys Chem A|volume=113|issue=1|pages=19–22|doi=10.1021/jp804420g|pmid=19072712|year=2009|bibcode=2009JPCA..113...19W}}</ref>
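A minimal sketch of the idea is a self-adjusting logistic map in the spirit of the published models; the specific feedback rule, constants, and the function name below are illustrative assumptions, not the published equations.

<syntaxhighlight lang="python">
# The control parameter of the logistic map is nudged by a low-pass-filtered
# feedback signal derived from the system's own history, so the parameter
# value "depends on the history of the system dynamics".
def self_adjusting_logistic_map(a=3.9, x=0.3, steps=200000, eta=0.005, alpha=0.99):
    mean = x                                   # running (low-pass) estimate of the state
    for _ in range(steps):
        x = a * x * (1.0 - x)                  # fast dynamics of the logistic map
        mean = alpha * mean + (1 - alpha) * x  # slow feedback signal from the history
        a += eta * (mean - x)                  # the parameter adjusts itself
        a = min(max(a, 2.5), 4.0)              # keep the map well defined
    return a

if __name__ == "__main__":
    # While the dynamics are chaotic the parameter wanders; where the dynamics
    # become periodic the feedback shrinks and the drift effectively stops, so
    # the parameter tends to become trapped in a periodic window or below the
    # onset of chaos, qualitatively illustrating adaptation to the edge of chaos.
    print("final control parameter:", round(self_adjusting_logistic_map(), 3))
</syntaxhighlight>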
==See also==
{{Wiktionary | anapoiesis}}
{{Wiktionary | practopoiesis}}
* An [https://www.youtube.com/watch?v=WIzsz03X8qc animated video] explaining the theory of practopoiesis, made by Mind & Brain. |
* Practopoiesis and its proposed solutions to [http://www.danko-nikolic.com/long-standing-problems-solved-by-practopoiesis/ nine long-standing problems] in neuroscience and the philosophy of mind.
[[Category:Control engineering]]
[[Category:Organizational cybernetics]]
[[Category:Systems theory]] |
An adaptive system is a set of interacting or interdependent entities, real or abstract, that form an integrated whole able to respond to environmental changes or to changes in its interacting parts, in a way analogous to continuous physiological homeostasis or to evolutionary adaptation in biology. Feedback loops are a key feature of adaptive systems, such as ecosystems and individual organisms, or, in the human world, communities, organizations, and families. Adaptive systems can be organized into a hierarchy.
Artificial adaptive systems include robots with control systems that utilize negative feedback to maintain desired states.
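As a concrete illustration of such negative feedback, the following minimal sketch shows a proportional controller driving a state toward a setpoint; the plant model, the gain <code>kp</code>, and the function names are illustrative assumptions rather than any particular robot's control system.

<syntaxhighlight lang="python">
# Negative feedback: the controller acts against the deviation from the
# desired state, so the state is pulled back toward the setpoint.
def simulate(setpoint=1.0, kp=0.8, steps=50):
    angle = 0.0                       # current state of a notional robot joint
    for _ in range(steps):
        error = setpoint - angle      # deviation from the desired state
        command = kp * error          # proportional correction
        angle += 0.5 * command        # very crude plant response
    return angle

if __name__ == "__main__":
    print("angle after 50 steps:", round(simulate(), 4))  # converges toward the setpoint 1.0
</syntaxhighlight>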
The law of adaptation
The law of adaptation may be stated informally as:
Every adaptive system converges to a state in which all kinds of stimulation cease.[1]
Formally, the law can be defined as follows:
Given a system <math>S</math>, we say that a physical event <math>E</math> is a stimulus for the system <math>S</math> if and only if the probability that the system suffers a change or is perturbed (in its elements or in its processes) when the event occurs is strictly greater than the prior probability that <math>S</math> suffers a change independently of <math>E</math>:

:<math> P(S\rightarrow S' | E) > P(S\rightarrow S')</math>

Let <math>S</math> be an arbitrary system subject to changes in time <math>t</math> and let <math>E</math> be an arbitrary event that is a stimulus for the system <math>S</math>: we say that <math>S</math> is an adaptive system if and only if, as <math>t</math> tends to infinity, the probability that the system <math>S</math> changes its behavior <math>(S\rightarrow S')</math> in a time step given the event <math>E</math> is equal to the probability that the system changes its behavior independently of the occurrence of the event <math>E</math>. In mathematical terms:

:<math> P_t(S\rightarrow S' | E) > P_t(S\rightarrow S') > 0</math>
:<math> \lim_{t\rightarrow\infty} P_t(S\rightarrow S' | E) = P_t(S\rightarrow S')</math>

Thus, for each instant <math>t</math> there will exist a temporal interval <math>h</math> such that:

:<math> P_{t+h}(S\rightarrow S' | E) - P_{t+h}(S\rightarrow S') < P_t(S\rightarrow S' | E) - P_t(S\rightarrow S')</math>
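The law can be illustrated with a small simulation in which a stimulus is presented repeatedly and its effect habituates; the decay rule and parameters below are illustrative assumptions, not part of the cited formalism.

<syntaxhighlight lang="python">
import random

# Toy illustration of the law of adaptation: the probability of a change given
# the stimulus, P_t(S -> S' | E), decays toward the baseline probability of
# spontaneous change, P_t(S -> S'), as the stimulus keeps recurring.
def simulate(steps=10000, baseline=0.05, initial_extra=0.4, decay=0.999, seed=0):
    rng = random.Random(seed)
    extra = initial_extra              # stimulus-driven excess probability of change
    late_changes = 0
    late_window = steps // 4           # measure the rate once adaptation has run its course
    for t in range(steps):
        p_change = baseline + extra    # P_t(S -> S' | E): stimulus E presented every step
        if rng.random() < p_change and t >= steps - late_window:
            late_changes += 1
        extra *= decay                 # adaptation: the stimulus gradually loses its effect
    return late_changes / late_window

if __name__ == "__main__":
    # Late in the run the change rate under E is close to the baseline rate,
    # i.e., the stimulation has effectively "ceased".
    print(f"late change rate under E: {simulate():.3f}  (baseline without E: 0.050)")
</syntaxhighlight>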
Notes
[edit]- ^ José Antonio Martín H., Javier de Lope and Darío Maravall: "Adaptation, Anticipation and Rationality in Natural and Artificial Systems: Computational Paradigms Mimicking Nature" Natural Computing, December, 2009. Vol. 8(4), pp. 757-775. doi
- ^ Hübler, A. & Wotherspoon, T. (2008). "Self-Adjusting Systems Avoid Chaos". Complexity. 14 (4): 8–11.
- ^ Wotherspoon, T.; Hubler, A. (2009). "Adaptation to the edge of chaos with random-wavelet feedback". J Phys Chem A. 113 (1): 19–22. Bibcode:2009JPCA..113...19W. doi:10.1021/jp804420g. PMID 19072712.
References
- Martin H., Jose Antonio; Javier de Lope; Darío Maravall (2009). "Adaptation, Anticipation and Rationality in Natural and Artificial Systems: Computational Paradigms Mimicking Nature". Natural Computing. 8 (4): 757–775. doi:10.1007/s11047-008-9096-6. S2CID 2723451.