Dynamical system

{{Short description|Mathematical model of the time dependence of a point in space}}
{{about|the general aspects of dynamical systems|the study field|Dynamical systems theory}}
{{Redirect|Dynamical}}
{{More footnotes needed|date=February 2022|partial=y|reason=need at least one per section to tell which reference to look at}}
[[File:Lorenz attractor yb.svg|thumb|right|The [[Lorenz attractor]] arises in the study of the [[Lorenz system|Lorenz oscillator]], a dynamical system.]]


In [[mathematics]], a '''dynamical system''' is a system in which a [[Function (mathematics)|function]] describes the [[time]] dependence of a [[Point (geometry)|point]] in an [[ambient space]], such as in a [[parametric curve]]. Examples include the [[mathematical model]]s that describe the swinging of a clock [[pendulum]], [[fluid dynamics|the flow of water in a pipe]], the [[Brownian motion|random motion of particles in the air]], and [[population dynamics|the number of fish each springtime in a lake]]. The most general definition unifies several concepts in mathematics such as [[ordinary differential equation]]s and [[ergodic theory]] by allowing different choices of the space and how time is measured.{{Citation needed|date=March 2023}} Time can be measured by integers, by [[real number|real]] or [[complex number]]s or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a [[manifold]] or simply a [[Set (mathematics)|set]], without the need of a [[Differentiability|smooth]] space-time structure defined on it.


At any given time, a dynamical system has a [[State (controls)|state]] representing a point in an appropriate [[state space (controls)|state space]]. This state is often given by a [[tuple]] of [[real numbers]] or by a [[vector space|vector]] in a geometrical manifold. The ''evolution rule'' of the dynamical system is a function that describes what future states follow from the current state. Often the function is [[Deterministic system (mathematics)|deterministic]], that is, for a given time interval only one future state follows from the current state.<ref>{{cite book |last=Strogatz |first=S. H. |year=2001 |title=Nonlinear Dynamics and Chaos: with Applications to Physics, Biology and Chemistry |publisher=Perseus }}</ref><ref>{{cite book |first1=A. |last1=Katok |first2=B. |last2=Hasselblatt |title=Introduction to the Modern Theory of Dynamical Systems |location=Cambridge |publisher=Cambridge University Press |year=1995 |isbn=978-0-521-34187-5 |url-access=registration |url=https://archive.org/details/introductiontomo0000kato }}</ref> However, some systems are [[stochastic system|stochastic]], in that random events also affect the evolution of the state variables.


In [[physics]], a '''dynamical system''' is described as a "particle or ensemble of particles whose state varies over time and thus obeys [[differential equation]]s involving time derivatives".<ref>{{cite web|title=Nature|url=http://www.nature.com/subjects/dynamical-systems|publisher=Springer Nature|access-date= 17 February 2017}}</ref> To make a prediction about the system's future behavior, such equations are either solved analytically or integrated over time through computer simulation.


The study of dynamical systems is the focus of [[dynamical systems theory]], which has applications to a wide variety of fields such as mathematics, physics,<ref>{{cite journal|last1=Melby|first1=P. |display-authors=etal |title=Dynamics of Self-Adjusting Systems With Noise|journal= Chaos: An Interdisciplinary Journal of Nonlinear Science|volume=15 |issue=3 |pages=033902 |date=2005|doi=10.1063/1.1953147|pmid=16252993 |bibcode=2005Chaos..15c3902M}}</ref><ref>{{cite journal|last1=Gintautas|first1=V. |display-authors=etal |title=Resonant forcing of select degrees of freedom of multidimensional chaotic map dynamics|journal=J. Stat. Phys. |volume=130|date=2008|issue=3 |page=617 |doi=10.1007/s10955-007-9444-4|arxiv=0705.0311|bibcode=2008JSP...130..617G|s2cid=8677631 }}</ref> [[biology]],<ref>{{cite book |last1=Jackson |first1=T. |last2=Radunskaya |first2=A. |title=Applications of Dynamical Systems in Biology and Medicine |date=2015 |publisher=Springer }}</ref> [[chemistry]], [[engineering]],<ref>{{cite book |first=Erwin |last=Kreyszig |title=Advanced Engineering Mathematics |location=Hoboken |publisher=Wiley |year=2011 |isbn=978-0-470-64613-7 }}</ref> [[economics]],<ref>{{cite book |last=Gandolfo |first=Giancarlo |author-link=Giancarlo Gandolfo |title=Economic Dynamics: Methods and Models |location=Berlin |publisher=Springer |edition=Fourth |year=2009 |orig-year=1971 |isbn=978-3-642-13503-3 }}</ref> [[Cliodynamics|history]], and [[medicine]]. Dynamical systems are a fundamental part of [[chaos theory]], [[logistic map]] dynamics, [[bifurcation theory]], the [[self-assembly]] and [[self-organization]] processes, and the [[edge of chaos]] concept.


==Overview==
The concept of a dynamical system has its origins in [[Newtonian mechanics]]. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a [[differential equation]], a [[Recurrence relation|difference equation]], or an equation on another [[Time scale calculus|time scale]].) Determining the state for all future times requires iterating the relation many times, each iteration advancing time by a small step. The iteration procedure is referred to as ''solving the system'' or ''integrating the system''. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a ''[[trajectory]]'' or ''[[orbit (dynamics)|orbit]]''.
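
The iteration of an evolution rule can be illustrated with a short computational sketch. The following Python fragment (the map, its parameter and the number of steps are arbitrary choices made purely for illustration) repeatedly applies a one-step rule to an initial point and collects the resulting orbit segment.

<syntaxhighlight lang="python">
# A minimal sketch: iterate a one-step evolution rule to obtain an orbit segment.
# The logistic map is used here only as an example of an evolution rule.

def logistic_map(x, r=3.5):
    """One-step evolution rule x_{n+1} = r * x_n * (1 - x_n)."""
    return r * x * (1 - x)

def orbit(rule, x0, n_steps):
    """Return the first n_steps + 1 points of the orbit starting at x0."""
    points = [x0]
    for _ in range(n_steps):
        points.append(rule(points[-1]))
    return points

print(orbit(logistic_map, 0.1, 10))
</syntaxhighlight>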


Before the advent of [[computers]], finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.


For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:
* The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as [[Lyapunov stability]] or [[structural stability]]. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their [[Equivalence relation|equivalence]] changes with the different notions of stability.
* The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. [[Linear dynamical system]]s and [[Poincaré–Bendixson theorem|systems that have two numbers describing a state]] are examples of dynamical systems where the possible classes of orbits are understood.
* The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have [[bifurcation theory|bifurcation points]] where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the [[Turbulence|transition to turbulence of a fluid]].
==History==


Many people regard French mathematician [[Henri Poincaré]] as the founder of dynamical systems.<ref>Holmes, Philip. "Poincaré, celestial mechanics, dynamical-systems theory and "chaos"." ''Physics Reports'' 193.3 (1990): 137–163.</ref> Poincaré published two now-classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the [[Poincaré recurrence theorem]], which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.


[[Aleksandr Lyapunov]] developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.


In 1913, [[George David Birkhoff]] proved Poincaré's "[[Poincaré–Birkhoff theorem|Last Geometric Theorem]]", a special case of the [[three-body problem]], a result that made him world-famous. In 1927, he published his ''[https://archive.org/details/dynamicalsystems00birk/ Dynamical Systems]''. Birkhoff's most durable result has been his 1931 discovery of what is now called the [[ergodic theorem]]. Combining insights from [[physics]] on the [[ergodic hypothesis]] with [[measure theory]], this theorem solved, at least in principle, a fundamental problem of [[statistical mechanics]]. The ergodic theorem has also had repercussions for dynamics.


[[Stephen Smale]] made significant advances as well. His first contribution was the [[Horseshoe map|Smale horseshoe]] that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.
[[Oleksandr Mykolaiovych Sharkovsky]] developed [[Sharkovsky's theorem]] on the periods of [[discrete dynamical system]]s in 1964. One of the implications of the theorem is that if a discrete dynamical system on the [[real line]] has a [[periodic point]] of period&nbsp;3, then it must have periodic points of every other period.


In the late 20th century, the dynamical systems perspective on partial differential equations started gaining popularity. Palestinian mechanical engineer [[Ali H. Nayfeh]] applied [[nonlinear dynamics]] in [[mechanics|mechanical]] and [[engineering]] systems.<ref name="Rega">{{cite book |last1=Rega |first1=Giuseppe |chapter=Tribute to Ali H. Nayfeh (1933–2017) |title=IUTAM Symposium on Exploiting Nonlinear Dynamics for Engineering Systems |date=2019 |publisher=[[Springer Science+Business Media|Springer]] |isbn=9783030236922 |url=https://books.google.com/books?id=pAilDwAAQBAJ&pg=PA1 |pages=1–2}}</ref> His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of [[machines]] and [[structures]] that are common in daily life, such as [[ships]], [[crane (machine)|cranes]], [[bridges]], [[buildings]], [[skyscrapers]], [[jet engines]], [[rocket engines]], [[aircraft]] and [[spacecraft]].<ref name="fi">{{cite web |title=Ali Hasan Nayfeh |url=https://www.fi.edu/laureates/ali-hasan-nayfeh |website=[[Franklin Institute Awards]] |publisher=[[The Franklin Institute]] |access-date=25 August 2019 |date=4 February 2014}}</ref>


== Formal definition ==
In the most general sense,<ref>Giunti M. and Mazzola C. (2012), "[https://www.researchgate.net/publication/272943599_Dynamical_Systems_on_Monoids_Toward_a_General_Theory_of_Deterministic_Systems_and_Motion Dynamical systems on monoids: Toward a general theory of deterministic systems and motion]". In Minati G., Abram M., Pessa E. (eds.), ''Methods, models, simulations and approaches towards a general theory of change'', pp. 173–185, Singapore: World Scientific. {{ISBN|978-981-4383-32-5}}
</ref><ref>Mazzola C. and Giunti M. (2012), "[https://www.researchgate.net/publication/281244041_Reversible_dynamics_and_the_directionality_of_time Reversible dynamics and the directionality of time]". In Minati G., Abram M., Pessa E. (eds.), ''Methods, models, simulations and approaches towards a general theory of change'', pp. 161–171, Singapore: World Scientific. {{ISBN|978-981-4383-32-5}}.</ref>
a '''dynamical system''' is a [[tuple]] (''T'', ''X'', Φ) where ''T'' is a [[monoid]], written additively, ''X'' is a non-empty [[set (mathematics)|set]] and Φ is a [[function (mathematics)|function]]
:<math>\Phi: U \subseteq (T \times X) \to X</math>
with
:<math>\mathrm{proj}_{2}(U) = X</math> (where <math>\mathrm{proj}_{2}</math> is the 2nd [[Projection (set theory)|projection map]])
and for any ''x'' in ''X'':
:<math>\Phi(0,x) = x</math>
:<math>\Phi(t_2,\Phi(t_1,x)) = \Phi(t_2 + t_1, x),</math>
for <math>\, t_1,\, t_2 + t_1 \in I(x)</math> and <math>\ t_2 \in I(\Phi(t_1, x)) </math>, where we have defined the set <math> I(x) := \{ t \in T : (t,x) \in U \}</math> for any ''x'' in ''X''.


In particular, in the case that <math> U = T \times X </math> we have for every ''x'' in ''X'' that <math> I(x) = T </math> and thus that Φ defines a [[Semigroup action|monoid action]] of ''T'' on ''X''.
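
For the simple case where ''T'' is the non-negative integers and ''U'' = ''T'' × ''X'', the two defining properties can be checked directly in a few lines of Python; the one-step map used below is an arbitrary example, not part of the definition.

<syntaxhighlight lang="python">
# Sketch: Φ(n, x) applies a one-step map f to x exactly n times,
# with T the non-negative integers (a monoid under addition).

def f(x):
    return 0.5 * x + 1.0          # an arbitrary one-step map on the reals

def phi(n, x):
    """Evolution function Φ(n, x)."""
    for _ in range(n):
        x = f(x)
    return x

x = 0.25
t1, t2 = 3, 4
assert phi(0, x) == x                          # Φ(0, x) = x
assert phi(t2, phi(t1, x)) == phi(t1 + t2, x)  # Φ(t2, Φ(t1, x)) = Φ(t1 + t2, x)
</syntaxhighlight>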


The function Φ(''t'',''x'') is called the '''evolution function''' of the dynamical system: it associates to every point ''x'' in the set ''X'' a unique image, depending on the variable ''t'', called the '''evolution parameter'''. ''X'' is called '''[[phase space]]''' or '''state space''', while the variable ''x'' represents an '''initial state''' of the system.


We often write
:<math>\Phi_x(t) \equiv \Phi(t,x)</math>
:<math>\Phi^t(x) \equiv \Phi(t,x)</math>
if we take one of the variables as constant. The function
:<math>\Phi_x:I(x) \to X</math>
is called the '''flow''' through ''x'' and its [[graph (function)|graph]] is called the '''[[trajectory]]''' through ''x''. The set
:<math>\gamma_x \equiv\{\Phi(t,x) : t \in I(x)\}</math>
is called the '''[[orbit (dynamics)|orbit]]''' through ''x''.
The orbit through ''x'' is the [[image (mathematics)|image]] of the flow through ''x''.
A subset ''S'' of the state space ''X'' is called Φ-'''invariant''' if for all ''x'' in ''S'' and all ''t'' in ''T''
:<math>\Phi(t,x) \in S.</math>
Thus, in particular, if ''S'' is Φ-'''invariant''', <math>I(x) = T</math> for all ''x'' in ''S''. That is, the flow through ''x'' must be defined for all time for every element of ''S''.


More commonly there are two classes of definitions for a dynamical system: one is motivated by [[ordinary differential equation]]s and is geometrical in flavor; and the other is motivated by [[ergodic theory]] and is [[Measure (mathematics)#Measure theory|measure theoretical]] in flavor.


=== Geometrical definition ===
In the geometrical definition, a dynamical system is the tuple <math> \langle \mathcal{T}, \mathcal{M}, f\rangle </math>. <math>\mathcal{T}</math> is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. <math>\mathcal{M}</math> is a [[manifold]], i.e. locally a Banach space or Euclidean space, or in the discrete case a [[Graph (discrete mathematics)|graph]]. ''f'' is an evolution rule ''t''&nbsp;→&nbsp;''f''<sup>&nbsp;''t''</sup> (with <math>t\in\mathcal{T}</math>) such that ''f<sup>&nbsp;t</sup>'' is a [[diffeomorphism]] of the manifold to itself. So, f is a "smooth" mapping of the time-domain <math> \mathcal{T}</math> into the space of diffeomorphisms of the manifold to itself. In other terms, ''f''(''t'') is a diffeomorphism, for every time ''t'' in the domain <math> \mathcal{T}</math> .


==== Real dynamical system ====
A ''real dynamical system'', ''real-time dynamical system'', ''[[continuous time]] dynamical system'', or ''[[Flow (mathematics)|flow]]'' is a tuple (''T'', ''M'', Φ) with ''T'' an [[open interval]] in the [[real number]]s '''R''', ''M'' a [[manifold]] locally [[diffeomorphic]] to a [[Banach space]], and Φ a [[continuous function]]. If Φ is [[continuously differentiable]] we say the system is a ''differentiable dynamical system''. If the manifold ''M'' is locally diffeomorphic to '''R'''<sup>''n''</sup>, the dynamical system is ''finite-dimensional''; if not, the dynamical system is ''infinite-dimensional''. This does not assume a [[symplectic manifold|symplectic structure]]. When ''T'' is taken to be the reals, the dynamical system is called ''global'' or a ''[[Flow (mathematics)|flow]]''; and if ''T'' is restricted to the non-negative reals, then the dynamical system is a ''semi-flow''.
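
A minimal illustrative sketch of a flow, using the scalar linear equation <math>\dot{x} = ax</math>, whose evolution function is known in closed form (the constants below are arbitrary choices); the flow property Φ<sup>''t''+''s''</sup> = Φ<sup>''t''</sup> ∘ Φ<sup>''s''</sup> is checked numerically.

<syntaxhighlight lang="python">
import math

# Flow of the scalar linear ODE  dx/dt = a * x,  whose solution is Φ^t(x) = x * exp(a * t).
a = -0.7   # arbitrary example parameter

def flow(t, x):
    return x * math.exp(a * t)

x0, s, t = 2.0, 0.3, 1.1
# Flow (group) property: Φ^{t+s}(x0) == Φ^t(Φ^s(x0)), up to floating-point rounding.
assert math.isclose(flow(t + s, x0), flow(t, flow(s, x0)))
print(flow(1.0, x0))   # state after one unit of time
</syntaxhighlight>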


==== Discrete dynamical system ====
A ''discrete dynamical system'' or ''[[discrete-time]] dynamical system'' is a tuple (''T'', ''M'', Φ), where ''M'' is a [[manifold]] locally diffeomorphic to a [[Banach space]], and Φ is a function. When ''T'' is taken to be the integers, it is a ''cascade'' or a ''map''. If ''T'' is restricted to the non-negative integers, the system is called a ''semi-cascade''.<ref>{{Cite book|title=Discrete Dynamical Systems|last=Galor|first=Oded|publisher=Springer|year=2010}}</ref>


==== Cellular automaton ====
A ''cellular automaton'' is a tuple (''T'', ''M'', Φ), with ''T'' a [[lattice (group)|lattice]] such as the [[integer]]s or a higher-dimensional [[integer lattice|integer grid]], ''M'' is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such [[cellular automata]] are dynamical systems. The lattice in ''M'' represents the "space" lattice, while the one in ''T'' represents the "time" lattice.
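
A small sketch of a one-dimensional cellular automaton, here the elementary Rule 30 on a finite ring of cells (a common finite truncation of the integer lattice described above; the rule number and lattice size are arbitrary choices).

<syntaxhighlight lang="python">
# Sketch: elementary cellular automaton (Rule 30) on a ring of N cells.
# The state is a tuple of 0/1 values; the local rule is applied to every cell at once.

RULE = 30
N = 31

def step(cells):
    """One time step: each cell is updated from its left, centre and right neighbours."""
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        neighbourhood = (left << 2) | (centre << 1) | right
        out.append((RULE >> neighbourhood) & 1)
    return tuple(out)

state = tuple(1 if i == N // 2 else 0 for i in range(N))
for _ in range(15):
    print("".join("#" if c else "." for c in state))
    state = step(state)
</syntaxhighlight>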


==== Multidimensional generalization ====
Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems are defined over multiple independent variables and are therefore called [[multidimensional systems]]. Such systems are useful for modeling, for example, [[image processing]].

==== Compactification of a dynamical system ====
Given a global dynamical system ('''R''', ''X'', Φ) on a [[locally compact]] and [[Hausdorff space|Hausdorff]] [[topological space]] ''X'', it is often useful to study the continuous extension Φ* of Φ to the [[one-point compactification]] ''X*'' of ''X''. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system ('''R''', ''X*'', Φ*).

In compact dynamical systems the [[limit set]] of any orbit is [[non-empty]], [[compact space|compact]] and [[simply connected]].

===Measure theoretical definition===
{{main|Measure-preserving dynamical system}}

A dynamical system may be defined formally as a measure-preserving transformation of a [[measure space]], the triplet (''T'', (''X'', Σ, ''μ''), Φ). Here, ''T'' is a monoid (usually the non-negative integers), ''X'' is a [[set (mathematics)|set]], and (''X'', Σ, ''μ'') is a [[measure space|probability space]], meaning that Σ is a [[sigma-algebra]] on ''X'' and μ is a finite [[measure (mathematics)|measure]] on (''X'', Σ). A map Φ: ''X'' → ''X'' is said to be [[measurable function|Σ-measurable]] if and only if, for every σ in Σ, one has <math>\Phi^{-1}\sigma \in \Sigma</math>. A map Φ is said to '''preserve the measure''' if and only if, for every ''σ'' in Σ, one has <math>\mu(\Phi^{-1}\sigma ) = \mu(\sigma)</math>. Combining the above, a map Φ is said to be a '''measure-preserving transformation of ''X'' ''', if it is a map from ''X'' to itself, it is Σ-measurable, and is measure-preserving. The triplet (''T'', (''X'', Σ, ''μ''), Φ), for such a Φ, is then defined to be a '''dynamical system'''.

The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the [[iterated function|iterates]] <math>\Phi^n = \Phi \circ \Phi \circ \dots \circ \Phi</math> for every integer ''n'' are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
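
As a rough illustration of the measure-preserving property for one standard example, the doubling map Φ(''x'') = 2''x'' mod 1 preserves Lebesgue measure on [0, 1); the sketch below estimates μ(''σ'') and μ(Φ<sup>−1</sup>''σ'') for a single interval ''σ'' by random sampling (the interval and sample size are arbitrary choices).

<syntaxhighlight lang="python">
import random

# The doubling map Φ(x) = 2x mod 1 preserves Lebesgue measure on [0, 1):
# for σ = [a, b), the preimage Φ^{-1}(σ) is two intervals of half the length each.

def doubling_map(x):
    return (2.0 * x) % 1.0

a, b = 0.2, 0.7
random.seed(0)
samples = [random.random() for _ in range(200_000)]
mu_sigma = sum(a <= x < b for x in samples) / len(samples)                    # estimates μ(σ)
mu_preimage = sum(a <= doubling_map(x) < b for x in samples) / len(samples)   # estimates μ(Φ⁻¹σ)
print(mu_sigma, mu_preimage)   # both close to b - a = 0.5
</syntaxhighlight>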

====Relation to geometric definition====

The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the [[Krylov–Bogolyubov theorem]]) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.

Some systems have a natural measure, such as the [[Liouville's theorem (Hamiltonian)|Liouville measure]] in [[Hamiltonian system]]s, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic [[dissipative system]]s the choice of invariant measure is technically more challenging. The measure needs to be supported on the [[attractor]], but attractors have zero [[Lebesgue measure]] and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.

For hyperbolic dynamical systems, the [[Sinai–Ruelle–Bowen measure]]s appear to be the natural choice. They are constructed on the geometrical structure of [[stable manifold|stable and unstable manifold]]s of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.

== Construction of dynamical systems ==
The concept of ''evolution in time'' is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of the time behavior of [[classical mechanics|classical mechanical systems]]. But a system of [[ordinary differential equation]]s must be solved before it becomes a dynamical system. For example, consider an [[initial value problem]] such as the following:

:<math>\dot{\boldsymbol{x}}=\boldsymbol{v}(t,\boldsymbol{x})</math>
:<math>\boldsymbol{x}|_{{t=0}}=\boldsymbol{x}_0</math>

where
*<math>\dot{\boldsymbol{x}}</math> represents the [[velocity]] of the material point '''x'''
*''M'' is a finite dimensional manifold
*'''v''': ''T'' × ''M'' → ''TM'' is a [[vector field]] in '''R'''<sup>''n''</sup> or '''C'''<sup>''n''</sup> and represents the change of [[velocity]] induced by the known [[force]]s acting on the given material point in the phase space ''M''. The change is not a vector in the phase space&nbsp;''M'', but is instead in the [[tangent space]] ''TM''.

There is no need for higher order derivatives in the equation, nor for the parameter ''t'' in ''v''(''t'',''x''), because these can be eliminated by considering systems of higher dimensions.

Depending on the properties of this vector field, the mechanical system is called
*'''autonomous''', when '''v'''(''t'', '''x''') = '''v'''('''x''')
*'''homogeneous''' when '''v'''(''t'', '''0''') = 0 for all ''t''

The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above

:<math>\boldsymbol{{x}}(t)=\Phi(t,\boldsymbol{{x}}_0)</math>

The dynamical system is then (''T'', ''M'', Φ).
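
A rough numerical sketch of this construction, using a forward Euler step (which only approximates the true evolution function) and an arbitrarily chosen example vector field ''v''(''t'', ''x'').

<syntaxhighlight lang="python">
# Approximate the evolution function Φ(t, x0) of  dx/dt = v(t, x)  by forward Euler steps.
# The vector field v is an arbitrary scalar example.

def v(t, x):
    return -x + 0.5 * t

def phi(t, x0, n_steps=10_000):
    """Approximate Φ(t, x0) by integrating from time 0 to time t."""
    dt = t / n_steps
    x, s = x0, 0.0
    for _ in range(n_steps):
        x += dt * v(s, x)
        s += dt
    return x

print(phi(2.0, 1.0))   # approximate state at time t = 2 starting from x0 = 1
</syntaxhighlight>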

Some formal manipulation of the system of [[differential equation]]s shown above gives a more general form of equations a dynamical system must satisfy

:<math>\dot{\boldsymbol{x}}-\boldsymbol{v}(t,\boldsymbol{x})=0 \qquad\Leftrightarrow\qquad \mathfrak{{G}}\left(t,\Phi(t,\boldsymbol{{x}}_0)\right)=0</math>

where <math>\mathfrak{G}:{{(T\times M)}^M}\to\mathbf{C}</math> is a [[functional (mathematics)|functional]] from the set of evolution functions to the field of the complex numbers.

This equation is useful when modeling mechanical systems with complicated constraints.

Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally [[Banach space]]s—in which case the differential equations are [[partial differential equation]]s.

== Examples ==
{{Div col|colwidth=25em}}
* [[Arnold's cat map]]


[[File:LinearFields.png|thumb|500px|center|Linear vector fields and a few trajectories.]]
{{Clear}}


===Maps===
A [[Discrete-time dynamical system|discrete-time]], [[Affine transformation|affine]] dynamical system has the form of a [[matrix difference equation]]:
: <math> x_{n+1} = A x_n + b, </math>
with ''A'' a matrix and ''b'' a vector. As in the continuous case, the change of coordinates ''x''&nbsp;→&nbsp;''x''&nbsp;+&nbsp;(1&nbsp;−&nbsp;''A'')<sup>&nbsp;–1</sup>''b'' removes the term ''b'' from the equation. In the new [[coordinate system]], the origin is a fixed point of the map and the solutions are of the linear system ''A''<sup>&nbsp;''n''</sup>''x''<sub>0</sub>.
The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.
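
A short sketch of this coordinate change for an arbitrarily chosen 2 × 2 example: the fixed point ''p'' = (1 − ''A'')<sup>−1</sup>''b'' is computed, and iterating the affine map agrees with iterating the purely linear map in the shifted coordinates ''y'' = ''x'' − ''p''.

<syntaxhighlight lang="python">
import numpy as np

# Affine map x_{n+1} = A x_n + b, with an arbitrary example A (spectral radius < 1) and b.
A = np.array([[0.5, 0.1],
              [0.0, 0.3]])
b = np.array([1.0, -2.0])

# Fixed point p = (I - A)^{-1} b; in the coordinates y = x - p the map becomes y_{n+1} = A y_n.
p = np.linalg.solve(np.eye(2) - A, b)

x = np.array([3.0, 4.0])
y = x - p
for _ in range(20):
    x = A @ x + b
    y = A @ y
assert np.allclose(x, y + p)   # the affine and the shifted linear descriptions agree
print(p)                       # the fixed point of the map
</syntaxhighlight>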


: <math> h^{-1} \circ F \circ h(x) = J \cdot x.</math>


This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If ''λ''<sub>1</sub>,&nbsp;...,&nbsp;''λ''<sub>''ν''</sub> are the eigenvalues of ''J'', they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form ''λ''<sub>''i''</sub> − (multiples of other eigenvalues) occur in the denominator of the terms for the function ''h'', the non-resonant condition is also known as the small divisor problem.


===Conjugation results===
The bifurcations of a hyperbolic fixed point ''x''<sub>0</sub> of a system family ''F<sub>μ</sub>'' can be characterized by the [[eigenvalues]] of the first derivative of the system ''DF''<sub>''μ''</sub>(''x''<sub>0</sub>) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of ''DF<sub>μ</sub>'' on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on [[Bifurcation theory]].
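
For instance, in the logistic family ''F''<sub>''μ''</sub>(''x'') = ''μx''(1 − ''x''), the nonzero fixed point is ''x''<sub>0</sub> = 1 − 1/''μ'' and ''DF''<sub>''μ''</sub>(''x''<sub>0</sub>) = 2 − ''μ''; the sketch below (a scalar stand-in for the eigenvalue criterion, with an arbitrary step size) locates the loss of stability of the fixed point near ''μ'' = 3.

<syntaxhighlight lang="python">
# For the logistic family F_mu(x) = mu * x * (1 - x), the nonzero fixed point is
# x0 = 1 - 1/mu and the derivative there is DF_mu(x0) = 2 - mu.
# The fixed point bifurcates when |DF_mu(x0)| reaches 1 (here at mu = 3, a period doubling).

def derivative_at_fixed_point(mu):
    return 2.0 - mu

mu = 2.5
while abs(derivative_at_fixed_point(mu)) < 1.0:
    mu += 0.001
print(f"fixed point loses stability near mu = {mu:.3f}")   # prints a value close to 3.000
</syntaxhighlight>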


Some bifurcations can lead to very complicated structures in phase space. For example, the [[Ruelle–Takens scenario]] describes how a periodic orbit bifurcates into a torus and the torus into a [[strange attractor]]. In another example, [[Bifurcation diagram|Feigenbaum period-doubling]] describes how a stable periodic orbit goes through a series of [[period-doubling bifurcation]]s.


==Ergodic systems==
By studying the spectral properties of the linear operator ''U'' it becomes possible to classify the ergodic properties of&nbsp;Φ<sup>&nbsp;''t''</sup>. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ<sup>&nbsp;''t''</sup> gets mapped into an infinite-dimensional linear problem involving&nbsp;''U''.
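
For a map on a finite set, the Koopman operator is simply a matrix acting on vectors of observable values, with (''Ug'')(''x'') = ''g''(Φ(''x'')); the sketch below builds this matrix for an arbitrarily chosen map and lists its eigenvalues, which in this example lie on the unit circle because the map is invertible.

<syntaxhighlight lang="python">
import numpy as np

# Koopman operator for a map Φ on the finite state space {0, 1, 2, 3}.
# (U g)(x) = g(Φ(x)), so as a matrix U[i, j] = 1 exactly when Φ(i) = j.
phi_map = {0: 1, 1: 2, 2: 0, 3: 3}       # arbitrary example: a 3-cycle plus a fixed point

n = len(phi_map)
U = np.zeros((n, n))
for i, j in phi_map.items():
    U[i, j] = 1.0

g = np.array([10.0, 20.0, 30.0, 40.0])   # an observable: one value per state
print(U @ g)                             # (U g)(i) = g(Φ(i))  ->  [20. 30. 10. 40.]
print(np.round(np.linalg.eigvals(U), 3)) # eigenvalues of U (roots of unity for this example)
</syntaxhighlight>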


The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in [[Statistical mechanics|equilibrium statistical mechanics]]. An average in time along a trajectory is equivalent to an average in space computed with the [[Statistical mechanics#Canonical ensemble|Boltzmann factor exp(−β''H'')]]. This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. [[SRB measure]]s replace the Boltzmann factor and they are defined on attractors of chaotic systems.


== Nonlinear dynamical systems and chaos ==
{{Main|Chaos theory}}
Simple nonlinear dynamical systems, including [[Piecewise linear function|piecewise linear]] systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called ''[[chaos theory|chaos]]''. [[Anosov diffeomorphism|Hyperbolic systems]] are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the [[tangent space]]s perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the ''stable manifold'') and another of the points that diverge from the orbit (the ''unstable manifold'').


This branch of [[mathematics]] deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a [[steady state]] in the long term, and if so, what are the possible [[attractor]]s?" or "Does the long-term behavior of the system depend on its initial condition?"


The chaotic behavior of complex systems is not the issue. [[Meteorology]] has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The [[Pomeau–Manneville scenario]] of the [[logistic map]] and the [[Fermi–Pasta–Ulam–Tsingou problem]] arose with just second-degree polynomials; the [[horseshoe map]] is piecewise linear.
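
A brief numerical illustration (the parameter value and the initial separation are arbitrary choices): for the logistic map at ''r'' = 4, two initial conditions differing by 10<sup>−10</sup> separate to order one within a few dozen iterations, even though the rule is a simple quadratic.

<syntaxhighlight lang="python">
# Sensitive dependence on initial conditions in the logistic map x -> 4 x (1 - x).

def f(x):
    return 4.0 * x * (1.0 - x)

x, y = 0.2, 0.2 + 1e-10        # two nearly identical initial conditions
for n in range(1, 61):
    x, y = f(x), f(y)
    if abs(x - y) > 0.1:
        print(f"orbits differ by more than 0.1 after {n} iterations")
        break
</syntaxhighlight>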


=== Solutions of finite duration ===



For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,<ref>{{cite book |author = Vardia T. Haimo |title = 1985 24th IEEE Conference on Decision and Control |chapter = Finite Time Differential Equations |year = 1985 |pages = 1729–1733 |doi = 10.1109/CDC.1985.268832 |s2cid = 45426376 |chapter-url=https://ieeexplore.ieee.org/document/4048613}}</ref> meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus, solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for [[Lipschitz continuity|Lipschitz continuous]] differential equations, according to the [[Picard–Lindelöf theorem]]. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.


For example, the equation:
:<math>y'= -\text{sgn}(y)\sqrt{|y|},\,\,y(0)=1</math>
admits the finite-duration solution:
:<math>y(t)=\frac{1}{4}\left(1-\frac{t}{2}+\left|1-\frac{t}{2}\right|\right)^2</math>
that is zero for <math>t \geq 2</math> and is not Lipschitz continuous at its ending time <math>t = 2.</math>
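
A quick, purely illustrative numerical check of this example: the stated formula has the required initial value, satisfies the differential equation away from the ending time, and is identically zero for ''t'' ≥ 2.

<syntaxhighlight lang="python">
import math

# Check the finite-duration solution of  y' = -sgn(y) * sqrt(|y|),  y(0) = 1.

def y(t):
    return 0.25 * (1 - t / 2 + abs(1 - t / 2)) ** 2

def rhs(value):
    return 0.0 if value == 0 else -math.copysign(1.0, value) * math.sqrt(abs(value))

assert y(0) == 1.0
assert y(2) == 0.0 and y(5) == 0.0               # zero from the ending time t = 2 onwards
h = 1e-6
for t in (0.3, 1.0, 1.7, 3.0):                   # sample times away from the ending time
    numerical_derivative = (y(t + h) - y(t - h)) / (2 * h)
    assert abs(numerical_derivative - rhs(y(t))) < 1e-5
print("the formula satisfies the equation and vanishes for t >= 2")
</syntaxhighlight>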


== See also ==
* [[People in systems and control]]
* [[Sharkovskii's theorem]]
* [[Conley's fundamental theorem of dynamical systems]]
* [[System dynamics]]
* [[Systems theory]]
==References==
{{reflist}}
*{{cite book |first=Vladimir I. |last=Arnold |author-link=Vladimir Arnold |chapter=Fundamental concepts |title=Ordinary Differential Equations |location=Berlin |publisher=Springer Verlag |year=2006 |isbn=3-540-34563-9 }}
*{{cite book |first=I. D. |last=Chueshov |title=Introduction to the Theory of Infinite-Dimensional Dissipative Systems }} online version of first edition on the EMIS site [http://www.emis.de/monographs/Chueshov/].
*{{cite book |first=Roger |last=Temam |title=Infinite-Dimensional Dynamical Systems in Mechanics and Physics |publisher=Springer Verlag |orig-year=1988 |year=1997 }}


== Further reading ==
* {{cite book | author=David Ruelle | title=Elements of Differentiable Dynamics and Bifurcation Theory | publisher=Academic Press | year=1989 | isbn=978-0-12-601710-6| author-link=David Ruelle }}
* {{cite book | author=Tim Bedford, Michael Keane and Caroline Series, ''eds.'' | title= Ergodic theory, symbolic dynamics and hyperbolic spaces | publisher= Oxford University Press | year= 1991 | isbn= 978-0-19-853390-0 }}
* {{cite book | author= [[Ralph Abraham (mathematician)|Ralph H. Abraham]] and [[Robert Shaw (Physicist)#Illustrations|Christopher D. Shaw]] | title= Dynamics—the geometry of behavior, 2nd edition | publisher= Addison-Wesley | year= 1992 | isbn= 978-0-201-56716-8 }}


Textbooks


==External links==
{{Commons category|Dynamical systems}}
*[http://www.arxiv.org/list/math.DS/recent Arxiv preprint server] has daily submissions of (non-refereed) manuscripts in dynamical systems.
*[http://www.scholarpedia.org/article/Encyclopedia_of_Dynamical_Systems Encyclopedia of dynamical systems] A part of [[Scholarpedia]] — peer-reviewed and written by invited experts.
*[http://www.egwald.ca/nonlineardynamics/index.php Nonlinear Dynamics]. Models of bifurcation and chaos by Elmer G. Wiens
*[http://amath.colorado.edu/faculty/jdm/faq-Contents.html Sci.Nonlinear FAQ 2.0 (Sept 2003)] provides definitions, explanations and resources related to nonlinear science
*[https://web.archive.org/web/20070406053155/http://www.eng.ox.ac.uk/samp/ Systems Analysis, Modelling and Prediction Group], University of Oxford
*[http://sd.ist.utl.pt/ Non-Linear Dynamics Group], Instituto Superior Técnico, Technical University of Lisbon
*[http://www.impa.br/ Dynamical Systems], IMPA, Instituto Nacional de Matemática Pura e Applicada.
*[http://www.impa.br/ Dynamical Systems] {{Webarchive|url=https://web.archive.org/web/20170602221933/http://www.impa.br/ |date=2017-06-02 }}, IMPA, Instituto Nacional de Matemática Pura e Applicada.
*[http://ndw.cs.cas.cz/ Nonlinear Dynamics Workgroup], Institute of Computer Science, Czech Academy of Sciences.
*[http://ndw.cs.cas.cz/ Nonlinear Dynamics Workgroup] {{Webarchive|url=https://web.archive.org/web/20150121174532/http://ndw.cs.cas.cz/ |date=2015-01-21 }}, Institute of Computer Science, Czech Academy of Sciences.
*[https://dynamicalsystems.upc.edu/ UPC Dynamical Systems Group Barcelona], Polytechnical University of Catalonia.
*[https://www.ccdc.ucsb.edu/ Center for Control, Dynamical Systems, and Computation], University of California, Santa Barbara.


{{Systems}}
{{Chaos theory}}
{{Authority control}}

Latest revision as of 18:46, 14 October 2024

The Lorenz attractor arises in the study of the Lorenz oscillator, a dynamical system.

In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve. Examples include the mathematical models that describe the swinging of a clock pendulum, the flow of water in a pipe, the random motion of particles in the air, and the number of fish each springtime in a lake. The most general definition unifies several concepts in mathematics such as ordinary differential equations and ergodic theory by allowing different choices of the space and how time is measured.[citation needed] Time can be measured by integers, by real or complex numbers or can be a more general algebraic object, losing the memory of its physical origin, and the space may be a manifold or simply a set, without the need of a smooth space-time structure defined on it.

At any given time, a dynamical system has a state representing a point in an appropriate state space. This state is often given by a tuple of real numbers or by a vector in a geometrical manifold. The evolution rule of the dynamical system is a function that describes what future states follow from the current state. Often the function is deterministic, that is, for a given time interval only one future state follows from the current state.[1][2] However, some systems are stochastic, in that random events also affect the evolution of the state variables.

In physics, a dynamical system is described as a "particle or ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives".[3] In order to make a prediction about the system's future behavior, an analytical solution of such equations or their integration over time through computer simulation is realized.

The study of dynamical systems is the focus of dynamical systems theory, which has applications to a wide variety of fields such as mathematics, physics,[4][5] biology,[6] chemistry, engineering,[7] economics,[8] history, and medicine. Dynamical systems are a fundamental part of chaos theory, logistic map dynamics, bifurcation theory, the self-assembly and self-organization processes, and the edge of chaos concept.

Overview

The concept of a dynamical system has its origins in Newtonian mechanics. There, as in other natural sciences and engineering disciplines, the evolution rule of dynamical systems is an implicit relation that gives the state of the system for only a short time into the future. (The relation is either a differential equation, difference equation or other time scale.) To determine the state for all future times requires iterating the relation many times—each advancing time a small step. The iteration procedure is referred to as solving the system or integrating the system. If the system can be solved, then, given an initial point, it is possible to determine all its future positions, a collection of points known as a trajectory or orbit.

Before the advent of computers, finding an orbit required sophisticated mathematical techniques and could be accomplished only for a small class of dynamical systems. Numerical methods implemented on electronic computing machines have simplified the task of determining the orbits of a dynamical system.
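
For a concrete sense of what "integrating the system" means in practice, the following sketch (a minimal illustration, not drawn from the cited references; the pendulum parameters, step size and initial state are arbitrary choices) advances a frictionless pendulum through many small explicit-Euler steps and records the resulting orbit.

    import math

    def pendulum_rhs(theta, omega, g=9.81, length=1.0):
        """Time derivatives (d theta/dt, d omega/dt) of an ideal pendulum."""
        return omega, -(g / length) * math.sin(theta)

    def integrate(theta0, omega0, dt=1e-3, steps=10_000):
        """Iterate the evolution rule: each step advances time by dt (explicit Euler)."""
        theta, omega = theta0, omega0
        orbit = [(0.0, theta, omega)]
        for k in range(1, steps + 1):
            dtheta, domega = pendulum_rhs(theta, omega)
            theta += dt * dtheta
            omega += dt * domega
            orbit.append((k * dt, theta, omega))
        return orbit

    orbit = integrate(theta0=0.5, omega0=0.0)
    print("state after 10 seconds:", orbit[-1])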

For simple dynamical systems, knowing the trajectory is often sufficient, but most dynamical systems are too complicated to be understood in terms of individual trajectories. The difficulties arise because:

  • The systems studied may only be known approximately—the parameters of the system may not be known precisely or terms may be missing from the equations. The approximations used bring into question the validity or relevance of numerical solutions. To address these questions several notions of stability have been introduced in the study of dynamical systems, such as Lyapunov stability or structural stability. The stability of the dynamical system implies that there is a class of models or initial conditions for which the trajectories would be equivalent. The operation for comparing orbits to establish their equivalence changes with the different notions of stability.
  • The type of trajectory may be more important than one particular trajectory. Some trajectories may be periodic, whereas others may wander through many different states of the system. Applications often require enumerating these classes or maintaining the system within one class. Classifying all possible trajectories has led to the qualitative study of dynamical systems, that is, properties that do not change under coordinate changes. Linear dynamical systems and systems that have two numbers describing a state are examples of dynamical systems where the possible classes of orbits are understood.
  • The behavior of trajectories as a function of a parameter may be what is needed for an application. As a parameter is varied, the dynamical systems may have bifurcation points where the qualitative behavior of the dynamical system changes. For example, it may go from having only periodic motions to apparently erratic behavior, as in the transition to turbulence of a fluid.
  • The trajectories of the system may appear erratic, as if random. In these cases it may be necessary to compute averages using one very long trajectory or many different trajectories. The averages are well defined for ergodic systems and a more detailed understanding has been worked out for hyperbolic systems. Understanding the probabilistic aspects of dynamical systems has helped establish the foundations of statistical mechanics and of chaos.

History

Many people regard French mathematician Henri Poincaré as the founder of dynamical systems.[9] Poincaré published two now classical monographs, "New Methods of Celestial Mechanics" (1892–1899) and "Lectures on Celestial Mechanics" (1905–1910). In them, he successfully applied the results of his research to the problem of the motion of three bodies and studied in detail the behavior of solutions (frequency, stability, asymptotic behavior, and so on). These papers included the Poincaré recurrence theorem, which states that certain systems will, after a sufficiently long but finite time, return to a state very close to the initial state.

Aleksandr Lyapunov developed many important approximation methods. His methods, which he developed in 1899, make it possible to define the stability of sets of ordinary differential equations. He created the modern theory of the stability of a dynamical system.

In 1913, George David Birkhoff proved Poincaré's "Last Geometric Theorem", a special case of the three-body problem, a result that made him world-famous. In 1927, he published his Dynamical Systems. Birkhoff's most durable result has been his 1931 discovery of what is now called the ergodic theorem. Combining insights from physics on the ergodic hypothesis with measure theory, this theorem solved, at least in principle, a fundamental problem of statistical mechanics. The ergodic theorem has also had repercussions for dynamics.

Stephen Smale made significant advances as well. His first contribution was the Smale horseshoe that jumpstarted significant research in dynamical systems. He also outlined a research program carried out by many others.

Oleksandr Mykolaiovych Sharkovsky developed Sharkovsky's theorem on the periods of discrete dynamical systems in 1964. One of the implications of the theorem is that if a discrete dynamical system on the real line has a periodic point of period 3, then it must have periodic points of every other period.

In the late 20th century the dynamical system perspective on partial differential equations started gaining popularity. Palestinian mechanical engineer Ali H. Nayfeh applied nonlinear dynamics in mechanical and engineering systems.[10] His pioneering work in applied nonlinear dynamics has been influential in the construction and maintenance of machines and structures that are common in daily life, such as ships, cranes, bridges, buildings, skyscrapers, jet engines, rocket engines, aircraft and spacecraft.[11]

Formal definition

In the most general sense,[12][13] a dynamical system is a tuple (T, X, Φ) where T is a monoid, written additively, X is a non-empty set and Φ is a function

    \Phi : U \subseteq (T \times X) \to X

with

    \mathrm{proj}_2(U) = X

(where \mathrm{proj}_2 is the 2nd projection map)

and for any x in X:

    \Phi(0, x) = x
    \Phi(t_2, \Phi(t_1, x)) = \Phi(t_2 + t_1, x)

for t_1, t_2, t_2 + t_1 \in I(x), where we have defined the set I(x) := \{ t \in T : (t, x) \in U \} for any x in X.

In particular, in the case that U = T \times X we have for every x in X that I(x) = T and thus that Φ defines a monoid action of T on X.

The function Φ(t,x) is called the evolution function of the dynamical system: it associates to every point x in the set X a unique image, depending on the variable t, called the evolution parameter. X is called phase space or state space, while the variable x represents an initial state of the system.

We often write

    \Phi_x(t) := \Phi(t, x)

if we take one of the variables as constant. The function

    \Phi_x : I(x) \to X

is called the flow through x and its graph is called the trajectory through x. The set

    \gamma_x := \{ \Phi(t, x) : t \in I(x) \}

is called the orbit through x. The orbit through x is the image of the flow through x. A subset S of the state space X is called Φ-invariant if for all x in S and all t in T

    \Phi(t, x) \in S.

Thus, in particular, if S is Φ-invariant, I(x) = T for all x in S. That is, the flow through x must be defined for all time for every element of S.
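
As a minimal illustration of the definition above (not part of the cited formal treatment; the rotation speed and the sampled points are arbitrary choices), the following Python sketch takes T to be the non-negative reals, X the circle [0, 1), and Φ(t, x) = (x + tα) mod 1, and checks numerically that Φ(0, x) = x and Φ(t2, Φ(t1, x)) = Φ(t2 + t1, x).

    import math

    ALPHA = math.sqrt(2)   # arbitrary rotation speed, chosen only for this illustration

    def phi(t, x):
        """Evolution function of a rotation flow on the set X = [0, 1)."""
        return (x + t * ALPHA) % 1.0

    def same_point(u, v, tol=1e-9):
        """Equality of two points of the circle, allowing for wrap-around at 1."""
        d = abs(u - v)
        return min(d, 1.0 - d) < tol

    for x in (0.0, 0.25, 0.7):
        assert phi(0.0, x) == x                      # Phi(0, x) = x
        for t1, t2 in ((0.5, 1.3), (2.0, 0.1)):
            assert same_point(phi(t2, phi(t1, x)),   # evolve by t1, then by t2 ...
                              phi(t2 + t1, x))       # ... equals evolving by t1 + t2
    print("monoid-action property holds at the sampled points")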

More commonly there are two classes of definitions for a dynamical system: one is motivated by ordinary differential equations and is geometrical in flavor; and the other is motivated by ergodic theory and is measure theoretical in flavor.

Geometrical definition

In the geometrical definition, a dynamical system is the tuple (T, M, f). T is the domain for time – there are many choices, usually the reals or the integers, possibly restricted to be non-negative. M is a manifold, i.e. locally a Banach space or Euclidean space, or in the discrete case a graph. f is an evolution rule t → f^t (with t ∈ T) such that f^t is a diffeomorphism of the manifold to itself. So, f is a "smooth" mapping of the time-domain T into the space of diffeomorphisms of the manifold to itself. In other terms, f(t) is a diffeomorphism, for every time t in the domain T.

Real dynamical system

A real dynamical system, real-time dynamical system, continuous time dynamical system, or flow is a tuple (T, M, Φ) with T an open interval in the real numbers R, M a manifold locally diffeomorphic to a Banach space, and Φ a continuous function. If Φ is continuously differentiable we say the system is a differentiable dynamical system. If the manifold M is locally diffeomorphic to Rn, the dynamical system is finite-dimensional; if not, the dynamical system is infinite-dimensional. This does not assume a symplectic structure. When T is taken to be the reals, the dynamical system is called global or a flow; and if T is restricted to the non-negative reals, then the dynamical system is a semi-flow.

Discrete dynamical system

A discrete dynamical system or discrete-time dynamical system is a tuple (T, M, Φ), where M is a manifold locally diffeomorphic to a Banach space, and Φ is a function. When T is taken to be the integers, it is a cascade or a map. If T is restricted to the non-negative integers we call the system a semi-cascade.[14]
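
For instance (an illustrative sketch, with the logistic map and its parameter chosen arbitrarily), a semi-cascade can be obtained from a single map f by letting Φ(n, x) be the n-th iterate of f:

    def f(x, r=3.5):
        """One step of the logistic map, an evolution rule on M = [0, 1]."""
        return r * x * (1.0 - x)

    def phi(n, x):
        """Semi-cascade: Phi(n, x) applies the map n times (n a non-negative integer)."""
        for _ in range(n):
            x = f(x)
        return x

    x0 = 0.1
    print([round(phi(n, x0), 4) for n in range(8)])   # the first few points of the orbit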

Cellular automaton

A cellular automaton is a tuple (T, M, Φ), with T a lattice such as the integers or a higher-dimensional integer grid, M is a set of functions from an integer lattice (again, with one or more dimensions) to a finite set, and Φ a (locally defined) evolution function. As such cellular automata are dynamical systems. The lattice in M represents the "space" lattice, while the one in T represents the "time" lattice.
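
A minimal concrete instance (illustrative only; the rule number, lattice size and initial configuration are arbitrary choices) is an elementary one-dimensional cellular automaton: each cell holds 0 or 1 and the locally defined evolution function updates every cell from its own value and those of its two neighbours.

    RULE = 90  # elementary cellular automaton rule, chosen arbitrarily for the example
    N = 32     # number of cells; periodic boundaries stand in for the full integer lattice

    def step(cells):
        """Apply the locally defined evolution function once to the whole configuration."""
        new = []
        for i in range(N):
            left, centre, right = cells[(i - 1) % N], cells[i], cells[(i + 1) % N]
            neighbourhood = (left << 2) | (centre << 1) | right   # 3-bit pattern 0..7
            new.append((RULE >> neighbourhood) & 1)               # look up the rule table
        return new

    cells = [0] * N
    cells[N // 2] = 1                     # a single live cell as the initial state
    for _ in range(16):                   # iterate along the "time" lattice
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)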

Multidimensional generalization

Dynamical systems are usually defined over a single independent variable, thought of as time. A more general class of systems is defined over multiple independent variables; these are therefore called multidimensional systems. Such systems are useful for modeling, for example, image processing.

Compactification of a dynamical system

Given a global dynamical system (R, X, Φ) on a locally compact and Hausdorff topological space X, it is often useful to study the continuous extension Φ* of Φ to the one-point compactification X* of X. Although we lose the differential structure of the original system we can now use compactness arguments to analyze the new system (R, X*, Φ*).

In compact dynamical systems the limit set of any orbit is non-empty, compact and simply connected.

Measure theoretical definition

A dynamical system may be defined formally as a measure-preserving transformation of a measure space, the triplet (T, (X, Σ, μ), Φ). Here, T is a monoid (usually the non-negative integers), X is a set, and (X, Σ, μ) is a probability space, meaning that Σ is a sigma-algebra on X and μ is a finite measure on (X, Σ). A map Φ: X → X is said to be Σ-measurable if and only if, for every σ in Σ, one has \Phi^{-1}(\sigma) \in \Sigma. A map Φ is said to preserve the measure if and only if, for every σ in Σ, one has \mu(\Phi^{-1}(\sigma)) = \mu(\sigma). Combining the above, a map Φ is said to be a measure-preserving transformation of X, if it is a map from X to itself, it is Σ-measurable, and is measure-preserving. The triplet (T, (X, Σ, μ), Φ), for such a Φ, is then defined to be a dynamical system.

The map Φ embodies the time evolution of the dynamical system. Thus, for discrete dynamical systems the iterates \Phi^{n} = \Phi \circ \Phi \circ \cdots \circ \Phi for every integer n are studied. For continuous dynamical systems, the map Φ is understood to be a finite time evolution map and the construction is more complicated.
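
As a small illustration of measure preservation (a sketch, not part of the formal definition above), the doubling map Φ(x) = 2x mod 1 on X = [0, 1) preserves Lebesgue measure, since the preimage of any interval consists of two intervals of half the length; the snippet below checks this with a crude Monte Carlo estimate, using an arbitrarily chosen test set σ.

    import random

    def doubling(x):
        """The doubling map on the unit interval, Phi(x) = 2x mod 1."""
        return (2.0 * x) % 1.0

    def in_sigma(x, a=0.2, b=0.5):
        """Membership in the test set sigma = [a, b); endpoints chosen arbitrarily."""
        return a <= x < b

    random.seed(0)
    samples = [random.random() for _ in range(200_000)]   # uniform = Lebesgue on [0, 1)

    mu_sigma = sum(in_sigma(x) for x in samples) / len(samples)
    mu_preimage = sum(in_sigma(doubling(x)) for x in samples) / len(samples)

    # For a measure-preserving map the two estimates agree up to sampling error.
    print(f"mu(sigma)         ~ {mu_sigma:.4f}")
    print(f"mu(Phi^-1(sigma)) ~ {mu_preimage:.4f}")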

Relation to geometric definition

The measure theoretical definition assumes the existence of a measure-preserving transformation. Many different invariant measures can be associated to any one evolution rule. If the dynamical system is given by a system of differential equations the appropriate measure must be determined. This makes it difficult to develop ergodic theory starting from differential equations, so it becomes convenient to have a dynamical systems-motivated definition within ergodic theory that side-steps the choice of measure and assumes the choice has been made. A simple construction (sometimes called the Krylov–Bogolyubov theorem) shows that for a large class of systems it is always possible to construct a measure so as to make the evolution rule of the dynamical system a measure-preserving transformation. In the construction a given measure of the state space is summed for all future points of a trajectory, assuring the invariance.

Some systems have a natural measure, such as the Liouville measure in Hamiltonian systems, chosen over other invariant measures, such as the measures supported on periodic orbits of the Hamiltonian system. For chaotic dissipative systems the choice of invariant measure is technically more challenging. The measure needs to be supported on the attractor, but attractors have zero Lebesgue measure and the invariant measures must be singular with respect to the Lebesgue measure. A small region of phase space shrinks under time evolution.

For hyperbolic dynamical systems, the Sinai–Ruelle–Bowen measures appear to be the natural choice. They are constructed on the geometrical structure of stable and unstable manifolds of the dynamical system; they behave physically under small perturbations; and they explain many of the observed statistics of hyperbolic systems.

Construction of dynamical systems

The concept of evolution in time is central to the theory of dynamical systems as seen in the previous sections: the basic reason for this fact is that the starting motivation of the theory was the study of time behavior of classical mechanical systems. But a system of ordinary differential equations must be solved before it becomes a dynamical system. For example, consider an initial value problem such as the following:

    \dot{x} = v(t, x), \qquad x(0) = x_0

where

  • \dot{x} represents the velocity of the material point x
  • M is a finite dimensional manifold
  • v: T × M → TM is a vector field in Rn or Cn and represents the change of velocity induced by the known forces acting on the given material point in the phase space M. The change is not a vector in the phase space M, but is instead in the tangent space TM.

There is no need for higher order derivatives in the equation, nor for the parameter t in v(t,x), because these can be eliminated by considering systems of higher dimensions.

Depending on the properties of this vector field, the mechanical system is called

  • autonomous, when v(t, x) = v(x)
  • homogeneous when v(t, 0) = 0 for all t

The solution can be found using standard ODE techniques and is denoted as the evolution function already introduced above

    x(t) = \Phi(t, x_0).

The dynamical system is then (T, M, Φ).
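
A rough sketch of this construction (illustrative only; the vector field, step size and test times are arbitrary choices) approximates the evolution function Φ(t, x0) by fixed-step fourth-order Runge–Kutta integration and then checks the flow property Φ(t2, Φ(t1, x0)) ≈ Φ(t1 + t2, x0).

    def v(x):
        """Autonomous vector field on M = R^2: a harmonic oscillator, chosen for illustration."""
        return [x[1], -x[0]]

    def rk4_step(x, dt):
        k1 = v(x)
        k2 = v([x[i] + 0.5 * dt * k1[i] for i in range(2)])
        k3 = v([x[i] + 0.5 * dt * k2[i] for i in range(2)])
        k4 = v([x[i] + dt * k3[i] for i in range(2)])
        return [x[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(2)]

    def phi(t, x0, dt=1e-3):
        """Approximate evolution function: integrate the ODE from x0 for time t."""
        x = list(x0)
        for _ in range(int(round(t / dt))):
            x = rk4_step(x, dt)
        return x

    x0 = [1.0, 0.0]
    a = phi(2.0, phi(1.0, x0))     # evolve for time 1, then for time 2
    b = phi(3.0, x0)               # evolve for time 3 in one go
    print(a, b)                    # the two states agree up to the integration error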

Some formal manipulation of the system of differential equations shown above gives a more general form of equations a dynamical system must satisfy

    \dot{x} - v(t, x) = 0 \quad \Leftrightarrow \quad \mathfrak{G}(t, \Phi(t, x_0)) = 0

where \mathfrak{G} is a functional from the set of evolution functions to the field of the complex numbers.

This equation is useful when modeling mechanical systems with complicated constraints.

Many of the concepts in dynamical systems can be extended to infinite-dimensional manifolds—those that are locally Banach spaces—in which case the differential equations are partial differential equations.

Examples

Linear dynamical systems

Linear dynamical systems can be solved in terms of simple functions and the behavior of all orbits classified. In a linear system the phase space is the N-dimensional Euclidean space, so any point in phase space can be represented by a vector with N numbers. The analysis of linear systems is possible because they satisfy a superposition principle: if u(t) and w(t) satisfy the differential equation for the vector field (but not necessarily the initial condition), then so will u(t) + w(t).

Flows

For a flow, the vector field v(x) is an affine function of the position in the phase space, that is,

    v(x) = Ax + b,

with A a matrix, b a vector of numbers and x the position vector. The solution to this system can be found by using the superposition principle (linearity). The case b ≠ 0 with A = 0 is just a straight line in the direction of b:

    x(t) = x_0 + b t.

When b is zero and A ≠ 0 the origin is an equilibrium (or singular) point of the flow, that is, if x0 = 0, then the orbit remains there. For other initial conditions, the equation of motion is given by the exponential of a matrix: for an initial point x0,

    x(t) = e^{tA} x_0.

When b = 0, the eigenvalues of A determine the structure of the phase space. From the eigenvalues and the eigenvectors of A it is possible to determine if an initial point will converge or diverge to the equilibrium point at the origin.

The distance between two different initial conditions in the case A ≠ 0 will change exponentially in most cases, either converging exponentially fast towards a point, or diverging exponentially fast. Linear systems display sensitive dependence on initial conditions in the case of divergence. For nonlinear systems this is one of the (necessary but not sufficient) conditions for chaotic behavior.
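
A short numerical illustration of these statements (the matrix and initial conditions are arbitrary choices; SciPy is assumed to be available for the matrix exponential): the orbit is x(t) = exp(tA) x0, and the real parts of the eigenvalues of A decide whether nearby initial conditions converge or diverge.

    import numpy as np
    from scipy.linalg import expm   # matrix exponential

    A = np.array([[0.0, 1.0],
                  [-1.0, -0.2]])    # arbitrary example matrix (a damped rotation)

    print("eigenvalues of A:", np.linalg.eigvals(A))   # negative real parts: orbits decay to 0

    x0 = np.array([1.0, 0.0])
    y0 = x0 + np.array([1e-6, 0.0])                    # a nearby initial condition

    for t in (0.0, 5.0, 20.0):
        xt = expm(t * A) @ x0                          # x(t) = exp(tA) x0
        yt = expm(t * A) @ y0
        print(f"t={t:5.1f}  |x(t)|={np.linalg.norm(xt):.4e}  separation={np.linalg.norm(yt - xt):.2e}")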

Linear vector fields and a few trajectories.

Maps

A discrete-time, affine dynamical system has the form of a matrix difference equation:

    x_{n+1} = A x_n + b,

with A a matrix and b a vector. As in the continuous case, the change of coordinates x → x + (1 − A)^{−1} b removes the term b from the equation. In the new coordinate system, the origin is a fixed point of the map and the solutions are of the linear system A^n x_0. The solutions for the map are no longer curves, but points that hop in the phase space. The orbits are organized in curves, or fibers, which are collections of points that map into themselves under the action of the map.

As in the continuous case, the eigenvalues and eigenvectors of A determine the structure of phase space. For example, if u1 is an eigenvector of A, with a real eigenvalue smaller than one, then the straight line given by the points along α u1, with α ∈ R, is an invariant curve of the map. Points on this straight line run into the fixed point.
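
The following sketch (illustrative only; the matrix, vector and initial condition are arbitrary) computes the fixed point (1 − A)⁻¹b, iterates the affine map, and confirms that in coordinates centred on the fixed point the solutions are Aⁿ x0.

    import numpy as np

    A = np.array([[0.5, 0.1],
                  [0.0, 0.8]])        # arbitrary matrix with eigenvalues 0.5 and 0.8
    b = np.array([1.0, 2.0])          # arbitrary translation vector

    p = np.linalg.solve(np.eye(2) - A, b)   # fixed point p = (1 - A)^(-1) b
    print("fixed point:", p)

    x0 = np.array([4.0, -3.0])        # arbitrary initial condition
    x = x0.copy()
    for n in range(10):
        x = A @ x + b                 # one step of the affine map

    # In the shifted coordinates y = x - p the solution is simply A^n y0.
    print("A^10 (x0 - p):", np.linalg.matrix_power(A, 10) @ (x0 - p))
    print("x10 - p      :", x - p)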

There are also many other discrete dynamical systems.

Local dynamics

The qualitative properties of dynamical systems do not change under a smooth change of coordinates (this is sometimes taken as a definition of qualitative): a singular point of the vector field (a point where v(x) = 0) will remain a singular point under smooth transformations; a periodic orbit is a loop in phase space and smooth deformations of the phase space cannot alter it being a loop. It is in the neighborhood of singular points and periodic orbits that the structure of a phase space of a dynamical system can be well understood. In the qualitative study of dynamical systems, the approach is to show that there is a change of coordinates (usually unspecified, but computable) that makes the dynamical system as simple as possible.

Rectification

A flow in most small patches of the phase space can be made very simple. If y is a point where the vector field v(y) ≠ 0, then there is a change of coordinates for a region around y where the vector field becomes a series of parallel vectors of the same magnitude. This is known as the rectification theorem.

The rectification theorem says that away from singular points the dynamics of a point in a small patch is a straight line. The patch can sometimes be enlarged by stitching several patches together, and when this works out in the whole phase space M the dynamical system is integrable. In most cases the patch cannot be extended to the entire phase space. There may be singular points in the vector field (where v(x) = 0); or the patches may become smaller and smaller as some point is approached. The more subtle reason is a global constraint, where the trajectory starts out in a patch, and after visiting a series of other patches comes back to the original one. If the next time the orbit loops around phase space in a different way, then it is impossible to rectify the vector field in the whole series of patches.

Near periodic orbits

In general, in the neighborhood of a periodic orbit the rectification theorem cannot be used. Poincaré developed an approach that transforms the analysis near a periodic orbit to the analysis of a map. Pick a point x0 in the orbit γ and consider the points in phase space in that neighborhood that are perpendicular to v(x0). These points are a Poincaré section S(γ, x0) of the orbit. The flow now defines a map, the Poincaré map F : S → S, for points starting in S and returning to S. Not all these points will take the same amount of time to come back, but the times will be close to the time it takes x0.
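
As a small worked example (a construction for illustration, not from the cited literature), consider the planar flow written in polar coordinates as ṙ = r(1 − r²), θ̇ = 1, which has a periodic orbit at r = 1. Taking the half-line θ = 0 as the section, the return time is exactly 2π because θ advances at unit speed, so the Poincaré map sends r0 to the radius obtained by integrating the radial equation for time 2π.

    import math

    def poincare_map(r0, dt=1e-4):
        """Return map on the section theta = 0 for r' = r(1 - r^2), theta' = 1."""
        r, t = r0, 0.0
        while t < 2.0 * math.pi:          # angular speed is 1, so the return time is 2*pi
            r += dt * r * (1.0 - r * r)   # explicit Euler step for the radial equation
            t += dt
        return r

    for r0 in (0.2, 0.8, 1.0, 1.5):
        print(f"P({r0}) = {poincare_map(r0):.6f}")   # iterates are attracted to the fixed point r = 1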

The intersection of the periodic orbit with the Poincaré section is a fixed point of the Poincaré map F. By a translation, the point can be assumed to be at x = 0. The Taylor series of the map is F(x) = J · x + O(x2), so a change of coordinates h can only be expected to simplify F to its linear part

    h^{-1} \circ F \circ h(x) = J \cdot x.

This is known as the conjugation equation. Finding conditions for this equation to hold has been one of the major tasks of research in dynamical systems. Poincaré first approached it assuming all functions to be analytic and in the process discovered the non-resonant condition. If λ1, ..., λν are the eigenvalues of J they will be resonant if one eigenvalue is an integer linear combination of two or more of the others. As terms of the form λi – Σ (multiples of other eigenvalues) occur in the denominator of the terms for the function h, the non-resonant condition is also known as the small divisor problem.

Conjugation results

The results on the existence of a solution to the conjugation equation depend on the eigenvalues of J and the degree of smoothness required from h. As J does not need to have any special symmetries, its eigenvalues will typically be complex numbers. When the eigenvalues of J are not in the unit circle, the dynamics near the fixed point x0 of F is called hyperbolic and when the eigenvalues are on the unit circle and complex, the dynamics is called elliptic.

In the hyperbolic case, the Hartman–Grobman theorem gives the conditions for the existence of a continuous function that maps the neighborhood of the fixed point of the map to the linear map J · x. The hyperbolic case is also structurally stable. Small changes in the vector field will only produce small changes in the Poincaré map and these small changes will reflect in small changes in the position of the eigenvalues of J in the complex plane, implying that the map is still hyperbolic.

The Kolmogorov–Arnold–Moser (KAM) theorem gives the behavior near an elliptic point.

Bifurcation theory

When the evolution map Φt (or the vector field it is derived from) depends on a parameter μ, the structure of the phase space will also depend on this parameter. Small changes may produce no qualitative changes in the phase space until a special value μ0 is reached. At this point the phase space changes qualitatively and the dynamical system is said to have gone through a bifurcation.

Bifurcation theory considers a structure in phase space (typically a fixed point, a periodic orbit, or an invariant torus) and studies its behavior as a function of the parameter μ. At the bifurcation point the structure may change its stability, split into new structures, or merge with other structures. By using Taylor series approximations of the maps and an understanding of the differences that may be eliminated by a change of coordinates, it is possible to catalog the bifurcations of dynamical systems.

The bifurcations of a hyperbolic fixed point x0 of a system family Fμ can be characterized by the eigenvalues of the first derivative of the system DFμ(x0) computed at the bifurcation point. For a map, the bifurcation will occur when there are eigenvalues of DFμ on the unit circle. For a flow, it will occur when there are eigenvalues on the imaginary axis. For more information, see the main article on Bifurcation theory.

Some bifurcations can lead to very complicated structures in phase space. For example, the Ruelle–Takens scenario describes how a periodic orbit bifurcates into a torus and the torus into a strange attractor. In another example, Feigenbaum period-doubling describes how a stable periodic orbit goes through a series of period-doubling bifurcations.
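
A standard way to see the period-doubling cascade numerically (a sketch; the logistic map and the parameter values are conventional illustrative choices, not taken from the sources above) is to discard a long transient and then count how many distinct values the orbit keeps visiting as the parameter is varied.

    def attractor_points(r, x0=0.5, transient=2000, keep=64):
        """Approximate the long-run orbit of the logistic map x -> r x (1 - x)."""
        x = x0
        for _ in range(transient):        # discard transient behaviour
            x = r * x * (1.0 - x)
        points = set()
        for _ in range(keep):
            x = r * x * (1.0 - x)
            points.add(round(x, 6))       # rounding collapses a periodic orbit to few points
        return sorted(points)

    for r in (2.8, 3.2, 3.5, 3.55, 3.9):  # values straddling the period-doubling cascade
        pts = attractor_points(r)
        print(f"r = {r}: {len(pts)} distinct value(s)")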

Ergodic systems

In many dynamical systems, it is possible to choose the coordinates of the system so that the volume (really a ν-dimensional volume) in phase space is invariant. This happens for mechanical systems derived from Newton's laws as long as the coordinates are the position and the momentum and the volume is measured in units of (position) × (momentum). The flow takes points of a subset A into the points Φ^t(A) and invariance of the phase space means that

    \mathrm{vol}(A) = \mathrm{vol}(\Phi^{t}(A)).

In the Hamiltonian formalism, given a coordinate it is possible to derive the appropriate (generalized) momentum such that the associated volume is preserved by the flow. The volume is said to be computed by the Liouville measure.

In a Hamiltonian system, not all possible configurations of position and momentum can be reached from an initial condition. Because of energy conservation, only the states with the same energy as the initial condition are accessible. The states with the same energy form an energy shell Ω, a sub-manifold of the phase space. The volume of the energy shell, computed using the Liouville measure, is preserved under evolution.

For systems where the volume is preserved by the flow, Poincaré discovered the recurrence theorem: Assume the phase space has a finite Liouville volume and let F be a phase space volume-preserving map and A a subset of the phase space. Then almost every point of A returns to A infinitely often. The Poincaré recurrence theorem was used by Zermelo to object to Boltzmann's derivation of the increase in entropy in a dynamical system of colliding atoms.

One of the questions raised by Boltzmann's work was the possible equality between time averages and space averages, what he called the ergodic hypothesis. The hypothesis states that the length of time a typical trajectory spends in a region A is vol(A)/vol(Ω).
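
A quick numerical check of this statement (illustrative; the rotation number and the region A are arbitrary choices) uses an irrational rotation of the circle, which is ergodic with respect to length: the fraction of time a trajectory spends in an arc A approaches vol(A)/vol(Ω).

    import math

    alpha = math.sqrt(2)        # irrational rotation number (arbitrary choice)
    a, b = 0.1, 0.35            # the region A = [a, b), so vol(A)/vol(Omega) = 0.25

    x = 0.0                     # initial point on the circle Omega = [0, 1)
    hits, steps = 0, 1_000_000
    for _ in range(steps):
        x = (x + alpha) % 1.0   # one step of the rotation
        hits += a <= x < b

    print("time fraction in A :", hits / steps)
    print("vol(A) / vol(Omega):", b - a)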

The ergodic hypothesis turned out not to be the essential property needed for the development of statistical mechanics and a series of other ergodic-like properties were introduced to capture the relevant aspects of physical systems. Koopman approached the study of ergodic systems by the use of functional analysis. An observable a is a function that to each point of the phase space associates a number (say instantaneous pressure, or average height). The value of an observable can be computed at another time by using the evolution function Φ^t. This introduces an operator U^t, the transfer operator,

    (U^{t} a)(x) = a(\Phi^{-t}(x)).

By studying the spectral properties of the linear operator U it becomes possible to classify the ergodic properties of Φ^t. In using the Koopman approach of considering the action of the flow on an observable function, the finite-dimensional nonlinear problem involving Φ^t gets mapped into an infinite-dimensional linear problem involving U.

The Liouville measure restricted to the energy surface Ω is the basis for the averages computed in equilibrium statistical mechanics. An average in time along a trajectory is equivalent to an average in space computed with the Boltzmann factor exp(−βH). This idea has been generalized by Sinai, Bowen, and Ruelle (SRB) to a larger class of dynamical systems that includes dissipative systems. SRB measures replace the Boltzmann factor and they are defined on attractors of chaotic systems.

Nonlinear dynamical systems and chaos

Simple nonlinear dynamical systems, including piecewise linear systems, can exhibit strongly unpredictable behavior, which might seem to be random, despite the fact that they are fundamentally deterministic. This unpredictable behavior has been called chaos. Hyperbolic systems are precisely defined dynamical systems that exhibit the properties ascribed to chaotic systems. In hyperbolic systems the tangent spaces perpendicular to an orbit can be decomposed into a combination of two parts: one with the points that converge towards the orbit (the stable manifold) and another of the points that diverge from the orbit (the unstable manifold).

This branch of mathematics deals with the long-term qualitative behavior of dynamical systems. Here, the focus is not on finding precise solutions to the equations defining the dynamical system (which is often hopeless), but rather to answer questions like "Will the system settle down to a steady state in the long term, and if so, what are the possible attractors?" or "Does the long-term behavior of the system depend on its initial condition?"

The chaotic behavior of complex systems is not the issue. Meteorology has been known for years to involve complex—even chaotic—behavior. Chaos theory has been so surprising because chaos can be found within almost trivial systems. The Pomeau–Manneville scenario of the logistic map and the Fermi–Pasta–Ulam–Tsingou problem arose with just second-degree polynomials; the horseshoe map is piecewise linear.
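
The sketch below (illustrative; initial conditions chosen arbitrarily) shows this in one such almost trivial system, the logistic map at r = 4: two initial conditions differing by 10⁻¹⁰ become completely decorrelated after a few dozen iterations.

    def f(x):
        return 4.0 * x * (1.0 - x)    # the logistic map at r = 4, a chaotic quadratic map

    x, y = 0.3, 0.3 + 1e-10           # two initial conditions differing by 1e-10
    for n in range(60):
        x, y = f(x), f(y)
        if n % 10 == 9:
            print(f"n = {n + 1:2d}   x = {x:.6f}   y = {y:.6f}   |x - y| = {abs(x - y):.2e}")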

Solutions of finite duration

For non-linear autonomous ODEs it is possible under some conditions to develop solutions of finite duration,[15] meaning here that in these solutions the system will reach the value zero at some time, called an ending time, and then stay there forever after. This can occur only when system trajectories are not uniquely determined forwards and backwards in time by the dynamics; thus solutions of finite duration imply a form of "backwards-in-time unpredictability" closely related to the forwards-in-time unpredictability of chaos. This behavior cannot happen for Lipschitz continuous differential equations according to the proof of the Picard–Lindelöf theorem. These solutions are non-Lipschitz functions at their ending times and cannot be analytic functions on the whole real line.

As an example, the equation:

    y' = -\operatorname{sgn}(y)\sqrt{|y|}, \qquad y(0) = 1

admits the finite duration solution:

    y(x) = \frac{1}{4}\left(1 - \frac{x}{2} + \left|1 - \frac{x}{2}\right|\right)^{2}

that is zero for x ≥ 2 and is not Lipschitz continuous at its ending time x = 2.
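
A small numerical check of this example (a sketch; the integration scheme and step size are arbitrary choices, and the closed-form expression is the one quoted above): integrating the equation forward reproduces the solution reaching zero at the ending time x = 2 and staying there.

    import math

    def rhs(y):
        """Right-hand side y' = -sgn(y) * sqrt(|y|)."""
        return -math.copysign(math.sqrt(abs(y)), y) if y != 0.0 else 0.0

    def exact(x):
        """Closed-form finite-duration solution with y(0) = 1."""
        return 0.25 * (1.0 - x / 2.0 + abs(1.0 - x / 2.0)) ** 2

    dt, y, x = 1e-4, 1.0, 0.0
    while x < 3.0:
        y += dt * rhs(y)               # explicit Euler step
        x += dt

    print("numerical   y(3):", y)       # essentially zero: the solution ended at x = 2
    print("closed-form y(3):", exact(3.0))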

See also

References

  1. ^ Strogatz, S. H. (2001). Nonlinear Dynamics and Chaos: with Applications to Physics, Biology and Chemistry. Perseus.
  2. ^ Katok, A.; Hasselblatt, B. (1995). Introduction to the Modern Theory of Dynamical Systems. Cambridge: Cambridge University Press. ISBN 978-0-521-34187-5.
  3. ^ "Nature". Springer Nature. Retrieved 17 February 2017.
  4. ^ Melby, P.; et al. (2005). "Dynamics of Self-Adjusting Systems With Noise". Chaos: An Interdisciplinary Journal of Nonlinear Science. 15 (3): 033902. Bibcode:2005Chaos..15c3902M. doi:10.1063/1.1953147. PMID 16252993.
  5. ^ Gintautas, V.; et al. (2008). "Resonant forcing of select degrees of freedom of multidimensional chaotic map dynamics". J. Stat. Phys. 130 (3): 617. arXiv:0705.0311. Bibcode:2008JSP...130..617G. doi:10.1007/s10955-007-9444-4. S2CID 8677631.
  6. ^ Jackson, T.; Radunskaya, A. (2015). Applications of Dynamical Systems in Biology and Medicine. Springer.
  7. ^ Kreyszig, Erwin (2011). Advanced Engineering Mathematics. Hoboken: Wiley. ISBN 978-0-470-64613-7.
  8. ^ Gandolfo, Giancarlo (2009) [1971]. Economic Dynamics: Methods and Models (Fourth ed.). Berlin: Springer. ISBN 978-3-642-13503-3.
  9. ^ Holmes, Philip. "Poincaré, celestial mechanics, dynamical-systems theory and "chaos"." Physics Reports 193.3 (1990): 137–163.
  10. ^ Rega, Giuseppe (2019). "Tribute to Ali H. Nayfeh (1933–2017)". IUTAM Symposium on Exploiting Nonlinear Dynamics for Engineering Systems. Springer. pp. 1–2. ISBN 9783030236922.
  11. ^ "Ali Hasan Nayfeh". Franklin Institute Awards. The Franklin Institute. 4 February 2014. Retrieved 25 August 2019.
  12. ^ Giunti M. and Mazzola C. (2012), "Dynamical systems on monoids: Toward a general theory of deterministic systems and motion". In Minati G., Abram M., Pessa E. (eds.), Methods, models, simulations and approaches towards a general theory of change, pp. 173–185, Singapore: World Scientific. ISBN 978-981-4383-32-5
  13. ^ Mazzola C. and Giunti M. (2012), "Reversible dynamics and the directionality of time". In Minati G., Abram M., Pessa E. (eds.), Methods, models, simulations and approaches towards a general theory of change, pp. 161–171, Singapore: World Scientific. ISBN 978-981-4383-32-5.
  14. ^ Galor, Oded (2010). Discrete Dynamical Systems. Springer.
  15. ^ Vardia T. Haimo (1985). "Finite Time Differential Equations". 1985 24th IEEE Conference on Decision and Control. pp. 1729–1733. doi:10.1109/CDC.1985.268832. S2CID 45426376.
  • Arnold, Vladimir I. (2006). "Fundamental concepts". Ordinary Differential Equations. Berlin: Springer Verlag. ISBN 3-540-34563-9.
  • Chueshov, I. D. Introduction to the Theory of Infinite-Dimensional Dissipative Systems. online version of first edition on the EMIS site [1].
  • Temam, Roger (1997) [1988]. Infinite-Dimensional Dynamical Systems in Mechanics and Physics. Springer Verlag.

Further reading

Works providing a broad coverage:

Introductory texts with a unique perspective:

Textbooks

Popularizations:

External links

Online books or lecture notes
Research groups