Talk:Ergodicity
This article has not yet been rated on Wikipedia's content assessment scale. It is of interest to the following WikiProjects:
Please add the quality rating to the {{WikiProject banner shell}} template instead of this project banner. See WP:PIQA for details.
It is requested that a mathematical diagram or diagrams be included in this article to improve its quality. Specific illustrations, plots or diagrams can be requested at the Graphic Lab. For more information, refer to discussion on this page and/or the listing at Wikipedia:Requested images.
Implications of ergodicity
It would be good to list some simple implications of ergodicity. For example, if a process is ergodic, does that imply that it is stationary? Does it imply that time averages are equal to ensemble averages?
Similarly, it would be good to state something about ergodicity in simple systems, like stating the conditions for a Markov chain to be ergodic. 131.215.45.226 (talk) 19:39, 29 June 2008 (UTC)
Similarly, it would perhaps be interesting to explore relations between ergodicity and Hurst Exponents for specific time series. Is the Hurst Exponent (one of) the invariant(s) of an ergodic process?
A queue is an example of a continuous-time Markov chain. A queue has three components: inter-arrival times, service times and the number of servers; the first two of these may be Markov processes. Assuming the inter-arrival time is a Markov process, a particular state is recurrent (or persistent) if the probability of ever returning to it is 1 (i.e. certain). Hence if the time between two states is 30 seconds (the time between two entities joining the queue), the second state is recurrent (or persistent), because this situation could occur again. If, in addition, the expected time between the two states is finite (as opposed to infinite), the second state is ergodic. The time between two people joining a queue is finite and that state could occur again, so it is ergodic.[1] Neugierigxl (talk) 14:40, 10 May 2015 (UTC)
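The recurrence/finite-mean-return-time distinction described above can be illustrated with a small simulation. This is only a sketch: the two-state transition matrix below is invented for illustration, and the check uses Kac's formula (for an ergodic chain, the mean return time to a state equals 1/π(state)), not anything from the queue example itself.

```python
import numpy as np

# Toy two-state discrete-time Markov chain (illustrative values only).
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])

rng = np.random.default_rng(0)

def mean_return_time(P, state, n_steps=100_000):
    """Estimate the expected number of steps to return to `state` by simulation."""
    x, steps_since, returns = state, 0, []
    for _ in range(n_steps):
        x = rng.choice(len(P), p=P[x])
        steps_since += 1
        if x == state:
            returns.append(steps_since)
            steps_since = 0
    return float(np.mean(returns))

# Stationary distribution: left eigenvector of P for eigenvalue 1,
# normalised to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

# Positive recurrence (a finite expected return time) is what makes a
# state ergodic; Kac's formula says that time equals 1 / pi[state].
print(pi)                      # [5/6, 1/6] up to rounding
print(mean_return_time(P, 0))  # near 1 / (5/6) = 1.2
```

The simulated mean return time matching 1/π is exactly the "time average determined by the stationary (ensemble) distribution" behaviour the first comment asks about.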
Expert help: intuitive definition
The section "Intuitive definition" seems wrong and misleading. But, rather than deletion, can something better be said? Melcombe (talk) 12:49, 25 June 2010 (UTC)
- In fact, on re-reading, the whole article seems confused. Melcombe (talk) 12:53, 25 June 2010 (UTC)
- The article is certainly confusing and evidently written by a pure mathematician in a style that will be completely unintelligible to most people who want to understand the idea. The term ergodicity is understood in different ways in (1) physics, (2) pure mathematics, and (3) statistics and systems analysis. The article should pay attention to each meaning and not concentrate on just one interpretation (here the pure-mathematical one). The interpretations are of course related, and the relation can be understood from the historical development. JFB80 (talk) 20:40, 21 April 2011 (UTC)
I think it would help physicists to give at least a sketch of a concrete example. For example, for a classical point-particle moving in a potential: X corresponds to phase space, Σ corresponds to the Lebesgue measurable subsets of X, and υ is the Lebesgue measure. In particular, I was initially thinking that υ was a physical probability distribution on X (e.g., it might be strongly concentrated around a particular point x_1 following a measurement) rather than a measure that's proportional to density of microstates. 174.28.23.23 (talk) 07:44, 8 December 2011 (UTC)
- I agree a better intuitive description is needed; a visual representation would also help a lot. -- Beland (talk) 14:29, 11 July 2014 (UTC)
- I add my support for inclusion of examples for two cases with some graphical representations.
- One corresponding to the 'physics' definition (maybe maps of the probability of a particle visiting some regions?).
- One corresponding to the 'statistics' definition (a plot of some random variable changing over time).
- In both cases there should be contrasting examples illustrating ergodicity and lack of ergodicity so that we can see the difference.
- —DIV (137.111.13.4 (talk) 00:55, 19 March 2015 (UTC))
- Agreed. And the boundary would be clearer with some examples of systems that are, only just, not ergodic. JDAWiseman (talk) 09:25, 14 June 2018 (UTC)
Call centre example
For non-physicists I've introduced an example about call-centre operators, based very closely on the resistor example. What is still needed is a discussion of when this would be non-ergodic. For instance, would the following be examples of ergodicity or non-ergodicity?
- Refreshments are served hourly, so that workers tend to take their break around the same time, i.e. on the hour.
- Some operators are chatterboxes, and interrupt the person calling in, whereas other operators are more taciturn, or more reluctant to interrupt the person calling in, or better listeners. The first type of call centre worker will have a much higher average number of words spoken per minute than the latter type.
- More people call in with enquiries during lunchtime.
It is also unclear from the wording of the examples whether the time average is for one person/resistor or for the whole group. And it is unclear why only one ensemble average is captured (at one point in time) to test for ergodicity; surely in practice many timepoints should be chosen for (separate) testing. Finally, I am not entirely keen on the word "waveform" to describe random noise: a 'wave' connotes (oftentimes smooth) periodic behaviour. —DIV (137.111.13.48 (talk) 02:50, 24 April 2019 (UTC))
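The ergodic/non-ergodic contrast asked about above (e.g. the "chatterbox" operators) can be shown with a small simulation. This is a sketch under invented distributions, not a model of a real call centre: the words-per-minute numbers and the size of the per-operator offset are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(42)
n_operators, n_minutes = 1000, 1000

# Ergodic case: every operator's words-per-minute is drawn from the same
# distribution, independently each minute.
ergodic = rng.normal(loc=120.0, scale=15.0, size=(n_operators, n_minutes))

# Non-ergodic case (the "chatterbox" scenario): each operator carries a
# persistent personal offset, so one operator's time average does not
# converge to the ensemble average.
offsets = rng.normal(loc=0.0, scale=30.0, size=(n_operators, 1))
non_ergodic = ergodic + offsets

for name, data in [("ergodic", ergodic), ("non-ergodic", non_ergodic)]:
    time_avg_one = data[0].mean()     # one operator, averaged over time
    ensemble_avg = data[:, 0].mean()  # all operators, at one instant
    print(name, round(time_avg_one, 1), round(ensemble_avg, 1))
```

In the ergodic case both averages land near 120; in the non-ergodic case the single operator's time average lands near 120 plus that operator's personal offset, which also shows why testing at many timepoints (and across several operators), as suggested above, is the sensible procedure.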
Requested move
- The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.
The result of the move request was: page moved per request. - GTBacchus(talk) 02:03, 20 September 2010 (UTC)
Ergodic (adjective) → Ergodicity — Current name is unsatisfactory (the adjective part). Ergodicity, which is the noun form, seems preferable. Tiled (talk) 00:50, 12 September 2010 (UTC)
- The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.
Clarification of Ergodicity vs Stationary
I found a good reference online that I believe should be incorporated into either or both Ergodicity and Stationary processes. In general, it appears that the major difference is that ergodicity has to do with "asymptotic independence", while stationary processes have to do with "time invariance":
http://economia.unipv.it/pagp/pagine_personali/erossi/rossi_intro_stochastic_process_PhD.pdf
However, the above resource has some problems. It introduces a process that it claims is stationary but not ergodic, and proceeds to prove the process is stationary. It then redefines the process as a random walk and proves that it is not ergodic. However, the random walk is not stationary, as its variance grows linearly in time. Are there any better examples that we could use to demonstrate (1) a process that is ergodic but not stationary and (2) a process that is not ergodic but is stationary? — Preceding unsigned comment added by 150.135.222.130 (talk) 22:59, 31 January 2013 (UTC)
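Both halves of the objection above can be checked numerically. The sketch below (all parameters invented for illustration) shows (a) the random walk's variance growing linearly in time, so it is indeed not stationary, and (b) a textbook-style stationary-but-not-ergodic process: a random level fixed once per path plus small noise.

```python
import numpy as np

rng = np.random.default_rng(1)
n_paths, n_steps = 5000, 400

# (a) Random walk: the increments are stationary, but the walk itself is
# not, because Var(X_t) = t * Var(increment) grows linearly in time.
steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
walk = steps.cumsum(axis=1)
print(walk[:, 99].var(), walk[:, 399].var())  # roughly 100 and 400

# (b) Stationary but not ergodic: X_t = Z + noise, where Z is a random
# level drawn once per path.  Every X_t has the same distribution
# (stationary), yet the time average along one path converges to that
# path's Z, not to the ensemble mean 0.
Z = rng.normal(0.0, 1.0, size=(n_paths, 1))
X = Z + rng.normal(0.0, 0.1, size=(n_paths, n_steps))
print(X[0].mean())     # close to Z[0]; generally not close to 0
print(X[:, 0].mean())  # close to the ensemble mean 0
```

Example (b) answers question (2) above; I don't know of an equally simple standard example for (1).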
External Source Improvement
The external source file, Outline of Ergodic Theory, by Steven Arthur Kalikow, is listed as a Word document. Please reupload as a .pdf file. This is much more convenient. — Preceding unsigned comment added by 69.77.224.241 (talk) 22:31, 17 August 2013 (UTC)
- You'd have to ask the author of that paper to do that; as a copyrighted document it must be hosted externally. -- Beland (talk) 14:28, 11 July 2014 (UTC)
Example from electronics
This section is concerned with the thermal or Johnson noise exhibited by collections of resistors. As a physical phenomenon, thermal noise is the low-frequency band of black-body radiation, and as such the emission according to the set of temperature curves (Wien's law) for the black-body process approaches zero at ν = 0. Therefore the average voltage measured must always be zero, because the average voltage is the D.C. quantity, and D.C. is the spectral output at f = 0 Hz, using the symbol from electrical engineering. BTW, v should be a lowercase nu (ν) in the first equation; it does not render correctly.
The section could perhaps mention whether the measurements should all be at the same temperature T, and whether all of the resistors are of identical value R. Measuring this from resistors of identical value R and temperature T is analogous to measuring/determining the radiation output of black-body objects of identical surface area and identical T. If the resistors are not of identical R and T, then measuring each resistor's Johnson-noise output can be an indirect way of determining the R of each, or the T of each, if one or the other is known.
And instead of the measurement across the resistors being of the D.C. or average voltage (or the same thing calculated from instantaneous samples of the voltages), the measurements should be of the R.M.S. voltage of the resistor thermal noise, i.e. RMS volts v = σ. But even specifying this presents a problem, in that the RMS quantity requires a specification of bandwidth, as the RMS quantity represents the integral of the noise density spectrum (in volts/Hz^1/2) over df. And if all resistors are of identical R, then by the temperature curves the RMS voltage values of each are identical, taken over identical bandwidth.
BTW, the RMS as the integral over df is calculated using the density spectrum in volts/Hz^1/2; the reason is that the individual σ for each df cannot be added linearly unless they are all cross-correlated with r = 1. But with ergodic processes any two df are cross-correlated with r = 0, and so the voltages over the set {df} add non-linearly, as the RMS calculated by integrating over df.
I do not know whether any of my concerns can be incorporated into the article, as the statistical nature of what is proposed possibly does not warrant the details in this post. But I think the section can be improved/clarified based on what I'm putting here.
Groovamos (talk) 10:16, 10 March 2014 (UTC)
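The quadrature-addition point in the post above (uncorrelated contributions add in RMS, correlated ones add linearly) can be checked numerically. A minimal sketch: the σ values 3 and 4 below are arbitrary illustrative choices, not resistor data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1_000_000

# Two uncorrelated noise voltages with different standard deviations.
v1 = rng.normal(0.0, 3.0, n)
v2 = rng.normal(0.0, 4.0, n)

def rms(v):
    """Root-mean-square value of a sampled waveform."""
    return np.sqrt(np.mean(v ** 2))

# Uncorrelated (r = 0) sources: RMS values add in quadrature,
# sqrt(3^2 + 4^2) = 5, not linearly (3 + 4 = 7).
print(rms(v1 + v2))        # approximately 5.0

# Perfectly correlated (r = 1) sources: amplitudes add linearly.
v2_corr = (4.0 / 3.0) * v1
print(rms(v1 + v2_corr))   # approximately 7.0
```

This is the same arithmetic as integrating a noise density spectrum in volts/Hz^1/2 over df: squares add, amplitudes do not.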
- The section "Example from electronics" is pretty bad. It tediously re-explains the concept of ergodicity without adding anything specific from electronics. It's empty with choppy grammar. After reading it, I have no idea why it's interesting or useful to compare time and space averages of a very large number of unrelated resistors.
- By the way, Groovamos, how do you know all those things that you write about the significance of ergodicity in electronics? I don't understand them, but it suggests to me that ergodicity in electronics really is a thing. If there were a source, it would help greatly. I couldn't find the word "ergodicity" in the article on Johnson–Nyquist_noise.
Actually, I have never had to pay attention to the formal categorizations of noise in dealing with noise in electronic design. I first encountered the categorizations of stochastic processes in the book "Methods of Signal and System Analysis"[2] and have never really understood very well the implications of ergodic noise sources for system design. But I have never had to work with deep-space telecommunications either, so that explains my first sentence. You are right about the section being somewhat disjointed, but it has actually helped me in my old age to understand something better: a system with a single or dominant source of noise can possibly be analysed without regard to ergodicity. I'm now seeing, decades later, that the methods in that book are based on the multiple sources of noise usually present in a system, and that some of the techniques give an accurate insight into system behavior by, for example, obtaining the stochastic output due to all the ergodic sources treated as an ensemble, then treating the non-ergodic sources independently and combining by superposition. And my reference to black-body radiation as being the same physical process as Johnson noise means that the noise contributions in the microwave region at a receiving antenna can be more easily accounted for when taking into account, at the same time, the Johnson noise added by the resistive components in a system, treating them as an ensemble of noise sources to get the system's stochastic response. Groovamos (talk) 21:44, 6 June 2017 (UTC)
- In the example it is unclear whether time-averages for individual resistors need to be equal to one another. —DIV (137.111.13.48 (talk) 02:24, 24 April 2019 (UTC))
Boolean Networks
A Boolean network is said to be ergodic if its dynamics cycle through all possible states of the network, visiting each state exactly once before returning to the initial state.
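A hedged sketch of checking that property: the update rule below is a ripple-carry binary increment, chosen only because each next-state bit is a Boolean function of the current bits (so it is a valid Boolean network) and because an n-bit counter provably visits all 2^n states in one cycle.

```python
# Next-state rule of a small Boolean network: each bit is a Boolean
# function of the current bits.  This rule implements an n-bit binary
# counter via ripple carry: bit_i' = bit_i XOR carry, carry' = bit_i AND carry.
def step(bits):
    bits = list(bits)
    carry = True
    for i in range(len(bits)):  # least significant bit first
        bits[i], carry = bits[i] ^ carry, bits[i] and carry
    return tuple(bits)

def is_single_cycle(step, n):
    """True iff iterating `step` from the all-False state visits all
    2**n states exactly once before returning to the start."""
    start = tuple([False] * n)
    seen, state = set(), start
    while state not in seen:
        seen.add(state)
        state = step(state)
    return state == start and len(seen) == 2 ** n

print(is_single_cycle(step, 3))  # True: the counter cycles through all 8 states
```

A counterexample is the identity update, which fixes every state and so fails the check immediately.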
Lack of examples
There is a formal definition without any examples. Some simple examples and counter examples would really help the reader and seem necessary. 31.39.233.46 (talk) 18:22, 25 April 2016 (UTC)
Dark matter
If a quantum phenomenon occurs at a slower pace (compared to a denser region of space), that phenomenon generates gravity relative to its surroundings, so that ergodicity is maintained.
Problem with Markov chain
Hi everybody, I have a problem with what is written in the paragraph on Markov chains: are we sure that if all eigenvalues are smaller than 1 then the matrix is ergodic? I cannot find any reference for it. Also, consider the example:
0 1 0 0
0 0 0 1
0 1 0 0
0 0 0 1
This matrix is clearly not ergodic, even though the eigenvalues are (1, 0, 0, 0).
Am I missing something?
Arsik87 (talk) 17:31, 14 June 2018 (UTC)
- Yes, you are missing something. The article says one eigenvalue must be 1 and all the others less than 1 in absolute value. That is not so with your matrix. There is no probability in your example, and it is not a proper Markov chain. JFB80 (talk) 10:28, 15 June 2018 (UTC)
- @JFB80: "The article says one eigenvalue must be 1 and all the others less than 1 in absolute value. This is not so with your matrix." Yes it is since the eigenvalues are less than 1.
- "There is no probability in your example and it is not a proper Markov chain" I am sorry, but what do you mean it is not a Markov chain? It is the transition matrix of a Markov chain. If you mean some conditions on the probabilities, then the theorem in the section should specify that.
- Moreover, I think the reported theorem is also not totally correct, since the requirement to go from one state to any other in one step is not a necessary condition; e.g. the transition matrix:
0 1 0 0
0 0 1 0
0 0 0 1
1 0 0 0
is ergodic, but you cannot go from one state to any other state in "one" step. Arsik87 (talk) 23:03, 17 June 2018 (UTC)
- This matrix is cyclic, not ergodic. You don't seem to understand what ergodicity means: it means there is a stationary probability distribution which is attained from any starting state. You quote some books. What do they say? Why not check up yourself? JFB80 (talk) 04:51, 18 June 2018 (UTC)
- @JFB80: I didn't quote any book, and your attitude towards me has not been nice from the first comment. Anyway, I pointed out that something on the page might not be as clear as it is for an "expert" like you. The purpose of Wikipedia should be to explain things also to "stupid" people like me.
- In the text, it says "For a Markov chain, a simple test for ergodicity is using eigenvalues of its transition matrix. The number 1 is always an eigenvalue. If all other eigenvalues are less than 1 in absolute value, then the Markov chain is ergodic. "
- I'm just asking what the assumptions behind this statement are, as my first transition matrix has eigenvalues less than 1 but is not ergodic.
- Regarding the second matrix, the stationary distribution is (1/4,1/4,1/4,1/4), which is attained from any starting state.
- P.S. If I didn't understand what ergodicity means, isn't that a sign that the section can be improved? Arsik87 (talk) 11:37, 18 June 2018 (UTC)
- I have said the eigenvalues must be positive and hope it is OK now. I am not surprised you do not know what ergodicity means from reading this article. Ergodicity as a general idea is a very difficult subject, and this article and the others on ergodicity need a lot of improvement. But on Markov processes you can find good accounts, and some talk about ergodicity. The result mentioned in the article is quite obvious if you understand what the spectral resolution of a matrix means. Sorry to upset you. JFB80 (talk) 10:40, 19 June 2018 (UTC)
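For what it's worth, the eigenvalue claims traded in this thread can be checked numerically. This sketch only computes spectra; it does not settle which definition of "ergodic" the article should use.

```python
import numpy as np

# First matrix from this thread: state 4 is absorbing, states 1-3 transient.
P1 = np.array([[0, 1, 0, 0],
               [0, 0, 0, 1],
               [0, 1, 0, 0],
               [0, 0, 0, 1]], dtype=float)

# Second matrix: the cyclic permutation 1 -> 2 -> 3 -> 4 -> 1.
P2 = np.array([[0, 1, 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1],
               [1, 0, 0, 0]], dtype=float)

# |eigenvalues| of P1 are [0, 0, 0, 1] up to rounding, so the eigenvalue
# test as stated in the article accepts this reducible chain.
print(sorted(abs(np.linalg.eigvals(P1))))

# The eigenvalues of P2 are the 4th roots of unity, all of absolute value 1,
# so the "all other |eigenvalues| < 1" condition correctly rejects the
# cyclic chain.
print(sorted(abs(np.linalg.eigvals(P2))))
```

So the numerics back Arsik87 on the first matrix (the test as stated is not sufficient without further assumptions, e.g. irreducibility) and back the test itself on the cyclic one.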
Merger proposal
I propose to merge Ergodicity into Ergodic process. 77.81.10.206 (talk) 20:29, 29 December 2018 (UTC)
I have added the relevant template to the article for this. —DIV (137.111.13.48 (talk) 02:39, 24 April 2019 (UTC))
- Neutral. This could be a good idea, but please provide more detail on why and how. What advantages would this have, and would there be any disadvantages? —DIV (137.111.13.48 (talk) 02:27, 24 April 2019 (UTC))
Stray references on this Talk page
- ^ Oxford Dictionary of Statistics, ISBN 978-0-19-954145-4
- ^ "Methods of Signal and System Analysis" (McGillam and Cooper)