Talk:Measurement
This is the talk page for discussing improvements to the Measurement article. This is not a forum for general discussion of the article's subject.
Archives: 1
This article has not yet been rated on Wikipedia's content assessment scale.
The definition is not accurate
I think the definition is not accurate. The essence of measurement is not assigning numbers. Say, let x=2. It's not a measurement. Or, you ask "how many", I say "2". It's not a measurement either. Measurement should be directly connected to the objects/phenomena. --Jflycn (talk) 00:04, 1 March 2010 (UTC)
External link to Complex Unit Converter
The link was removed recently, and a request to reinstate it has been made. Reply to MikeVanVoorhis with any questions about its capabilities.
A useful converter between SI and other units is available at http://AnalysisChamp.com/EEx/ExpEvalCV.asp. This converter is capable of performing complex conversions between any and all proper unit expressions and should be included as an external link to this article. MikeVanVoorhis (talk) 22:21, 5 June 2009 (UTC)
Question
I have a project due; I need to know info about a balance scale.
"Mensuration" also is an old word for "measurement", which is why it redirects here. Rick Norwood 13:53, 9 April 2006 (UTC)
- Since there is already a measurement history article, I would put that reference there. Hubbardaie 12:56, 23 June 2007 (UTC)
Mensuration shouldn't redirect here. Beyond the musical definition, it refers to the process of determining all of the dimensions of an object based on one measurement, which is a wholly different process than simple measurement. It's also a term still in use for that process (archaeology, etc). We need to make a mensuration page and possibly a disambiguation page. —Preceding unsigned comment added by 216.17.251.236 (talk) 15:57, 12 September 2007 (UTC)
The good, the bad, and the ugly.
There are a number of good points to this article, but the first sentence is both needlessly complex and needlessly restrictive. I will attempt a rewrite. Rick Norwood 13:55, 9 April 2006 (UTC)
- I'm all for simplifying things where possible, but there are a number of issues with the proposed changes. A measurement is the act of discovering .... A measurement is not an act, it is the result of a process. Measurement is a process which extends from calibration to the act of obtaining particular measurements. There are four different levels of measurement -- this schema has been repeatedly criticized yet it is treated as though it were fact. The use of statistics to estimate the measurement of a property of a population from the measurement of that property in a sample is an important modern technique of measurement. This really doesn't make anything simpler to me -- and it is out of context. It is also very misleading. Perhaps you could provide some citations for changes and new points. Holon 14:22, 9 April 2006 (UTC)
- Let's see if we can come to an agreement. The version you restored preferences measurement in classical physics and engineering over all other sorts of measurement. Do we really want to do that?
- Measurement is both an act and the result of that act. "My measurement of the object resulted in a measurement of 3 meters." I have no problem, however, preferencing the result of the act rather than the act itself. I agree that that is more in accord with what follows.
- "Measurement is a process...", however, returns to measurement as an act rather than measurement as a result.
- "Four different levels..." I have no idea what this is about. I just kept it from the earlier version of the article, and it is still there after your reversion.
- I am not sure what your problem is with statistics, but I'm happy to discuss it.
- Comments and suggestions? Rick Norwood 14:52, 9 April 2006 (UTC)
Do we want to preference measurement in physics over all other sorts? To answer this, first a bit of background to let you know where I'm coming from. Two of the predominant perspectives about measurement have been referred to as the classical theory and the representational theory. The following is a brief characterization of these:
- Essentially, then, there are two features dividing the classical and representational theories of measurement: the role of ratios of quantities and the place of numbers. According to the classical theory, these two are logically connected: ratios of quantities are numbers, and this fact is the basis of measurement. According to the representational theory, numbers do not derive from ratios of quantities. They are quite independent of them and the place of numbers in measurement is determined by the structural similarity between qualitative and quantitative systems. Hence, according to the representational theory, numbers are assigned to empirical entities in measurement. According to the classical theory, numbers are discovered as relations between empirical entities in measurement. (Michell, 1993, p. 190).
The classical view is standard throughout the physical sciences because the very act of expressing the magnitude of a quantity as a number of units implies it (e.g. Terrien, 1980). However, S. S. Stevens proposed a definition of measurement which has had a wide impact in the social sciences even though it is at odds with the definition in the physical sciences (Michell, 1999, pp. 16-20). Stevens also proposed the so-called levels of measurement.
To answer your question, I'd rather give preference to measurement as it is understood throughout the natural sciences. Things like the index of consumer confidence and the Dow Jones industrial average are indices, not measurements, as implied by their labels. So it depends what you mean by 'all other sorts of measurement'. Measurement is sometimes used loosely to refer to all sorts of things, but personally I would stick to its meaning in the natural sciences, with some recognition of other usage. I'm open to debate on this point of course.
On levels of measurement, the current wording has 'proposed', whereas you changed it to 'there are four different levels' (I understand why you did so but it gives the schema a more definitive status). Personally, I'd leave this out of a definition, so perhaps we could do this (others may disagree).
You changed the opening sentence to: A measurement is the act of discovering the ratio of a magnitude to a given unit magnitude of the same type. Common usage is that a measurement is the result of measuring on a particular occasion. Measurement is the estimation of ratios, which is a process involving instrumentation, calibration, the act of using instruments to obtain measurements, and so on.
There is also an issue with referring to measurement as an act. Doing so emphasizes one aspect of measurement as a whole: the act of using an instrument to obtain a particular measurement. Measurement, as understood in scientific literature, is far more than this. Again, maybe we need to differentiate between different uses of the term?
You added: The use of statistics to estimate the measurement of a property of a population from the measurement of that property in a sample is an important modern technique of measurement. I'm not sure what you're trying to say here. Can you give an example? A lot of statistical methods require measurements as a starting point because arithmetic operations depend on having measurements (e.g. anything involving ANOVA). My doctoral (and current) research is in the area of probabilistic measurement. Even here, statistics is not used to estimate measurements, so I am unsure what you mean. Statistics is used in the estimation of standard errors. Statistics like means are sometimes referred to as 'measures of central tendency', but they are not measurements per se; rather they characterize a property of a set of measurements. But I am not sure if this is what you mean?
References:
- Michell, J. (1993). The origins of the representational theory of measurement: Helmholtz, Hölder, and Russell. Studies in History and Philosophy of Science, 24, 185-206.
- Michell, J. (1999). Measurement in Psychology. Cambridge: Cambridge University Press.
- Terrien, J. (1980). The practical importance of systems of units; their trend parallels progress in physics. In A.F. Milone & P. Giacomo (Eds.), Proceedings of the International School of Physics ‘Enrico Fermi’ Course LXVIII, Metrology and Fundamental Constants (pp. 765-9). Amsterdam: North-Holland.
Cheers. Holon 04:56, 10 April 2006 (UTC)
- It sounds like you can make an interesting contribution to the article, rather than just revert to an earlier version that still contains things (such as "four kinds of measurement") that you object to.
- My objection to preferencing measurement in the natural sciences and physics in particular is that it ignores the use of measurement in construction work, engineering, and many other fields. The stuff about the Dow Jones was another attempt to make clear something in the earlier version that you reverted back to.
- As for statistics, since it is your area, why don't you write something about the use of statistics to obtain measurements, and the relative accuracy of statistics over direct measurement of every member in a population, in obtaining an average measurement for the population. Rick Norwood 12:51, 10 April 2006 (UTC)
Yeah, you're right, I should try to improve the article. As you say, it has some good points but the opening is not great as is. Can you elaborate on what you mean about measurement in construction, engineering and so on? The way I'd go is to define and characterize measurement in general terms, and mention that measurement is fundamental to a wide and diverse range of fields and commercial applications. Thoughts? On Dow Jones etc., understood. On statistics, I should include something on maximum likelihood estimation in probabilistic measurement, true. Averages and so forth are best dealt with separately I think -- measurement is principally about individuals not populations (I assume you're referring to the use of Bayesian stats but this is really getting away from the fundamentals of measurement imv, better placed elsewhere). I'll have a go when I get some time. In the meantime, if you want to make changes, go for it and I'll take a look when I can. Cheers Rick. Holon 03:29, 11 April 2006 (UTC)
- I agree. I don't think the article needs to narrow down the many areas in which measurements are used. My objection was to giving precedence to, for example, science over engineering. I'm going to make a few changes, you make more when you have the time. Rick Norwood 13:00, 11 April 2006 (UTC)
Sorry Rick, didn't see this entry. Have already made some changes. See what you think and I'll check out any more you make. Would appreciate feedback -- be honest, it needs to be accessible. Holon 13:03, 11 April 2006 (UTC)
- It turns out we were editing at the same time. Your edit looks fine, but here is mine, just in case you want to use anything from it in your edit.
- While a measurement is usually given as a number followed by a unit, every measurement really has three parts, the estimate, an error bound, and a probability that the actual magnitude lies within the error bound of the estimate. For example, a measurement of a plank might result in a measurement of 9 meters plus or minus 0.01 meters, with a probability of 99%.
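- As an aside, the three-part structure described above can be illustrated with a minimal Python sketch (the Measurement class and the field names are only hypothetical, chosen for illustration, not a proposal for the article text):
from dataclasses import dataclass

@dataclass
class Measurement:
    estimate: float      # best estimate of the magnitude
    error_bound: float   # half-width of the interval around the estimate
    confidence: float    # probability that the true magnitude lies within the interval
    unit: str

# The plank example above: 9 meters plus or minus 0.01 meters, with a probability of 99%
plank = Measurement(estimate=9.0, error_bound=0.01, confidence=0.99, unit="m")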
- A measurement should be distinguished from a count. A measurement is a real number, and is never exact. A count is a natural number and may be exact. For example, a non-handicapped person has exactly ten fingers and thumbs on their two hands.
- Statistics are often used to estimate counts and measurements, and if used carefully can result in greater accuracy than direct counts and measurements.
- In scientific research, measurement is essential. One of the characteristics that distinguishes science from pseudoscience is careful measurements that fall within predicted parameters. Rick Norwood 13:21, 11 April 2006 (UTC)
Have incorporated some of your edits. Much better overall now I think. Holon 13:40, 11 April 2006 (UTC)
- Rick, glad you're still checking on this article. Do you think the structure is Okay? There are a couple of bits that are repetitive now, and I'm going to try to get to those. Yes, I thought about planets being contentious, but most people wouldn't think of that. However, I thought your example of ten digits in base 10 was really nice personally. Holon 06:33, 27 April 2006 (UTC)
The neverending battle.
Yes, this page is still on my watch list -- though I've cut about a third of the pages on my watchlist, just because time is finite. Wikipedia has many virtues, mainly because people love to write about subjects that interest them, but it also has a problem with repeated edits leaving articles choppy and repetitious. From time to time, somebody has to put in the hours to go over the article from beginning to end, fixing small problems. Then, about half the time, all that work gets reverted by somebody who objects to one small item on the list. Rick Norwood 14:21, 27 April 2006 (UTC)
Pre-metric measures
User:24.248.103.130 recently added the following sentence to our history section:
- The standard roman measurement of wheat is a modius.
This didn't work in with the existing text at all, so I deleted it. But it does highlight our current History section's lack of coverage of early measurement systems. Surely these are worth a mention here. At present the main article referred to (History of measurement) almost seems to be about a completely different topic. -- Avenue 00:07, 2 June 2006 (UTC)
Proposed WikiProject
Right now the content related to the various articles relating to measurement seems to be rather indifferently handled. This is not good, because at least 45 or so are of a great deal of importance to Wikipedia, and are even regarded as Vital articles. On that basis, I am proposing a new project at Wikipedia:WikiProject Council/Proposals#Measurement to work with these articles, and the others that relate to the concepts of measurement. Any and all input in the proposed project, including indications of willingness to contribute to its work, would be greatly appreciated. Thank you for your attention. John Carter 20:33, 2 May 2007 (UTC)
Far too narrow, and not quite on target
This definition unnecessarily excludes lots of proper measurements. Since it looks at only "physical" phenomena, it excludes valid measurements such as public polls, the Dow Jones, etc. It would also seem to exclude phenomena that are forecasts.
There is no need to reinvent the entire field of measurement theory; yes, there is such a field, and the article makes no mention of it. It does mention metrology, which is much, much narrower. Measurement theory defines measurement as a mapping of quantities against phenomenon states. It also treats measurement as an observation that reduces uncertainty about a quantity. All empirical methods are about uncertainty reduction. The concept of "assigning a value" seems to overlook the concept that real-world measurements are really probability distributions of values, not single values.
This article also ignores another key component of measurement theory - that not all measurement scales are the set of real numbers. Stanley Stevens defined measurements that included nominal and ordinal sets of values. A nominal measurement is an observation that places a phenomenon within a set where the subsets have no implied order. For example, a medical test might give the result that you either have or do not have a particular condition. A test of a fetus might determine it is male or female, and that would be a nominal measurement. An ordinal measurement would be something like the Mohs hardness scale. An order is implicit but not relative magnitudes. On the Mohs hardness scale, a 4 is harder than a 2, but not necessarily "twice" as hard.
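For what it's worth, here is a tiny Python sketch of the nominal/ordinal distinction described above (the example values are only illustrative, not taken from any source):
# Nominal scale: categories with no implied order (e.g. a binary test result)
test_result = "condition_present"  # only equality/inequality comparisons are meaningful

# Ordinal scale: ordered categories without meaningful ratios (e.g. Mohs hardness)
mohs = {"gypsum": 2, "fluorite": 4, "diamond": 10}
assert mohs["fluorite"] > mohs["gypsum"]  # the order is meaningful
# but fluorite is not "twice as hard" as gypsum; ratios of ordinal values are undefined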
The best book on the topic is "How to Measure Anything: Finding the Value of Intangibles in Business" IMHO. Hubbardaie 02:20, 19 June 2007 (UTC)
Another example is Wikipedia's assessment scale.Hubbardaie 02:22, 19 June 2007 (UTC)
- Sounds like you could make a good contribution to the article, citing the reference you mention above. Rick Norwood 13:27, 19 June 2007 (UTC)
Implementation of Hubbard's comments
I saw where Hubbard was going in the previous comment and I've changed the scope of the article. The article previously had a narrower metrology/physical sciences focus and misstated the meaning of other measurement instruments. I broadened it to be more of a measurement theory focus with an emphasis on observations that reduce uncertainty. BillGosset 12:57, 23 June 2007 (UTC)
- I'm not really happy with the definition, "Measurement is an observation that reduces uncertainty about a quantity." I can see this as one property of measurement, even an important property, but not as a definition. I'm also not sure that "You have cancer," or "This number is in the set {1, 2, 3}," are measurements, as the term is generally understood. In short, I think the article should begin with numerical measurements, which I think you will agree is how the word is commonly used, with "nominal" measurements in a later section. Comments? Rick Norwood 00:53, 5 September 2007 (UTC)
- That definition is actually closer to how measurement is used in the empirical sciences, measurement theory and statistics. It is not uncommon for the general public or even dictionaries to have a different use for a term than how it is used in more quantitative sciences, so I'm not sure that should be our only guide. For example, look up the words "force" or "power" in a dictionary or thesaurus and you will see that there are "accepted" definitions that are very different from how those terms are used in mathematically precise contexts like science and engineering. Regarding your examples, the determination of whether a person has cancer IS a measurement, especially since many tests for such conditions have "type I" and "type II" errors. A person simply goes from a state of having, say, a 50% chance of having a condition to a 95% chance of having it, after a test is given (new observations were made, reducing uncertainty). Your second example ("This number is in the set...") would, by this definition, not qualify as a measurement since it involves no empirical observation, just a logical assertion. Perhaps we could mention both a "popular" definition and the more scientifically meaningful use of the term. We should qualify it, however, by saying that merely representing something with a number without an observation is not the same as a measurement. Since real-world measurements have to have error, the practical definition still comes down to uncertainty reduction based on observation. You will also find that in measurement theory they specifically define measurements in this much broader sense. In fact, many statisticians, scientists and mathematicians would think of the "popular" use of the term as a common fallacy, just as a thesaurus makes the "popular" assertion that heat and temperature are synonyms when, to a physicist, they are quite different. BillGosset 18:50, 5 September 2007 (UTC)
Some of the problems with "Measurement is an observation that reduces uncertainty about a quantity." 1) This implies that you have some knowledge of the quantity before the measurement. That is usually the case, but I can imagine a situation where you have no knowledge about the quantity before the measurement. An example would be Galileo's first "measurement" of the rings of Saturn, which he did not know existed before he observed them. 2) It also implies that the measurement always reduces uncertainty. I can imagine a case where measurement actually increases uncertainty. That is, we are certain about something, but wrong, and the measurement sets us right. 3) Measurement is a physical action, but certainty is a state of mind.
I still think "assigns a number" or something to that effect should come first, with a caveat, and the discussion of qualitative measurements come later in the article. Rick Norwood 12:29, 7 September 2007 (UTC)
- If you are confident in this analysis, then I suggest you publish a paper because - if you are correct - you would overturn most of the measurement theory that has developed over the last several decades. But here is why I think measurement theory and the empirical sciences' view of measurement is safe, addressing each of your concerns: 1) Actually, measurement theory (and information theory, for that matter) doesn't just imply you have some knowledge before you measure - it explicitly states it. And your Galileo example proves it. For example, he knew he was not measuring something that is visible to the naked eye. His new measurement instrument (the telescope) told him more than his own eyes could. 2) Actually, if you work out the chance of various measurement outcomes with a Bayesian analysis method, you will find that the probability weighted average of all outcomes is never an increase in uncertainty. There may be possible results that increase uncertainty, but the math will show that those examples are rarer than those that decrease uncertainty. And if no outcome had any chance of changing uncertainty at all, then it is quite accurate to say there was no measurement. 3) Measurement involves an action because it involves observation. But if that observation did not tell you anything (or, more precisely, cannot tell you anything regardless of the result), then you literally have made no measurement. So measurement involves both the action and the previous state of mind. But this last point simply begs the question about what the definition of measurement should be. Finally, the really big problem with "assigns a number" is that it implies infinite precision that empirical measurements can almost never claim. The result of a measurement is a new probability distribution. It is a range of possible numbers, and the probability distribution over that range makes a shape. This shape is often simplified by simply describing a mean and standard deviation of a measurement (i.e. the error). But in no field of science does every measurement result in "A" number. If that is your criterion for a measurement, then we have not even measured the speed of light, the mass of a proton, or even the density of water. Every single one of those has an error even though it is extremely small. Even these standard physical constants cannot claim the infinite precision of "a" number. —Preceding unsigned comment added by BillGosset (talk • contribs) 22:50, 7 September 2007 (UTC) Oops, I forgot to sign. BillGosset 22:53, 7 September 2007 (UTC)
Oops. Of course by "assigns a number" I was speaking loosely. This article has always pointed out the difference between counts, which may be exact in the case of small numbers, and measurements, which always involve an estimate, a confidence level, and a margin of error.
I find the material from measurement theory interesting, and it certainly belongs in the article. But not necessarily as the first sentence. As a first sentence, I find it confusing, and I think it is more showy than enlightening. Also, according to the quote above, it is not really the definition of a measurement given in measurement theory.
I am not convinced by your arguments on points one and two above. On point one, if every measurement is an improvement on a previous measurement, then there can never be a first measurement, which leads to infinite regress. On point two, you ignore the fact that one can measure inaccurately. If you say that an inaccurate measurement is really no measurement at all, then I have to ask how much inaccuracy turns a measurement into a non-measurement. Rick Norwood 12:52, 8 September 2007 (UTC)
- Since BillGosset is talking about my reference I'll answer this part. Thanks for the interesting discussion. First, I don't say that a measurement is necessarily an improvement on a previous measurement. I say it is an improvement on previous uncertainty. You DO have a previous state of uncertainty about everything. No exceptions, no infinite regress. Second, no, this position doesn't ignore the fact that one can measure inaccurately. In fact, my position is the one that assumes there IS inaccuracy. The amount of inaccuracy that turns a measurement into a non-measurement is simple. The answer is provided both within information theory and measurement theory. If the resulting error is no less than the error of your previous state of uncertainty, then you have no information value and no measurement.
- A measurement must at least be some type of information, and information was mathematically quantified by Claude Shannon in information theory. Shannon described information as, again, uncertainty reduction and he derived a formula to compute it. If a message, observation, signal, or piece of data did not result in uncertainty reduction, then it was not a measurement. More precisely, if the probability weighted average of all possible signals (or observations, messages, etc.) is not a reduction in uncertainty, then Shannon's formula would say the information quantity was zero. Let's get down to the math. In the simplest possible binary situation Shannon's formula gives us:
- H = -p(s)*log2(p(s)) - (1-p(s))*log2(1-p(s))
- where p(s) is the probability of a particular uncertain state of nature. H is the information quantity, or what Shannon called the "entropy" of the signal, and it can be computed both before and after an observation.
- When p(s)=1 or 0 then H=0, when p(s)=.5, then H=1 and after an observation, the resulting p(s) changes. Now we add this special form of Bayes Theorem for binary states:
- p(s|r) = p(s)*p(r|s) / (p(r|s)*p(s) + p(r|~s)*(1-p(s)))
- where p(s|r) is the probability of state s given a signal or measurement result r.
- And we hold the standard probability constraints such as p(s)+p(~s)=1. You will find that you can define no possible measurement system where the prior entropy of the system is less than the probability weighted resulting entropy (i.e. information or measurement). If you conduct a measurement and you increase the uncertainty with one result, Bayes will show that that result will become less likely than the other result, and the weighted average will still be an expected reduction in uncertainty. Of course, the other possible solution is to create a system that has no change whatsoever on the uncertainty regardless of the outcome. For example, perhaps I count orange cars on the freeway to measure the inflation rate in Libya. It would be fair to say that knowing one does not reduce my uncertainty about the other, and this would constitute no measurement at all. In this case the difference between H before and H after is 0. That's the point where it becomes a non-measurement.
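- To make that concrete, here is a minimal Python sketch of the calculation just described (the instrument probabilities p(r|s)=0.9 and p(r|~s)=0.2 are made-up values for illustration only). It computes the prior entropy and the probability-weighted expected entropy after the observation; the latter never exceeds the former.
import math

def binary_entropy(p):
    # Shannon entropy (in bits) of a binary state with probability p
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def posterior(prior, p_r_given_s, p_r_given_not_s):
    # Bayes' theorem for a binary state s given a binary result r
    return p_r_given_s * prior / (p_r_given_s * prior + p_r_given_not_s * (1 - prior))

prior = 0.5                                # p(s): maximal prior uncertainty, H = 1 bit
p_r_given_s, p_r_given_not_s = 0.9, 0.2    # an imperfect "measurement instrument"

p_r = p_r_given_s * prior + p_r_given_not_s * (1 - prior)   # probability of seeing result r

h_prior = binary_entropy(prior)
h_given_r = binary_entropy(posterior(prior, p_r_given_s, p_r_given_not_s))
h_given_not_r = binary_entropy(posterior(prior, 1 - p_r_given_s, 1 - p_r_given_not_s))
expected_h_after = p_r * h_given_r + (1 - p_r) * h_given_not_r

print(h_prior, expected_h_after)  # the expected posterior entropy is below the prior entropy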
- Finally, I'll make a cautionary note: that the general population might use the term in a softer and more ambiguous way should not have much bearing on an encyclopedic entry. I'm sure the general population would find many explanations in physics and math "confusing" even though some of the words might be in common use. I actually talk about these ideas in more detail in my book "How to Measure Anything". Thanks again for the discussion. Hubbardaie 13:45, 8 September 2007 (UTC)
- That math might actually be relevant for a section herein or even a separate article. It is relevant for the discussion of what measurement means and how it may differ in the empirical sciences from the popular use. The main issue I had with how this article read about 4 months ago was that it seemed to be talking about measurements of physical properties only. It made no room for issues of measurement in the social sciences, psychology, economics, etc. BillGosset 17:39, 8 September 2007 (UTC)
Thank you for taking the time to explain your points of view.
I've worked on many math articles in Wikipedia, and the question always arises: Should math articles only be readable by mathematicians? As I understand Wiki policy, every article should have a lede that the educated layperson can understand, and technical jargon (and, especially, equations) should be saved for later in the article.
Shannon used "certainty" and "uncertainty" in a technical sense. I am not an expert on information theory, but I have taught coding theory and am familiar with information theory in a general way. On the other hand, the layman uses certainty and uncertainty to express a state of mind, which is not what Shannon was talking about at all. For example, my uncertainty about the date of the creation of the universe is plus or minus about a billion years. On the other hand, Bishop Ussher was absolutely certain that the universe was created in 4004 B.C. Therefore his "measurement" is more certain (in his own mind) than my "measurement" is in my mind, and using this common meaning of "certain" the discovery of the big bang actually increased uncertainty, and was therefore not a measurement. Now, I know that's not what you meant. But is it what a layperson reading the article will understand the lede to mean?
Rick Norwood 13:52, 9 September 2007 (UTC)
You may have been responding to Hubbard, specifically, but I'll throw in my two cents, again. Yes, certainty can change for completely irrational reasons just as knowledge can change for completely irrational reasons. People may make completely irrational conclusions and call them "logical" and may completely misapply mathematical principles to provide a "mathematical proof". Someone may read the definition of "mathematical proof" and walk away convinced that is the standard they meet for their theory that UFO's caused hurricane Katrina. But I don't think irrational certainty is the relevant issue. There is a rational level of certainty, and people can be trained to provide probabilities that realistically represent their certainty. For example, when a well calibrated person (such as most bookies) says they are 80% certain in a claim, and you track them, you will find that they are right about 80% of the time. Likewise, they are right 90% of the time when they say they are 90% confident, and so on. Shannon includes "Bayesian" uncertainty when he computes what he calls the entropy of a message. Although Bayesian statistics is entirely mathematical, Bayesian probability is sometimes used to mean a subjective probability. That's because Bayes' theorem is about how to adjust existing uncertainty with new information. You would find the use of Bayes throughout papers on information theory. So I would say that to Shannon, uncertainty IS ultimately a state of mind. But, like most problems in math and logic, uncertainty can be assessed rationally regardless of whether some people are irrationally certain.
Now that we have been surprised many times about our beliefs about the universe, we should probably leave room for skepticism for just about anything. Although I find it impossible that future observations will overturn, say, our belief that the sun is not pulled through the sky by chariots, there are probably some beliefs we assume to be true that will be yet disproven. So although our uncertainty is narrower in some fields, I leave more room for future surprising findings.
But I entirely agree with your point about anticipating possible confusion among non-specialists. I would propose that perhaps we could start with a sentence like "Measurement is popularly used to mean..." and then add "But in fields where measurement is dealt with at a more technical level it is used more specifically to mean...". We could even have a section that deals with some of the more philosophical points made herein (as long as it's not original research, of course). The first two citations already cover several of these issues, so we should be able to provide plenty of material. BillGosset 00:39, 10 September 2007 (UTC)
- I think your suggestion will completely resolve any problems I have. Do you want to write it, or shall I?
- On another topic -- I've gotten into heated discussions in other articles over whether the lede should contain footnotes or not. Have you an opinion on this subject? Rick Norwood 13:24, 10 September 2007 (UTC)
- I don't think it would be the lead if this change was made. But I think it might be wise that even the "popular" definition be sourced, perhaps from a dictionary or encyclopedia. I'm not familiar with that particular debate about encyclopedic entries.
- I was also going to comment on another aspect of your earlier proposed definition. I previously discussed the issue of whether a measurement should result in "a number". But I think we should also be careful about the word "assigns". The more technical definition specifically includes observation and I think any definition of measurement should include that. Otherwise, randomly generating a number qualifies as "assigning" a number.
- It might be a few days before I get back to this. If you have time, take a shot at it. If not, I'll probably be able to address it at the end of the week. Thanks. BillGosset 14:24, 10 September 2007 (UTC)
I'm going to be bold and see what happens. Rick Norwood 13:59, 11 September 2007 (UTC)
- Rick/Bill, can you guys help me address the mensuration concerns above? The term needs to get its own article, but then probably also needs a reference from this one...I don't want to dive in until the big rewrite is done, and I don't want to step on toes. - mystery user who doesn't have an account yet 9/12/2007 —Preceding unsigned comment added by 216.17.251.236 (talk) 16:04, 12 September 2007 (UTC)
Spellings
I know that in American English only "meter" is used, whether referring to the unit of measurement or a device that measures something. However, I think it would be clearer to reflect the standard in non-American English of using "Metre" to describe the unit of measurement and "Meter" to describe a device that measures. Jonnyboy5 12:52, 4 October 2007 (UTC)Jonnyboy5
Is estimation correct?
"Measurement is the estimation of the magnitude of some attribute of an object," I always understood that measurement removes the need to estimate - an estimate is a measured guess to be compared to a know meter when available.
I think the article should start: "Measurement is the precise determination of the magnitude of an attribute of an object when compared to a predefined standard," —Preceding unsigned comment added by Simonmcox (talk • contribs) 14:03, 9 November 2007 (UTC)
People who think carefully about measurement, or who use measurement professionally, understand that no measurement is precise. Every measurement is plus or minus some margin of error. Rick Norwood 14:11, 10 November 2007 (UTC)
- Rick Norwood is correct. Measurement - as the article states later - is about uncertainty reduction. At each stage of knowledge of a quantity, we have some uncertainty. Each additional piece of information about that quantity (i.e. a measurement) further reduces the uncertainty but, in most cases in reality, uncertainty is never completely removed. In that sense, each measurement is only an "estimate" that is unneeded once the next measurement is taken. In fact, you can't even do certain types of measurements without explicit use of prior knowledge - that is, an initial estimate. That is what Bayesian analysis does. Hubbardaie 21:23, 12 November 2007 (UTC)
Huge missing section
Looking back at this article before November 30, 2007, it had a lot more material, including discussion of S. S. Stevens and two citations (there are none now). It looks like this deletion was done by a vandal during a period when there seemed to be a lot of vandal activity. Subsequent repairs of vandal activity apparently left this significant content, the citations, and the external links out of the article. Can someone look into this? ERosa (talk) 05:38, 7 January 2008 (UTC)
- I've repaired it. -- Avenue (talk) 09:39, 7 January 2008 (UTC)
need help
hai —Preceding unsigned comment added by 218.248.69.6 (talk) 06:19, 5 September 2008 (UTC)
- Please be specific about what help you might need regarding the topic. Kalivd (talk) 14:47, 8 September 2008 (UTC)
from least to greatest
sara thank that math is the stupitest uglest on earth. —Preceding unsigned comment added by 24.224.5.126 (talk) 19:05, 2 October 2008 (UTC)
The Nagel citation
Nagel cites "Spaier,La Pensée et la Qualité, p. 34" as the source of the definition. Haven't been able to track that reference down, though. —Preceding unsigned comment added by 130.237.44.185 (talk) 14:37, 17 October 2008 (UTC)
concept measurement in construction
I don't know how to explain measurement in construction, so I want to learn about that. Can anybody tell me about measurement in construction? —Preceding unsigned comment added by 60.48.96.89 (talk) 12:53, 15 November 2008 (UTC)
Changes made 2009, Jan
I came to view the definition and discovered the definition contained a form of the term "estimate". This invalidates anything else, since the two are diametrically opposed, at least in the philosophical (philosophy of science) sense. The definition given by Princeton (http://www.google.com/url?sa=X&start=0&oi=define&q=http://wordnet.princeton.edu/perl/webwn%3Fs%3Dmeasurement&usg=AFQjCNGM6oq67ACWDZn6UJwo9bfTcUBxLA) is the best definition I found, for what knowing this is worth.
Measurement error, which comprised much of what was here, should be a separate entry.
The importance of this page should be the highest, not near the lowest. It should be near the highest because everything in science has measurement of some kind at its roots. For example, if one compares how long the concept has existed, one might look at its history. To determine its age relative to another term, one may look at the history of the other term, then one may compare the two. Hence, Wikipedia's existence as a viable tool for information can be said to depend on this term. Kernel.package (talk) 18:45, 22 January 2009 (UTC)
Origin of the word
The text states that measure comes from the Greek metron. Wiktionary, though, mentions the Latin mensura, and says nothing about metron.