Talk:Student's t-test
Calculations
I don't suppose anyone wants to add HOW TO DO a t-test??
- That seems to be a deficiency of a fairly large number of statistics pages. The trouble seems to be that they're getting written by people who've gotten good grades in statistics courses in which the topics are covered, but whose ability does not exceed what that would imply. Maybe I'll be back.... Michael Hardy 22:04, 7 June 2006 (UTC)
- If I have time to learn TeX, maybe I'll do it. I know the calculations, it's just a matter of getting Wikipedia to display it properly. Chris53516 16:17, 19 September 2006 (UTC)
- Those who don't know TeX can present useful changes here on the talk page in ASCII (plain text), and others can translate them into TeX. I can do basic TeX; you can contact me on my talk page to ask for help. (i.e. I can generally translate equations into TeX; I may not be able to help with more advanced TeX questions.) --Coppertwig 11:57, 8 February 2007 (UTC)
- I uploaded some crappy images of the calculations. I don't have time to mess with TeX, so someone that's a little more TeX-savvy (*snicker*) can do it. Chris53516 16:42, 19 September 2006 (UTC)
- User:Michael Hardy converted two of my crappy graphics to TeX, and I used his conversion to do the last. So there you have it, calculations for the t-test. Chris53516 18:21, 19 September 2006 (UTC)
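For anyone still wondering how to actually run the numbers, here is a small self-contained sketch of the equal-variance two-sample calculation discussed in this thread. The data values are made up for illustration, and Python is just a convenient notation:

```python
# A complete worked example of the independent two-sample t-test
# (equal-variance, pooled form). Data values are made up.
import math

group1 = [30.02, 29.99, 30.11, 29.97, 30.01, 29.99]
group2 = [29.89, 29.93, 29.72, 29.98, 30.02, 29.98]

n1, n2 = len(group1), len(group2)
mean1 = sum(group1) / n1
mean2 = sum(group2) / n2

# Unbiased sample variances (divide by n - 1).
var1 = sum((x - mean1) ** 2 for x in group1) / (n1 - 1)
var2 = sum((x - mean2) ** 2 for x in group2) / (n2 - 1)

# Pooled variance and the t statistic.
sp2 = ((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2)
t = (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))
df = n1 + n2 - 2

print(f"t = {t:.4f} with {df} degrees of freedom")
```

With these made-up numbers, t comes out near 1.96 with 10 degrees of freedom, which would then be compared against a t-table.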
- Great. Now, could someone explicit the formulla? I assume that N is the sample size and s the standard deviation, but what are df1/dft? ... OK, I found the meaning of df. I find the notation a bit confusing; it looks a lot like the derivative of a function. Is dft the degrees of freedom of the global population?
- What do you mean: "could someone explicit the formulla (sic)" (emphasis added)? N is the sample size of group 1 or group 2, depending on which number is there; s is the standard deviation; and df is degrees of freedom. There is a degrees-of-freedom value for each group and one for the total. The degrees of freedom for each group are calculated by taking that group's sample size and subtracting one. The total degrees of freedom are calculated by adding the two groups' degrees of freedom, or by subtracting 2 from the total sample size. I will change the formula to reflect this and remove the degrees of freedom. Chris53516 13:56, 11 October 2006 (UTC)
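A tiny sketch of the degrees-of-freedom bookkeeping just described (the sample sizes are invented):

```python
# Degrees-of-freedom bookkeeping for the two-sample test described
# above, with illustrative sample sizes.
n1, n2 = 12, 15

df1 = n1 - 1          # per-group degrees of freedom
df2 = n2 - 1
df_total = df1 + df2  # total degrees of freedom

# Equivalent: subtract 2 from the combined sample size.
assert df_total == (n1 + n2) - 2
print(df1, df2, df_total)  # 11 14 25
```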
- Thanks for the help with doing the calculation, I'm feeling comfortable finding a confidence bound on the Mean - but is there any way to also find a confidence bound on the variation? My real goal is to make a confidence statement like "using a student t-test, these measurements offer a 90% confidence that 99% of the POPULATION would be measured below 5000". —Preceding unsigned comment added by 64.122.234.42 (talk) 14:03, 23 October 2007 (UTC)
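What the anonymous question above asks for is usually called a one-sided tolerance bound, rather than a confidence bound on the mean. A rough sketch, assuming scipy is available; the data values are invented, and the noncentral-t construction of the factor k is the standard one for normal tolerance limits, offered as a pointer rather than a definitive recipe:

```python
# One-sided upper tolerance bound: "with confidence gamma, at least
# proportion p of the population lies below xbar + k*s".
# The tolerance factor k comes from a noncentral t quantile.
# Data values are made up; requires scipy.
import math
from scipy import stats

data = [4810, 4920, 4870, 4750, 4980, 4890, 4850, 4930, 4800, 4910]
n = len(data)
xbar = sum(data) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in data) / (n - 1))

gamma, p = 0.90, 0.99          # 90% confidence, 99% coverage
z_p = stats.norm.ppf(p)        # standard normal quantile for coverage p
k = stats.nct.ppf(gamma, df=n - 1, nc=z_p * math.sqrt(n)) / math.sqrt(n)

upper = xbar + k * s
print(f"k = {k:.3f}, upper bound = {upper:.1f}")
```

If `upper` comes out below 5000 for real data, that supports the "90% confidence that 99% of the population is below 5000" statement the commenter wants.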
independent samples
Should 'assumptions' include the idea that we assume all samples are independent? This seems like a major omission.
history unclear
"but was forced to use a pen name by his employer who regarded the fact that they were using statistics as a trade secret. In fact, Gosset's identity was unknown not only to fellow statisticians but to his employer - the company insisted on the pseudonym so that it could turn a blind eye to the breach of its rules." What breach? Why didn't the company know? If it didn't know, how is it insisting on a pseudonym?
Welch (or Satterthwaite) approximation?
"As the variance of each group is different, the Welch (or Satterthwaite) approximation to the degrees of freedom is used in the test"...
Huh?
--Dan|(talk) 15:00, 19 September 2006 (UTC)
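For what it's worth, the Welch-Satterthwaite approximation referred to in the quoted sentence is a simple formula; a hypothetical sketch (the inputs are invented):

```python
# Welch-Satterthwaite approximation to the degrees of freedom,
# used when the two groups' variances are not assumed equal.
# The inputs below are illustrative, not from any real data set.

def welch_df(s1, n1, s2, n2):
    """Approximate degrees of freedom for the unequal-variance t-test."""
    v1 = s1**2 / n1
    v2 = s2**2 / n2
    return (v1 + v2) ** 2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))

df = welch_df(s1=2.0, n1=10, s2=5.0, n2=20)
print(round(df, 2))  # ~27.22; always between min(n1, n2) - 1 and n1 + n2 - 2
```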
Table?
This article doesn't mention the t-table which appears to be necessary to make sense of the t value. Also, what's the formula used to compute such tables? —Ben FrantzDale 15:07, 12 October 2006 (UTC)
- I'm not sure which table you are referring to or what you mean by "make sense of the t value". Perhaps you mean the table for determining whether t is statistically significant or not. That would be a statistical significance matter, not a matter of just the t-test. Besides, that table is pretty big, and for the basic meaning and calculation of t, it isn't necessary. Chris53516 15:24, 12 October 2006 (UTC)
- I forgot. The calculation for such equations is calculus, and would be rather cumbersome here. It would belong at the statistical significance article, anyway. That, and I don't know the calculus behind p. Chris53516 15:26, 12 October 2006 (UTC)
- Duah, Student's t-distribution has the answer to my question. —Ben FrantzDale 14:55, 13 October 2006 (UTC)
- Glad to be of not-so-much help. :) Chris53516 15:11, 13 October 2006 (UTC)
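To close the loop on the "calculus behind p" question: the p-value a t-table encodes is just a tail area of the t density, which can even be approximated by crude numerical integration. A self-contained sketch with illustrative inputs:

```python
# What a t-table encodes: tail areas of Student's t distribution.
# Self-contained sketch using crude numerical integration of the
# t density; the values of t and df are illustrative.
import math

def t_pdf(x, df):
    # Density of Student's t distribution with df degrees of freedom.
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1.0 + x * x / df) ** (-(df + 1) / 2)

def t_tail(t, df, upper=50.0, steps=20000):
    # Trapezoidal approximation of P(T > t); the density beyond
    # `upper` is negligible for moderate df.
    h = (upper - t) / steps
    total = 0.5 * (t_pdf(t, df) + t_pdf(upper, df))
    for i in range(1, steps):
        total += t_pdf(t + i * h, df)
    return total * h

t_value, df = 2.5, 20
p_two_sided = 2 * t_tail(t_value, df)
print(round(p_two_sided, 4))  # roughly 0.02 for these inputs
```

In practice one would use a library routine or a printed table instead of integrating by hand; this is only meant to show where the numbers come from.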
Are the calculations right?
The article says:
But if you ignore the -1 and -2, say for the biased estimator or if there are lots of samples, then s simplifies to
- s = sqrt( s_1^2/N_2 + s_2^2/N_1 ).
This seems backwards. The external links all divide each standard deviation by its corresponding sample size, which is what I was expecting. So I'd guess there's a typo and the article should have:
- s = sqrt( s_1^2/N_1 + s_2^2/N_2 ).
Can anyone confirm this?
Bleachpuppy 22:14, 17 November 2006 (UTC)
- I think it's right as it stands, but I don't have time to check very carefully. When you multiply s_1^2 by N_1 − 1, you just get the sum of squares of deviations from the sample mean in the first sample. Similarly with "2" instead of "1". So the sum in the numerator is the sum of squares due to error for the two samples combined. Then you divide that sum of squares by its number of degrees of freedom, which is N_1 + N_2 − 2. All pretty standard stuff. Michael Hardy 23:23, 17 November 2006 (UTC)
- ... and I think that just about does it; i.e. I've checked carefully. Michael Hardy 23:29, 17 November 2006 (UTC)
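Michael Hardy's check above can also be done numerically; a small sketch with made-up data (note that statistics.variance is the unbiased, n − 1 estimator):

```python
# Numerical check of the pooled-variance reasoning above, using
# made-up data. statistics.variance is the unbiased (n - 1) estimator.
import statistics

a = [2.1, 2.5, 2.2, 2.8, 2.6]
b = [3.0, 2.9, 3.4, 3.1]

n1, n2 = len(a), len(b)
s1_sq = statistics.variance(a)
s2_sq = statistics.variance(b)

# (n - 1) * s^2 recovers the raw sum of squared deviations.
ss1 = sum((x - statistics.mean(a)) ** 2 for x in a)
ss2 = sum((x - statistics.mean(b)) ** 2 for x in b)
assert abs((n1 - 1) * s1_sq - ss1) < 1e-9
assert abs((n2 - 1) * s2_sq - ss2) < 1e-9

# Pooled variance: combined sum of squares over combined df.
pooled = ((n1 - 1) * s1_sq + (n2 - 1) * s2_sq) / (n1 + n2 - 2)
assert abs(pooled - (ss1 + ss2) / (n1 + n2 - 2)) < 1e-9
print("all identities hold")
```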
- Please provide a citation or derivation. I think Bleachpuppy is right that the subscripts have been switched. Suppose N_2 is a very large number, and N_1, s_1 and s_2 are of moderate and comparable size (i.e. N_2 is very large in comparison to any of the other numbers involved). In this case x̄_2 is in effect known almost perfectly, so the formula should reduce to a close approximation of the t-distribution for the case where sample 1 is being compared to a fixed null-hypothesis mean, which in this case is closely estimated by x̄_2. In other words, it should be approximately equal to:
- t = (x̄_1 − x̄_2) / (s_1 / sqrt(N_1))
- But apparently the formula as written does not reduce to this; instead it reduces to approximately:
- t = (x̄_1 − x̄_2) / (s_2 / sqrt(N_1))
- This is claiming that this statistical test depends critically on s_2. But since N_2 is a very large number in this example, s_2 should be pretty much irrelevant; we know x̄_2 with great precision regardless of the value of s_2, as long as s_2 is not also a very large number. And the test should depend on the value of s_1 but does not. --Coppertwig 12:45, 19 November 2006 (UTC)
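Coppertwig's limiting argument is easy to probe numerically. A sketch using the pooled (equal-variance) formula with invented numbers; with N_2 very large, the pooled statistic matches a one-sample-style statistic built from s_2, not s_1:

```python
# Limiting behaviour of the pooled two-sample t statistic when one
# sample is enormous. All numbers are invented for illustration.
import math

def pooled_t(mean1, s1, n1, mean2, s2, n2):
    # Equal-variance (pooled) two-sample t statistic.
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Make N2 huge relative to everything else.
t = pooled_t(mean1=5.0, s1=1.0, n1=10, mean2=4.0, s2=2.0, n2=10**6)

# One-sample-style t against a known mean, using s2 (not s1):
approx = (5.0 - 4.0) / (2.0 / math.sqrt(10))
print(round(t, 3), round(approx, 3))  # both come out ~1.581
```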
- All I have with me right now is an intro to stat textbook: Jaccard & Becker, 1997. Statistics for the behavioral sciences. On page 265, it verifies the original formula. I have many more advanced books in my office, but I won't be there until tomorrow. -Nicktalk 21:02, 19 November 2006 (UTC)
- P.S. none of the external links really have any useful information on them (they especially lack formulas). Everything that I've come across on the web uses the formula as currently listed in the article. -Nicktalk 21:29, 19 November 2006 (UTC)
- The original formula is also confirmed by Hays (1994) Statistics p. 326. -Nicktalk 19:36, 20 November 2006 (UTC)
- OK! I see what's wrong!! The formula is a correct formula. However, the article does not state to what problem that formula is a solution! I assumed that the variances of the two populations could differ from each other. Apparently that formula is correct if you're looking at a problem where you know the variance of the two distributions is the same, even though you don't know what the value of the variance is. I'll put that into the article. --Coppertwig 03:33, 21 November 2006 (UTC)
I know these calculations are correct; I simply didn't have my textbook for a citation. Keep in mind that much of the time we strive to have equal sample sizes between the groups, which makes the calculation of t much easier. I will clarify this in the text. – Chris53516 (Talk) 14:28, 21 November 2006 (UTC)
I'm not certain, but it looks like the calculations don't match the graphic formula; n=6 in the problem, but n=8 in the graphic formula. 24.82.209.151 07:54, 23 January 2007 (UTC)
These are wrong; they do not match each other. In the first you need to divide by 2, and in the second you need to drop the multiplication by (1/n1 + 1/n2). That makes them match. -DC
Extra 2?
Where the text reads, "Where s2 is the grand standard deviation..." I can't tell what that two is referring to. It doesn't appear in the formula above or as a reference. 198.60.114.249 23:29, 14 December 2006 (UTC)
- The equation you're looking for can be found at standard deviation. It was not included in this page because it would be redundant. However, I will add a link to it in the text you read. — Chris53516 (Talk) 02:38, 15 December 2006 (UTC)
- Thanks Chris! 198.60.114.249 07:23, 15 December 2006 (UTC)
I wanna buy a vowel ...
I may be off my medication or something, but does this make sense to anyone? :
"In fact, Gosset's identity was unknown not only to fellow statisticians but to his employer—the company insisted on the pseudonym so that it could turn a blind eye to the breach of its rules."
So Gosset works for Guinness. Gosset uses a pen-name cuz Guinness told him to. But, um ... Guinness doesn't know who he is and doesn't want to know. So they can turn a blind eye.
So they told this person - they know not whom - to use the pen-name.
I know this was a beer factory and all but ... somebody help me out here.
CeilingCrash 05:28, 24 January 2007 (UTC)
- I don't know the history, but maybe they promulgated a general regulation: If you publish anything on your research, use a pseudonym and don't tell us about it. Michael Hardy 20:13, 3 May 2007 (UTC)
- Maybe it should read "a pseudonym" instead of "the pseudonym". I'm not so sure management did not know his identity, however. My recollection of the history is that management gave him permission to publish this important paper, but only under a pseudonym. Guinness did not allow publications for reasons of secrecy. Can someone research this and clear it up?--141.149.181.4 14:45, 5 May 2007 (UTC)
Unfortunately I have no sources at hand, but the story as I heard it is that Guinness had (/has?) regulations about confidentiality covering all processes used in the factory. Since Gosset used his formulas for grain selection, they fell under the regulations, so he couldn't publish. He then published under the pseudonym, probably with the unofficial knowledge and consent of the company, which officially couldn't recognize the work as his, due to the regulations.
a medical editor's clarification
The correct way of expressing this test is "Student ''t'' test". The word "Student" is not possessive; there is no "apostrophe s" on it. The lowercase "t" is always italicized. And there is no hyphen between the "t" and "test". It's simply "Student ''t'' test".
I'm a medical editor, and this is according to the ''American Medical Association Manual of Style'', 9th edition. Sorry, I don't really know how to change it - I'm more a word person than a technology person. But I just wanted to correct this. Thank you! -- Carlct1 16:40, 7 February 2007 (UTC)
- You need to close those comma edits. When you want bold text, close it off like this:
'''bold'''
, and it will appear like this: bold. Please edit your comment above so it makes more sense using this information. — Chris53516 (Talk) 17:00, 7 February 2007 (UTC)
- I'm not sure you are correct about the possessive use. As the article notes, "Student" was Gosset's pen name, which would require a possessive s after the name; otherwise, what does the s mean? The italics on t are left off of the article name because they can't be used in the heading. There are other problems like this all over Wikipedia; it's a technical limitation. By the way, I see both "t-test" and "t test" used on the web, and I'm not sure that either is correct. — Chris53516 (Talk) 17:05, 7 February 2007 (UTC)
Recent edit causing page not to display properly -- needs to be fixed
Re this edit: 10:47, 8 February 2007 by 58.69.201.190 I see useful changes here; but it's not displaying properly, and also I suggest continuing to provide the equation for the unbiased estimate in addition to the link to the definition of it. I.e. I suggest combining parts of the previous version with this edit. I don't have time to fix it at the moment. --Coppertwig 11:53, 8 February 2007 (UTC)
Looking at it again, I'm not sure any useful material was added by that edit, (I had been confused looking at the diff display), so I've simply reverted it. --Coppertwig 13:02, 8 February 2007 (UTC)
Equal sample sizes misses a factor sqrt(n)
The formula with equal sample size should be a special case of the formula with unequal sample size. However, looking at the formula for the t-test with unequal sample size:
- t = (x̄_1 − x̄_2) / sqrt( [ ((N_1 − 1) s_1^2 + (N_2 − 1) s_2^2) / (N_1 + N_2 − 2) ] · (1/N_1 + 1/N_2) )
and setting n = N_1 = N_2 yields
- t = (x̄_1 − x̄_2) / sqrt( (s_1^2 + s_2^2) / n ) = sqrt(n) · (x̄_1 − x̄_2) / sqrt( s_1^2 + s_2^2 ).
The factor of sqrt(n) should be correct in the limit of large n. However, there might be a problem, since setting N_1 = N_2 reduces the degrees of freedom by one. Does anyone know the correct answer?
Oliver.duerr 09:13, 20 February 2007 (UTC)
- I don't have the answer, but I agree that the two formulas don't match. 128.186.38.50 15:37, 10 May 2007 (UTC)
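The equal-sample-size algebra can be checked numerically; under the pooled formula, setting N_1 = N_2 = n gives the sqrt(n) form exactly (numbers invented):

```python
# Check that the pooled two-sample t statistic with equal sample
# sizes reduces to the sqrt(n) form. All inputs are illustrative.
import math

def pooled_t(mean1, s1, n1, mean2, s2, n2):
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

n = 7  # illustrative common sample size
t_general = pooled_t(3.2, 1.1, n, 2.5, 0.9, n)
t_equal_n = (3.2 - 2.5) / math.sqrt((1.1**2 + 0.9**2) / n)
print(abs(t_general - t_equal_n) < 1e-9)  # True: the two forms agree
```

So the two published forms are algebraically consistent; the sqrt(n) factor appears because sqrt(1/n + 1/n) = sqrt(2/n) cancels the 2 in the pooled variance (s_1^2 + s_2^2)/2.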
Explaining a revert
I just reverted "t tests" from singular back to plural. The reason is that it's introducing a list of different tests, so there is more than one kind of t test. I mean to put this in the edit summary but accidentally clicked the wrong button. --Coppertwig 19:55, 25 February 2007 (UTC)
Copied from another source?
Why is this line in the text? [The author erroneously calculates the sample standard deviation by dividing N. Instead, we should divide n-1, so the correct value is 0.0497].
To me, this suggests that portions of the article were copied from another, uncited source. If true, this is copyright infringement and needs to be fixed right away.
- I can't find any internet-based source for the text in that part of the article. I think the line might be directed toward the author of the Wikipedia article, as it seems to point out an error. I removed it, and will look into the error. -Nicktalk 00:38, 19 March 2007 (UTC)
By the way; the line should be read carefully. It is correct. As this is an estimate based on an estimate, it should have been divided by n-1, so the correct value is 0.0497. Can someone please change this?
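The N versus n − 1 point in this thread is easy to see numerically; a small sketch with made-up data:

```python
# Biased (divide by N) vs unbiased (divide by n - 1) estimates of
# the standard deviation. Data values are made up for illustration.
import math
import statistics

data = [0.12, 0.18, 0.09, 0.15, 0.21, 0.11]
n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)

sd_biased = math.sqrt(ss / n)          # what the quoted line complains about
sd_unbiased = math.sqrt(ss / (n - 1))  # the recommended estimator

# The statistics module implements both conventions.
assert abs(sd_biased - statistics.pstdev(data)) < 1e-12
assert abs(sd_unbiased - statistics.stdev(data)) < 1e-12
print(sd_biased < sd_unbiased)  # True: dividing by N understates the spread
```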
Testing Normality
The call: I think it would be appropriate to change the wording "normal distribution of data, tested by ..." as these tests for normality are only good for establishing that the data is not drawn from a normal distribution.
The background: Tests for normality (e.g. the Shapiro-Wilk test) test the null hypothesis that the data are normally distributed against the alternative hypothesis that they are not. Failing to reject the null does not establish normality; how much evidence non-rejection provides can only be statistically evaluated by looking at the power of the test.
The evidence: Spiegelhalter (Biometrika, 1980, Table 2) shows that the power of the Shapiro-Wilk test can be very low. There are non-normal distributions for which, with 50 observations, the test correctly rejects the null hypothesis of normality only 8% (!) of the time.
At least two possible solutions: (1) Drop the statement that the assumption of normality can be tested. (2) Indicate that one can test whether the data are not normally distributed, pointing out that failure to reject normality does not mean that the data are normally distributed, given the low power of these tests.
Schlag 11:55, 27 June 2007 (UTC)
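Schlag's power point can be illustrated by simulation, assuming numpy and scipy are available; the choice of a t distribution with 10 df as the non-normal alternative is my own illustration, not Spiegelhalter's example:

```python
# Simulation sketch of the low-power point above: draw samples of
# size 50 from a Student t distribution with 10 df (non-normal, but
# close to normal) and see how often Shapiro-Wilk rejects at the 5%
# level. Requires numpy and scipy; all settings are illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n, rejections = 500, 50, 0
for _ in range(n_sims):
    sample = rng.standard_t(df=10, size=n)
    if stats.shapiro(sample).pvalue < 0.05:
        rejections += 1

print(f"rejection rate: {rejections / n_sims:.2f}")  # typically well below 0.5
```

With a seeded run the rejection rate lands far below 50%, i.e. the test usually fails to flag this non-normal distribution at n = 50.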
- If you perform any of these tests before doing a t-test, the p-value under the null hypothesis will no longer be uniformly distributed. This entire section is bad statistical advice (although it is commonly done in practice). Hadleywickham 07:27, 9 July 2007 (UTC)