Wikipedia:Reference desk/Science
Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


November 7

Beyond us

Are the problems in the world simply beyond us? Poverty, AIDS, human rights abuses, war. I am just wondering if that is what people learn as they try to accomplish things, only to find their efforts futile. Perhaps this should go in the misc section, but I was thinking that science should have the answer, above all things, right? AdbMonkey (talk) 00:22, 7 November 2010 (UTC)[reply]

It remains to be seen, AdbMonkey. We are awaiting the empirical results on that. Your last question reminds me of something I heard on NPR the other day, I'll see if I can dig that up for you in a moment... WikiDao(talk) 00:38, 7 November 2010 (UTC)[reply]
All you mention are problems we ourselves have created. The real question is, do we have the will to undo them? Perhaps we should start with renaming everyone's "defense" department back to their "department of war." Unfortunately, from war to pharmaceuticals, all is driven by the quest for profits. Unless of course you're just power hungry. Not to mention that the last "politician" I can think of where that moniker was not a dirty word was (for me in the U.S.) Hubert Humphrey. PЄTЄRS J VЄСRUМВАTALK 01:56, 7 November 2010 (UTC)[reply]
(e/c) Okay, that was a discussion on "Science and Morality" with Steven Pinker, Sam Harris, Simon Blackburn, and Lawrence Krauss. The lead-in to the program is:

"Did we evolve our sense of right and wrong, just like our opposable thumbs? Could scientific research ever turn up new facts to resolve sticky moral arguments such as euthanasia, or gay marriage? In this hour of Science Friday, we'll talk with philosophers and scientists about the origins of human values. Our guests are participating in an international conference entitled “The Origins of Morality: Evolution, Neuroscience and Their Implications (if Any) for Normative Ethics and Meta-ethics” being held in Tempe, Arizona on November 5-7. Listen in to their debate, and share your thoughts."

So far, I have only heard about the first 15 minutes of the program myself. But it may be relevant. If so, I'll comment further after having listened to the rest of it. :) Regards, WikiDao(talk) 02:10, 7 November 2010 (UTC)[reply]
To the OP: Not at all. Instead of comparing the current world to an ideal perfect world, without any form of suffering, compare today's world to that of, say, 50 years ago. Or 100 years ago. Or 200 years ago. Or 1000 years ago. At any point in history, arbitrarily far in the past, a higher proportion of people were abjectly poor, or diseased, or died young, or led any number of other miserable existences. We've known about AIDS for about 30 years. Smallpox and plague we knew about for centuries before they were finally brought under control. Give it time. It only seems like things are bad because you are living through them. Things were infinitely shittier before you were born. --Jayron32 03:23, 7 November 2010 (UTC)[reply]
That's a good point, and a good way of putting it. But, Jayron, the world today is arguably in a fundamentally different position than at any previous time with regard to humanity and its impacts on itself (overpopulation, technology, etc) and the world in general (environmentally, etc). It's a complex system, and as it gets more complex it gets more difficult to tell what's going to happen next... WikiDao(talk) 03:47, 7 November 2010 (UTC)[reply]
I'm not so sure about that. Ever since people began living in cities, they have been creating problems for themselves that living in caves did not. As bad as pollution was at the height of the Industrial Revolution, it still didn't cause as much death and illness as something as plainly simple as people shitting in the middle of the street. Progress tends to, on the balance, result in a higher standard of living across the board. It is true that technology and advancement cause unique problems, but they solve more problems than they cause. Despite the problems with pollution, the Industrial Revolution in the UK saw the greatest population explosion that country has ever experienced. And as the modern economy has evolved, pollution has gotten better. Yes, it is still a problem, but not nearly the problem it was in the middle 1800s. Thomas Malthus's predictions haven't come true, because his assumption that food production growth would be linear doesn't hold up. Food production has kept up with population growth because of technological advancements. The only major problems with overpopulation are politically created; it's not that the food doesn't exist to feed people, or that the technology doesn't exist to fix the problems of overpopulation, it's just that the political will to actually fix the problems lags behind the technological advancements. But that has always been so, and what has also always been so is that it eventually catches up. --Jayron32 03:59, 7 November 2010 (UTC)[reply]
Okay, sure, and if it does work out for humanity (and we ought to find out within the next century or so -- if we get through the tail-end here of the population explosion that has gone hand-in-hand with technological and social progress, it'll happen or not within the next hundred years) – if it does work out, it will be because it is as you describe it. I take your points about cities and failed Malthusian expectations. Still, we are globally overpopulated now. It's a closed complex system, and we've run up against the boundaries. What happens in a petri-dish when the bacteria run out of nutrients and fill it with their waste-products...? WikiDao(talk) 04:16, 7 November 2010 (UTC)[reply]
Actually, technological and social progress tends to lead to LOWER population growth, not more. See Demographic-economic paradox, aka The Paradox of Prosperity. In highly developed nations, like most of Western Europe and North America, the birth rate is below replacement rate, and these nations have to import workers from less developed nations just to do all of the work that the kids they aren't having aren't doing. The real question is what is going to happen to the world when EVERY country is so developed that we're all operating at below replacement rate. The trend would indicate that we're going to have the OPPOSITE of a Malthusian catastrophe, in that as we become more advanced, we don't even have enough kids to maintain a steady population. --Jayron32 04:24, 7 November 2010 (UTC)[reply]
[Image, left: a sigmoid curve]
[Image, right: human population growth]
Yes. We are aiming for something like the diagram (of a sigmoid curve) shown at the left. That region on the upper-right-hand side is the region we are just entering now (see the diagram on the right and the World Population article, which says, "In the 20th century, the world saw the biggest increase in its population in human history due to lessening of the mortality rate in many countries due to medical advances and massive increase in agricultural productivity attributed to the Green Revolution," which is one of the things I was saying, too). WikiDao(talk) 04:36, 7 November 2010 (UTC)[reply]
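(Aside, for readers unfamiliar with the term: the "sigmoid curve" referred to above is, in the simplest textbook treatment, a logistic curve. A generic form, not the specific model used in the World population article, is
\[ P(t) \;=\; \frac{K}{1 + e^{-r\,(t - t_0)}} , \]
where K is the carrying capacity at which the curve plateaus, r is the growth rate, and t_0 is the time of fastest growth. The "upper-right-hand side" of the curve mentioned above is the region t ≫ t_0, where P(t) levels off toward K.)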

Ah, it's he or she of the very funny user page, again! Hi, User:AdbMonkey! You might like to have a look at The Revolution of Hope, by the sociologist Erich Fromm. In one of his books, The Sane Society, I think it was, he makes a case, based on sociological metrics like suicide rates, alcoholism incidence, etc., for the idea that current Western society is more screwed up than it ever has been before. I've never seen his numbers discussed anywhere else, but (as I recall, it's been ten or twenty years) he claims the occurrence of such signs of distress is astronomically higher than ever before in recorded history. He's of the opinion that our technological development has way, way outstripped our moral intelligence, that we're like toddlers who think we can use fire responsibly because we know how to start one. (I agree with this assessment, FWIW; it seems obvious to me.) Along that same line, Fromm points out that if something can be done with technology, then eventually it almost certainly will be done, by someone, somewhere in the world; in this way technology has its own inertia that sweeps us all along without much conscious choice or deliberation about whether the changes it introduces are what we really want, how we really want to live on the Earth. This is no way to run a planet, in Fromm's view. ;-) Then, in The Revolution of Hope (subtitled "Toward a humanized technology") he gives one part of his "prescription" for what ails us as a people. Good fun, his ideas stretch the intellect, imo. Best,  – OhioStandard (talk) 04:36, 7 November 2010 (UTC)[reply]

Take a look at Malthusian catastrophe, The World Without Us, tipping point (sociology), philosophy of war, transhumanism, evolution of evolution and neuroplasticity. ~AH1(TCU) 17:02, 7 November 2010 (UTC)[reply]


Thanks WikiDao. I have listened to some lectures by Pinker before. I'm not sure how he would answer the question. Thanks Jayron, for your way to reframe how I look at the world. Ohiostandard, you kill me. I was really wondering if there perhaps is a large school of one-time do-gooders who have simply become jaded with trying to effectuate positive change (please, no questions of how we define positive and negative). I was just wondering if perhaps there was an ascribed term for this besides just "went through a green phase" or "became jaded" or "grew up and realized how the world worked." I hope I am making sense? I'm thinking specifically of a young person who is excited and hopeful and eager to make a positive change in the world, or help build schools, or volunteer with Médecins Sans Frontières, but who gets weighed down and burnt out after a while, when they see that their efforts don't make the dent they were hoping for. I was just wondering if older people have any wisdom about this, or if anyone knew any books about how to prevent this feeling of hopelessness, if there is a psychological scientific name for this, or if it simply falls into the realm of motivational speaking. I hope I am making sense. AdbMonkey (talk) 19:31, 7 November 2010 (UTC)[reply]

Creeping nihilism? I say the best way to deal with it is to just stare it down. (You have nothing to lose by the attempt, right?;) WikiDao(talk) 19:59, 7 November 2010 (UTC)[reply]

Dear Wikidao. Is that all it is? I thought nihilism was when an anarchist destroyed everything. I just thought maybe there was a German term for this or something. I like you, wikidao. You're very dao about things, and that is so nice. Even your name is cute, because it has wiki and dao put together in it. Ok, well I know this isn't supposed to be socially jabbered up with public declarations or odes to other users, but I just couldn't help telling you how I like the cutesiness of your name. (If you meant it to come across like a strong, karate-kicking ninja, I apologize.) So yeah, thanks for that NPR link. AdbMonkey (talk) 04:34, 8 November 2010 (UTC)[reply]

Well I'm flattered and all, I'm sure, monkey, but this really is not the place - I'll respond more to all that on your talk page. :)
So nihilism isn't really what you were describing? I realize the word has an "anarchist" feel to it, but didn't mean to suggest that aspect of it. And I wasn't really sure, just guessing (note the question mark there, after "Creeping nihilism?"). You don't mean just "disillusionment", though, right? So you are thinking of a "German term", then? I'm sure there must be one, perhaps someone else will get it. Regards, WikiDao(talk) 05:07, 8 November 2010 (UTC)[reply]

magazine

Can you buy famous older issues directly from Playboy? —Preceding unsigned comment added by Kj650 (talkcontribs) 01:59, 7 November 2010 (UTC)[reply]

This is science? Anyway, the Playboy store only has some issues available from the 1980s forward, plus a reproduction of the Marilyn Monroe one. Clarityfiend (talk) 03:34, 7 November 2010 (UTC)[reply]
Well, it's definitely falsifiable ... 63.17.41.3 (talk) 03:59, 10 November 2010 (UTC)[reply]
IIRC, Playboy's website has digitized versions of every issue ever produced, so you can at least access it if you pay the subscription fee. There are other magazines which offer free digital versions of their back issues. Sports Illustrated has a full collection of scanned issues (not just text, but full digital scans of every page of every magazine) going back to their first issue, and it is entirely free to browse. It's also fully text searchable. --Jayron32 04:28, 7 November 2010 (UTC)[reply]
Interesting. The Playboy Archive lets you [cough, cough] "read" 53 back issues for free. And Jayron is correct; apparently you can get digital versions for each decade.[1] Clarityfiend (talk) 04:57, 7 November 2010 (UTC)[reply]

pollution question

How much chemical does it take for the environment to be labeled as contaminated? Is it the same for every chemical? —Preceding unsigned comment added by 75.138.217.43 (talk) 02:04, 7 November 2010 (UTC)[reply]

It varies. I assume it's related to how toxic the chemical is, how well the environment can handle it, and how long it takes for it to be biodegraded. For example, oil in the Gulf is not as big a problem as it would be near Alaska. The Gulf has lots of mechanisms to deal with oil (bacteria, warmth, etc.); Alaska doesn't. Salt would not be a problem near the ocean, but would be near the Great Lakes. Ariel. (talk) 02:21, 7 November 2010 (UTC)[reply]
I do have to respond to this by saying that everything in the environment is made entirely of chemicals. So, it's obviously not the same for every chemical. I'm guessing our questioner is referring to chemicals widely regarded as pollutants, and the answer is the same for them: it varies. Another perspective is that what is good for some plants will be deadly for others. So, huge variation. HiLo48 (talk) 03:58, 7 November 2010 (UTC)[reply]

Electronics/Physics question about laptop power supplies

Hi, all. Can any electronics/physics guru tell me what's likely to happen if I plug a newish Gateway laptop into mains via an old AC/DC power converter "brick" that's from a (much earlier, monochrome screen) Gateway laptop?

Specs for the converter/brick that came with the new laptop are DC Output 19V, 3.42 Amps, according to a label affixed to it. There's also a symbol between the "Volts" number and the "Amps" number that consists of a short, horizontal line with a parallel dashed line (in three segments) positioned just below it. I presume this has to do with the polarity of the bayonet-style connection jack that plugs in to the laptop? That jack is also represented figuratively by the familiar "two concentric rings" graphic that shows that its "negative" pole is on the periphery/outside of the jack, with the "positive" pole located at the center. A label on the bottom of the new laptop also says 19 Volts and 3.42 Amps, btw. The old converter/brick has the same parallel lines symbol as the new one, but no "concentric rings" graphic, and a legend that says it's specified to deliver 19 Volts and just 2.64 Amps. This would seem to mean that the "new" power supply is capable of delivering 30% more current at 19 Volts than the "old" one can, also at 19 Volts, right?

So if I try this, is something likely to melt, and if so, what? The power supply? The computer? Or might everything still be within tolerance? I could try it with the laptop battery installed and fully charged, of course; would that be safer in case the power supply fries? And if this would be a really dumb thing to attempt, would it be sufficiently less dumb if I were to try running the new laptop only in some very low-power mode while connected to mains via the old converter/brick?

No penalty for informed guessing: I probably won't try this, regardless of the advice I get here. But I'll formally state that, as an adult, I alone am responsible for the consequences of my actions. That means I won't blame anyone else if I try this and it disrupts the fabric of the space-time continuum, sets my house on fire, or worse, fries my computer. If anyone wants to explain the physics of what's likely to happen, I'd be interested to know that, too, since that's at least half my interest in asking this question. Thanks!  – OhioStandard (talk) 04:04, 7 November 2010 (UTC)[reply]

Well, we don't yet have a guideline against answering disrupting-the-fabric-of-the-space-time-continuum advice questions so... ;) The solid-and-dashed-line symbol is a well-known symbol for DC. The AC symbol is a sine wave. It is 99% likely that the polarity of your DC output hasn't changed (keep in mind that 83% of statistics are made up on the spot). It is very rare that the "shield" (outer-most parts) of a device/plug is NOT designed to be the ground/earth/negative terminal. As to what could happen if you plugged it in...it depends. If your old power supply has overload protection built-in it might switch itself off if you try to draw too much current. Or it might just run a bit hotter than normal. Or it might overheat, melt something internally and catch alight. Either way using a full battery would lessen the current draw when you plugged it in. Note that even though the power supply is rated for X amps, it doesn't mean that the laptop draws X amps. It is likely that the power supply is over-designed by at least 10% compared to the maximum laptop draw current. YMMV. Regards, Zunaid 05:44, 7 November 2010 (UTC)[reply]
Thanks, Zunaid! That's the word I was reaching for, "overdesigned". I was wondering if the old power supply might be sufficiently overdesigned to allow the swap; good word. The laptop itself does have a sticker affixed to it that says 19 V and 3.42 A, just like the power supply that shipped with it, but I understand that may not mean much. E.g. maybe it only draws that high a current when Bluetooth power is on, there are three PC cards inserted, its Ethernet circuits are busy, the DVD writer is writing, etc. etc., i.e. when the computer is operating at maximum load.
But can you also give me some feeling for what would happen, in terms of the relevant physics equations, if such a state occurred when I was using the old (2.64 A) power supply? In terms, for example, of Ohm's Law? Voltage is fixed, right? So if you start adding "loads" (fire up the DVD burner, turn on wireless networking, etc.) that does what, increase the overall resistance? ( Can you tell I'm no prodigy re this stuff? ;-) If that's correct, then ... Well, then I'm out to sea, I'm afraid. But I have the vaguely formed idea that one of the three Ohm's Law variables will somehow become too extreme, and bad things might happen.
I guess besides just knowing I could damage hardware, I'm also trying to get some glimpse about how the dynamic interaction of those variables might change with increasing load, how different parts and subsystems (or even running software routines?) might be adversely affected when one of those variables deviates too far outside normal limits. I know there's no unified, simple answer for all cases like this, but am I at least thinking of this at all correctly? What, for example, would happen in an exaggerated instance similar to this case? What if I hooked up my "19 V, 3.42 A" laptop to a "19 V, 0.5 A" power supply? Apart from the smoke billowing from the power supply or the hard drive spinning at one-third of its normal speed (just kidding), is there any way to know what would be going on re the variables of Ohm's Law? Any way, from just the relevant equations, to demonstrate on paper why doing so would be bad? I could hardly be more ignorant about electricity, I'm afraid, but I'd like to be able to understand, just from formulae, if possible, what happens when a device "wants" to draw more current than a power supply can deliver.  – OhioStandard (talk) 06:43, 7 November 2010 (UTC)[reply]
Modern power supplies are Switch-mode power supplies with circuitry much more complex than simple Ohm's-law calculations, but the article doesn't say how they behave under overload conditions. My instinct is that they will just reduce the output voltage (and overheat slightly rather than bursting into flame), but I haven't run any tests to confirm or refute this claim. I have successfully run a laptop with the wrong power supply, but not under serious overload, and I wouldn't recommend the practice. Dbfirs 07:59, 7 November 2010 (UTC)[reply]
The key terms here are internal resistance and electrical power. The power supply has internal resistance (like a battery does - see the article), and the power dissipated (as heat) is V²/R or I²R. For this it's probably easier to use the equation with I (current, amps): try to draw 3.42 A from a 0.5 A supply and you will be generating about 40 times as much heat - hence it gets hot, and possibly breaks. There's more detail and explanation on this if asked. Using the V²/R equation is more complicated than it seems because the power supply switches on and off - in short, V is not 19 V, but a higher voltage in pulses.
Actually, the relationship between current and heat given off in the power supply is not quite as in the example above... in fact the heat will be about proportional to the current for your example (because of the way SMPSs work), which means about 7 times in the above example. Sf5xeplus (talk) 08:58, 7 November 2010 (UTC)[reply]
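(To put rough numbers on the two scaling rules above, under the simplifying assumption that the supply's internal loss grows either with the square of the output current, as in fixed-resistance dissipation, or roughly linearly with the current, as suggested for switch-mode supplies:
\[
\frac{P_{\text{loss}}(3.42\,\text{A})}{P_{\text{loss}}(0.5\,\text{A})} = \left(\frac{3.42}{0.5}\right)^{2} \approx 47 \quad\text{(if } P_{\text{loss}} = I^{2}R_{\text{int}}\text{)}, \qquad \frac{3.42}{0.5} \approx 7 \quad\text{(if } P_{\text{loss}} \propto I\text{)}.
\]
For the actual 2.64 A brick being asked about, the same ratios are only about (3.42/2.64)² ≈ 1.7 and 3.42/2.64 ≈ 1.3, i.e. very roughly 30-70% more heat at full load than the brick was designed to shed, which is consistent with the "runs a bit hotter than normal" possibility mentioned earlier.)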
In practice the power supply will have some sort of overload protection built in (probably by law). I'm not so sure that it would reduce the voltage as suggested by Dbfirs above, but it might. What I'd guess is that it either detects that the current is over the maximum rating (using a Hall effect sensor) and/or detects when the device gets too hot (temperature sensor), and then shuts off.
Note that if the power supply is an old transformer (heavy) type, the situation is a bit different. Sf5xeplus (talk) 08:24, 7 November 2010 (UTC)[reply]
Oh - it's the power supply that is likely to melt, not the computer. Though if the computer is run using a lower voltage than designed it may work, there is an increasing likelihood of the processor malfunctioning (not a permanent effect - just a crash as it freezes up or gets its sums wrong...). Sf5xeplus (talk) 08:47, 7 November 2010 (UTC)[reply]
You could well be correct about not reducing the voltage on overload. It was the old transformer-type supplies that behaved in that way. Surely someone has investigated the behaviour of modern supplies under overload? Do they just switch off? I haven't time to do the experiment just now. Dbfirs 10:30, 7 November 2010 (UTC)[reply]
Thanks, Dbfirs! Thanks Sf5xeplus! I appreciate your comments on this. So the likelihood seems to be that a modern power supply would probably have some variety of overlimit mechanism that would implement a non-catastrophic fail or suspension of current? Non-catastrophic to the connected computer, I mean, if not necessarily to the external power converter "brick" itself?  – OhioStandard (talk) 04:57, 8 November 2010 (UTC)[reply]

Cholera treatment

Somewhere several years ago I heard that Gatorade would be almost ideal for treatment of Cholera (presumably used for rehydration). I haven't been able to find anything to confirm this since, though, so how plausible an idea is this? Ks0stm (TCG) 04:11, 7 November 2010 (UTC)[reply]

According to the article you link, in the lead: "The severity of the diarrhea and vomiting can lead to rapid dehydration and electrolyte imbalance. Primary treatment is with oral or intravenous rehydration solutions." Presumably, in a pinch, Gatorade would work. From reading the article, it seems that the main problem with Cholera is that the massive diarrhea and vomiting cause such rapid dehydration and electrolyte problems that they can kill you before your body has a chance to fight the infection. See also Oral rehydration therapy. --Jayron32 04:20, 7 November 2010 (UTC)[reply]
Well my main question rephrased was basically whether Gatorade (or Powerade, etc) would be effective for oral rehydration when stricken with Cholera, due to such drinks having the water, salt, and electrolytes needed to replenish those lost during the infection, or if there is something that would prevent their overall effectiveness as a treatment. I already read the articles in question searching to see if it mentioned anything about such drinks, but didn't see anything. Ks0stm (TCG) 04:31, 7 November 2010 (UTC)[reply]
I had a long, speculative post written out, then I did the google search. Gatorade was first proposed as a Cholera treatment in 1969 in the New England Journal of Medicine, and you don't get a better reliable source than that. See this. You could also find a wealth of information at this google search or this similar one. --Jayron32 04:44, 7 November 2010 (UTC)[reply]
Sodium secretion into the intestinal tract during a diarrheal illness is much greater than the typical losses you would expect in sweat from exercise. While sports drinks are fine for short-term replacement of fluid and electrolyte losses from sweating, the main reason you don't see those drinks being used widely for oral rehydration is that they simply don't have enough electrolytes to replace the severe losses from diarrheal illnesses. --- Medical geneticist (talk) 13:54, 7 November 2010 (UTC)[reply]
Sports drinks are also more expensive to store and transport than the little packets of oral rehydration salts. I doubt there's much Gatorade in Haiti at the moment! Physchim62 (talk) 19:24, 7 November 2010 (UTC)[reply]
Umm, I have a whole packet of solid Gatorade crystals. "Just add water". John Riemann Soong (talk) 20:27, 10 November 2010 (UTC)[reply]

Is the age of the universe relative?

Thanksverymuch (talk) 04:31, 7 November 2010 (UTC)[reply]

You'll want to read the articles Age of the universe and Comoving distance and Proper frame. The quoted age of the universe is the age given for the Earth's current frame of reference, extrapolated back to the point of the Big Bang. In other words, we assume the age of the Universe to be for the Earth's current frame of reference (relative speed and location). We assume the Earth to be stationary (what is called the "proper frame") and make all measurements assuming that. In reality, nothing is stationary. In a different reference frame (i.e. if you were moving at a different speed than the Earth is), the age would of course be different. This is due to the issues raised by special relativity and general relativity. --Jayron32 04:37, 7 November 2010 (UTC)[reply]
Thanks! What is the maximum possible relative age of the universe? Thanksverymuch (talk) 04:46, 7 November 2010 (UTC)[reply]
That's impossible to answer, because of the way that time works. There is no universal reference frame that we can measure against; there is no absolute time. There are an infinite number of reference frames which one could conceive of in which the universe could be literally any age. We choose the earth's reference frame because that's the one we're in. This is not the same thing as saying that the Universe is infinite in age, it's just that we could arbitrarily choose any reference frame in which the Universe could be any age. --Jayron32 04:53, 7 November 2010 (UTC)[reply]
For an object moving at (or very close to) the speed of light since the big bang, how old is another object moving at the slowest possible speed since the big bang? Thanksverymuch (talk) 05:01, 7 November 2010 (UTC)[reply]
For an object moving at the speed of light relative to what? --Jayron32 05:02, 7 November 2010 (UTC)[reply]
A stationary object. Thanksverymuch (talk) 05:09, 7 November 2010 (UTC)[reply]
Stationary relative to what? --Jayron32 05:10, 7 November 2010 (UTC)[reply]
Let me ask the question differently - is there a reference frame that entails an infinitely old universe? If not, just how old can the universe get? Thanksverymuch (talk) 05:20, 7 November 2010 (UTC)[reply]
There is no reference frame that entails an infinitely old universe. There are an infinite number of reference frames that entail an arbitrarily old universe. There is a distinction between infinite and arbitrarily large. In every reference frame, the universe is a finite age. But as the number of possible reference frames is boundless, for any age you could pick, there is a reference frame for which the universe is THAT age. Does that make sense? --Jayron32 05:23, 7 November 2010 (UTC)[reply]
I was unaware of this distinction. It does make sense. Thanksverymuch (talk) 05:40, 7 November 2010 (UTC)[reply]
Actually, I was a little incorrect. The age of the universe is quoted not to Earth's reference frame, but to the reference frame of the Hubble flow, that is to the metric expansion of space. The universe is 13ish billion years old on that time frame. --Jayron32 05:30, 7 November 2010 (UTC)[reply]
Thanks for the clarification. Thanksverymuch (talk) 05:40, 7 November 2010 (UTC)[reply]

I wonder, though: are there really an infinite number of reference frames? Are there an infinite number of speeds? Are there an infinite number of gravitational states? If the number of reference frames is finite, it may be possible to calculate an upper and lower bound for the age of the universe, and even an average age. Thoughts? Thanksverymuch (talk) 13:47, 7 November 2010 (UTC)[reply]

See multiverse, Milky Way#Velocity and cutoff. ~AH1(TCU) 16:55, 7 November 2010 (UTC)[reply]

Jayron, are you sure that "for any age you could pick, there is a reference frame for which the universe is THAT age"? My knowledge of general relativity is somewhat sketchy, but surely in special relativity it makes sense to ask what the "maximum time" between two events is? There is always a frame in which the time between events is arbitrarily small; the observer's speed simply has to be arbitrarily close to c. But at least in special relativity this doesn't work the other way around, and you can't find a frame in which the time between two events is arbitrarily large. What would the velocity of such a frame be? 213.49.88.236 (talk) 17:16, 7 November 2010 (UTC)[reply]

Yes, Jayron is incorrect about that; the usual quoted age of the universe is the maximum elapsed time from the big bang (and see this section for an explanation of what "big bang" means in this context). Also, the whole idea of using "reference frames" at cosmological scales is dubious. I don't like reference frames even in special relativity—I think they just interfere with understanding what the theory is about. But at least the term has an unambiguous meaning in special relativity. In general relativity it doesn't. -- BenRG (talk) 19:04, 7 November 2010 (UTC)[reply]
Now I'm lost. Is the age of the universe relative or not? Thanksverymuch (talk) 19:52, 7 November 2010 (UTC)[reply]

The measured age of the universe does indeed depend on how the age of the universe is measured. However, ignore everything else from above, and let's start from scratch. Imagine a bunch of little clocks that were created all over the universe shortly after the big bang. The clocks are basically running stopwatches, which are created with their time starting off at zero. At the time of their creation, the clocks are scattered every which way, with their initial velocities all completely independent of each other. Fast forward to now, when a bunch of those clocks are collected, and examined in a laboratory on Earth.

There's a kind of radiation called the cosmic microwave background radiation, that was created all over the universe a few hundred thousand years after the big bang, and which is still detectable today. If you're traveling at roughly the right velocity (which the Earth is pretty close to), that radiation looks very close to the same no matter which direction you look in. Consider a clock that’s been at that right velocity for essentially its whole life (call it a "comoving" clock), that's also spent essentially its whole life far away from any galaxies or any other kind of object. A clock like that will say that it's been ticking for roughly 13.7 billion years.

None of the other clocks collected will read a time more than that (roughly) 13.7 billion years, but some of them will read less. In particular, clocks that have spent a big chunk of their life moving very fast relative to nearby comoving clocks will read less time due to time dilation due to relative velocity, a phenomenon which can be described as "moving clocks run slow." In addition, clocks that have spent a big chunk of their life close to a massive body will read less time due to gravitational time dilation, i.e., due to the massive body bending spacetime in that area. In either case, the clocks running slow has nothing to do with how the clocks operate, but is purely a matter of how time works.

None of the clocks will show precisely zero elapsed time, but the time shown on the clocks could in principle be an arbitrarily small positive number. In practice, clocks couldn't be completely stopped due to time dilation due to relative velocity, because it'd take an infinite amount of energy to try to get the clock up to moving at the speed of light. And clocks couldn't be completely stopped due to gravitational time dilation, because that would require them to be at the center of a black hole, from which you wouldn't be able to retrieve the clock, and which the clock wouldn't survive, anyway.

The clocks could show any amount of elapsed time in between close to zero and the roughly 13.7 billion years, so there are an infinite number of different amounts of elapsed times that the clocks could show, because there are an infinite number of different real numbers within any finite range of real numbers. But the size of the range of possible different values on the clocks is about 13.7 billion years, which of course is a finite amount of time. Red Act (talk) 09:47, 8 November 2010 (UTC)[reply]
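(For anyone who wants the formula behind "moving clocks run slow" in the description above: in terms of cosmic time t and a clock's speed v(t) relative to the comoving clocks, the special-relativistic part of the effect gives the elapsed proper time as
\[ \Delta\tau \;=\; \int_0^{\Delta t} \sqrt{1 - \frac{v(t)^2}{c^2}}\; dt \;\le\; \Delta t \approx 13.7\ \text{billion years}. \]
Since the square root never exceeds 1, no clock can accumulate more elapsed time than a comoving clock; it can only fall short, and gravitational time dilation near a massive body only pushes the total down further. This is a simplified sketch rather than the full general-relativistic statement, which is the point about geodesics maximizing proper time made in a reply below.)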

Thanks for this excellent clarification Red Act! So the answer is a definite YES - The age of the universe is relative.
But this is not what is stated in the article on the age of the universe (I quote the lead paragraph - emphasis is mine):

The estimated age of the universe is 13.75 ± 0.17 billion years,[1] the time since the Big Bang. The uncertainty range has been obtained by the agreement of a number of scientific research projects. These projects included background radiation measurements and more ways to measure the expansion of the universe. Background radiation measurements give the cooling time of the universe since the Big Bang. Expansion of the universe measurements give accurate data to calculate the age of the universe.

Thanksverymuch (talk) 12:33, 8 November 2010 (UTC)[reply]
Not sure where you are going with this. In general relativity the time measured by a clock or an observer between two events - known, somewhat confusingly, as the proper time interval - depends not just on the events themselves but also on the motion of the clock/observer between the two events. So the time measured by a hypothetical clock/observer between the events that we label "Big Bang" and "now on Earth" will depend on the space-time path that the clock/observer took between those two events. If that dependency is what you mean by "relative" then the answer is yes, the age of the universe is "relative" - but, in that sense, so is any other measured proper time interval. How old are you ? Well, that depends on what path you have taken through space-time between the events "your birth" and "here and now" - see twin paradox. There is another measure of time, called coordinate time, that depends only on the space-time co-ordinates of the events themselves - and when the age of the universe is calculated as a coordinate time interval, it is about 13.7 billion years. But putting this qualification into the opening paragraph of age of the universe would make it needlessly complicated. Gandalf61 (talk) 13:21, 8 November 2010 (UTC)[reply]
Also note that while some "clocks" could have experienced less time between the "big bang" and "now on Earth" than the mentioned 13.7 billion years, there are no possible clocks which could have experienced (significantly) more time than that. This is basically because the 13.7 billion years has been measured along a trajectory which is approximately geodesic, and (timelike) geodesics (locally) maximize proper time. TimothyRias (talk) 15:02, 8 November 2010 (UTC)[reply]
Um, it's way too general to say that "when the age of the universe is calculated as a coordinate time interval, it is about 13.7 billion years". That's not necessarily the case at all; depending on what coordinate system you use, the coordinate time for the age of the universe could basically be anything. I think comoving coordinates are almost always used when doing cosmology, but that's not automatically implied by the phrase "coordinate time", especially when dealing with a question about the different ways that the age of the universe might be measured. A statement that would be accurate would be "when the age of the universe is calculated as a comoving time interval, it is about 13.7 billion years". That's basically what I called a "comoving clock" in my simplified explanation above.
Thanksverymuch has a good point that it's not well explained in the age of the universe article as to how the 13.7 billion years is to be measured. The second paragraph says "13.73 billion years of cosmological time," but unfortunately cosmological time just redirects to Timeline of the Big Bang, rather than being an article about what is meant by the phrase "cosmological time". At least the Timeline of the Big Bang article does have a sentence near the beginning that says that the "cosmological time parameter of comoving coordinates" is used, so at least there's a description of comoving time within two clicks of the age of the universe article. Red Act (talk) 18:11, 8 November 2010 (UTC)[reply]
To Thanksverymuch: Your last statement makes it sound (to me) like you were playing a game of "gotcha". I do not appreciate that. On the off chance that you are serious, let me say that a free-falling massive particle which emerged from the Big Bang and arrived here and now would almost certainly have experienced a duration of about 13.75 billion years. So this is not relative in that sense. JRSpriggs (talk) 02:30, 9 November 2010 (UTC)[reply]

People meddling in the environment

Hearing a story about geoengineering on the radio today got me wondering if other attempts by people to "fix" the environment/Earth/ecosystems/etc have ever worked. (I'm open to various definitions of whether something can be said to have "worked" or not) What I was wondering about specifically was when we've introduced non-native species to an area to improve something. So, has this ever worked? Dismas|(talk) 04:44, 7 November 2010 (UTC)[reply]

http://alic.arid.arizona.edu/invasive/sub2/p7.shtml Thanksverymuch (talk) 04:53, 7 November 2010 (UTC)[reply]
One famous example from Australia involved the introduction of the prickly pear cactus sometime in the 1800s, for some dopey reason, I think they were trying to use them as 'natural' fences or something. Anyway, they took off and were basically out of control, taking over swathes of the countryside with this impenetrable cactus thicket. In the 1920s the Cactoblastis cactorum cactus moth was introduced from South America and very quickly brought the prickly pear under control, almost wiping it out (there is still some around but it's not really a biological problem). This is a textbook case study of biological control in Australia as it was so successful in controlling the prickly pear and yet has had no known negative impacts on the environment. I see this is briefly described at Prickly_Pear#Ecology, and is also mentioned in the Cactoblastis article, where it notes that due to this success Cactoblastis was introduced elsewhere and has not always had the same benign impact. --jjron (talk) 11:51, 7 November 2010 (UTC)[reply]
Geoengineering is typically a very dangerous process involving the alteration of a complex holistic global system, within which there are many unknowns and uncertainties. Introducing an alien species that may turn out to be invasive can be quite problematic, as can be sending sulfur dioxide droplets to the upper atmosphere to quell the tropospheric warming effect, since small aberrations in controlling the climate by two-way forcing present the risk of sudden abrupt climate change. Some schemes that may work include carbon dioxide air capture, but large-scale changes to the landscape or other environments can create many unintended consequences such as the theoretical phenomenon of too many wind farms causing an overall reduction in global average wind speed[2]. ~AH1(TCU) 16:51, 7 November 2010 (UTC)[reply]
There are far more examples that worked than the other way. For instance, the environment I'm in right now is an artificial building designed to stop rain, help regulate air temperature, and even provide power. It has been very successful in this. There are also various areas that have been modified to farm food. Our current population would be impossible without these. I think most of human history can be described as humans modifying our environment. — DanielLC 01:34, 9 November 2010 (UTC)[reply]

Living donor liver transplant multiple times?

After a living donor liver transplant, both the donor and recipient should eventually have a full-sized liver each. If it is required sometime in the future, could the donor or recipient be a living donor again? Has this happened before? Thanks. F (talk) 08:32, 7 November 2010 (UTC)[reply]

I'll happily be corrected but I don't think you could donate more than once. When you donate part of your liver, you give the recipient a large proportion of your right lobe (around 50-70%). Your left lobe then compensates by growing to make up the size lost. Anatomically speaking, however, you still only have a left lobe and a small portion of right lobe, so you won't be able to give that same 50-70% again. Besides all this, liver donation is a large, lengthy operation that lasts several hours. There's a large potential for infection and complications, so God knows why you'd want to go through the process twice! Regards, --—Cyclonenim | Chat  14:12, 7 November 2010 (UTC)[reply]
Also, when multiple people are involved in a liver transplant process, it's usually the first healthy donor who takes the longest to recover. ~AH1(TCU) 16:42, 7 November 2010 (UTC)[reply]
Though the donor's remaining liver tissue does hypertrophy post-donation, it does not fully regenerate the original vascular structures; rather, the remaining vasculature serves the remaining (hypertrophied) liver. Because the vascular supply to (and drainage from) the donated liver tissue is crucial to the success of the graft in the recipient (PMID 12818839 and PMID 15371614 and PMID 17325920, the latter being particularly relevant), I think it's safe to surmise that a second donation would not be possible, even if the problem of perihepatic scarring from the first procedure could be overcome. Certainly, it's unlikely we'll ever have a study to support such a practice. -- Scray (talk) 21:15, 7 November 2010 (UTC)[reply]

Capacitor plague

Capacitor plague explains the problem, and repeats (what seems to be the common claim) that certain Taiwanese manufacturers were to blame (due to using an incomplete electrolyte formula stolen from elsewhere) - e.g. as repeated here [3] [4]

None of this I question; my question is: what about the fallout - i.e. what happened to the suppliers? (E.g. I tried to find references to show that the suppliers got their 'ass sued off' by the manufacturers who bought from them, but found nothing.) As a side question - are compensation lawsuits uncommon in the Far East? (Sorry this isn't actually a science question - it's a science topic though.)

Also, confusingly, this Dell [5] page blames Nichicon, whereas the other link says Nichicon was amongst those "...inundated with orders for low-ESR aluminum capacitors, as more customers shy away from Taiwanese-produced parts"? 94.72.205.11 (talk) 10:15, 7 November 2010 (UTC)[reply]

Well as you said, they used an incomplete electrolyte formula stolen from elsewhere. Given that happened in the first place, how likely is it they got their ass sued off by people who bought from them? Nil Einne (talk) 10:24, 7 November 2010 (UTC)[reply]
I don't understand what you are saying - why is it not right to assume the buyer would sue or start criminal proceedings against them? 94.72.205.11 (talk) 10:27, 7 November 2010 (UTC)[reply]
I'm saying you can get an idea of the culture there by what caused the problem in the first place. This wasn't an isolated thing but quite a number of companies all producing the same crap from an incomplete stolen formula. It's also not likely the buyers were completely blind as to what's going on. (Super cheap capacitors don't suddenly appear from nowhere, and I would expect many had their own quality control testing the stuff too, obviously not enough to pick up the flaws.) They obviously didn't expect capacitor plague, but big companies have a fair idea of what they're getting into (if they didn't they wouldn't be big companies). It's a risk they choose to take...
In this particular case it came out badly for them. (Often it does not.) There may be some form of compensation, but some of the companies undoubtedly would have disappeared. There were likely some lawsuits involved as well. But ultimately the people who run the companies clearly didn't think much of using a stolen formula which they apparently didn't understand well enough to know was incomplete and flawed. Clearly they didn't consider the risks, say of being sued by the people who designed that formula, high enough to outweigh the likely advantage they would gain from producing capacitors from the stolen formula. So it's not that surprising that the buyers themselves may not gain that much from whatever lawsuits did occur. In other words, ultimately I think it's quite likely the buyers bore the brunt of the cost. (Which, as I've said, they must have anticipated when they went in.) Nil Einne (talk) 10:45, 7 November 2010 (UTC)[reply]
The companies affected were at least two steps up in the supply chain; I'd also assume that the motherboard manufacturers wouldn't have bought components they knew were going to fail en masse within a few months. That aside, I was asking for factual answers, not your opinion. Can you please refrain from answering if you have nothing to offer but your own opinion. 94.72.205.11 (talk) 14:11, 7 November 2010 (UTC)[reply]
Well, the first flaw in your statement is that the components often didn't fail within a few months. In fact, as our articles note, one or two years is when problems start to appear. Also, I don't know who said motherboard manufacturers would have bought components they knew were going to fail; I definitely didn't, and made this clear. (Although I would note this was a fairly well known problem by 2003 [6] or earlier, but our article suggests some equipment was still using faulty capacitors manufactured in 2007, and also [7] from the article is perhaps enlightening. My impression from sources such as [8] is that there are actually still some crappy capacitors being used in new products to this very day, although nowadays it may perhaps be simply lower-quality capacitors rather than the 'likely to fail way too soon' kind the historic ones appeared to be, and there's also of course the continual problem of counterfeits.) I also don't know how you know the precise relationship between the many companies affected and the manufacturers; it isn't mentioned in our article, and such things usually vary and are often fairly complex besides. The earlier ref BTW also mentions some effects on one manufacturer in terms of loss of orders (although I thought this went without saying). However, the number of nominal manufacturers is huge, e.g. [9], and most involved on all sides probably have no desire for people to know precisely who knew what and did what and who compensated whom, and how much. So other than bits and pieces of that sort, the situation is likely to remain murky because the people involved want it that way, so you're not likely to find a detailed writeup or refs. Nil Einne (talk) 00:11, 8 November 2010 (UTC)[reply]
BTW, about the Nichicon thing, it appears correct that Nichicon had a batch or a few batches of bad caps. Their problems appear to have begun in 2003 [10]. This was after people had begun to avoid capacitors from Taiwan, but to avoid speculation, which you don't want, I'll just say the phrases 'increased demand' (well, you yourself mentioned they were inundated with orders) and 'quality control problems when trying to meet increased demand' and let people speculate for themselves whether these phrases may be relevant. Several sources suggest the Nichicon capacitor problem may have been related to overfilling [11] [12], although some have expressed doubts [13]. (Overfilling does of course sound better than plenty of other excuses.) It's worth remembering that faulty components and manufacturing defects (and simply poorer-quality components) are fairly common in this world. There were clearly some major problems early in this century, according to a few sources due to a stolen formula (since you want to avoid speculation, I think we need to be clear we don't know for sure this is what happened; we only have clear-cut evidence for why the components were defective, not how they got to be that way). However, as is often the case, everything tends to get swept up into the same boat, so now any sign of capacitors failing is automatically connected to the earlier problems. Nil Einne (talk) 01:13, 8 November 2010 (UTC)[reply]
[14] Non-referenced, opinion-based answer moved to editor's talk page. 94.72.205.11 (talk) 14:18, 7 November 2010 (UTC)[reply]
I have restored the comments. Removing others' replies in this manner, particularly when they're entirely in good faith, is utterly unacceptable. -- Finlay McWalterTalk 15:56, 7 November 2010 (UTC)[reply]

Aeroplane crash

I read a question on here about jumping before a plane crashes to save yourself. Obviously that wouldn't work, but what if you flooded the cabin with some sort of liquid or foam to spread the force across the entire body, and also provide more time to stop (reducing the acceleration, and thus the force)? Going from 300 km/h to zero over the distance of a few cm would be fatal, but over a couple of meters, it would be the equivalent force of going from 3 km/h to zero over a few cm. Would that work? 98.20.222.97 (talk) 10:03, 7 November 2010 (UTC)[reply]

Or the cabin seats could be on a track that lets them slide forward a bit, making the stopping distance for the people inside greater. —Preceding unsigned comment added by 98.20.222.97 (talk) 10:04, 7 November 2010 (UTC)[reply]
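(As a rough sanity check on the numbers in the question above, assuming constant deceleration over a stopping distance d from speed v, which is a deliberately crude model:
\[ a = \frac{v^2}{2d}, \qquad v = 300\ \text{km/h} \approx 83\ \text{m/s},\; d = 2\ \text{m} \;\Rightarrow\; a \approx 1700\ \text{m/s}^2 \approx 180\,g . \]
Stretching the stop from a few centimetres to a couple of metres cuts the peak deceleration by a factor of roughly 50-70, since a scales inversely with d; slowing down before impact helps even more, because a scales with the square of the speed. Whether the resulting loads are survivable, and whether any such scheme is practical, is what the replies below get into.)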

In theory, yes, but it is very difficult to find materials that will provide a gradual deceleration. To some extent, the crumpling of the metal of the plane already does this. Air bags are probably the most effective for the human body. Dbfirs 10:24, 7 November 2010 (UTC)[reply]
And there's a limit to the tradeoff between increasing safety, the real risks involved and other factors of practicality. Fitting each seat with a five-point racing harness and surrounding it with a roll cage should also increase the chance of survival, but at considerable other costs, both financial and other. Industries undertake substantial cost-benefit analyses on these things. The fact also remains that when dropping out of the sky from several kilometres up, sometimes nothing's going to save you. --jjron (talk) 11:38, 7 November 2010 (UTC)[reply]
Safety devices can cause risk also. One of the causes of the ValuJet Flight 592 crash was that it was transporting old oxygen generators, which are used to provide air to passengers in the event of pressure loss. Paul (Stansifer) 13:10, 7 November 2010 (UTC)[reply]
This is a famous article from The Economist which discusses some of the safety considerations in commercial air travel. Note that it was published in 2006, and so is out of date on a couple of points... Physchim62 (talk) 13:32, 7 November 2010 (UTC)[reply]
I think the best method would be to make every seat into an ejector seat with parachute, and to ensure all passengers always wear a life jacket in case the plane needs to eject passengers over water. Unfortunately, this system is not economical at all, and thus it will never happen for commercial airliners. Regards, --—Cyclonenim | Chat  14:04, 7 November 2010 (UTC)[reply]
The ejector seat idea is patently absurd. First, no commercial airliners are designed to have ejection seats. Fighter aircraft that have ejection seats have specially designed canopies that are blown off the airplane, or completely shattered a fraction of a second before the ejector rockets fire. I don't see how you can do anything remotely similar on a commercial airliner because the passengers have aluminum aircraft skin, overhead luggage storage and the like in the way. Second, people have to be specially trained to properly use an ejection seat. It requires preparation to eject, since if you have an arm or leg sticking out when you eject, you are going to break bones, lose the limb, or even fail to eject properly. Another problem is how are you going to have a one-size-fits-all ejection solution? What works for a standard-sized person probably will not work well for a young child or the overweight guy sitting in the two seats next to you. Finally, even properly trained pilots are frequently seriously injured when they eject. Googlemeister (talk) 14:50, 8 November 2010 (UTC)[reply]
Your description of a crash sounds very similar to US Airways Flight 1549, in which the pilot successfully crash-landed on water without loss of life. ~AH1(TCU) 16:40, 7 November 2010 (UTC)[reply]
It's worth keeping in mind that the number of people who die in airplane crashes is almost statistically insignificant, compared to the number who die in automobile crashes, a place where we have far more control over individual conditions, the speeds are generally a lot slower, and the obvious impact on society is much greater. We fear airplane crashes more because we perceive ourselves to have less control over their outcome (we are strapped into a pressurized tube going 500 km/hr at 20,000 ft), but car crashes are far more deadly. Far more people die per decade in Los Angeles from car accidents than do from earthquakes, yet people always fear quakes more than cars. People here seem to be very concerned about air travel as being not very safe, when in reality it is pretty safe and secure by comparison to more mundane means of getting around. --Mr.98 (talk) 16:56, 7 November 2010 (UTC)[reply]

One very simple step which would reduce risk would be to have all seats facing backwards. Unfortunately, not marketable. HiLo48 (talk) 08:32, 8 November 2010 (UTC)[reply]

Rear facing seats would be more acceptable if there was a dummy cockpit door at the back of the aeroplane and the in-flight movie was more interesting than looking out of the window. Cuddlyable3 (talk) 08:57, 8 November 2010 (UTC)[reply]
Looking out the window facing backwards is not a problem — I've done that on trains; it's just as pretty as looking forwards. I don't think I'd be happy about being pressed into my seatbelt on takeoff. On the other hand, with the current arrangement, it happens on landing, so I'm not sure there's any net difference. --Trovatore (talk) 09:06, 8 November 2010 (UTC)[reply]

Risks of psych experiments involving rewards

You know how they let children paint on their own, then reward them for painting, and the children stop painting in the next trial? Isn't there a risk that a future painter, say, has been taken away from a life in the arts because of such experiments? Imagine Reason (talk) 14:33, 7 November 2010 (UTC)[reply]

Well, I don't know the experiment you are referring to, but all experiments involving human subjects generally have to pass through an Institutional review board evaluation in the US, which looks quite closely to see whether or not there is real projected harm. In this case, you'd need to actually run the experiment many times to establish what long- and short-term effects there were before you decided that the experiment itself was harmful. If it were known as an iron rule that such experiments would discourage creative activity, then they would probably be stopped. But I doubt it is as much of an iron rule as that. And on the scale of IRB concerns, "may in a very subtle way discourage a child from being interested in painting" probably ranks low on the "harm" list, especially since you have no way of knowing whether that child would have gone into a "life in the arts" anyway in the absence of said experiment. If there was an experiment that would, without much doubt, make it so that whomever it was performed on would never again do anything artistic (e.g., by removing that part of their brain or by use of negative conditioning or whatever), I am sure it would be deemed unethical. But this sounds like something far more subtle than that. --Mr.98 (talk) 16:49, 7 November 2010 (UTC)[reply]
ec(OR) Uh, no I don't know that children generally react as you describe. Competitions continually produce creative work and are a form of art patronage. They encourage artists by validating their artwork and expose them to their peers' works. Nobody has a right to a "life in the arts" unless they are prepared to earn it by contributing their work and talent. Sorry but to call rewarding a child for painting (whether you mean a picture or a fence) a "psych experiment" seems ridiculous, and the idea that it has deprived the world of painters is an unfalsifiable speculation. It would be nice if it worked on taggers. Cuddlyable3 (talk) 16:51, 7 November 2010 (UTC)[reply]
The scenario you describe, OP, is just plain un-Skinnerian. ;) I have never heard of such an experiment giving such results, and I believe there are many which have given the opposite result. retracted after comments that follow (Giving small "rewards" is usually considered ethical for the purposes of most psych experiments; proposing small punishments would probably at least prompt more careful review by the ethics board...). WikiDao(talk) 17:45, 7 November 2010 (UTC)[reply]
I've seen those experiments, where offering a reward leads to the activity not being valued in its own right, but rather for the reward. Children rewarded for drawing, and then given an unrewarded choice between drawing and another activity, will choose the other activity, whereas children unrewarded for drawing and given the choice will pick without apparently being influenced. It becomes work instead of play, and so isn't chosen for play. The studies I've seen have been brief, and wouldn't be expected to have lasting results: they will be swamped by all the other things they do, and all the other rewards and punishments they experience outside this brief experience. Or so the ethics discussion would go. 86.166.42.171 (talk) 22:22, 7 November 2010 (UTC)[reply]
The classic experiment (and it really is a classic in psychology, here's what google books has) was done by Mark Lepper, David Greene, and Richard Nisbett in 1973. For Wikipedia, see overjustification effect. ---Sluzzelin talk 22:33, 7 November 2010 (UTC)[reply]
You might be interested in this episode of Freakonomics Radio (the stuff I'm talking about is right at the end; it's probably faster to read the transcript than listen to the show). In it, Levitt uses rewards to incentivise his daughter, and discovers a three-year-old is far from Pavlov's dog. -- Finlay McWalterTalk 23:05, 7 November 2010 (UTC)[reply]
I think (in answer to the OP's question) the risk is real but small. Nevertheless, it is real, and I think Imagine Reason raises a valid concern. Bus stop (talk) 01:26, 8 November 2010 (UTC)[reply]
I am not sure the risk is real. Even in classical conditioning, there needs to be lots of reinforcement to maintain behavior modification over time. It isn't the sort of thing you do once that flips a switch for good. People aren't that brittle. If they were, we'd have noticed it in so many other areas of life first... --Mr.98 (talk) 01:32, 8 November 2010 (UTC)[reply]
I understand that. But what I would say is that it constitutes miseducation, when contrasted with the child to whom it is conveyed that art is a wholesome activity. The message conveyed by the giving of a small and relatively meaningless reward is that the intrinsic reward in the activity is even lower than that. A lot depends on context. The child who already has a grounding in the notion that art is worthwhile will not view the small "reward" as a reflection on the art activity. The child for whom the art activity is a totally new experience is looking for his first clues as to how society regards this activity. In the absence of a clue that something of value lies within this activity, he is left with the clue that the value in the activity is the small, meaningless reward. This is discouragement, the opposite of fostering an interest in the art. I think it is slightly cruel to take children whose minds have no opinion of art and introduce a negative opinion at such an early and impressionable age. Bus stop (talk) 02:02, 8 November 2010 (UTC)[reply]
The google books link that Sluzzelin provided (thanks, Sluzzelin, I was too dismissive and misinterpretive of the question in my initial response -- something to be avoided!) says:

"This decrement in interest persisted for at least a week beyond the initial experimental session." emphasis added

It does begin to sound like something maybe they shouldn't be meddling with at that age, doesn't it...? WikiDao(talk) 02:19, 8 November 2010 (UTC)[reply]

I believe this is cognitive dissonance. This is also like the study of the group that played an intentionally boring game for the purposes of the experiment, and then was rewarded afterwards with money. Another group was not rewarded with anything and they convinced themselves they played the game for fun. The group with money justified playing the game because of the money. Anyway, cognitive biases aside, I don't think a serious artist would care much for a reward or not, but just for the thrill of doing the art for art's sake. Perhaps children not so enthused with art would be less inclined to be artistic if they were rewarded, but there are many cases where a person is so transfixed with their 'passion' that rewards are overlooked and do not matter because the job is its own reward to that person. AdbMonkey (talk) 04:52, 8 November 2010 (UTC)[reply]

I don't think this is cognitive dissonance. Concerning a "serious artist," I think the most common situation would be a mix of motivations—both monetary and a motivation concerning the pure pursuit that is involved in using materials and techniques to achieve an end product. Bus stop (talk) 18:22, 8 November 2010 (UTC)[reply]

ground water

most precipitation sinks below ground until it reaches a layer of what kind of rock? —Preceding unsigned comment added by 204.237.4.46 (talk) 16:47, 7 November 2010 (UTC)[reply]

Impervious. Cuddlyable3 (talk) 16:53, 7 November 2010 (UTC)[reply]
Bedrock. See also aquifer and drainage. ~AH1(TCU) 18:48, 7 November 2010 (UTC)[reply]

Unplugging mobile phone chargers

My new Nokia C5-00 phone tells me "unplug the charger from the socket to save energy" when I unplug the phone from the charger after charging. Will this really make a difference in regard to how much energy is consumed? I don't know much about electronics, but my general intuition tells me that a charger that is plugged into a socket but not actually plugged into any device does not form a closed circuit, where electricity would flow from a source to a destination, and so the electricity completely bypasses the charger, not adding to my electricity bill. Could someone who actually understands electronics clarify this? JIP | Talk 19:29, 7 November 2010 (UTC)[reply]

A charger or AC-to-DC converter draws a small current even when not delivering current and this is wasted energy that you may even feel as slight warmth from the case. I think your phone uses a switched-mode charger whose switching circuit works continually. In the case of a simple analog power supply, its mains input transformer takes a magnetising current. Some power may also go to light a LED indicator, if there is one on the charger. Wikipedia has an article about Battery charger. Cuddlyable3 (talk) 19:41, 7 November 2010 (UTC)[reply]
The no-load power drain is marginal, hardly registering on power meters, but I suppose it becomes significant if (like me) you leave lots of such chargers plugged in. The total drain is probably less than that of the transformer that runs my doorbell, but if everyone in the world did the same ... Dbfirs 00:22, 8 November 2010 (UTC)[reply]
(edit conflict)Do we have an article on phantom power? Nope, it doesn't go where I thought it would. Do however see standby power which goes over exactly what you're referring to. Dismas|(talk) 00:24, 8 November 2010 (UTC)[reply]
Our article on Switched-mode power supply will also be of interest, but it doesn't state the no-load drain. Dbfirs 00:41, 8 November 2010 (UTC)[reply]
There's also the One Watt Initiative, although perhaps not really relevant for mobile phone chargers, where you probably want it much lower than that. Nil Einne (talk) 01:21, 8 November 2010 (UTC)[reply]
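To put the no-load drain discussed above into rough perspective, here is a back-of-the-envelope sketch; the 0.3 W draw and the $0.15/kWh tariff are assumed illustration figures, not measurements of any particular charger.
<syntaxhighlight lang="python">
# Rough annual cost of leaving one charger plugged in with no phone attached.
# Assumed figures: 0.3 W no-load draw, $0.15 per kWh tariff.
no_load_watts = 0.3
hours_per_year = 24 * 365
tariff_per_kwh = 0.15

energy_kwh = no_load_watts * hours_per_year / 1000   # watt-hours -> kWh
cost = energy_kwh * tariff_per_kwh
print(f"{energy_kwh:.2f} kWh/year, about ${cost:.2f}/year")
# ~2.6 kWh and ~$0.39 per year for one charger: small individually,
# but it scales with the number of chargers left plugged in.
</syntaxhighlight>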

mitosis and meiosis

what are the formulas used in mitosis and meiosis? —Preceding unsigned comment added by Oiram13 (talkcontribs) 22:58, 7 November 2010 (UTC)[reply]

I have no idea what sort of "formula" you are looking for. But to start you on your way to learning about these two processes, we have substantial articles about both mitosis and meiosis, including details about the numbers of chromosomes in each. DMacks (talk) 23:09, 7 November 2010 (UTC)[reply]


November 8

Fastest airship

For the greatest speed, would it be better to make a practical airship as large as possible, or as small as possible? 92.29.116.53 (talk) 01:06, 8 November 2010 (UTC)[reply]

I don't think it would be so much size as shape, and how smooth the edges are. For maximum speed, you would want to put the cabin inside. --The High Fin Sperm Whale 01:12, 8 November 2010 (UTC)[reply]
Our Airship article says:

"The disadvantages are that an airship has a very large reference area and comparatively large drag coefficient, thus a larger drag force compared to that of airplanes and even helicopters. Given the large flat plate area and wetted surface of an airship, a practical limit is reached around 80–100 miles per hour (130–160 km/h). Thus airships are used where speed is not critical."

Drag coefficient then says that "airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume."
Clearly, all else being equal, a larger airship will have greater drag and will require greater thrust to maintain the same speed as a smaller airship. So you'd want a smaller airship for speed, down to the limit of no longer having enough lift to carry the same propulsion system (though you could probably get away with carrying less fuel, too, depending on your purposes). WikiDao(talk) 01:34, 8 November 2010 (UTC)[reply]
Given equal volumes and engine powers, a long thin airship can fly faster in still air than a short fat airship. Cuddlyable3 (talk) 08:44, 8 November 2010 (UTC)[reply]
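For a rough sense of what that volumetric drag coefficient means in practice, here is a sketch; the coefficient of 0.025, the 200,000 m³ volume and the 130 km/h speed are assumed, Hindenburg-ish illustration figures, not data for any particular airship.
<syntaxhighlight lang="python">
# Drag and propulsion power using the volumetric drag coefficient:
# F = 0.5 * rho * v^2 * C_dv * V^(2/3), where V^(2/3) is the reference area.
rho = 1.225          # kg/m^3, sea-level air density
v = 130 / 3.6        # m/s, ~130 km/h (the practical limit quoted above)
C_dv = 0.025         # assumed volumetric drag coefficient
V = 200_000          # m^3, assumed gas volume (roughly Hindenburg scale)

ref_area = V ** (2 / 3)
drag = 0.5 * rho * v**2 * C_dv * ref_area   # newtons
power = drag * v                            # watts, assuming 100% propulsive efficiency
print(f"drag ~ {drag/1000:.0f} kN, power ~ {power/1e6:.1f} MW")
# Roughly 68 kN of drag and ~2.5 MW of ideal power at 130 km/h,
# which is in the same ballpark as the installed power of the big Zeppelins.
</syntaxhighlight>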

While it's true that a smaller airship would have less drag, it would also only be able to support a smaller, less powerful engine. The Zeppelins and similar airships were big things, but they could have been built smaller. I'm unclear about the best ratio of power to drag, so the question is still open. I'm imagining an airship built to cross the Atlantic with the greatest speed, no expense spared. 92.15.3.137 (talk) 11:18, 8 November 2010 (UTC)[reply]

An airship would be able to cross the Atlantic from North America to Europe much faster than the reverse by using the jet stream, presuming your specific design was capable of high altitude flight. Googlemeister (talk) 14:38, 8 November 2010 (UTC)[reply]

There must be some optimum size. For example while a small aircraft could be faster than a large aircraft, a six-inch model aircraft will not do. 92.29.125.32 (talk) 11:04, 11 November 2010 (UTC)[reply]

The answer is: the bigger the airship, the higher the maximum speed. (Assuming the most aerodynamic shape at any one size.) From above:
Drag coefficient then says that "airships and some bodies of revolution use the volumetric drag coefficient, in which the reference area is the square of the cube root of the airship volume."
So, while the reference area (hence drag) rises with the square, the mass rises with the cube. Therefore larger airships can carry progressively more powerful engines in proportion to their surface area, and so achieve a higher maximum velocity. -84user (talk) 23:08, 11 November 2010 (UTC)[reply]
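A minimal sketch of that scaling argument, under the idealised assumptions that available engine power scales with volume and drag with V^(2/3), ignoring structural weight and everything else:
<syntaxhighlight lang="python">
# Power required ~ drag * v ~ v^3 * L^2   (reference area scales like L^2)
# Power available ~ mass ~ volume ~ L^3
# Equating them gives v_max ~ L^(1/3), i.e. ~ V^(1/9).
for scale in (1, 2, 4, 8):            # linear size relative to a baseline airship
    v_rel = scale ** (1 / 3)
    print(f"linear size x{scale}: max speed x{v_rel:.2f}")
# Doubling every linear dimension (8x the volume) only raises the
# theoretical top speed by about 26%, but it does keep rising with size.
</syntaxhighlight>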

Thanks, so that's why Zeppelins got bigger and bigger over the years. 92.29.120.164 (talk) 14:39, 12 November 2010 (UTC)[reply]

Vaccinations of the Chilean miners

I'm curious about a line in the article about the Chilean mining accident saying the group was vaccinated against tetanus, diphtheria, flu, and pneumonia. Particularly flu and diphtheria; these diseases are caught from other people, and the group had already been isolated for three weeks by the time the vaccines were sent down, so if the diseases were present, wouldn't everyone have already been exposed? Or were the vaccinations a precautionary measure intended primarily for after the miners were rescued? Mathew5000 (talk) 02:10, 8 November 2010 (UTC)[reply]

It's common to use a DPT vaccine to immunize against both diphtheria and tetanus at the same time, although exact protocols vary from country to country. Physchim62 (talk) 02:35, 8 November 2010 (UTC)[reply]
As for flu, it can be acquired through contact with a surface, and the miners were in contact with the "world above", including family members living in less than ideal conditions in Camp Esperanza. Physchim62 (talk) 02:41, 8 November 2010 (UTC)[reply]
Thanks. On the first point you're probably right, although if they were given a DPT vaccine I'd expect news sources to mention all three diseases, whereas none of them mention pertussis. On the second point, I think you are correct again, as I found a news article in Spanish [15] that explains, in connection with the vaccines, the concern about infection on the supplies they were sending down in the shaft, although they did apparently take “las precauciones de asepsia” (aseptic precautions) before anything went down. Mathew5000 (talk) 07:36, 8 November 2010 (UTC)[reply]
You can get versions of the "DPT vaccine" that don't include the pertussis component, as is mentioned in our article, and (according to this report) these are the ones that are used for the maintenance vaccinations of adults in Chile (the triple DPT vaccine being given at age 2–6 months). The same report mentions that diphtheria can be transmitted by "indirect contact with contaminated elements", although this is rare. So my guess is that the medical team were more worried about tetanus infection (an obvious risk for people working in a mine), and gave the DT vaccine either because that was the vaccine they were used to using in Chile or because they thought there was a potential risk of diphtheria infection. Physchim62 (talk) 13:11, 8 November 2010 (UTC)[reply]
Thank you very much, Physchim62! —Mathew5000 (talk) 09:09, 9 November 2010 (UTC)[reply]

gravity

is gravity repulsive? —Preceding unsigned comment added by Ajay.v.k (talkcontribs) 03:32, 8 November 2010 (UTC)[reply]

Yes, I find it disgusting. How dare it not allow me to fly at will! HalfShadow 03:33, 8 November 2010 (UTC)[reply]
And have you even seen some of those equations that general relativity vomits out? Physchim62 (talk) 03:48, 8 November 2010 (UTC)[reply]
No, gravity always causes an attraction between two masses – it might be a very small attraction, but it is always an attraction, never a repulsion. Physchim62 (talk) 03:48, 8 November 2010 (UTC)[reply]
Unless you happen to have some Negative mass. DMacks (talk) 04:58, 8 November 2010 (UTC)[reply]
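To see just how small that ever-present attraction can be, here is a one-line Newton's-law sketch; the 70 kg masses are just illustrative, the value of G is the standard one.
<syntaxhighlight lang="python">
# Gravitational attraction between two 70 kg people 1 m apart (point-mass approximation).
G = 6.674e-11        # m^3 kg^-1 s^-2
m1 = m2 = 70.0       # kg, illustrative masses
r = 1.0              # m

F = G * m1 * m2 / r**2
print(f"F = {F:.2e} N")   # ~3.3e-7 N: always attractive, but utterly negligible
</syntaxhighlight>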

I just want to say, I think this is a very good question, because I was wondering what it would be like if the laws of gravity were reversed and if there was just a whole different way of looking at gravity. If gravity repeled for example. So anyway, OP if you could like, tell a little more about what got you to ask that question, I would be interested. AdbMonkey (talk) 04:59, 8 November 2010 (UTC)[reply]

The fact that gravity is an attraction only (and never a repulsion) makes it unlike the other fundamental forces. For this and other reasons, no quantum theory of gravity exists; and gravity can be described with general relativity (while other interactions like electrostatic force can not). Nimur (talk) 05:18, 8 November 2010 (UTC)[reply]
Is there a fundamental flaw in the theory that gravity is a repulsion between nothingness and masses? Cuddlyable3 (talk) 08:39, 8 November 2010 (UTC)[reply]
Some kinds of nothingness are very gravitationally attractive to masses. And I can't think of any kinds of nothingness that aren't – "nature abhors a vacuum". WikiDao(talk) 23:02, 8 November 2010 (UTC)[reply]
Black holes have a heck of a lot of somethingness. Red Act (talk) 23:53, 8 November 2010 (UTC)[reply]
We call it a "black" "hole" which is suggestive of emptiness or nothingness, yet indeed it has mass. That's what I was saying. Cuddlyable3 is conceiving some variety of "nothingness" from which mass is repulsed, and I can't think of one. WikiDao(talk) 04:24, 10 November 2010 (UTC)[reply]
Could you be thinking of virtual particles, Cuddlyable3? That would be, in a sense, "masses" {emerging from/arising out of/being "repulsed" by?} "nothingness", right...? WikiDao(talk) 05:21, 10 November 2010 (UTC)[reply]

General relativity doesn't even consider gravity to be an attraction. For example, the article on Newtonian gravity uses the word "attraction" 11 times, but the article on general relativity doesn't use it once. "Attraction" as used when discussing Newtonian gravity refers to a kind of action at a distance, which general relativity rejects. In reality, mass causes a curvature of spacetime in a purely local manner. Rather than being attracted to that distant massive object, other objects in that vicinity instead just travel along locally straight lines on that curved spacetime. When discussing the forces between particles, "attraction" can be a local phenomenon, in the form of an acceleration effected in a local manner via gauge bosons. But general relativity doesn't even consider gravity to be an acceleration, a complete theory of quantum gravity doesn't exist, and the gauge boson that would be involved in gravity, the graviton, has never been observed, so it's far from clear that that same form of "attraction" mechanism would also apply in any sense to gravity. Red Act (talk) 11:44, 8 November 2010 (UTC)[reply]

I just thought maybe there was some theory that a center point in the universe created the repulsion, so that gravity was actually repulsion, but, um, I would not know. AdbMonkey (talk) 14:30, 8 November 2010 (UTC)[reply]

Glucose test

why does glucose react with benedicts solution? —Preceding unsigned comment added by 173.48.177.117 (talk) 04:53, 8 November 2010 (UTC)[reply]

We have an article about Benedict's solution, which explains exactly what sorts of chemicals it reacts with (and the gory chemical details of exactly why those are the ones). We have an article about glucose, with a whole bunch of different types of diagrams...see if you can find one there that has the general functional group type with which Benedict's reacts. DMacks (talk) 04:56, 8 November 2010 (UTC)[reply]
As a further hint, compare the oxidation states of an aldehyde versus a carboxylic acid, and copper(I) oxide versus copper(II) oxide. You might want to check reducing sugar. John Riemann Soong (talk) 09:13, 8 November 2010 (UTC)[reply]
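For reference, the overall change those hints point toward can be summarised by this schematic redox equation (a sketch, with R standing for the rest of the sugar; the real Benedict's chemistry, with its citrate-complexed copper, is messier than this):

RCHO + 2 Cu²⁺ + 5 OH⁻ → RCOO⁻ + Cu₂O↓ + 3 H₂O

The aldehyde group of the reducing sugar is oxidised to a carboxylate while the blue copper(II) is reduced to the brick-red copper(I) oxide precipitate.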

Can an airship use siphons rather than fans?

I had my own airship question, which I'll file separately to make sure I don't take away from the previous question today.

Is it possible to get good efficiency from an airship by not using fans external to the airship, but simply having siphons that take in air from the front and push it out through a nozzle at the rear? A single chamber that uses some fibers to pull open a cavity at the center of the ship, then allows it to contract should be enough in concept, with one-way baffles at front and rear. Of course, multiple chambers separated by flexing partitions would allow the ship to more continuously take in and discharge air, without needing to change its overall shape. The exact form of the nozzle at rear strikes me as rocket science, about which I'm best off saying as little as possible...

I understand that energy may be wasted if the air is significantly compressed or expanded in the process, since this involves changes in temperature; but in general it seems like such a system should convert the entire energy expended into propulsion. Of course, the real appeal is that one dreams of riding a zeppelin that moves effortlessly and silently among the clouds. Wnt (talk) 12:14, 8 November 2010 (UTC)[reply]

You seem to visualise an airborne Jellyfish. Cuddlyable3 (talk) 14:06, 8 November 2010 (UTC)[reply]
Sounds to me even more like a low-intensity jet engine. I don't see why it wouldn't be feasible. TomorrowTime (talk) 17:11, 8 November 2010 (UTC)[reply]
I doubt it would be efficient. Turbulence in airflow is a lossy process. You can help overcome turbulence by keeping the airflow laminar - that means you need smooth surfaces and continuous air streams. The apparatus described above sounds like it would be "pulsating" - this would incur a huge amount of loss. Every time airflow impinged on a baffle or a valve, it would lose energy; the engine or mechanism used to drive the system would have to compensate by adding more energy. We have a great diagram of thermodynamic efficiencies for various engine concepts - you'll have a very hard time beating a turbofan in terms of specific impulse. They are among the most efficient devices ever built by humans for extracting kinetic energy out of chemical combustion. Nimur (talk) 18:33, 8 November 2010 (UTC)[reply]
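One more consideration that works against a pulsed, squeezed-out jet: ideal (Froude) propulsive efficiency falls the faster the exhaust moves relative to the ship. A minimal sketch using the simple momentum-theory formula, with made-up speeds for illustration:
<syntaxhighlight lang="python">
# Ideal propulsive (Froude) efficiency: eta = 2 / (1 + v_jet / v_flight).
# Big, slow airflows (propellers, turbofans) beat small, fast jets at low speed.
v_flight = 30.0                      # m/s, assumed airship cruise speed
for v_jet in (35.0, 60.0, 150.0):    # m/s, assumed exhaust speeds
    eta = 2 / (1 + v_jet / v_flight)
    print(f"exhaust {v_jet:>5.0f} m/s -> ideal efficiency {eta:.0%}")
# A nozzle squirting air out at 150 m/s to push a 30 m/s airship wastes
# most of the kinetic energy in the jet, before any baffle or valve losses.
</syntaxhighlight>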
The 1930 Omnia Dir did use directable air "jets" at each end (1932 article with diagrams and photos) in addition to a normal external propeller, for low speed manoeuvering. -84user (talk) 23:49, 11 November 2010 (UTC)[reply]

Brown sugar

I wondered what made brown sugar different than regular sugar, so I looked it up. Now I'm a bit confused. It seems, from what I read, that to make sugar you cut down the cane, process it somehow, and this gives you sugar crystals and molasses. Then, to make brown sugar, you add the molasses back into the sugar. So why bother separating them in the first place? The brown sugar article mentions being able to better control the proportion of molasses to sugar, but is this the only reason? It seems overly complicated just to maintain consistency. Dismas|(talk) 12:52, 8 November 2010 (UTC)[reply]

Well, other than quality control, which tends to be rather important nowadays, our article also mentions enabling the use of beet sugar while keeping the taste of cane-based brown sugar that consumers in many countries expect. It's likely cheaper anyway. White sugar refineries produce large quantities of refined white sugar cheaply for the variety of markets which use sugar, and diverting some of that production to make brown sugar by adding back some molasses before crystallisation is easier than setting up a separate production line. Highly refining the sugar also makes it easier to remove unwanted impurities other than molasses. This also matches the prices in most countries AFAIK (at least in NZ): white sugar is the cheapest, brown sugar costs more, and less refined sugars cost even more. (Well, in NZ we also get "raw sugar", which tends to be the same price as brown sugar, but I'm not sure what it really is; it tends to be less brown and also far less sticky than brown sugar, so I would guess it has less molasses, and it's more coarsely granulated.) See [16] for an example of how white sugar is produced. The recent LoGiCane [17] sold in Australia [18] and NZ [19] would be another example where something is added back that was removed, although this isn't uncommon in other areas either. Nil Einne (talk) 13:25, 8 November 2010 (UTC)[reply]

Counter toxicity

There has been a lot in science journals and whatnot about the drug salinomycin killing off cancer stem cells, reportedly more than 100 times more effectively than anything else available at present; it is also said that it only kills cancer cells and doesn't disturb other cells. The drug is currently produced cheaply for livestock, to kill off their parasites. The tests were done on mice, and the major drawback of this drug is that it seems to be very toxic to humans, with possible effects ranging from long-term heart problems to muscle problems to being possibly fatal. My question is: would it be possible to ever come up with drugs or something else that counteracts the toxicity and could in the future make it possible for humans to use salinomycin to fight cancer? Is it possible to have a counteractive drug against drugs that are toxic, or is that a dead end? In other words, once something toxic is taken in, are there no drugs that can be taken as well to alleviate the toxic effects? —Preceding unsigned comment added by 71.137.248.238 (talk) 14:48, 8 November 2010 (UTC)[reply]

There are many drugs that are given together with other drugs that prevent or counteract side-effects of the first one. Whether it's feasible for a specific case depends on how related (at a biochemical level) the desired effects are to the undesired ones. For example, if the drug hits two chemical receptors, a specific agent could possibly be found that prevented the drug from affecting one (preventing the undesired effect when the drug would hit it) while still allowing it to affect the other (leading to desired effect). Or else one could alter the drug itself to be more specific to the target that has the desired effect. On the other hand, if the side-effect and desired effect are both part of the same biochemical pathway, it becomes hard to stop one effect specifically without also stopping the other. Medicinal chemistry and chemical biology are two fields that study how exactly a chemical exerts its effects--what biochemical binding happens, and how the structure of the drug does or does not affect it--and therefore can study how to alter a drug to be more specific or design a related compound that protects against or rescues the "other" biochemical effects. DMacks (talk) 17:48, 8 November 2010 (UTC)[reply]
Every drug has a therapeutic window, some narrower than others, and salinomycin apparently has some troubles there. Often it is possible to improve a drug to widen the window, because (as in this case) toxicity may be in one tissue (the heart) while the benefit is in another (the breast tumor). Or they could affect different proteins in the same cell. Through trial and error (most often) or perhaps by identifying the desired and undesired targets and trying to do rational drug design, it is possible to modify the drug so that it won't sit as well in the wrong place, or fits the right one more perfectly (see lock-and-key model (enzyme)). Alternatively a change in the drug might affect whether cancer cells can get rid of it with P-glycoprotein, or whether it penetrates the blood-brain barrier, or how rapidly it is broken down in the liver (since sometimes the breakdown process causes the toxicity), and any number of such idiosyncratic considerations.
But mostly, people try a lot of different related compounds based on what they can synthesize and hope they get lucky. See high-throughput screening. Also drug discovery and combinatorial chemistry may be interesting. Oh, and last but not least, consider personalized medicine using pharmacogenetics to screen out the patients the drug is most likely to harm. Wnt (talk) 22:23, 8 November 2010 (UTC)[reply]

Question

If I have a bag of sand with some marbles in it and I shake the bag of sand, do the marbles end up at the top or the bottom of the bag? —Preceding unsigned comment added by Mirroringelements (talkcontribs) 14:59, 8 November 2010 (UTC)[reply]

See Brazil nut effect. TenOfAllTrades(talk) 15:36, 8 November 2010 (UTC)[reply]
The Brazil nut effect applies when the particles are of similar density. The density of loose sand is about 1,500 kg/m3. The density of a glass marble is about 2,600 kg/m3. Shaking the bag will tend to move the centre of gravity of the whole bag & contents downwards when the bag settles. This is best achieved with the densest particles at the bottom. Axl ¤ [Talk] 09:51, 12 November 2010 (UTC)[reply]
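A toy calculation of that centre-of-gravity point, using the densities quoted above and a crude two-layer model with made-up geometry (real granular convection is far messier than this):
<syntaxhighlight lang="python">
# Centre of mass of a 0.2 m tall bag modelled as two uniform layers:
# a 0.05 m layer of marbles (2600 kg/m^3) and a 0.15 m layer of sand (1500 kg/m^3).
def centre_of_mass(layers):
    """layers = list of (thickness_m, density_kg_m3) from the bottom up."""
    z, m_total, moment = 0.0, 0.0, 0.0
    for thickness, density in layers:
        mass = density * thickness            # per unit cross-sectional area
        moment += mass * (z + thickness / 2)  # contribution of this layer's own centre
        m_total += mass
        z += thickness
    return moment / m_total

marbles_bottom = centre_of_mass([(0.05, 2600), (0.15, 1500)])
marbles_top    = centre_of_mass([(0.15, 1500), (0.05, 2600)])
print(f"marbles at bottom: CoM {marbles_bottom:.3f} m; marbles at top: CoM {marbles_top:.3f} m")
# The marbles-at-bottom arrangement has the lower centre of mass, i.e. lower
# potential energy, which is where repeated shaking tends to settle things.
</syntaxhighlight>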

Gigantism and evolution

So I was reading about Robert Wadlow, and I was wondering if his condition could be passed on to his offspring. Is it possible that some giant animals today exist because an ancestor had a disease that caused excessive growth, and those traits were selected for? ScienceApe (talk) 15:17, 8 November 2010 (UTC)[reply]

If by "disease", you mean "genetic abnormality", then yes. But you might want to read about formal definition of disease, as compared to genetic mutation; usually the term "disease" refers to an acquired condition. Most biologists consider the inheritance of acquired traits to be a defunct theory - that means that if the disease that caused a particular trait (like gigantism) was caused by a virus or infection, it is not something that the offspring will inherit. There are a few possible exceptions to this: epigenetics is the modern study of heritable traits by mechanisms other than chromosomal DNA; but I am not aware of any known conditions related to human growth that have such an explanation. A quick search on Google Scholar for epigenetic gigantism turned up Beckwith–Wiedemann syndrome - and that article has a section on genetics that may indicate a developmental condition; but there is stronger evidence for a "random" genetic mutation. Nimur (talk) 18:26, 8 November 2010 (UTC)[reply]
"Most biologists consider the inheritance of acquired traits to be a defunct theory" -- Most, not all??? If anyone considered it to be a live and suitable theory, I would certainly worry about where they got their degree from... Have they ever tried to look into how sperm and eggs are produced? --Lgriot (talk) 09:23, 10 November 2010 (UTC)[reply]

Gravitational constant G change

If the gravitational constant were some different value (pick an arbitrarily different one) instead of the measured 6.674 × 10^−11 m^3 kg^−1 s^−2, how would the universe be affected? NW (Talk) 17:59, 8 November 2010 (UTC)[reply]

You have to precisely specify what this means, as explained by Michael Duff here (see Appendix C for specifically the issue of change in G). One way of making this question meaningful is to multiply G by the square of a mass as is suggested here. Count Iblis (talk) 18:52, 8 November 2010 (UTC)[reply]
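To illustrate that point about multiplying G by the square of a mass, here is the usual example, the dimensionless gravitational coupling constant built from G and the proton mass (a sketch; the constants are standard values, rounded):
<syntaxhighlight lang="python">
# Dimensionless gravitational coupling constant: alpha_G = G * m_p**2 / (hbar * c).
# A "change in G" only has observable meaning if a dimensionless number like this changes.
G    = 6.674e-11     # m^3 kg^-1 s^-2
m_p  = 1.6726e-27    # kg, proton mass
hbar = 1.0546e-34    # J s
c    = 2.9979e8      # m/s

alpha_G = G * m_p**2 / (hbar * c)
print(f"alpha_G ~ {alpha_G:.1e}")   # ~5.9e-39, compare the electromagnetic alpha ~ 7.3e-3
</syntaxhighlight>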

Is it really possible?

Did (s)he really pull the eyes out like that? Or it's photoshopped? Thanks. —Preceding unsigned comment added by 85.222.86.190 (talk) 18:12, 8 November 2010 (UTC)[reply]

Ask yourself what that 'action' would do to the optic nerve and the muscles around the eyeball. You may like to check the anatomy of the human eye. Then think about the pain that would be generated by the 'action' in the photograph. I think you know the answer. Richard Avery (talk) 19:20, 8 November 2010 (UTC)[reply]
Kim Goodman can extend her eyeballs by 12mm, which is the world record.[20] That's only about a tenth of the distance implied by the photoshopped image. It's not real. Red Act (talk) 19:52, 8 November 2010 (UTC)[reply]
Marty Feldman's face was notable for its bulging eyes, a condition caused by Graves' disease. There are lots of images of him here. Cuddlyable3 (talk) 20:41, 8 November 2010 (UTC)[reply]

Sour things

Why is it, that when you eat something sour, your eyes involuntarily squint? Lexicografía (talk) 18:54, 8 November 2010 (UTC)[reply]

Something to do with it being astringent perhaps. 92.24.186.80 (talk) 20:44, 8 November 2010 (UTC)[reply]
Because pretty much all of the holes in your head are connected. Your eyes are connected to your nose via the Nasolacrimal duct. Your nose is connected to your mouth via the pharynx. So, when you eat something which would burn your eyes if you put it directly into them, it still burns a little because there are ways it can get there. --Jayron32 05:18, 9 November 2010 (UTC)[reply]

Lighting circuits

A couple of times, I've turned off the relevant lighting circuit before starting work changing the light fitting, only to discover that the house's main trip goes at some point during such work. I was under the impression that turning off the lighting circuit would isolate it from doing exactly that. Is something going wrong here? 92.18.72.181 (talk) 19:46, 8 November 2010 (UTC)[reply]

Definitely time to call in a licensed electrician to figure out what's going on. If breakers/fuses are set up properly (assuming normal codes), disconnecting there will remove power from the downstream circuits and (as you say) isolate them--there would be no power, and nothing you do would affect breakers upstream of the one you pulled. I've seen all sorts of scary miswirings that can give your results: breaker on the neutral with the hot unswitched, more than one circuit wired into the same switch/junction box (i.e., you only pulled one of the feeds to it), etc. Same (or even more) goes for just turning off a wall switch...there could still be a hot wire into the fixture (before heading out to the switch) or the switch could be on the neutral wire, and jiggling the hot might short it against the junction box or some other connection. Once you're in the nonstandard situation you have symptoms of, I don't think wikipedia can recommend a solution due to potential risks. DMacks (talk) 20:01, 8 November 2010 (UTC)[reply]
The OP seems to be in the UK where the mains voltage is 240V AC and not to be messed with. Do not rely on turning off one lighting circuit before working on a light fitting. Most domestic light switches break only one wire and leave the other wire live. Turn off the house's main switch, and if you are sensible like me you will additionally remove the main fuses and check every bare wire with a neon Test light that you know works. Cuddlyable3 (talk) 20:34, 8 November 2010 (UTC)[reply]
Didn't even notice the likely UK of the poster. In that case you also get the "fun" of a possible ring circuit, in which you maybe even can't "just turn off" one circuit (again depends on local switching topology). DMacks (talk) 21:07, 8 November 2010 (UTC)[reply]
A correctly-wired ring circuit has both ends connected to a single (usually 30 amp) fuse or breaker, and no lighting should be connected to it. I agree that there appears to be some illegal wiring in the house, and strongly recommend that the OP take the advice given above. Dbfirs 21:43, 8 November 2010 (UTC)[reply]
I've done this one (I'm in UK). You cut off the lighting circuit (i.e. "live" wire) on the MCB, then you work on the circuit only to find that suddenly all the power goes off. It's because the neutral floats (I've seen 0.8V), and when you touch the neutral to earth, the RCD trips (because the power going down the neutral wire is not the same as the power going down the live wire). It's a pain, all you can do is disconnect the neutral at the box as well.  Ronhjones  (Talk) 22:01, 8 November 2010 (UTC)[reply]
First, if the OP is uncertain she/he should ask an electrician, but I think this could be normal, as you indicate. I am from Sweden so this may not apply to the OP. If it is the Residual-current device that the OP calls "the house's main trip", it could be due to a low voltage difference between the neutral wire and the protective earth. I do not think it is correct to say that the "neutral floats", since it is still connected to the system. What happens is that there is current flowing through either the protective earth or the neutral on the path to the connection between the protective earth and the neutral (PEN); see Earthing system. This introduces a small voltage difference between PE and N due to voltage drop, and if you connect them, e.g. by cutting a cable, it will result in enough current to trip the Residual-current device. The voltage between PE and N can be due to neutral return currents through the ground or due to voltage drops along the neutral caused by currents from other parts of the installation. Gr8xoz (talk) 22:47, 8 November 2010 (UTC)[reply]
Can this still happen in an installation with "Protective Multiple Earthing" where the neutral is bonded to earth within the house? Dbfirs 10:12, 12 November 2010 (UTC)[reply]
... (later) ... I see from our article on Earthing systems that the TN-C-S earthing system is used throughout North America, Australia, the UK and the rest of Europe etc. I recall an old (pre-PME) installation which had a constant 6 volts AC between neutral and earth, in fact I was able to light a torch bulb across these terminals, but I thought that such installations were long gone! Dbfirs 12:47, 12 November 2010 (UTC)[reply]
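A small Ohm's-law sketch of why the ~0.8 V neutral-to-earth offset mentioned above is enough to trip a typical 30 mA RCD; the path resistances are illustrative guesses, not measurements.
<syntaxhighlight lang="python">
# Current diverted to earth when a neutral sitting ~0.8 V above earth is bonded to it.
# A common UK RCD trips on a 30 mA live/neutral imbalance.
v_neutral_to_earth = 0.8          # volts, the figure quoted in the thread
trip_current = 0.030              # amps

max_resistance_to_trip = v_neutral_to_earth / trip_current
print(f"any earth path under ~{max_resistance_to_trip:.0f} ohms will trip the RCD")

for r_path in (0.5, 5.0, 50.0):   # ohms, assumed resistance of the accidental connection
    i = v_neutral_to_earth / r_path
    print(f"{r_path:>5} ohm path -> {i*1000:.0f} mA diverted ({'trips' if i > trip_current else 'holds'})")
</syntaxhighlight>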

Dna Code

Hello, DNA is made up of the four nucleotides (G, A, T, C); that's twice as good as binary. What sort of proteins could be made using only two or more of the existing ones? Is this even possible? Or am I totally misunderstanding things? Are there any fossil records of a simpler form of DNA to show how DNA evolved to a base of four? Slippycurb (talk) 20:44, 8 November 2010 (UTC) —Preceding unsigned comment added by Slippycurb (talkcontribs) 20:31, 8 November 2010 (UTC)[reply]

If there are only two nucleotides, then our existing codon triplet would only allow for 8 different encoded amino acids or else a codon would have to be 4 or more (rather than 3) to give more encoding possibilities (for example, codon quintet would be needed to encode our existing 20ish amino acid choices). Fewer choices would limit the structural variations possible (fewer combinations of polarity, pKa, hydrophobicity, steric bulk, etc.) and also possibly the redundancy/tolerance for mis-pairing during reading or replication. Our Genetic code article is probably a good place to read about these ideas, and also some possible evolutionary history (especially the "Theories on the origin of the genetic code" section, and maybe also the Nucleic acid analogues article). DMacks (talk) 21:02, 8 November 2010 (UTC)[reply]
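A quick way to see those counts for yourself (a minimal sketch; it just enumerates the possible codons for each alphabet size and codon length):
<syntaxhighlight lang="python">
from itertools import product

# Number of distinct codons = (alphabet size) ** (codon length).
for bases, length in [("AU", 3), ("AU", 5), ("AUGC", 3)]:
    codons = ["".join(c) for c in product(bases, repeat=length)]
    print(f"{len(bases)} bases, codon length {length}: {len(codons)} codons")
# 2 bases, triplets ->  8 codons  (not enough for ~20 amino acids plus stop)
# 2 bases, quintets -> 32 codons  (enough, but needs longer codons)
# 4 bases, triplets -> 64 codons  (the real genetic code, with redundancy)
</syntaxhighlight>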
(ec) Proteins are made of chains of amino acids. Base-pairs (a nucleotide and its complementary partner) are grouped in threes, which are called codons. Each codon encodes an amino acid. The translation process starts at a start codon, then extends the protein being created by attaching the amino acid corresponding to the current codon, until the stop codon is reached. Our Genetic_code#RNA_codon_table has a list of codon-to-amino-acid mappings, and our introduction to genetics has a lay-man's summary of the process. Coming back to your question, it is believed that originally only the first base-pair of each codon was used; the rest were padding. Then the second was used, and finally the third. The first base-pair makes the biggest change in the coded amino acid, normally from hydrophobic to hydrophilic (water-repelling to water-attracting). The second and third make finer changes, and normally encode an amino acid that will give only a slightly worse version of the protein. CS Miller (talk) 21:08, 8 November 2010 (UTC)[reply]
If you don't use G and U, you can't start a protein, and GUG start codons are rare anyway; properly you need A, U, and G. And if you don't use U and A you can't stop a protein normally, because all the stop codons contain them. (You could just stop it at the end of the RNA, but then you get non-stop decay...)
On the other hand, sequences using two nucleotides more than others are important. There are wide variations in GC-content between different organisms, sometimes over surprisingly short intervals of evolution. As purines, A is like G, and as pyrimidines, T and U are like C, and DNA methylation and deamination make transitions between these more common than any other mutation. As a result, you can run across protein-coding sequences that are composed 70% or more of just two nucleotides. In extreme cases I think you can see evolutionary divergence of the protein as it has tried to reconcile itself to a constant stream of mutations pushing it toward a certain composition, but that's not established that I know of.
I think that most people would agree that the RNA world hypothesis involves the establishment of four nucleotide bases well in advance of the invention of proteins (to permit the level of catalytic activity needed for such aspirations), though there's no hard evidence. Wnt (talk) 21:57, 8 November 2010 (UTC)[reply]

baseline characteristics and confounder adjustment in a paper

Hi all, I am going to do a presentation reviewing a clinical trial in a few days' time. In this trial, the two groups of subjects (control and exposure) differ from each other at baseline in terms of age and smoking status etc. But, at the end of the paper, the authors say that they found an association between their outcome measure and exposure independent of confounders, so my question is: do I need to talk about the different baseline characteristics if the authors adjusted for such confounders? Hope I have explained my question clearly. Thanks, RichYPE (talk) 22:16, 8 November 2010 (UTC)[reply]

I've never had anything to do with clinical trials, but it might help if you at least tell us who your audience is and in which capacity you are presenting. Are you lecturing, or being graded on this, or is this in a clinical capacity? You obviously know that ideally trial groups should be randomized. If they can't be, then you can do your best to filter out the confounding factors, but no matter how carefully you do that it will make your trial weaker. There's really no such thing as a PERFECT trial; there are strong trials and weak trials, and the more aspects of your trial that you can't get "just" right, like randomization, the more cautiously the trial's results should be treated. You might also want to point out that typically, just one single trial, no matter how strong, isn't usually enough to draw strong conclusions. Of course, it doesn't always happen that way, particularly and unfortunately in the pharma industry. Vespine (talk) 21:55, 9 November 2010 (UTC)[reply]
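For what "adjusting for confounders" usually means mechanically, here is a minimal sketch using a multivariable logistic regression on simulated data. Everything below (the data, variable names, effect sizes) is invented purely for illustration and has nothing to do with the trial being reviewed.
<syntaxhighlight lang="python">
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated, non-randomised data: exposure is more common among older people and
# smokers, and the outcome depends on the exposure AND on those two confounders.
rng = np.random.default_rng(0)
n = 1000
age = rng.normal(50, 10, n)
smoker = rng.integers(0, 2, n)
exposure = (rng.random(n) < 1 / (1 + np.exp(-(-2 + 0.04 * (age - 50) + 0.8 * smoker)))).astype(int)
lin = -5 + 0.8 * exposure + 0.05 * age + 0.7 * smoker
outcome = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)
df = pd.DataFrame(dict(outcome=outcome, exposure=exposure, age=age, smoker=smoker))

# Crude (unadjusted) vs confounder-adjusted estimate of the exposure effect:
crude = smf.logit("outcome ~ exposure", df).fit(disp=0)
adjusted = smf.logit("outcome ~ exposure + age + smoker", df).fit(disp=0)
print("crude log-odds ratio:   ", round(crude.params["exposure"], 2))
print("adjusted log-odds ratio:", round(adjusted.params["exposure"], 2))
# In a presentation it is still worth showing the baseline table, then noting
# that the headline estimate comes from the adjusted model like the second fit.
</syntaxhighlight>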

Wrong answer?

Read question 10 a) ii) : http://www.tqa.tas.gov.au/4DCGI/_WWW_doc/006624/RND01/PH866_paper03.pdf The solution is given here (you have to scroll down below the examiner's comments): http://www.tqa.tas.gov.au/4DCGI/_WWW_doc/006665/RND01/PH866_report_03.pdf Is the solution correct? It seems wrong to me (the right-hand rule tells me otherwise). --115.178.29.142 (talk) 22:50, 8 November 2010 (UTC)[reply]

Looks okay to me. With your thumb in the direction of the current, your fingers point up on the left (inside the coil) and down on the right (outside) for magnetic field A. This diagram agrees. B obviously goes the opposite way. Clarityfiend (talk) 03:53, 9 November 2010 (UTC)[reply]
But the solution has it going down on the left (inside the coil) and up on the right (outside) for magnetic field A. 220.253.253.75 (talk) 04:58, 9 November 2010 (UTC)[reply]
Oh, heck. I need to get my eyes checked. It's wrong. Who comes up with these "solutions"? Sarah Palin? Anyway, you're supposed to look at it upside down because you're in Tasmania. Yeah, that's it. Clarityfiend (talk) 05:46, 9 November 2010 (UTC)[reply]

Ultimate fate of photon

What ultimately happens to photons after an arbitrarily long journey of many billions of light years? Can they travel unchanged indefinitely, or do they decay, scatter or something? Thanx. —Preceding unsigned comment added by 85.222.86.190 (talk) 23:33, 8 November 2010 (UTC)[reply]

They get redshifted due to the metric expansion of space. Red Act (talk) 23:47, 8 November 2010 (UTC)[reply]
Beyond that, no time passes from their point of reference, so nothing can happen to them. — DanielLC 01:16, 9 November 2010 (UTC)[reply]
Yeah, whether even the redshifting is a "real" change in the photon itself is just a matter of perspective. If my understanding is correct, during a cosmological redshift, the photon's wavelength as measured by cosmological proper distance increases, but the wavelength as measured by comoving distance stays the same. Red Act (talk) 02:47, 9 November 2010 (UTC)[reply]
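For reference, the standard relation behind that: if a(t) is the cosmological scale factor, then 1 + z = a(t_observed) / a(t_emitted), and the observed wavelength is λ_observed = (1 + z) × λ_emitted. A CMB photon, with z ≈ 1090, has had its wavelength stretched by a factor of roughly 1100 in those proper-distance terms, yet nothing has "happened" to it locally along the way.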

The lifespan of the photon is zero. Neutrino oscillations proved that neutrinos do have a "lifespan" and so the photon sits alone as the only known particle with zero lifespan. Hcobb (talk) 03:06, 9 November 2010 (UTC)[reply]

What does "zero lifespan" mean exactly...? WikiDao(talk) 03:10, 9 November 2010 (UTC)[reply]
In the photon's own frame of reference it is created and destroyed in the same instant. Hcobb (talk) 03:13, 9 November 2010 (UTC)[reply]
So, in the frame of reference of photons created at about 10 seconds after the Big Bang, the Age of the Universe is... 10 seconds? WikiDao(talk) 04:15, 9 November 2010 (UTC)[reply]
Sort of. Remember that one of the postulates of special relativity is that light cannot be used as a frame of reference; if it is, then all sorts of unresolvable paradoxes are introduced. One of them is that the photon does not exist in its own frame of reference, that is, it has a zero lifespan; i.e. it exists in OUR frame of reference, but in its own it wouldn't exist for any measurable time. Another perspective on the same paradox is that, from light's frame of reference, the entire universe happens simultaneously, that is, all events occur in the same instant. Don't try to wrap your mind around these things; unlike some of the unintuitive paradoxes such as the twin paradox, which actually occur, these are real physical impossibilities, so we generally don't even ponder what life is like in light's frame of reference. For all intents and purposes, it doesn't exist. --Jayron32 05:12, 9 November 2010 (UTC)[reply]

Now imagine a substance so strange that it slows a beam a light down by a large enough fraction that you'll notice the difference. What does that say about the lifespan of a photon? I'd suggest sitting down with a glass of water while you think about it. Hcobb (talk) 06:48, 9 November 2010 (UTC)[reply]

The fact that the local speed of light in a medium is slower than it is in a vacuum doesn't change the nature of the speed of light. The speed of light in water is still invariant, and still presents the same limits in water as does the speed of light in a vacuum. Slow light covers some of this. That the photons slow down in water doesn't change the fundamental nature of the photons. --Jayron32 06:59, 9 November 2010 (UTC)[reply]
That's not actually true: only the vacuum speed of light is the speed of light with all the special properties (invariance, causality restrictions, etc.). See Cherenkov radiation, for instance. What actually happens is that photons traveling through water are continually destroyed and recreated (coherently) with tiny gaps during which "the photon" makes no progress because it doesn't exist. --Tardis (talk) 14:09, 9 November 2010 (UTC)[reply]
Well, sort of. If you want to claim that the photons are absorbed and re-emitted, you have to do a path integral that sums (or, maybe better, averages) over all possible Feynman diagrams, with no answer to the question "but which one of these diagrams really happened?". Not very human-friendly; we are intuitive realists. Unless you have a very good reason why not, when dealing with light in matter, you should almost always use the wave formulation — electric and magnetic fields rather than photons. --Trovatore (talk) 03:57, 10 November 2010 (UTC)[reply]
Please pardon my lie-to-children: it's just a way of describing how one can "slow light down" while still allowing photons to always travel at c. This is useful because one would suppose them to always be "in vacuum" between interactions, even though of course they're delocalized and (via the path integral) "interacting" with everything around them continuously. --Tardis (talk) 15:57, 10 November 2010 (UTC)[reply]
I utterly despise lies to children; I think they're reprehensible. They're OK if prepended with something like "roughly speaking". --Trovatore (talk) 18:48, 10 November 2010 (UTC)[reply]
I was wondering if this is the case with neutrino oscillations, does it only happen when neutrinos pass through matter, or have they also been shown in vacuum? Graeme Bartlett (talk) 21:48, 9 November 2010 (UTC)[reply]
Is the solar wind between the Earth and the Sun enough of a vacuum for you? Hcobb (talk) 22:59, 9 November 2010 (UTC)[reply]
But this vacuum is almost nothing compared to the thousands of kilometers of sun plasma they pass through after leaving the sun's core. Graeme Bartlett (talk) 08:07, 10 November 2010 (UTC)[reply]

November 9

what animals can humans coproduce with?

since a donkey plus a horse can breed a mule, what animals can a human coproduce with, and what are the resulting animals called?

also, have most combinations been tried or is it possible a lot of viable (though, like the mule, possibly sterile) combinations simply were never tried yet? Thanks. 85.181.151.31 (talk) 00:08, 9 November 2010 (UTC)[reply]

Neanderthals. Count Iblis (talk) 00:14, 9 November 2010 (UTC)[reply]
Certainly as much an "animal" as homo sapiens sapiens, but still a human animal:

"Neanderthals are either classified as a subspecies (or race) of modern humans (Homo sapiens neanderthalensis) or as a separate human species (Homo neanderthalensis)."

according to the Neanderthal article.
There is, of course, no separate species today with which humans could co-produce, OP. I hope that resolves your interest in this question, but in case not I will put up the "{{RD-alert}}" tag. :| WikiDao(talk) 00:33, 9 November 2010 (UTC)[reply]
We do also have articles on the hypothetical humanzee and on parahuman. That's all I found in Category:Mammal hybrids, apart from the already mentioned Neanderthal admixture hypothesis. ---Sluzzelin talk 00:38, 9 November 2010 (UTC)[reply]
That's two corrections on assumptions-about-answerability-of-questions in a row, Sluzzelin – I promise I'll get with it by the third! :) WikiDao(talk) 00:45, 9 November 2010 (UTC) [reply]
It remains controversial whether or not Neanderthals could breed with anatomically modern humans (or more precisely whether genes were exchanged between the populations, which would require non-sterile offspring). Until a few years ago, most genetic studies suggested no genes were transferred from Neanderthals to humans, but in recent years the arguments have tended to support a limited amount of gene transfer. Dragons flight (talk) 01:16, 9 November 2010 (UTC)[reply]

Any information given in response to this question should be prefaced with Do not try this at home. An aid to remembering this warning is this extract from the article Origin of AIDS: ...the virus originated in populations of wild chimpanzees in West-Central Africa...scientists calculate that the jump from chimpanzee to human probably happened during the late 19th or early 20th century, a time of rapid urbanisation and colonisation in equatorial Africa. Cuddlyable3 (talk) 11:03, 9 November 2010 (UTC)[reply]

I think it's more likely that unsanitary butchering of apes (making Bushmeat) led to the chimp-human HIV crossover, rather than sexual contact. Monkey killing (I know, chimps aren't monkeys) is much more prevalent than Monkey fucking. Just saying. Buddy431 (talk) 20:24, 9 November 2010 (UTC)[reply]
I, the OP, (different IP now) was not thinking of sexual intercourse with 'em, but rather artificial insemination. Have they tried inseminating all the animals in zoos with human sperm to see if any give birth to viable young? Obviously I'm thinking of primates rather than dolphins and such. 84.153.236.235 (talk) 11:14, 9 November 2010 (UTC)[reply]
I'm not really much of a scientist/biologist, but aren't there a whole raft of laws to prevent people doing that sort of thing? It strikes me as something that would be seen as highly unethical, and likely quite illegal. Forgive my naivety if this is not the case. Darigan (talk) 11:33, 9 November 2010 (UTC)[reply]
Did you read the linked article, humanzee? That also links to Ilya Ivanovich Ivanov (biologist) which may provide more details Nil Einne (talk) 13:34, 9 November 2010 (UTC)[reply]
A sizeable proportion of farm boys have reportedly deposited sperm in various varieties of farm animals they had access to. Kinsey, pages 289-293, page 362 [21] in "Sexual behavior in the human male" said that 28% of farm boys (who ultimately went to college) had intercourse with animals. No offspring have been documented. Females also had intercourse with animals, per Kinsey: [22], with lower reported frequency, again without documented offspring. Few such farms have animals close to humans such as chimpanzees. Humans are said to be more closely related to chimps than lions and tigers are to each other, and they produce offspring commonly. If nothing else, such data, despite laws against such bestiality and society's moral repugnance toward it, make laughable the contentions of some scientists that intercourse would not have taken place between Neanderthals and "modern humans." Stating that no mad scientist would ever artificially (or via the old fashioned way) inseminate a chimp because it would be unethical or morally repugnant is also pretty silly, given that many scientists are happy to make new weapons of mass destruction, or to support industrial activities which harm humans or the environment. Current research is placing human tissues or genes in animals, such as placing human brain cells in mice [23] (not sure if this is serious or a hoax) but also see The Times Online (2006): [24]. Despite the assertions of a (defeated) Tea Party Senate candidate, the mice did not have "fully functioning human brains," but they did have 1/1000 of their brains made of human brain cells[25]. Another estimate said that transgenic mice have been created with 1% human brain cells [26], and there are reportedly plans to create mice whose brains are 100% human brain cells [27]. If they created a Bonobo with 100% human brain cells, what would be its capabilities? I have seen no estimate of how many cells in which brain region would be required for an animal to have some human-type self consciousness, assuming that Bonobos or elephants do not already have some degree of such awareness. Attempts to fertilize chimps with human sperm, funded by a government in the early 20th century, are described in Humanzee. Now scientists can add human genes to animals, and presumably animal genes to humans, without insemination, through genetic manipulation. Present research uses the transgenic human-mouse model quite commonly. Possible future use of animal genes to improve human genetics is discussed in the "Transgenics and evolutionary enhancement" section (pp 12-14) of a 2007 paper by Arthur Saniotis, "Recombinant nature: Transgenics and the emergence of hum-animals." [28] Edison (talk) 17:35, 9 November 2010 (UTC)[reply]
I think it's clear that the "Clyven" mouse with human intelligence is a hoax. If nothing else, the chat program that allows you to have a real-time textual conversation with a mouse should probably be a tip-off. APL (talk) 17:15, 10 November 2010 (UTC)[reply]

Why doesn't cryonics just use a ton of insulation?

From my understanding, cryonics requires replacing the liquid nitrogen every week or so. Insulation decreases thermal conductivity exponentially with distance, so why not just use so much insulation that the temperature stays low until the singularity? — DanielLC 01:47, 9 November 2010 (UTC)[reply]

Thermal resistance grows only linearly with thickness, so heat flow falls off as 1/thickness, not exponentially. Dragons flight (talk) 02:37, 9 November 2010 (UTC)[reply]
And liquid nitrogen is cheap when compared to real estate! Physchim62 (talk) 03:56, 9 November 2010 (UTC)[reply]
As you increase the amount of insulation, you also increase the surface area over which you're gaining heat. For this reason, there is an optimal insulation thickness. (I don't have the book with me, but see Transport Phenomena by Bird, Stewart, and Lightfoot; I believe there's an example problem like this.) shoy (reactions) 13:03, 9 November 2010 (UTC) (Note: this only applies for cylinders such as pipes, which is what I assume the OP is talking about. --shoy (reactions) 14:07, 10 November 2010 (UTC))[reply]
Really? If I replace some volume of the environment (say, room-temperature air) around my system with anything — even something rather thermally conductive — I should be lowering the temperature at what was the outer surface because the outside of the new insulator will be at the air's temperature and the inside of the new insulator has to be colder. So then the gradients inside the original system must be smaller. (This doesn't apply, of course, if the new material establishes a good thermal connection to something at a higher temperature than what was previously the environment.) --Tardis (talk) 17:23, 9 November 2010 (UTC)[reply]
Heat flow may be linear, but the temperature profile is a natural log function (IIRC, at least for cylindrical systems), so at some point, you're causing a great increase in surface area for not that much difference in surface temperature. --shoy (reactions) 14:07, 10 November 2010 (UTC)[reply]
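For anyone curious, here is a rough numerical sketch of that cylindrical trade-off (the conductivity, convection coefficient, pipe radius and temperature difference below are invented round numbers, not data for any real dewar):

import math

# Heat loss per metre of a cylinder wrapped in insulation.
# Conduction resistance grows with thickness, but the outer convection
# resistance shrinks as the surface area grows, so below the critical
# radius r_crit = k/h adding insulation actually increases heat loss.
k = 0.5    # insulation thermal conductivity, W/(m*K)   (assumed)
h = 10.0   # outer convection coefficient, W/(m^2*K)    (assumed)
r1 = 0.01  # bare pipe outer radius, m                  (assumed)
dT = 100.0 # temperature difference, K                  (assumed)

def heat_loss_per_metre(r2):
    r_cond = math.log(r2 / r1) / (2 * math.pi * k)  # conduction through insulation
    r_conv = 1.0 / (2 * math.pi * r2 * h)           # convection from outer surface
    return dT / (r_cond + r_conv)

print("critical radius k/h =", k / h, "m")
for r2 in (0.01, 0.02, 0.05, 0.10, 0.20):
    print(f"outer radius {r2:.2f} m -> {heat_loss_per_metre(r2):7.1f} W per metre")

In this toy example the bare pipe sits below the critical radius, so thin insulation raises the heat loss before thicker insulation brings it back down; a large cryonics dewar is already far above that radius, so there extra insulation always helps, though only with diminishing returns rather than exponentially.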

Liquid nitrogen is like a dollar a litre... we can generously assume that they have to replace 100 litres of liquid nitrogen per week (depending on the machine, it may be as little as 10?). While $5200/year sounds expensive to most people, it sounds like peanuts for whoever could afford the cryonics procedure in the first place. I mean... it's probably even less than medical insurance in America for the elderly! You're actually saving money! John Riemann Soong (talk) 23:42, 9 November 2010 (UTC)[reply]

desk chair

The arm of my chair tore open and the stuffing is exposed. The stuffing looks like a black tee shirt put through a meat grinder. What kind of stuffing is this? —Preceding unsigned comment added by Kj650 (talkcontribs) 02:03, 9 November 2010 (UTC)[reply]

It may be exactly what it looks like. It's very common to recycle old textiles as stuffing. Ariel. (talk) 04:44, 9 November 2010 (UTC)[reply]
If I may again plug The Travels of a T-shirt in a Global Economy, there's a nice section about the used clothing market. Used T-shirts may be re-sold (if in pretty good condition), turned into rags (if in good condition, but with large tears), ground into stuffing, or even ground up and re-made into low-grade fabric. Buddy431 (talk) 20:19, 9 November 2010 (UTC)[reply]

Wouldn't cotton get moldy? —Preceding unsigned comment added by Kj650 (talkcontribs) 05:38, 9 November 2010 (UTC)[reply]

If it got wet, it could. If old shirts are used, they are likely treated with some sort of anti-bacterial/anti-microbial solution to keep them from getting mold simply from perspiration and skin oils. Dismas|(talk) 05:40, 9 November 2010 (UTC)[reply]
You need moisture for mold to grow, so as long as it stays dry it won't mold. A single spill is not enough either, it would need to stay wet for about 2-3 days. Ariel. (talk) 06:22, 9 November 2010 (UTC)[reply]


What chemicals are they likely treated with? —Preceding unsigned comment added by Kj650 (talkcontribs) 01:38, 10 November 2010 (UTC)[reply]

street lighting

metal halide lamps vs sodium vapour. —Preceding unsigned comment added by 59.161.130.15 (talk) 03:58, 9 November 2010 (UTC)[reply]

What about it and them? We have a Street light article that mentions and compares various technologies. DMacks (talk) 04:40, 9 November 2010 (UTC)[reply]

Inclined plane, Non-conservative forces help

A block of weight 8 N is launched up a 30-degree inclined plane of length 2 m by a spring with spring constant 2 kN/m and a maximum compression of 0.1 m. The average force on the block due to friction and air resistance, combined, has magnitude 2 N. Does the block reach the top of the incline? If so, how much kinetic energy does it have at the top; if not, how close to the top does it get?

Okay--so here's my approach: PE(initial) = PE(final) + KE(final) + Work(nonconservative)

The initial PE is (1/2)kx^2 = 10 J; PE(final) = mgh = (8 N)(2 m × sin 30°) = 8 J; work done by NC forces = (2 N)(2 m) = 4 J.

Ergo: 10 J = 8 J + [(1/2)mv^2] - 4 J, so KE(final) = 6 J. By those maths, it seems to me that it does reach the top, with 6 J of KE at the top. Is my reasoning sound? 24.63.107.0 (talk) 04:06, 9 November 2010 (UTC)[reply]

Why do you subtract the 4 J of energy dissipated due to friction? The dissipated energy must come from the initial 10 J, so 10 J = (potential energy 2 m up the plane) + kinetic energy + energy dissipated. Icek (talk) 14:40, 9 November 2010 (UTC)[reply]
ah! Of course--in that case the KE at the top would be -2 J, so it clearly does not make it to the top, but rather only makes it up to the point where KE(final) = 0; solving there gives a height of (3/4) m. Dividing by sin(30) gives me the distance along the slope it attains--1.5 m. Sound good? 158.121.82.165 (talk) 20:43, 9 November 2010 (UTC)[reply]
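For what it's worth, here is a quick numerical check (a sketch only; note that the friction work should be taken over the distance the block actually travels, not the full 2 m, which moves the stopping point to about 1.67 m up the slope):

import math

# Block launched up a 30-degree incline by a spring.
k = 2000.0   # spring constant, N/m
x = 0.1      # compression, m
W = 8.0      # weight of block, N
theta = math.radians(30)
L = 2.0      # length of incline, m
f = 2.0      # friction + air resistance, N

E0 = 0.5 * k * x**2               # spring energy = 10 J
resist = W * math.sin(theta) + f  # total force opposing motion = 6 N
d_stop = E0 / resist              # distance at which KE reaches zero

if d_stop >= L:
    ke_top = E0 - resist * L
    print(f"reaches the top with {ke_top:.2f} J of kinetic energy")
else:
    print(f"stops after {d_stop:.2f} m along the slope "
          f"(height {d_stop * math.sin(theta):.2f} m), "
          f"{L - d_stop:.2f} m short of the top")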

Oldowan tools experiment

(I originally posted this at the Humanities desk, but maybe it ought to be here.) The author of this book review states, in passing, "Experiments have shown that Oldowan tools can be made using just the part of the brain that was available back in Homo habilis times." Is that true? What was the nature of these experiments? I can't find anything at the Oldowan article. LANTZYTALK 06:31, 9 November 2010 (UTC)[reply]

This page says:

"Homo habilis is considered to be the first member of the genus homo because of two main reasons. First, their larger brain size, and second, the presence of tools indicates that the large brains were capable of more complex thought processes not seen in the Australopithecines."

Which sort of makes it sound like the "experiments" may just have been the reasoning that they had 1) tools and 2) larger brains than other Homininae which did not have tools. There are some book references at that page, too. WikiDao(talk) 05:01, 10 November 2010 (UTC)[reply]
Although our own article indicates that reasoning may be a bit controversial:

"Whether H. habilis was the first hominid to master stone tool technology remains controversial, as Australopithecus garhi, dated to 2.6 million years ago, has been found along with stone tool implements at least 100,000 - 200,000 years older than H. habilis."

WikiDao(talk) 05:08, 10 November 2010 (UTC)[reply]

hydrogenated ketone

What's a hydrogenated ketone? —Preceding unsigned comment added by Kj650 (talkcontribs) 08:28, 9 November 2010 (UTC)[reply]

Could have many meanings depending on context. Could be an alcohol (the result of hydrogenating a ketone), could be a ketone in an alkane structure (the result of hydrogenation of an alkene). DMacks (talk) 08:59, 9 November 2010 (UTC)[reply]
There are catalysts that will reduce ketones -- or any alcohol, for that matter -- to alkanes with hydrogen. (Wait -- if you use a silane, does it still count as "hydrogenation"?) John Riemann Soong (talk) 22:06, 9 November 2010 (UTC)[reply]

Stereogram effect

While watching TV recently I saw an interesting effect whereby a photograph seemed to be stereoscopic: the camera moved away and to one side, and the images in the photograph appeared to separate and give a stereo effect. I can understand how a two-eyed viewer achieves a stereo effect, but this was with a (?) single-lens camera. I have also seen the same thing with what appear to be paintings. What is this effect called and how is it achieved? Caesar's Daddy (talk) 09:06, 9 November 2010 (UTC)[reply]

For real scenes the effect is achieved by taking pictures simultaneously by a horizontal row of cameras. As the viewpoint moves sideways the display morphs from camera to camera giving the illusion of parallax that continually varies with viewing position. You may notice that the range of movement is seldom wide, because cameras are expensive.Cuddlyable3 (talk) 10:44, 9 November 2010 (UTC)[reply]
(EC) This is described somewhat in Stereoscopy#Wiggle stereoscopy, Stereopsis and Parallax Nil Einne (talk) 10:47, 9 November 2010 (UTC)[reply]
I have seen this type of effect where there are clearly no 'multiple cameras' or 'multiple images' available, such as in documentaries using historic photographs. I think this may be what Caesar's Daddy is talking about. I could duplicate this effect by using a program like Photoshop to cut out the foreground elements to separate them from the background, and then manipulate them separately to give the 3D effect. In so doing it would possibly help to enlarge the foreground elements in relation to the background and do some creative image cloning to fill in the holes left in the background - in these sort of images you'll notice they never pan too far across the image as that would expose the flaws in the background. The same effect could be used on a photo or a painting. Unfortunately I have no idea what they call it, or whether this is how they actually do it (as I say, I could duplicate the effect in this manner, but there may be some simpler automated technique; it may all be covered in one of the articles mentioned above, but I don't have time to look). --jjron (talk) 14:20, 9 November 2010 (UTC)[reply]
Autostereograms (the "wallpaper effect") are fun. Could that be what you are asking about, OP? WikiDao(talk) 19:16, 9 November 2010 (UTC)[reply]
Jjron knows what I'm talking about, unfortunately he doesn't have the answer, Whaa, isn't that always the way. But, hey, thanks to Neil and Cuddly for your attention. No, WikiDao, autostereograms are something else, wonderful, but something else. Thanks. Caesar's Daddy (talk) 21:18, 9 November 2010 (UTC)[reply]
Those stereograms never work for me. But this is a nice stereo animation. Cuddlyable3 (talk) 09:47, 10 November 2010 (UTC)[reply]
Hollow-face illusion can give some surprising effects. Even an almost flat photograph (in reality a shallow negative relief) produces such an effect. Unfortunately, our article has no good visual samples. --Cookatoo.ergo.ZooM (talk) 21:57, 9 November 2010 (UTC)[reply]
Is it 3D pan and scan e.g. this kind of thing ? Sean.hoyland - talk 10:12, 10 November 2010 (UTC)[reply]
Sounds good to me - that's basically exactly what I was describing above, though I wouldn't have known how to do the video part in After Effects. So who's up for writing the 3D Pan and Scan article? :) --jjron (talk) 14:22, 10 November 2010 (UTC)[reply]
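For anyone who wants to see the arithmetic behind that layer trick, here is a toy sketch (the layer names, depths and camera shift are all invented; real tools such as After Effects do this for you):

# Simple 2.5D parallax: cut the photo into layers at assumed depths, then
# shift each layer horizontally as the virtual camera pans. Nearer layers
# shift more, which is what sells the 3D illusion.
layers = {"foreground figure": 2.0, "mid-ground trees": 5.0, "background hills": 20.0}  # assumed depths

def layer_offset(camera_shift, depth, near_depth=2.0):
    # Apparent shift is inversely proportional to depth: the nearest layer
    # moves with the full camera shift, deeper layers progressively less.
    return camera_shift * near_depth / depth

for name, depth in layers.items():
    print(f"{name:18s} shifts {layer_offset(10.0, depth):5.1f} px for a 10 px camera pan")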

How do you compare grades?

If one candidate has an A in course 101 and a second candidate has an A in course 101 and a C in course 202, the second would have lower grades, but more knowledge, wouldn't he? So, taking the average doesn't always seem appropriate. How is this done in real life? And what do you call this kind of mistake (taking the average when you shouldn't)? Quest09 (talk) 13:13, 9 November 2010 (UTC)[reply]

GPA is known to be faulty. A common fix is to weight harder courses with an extra point. So, a B in a harder course is worth an A. That is how you end up with something like my high school GPA of 4.2 out of a 4.0 scale. In the end, it doesn't really matter. Nobody ever asks me what my high school GPA was. Nobody cares about my cum laude with my B.S. degree. I don't even have grades for my PhD. The degree is all that anyone cares about. -- kainaw 13:28, 9 November 2010 (UTC)[reply]
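As a toy illustration of that weighting scheme (the point values and the one-extra-point bonus are just the common convention described above; schools vary):

# Weighted GPA: harder (honours/AP) courses get an extra point, so a B
# there counts the same as an A in a regular course.
POINTS = {"A": 4.0, "B": 3.0, "C": 2.0, "D": 1.0, "F": 0.0}

def weighted_gpa(courses):
    # courses: list of (letter_grade, is_weighted) pairs
    total = sum(POINTS[grade] + (1.0 if weighted else 0.0) for grade, weighted in courses)
    return total / len(courses)

print(weighted_gpa([("A", False), ("B", True), ("A", True)]))  # ~4.33, above a "4.0 scale"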
Nobody asks you what your GPA was because you have a higher degree. The same seems to be the case of your B.S. However, you'll still need a grade to get accepted at grad school, PhD program or whatever.--Quest09 (talk) 13:59, 9 November 2010 (UTC)[reply]
I was accepted to college with my score on the SAT and ACT. It had nothing to do with my GPA. I got into graduate school with my score on the GRE. They never asked about my GPA. -- kainaw 14:04, 9 November 2010 (UTC)[reply]
So, grades are still important (the GRE is still a grade, isn't it?). (As a side note I have to say that the GRE is not as important as some folks - especially the institution which offers it - believe.) Quest09 (talk) 14:12, 9 November 2010 (UTC)[reply]
No, it's not accurate to equate a test score (which can be improved) with a final grade (which cannot). My own experience with college and grad school mirrors Kainaw's -- pointing to a nice GPA didn't hurt, but standardized test scores were useful. GPA was only relevant insofar as a minimum threshold needed to be met. — Lomn 14:57, 9 November 2010 (UTC)[reply]
Further, the initial question was about a discrepancy between getting an A in a 101 course and a C in a 202 course. There is only one SAT or GRE. So, the discrepancy isn't there. A score on the SAT is a score on the SAT. There is no score on an easy SAT being compared to a score on a hard SAT -- as long as you ignore that the SAT has been dumbed down a tad over the years. -- kainaw 15:12, 9 November 2010 (UTC)[reply]

@Kainaw: I was not comparing the grade of an easy course with the grades of a difficult course. I was comparing someone with a good average (and less coursework) to someone with some extra coursework (which pushes the average down). It is like comparing $50 (average $50) with $50 + $10 (average $30). The second is definitely more, but the first has a higher average. Quest09 (talk) 16:58, 9 November 2010 (UTC)[reply]

Grades aren't even supposed to be a measure of how much knowledge a person has absorbed. It's closer to the truth to say that grades measure a student's capability to handle the material. I've forgotten nearly all of the facts about multi-variable calculus, and I've forgotten what grade I got, but if I were to look back and see that I got an A (and I knew that class wasn't too grade-inflated), I'd be confident that I could re-learn it and put it to use. Paul (Stansifer) 17:36, 9 November 2010 (UTC)[reply]
No matter how you phrase it, when it comes to college entrance exams, the students all take the same exam. So, you don't have a student taking more of an exam or less of an exam. It is one exam and one score per student. Technically, you can retake the exam and replace your old score. Still, it ends up being one exam considered with one score for that one exam per student. -- kainaw 18:27, 9 November 2010 (UTC)[reply]
The final question doesn't seem to have been touched on. I don't know if there's a formal name for it (see informal fallacy), but it would seem to depend a lot on specifics. In some cases, it would be almost like comparing apples to oranges; in others, it would be close to a fallacy of distribution; a lot is contingent on how similar the two courses are and how different the marks are. Some would argue that getting a very poor grade in course 202 would mean that the breadth of knowledge gained was probably not significant and the poorer average is entirely deserved (hence the reason most educational institutions have a cut-off date before which a class can be dropped without "earning" the poor grade). 64.235.97.146 (talk) 20:14, 9 November 2010 (UTC)[reply]
This is precisely why you are always asked for a transcript. The GPA alone tells you almost nothing. Any college, graduate program, etc. asks for a transcript. The problem you propose really doesn't exist in the real world. The Masked Booby (talk) 00:33, 10 November 2010 (UTC)[reply]
And once the college (or whatever) gets the transcript, how do they compare the grades? This does not change the situation. I was asking exactly what happens when you have the grades but they don't cover exactly the same courses. I suppose that in the case of an undergraduate program, it will be easy to compare. But what about employers or graduate programs or PhD programs? Quest09 (talk) 17:22, 10 November 2010 (UTC)[reply]
In the real world, you actually don't have this problem. A college or any other educational institution asks you for specific grades and compares them to other specific grades, no matter what they are. Employers, who could have some interest in comparing all your grades, no matter how asymmetric with another candidate's they might be, normally don't ask for transcripts and do not compare grades. In an ideal situation, I would compare the best grade of candidate A with the best of candidate B and see the rest as a bonus. Mr.K. (talk) 17:29, 10 November 2010 (UTC)[reply]

Where is Steve Baker???

This discussion has been moved to WP:RDTK#Discussion moved from Science Desk.

Plasma pressure: FACT / FICTION

hi guys

I know this sounds stupid, but I have wondered for a while: in fiction, plasma weapons / bolts etc. exert physical force on things, knock them backwards, etc. (e.g. The Sorcerer's Apprentice). Would this actually happen with superheated plasma, or is this pure fiction?

Thanks —Preceding unsigned comment added by 81.154.151.222 (talk) 14:40, 9 November 2010 (UTC)[reply]

Much like mundane bullets sending people flying backwards, it's fiction. A rough guideline from Newton's 3rd Law of Motion: if it doesn't send the firer flying backwards, it can't send the target flying either. — Lomn 14:52, 9 November 2010 (UTC)[reply]
(Excepting rockets and/or explosive projectiles, of course.) APL (talk) 15:28, 9 November 2010 (UTC)[reply]
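A rough momentum estimate shows why (the bullet mass and speed below are generic textbook-style numbers, not data for any particular firearm):

# Momentum balance for a bullet hitting (and stopping in) a person.
m_bullet = 0.010   # kg (10 g, assumed)
v_bullet = 400.0   # m/s (assumed muzzle-ish speed)
m_person = 80.0    # kg

p = m_bullet * v_bullet   # 4 kg*m/s of momentum
dv = p / m_person         # speed imparted if every bit of it is absorbed
print(f"person's recoil speed: {dv * 100:.0f} cm/s")  # ~5 cm/s: nobody flies backwards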

So the physics behind the PEP (pulsed energy projectile) wouldn't work? That's supposed to knock people over?

—Preceding unsigned comment added by 81.154.151.222 (talk) 14:40, 9 November 2010 (UTC)[reply]

That's different. Pulsed Energy Projectile is basically a laser that causes an explosion (of plasma) at its target. That explosion at the target would send things flying in all directions. In its method of action, it is more analogous to a grenade than a bullet. The weapon itself doesn't exert much force, but the effect it creates can exert a large force. Dragons flight (talk) 15:32, 9 November 2010 (UTC)[reply]
Pulsed Energy Projectile claims that it produces an explosion that knocks people over, which is distinct from a projectile that knocks people over via transfer of momentum. Grenades are the mundane example: the user, obviously, does not suffer the effects of the explosion when he throws a grenade. Additionally, the "knocks people over" thing isn't really cited at all -- I'd guess, based on a quick overview, that the effect isn't so much "physical force knocks you down" as it is "shock, surprise, and pain incite you to fall down" -- two very different things. — Lomn 15:34, 9 November 2010 (UTC)[reply]
Consider reading our article stopping power, particularly the knockback section. Back in July, we had this discussion about plasma weapons, and I compared a hypothetical plasma weapon to a conventional flame-thrower. In reality, a plasma weapon has few or no advantages over conventional armaments. Nimur (talk) 16:25, 9 November 2010 (UTC)[reply]
The citations for that article aren't done right, but the sources listed in the "Sources" section say "The resulting shock wave will knock you to the floor"[29] and "The weapon … could literally knock rioters off their feet"[30] The grenade analogy doesn't work well in terms of conservation of energy or conservation of momentum. A grenade carries potential energy that's converted to kinetic energy at its destination, so the kinetic energy produced by the grenade isn't felt as kickback by whoever threw the grenade. But the PEP is basically just a laser. Only light comes out of the laser. There's nothing coming out of the laser that has any potential energy. Every bit of kinetic energy experienced at the destination ultimately comes from the kinetic energy of the light coming out of the laser. Ditto for momentum. Red Act (talk) 16:58, 9 November 2010 (UTC)[reply]
Sorry, I wasn't clear: the sources themselves are remarkably poor with regard to "knock to the floor". There's actual discussion about the pain aspect of the weapon. There's nothing but a throwaway line about knockdown. — Lomn 23:29, 9 November 2010 (UTC)[reply]
I'm stuck on the same thing Red Act - only photons are coming out of a laser, so the total momentum imparted should be small (non-zero, but small). On the other hand, suppose the thermal energy from the laser activates a chemical reaction at the target, liberating some chemical potential energy that is already down-range. In the extreme case, imagine using a laser to heat a balloon full of hydrogen until ignition. The laser provides a tiny "pilot light" activation energy, the majority of the energy and momentum will be liberated by the secondary action. The limiting factor would then be the types of chemical reactions that can be triggered by laser-heating. Nimur (talk) 17:08, 9 November 2010 (UTC)[reply]
I have no idea what the specifications of the laser system are, but to see how this might work, consider the following. A 500 J laser pulse at 1000 nm would have 2.5×10²¹ photons and an impulse of 1.6×10⁻⁶ kg·m/s (i.e. the same momentum change as a 1 N force applied for 1.6 microseconds). However, if you could somehow dump all 500 J into 1 cm³ of water in a microsecond, you'd flash it to 200 °C steam. Using the ideal gas approximation, that steam would then have an overpressure of 2000 atmospheres, causing it to "explode" and delivering a force to its immediate surroundings of ~25 kN during the ~50 microseconds before it dissipates. In this example, the impulse of the laser was thus amplified ~1 million times as a result of using the energy to boil water. This is a purely physical change and doesn't rely on any potential energy being in the target. Obviously the PEP is trying to do something like this, but presumably without blowing up chunks of flesh (if they want it to be non-lethal anyway). Dragons flight (talk) 17:38, 9 November 2010 (UTC)[reply]
500 J isn't enough to do that. The heat of vaporization of water is 40.65 kJ/mol, of which R(373 K) = 3.1 kJ/mol is the part of the enthalpy change (taking the vapor to be an ideal gas, so that pΔV = RT). One cm³ (or gram) of water is 55.5 mmol, so you need 2084 J just to vaporize that much water in place, never mind heat it to 100 °C or from there to 200 °C. (That actual heating, from 20 °C, takes 487 J; perhaps that's what you calculated?) But your point stands: just up it to 5 kJ and you still have an amplification of ≥10⁵. --Tardis (talk) 19:37, 9 November 2010 (UTC)[reply]
Eh, I originally did the calculation for 5 MJ. I then realized the result was too enormous and tried to scale it down to something more reasonable. Apparently, I lost track of some factor in the process. Oh well. The basic point is correct though: the pressure created by a rapid phase change can lead to a much bigger impulse than the photons in the laser do directly. Dragons flight (talk) 19:48, 9 November 2010 (UTC)[reply]
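For reference, here is the arithmetic behind those figures in runnable form (a sketch; the 500 J, 1000 nm pulse and the 1 cm³ of water are the assumed numbers from the discussion above, not the real PEP specification):

h = 6.626e-34    # Planck constant, J*s
c = 3.0e8        # speed of light, m/s

E_pulse = 500.0  # J, assumed pulse energy
lam = 1000e-9    # m, assumed wavelength

E_photon = h * c / lam
n_photons = E_pulse / E_photon   # ~2.5e21 photons
impulse_light = E_pulse / c      # direct photon momentum, ~1.7e-6 kg*m/s

# Energy actually needed to boil away 1 g (1 cm^3) of water at 1 atm:
m_water = 1.0e-3                       # kg
heat_to_100C = 4186 * m_water * 80     # 20 C to 100 C, ~335 J
vaporise = 2.26e6 * m_water            # latent heat, ~2260 J
print(f"photons per pulse:     {n_photons:.2e}")
print(f"direct photon impulse: {impulse_light:.2e} kg*m/s")
print(f"energy to flash-boil 1 g of water: {heat_to_100C + vaporise:.0f} J")

The ~2.6 kJ needed to flash-boil the water is consistent with Tardis's correction above, while the direct photon impulse stays tiny.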

It is usually not the dominant source of the pressure; see e.g. here:

The force that compresses and accelerates the fusion fuel inward is provided solely by the ablation pressure. The other two possible sources of pressure - plasma pressure (pressure generated by the thermal motion of the plasma confined between the casing and the fuel capsule) and radiation pressure (pressure generated by thermal X-ray photons) do not directly influence the process.

Count Iblis (talk) 00:29, 10 November 2010 (UTC)[reply]

Condensing top for cooking pot

I was cooking "Steel cut Irish Oatmeal" which requires (per the recipe on the can) keeping it at a simmer for 30 minutes. It tended to dry out and scorch, even on low heat. If I covered the pot, it tended to boil up and spill out. I tried putting an oversized Pyrex glass lid over the pot, with the convex side projecting down, and with some cool water in the concave side above, like a reflux condensor. The goal was to let the oatmeal and water simmer, while condensing the water which evaporated or boiled off and letting it drip back into the pot. Through the glass I could see a steady stream of large drops returning to the pan, while the water above the lid heated up only minimally. The knob hanging down distributed the condensed water rather than it all falling in a small pool in the center of the oatmeal. It functioned great, and the result was good. A normal metal saucepan cover might condense some of the vapor, but it tends to heat up and then steam escapes. A pressure cooker is unusable, since a food like steel cut oats can clog the relief vent resulting in overpressure and overtemperature, and possible a blowout. So is such water-cooled condensing saucepan top a commercial product, and if not why not? All I could find was a domed lid with a small depression in the center, apparently not water cooled [31]. Edison (talk) 17:04, 9 November 2010 (UTC)[reply]

How about a laboratory-grade condenser? Surely some of those have the form-factor you're looking for. I've seen some intricate coiled tube systems - their intent is usually to maximize the rate of heat-removal; but in your application, that would be simultaneous with maximizing the quantity of collected (and re-precipitated) water. Nimur (talk) 17:21, 9 November 2010 (UTC)[reply]
Those all seem like overkill, and generally would not deposit the condensate back into the saucepan as well as the inverted glass lid from a CorningWare pan did. The regular metal lid from the saucepan would also work inverted and with some cooling water in it, but it was interesting to see the condensation in action. Edison (talk) 18:13, 9 November 2010 (UTC)[reply]
As to why this product (may) not exist: many cooks experiencing similar problems would be happy to use some combination of these techniques rather than buying a special reflux condenser lid: 1) turn the temperature down, 2) add a small amount of liquid, 3) use a slightly larger vessel, or 4) only partially cover with existing lids. (Perhaps your range is lacking in fine control at the low end?) Interesting idea though, I'll try it next time I have an opportunity :) SemanticMantis (talk) 19:37, 9 November 2010 (UTC)[reply]
A condenser seems necessary only if the evaporated material contains something precious, something you need to retain in order to preserve the character of the food you're cooking. That's obviously the case for a whisky still (where the evaporate is a key element of the final product). But in the case of oatmeal, or a casserole, is there anything really precious in that evaporate? If there isn't, then SemanticMantis' suggestion of simply adding more virgin water is sufficient (your condenser would merely save you work, and that only until you have to clean it). I guess the test is to manually collect the condensate (say by periodically lifting the lid and pouring it off into a glass, and the rest of the time keeping a bag of ice on the lid to keep it cool). Once that's done and the collected condensate has been left to cool you can taste it. If it's sufficiently strong, and sufficiently different from the mass of the food, such that adding it back in would noticeably improve the food, then a condensing pan would be a valuable addition. I'm guessing that, for most recipes, it won't - that it'll either taste (weakly) like the food, or like nothing at all. -- Finlay McWalterTalk 20:26, 9 November 2010 (UTC)[reply]

Identify this bush

What type of bush is shown in these pictures? I took the pictures today at my house in Kansas, and in the picture the bush is displaying its autumn colors.
P.S. I apologize for the relatively poor quality of the images, but my normal camera is out of commission, forcing me to take the images on my cell phone.

Thanks in advance, Ks0stm (TCG) 23:06, 9 November 2010 (UTC)[reply]

Some sort of Euonymus, perhaps Euonymus alatus? It's rather hard to tell from the pictures. Deor (talk) 01:30, 10 November 2010 (UTC)[reply]
It's definitely Euonymus alatus. There is one of those right outside my house. J.delanoygabsadds 03:49, 10 November 2010 (UTC)[reply]

Explanation behind a cosmic ray exhibit

An exhibit on cosmic rays at the Pacific Science Center.

Could someone please explain the science behind this exhibit better than the informational text seen in the picture did? I don't really get how cosmic rays are visible in the cloud chamber. The little tracks they make in the chamber can be seen in the picture where I highlighted them on the image's page at Commons. They appear in person as little yellowish white streaks in the blackish cloudy material. How do the cosmic rays form these little streaks?

Thanks in advance, Ks0stm (TCG) 23:16, 9 November 2010 (UTC)[reply]

Cloud chamber should help. 75.41.110.200 (talk) 23:27, 9 November 2010 (UTC)[reply]
Yes, you're seeing secondary effects caused by high-energy particles - it's not possible to see the particles themselves. This is sort of like lightning - you never see the "electricity", you see the air that the electricity has flowed through. In the case of lightning, the air has been super-heated and glows because of a combination of ionization and incandescence. In the case of a cloud-chamber, a supersaturated vapor has condensed into droplets because an ionized trail has created nucleation sites in the wake of a high-energy particle. Nimur (talk) 01:42, 10 November 2010 (UTC)[reply]

November 10

Antimatter paradox

In a particle accelerator, a matter particle and the corresponding antiparticle are created simultaneously out of nothing. Why doesn't this violate the law of conservation of matter? The only resolution that I can think of is that antiparticles would have negative mass, but scientists generally consider this very unlikely. Is there something that I'm overlooking? --75.33.217.61 (talk) 01:01, 10 November 2010 (UTC)[reply]

There is no law of conservation of matter, only conservation of energy. The Sun converts matter to energy all the time. Clarityfiend (talk) 01:54, 10 November 2010 (UTC)[reply]
Mass and energy are different ways of looking at the same thing, due to the whole E=mc² thing. In pair production of an electron-positron pair from a photon, for example, the rest mass of the electron and positron comes from the energy of the incoming photon. The total energy, or mass, depending on how you want to look at it, is conserved. See Mass–energy equivalence#Conservation of mass and energy. Red Act (talk) 02:06, 10 November 2010 (UTC)[reply]
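As a concrete number (standard constants only): the incoming photon must supply at least the combined rest energy of the two particles, which for an electron-positron pair is about 1.022 MeV. A minimal sketch:

# Minimum photon energy for electron-positron pair production
# (the recoil of the nearby nucleus needed to conserve momentum is neglected).
m_e = 9.109e-31   # electron rest mass, kg
c = 2.998e8       # speed of light, m/s
eV = 1.602e-19    # joules per electronvolt

E_threshold = 2 * m_e * c**2
print(f"{E_threshold:.2e} J  =  {E_threshold / (1e6 * eV):.3f} MeV")  # ~1.022 MeV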
Of course there is a law of Conservation of matter. I think the rest of what is said is pretty right. Vespine (talk) 02:55, 10 November 2010 (UTC)[reply]
The individual conservation laws of matter and energy (separately) work for ordinary transformations which occur in the course of most people's experience, such as chemical reactions, or the like, at least to a first approximation. The combined matter+energy conservation law is more scrupulously true in all cases, but changes from matter to energy, or vice versa, are usually unmeasurable by normal means except in the cases of very high energy nuclear reactions. Most people don't work in that realm. Of course, even in a chemical reaction, there are (with sensitive enough measurements) real changes in mass due to energy changes; for example, in an exothermic reaction the products are slightly less massive than the reactants due to the release of binding energy during the reaction as heat. This loss of energy will result in a tiny change of mass, which is usually unmeasurable using the kinds of scales which most people will have experience with. --Jayron32 07:33, 10 November 2010 (UTC)[reply]
Quite right. From a semantic standpoint, one is also permitted to say that in pair production relativistic mass is always conserved (for a given observer). Some might say that this is a cop out, as relativistic mass incorporates terms for kinetic energy and even accounts for particles (like photons) that have no rest mass whatsoever. On the other hand, fast-moving particles really are measurably more massive (to a suitable observer, with appropriate instruments), and photons really do have momentum — so if it quacks like a massive duck, then it must have mass. In other words, 'Conservation of matter' is a rule of thumb that applies in most 'normal' situations, whereas 'Conservation of (relativistic) mass' is the law. TenOfAllTrades(talk) 14:52, 10 November 2010 (UTC)[reply]
You say "In a particle accelerator, a matter particle and the corresponding antiparticle are created simultaneously out of nothing". But they are not created out of nothing, they are created out of other particles in interactions that obey certain conservation laws including conservation of energy and conservation of momentum - see pair production. You may be thinking of virtual particles, which can exist for short periods of time because of the uncertainty principle. Gandalf61 (talk) 15:49, 10 November 2010 (UTC)[reply]

A noob rock question

What rock is this and how did it get that way? [32] Thanks. Imagine Reason (talk) 01:42, 10 November 2010 (UTC)[reply]

It's not a great picture but it resembles granite. And what "way" are you referring to? Do you mean the relatively flat side or the slant? Either way, it could be from many things. Where did you find it? That might lend some clues. Dismas|(talk) 01:48, 10 November 2010 (UTC)[reply]
It looks like some sort of granitic or metamorphosed granitic rock. The surface of the rock is heavily weathered, so you would need to break off a piece and examine the clean, broken face to positively identify the rock; that's why geologists carry a rock hammer, since rocks which have been exposed to the elements, covered in lichens, out in the rain for thousands of years, etc., are hard to identify. A freshly broken rock is essential to identification. Some other options, if it isn't granite, would perhaps be gabbro or diorite or gneiss, but these are just shots in the dark. It is quite impossible to positively identify the rock from that picture. --Jayron32 02:01, 10 November 2010 (UTC)[reply]
It's just a random rock on a trail. I was wondering about the surface of the rock. Weathering...of course. Thanks for the responses. Imagine Reason (talk) 04:59, 10 November 2010 (UTC)[reply]
The vertical surface and cracks are joint (geology) planes, formed when the rock is under stress, but not too much pressure. The green stuff may be lichen or green algae, but resolution in the image is not good enough to tell. Graeme Bartlett (talk) 08:00, 10 November 2010 (UTC)[reply]

History of SR

When you plug the Lorentz transformations into Maxwell's equations, they remain in the same form. 1) How did he (or whoever) arrive at the form of the transformation? Was it worked out or done by blind luck? 2) How did Lorentz interpret this? When he was suggesting Lorentz contraction, it appears that he thought length contraction was purely an electromagnetic effect. Did he ever suggest that it might be more general? 76.68.247.201 (talk) 01:53, 10 November 2010 (UTC)[reply]

Have you read History of Lorentz transformations and Lorentz ether theory#Historical development yet? Red Act (talk) 02:35, 10 November 2010 (UTC)[reply]

a disconnected capacitor and dielectrics....

When I add a dielectric into a capacitor do I change the energy stored? Or does the potential energy remain constant? I'm guessing constant but I am not entirely sure.... I usually don't have large enough capacitors to play with so I can't tell if it costs more energy to put a dielectric in or take it out.

From what I get, putting a dielectric in can't change the charge stored, so while the capacitance increases and voltage decreases, this basically increases the amount of charge that could be stored. The voltage drops in the sense that it now costs less energy to separate the same amount of charge .... but wouldn't this reduce the potential energy stored by the capacitor? Or does adding the dielectric only make it possible to store more?

The article really isn't clear on this. John Riemann Soong (talk) 04:55, 10 November 2010 (UTC)[reply]

What would happen if you removed it? You now have "too much" charge. But actually, I believe it would take energy to insert a dielectric (which would show up as voltage, not charge), and presumably the capacitor would constantly be trying to expel the dielectric, converting the stored energy (voltage) into kinetic energy in the process. (I wonder if you can make a motor this way.) I'm not 100% sure about this, but after "simulating" this in my head it's the only option that made sense. Ariel. (talk) 06:34, 10 November 2010 (UTC)[reply]
Actually, I believe the energy stored will decrease. If you take an air-gap capacitor, charge it, and then put some dielectric (with a higher dielectric constant than air) between the plates, Q will remain the same, C will increase by the ratio of the dielectric constants because C=εA/d, V will decrease by the same factor because V=Q/C, and the available stored energy W will decrease because W=VQ/2. Red Act (talk) 07:04, 10 November 2010 (UTC)[reply]
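A small numerical sketch of that bookkeeping (the charge, air-gap capacitance and dielectric constant are made-up round numbers):

# Isolated (disconnected) capacitor: Q is fixed, so inserting a dielectric
# raises C and lowers both V = Q/C and the stored energy W = Q^2/(2C).
Q = 1.0e-6       # stored charge, C (assumed)
C_air = 1.0e-9   # air-gap capacitance, F (assumed)
k_diel = 4.0     # relative permittivity of the slab (assumed)

for label, C in (("air gap", C_air), ("with dielectric", k_diel * C_air)):
    V = Q / C
    W = Q**2 / (2 * C)
    print(f"{label:16s} C = {C:.2e} F, V = {V:6.1f} V, W = {W * 1e6:6.2f} uJ")

# The "missing" energy goes into pulling the slab in: the capacitor
# attracts the dielectric, as discussed in the replies below.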
So that will mean the dielectric is attracted to the capacitor, just like a charged plastic rod can attract paper. Graeme Bartlett (talk) 07:52, 10 November 2010 (UTC)[reply]
Well I guess if I charged the capacitor (with dielectric) with a 10V battery, then removed the dielectric, the voltage will be higher than the battery's. It will have a higher potential than the battery, so now it will put energy back into the battery. This is how I understand it. The dielectric "stabilises" the built-up charge. It's kind of like how hyperconjugation or hydrogen bonding stabilises a high-energy intermediate or transition state? John Riemann Soong (talk) 20:08, 10 November 2010 (UTC)[reply]
What if there is no battery connected? No, I believe that the capacitor is constantly trying to expel the dielectric, and if it manages to do so, the "extra" energy is converted to kinetic energy. Ariel. (talk) 04:00, 11 November 2010 (UTC)[reply]
You've got that backwards, Ariel.
As the dielectric is inserted, the potential energy stored on the capacitor decreases, as per my previous post. For conservation of energy to hold, that means that in the absence of any additional external forces, the kinetic energy of the dielectric increases as it moves into the capacitor, which means its velocity is increasing, which, since the velocity of the dielectric is toward the capacitor, means that the capacitor basically sucks the dielectric into it. I.e., the dielectric is attracted to the capacitor, as Graeme said.
As another way of looking at it, look at the picture on the right at Capacitor#Theory of operation. As shown in the picture, the electric field between the plates produces a surface charge on the dielectric such that the charge on the dielectric's surface is opposite in polarity to the charge on the adjacent plate. The opposite charges result in an attractive force. Red Act (talk) 04:43, 11 November 2010 (UTC)[reply]
Could be, could be, the part that's bugging me is what happens if you remove the dielectric from a fully charged capacitor? It now holds more electrons than it can. Ariel. (talk) 06:01, 11 November 2010 (UTC)[reply]
It now holds more separated charges, creating a higher voltage than the battery that created it. Normally a capacitor stops when its voltage drop equals the battery because if it went any higher the capacitor would start charging the battery. I suppose it's a situation of equilibrium. John Riemann Soong (talk) 06:53, 11 November 2010 (UTC)[reply]
Yeah, I made my comment below while assuming that the capacitor was disconnected, in which case removing the dielectric would greatly increase the voltage on the capacitor, likely resulting in exceeding the breakdown voltage. But if the capacitor was connected to a rechargeable battery while the dielectric was being removed, then current would flow "backwards" back into the battery, charging it back up slightly, and reducing the charge stored on the capacitor, with just a tiny increase in voltage. Red Act (talk) 07:56, 11 November 2010 (UTC)[reply]
If the capacitor's breakdown voltage is exceeded, it will short out briefly. Red Act (talk) 07:19, 11 November 2010 (UTC)[reply]

cotton pants

I bought some new cotton pants and washed them, and afterwards they have a strong pesticide smell. WTF? And why didn't they smell like that while dry? —Preceding unsigned comment added by Kj650 (talkcontribs) 09:52, 10 November 2010 (UTC)[reply]

When articles get wet it seems to release some latent odours -- I'm thinking 'dog' at this point. But hey, look on the bright side: it'll keep the ants out. Caesar's Daddy (talk) 23:33, 10 November 2010 (UTC)[reply]

Something from Nothing

Quantum fluctuations arise because of Heisenberg's uncertainty principle. Our universe is said to have evolved from such a fluctuation, one big enough. So does that mean that Heisenberg's uncertainty principle is the ultimate driving force of all the universes in the multiverse? Or is there another explanation that I am missing?--Lightfreak (talk) 10:17, 10 November 2010 (UTC)[reply]

Mu. Either a "yes" answer or a "no" answer to your question would be unfalsifiable, because the phrase "ultimate driving force" has no established scientific definition. The word multiverse is also problematical, because that can mean too many different things, all of which are hypothetical. Red Act (talk) 11:27, 10 November 2010 (UTC)[reply]
Also, the Big Bang seems to have created the fabric of space along with matter and energy. It's not the same as a quantum fluctuation happening within the existing universe. Heisenberg does, however, imply that our current models lose predictive power for the very first moments of the Big Bang. --Stephan Schulz (talk) 17:27, 10 November 2010 (UTC)[reply]

what's this lake called?

Are you talking about the *type* of lake, or the name of that particular body of water?
To my eyes, it appears to be man-made, in an effort to drain the water from the surrounding land that would otherwise be some sort of salt-marsh. I believe that the appearance of the land on the other side of the railway suggests this. (Am I on 'the wrong track'?) Darigan (talk) 11:48, 10 November 2010 (UTC)[reply]
On Wikimapia, it's labeled "Pinole Lake", but I'm not finding much independent confirmation of the name. Deor (talk) 13:01, 10 November 2010 (UTC)[reply]
I see no streams or such feeding the lake, so this is most likely some sort of man-made pond. Perhaps a landscaping project for a business park or housing development being built in the area. Googlemeister (talk) 15:12, 10 November 2010 (UTC)[reply]
It might be an old gravel quarry. 92.24.187.248 (talk) 15:45, 10 November 2010 (UTC)[reply]
I was wondering about the name. There is a complex creek system throughout the area, actually; I believe it is fed by Garrity Creek. What government agency would be able to tell me what it is, what it's named, or who owns it? —Preceding unsigned comment added by Hemanetwork (talkcontribs) 07:17, 11 November 2010 (UTC)[reply]

can you strangle someone with your own intestine?

I saw a fight scene where someone was eviscerated, i.e. their gut was sliced open, but in the seconds before they died, they used their own intestines to strangle their assailant. Is this remotely possible??? Could you survive for even seconds with your guts out, and are your guts tensile enough to choke someone? Thank you. 84.153.247.200 (talk) 11:30, 10 November 2010 (UTC)[reply]

No. Dolphin (t) 11:35, 10 November 2010 (UTC)[reply]
You can survive for a short while with your guts out. Hence disembowelment was historically used as a particularly gruesome form of capital punishment, and the Japanese rite of seppuku involved having a second stand by to behead the person as soon as the abdomen had been sliced open (harakiri). However, a person who has been disembowelled is hardly likely to have the strength (because of blood loss and extreme pain) to be able to strangle another person even if they had a piece of rope handy, let alone with their own intestines. Physchim62 (talk) 11:42, 10 November 2010 (UTC)[reply]
A person with guts exposed would not automatically lose all strength, and the blood loss would not necessarily be all that quickly incapacitating merely from the front of the abdomen being sliced open, as opposed to an abdominal injury which sliced arteries and internal organs and caused massive blood loss. Edison (talk) 18:32, 10 November 2010 (UTC)[reply]
Maybe it would help if they were on drugs, like PCP or something else with strong analgesic properties. There are cases of people undertaking "injurious activities" oblivious to the harm they are causing themselves. As for whether it is strong enough, my purely speculative answer is yes, they're pretty strong. Vespine (talk) 21:37, 10 November 2010 (UTC)[reply]
This is implausible for an additional important reason: Intestines released from the abdomen are on a pretty short tether - the mesentery (and mesocolon for the colon). Think of the intestines as the piping on the edge of the mesenteric apron. The mesentery won't stretch far, but could be torn (with plenty of additional vascular damage). The intestines are also fairly stretchy, so it would take some work to pull them taut enough even if they could be made to reach the intended victim. -- Scray (talk) 12:32, 11 November 2010 (UTC)[reply]

Species identification for File:Orange-shelf-fungus.jpg

The image in question

I'm not familiar with my local fungi species here in the UK, let alone those of Australia, which is where the uploader of the image appears to be based, hence this request.

In order to expand on the image description, so the image can be moved to Commons, is anyone on the Science Reference desk able to provide a more specific species identification? Sfan00 IMG (talk) 13:45, 10 November 2010 (UTC)[reply]

Looks vaguely like Polyporus squamosus, but it's too blurry to make out details of the cap surface, and there's no image of the underside. Not a useful picture, really; deletion would be a suitable fate. Sasata (talk) 22:14, 10 November 2010 (UTC)[reply]
Might be Laetiporus sulphureus, but with so little detail... Richard Avery (talk) 23:26, 10 November 2010 (UTC)[reply]

energy from antimatter

If one was to calculate energy from an antimatter reaction, one would use the famous E=mc² formula, but is m equal only to the mass of the antimatter, or to 2x the antimatter mass to account for the equal amount of matter that is annihilated? Googlemeister (talk) 15:17, 10 November 2010 (UTC)[reply]

The total mass converted, which is both standard and antimatter. — Lomn 15:26, 10 November 2010 (UTC)[reply]
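As a concrete illustration (standard constants; the 1 g figures are arbitrary):

# Annihilating 1 g of antimatter with 1 g of ordinary matter:
# the m in E = mc^2 is the total 2 g that disappears.
c = 3.0e8          # m/s
m_total = 2.0e-3   # kg (1 g matter + 1 g antimatter)

E = m_total * c**2
print(f"{E:.2e} J  (~{E / 4.184e12:.0f} kilotons of TNT equivalent)")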

Antimony oxidation

Would oxidation of antimony by a mixture of dilute hydrogen peroxide and concentrated hydrochloric acid form the pentachloride or the trichloride? --Chemicalinterest (talk) 15:33, 10 November 2010 (UTC)[reply]

It might depend on your concentrations. (The redox potential cited for H2O2 is only for standard state conditions after all.) It might form a mixture of them, even. Have you tried doing any free energy calculations with redox potentials? John Riemann Soong (talk) 23:21, 10 November 2010 (UTC)[reply]

I haven't done any calculations about it. I only put a little powdered antimony in the solution and it dissolved. It did not dissolve when only the hydrochloric acid was there. From the solubility charts, when the small amount of powdered antimony turned whitish in hydrogen peroxide, antimony pentoxide was formed. I would expect it to form the pentachloride when mixed with hydrochloric acid but didn't know whether the pentachloride was stable in that solution. (It was very dilute). --Chemicalinterest (talk) 11:39, 11 November 2010 (UTC)[reply]

Meteorites in mammoth tusks?

I was watching a TV program (National Geographic?) about scientists who were proposing that North American mammoths went extinct ~13,000 years ago because of a meteor impact in what is now northern Canada. One of the scientists scoured collections of mammoth tusks with a magnet looking for small pieces of meteor-derived metal shrapnel in the tusks. This seemed to me like an extraordinarily unlikely thing to be looking for. The chance that fragments of a meteor would end up embedded in a mammoth tusk must be infinitesimal. And the chance that if such as thing did occur that the mammoth involved would end up fossilized and collected must also be infinitesimal. But the scientist did find fossilized mammoth tusks with supposed meteor fragments in them, according to the show. They just turned out to be from an older age (~30,000 years ago), and were said to be evidence of another earlier meteor event. Can this be true, or has this story somehow been embellished and dramatized for a television audience? Edgeweyes (talk) 15:39, 10 November 2010 (UTC)[reply]

Well, it probably wouldn't be fragments per se. However, stuff you breathe in and stuff you eat will end up in trace amounts in your hair, teeth, and fingernails, things like that. So what was probably being looked for was elemental evidence of meteoric impact; unique elements and isotopes that would have been in the air and water which would have been ingested by the mammoths and would have shown up in their teeth and bones. Such a meteoric impact would not have likely killed the mammoths instantly; instead, it would probably take a few generations for the species to go extinct. Animals which grew up in the years shortly after the meteor impact would possibly have signs in their teeth and bones which would have been indicative of that. See Isotope analysis for the likely work that was done on the mammals. Look at the "oxygen isotopes-tissues affected" section, especially the paragraph on teeth. What applies to oxygen there also applies to other elements in teeth, so any unusual isotopes introduced to the environment of the mammals, such as by a meteor impact, would show up in the animals' teeth (tusks are teeth, after all) and remain "locked in" throughout the animal's life. I have not seen the program in question, but I suspect what you are interpreting as "fragments" is really chemical elements. --Jayron32 15:56, 10 November 2010 (UTC)[reply]
Actually, the show depicted the scientist looking for visible fragments. He was dangling a magnet on a string and moving it slowly over the surface of the tusks, watching and feeling for a tug from metal fragments. They showed images of small pieces (hard to judge exactly how small), but clearly pieces visible without magnification. Edgeweyes (talk) 16:08, 10 November 2010 (UTC)[reply]
I think I identified the show (by searching Wikipedia, of course): National Geographic Explorer, "Mammoth Mystery" from 2007. Edgeweyes (talk) 16:10, 10 November 2010 (UTC)[reply]

This sounds like the research of Allen West. Here is an abstract of a scientific presentation about this research: http://adsabs.harvard.edu/abs/2007AGUFM.U23A0865F —Preceding unsigned comment added by 148.177.1.210 (talk) 16:32, 10 November 2010 (UTC)[reply]

No shit? Well, strike my comments above. You learn something new all the time. --Jayron32 16:41, 10 November 2010 (UTC)[reply]
I consider this "extraordinarily unlikely" - but that doesn't mean it's false. Skepticism about meteorites is nothing new. It's worth repeating the classic quote from Thomas Jefferson: "I would more easily believe that a Yankee professor would lie than that stones would fall from heaven."[34] Jefferson's skepticism has since been proven wrong, and meteorites are now widely accepted scientific facts; rocks occasionally do fall from the sky (as unlikely as that seems!); and if they survive to the ground intact, they have to land in something. From a certain point of view, it's probably easier to identify a micrometeorite in a tusk (where it stands out clearly as a mineral of foreign origin), as opposed to finding it in the ground. What I find amazingly extraordinary is that the bison survived the impact - in the case of the rest of the tusks, the micrometeorite could have impacted at any time during or after the animal's life - but at least one observation indicates that the micrometeorites hit the animal while it was still alive, and the bison managed to heal and regrow new bone-material! Nimur (talk) 22:39, 10 November 2010 (UTC)[reply]
Wow, that seems to match exactly. I have to say that I'm really surprised. Edgeweyes (talk) 23:33, 10 November 2010 (UTC)[reply]

I'd just like to note that the majority view at the moment is that the impact hypothesis for the North American mass extinction has been pretty definitively rejected, with the last nail in the coffin coming out last month; see PMID 20805511. Basically none of the things that were claimed to support it have stood up to detailed examination. Looie496 (talk) 00:23, 11 November 2010 (UTC)[reply]

The original paper from 2007, "Evidence for an extraterrestrial impact 12,900 years ago that contributed to the megafaunal extinctions and the Younger Dryas cooling", is nice to read. Especially if you know how heated and controversial the discussion became in the following three years. --Stone (talk) 07:55, 11 November 2010 (UTC)[reply]
Can you describe how West's work has been criticized? Edgeweyes (talk) 13:50, 11 November 2010 (UTC)[reply]

Species identification for File:Flower2294.JPG

The image in question

In order to expand on the image description, so the image can be moved to Commons, is anyone on the Science Reference desk able to provide a more specific species identification? Sfan00 IMG (talk) 13:45, 10 November 2010 (UTC)[reply]

Looks like some kind of Helianthus (aka sunflower), not sure on the exact species, but you could hunt around. --Jayron32 15:58, 10 November 2010 (UTC)[reply]
My suggestion would be Calendula officinalis. Being a popular garden plant, this comes in a range of cultivars with varying appearances (double heads, different colours, etc), but the leaves (which are also subject to some variations) look consistent. 87.81.230.195 (talk) 22:18, 10 November 2010 (UTC)[reply]
There are so many types of aster that all look generally similar, that you really gotta be a specialist to be able to name a specific one (modulo a few distinctive exceptions). Looie496 (talk) 00:12, 11 November 2010 (UTC)[reply]

Molecular geometry

Hello, I would like to find a reference that clearly and concisely describes the geometry of ideal polyhedra. For example, in a molecule formed of X and O units in 4-fold coordination, the ideal O-X-O bond angle for a tetrahedral unit is cos⁻¹(-1/3) = 109.47 degrees. This is quoted in the article Molecular_geometry. However I would like to understand, in simple terms, where this formula comes from, and hence how I would go about calculating the O-X-O bond angle in other types of polyhedra, specifically for octahedral or cubic geometry. I know this is quite basic geometry but I find it quite difficult to visualise and am sure there must be a reference out there somewhere that describes how to calculate this angle for ideal polyhedra. Thanks in advance for your help. 88.219.44.165 (talk) 22:06, 10 November 2010 (UTC)[reply]

The technical term for this value is the central angle of the polyhedron. My bookmark for looking them up is [35]. Here goes my understanding of the math. In general, one considers inscribing the polydedron in a sphere, and the edges of the polyhedron are therefore chords. Now you've got a triangle (center, and the two ends of the chords) and you know all three side-lengths (from the dimensions of the polyhedron and the sphere, respectively), so you can solve the angles of it with simple trig. And the dimensions of the sphere are also based on the dimensions of the polyhedron. So "all you have to do" is figure out how to derive the sphere size for a given polyhedron. I don't remember how. DMacks (talk) 01:11, 11 November 2010 (UTC)[reply]
Okay, it appears maybe related to the "miter angle" (angle between the two faces joined by the edge that is the chord). I still don't know how to find that, but lots of woodworking sites have applets or links to calculators for at least some forms. DMacks (talk) 01:15, 11 November 2010 (UTC)[reply]
Thanks DMacks for the reference that gives the table of central angles, that helps for a start. In terms of the calculation, yes, it is just fundamental geometry, so it must be described in a textbook or a good online reference somewhere, but thus far I have not been successful in finding anything. 88.219.44.165 (talk) 07:55, 11 November 2010 (UTC)[reply]

Equilibrium -- only at 109.47 degrees are all four bonds equally spaced from each other. Or is this not what you were asking? John Riemann Soong (talk) 03:03, 11 November 2010 (UTC)[reply]

Yes, although often there may be some distortion from these ideal values. Nevertheless my question is how do you actually calculate the angle (termed central angle as DMacks points out) for various ideal polyhedra. It's "basic" geometry but I'm looking for a reference that describes the procedure in simple terms. I have spent a long time (a few hours) searching on google but can't find a straightforward explanation. 88.219.44.165 (talk) 07:55, 11 November 2010 (UTC)[reply]
Since calculating the central angles of various polyhedra is really purely a math problem, you might have better luck with getting a thorough explanation if you ask about it at the math reference desk. Red Act (talk) 09:14, 11 November 2010 (UTC)[reply]
Thank you, I have posted my question here: Wikipedia:Reference_desk/Mathematics#Calculating_the_central_angle_in_polyhedra. Best wishes 88.219.44.165 (talk) 11:03, 11 November 2010 (UTC)[reply]
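For anyone who wants to check the numbers, the approach DMacks describes can be carried out without explicitly finding the circumscribed sphere: place the polyhedron's vertices with its centre at the origin, and the central angle is simply the angle between the position vectors of two vertices joined by an edge. Below is a minimal sketch in Python; the vertex coordinates and the helper name angle_deg are illustrative choices, not taken from the references linked above.

import math

def angle_deg(u, v):
    # Angle between two vectors, in degrees, via the dot-product formula.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / norm))

# Regular tetrahedron: four alternating corners of a cube, centred on the origin.
tetra = [(1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)]
print(angle_deg(tetra[0], tetra[1]))      # 109.47... degrees, i.e. arccos(-1/3)

# Regular octahedron: vertices on the coordinate axes; any two non-opposite
# vertices are joined by an edge.
print(angle_deg((1, 0, 0), (0, 1, 0)))    # 90.0 degrees

# Cube: adjacent corners differ in exactly one coordinate.
print(angle_deg((1, 1, 1), (1, 1, -1)))   # 70.53... degrees, i.e. arccos(1/3)

The same method works for any polyhedron whose vertex coordinates you can write down with the centre at the origin, which is usually easier than deriving the sphere size directly.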

Species identification for File:Marouxlynx.jpg

The image in question

In order to expand on the image description so the image can be moved to Commons, is anyone on the Science Reference desk able to provide a more specific species identification?

Sfan00 IMG (talk) 22:26, 10 November 2010 (UTC)[reply]

Looks like a Canada lynx to me (Lynx canadensis), but I'm not an authority. -- Scray (talk) 01:42, 11 November 2010 (UTC)[reply]
The same image appears here and here, both of which treat it as a Lynx canadensis. WikiDao(talk) 14:17, 11 November 2010 (UTC)[reply]

November 11

Home Solar Power.......Need Help

I have recently made an outbuilding into a chicken coop and needed electricity there during the winter. It was not cost effective to run a hard line out there from the house, so I went with solar. I bought a 15 watt solar panel, a deep cycle marine battery with a regulator, and a power inverter with two outlets. Got it all hooked up and running just fine. So here's my problem: currently I have a 10 watt fluorescent bulb that's on a timer for only a few hours during the night. I would like to put in a red 50 watt heat bulb and leave it on all night. How do I find out the maximum wattage of bulb I can run with the current set-up, and for how long? I have looked everywhere for a "conversion chart" without success. I'm new to this solar thing and need help. —Preceding unsigned comment added by Kright19 (talkcontribs) 03:37, 11 November 2010 (UTC)[reply]

No conversion needed, watts are watts. If your solar panel is 15 watts, and suppose it gets light for 8 hours a day, you'll have 15 watts × 8 hours = 120 watt-hours available. If you need 50 watts of heat, then you can run that for 120 watt-hours / 50 watts = 2.4 hours. If you include the fluorescent bulb, then it's 120 watt-hours / 60 watts = 2 hours. That said, there is a complication: inverters waste energy, i.e. they are not 100% efficient. Their efficiency seems to run from 50% to 90%; if we assume 75%, then from your 120 watt-hours available, you'll only be able to use 90 watt-hours. On top of that, the battery is also not fully efficient. My suggestion: instead of using solar power to make and store electricity, store the solar heat directly. You have a number of ways of doing that. One idea is a tub of water on a pole with a U-shaped pipe in a loop at the bottom of it. Next use mirrors (even aluminium foil on some plywood will work) to shine a lot of sun near the bottom of one side of the loop (not at the bottom, a little above it, and only on one side of the loop). Convection will circulate the water for you - you don't need a pump. Then use this warm (maybe even hot) water during the night to keep the temperature up. If I did not answer your question, please let me know. Ariel. (talk) 03:53, 11 November 2010 (UTC)[reply]
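To make the arithmetic above easy to redo with different numbers, here is a rough sketch in Python. The 8 sun-hours, 75% inverter efficiency and 85% battery efficiency are assumed round figures rather than measured values, and the function name runtime_hours is purely illustrative.

def runtime_hours(panel_watts, sun_hours, load_watts,
                  inverter_eff=0.75, battery_eff=0.85):
    # Hours per night a load can run on one day's solar harvest.
    harvested_wh = panel_watts * sun_hours                   # energy collected per day
    usable_wh = harvested_wh * inverter_eff * battery_eff    # losses in inverter and battery
    return usable_wh / load_watts

# 15 W panel, 8 hours of light, 50 W heat lamp plus the 10 W fluorescent bulb:
print(runtime_hours(15, 8, 60))   # about 1.3 hours, nowhere near all night

Whatever exact efficiency figures you plug in, a 15 watt panel cannot come close to powering a 50 watt lamp all night; you would need a much larger panel (and battery) to cover a long winter night.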

Diabetes, Insulin, and... mandatory baseline food consumption?

From this article about the disabled Carnival cruise ship: Gina Calzada, 43, of Henderson, Nev., said her diabetic sister, Vicky Alvarez, called her Wednesday morning on her cell phone and started sobbing. She said she has not been able to take her insulin for her diabetes because she is not eating enough. She told Calzada all that she had eaten was some bread, cucumbers and lettuce. "I told her where are the Pop Tarts and the Spam? I thought they brought in 70,000 pounds of supplies," Calzada said. "She said I haven't seen that."

Could someone please explain to me the details of this woman's problem. Why can't she take her insulin? Is she in danger, or just inconvenienced? I don't really understand diabetes... The Masked Booby (talk) 05:28, 11 November 2010 (UTC)[reply]

What she describes does not make sense. Not able to take insulin because you don't eat? It's exactly the opposite: if you don't eat, you don't need as much insulin. And if she has bread, cucumbers and lettuce, then she can eat those; what's the problem? People pay a lot of money for very fancy, unlimited food on cruises, and I bet she is upset at not having that. To give her the benefit of the doubt, maybe she was told to eat a certain amount to match the insulin, and doesn't know how to adjust it for lower food consumption. And maybe they have a food shortage on board. Ariel. (talk) 06:06, 11 November 2010 (UTC)[reply]

Faint glow from compact fluorescent lamp when off

Best seen from the corner of the eye in the dark. What causes it please? 92.29.125.32 (talk) 11:16, 11 November 2010 (UTC)[reply]

Phosphorescence from the phosphor coating on the inside of the tube, much like glow-in-the-dark paint. --Chemicalinterest (talk) 11:40, 11 November 2010 (UTC)[reply]
Old black-and-white TVs and oscilloscope screens used to glow after switching off, like that. Cuddlyable3 (talk) 13:00, 11 November 2010 (UTC)[reply]

Does that mean it is somewhat radioactive? 92.29.112.73 (talk) 13:18, 11 November 2010 (UTC)[reply]

No. shoy (reactions) 14:30, 11 November 2010 (UTC)[reply]

Particles and antiparticles - two directions of time

Some researchers have argued that antiparticles are just normal particles travelling backwards in time; positrons are electrons travelling to the past, and so on. If so, this implies there is an "end of time" from which antiparticles start travelling, just as normal particles start travelling from the "beginning of time". Where is this end of time located? --Leptictidium (mt) 12:07, 11 November 2010 (UTC)[reply]

I suppose the Big Crunch is a hypothetical "end of time". But why do you think that "normal particles start travelling from the 'beginning of time'"? Many "normal" particles are much younger than the Universe. The photons in daylight were created a few minutes ago at the surface of the Sun. The photons in an artificially lit room at night are even younger. Electrons and positrons can be created in beta decay. And the quarks within hadrons such as the proton and neutron are continuously interacting by exchanging gluons, so a specific bound quark (if such a concept even makes sense) has an incredibly short lifetime. Gandalf61 (talk) 13:04, 11 November 2010 (UTC)[reply]

How much Vitamin D in tin of salmon?

To be exact, a 450g (approximately as far as I recall) tin of wild Alaskan pink salmon. Thanks 92.29.112.73 (talk) 13:23, 11 November 2010 (UTC)[reply]

digoxin

Who discovered it, and when? —Preceding unsigned comment added by 208.104.163.189 (talk) 13:42, 11 November 2010 (UTC)

When someone finds a ref with this information, please contribute to the Digoxin article. DMacks (talk) 14:05, 11 November 2010 (UTC)[reply]