
Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


February 4

Sky Dome Bending?

I once saw a thunderhead in the sky that was too big for my eyes (or the light filtering through the skydome) to take in. Is there a technical term for when the naked eye sees a thunderhead that "bends" because it's too high for the hemisphere of sky to show a straight-up thunderhead from your point of view? Or is it simply known as "skybending" or the "skydome effect"? 71.87.112.14 (talk) 00:43, 4 February 2011 (UTC)[reply]

A thunderhead can have a base that's several miles across, and can have an altitude that's as low as 500 feet. So it's not too surprising that a single thunderhead could occupy the entire visible portion of the sky as seen from a point inside the SkyDome. Red Act (talk) 01:42, 4 February 2011 (UTC)[reply]

How do Lakes affect temperature?

i.e. do lakes make it warmer or colder? Please include specific research/data. —Preceding unsigned comment added by 146.115.78.18 (talk) 01:20, 4 February 2011 (UTC)[reply]

Lakes typically do "neither" - they neither add nor subtract thermal energy over long time scales; but they store it. Lakes, especially very large ones, can act as heat reservoirs, because water has a higher heat capacity than air and a greater heat conductivity than ground; so large bodies of water tend to slow the rate of temperature change. That means that they'll help keep the surrounding air at the same temperature, even as weather-systems move in. In some unusual cases, a lake may sit on top of a geothermal heat source; so we could say in those cases that the lake actually contributes heat to the climate-system, but that's an exception, not a normal behavior. In an extremely large lake, water currents and/or vertical upwelling may dominate the temperature effects, regularly bringing warmer or cooler air to certain locations. Few lakes are large enough to exhibit this effect very strongly. Nimur (talk) 02:07, 4 February 2011 (UTC)[reply]
Just to clarify Nimur's response: The answer in simple terms is "both". I live next to a large lake (Lake Michigan) and typically (not always, since other factors sometimes come into play) the temperatures I experience are warmer than areas farther from the lake in the winter and cooler than those areas in the summer. The effect can be seen somewhat by comparing the average monthly high and low temperatures in the tables at Chicago#Climate and Rockford, Illinois#Climate. Deor (talk) 03:07, 4 February 2011 (UTC)[reply]
See also Lake-effect snow: big lakes release heat in the winter, creating bunches of snow on the downwind side of them. Buddy431 (talk) 04:12, 4 February 2011 (UTC)[reply]
Also, large lakes are typically warmest in late autumn and coolest in late spring. Dimictic lakes typically mix top and bottom layers in spring and autumn. ~AH1(TCU) 19:32, 6 February 2011 (UTC)[reply]
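Nimur's heat-reservoir point comes down to water's much larger volumetric heat capacity than air's. A back-of-the-envelope Python sketch, using rough textbook values (not measurements of any particular lake):

```python
# Why lakes buffer temperature: compare the energy needed to warm
# equal volumes of water and air by the same amount.
# Values below are approximate textbook figures.

WATER_DENSITY = 1000.0        # kg/m^3
AIR_DENSITY = 1.2             # kg/m^3, near sea level
WATER_SPECIFIC_HEAT = 4186.0  # J/(kg*K)
AIR_SPECIFIC_HEAT = 1005.0    # J/(kg*K)

def energy_to_warm(volume_m3, density, specific_heat, delta_t):
    """Energy (J) needed to raise `volume_m3` of a substance by `delta_t` K."""
    return volume_m3 * density * specific_heat * delta_t

# Warming 1 cubic metre of each by 1 kelvin:
e_water = energy_to_warm(1.0, WATER_DENSITY, WATER_SPECIFIC_HEAT, 1.0)
e_air = energy_to_warm(1.0, AIR_DENSITY, AIR_SPECIFIC_HEAT, 1.0)

print(f"water: {e_water:.3g} J, air: {e_air:.3g} J")
print(f"ratio: {e_water / e_air:.0f}x")  # on the order of a few thousand
```

The roughly 3000-fold ratio is why a deep lake barely changes temperature while the air above it swings daily, and why lakeside climates lag the seasons.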

Again .. please help

I had once asked a question in a similar context but couldn't get a clear reply... help me this time. Very specifically: I have discovered a method to increase or decrease the flicker in a candle without varying the oxidiser supply or fuel supply. Can anyone suggest where I could propose its application? Please, anyone from industries involving flame. P.S.: I have controlled the flicker of a candle flame and will look into control of constant-fuel-supply LPG flames etc. shortly. —Preceding unsigned comment added by 59.93.134.84 (talk) 08:24, 4 February 2011 (UTC)[reply]

If you're using sound, it's been done already. What you need to do first is check whether your method has already been thought of, and whether it looks promising for saving money or making something more efficient, and do that without telling people about it. Lots of Google queries and some experiments can help with that. Then write a page about it and where you think it could be used, and contact a patent lawyer to get a preliminary patent. Then you start talking about it. Dmcq (talk) 09:12, 4 February 2011 (UTC)[reply]
Asking again? I don't find a question about controlling candle flicker in the archive. A possible application is novelty lighting. To quote wikihow: "Flicker Flame" light bulb is a form of specialty lighting which produces a candle-like "flame" effect. The bulbs use two metal plates in an evacuated bulb, filled with the rare-earth[sic] gas neon, to create a dancing orange glow. Although these bulbs are appreciated by many lighting enthusiasts, they can be difficult to find.[1] Cuddlyable3 (talk) 12:20, 4 February 2011 (UTC)[reply]
He asked here: Wikipedia:Reference_desk/Archives/Science/2011_January_11#flicker. You didn't get a clear answer because you didn't ask a clear question. Ariel. (talk) 17:22, 4 February 2011 (UTC)[reply]

Thanx a lot.. — Preceding unsigned comment added by 59.93.131.96 (talkcontribs) 21:13, February 4, 2011

Diabetes and sugar !

I understand that diabetes is the inability of the body to produce/absorb insulin. Insulin is necessary since it converts blood sugar into energy/fats for muscles and the liver. Why is high blood sugar detrimental to health? Same question for low blood sugar. —  Hamza  [ talk ] 09:57, 4 February 2011 (UTC)[reply]

In hypoglycemia, low levels of blood sugar particularly affect the brain, possibly causing unconsciousness or even death within a short timespan. In hyperglycemia, high levels of blood sugar are not necessarily damaging in the short term, but in the long term can cause problems with eyesight, liver damage, and in extreme cases ketoacidosis. Mikenorton (talk) 10:26, 4 February 2011 (UTC)[reply]
One answer is the pH level of blood. As ketones increase in the blood, it becomes acidic, leading to ketoacidosis, which has its own acute problems. The more chronic version, generically hyperglycemia, causes diffuse organ damage and more general symptoms. I don't think the damage at this level has to do with blood pH. Our article on the topic actually doesn't say much about it. I'm no M.D., and like the OP I'm very curious about the physiological mechanism by which high blood sugar is damaging. Shadowjams (talk) 10:37, 4 February 2011 (UTC)[reply]
The two main types of diabetes are generally quite different in their disease mechanisms.
  • Diabetes mellitus type 1 is caused by destruction of the pancreas, failure to produce essentially any insulin, and the resulting inability of the body to adequately metabolize glucose. The ultra-high blood sugar levels at the time of onset, coupled with the body's inability to use that energy source, result in essentially starvation of the tissues. This leads to alternative energy production through fatty acid oxidation, which leads to the build-up of ketones and acute ketoacidosis. This would be the cause of death of the patient unless recognized and treated.
  • Diabetes mellitus type 2 has a complex mechanism involving secondary insulin resistance in the peripheral tissues, leading to an escalation of pancreatic insulin production and eventual exhaustion of the pancreatic beta cells. Hyperglycemia in this condition is gradual, and so-called "pre-diabetic" borderline hyperglycemia and insulin resistance can be detected. Over time, the chronic hyperglycemia seems to have effects on the small blood vessels called capillaries and is probably best described as microangiopathy. This effect happens through 4 main pathways (summarized in Diabetic_cardiomyopathy#Pathophysiology): increase in the flux through the aldose reductase and the polyol pathway leading to depletion of the essential cofactor NADH, increased flux through the hexosamine pathway, activation of the Protein Kinase C (PKC) signaling pathway, and the formation of advanced glycation endproducts (AGEs) which cause protein crosslinks and alter intracellular signalling.
Type 1 diabetics can have all the complications of type 2 diabetes if their disease is not well treated and they have chronic hyperglycemia, but type 2 diabetics rarely have problems with ketoacidosis (although it can happen). Pretty complicated stuff. --- Medical geneticist (talk) 11:01, 4 February 2011 (UTC)[reply]
It is also worth noting that, especially in the case of Type 2 Diabetes, it is actually a symptom of a greater syndrome of problems, and it is sometimes hard to tease out the general health outcomes of a person with Type 2 Diabetes as being distinct from the other health problems they frequently have along with it. Type 2 Diabetes itself is quite harmful, but it most frequently arises in people with obesity, and such people also have a myriad of other related health problems, which include high blood pressure, high cholesterol, arteriosclerosis, etc. When someone dies with Type 2 Diabetes, it may be difficult to say that it was the Diabetes itself that killed them as opposed to any one of the number of other distinct but connected health problems they had. So, especially with Type 2 Diabetes, the problem with it is that it is often an indicator that LOTS of other stuff is going wrong in your body, and any of it could kill you. --Jayron32 15:22, 4 February 2011 (UTC)[reply]

Internal combustion engines

How many internal combustion engines are produced in Asia every year? 75.69.138.56 (talk) 14:48, 4 February 2011 (UTC)[reply]

Tramadol

Can someone taking Tramadol for pain fail a drug test for opiate consumption? —Preceding unsigned comment added by 96.231.12.79 (talk) 18:34, 4 February 2011 (UTC)[reply]

This is explained in detail at Tramadol#Detection in biological fluids. In case that is too technical, the answer can be summarized as "it depends on what type of test and how thorough it is." Keep in mind that nobody "fails" drug tests: the results of such tests are objective indications of the presence or non-presence of certain chemicals. It is up to an interpreter to decide whether presence or non-presence of any particular chemical is "acceptable." Cross-indication is a known problem in certain types of chemical screens. That means that in certain types of test, a chemical indicator for a medication may be indistinguishable from a chemical indicator for an illicit drug. Nimur (talk) 19:08, 4 February 2011 (UTC)[reply]
To clarify Nimur's answer, the cheaper heroin tests work by detecting metabolites of opium, which are produced as your liver breaks down the opium. However, legal drugs like tramadol and codeine, which are also opiates, will produce these metabolites as well, and will show a positive result. In fact, high consumption of normal poppy seeds can give a positive result. A reputable lab will run more expensive tests on any positive samples to determine which opiate was present. CS Miller (talk) 12:59, 5 February 2011 (UTC)[reply]
Since this is the science desk, precision is valued: tramadol is an opioid (synthetic derivative that binds to opioid receptors) chemically related to codeine which, like morphine, is an opiate (a natural product of opium). -- Scray (talk) 17:51, 5 February 2011 (UTC)[reply]

Are Black Holes part of the "Observable Universe?"

Are Black Holes part of the "Observable Universe?" given Hawking radiation?

Per the standard definition of "observable universe", many black holes are part of the observable universe given their less-than-13-billion-light-year distance from Earth. This is the case regardless of Hawking radiation. However, the related issue of the black hole information paradox remains outstanding. — Lomn 19:31, 4 February 2011 (UTC)[reply]

Producing a blonde offspring via gene therapy

Is it possible (at least theoretically) to produce a blonde female offspring from blonde mother and brunette father by means of gene therapy or somehow else? Are there any ways to overcome father's gene dominancy in this case, achieving the desired result? —Preceding unsigned comment added by 89.76.224.253 (talk) 21:19, 4 February 2011 (UTC)[reply]

A blonde and a brunette can produce blonde offspring completely naturally, precisely because the brunette allele is dominant. That dominance means someone with one copy of the brunette allele and one copy of the blonde allele will be brunette, and if they pass on the blonde allele (which there is a 50% chance that they will) then (if their child gets the blonde allele from their other parent) they'll have a blonde child. (In reality, hair colour is far more complicated than just a single gene, but what I've said is a reasonable approximation.) --Tango (talk) 23:19, 4 February 2011 (UTC)[reply]
To clarify the above answer, this means only that there may exist coupling of a blonde and brunette that produces a blonde child. A brunette with two dominant brunette alleles cannot have a blonde child with anyone, if we assume the genetics of hair color are precisely that simple. However, the heterozygous child of such a coupling could lose the dominant brunette allele through gene conversion. Unlikely, but possible. In the laboratory, a similar effect could be achieved with gene knockout, but who knows when scientists will get around to doing that in humans. 131.215.3.204 (talk) 01:17, 5 February 2011 (UTC)[reply]
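Tango's 50% figure, and the homozygous exception noted just above, can be checked with a toy single-gene model. As both posts stress, real hair colour is polygenic; "B" (dominant brunette) and "b" (recessive blonde) are purely illustrative allele labels:

```python
import itertools

def offspring_genotypes(parent1, parent2):
    """Enumerate the equally likely child genotypes from two diploid parents.

    Each parent is a 2-tuple of allele labels; each child gets one allele
    from each parent, so there are 2 x 2 equally likely combinations.
    """
    return [tuple(sorted(pair)) for pair in itertools.product(parent1, parent2)]

def blonde_fraction(parent1, parent2):
    """Fraction of equally likely children homozygous for the recessive 'b'."""
    kids = offspring_genotypes(parent1, parent2)
    return sum(1 for k in kids if k == ('b', 'b')) / len(kids)

# Heterozygous brunette (Bb) with blonde (bb): half the children are blonde.
print(blonde_fraction(('B', 'b'), ('b', 'b')))  # 0.5
# Homozygous brunette (BB) with blonde (bb): no blonde children are possible.
print(blonde_fraction(('B', 'B'), ('b', 'b')))  # 0.0
# Two heterozygous brunettes: one child in four is blonde.
print(blonde_fraction(('B', 'b'), ('B', 'b')))  # 0.25
```

This is just the Punnett square written out as an enumeration; it shows why no gene therapy is needed in the heterozygous case and why none of these odds help in the homozygous one.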
In mice (with the right genetic background) you need merely feed the mother bisphenol A, a synthetic estrogen compound later adapted for routine use in water bottles, baby bottles, etc. (See [2], but to avoid getting too carried away be sure to interpret the bar graph) But this relies on activity from a mouse intracisternal A particle ('junk DNA') not present in humans... though I wouldn't be surprised if there are more tricks yet unknown in the ever-weird agouti/ASIP gene. Wnt (talk) 07:31, 5 February 2011 (UTC) Admittedly, I was perhaps overindulging a desire for "fun with science" here, but no one took the bait! The viable yellow example is actually a rather rare case. In practice, whole-head gene therapy of every last hair is beyond our current technical abilities, unless someone knows something they're not telling. I would hold out hope for an RNAi/siRNA type approach with a lotion, but if I had a working formula, I'd be rich. ;) Wnt (talk) 00:08, 8 February 2011 (UTC)[reply]

Both my ex-wife and I are brunettes, but our son is a red-headed Irish throwback, with pure blue eyes, and skin far fairer than either of ours. There's a lot of stuff in reserve in our genes. You can get racial traits that emerge after generations. It's not just a simple one-on-one business. Myles325a (talk) 11:21, 9 February 2011 (UTC)[reply]

butyl rubber

Can butyl rubber cause dermatitis?

If you have concerns about dermatitis, you should see a doctor, perhaps a dermatologist or an allergist. --Jayron32 21:50, 4 February 2011 (UTC)[reply]
I believe we can answer these types of questions in a general sense, as long as we are not trying to answer if it will cause the questioner dermatitis. We are allowed to answer biology questions after all, we just can't diagnose or give medical advice. Ariel. (talk) 22:11, 4 February 2011 (UTC)[reply]
You could also speak to a rubber expert. Actually there are 5 different possibilities here. One, about what "can" happen, is a question for, specifically, butyl-rubber-dermatitis-causation experts. The second question is for butyl experts, in other words chemists. The third question is for rubber experts, in other words, industry experts who work with rubber. The fourth question, about "cause", requires a philosopher, and only the last question, about dermatitis, requires a doctor. I move that we strike the words "can", "cause", and "dermatitis" from the question, leaving only "Butyl rubber?" Which we are at liberty to answer. Dermatitis could be moved to the title. 109.128.92.64 (talk) 22:03, 4 February 2011 (UTC)[reply]
A quick Google search finds documents saying that it can cause dermatitis due to residual processing chemicals, but more important is that it is one of the most effective materials for shielding from other substances that can cause dermatitis. Looie496 (talk) 22:48, 4 February 2011 (UTC)[reply]

What kind of residual processing chemicals? — Preceding unsigned comment added by Tommy35750 (talkcontribs) 23:21, 4 February 2011 (UTC)[reply]

North Korea

Is it true that North Korea actually agreed to the truce back in ~1952 so they would have more time to develop the plan the Nazis used, the plan the Chinese under Mao intended to use, and the plan the North Vietnamese actually used (and which helped them win the war): digging a vast underground, subterranean set of cities, towns, tunnels, and industrial and residential complexes to survive nuclear war? And is that the reason that today nighttime satellite views of North Korea show no lights whatsoever? --Inning (talk) 22:40, 4 February 2011 (UTC)[reply]

No. Looie496 (talk) 22:50, 4 February 2011 (UTC)[reply]
North Korea has very little artificial light (but not none whatsoever - you can see where Pyongyang is) because its economic plan has been a complete failure and it is therefore extremely poor. It simply can't afford widespread electricity. There is nothing more to it than that. --Tango (talk) 23:15, 4 February 2011 (UTC)[reply]
I heard someone at an academic talk just this week say that North Korean refugees have reported that there are extensive tunnel systems around Pyongyang and the rest of North Korea, and that the North Koreans are probably more prepared for nuclear war than most Western countries. But I don't know if that's true or not; I believe the speaker that refugees have said that, but the reliability of the refugee intelligence is not at all clear. No citation, I'm afraid: take with a grain of salt. --Mr.98 (talk) 00:50, 5 February 2011 (UTC)[reply]
Your question assumes that the Nazis and the North Vietnamese used this plan, which is false. They certainly did dig bunkers underground, and I don't know exactly how well off you'd be living in them, but they couldn't be construed as underground cities. 76.27.42.9 (talk) 21:38, 5 February 2011 (UTC)[reply]
I have seen video of underground water systems in the desert region of China for use in agriculture, I know that some Australian miners have built homes underground, I've seen pictures of underground living and cooking quarters taken by North Vietnamese photographers during the war, and I know it's possible for a lot of work to be accomplished by manual labor. Maybe not cities or industrial complexes, but if I were answering instead of asking I could probably find a lot more indications one way or the other, unless I was hiding something. 00:45, 6 February 2011 (UTC) — Preceding unsigned comment added by Inning (talkcontribs)

Fusion reactions and aneutronic fusion

http://en.wikipedia.org/wiki/Nuclear_fusion#Important_reactions

I'm a little confused about how to read the reactions. How do I tell which reaction gives off the most energy? Do I just add the MeV numbers on the products side? What do the percentages at the end of some reactions mean? I figured it meant that a certain percentage of the products comes from one reaction and another percentage from the other reactions, but 7i, 7ii, 7iii, and 7iv don't have percentages.

For Aneutronic fusion how can you directly produce electricity? It doesn't look like any of the reactions produce electrons. ScienceApe (talk) 23:20, 4 February 2011 (UTC)[reply]

Most scientists think that a practical fusion reaction would serve as an energy source for a heat engine, similar to the way we burn coal or fission uranium to power steam turbines. Few scientists actually plan to extract electric current directly from the nuclear reaction, despite the interesting electrodynamic properties of a plasma undergoing fusion.
Regarding which reactions produce the most energy, see binding energy and mass defect, and simply compute the mass defect for each reaction. That will tell you how much energy is released per fusion reaction. To determine macroscopic energy production rates, you also need number of fusion reactions per unit time. To estimate that, you'll need empirical rules about collision rates derived from effective cross sections and plasma temperatures for any given condition. Such numbers are not easy to come by, because nuclear fusion is not practical in laboratory conditions (except in bombs; and empirical data about bombs are tightly controlled). But you can take a look at any textbook on nuclear fusion: a few are listed in our references section. You can easily estimate energy production rates in astrophysical conditions, as well. See stellar fusion. Nimur (talk) 23:42, 4 February 2011 (UTC)[reply]
Where there are percentages, it means, "these reactions have the same reactants, but different products can result, and here are the probabilities that one will happen rather than the other." So 2i and 2ii have the same reactants (D+D), but are different reactions (2i results in T, 2ii in He). There is a 50/50 chance that one will happen rather than the other. I don't know why the 7 series doesn't have percentages. As for energy, yes, the MeV indicated is a rough way to see how much energy comes out of that particular reaction (how much of that is recoverable energy can vary, as I understand it). But as Nimur points out, that doesn't mean much by itself, because you need to know how many reactions are going to occur. For most purposes, the exact energy given off matters less than the likelihood of the reaction happening in the first place. These energies are all more or less the same order of magnitude; what's tricky is getting the reactions to begin in the first place. --Mr.98 (talk) 23:48, 4 February 2011 (UTC)[reply]
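Nimur's mass-defect recipe can be made concrete: the energy released per reaction is the difference between reactant and product masses, times c². A Python sketch using standard atomic masses in unified atomic mass units (the values are rounded reference figures, offered as an illustration rather than part of any cited source):

```python
# Energy released per fusion reaction, from the mass defect:
# Q = (mass of reactants - mass of products) * c^2.
# 1 u of mass defect corresponds to about 931.494 MeV.

U_TO_MEV = 931.494

MASS_U = {
    "D": 2.014102,    # deuterium (H-2)
    "T": 3.016049,    # tritium (H-3)
    "He4": 4.002602,  # helium-4
    "n": 1.008665,    # neutron
    "p": 1.007825,    # proton (as hydrogen-1)
}

def q_value_mev(reactants, products):
    """Energy released (MeV) by one reaction, from its mass defect."""
    defect = sum(MASS_U[r] for r in reactants) - sum(MASS_U[p] for p in products)
    return defect * U_TO_MEV

print(f"D + T -> He4 + n : {q_value_mev(['D', 'T'], ['He4', 'n']):.2f} MeV")
print(f"D + D -> T   + p : {q_value_mev(['D', 'D'], ['T', 'p']):.2f} MeV")
```

The D+T result lands at about 17.6 MeV and D+D at about 4.0 MeV, matching the per-reaction figures in the table the OP links; as both replies note, the reaction rate, not the per-reaction energy, is usually the limiting factor.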
You don't have to produce electrons to directly produce electricity. It suffices to have most of the energy produced as moving charged particles (as opposed to uncharged neutrons). 83.134.178.44 (talk) 07:25, 5 February 2011 (UTC)[reply]
How can you directly convert charged particles into electricity? What if they are positively charged? ScienceApe (talk) 16:07, 5 February 2011 (UTC)[reply]
Is this related to the question above? If so, then "You don't." You generate heat and make electricity from that. Ariel. (talk) 03:32, 6 February 2011 (UTC)[reply]
The article on aneutronic fusion says you can directly produce electricity from the charged particles produced from aneutronic fusion. ScienceApe (talk) 14:56, 6 February 2011 (UTC)[reply]
I obliquely commented above about electrical properties of plasmas during fusion; and I think I already answered your followup. Have you read our article on plasma? You can set up current in the plasma, using either electrons or positively-charged ions. This is an interesting area of research. It's not very practical as an energy source. Most scientists don't plan to extract power from any electric effect in the plasma. It's much simpler to take the fusion energy, use it as a heat-source, and drive a heat-engine. Nimur (talk) 18:01, 6 February 2011 (UTC)[reply]

Yeah I know it's more practical to use fusion as a heat source and drive a heat-engine, but that's not what I'm asking. I'm asking how you can convert charged particles into electricity. The article says,

"Aneutronic fusion reactions produce the overwhelming bulk of their energy in the form of charged particles instead of neutrons. This means that energy could be converted directly into electricity by various techniques. Many proposed direct conversion techniques are based on mature technology derived from other fields, such as microwave technology, and some involve equipment that is more compact and potentially cheaper than that involved in conventional thermal production of electricity."

But it doesn't say how this is accomplished. I might be missing something here, but my understanding of electricity is that it's electrons moving through a circuit. How can charged particles get electrons to move through a circuit? Can charged particles (either positive or negative) move through a circuit and produce electricity too? ScienceApe (talk) 22:10, 6 February 2011 (UTC)[reply]

See the last paragraph here. This appears to be a rather technical discussion of different ways to do it. --Mr.98 (talk) 01:52, 7 February 2011 (UTC)[reply]
With due respect, ScienceApe, your understanding of electric current is incomplete. Have a read of electric current. The first paragraph explains what you seem to be stumbling over: current can be carried by positively charged ions. (This has nothing to do with the fact that fusion is occurring.) In copper wire and other conventional electric systems most people are familiar with, the electron is the mobile charge carrier; but in general, anything with a charge can be a carrier for electric current. Current can be carried by sodium ions in a salt-water solution; hydrogen nuclei in a charged plasma; muons in an exotic high energy experiment; and so on. Nuclear fusion, which normally occurs in a hot plasma, would create all kinds of neat electrodynamic effects, because we're not only moving charges around - we're also creating new charged particles, destroying others (by combining them into neutral particles), locally adding and subtracting kinetic energy to individual particles, and all the while swimming around in a "soup" of charged nuclei and separated electrons that have ensemble kinetic and electrodynamic effects. As a net effect, no charge is created or destroyed, but mucking with nuclei and separating electrons from their host atoms causes all kinds of local electromagnetic fields; some of those add coherently and manifest as plasma radiation (of the electromagnetic type, not the "nuclear" type, though the distinction is really only a matter of frequency, since all the energy released by a fusion system is originally nuclear energy, anyway). The book to read is Bittencourt's Fundamentals of Plasma Physics; that'll explain how you should treat and analyze electrodynamic properties of a plasma. Throw nuclear reactions into the mix, and you have one heck of a theoretical headache. 
But the point is, charged particles in motion are an electric circuit; the analysis of the plasma undergoing fusion will show numerous plasma oscillations (which you can consider an "AC current"); and if you could contrive some strange plasma confinement apparatus that could achieve charge separation and directional flow, you could get a DC current as well. For any particular plasma experiment, you would solve for the plasma temperature of each species, and solve the relevant energy partition functions to determine how much energy is present (and therefore, could be extracted) from any particular plasma mode. If ions are undergoing fusion as well, you need to solve the equilibrium rates for those reactions and throw that into the mix. This is heavy stuff - if you want to know the "how", you'll need a lot of math, statistics, and electrodynamics background - the rest is "obvious." Nimur (talk) 02:35, 7 February 2011 (UTC)[reply]
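The basic point that any charged species carries current can be put in numbers with the textbook drift relation I = n·q·v·A. The figures below are made-up round numbers for a hypothetical hydrogen plasma, not data from any real experiment:

```python
# Current carried by one drifting charged species through a cross-section:
# I = (number density) * (charge per particle) * (drift speed) * (area).

E_CHARGE = 1.602e-19  # coulombs per elementary charge

def drift_current(number_density, charge_number, drift_speed, area):
    """Current (A) from a single charged species.

    number_density: particles per m^3
    charge_number:  +1 for protons, -1 for electrons, etc.
    drift_speed:    m/s, signed along a common axis
    area:           m^2
    """
    return number_density * charge_number * E_CHARGE * drift_speed * area

# Illustrative numbers: protons drifting one way, electrons the other.
# Opposite charges moving in opposite directions carry current the SAME
# way, so the two contributions add rather than cancel.
protons = drift_current(1e19, +1, 1e4, 1e-4)
electrons = drift_current(1e19, -1, -1e4, 1e-4)
print(protons, electrons, protons + electrons)
```

This is the sense in which positive ions "are" a current: no electrons need to leave the plasma for charge to flow.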

Bond Polarity or Magnitudes of Partial Charge Vectors

Hey. I have been told to think of the polarity arrows we draw in addition to the δ+/δ- notation as vectors. However, by this logic, atoms in a polar molecule with a greater difference in electronegativity would have a vector between them with a greater magnitude, and thus would have a greater effect on the overall polarity of the molecule, but the teacher says this is not correct. For example, COF2 is a trigonal planar molecule, and C has an electronegativity of 2.55, F of 3.98, and O of 3.44 (I don't remember the units). By my reasoning this molecule would be polar, because the F's pull electrons towards themselves more strongly than the O can pull them towards itself. However, I have been told to assume that the magnitudes are all the same and thus to consider this molecule nonpolar, since if the pulls are the same they cancel out. Why is this? Who is right here? Thanks. 24.92.70.160 (talk) 23:29, 4 February 2011 (UTC)[reply]

Formaldehyde (CH2O) is most definitely polar. It sounds like you have the logic correct. Dragons flight (talk) 01:01, 5 February 2011 (UTC)[reply]
Your teacher's logic isn't even consistent. Carbon "pulls" the electrons from hydrogen toward itself, as oxygen does with carbon. Even if the pulls were equal, they are not opposite, and so cannot cancel out. 131.215.3.204 (talk) 01:09, 5 February 2011 (UTC)[reply]

Oops, that's my bad! I picked a wrong example and I have amended above to something along the lines I'm talking about. 24.92.70.160 (talk) 01:42, 5 February 2011 (UTC)[reply]

Well, you have to consider that the carbonyl bond results in a stronger dipole than you'd predict straight from the difference in electronegativity between carbon and oxygen, because the oxygen is pulling on two, not just one, electron (though the oxygen, having already taken one electron from carbon, is now less able to pull on the second). Anyway, once you take that into account there will be barely any dipole moment. You are correct in your assumption that there is a dipole, but it would be very small. Comparing it to its most chemically similar compound, phosgene, carbonyl fluoride would probably be considered hydrophobic, a hallmark of non-polar compounds.
So anyway, in conclusion, there is a dipole moment, but it's rather small. Your teacher is taking the convention that if it's small enough, you just call it non-polar. It's rather simplistic and your teacher should acknowledge or at least be aware of this simplification. Someguy1221 (talk) 04:54, 5 February 2011 (UTC)[reply]
Our carbonyl fluoride article says the dipole is 0.95 D, vs phosgene (1.17 D) and water (1.85 D) and formaldehyde (2.33 D). COF2 definitely does have a dipole, but it's much smaller than that of other similar-shaped carbonyl molecules and also small compared to a water-like environment. DMacks (talk) 18:16, 5 February 2011 (UTC)[reply]
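The OP's vector picture can be checked directly: place three bond-dipole arrows 120 degrees apart, as in a trigonal planar molecule, and sum them. The magnitudes below are arbitrary illustrative numbers, not measured bond moments:

```python
import math

def net_dipole(magnitudes):
    """Vector-sum three bond dipoles spaced 120 degrees apart in a plane.

    `magnitudes` is a list of three bond-dipole magnitudes (arbitrary
    units), each pointing outward from the central atom. Returns the
    magnitude of the resultant vector.
    """
    x = sum(m * math.cos(2 * math.pi * i / 3) for i, m in enumerate(magnitudes))
    y = sum(m * math.sin(2 * math.pi * i / 3) for i, m in enumerate(magnitudes))
    return math.hypot(x, y)

# Three equal pulls (the teacher's simplifying assumption) cancel exactly:
print(round(net_dipole([1.0, 1.0, 1.0]), 6))  # 0.0
# One pull differing from the other two (like C=O vs. the two C-F bonds)
# leaves a nonzero residual dipole:
print(round(net_dipole([1.3, 1.0, 1.0]), 6))  # 0.3
```

This matches both sides of the thread: with equal magnitudes the vectors cancel and the molecule looks nonpolar, while unequal magnitudes leave a small residual, consistent with the modest 0.95 D measured for COF2.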
Any bond between a non-transition metal element and a non-metal element is ionic regardless of the electronegativity difference. ~AH1(TCU) 19:20, 6 February 2011 (UTC)[reply]
That was never actually in question here, and does not imply that the resulting molecule will have polar character. Methane is made of four polar bonds, but has no dipole. Someguy1221 (talk) 19:37, 6 February 2011 (UTC)[reply]
@AH1: Not exactly. You can generate covalent-type bonding between group IA and IIA elements and non-metals by several methods, most commonly by dissolving the substance in solvents which inhibit the formation of ions; classically these are so-called "polar aprotic solvents" like THF or diethyl ether. See Organolithium reagent and Grignard reagents. --Jayron32 20:48, 6 February 2011 (UTC)[reply]


February 5

Homogenized Milks

Are all USA homogenized milks created equal in the amount of cream, water, etc.? Does the FDA have a standard? Is one brand richer than another? — Preceding unsigned comment added by Jza59 (talkcontribs) 01:16, 5 February 2011 (UTC)[reply]

Grading and verification of dairy is regulated and enforced in the United States by the United States Department of Agriculture, not the FDA. Here is the webpage for Dairy Standardization, which requires that certain marketing labels meet certain milk quality and content standards. The actual content and quality of milk may vary from brand-to-brand (and day to day, even from the same cow); but to be labeled with certain terms, milk and dairy products must meet USDA standards. The USDA is responsible for regulating the milk product all the way from the cow to the dairy to the wholesale distributor to the grocery store (if that is the path it takes). The FDA also has some regulatory power, specifically related to milk safety and pasteurization: see Pasteurized Milk Ordinance 2007 and the FDA's Milk Safety program. There is a little bit of overlap between FDA and USDA, particularly in dairy regulation. Nimur (talk) 01:31, 5 February 2011 (UTC)[reply]
I'm pretty sure that homogenized milk is simply milk from the cow that gets homogenized. They don't adjust the levels of cream, water etc. It's used as is. Maybe the diet of the cows varies from brand to brand, but I doubt it. Ariel. (talk) 03:28, 6 February 2011 (UTC)[reply]
They certainly do that in Germany, where "whole milk" has a fat content of 3.5-3.8% (3.5% is the legal minimum, 3.8% is what premium brands offer as "natural" fat content). The milk is centrifuged to separate out the fat, and then remixed to the desired fat content. Leftover milk fat becomes butter or cream. When I was in Scotland, they sold Channel Island milk with 5% fat content - it's delicious. Arteries? Who needs arteries? --Stephan Schulz (talk) 14:39, 6 February 2011 (UTC)[reply]
Note that has a lot to do with the breed of cattle: Jersey cattle and Guernsey cattle (Channel Island cows) are renowned for their high butterfat content, at 5-6%. In contrast, the majority of (beverage) milk produced in the United States is from Holstein cattle, which produce large volumes of milk, but with one of the lowest butterfat contents (not listed in the article, but listed variously as 2.5-3.8%). Holstein milk may go into the homogenizer as-is, but if your German milk is from another breed like Brown Swiss/Braunvieh (4% butterfat), they may partially skim it first to reduce the butterfat (and increase profits by selling the cream separately). -- 174.24.195.38 (talk) 16:09, 7 February 2011 (UTC)[reply]

Hydrogen bonds

I know that hydrogen bonded to Nitrogen, Oxygen, and Fluorine are capable of forming hydrogen bonds with other, similar molecules. However I looked on an electronegativity table and it shows Oxygen has an electronegativity of 3.44, Fluorine of 3.98, and Nitrogen of 3.04 (again I forget the units), but chlorine has an electronegativity of 3.16 (greater than N) and yet I am told it does not form hydrogen bonds. I think there might be something other than electronegativity going on here; why doesn't hydrogen attached to chlorine form hydrogen bonds even though hydrogen attached to nitrogen does? Thanks. 24.92.70.160 (talk) 01:47, 5 February 2011 (UTC)[reply]

Size. Chlorine has an occupied energy level that is empty in the period-two elements nitrogen, oxygen, and fluorine. Being larger, the unbalanced charge from its bonding to hydrogen is more widely distributed than it is in the smaller molecules, resulting in a lower dipole moment. Indeed, the dipole moment of hydrogen chloride is just over half that of hydrogen fluoride. And I'm sure there are also quantum reasons it has a weaker tendency to donate either electrons or a proton, but my chemistry is too far behind me to remember why. Someguy1221 (talk) 05:21, 5 February 2011 (UTC)[reply]
You've pretty much got it. The main issue is that hydrogen bonding is determined by Bond dipole moment. In HCl, the longer bond means that the hydrogen atom is far enough away from the chlorine atom that it gets enough electron density to shield its nucleus. In the shorter H-N bond of ammonia, though nitrogen isn't as electronegative as chlorine, the fact that the bond is significantly shorter means that the nitrogen can effectively remove enough electron density from the hydrogen to deshield its nucleus. The deshielded hydrogen nucleus in H-N, H-O, and H-F bonds is what gives rise to hydrogen bonding. --Jayron32 01:06, 6 February 2011 (UTC)[reply]
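The bond dipole arithmetic behind this explanation can be made concrete. A rough sketch only: the partial charges below are hypothetical round numbers chosen for illustration, not measured values; the bond lengths are the commonly quoted approximate figures.

```python
# Bond dipole moment: mu = q * d, conventionally quoted in debye (D).
E_CHARGE = 1.602177e-19   # elementary charge, C
DEBYE = 3.33564e-30       # 1 debye in C*m

def bond_dipole_debye(partial_charge_e, bond_length_pm):
    """Dipole moment in debye for a partial charge (in units of e)
    separated by a bond length (in picometres)."""
    q = partial_charge_e * E_CHARGE   # coulombs
    d = bond_length_pm * 1e-12        # metres
    return q * d / DEBYE

# Illustrative figures only: ~0.2 e over the longer H-Cl bond (~127 pm)
# versus ~0.4 e over the shorter H-F bond (~92 pm).
print(bond_dipole_debye(0.2, 127))   # ~1.2 D
print(bond_dipole_debye(0.4, 92))    # ~1.8 D
```

Even with the shorter bond, the larger charge separation on H-F gives the bigger dipole, which is the point being made above about electron density withdrawal.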
So factors beyond the simple electronegativity difference are in effect, though ionic bonding is usually stronger than the hydrogen bonds. ~AH1(TCU) 19:18, 6 February 2011 (UTC)[reply]

bones

How long does it take for bone to completely disintegrate into 'dust?' Say this bone is a large leg bone of an animal, 2 meters long and 1 meter around. Thanks. schyler (talk) 02:48, 5 February 2011 (UTC)[reply]

It could be more than a billion years. See fossil. Looie496 (talk) 05:13, 5 February 2011 (UTC)[reply]
Except that there weren't any bones that large a billion years ago. HiLo48 (talk) 05:15, 5 February 2011 (UTC)[reply]
Well no one said it had happened already Nil Einne (talk) 06:20, 5 February 2011 (UTC)[reply]
Excellent point. HiLo48 (talk) 06:53, 5 February 2011 (UTC)[reply]
It depends entirely upon the various forces acting upon it, e.g. whether it's exposed to weathering, or chewing, or the acidity of the soil in which it's buried, etc.--Shantavira|feed me 09:46, 5 February 2011 (UTC)[reply]
My understanding is that most fossils are not really the actual bone material, but minerals and stuff that's leached in and replaced the original material over the course of many years. See Fossil#Types_of_preservation. Vespine (talk) 10:01, 5 February 2011 (UTC)[reply]
Lucy's bones are not yet dust after 3.2 million years, how long are you prepared to wait? Richard Avery (talk) 11:15, 5 February 2011 (UTC)[reply]
Are the Lucy "bones" really human bone material or are they just rocks which are in a bone shape? That seems to be the main question. The article does not distinguish between the two, and indeed, most websites do not seem to either. --Mr.98 (talk) 01:47, 7 February 2011 (UTC)[reply]
Thanks to Vespine and Shantavira for the info. schyler (talk) 14:01, 5 February 2011 (UTC)[reply]
Resolved
I think the Ship of Theseus is highly relevant to this discussion. Is it clear that the original bones still exist once fossilized? -- Scray (talk) 03:35, 6 February 2011 (UTC)[reply]
I don't think anyone, including me, has actually addressed the question properly yet. Ship of Theseus is one thing, but the fact is it is generally extremely unlikely that any one single bone will be fossilized in the first place. Fossilized bones represent an extreme minority of the bones that have ever existed. The vast majority of bones do decompose without leaving a trace. I imagine someone in the field of Forensic anthropology would be able to give a decent answer regarding how long bones last. If I had to guess, for bones just left out in average weather, I think they would last maybe a few to several decades; however, local conditions would play an extremely important part, especially rain and humidity. I imagine in a rainforest a bone probably lasts not even a few years, whereas in a desert a bone might last centuries. Vespine (talk) 21:58, 7 February 2011 (UTC)[reply]

Ceramic

whats the difference between Ceramic plates and Porcelain plates? — Preceding unsigned comment added by Tommy35750 (talkcontribs) 03:12, 5 February 2011 (UTC)[reply]

Well a porcelain plate is one type of ceramic plate. There are many other types of ceramics so I suggest you read the articles.--Shantavira|feed me 09:50, 5 February 2011 (UTC)[reply]
There is no real difference. Technically, porcelain is a type of ceramic, but "ceramic" is such a general category that just about any such plate would count. Ariel. (talk) 03:10, 6 February 2011 (UTC)[reply]
Text in Section 1 of the previously linked article ceramic differs. 87.81.230.195 (talk) 10:07, 6 February 2011 (UTC)[reply]

injection moulding

when making ABS plastic vacuum cleaners by injection moulding, is a mould release agent used, for example PVA or oil? —Preceding unsigned comment added by 66.66.92.152 (talk) 03:58, 5 February 2011 (UTC)[reply]

A number of release agents for ABS are listed in this book SpinningSpark 16:51, 5 February 2011 (UTC)[reply]

i understand they can be used, but are they usually used when making vacuum cleaners?

BTW, are these two questions also by you? Ariel. (talk) 03:07, 6 February 2011 (UTC)[reply]

no —Preceding unsigned comment added by 66.66.92.152 (talk) 06:15, 6 February 2011 (UTC)[reply]

"Is it really so..."

...that if I make a theory based on 10,000 (of the same) experiments (by different scientists) that proved my hypothesis 51% of the time that it is accepted by the scientific world? I really have a problem with the suppositions surrounding Half-Life. The article almost puts it where the theory could be wildly false. Does this bring into question science as a whole? Are people today substituting Science for Truth? schyler (talk) 14:15, 5 February 2011 (UTC)[reply]

That's not what it's saying. The chances of any particular atom decaying after existing for the half-life is exactly 50% - so 50% of all atoms of that age will have decayed by that time, you just don't know which ones will decay in advance. Mikenorton (talk) 14:27, 5 February 2011 (UTC)[reply]
So if my hypothesis is that atoms have a definable half-life and the tests shows that "50% of all atoms of that age will have decayed" by that hypothesized half-life, I have a theory? And it is accepted as "probabilistic" truth? schyler (talk) 14:33, 5 February 2011 (UTC)[reply]
I don't know what you mean by "truth". The point is that your hypothesis will have passed the experimental test and that's all that is expected from a good hypothesis. Dauto (talk) 15:14, 5 February 2011 (UTC)[reply]
You seem to have a misunderstanding of how everything fits together. There's an assumption that atomic decay is a constant-rate process, meaning that for an atom of a given type (say a particular isotope of a particular element), in a given period t, the probability p that an undecayed atom will remain undecayed at the end of the period is constant, and independent of the past history of the atom. This is something that can be confirmed or refuted experimentally. From that assumption, we can mathematically derive that the probability p is a function of t. There is a "rate of decay" that's an intrinsic property of the type of atom. We can describe it in many equivalent ways; one of them is the length of time after which the probability of non-decay is 50%—that's what we call the "half-life". We can, if we so choose, use an analogously defined "quarter-life". There's also an assumption that the decay of one atom is independent of the decay of any other. I think experiments can be designed to confirm that, although I don't know how scientists did it. Once you have these assumptions, you can mathematically work out the expected fraction of undecayed atoms as a function of time. You can use it to design different kinds of experiments to measure the "decay rate" parameter of an atom. And you can characterize the rate using a half-life measure. For atoms that are synthesized and are available only one at a time, the experiment for measuring the decay rate may involve measuring the decay probability of a single atom in a given duration, one atom at a time. The measured decay rate is mathematically converted to a half-life measure because that is the conventional measure. Measuring the half-life of an atom should not be confused with proving a hypothesis 50% (or 51%) of the time, whatever that means. --98.114.146.240 (talk) 15:45, 5 February 2011 (UTC)[reply]
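The constant-rate (memoryless) assumption described above is easy to see in a toy Monte Carlo simulation. This is only a sketch under an assumed 5% per-step decay probability, not a physical model:

```python
import random

def simulate_decay(n_atoms, p_decay_per_step, n_steps, seed=0):
    """Memoryless decay: every surviving atom decays with the same fixed
    probability at each step, regardless of its history."""
    rng = random.Random(seed)
    survivors = n_atoms
    history = [survivors]
    for _ in range(n_steps):
        survivors = sum(1 for _ in range(survivors)
                        if rng.random() > p_decay_per_step)
        history.append(survivors)
    return history

# With p = 0.05 per step, the half-life works out to
# log(0.5)/log(0.95) ~ 13.5 steps, even though no individual atom
# "knows" when it will decay.
h = simulate_decay(100_000, 0.05, 30)
print(h[0], h[13], h[14])   # roughly half survive between steps 13 and 14
```

Nothing in the per-atom rule mentions a half-life; the 50% mark simply emerges from the aggregate of many identical, independent trials.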
It's not very polite to tell another editor that they "have a misunderstanding of how everything fits together" and then go on a barely followable rant that calls your own understanding into question. Did you come here to ask something, or to argue about some kind of vague notion half formed in your head? Beach drifter (talk) 16:02, 5 February 2011 (UTC)[reply]
I think 98.114.146.240's explanation was pretty good and understandable. Just my opinion. Dauto (talk) 18:00, 5 February 2011 (UTC)[reply]
I was responding to (the second comment of) the OP. My remark that he might "have a misunderstanding of how everything fits together" was not an insult, but rather an attempt to help by pointing out what I thought was causing his difficulty in understanding the article he referenced. If you think my explanation is incorrect, feel free to point out the errors, but please stick to the technical and be civil. --98.114.146.240 (talk) 02:02, 6 February 2011 (UTC)[reply]
The direct answer to your question is no; 10,000 identical experiments of which only 51% have results consistent with a given hypothesis cannot be taken as proof of that hypothesis. Clearly something has been missed; there is some condition or conditions that prevent the other 49% from meeting the hypothesis. However, you are confused over half-life because the claimed hypothesis involved here is a stochastic process. Let's take another example, Boyle's law. We would expect 10,000 experiments measuring the pressure and volume of gas in a balloon to be pretty nearly 100% consistent with Boyle's law. It would be acceptable to discard the odd one or two wild results which might easily be put down to errors such as missing a decimal point in recording a result or the lab assistant filling the balloon with the "wrong" gas, but if only 51% of results agreed with Boyle's law then the law would be pretty much considered disproved. As it happens, Boyle's law does describe the behaviour of gases to a high degree of accuracy. Now coming on to half-life, the claim is not that an unstable atom will decay after the half-life time. The claim is that it will decay at an unknown time with a probability given by some probability function. The half-life time is merely the time for which the probability is one half. The law of radioactive decay is not stating a deterministic time; it is stating the probability function, which can be accurately measured and, like Boyle's law, gives results close to 100% accurate. Another way to look at radioactive decay is through the time constant, which is related to (but not equal to) the half-life. What this says is that if the rate of decay continues at the initial rate then the entire sample will have decayed in the time-constant. 
In fact, the rate of decay does not continue at the initial rate, it continually changes (gets slower) in such a way as to keep it true that decay would be complete in the time-constant if the current rate were maintained. Rate of decay is easily measured and shown to be in line with this time-constant law, which can be shown to be entirely equivalent to the half-life law. SpinningSpark 16:40, 5 February 2011 (UTC)[reply]
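The equivalence between the time-constant picture and the half-life picture can be checked directly from the exponential decay law. A sketch; the time constant of 10 units is arbitrary:

```python
import math

def fraction_remaining(t, tau):
    """Exponential decay law: N(t)/N0 = exp(-t/tau), tau = time constant."""
    return math.exp(-t / tau)

tau = 10.0                      # arbitrary time constant
t_half = tau * math.log(2)      # half-life ~ 6.93 in the same units

# At one half-life, half remains (up to floating-point rounding)...
print(fraction_remaining(t_half, tau))   # 0.5
# ...while at one time constant about 1/e (~37%) remains, because the
# decay rate keeps slowing rather than staying at its initial value.
print(fraction_remaining(tau, tau))      # ~0.368
```

So the two "laws" are just different parameterizations of the same curve, related by t_half = tau x ln 2.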
Thanks Spinningspark. Also the OP may also find it useful to consider the notion that an experiment to measure a rate constant of radioactive decay can have a very low p-value. SemanticMantis (talk) 16:50, 5 February 2011 (UTC)[reply]

Thanks for the direct answer, SpinningSpark. I appreciate the example using Boyle's Law. I think the chunk of your answer is found where you say "The half-life time is merely the time for which the probability is one half." Maybe it's a mental wall when it comes to science that makes this phrase fly right over my head. What does this mean? schyler (talk) 16:48, 5 February 2011 (UTC)[reply]

Well consider coin-tossing. The probability of heads is one-half on any one toss. This is not the same as claiming that the coin will land heads. The claim is that over a large number of tosses the proportion of heads will be very close to half. We could state this in terms of time; let's say the coin is tossed at regular time intervals. There will be a definite time in which the probability of getting heads is one half. How long it takes to actually get heads however could be anything since any number of tosses, in principle, can come up tails. SpinningSpark 16:59, 5 February 2011 (UTC)[reply]
It means that after a half-life time is passed there is a 50% chance that any given atom will have decayed and a 50% chance that it won't have decayed. If there is only one atom in your hand then it is anyone's guess what may actually happen. If you have trillions of atoms then about half of them will have decayed and the other half won't have decayed. Dauto (talk) 17:58, 5 February 2011 (UTC)[reply]
The real interesting aspect about half lives is that they are truly probabilistic as far as we know: you can't predict, at all, when a single atom is going to undergo decay. But on the aggregate, with a large number of atoms, you can say, "well, half of them will decay in a year." It's one of the interesting facets of statistical knowledge. The same principle can be applied to a lot of other probabilistic functions: I can't tell you whether any individual cigarette smoker will or won't get lung cancer, but I can tell you that out of a certain population of smokers, a given percentage will get lung cancer. (This is what is generally meant by describing risk factors—I can't tell you that surfing will cause you to get bitten by a shark, but I can tell you that out of X surfers per year, Y% get bitten by sharks.) In the case of cancer, it's because the chances of individual cancers are probabilistic in some sense (whether a given cell becomes a tumor), and also because there are other, complicated factors involved (like genetics, and other environmental hazards). But statistics lets you get away from worrying about the direct causes, and instead come up with observations which seem in many cases to act like iron-clad laws. When people started keeping solid records on mortality statistics, they were shocked that they could predict, with reasonable accuracy, how many suicides would occur in a given nation per year, for even though suicide seemed like an entirely individual phenomenon whose causes differed from person to person, over the aggregate they become as predictable as, say, flipping coins.
Exactly what kind of knowledge this statistical worldview really is was a major 19th century debate amongst scientists and philosophers (in the context of thermodynamics, specifically). Some said, "this is the root of all true knowledge, because it doesn't require us to pretend to understand the direct causes of everything in the world, just the outcomes." The others said, "this is just a crude way of understanding the world, in the aggregate, and is not a substitute for the true understanding of cause and effect." I'm not sure who won out in the end, personally — the debate seems to have just drifted out of consciousness in the 20th century, perhaps (one might guess) because probabilistic thinking became so abundantly popular in epidemiology and physics in particular. But it does occasionally surface in regards to what "risk factors" really mean, and the benefits and perils of what is sometimes called "statistical medicine".
Your larger question, about scientific consensus, is somewhat different. On the one hand, one very clever experiment can occasionally disprove a thousand other results, if it is really unambiguous. Such experiments seem rather rare. Galileo's observation of the phases of Venus might be one of them — it basically made classical (e.g. Ptolemaic) geocentrism totally impossible to argue in favor of, no matter how many other clever arguments or experiments you had. (And indeed, the Church's astronomers recognized this almost immediately, and switched over to the Tychonic system.) In most cases, though, such amazing, paradigm shifting experiments are pretty rare in science. More often you get slow building of consensus, or a slow descent into uncertainty. In any case, though, I don't think that considering what is considered to be scientific consensus is a factor of raw numbers of articles saying one thing or another — more important is exactly who is making a given claim. The article of a scientist known to be reliable is usually given far more weight than a dozen articles from unknowns.
As for whether people confuse Science with Truth: indeed, it is a feature of our age that the two are considered quite synonymous in the minds of many. Whether they should be is a heady question best addressed separately from discussions of half-lives, though. --Mr.98 (talk) 21:22, 5 February 2011 (UTC)[reply]

Thanks all for your time and patience. I have learned something new to argue about ;) schyler (talk) 23:19, 5 February 2011 (UTC)[reply]

Resolved
I know you said this was resolved, but Mr. 98 raises some very interesting points to ponder, and his discussion of the statistical predictability of suicide reminded me a bit of Isaac Asimov's Foundation series, especially on Psychohistory (fictional), which was basically the same concept, but applied to all of humanity. It's an interesting thing to ponder: the ability to predict human behavior in large groups of people even if individually they behave rather randomly and unpredictably. Asimov uses the idea fictitiously, but it's a real part of scientific philosophy, especially as it relates to the "soft" sciences like psychology and sociology and anthropology. --Jayron32 00:55, 6 February 2011 (UTC)[reply]
Actually, we're at ground zero for some interesting psychohistory, e.g. User:Wnt#Psychohistory_of_Wikipedia. But the idea behind half-life is really much simpler than that. The reason why it works as it does is that atoms are very small, simple things, which are incapable of memory. If you have an ingot of 1% U-235 or 0.0001% U-235 the atoms decay at the very same rate, because they don't know what ingot they're part of - provided, that is, they're not part of a large 100% U-235 ingot that blows up, because once the atoms start talking to each other via abundant neutrons they are no longer quite so simple. But when half-life applies, it also means mathematically that 1/4 or 3/4 or 1/1000 of atoms will all decay within certain intervals that can also be calculated, simply based on logarithms from the half-life; thus the chance that an atom breaks down at any given "instant" is also constant. Now, I can understand the visceral desire to object to probabilistic laws of physics - after all, Einstein himself made the "God doesn't play dice" statement - but a great deal of physics on this level works in just such a way. Wnt (talk) 15:48, 6 February 2011 (UTC)[reply]
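The logarithm point above is worth making concrete: once the half-life is known, the time for any fraction to remain follows directly. A sketch, using the well-known ~5730-year half-life of carbon-14 as the worked example:

```python
import math

def time_for_fraction_remaining(fraction, t_half):
    """Time at which the given fraction of atoms remains undecayed,
    from N(t)/N0 = 2**(-t/t_half)."""
    return t_half * math.log2(1.0 / fraction)

t_half = 5730.0   # carbon-14 half-life, years
for frac in (0.75, 0.5, 0.25, 0.001):
    print(frac, round(time_for_fraction_remaining(frac, t_half)))
# 0.75  2378  -- three-quarters remain well before one half-life
# 0.5   5730  -- one half-life, by definition
# 0.25  11460 -- two half-lives
# 0.001 57104 -- about ten half-lives until only 1/1000 remains
```

The same one-parameter curve answers every "what fraction after how long?" question, which is exactly why a single half-life number characterizes the whole process.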

I suspect the OP will continue to have arguments until it is understood that half-life is merely a statistical measure that is often useful to characterise a large number of discrete events. It is not part of a nefarious conspiracy by scientists to substitute science for truth, nor does the "verify by experiment" phase of the Scientific Method work like democratic voting. A hypothesis that gets disproven 49% of the time is unconvincing. Cuddlyable3 (talk) 18:28, 6 February 2011 (UTC)[reply]

The "51%" or "49%" is actually misleading. The hypothesis is not that each and every nucleus will decay. The hypothesis is that approximately 50% of the nuclei will decay, according to a certain statistical distribution. This hypothesis has not been disproved. Wnt (talk) 20:38, 6 February 2011 (UTC)[reply]
How about a similar simpler analogy: If I take 100 coins and flip them all, close to 50% will land heads up. I can't tell you which, but it's obvious this is a scientifically valid hypothesis that would be validated (or falsified) by the experiment. No appeal to truthiness required. 23:49, 7 February 2011 (UTC)
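That coin-flip claim is easy to verify by simulation. A sketch; the seed is arbitrary and only makes the run reproducible:

```python
import random

rng = random.Random(42)
trials = 1000
# Flip 100 fair coins, 1000 times over, counting heads each time.
heads_counts = [sum(rng.random() < 0.5 for _ in range(100))
                for _ in range(trials)]
mean_heads = sum(heads_counts) / trials

print(mean_heads)                            # close to 50 on average
print(min(heads_counts), max(heads_counts))  # individual runs vary widely
```

No single run is predictable, but the aggregate behaviour is, which is the whole point of the analogy.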

Help identifying a spider

This spider was found in an apartment in Sao Paulo, Brazil. Sorry for the lack of size context, not my image. Thanks! --Shaggorama (talk) 15:10, 5 February 2011 (UTC)[reply]

Multiple Chemical Sensitivity

I have Multiple Chemical Sensitivity. I was diagnosed in 1994 by Dr. Rea. But the issue is that my illness is very controversial. The vast majority of doctors I go to cannot treat me and just end up taking my money to do no real good. I went to the head of the evidence-based medical teachers board and asked him what he thought of MCS; his response was "I'm sorry, that is not an eb (evidence-based) question. Please refer to pico by scott richardson."

My three foot tall stack of medical files begs to differ. What is a non evidence-based question, anyways? How do I prove the existence of mcs? I have evidence coming out the wazoo. Where do I take it?

Where can I go to be the subject of medical research?

I can be contacted on windows live messenger - <email removed> —Preceding unsigned comment added by 24.166.186.171 (talk) 19:13, 5 February 2011 (UTC)[reply]

I've removed your email address to protect you from spam, per reference desk conventions all answers are given here. Dragons flight (talk) 20:53, 5 February 2011 (UTC)[reply]
Have you looked at the article on multiple chemical sensitivity? It explains, in broad terms, why MCS can be a problematic diagnosis. As for getting help, you probably need to seek out a specialist in MCS (or perhaps allergies) who would be willing to discuss your concerns. The reference desk doesn't offer specific medical advice. Dragons flight (talk) 21:06, 5 February 2011 (UTC)[reply]
See evidence-based medicine. Basically what it means is that there should be good evidence from scientifically based clinical trials in order to engage in the treatment of a given condition. Some conditions are controversial because they are hard to define, or they lack an objective diagnostic test that can definitively demonstrate the presence or absence of that condition in a given person, or because there is no treatment that has been clearly shown to be efficacious. The NIH has a database of clinical trials that lists one study that seems to be about multiple chemical sensitivity. That particular study is based in Copenhagen, so it may not be possible for you to participate in, but perhaps the investigators would be able to help you out further. --- Medical geneticist (talk) 21:21, 5 February 2011 (UTC)[reply]

female an dmale squirrels are they different colors

I would like to know if female squirrels are different colors than male squirrels? It's an American red squirrel. —Preceding unsigned comment added by 207.255.40.102 (talk) 21:32, 5 February 2011 (UTC)[reply]

Not that I know of. The article American Red Squirrel doesn't mention anything about sexual dimorphism. You may be confusing different species of squirrel such as a Fox Squirrel or an Eastern Gray Squirrel; or you may have spotted a variety of American Red with different coloration; the Wikipedia article mentions a black and a white phase of the American Red Squirrel, though there is a [citation needed] tag on the statement. --Jayron32 00:47, 6 February 2011 (UTC)[reply]

What's up with Yellowstone right now?

I heard a few references to Yellowstone in mass media in the past few days. What is going on? Is there some quantitative knowledge about the risk, in light of some new data? I didn't find references to newer events in Yellowstone Caldera and Yellowstone National Park. --Mortense (talk) 23:19, 5 February 2011 (UTC)[reply]

This article was published in National Geographic. This has been picked up by media outlets around the world and given the usual 'it's all going to go boom, AAAAAAARGGGH, run run for your lives' spin. That's about it. 23:34, 5 February 2011 (UTC)
This is one of those confusions between "soon" on a geologic time scale and "soon" on a human time scale. "Any day now" has a range of +/- 1000 years on the geologic time scale. We've known most of this stuff about the Yellowstone Caldera for a long time now; I think the History Channel did a special on it 5-6 years ago at least. --Jayron32 00:42, 6 February 2011 (UTC)[reply]
Are you sure it wasn't Jellystone Park? There's a movie on in my neighbourhood that's making it famous again. HiLo48 (talk) 04:20, 6 February 2011 (UTC)[reply]
The Yellowstone system has occasional lava outbursts. We are in an earthquake cycle of the Pacific ring of fire. The probability of an outburst of lava is higher now. --Chris.urs-o (talk) 04:34, 6 February 2011 (UTC)[reply]
That's an intriguing statement. Has someone, presumably someone who doesn't subscribe to the mantle plume model, found a correlation between activity around the edge of the Pacific Plate and activity at the Yellowstone hotspot ? Sean.hoyland - talk 06:45, 6 February 2011 (UTC)[reply]
The correlation is between big earthquakes and leaking magma chambers. --Chris.urs-o (talk) 14:57, 6 February 2011 (UTC)[reply]
It is just media sensationalism at work. CNN had a story a few weeks back that said that Betelgeuse would go supernova soon, and people were freaking out even though "soon" is defined as sometime in the next 10,000,000 years. Googlemeister (talk) 15:10, 7 February 2011 (UTC)[reply]

February 6

Beautiful sunsets

The sunsets in the San Francisco Bay Area over the last few months have been beautiful. Too beautiful. I don't think they were this nice in past years. Is there more particulate matter in the air right now for some reason? It's not listed as one of the consequences of the April 2010 Eyjafjallajökull eruption... -- BenRG (talk) 02:06, 6 February 2011 (UTC)[reply]

This article, from Apr 23, 2010, explains that Eyjafjallajökull had not (by that date, anyway) put enough particulate matter into the stratosphere to have much of a lasting effect on sunsets. WikiDao 02:18, 6 February 2011 (UTC)[reply]
This summary of emissions in California ranks them by source and county using data for 2005. Cuddlyable3 (talk) 03:14, 6 February 2011 (UTC)[reply]
The small volcanic ash particles in the stratosphere do not have a lot of "mass". There are some eruptions on the Pacific ring of fire. --Chris.urs-o (talk) 15:01, 6 February 2011 (UTC)[reply]
Many eruptions have occurred worldwide since the Icelandic eruption. See list of currently erupting volcanoes. ~AH1(TCU) 18:28, 6 February 2011 (UTC)[reply]

I think this is a result of the fact that we are currently experiencing the strongest La Niña in over ten years, and since around Jan 5 it has been very dry. Dry air tends to give better sunsets than moist air. I don't think the sunsets were so nice back in November and December, when it was rainy as hell. Looie496 (talk) 18:47, 6 February 2011 (UTC)[reply]

Well, yes, rain would rinse particulate matter out of the air. PЄTЄRS J VTALK 19:00, 6 February 2011 (UTC)[reply]
Oh, that makes sense. I don't know why I didn't think of that. -- BenRG (talk) 00:32, 7 February 2011 (UTC)[reply]

Reducing noise pollution

Hi, I want to know: is it possible to make a device which will generate sound waves to counter the sound waves generated in the real environment? My idea about this device is that ..... it will detect the frequency of all sound waves around it and simultaneously generate counter sound waves which will destroy the real sound waves ..... and our environment will be free of noise pollution. —Preceding unsigned comment added by 220.225.96.217 (talk) 05:03, 6 February 2011 (UTC)[reply]

I don't think it's quite that easy. See our article Active noise control. Mitch Ames (talk) 05:08, 6 February 2011 (UTC)[reply]
(edit conflict)See Active noise control. If the region in which you are trying to cancel the noise is small (like the space between headphones and your ears) OR if the noise you are trying to cancel is relatively predictable and easily modeled, like a car engine, then the technology is already on the market and in consumer products right now. Active noise control is available in Noise-cancelling headphones you can wear. And some high-end luxury autos use active noise control to minimize the engine noise and road noise inside the cab of the car. The problem is that you can't design a noise control system for a big space where the sources of the noise and the people are all moving around. The thing about the inside of cars, and the inside of headphones, is that it's a small environment to model and it's very predictable. People sit in roughly the same place inside a car, and the space inside the headphones is really easy to design for. If you're thinking of creating a giant noise-cancellation system which would, say, cover your entire house and yard, so that anywhere you were standing in your house would be noise free, it's just too complicated to design such a system. The noise cancellation system needs to know where you are, so it can generate the correct counter-noise to broadcast. If you move in the soundspace, the exact anti-noise needed is going to change drastically. You need a system that follows you around and hears what you hear, hence the headphones. --Jayron32 05:13, 6 February 2011 (UTC)[reply]
Another problem with large-scale noise cancellation is that because energy is conserved, canceling the noise in one area necessarily means having MORE noise somewhere else. If you make your house and yard noise-free, the neighbor might object to having an interference maximum inside his house. --99.237.234.245 (talk) 05:58, 6 February 2011 (UTC)[reply]
No, that's not necessarily so. The energy needed to create noise cancellation is provided the same way the power for your lightbulbs is. It doesn't necessarily follow that you have to create more noise somewhere else in order to cancel noise. Total energy is conserved, not total noise! --Jayron32 06:03, 6 February 2011 (UTC)[reply]
It doesn't matter how the sound is created in the first place. Once the noise-cancellation sound is created, it has sound energy; the noise that it's meant to cancel also has sound energy. That energy can't simply disappear after the two waves interfere, so if the sound has 0 amplitude in one area, it must have a non-zero amplitude elsewhere. For the same reason, it's not possible to have minima without maxima in a double-slit experiment, or to make the water come to a standstill by throwing pebbles to cancel an existing water wave. --99.237.234.245 (talk) 06:58, 6 February 2011 (UTC)[reply]
I agree, 99.237.234.245. However, as a practical solution you can baffle the anti-sound, and absorb the energy without bothering the neighbors. Ariel. (talk) 09:45, 6 February 2011 (UTC)[reply]
Yeah, but then you should also be baffling the original noise at the same barrier, no? Wnt (talk) 15:50, 6 February 2011 (UTC)[reply]
Nah, I don't think Ariel means to insert a baffle barrier between the noise source and the hearer to be protected. Ariel is talking about reducing the added sound contribution outside the protected zone. Another improvement is to radiate the anti-sound at low level from a phased array of speakers that focus maximum power only at the protected hearer. Cuddlyable3 (talk) 18:10, 6 February 2011 (UTC)[reply]
Are you talking about white noise? ~AH1(TCU) 18:27, 6 February 2011 (UTC)[reply]
He's still talking about noise cancellation, which is distinct from white noise. Someguy1221 (talk) 19:39, 6 February 2011 (UTC)[reply]
I would still agree with Wnt. If you are adding a baffle to the outskirts of the protected zone, it seems likely this would be between the hearer and the noise you're trying to cancel. This would therefore also help stop incoming noise anyway. It's probably not perfect, but that would likely apply to the anti-noise as well; in other words, you're still making more noise for others even if you're partially baffling it. To put it a different way, the baffle either works or it doesn't. If it does, then why the noise cancellation? If it doesn't, then you're still arguably being a nuisance to others. I guess if the source of your anti-noise is from the baffles pointed towards the hearer and the protected zone is quite large, your contribution may be minimal compared to the effectiveness of the baffle itself in stopping incoming noise (in other words, the baffle may be better at stopping your anti-noise getting out than at stopping other noise coming in), but I'm not that sure; perhaps someone who better understands the physics of baffles and noise can explain? The exception would be if you're trying to cancel noise within the protected zone, but as I understood the original premise, the idea was to create a protected zone from external noises like cars, lawn mowers, leaf blowers, whatever. If the noise is within the protected zone, telling your partner/flatmate/child/parent/whatever to tone down the noise may work. Nil Einne (talk) 20:40, 6 February 2011 (UTC)[reply]
 noise ---> (protection) <--- anti-noise ---> (baffle) [otherwise extra noise would go here]
Ariel. (talk) 21:52, 6 February 2011 (UTC)[reply]
Yes that's what I mentioned in the late part of my discussion. My understanding is it wasn't really what the OP was thinking about, but perhaps I'm mistaken. Nil Einne (talk) 08:17, 7 February 2011 (UTC)[reply]
That diagram might look plausible, but move "(protection)" half a wavelength and the "noise" and "anti-noise" now add to one another. That's why I was assuming the layout was noise - anti-noise - (protection). Also of course the reality is three dimensional, and spheres don't meet up as nicely as line segments. Wnt (talk) 17:09, 7 February 2011 (UTC)[reply]
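The half-wavelength point is easy to check numerically. Here is a minimal sketch (my own illustration, with a made-up wavelength and a single-frequency tone, not anything from the thread): the anti-noise is tuned to cancel exactly at one listening position, and the superposed amplitude is then sampled at that position and half a wavelength away.

```python
import math

# Hypothetical single-frequency noise plus a phase-inverted "anti-noise"
# tuned to cancel at position x = 0. Wavelength is an arbitrary value.
wavelength = 1.0          # metres (made-up)
k = 2 * math.pi / wavelength

def total_amplitude(x, samples=1000):
    """Peak of |noise + anti-noise| at position x over one full period."""
    peak = 0.0
    for i in range(samples):
        phase = 2 * math.pi * i / samples
        noise = math.sin(k * x + phase)
        anti = -math.sin(k * 0.0 + phase)   # inverted copy, tuned for x = 0
        peak = max(peak, abs(noise + anti))
    return peak

print(round(total_amplitude(0.0), 3))              # at the protected point: 0.0
print(round(total_amplitude(wavelength / 2), 3))   # half a wavelength away: 2.0
```

At the tuned point the two waves cancel completely, but half a wavelength away they are in phase and the amplitude doubles, which is exactly why moving the "(protection)" marker in the diagram breaks the scheme.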

Gases

A monatomic gas and a diatomic gas are initially at the same temperature and pressure, and are later compressed adiabatically to half their initial volumes. Which gas then has the higher temperature? According to thermodynamics, TV^(gamma-1) is constant, so the temperature of the monatomic gas is higher. But one could also reason that, since the attractive forces in the diatomic gas are stronger due to greater van der Waals forces of attraction, more energy is released and the kinetic energy increases, thus increasing the temperature. So, what is the mistake in the latter reasoning? — Preceding unsigned comment added by Krishnashyam1994 (talkcontribs) 09:19, 6 February 2011 (UTC)[reply]

In a gas, the molecules spend most of the time so far from each other that there are no forces acting. Exactly which forces act during collisions is not thermodynamically important -- only what comes out of the collision is. The diatomic gas is different because its molecules can store energy in rotational and vibrational modes that do not show up in the kinetic energy of the entire molecule, and this internal energy mixes with the kinetic when collisions happen. There are no such hidden energies in the atoms of a monatomic gas. Therefore, when you add some energy to the gas (by doing mechanical work to compress it), the monatomic gas has to use all of the extra energy to make the molecules move faster (= increased temperature), whereas a diatomic gas can store some of it internally and only uses some of it for kinetic energy (= less increased temperature). –Henning Makholm (talk) 09:43, 6 February 2011 (UTC)[reply]
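The TV^(gamma-1) relation is easy to check numerically. This is a minimal sketch (my own, with an arbitrary 300 K starting temperature) comparing an ideal monatomic gas (gamma = 5/3) with a diatomic gas (gamma = 7/5) when the volume is halved:

```python
# Adiabatic compression of an ideal gas: T1 * V1^(g-1) = T2 * V2^(g-1),
# so halving the volume gives T2 = T1 * 2^(g-1). Starting T is arbitrary.
T_initial = 300.0   # K (assumed)

def final_temperature(T1, gamma, compression=2.0):
    # compression = V1 / V2
    return T1 * compression ** (gamma - 1.0)

T_mono = final_temperature(T_initial, 5.0 / 3.0)   # monatomic: ~476 K
T_di = final_temperature(T_initial, 7.0 / 5.0)     # diatomic:  ~396 K
print(round(T_mono), round(T_di))
```

The monatomic gas ends up hotter, matching the thermodynamic argument: the diatomic gas parks part of the added energy in rotational and vibrational modes.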

Specific heats

What will be the Cv(Specific heat at const. volume) for triatomic linear and non-linear molecules? i.e., what will be the contribution of rotational and translational freedoms? but for both, translational freedoms are 3R/2 — Preceding unsigned comment added by Krishnashyam1994 (talkcontribs) 09:24, 6 February 2011 (UTC)[reply]

You first need to compute the number of degrees of freedom for the vibrational modes. You can do this as follows. A particle considered as a point mass has 3 translational degrees of freedom. A bound state of 3 particles will have 3 degrees of freedom for each particle, so there will be 9 degrees of freedom in total. You can then decompose these 9 degrees of freedom for the bound state in terms of the center of mass motion, and the rotational and vibrational modes of the bound state. Clearly, there are 3 degrees of freedom for the translational center of mass motion, and you have either 2 or 3 rotational degrees of freedom depending on whether or not the molecule is linear. In the case of a non-linear molecule, you can choose 3 independent axes of rotation, so you have 3 degrees of freedom for rotation. In the case of a linear molecule, rotation about the axis parallel to the molecule is not allowed. The reason for that is that this corresponds to a spin degree of freedom of the atoms, which we didn't consider in the total of 9 degrees of freedom. So, we have:

total number of degrees of freedom = 9 = N vibrational modes + 3 translational degrees of freedom for the center of mass + 2 or 3 rotational modes.

So, you can solve for the N vibrational degrees of freedom for the cases of linear and non-linear molecules. Then, the heat capacity per molecule is obtained from the equipartition formula. Since each vibrational mode contributes two quadratic terms to the Hamiltonian, you get:

C_v = N k_B + 3/2 k_B + (2 or 3)/2 k_B

Count Iblis (talk) 01:31, 7 February 2011 (UTC)[reply]
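The bookkeeping above can be sketched in a few lines (my own illustration, assuming the classical equipartition limit where all modes are fully excited; values are in units of k_B per molecule):

```python
# Classical-limit Cv per molecule for a triatomic molecule, in units of k_B.
# Each vibrational mode has 2 quadratic terms (-> k_B each); each
# translational or rotational degree of freedom has 1 (-> k_B/2 each).
def cv_triatomic(linear):
    total_dof = 9                  # 3 atoms x 3 translational dof each
    translational = 3              # centre-of-mass motion
    rotational = 2 if linear else 3
    vibrational = total_dof - translational - rotational
    return vibrational * 1.0 + translational * 0.5 + rotational * 0.5

print(cv_triatomic(linear=True))    # linear (e.g. CO2): 6.5 k_B
print(cv_triatomic(linear=False))   # bent (e.g. H2O):   6.0 k_B
```

So a linear triatomic molecule has 4 vibrational modes and Cv = 13/2 k_B, while a bent one has 3 vibrational modes and Cv = 6 k_B, in this fully classical limit.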

Will it be the same for both ideal and real gases? — Preceding unsigned comment added by Krishnashyam1994 (talkcontribs) 09:07, 7 February 2011 (UTC)[reply]

For real gases the total heat capacity will be different due to the potential energy between the molecules. However, the contribution due to the translational, rotational and vibrational degrees of freedom will be the same. Of course, you can still question the exactness of these contributions to the heat capacity. The contribution of 3/2 k_B from the translational modes to the heat capacity is almost exact, the corrections coming from quantum effects that are only important at extremely low temperatures (usually less than 10^(-10) K). For the vibrational modes, the opposite is true. In most cases, quantum effects lead to these being frozen out well above room temperature. Then, a few hundred K above room temperature where you would expect the formula to hold, you will see deviations, because describing the molecule as being bound by a harmonic potential isn't that accurate. The rotational modes are typically frozen out below 100 K or so. There are obviously corrections to the classical formula due to quantum effects. Also, there is a coupling between the rotational modes and the vibrational modes. E.g. if the molecule rotates faster, it stretches a bit, which causes the moment of inertia to be larger. So, there is an inherent contradiction in treating the molecule as a rigid rotator and a harmonic oscillator at the same time. Count Iblis (talk) 14:16, 7 February 2011 (UTC)[reply]

Tank filling on a cold day

Is it better for your pocket? Gasoline will have less volume, but does it matter? — Preceding unsigned comment added by Quest09 (talkcontribs) 12:10, 6 February 2011 (UTC)[reply]

I seem to recall a discussion on this before but can't remember where. In any case, [3] suggests the change in density is only about 0.5% for a 5 degree C change in temperature. In other words, even if there's a 30 degree C change between day and night, that's only a 3% difference. More to the point, [4], which is the ref used in the earlier source, and [5] mention 2 key things. Number 1, many petrol stations have their tanks underground, where the temperature change over the course of a day is minimal. Even if they don't, a large volume of petrol probably doesn't change temperature much over the course of the day anyway. So this may be an issue in summer vs winter, but day vs night not so much. And even then, as both refs mention, most modern pumps are probably sophisticated enough that they do take temperature into account (and in any case would, I expect, be periodically recalibrated, probably more than once a year). Nil Einne (talk) 12:36, 6 February 2011 (UTC)[reply]
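As a back-of-envelope sketch of the ~0.5% per 5 degree C figure above (my own numbers for tank size and temperature swing; real pumps may compensate, as noted):

```python
# How much fuel volume a temperature swing represents, if the pump did
# NOT compensate. Expansion coefficient from the ~0.5% per 5 degC figure.
expansion_per_degC = 0.001     # fractional volume change per degC
litres_paid_for = 50.0         # assumed fill-up size
temp_swing_degC = 10.0         # assumed temperature difference

extra_volume = litres_paid_for * expansion_per_degC * temp_swing_degC
print(round(extra_volume, 2))  # litres of difference on a 50 L fill: 0.5
```

Half a litre on a 50-litre fill is around 1%, which is why seasonal price swings dwarf the effect for an individual buyer, even though it adds up across millions of transactions.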
Not to mention that prices fluctuate much more over the year than the density possibly could anyway. –Henning Makholm (talk) 13:18, 6 February 2011 (UTC)[reply]
This explains it nicely: Consumer Watchdog--Aspro (talk) 19:05, 6 February 2011 (UTC)[reply]
This Fuel dispenser#The metrology of gasoline talks about it a bit. Basically in Canada where the cold temperatures hurt the gas sellers they compensate for the difference. Same for the wholesale level. But in the warm US they don't (the warm south more than compensates for the cold north). People think that's unfair but it's not unfair enough for anyone to get worked up enough to do something about it. Ariel. (talk) 19:10, 6 February 2011 (UTC)[reply]
The petrol is stored underground, which means there will be very little temperature variation during the day due to the insulating properties of the ground. There will be a variation from season to season, possibly, but you can't really wait until winter to fill up! --Tango (talk) 22:35, 6 February 2011 (UTC)[reply]
From the Consumer Watchdog PDF linked above: 513.8 million gallons of gasoline sold in the summer 2007 will be attributable to the thermal expansion of gasoline. Can I just say I had to nudge my eyes back into their sockets. It never fails to stagger me every time I hear statistics about how much gas we churn through; I think this should really be something kids are taught in schools. We've reached peak oil already (if you argue we haven't, then you can't argue we won't very soon), and most people are still completely oblivious to the staggering amount of resource we pull out of the ground and burn every year. Vespine (talk) 01:27, 7 February 2011 (UTC)[reply]

Magma

A while ago I recall a wiki user stating that the Earth's structure is solid throughout. Meaning that if you drill down to the core, you won't encounter magma (he said magma was formed through other means). I'm sure the outer core is liquid, but is it magma? The article on mantle states part of it is melting. Would that be magma? If we were to drill down to the core of the planet, would we eventually encounter magma? ScienceApe (talk) 15:02, 6 February 2011 (UTC)[reply]

Magma is formed when part of the earth (mantle or crust) melts because conditions locally (e.g. relatively high levels of water i.e. above a subduction zone or areas where the temperature is relatively high considering the pressure i.e. decompression melting) mean that the melting temperature is reached. Only part of the rock actually melts (partial melting) and a magma forms when the melt rises and starts to form distinct bodies of molten rock. This is different from the outer core, which is all liquid. You would only encounter magma on drilling if you did it at one of the locations where partial melting is happening. Mikenorton (talk) 15:26, 6 February 2011 (UTC)[reply]
Do you mean the "outer core is all liquid"? All of the links in this vicinity describe the inner core as solid. SemanticMantis (talk) 15:37, 6 February 2011 (UTC)[reply]
I am getting mixed up in my old age - corrected. Mikenorton (talk) 16:03, 6 February 2011 (UTC)[reply]
Our article on Structure of the Earth tells us "The liquid outer core surrounds the inner core and is believed to be composed of iron mixed with nickel and trace amounts of lighter elements." So no, the outer core is not magma. SemanticMantis (talk) 15:30, 6 February 2011 (UTC)[reply]
(ECx2) Well, Structure of the Earth says "The liquid outer core surrounds the inner core and is believed to be composed of iron mixed with nickel and trace amounts of lighter elements." This doesn't sound much like what people commonly think of as magma, which is of course discussed in our article. I suspect the temperature and pressure differences with the places where magma is formed are enough that even if it were rock you wouldn't get something people would think of as magma either. Nil Einne (talk) 15:34, 6 February 2011 (UTC)[reply]
So would it look like liquid metal? ScienceApe (talk) 17:28, 6 February 2011 (UTC)[reply]
The asthenosphere of the mantle is generally semi-solid, but the lower mesosphere is solid. ~AH1(TCU) 18:24, 6 February 2011 (UTC)[reply]
The asthenosphere is fully solid (apart from a suggested 1% melt), but at the slow rates of deformation that occur in the uppermost mantle it acts like a highly viscous fluid, a form of rheid. As to the appearance of the outer core, we have no way of knowing - the conditions can be recreated using a diamond anvil cell in the laboratory but we can't look at it as the sample is so small. Mikenorton (talk) 19:23, 6 February 2011 (UTC)[reply]
The liquid outer core is composed primarily of metals. But it may or may not look like molten metal at atmospheric pressure. SemanticMantis (talk) 19:58, 6 February 2011 (UTC)[reply]
The inner core would look like molten metal at atmospheric pressure (assuming the temp stayed the same). The only reason the inner core is solid is the massive pressure that it is under. For example, Ice VII is a form of water ice that doesn't melt until over 400 degC because it is at 10 GPa. Googlemeister (talk) 15:05, 7 February 2011 (UTC)[reply]

Refilling the Ogallala aquifer

One of the leading regional creeping-doom scenarios in the U.S. concerns the rapid draining of the Ogallala aquifer, leading to reduced potential for agriculture throughout much of the Great Plains. What I don't understand is that the barrier to refilling is apparently mostly a matter of permeability, and there are four major rivers crossing the aquifer region. Why don't people just drive wells down from the river to the aquifer every hundred yards for its length to refill it? (To avoid confusion, I mean wells with some degree of filtering, close to the surface of the water) Wnt (talk) 21:29, 6 February 2011 (UTC)[reply]

Well, there are also major disputes about the usage of water from the rivers. For example, Nebraska and Kansas have been involved in lawsuits about Republican River water rights, and Nebraska and Colorado have struggled with rights to the South Platte River. In recent years it has been common for the middle Platte River (upstream of Columbus, Nebraska, where it is joined by the Loup River) to dry up completely in the summer. I'm sure similar issues occur in states other than Nebraska; I only know of these examples because Nebraska is my home state. For more information, you might try http://water.unl.edu/. —Bkell (talk) 21:40, 6 February 2011 (UTC)[reply]
Obviously rivers are not infinite supplies of water. If you were to drain hydrologically significant quantities of water from the river, in order to "re-fill" the aquifer, you'd be extracting enormous quantities of water; and you can't just assume that "more river water will just keep flowing." Extraction of such volumes of water would have as much ecological impact as the depletion of the reservoir in the first place! The water cycle is a closed system - if the aquifer is being depleted, the water is ending up somewhere else. The real issue is, though, that where it ends up is not economically useful. The water that has been extracted from the aquifer will be distributed globally in the form of increased precipitation somewhere else in the world. If the water is pumped out for crops and agriculture, and significant amounts undergo evapotranspiration and end up as rain over the Atlantic Ocean, then that freshwater is "lost" as a useful reservoir. However, that means that there's more water in the ocean, and less in the atmosphere; so there will be increased oceanic evaporation, and new clouds will form, and rain will precipitate somewhere else (maybe in Africa or Asia or the Pacific). If we're lucky, and if all things were "equitable and fair," this would also mean more rain in the Mississippi River basin - so the rivers would have higher throughput, so we could sustainably "drain off" a few billion gallons per year to "pump back into the ground." The point is, though, that the implications of hydrology on the scales of continent-sized aquifers are part of the entire global climate system. For every place that undergoes desertification or aquifer depletion, somewhere else on the planet is receiving more fresh-water. Unfortunately, this redistribution often occurs in a way that is neither useful for human economic activity or agriculture, nor for ecosystems.
If you could make a convincing scientific case that depleting the river water flow, and pumping or "sequestering" water in an aquifer would actually result in increased economic activity, you'd have an easy time finding major industrial sponsors to start the pumping. Nimur (talk) 22:17, 6 February 2011 (UTC)[reply]
I really doubt that "For every place that undergoes desertification or aquifer depletion, somewhere else on the planet is receiving more fresh-water." There's no reason why the oceans couldn't give up a few inches of water and all the land would become wetter, if local conditions permitted it. That said, if the river is being sucked dry, obviously that's a problem. I still think of the area as a place for "500-year floods"[6] but of course that's not every year. I'd call it a missed opportunity, but then again, I don't know how well the system would work in muddied raging floodwaters anyway. Wnt (talk) 01:41, 7 February 2011 (UTC)[reply]
You're right; I should clarify that if the "conserved quantity" of fresh water ends up raining in the ocean, it's no longer really "fresh water." If climate-sized quantities of rainfall over the ocean change, the salinity will decrease by some microscopic amount, but that's not really helpful (it's still not going to be "freshwater"). My earlier comments were not meant to imply that there's a fixed amount of fresh water that we're guaranteed to always have distributed throughout Earth. Nimur (talk) 02:50, 7 February 2011 (UTC)[reply]
You would also have to worry about contamination of the aquifer. I do not have pollution data handy, but I would expect that a deep aquifer would have fewer pollutants than a surface river which gets fertilizer runoff. Googlemeister (talk) 14:59, 7 February 2011 (UTC)[reply]

How deep is the deepest parking garage? Could one ever be built deep enough to reach Hell, like in this commercial?

This 2003 Honda Pilot Commercial depicts a full parking garage for the first many levels. Soon, they reach some humid utility area 96 levels down, and eventually a Hell-like cavern 720 levels below.

The head sticking out appears to be the last sentry of Upper Hell before the realm's exit, and what a surprise - there are still lower levels.

For those of you who don't believe in Hell, could one ever be made deep enough to reach a Hell-like place?

Anyway, what has been the deepest parking garage ever made and what feats of engineering and financing would it take to reach a Hell-like area below the surface? At what depth/floor level would it get unrealistic and why? --70.179.181.251 (talk) 22:00, 6 February 2011 (UTC)[reply]

You mean down into the mantle? I suppose it would be possible, but possible and practical are two very different things. The mantle is 75 km below the surface in places. Building a car garage even close to that depth would need an elevator, a fan system to remove carbon dioxide, and a humidity regulator. Sure, it can be done, but that's an idea like a floating city: possible, but completely pointless. One thing that should be noted is that it's not like there's a surface that you break through and then magma; the rock gets more and more plastic as one goes down. You also only need to go down a few miles before it starts to get noticeably hotter. --T H F S W (Contact) 22:08, 6 February 2011 (UTC)[reply]
Just in terms of feasibility, it's of note that the lowest point ever drilled by humans — the Kola Superdeep Borehole — was only about 12.3 km below the surface, and that was not very easy. --Mr.98 (talk) 01:40, 7 February 2011 (UTC)[reply]
That also links to Extreme points of Earth#Lowest point (artificial) which notes the 'lowest human-sized point underground' is the TauTona Mine which is 3.9 kilometers deep. Our article notes 'The journey to the rock face can take 1 hour from surface level. The lift cage that transports the workers from the surface to the bottom travels at 16 metres per second (58 km/h).' I don't know if there's any shaft large enough for a car, perhaps not. As for 'hell like' well 'The mine is a dangerous place to work and an average of five miners die in accidents each year' ('employs some 5,600 miners') and 'Air conditioning equipment is used to cool the mine from 55 °C (131 °F) down to a more tolerable 28 °C (82 °F). The rock face temperature currently reaches 60 °C (140 °F)' Nil Einne (talk) 06:10, 7 February 2011 (UTC)[reply]
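For a rough sense of the temperatures involved, here is a back-of-envelope sketch (my own assumptions, not from the thread: a typical continental geothermal gradient of about 25 °C per km and roughly 3 m per parking level):

```python
# Rough temperature estimate at the bottom of the commercial's hypothetical
# 720-level garage, assuming ~3 m per level and an average continental
# geothermal gradient of ~25 degC/km (both figures assumed; the gradient
# varies a lot by location).
gradient_degC_per_km = 25.0
level_height_m = 3.0
levels = 720
surface_temp_degC = 15.0       # assumed surface temperature

depth_km = levels * level_height_m / 1000.0
temp = surface_temp_degC + gradient_degC_per_km * depth_km
print(round(depth_km, 2), round(temp))   # ~2.16 km down, ~69 degC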
I see a couple of places that were planning to have seven underground parking levels, one in Vancouver and another in Chicago. Don't know if either got built. Clarityfiend (talk) 22:29, 6 February 2011 (UTC)[reply]
The Torre del Caballito in Mexico City reportedly has 15 (fifteen!) underground parking levels. I don't know if that's a world record, but I couldn't readily locate anything deeper. TenOfAllTrades(talk) 03:04, 7 February 2011 (UTC)[reply]
I love the "For those of you who don't believe in Hell"! Even if I believed in Hell, could I seriously believe that it is inside the Earth? If so, I would start a campaign for the governments to get drilling and free those souls! --Lgriot (talk) 10:55, 7 February 2011 (UTC)[reply]
Do you also want to campaign to go around all the maximum security prisons opening doors? If you believe in Hell, you probably would believe that they are there for a reason. Googlemeister (talk) 14:53, 7 February 2011 (UTC)[reply]
I am sorry my irony in this post didn't get through. The underlying assumption is that God, being all-knowing, including of our future, knew that these people would have a bad life, yet he let them be born, and after their death let them be condemned to hell. So my view is that God is a very bad being to let people be born, have a bad life and then punish them for something he knew would happen and let happen anyway. To punish him, we should free all the bad and let them loose in heaven where they can think of ways to annoy God. That would be fun to watch, wouldn't it? It isn't the same as human justice, where the judges are genuinely not responsible for the prisoners' actions (well, most of the time). --Lgriot (talk) 15:43, 7 February 2011 (UTC)[reply]
But logically, God would know what you were going to do, and having infinite power, could stop you if he felt it would actually be irritating, so it really wouldn't work. Googlemeister (talk) 16:07, 7 February 2011 (UTC)[reply]
Again, failed, sorry. I seem to have a problem making Googlemeister understand that I am joking. Maybe because I am the only one to see how funnily ridiculous the idea of an all powerful god is? --Lgriot (talk) 08:59, 8 February 2011 (UTC)[reply]
I think we could make a fair scientific argument for opening the prisons - or rather, can anyone show evidence that having them is useful? At least in the U.S., which invented them and seems to think that they can cure virtually any problem whatsoever (and when they don't, that's irrelevant). As for Hell, well, if God is an author, and he throws rough drafts that aren't going anywhere into the fire, then the fire burns always, the drafts burn but a moment, and the characters in those drafts do not suffer, but neither are they saved. Wnt (talk) 17:22, 7 February 2011 (UTC)[reply]
The US invented prisons ? No, they've been around as long as civilization, probably longer. While they seem to be fairly useless at reform, they do prevent many crimes, since criminals spend years removed from potential victims. StuRat (talk) 18:56, 7 February 2011 (UTC)[reply]
While I agree the idea the US invented prisons is bizarre, you appear to have a fairly naïve view of the interaction between crimes, prisons and criminals. In particular, you appear to be assuming prisons make no difference as to how likely a criminal is to commit a crime. Yet there is some evidence prisons may in some instances increase the likelihood someone will commit further crimes in the future, for example by teaching someone to be a better criminal, by disconnecting someone from society, by (for lack of a better description) 'reducing their humanity', making it more difficult for someone to interact acceptably in society (e.g. get a legitimate job), and simply by familiarising someone with criminals and crime as a way of life. If you lock someone up for 20 years of their life in total and they commit 10 crimes during that life regardless, vs if you don't lock someone up at all and they only commit 5 crimes during their life, then clearly you have not prevented many crimes but increased the number. And many would agree some crimes are worse than others, so going solely by the number doesn't work very well anyway. And without getting into rehabilitation at all, all that money you spend on prisons, as well as associated costs like courts and police, is money you can't spend elsewhere, whether you believe that should be on improving the lot of those in a very bad situation or on reducing taxes. (If you believe prisons do reduce the number of crimes, then the cost of running prisons can partially be offset against what you save from the reduction in crimes, but as I've said that hasn't been established.)
Note that I'm not arguing that completely removing punishment or justice would result in less crime; personally I don't believe it would, although I do agree with Wnt that the current system in the US isn't working very well. But that's somewhat beside my point, which is that you can't simply say 'people in prison can't commit crimes so you have fewer crimes/victims', since there is clearly a complex interaction; the fact that people in prisons can't commit crimes doesn't help you if people are more likely to commit crimes when they aren't. Nil Einne (talk) 05:42, 8 February 2011 (UTC)[reply]
I think by the time most people end up with prison sentences, they are already career criminals. First time offenders typically don't get caught, or, if caught, aren't charged or get a suspended sentence or community service. However, another factor supporting your contention, is that some crimes seem as though they are likely to occur regardless of the criminal. If you arrest a drug dealer, another will work the same location to supply the customers there. If you arrest a hit man, those who would have hired him just hire somebody else. StuRat (talk) 06:02, 8 February 2011 (UTC)[reply]
You might want to take a look at When the World Screamed first. --Stephan Schulz (talk) 11:08, 7 February 2011 (UTC)[reply]
I have to admit, I didn't think anyone still believed that hell was literally, physically inside the Earth. APL (talk) 17:31, 7 February 2011 (UTC)[reply]
I don't believe in any sort of literal hell. But hell is certainly a fun theological and philosophical tool for analyzing meaning and reality and destiny. Modern views vary widely, ranging from purely secular interpretations of the writings in the various holy-books, to mainstream theological interpretations in major world religions. Here's our article on Hell in Christian theology - I bring this up because Pope John Paul II reversed some major Catholic doctrine in the 1970s by stating that hell wasn't actually inside the Earth; in this 1999 speech, he said it's not a place at all. That should give you some perspective on just how modern that viewpoint is. More recently, in 2007, Pope Benedict has emphatically denied John Paul II's statements, claiming that Hell is actually a real place inside the Earth. Unfortunately, Popes tend to speak a cryptic and poetic dialect of heavily-Latin-influenced Italian, and rarely go to great lengths to unambiguously clarify their dictums; much meaning is lost through translation and interpretation. (It is my opinion that if any religious doctrine were to be laid out in unambiguous and clear language, the silly ideas could be easily refuted, leaving only a modernized, secular philosophy). I'm neither a Catholic nor very religious at all, but I have read many of John Paul II's speeches and writings, and I can say that he was a much more rational and intelligent pope than most - a suitable religious leader in the era of space travel and scientific reason. Nimur (talk) 21:01, 7 February 2011 (UTC) [reply]
Neither of those external links actually say what you say they say. Is that intentional? Some sort of test to check people are actually following links? John Paul II was referring to what was already in the Catechism (he refers to it by name) in saying that Hell is a state of being: he makes no mention of it being 'inside the Earth' because that hasn't been Catholic teaching since I don't know when. Benedict XVI says that Hell is very real, even though people don't talk about it much anymore, and it is a state of separation from God: he makes no mention of location or of it being 'inside the Earth' because that hasn't been Catholic teaching since I don't know when. He is particularly talking to support the new version of the Catechism, which explains (again) that Hell is a state of separation from God with no particular worldly location. Seriously, your links say nothing even close to what you say they do. Benedict XVI is a smart cookie steeped in rational study (of Catholic and related theology), although he sometimes forgets his audience and ends up too esoteric. 86.162.68.36 (talk) 22:36, 8 February 2011 (UTC)[reply]
In fact, this must be a test to see if people are actually following your links, because the very internal link you provided explains that neither Pope was saying what you said they were saying, and that this is a piece of misinformation spread by misreporting. Either that, or you don't read the links you provide yourself... 86.162.68.36 (talk) 22:46, 8 February 2011 (UTC)[reply]

February 7

What happens when black holes collide?

I thought that there would be an almighty bang, with cosmic rays galore, and local solar systems devastated. Because I was thinking of a sun being sucked into a black hole. As it spiraled in, it would be spaghettified, ripped apart, and there would be extremely powerful radiation that would be seen for galaxies around. But then I thought, with black holes, NOTHING can escape, not light, matter, nothing. So even the most powerful collision between two of them would hardly matter. Consider a technology superior to ours which, instead of sending two atomic particles on collision courses in something like the Large Hadron Collider, sent two massive black holes, both at nearly the speed of light, directly at each other. What would happen? No explosion. Maybe one would bounce off the other, or they would just integrate, like two blobs of paint. Much less dramatic than the collision of two suns. And no one around would notice, let alone be obliterated. Is that right? Myles325a (talk) 06:43, 7 February 2011 (UTC)[reply]

Although a collision of black holes doesn't give off matter or light, it does give off energy in the form of gravitational waves. In the most realistic case of a collision that isn't exactly head on, the two black holes will inspiral for a short time before they coalesce into a single black hole, and the gravitational waves produced will carry away angular momentum. The gravitational waves produced by the merger of two black holes also carry away linear momentum, producing a recoil on the merged black hole. In the case of the merger of two supermassive black holes, the recoil can be large enough to wind up kicking the merged black hole out of its host galaxy. The ejected black hole can rip some stars away from the galaxy when it leaves, producing a hypercompact stellar system. See Gravitational wave#Energy, momentum, and angular momentum carried by gravitational waves. Red Act (talk) 07:58, 7 February 2011 (UTC)[reply]
The aftermath of two black holes colliding is formidable. During the inspiral, you'll have a moderate set of waves coming out, which increase roughly exponentially with time. They reach their peak during coalescence, the short period during which the two black holes become one. Then, there will be a dumbbell-shaped black hole, which will 'ring down' into a sphere, emitting more waves. This pattern of waves (inspiral, coalescence, ringdown) is quite distinctive, and is one of the ways we can detect (colliding) black holes. The nature of the coalescence waves can give us an insight into black-hole physics. In fact, there's a very large experiment called LIGO going on, which attempts to detect these waves as they reach Earth. ManishEarthTalkStalk 16:06, 7 February 2011 (UTC)[reply]
How does the black hole decide which way to move, relative to the center of mass of the two colliding precursors? Wnt (talk) 16:46, 7 February 2011 (UTC)[reply]
If the collision is a head-on collision between two non-rotating black holes then there is no recoil. A more realistic collision wouldn't be so symmetric and there might be "some" recoil. Dauto (talk) 19:13, 7 February 2011 (UTC)[reply]
The recoil encountered during the merger of two black holes can be huge. In the paper I reference below, describing calculations of the mergers of two black holes with spins of magnitude S/m2=0.8 under various initial conditions, the recoil came to more than 1000 km/s in all but one case. That's about 1/300 of the speed of light, and about 33 times faster than the orbital speed of the Earth around the sun. The merger of even non-spinning black holes of unequal mass can have a recoil of up to 175 km/s, which is still more than 5 times faster than the orbital speed of the Earth. When black holes merge, the resulting black hole goes flying off at a very high speed relative to the initial center of mass, in a hard-to-predict direction. Gravitational waves carry away a huge amount of linear momentum when black holes merge. Red Act (talk) 22:08, 7 February 2011 (UTC)[reply]
I know. That's why I put the word some within quotation marks. Dauto (talk) 03:24, 8 February 2011 (UTC)[reply]
Oh, OK, I understand now. I mistook the quotes as connoting downplaying, instead of connoting ironic understatement. Sorry. Red Act (talk) 04:42, 8 February 2011 (UTC)[reply]
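A quick sanity check of the speed comparisons quoted above. The speed of light and Earth's mean orbital speed are standard constants; the 1000 km/s and 175 km/s kick values are the figures from the discussion, not independent data:

```python
# Sanity-check the recoil-speed comparisons quoted in the thread above.
c = 299_792.458   # speed of light, km/s
v_earth = 29.78   # Earth's mean orbital speed, km/s

kick_spinning = 1000.0  # km/s, kick quoted for spinning-BH mergers
kick_nonspin = 175.0    # km/s, max kick quoted for non-spinning, unequal-mass mergers

print(kick_spinning / c)        # ~0.0033, i.e. about 1/300 of the speed of light
print(kick_spinning / v_earth)  # ~33 times Earth's orbital speed
print(kick_nonspin / v_earth)   # ~6 times Earth's orbital speed
```

The printed ratios match the "about 1/300 of c", "about 33 times", and "more than 5 times" claims above.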
In realistic cases, a precise head-on collision isn't going to occur, and the black holes are going to be a binary system for a while before they merge. The merger of non-spinning black holes of equal sizes produces no kick (a.k.a. recoil). For the merger of two non-spinning black holes of different sizes, the kick direction (which of course must be in the orbital plane, due to mirror-image symmetry) is mainly determined by the relative positions of the two black holes before their very last orbit and merger. The net radiated linear momentum before then is small, because it's fairly close to being equal in all directions over time when the orbits of the two black holes are nearly circular. However, much larger kicks occur when the merger is between two spinning black holes, in which case the initial spin orientations play a crucial role in the kick direction, and the kick direction is no longer even confined to the orbital plane. As in the case of non-spinning black holes, almost the entire kick is accumulated during the final merger phase, so the kick direction is highly sensitive to initial conditions, and there isn't some simple formula for it. For more information, see this paper. Red Act (talk) 19:58, 7 February 2011 (UTC)[reply]

Scientific name etymology

Hi. Several sources (including Wikipedia) state that the scientific name Leptictidium means "graceful weasel". However, this etymology seems to correspond to Leptictis (lepti+ictis) rather than to Leptictidium. So what does Leptictidium actually mean? What's the difference in meaning between Leptictidium and Leptictis? Thanks. --79.89.248.158 (talk) 07:17, 7 February 2011 (UTC)[reply]

The suffix '-idium' is from the greek suffix '-idion' meaning smaller or lesser. Mikenorton (talk) 07:55, 7 February 2011 (UTC)[reply]

Why don't we have pneumatic tubes?

It seems people in the past (like end of nineteenth century) thought that by now -- in the 21st century -- we would have pneumatic tubes everywhere, for fast delivery. Why don't we? Is there something wrong with the idea after all, that makes it impractical, etc? I can't get a homecooked meal in my house any faster than either making it, going to go get it myself, or having someone deliver it. Whereas with tubes, I could just get one as from a buffet... Is there some reason the idea doesn't actually work? (even though more than a hundred years ago, people were convinced it would). 109.128.127.87 (talk) 12:01, 7 February 2011 (UTC)[reply]

Such pneumatic tube systems were quite widely used in the 20th century in industrial and commercial settings, such as factories, banks and department stores, and even across cities, to transport packages of documents, cash, postal items, etc. They remain in use in a variety of specialist settings - my local Tesco supermarket, for example, uses such a system to send containers of cash from tills to a central cashier's office. Although our article is not explicit, my guess (as someone professionally experienced in Facilities management) is that the necessary servicing and maintenance required to keep them working is relatively impractical in a domestic setting, making rival technologies or methods more economical. 87.81.230.195 (talk) 12:39, 7 February 2011 (UTC)[reply]
Document distribution has become overwhelmingly electronic; information is no longer the prisoner of sheets of processed dead tree. The need to transport a specific piece of paper within a single physical facility has become relatively rare.
I know some large hospitals use pneumatic systems to transport laboratory specimens and urgent medications.
City-wide "public utility" systems ran into scalability problems. What works well in a single building gets hopelessly snarled up in a city-wide system - the information and communication technology of the late 19th - early 20th century simply could not do the job economically. Roger (talk) 13:16, 7 February 2011 (UTC)[reply]
Rival technologies - there is none. If I want piping hot food in my house, I have to make it, or someone has to walk it to my doorstep (probably me). There is nothing that replaces it. Dodger67, I understand about late 19th - early 20th century limitations. My question is: what about today? 109.128.127.87 (talk) 13:18, 7 February 2011 (UTC)[reply]
Using magnetic levitation and propulsion would get over some of the difficulties of long-distance operation - if it can be made to work economically. A visionary idea for modernising the goods-distribution network. --Aspro (talk) 13:57, 7 February 2011 (UTC)[reply]
The infrastructure needed would mean that there would be high start up costs, which would make it a risky investment, and probably high running costs. The Royal Mail had a narrow gauge railway under London which they closed in 2003 due to running costs. -- Q Chris (talk) 14:14, 7 February 2011 (UTC)[reply]
I'm not sure that pneumatic tubes are ideally suited to food delivery. The acceleration and deceleration, not to mention the sudden twists and turns, would thoroughly scramble most food items.
A pizza would be disastrous, and the wrong shape anyway. Some Chinese-food items would be OK, but some would not survive. Grinders would be the ideal shape of course, but I think that even tightly wrapped in wax paper they'd be pretty badly damaged. I'll bet all the meat would wind up bunched up at one end and the bread would be soaked from the obliterated tomato. I'll bet a burger would be just as bad, but with more grease.
Maybe some systems go a lot slower than I'm imagining. APL (talk) 16:20, 7 February 2011 (UTC)[reply]
"I'll bet all the meat would wind up bunched up at one end and the bread would be soaked from the obliterated tomato." It's a sacrifice I'm willing to make. 109.128.127.87 (talk) 16:32, 7 February 2011 (UTC)[reply]
Well if it was spag bol, it wouldn't matter. But in other cases, the food could be vacuum sealed in a heavy plastic wrapper, which would prevent it from moving around. Myles325a (talk) 00:44, 8 February 2011 (UTC)[reply]
Actually I would think unmanned aerial vehicles would offer a rival technology, when appropriately optimized for the task. "All hail the Pizza Bomber!" But unless you teach a parachutist pizza how to ring a doorbell, you'll need some specialized slot in your house for the planes to dock, deliver their cargo and take off again. Conceptually, the friction of moving air in a sufficiently long pneumatic tube should eventually exceed the necessary lift and drag of a tiny airplane. Wnt (talk) 16:55, 7 February 2011 (UTC)[reply]
NOW you're talking. Smart Quadrocopters could flit all over town carrying precious fast-food take out. They could use cel-phones to alert buyers when they're about to land. I wonder how much you have to tip a delivery robot. APL (talk) 17:26, 7 February 2011 (UTC)[reply]
OK, now for a list of shortcomings of pneumatic tubes:
1) As Wnt mentioned, friction is more of a problem the longer the tubes are (and the more bends). You would need to put more and more pressure in the tubes, the longer they are, to get them to work. This would require that the tubes be thicker and better sealed, and the container would also need to seal more closely with the tube, to prevent air leakage, which in turn would also increase friction. Perhaps powdered graphite or some other lubricant would be needed along the inside of the tubes, which would, unfortunately, tend to blow out black dust at the ends. Also, at such pressures you could no longer have the tubes be open at the end. While this results in a harmless puff of air for short lengths, for the lengths you are talking about, the air pressure blast would blow your ear drums out. Instead, you'd need to have the pipe open up to a chimney on the roof, and would need special baffles or maybe use active electronic noise reduction methods, to prevent it from making loud sounds.
2) As APL mentioned, food might get messed up by the forces involved. Soda, even if well sealed, would likely explode when you opened it.
3) Switching systems would be needed, as a single tube from every fast food restaurant to every home wouldn't work. This is another opportunity for air to leak out. I suppose an automated system using bar codes could be used.
4) There could be a security concern, as somebody could send bombs this way, or stop your food and poison it. StuRat (talk) 18:31, 7 February 2011 (UTC)[reply]
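Point 1 above (friction growing with tube length) can be put in rough numbers with the standard Darcy-Weisbach pressure-drop formula. Every parameter below (tube diameter, air speed, friction factor) is an illustrative guess, not data from any real pneumatic system:

```python
# Rough Darcy-Weisbach estimate of the pressure drop needed to keep air
# moving through a long pneumatic tube: dp = f_D * (L/D) * (rho * v^2 / 2).
# All parameters are illustrative guesses.
rho = 1.2   # air density, kg/m^3
D = 0.15    # tube diameter, m
v = 10.0    # air speed, m/s
f_D = 0.02  # Darcy friction factor, typical for turbulent flow in smooth pipe

for L in (100, 1_000, 10_000):  # tube length, m
    dp = f_D * (L / D) * (rho * v**2 / 2)  # pressure drop, Pa
    print(f"{L:>6} m: {dp:>8.0f} Pa ({dp / 101_325:.3f} atm)")
```

The drop scales linearly with length, so a city-scale tube needs a substantial fraction of an atmosphere just to overcome wall friction, consistent with the sealing and noise problems described above.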
Now for some places where it could work:
A) Hotels. Room service would no longer require that you put your robe on. As for scrambling the food, the relatively short distances would mean that they could move it rather slowly. Another way to do the same thing would be with dumb waiters. The latter wouldn't allow going around corners, and you would need a vertical shaft between every pair of rooms (or maybe every 4 rooms, if the halls are on the outside, as in a motel). The dumb waiter system might be quieter and allow a full tray of food, but either system would need to control the door at the room end, so people couldn't just steal food sent to another room.
B) Skyscraper apartment buildings.
C) Hospitals.
D) Retirement homes. StuRat (talk) 18:44, 7 February 2011 (UTC)[reply]
Bellamy in 1887 wrote "Looking Backward", a sort of time travel or futurism forecast of what the world of 2000 would be like. He forecast an underground delivery system, whereby you could order and get automatically delivered food, clothing, or whatever. Picture the Denver Airport luggage handling fiasco. Such a system would likely cost more to set up and maintain than just getting stuff delivered by car from the pizza place, or by FedEx from internet merchants for books or clothes. Edison (talk) 19:59, 7 February 2011 (UTC)[reply]
It might also be useful to list some of the alternatives:
- For horizontal transport (or slight angles): Conveyor belts.
- For downward transport: Chutes. No power required.
- For upward transport: Elevators.
The additional flexibility of pneumatic tubes to transport in any direction apparently doesn't make up for the other short-comings of such a system.
Incidentally, it might interest you to know that the first New York Subway and an early London Underground were both pneumatic. StuRat (talk) 23:32, 7 February 2011 (UTC)[reply]

We currently use a pneumatic tube system at the hospital where I'm employed. It's primarily for sending samples to the lab, and medicines to various floors. We used to use a dumb waiter for moving large charts up to Medical Records, but most charting is electronic now, so that's obsolete. Suffice it to say, I don't see much use for pneumatic tubes outside our current implementation. — The Hand That Feeds You:Bite 23:45, 8 February 2011 (UTC)[reply]

In fairness to the tubes, you shouldn't really need to have the entire tube powered by an end-to-end difference in air pressure - you could e.g. have pistons at regular intervals which pull out as the package approaches and push in after it passes; this creates a gradient of air pressure only over a shorter interval. That said, pneumatic propulsion is (as with the railway) simply an alternative to engine power, and eventually it still makes sense to have one engine that goes with your package rather than dozens or hundreds along the way. (Unless you have a whole lot of packages?) Wnt (talk) 00:53, 9 February 2011 (UTC)[reply]

How do the birds "print" spots on their eggs?

Guillemot eggs

Do they have "printers" in the cloaca? -- Toytoy (talk) 14:40, 7 February 2011 (UTC)[reply]

I found this in the article eggshell:
In an average laying hen, the process of shell formation takes around 20 hours. Pigmentation is added to the shell by papillae lining the oviduct, coloring it any of a variety of colors and patterns depending on species. -- Toytoy (talk) 15:04, 7 February 2011 (UTC)[reply]
You seem to have answered your own question in part, but in case you're also interested in why, you might like to read this. This book also has more details on the how. SmartSE (talk) 16:30, 7 February 2011 (UTC)[reply]

capillary

what is capillary action? —Preceding unsigned comment added by 59.92.106.164 (talk) 14:52, 7 February 2011 (UTC)[reply]

See Capillary action. Red Act (talk) 15:11, 7 February 2011 (UTC)[reply]

the environment.

Does nuclear energy destroy the environment? — Preceding unsigned comment added by Drloic (talkcontribs) 15:10, 7 February 2011 (UTC)[reply]

See Environmental effects of nuclear power. Red Act (talk) 15:14, 7 February 2011 (UTC)[reply]
If it is released all at once, it has a very negative impact on the local environment. Googlemeister (talk) 16:05, 7 February 2011 (UTC)[reply]
There are some rather spectacular products of nuclear energy that could adversely affect the environment, however we'd all die without it. -- JSBillings 16:16, 7 February 2011 (UTC)[reply]
Short version: if there are no accidents, the net pollution is probably a couple of orders of magnitude less than coal. The "long version" explanation would include the probability of accidents (never non-zero, but lower than most people think; but the consequences of them are high), and the question of whether coal is the appropriate alternative to compare it to (whether that comparison creates a false dichotomy or not). You have to compare it to something, because all forms of energy generation have some environmental effects. --Mr.98 (talk) 16:36, 7 February 2011 (UTC)[reply]
I agree - coal power actually releases more radiation into the environment than nuclear: [7] [8] and the waste from coal can leave a very nasty mess [9] which is worse than anything proven about nuclear waste AFAIK. SmartSE (talk) 16:41, 7 February 2011 (UTC)[reply]
Note that nuclear decommissioning hasn't really been done that often, and until it is done we don't know whether the radioactive components inside will really be disposed of neatly in some small underground place we agree never to go to again, or dispersed all over the countryside during war or civil unrest as accidental discharges or dirty bombs. Wnt (talk) 17:05, 7 February 2011 (UTC)[reply]
I think the most important detail is that nuclear fission power stations produce a small, solid waste-product (which I can admit is fairly hazardous). But unlike any fossil-fuel power plant, the waste is not tons of gas that is just vented to the atmosphere. Nuclear waste can be safely carried to a secure location for disposal, while carbon dioxide, other noxious gases, and the soot and particulate pollutants of fossil fuel plants (especially coal) are just haphazardly thrown to the wind. Nimur (talk) 17:06, 7 February 2011 (UTC)[reply]
Yes, this is a very important thing to keep in mind. Containment is much easier for solids than gasses. SemanticMantis (talk) 21:21, 7 February 2011 (UTC)[reply]
An appropriate analogy is "Nuclear power is to coal power as air travel is to car travel". As in, air travel is orders of magnitude safer than car travel, and yet when we have plane crashes they are Major Events, whereas someone dies in a car crash so often that no one notices (someone, right now in fact, is probably dying in a horrific car crash). In the same way, nuclear is far and away safer and more environmentally friendly than coal, but coal's effects are more insidious, while nuclear's are more disastrous. Thus, we overemphasize the danger of nuclear and underemphasize that of coal, much as plane crashes are big news, and car crashes are ignored. --Jayron32 22:07, 7 February 2011 (UTC)[reply]
There are several problems with nuclear energy. Apart from the fact that we have not yet found a proven solution for long-term treatment and/or storage of nuclear waste, the overall CO2 footprint is not that small. The power plant itself does not emit much CO2, but mining, transporting, and refining uranium is, at the moment, fairly CO2 intensive. A study a few years back found that nuclear power plants are better than coal (big challenge...), but slightly worse than natural gas in total CO2 production. The major snag for nuclear is, however, that it does not scale. Even now, people are not happy with Iran and North Korea expanding their nuclear capabilities. And I really don't want to see 10 reactors each in Nigeria, Eritrea, Colombia, Burma, Syria and Yemen. It enormously increases the risk of proliferation, and it's very uncertain if politically unstable countries can operate nuclear plants safely and securely. If we do not develop an energy solution with global reach, fossil fuel burning will simply shift to other countries. --Stephan Schulz (talk) 23:53, 7 February 2011 (UTC)[reply]
I would just like to note that theoretically the reactor problem does not have to be a big proliferation problem. There are types of reactors that are relatively proliferation resistant and would require really a lot of obvious effort to make a weapon out of. Extracting plutonium from spent fuel is not something that can be done in your basement.
The real proliferation problem is the fuel cycle. If you permit enrichment, then the proliferation problem is huge. If you permit recycling, it is not only monumentally huge, but you also raise the very real possibility that material can be stolen. (The "material unaccounted for" at a recycling plant the size of Rokkasho Reprocessing Plant, for example, is around 50 kg a year or so. That means that there is really no way for them to know whether a missing bomb's worth of plutonium is just lost in the pipes somewhere, or has been smuggled out by an employee.) --Mr.98 (talk) 01:04, 8 February 2011 (UTC)[reply]
Don't forget that in assessing the carbon footprint of a power station you need to do a life-cycle analysis from construction to decommissioning, not just a snapshot of consumption in normal use. Nuclear power stations are made of many tons of concrete, and manufacture of cement requires a very high energy input. Itsmejudith (talk) 12:29, 8 February 2011 (UTC)[reply]
Somehow that logic doesn't seem to add up. If a nuclear power plant really needed that much fossil fuel input, then the energy prices from those plants wouldn't be competitive in the market place. We know that nuclear power is fairly competitive. Dauto (talk) 16:11, 8 February 2011 (UTC)[reply]
"We know that nuclear power is fairly competitive" - um, yes, once the government provides heavily subsidised risk insurance and waste disposal, not to mention the fact that governments underwrote most of the R&D to begin with. I don't think nuclear has ever competed fairly in a open market. --Stephan Schulz (talk) 21:28, 8 February 2011 (UTC)[reply]
Neither has coal, really. And I'm speaking as a Kentuckian. — The Hand That Feeds You:Bite
It's hard to get raw economic numbers that are unbiased by politics on any infrastructure project, especially if the order-of-magnitude investment is going to be a few billion dollars. Thorough study of the reported costs will require parsing through thousands of analyses of construction, operation, labor, and technology costs; and it's naive to assume that a straight answer is available from every single contractor, who will act in good faith, provide correct numbers, and act selflessly to make sure the best technology wins (irrespective of individual profit potential). That's why the issue is so murky. For example, I hear the "concrete carbon footprint argument" a lot. I don't think nuclear plants require that much concrete, proportional to the energy they deliver. Remember: we're talking about an energy production system that can power an entire city for a thirty to fifty year lifespan. How much electricity and gasoline does one concrete factory use up, even accounting for all the embedded energy inputs in the mining and transportation? I'm astonished at the assertion that the amount of fossil fuel required to construct one building is even relevant. How can that amount of energy possibly compare to the amount of coal that is not burned as a result - literally an entire city's worth of fossil fuel, over a time-span of decades? A nuclear plant, at full capacity, produces about a gigawatt - that's one billion joules every second. Nimur (talk) 00:06, 9 February 2011 (UTC)[reply]
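A back-of-envelope version of that last point. The 1 GW output and multi-decade lifespan are from the post above; the coal energy density and coal-plant efficiency below are assumed round numbers, not sourced figures:

```python
# Back-of-envelope: how much coal a 1 GW plant's electrical output corresponds
# to over a 40-year lifespan. Coal energy density and plant efficiency are
# assumed round numbers.
power = 1e9                      # plant output, W (1 GW = 1e9 J/s, as stated above)
seconds = 40 * 365.25 * 86_400   # 40 years, in seconds
energy = power * seconds         # total electrical energy delivered, J

coal_energy = 24e6  # assumed thermal energy per kg of coal, J/kg
efficiency = 0.35   # assumed coal-plant thermal-to-electric efficiency

coal_mass = energy / (coal_energy * efficiency)  # kg of coal for the same output
print(f"{energy:.2e} J delivered over 40 years")
print(f"roughly {coal_mass / 1e9:.0f} million tonnes of coal equivalent")
```

Even with generous error bars on the assumptions, the coal not burned dwarfs the one-time energy cost of the concrete and steel in a single plant, which is the point being argued above.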
(EC) In the history of nuclear power plant operation, the radioactivity released to the environment has been at trace levels, except for Chernobyl type accidents. Coal contains Uranium and Thorium, which are released up the smokestack when the coal burns. See a paper from Oak Ridge National Laboratory, a nuclear think tank, which has a graph indicating about 12000 metric tons of Uranium and 3000 metric tons of Thorium released into the atmosphere from coal combustion in 2000. It says people living downwind from a coal plant are exposed to more radioactivity from the combustion of coal than if they lived downwind from a nuclear plant operated per regulations (again, excepting major accidents). It says the radioactive elements released from coal combustion far exceed the radioactive fuel used in nuclear reactors. Edison (talk) 16:16, 8 February 2011 (UTC)[reply]

density

A NEEDLE IS IMMERSED IN WATER BUT A HEAVY SHIP IS NOT. WHY —Preceding unsigned comment added by 122.179.85.65 (talk) 15:29, 7 February 2011 (UTC)[reply]

From Buoyancy: "A ship will float even though it may be made of steel (which is much denser than water), because it encloses a volume of air (which is much less dense than water), and the resulting shape has an average density less than that of the water". -- Q Chris (talk) 15:36, 7 February 2011 (UTC)[reply]
(EC) See Archimedes' principle, which should help. Mikenorton (talk) 15:37, 7 February 2011 (UTC)[reply]
Actually, a needle can also float, even though it's denser than water, due to a different reason than why a ship can float, namely surface tension. Red Act (talk)
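Both effects can be put in rough numbers. The hull figures and needle size below are made-up but plausible values; the steel density, water density, and surface tension are standard textbook constants:

```python
import math

# 1) Ship: a hollow steel hull floats when its *average* density
#    (steel plus enclosed air) is below that of water.
steel_mass = 10_000e3    # kg of steel in the hull (made-up value, ~10,000 t)
enclosed_vol = 30_000.0  # m^3 enclosed by the hull, mostly air (made-up value)
rho_water = 1000.0       # kg/m^3

avg_density = steel_mass / enclosed_vol
print(avg_density, "kg/m^3:", "floats" if avg_density < rho_water else "sinks")

# 2) Needle: weight per metre vs. the upward pull surface tension can supply
#    along the two contact lines on either side of the needle.
rho_steel = 7850.0  # kg/m^3
r = 0.5e-3          # needle radius, m (1 mm diameter, made-up)
gamma = 0.0728      # surface tension of water at ~20 C, N/m
g = 9.81            # m/s^2

weight_per_m = rho_steel * math.pi * r**2 * g  # N per metre of needle
support_per_m = 2 * gamma                      # N per metre from surface tension
print(weight_per_m, "<", support_per_m, ": surface tension can hold it up")
```

With these numbers the hull averages out far less dense than water, while the needle's weight per metre comes in under what surface tension can supply, illustrating that the two objects float for entirely different reasons.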

Motor oil LD50

What is the LD50 of 10W40 motor oil? I saw the James Bond film Quantum of Solace, where a guy was thinking about drinking it, and wondered how toxic the stuff actually was. Googlemeister (talk) 16:23, 7 February 2011 (UTC)[reply]

You need to improve your google skills! This says the LD50 varies with the route of exposure. Inhalation: 2.18 to >4 mg/l (rat); dermal: 2 g/kg (rabbit); oral: 5 ml/kg (rat). SmartSE (talk) 16:34, 7 February 2011 (UTC)[reply]
As an indication of how toxic it is compared to other oils, though, they all seem to have similar LD50s. SmartSE (talk) 16:37, 7 February 2011 (UTC)[reply]
But does an LD50 of 5 ml/kg on a rat mean it is the same 5ml/kg on a human? I mean the metabolism is very different between a human and a rat and the digestive systems aren't quite the same etc... Googlemeister (talk) 17:34, 7 February 2011 (UTC)[reply]
Well I doubt anyone is going to feed engine oil to people and see how much it takes to kill them... Rat and human metabolism is very similar and tbh I doubt that engine oil is going to be metabolised anyway. SmartSE (talk) 17:54, 7 February 2011 (UTC)[reply]
Fair enough, but what specifically in the motor oil is toxic to people and how does it cause the harm? Does it paralyze muscles, or does it block neural signals or does it cause ruptures in the digestive tract or what? Googlemeister (talk) 20:01, 7 February 2011 (UTC)[reply]
"Symptoms of motor oil ingestion are depression, lethargy, paralysis of hind legs, staggering, vomiting and coma, in very severe cases." They're talking about dogs by the way. SpinningSpark 20:29, 7 February 2011 (UTC)[reply]
Sounds like a muscle relaxant then. Googlemeister (talk) 21:54, 7 February 2011 (UTC)[reply]
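For what the rat-to-human question above looks like numerically: a crude per-kg extrapolation, and the body-surface-area conversion used in FDA dose-scaling guidance (Km factors of 6 for rat and 37 for adult human are the commonly quoted values). This is arithmetic illustration only, not a toxicity assessment:

```python
# Illustrative only: two common ways to extrapolate a rat oral LD50 to a human.
# The Km factors are the commonly quoted FDA guidance values; none of this
# is a real toxicity assessment.
ld50_rat = 5.0     # ml/kg, oral LD50 in rat (from the MSDS cited above)
human_mass = 70.0  # kg, assumed adult

# Naive scaling: same dose per kg of body weight.
naive_dose = ld50_rat * human_mass
print(f"naive per-kg scaling: ~{naive_dose:.0f} ml")

# Body-surface-area scaling: human-equivalent dose = animal dose * Km_animal / Km_human.
km_rat, km_human = 6, 37
hed = ld50_rat * km_rat / km_human  # ml/kg, human-equivalent dose
print(f"BSA-adjusted: ~{hed * human_mass:.0f} ml")
```

The two methods differ by a factor of about six, which is why a rat LD50 is only a rough indicator for humans, as noted above.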

Veinal sins

The site where blood is removed and replaced in the veins and arteries, for dialysis, seems to deteriorate over time, such that they need to periodically change the location. What's the name and cause of this problem ? StuRat (talk) 18:04, 7 February 2011 (UTC)[reply]

Scar formation. Roger (talk) 18:23, 7 February 2011 (UTC)[reply]
Right, but what about dialysis causes the veins and arteries to scar ? Is it due to pressure, infections, some type of chemical imbalance, or something else ? StuRat (talk) 18:48, 7 February 2011 (UTC)[reply]
I presume it's just due to the repeated breaking of the vein wall by the insertion of the needle (or whatever the proper word is) and the resultant healing. I have a similar problem as a result of having donated blood many times: the staff sometimes have trouble finding a suitable spot to stab. AndrewWTaylor (talk) 18:54, 7 February 2011 (UTC)[reply]
In patients that are undergoing hemodialysis, there is usually a surgically created arteriovenous fistula or shunt. The various alternatives and some of the complications of this process are discussed at Hemodialysis#Access. --- Medical geneticist (talk) 19:16, 7 February 2011 (UTC)[reply]
Thanks. That said "The catheter is a foreign body in the vein and often provokes an inflammatory reaction in the vein wall". Doesn't medical science have any catheter material which doesn't provoke an inflammatory reaction ? StuRat (talk) 23:18, 7 February 2011 (UTC)[reply]

Dicrocoelium dendriticum or Dicrocoelium dentriticum?

Is it: Dicrocoelium dendriticum or Dicrocoelium dentriticum? I see sources for both. Is one simply a typo, or are both valid? Ariel. (talk) 21:59, 7 February 2011 (UTC)[reply]

Google scholar shows sources using both, but over ten times as many using the d, and many that use the t appear to be from non-native-English speakers. In short, the t form appears to be a relatively common error, but an error nonetheless. Looie496 (talk) 22:08, 7 February 2011 (UTC)[reply]

Chromatid is to sister chromatid as chromosome is to what?

What's the associated chromosome that every chromosome typically has in humans, among others? Not sister chromosome? --90.213.111.224 (talk) 22:21, 7 February 2011 (UTC)[reply]

Homologous chromosome? Not sure I understand what particular "association" you have in mind. DMacks (talk) 22:39, 7 February 2011 (UTC)[reply]

Life span of cut dogwood

I have some dogwood branches from a florist that I bought about a week ago. They've been sitting in my car (I live in Canada, it's pretty much been below freezing thus far). If I put them in a bucket of water now will they be okay? Or has it been too long and the branches are likely dead? Thanks. --生け 23:23, 7 February 2011 (UTC)[reply]

There doesn't seem to be much cost to trying. One suggestion: cut the last bit of the branches off, under water. This will hopefully remove the dead, dried-out portion without introducing air bubbles into the circulation system of the branches, which would tend to prevent the uptake of water. StuRat (talk) 23:36, 7 February 2011 (UTC)[reply]
My grandmother could grow plants from cuttings, such as a rose bush from cut flowers. Perhaps one has to be something of a "plant witch" to achieve such success. Edison (talk) 03:23, 8 February 2011 (UTC)[reply]
Were they in leaf, in bloom, or dormant? If they were already dormant, they still may well take root and sprout. As StuRat points out, it's a low-risk experiment. Good luck! SemanticMantis (talk) 04:17, 8 February 2011 (UTC)[reply]
They're..."dormant", I guess, since there are no leaves or flowers on them. I've placed them in water...hard to tell if it's having any effect though. :P --生け 22:27, 8 February 2011 (UTC)[reply]

February 8

Pinhole

A single pinhole lens provides a single, dim image with an infinite field of focus. So:

1) Can multiple pinhole images be combined to create a single, bright image of infinite focus, using either mirrors to combine the images or electronic image capture from each pinhole to combine them ?

2) Or, if combining all those slightly different images leads to a fuzzy image, could adaptive optics be used to restore the composite image to a perfect focus ? StuRat (talk) 04:05, 8 February 2011 (UTC)[reply]

Pinhole lenses are pretty much poor-man's optics. If you're already going to use mirrors/lenses/digital processing to resolve an image, it seems that the pinhole becomes pragmatically worthless. I suppose you could technically do all of the above, but it strikes me as rather pointless to use a lens or mirror to focus multiple pinhole images into a brighter image where the lens or mirror itself would do the job better by itself... --Jayron32 04:11, 8 February 2011 (UTC)[reply]
I think this was a physics question, not an engineering question. Ariel. (talk) 05:34, 8 February 2011 (UTC)[reply]
Also, there are ideas to image distant planets using an enormous pinhole camera. http://www.universetoday.com/9934/biggest-pinhole-camera-ever/ Ariel. (talk) 05:38, 8 February 2011 (UTC)[reply]
Actually, despite the article's name, I don't think that's really a pinhole camera. It's basically an artificial eclipse, which screens out the parent star so that the planet can be seen with a telescope. It looks to me like it is not a device meant to create an image somewhere based on the physical separation of rays with different angles as they pass through the pinhole. Wnt (talk) 01:01, 9 February 2011 (UTC)[reply]
The multiple pinholes would have different points of view, so their combined image via mirrors would inevitably be blurred, if there is anything in the foreground and if the pinholes are any distance apart. And a pinhole image is inherently blurry, not at all of "infinite focus." It is more of "no focus." It's just that nearby and distant objects are equally blurry, with the blur circles related to the size of the pinhole. A large pinhole lets in more light but has more blur. A very small pinhole has more diffraction. There is an optimum, as discussed in books by Ansel Adams. I have made pinhole images on photographic paper, then scanned the (negative) image, reversed it via Photoshop, and sharpened it via Photoshop, resulting in a very pleasing image. Photographic paper or film does a nice job of integrating photons over an extended exposure. Edison (talk) 04:49, 8 February 2011 (UTC)[reply]
I think there may be some miscommunication here. In the limit of a hole of width zero, StuRat is quite correct that a pinhole image has perfect focus regardless of the location of the target (but also zero brightness). And it is indeed possible to reconstruct a scene from an image formed using multiple pinholes -- there is a substantial literature on "multiple-pinhole image reconstruction". The technique is particularly useful for images formed using types of radiation for which no good lenses or mirrors exist. Looie496 (talk) 05:17, 8 February 2011 (UTC)[reply]
If the goal here is to create an image with infinite depth of field, I think a simpler solution would be to use a traditional camera and focus stacking. --Daniel 05:21, 8 February 2011 (UTC)[reply]
Edison is correct. In the limit of zero width the image becomes blurry. Dauto (talk) 05:28, 8 February 2011 (UTC)[reply]
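The blur-versus-diffraction trade-off Edison describes can be put into rough numbers. A commonly quoted rule of thumb for the optimum pinhole diameter is d = c·√(f·λ), where f is the pinhole-to-film distance, λ the wavelength, and c a constant between roughly 1.5 and 2 depending on the optimality criterion. The sketch below is illustrative only; the value c = 1.9 and the example distance are assumptions, not figures from this discussion:

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9, c=1.9):
    """Approximate optimal pinhole diameter (m) for a given
    pinhole-to-film distance, balancing geometric blur against
    diffraction. c is an empirical constant (~1.5 to ~2 in the
    literature); 550 nm is green light, near peak eye sensitivity."""
    return c * math.sqrt(focal_length_m * wavelength_m)

# For a 100 mm pinhole-to-paper distance and green light:
d = optimal_pinhole_diameter(0.1)
print(d * 1000)  # diameter in millimetres, a bit under half a millimetre
```

Larger holes than this are dominated by geometric blur; smaller ones by diffraction, matching the qualitative description above.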

Anti-Histamines + Zantac

Is there a reason why no drug company has marketed an anti-histamine such as Allegra (H1 blocker) + Zantac (H2 blocker) in one pill? I have patients with serious hives taking it and I hear it works much better than an anti-histamine alone. Would Zantac alone help also? —Preceding unsigned comment added by 76.169.33.234 (talk) 08:05, 8 February 2011 (UTC)[reply]

We do not give medical advice. And, yeah, that's it. --Ouro (blah blah) 12:30, 8 February 2011 (UTC)[reply]
Perhaps drug companies think that such pills will become cheaper than the price of the two separate pills. Count Iblis 14:11, 8 February 2011 (UTC)[reply]
I think we can answer parts of this question without violating the med advice restriction.
  • The first question "Is there a reason why no drug company has marketed..." is probably simply because the combination of H1 and H2 blockers hasn't been proven to be efficacious and the FDA (presuming this is referring to the US) has not approved the use of H2 antagonists for this purpose. If a well designed randomized controlled trial demonstrated that a combination drug was superior to the single drug standard of care, it would probably be marketed quickly to replace the old stand-bys. And I doubt that it would be cheaper -- they would come up with a fancy name like Zanyryx XR (lots of x's, y's, and z's means it's a really good drug), package it up in a nice box, spend gazillions on marketing to convince the general public to "ask your doctor how Zanyryx can help you" and quadruple the price of the two individual components. Who wants to take two separate pills when you can take just one!
  • With regard to the last question "Would Zantac alone help also?" we can't really say for any given patient whether ranitidine would be helpful, but for general information about this topic we can point the OP towards references ([10] and [11]) the latter of which says: "An H2-antihistamine administered concurrently with an H1-antihistamine may modestly enhance relief of itching and wheal formation in some patients with urticaria refractory to treatment with an H1-antihistamine alone. The available evidence does not justify the routine addition of H2-antihistamine treatment to H1-antihistamine treatment."
Does that help? --- Medical geneticist (talk) 14:45, 8 February 2011 (UTC)[reply]
My guess is that the biggest reason you don't see an Allegra/Zantac combo pill (at least in the US) is that Allegra was approved for treatment of allergies, and Zantac was approved for treatment of stomach issues. Although doctors are free to prescribe drugs for off-label use once they are approved, the FDA has some pretty stringent rules on marketing drugs for off-label use. Who would you "officially" sell the combo pill to? People with both allergies and acid reflux? Combo pills like Ezetimibe/simvastatin (Vytorin) and amoxicillin/clavulanic acid (Augmentin) are sold because both drugs treat the same condition. A pill for treatment of two different conditions doesn't have as large a market (and you can't sell it to treat a single condition, because the drugs aren't approved to treat a single condition). Even if the combo does better than one drug alone, the FDA (or insert-country-specific-regulatory-agency) wouldn't approve it until you did a clinical trial to show that it did. That's a massive expenditure of cash for drugs which are off patent, even if prior approval means you can skip most of the safety stages. -- 174.24.195.38 (talk) 16:22, 8 February 2011 (UTC)[reply]

What causes you to want to stretch in the morning?

What causes you to want to stretch in the morning? It seems as though it is almost an involuntary action, or at least that you would be very silly to resist the urge when it comes. I am looking for a reason on some molecular or cellular level if possible.

I asked some friends about this and we came to the educated guess that it had something to do with your muscles/other tissues wanting more oxygen, so they output some chemical that sends a signal to your brain to stretch, making more blood flow to your muscles....

Is this even remotely accurate? What are the processes involved, and why does your body force you to stretch?

137.81.116.186 14:29, 8 February 2011 (UTC) —Preceding unsigned comment added by 137.81.116.186 (talk) [reply]

I can't comment at the molecular level, but think the reason for it is to prevent muscle injuries. Just like you should stretch before running, you should also do that before using muscles for the first time each day. One thought is that it may be simply to increase the temperature, as muscles, tendons, etc., become more flexible and less likely to tear at higher temps. Locations far from the body core, like say the Achilles tendon, are perhaps most likely to be cold in the morning (especially if your feet stick out from under the blankets), and thus in most need of a few warm-up exercises. StuRat 16:58, 8 February 2011 (UTC)[reply]
As discussed in Stretching#Research and controversy, the research is unclear about whether a few minutes of static stretching (the typical kind) before activities like running actually does prevent any injuries, and pre-event stretching may even have a negative impact on performance. Stretching matters more in sports where you need a very large range of motion, but running isn't one of those. Others suggest that dynamic stretching may have more of a benefit, but there isn't much research on that either way. It is however good to stretch or participate in other cool-down activities after exercise (including running), because it helps the muscles relax gradually and that is clearly found to help prevent injury. What any of this means for the early morning stretch, I don't really know. Dragons flight (talk) 21:00, 8 February 2011 (UTC)[reply]
There is a stretching "controversy", lol. I don't know why but I find that really funny.. Forgive me I haven't had my coffee yet. Vespine (talk) 21:37, 8 February 2011 (UTC)[reply]
It's a combination of two things, first the fact that joints and muscles tend to stiffen when they are not in use due to the formation of adhesions between muscle fibers, second the properties of circadian rhythms. During the nighttime hours, body temperature drops and a variety of injury-repair mechanisms are activated -- both of these tend to promote formation of muscle and joint adhesions that are broken by stretching. Looie496 (talk) 22:54, 8 February 2011 (UTC)[reply]

A rifle with a barrel made of heat resistant ceramic

Is it possible to make something like this? ScienceApe (talk) 15:01, 8 February 2011 (UTC)[reply]

Yes. It is possible. However, it is not as good as a regular barrel. Instead, it is common to have a regular barrel with a ceramic lining or coating. -- kainaw 15:04, 8 February 2011 (UTC)[reply]
Ceramics are generally too brittle to absorb shock loads without damage. The elasticity of the types of steel used for rifle barrels is what allows them to contain the shock (abrupt change in pressure) of firing. I would like to see some evidence of User:Kainaw's claim that ceramic coatings or linings are common. I've seen and handled many different rifles in my life and I have yet to lay eyes on one with such a lining or coating. The overwhelming majority of rifle barrels are just steel with various finishes ranging from blueing, Parkerizing, nitriding, case hardening to plain and simple enamel paint. Roger (talk) 16:15, 8 February 2011 (UTC)[reply]
Something I know next to nothing about, but I wonder about that too from my searches. These 2 refs from 2003-2004 suggest it's an area of active research for the US army but not currently very successful [12] [13]. Things may have changed a lot since then, but if not, I wonder how common it can be if even the army isn't doing it. It seems some companies offer to coat existing guns, e.g. [14] [15], but this doesn't sound like something which would be common. Incidentally, from some of those refs I think chrome-lined barrels may be somewhat common. Nil Einne (talk) 16:23, 8 February 2011 (UTC)[reply]
(ECx3)Ceramics can be resistant to high temperature, and can have great compression strength. Do ceramics have the tensile strength to prevent the barrel splitting open from the outward directed pressure, without having a ridiculous wall thickness making the weapon non-portable? Aren't they generally brittle? Edison (talk) 16:25, 8 February 2011 (UTC)[reply]
As a general rule, ceramics are not good at tensile strength. Some composite materials like carbon fiber might be able to give greater tensile strength than ceramics, but they might still be brittle enough to shatter in that application.

Hot body in vacuum

Suppose a solid sphere of metal of 1 sq.m (choose a convenient substance) of 100 deg.C perfectly isolated by any incoming energy in perfect vacuum. How much time it will take to get to the lowest temperature it can reach? --M121121121 (talk) 20:53, 8 February 2011 (UTC)[reply]

This sounds suspiciously like a homework question. No one here at Wikipedia is going to answer the question for you, but if you want some background on the concepts you need to solve it, see black body and thermal radiation. --Jayron32 21:03, 8 February 2011 (UTC)[reply]
Forever. Assuming it were perfectly isolated (which is impossible, but we can assume it anyway), it would radiate energy away in ever decreasing amounts as its temperature decays towards but never actually reaches 0 K. For details, see Stefan-Boltzmann equation. Dragons flight (talk) 21:06, 8 February 2011 (UTC)[reply]

It's not homework, but you're right. Who/where should I ask for this calculation? Please advise. --M121121121 (talk) 21:24, 8 February 2011 (UTC)[reply]

The appropriate equation to apply is the Stefan–Boltzmann law, which defines the rate of heat loss from a body. It's worth noting that if the object is in the "empty vacuum of space", the relevant "cold sink" is the cosmic microwave background, at around 3 kelvins. Nimur (talk) 21:30, 8 February 2011 (UTC)[reply]
You said it is "perfectly isolated by any incoming energy":
A) Assuming "by" means "from", that would mean it would also be perfectly isolated as far as radiating out energy. If there was such a perfect thermos, it would stay the initial temperature forever.
B) If, however, we assume that it's the only source of energy in the universe, then it would radiate energy but not receive any back. In that case the temperature would decrease at a decreasing rate, but would still never quite reach absolute zero.
C) If we assume it's in the real universe, but far from any source of energy, as in a galactic void, then the temperature would again decline at a decreasing rate, gradually approaching the average temperature in that void. StuRat (talk) 22:50, 8 February 2011 (UTC)[reply]
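To make case (B) above concrete, here is a minimal numerical sketch of the Stefan–Boltzmann law applied to the question. The material properties (an iron-like sphere with 1 m² of surface area, treated as a perfect black body) and the simple Euler time-stepping are illustrative assumptions, not part of the original question:

```python
import math

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
AREA = 1.0         # surface area, m^2 (from the question)
EMISSIVITY = 1.0   # ideal black body (assumption)

# Sphere with 1 m^2 of surface: r = sqrt(A / (4*pi))
radius = math.sqrt(AREA / (4 * math.pi))
volume = (4 / 3) * math.pi * radius**3
RHO_IRON = 7870.0  # density, kg/m^3 (assumed iron-like metal)
C_IRON = 450.0     # specific heat, J/(kg K)
heat_capacity = RHO_IRON * volume * C_IRON  # total heat capacity, J/K

def cool(T0, t_end, dt=10.0):
    """Euler-integrate dT/dt = -emissivity*sigma*A*T^4 / C,
    i.e. radiative cooling into empty space with nothing radiating back."""
    T, t = T0, 0.0
    while t < t_end:
        T -= EMISSIVITY * SIGMA * AREA * T**4 / heat_capacity * dt
        t += dt
    return T

T0 = 373.15                       # 100 deg C in kelvins
print(cool(T0, 3600))             # temperature after one hour
print(cool(T0, 30 * 24 * 3600))   # after a month: lower still, but never 0 K
```

Running it shows the temperature falling quickly at first and ever more slowly as T drops (the radiated power scales as T⁴), consistent with the answers above: it approaches absolute zero asymptotically but never reaches it.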

Homemade cavendish experiment .

I do enjoy this fascinating webpage. I like seeing the videos of the "mass attracting mass" but I am still a little skeptical.

http://www.fourmilab.ch/gravitation/foobar/

It still seems a little strange to me to actually see gravity interacting with such homemade apparatus. Are the masses too small? Is there some kind of experimental error going on? Perhaps an electric force is causing this?

What do you think? —Preceding unsigned comment added by 92.17.89.69 (talk) 21:09, 8 February 2011 (UTC)[reply]

Well, the Cavendish experiment is well known; and you can, using simple Newtonian physics equations, estimate what the force should be and decide if the magnitude of errors introduced by, say, turbulent air currents or electric forces are relevant. You can also eliminate (or at least, reduce) electrostatic effects by grounding all involved objects with electrically conductive wire. I have to say, the experiment seems pretty fantastic! But, as a firm believer in Gravity, I am at a loss to come up with a more plausible reason why the balance would torque in this way. (Though I admit, gravity is a bit implausible, but I observe it daily nonetheless). Nimur (talk) 21:39, 8 February 2011 (UTC)[reply]
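As a rough sanity check on what a homemade torsion balance is trying to detect, here is a minimal Newtonian estimate. The masses and separation below are guessed example values for a tabletop rig, not figures taken from the linked page:

```python
G = 6.674e-11  # gravitational constant, N m^2 kg^-2 (standard value)

def gravitational_force(m1, m2, r):
    """Newton's law of universal gravitation: F = G*m1*m2 / r^2,
    treating both masses as point masses at separation r (metres)."""
    return G * m1 * m2 / r**2

# e.g. two 1 kg masses with centers 5 cm apart (assumed values)
F = gravitational_force(1.0, 1.0, 0.05)
print(F)  # on the order of 1e-8 newtons
```

A force that small is far below anything you could feel or weigh directly, which is why the experiment needs a delicate torsion balance and why spurious effects (air currents, static charge) have to be estimated and ruled out, as described above.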
Do objects tend to drift like this on say the ISS? Say they left a bowling ball hovering, would it accelerate towards the center of gravity on the space station? —Preceding unsigned comment added by 92.17.89.69 (talk) 21:59, 8 February 2011 (UTC)[reply]
Center of Gravity? An object floating free in an orbiting vessel would be affected by the sum of the masses around it, relative speed and the air currents of the air conditioning system. Also, it may be deflected en route by any skittles it meets on the way. The article on Micro-g environment may be of interest too.--Aspro (talk) 22:15, 8 February 2011 (UTC)[reply]
(ec) The ISS, and other manned spacecraft, are in "microgravity" - which is to say, not quite freefall. There's a small but measurable net force on the spacecraft at almost all times, caused by the acceleration due to non-ideal effects like gas drag and orbit correction maneuvering (stationkeeping). Stanford's Gravity Probe B might be interesting, though - an entire spacecraft was launched just to measure the gravitational nonlinearity under ideal conditions. Preliminary results were announced a couple of years ago (see this press release from April 2007's APS Plenary); unfortunately, the mission wasn't well-received when the data turned out to be noisier than expected. Nimur (talk) 22:18, 8 February 2011 (UTC)[reply]

The straw that stirs the drink

Whenever I pour a carbonated beverage into a glass and stick a straw in it, CO2 bubbles invariably attach to the straw, then lift it to the point where it seems like the straw should fall out of the glass. However, I have never had a straw actually fall out. What provides the force keeping the straw in place? Is there a length beyond which the straw will tumble? Hemoroid Agastordoff (talk) 22:03, 8 February 2011 (UTC)[reply]

I have often had a straw fall out. Perhaps I fill my glass fuller? 86.162.68.36 (talk) 22:17, 8 February 2011 (UTC)[reply]

clouds

why are clouds white? 71.2.42.116 (talk) 22:11, 8 February 2011 (UTC)[reply]

Have you discovered our search box? It just so happens, we have an article section on the colour of clouds.--Aspro (talk) 22:23, 8 February 2011 (UTC)[reply]

Sanitary drain(industrial sewer) sizing and design discharge flow rate estimate

how do I estimate discharge flow rate (L/s) based on the number of fixture units?--165.228.109.94 (talk) 22:21, 8 February 2011 (UTC)[reply]

Kilos instead of larger units

Why are planet and star masses commonly indicated in kilograms instead of handier larger units, such as gigatonnes or teratonnes (where no exponentiation would be necessary)? —Preceding unsigned comment added by 89.76.224.253 (talk) 22:28, 8 February 2011 (UTC)[reply]

Actually, most astrophysicists use cgs units - that is, grams - to measure the mass of planets. Why? Because they're dealing with many orders of magnitude during different calculations, so they must use scientific notation anyway. In other words, exponentiation will be required, no matter what, so we might as well use it consistently. (Though Griffiths and others attribute the cgs preference to electrodynamics, where the gauss and the convention of unitary permittivity and permeability of free space require "less writing.") Other physicists, cosmologists in particular, prefer dimensionless physical constants, so they prefer the ("horribly inconvenient") SI units normalized by the values of fundamental constants, called "Planck units." As physicists, when we study planets and space science, we try to isolate any "biases" we might have in our system of units that are historical artifacts of measuring the size of the Earth. SI and cgs are not entirely guilt-free in that respect, as the meter is historically defined as a ratio to the earth's circumference. But in any case, if you're going to measure a planet's mass, it's still going to be "huge", whether you measure it in grams, tons, or solar mass units; and if you measure two planets, chances are you'll need scientific notation in any unit system. Nimur (talk) 22:45, 8 February 2011 (UTC)[reply]
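The point that the exponent never goes away can be illustrated directly. The snippet below prints the Earth's mass in several unit systems; the mass values are standard rounded reference figures, assumed here for illustration:

```python
# Standard rounded reference values (assumed for illustration)
M_EARTH_KG = 5.972e24  # mass of the Earth, kg
M_SUN_KG = 1.989e30    # mass of the Sun, kg

print(f"{M_EARTH_KG:.3e} kg")                    # SI
print(f"{M_EARTH_KG * 1000:.3e} g")              # cgs: a bigger exponent
print(f"{M_EARTH_KG / 1e12:.3e} Gt")             # gigatonnes (1 Gt = 1e12 kg):
                                                 # still needs an exponent
print(f"{M_EARTH_KG / M_SUN_KG:.3e} solar masses")  # tiny exponent instead
```

Whichever unit is chosen, the number is either huge or tiny, so scientific notation is required regardless; the choice of kg, g, or solar masses is convention rather than convenience.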

Help understanding Parsec illustration

Can someone chime in at Talk:Parsec#Dots in image? I didn't understand the image used in the article and would appreciate some help in that regard. Thanks, Waldir talk 22:54, 8 February 2011 (UTC)[reply]

Trampolining gives you muscles?

An acquaintance of mine has an amazing body. He has big legs, a six pack, good arms, and generally good upper body and back. When I asked him how many times a week he worked out, he replied that he didn't work out at all. He said he uses his mini trampoline every day. Can this be true? Google shows many health benefits of trampolining but it doesn't go into detail about muscle growth. I can understand how it might get you big legs, but a six pack!?! Surely that would be reason enough for every guy anywhere to get one? Surely such a thing would be widely known and not such a well kept secret? I asked him another day whether he really meant a mini trampoline and he replied affirmatively. I mean, you can't ever do somersaults on those! So how do you explain the good arms and upper body? Edit: oh, and by 'big' I may be exaggerating slightly. He doesn't look like a body builder, but he does have a very impressive slim athletic physique. —Preceding unsigned comment added by 91.49.33.244 (talk) 23:32, 8 February 2011 (UTC)[reply]

What he didn't tell you is about all the steroids he takes. Looie496 (talk) 01:32, 9 February 2011 (UTC)[reply]

February 9

What's a good nonpolar organic solvent?

Preferably not carcinogenic. --75.15.161.185 (talk) 00:26, 9 February 2011 (UTC)[reply]

Can you be a bit more specific about your needs? How about cyclohexane? TenOfAllTrades(talk) 00:53, 9 February 2011 (UTC)[reply]
It should be able to dissolve lipids but not other compounds in food. --75.15.161.185 (talk) 02:32, 9 February 2011 (UTC)[reply]