Wikipedia:Reference desk/Science: Difference between revisions
→Time over distance...: Taking a difference doesn't necessarily require the full precision...
:::::[http://einstein.stanford.edu/RESOURCES/KACST_docs/KACSTlectures/KACST-IntroPhysicsInSpace..pdf Here] is a document by Dr. John Mester, a physics prof at your school, which refers to the Pound-Rebka experiment. So you could stop by his office and ask him about Pound-Rebka, if you're still dubious about it. [[User:Red Act|Red Act]] ([[User talk:Red Act|talk]]) 07:59, 13 October 2009 (UTC)
::::::I'll follow up on that lead. As I disclaimed earlier, I'm not an expert in this field - and the experiment did get published in a reputable peer-reviewed journal - so as much as I flail around shouting "dubious", my opinion is only worth so much. The [[Gravity Probe B]] mission and [[LIGO]], both also seeking to measure relativistic gravitational effects, suffer from tiny signal amongst huge noise. I think the ultimate answer here is that the OP's suggestion of "minutes per hour" is ''very'' far from reality; the predicted changes should be [[femtosecond]]s - which cannot be measured by even the most accurate atomic clocks. To measure these, it seems necessary to build a complex custom "device" and extrapolate a time dilation via a frequency shift. [[User:Nimur|Nimur]] ([[User talk:Nimur|talk]]) 11:47, 13 October 2009 (UTC)
:::::I'd be really cautious about suggesting that the Pound-Rebka device needs 64 bits of precision to make a successful measurement. Remember, the experimenters don't need to measure the frequency ''from scratch''. They just need an apparatus sensitive to minor differences in frequency — which the universe handily provides in the form of crystalline iron-57. It's the difference between measuring elapsed milliseconds between two events on the bench (trivial) and attempting to measure elapsed milliseconds since the start of the universe, twice, and taking a difference (ludicrous). [[User:TenOfAllTrades|TenOfAllTrades]]([[User_talk:TenOfAllTrades|talk]]) 12:38, 13 October 2009 (UTC)
= October 13 =
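TenOfAllTrades's point above — that a difference can be measured far more precisely than either absolute quantity — can be illustrated with 64-bit floating point. This is only a loose analogy to the experiment; the frequency value is illustrative, not the actual Pound-Rebka number:

```python
# A double carries ~16 significant digits. An absolute frequency of order
# 1e18 Hz (illustrative) therefore cannot "hold" a shift far below its
# least significant digit.
f = 3.5e18        # absolute frequency, Hz (illustrative)
df = 3.5          # a 1-part-in-1e18 shift, Hz

# Subtracting two "from scratch" absolute measurements loses the shift:
print((f + df) - f)   # → 0.0 (the shift vanished in rounding)

# Measuring the *difference* directly needs no such absolute precision:
print(df)             # → 3.5 (perfectly representable on its own)
```

The same logic applies on the bench: an apparatus sensitive only to the frequency *difference* sidesteps the absurd precision an absolute measurement would demand.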
Revision as of 12:38, 13 October 2009
Welcome to the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
October 7
Hubble space telescope
I have a doubt about how the Hubble Space Telescope takes pictures of galaxies. Galaxies themselves are light-years in size, and the light emitted from a star or any other object travels in a straight line. If this is true, then the light received by Hubble from a distant object will be only a small proportion of the light emitted from it; with that little light, how do they develop images of galaxies and other huge objects? —Preceding unsigned comment added by Dineshbkumar06 (talk • contribs) 01:53, 7 October 2009 (UTC)
- A very good observation. A galaxy has 100,000,000,000 stars. Sometimes more. Each one produces an average 1,000,000,000 times the lighting of all the daylight falling on all the fields, mountains and oceans in the entire world. Even so, it still takes a light gathering area 100,000 times your eyes' to make those pictures. Sagittarian Milky Way (talk) 02:09, 7 October 2009 (UTC)
- Also, most of Hubble's pictures are actually exposures taken over many hours and even days. The total exposure time for the Ultra Deep Field image was over 11 days. Unlike our eye, which can only capture photons in "real time", digital cameras like the ones in Hubble can "add" the photons up over a long time. Vespine (talk) 06:12, 7 October 2009 (UTC)
- Sort of like having really long persistence of vision ... plus being able to keep your eyes absolutely still for a very long time ... and never blinking ... Gandalf61 (talk) 10:49, 7 October 2009 (UTC)
- We need that persistence of vision, because all the benefits of the larger lens are used just to make the image bigger: galaxies are very far away making them look very very small. Sagittarian Milky Way (talk) 11:47, 7 October 2009 (UTC)
- And a long time ago. →Baseball Bugs What's up, Doc? carrots 11:55, 7 October 2009 (UTC)
- That's not true. The main lens is large in order to capture more light, making the image brighter. The magnification in a telescope is done by the eyepiece and isn't that important. --Tango (talk) 18:06, 7 October 2009 (UTC)
- What I was trying to say was that making the image brighter by enlargening the lens size from human eye to Hubble Space Telescope is about the same as the dimming caused by magnifying it so they cancel out and have no effect. Magnifying that much in the Ultra Deep Field is unavoidable if you want to show any detail at all. Sagittarian Milky Way (talk) 12:21, 8 October 2009 (UTC)
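Sagittarian Milky Way's "100,000 times your eyes'" figure above is easy to check: it is roughly the ratio of aperture areas between Hubble's 2.4 m primary mirror and a dark-adapted pupil (taken here as about 7 mm, a common textbook value, not a figure from this thread):

```python
import math

hubble_diameter = 2.4    # m, Hubble primary mirror
pupil_diameter = 0.007   # m, dark-adapted human pupil (assumed)

def aperture_area(d):
    """Area of a circular aperture of diameter d."""
    return math.pi * (d / 2) ** 2

ratio = aperture_area(hubble_diameter) / aperture_area(pupil_diameter)
print(f"light-gathering ratio: {ratio:,.0f}")  # ≈ 118,000
```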
- To more directly answer your question... yes, light does travel in a straight line (for the most part) but not all of it is perpendicular. Some of it is angled slightly down. It's the same reason you can see a tree or skyscraper that is much much taller than you. ~ Amory (u • t • c) 12:53, 7 October 2009 (UTC)
- Hmmm ? To see a tree or skyscraper that is much much taller than you, you look up. What does "not all of it is perpendicular" mean - perpendicular to what ?? Gandalf61 (talk) 14:10, 7 October 2009 (UTC)
- By perpendicular I guess Amory means "at right angles to Amory's chest", or it could be "at right angles to Amory's face" if Amory looks up by rotating eyeballs but not head. Cuddlyable3 (talk) 19:32, 7 October 2009 (UTC)
- I think maybe I get what the OP is trying to say, this has actually completely blown me away also... Just try to imagine this: A star is a big ball which radiates photons in all directions. Those photons travel away from the star in straight lines (mostly). The inverse square law states that the energy is inversely proportional to the square of the distance; does that roughly equate to the "number of photons" being inversely proportional to the square of the distance? In any case. Say you are looking at our closest neighbour star, 4 light years away, photons from that star are hitting your retina all the time, move to the side one centimeter and photons from that star are still hitting your retina!! So that means that if you make a sphere with a radius of 4 light years with the star in the middle, photons from that star are still passing through every single square inch of that "sphere". For that close star that's about 200 square light years of space!!! For Betelgeuse which is 640 light years away, that's an area of 5 MILLION SQUARE LIGHT YEARS! And every single square inch has Betelgeuse photons passing through it!! ZOMG. I know photons don't have mass, but still, I find it mind boggling. And that's not even mentioning photons from something like the Andromeda galaxy 2.5 million light years away which is still visible with the naked eye! Vespine (talk) 22:42, 7 October 2009 (UTC)
- And considering that there are 139,000,000,000,000,000,000,000,000,000,000,000 square inches in every square light year... now THAT is a lot of photons. Sagittarian Milky Way (talk) 12:44, 8 October 2009 (UTC)
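Vespine's inverse-square picture can be put in rough numbers. The sketch below estimates photons from Betelgeuse crossing one square centimetre at Earth, using round assumed values (luminosity ~1.2×10⁵ solar, distance 640 ly, a 550 nm "average" visible photon) rather than anything quoted in this thread:

```python
import math

L_SUN = 3.83e26            # W, solar luminosity
LY = 9.46e15               # m per light year
H, C = 6.626e-34, 3.0e8    # Planck's constant, speed of light

L = 1.2e5 * L_SUN          # Betelgeuse luminosity (rough assumption)
d = 640 * LY               # distance (rough assumption)

flux = L / (4 * math.pi * d**2)   # W/m^2 at Earth: the inverse-square law
photon_energy = H * C / 550e-9    # J per 550 nm photon
rate_per_cm2 = flux / photon_energy / 1e4

print(f"~{rate_per_cm2:.0e} photons per cm^2 per second")  # tens of millions
```

Even at 640 light years, tens of millions of Betelgeuse photons cross every square centimetre each second, which is why every "square inch" of that enormous sphere is covered.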
Simple Chem Question
Assuming complete combustion:
C2H6O + 3 O2 → 2 CO2 + 3 H2O
If I am given the number of grams of water produced, can I simply perform a stoichiometric conversion to find the number of moles of ethanol in the initial reactants? Acceptable (talk) 02:18, 7 October 2009 (UTC)
- Yes. Assuming the reaction were to be balanced, of course. DMacks (talk) 02:21, 7 October 2009 (UTC)
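The conversion itself is a one-liner: grams of water → moles of water → moles of ethanol, via the 3:1 mole ratio from the balanced equation (molar mass of water taken as 18.02 g/mol):

```python
M_WATER = 18.02  # g/mol, molar mass of H2O

def moles_ethanol_from_water(grams_water: float) -> float:
    """Moles of ethanol combusted, given grams of water produced.

    Balanced combustion: C2H6O + 3 O2 -> 2 CO2 + 3 H2O,
    i.e. 3 mol H2O per 1 mol ethanol.
    """
    return (grams_water / M_WATER) / 3

print(moles_ethanol_from_water(54.06))  # 54.06 g water → ~1.0 mol ethanol
```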
Honey bee identification
I have some honey bees in my room. How can I identify what species they are? They are not bumblebees. Mac Davis (talk) 03:04, 7 October 2009 (UTC)
- Only with some difficulty. There are many species of honey bee worldwide but it is a tricky business telling one from the other, see here - [1] 86.4.186.107 (talk) 10:01, 7 October 2009 (UTC)
- Do you keep them in a shoebox? 87.81.230.195 (talk) 15:04, 7 October 2009 (UTC)
- Depending on how much light your room gets (and your budget and patience), you could start introducing a series of potted plants and seeing which ones the bees like. Eventually you could narrow it down, based on the existing literature about which species of bees prefer which plants. --M@rēino 15:55, 8 October 2009 (UTC)
- Ooh, should have thought of this first: take a photograph of the bees. Find out who the appropriate contact is at your state's Agricultural Board or Agricultural College, and email the photo to that person. Usually both the boards and the colleges have it in their mission to welcome random questions from the citizenry. --M@rēino 16:01, 8 October 2009 (UTC)
How are Retin-A and Renova different?
They both contain tretinoin, but are marketed differently by the same company. --68.103.141.28 (talk) 03:45, 7 October 2009 (UTC)
why isn't cholesterol soluble in acetone?
Acetone is usually miscible with most organic compounds. Is it the utter lack of aromaticity in cholesterol? John Riemann Soong (talk) 04:41, 7 October 2009 (UTC)
- Well, acetone is pretty polar, hence its miscibility with water. Cholesterol, despite the hydroxyl group, is essentially a large, bulky hydrocarbon, interacting with non-polar molecules. That makes it great for membranes, but is the reason it is only slightly soluble in water and (presumably) acetone. ~ Amory (u • t • c) 12:48, 7 October 2009 (UTC)
- I never tried it, but the data sheets for cholesterol say it is soluble in acetone [2], [3]. Are you sure it is not soluble in acetone? --Dr Dima (talk) 22:01, 7 October 2009 (UTC)
- Yeah, but in my experience acetone often dissolves whatever substances diethyl ether does, especially a wide range of not-very-polar organic acids. In my lab experience, cholesterol is not soluble in acetone -- it remains a large chunk in your test tube no matter how much acetone you pour into it. John Riemann Soong (talk) 03:14, 8 October 2009 (UTC)
- How wet is your acetone? Acetone is notoriously hygroscopic, and a small amount of water in the acetone could mess up the solubility of the cholesterol. Anyone who has ever gotten a few drops of water in melting chocolate and seen it "seize" will understand what is happening with your cholesterol; they are both due to the same effect: small amounts of water cause the non-polar molecules to aggregate, and it becomes impossible to get them to dissolve properly. If you have some scrupulously dry acetone, it will probably dissolve the cholesterol just fine. However, even a moderately humid day will cause the acetone to absorb enough water from the air to mess this up. --Jayron32 04:52, 8 October 2009 (UTC)
- Well it's lab grade acetone. You squeeze it out to clean your glassware, so it isn't exposed to moisture generally. Even my lab TA told me that acetone wouldn't dissolve my cholesterol. John Riemann Soong (talk) 03:12, 9 October 2009 (UTC)
Red blood cell
what is a red blood cell —Preceding unsigned comment added by 82.111.23.189 (talk) 08:42, 7 October 2009 (UTC)
Data for floor area ratio (FAR)?
Where can I find maps, isopleth maps, FAR-to-distance-from-city-center graphs, or at least lists, for the floor area ratio of cities around the world? Maybe they'd use floor area per acre instead? It'd be cool to see and compare them to scale. For example, one city might have the largest area with a FAR of 20.00 or greater, while another might merely have the largest area over 2.00; LA would probably vie for the most square miles in its class. Cubic volume of built structure per land area would be a better indicator of "how much structure per square mile" (for example, the Washington Monument or a stadium would count), but I bet nobody cares about m³/ha. Do they make a FAR that adjusts for the percent of the block covered by non-private parts (i.e. streets, parks and sidewalks)? Sagittarian Milky Way (talk) 11:24, 7 October 2009 (UTC)
- The US might be the only country that uses FAR. 78.147.129.9 (talk) 15:08, 7 October 2009 (UTC)
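For anyone unfamiliar with the metric, FAR is just total built floor area divided by lot area, so it is trivial to compute; the helper below is a generic illustration with made-up numbers, not data for any particular city:

```python
def floor_area_ratio(floor_areas, lot_area):
    """FAR = total floor area across all storeys / lot area (same units)."""
    return sum(floor_areas) / lot_area

# A 4-storey building, 10,000 sq ft per floor, on a 20,000 sq ft lot:
print(floor_area_ratio([10_000] * 4, 20_000))  # → 2.0
```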
Are there any species with multiple hearts?
Organ redundancy seems like a good thing (I'm glad I have two lungs and two kidneys) and an extra heart (if it were in parallel with the first) would take a huge load off of a single heart over a lifetime.20.137.18.50 (talk) 12:53, 7 October 2009 (UTC)
- Well, Time Lords have two hearts (one for casual, one for best), but they're fictional. Pais (talk) 13:01, 7 October 2009 (UTC)
- I looked in Heart and it seems to be mostly about the human heart. I think there are other types of creatures (maybe insects or sea-going creatures) that have multiple hearts. I can think of plenty of organs it would seem useful to have two of. Although the second brain in the stegosaurus didn't help much when it came time for extinction. →Baseball Bugs What's up, Doc? carrots 13:03, 7 October 2009 (UTC)
- I knew I had seen this somewhere recently. The octopus (which is a cephalopod - see below) has three hearts. I guess the two skinny ones balance out the fat one. →Baseball Bugs What's up, Doc? carrots 13:05, 7 October 2009 (UTC)
- Note: Dinosaurs did not actually have two brains. That's since been debunked. Says so right in the article. APL (talk) 02:35, 8 October 2009 (UTC)
- Earthworms vary somewhat by species, but typical examples have five hearts / aortic arches (depending on how specific you want to get with terminology). — Lomn 13:09, 7 October 2009 (UTC)
- (several ec's) Earthworms and Cephalopods have more than one. Here's an interesting discussion from howstuffworks.com. Zain Ebrahim (talk) 13:11, 7 October 2009 (UTC)
- I just found Abby and Brittany Hensel, two humans who share a single circulatory system with two hearts!20.137.18.50 (talk) 13:27, 7 October 2009 (UTC)
- Two humans with two hearts in total. Ironically, they are the real Minnesota Twins. →Baseball Bugs What's up, Doc? carrots 13:30, 7 October 2009 (UTC)
Animals with multiple "hearts" usually have less ambitious hearts compared to the mammalian / bird 4-chambered heart. The squid heart has one type of heart for pumping hemolymph through the circulatory system and one type of heart for pumping hemolymph through the gills -- plus as a mollusk the squid's predecessors had an open circulatory system.
- Crocodilians also have 4-chambered hearts. And the 4-chambered heart is actually two 2-chambered hearts -- they are just fused, which makes the pair more effective -- the squid's dual hearts are thus similar to our single one. DRosenbach (Talk | Contribs) 23:54, 8 October 2009 (UTC)
There probably wasn't an evolutionary pressure to evolve more than one mammalian heart, because the "strain" usually kicks in around old age, and evolution is notoriously cruel towards the elderly. John Riemann Soong (talk) 13:49, 8 October 2009 (UTC)
Eating raw flour
When I was a kid, my mother tried to dissuade me from eating too much raw cookie dough or raw cake batter out of the mixing bowl by telling me that eating too much raw flour was "bad for you". Is this actually true in any sense, or is it just a cautionary tale? (Obviously "too much" is open to interpretation, but the implication was that raw flour was "bad for you" in quantities that baked/cooked flour wouldn't be.) Pais (talk) 12:55, 7 October 2009 (UTC)
- Did you suffer any ill effects from it? Other than weight gain? Keep in mind that the paste used by kiddies is (or used to be) largely flour and water, in case the little idiots tried to eat it. →Baseball Bugs What's up, Doc? carrots 13:00, 7 October 2009 (UTC)
- Usually they tell you not to eat raw cookie dough because of the raw eggs.
- Certainly it can't be too bad for you. Cookie dough ice-cream is a popular and delicious flavor. (Even if it does sometimes come with warnings about uncooked eggs. Salmonella is probably delicious too.) APL (talk) 13:02, 7 October 2009 (UTC)
- Most commercial cookie dough ice creams are using a dough-like substitute. It generally lacks the leavening and omits the raw egg (substituting either pasteurized eggs or some less food-like substance). — Lomn 13:07, 7 October 2009 (UTC)
- Sure, but there's about a zillion local ice cream shops that make their own ice-cream, and put warnings on their cookie dough ice-cream. APL (talk) 13:20, 7 October 2009 (UTC)
- Very true. I just wanted to note that, if someone is really concerned about the possibility of salmonella, there are safe forms of cookie dough ice cream. Myself, I like the hard stuff, risk be damned. — Lomn 13:54, 7 October 2009 (UTC)
- The usual danger reported from raw cookie dough, cake batter, and the like is the possibility of salmonella from raw eggs. Raw flour is just floury and pasty, in my experience. Our article notes that it has the potential to go rancid (depending on how it was processed), but that's going to be nasty whether you cook it or not. — Lomn 13:03, 7 October 2009 (UTC)
- Well, as a follow-up, it appears the FDA has reported salmonella in flour in some cases. I don't think it's nearly as likely as it being in eggs, but it's a possibility. — Lomn 13:07, 7 October 2009 (UTC)
- Well, I never suffered any noticeable ill effects from it, but then I never actually succeeded in eating much raw batter or dough because she was preventing me from doing so. Now as an adult I will sometimes mix up a batch of cookies and polish off about half of it before baking, also with no ill effects. (Salmonella doesn't scare me - what doesn't kill us makes us stronger.) But what does occur to me is that I can't think of any recipes where flour is intended to be eaten raw (apart perhaps from cookie dough ice cream) - every recipe I can think of that calls for flour, expects the flour to be either baked or otherwise cooked (e.g. in gravies and roux). Pais (talk) 13:09, 7 October 2009 (UTC)
- "What doesn't kill us makes us stronger"—it doesn't really apply in terms of food poisoning, which is awful and worth avoiding if you can. It doesn't make you stronger, it just makes you feel utterly and totally miserable. --Mr.98 (talk) 13:42, 7 October 2009 (UTC)
- Actually in general terms it's a cliche that is often wrong. For example getting a sunburn may not kill you. At least not initially. Smoking a few cigarettes ditto. Or since we're talking about food, drinking/eating 300ml of cream each day, particularly if you are young and lead an active life. None of these however are particularly healthy activities and most probably are not going to make you stronger. Exposing the immune system to potential pathogens may be of some benefit but it's not always going to make you stronger and there are other things to consider. A number of the toxins you are exposed to by poor food will have negative effects including potentially carcinogenic effects. (Of course many of the toxins won't be destroyed by cooking so if they are present it may not make much difference if you cook the dough first.) This doesn't mean you shouldn't be worried about every single little thing. In fact I often eat things which are a bit dubious. However I don't do it under the belief that it's definitely going to make me stronger. Rather I've decided to accept the risk of potential both short term and long term harm. In other words, I don't think the risks from uncooked flour are really something to worry about but I definitely wouldn't assume it's going to make you stronger. Nil Einne (talk) 14:00, 7 October 2009 (UTC)
- I wasn't actually being serious about "What doesn't kill us makes us stronger" with respect to salmonella poisoning. Pais (talk) 14:25, 7 October 2009 (UTC)
- I also recall reading on our chicken egg article that only 1 in 30,000 eggs are contaminated, and according to Salmonella article, 30 deaths resulted from the 142,000 people who got infected, I think I can handle the 1 in 71,000,000 chance of dying each time I eat raw cookie dough. It compares very favorably to the 1 in 2,560,000 chance of dying in a car accident on a typical day. Googlemeister (talk) 14:13, 7 October 2009 (UTC)
- I do actually agree that the salmonella threat is extremely exaggerated, with "appears in the health section of the New York Times" type odds (anything that shows up there, you have a better chance of winning the lottery than contracting). I'm just saying, it doesn't really make you stronger. It just sucks! --98.217.71.237 (talk) 15:03, 7 October 2009 (UTC)
- The odds are probably better than that for a healthy adult, too, since obviously an affront to the system like food poisoning will disproportionately kill those who are already compromised in some way. I'd be interested to know how many of those 30 people were otherwise healthy adults, and unsurprised if the answer was "none". --Sean 17:04, 7 October 2009 (UTC)
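Googlemeister's odds above can be reproduced if you assume roughly two raw eggs per batch of dough (my assumption; the post doesn't state an egg count). The back-of-envelope chain is: P(egg contaminated) × P(death given infection) × eggs eaten:

```python
p_contaminated = 1 / 30_000   # contaminated eggs (figure quoted above)
p_death = 30 / 142_000        # deaths per infection (figures quoted above)
eggs_per_batch = 2            # assumption, chosen to match the quoted odds

p = p_contaminated * p_death * eggs_per_batch
print(f"about 1 in {1/p:,.0f}")  # → about 1 in 71,000,000
```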
- I think the problem is the raising agent. Cookies are usually made with some sort of raising agent, such as baking powder, cream of tartar or bicarbonate of soda, and the chemical reaction which makes the cakes rise should be occurring in the oven - not in your gut! That could give you some real pain, especially if you're a kid. --TammyMoet (talk) 17:08, 7 October 2009 (UTC)
Your mother was dissuading you from getting too big for your britches, that's all. Vranak (talk) 18:36, 7 October 2009 (UTC)
- Aren't there people who eat raw eggs because they think it's healthy? À la Rocky Balboa? Vespine (talk) 00:08, 8 October 2009 (UTC)
- I have witnessed such a thing. Vranak (talk) 00:15, 8 October 2009 (UTC)
I love raw cookie dough while I'm eating it, but later my stomach gets upset if I've eaten too much. So, maybe it's not much more unhealthy than the baked cookies, but it can make you feel bad. ike9898 (talk) 13:42, 8 October 2009 (UTC)
- If you try eating raw bread dough, don't eat much of it, because it rises a lot more than cookie dough and can be extremely uncomfortable. Googlemeister (talk) 14:26, 8 October 2009 (UTC)
the fluorescent light tube
Could someone explain (I might be in the wrong section) why, when a fluorescent tube rated at 65 W or 80 W is lit, its electricity consumption as checked with a monitor is always higher than the tube's rating, whereas an ordinary 100 W standard incandescent bulb does show as consuming 100 W?
Regards George Atkinson —Preceding unsigned comment added by 86.145.161.70 (talk) 14:13, 7 October 2009 (UTC)
- Is the monitor or power meter recently lab calibrated? How accurate does the manufacturer specify it to be? Does it indicate watts, or are you working from amps and volts, with an unknown power factor and irregular waveform? Is the voltage the nominal voltage, and is the input power free from harmonics? Edison (talk) 17:21, 7 October 2009 (UTC)
- Most voltmeters and ammeters when measuring AC actually respond to the average rectified voltage or current but are calibrated to show an RMS value. The calibration is only correct for sine waves, and a power measurement using these instruments to find V x I is only correct if V and I have the same phase. These conditions are met if the load has a Power factor of 1.0. Filament lamps have a power factor of 1.0, but fluorescent lamp ballasts often have a power factor below 1.0. A Wattmeter that measures true power into loads of various power factors is a specialised instrument. Cuddlyable3 (talk) 19:19, 7 October 2009 (UTC)
- How much higher is it? --Tango (talk) 19:34, 7 October 2009 (UTC)
- [4] quotes a power factor 0.6 for presumably an ordinary fluorescent light. The low PF of an inductive ballast may be raised by an added capacitor. Cuddlyable3 (talk) 10:10, 8 October 2009 (UTC)
- Many/most ballasts sold now are electronic. The current waveform is quite nonsinusoidal. Edison (talk) 18:42, 8 October 2009 (UTC)
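Cuddlyable3's point about V x I versus true power is easy to check numerically. A minimal sketch, in which the 230 V supply, 0.5 A load and 50 Hz frequency are illustrative assumptions and the 0.6 power factor is the figure quoted above:

```python
# Compare true power (mean of v*i) with the Vrms x Irms product that a
# simple meter implies, for a current lagging the voltage.
import math

f = 50.0                      # mains frequency, Hz (assumed)
Vpk = 230 * math.sqrt(2)      # peak voltage for an assumed 230 V RMS supply
Ipk = 0.5 * math.sqrt(2)      # peak current for an assumed 0.5 A RMS load
phase = math.acos(0.6)        # power factor 0.6, as quoted above

N = 100_000
dt = 1 / (f * N)              # sample exactly one full cycle
v = [Vpk * math.sin(2 * math.pi * f * k * dt) for k in range(N)]
i = [Ipk * math.sin(2 * math.pi * f * k * dt - phase) for k in range(N)]

true_power = sum(a * b for a, b in zip(v, i)) / N   # watts
apparent = 230 * 0.5                                # Vrms x Irms, volt-amps

print(f"apparent power: {apparent:.1f} VA")
print(f"true power:     {true_power:.1f} W")        # ~0.6 x apparent
```

With a power factor of 1.0, as for a filament lamp, the two figures coincide; with 0.6 the plain V x I reading overstates the real consumption, which is consistent with the original poster's observation.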
moving in space
If you put the space shuttle at the L1 point, would the solar wind be sufficient to make it move back towards earth? How close to the sun would you need to be to have an equilibrium where it would not move, because the extra gravity from the sun is balanced by the solar wind? Googlemeister (talk) 14:23, 7 October 2009 (UTC)
- Since the non-trojan Lagrange points (that is, L1-L3) are unstable, something will disturb the shuttle. I'd bet that orbital irregularities or the Moon or Venus would do it before the solar wind would, but the solar wind is a good candidate. As for how far you'd have to offset... estimate the force of the solar wind, throw in the surface area of the shuttle and the distances to the L1 point and you've got a fairly simple system of equations. Estimating the force of the solar wind looks to be the trickiest part, as it's highly variable and has both slow and fast components. — Lomn 15:16, 7 October 2009 (UTC)
- The solar wind isn't constant, so it's impossible to compensate for it that easily. You need active stationkeeping to stay at L1. You can stay in a bounded, but non-periodic "orbit" of L1 for quite a while (maybe months, I'm not quite sure), though - that is, you roughly circle the point but don't repeat the exact same path each time, but you always stay fairly close to the point. --Tango (talk) 15:55, 7 October 2009 (UTC)
- Which is called a Lissajous orbit. Sagittarian Milky Way (talk) 22:43, 8 October 2009 (UTC)
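Lomn's suggested estimate of the solar wind force can be sketched in a few lines. All the input numbers (proton density, wind speed, shuttle cross-section, shuttle mass) are rough order-of-magnitude assumptions, not measured data:

```python
# Back-of-the-envelope estimate of solar wind force on the shuttle at 1 AU.
m_p = 1.67e-27      # proton mass, kg
n = 5e6             # assumed solar wind density, protons/m^3 (~5 per cm^3)
v = 450e3           # assumed "slow" solar wind speed, m/s

# Dynamic pressure (momentum flux), assuming the wind is fully absorbed
pressure = n * m_p * v**2          # ~1.7e-9 Pa

area = 300.0        # assumed shuttle cross-section facing the Sun, m^2
mass = 1.0e5        # assumed shuttle mass, kg (~100 t)

force = pressure * area            # newtons
accel = force / mass               # m/s^2

print(f"dynamic pressure ~ {pressure:.1e} Pa")
print(f"force on shuttle ~ {force:.1e} N")
print(f"acceleration     ~ {accel:.1e} m/s^2")
```

At roughly 5e-12 m/s^2, the displacement after a month (~2.6e6 s) is only about (1/2)at^2 ≈ 17 m, which supports the suggestion above that lunar and planetary perturbations would unseat the shuttle long before the solar wind does.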
electrolytes
The Wikipedia electrolyte article references a Webmd article that gives:
- 1 quart (950 mL) water
- ½ teaspoon (2.5 g) baking soda
- ½ teaspoon (2.5 g) table salt
- ¼ teaspoon (1.25 g) salt substitute (potassium-based), such as Lite Salt or Morton Salt Substitute
- 2 tablespoons (30 g) sugar
The problem is that Morton produces another salt substitute with no sodium and twice the potassium. The question is which substitute is actually right. What is the proper proportion of sodium and potassium and what about phosphorus and magnesium? Through which ingredients can one add them and in what proportion? 71.100.5.245 (talk) 21:13, 7 October 2009 (UTC)
Your question is a little too broad: you ask which substitute is right, but right for what? In order to answer your question, I'll assume you are asking which is closest to human plasma. This is an important question to ask because the fluid in the veins, the fluid in cells, and the fluid in between the cells all have completely different amounts of electrolytes in them. Also, it doesn't really matter whether you replace fluid exactly right, as long as you get close. Why? Humans have these wonderful things called kidneys, which tend to figure out electrolyte imbalances with far more grace than we can, but I digress. The formula above contains sugar, which is actually not an electrolyte, and will not contribute to the tonicity of the solution after it enters the intravascular space, so we will concentrate on water, sodium, potassium, chloride, bicarbonate, calcium, and magnesium, the most important constituents. Sodium is generally around 140mmol/L, potassium is about 4.3mmol/L, calcium is 2.5mmol/L, chloride is 100mmol/L, bicarb is 25mmol/L, and phosphate is 1mmol/L. I'm sorry to give you such units, but I'll let someone else calculate that out.
You might be interested to know that it is very rarely necessary to give electrolytes in such exacting fashion. As far as I know, there are no commercially available solutions in the US (for consumers or hospitals) of either oral or intravenous fluid that have a perfect balance. If the kidneys can't sort it out, we can always add each individual constituent alone to correct abnormalities. In the medical profession, the fluids one gives to a patient depend more on a doctor's specialty training than anything else. The closest stock solution we have is lactated Ringer's, but I hear they have some fancier stuff in Europe. Tuckerekcut (talk) 00:28, 8 October 2009 (UTC)
- Right means avoiding either extreme...
Electrolyte | Ionic formula | Elevation disorder | Depletion disorder |
---|---|---|---|
Sodium | Na^+ | hypernatremia | hyponatremia |
Potassium | K^+ | hyperkalemia | hypokalemia |
Calcium | Ca^2+ | hypercalcemia | hypocalcemia |
Magnesium | Mg^2+ | hypermagnesemia | hypomagnesemia |
Chloride | Cl^- | hyperchloremia | hypochloremia |
Phosphate | PO4^3- | hyperphosphatemia | hypophosphatemia |
Bicarbonate | HCO3^- | hyperbicarbonatemia | hypobicarbonatemia |
- from Williams, Stanish, and Micheli. New York: Oxford University Press, pp. 97-113.
- (see Electrolyte_imbalance#Table_of_common_electrolyte_disturbances)
- Another Electrolyte Chart
Electrolyte | extracellular (mmol/L) | Sweat (mmol/L) | Intracellular (mmol/L) |
---|---|---|---|
Sodium | 137-144 | 20-80 | 10 |
Potassium | 3.5-4.9 | 4.0-8.0 | 148 |
Calcium | 4.4-5.2 | 3.0-4.0 | 0-2.0 |
Magnesium | 1.5-2.1 | 1.0-4.0 | 30-40 |
Chloride | 100-108 | 30-70 | 2 |
- From Maughan and Shirreffs, 1998. Fluid and electrolyte loss and replacement in exercise. In: Oxford textbook of sports medicine, 2nd Edition. 71.100.5.245 (talk) 02:33, 8 October 2009 (UTC)
- So, assuming we need only consider extracellular values (that sweat and intracellular values are automatic), what then are the sources and intakes for the extracellular values of phosphorus and magnesium in grams per liter? 71.100.5.245 (talk) 02:48, 8 October 2009 (UTC)
Well, the RDA of Mg is 6mg/kg/day for an adult. The plasma value should be about 2mmol/L; expressed in grams/liter this would be about 50mg/L. When we give Mg in TPN, usually we use 10mEq/day (which is 5mmol/day). Phosphorus is given in the form of phosphate; the RDA is 700mg, and the plasma level should be about 1mmol/L. This corresponds to about 100mg/L. TPN calls for 15mmol/day of PO4. This is an order form for TPN I found on the internet, seems legit enough. It may answer some of your questions as to how much patients get when they are completely nutritionally dependent. Tuckerekcut —Preceding unsigned comment added by 146.189.236.79 (talk) 19:36, 8 October 2009 (UTC)
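The mmol/L-to-mg/L conversions in this thread follow from mmol/L x molar mass (g/mol) = mg/L. A minimal sketch, using standard molar masses and taking the plasma concentrations quoted above at face value:

```python
# Convert the quoted plasma electrolyte concentrations from mmol/L to mg/L.
molar_mass = {   # g/mol, standard values
    "Na": 22.99, "K": 39.10, "Ca": 40.08, "Mg": 24.31,
    "Cl": 35.45, "HCO3": 61.02, "PO4": 94.97,
}
plasma = {       # mmol/L, as given in the posts above
    "Na": 140, "K": 4.3, "Ca": 2.5, "Mg": 2,
    "Cl": 100, "HCO3": 25, "PO4": 1,
}
for ion, mmol in plasma.items():
    mg_per_L = mmol * molar_mass[ion]   # mmol/L x g/mol = mg/L
    print(f"{ion:>4}: {mmol:>5} mmol/L ~ {mg_per_L:7.1f} mg/L")
```

Magnesium comes out near 49 mg/L and phosphate near 95 mg/L, matching the "about 50mg/L" and "about 100mg/L" figures above; sodium is around 3.2 g/L.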
"Living wall" vertical planting using felt?
Hi all,
I am thinking about how to construct a "living wall" or "Green wall." I've seen some sources say, without using too much detail, that one can use a layer of felt stapled against a layer of plastic.
Would felt be a good medium for plants to grow through? I'm guessing the plants are just pushing through the felt and their roots are pushed between the felt and the plastic? Has anyone heard of doing anything like this?
Thanks! — Sam 63.138.152.238 (talk) 21:16, 7 October 2009 (UTC)
- I was under the impression that a green wall is made with plastic containers attached to the wall. With felt, I would be really concerned about it deteriorating and falling off under the weight of the growing plants. Also, mold will proliferate, and rotting felt will smell; which is probably not what you want. There may be green-wall DIY manuals on the web someplace; you should probably search for those. --Dr Dima (talk) 22:11, 7 October 2009 (UTC)
- Fresh willow sticks can be woven together to make a fence and they will sprout leaves and grow. 78.146.183.133 (talk) 23:06, 7 October 2009 (UTC)
- This looks like a good place to start reading. SteveBaker (talk) 02:25, 8 October 2009 (UTC)
Cooling pop
What would cool down faster? A two liter bottle of pop that came straight from the store and is then put into the fridge and is therefore still pressurized or a bottle that has had the top off, put back on, and then put into the fridge? Dismas|(talk) 22:35, 7 October 2009 (UTC)
- I'm not smart, but my assumption is that the less stuff you have, the faster you can change its temperature. "Depressurizing" is another way of saying "letting stuff out", so I assume you can cool it faster when there's less stuff in there. All else being equal, of course. --Sean 22:41, 7 October 2009 (UTC)
- All other things would have to be amazingly equal! The mass of that tiny volume of CO2 you'd release would be truly negligible compared to about 2kg of soda. I'd want to know what effect the pressure and dissolved CO2 would have on thermal capacity and conductivity - those could potentially have a much bigger effect. I don't know what the answer is though! SteveBaker (talk) 02:21, 8 October 2009 (UTC)
- Isn't there like an associated temperature drop with the pressure release? What's that process called? You know, in the same way a gas bottle gets cold after it releases some gas. Also probably quite a small effect, but maybe more significant than the mass reduction. Vespine (talk) 03:56, 8 October 2009 (UTC)
- I just realised that will happen regardless whether you open the bottle before or after you put it in the fridge so shouldn't have an effect on the outcome. Vespine (talk) 03:58, 8 October 2009 (UTC)
- Well, when the top is opened and the CO2 comes out of the soda, the temperature does drop, due mostly to adiabatic cooling; that is, since you cannot get something for nothing, the decrease in pressure does work, and the energy for that work comes from a drop in temperature. So the depressurized soda will cool down faster, but only because the depressurization process itself causes a drop in temperature, so that bottle gets a "head start". The effect will only matter if you do it immediately before putting it in the fridge, and even then it will be a small effect; the depressurized soda will reach equilibrium with the fridge a few minutes ahead of the sealed bottle, so it's not enough to make much of a difference at all. --Jayron32 04:44, 8 October 2009 (UTC)
- But heat conduction is proportional to the difference in temperatures, so it would contribute toward slower cooling if a temperature drop due to pressure release occurred before putting it in the fridge instead of after. On the other hand, there'd also be a contribution in the opposite direction, due to the pressure release being smaller if it occurs when the soda is cooler.
- It's a complicated question, with lots of tiny effects. I don't know which effect would dominate, or even which direction some of the effects go in. And unfortunately I don't think the question can be resolved with a simple kitchen experiment, since I think the difference would be too small to measure reliably, without high precision lab equipment. Red Act (talk) 04:57, 8 October 2009 (UTC)
- It would cool more slowly, but I think that what Jayron is saying is that it would reach a specified cooler temperature sooner. Awickert (talk) 05:06, 8 October 2009 (UTC)
- It also depends on when you start the clock. If bottle A and bottle B are both at 25 deg C, and you are cooling them both to 5 deg C, then you need to define when the clock is started to say which is faster. If we start the clock before bottle A is opened, then bottle A cools faster since it gets from 25 deg C to 5 deg C before bottle B does. However, if you are measuring the instantaneous rate of cooling (the differential cooling) at any point along the way after you put them in the fridge, then B will be cooling faster at most points along the way. That's because A does more of its cooling in a big burst at the beginning, so differentially, it cools slower the rest of the way. But saying it cools faster, when measuring the average rate of cooling from 25-5 deg C, is perfectly accurate. There is just a difference between average rate and instantaneous rate, and most people will understand this problem to be looking at the former, and not the latter, given the wording of the OP's question. --Jayron32 05:15, 8 October 2009 (UTC)
- Yeah, you're right. I was partially responding to Vespine's comment about "whether you open the bottle before or after you put it in the fridge". However, the OP doesn't say anything about opening the bottle after it's been in the fridge, so presumably the temperature is to be measured immediately upon removal from the fridge, without (re)opening the bottle in either case.
- However, we still don't know what the answer to the question should be, since whatever effects the loss of a little CO2 has on the soda's specific heat capacity and thermal conductivity have not yet been addressed. There's more going on here than simply adiabatic cooling of an ideal gas. That may well be the dominant effect, but I wouldn't take that for granted. Red Act (talk) 06:12, 8 October 2009 (UTC)
- I think the best way to answer this question is empirically. It's not hard after all. While not of direct relevance, the Mpemba effect illustrates that this is the sort of thing where you have to be careful answering purely theoretically. Ultimately there are enough variables that it may just vary from repetition to repetition. Nil Einne (talk) 12:15, 9 October 2009 (UTC)
- If you can precool several litres of a freezing solution or glycol in water, say to -18 degrees, you can dip your soft-drink bottle in and get a much quicker cool than a fridge gives. Doing it below zero should make it quicker than using ice water. Graeme Bartlett (talk) 11:21, 8 October 2009 (UTC)
- Yes, but if cooling curves display significant non-linear or chaotic behavior, then the difference between slow cooling in a 4 deg C fridge in air and rapid cooling in a -18 deg C liquid could result in wildly different cooling curves, which would not be directly applicable to the fridge situation. If you really want to do the experiment, buy a few dozen identical bottles of the same soda, and run the experiment yourself. The tricky issue would be measuring the temperature of the sealed soda inside the bottle, which is supposed to remain unopened. However, a contact thermocouple which is insulated from the fridge air, and in contact with the bottles, should do reasonably well. --Jayron32 23:02, 9 October 2009 (UTC)
- Releasing 1 liter of gas from the soda against a pressure of 1 bar requires 100J of energy. If 100% of that energy goes toward cooling the soda, and if the specific heat capacity of soda is about the same as that of water (4.1813 J/(g·K)), then the 2kg of soda will be cooled by only 0.012°C. Doing an experiment involving temperature differences that small would require not only a high-precision temperature sensor, but also a fridge that maintains a constant temperature much more precisely than a normal kitchen fridge does. Red Act (talk) 00:38, 10 October 2009 (UTC)
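Red Act's upper-bound arithmetic can be reproduced in a few lines. A minimal sketch (the work term is an upper bound, since in practice not all of the expansion work is extracted from the liquid):

```python
# How much does releasing 1 L of gas against 1 bar cool 2 kg of soda,
# in the best case?
P = 1.0e5        # ambient pressure, Pa (1 bar)
dV = 1.0e-3      # volume of gas released, m^3 (1 liter)
work = P * dV    # J; upper bound on energy taken from the soda

c = 4.1813       # specific heat of water, J/(g*K), as a stand-in for soda
m = 2000.0       # soda mass, g

dT = work / (c * m)
print(f"work = {work:.0f} J, temperature drop = {dT:.3f} K")  # ~0.012 K
```

A 0.012 K difference is indeed far below what a kitchen thermometer, or a kitchen fridge's temperature stability, could resolve.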
October 8
Continental shift
Well, I'm interested in plate tectonics, and recently I heard about the expanding earth theory. I'm not sure you are all familiar with it, but basically it says that there has been no continental shift, but instead constant drift and expansion due to the earth constantly expanding. What evidence is there against this? It seems ridiculous to me, but to be honest I have not taken it upon myself to study the "(not at all scientific) theory" in detail, so I believe my refutations would be off the mark. Also, I've been looking over the grander sweep of evolutionary history and continental shift, and I was wondering if I could find a full animated model of all past and present positions of the continents. Thank you. —Preceding unsigned comment added by 69.232.239.99 (talk) 06:49, 8 October 2009 (UTC)
- We have a page, Expanding Earth. The short story is that this was a popular idea in the 50s and 60s, but paleontological data has since shown that the Earth's radius is within a few percent what it was over half a billion years ago. Someguy1221 (talk) 07:03, 8 October 2009 (UTC)
- There is also the lack of a convincing mechanism for the expansion, which has to explain how all the oceans have appeared during the last 250 million years and why nothing happened before that.
- Regarding animated plate reconstructions (we have no article for this, but it is on my 'to do' list) there are several available online.
- There may well be others out there, but these should be a start. Mikenorton (talk) 08:52, 8 October 2009 (UTC)
- As Mike says, the Expanding Earth folks have no way of explaining marine fossil assemblages before Pangaea (which exist across the Earth). They also have no way to account for the drastic change in global volume. Their hypotheses are inconsistent with seismic imaging of tectonic plates moving. One of their many arguments is that the length of mid-ocean ridges (where crust is created) is greater than that of subduction zones (where crust is destroyed) and that the Earth must therefore be getting larger, but they forget that the plate velocities at subduction zones are much greater, and therefore the flux of material is conserved. Awickert (talk) 08:19, 11 October 2009 (UTC)
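Awickert's flux argument can be illustrated with rough numbers. All four figures below are order-of-magnitude assumptions chosen for illustration, not measured values:

```python
# Longer ridges can still be balanced by faster-moving subduction zones:
# compare the areal flux of crust created with the flux destroyed.
ridge_length = 65_000       # km of mid-ocean ridge (assumed)
spreading_rate = 5.0        # cm/yr, assumed average full spreading rate

trench_length = 51_000      # km of subduction zone (assumed)
convergence_rate = 6.5      # cm/yr, assumed average convergence rate

cm_per_km = 1e5
created = ridge_length * spreading_rate / cm_per_km     # km^2/yr
destroyed = trench_length * convergence_rate / cm_per_km

print(f"crust created:   {created:.2f} km^2/yr")
print(f"crust destroyed: {destroyed:.2f} km^2/yr")
```

With these assumed rates the two areal fluxes roughly balance, so a longer ridge system does not by itself imply a growing Earth.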
Relaxation and schizophrenia
I'm aware that there is conventional medical wisdom that deep relaxation can bring on a relapse in schizophrenics. I did a Google search yesterday hoping to find some research that confirms this, but all I could find was guidelines saying things like "it is accepted that relaxation can worsen some symptoms of schizophrenia" - not actual research papers. Can anyone (maybe with some access to the relevant journals) find a paper for me that confirms this? Many thanks.--TammyMoet (talk) 12:08, 8 October 2009 (UTC)
- Looks like this could be what you are looking for. I don't have access to the full text at the moment, but the abstract looks promising. Rockpocket 21:44, 10 October 2009 (UTC)
- Progressive muscle relaxation (a different animal) can be soothing for schizophrenics. (Chen WC; Chu H; Lu RB; Chou YH; Chen CH; Chang YC; O'Brien AP; Chou KR. Efficacy of progressive muscle relaxation training in reducing anxiety in patients with acute schizophrenia. Journal Of Clinical Nursing [J Clin Nurs] 2009 Aug; Vol. 18 (15), pp. 2187-96.) Using MEDLINE and EBSCOhost I found no good studies supporting what you say, and a few equally mediocre ones that said the opposite. - Draeco (talk) 03:42, 11 October 2009 (UTC)
Thanks for these, guys - mainly confirming what I found. Interesting. --TammyMoet (talk) 08:57, 11 October 2009 (UTC)
Wheel alignment
If a car's wheel alignment is not correct (specifically, the car doesn't travel perfectly straight when the steering wheel is in the neutral position), what are the possible consequences of not getting it repaired? ike9898 (talk) 13:36, 8 October 2009 (UTC)
- Well, it is going to depend on how bad it is. Generally, what will happen is that you will get more fatigued if you are driving for long periods, and your fuel mileage will be somewhat reduced. I think it might cause uneven tire wear, but I am not positive on that. Googlemeister (talk) 14:21, 8 October 2009 (UTC)
- At the very least, substantially reduced tyre life - if they are always scrubbing they will wear out quickly and unevenly. At the worst, an accident as the car veers one way or another and hits another car or solid object. This can be easy and cheap to fix (it may simply be a steering gear adjustment) and I would strongly recommend getting it looked at. --Phil Holmes (talk) 14:21, 8 October 2009 (UTC)
- Echoing the above (tire life and fuel mileage in particular), but a key possible consequence hides a little deeper: why is the alignment not correct? If the wheels were just misaligned, then there's no immediate danger. If the wheels are misaligned due to some other fault, you may be at serious risk. For example, my wheels last went out of alignment because I had a wheel bearing that was almost toast, and the bearing failure could have been catastrophic. Misalignment was a primary cause of the maintenance that found the root issue. — Lomn 14:26, 8 October 2009 (UTC)
- Thank you all. It's possible that the misalignment was caused by hitting the curb too often while parallel parking. ike9898 (talk) 16:50, 8 October 2009 (UTC)
- Wait, wait...there is no "neutral position" for a steering wheel. If your ONLY symptom is that the wheels do not point straight ahead when the steering wheel bar is horizontal - then that may just mean that at some time in the past, someone removed it for some reason and didn't put it back on the right 'notch' of the spline...in which case, there might be nothing wrong at all. However, if you let go of the steering wheel while driving straight - and the car then pulls to one side (ie, you have to exert some force in order to keep the car going straight) then you have an alignment problem that you really should get looked at because it might be indicative of something really serious going on...and you'll wear out your tyres and become fatigued by long trips...but dying when the bent strut breaks and one of the wheels turns 90 degrees at 70 mph is really the bigger concern here! If the car pulls to one side when braking - but not when in normal motion - then there are different problems, which still need to be attended to. SteveBaker (talk) 17:32, 8 October 2009 (UTC)
- There is an adjustment in the steering mechanism for the steering wheel orientation, besides possible misalignment on the steering column (is that even possible?). A true bozo of a mechanic did an alignment on my car once and immediately after I had to hold the wheel 15 degrees to one side to drive straight. He claimed that it was just due to the "crown of the road" and that I probably hadn't noticed it before. When I complained to a manager, he got the necessary tools and adjust the steering position. Edison (talk) 18:39, 8 October 2009 (UTC)
- I'm surprised that you've seen steering wheel adjustments - and misalignment of the steering wheel on the steering column is only too possible! Admittedly, I only tend to work on ancient relics rather than modern cars - but certainly on the few I've restored, you'd either have to dismantle the steering box - or (much, MUCH easier) drive the car into your garage with the wheels dead straight - then remove the big nut that holds the steering wheel onto the column, lift it off (CAREFULLY - don't want to yank any of the wires) and put it back in the right position for straight wheels. The shaft has a spline on the top (like the teeth on a gearwheel) that meshes into a hole under the steering wheel. You can reposition it in about (maybe) 10 degree increments. VERY IMPORTANT NOTE: If you have an airbag - you'll need to disconnect the car battery before you even think about doing this!! But if there is a proper adjustment mechanism somewhere then by all means use that. Anyway - only do this if the car DOESN'T pull to one side when you let go of the steering wheel. SteveBaker (talk) 20:36, 8 October 2009 (UTC)
- Minor drifting to the left or right when you let go of the wheel can also be caused by a poorly inflated tire on one side. Given that air is cheap it is generally worth checking the state of your tires before asking a mechanic to look at the alignment. You'd be surprised how many alignment complaints are really tire problems that people didn't notice. Dragons flight (talk) 22:36, 8 October 2009 (UTC)
Thanks again guys! ike9898 (talk) 17:12, 9 October 2009 (UTC)
Raw potato (inspired by question about cookie dough above)
When I was very young, my mom gave me a piece of raw, peeled potato and I really liked it. Afterward, I asked for more on many occasions, but my mom was convinced it wasn't good to eat raw potato. I've also never seen raw potato used as food in any cuisine. Is there anything wrong with eating raw potato? ike9898 (talk) 13:47, 8 October 2009 (UTC)
- Raw potatoes are harder to digest than cooked ones, but are otherwise pretty edible. Don't eat green potatoes under any circumstances, they are likely toxic (see Potato#Toxicity). In general, cooking potatoes reduces their toxicity, but it should be fairly unlikely to have problems assuming they aren't green. --Mr.98 (talk) 13:57, 8 October 2009 (UTC)
- In principle, Mr98 is correct that raw potatoes are no more toxic than boiled ones. But cooking doesn't reduce their toxicity, as mentioned in solanine. Frying can do this, because you basically extract the solanine, but cooking in water is not hot enough to do much to the toxin. --TheMaster17 (talk) 14:09, 8 October 2009 (UTC)
- I vaguely recall that they are very indigestible. The problem (I believe) is that starch can only be converted to digestible sugars by saliva in the mouth or in the duodenum - or in the colon (by bacterial break-down). As our article on starch says "Digestive enzymes have problems digesting crystalline structures. Raw starch will digest poorly in the duodenum and small intestine, while bacterial degradation will take place mainly in the colon."...so the only place where raw starch can be converted to sugars is in the mouth. But the cells of the potato (which contain the starch) are surrounded by a fairly robust cell-wall which doesn't break much as you chew. So the saliva can't get to the starch to break it down - and later on, the gut can't deal with it because of the crystalline properties of the stuff. Cooking the potato breaks down the cell walls and allows the enzymes in the gut to deal with it. You'll notice that if you chew a piece of bread or cooked potato (lots of starch - easy to get at), it tastes sweeter and sweeter the longer you chew it...because your saliva is converting the starch to sugars. But no matter how long you chew on raw potato - it never seems to change taste. So your mom was right about the digestibility thing. I wonder whether having large amounts of undigested starch making it through to the colon - and being broken down there by bacteria - might somehow be bad for you. Whether that means that it's not safe to eat it - I'm not so sure. But that's getting into the realms of medical advice. SteveBaker (talk) 17:23, 8 October 2009 (UTC)
- There are instances when eating your own feces can have a beneficial effect (or so I have heard). With this in mind I would not ask you to swear off raw potatoes. Vranak (talk) 17:38, 8 October 2009 (UTC)
- You've really derailed the thread with that one, which is sort of uncool, but I have to disagree that eating one's own feces brings any benefit. Cite sources here on the refdesk, please. Comet Tuttle (talk) 20:50, 8 October 2009 (UTC)
- I definitely think Vranak's source is not correct; don't believe everything you hear. Even if you are dying of hunger you should not eat your own feces! It will make you worse, not better. Conversely, if you are dying of thirst, you should definitely drink your urine; it will hydrate you much more than harm you. Vespine (talk) 00:01, 9 October 2009 (UTC)
- Hey, I hope my source is mistaken as well. Vranak (talk) 20:12, 9 October 2009 (UTC)
carpal tunnel
Does wearing a watch increase chances of carpal tunnel from too much typing? —Preceding unsigned comment added by 218.186.12.253 (talk) 15:47, 8 October 2009 (UTC)
- My own experience says "Yes, very definitely" - but I'm not a good statistical sample. SteveBaker (talk) 17:24, 8 October 2009 (UTC)
- Only if the watch adds significant physical stress because of its weight or if the strap is too tight or loose. Cuddlyable3 (talk) 13:43, 9 October 2009 (UTC)
- I found that the strap presses against the underside of your wrist as you rest it on the desk (or whatever) - and that was a cause of problems for me. I had problems in only the wrist with the watch - and when I stopped wearing it, things got much better. But as I said - you can't really extrapolate from a single example. SteveBaker (talk) 14:04, 9 October 2009 (UTC)
Time Zones
If I need to organize a global business meeting with live participants in a variety of time zones what is the best way to schedule it? It seems like no matter when I schedule things it’s always some crazy time in either Sydney, Jakarta, Paris or Rio. What’s the best compromise or is there a better way to coordinate this? TheFutureAwaits (talk) 16:59, 8 October 2009 (UTC)
- This is essentially the problem the Formula 1 organisers have. Their solution is to run most races at noon UTC on a Sunday, which caters best for their core audience in Europe, and makes for sensible evening viewing in Asia, where they're hoping to expand. It disfavours the Americas, but not to the extent that the core South American audience is getting up insanely early. That's really the best you can do. This mostly screws Sydney, however. -- Finlay McWalter • Talk 17:26, 8 October 2009 (UTC)
- F1 can show repeats at more convenient times, a meeting can't, so it is a little different. F1 can just decide that they can't reasonably cater to Australia, say, and choose a good time for everyone else. For a business meeting you have to compromise. --Tango (talk) 17:29, 8 October 2009 (UTC)
- I would draw a line and mark it from 0 to 24 GMT and then draw on it the office hours in each of those countries. If there is a time in all the office hours, choose it, if there isn't choose one that is in most of the office hours and not too far away from the rest. --Tango (talk) 17:29, 8 October 2009 (UTC)
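Tango's timeline method can be sketched as a few lines of code: mark each office's working hours on a 0-23 UTC line and look for the hour covered by the most offices. The UTC offsets (rough October values, including daylight saving) and the 9-to-5 office hours are assumptions:

```python
# Mark each office's 9-17 local working hours on a UTC timeline and count
# how many offices are at work during each UTC hour.
offices = {   # (assumed UTC offset, work start, work end) in local time
    "Sydney":   (11, 9, 17),
    "Jakarta":  (7, 9, 17),
    "Paris":    (2, 9, 17),
    "Rio":      (-3, 9, 17),
    "New York": (-4, 9, 17),
}

coverage = {h: 0 for h in range(24)}
for offset, start, end in offices.values():
    for local_h in range(start, end):
        coverage[(local_h - offset) % 24] += 1   # convert local hour to UTC

best = max(coverage.values())
best_hours = sorted(h for h, c in coverage.items() if c == best)
print(f"best UTC hours: {best_hours} ({best} of {len(offices)} offices at work)")
```

With these assumed offsets, no single hour catches all five offices, which matches the OP's experience; the best achievable is three of the five, in the early afternoon UTC.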
The problem is more how to make fair arrangements in requesting employees in Sydney and Jakarta attend this meeting. It's vital they be involved yet we have to run from 12PM-3PM in New York. So is it totally unreasonable for these employees to attend a (3AM-5AM Sydney / 11PM-2AM Jakarta) meeting even if this will only happen once a quarter? TheFutureAwaits (talk) 18:18, 8 October 2009 (UTC)
- Yes. I'd be pissed unless you rotate the schedule to hit everybody (I'd still be pissed, but at least I would not feel unfairly disadvantaged). Why is the NY time window nailed to the floor? If it's that important, fly them in in time to overcome jet lag, then fly them back. Or schedule a one week meeting yearly in a nice place. Or, better still, throw money at the problem. Hand them a bonus and have them bid for a slot ;-). --Stephan Schulz (talk) 18:27, 8 October 2009 (UTC)
- Yes, it is totally unreasonable to expect someone to attend a 3AM-5AM meeting. You're asking Sydney (or whatever office gets that time slot) to perform up to their usual high standards of professionalism at a time slot that human biology just plain hasn't equipped them for. If this meeting is "vital", then someone in Sydney could rightly perceive this as the sort of meeting where their future in the organization is on the line -- a smooth presentation could grease the wheels for a promotion, while inattentiveness could dash that chance. In all seriousness, concerns like that can often inspire far-flung offices to split from the mother company. That's why many global organizations either do away with these pan-global meetings, or else set aside a travel budget so that these meetings can be in person. --M@rēino 19:48, 8 October 2009 (UTC)
- If the meeting has to be 12PM-3PM New York time, then I'm not really sure what can be done about it. If you want to avoid holding the meeting while anyone in those cities should be sleeping, you've got Sydney at -10, Jakarta at -7, Paris at 0, Rio at +3, New York at +5, which has the largest gap between New York and Sydney (9 hours difference) so you'd want to hold it as early in the morning as possible in New York, which would be late, but not unreasonably late in Sydney. Although if the meeting is 3 hours long it's still sort of problematic. Starting at 6AM in New York is still finishing at midnight in Sydney. Rckrone (talk) 19:51, 8 October 2009 (UTC)
- Oops, I think the signs are backward on those timezones. Rckrone (talk) 20:06, 8 October 2009 (UTC)
- Bah, who doesn't do quarterly earnings at midnight?
- Don't forget that in a couple of weeks time Sydney will be putting their clocks forward at the same time as London puts theirs back. This probably exacerbates rather than relieves the problem, as the time difference between the two suddenly goes from 9 hours to 11 hours.--Shantavira|feed me 07:34, 9 October 2009 (UTC)
Advise the participants that the meeting will be reorganised this way: 1) Each participant gives a presentation at their local working time. The presentations are collected in videos that are passed as e-mail attachments westwards around the world. 2) After 24 hours everyone has seen everyone's presentation. A second 24 hour cycle can let everyone comment on everyone's presentation. Cuddlyable3 (talk) 13:37, 9 October 2009 (UTC)
freezing point
What is the freezing point of a 12 oz aluminum can of unopened Coca Cola Classic? The American recipe that uses corn syrup instead of real sugar. Googlemeister (talk) 18:49, 8 October 2009 (UTC)
- Lower than 32 F or 0 C, and lower than equivalent Diet Coke, and lower than the same can opened. Above 0 F (see below). An empirical approach could answer the question. When it freezes, the top pops up and the bottom becomes rounded. I once put a can of (probably diet) pop in the deep freeze and after it froze, tried to use the pressure buildup to round out the bottom more (to make a Van de Graaff generator) by cranking the temperature down to minimum. At some very low temp, it burst the lid free and became a rocket and launched dramatically through the deepfreeze, leaving a rocket exhaust of Coke slush. Edison (talk) 18:58, 8 October 2009 (UTC)
- http://questions.coca-cola.com/NSREExtended.asp?WhatUserSaid=At+what+temperature+will+soft+drinks+freeze%3F Livewireo (talk) 19:17, 8 October 2009 (UTC)
- It was surprising that the Coke was still liquid enough to squirt out as slush, but frozen enough to expand to the point it completely expanded the top and bottom of the can. Edison (talk) 17:56, 9 October 2009 (UTC)
- Ice takes up 9% more volume than water. If a pop can is 120 mm tall, the top 5 mm contain no liquid, it is an approximately uniform cylinder, and the CO2 is negligible, then only about 56 mm (half of the pop) must be frozen for the can to be filled to capacity. If these assumptions are right (and they seem to be close per Edison's deep freeze experiment, in which there was still liquid present after the bulging out of the top and bottom of the can), the contents of the can must be partially liquid when it bursts. Awickert (talk) 08:26, 11 October 2009 (UTC)
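Awickert's back-of-the-envelope estimate can be checked directly. The sketch below uses the same illustrative assumptions stated above (uniform cylinder, 9% volume expansion on freezing, 5 mm of headspace in a 120 mm can, CO2 ignored):

```python
# Check of the estimate above, under the same illustrative assumptions:
# a uniform 120 mm cylinder, 5 mm headspace, ice ~9% bigger than water.
ICE_EXPANSION = 0.09
can_height = 120.0               # mm
headspace = 5.0                  # mm of the can with no liquid
liquid = can_height - headspace  # 115 mm column of pop

# If x mm of the liquid column freezes, the contents fill the can when
# (1 + ICE_EXPANSION) * x + (liquid - x) = can_height, i.e. 0.09 * x = 5.
x = headspace / ICE_EXPANSION
print(round(x), round(x / liquid, 2))  # mm frozen, fraction of the pop
```

That gives roughly 56 mm frozen, just under half the liquid, matching the "only about half must be frozen" figure above.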
Carbon neutral / entirely dependent on renewable energy
Anyone know of anywhere other than Somos in Europe that is totally dependent on renewable energy, or alternatively places that are totally carbon neutral? 188.220.144.215 (talk) 21:53, 8 October 2009 (UTC)
- Do you mean just electricity? Because I don't think anywhere that has any cars would qualify otherwise. Unless you include some kind of offset system... TastyCakes (talk) 21:56, 8 October 2009 (UTC)
- Essentially electricity, although if there was a place with an offset system I'd like to know where it was... 188.220.144.215 (talk) 22:10, 8 October 2009 (UTC)
- Iceland. See Renewable energy in Iceland. --Stephan Schulz (talk) 22:12, 8 October 2009 (UTC)
- I think Quebec would be a more impressive contender. Almost all (96%) of its electricity is produced by hydro and it has a population 25 times that of Iceland. I'm not sure about heating, I suspect many homes are heated by natural gas or even fuel oil, unlike Iceland where most homes are heated by geothermal energy, I believe. TastyCakes (talk) 22:20, 8 October 2009 (UTC)
- Sorry, I re-read your question and see you wanted places in Europe... TastyCakes (talk) 22:21, 8 October 2009 (UTC)
- Yeah, but purely with the view of visiting them, and Europe is significantly closer than Quebec —Preceding unsigned comment added by 188.220.144.215 (talk) 22:34, 8 October 2009 (UTC)
- 99% of electricity in Norway is hydro (see Renewable energy in Norway); so much that they built the NorNed HVDC line to sell it to the Netherlands. -- Finlay McWalter • Talk 23:35, 8 October 2009 (UTC)
- Yeah anywhere with loads of valleys and rivers can support most of their electrical needs with dams. British Columbia qualifies. Vranak (talk) 20:11, 9 October 2009 (UTC)
B-12 Deficiency
As I understand it, vegetarians require B-12 supplementation to avoid deficiency because B-12 is only available in meat. Now, I know that vegetarian diets have been around for thousands of years while B-12 supplements are a rather recent innovation. My question is: before the invention of B-12 supplements, were all vegetarians B-12 deficient? Or is there some other way of getting the required B-12? —Preceding unsigned comment added by 24.62.245.13 (talk) 23:42, 8 October 2009 (UTC)
- Vitamin B12 is also available in things like eggs and milk, which vegetarians can eat. Perhaps you mean vegans. In any case, apparently the body's stores of B12 can often go a long time without running out. 89.242.154.74 (talk) 00:01, 9 October 2009 (UTC)
- It's a very interesting question. Our own History of Vegetarianism article discusses the existence of what would now be described as Vegans in antiquity, with no mention of B12 issues. Someguy1221 (talk) 00:03, 9 October 2009 (UTC)
- A balanced and varied vegetarian diet does not require any supplements. A vegan diet is rather more extreme and supplements or fortified foods are probably required. --Tango (talk) 00:09, 9 October 2009 (UTC)
- There is B12 in milk - the vegetarians who have been around for thousands of years (like, for example, the ancient Greeks) were more than happy to consume dairy products. The only problem is with the vegans - but that particular group have really only been around since the early 1950s - which corresponds pretty closely to the time when Vitamin B12 became available and was known to be present in meat. So everything lines up rather neatly - vegetarians prior to the discovery of B12 didn't need it - vegans after the discovery of B12 had no problem getting it. The only thing I wonder about is how ancient Greeks who were lactose-intolerant managed. It is often claimed that tolerance for lactose in adult humans is a fairly recently evolved trait that came along at about the same time as we developed the ability to farm domesticated dairy animals (sheep, goats, cows). That doesn't leave much time for the Greeks to have developed lactose tolerance...but I think the facts speak for themselves. SteveBaker (talk) 01:03, 9 October 2009 (UTC)
- You're right. There were ancient vegetarians who shunned eggs, but there is no mention of avoiding dairy as well, as far as I can see. Someguy1221 (talk) 01:15, 9 October 2009 (UTC)
- But as for the lactose intolerance, even the average lactose intolerant individual can still process a small amount of dairy, which may have been sufficient. Someguy1221 (talk) 01:17, 9 October 2009 (UTC)
- The dairy thing is IMHO inadequate. There have been vegetarians in China and other East Asian countries for many years. While many of these may not have been strict, some may have been, particularly, I suspect, in some Buddhist vegetarianism. While these people may not have been opposed to the consumption of dairy, I'm not convinced it was a common part of their diet for many of them, be it from cows, goats, yaks, horses, buffalo or something else. Note that neither of our articles mentions milk or dairy products or lacto at all. In fact, while modern-day Indian vegetarians are lacto-vegetarians, I wonder how common this was historically, e.g. Jain vegetarianism 7000 years ago. The fact that people were not opposed to drinking milk doesn't mean they regularly did. Of course it's difficult to know how strict people were. Nil Einne (talk) 11:49, 9 October 2009 (UTC)
B12 is not directly produced by the animals that humans eat, or whose milk and eggs are eaten by humans. B12 is not directly produced by any animals at all. For that matter, B12 isn't produced by any plants at all, either. B12 is unique among the vitamins, in that it is only produced by bacteria. See Vitamin B12#Synthesis.
B12 can be found pretty much anywhere that bacteria can be found, such as on your skin (in small amounts), or in dirt, or in feces. The feces of a human, even a vegan, contains far more B12 than a human needs. Unfortunately, the B12 in human feces is produced by bacteria that are lower in the digestive tract than the ileum, which is the primary absorption site for B12. It used to be possible to get B12 by drinking water from a source that's been contaminated with feces. As the drinking water article says, "Throughout most of the world, the most common contamination of raw water sources is from human sewage... In many parts of the world the only sources of water are from small streams often directly contaminated by sewage." But fecal contamination of drinking water has been eliminated in those parts of the world with sewage treatment plants, which of course are a relatively modern invention.
It also used to be possible to get B12 from the bacteria in dirt. But with modern plumbing (which has only been around for a couple thousand years), it's easy to wash all the dirt off of your vegetables. And any dirt on your skin gets washed off by frequent handwashing and daily bathing. However, bathing every day is a rather recent development. According to Bathing#Western history, bathing once a week was considered to be "frequent" bathing, until only about a century ago. On a personal note, my mom was only given the opportunity to bathe once a week when she was an exchange student in Germany, only about 60 years ago.
Due to all this modern hyper-cleanliness, the only reliable ultimate source of B12 for humans has become the bacteria that are found in the digestive tract, feces and environment of the animals used for food, or for vegans, the bacteria in the big vats of bacteria cultured for producing B12 supplements. Red Act (talk) 05:05, 9 October 2009 (UTC)
October 9
Convert atmospheric greenhouse gases to flood insurance rates
I'd like to see a series of graphs, or maybe I want an applet that can produce them, for describing what changes like "20% less greenhouse gases by 2020" would have on actual flood insurance rates. Where does one start for that sort of thing? 99.62.187.28 (talk) 04:59, 9 October 2009 (UTC)
- I don't think there is any such document. A 20% reduction in greenhouse gases by 2020 is practically impossible. A 20% reduction in new emissions by 2020 is possible, but will still leave us heading to increased climate change, and the effect in 2020 will be marginal. There are a couple of studies on the effect of climate change on insurers, e.g. by the Association of British Insurers[8] and by the US GAO[9]. Your question is very underconstrained (What market? What time? How are the 20% achieved and what is the state of the economy?). But even with a more specific question, I doubt we have reliable prognoses. You can expect rates to develop to match damages (or a bit more, because the insurers need to price in the increased uncertainty), but our estimate of damages has a large uncertainty (that does not even primarily depend on climate). --Stephan Schulz (talk) 07:56, 9 October 2009 (UTC)
- I doubt that even the insurance companies can make that call yet. The problem is that flood insurance rates are not only determined by the number of people who get flooded - they also depend on the number of people who are NOT flooded who decided to buy insurance and pay premiums. So for example, if the news coverage in 2020 were full of "OMFG!!!! Look at teh stooopid peoplez gettin flooded out (lol!) without insurans!" (because that's how the news will read in 2020). If that kind of publicity caused people who live on higher ground to take out insurance in disproportionately large numbers - then it's possible for the ratio of people who are NOT flooded out to those who are could actually increase - and thereby drive the insurance costs down...not up! It's very likely that the law could change in order to help out the insurance companies by requiring flood insurance on all homes (just as we require 3rd party insurance for cars) - or that banks might require it as a condition of getting a mortgage. We don't know (and cannot reasonably guess) what might happen in that regard. However, we might be able to find the number of houses in zones that would be flooded if CO2 levels don't start to level out soon. We could then (presumably) estimate the value of those homes and make some kind of a guess about the total cost of replacing them. SteveBaker (talk) 13:25, 9 October 2009 (UTC)
- The other thing is that insurance companies don't insure against certain losses. If you live next to a river that floods every few years - you won't be able to buy flood insurance at any price. As the insurers determine that sea levels are rising - they'll simply cease to offer flood insurance in areas that are 100% certain to be flooded every few years and take every opportunity to cancel existing policies in those regions. They'll make their money on the boundaries of those "now certain-to-flood" areas where houses that were not even at risk of floods before suddenly find themselves in the new 100 year flood-plain. This might also enable the insurers to keep rates stable, no matter the amount of sea level rise. SteveBaker (talk) 13:57, 9 October 2009 (UTC)
- It would probably be pretty straightforward to do a back-of-the-envelope calculation of temperature-sea level (though there is a significant lag in the warming of the oceans), some related info and references are found on the Current sea level rise page here. Carbon dioxide to temperature to sea level would be less straightforward due to the overprinting of natural variability. For inland areas (rivers and streams), I'd say forget it: there are so many factors that go into weather (as opposed to climate), and drainage-basin-scale weather patterns are what really control flooding in streams, that I would say that the natural variability there will outweigh any predictability of change in the hydrologic cycle over a decadal time-scale. Awickert (talk) 14:12, 9 October 2009 (UTC)
- Last week, I attended a lecture by a geophysicist from the US Geological Survey, Ross Stein whose project was to develop a comprehensive risk-assessment profile based on geophysical data about earthquakes. The challenges of accumulating some very heavy scientific data and putting it into a form that insurance and reinsurance conglomerates can understand are huge. The result is a GEM, "a public/private partnership initiated and approved by the Global Science Forum of the Organisation for Economic Co-operation and Development (OECD-GSF). GEM aims to be the uniform, independent standard to calculate and communicate earthquake risk worldwide." Their goal is to produce a free and open-source applet/web tool that will allow you to do exactly what you want - estimate the likelihood of catastrophic damage (from earthquakes), as measured in a variety of different paradigms (dollars, casualty rates, seismic magnitude, etc).
- Global warming and global sea-change risk falls under the same category - it has huge socioeconomic impact; governments, insurance companies, businesses, and private citizens all have a need to assess the risk; and it is also very hard to quantify the global risk as it relates to a specific region. At the same time, no local region has the resources to coordinate the statistical analysis for these sorts of global-scale data analysis. You might be interested in checking out the GEM project website - to see how one gets started accumulating this scale of information. The seed idea is to address a common need, and the execution of the idea is to interact with the large international organizations (such as the UN, several large insurance conglomerates, and government agencies like the USGS), to coordinate a strategy for real, science-based risk assessment. To my knowledge, no such initiative yet exists for global-climate-change/sea-level-change risk assessment - maybe the OP would like to suggest this angle to the GEM project. Nimur (talk) 14:57, 9 October 2009 (UTC)
"TAIPEI (AFP) – Global warming will cause the amount of heavy rain dumped on Taiwan to triple over the next 20 years, facing the government with the urgent need to beef up flood defences, a scientist warned Tuesday. The projection is based on data showing the incidence of heavy rain has doubled in the past 45 years, coinciding with a global rise in temperatures, said Liu Shaw-chen of Taiwan's leading research institute Academia Sinica. The estimate comes two months after Taiwan was lashed by Typhoon Morakot, the worst to hit the island in half a century, leaving more than 600 deaths in its wake...." -- http://news.yahoo.com/s/afp/taiwanclimatewarmingtyphoon 98.210.193.221 (talk) 18:11, 16 October 2009 (UTC)
Why preheat the oven?
I don't often use an oven for cooking, but whenever I do, the recipe invariably tells me to preheat it. Why? Surely the sooner the food goes in, the sooner it will be ready to come out. My guess is that we would save a lot of energy if people weren't preheating empty ovens. If preheating is important, maybe the oven article should make some mention of it...--Shantavira|feed me 08:03, 9 October 2009 (UTC)
- Just as a guess: the reason is to keep the energy transfer predictable. Different ovens heat at different rates, and so an oven that heats instantly from 70 degrees to 400 degrees would cook food differently than one that went from 70 to 400 over a period of an hour if you put it in before you started heating the oven. Veinor (talk to me) 08:08, 9 October 2009 (UTC)
- Yes, I also think that's why it's done. Cookbook authors don't want to have to mess around with telling you how to adapt the cook time based on how quickly various ovens heat up. It's easier when creating a recipe to just do a preheat, so the cook time will be the same in all ovens. Red Act (talk) 08:21, 9 October 2009 (UTC)
- While this cookbook author explanation is plausible, also note that cooking is a chemical process. Heating a mix of ingredients during some time at some (more or less) fixed temperature produces a result that might otherwise be impossible to produce by gradually heating, even if the same overall amount of energy is used. You probably also don't want your steak first soaked in cold oil and then gradually having it heated... - DVdm (talk) 10:36, 9 October 2009 (UTC)
- I'd agree with that, if you start cooking meat at too low a temperature, it will boil in its own juices rather than frying. Also your food isn't going to be that cooked by the time your oven comes up to temperature so preheating isn't going to use that much more energy. Smartse (talk) 11:18, 9 October 2009 (UTC)
- The times wouldn't even be constant for a given oven because you might want to first cook one thing - then cook something else, with the oven already hot. You'd never get consistent results. It's also possible that the cooking process might involve multiple chemical reactions that each happen at different temperatures - so by not pre-heating to the temperature where all of the reactions happen at once you might have undesirable results as one reaction starts before the others and lasts for a longer time. SteveBaker (talk) 12:38, 9 October 2009 (UTC)
- And come to think about it - the energy savings by not pre-heating might not be that great. Assuming your oven is reasonably well insulated (which they seem to be) - most of the cost is in heating the thing up from cold. Once it's already hot, the heating elements only have to provide enough energy to overcome the losses due to inadequate insulation. If you're really bothered about this - your best strategy is probably to pre-heat per instructions - but shut off the heat a little before the cooking time is over, relying on the insulation to keep the oven hot enough for cooking to continue to completion. SteveBaker (talk) 12:42, 9 October 2009 (UTC)
- Some recipes call for pre-heating the oven, some do not. Often these recipes were written decades ago. It's not about energy savings, it's about what trial-and-error has determined to be the best way to cook something. →Baseball Bugs What's up, Doc? carrots 12:51, 9 October 2009 (UTC)
- Indeed, and sometimes very odd or pointless sounding things have a huge effect on the overall food chemistry. Most bread recipes call for you to spray water in the oven first, something my wife skipped for a long time because she thought the effect would be minimal. It turns out that it really affects the quality of the crust to have that extra humidity in there. --Mr.98 (talk) 12:58, 9 October 2009 (UTC)
- The predictability thing is certainly the main reason (same reason recipes call for unsalted butter and a certain amount of salt -- they don't know how salty your butter is). Another possible reason is safety: the "danger zone" for food spoilage is 40-140 degrees F, and putting the food in a hot oven will minimize the time it spends in that zone. --Sean 13:35, 9 October 2009 (UTC)
- Analogous to this is that when cooking something on the stove, sometimes you heat it to a boil first before adding other stuff, other times you bring the whole thing slowly to a boil. →Baseball Bugs What's up, Doc? carrots 13:50, 9 October 2009 (UTC)
- If you are concerned about saving energy, I always put things in the oven before it preheats, and my baked goods turn out fine (so long as I keep an eye on them). The stovetop not so much - noodles, for example, will waterlog if you put them into the water too early, so I generally boil first (unless I'm in a hurry and don't care about the consistency). Awickert (talk) 14:02, 9 October 2009 (UTC)
- Yuck. Dauto (talk) 19:03, 9 October 2009 (UTC)
- Well, you have to understand, I have no feeling in half of my mouth, so it is less of a problem. But yes, I don't do that when cooking for other people. Awickert (talk) 08:37, 11 October 2009 (UTC)
When I worked as the oven man in a bakery the oven cycle was so fast that pre-heating became difficult, and I actually found that the ambient air temperature had to be taken into account regarding the cooking time to get reliable results. Pre-heating avoids the need for such calculations.Trevor Loughlin (talk) 14:21, 9 October 2009 (UTC)
- An oven thermometer or judicious use of the temperature control, or listening for a click when the setpoint is reached, can tell you how fast a given oven heats up. Some dishes are more demanding of a certain high temperature than others. I would expect that cakes and breads are more demanding, or that initial lower temperature will have more of an effect. My oven takes about 6 minutes to get to 400 F. Putting a frozen pizza or some such in before the setpoint is reached saves a couple of minutes compared to waiting for the setpoint to be reached, and if you need to eat and run it makes some sense. Certainly the minutes at 300 degrees will not contribute as much to the browning as the minutes at 400, but a minute at 300 F does contribute somewhat. The total time in the oven needs to be increased because of the lower than desired initial temperature, but the food gets to the table faster than if I wait for the oven to completely preheat. If the box says "12 to 15 minutes" I basically use the longer extreme and put the food in at 300 F. It should save energy, because the (gas) oven is not a sealed insulated container. There is a vent which has hot air escaping, and makeup air enters at the bottom. The oven cycles on and off during the baking cycle. For bread, I would wait for the desired temperature. For a roast or a chicken, I will wind up judging doneness by a meat thermometer, so I put it in when I turn the oven on. It's going to be a very long time anyway, and that gets dinner on the table sooner. Edison (talk) 15:39, 9 October 2009 (UTC)
- Yoicks! The reason for preheating the oven is that the results often suck if you don't. Lots of baked things require the outer surface to be heated a lot more than the inside in order to come out right, and that won't happen properly if the oven isn't preheated. For some things, such as a baked potato wrapped in foil, it doesn't matter, but if you try to bake bread in a non-preheated oven, the results will be pathetic. Looie496 (talk) 17:29, 9 October 2009 (UTC)
I never ever pre-heat the oven. It's a waste of natural gas. Vranak (talk) 20:10, 9 October 2009 (UTC)
In addition to the culinary points made above, the oven itself may behave differently during preheating: for example, both the top and bottom elements may come on until the temperature difference from the set-point is small. If what you're cooking is only supposed to be heated from below, you don't want that. --Anonymous, 04:47 UTC, 2009-10-10.
- This is why I preheat my toaster oven (Doubtless wasting untold electrical energy.), because if I don't it completely fails to cook anything evenly. (Whether it's bagels or instant pizzas.) 15:04, 10 October 2009 (UTC)
Wow, no one has actually given the main point yet. When active, the burners in an oven are locally much hotter than the set point of the oven. As a result, they generate a lot of excess infrared radiation. This radiation will tend to char the outside surface of your food. For some foods this is fine, for others it will significantly impact the taste and quality of the food. Cooking with infrared radiation is called broiling, and is often preferred for meats, but it is distinct from baking where the goal is to cook with hot air. By preheating the oven you create the pocket of hot air desired for baking and then the burners turn themselves down/off. By not placing food in the oven until after the oven's burners are reduced you minimize the food's exposure to the excess infrared radiation that could char your food. (In some situations you can also do this by covering the food with aluminum foil from all sides.) Preheating really is about controlling the way in which your food is cooked. Dragons flight (talk) 23:44, 10 October 2009 (UTC)
- That makes sense about the evenness-unevenness, but (pardon my ignorance, as an excuse I work entirely in the solid or liquid state) I would assume that the air also heats food by giving off IR radiation, albeit evenly. Is this correct? Or is it more of a molecular collision type heat transfer deal? Awickert (talk) 08:37, 11 October 2009 (UTC)
- At typical baking temperatures, e.g. 350F, the primary mode of heat transfer is still conduction by air molecules colliding with the food. Dragons flight (talk) 19:41, 11 October 2009 (UTC)
- OK, thanks, and I should have seen that - of course there is so little blackbody radiation at 350K that conduction would dominate. Thanks again, Awickert (talk) 01:38, 12 October 2009 (UTC)
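The burner-versus-air point in this exchange can be made roughly quantitative with the Stefan-Boltzmann law. A sketch under stated assumptions: the ~1100 K figure for a dull-red heating element is an illustrative guess, not a measured value.

```python
# Stefan-Boltzmann: blackbody radiant exitance scales as T^4, so a glowing
# element far out-radiates 350 F (~450 K) oven walls per unit area.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_flux(temp_k):
    """Blackbody radiant exitance in W/m^2 at temperature temp_k (kelvin)."""
    return SIGMA * temp_k ** 4

oven_walls = radiant_flux(450.0)   # ~350 F oven interior, ~450 K
burner = radiant_flux(1100.0)      # assumed dull-red element, ~1100 K
ratio = burner / oven_walls
print(round(ratio, 1))  # the element out-radiates the walls ~35x per area
```

Once the element cycles off after preheating, the food sees only the much weaker ~450 K radiation plus conduction from the air, which is consistent with the charring argument above.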
Design with dimensions
I want to know where I can find the exact design of any engine (especially IC engines) with dimensions and specifications. I want an engine to be part of my mini project, which I will be doing with AutoCAD. To do my project, i.e. the drawing, I need to know the exact dimensions of the valves and other components. Please help! —Preceding unsigned comment added by 122.169.170.8 (talk) 11:37, 9 October 2009 (UTC)
- I was going to try and help you by measuring the pieces (to the right), but I can't get the engine to stop moving. User Grbrumder (talk) 00:19, 10 October 2009 (UTC)
- Not sure this is what you meant, but at least in Firefox, pressing the Esc key will stop the animation.–RHolton≡– 22:26, 13 October 2009 (UTC)
- Esc also stops the animation in Vista Explorer. You have to reload the page to get it moving again. You can look at the individual frames of the animation in a suitable image editor such as GIMP which is free software. Cuddlyable3 (talk) 12:35, 15 October 2009 (UTC)
- Unfortunately, even a single cylinder motorcycle engine, like the one shown here: [10], is of such complexity that you couldn't possibly show all the dimensions on a single diagram (of reasonable size), and still have them be readable. StuRat (talk) 18:35, 13 October 2009 (UTC)
- I want at least the basic dimensions, like the diameter of the piston head and the radius of the crankshaft... the stiffness of the spring in the valve and the materials used in the construction of the engine.
Shouldn't the list of artificial objects on the moon be edited?
LCROSS's Centaur upper stage is up there now. —Preceding unsigned comment added by Dakiwiboid (talk • contribs) 12:43, 9 October 2009 (UTC)
- So what's stopping you? →Baseball Bugs What's up, Doc? carrots 12:50, 9 October 2009 (UTC)
- Indeed "Wikipedia - the free encyclopedia that anyone can edit.". Given the speed it whacked into that crater, there may not actually be much of it left on the moon! However, I agree that it belongs on that list. SteveBaker (talk) 13:11, 9 October 2009 (UTC)
- It wouldn't be the first lunar object that was intentionally augered in. In fact, all of the ejected LM's would have crashed. "Rest In Pieces". →Baseball Bugs What's up, Doc? carrots 13:19, 9 October 2009 (UTC)
- WP:BOLD Ks0stm (T•C•G) 15:04, 9 October 2009 (UTC)
LCROSS moon impact
It has been over 6 hours since the impacts. Earlier, it was predicted that 10 inch telescopes or larger in the Western US should see a plume. Apparently the scientific "shepherding" LCROSS craft following the initial impactor saw no plume. Did Hubble or any observatory see anything? When is NASA expected to release initial analysis of the instrumentation findings from LCROSS? It seems like a replay of the Ranger 7 lunar impact spacecraft from 1964, except that the older mission beamed back clearer pictures. Edison (talk) 17:50, 9 October 2009 (UTC)
- I know this is the Internet age and all, but I'd relax and wait a few days. From the CNN article: "NASA said Friday's rocket and satellite strike on the moon was a success, kicking up enough dust for scientists to determine whether or not there is water on the moon." Comet Tuttle (talk) 18:04, 9 October 2009 (UTC)
- While I trust NASA not to completely lie, it doesn't seem unreasonable to expect them to put a PR spin on the event by calling a disappointing result "a success". I don't know if this is really the case, but it is at least possible. StuRat (talk) 17:48, 13 October 2009 (UTC)
- "I Aim at the Stars... but sometimes I hit the moon."--Mr.98 (talk) 20:36, 9 October 2009 (UTC)
- The plume (which, incidentally, was expected to be made up of about 350 tons of moon-rock!) was supposed to be clearly visible in amateur telescopes down to maybe 10" - so it ought to have been pretty big. If there was anything to be seen, it should have been seen from earth. But the idea of the LCROSS widget was that it would fly through the plume - getting a direct sampling of it...shortly before crashing itself and kicking up another 150 tons of moon. I heard speculation this morning that the crater might actually be a LOT deeper than was originally thought. These tests are really amongst the most important ever done beyond the earth's orbit - the answer to the question of whether there is water on the moon in readily 'gettable' quantities should be molding mankind's entire manned spaceflight future. It's doubly important that it be found in the moon's polar regions, where 24-hour-per-day sunlight would be available for solar panels to turn it into breathable oxygen and hydrogen+oxygen for rocket fuel. It is literally the case that if we find water there, we go back to the moon next - and if not, we abandon it as a largely uninteresting desert and head off to Mars. SteveBaker (talk) 21:32, 9 October 2009 (UTC)
- Thanks Steve, where can I find a reference for that? User Grbrumder (talk) 00:23, 10 October 2009 (UTC)
- Our own article about LCROSS#Mission covers the territory pretty well and has lots of good references. SteveBaker (talk) 01:44, 10 October 2009 (UTC)
- 15.5 hours post-impact. Thud. Dead. No data. Edison (talk) 03:22, 10 October 2009 (UTC)
- There's a difference between 'no data because it didn't work' and 'no data released to the public yet because they haven't been fully analysed and only twenty people on the planet would have any use for the raw numbers.' It took 15 years for the meat of the Ardipithecus studies to be published; I doubt if NASA will take quite that long to fulfil its legal obligation of publishing the LCROSS results. In the meantime, try to cultivate some zen-like patience :-) . 87.81.230.195 (talk) 07:40, 10 October 2009 (UTC)
- I realised no one properly answered your question. Various sources have said about 2 weeks or more. E.g. [11] [12]. I agree with you that the visible results were fairly disappointing. And it does seem to me this was not just overenthusiastic media or people used to movie visual effects, but NASA themselves. They seemed to be hyping the mission and the likely visuals. They seemed to be supporting LCROSS parties and didn't do anything to discredit the claim that you'd likely see something with a good enough telescope. They released the video showing the plume and didn't say anything about likely not actually seeing any plume. And it seems I'm not the only one [13] [14] [15] [16] [17]. I personally only watched NASA TV. There are some images now [18] which do show something; if they'd shown these around the time (perhaps a few minutes afterwards) I think people would have been more satisfied, but that didn't happen. It seems the most 'exciting' thing at the time was the high-five incident. A lot of science is decidedly unimpressive, and the media are guilty of overhyping a lot of stuff, but in these cases it was the scientists, or perhaps more accurately the PR people for the scientists, who were a big part of the problem. (To use a different example, with the LHC I did watch part of the launch briefly. There was little to see of course, but that was what I was expecting.) Nil Einne (talk) 06:40, 11 October 2009 (UTC)
(outdent) From what I hear from the science folks, the lack of a highly visible plume may actually be a good thing: it may mean that the impactor hit in a deep patch of lunar regolith deep in the crater, giving probably the best set of spectra to answer the water question. I'm sure it will be a while before preliminary results are released: I can imagine that deconvolving complex spectra can be difficult especially if you want to be absolutely correct before releasing it to the news media. Awickert (talk) 08:41, 11 October 2009 (UTC)
- I'm also curious as to why the plume is less visible than anticipated. I suspect that this means that the plume did not leave the shadow in the crater and rise into the sunlight. This, in turn, could be caused by the impact being slower than anticipated or at a shallower angle. I assume that NASA is able to control the velocity fairly precisely. Perhaps there was a protrusion which it struck at a shallow angle (maybe the central cone which many meteor craters have ?). Something like this:
[ASCII sketch of a crater cross-section with a central cone]
- So, does that crater have a central cone, and could the impact have happened there ? StuRat (talk) 17:48, 13 October 2009 (UTC)
are quantum effects the sole reason why heat capacity goes to zero at 0K?
I get the idea that the number of possible microstates decreases as you go to 0K, which explains why heat capacity decreases, and that quantum effects simply enhance this. But my prof tells me quantum effects are the sole reason. If so, would the increase in microstates with respect to temperature be responsible for the inflection point in the heat capacity versus temperature graph? John Riemann Soong (talk) 20:37, 9 October 2009 (UTC)
- I don't think the concept of counting microstates makes sense outside quantum mechanics -- a classical system does not have a finite number of states for a given energy. And what do you mean by inflection point here? Looie496 (talk) 21:42, 9 October 2009 (UTC)
- There's an inflection point (point at which the derivative is 0) at 0K in the heat capacity as a function of temperature. See the graph at Specific heat capacity#Solid phase. Red Act (talk) 22:26, 9 October 2009 (UTC)
- As a former calculus teacher I have to object that an inflection point is defined as a point where the second derivative changes sign, not a point where the derivative is zero. Basically "inflecting" means changing from upward curvature to downward curvature, or vice versa. You can't have an inflection point at the edge of a function's domain. Looie496 (talk) 03:38, 10 October 2009 (UTC)
- Oops! My bad! Red Act (talk) 04:45, 10 October 2009 (UTC)
- For the theory of the temperature dependence of heat capacity at low temperatures, see Debye model. Red Act (talk) 22:26, 9 October 2009 (UTC)
- I don't understand all the finer details of that article. I know that there is a quantum contribution to the reduction in heat capacity, but what about the microstate-macrostate-entropy contribution? And you can count microstates classically -- via probability and statistical mechanics. John Riemann Soong (talk) 22:45, 9 October 2009 (UTC)
- The quantum contribution is the microstate-macrostate entropy contribution. Assume that heat capacity is positive. Then as temperature decreases, so does internal energy. As internal energy decreases, the energy level spacings in a given quantum system will "look bigger". In the limit that internal energy goes to 0, the energy level spacings will look infinitely large, so there is only one "possible" microstate: every particle in its lowest possible energy level. Thus in the limit that internal energy (and likely temperature) goes to 0 in a quantum system, entropy should also go to 0. In a classical situation, this argument doesn't work because the possible energy levels are distributed in a continuum. Someone42 (talk) 11:28, 10 October 2009 (UTC)
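The "energy level spacings look bigger" argument above can be made concrete with the Einstein solid, a simpler cousin of the Debye model linked earlier in the thread. A minimal sketch (the Einstein temperature of 300 K is an arbitrary illustrative choice): quantized level spacing makes C vanish as T → 0, while at high T the model recovers the classical Dulong-Petit value 3R.

```python
import math

R = 8.314462618  # gas constant, J mol^-1 K^-1

def einstein_heat_capacity(T, theta_E=300.0):
    """Molar heat capacity of the Einstein solid (J mol^-1 K^-1).

    theta_E is the Einstein temperature (an assumed value here).
    C = 3R x^2 e^x / (e^x - 1)^2 with x = theta_E / T.
    Note: math.exp(x) overflows for x > ~709, so very small T needs care.
    """
    if T == 0:
        return 0.0
    x = theta_E / T
    return 3 * R * x ** 2 * math.exp(x) / (math.exp(x) - 1.0) ** 2

for T in (1.0, 30.0, 300.0, 3000.0):
    print(f"T = {T:6.1f} K  ->  C = {einstein_heat_capacity(T):.4g} J/(mol K)")
```

Real solids follow the Debye T^3 law at low temperature rather than the Einstein model's exponential falloff, but both capture the key point: no quantization, no vanishing heat capacity.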
biosynthesis of creatine
The article creatine shows an R-NH3+ cation attacking an imine C=N center (that's the only way to form the new C-N bond).... but ammonium cation can't possibly have any lone pairs to donate. Am I missing something? John Riemann Soong (talk) 22:59, 9 October 2009 (UTC)
- It's biochemistry, so the answer is almost always "some enzyme":) The article says (and links to) Arginine:glycine amidinotransferase. There, you can see individual steps of the catalytic cycle, including the exact fate of one of the H on the R-NH3+. DMacks (talk) 07:15, 10 October 2009 (UTC)
October 10
Inbreeding
Is there any way to prevent the negative effects of inbreeding in the South China Tiger? All captive South China Tigers today descend from 2 males and 4 females caught in the 1950s and 1970s. What if there weren't any wild ones left to breed with? Is there any way for an inbred animal to continue on? --Queen Elizabeth II's Little Spy (talk) 03:48, 10 October 2009 (UTC)
Lines exhibiting inbreeding depression can be weeded out via evolution. Genetic diversity will just have to start afresh. I think that the entire human race was once at a bottleneck population of 10 individuals, from what DNA evidence tells us. John Riemann Soong (talk) 04:47, 10 October 2009 (UTC)
- According to Population bottleneck#Humans, the human population dropped to a few thousand, it doesn't mention any bottleneck as extreme as 10 individuals (and excludes a bottleneck of a single breeding pair). --Tango (talk) 18:41, 10 October 2009 (UTC)
- That is a bit of a leap, JRS. The whole present gene pool may easily be entirely descended from 10 individuals without them ever being the only population at any instant. The whole of humanity being at a population of ten seems on balance less likely than that a particular group which only had ten members went on to be the only surviving population without interbreeding... --BozMo talk 10:34, 10 October 2009 (UTC)
- Closer to home, Cheetahs also have very low genetic diversity, again interpreted as indicating a past genetic bottleneck event. In the case of the South China tiger, it is likely the South China Tiger Project is following procedures previously applied in breeding other endangered species, whereby potential captive breeding pairs are carefully selected from the individuals available in zoos worldwide so as to maximise genetic mixing within the available gene pool (the article hints at this), and will presumably also release animals back into the wild in locations calculated to maximise the likelihood of similarly diverse uncontrolled matings. 87.81.230.195 (talk) 07:30, 10 October 2009 (UTC)
Yawning phenomenon
Why do people usually yawn simultaneously ? —Preceding unsigned comment added by 113.199.155.180 (talk) 04:02, 10 October 2009 (UTC)
- Yawning is infectious. Rkr1991 (Wanna chat?) 04:19, 10 October 2009 (UTC)
- That's just restating the question. Why is it "infectious"? --Anonymous, 04:51 UTC, 2009-10-10.
- No, it's providing a link to a wikipedia article, wherein one can read about the topic of the question. It's not clear why the questioner evidently didn't look in this most obvious of places, or if so, what particular information there was unclear. DMacks (talk) 05:33, 10 October 2009 (UTC)
I looked at the article. Did you? It doesn't answer the question, just makes a flat statement that "yawning has an infectious quality... which is a typical example of positive feedback", in humans and chimpanzees. --Anonymous, 18:05 UTC, 2009-10-10.
- Maybe try reading the whole article next time? There's a whole section dedicated to it: Yawning#Contagiousness. Perhaps it could be more detailed, but it seems to give the right idea, which is, as discussed below, that we don't know; there are lots of hypotheses. Incidentally, I do feel that Rkr1991's answer was poorly phrased, since it wasn't that clear to me he/she was saying to look at the article (although I also agree the OP ideally should have looked at it first and then come here with any further questions). Nil Einne (talk) 05:56, 11 October 2009 (UTC)
- Apologies. I searched ahead for other mentions of "infectious" but it didn't occur to me that the article would switch to using a different word part way through. --Anonymous, 20:10 UTC, October 11, 2009.
- Short answer to "why is yawning infectious?" - research is ongoing, several possible explanations have been proposed, but none of them has yet been shown to be correct. Indeed, there is no general agreement on why people or other creatures yawn at all. 87.81.230.195 (talk) 07:11, 10 October 2009 (UTC)
- Though I do know that just reading this made me yawn. :-) 209.244.187.155 (talk) 12:28, 10 October 2009 (UTC)
For the same reason that people laugh together. We're just very very social creatures and what one person does influences everyone else in remarkable fashion. Vranak (talk) 23:23, 10 October 2009 (UTC)
- Well, none of these really answer the question. The truth is that the answer is simply not known. There are a lot of speculations about the mechanisms and functions of contagious behaviors, but very little in the way of cold hard facts. The existence of mirror neurons (which is controversial itself) probably has something to do with it. Looie496 (talk) 00:03, 11 October 2009 (UTC)
- I agree that we don't have an answer for this one. However, I believe the "mirror neuron" theory of yawning is busted. I forget where I saw that - but the existence of people like myself with Asperger's syndrome or Autism (which is in part a manifestation of the lack or malfunctioning of mirror neurons) provides an easy test. I certainly get caught up in the infectious yawning thing - yet I'm completely useless at picking up what other people are thinking. The infectious nature of yawning is obviously happening at a deeper level than things like copying other people's body language when you agree with them. The most convincing explanation I've heard is that yawning is pre-linguistic communication (like laughing) and it comes from deep in our evolutionary past. The meaning is thought to be something like "we all need to switch activities right now", so when one person yawns, the other people yawn back to indicate that they understood the message - so pretty soon everyone in the group has seen the message. Moving from an activity to sleeping is just one of those things that we all tend to do together. I think it's significant that even talking about yawning makes one want to yawn - and reading a joke can make you laugh - which suggests that perhaps our higher-level language abilities still have connections back to this pre-language stuff. SteveBaker (talk) 03:56, 11 October 2009 (UTC)
- Our article seems to claim the opposite: "A 2007 study found that young children with autism spectrum disorder do not increase their yawning frequency after seeing videos of other people yawning, in contrast to typically developing children. This supports the claim that contagious yawning is based on the capacity for empathy.[24]" Nil Einne (talk) 06:00, 11 October 2009 (UTC)
- I kid you not - I went to the yawn article, which I had never seen before, and started yawning as soon as I saw the illustration. Although it might just be mother nature telling me it's time to pack it in for the night. →Baseball Bugs What's up, Doc? carrots 08:05, 11 October 2009 (UTC)
thermodynamic resonance stabilisation
They always tell me that delocalised charge => more stable, and intuitively I can see why. But what is the rigorous explanation? Does it relate to entropy and the distribution of energy among more microstates? John Riemann Soong (talk) 04:45, 10 October 2009 (UTC)
- Rigorous explanation. Hmm. Ultimately the only rigorous explanation comes from the quantum mechanics equations, which do not map reversibly onto the English language. As with so much of physics and the world, the "characterisation" we use is just a way of looking at it, not per se rigorously true, because whichever way you do it is a simplification. The way I look at stability is in terms of intermediate energy states (in a quasi-static kind of way). Delocalisation must have a more favourable Gibbs free energy or it would not happen, but stability is more to do with how much of the structure (bonds etc.) you have to disrupt to get to a reacted state and how big a hill that is to climb. The intermediate radicals are generally a big distortion. Put it another way: if you say you understand it intuitively, then you understand it. --BozMo talk 06:45, 10 October 2009 (UTC)
- I consider more that electron delocalization (the resonance itself) is what increases the stability. Charge is just an artifact or result of the electron distribution, not the actual/primary feature that is delocalizing. LCAO is a convenient way to look at the electronic structure and see which atoms are electron rich vs poor. For example, allyl cation valence electrons are spread across all three carbon atoms but C2 provides more orbital contribution as this is the most constructive overlap (the σ looks like oOo). So C1 and C3 are relatively electron-deficient and σ is symmetric, so "a bit of a positive charge equally on each those positions". DMacks (talk) 07:09, 10 October 2009 (UTC)
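The allyl picture above can be made quantitative at the simple Hückel level. A minimal sketch in reduced units (alpha = 0, beta = -1 are illustrative choices, and comparing against an isolated C=C pi bond is the standard textbook reference, not something stated in this thread): the delocalized pi system of the allyl cation comes out lower in energy than the localized alternative.

```python
import math

def huckel_levels(n, alpha=0.0, beta=-1.0):
    """Pi energy levels of a linear chain of n p-orbitals in simple Hückel
    theory: E_k = alpha + 2*beta*cos(k*pi/(n+1)), k = 1..n.
    alpha and beta are in illustrative reduced units (beta < 0)."""
    return sorted(alpha + 2 * beta * math.cos(k * math.pi / (n + 1))
                  for k in range(1, n + 1))

allyl = huckel_levels(3)          # [-sqrt(2), 0, +sqrt(2)] in these units
E_delocalized = 2 * allyl[0]      # allyl cation: 2 pi electrons in the lowest MO
E_localized = 2 * (0.0 + (-1.0))  # 2 electrons in an isolated C=C pi bond

# Positive value = delocalization is stabilizing: 2*(sqrt(2)-1)*|beta|
print("resonance stabilization:", E_localized - E_delocalized)
```

The lowest MO in this model also has its largest coefficient on the central carbon, matching the "oOo" shape described above.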
- With regards to reactivity my rationalisation is that for example, it's harder for electrophiles to "attack" delocalised negative charge and nucleophiles to attack delocalised positive charge, but that's of course based on my intuitive understanding. In my mind I still have Gauss' law enclosed charge mixed up with all these other concepts.
- Okay, let's take say, reactivity of esters and amides towards nucleophilic substitution. So in simple terms, if Nu: attacks, it's harder to attack delocalised charge (especially if it's delocalised on the electrophilic carbon!) and you break resonance stabilisation and so on. But in reality, what is going on? Do the pi electrons move onto higher energy orbitals? John Riemann Soong (talk) 07:21, 10 October 2009 (UTC)
- You are not talking about discrete particles, but wavefunctions. I would caution you not to think too much in terms of "in reality". Trying to understand a complicated function (whether in Quantum Electronics or Fluid Dynamics or whatever) as though the truth was in the words and concepts and the equations were subservient to that will get you so far, but as you get more into research it will not help you. I assume from the questions that you are approaching the point of becoming a researcher rather than just learning. So you need to know the reality is the (huge, continuous) data set, just like the universe is an unimaginably big data set. All mental concepts (especially when referring to electrons as discrete particles) are at the very best only convergent to truth if you allow an indefinitely long description, or you work in a massively, irreversibly compressed data set (such as the conceptual framework we use for everyday life). The fact that short descriptions ("physical characterisations") work so well sometimes is remarkable, and working by concepts and patterns is a powerful way forward. But it does not make the descriptions "in reality". --BozMo talk 08:59, 10 October 2009 (UTC)
ring formation
I am quite frustrated by my class, because ring formation is presented as such a trivial step (and generally presented with hand-waving) when, to my mind, it is puzzling. I know rings are unsaturated and that 5- or 6-membered rings are the best, but is there some big summary article that covers all the basic aspects of ring formation - driving forces, mechanistic geometries, stereoselectivities, etc. - so I don't get tripped up on exams?
It's frustrating because I know the C-C bond forming mechanisms and everything goes well with linear molecules but I simply get tripped up by the sudden transition to rings. John Riemann Soong (talk) 06:33, 10 October 2009 (UTC)
- The real key is that rings aren't intrinsically "different" from non-rings. An intramolecular reaction is just two reactive components that happen to be attached to the same skeleton rather than two separate skeletons. What a ring does is constrain the motion or geometric possibilities of the reaction and add some structural concerns regarding stability. By looking at how the reactants are arranged in their reactive conformation, one can often see the same structure/stability issues as in the product. Intramolecularity also affects reaction rate: faster if two reactive centers are constrained to be near each other in the proper reactive conformation, slower if they are constrained to be unable to approach each other for the reaction mechanism. If you know 5 and 6 are stable ring sizes, then forming them is relatively easy, whereas other sizes are harder to close and/or more likely to undergo ring-opening reactions. Cyclohexane covers details of the stabilities and conformations of that ring, so, for example, it will be hard to close a cyclohexane if the reactive conformation involves having a sterically large group in an axial-like position. It doesn't matter if the ring itself is actually closed--the stability issue is based on local conformational effects. DMacks (talk) 06:53, 10 October 2009 (UTC)
- Okay, what's a good way to efficiently predict whether a substituent will be axial or equatorial? Are you expected to work out all these conformational combinations in your mind? Is it time to bring back the model kit?
- Also, are there any rules of thumb for positioning reacting groups during a retrosynthesis? Sometimes I know I essentially need electron donating and electron withdrawing sites but my major problem (with rings) is getting them to fit on a small molecule without them being too far away or too close, and then I need to make sure my electron withdrawing group or electron donating group doesn't interfere or can be eliminated or reduced/oxidised conveniently at the end of the (pencil and paper) synthesis. Any tips? John Riemann Soong (talk) 07:10, 10 October 2009 (UTC)
- The first tip is: practice. The more experience you have, the more efficiently you can analyze a structure and recognize it as fitting a pattern you know.
- There is often no really foolproof way except to try lots of possibilities to see what looks good enough and has no obvious flaws. Try lots of different retrosynthetic disconnections. Look at many different aspects of a proposed starting material (and especially try to find reactions other than the one you want) and then decide which ones have structural or mechanistic problems and also which ones have more than one particularly good kinetic/thermodynamic effect. The real issue (and one that plagues every synthetic chemist!) is avoiding tunnel vision (seeing what you want to happen rather than some other possibility). Real chemistry is hard precisely because nature is even cleverer than we are, and can find the one reaction or structural special-feature we neglected to notice, or even a previously completely unknown "better" reaction than the one planned via retrosynthesis. On-paper is easier, and school-work is easier still because you only are expected to know a limited set of reactions.
- The second tip is: for geometry, it's almost always time to bring out the model kit unless you can visualize and diagram well (i.e., you rapidly get correct answers:) in 3D. If you are making a cyclohexane, draw the starting material in the same conformation as the product--maybe start by drawing the product and then erasing the bond being formed in the reaction. If you have studied cyclohexane conformations (most orgo texts devote many pages and maybe even a whole chapter to this specific topic!) you definitely know "axial is awful" (well, not always, because there could be other competing effects, and sometimes only "a little worse", depending on what group is axial and because other conformations could have other instabilities). If you haven't looked at cyclohexanes lately, time to go back and reread. Or if you haven't learned that section yet but are at least expected to know about conformational effects, you can look at models and check for eclipsed and gauche interactions (the whole cyclohexane conformations topic is just a specific application of those same fundamental effects).
- The third tip is to practice. A lot. And then some more. There are rules of thumb for every reaction type, and certain types of structures "look like the product of a certain reaction" or "don't look like the product of a certain reaction" once you're familiar with the reaction. The ring positions of the donor and acceptor groups in Diels-Alder and electrophilic aromatic substitution reactions have particularly clear patterns. But all that can't be taught directly except by studying the reactions. DMacks (talk) 07:50, 10 October 2009 (UTC)
- "Predicting" stable configurations is the realm of molecular dynamics - which is a heavy-duty supercomputing-scale problem. A lot of empirically observed configurations are known to be stable; it's a difficult task to work out from first principles of atomic interactions why these geometries would actually have a lower energy. Nimur (talk) 13:49, 10 October 2009 (UTC)
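As a toy illustration of what "working out a stable geometry from first principles" means at the smallest possible scale, here is a steepest-descent relaxation of a single Lennard-Jones pair (my own minimal sketch in reduced units; real molecular-dynamics codes handle thousands of atoms with far more sophisticated force fields and minimizers, which is why it is supercomputer work):

```python
def lj_energy(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones pair potential in reduced units."""
    return 4 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)

def lj_force(r, epsilon=1.0, sigma=1.0):
    """Force -dE/dr for the Lennard-Jones potential."""
    return 24 * epsilon * (2 * sigma ** 12 / r ** 13 - sigma ** 6 / r ** 7)

# Start the pair too far apart and walk downhill in energy.
r, step = 1.5, 0.01
for _ in range(5000):
    r += step * lj_force(r)   # steepest-descent step toward lower energy

# The analytic minimum of this potential is at r = 2**(1/6) * sigma ~ 1.122.
print(f"relaxed distance: {r:.6f}")
```

The same idea - follow the forces until they vanish - underlies geometry optimization for real molecules, just in a vastly higher-dimensional space.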
h1n1
- Moved from Talk:2009 flu pandemic
I am having trouble getting out of bed when lying down, plus it hurts to cough and sneeze; my chest feels like someone hit it with a hammer. —Preceding unsigned comment added by 173.81.224.6 (talk) 07:11, 10 October 2009 (UTC)
- Oh yes? And what about the advice sought for the Chinese Tiger above? I don't see you up there.
- Answers to a question about the inbreeding issue of the South China Tiger do not constitute medical advice under the terms of our guidelines. On the other hand, directly diagnosing a set of symptoms in some actual, specific, living person is flat out not allowed. Please read: Wikipedia:Reference_desk/guidelines#What_the_reference_desk_is_not...with special reference to the third bullet point. Any debate about that belongs on our discussion page, not here. SteveBaker (talk) 17:38, 10 October 2009 (UTC)
Person posting the question, you are describing a person who is ill. You can only get a good explanation from a doctor. You could be experiencing anything from flu to broken ribs or pure imagination. GO SEE A DOCTOR. Preceding medical advice given by ~ R.T.G 12:15, 10 October 2009 (UTC)
The only medical advice Wikipedia gives is: go and see a doctor or other qualified professional. ~ R.T.G 12:15, 10 October 2009 (UTC)
Mushroom from Virginia
I found these mushrooms beneath a pine tree in Virginia, southwest of Washington. Does anybody know what they are? I think they look similar to an Amanita muscaria but without the red colour. Thanks--Stone (talk) 07:47, 10 October 2009 (UTC)
- Maybe Amanita cokeri, if so the article is in need of a picture. Mikenorton (talk) 08:10, 10 October 2009 (UTC)
- or Lepiota cristata. What is the smell like?--BozMo talk 09:31, 10 October 2009 (UTC)
- Looks a lot smaller than the ones I saw. 20 cm (8 inches) was the diameter of the largest ones. The Amanita muscaria I have found here in Europe look most like those I found in Virginia. Amanita smithiana and Amanita solitaria also don't look that different.--Stone (talk) 10:58, 10 October 2009 (UTC)
- Very difficult to identify the actual species without information that we don't have, e.g. in A. cokeri the shape of the bulb and the spores is diagnostic[19]. The cap size can be up to 15 cm (6 inches), so not much smaller than the ones you saw. The location is right, oak-pine forests in the eastern USA, but that doesn't settle it. On the other hand, A. solitaria is only found in Europe, A. smithiana is found in the Pacific Northwest, and Lepiota cristata is much too small (max. cap size = 5 cm). Mikenorton (talk) 11:50, 10 October 2009 (UTC)
- Sorry! I thought that Lepiota cristata is too small, but Amanita cokeri is OK. --Stone (talk) 14:43, 10 October 2009 (UTC)
- Ah yes, I see now that you were replying to BozMo, doh! Mikenorton (talk) 15:43, 10 October 2009 (UTC)
sugar D M
I have been suffering from sugar (diabetes) for 24 years. I used to take insulin twice a day, 76 units. My doctor discovered severe inflammation in my stomach (CURED NOW) and tests revealed that my pancreas is working at a capacity of 80%. I'm taking 50 units of insulin in the morning and 500 mg metformin HCl, 3 tablets daily, plus a tablet of Galvus (vildagliptin) 50 mg. Sometimes my sugar level becomes lower than 100 mg! I want an explanation, and to know if there is a cure. Thanks —Preceding unsigned comment added by 188.161.163.247 (talk) 09:12, 10 October 2009 (UTC)
- He/she asked if there was a cure for diabetes and how exactly the problems occur. No advice sought here. ~ R.T.G 12:07, 10 October 2009 (UTC)
- He/she wants a medical explanation for their specific insulin results. We can't help with that, it's something they need to speak to a doctor about. — The Hand That Feeds You:Bite 18:39, 12 October 2009 (UTC)
- The Wikipedia article on diabetes may be of some help in explaining your situation. --Phil Holmes (talk) 12:13, 10 October 2009 (UTC)
Nutritional value of nut
Hi, if you have ever kept birds, you know you are always feeding them seeds and nuts. Birds are high-octane creatures. Is the nut a really good all-round food, or not? How do nuts in general match up to fruit in general? Is there a kind of nut that is especially good for us, the way soya is among beans? ~ R.T.G 12:04, 10 October 2009 (UTC)
- In general, the macronutrient profiles of nuts and fruits are very different.
- Nuts are very calorically dense (lots of calories per gram), and provide most of their calories in the form of fats. The fats in nuts are primarily the "good" unsaturated fats, not the worse saturated fats. Nuts are also quite high in protein, and indeed generally have even more protein than beans on a per-gram basis, although less than beans on a per-calorie basis. Few of the calories in nuts come in the form of carbohydrates.
- Fruits are much less calorically dense than nuts, and provide most of their calories in the form of carbohydrates (starches and sugars). On a per-calorie basis, fruits generally have less protein than any other category of whole food, and they're very low in fat.
- In terms of vitamins, nuts would tend to be better in terms of the fat-soluble vitamins like A, D and E, and fruits would tend to be better in terms of the water-soluble vitamins like C or the B vitamins.
- Nuts are certainly a healthy food, when eaten in moderation, but I'm not aware of any specific nut species being exceptionally good for us. Eating too many nuts would be bad, though, because it'd be easy to get fat from them due to the high caloric density. Red Act (talk) 12:57, 10 October 2009 (UTC)
- Very good answer Red Act thank you ~ R.T.G 18:01, 10 October 2009 (UTC)
Clouds over Moscow
This looks very strange: [20] - What could cause it and why is it such a good circle? Thanks for info. --AlexSuricata (talk) 14:31, 10 October 2009 (UTC)
- Possibly a type of fallstreak hole? Karenjc 14:38, 10 October 2009 (UTC)
- If you want seriously weird clouds - these take some beating: http://www.wired.com/science/planetearth/magazine/17-10/st_clouds SteveBaker (talk) 02:58, 12 October 2009 (UTC)
Pepsi One's one calorie
Does Pepsi One really have a calorie, and if so, where does it come from? PCHS-NJROTC (Messages) 16:02, 10 October 2009 (UTC)
- I'm inclined to agree with the answer given here: [21] - the sucralose (sugar substitute) has food energy, but so little is added to the drink that the final product has only about one or two food calories in it. 66.178.144.193 (talk) 16:20, 10 October 2009 (UTC)
- "One calorie" is what we officially call, in the scientific community, a bullshit marketing term. The method of measuring caloric content in foods is not precise to the single calorie; it is actually rounded to the nearest 10 calories. Thus, any food with less than 5 calories will report as 0 calories on the nutrition label. "Just one calorie" was first used as such a marketing term for Diet Coke, and I believe that Tic Tac mints used to advertise themselves as "1 1/2 calories", an even more ridiculous claim given how bullshit claiming even 1 calorie is. It sounds more striking than "Zero calories", which lots of OTHER brands lay claim to, so claiming "One calorie" or "1 1/2 calories" is just a catchy way of being memorable. Certainly more memorable than "We have anywhere from zero to 5 calories, but we can't tell to that precision". --Jayron32 19:56, 10 October 2009 (UTC)
- Do tell us more, Jayron. Exactly why does the scientific community have such problems measuring the caloric content in foods that they are unable to provide figures less rounded than to the nearest 10? Sounds most odd. --Tagishsimon (talk) 20:05, 10 October 2009 (UTC)
- Anyone could measure the caloric content of such foods empirically, and get a number which is much more accurate than to the nearest ten calories. It's just that that's not how food calories are measured. Doing a simple experiment like bomb calorimetry tends to overestimate the number of calories in unpredictable ways, since there's lots of stuff which burns, but which the human body cannot use for energy (like cellulose, for example). Rather, the caloric content is calculated by counting up the grams of fat, protein, and digestible carbohydrate in the food, multiplying those numbers by standards (9 cal/gram for fat, 4 cal/gram for carbs and protein) and adding the results. Recognizing that this method is also flawed, the resulting number from that calculation is always reported to the nearest 10 calories on U.S. food labels, so officially reported caloric values are never anything but a multiple of 10 calories. You could experimentally determine caloric content to a more accurate and precise result, except that the food industry doesn't do it that way. See Food energy for a description of most of what I explain above. --Jayron32 20:19, 10 October 2009 (UTC)
- That's what I was thinking; the calorie figure in the "Nutrition Information" box seen in the US is determined by an algebraic equation considering the fat, sugar, and protein in a product, but Pepsi One apparently has none of these. However, the Nutrition Information box on Pepsi One reports the "one calorie," and I thought that information was required to be scientific, based on government specifications. Of course we all know that the government almost never does anything as they say they will. 71.54.231.5 (talk) 20:52, 10 October 2009 (UTC)
- I'm curious, what does it actually say on the nutritional information (rather than the marketing)? In the UK, amounts smaller than can be reliably distinguished from zero at the standard precision (but aren't actually zero) are described as "trace" although that doesn't quite make sense for calories... --Tango (talk) 21:05, 10 October 2009 (UTC)
- On American labeling, all caloric values are rounded to the nearest 10 calories, and all gram/mg amounts are rounded to the nearest 1 gram/mg IIRC. See Nutrition facts label. So anything less than 0.5 grams is reported as 0 grams, and anything less than 5 calories is reported as 0 calories. --Jayron32 23:51, 10 October 2009 (UTC)
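The label arithmetic described in the posts above can be sketched in a few lines. This is only an illustration of the rule as stated here (per-gram factors plus rounding); it is not the FDA's official procedure, which has finer rounding bands not modelled.

```python
# Sketch of US-label calorie arithmetic as described above: estimate
# from standard per-gram factors, then round for the label.

def estimated_calories(fat_g, carb_g, protein_g):
    # 9 kcal per gram of fat; 4 kcal per gram of carbohydrate or protein
    return 9 * fat_g + 4 * carb_g + 4 * protein_g

def label_calories(raw_kcal):
    # Rounding rule as described above: anything under 5 kcal
    # reports as 0; otherwise round to the nearest 10 kcal.
    if raw_kcal < 5:
        return 0
    return int(10 * round(raw_kcal / 10))

print(label_calories(estimated_calories(0, 0.3, 0)))  # 1.2 kcal: labelled 0
print(label_calories(estimated_calories(2, 2, 0)))    # 26 kcal: labelled 30
```

Under this scheme a drink with around 1 actual calorie and one with genuinely zero calories carry identical labels, which is the point Jayron32 is making.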
- The Pepsi website gives a mock label [22] with a 1 Calorie citation. Does anyone have an actual product sample to verify that? Also, are there legal requirements for Calorie labeling (e.g. the use of formulas and the nearest-10 rule), or can the company choose to fudge it for the sake of their marketing? Dragons flight (talk) 00:11, 11 October 2009 (UTC)
- Doesn't it depend on how many Tic Tacs (or whatever) you estimate the calories for? For example, if you estimate the energy content of 1 million Tic Tacs based on the precise quantities of fat etc., and presuming your production equipment and ingredients are reliably precise and consistent, you'd surely be able to estimate to within 1 calorie? Having said that, according to our article (and from memory) they're advertised as having less than 2 calories, rather than 1.5 calories, in a few countries, including I believe New Zealand and Malaysia. Nil Einne (talk) 06:20, 11 October 2009 (UTC)
- Yes, you could, but as Jayron32 explains above, they don't. There really is nothing to gain by such precise measurements, other than marketing slogans, and the marketers are happy to just make things up, so why would anyone make the measurements? --Tango (talk) 06:25, 11 October 2009 (UTC)
- But that's the thing. Are we sure they don't? Just because something is usually the case doesn't mean it is always the case. And it would likely vary from location to location. (For example, I'm far from convinced Tic Tac could get away with saying less than 2 calories if they generally had more than 2 calories in a number of countries.) Also, it would seem to me there's a big difference between something like a pie, where there's likely to be a large variance from pie to pie since it will depend on the meat etc., and a Tic Tac, with a fairly uniform consistency and a standard set of highly refined ingredients. Also, Jayron32 hasn't explained what quantities are used. As I've said, there is surely a difference between estimating for 1 million Tic Tacs and 1000 Tic Tacs. Is it standard to estimate for 1000 kg? 10000 kg? Or does it go by volume? Is it really true that every single manufacturer uses this standard quantity? I'm not saying the figure definitely has any merit. I'm just saying I'm far from convinced the figure is complete bullshit. In fact I would say arguing the precision is always +/- 5 calories is just as likely to be a wild simplification. (Just because it's normal practice, perhaps encouraged by the regulators, doesn't mean that it's the best scientific claim.) Also, I didn't say anything about measurements. In fact I was talking about the quantities used for estimation (via calculation methods). Nil Einne (talk) 06:55, 11 October 2009 (UTC)
- They can probably say with reasonable confidence that there are less than 2 calories per tic tac averaged over an entire pack, which would be plenty to justify the marketing. They probably can't guarantee that every tic tac has less than 2 calories, since the manufacturing/QA process won't be good enough. --Tango (talk) 07:19, 11 October 2009 (UTC)
- But that's an entirely different point. Jayron is saying it's nonsense since the value is between +/- 5 calories because of the unreliability of estimation methods, but I'm far from convinced. If the average value is ~2 calories within a fair degree of precision (which IMHO could be +/- 0.5 calories or higher), that's the average value. The value per Tic Tac is obviously not going to be exactly 2 calories, but that's a different point. And when it comes down to it, it seems unlikely it's going to be anywhere between 0 and 5 calories either, particularly for something like a Tic Tac. Also, if the average value is 1 calorie (it doesn't seem to be), it seems entirely reasonable that a high percentage of Tic Tacs are going to be 2 calories. (Some odd cases, e.g. 2 Tic Tacs joined together, will be significantly different, although then we get into the complicated question of whether that's 1 Tic Tac or not.) Note that I was the first person to bring up manufacturing methods, variance between products, consistency etc. Jayron appears to be solely talking about the method of estimating or measuring energy content and the unreliability thereof, not about the natural variance between products. To reiterate, while I'm partially the cause of the confusion, my original point was that whether you are estimating the value by calculating the energy content or measuring it via some highly accurate method, you can likely come up with a value with a greater degree of precision than +/- 5 calories per average serving. Obviously there's going to be variance between servings (and in many cases even between batches), but that's a different point, since whether you use a highly accurate measurement method or an estimate, the difference is still going to be there. (You do know the precise values for the units you tested, but as you've likely destroyed these it's a somewhat moot point.)
Or to put it a different way, there's a difference between measurement/estimation error and the false precision of a single unit of some product arising from differing consistency between units. Nil Einne (talk) 07:39, 11 October 2009 (UTC)
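The distinction drawn above between per-unit variation and uncertainty in the batch average can be shown with a toy simulation. All numbers here are invented purely for illustration, not taken from any Tic Tac data.

```python
# Toy simulation: individual pieces vary quite a bit, yet the
# batch *average* is known tightly, illustrating the difference
# between per-unit variance and measurement/estimation error.
import random

random.seed(0)
# Pretend each piece's true energy varies around 1.9 kcal with sd 0.2.
pieces = [random.gauss(1.9, 0.2) for _ in range(1000)]
mean_kcal = sum(pieces) / len(pieces)

print(any(p > 2.0 for p in pieces))   # some individual pieces exceed 2 kcal
print(abs(mean_kcal - 1.9) < 0.05)    # yet the batch average is pinned down tightly
```

So a claim about the average energy per piece can be far more precise than any claim about a single piece, which is exactly the point being argued.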
- (EC) I briefly glanced through the (joint) standards in New Zealand and Australia. [23]. Of greatest interest: "The average energy content, and average or minimum or maximum quantities of nutrients and biologically active substances must be expressed in the panel to not more than three significant figures." This hints at something I've been thinking but didn't really mention. It seems to me to be somewhat silly to suggest that something complex like a pie with a large energy content (unless it's sugar or refined glucose perhaps) is +/- 5 calories, the same as something with a small energy content like a Tic Tac. The precision surely depends at least a bit on the size of the value. It also says "Where the average energy content of a serving or unit quantity of the food is less than 40 kJ, that average energy content may be expressed in the panel as ‘LESS THAN 40 kJ’", which is about 10 calories. Note the "may". It doesn't say you aren't allowed to specify a more precise value if you know it. I presume if you do specify a more precise value, it needs to be backed up. (In any case, obviously saying 39 kJ is in some ways more or less as bad as saying 41 kJ.) Our article suggests that the value for average energy content per 100 g (which is the standard used in NZ in addition to serving) is 1658 kJ. Okay, in NZ-A that would be 1660 kJ. It appears to suggest there are about 200 Tic Tacs in 100 g. So you end up with ~8.3 kJ per Tic Tac, which is slightly under 2 calories. Presuming the 1660 kJ value is reasonably accurate, it seems difficult for me to presume it's going to be that much higher, up to 5 calories. And it seems to me reasonable that you can estimate the value for 100 g to a greater degree of accuracy than between 0 and 4187 calories for 100 g of Tic Tacs. And as I've said, Tic Tacs are something of sufficient consistency that a value per serving is going to be reasonably accurate.
What they actually say on Tic Tac labels here I don't know but I don't think that's really the question. Nil Einne (talk) 07:39, 11 October 2009 (UTC)
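The per-piece arithmetic in the post above works out as follows. Both input figures (1660 kJ per 100 g, roughly 200 pieces per 100 g) are the ones quoted in the discussion, not independently verified here.

```python
# Converting the quoted per-100g energy figure into kcal per piece.
KJ_PER_KCAL = 4.184          # standard kJ-to-kilocalorie conversion

energy_per_100g_kj = 1660.0  # figure quoted above (rounded to 3 sig figs)
pieces_per_100g = 200        # figure quoted above

kj_per_piece = energy_per_100g_kj / pieces_per_100g   # 8.3 kJ per piece
kcal_per_piece = kj_per_piece / KJ_PER_KCAL           # just under 2 kcal

print(f"{kcal_per_piece:.2f} kcal per piece")
```

This lands at about 1.98 kcal per piece, consistent with the "less than 2 calories" wording mentioned earlier in the thread.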
(outdent) I can confirm by direct inspection at the supermarket that Pepsi One in the US has a nutritional information box stating it has 1 Calorie, 0 Fat, 0 Carbohydrate, 0 Protein. Dragons flight (talk) 12:40, 11 October 2009 (UTC)
- The official Food and Drug Administration website has the following Claims That Can Be Made for Conventional Foods and Dietary Supplements. "The responsibility for ensuring the validity of these claims rests with the manufacturer, FDA, or, in the case of advertising, with the Federal Trade Commission." If a vendor or manufacturer violates these standards, legal action can be pursued by either the FDA or the FTC (depending on the type of violation). As far as measurements, the Compliance Guidance Document states that the "FDA has not stated how a company should determine the nutrient content of their product for labeling purposes. Therefore, there is no prohibition from using "average" values for its product derived from data bases if a manufacturer is confident that the values obtained meet FDA's compliance criteria." Per Nutrition Labeling Guidelines, the Pepsi may be called "Calorie Free" if it contains fewer than 5 calories per "recommended amount for consumer consumption." "The caloric value of a product containing less than 5 calories may be expressed as zero or to the nearest 5 calorie increment (i.e., zero or 5 depending on the level). Foods with less than 5 calories meet the definition of "calorie free" and any differences are dietarily insignificant." (So, the FDA only enforces to the nearest 5 calories - Pepsi One falls into this category). I think it would be silly to assume that a major beverage corporation didn't perform its due diligence before making a nutritional claim in an international marketing effort - it is clearly within acceptable limits of accuracy. The FDA has mandated that labeling claims must be validated either independently (by the manufacturer) in accordance with AOAC International-approved techniques, or using standard information from the USDA database. Information on these methods is available in this book, Nutrition Labeling, available online. 
This question could have been easily answered with a few good references - even the official Pepsi One website states very clearly: Information reflects rounding as required by the Food & Drug Administration (21 CFR 101). This may produce occasional irregularities in some values when comparing information for different serving sizes. They state very clearly - without any ambiguity - that there is rounding error in their measurement that is within acceptable legal tolerances. They even state the applicable legislation: US Code Title 21 CFR 101 Nimur (talk) 18:48, 11 October 2009 (UTC)
- Of course that's all well and good, but it doesn't actually answer the original poster's question. Assuming that it really does have approximately 1 Calorie, as stated on its packaging and nutritional label, and that's not some sort of marketing ploy, then what ingredient(s) of Pepsi One are contributing materially to that one Calorie? Dragons flight (talk) 19:35, 11 October 2009 (UTC)
- There's lots of stuff there which may be marginally digestible by humans. Caramel color, for example, is pretty much just treated sugar caramel, and Pepsi One uses caramel color, so it could rightly claim that the one calorie came from the few milligrams of caramel color in its soda. If there were really about 1 calorie, and I had to make a guess as to the makeup of that 1 calorie, I would guess most of it is caramel color. --Jayron32 06:17, 12 October 2009 (UTC)
Water
What volume of water is in biomass form above sea level, and roughly what sea-level alteration would this lead to if all life were exterminated? —Preceding unsigned comment added by 129.67.39.44 (talk) 17:42, 10 October 2009 (UTC)
- A very quick initial skim via Google suggests that the World's total biomass is in the region of 10⁻⁸ of the World's total surface water mass. This suggests that 99.999999% of the World's water is not part of current biomass, so even if all terrestrial biomass water was to be returned to the oceans (which would not necessarily happen, as some water would enter the atmosphere) any sea-level rise would be insignificant. I'm sure others will be eager to firm up or refute this hasty guesstimation, if only to prove me an idiot. 87.81.230.195 (talk) 19:41, 10 October 2009 (UTC)
- Even the most brief thought suggests that it would make little difference to the sea level. Biomass is a very thin layer on some of the land surface. --Tagishsimon (talk) 19:45, 10 October 2009 (UTC)
- On the other hand, desertification and increased evaporation in interior continental areas, and increased rainfall above the oceans (perhaps on the order of 1 - 5%), could have a larger effect on sea-level rise. ~AH1(TCU) 13:39, 11 October 2009 (UTC)
- Thus far we have of course been discussing short-term results. Plugging your scenario into the strong version of the Gaia hypothesis suggests that, in the much longer term, the absence of life might disrupt the Hydrological cycle and in turn the facilitation of Plate tectonics by water chemically absorbed into the Lithosphere. With plate tectonics slowing down and stopping, absorbed water would no longer be volcanically recycled back into the Hydrosphere and surface water would dwindle to nothing, as has evidently happened on Mars. 87.81.230.195 (talk) 16:33, 12 October 2009 (UTC)
Fastest human BPM
What's the highest ever recorded heart rate of a human? Thanks Pineapplegirls (talk) 17:57, 10 October 2009 (UTC)
- Heart rate is naturally a lot higher in babies and young children, so do you want the highest for all humans or the highest for an adult human? --Tango (talk) 18:48, 10 October 2009 (UTC)
- I couldn't find a reliable source to answer this. However I would guess somewhere around 300 beats per minute in cases of Wolff-Parkinson-White syndrome. The heart cannot sustain this rate for long because the cardiac output drops off and cardiac arrest ensues. Axl ¤ [Talk] 18:50, 10 October 2009 (UTC)
- According to death from laughter, a Danish audiologist's heart rate was estimated to have peaked between 250 and 500 bpm prior to death. ~AH1(TCU) 13:37, 11 October 2009 (UTC)
- Hmm, I'm getting 404 Not Found for the reference. Axl ¤ [Talk] 16:39, 11 October 2009 (UTC)
- In the article on tachycardia (rapid heart rate of various kinds), the highest number mentioned is 250 BPM for the case of ventricular tachycardia. Red Act (talk) 17:21, 11 October 2009 (UTC)
- I think the definition of atrial flutter includes a rate of 250-300 bpm -- perhaps a distinction between atrial vs. ventricular beat rates should be made. DRosenbach (Talk | Contribs) 02:26, 12 October 2009 (UTC)
Music levels...
Just wondering: if I were to listen to music on the go at the same volume, would something like metal still be more damaging than easy listening? Perhaps if the peaks (no idea about the technical terms) were sharper? Even when I do listen to music on the go, I keep a respectable volume; I am curious anyway. Thanks! —Preceding unsigned comment added by Infiniteuniverse (talk • contribs) 22:58, 10 October 2009 (UTC)
- The volume is the primary factor in hearing damage, not the type of music. Remember, Beethoven went deaf and he never listened to heavy metal. Regardless of your musical taste, keep the volume down. You will appreciate it when you are much older and you don't have to blow all your money on hearing aid batteries. -- kainaw™ 01:39, 11 October 2009 (UTC)
- Was Beethoven's deafness anything to do with listening to loud music? --Tango (talk) 01:48, 11 October 2009 (UTC)
- According to the Ludwig van Beethoven article, the cause of his deafness is not known for certain, but seems to have been caused by disease rather than by loud music. →Baseball Bugs What's up, Doc? carrots 01:57, 11 October 2009 (UTC)
- The Hearing impairment article talks about the many causes. Loudness certainly is one cause, tied in with duration. The volume of classical music tends to ebb-and-flow more than, say, heavy metal. But listening to a shrieking soprano at high volume for a long time could likely be damaging. →Baseball Bugs What's up, Doc? carrots 02:07, 11 October 2009 (UTC)
- I rarely listen to music on the go, as I indeed do want to hear well later in life. I usually listen to podcasts, sometimes ones with some music, and that is when it is mostly quiet out, so I don't have to turn the volume up. I like to have the earbuds in when it is noisy and listen to nothing also, to dampen the sound slightly.Infiniteuniverse (talk) 06:02, 11 October 2009 (UTC)
- Unfortunately, many popular music albums, in all genres, are engineered for maximum loudness - see Loudness war. Therefore the difference should be minimal. MaxVT (talk) 19:09, 11 October 2009 (UTC)
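As a rough illustration of why volume and duration, rather than genre, dominate hearing risk, occupational guidelines trade the two off directly. A minimal sketch of the NIOSH criterion (8 hours at 85 dBA, permissible time halving for every 3 dB above that); the levels below are illustrative, not measurements of any particular player:

```python
# Sketch of the duration/intensity trade-off in the NIOSH noise criterion:
# 8 hours permitted at 85 dBA, halved for every 3 dB above (the "exchange rate").

def permissible_hours(level_dba, criterion=85.0, exchange=3.0, base_hours=8.0):
    """Permissible daily exposure time at a given A-weighted sound level."""
    return base_hours / 2 ** ((level_dba - criterion) / exchange)

for level in (85, 94, 100):
    print(f"{level} dBA -> {permissible_hours(level):.2f} hours/day")
```

Under this criterion, a player at 94 dBA reaches the daily dose in about an hour regardless of what is playing.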
October 11
Does ether and methanol form an azeotrope?
An experiment procedure effectively adds methanol to ether (that's dissolved my product), and then apparently, if I let it boil for a short while before adding water to induce precipitation, most of the ether has evaporated. This is over a steam bath -- I know ether boils quickly, but this is at least 15 mL of ether we're talking about here. How much would methanol depress the boiling point of the mixture, if they form a positive azeotrope?
Theoretically my worry actually is because I'm not sure if I'm dealing with a 2-solvent recrystallization or a 3-solvent recrystallization ... John Riemann Soong (talk) 01:54, 11 October 2009 (UTC)
- You could do a quick GC run. The retention times of both methanol and ether should be pretty standard; you could tell roughly how much of your solvent is either methanol or ether based on the chromatography. I am pretty sure you can get rough quantitative assessments from a GC run by integrating the area under the peaks. --Jayron32 04:40, 11 October 2009 (UTC)
- Unfortunately I have no time to go back to the lab before my report is due (well mainly cuz I have like 2 exams before it). I'm not privileged enough to get a key to a room with a GC unsupervised. John Riemann Soong (talk) 06:18, 11 October 2009 (UTC)
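Jayron32's peak-integration suggestion can be sketched numerically. Everything below is synthetic: the retention times, peak shapes, and window bounds are made-up assumptions, so the ratio demonstrates only the method, not a real methanol/ether measurement (real work would also need detector response factors):

```python
# Sketch: estimate the relative amounts of two solvents from GC peak areas
# by trapezoidal integration. The chromatogram is synthetic (two Gaussians);
# the retention times (2 min "ether", 4 min "methanol") are assumptions.
import math

def gaussian(t, center, height, width):
    return height * math.exp(-((t - center) / width) ** 2)

times = [i * 0.01 for i in range(601)]  # 0 to 6 minutes, 0.01 min steps
signal = [gaussian(t, 2.0, 1.0, 0.1) + gaussian(t, 4.0, 0.5, 0.1) for t in times]

def area(times, signal, t_lo, t_hi):
    """Trapezoidal area under the signal between two retention times."""
    total = 0.0
    for i in range(1, len(times)):
        if t_lo <= times[i] <= t_hi:
            total += 0.5 * (signal[i] + signal[i - 1]) * (times[i] - times[i - 1])
    return total

a_ether = area(times, signal, 1.5, 2.5)
a_meoh = area(times, signal, 3.5, 4.5)
print(f"ether : methanol peak-area ratio = {a_ether / a_meoh:.2f}")
```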
Electroplating Copper on Iron
Hello. Given copper(II) sulfate solution, aluminum, an aluminum salt solution, and iron; I'd like to electroplate copper on iron via a spontaneous redox reaction. I plan to submerge Al in a beaker filled with aluminum nitrate solution and Fe in another filled with CuSO4(aq). Would Fe react with its electrolyte due to the reactivity series? If so, how can I electroplate Cu on Fe since most Cu salt solutions would react with Fe? Thanks in advance. --Mayfare (talk) 03:14, 11 October 2009 (UTC)
- Unfortunately, iron may be a bad choice for your cathode here, since it will react SO readily with the copper that it may be hard to keep the iron from dissolving into the solution while you are trying to plate it with the copper. Stainless steel, being a less reactive alloy, may work better, but pure iron is a fairly reactive metal. Drop an ungalvanized iron nail into a copper(II) solution, and within minutes the nail will start to pit and the copper will begin to plate in the pits. This is a sort of "chemiplating", but it will not give a nice, even finish like true electroplating would. --Jayron32 04:29, 11 October 2009 (UTC)
- Curious, how do alloys work during electroplating? Surely some sort of phase transition must take place? John Riemann Soong (talk) 06:12, 11 October 2009 (UTC)
- Any conductive material can be electroplated. The cathode will gather metal to it regardless of its identity, so ideally you want a conductive, but relatively non-reactive cathode to prevent the material from degrading by doing its own spontaneous redox reactions. Stainless steel would still react with copper (II) solutions, but likely much more slowly than would pure iron, so you would get a fighting chance to electroplate a smooth clean layer of copper onto the surface of the steel. Once the first layer of copper is there, it will protect the underlying metal, so it can build up on its own. --Jayron32 06:12, 12 October 2009 (UTC)
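For the electroplating step itself, the mass deposited follows Faraday's law of electrolysis, m = Q·M/(zF). A minimal sketch for copper, with an illustrative current and time rather than a recommended procedure:

```python
# Sketch of Faraday's law of electrolysis for copper plating:
# grams of Cu deposited per coulomb of charge passed.
# The 0.5 A / 1 h figures below are illustrative assumptions.

F = 96485.0    # Faraday constant, C/mol
M_CU = 63.55   # molar mass of copper, g/mol
Z = 2          # electrons per Cu(II) ion reduced

def copper_deposited_g(current_a, seconds):
    """Mass of Cu plated onto the cathode, assuming 100% current efficiency."""
    charge = current_a * seconds
    return charge * M_CU / (Z * F)

print(f"0.5 A for 1 hour deposits about {copper_deposited_g(0.5, 3600):.3f} g of Cu")
```

Real plating baths fall short of 100% current efficiency, so this is an upper bound.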
dc motor
diff. b/w wdgAnkit Badnara (talk) 03:50, 11 October 2009 (UTC)
- There's an article on DC motor. I take "b/w" to mean "between". "wdg"??? →Baseball Bugs What's up, Doc? carrots 04:25, 11 October 2009 (UTC)
- b/w could also mean backed with, a common abbreviation back in the days of 45 rpm phonograph singles, like "Hey Jude", b/w "Revolution (song)", indicating the "A-side" and "B-side" of the single. This could be a question for the Entertainment Desk for all we know. --Jayron32 04:35, 11 October 2009 (UTC)
- I think we're going to need a complete sentence - at a minimum - if we're going to have a shot at answering something. wgd might be "windings" - so maybe this is something like "What is the voltage difference between the windings of a DC Motor?"...but then it could equally be "Are there diffeomorphisms between wedgies?". SteveBaker (talk) 04:31, 11 October 2009 (UTC)
- ...or "What is the difference between the windings of the various kinds of DC motors (brushed vs. brushless)?", or "What is the difference between the windings of a DC motor and an AC motor?" Red Act (talk) 05:16, 11 October 2009 (UTC)
- If it weren't for the title, it could be a sociological question as to whether interracial marriage can be challenging: "Difficult black/white wedding?" Red Act (talk) 07:16, 12 October 2009 (UTC)
Crack on the Moon
Is there a significant crack on the Moon that dates back about 1400 years? And what's the story behind this?--Email4mobile (talk) 09:23, 11 October 2009 (UTC)
- The moon has been thought to be geologically dead for the past few billion years (other than impact cratering), so as far as we know, no. Also, there is no way for us to date to 1400 years ago, and no way for us to get accurate pre-telescope records of fine-scale features. There is an article on Rilles here on wiki, unfortunately I am not a planetary scientist so I can't give you any more info than that ('tis also well past my bedtime). Awickert (talk) 09:30, 11 October 2009 (UTC)
- If by "significant crack" you mean one that is consistent with the widespread claim in Islam that the Prophet split the Moon in two to convince the Unbelievers (Koran 54:1 - well actually, the Koran does not precisely support this myth either, but read it for yourself) then absolutely and emphatically no there is not. The picture you have linked to is a former NASA picture of the day and depicts the Ariadaeus Rille. The NASA article on this rille says that it is relatively young, but planetologists usually mean a lot older than 1400 years by that phrase. It is 300 km long, but that is nowhere near long enough to represent a repaired bisection of the moon. It consists of a line of sunken surface called a graben caused by a parallel line of geological faults. This is nowhere near a traumatic enough process to be connected with moon-splitting. In fact, there are a large number of such rilles (so called straight rilles) all over the surface of the moon going every which way. NASA has an informative article on them. You will need more than a line on the moon to convince this unbeliever. SpinningSpark 16:56, 11 October 2009 (UTC)
- We know significantly more about the geology of the Hadley Rille, because Apollo 15 landed there with intent to study it. The best understanding of lunar seismology is still pretty vague - most lunar scientists believe that these faults are cooling artifacts. They presumably form after thermal expansion of large areas of molten lunar crust - areas of lunar surface that melted in cataclysmic impacts (as opposed to earth-like plate tectonics). However, we can't say for sure, since we have very limited data on the moon's subsurface geology. Nimur (talk) 19:32, 11 October 2009 (UTC)
Voice
Does the male voice continue to deepen with age, after puberty? Younger males, around 18 and 19, on average seem to have more high pitched voices compared to older males. Clover345 (talk) 12:08, 11 October 2009 (UTC)
- Good question. I thought the answer would be in Vocal register but I can't find it there. Anyway, you're right to some extent. Men's voices can deepen until their mid-twenties. When men and women get old, though, men's voices rise again and women's lower, so for very old people you might find it difficult to tell from the voice whether they're male or female. Dmcq (talk) 13:57, 11 October 2009 (UTC)
- Your observations seem to be right. This article contains a helpful diagram of changes in the voice with age, which appears to show the fundamental frequency of the average male voice deepening quickly in the teenage years, then continuing to deepen more slowly until around 40, after which it is relatively stable until it starts rising again at around 60. Karenjc 16:24, 11 October 2009 (UTC)
- As with many male/female differences, development of the vocal cords appears to be regulated by hormones. There is a fairly detailed discussion in Vocal folds#Impact of hormones. --- Medical geneticist (talk) 00:10, 12 October 2009 (UTC)
how does double-stranded RNA work?
Nothing I search online seems to address the idea that the ribose sugar is too big and Watson and Crick's original prediction about how RNA couldn't be double-stranded. John Riemann Soong (talk) 15:51, 11 October 2009 (UTC)
- Alexander Rich (MIT), one of the discoverers of the structure of dsRNA around 1960, had this to say about the differences between dsRNA and dsDNA:
Eventually, it was discovered that the double-stranded (ds)RNA molecule adopted a conformation similar to the A form of DNA, exclusively using a C3' endo sugar pucker (Fig. 1). The reason for this adherence to the C3' endo sugar pucker in RNA becomes apparent on looking at the position of the additional oxygen that would be present at the C2' position of ribose (See Fig. 1). In the C3' endo conformation of dsRNA, there is adequate separation between the oxygen on C2' and the oxygen on C3' in contrast to a van der Waals crowding that occurs if the dsRNA sugar pucker were C2' endo. Because of the unfavorable energetic situation of ribose in the C2' endo conformation, RNA molecules are usually found in the C3' endo conformation. There is an energy barrier between the two puckers for ribose; in contrast, the deoxyribose ring has very little barrier.
— Alexander Rich, RNA Towards Medicine, doi:10.1007/3-540-27262-3_1, ISBN 978-3-540-27262-5, retrieved 2009-10-11
- There may be more recent and detailed commentaries, but I think this addresses the spirit of your question. -- Scray (talk) 19:02, 11 October 2009 (UTC)
decline in platelets
last month i suffered a brain haemorrhage. past 6 days my platelets are on a downward trend;reduced from 298k to 191k. is there a cause to worry? —Preceding unsigned comment added by Nipun310 (talk • contribs) 18:08, 11 October 2009 (UTC)
- Sorry, best wishes but we are forbidden to answer questions about people's personal medical conditions. Looie496 (talk) 18:16, 11 October 2009 (UTC)
- Here is the guideline that prohibits us from giving medical advice. Please ask your doctor. Red Act (talk) 18:31, 11 October 2009 (UTC)
- We can't try to explain what is happening in your situation or whether or not you should worry about it, since that would require detailed knowledge of your situation and it would be inappropriate for us to try to make a diagnosis over the internet. However, we have articles on platelets and hemostasis. A detailed explanation of your particular situation is best left to your doctor. --- Medical geneticist (talk) 00:28, 12 October 2009 (UTC)
paralysed from the waist down
does the wedding tackle still work or not, or does it depend? —Preceding unsigned comment added by 86.128.191.115 (talk) 20:53, 11 October 2009 (UTC)
- Fixed your link. --Anon, 22:02 UTC, October 11, 2009.
October 12
Rattlesnake antivenom availability
I did not get bitten but it sure was close today when I almost stepped on a very large Timber Rattler.
We both startled each other and I sure did jump quick. The snake is still alive and out behind the house somewhere as I don't kill wild animals that have as much right as I do to live.
My question goes more to "What if" I had been bitten and needed the Anti Venom quick.
Do hospitals keep this stuff on hand?
How long might it take for the hospital to locate it and get it in an emergency?
I live in the NY Catskill Mountains, Sullivan County. These snakes are here but not very common, I have seen only 3 in almost 60 years. —Preceding unsigned comment added by Gamalot52 (talk • contribs) 00:33, 12 October 2009 (UTC)
- It is my understanding that US hospitals keep antivenoms for all native snakes. As for how long it would take the hospital to locate it: I'm sure they could have it to you quicker than it would take them to identify the snake. In any case, be careful. By all accounts, rattlesnake bites hurt a lot, and can make you pretty sick. Call your local hospital and ask. They'll know if they have it on hand. Falconusp t c 01:48, 12 October 2009 (UTC)
- In my experience as a medical student in Kentucky, your antivenom would have been CroFab. It's expensive. If a small hospital were to stock it, have no patients, and let it expire, it would cost the hospital thousands of dollars per vial [24] and several vials are necessary to treat even one adult. This scenario is very likely given the sporadic nature of snake bites. Thus by pure economics, most small rural hospitals will not have it on hand, and you would be transferred to a tertiary care center if you really needed it. - Draeco (talk) 05:56, 12 October 2009 (UTC)
Why is sneezing convulsive?
As we now know, many animals sneeze. That's understandable, but why does it have to be convulsive - why does the individual have (almost) no control over it? Why can't it be just like other processes that clean the body (defecation and urination), which give the individual some discretion about the timing? It's not hard to think of circumstances in which it could be disastrous for an animal to sneeze at the wrong time - such as when it means being detected by a predator or prey. I doubt that it's always of such life-threatening importance that it has to be done immediately. Waiting a bit, while just breathing through the mouth for the time being, seems like a much better-adapted alternative. Why can't we do that? — Sebastian 00:45, 12 October 2009 (UTC)
- Coughing is a similar deal though - we can't help doing that either. SteveBaker (talk) 02:54, 12 October 2009 (UTC)
- You're right. But coughing is convulsive for a reason: If something's stuck in our windpipe, we need to get it out quickly before we suffocate - there's no way to bypass the windpipe by breathing some other way. — Sebastian 03:02, 12 October 2009 (UTC)
- Actually, I remember reading somewhere that if a human isn't systematically toilet trained from a young age, then they will most likely fail, to a greater extent, to develop an ability to control when they go to the toilet. I think the fact that you have mucous membrane in your nose and throat probably plays a large part; it's probably more sensitive to infection and such, so there is probably a biological advantage to not letting dirt or whatever sit on your membranes for longer than necessary. Vespine (talk) 04:57, 12 October 2009 (UTC)
- So I just had a quick read of the mucous membrane article, and it actually says that the mucus acts against infection by trapping it. I'm sure it's still more prone than epidermis, and your body would still probably rather avoid (relatively) large bits of dirt sticking to it. Vespine (talk) 05:01, 12 October 2009 (UTC)
- Its thought that the ancestral mammalian condition was obligate nose breathing (that is, our evolutionary predecessors used to only be able to breathe through their nose, and not their mouth). For many (but not all) non-human mammals that is still the case. Thus having a semi-autonomous mechanism in place to clear the airway of obstruction/infection would be quite advantageous, as Sebastian notes about coughing. Note also that human neonates are pretty much obligate nose breathers for the first few months, so they don't really have the option of waiting a bit, while just breathing through the mouth for the time being. I expect uncontrollable sneezing is a physiological artifact from before oronasal breathing was an option. Rockpocket 06:53, 12 October 2009 (UTC)
- Thanks, that sounds plausible! — Sebastian 15:58, 12 October 2009 (UTC)
Windmill? Wind turbine? Wind Widget?
The article windmill makes it clear that that term refers to a wind powered grinding mill. Wind turbine refers to a wind powered electrical generator. These two devices have something in common, a big spinny bit. What is the spinny bit called? I'm looking for a name that includes both windmills and wind turbines, but excludes other similarly appearing spinny bits, such as propellers.
Alternately, I'm asking this question. What is the name for a device which captures wind energy, regardless of what purpose that energy is channeled into?
gnfnrf (talk) 01:25, 12 October 2009 (UTC)
- Does this diagram help? Intelligentsiumreview 02:07, 12 October 2009 (UTC)
- They are called sails - see Windmill_sail. Exxolon (talk) 02:09, 12 October 2009 (UTC)
- They seem to be called "sails" on a windmill and "blades" on a wind turbine. I guess it depends on their shape and purpose; blades are designed to move a lot faster.--Shantavira|feed me 07:57, 12 October 2009 (UTC)
- As far as I can tell, it looks to me like the difference between a sail and a blade may be a matter of the construction. A sail is generally made of a flexible material attached to a rigid frame, whereas a blade is one solid rigid piece. So modern wind turbines uniformly have blades, old-style windmills have sails, and typical (relatively) modern windmills as were commonly found in the American West have metal blades. I'm not certain that's the defining distinction between a sail and a blade; I'm just making an observation based on how the terms are used in the windmill and wind turbine articles. Red Act (talk) 12:23, 12 October 2009 (UTC)
- The third sentence of the windmill article implies, without making it quite clear, that in popular parlance they all tend to be called windmills. If you read the whole article carefully, you'll see that the word is in fact used therein to refer to devices other than grist mills. I know that I've never heard the water-raising devices used on the American plains (as described in the section Windmill#In Canada and the United States) referred to as anything other than windmills; and the existence of articles such as Boardman's Windmill and List of drainage windmills in Norfolk suggests that usage isn't confined to grist mills in England, either. Deor (talk) 12:31, 12 October 2009 (UTC)
- In fact, the more I look at the lede of the windmill article, the more it seems to me dead wrong. If you look at the dictionary definition linked in note 2 of the article, you'll see that windmill is in fact the correct word for any "device which captures wind energy, regardless of what purpose that energy is channeled into." Deor (talk) 15:23, 12 October 2009 (UTC)
Mytoses question
In my Biology class, we are doing a experiment about cells and mytoses (or howrevr you spell it. There's a question I can't figure out, and the page here is to complicated for me. The question is "What two differences are apparent at the poles of plant and animals cells?" (during mitoses). I thought the difference was that there is only a cell wall in a plant cell, but not in a animal. But there have to be 2! I've already tried, could someone explain it to me? Im not aksing for you to gimme the answer, just to explain. Help would be appreciated! Warmly, --Amber. —Preceding unsigned comment added by 69.210.134.227 (talk) 02:04, 12 October 2009 (UTC)
- Mitosis -- you're sort of asking for the second difference. If you'd tell use what you don't understand about the second thing, we can explain it to you in easier-to-understand words. DRosenbach (Talk | Contribs) 02:30, 12 October 2009 (UTC)
It's spelled mitosis; please check out that article, it mentions at least two differences. — Sebastian 02:31, 12 October 2009 (UTC)
- I checked the article -- and it doesn't specifically speak of two differences at the poles. Perhaps there are unmentioned differences, such as regarding the asters, the microtubal arrangements, the centromeres/-somes, etc. I'd hardly say that cleavage vs. cell wall formation occurs "at the poles." DRosenbach (Talk | Contribs) 02:33, 12 October 2009 (UTC)
- You're right - I just didn't read the question thoroughly. Your reply was better than mine anyway; mine was only there due to an edit conflict, so I am striking it. I do take exception to your editing my reply though; it was meant as a reply to the question, not to your reply, and I'm undoing that herewith. — Sebastian 03:38, 12 October 2009 (UTC)
Higher plants have neither centrioles nor their product centrosomes. Perhaps that's your answer - Draeco (talk) 04:47, 12 October 2009 (UTC)
Silver in a plastic cutting board - antimicrobial/antibacterial or just BS?
I see in the Silver Nitrate article that there are legitimate disinfection uses for it, but I'm having a hard time seeing how scattering a few silver (cat?)ions across a plastic cutting board can cut down on the nasties living thereon... despite what the bodacious labeling wants to scream at me.
Can someone clear this up? 218.25.32.210 (talk) 02:08, 12 October 2009 (UTC)
- Probably BS. I think any antiseptic properties would be short-lived at best. Either (A) the silver is covalently bound to the plastic which is permanent but doesn't allow it to interact with bacteria or (B) it's free to dissolve in water (which is the key to most medical uses of silver) making it possibly effective during the first use but subject to having all the ions washed away for subsequent uses. Having said this, I don't know of any real scientific evidence on cutting boards in particular, and I doubt it exists. Asking the manufacturer might be your best bet. - Draeco (talk) 05:04, 12 October 2009 (UTC)
- I don't know about this specific product or what material is used, and I wouldn't be surprised if a manufacturer replied with the same marketing BS (if indeed that's what it is). However, there is literature about the effectiveness of silver in this type of use. See [25] for example. DMacks (talk) 05:10, 12 October 2009 (UTC)
Abrasive toothpastes
If toothpaste (and probably dentist cleaners too) contain abrasives which are at least as hard as teeth, then what is to prevent it from wearing them down to nothing? (well, at least polishing all the way through the enamel) If I guess a molecule of enamel is at least 600pm, and you brush twice a day for 25,000 days that's 0.03 mm. The dentist probably does at least that much again with his tools and his goops. Add to that that many people have really fast toothbrushes instead of manual ones, and it's remarkable that they could put sand in it and still have it remove so little each time. Sagittarian Milky Way (talk) 02:53, 12 October 2009 (UTC)
- I'm no expert on dental care products, but you began with an "if" that might be important! How do you know those abrasives are "at least as hard as teeth"? Wouldn't it be smart to make them a little harder than plaque, but softer than tooth enamel? -- Scray (talk) 03:28, 12 October 2009 (UTC)
- I know for a fact that silica is harder than teeth and calcium phosphates are just as hard (Mohs scale 7 and 5 respectively). Carbonates, sulfates, organics, some elements, and halides tend to be soft. It only has to equal the hardness of something to scratch it. Sagittarian Milky Way (talk) 03:53, 12 October 2009 (UTC)
- I would think that toothbrush bristles do not press the abrasives against the enamel with enough force to do damage. - Draeco (talk) 04:39, 12 October 2009 (UTC)
- Actually, worn-down enamel at the neck of the tooth is a very common problem caused by wrong brushing technique (too much force, too vigorous back-and-forth). To avoid it, follow best practices (little pressure, circular motion) and avoid brushing when the enamel has been weakened by recent acid contact (e.g. shortly after eating fruit). --Stephan Schulz (talk) 12:41, 12 October 2009 (UTC)
- This recent article[26] mentions that tooth enamel and dentin can repair themselves. Usually the body is constantly repairing small amounts of damage.Cuddlyable3 (talk) 14:25, 12 October 2009 (UTC)
Dentist here -- enamel and dentin will not repair themselves after having been worn away by overzealous toothbrushing. DRosenbach (Talk | Contribs) 23:01, 12 October 2009 (UTC)
- Somehow, I'd be too lazy to push either zealously or fast even if dentistry said it was better. I'd brush more often if that were the case. Sagittarian Milky Way (talk) 03:38, 13 October 2009 (UTC)
- I once asked my dental hygienist a question of this nature and she told me that teeth are organic and do regenerate themselves spontaneously. Assuming adequate health. Vranak (talk) 17:03, 12 October 2009 (UTC)
- Amazing, after all you hear about cavities you would've thought it was inert as a rock. Sagittarian Milky Way (talk) 20:00, 12 October 2009 (UTC)
- Suffice to say that there are many many people out there driving around very nice cars because of the widespread belief that only a dentist can keep your teeth from rotting away to stumps. Perhaps there is some truth to that notion but it's far from being the whole story. Vranak (talk) 22:45, 12 October 2009 (UTC)
- I'm sure excessive use is harmful, as toothpaste manufacturers and the American Dental Association all give upper limits for the recommended amount of brushing. 66.65.140.116 (talk) 20:28, 12 October 2009 (UTC)
- The main component contributing to tooth abrasion is overzealous brushing, hence the name "toothbrush abrasion." Without enough force, the abrasive content of toothpastes will not wear away tooth structure, and with excessive force, toothbrush bristles will cause abrasion even without any paste at all. Thus, controlling for both false negatives and false positives, it is the force of the brushing and not the abrasive that does it -- the abrasives may add to the effect, but for the majority of brushing time the toothpaste has already largely dissipated. As for manual vs. electric toothbrushes, the little force used to push the bristles of the latter against tooth structure doesn't add any forces that are not present with manual brushing. DRosenbach (Talk | Contribs) 23:01, 12 October 2009 (UTC)
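The back-of-envelope figure in the original question (0.03 mm over a lifetime of brushing) does check out arithmetically. A quick sketch, taking the questioner's guessed 600 pm per brushing as the assumption it is:

```python
# Checking the estimate from the question: one ~600 pm layer removed per
# brushing, twice a day for 25,000 days (~68 years). The per-brushing loss
# is the questioner's guess, not a measured figure.

layer_m = 600e-12        # assumed thickness lost per brushing (600 pm)
brushings = 2 * 25_000   # twice a day for 25,000 days

total_mm = layer_m * brushings * 1000  # metres -> millimetres
print(f"total wear = {total_mm:.3f} mm")
```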
when is dQ different from dH?
I'm having a hard time separating these two concepts.
In addition, the enthalpy article states dH = TdS, which can be greater than or equal to dQ?! John Riemann Soong (talk) 04:28, 12 October 2009 (UTC)
Also, I really don't get the equation H = U + PV -- how is enthalpy more than the internal energy, and why does an object having a finite volume against environmental pressure contribute to its enthalpy? John Riemann Soong (talk) 04:31, 12 October 2009 (UTC)
- Enthalpy is a defined value. It means absolutely nothing. The reason it was defined that way is to simplify expressions in constant-pressure processes, which is often what happens in research. Tim Song (talk) 05:18, 12 October 2009 (UTC)
- More specifically, pressure exerted on the walls of a container is itself a form of potential energy (it becomes kinetic when the walls blow out...), so even under constant-pressure conditions there is potential energy which must be taken into account. The PV part of your enthalpy definition is that little bit of potential energy, which is added on to the potential energy internal to the molecules. It should be noted that internal energy is a finite and real, but immeasurable, quantity, a property that is passed on to enthalpy as well. Thus, we tend to deal in ΔH rather than H, since enthalpy changes should equal the total energy changes to a system as a result of a chemical change, so long as pressure does not change. In cases where the reaction vessel is open to the atmosphere, any minute pressure changes are "washed out" as the "system" becomes the entire atmosphere, and since lots of chemistry is done in open air, enthalpy is a convenient way to measure energy changes. In situations where you have a sealed reaction vessel, enthalpy must also take into account the changes in potential energy due merely to changes in pressure, complicating the calculations significantly. --Jayron32 06:06, 12 October 2009 (UTC)
- Which is why plain internal energy is used in those cases - constant volume => no pV work => dU = dQ if there is no other work. Tim Song (talk) 06:38, 12 October 2009 (UTC)
Okay, so enthalpy is any internal energy plus that little bit of potential energy due to "previous" work done against atmospheric pressure? I'm also having a little problem with the derivation of dH.
So H = U + PV; dH = dU + PdV + VdP
since U = H - PV, then dU = dH - PdV - VdP; does dH - VdP = dQ?
I still don't know how to make sense of the difference between dH and dQ. How can the increase in enthalpy be greater than the increase in heat added? John Riemann Soong (talk) 06:45, 12 October 2009 (UTC)
- If the heat added also does work; for example, thermal expansion is a form of work. The confusing thing for a chemist is that chemists don't like to think about work as a form of energy. All chemists care about is the "q" factor; heat energy changes as measurable by temperature change. However, in situations where real work is done, then that ALSO has to be taken into account. PdV and VdP are the "work" factors in your equations. All of these factors are interrelated, so it's a complete mess if you are trying to work out all of the details. For example, a decrease in internal energy could cause both changes in the surroundings' temperature (a change in q) OR it could do work on the surroundings. You could also envision situations where a process is exothermic, but endergonic; that is, work is done ON the system, but heat is released BY the system. For us simple chemists, if we ignore work, all of this mess goes away. --Jayron32 06:51, 12 October 2009 (UTC)
- Okay, yeah, I can do that with my specific situation equations ... (isobaric, isochoric, etc.) but I just want to know facts that hold universally. So, let's say I transfer 1 J of heat to a system from the environment ... the enthalpy can increase by 1 J and on top of that, work can also be done? How does that not violate some form of conservation of energy principle? John Riemann Soong (talk) 07:20, 12 October 2009 (UTC)
- No, if you transfer 1 joule of energy from the system to the environment (dH = - 1), that energy can go into both work and heat; in other words dW + dQ = - 1 Joule. For most common processes, dW is so close to zero it might as well be zero; however you aren't doing work ON TOP OF the heating of the environment. It's just that in some situations, releasing 1 joule of energy from a chemical reaction will not result in 1 joule of heating; it may result in MORE than 1 joule of heating (if the surroundings do work on the system WHILE the system is also heating the environment) OR it may result in LESS than 1 joule of heating (if the system does work on the surroundings WHILE the system is heating the environment). Remember that dQ and dW can have opposite signs. Ignore what Dauto says below about enthalpy not obeying the laws of conservation. Enthalpy does obey the laws of conservation, unless you ignore work. If you ignore work when you shouldn't, then you are introducing an error which makes it appear to not obey the laws of conservation. --Jayron32 18:32, 12 October 2009 (UTC)
- It doesn't violate energy conservation because enthalpy isn't energy (even though it is also measured in Joules). U is the internal energy that must be subjected to the internal energy conservation. From the equation H=U+PV you get dH = dU + d(PV) = (dQ - PdV) + (PdV + VdP) = dQ + VdP. And that's that. Dauto (talk) 15:00, 12 October 2009 (UTC)
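The cancellation Dauto describes is easy to verify mechanically. A minimal numerical sketch (arbitrary illustrative values, treating the differentials as small formal quantities rather than true differential forms):

```python
import random

# Spot-check of the bookkeeping identity dH = dQ + V dP, starting from
# dU = dQ - P dV (first law, pressure-volume work only) and H = U + PV
# (so, by the product rule, dH = dU + P dV + V dP).
random.seed(0)
for _ in range(5):
    P, V = random.uniform(1, 10), random.uniform(1, 10)
    dQ, dV, dP = (random.uniform(-1, 1) for _ in range(3))

    dU = dQ - P * dV              # first law
    dH = dU + P * dV + V * dP     # product rule on H = U + P V

    # The P dV terms cancel, leaving dH - dQ = V dP
    assert abs(dH - (dQ + V * dP)) < 1e-12

print("dH = dQ + V dP holds; at constant pressure (dP = 0), dH = dQ")
```

Since the PdV contributions cancel term by term, dH differs from dQ only by the VdP term, which vanishes at constant pressure, recovering dH = dQ for the open-vessel case discussed above.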
- So wait -- as heat is withdrawn from a system, enthalpy decreases faster than U does? I'm also trying to think of this in terms of heat capacity at constant pressure versus heat capacity at constant volume. Heat capacity at constant volume is less than heat capacity at constant pressure because some of the heat transferred during a constant-pressure process is diverted to work? So at constant pressure, is the q that becomes part of U less than the q that the environment gave to the system?
- Also, how do I distinguish between pressure of the system and pressure of the environment? John Riemann Soong (talk) 16:45, 12 October 2009 (UTC)
- If the system is isolated from the environment, in a sealed container, then the two may have different pressures. If the system is open to the environment, then you have the definition of a constant pressure situation, which is what is required for dQ to equal dH. If you did not have a constant pressure situation, then you would have two energies to keep track of: dQ and dW (heat and work). It really doesn't matter whether we say that dQ = dH - dW or dQ + dW = dH; that is, conceptually it doesn't matter whether the work factor causes us to misestimate the enthalpy or the heat if we assume that dQ = dH. However, if the reaction is open to the surroundings, then no meaningful work is done (strictly speaking, work is almost always done, but when the denominator on the fraction is the moles of gas in the entire atmosphere, then for all intents and purposes that number is so small as to be meaningless). So, to answer your question again, the two pressures are the same unless you have a closed system, and in that case, you need to account for the work factor in calculating the enthalpy (the PdV + VdP). --Jayron32 18:24, 12 October 2009 (UTC)
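The Cv-versus-Cp question above can be made concrete with numbers. A sketch for one mole of a monatomic ideal gas (illustrative values only, not taken from the discussion):

```python
# Why Cp > Cv for an ideal gas: at constant pressure, some of the heat
# supplied becomes expansion work rather than internal energy.
R = 8.314  # gas constant, J/(mol K)

dT = 1.0                 # heat 1 mol of monatomic ideal gas by 1 K
dU = 1.5 * R * dT        # internal energy rise (U = 3/2 nRT)

q_const_volume = dU            # no work done, so all heat goes into U
work = R * dT                  # expansion work P*dV = nR*dT at constant P
q_const_pressure = dU + work   # heat must also pay for the expansion

print(q_const_volume)    # -> ~12.47 J, i.e. Cv = 3/2 R
print(q_const_pressure)  # -> ~20.79 J, i.e. Cp = 5/2 R = Cv + R
```

The extra R per mole per kelvin at constant pressure is exactly the expansion work PΔV = RΔT, which is why the heat supplied exceeds the rise in U, matching the question above.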
- Jayron, dH = dQ + VdP which is not the same as dQ + dW because dW = PdV
- OK, yes, you are right about that bit. Technically, VdP does not represent actual work, since nothing moves; however, VdP is the potential energy change generated by the change in pressure; functionally it behaves like work in this case. It's much easier to think conceptually that there are two factors here: heat energy and mechanical energy. Mechanical energy has two forms: kinetic (work) energy (PdV) and potential energy (VdP). Any deviations between Q values and H values are due to mechanical energy changes, either kinetic or potential. The crux of the OP's problem is ignoring these mechanical energy issues. Since mechanical and heat energy could be the same sign OR opposite signs, the deviation between dQ and dH could be either positive or negative. --Jayron32 19:13, 12 October 2009 (UTC)
- I agree with everything you said except for the characterization of PdV and VdP as being kinetic and potential energies. I don't see how that fits. Dauto (talk) 19:40, 12 October 2009 (UTC)
- Kinetic energy means something is moving. If the volume of a container is changing (a non-zero dV) then something must be physically moving, so it is kinetic energy. If there is no movement, but there is a change in pressure, then there is a change in the force on the walls of the container, and that change in force is a potential energy. To see that, imagine what would happen if the pressure, say, ripped a hole in the wall of the container. That would represent the conversion of that potential energy into kinetic energy. Increasing pressure increases the forthcoming kinetic energy involved in such a rupture, which is the definition of potential energy. So changes in volume are kinetic changes, while changes in pressure are potential changes. --Jayron32 20:31, 12 October 2009 (UTC)
map distances and multiple crossovers
So let's say I have a cis genotype AB/ab (linked on the same chromosome) test-crossed with ab/ab and I'm supposed to find the amount of progeny that will end up AB/ab. If A and B are 10 m.u. apart, is this proportion 45% or 47.5% (or something a little larger than that?). Basically, if two genes are 1 m.u. apart, is the chance of no crossing over equal to 99%, or is that the chance of having parental type gametes? I'm trying to sort this out from the idea that the longer distances lead to underestimated recombination frequency. John Riemann Soong (talk) 06:39, 12 October 2009 (UTC)
- 1 map unit = 1% observed recombination. Therefore, given 100 meioses containing two loci 1 m.u. apart, one would expect to see 1 with a crossover event between the loci, and the other 99 will not cross over. You should be able to apply this to your experiment, which has two loci 10 m.u. apart. Bear in mind you would start off with half (50% is the value you would get with no crossing over at all) then subtract the percentage of those alleles that will have recombined between the two loci. Rockpocket 07:10, 12 October 2009 (UTC)
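Rockpocket's arithmetic can be sketched as follows, assuming the simple model (no interference, no undetected double crossovers) where map distance equals observed recombination frequency:

```python
# Expected testcross progeny for two linked loci, under the assumption
# that map distance (m.u.) equals observed recombination frequency (%).
map_units = 10.0
recomb_freq = map_units / 100.0          # 0.10

# Gametes from the AB/ab heterozygote: recombinants split the RF evenly,
# parentals split the remainder evenly.
recombinant_each = recomb_freq / 2       # Ab and aB gametes: 5% each
parental_each = (1 - recomb_freq) / 2    # AB and ab gametes: 45% each

# Testcross to ab/ab: progeny genotype frequencies mirror gamete frequencies
print(parental_each)     # -> 0.45, i.e. 45% AB/ab (and 45% ab/ab)
print(recombinant_each)  # -> 0.05, i.e. 5% Ab/ab (and 5% aB/ab)
```

Under this simple model the expected AB/ab fraction is 45%; the "something a little larger" possibility the OP raises arises at longer distances, where undetected double crossovers make the observed recombination frequency underestimate the true map distance.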
blood platelets
[Removed paraphrase of existing medical question.] APL (talk) 12:48, 12 October 2009 (UTC)
Time-matter conversion
I once read about some scientist's theory that said, among other things, that time and matter are just different forms of the same thing, much like heat and work are just different forms of energy. In this theory:
- Tachyons are the elementary particles that mediate time, just as the graviton would mediate gravity.
- When tachyons decelerate below the speed of light (I don't remember the exact process for this described in the paper), they "decay" into quarks, leptons and bosons, and when matter accelerates above the speed of light, it "decays" into tachyons. Therefore, c is not an absolute barrier, just a door between the realms of time and space.
- General relativity seems to preclude the transmission of information or matter above the speed of light; this is because it fails to consider the transformation from matter into time. Therefore, the only reason that matter cannot travel above the speed of light is because when it does it automatically becomes time.
I find it difficult to accept this theory, but because I have little knowledge of fundamental particle physics, I would appreciate it very much if someone could please tell me what is the observational evidence against the above theory. Thank you. --Leptictidium (mt) 13:08, 12 October 2009 (UTC)
- I don't think you need observational evidence against it - it just doesn't make sense. Time is the progression from cause to effect, it makes no sense to talk about it being mediated by a particle. --Tango (talk) 13:45, 12 October 2009 (UTC)
- I think the point here is to re-think what the definition of "time" is, so replying that "time" isn't what he says it is seems a little tautological. Again, I don't know the theory, but dismissing it just because "it doesn't make sense" in a very basic explanation seems silly to me, and unscientific. As one writer put it, the theory that the Earth sits on an endless series of turtles isn't wrong because it is ridiculous, it's wrong because we don't find any turtles at the South Pole... --Mr.98 (talk) 14:41, 12 October 2009 (UTC)
- Time is time. If you want to define a new concept you need to give it a new name. There is a difference between ridiculous and nonsense - the theory as described doesn't make sense, the Earth sitting on turtles makes perfect sense, it just happens to be wrong. --Tango (talk) 15:11, 12 October 2009 (UTC)
- A theory is scientific only if it has Falsifiability. This one doesn't. Cuddlyable3 (talk) 14:07, 12 October 2009 (UTC)
- That seems, uh, to be a little premature of a judgment, no? I mean, I don't know the first thing about this reported theory, but I do know that a lot of new theories and high-end physics sounds pretty silly and out-there for a non-practitioner, especially when it is new. --Mr.98 (talk) 14:39, 12 October 2009 (UTC)
- How could the speed of light be a "door"? It's a speed, not a door. In a diagram of space or time intervals, the speed of light is the locus of points that is identical in all frames of reference and separates definitely-space-like from definitely-time-like intervals; but this is not a "door". If you're looking for an analogy, consider it more of a wall. You might want to read the article spacetime for an overview. As has been pointed out, there's not observational evidence against these postulates because they don't make claims about observable phenomena. Nimur (talk) 14:14, 12 October 2009 (UTC)
- Where did you read this theory? And which scientists believe it?
- I don't ask to be contrary, I ask because the way you described it, it seems very confused. Perhaps if we could see the source... APL (talk) 14:33, 12 October 2009 (UTC)
- Yes, I think we'd need to know the details a little bit better to even start to get a sense of it. If it is a "real" theory and not just on-the-internet speculation, then there is likely a lot more to it. --Mr.98 (talk) 14:39, 12 October 2009 (UTC)
- I wouldn't take very seriously any theory that depends on the actual physical existence of tachyons in order to make sense. It smells like crackpot to me. Dauto (talk) 16:35, 12 October 2009 (UTC)
- Er, why not? I mean, yeah, nobody has any evidence yet for tachyons, and they seem a little fishy, but there's no reason that one can't say, "if they exist..." and so on. Scientists have been doing that for a long time. (Remember that there was very little hard evidence for atomism at all until the early 20th century, yet it proved a pretty useful concept before then.) Again, I don't know about this purported theory at all... but I think dismissing things just because they sound weird is a little silly, given how weird the reality of things is (I think complementarity is pretty weird, but that doesn't mean it's wrong). --Mr.98 (talk) 18:20, 12 October 2009 (UTC)
- Tachyons don't sound a little weird to me. They sound plain wrong. That theory described here doesn't pass the smell test. Atomism is not a good analogy. Dauto (talk) 18:38, 12 October 2009 (UTC)
- The observational evidence against any theory that says time and matter are interchangeable is that no-one has observed time changing into matter or vice versa. You would think the effects would be quite noticeable - "hey, I just lost 5 seconds and lots of tracks have appeared in my cloud chamber". Equivalence of work and heat is easily demonstrated; equivalence of matter and energy is demonstrated in every nuclear reactor; but there is no demonstration of the equivalence of matter and time. Gandalf61 (talk) 17:00, 12 October 2009 (UTC)
- ...though it's of note that the matter-energy equivalence was not observed until the twentieth century, even though it was technically happening all the time. --Mr.98 (talk) 18:20, 12 October 2009 (UTC)
- In the early 20th century, many well regarded scientists said relativity was a crackpot theory. Edison (talk) 19:11, 12 October 2009 (UTC)
- True and irrelevant. Dauto (talk) 19:14, 12 October 2009 (UTC)
- I'm sorry that I can remember no more details: I read about the theory in a magazine in an English library and only remembered about it the other day when watching a documentary about M-theory, but I don't remember the author or the name of the theory. Leptictidium (mt) 21:05, 12 October 2009 (UTC)
Weather
I need some good weather websites. Anyone have some? Surface maps, radars, cool graphics, stuff like that. I would like that. Thanks! <(^_^)> Pokegeek42 (talk) 14:03, 12 October 2009 (UTC)
- Google is your friend. Entering "Weather forecast" gave[27] me 37 million "hits". Cuddlyable3 (talk) 14:10, 12 October 2009 (UTC)
- If you're interested in weather in the United States, you should check out the National Weather Service, http://nws.noaa.gov. Their website provides the most authoritative forecasts in the country (in fact, it is the source for most redistributed commercial forecasts); and you can also access much of the raw data, including maps, radars, satellite imagery, and atmospheric conditions data. Being both a government website, and a website run by scientists, the imagery has a little less gloss and veneer than you might be used to if you mainly pull from commercial weather websites... but that rustic rawness is scientific accuracy. Nimur (talk) 14:20, 12 October 2009 (UTC)
- Depending on what you need... Unisys or NRL TC might be useful. -Atmoz (talk) 16:21, 12 October 2009 (UTC)
- And don't forget Wunderground (a.k.a. "Weather Underground") at http://www.wunderground.com/ and the amazing Masters's blog [28]. Bielle (talk) 16:55, 12 October 2009 (UTC)
Age hardening of Mild steel under water
Could anyone please advise on this topic.
I am designing a remote underwater drilling rig. I will drill mild steel (Grade 43 A) at 600 RPM with a diamond-tipped rotary drill. The force I have calculated to drill through the 6 mm thick plate is 1.7 kN. Do I need to consider the effects of age hardening during this underwater drilling process or not? If so, presumably the effects of age hardening could increase the surface hardness of the mild steel up to four times - is this correct?
Please confirm
Regards,
Lyndon —Preceding unsigned comment added by Longone02031966 (talk • contribs) 14:25, 12 October 2009 (UTC)
North Texas weather
Why is it rainier than usual? Accdude92 (talk) (sign) 15:06, 12 October 2009 (UTC)
- The extra 6 inches of rain (beyond the average) since September 1 appear to be due to a series of four heavy-rain days (as opposed to steady drizzle over the entire month). My guess is that these were all part of this front (visible in this satellite IR image). Take a look at climate and weather - explaining deviations from average values is not always possible. This is a pretty significant statistical variation, though. One really big weather system can knock off the statistics pretty strongly. Nimur (talk) 15:21, 12 October 2009 (UTC)
- It's all the more weird because it follows a record-breaking series of 100 degF days through the summer. SteveBaker (talk) 01:05, 13 October 2009 (UTC)
- You seem to be getting our rain! We had over fifteen inches of rain in July alone here in northern UK, and August was mainly wet, too, but the autumn is unusually dry here. We would gladly exchange climates! Dbfirs 08:55, 13 October 2009 (UTC)
- Texas weather has no "why". When I moved from Denton to Toronto, long about November sometime, it was a little bit chilly, maybe around freezing. So I went on the web and looked up Denton. It was eight degrees. Fahrenheit!!!. --Trovatore (talk) 09:47, 13 October 2009 (UTC)
Urinary Tract Infections
It has commonly been stated that human females are generally more susceptible to urinary tract infections. One reason would logically be the shorter length of the female urethra, and its being closer to the anus compared to males, but my question is: do you know of any components in urine that might differ from male to female, resulting in males not being as susceptible as females? I know that in old age there seems to be an equal frequency of infection between men and women, as stated on the wiki UTI page. It also states that women lack bacteriostatic properties secreted from the prostate in males. This is not referenced, and if that is right, would that "property" be prostatic acid phosphatase (PAP)? —Preceding unsigned comment added by Pjohnso8 (talk • contribs) 19:13, 12 October 2009 (UTC)
Time speed shift
On some days I have a quite strong feeling that time, particularly the minutes, runs faster. Ultimately, the walk which usually takes me, say, 30 minutes eats up 35 min or so. Is there any explanation? 85.132.109.227 (talk) 19:39, 12 October 2009 (UTC)
- Our sense of time article is in sorry shape, but some of the reference links might be interesting, as might googling for Template:Websearch. --Sean 19:53, 12 October 2009 (UTC)
- Perception is a purely psychological phenomenon; that is, such differences between expected time differences and actual time differences are entirely products of your own mind. The world itself is unchanged. It is a very common human trait to ascribe psychological effects to the world itself rather than to recognize them as purely internal processes. There is a real phenomenon where time for two people will pass at different rates, called time dilation, but that is a very different thing from what you are describing. --Jayron32 20:27, 12 October 2009 (UTC)
- Jayron32's phrases "products of your own mind" and "internal processes" should be interpreted liberally; some drugs alter the perception of time. Comet Tuttle (talk) 20:54, 12 October 2009 (UTC)
- As noted in this recent news story. --Sean 00:20, 13 October 2009 (UTC)
- Jayron32's phrases "products of your own mind" and "internal processes" should be interpreted liberally; some drugs alter the perception of time. Comet Tuttle (talk) 20:54, 12 October 2009 (UTC)
- Yes, but the drugs don't actually change the way time works. They change the way your mind works. That's the whole point. Doing drugs of this type doesn't expand your capacity for knowledge or wisdom, as was often claimed; it merely increases your perception of your own knowledge or wisdom. You don't expand your consciousness, you just think you do. Nothing changes in the way the world works, just in how you perceive how it works. --Jayron32 02:39, 13 October 2009 (UTC)
KC135 boom
How long can the refueling boom on the KC135 be extended? Googlemeister (talk) 21:23, 12 October 2009 (UTC)
- This photograph of the boom operator's instrument panel has a dial labeled "Telescoping" which appears to range from 0 to 20 feet. It is not clear to me whether this is the full extended length or merely the extra length added by telescoping; also, the units ("feet") are obscured by the dial needle so I may be reading it incorrectly. Nimur (talk) 21:57, 12 October 2009 (UTC)
- I don't recall the actual length of the telescoping boom - 20 feet seems way too short. If you look at photos of the boom, it looks to be about as long as the vertical stabiliser fin is tall. The fin is 40 feet tall - so I'd guess the boom was at least that. I do know that the alternative 'hose and drogue' system used by some aircraft types (eg when refuelling two planes at once) extends to about 75 feet...so again, 20 feet for the boom system seems way too short. It's possible the dial is indicating how far one section of the telescoping boom is extended - rather than the total length. SteveBaker (talk) 00:56, 13 October 2009 (UTC)
Lump on penis
Time over distance...
... I have heard that clocks on the top of really tall buildings run either a few minutes faster or slower per hour than a regular clock. But due to them being so high up, the time difference evens out so they match a regular clock on the ground. Is there any truth to this, and if so, where can I find more information?
Thanks in advance!
74.218.50.226 (talk) 22:25, 12 October 2009 (UTC)
- No building on Earth is tall enough to experience any measurable time dilation from either the gravitational difference or the net difference in rotational motion - certainly not minutes on the hour. Nimur (talk) 22:32, 12 October 2009 (UTC)
How about the buildings on other planets? Just kidding... I didn't think there was that big of a difference. Thanks! 74.218.50.226 (talk) 22:38, 12 October 2009 (UTC)
- Not on other planets - but perhaps a tall building built on a neutron star might show some serious weirdness - but making a building more than a millimeter or two tall would be an impressive engineering feat! SteveBaker (talk) 00:47, 13 October 2009 (UTC)
Clocks at tops of buildings run very slightly faster than clocks at bottoms of buildings, due to gravitational time dilation. The difference is big enough to measure, but only if extremely precise equipment is used. The Pound–Rebka experiment, which was the first experiment to show this effect, took place in a building that wasn't even all that tall. The height difference between the bottom and top was only 22.5 meters (73.8 feet). Red Act (talk) 23:12, 12 October 2009 (UTC)
- Sorry, but it just sounds bogus. It is very surprising and counter-intuitive that there should be an easily measurable effect in such a small distance in such a gravitational field. Edison (talk) 03:40, 13 October 2009 (UTC)
- The original Pound-Rebka paper (1959) states that there should be a factor of 1.09×10^−18 frequency multiplier for each centimeter above the earth's surface. The derivation seems dubious; the approximation seems more dubious; and the prospect of measuring an effect on the order of 10^−18 even today, let alone in 1959, seems very dubious. Finally, the paper states a method for observing this effect - by observing hyperfine structure on nuclear gamma ray emission spectra (which can be measured very accurately) - but does not actually state that the gravitational redshift experiment has been performed. Thus the theoretical derivation of the frequency redshifting was not validated with experimental data. Nimur (talk) 04:01, 13 October 2009 (UTC)
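For what it's worth, the per-centimeter figure quoted above follows directly from the weak-field gravitational redshift formula Δf/f ≈ gh/c². A quick back-of-the-envelope check (standard constants, nothing taken from the paper itself):

```python
# Gravitational redshift in the weak-field limit: delta_f / f ~ g*h / c^2
g = 9.81        # surface gravity, m/s^2
c = 2.998e8     # speed of light, m/s

per_cm = g * 0.01 / c**2   # fractional frequency shift per centimeter
tower = g * 22.5 / c**2    # total over the 22.5 m Pound-Rebka height

print(per_cm)  # -> ~1.09e-18, matching the figure quoted from the 1959 paper
print(tower)   # -> ~2.5e-15, the total shift the apparatus had to resolve
```

Note that the quantity the experiment had to resolve is the ~2.5×10^−15 total over the full tower height, not the 10^−18 per-centimeter rate.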
- The 1960 followup paper, Apparent Weight of Photons, does explicitly claim to have completed experimental measurement. Again, I have to call "dubious" all over this paper - the introduction outlines their "systematic measurement errors" that they attempt to mitigate by what I would consider data cherry-picking (using only certain combinations of experimental results and measuring a delta); they compensate for the temperature shift by what appears to be an arbitrary multiplication; etc. I'm not a relativistic physicist; I know little about hyperfine structure of gamma ray spectra - but I am an experimental physicist - and I don't like claims that are buried so far below the noise floor that you have to subtract elaborate models of the noise to "find" your result. I think it speaks volumes that this work, published in the early 1960s, has not been brought up since then as a bastion of scientific method and empirical proof of relativity - it's been cited once or twice in five decades. The discerning wikipedian will probably want to read the paper series themselves and decide whether my harsh judgement of "dubious" is warranted; after all, these papers were peer-reviewed and published - but needless to say, the effect is minuscule if it is measurable at all. Nimur (talk) 04:14, 13 October 2009 (UTC)
- For context, Gamma ray spectrometer shows several plots from modern equipment (with about 12 bits of frequency resolution, or ~1000 "channels"). This is the top-of-the-line gear in 2009. To measure a frequency deviation of 10^−18, one would need to pull about 64 bits (equivalent) of resolution - fifty years ago - out of a custom-built analog experimental apparatus. Nimur (talk) 04:29, 13 October 2009 (UTC)
- Pound-Rebka isn't some forgotten experiment that's been ignored by mainstream physicists. The experiment is widely referenced as being an important confirmation of general relativity in general relativity textbooks. For example, MTW, which is generally considered to be the "Bible" of general relativity, devotes more than two pages to the experiment (see pages 1056-1058). Other GR textbooks I happen to have that reference Pound-Rebka are "Gravity" by James Hartle, which devotes half of p. 118 to it, and "A first course in general relativity" by Bernard F. Schutz (see p. 120).
- The experiment didn't need to measure anything accurate to one part in 10¹⁸. The 10⁻¹⁸ is the relative amount of change per centimeter of height. But the apparatus had a height difference of 22.5 meters, so it was only necessary to measure a relative change of about 2.5×10⁻¹⁵.
- A relative change of 2.5×10⁻¹⁵ is enormous compared to what is measured by modern gravitation experiments. For example, LIGO measures relative changes down to 10⁻²¹, i.e., more than a million times smaller than what needed to be measured by Pound-Rebka. Red Act (talk) 05:45, 13 October 2009 (UTC)
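- The figures quoted above are easy to verify from the standard formula for gravitational frequency shift, Δf/f = gh/c². A minimal sketch (the constants g and c are standard; 22.5 m is the tower height stated above):

```python
# Back-of-the-envelope check of the Pound-Rebka numbers quoted above.
g = 9.81        # m/s^2, surface gravity of Earth
c = 2.998e8     # m/s, speed of light

# Fractional gravitational frequency shift: df/f = g*h/c^2
shift_per_cm = g * 0.01 / c**2    # per centimeter of height
shift_tower = g * 22.5 / c**2     # over the 22.5 m apparatus

print(f"per cm:  {shift_per_cm:.3e}")   # ~1.09e-18
print(f"22.5 m:  {shift_tower:.3e}")    # ~2.5e-15
```

This reproduces both the 1.09×10⁻¹⁸ per-centimeter figure and the roughly 2.5×10⁻¹⁵ total shift over the tower height.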
- Here is a document by Dr. John Mester, a physics prof at your school, which refers to the Pound-Rebka experiment. So you could stop by his office and ask him about Pound-Rebka, if you're still dubious about it. Red Act (talk) 07:59, 13 October 2009 (UTC)
- I'll follow up on that lead. As I disclaimed earlier, I'm not an expert in this field - and the experiment did get published in a reputable peer-reviewed journal - so as much as I flail around shouting "dubious", my opinion is only worth so much. The Gravity Probe B mission and LIGO, both also seeking to measure relativistic gravitational effects, suffer from tiny signal amongst huge noise. I think the ultimate answer here is that the OP's suggestion of "minutes per hour" is very far from reality; the predicted changes should be femtoseconds - which cannot be measured by even the most accurate atomic clocks. To measure these, it seems necessary to build a complex custom "device" and extrapolate a time dilation via a frequency shift. Nimur (talk) 11:47, 13 October 2009 (UTC)
- I'd be really cautious about suggesting that the Pound-Rebka device needs 64 bits of precision to make a successful measurement. Remember, the experimenters don't need to measure the frequency from scratch. They just need an apparatus sensitive to minor differences in frequency — which the universe handily provides in the form of crystalline iron-57. It's the difference between measuring elapsed milliseconds between two events on the bench (trivial) and attempting to measure elapsed milliseconds since the start of the universe, twice, and taking a difference (ludicrous). TenOfAllTrades(talk) 12:38, 13 October 2009 (UTC)
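- The point about differencing can be made concrete with 64-bit floating point as a stand-in for "bits of precision" (a sketch; the age-of-universe figure is approximate): a femtosecond-scale offset is hopeless to resolve against a huge absolute epoch, but trivial between two bench-scale timestamps.

```python
AGE_OF_UNIVERSE = 4.35e17   # seconds since the Big Bang, roughly
DT = 1e-15                  # one femtosecond

# Differencing two absolute timestamps: a 64-bit float carries only
# ~16 significant digits, so the femtosecond is swallowed entirely.
lost = (AGE_OF_UNIVERSE + DT) - AGE_OF_UNIVERSE
print(lost)   # 0.0 -- the offset has vanished

# Differencing two nearby bench events: the same offset survives.
kept = (1e-3 + DT) - 1e-3
print(kept)   # ~1e-15
```

The same arithmetic motivates measuring a frequency *difference* directly, as Pound-Rebka did, rather than two absolute frequencies.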
October 13
The irregular dark spots on sidewalks
What are they and how are they formed? Thanks. Imagine Reason (talk) 00:55, 13 October 2009 (UTC)
- Very very old gum, actually. This is what you're talking about, right? Someguy1221 (talk) 01:15, 13 October 2009 (UTC)
- They could also be the imprints of fallen leaves: I've noticed that sometimes (perhaps when the cement of the sidewalk was recently poured) a fallen leaf will stain the sidewalk brown. Sometimes the outline of the leaf is unmistakably clear. (I think there's a sidewalk near me that displays this; perhaps I can take a picture tomorrow.) —Steve Summit (talk) 01:45, 13 October 2009 (UTC)
- Such leaf stains would be caused by tannins - effectively permanently dyeing the concrete. Even after the leaf blows or erodes away, the tannins can remain, staining the sidewalk. Nimur (talk) 04:19, 13 October 2009 (UTC)
volume change in an isothermal process versus an adiabatic process
I'm trying to figure out which is larger in magnitude ... and not being very successful at it. I'm using the ideal gas equation. Help! I get as far as −∫(nRT/V) dV = change in U and stuff, but just need some direction here. John Riemann Soong (talk) 06:15, 13 October 2009 (UTC)
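- One way to compare the two numerically (a sketch, assuming an ideal monatomic gas with γ = 5/3, starting both processes from the same state and dropping the pressure by the same factor): the adiabat PV^γ = const is steeper than the isotherm PV = const, so the isothermal volume change comes out larger.

```python
# Compare volume change for the same pressure drop, ideal gas.
gamma = 5.0 / 3.0   # monatomic ideal gas
V1 = 1.0            # initial volume (arbitrary units)
ratio = 2.0         # P1/P2, i.e. pressure halved in both processes

# Isothermal: P V = const       ->  V2 = V1 * (P1/P2)
V2_iso = V1 * ratio
# Adiabatic:  P V^gamma = const ->  V2 = V1 * (P1/P2)**(1/gamma)
V2_adi = V1 * ratio ** (1.0 / gamma)

print(V2_iso, V2_adi)   # isothermal expansion is the larger one
```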
symbols of restraint in women's fashion
Hi - I'm interested in the idea that some clothes and accessories worn by women symbolise helplessness and/or restraint. Some examples might be tight skirts, stilettos, chokers - and at the more extreme end of the scale, footbinding and neck rings. I'm also interested in the idea, once ascribed to Catherine MacKinnon, that all sex is rape -- of course this isn't literally true, and as it turns out, it was never made as a serious quote, but I do wonder about the fact that the females of so many mammals, with some exceptions, are statistically smaller and weaker than the males. Are the two things related? Were we more likely to breed if a male was able to run down and restrain a female? Did this result in weaker females being selected for? Is this why we have 'restraint-fashion'?
Have these ideas ever been seriously discussed? If so, I'd appreciate it if someone could let me know who and where.
sorry - I realise these ideas are sort of ugly, but it'd be great if we could discuss them without rancor.
Thanks all,
Adambrowne666 (talk) 06:28, 13 October 2009 (UTC)
- The rules of the RD say that this is not a forum for discussion and if the subject of the question is dry then this rule is usually adhered to; but when sex is the subject of the question the rules go out the door - just watch. Caesar's Daddy (talk) 07:03, 13 October 2009 (UTC)
- Fashions like leggings or mini-skirts are far less restricting than men's trousers, for example, and therefore contradict your hypothesis. Currently women's fashion seems similar to the fashion that male Cavaliers wore centuries ago (high boots and big belts, for example). 92.24.99.195 (talk) 12:10, 13 October 2009 (UTC)
Physics
Is C = 2F in a lens? It's no, but why? Use n/v − 1/u = (n − 1)/R, with n = 1.5, i.e. glass. —Preceding unsigned comment added by Fantasticphysics (talk • contribs) 09:53, 13 October 2009 (UTC)
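- A quick numeric check using the single-surface refraction formula quoted in the question, n/v − 1/u = (n − 1)/R, under the usual convention that parallel incoming rays correspond to an object at infinity (a sketch; R = 1 is an arbitrary unit):

```python
# Solve n/v - 1/u = (n - 1)/R for the image distance v of a single
# refracting surface, with the object effectively at infinity, so
# that v equals the focal length F.
n = 1.5           # refractive index of glass
R = 1.0           # radius of curvature (arbitrary units)
u = -1e12         # object at (effectively) infinity: parallel rays

v = n / ((n - 1) / R + 1 / u)
print(v)          # ~3.0, i.e. F = n*R/(n-1) = 3R

# The centre of curvature C lies at distance R from the surface,
# so here C = F/3, not 2F; the relation C = 2F (R = 2f) holds for
# spherical mirrors, not for a single refracting surface.
```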
The sexual desire of post-op transsexuals
Because they have lost their testes and do not have ovaries, does this mean they have no or a greatly reduced sexual desire? I recall hearing that they cannot orgasm. Perhaps taking female hormones results in desire, but I've heard that it may be normal for them to stop taking them after a while. 92.24.99.195 (talk) 11:40, 13 October 2009 (UTC)