
Wikipedia:Reference desk/Science: Difference between revisions

From Wikipedia, the free encyclopedia
:Don't worry about the other female kitten taking a maternal interest; in my experience with cats and kittens it's perfectly normal. In feral groups, female cats will raise the kittens between them. The time to worry is if an entire male cat takes an interest in kittens, as he will regard them as a threat and a potential source of food. Oh, and I'd get all your cats [http://www.cats.org.uk/cat-care/cat-care-faqs/cats-protection-cat-care-faqs-neutering neutered] as soon as they are old enough. The last thing the world needs is more cats (and I'm a cat lover). --[[User:TammyMoet|TammyMoet]] ([[User talk:TammyMoet|talk]]) 11:10, 21 February 2012 (UTC)
::I think you mean "intact" male. Not entire. <span style="font-family:monospace;">[[User:Dismas|Dismas]]</span>|[[User talk:Dismas|<sup>(talk)</sup>]] 11:13, 21 February 2012 (UTC)
::I've seen male cats take a keen interest in others' kittens. It's often perfectly harmless. Kittens like to play, and will naturally try to interest other cats in playing with them. Some cats (including adult males) are perfectly happy to oblige. Unless you sense aggression from the adult male, I wouldn't worry. And as to your situation, I highly doubt it will have any "adverse effect on the connection between the mother and her litter" in a detrimental way. Be thrilled that they're all on good terms. [[Special:Contributions/58.111.178.170|58.111.178.170]] ([[User talk:58.111.178.170|talk]]) 15:58, 21 February 2012 (UTC)


*Don't worry, we will spay her; the other one's already been done. They just both escaped out of the house one time and got knocked up =/. The kittens though, if anyone's interested, are beautiful: four with white fur with random black tortoiseshell markings scattered about it, and a pure black one.

Revision as of 15:58, 21 February 2012

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
See also:


February 17

Motion sickness and flying

Which part of the plane moves least? Am I right to suspect that it's where you sit over the wings? If you think of the plane as a teeter-totter, that makes sense when the engines are on the wings. But maybe in a plane with tail-mounted engines the tail would move less. — Preceding unsigned comment added by Ib30 (talkcontribs) 00:42, 17 February 2012 (UTC)[reply]

The aircraft moves least at its center of gravity, and that will most likely be somewhere between 20% and 40% of the wing chord. However, the difference between the amount of movement at the center of gravity and the extremes of the passenger cabin will be very small. It won't be significant enough to determine whether a passenger will suffer sickness or not. If a passenger is vulnerable to motion sickness the best strategy will be to take advantage of one of the motion sickness medications available these days. Dolphin (t) 01:25, 17 February 2012 (UTC)[reply]
Medical advice???? — Preceding unsigned comment added by 165.212.189.187 (talk) 14:03, 17 February 2012 (UTC)[reply]
Concur with Dolphin, but my experience has been that the back of the tail section is notably rockier. At least on the midsize planes I usually use. Matt Deres (talk) 02:17, 17 February 2012 (UTC)[reply]
I agree. I assume that's why the first class section is always at the front. As a general rule, in any vehicle, you'll get a smoother ride towards the front, and it's quieter there too.--Shantavira|feed me 08:26, 17 February 2012 (UTC)[reply]
First Class (or Business Class if there is no First Class) is always closest to the entrance. This has the dual benefit of letting Economy passengers pass through First or Business Class and see what they are missing out on, while First and Business Class passengers never have to pass through Economy Class.
Putting First Class up the front is unlikely to have anything to do with comfort of the kind mentioned in the OP's question. Modern airline aircraft have their wings, and their centers of gravity, closer to the rear of the cabin than the front. This is particularly noticeable in the stretched versions of the Douglas DC-9, and in the Boeing 727-200. In these aircraft, the Economy passengers at the rear of the cabin are actually closer to the center of gravity than the First and Business Class passengers at the front of the cabin. If the quality of the ride was significantly affected by proximity to the aircraft's center of gravity, it would be better for the passengers at the rear of the cabin than for the crew and passengers at the front of the cabin. However, the quality of the ride is determined almost entirely by the weight and speed of the aircraft (best at heavy weight and slow speed), and almost not at all by the passenger's position relative to the center of gravity.
Matt Deres has written above that in his experience, the ride is better at the front of the cabin and worse at the back. I think this is just an impression, and not one supported by any measurements I am aware of. I agree that passengers at the rear of the cabin suffer the tunnel effect of looking down a long tube, more so than passengers at the front or immediately behind a bulkhead, and this may cause a higher degree of discomfort for passengers at the rear of the cabin. Dolphin (t) 11:10, 17 February 2012 (UTC)[reply]

Loss of appetite

I'm currently enjoying a bout of walking pneumonia. Among the many entertaining symptoms is a loss of appetite (our article doesn't mention it, but my doctor said it's a fairly common complication of pneumonia). In my case, it seems to have struck at several levels: 1) Contemplating food no longer "whets" my appetite; plenty of things I used to enjoy eating and drinking now seem... almost repellent to think about. 2) Food has almost no flavour to me; I can still taste foods quite acutely, but that's all. Despite not having any nasal congestion, I don't seem to smell much. 3) My stomach growls occasionally, but it doesn't seem "attached" to anything; it might as well be warning me of getting my hair cut. Note, I have no actual nausea or tummy trouble that would seem to make sense of this.
Now, in terms of medical-type advice, I've been to my doctor and gotten x-rays and begun treatment, so I'm not interested in any help in that regard (no offense!). What I'm curious about is how these three seemingly disparate components of what constitutes "feeling hungry" could all be affected at once. Our article on loss of appetite redirects to the symptom of anorexia, which goes on to explain... nothing at all. It's like something snipped the same tiny pieces from my psychology, sense gathering, and digestion while leaving everything else pretty much intact (well, mostly).
Secondly, I'm curious about the mechanism and/or reason behind this. My lungs have some fluid/blockage in them due to an infection - how/why does that bugger up my appetite? I've read studies that suggest some people may avoid iron-rich foods during infections as a way of promoting mild anemia, which may inhibit some bacterial reproduction. It makes sense to evolve that reaction, but it seems a poor tactic to have me starve myself when I'm already out of breath and potentially facing down a deadly infection. Or is this the result of action by the viral or bacterial agent itself? Pointers for either question would be appreciated. Matt Deres (talk) 02:15, 17 February 2012 (UTC)[reply]

Loss of the sense of smell is called anosmia. Smell is a critical component of taste. (OR - my father came back from WW2 without a sense of smell, and told me once he missed it when eating because it seemed everything tasted bland or the same.) Smell is the sense that has the power to evoke the most powerful memories - for example, if I smell water in which plant material has stood and begun to fester, I am taken right back to my grandmother's florist shop in the 1960s. Maybe it's worth you investigating articles linked from Smell for some insight? At a guess, I'd say it would have something to do with the infection not just being confined to the lungs, but affecting the mucous membranes further up the respiratory tract as well. --TammyMoet (talk) 09:52, 17 February 2012 (UTC)[reply]
I found this paper "Anorexia of infection: current prospects" being cited by another discussing wasting caused by TB and HIV (the first is paywalled, but drop me an email and I can send you a copy). This has a section on "Mechanisms of anorexia in infection and cancer" which you might find useful. They seem to be saying that cytokines produced during infections lead to activation of the central anorexigenic system - it seems a pretty complicated process, but the hormone leptin will come in somewhere. None of those sources explain why, but I'd speculate that it is a bit like the fever response - the body acts in a way that is not particularly beneficial to itself in the short term, but which will hopefully kill off the infection, preventing it from taking over. Hope you get well soon! SmartSE (talk) 10:49, 17 February 2012 (UTC)[reply]
Thanks everyone. It hadn't occurred to me that it might be similar to the fever response where minor, short term fevers often have a benefit, while uncontrolled fevers are decidedly unhealthy. If the loss of appetite is an accidental over-reaction of an otherwise valid anti-infection strategy, that would easily explain why such disparate systems were hit at roughly the same time. Matt Deres (talk) 15:42, 19 February 2012 (UTC)[reply]

Relativity: what's wrong with this logic?

Okay, suppose we know that the laws of physics are invariant with respect to a shift in position, and invariant wrt a shift in time (i.e. the transformations preserve the laws of physics). Then wouldn't that imply that the laws are *also* invariant wrt changes in reference frames, because a change in velocity amounts to continuously alternating between the two transformations? Or is there something wrong with this logic? 74.15.139.132 (talk) 02:29, 17 February 2012 (UTC)[reply]

If you work out the math, you'll see that you're describing inertial reference frames; but if you pay close attention to your math, you'll see that you aren't describing Non-inertial reference frames. For example, you cannot compose a rotation unless you vary the increment, or the differential element, in your notation, as a function of time. Nimur (talk) 03:19, 17 February 2012 (UTC)[reply]
So, are you saying that the principle of relativity can be deduced from spatial and temporal symmetry? 74.15.139.132 (talk) 02:49, 19 February 2012 (UTC)[reply]
Indeed, if you start with Maxwell's equations, you can derive the Lorentz transform using basic geometry. I believe this is covered in our article Lorentz transform, and it's also treated in certain editions of Griffiths's Introduction to Electrodynamics, and more rigorously in Jackson's Classical Electrodynamics. When approached this way, relativity is "a basic consequence of geometry," and not "some magical cosmological voodoo mysticism that only Einstein's genius could have deduced, revealing fundamental inner workings of the universe." Personally, I don't like my physics to be voodoo-y; I prefer when it's just the formalization of the basic consequences of simple physical observation. But, I guess the voodoo-ization of physics sells more pop-sci tv specials. Nimur (talk) 18:35, 19 February 2012 (UTC)[reply]
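For reference, the result being discussed can be stated compactly. This is the standard textbook form of the Lorentz transformation for a boost with speed v along the x-axis (a general statement, not the specific derivation from Maxwell's equations that Nimur mentions):

```latex
x' = \gamma \left( x - v t \right), \qquad
t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
```

As v/c goes to zero, γ goes to 1 and these reduce to the ordinary Galilean shift x' = x − vt, t' = t.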

angular velocity

The distance from the Sun to Earth is 1 AU. It takes Earth 1 year to orbit the Sun. Let's say there is another planet exactly the same as Earth (same mass) but 2 AU away from the Sun. How much more slowly than Earth does it revolve around the Sun? In other words, how long would it take to orbit the Sun? Once I know how long it takes to orbit the Sun, I can easily calculate how much slower it is compared to Earth. The answer is not simply 2 times slower; it doesn't work that way. Can someone help? Thanks! Pendragon5 (talk) 03:59, 17 February 2012 (UTC)[reply]

Technically, it revolves around the sun. And it's been a long time since college physics, but I think the orbital velocity is some function of an inverse-square relationship. That is, a planet twice as far from the sun as the earth is, might take 4 times as long to orbit the sun. But the experts need to weigh in on this. ←Baseball Bugs What's up, Doc? carrots 04:44, 17 February 2012 (UTC)[reply]
The relationship that Bugs is remembering may be that of orbital speed. An object orbiting at 2 AU will have an orbital speed 2^(-1/2) that of Earth, but having twice as far to go, its orbit will take 2^(3/2) years. -- ToE 00:45, 20 February 2012 (UTC)
You asked a similar question last week or so, and just like last time, the equation is still located at Orbital period. Go to the section titled "Small body orbiting a central body". The equation hasn't moved from that article since the last time you asked about orbital periods. Just plug whatever numbers you want into it, and get any answer you want. Math is useful! --Jayron32 04:47, 17 February 2012 (UTC)[reply]
(ec) It's a little more complicated than what I said. See Kepler's laws of planetary motion. ←Baseball Bugs What's up, Doc? carrots 04:48, 17 February 2012 (UTC)[reply]
I'm having problems applying the equation to this problem. Can anyone work through my question above as an example for me? Show me what numbers to plug into my calculator and how they are calculated to get the answer. Thanks! Pendragon5 (talk) 19:15, 17 February 2012 (UTC)[reply]
And this problem may be related, but it's not the same problem I asked last time. Last time it was the orbital period of a binary star; this time it's the period of each individual object. Pendragon5 (talk) 19:44, 17 February 2012 (UTC)[reply]
I know it isn't the same question, but you can still get your answer from the exact same article. It's the exact same method as last time too: you read the equation, plug in the numbers in the appropriate units, and it spits out an answer. This is what equations do. The article even describes every single number in the equation. So let's do this again:
T = 2π √( a³ / (G·M) )

where:
  • T is the orbital period (in seconds),
  • a is the semi-major axis of the orbit (in meters),
  • G is the gravitational constant (about 6.674 × 10^-11 m³ kg⁻¹ s⁻²),
  • M is the mass of the central body (in kilograms).
If you put those numbers into your equation, you get the orbital period in seconds. There are 60 seconds in a minute, 60 minutes in an hour, 24 hours in a day, and approx 365.25 days in a year. Using those numbers you can convert to any time unit you want. --Jayron32 23:21, 17 February 2012 (UTC)[reply]
Number 26 is what I'm trying to solve. I have calculated that Cluster II has a diameter twice as long as Cluster I's. The answer for this is the square root of 2, or approximately 1.4142, times smaller. Can anyone show me how to get the answer and what formula to use? Thanks! Pendragon5 (talk) 23:28, 17 February 2012 (UTC)[reply]
It is not that complicated. At 1 AU the period is 1 year. At 2 AU the period is 2^(3/2) = 2.82843 years, according to Kepler's third law. Bo Jacoby (talk) 17:30, 21 February 2012 (UTC).[reply]
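The thread's answer can be checked with a short script. This is a sketch of the period formula discussed above; the solar mass and AU values below are standard constants assumed for the illustration, not figures from the thread:

```python
import math

# Standard astronomical constants (approximate)
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30           # mass of the Sun, kg
AU = 1.496e11              # astronomical unit, m
YEAR = 365.25 * 24 * 3600  # seconds in a Julian year

def orbital_period_years(a_au):
    """Period of a small body orbiting the Sun, from T = 2*pi*sqrt(a^3 / (G*M))."""
    a = a_au * AU
    return 2 * math.pi * math.sqrt(a**3 / (G * M_SUN)) / YEAR

print(orbital_period_years(1))  # ~1.0 year
print(orbital_period_years(2))  # ~2.83 years, i.e. 2^(3/2) times Earth's period
```

Note that the planet's own mass never enters: for a small body, the period depends only on the semi-major axis and the central mass, which is why "same mass as Earth" is a red herring in the question.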

Chameleons distance from lizards and geckoes

How closely related are chameleons to lizards and geckoes? A friend claims that chameleons are quite recently split off from the lizard and gecko line but I'm not buying it just on his say so. The body shape and the way their limbs work differs quite radically - which to my mind implies a substantial evolutionary distance. Judging just by body shape and locomotion even crocodilians and lizards are more similar than lizards and chameleons. Roger (talk) 09:52, 17 February 2012 (UTC)[reply]

Chameleons and geckos are lizards. Chameleons belong to the suborder Iguania and geckos belong to the infraorder Gekkota. --SupernovaExplosion Talk 10:00, 17 February 2012 (UTC)

Basic concept of Ideal gases

An air bubble is created in a lake at a depth of 200 meters. At what depth will the volume of the bubble have doubled?

I don't know how to approach this problem - what is the average temperature as a function of the depth, what is the pressure at 200m, etc. (please correct my English)

--77.127.248.16 (talk) 10:30, 17 February 2012 (UTC)[reply]

The approach to this is simple: don't bother with temperature; it's best to assume that it doesn't change, and anyway there is no information about it. There is no standard relationship between depth and temperature. The static water pressure increases by about 1 bar (i.e. 10^5 Pascal, or Newton/m²) for every 10 meters you go deeper. So at the water surface the pressure is about 1 bar; at 30 meters it is 1 + 3 = 4 bar. Use the ideal gas law pressure × volume = n·R·T, where nRT is constant. -- Lindert (talk) 11:07, 17 February 2012 (UTC)[reply]
Thank you. Is there a way to prove the fact you mentioned about the pressure, without experiments, given the water's density? --77.127.248.16 (talk) 11:25, 17 February 2012 (UTC)[reply]
The increase in pressure is caused by the weight of the water above. If you take a surface A (in m²) at a depth x (in m), the volume of the water above it is A·x. The mass of this water is volume × density (the density D of water is 1000 kg/m³). The gravity force is mass × g, where g = 9.8 m/s². So force = A·x·D·g. Pressure is force/surface, so dividing by A, we get pressure = x·D·g. If we take x = 10 m, we get pressure = 10 × 9.8 × 1000 = 0.98 × 10^5 kg/(m·s²) (= Newton/m²), which is about 1 bar. -- Lindert (talk) 12:44, 17 February 2012 (UTC)[reply]
Let us know when you get your answer, and we will show you a way to do this in your head. -- ToE 01:16, 18 February 2012 (UTC)[reply]
I got my answer: 10.5 atm, and a depth of 95 m, I believe. Is there a faster way, ToE?--77.125.92.194 (talk) 10:57, 19 February 2012 (UTC)[reply]
Good work. You have to be the final judge of the appropriateness of that answer. Without additional context suggesting otherwise, the problem you stated is likely meant to be solved as an isothermal process (what you did), but if it was from a course which had just discussed adiabatic expansion (and the bubble was somehow sufficiently insulated from the surrounding water) or if it was at the end of a chapter on the temperature profiles of lakes, then other assumptions would likely hold. You also need to decide if the 10m/atm rule of thumb is sufficiently accurate for the setting, or if the problem was set in an alpine lake where the surface air pressure was significantly less than 1 atm.
You probably wouldn't want to abbreviate your work any if it is homework, but the use of manometric pressure units can let you double check the answer in your head. Just as blood pressure can be expressed in millimeters of mercury, the atmospheric pressure can be expressed in inches of mercury, or a gas pressure (typically in a low pressure line) can be expressed in inches of water, it is common, particularly for divers, to express pressure in meters (or feet) of water (or seawater). Thus, using the rule of thumb, we have 1 atm = 10 meters-of-water.
The pressure at the bottom of the lake is 210 meters-of-water (10 for the initial 1 atm of the surface air-pressure plus 200 for the depth of the water). You know that the pressure needs to halve for the volume of the bubble to double, and that pressure is 105 meters-of-water at a depth of 95 meters. Check!
This is just as valid a way of working your problem as your calculation of pressures, and just as you could achieve a more accurate answer by using a more accurate value than 1 atm / 10 m, this method would achieve the same increase in accuracy by determining the surface atmospheric pressure to a value more accurate than 10 meters-sea-water. The biggest advantage of your method is that it offers an easier way to show your work. -- ToE 15:05, 19 February 2012 (UTC)[reply]
Your method is pretty cool :)
It was a question I was asked in a physics club (I'm in high school), so I do assume the process is isothermal, and I don't need to submit it. Thanks a lot!--87.68.69.8 (talk) 07:22, 20 February 2012 (UTC)[reply]
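The isothermal calculation worked through in this thread can be sketched in a few lines, using the thread's 10-meters-of-water-per-atmosphere rule of thumb:

```python
# Isothermal bubble rising in a lake: Boyle's law, P1*V1 = P2*V2.
M_PER_ATM = 10.0   # rule of thumb: 1 atm of pressure per 10 m of water
SURFACE_P = 1.0    # surface air pressure, atm

def pressure_atm(depth_m):
    """Absolute pressure at a given depth, in atmospheres."""
    return SURFACE_P + depth_m / M_PER_ATM

def depth_where_volume_doubles(start_depth_m):
    """Depth at which the bubble's volume has doubled (i.e. pressure has halved)."""
    p_half = pressure_atm(start_depth_m) / 2
    return (p_half - SURFACE_P) * M_PER_ATM

print(pressure_atm(200))                # 21.0 atm at 200 m depth
print(depth_where_volume_doubles(200))  # 95.0 m
```

This is just ToE's manometric shortcut in code: 210 meters-of-water halves to 105 meters-of-water, which is a depth of 95 m once the 10 m of surface air pressure is subtracted.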

Gulls and distance from Skuas...

Both are in the suborder Lari - but how close, evolutionarily speaking (in terms of how long ago the last common ancestor lived), are the gulls and the skuas? Some gull species and some skua species could almost be palette swaps of each other (e.g. Great Black-backed Gull and Great Skua) and their behaviour is similar - though neither of those is always indicative of anything, hence the question... --Kurt Shaped Box (talk) 12:07, 17 February 2012 (UTC)[reply]

As a group, Charadriiformes is one of the earliest clades of birds, splitting off from the rest of Neornithes and radiating sometime during the Late Cretaceous (from as early as ~93 mya to ~65 mya). A great deal of morphological convergence happens between different shorebird taxa though (coloration being the most obvious). Skuas and gulls are close, but not quite as close as the rest. Gulls, skimmers, and terns form their own clade. Skuas, jaegers, and auks form another, with auks being the most basal of all members of Lari.
In terms of the fossil record, the earliest known fossil alcid is Hydrotherikornis from the Late Eocene (~35 mya) of North America. The oldest known fossil stercorariid is from the Middle Miocene (~15 to 13 mya). The rest are unknown (or at least not reliably identifiable) in pre-Oligocene deposits. Earliest known larid/sternid (putative) is an undescribed specimen from the Early Oligocene (~33 to 28 mya) of Mongolia.
In terms of molecular clock data from mtDNA analysis of crown groups with fossil constraints - see this timetree table here, which puts the last common ancestor of the two clades (Alcidae-Stercorariidae + Laridae-Sternidae-Rhynchopidae) from just before the K/T extinction event that wiped out the rest of the dinosaurs. It's problematic though, given that a lot of the family affinities of the fossils of the group can not be reliably identified.
Also see Charadriiformes#Evolution, List of fossil birds, and Livezey 2010.-- OBSIDIANSOUL 16:02, 17 February 2012 (UTC)[reply]

Deriving the number of quantum states

I am interested to know how the value of as the number of possible quantum states that a metre cubed of matter with the density of a human can assume is derived (described in this video). Widener (talk) 13:33, 17 February 2012 (UTC)[reply]

See Boltzmann's entropy formula, which is sometimes called the Boltzmann equation (though that name can be applied to several unrelated equations as well; Ludwig Boltzmann was a mega-important science-type dude). The relevant bit in that article is the way to calculate "W", which is the number of microstates a particular system can assume. A cubic meter of human has a lot of particles, so it has a lot of possible microstates. --Jayron32 13:44, 17 February 2012 (UTC)[reply]
Okay, that's interesting. The calculation given in that part of the article is a rough approximation of course (I don't think a human can be considered an ideal gas). Does that calculation underestimate or overestimate the true value? Widener (talk) 14:07, 17 February 2012 (UTC)[reply]
In that case, what you probably want is the Gibbs entropy calculation, which uses a slightly different factor than "W" to calculate entropy: it uses "p", which is defined over a Statistical ensemble of microstates. At this point, we've far exceeded my personal knowledge and skill in statistical thermodynamics; but my understanding is that the complex calculations involved in calculating "p" take into account the sort of interactions that occur between particles in things like solids and liquids; those interactions will tend to restrict the number of possible microstates (for example, molecules locked in a crystal lattice will have almost no translational or rotational energy states; all energy states will be vibrational). Basically, my skills don't let me check whether the video is using the Boltzmann or the Gibbs definition, but in principle there exists an equation which corrects for the non-ideal conditions, and I think that the Gibbs definition takes that into account by considering the possibility of interactions which either restrict or expand the microstates of a system in different phases than the "ideal gas". There's also the Von Neumann entropy, which is more directly related to quantum mechanics. You'll note that all of these various entropy definitions still relate superficially to the Boltzmann formula; they all have the form S = constant * log (# of states). Where they differ is in their definitions of the constant and in the method of calculating the # of states, which is sort of where your problem is centered. Boltzmann kept things relatively simple and abstract. When you get to the more advanced entropy definitions, the calculations become much more complex, and frankly, are beyond my own direct comprehension. --Jayron32 14:36, 17 February 2012 (UTC)[reply]
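To get a feel for the magnitudes hiding in S = k·ln W, here is a small sketch. The entropy value passed in is an arbitrary illustration, not a figure from the video or from any of the definitions above:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def log10_microstates(entropy_j_per_k):
    """Invert S = k_B * ln(W) to get log10 of the microstate count W."""
    return entropy_j_per_k / (K_B * math.log(10))

# Even a tiny macroscopic entropy corresponds to an astronomical state count:
# S = 1 J/K gives W = 10 to the power of roughly 3.1e22.
print(log10_microstates(1.0))
```

The point of working with log10(W) rather than W itself is that W is far too large to represent directly; numbers like "10 to the 10 to the N" arise precisely because the exponent itself is of order 10^22 or more.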

Uniform magnetisation/saturation of thin ferrite sheet

I have a requirement to magnetise into magnetic saturation a sheet of EPCOS soft ferrite polymer, 0.4 mm thick and 200 mm × 100 mm. I need to do this uniformly over the whole sheet as fast as possible (ns), and the saturation needs to be in the direction of the long dimension (200 mm). What is the best way to do this? I thought of a single sheet of copper spanning the sheet, but was concerned about how to achieve current uniformity in this sheet. Then I considered a number of single-turn inductors (made from, say, 2 cm copper strip) spanning the sheet across the short (100 mm) dimension, each fed by a fast current source of some sort. These single-turn inductors would in effect be transmission lines shorted at the far end and therefore probably represent the fastest way to establish the field in the ferrite. I would appreciate any comment on this problem, especially from Keit. Thanks--92.25.101.91 (talk) 13:59, 17 February 2012 (UTC) PS I can get away with saturation for 50 ns, then out of saturation for microseconds. Does this help the avalanche idea?--92.25.101.91 (talk) 14:03, 17 February 2012 (UTC)[reply]

What is the purpose of saturating the ferrite sample? Some sort of magnetic switch or magnetometer? Are you trying to characterise the ferrite sample in some way, such as estimating the energy lost, or testing for some sort of short term aging effect? If you want to test energy lost, there is a standard way of doing this, which could be adapted - resonance testing ("Q-meter") - see http://users.tpg.com.au/users/ldbutler/QMeter.htm. If you want to characterise aging, ask the manufacturer. Ratbone 121.221.218.244 (talk) 15:14, 17 February 2012 (UTC)[reply]

No, it's just in order to reduce its permeability to 1. BTW someone is trying to block my access, so I may have to reply on my talk page and not here. 92.28.71.92 (talk) 16:38, 17 February 2012 (UTC)
Yes, but why do you want to reduce permeability to 1? That is what saturation does. If you are trying to make a high-speed magnetic field switch, there are easier ways to go about it (even things like just switching the source on and off, or enclosing whatever is sensitive in a screen box made in two halves connected by electronic switches). Otherwise, what you want sounds like "core driving", the differences being soft instead of hard ferrite, and a flat ferrite rather than a toroid. Before they invented semiconductor DRAM, computer memories were made with thousands of tiny ferrite cores. Special high-speed, high-current but low-power-rating transistors and circuits were developed to flip the magnetisation of the cores very fast. These transistors were known as "core drivers". You should find these sorts of transistors and example circuits in old databooks from the 1960s. Re blocking access, it might help if you registered and/or gave yourself a name. Wickwack 60.230.216.226 (talk) 01:01, 18 February 2012 (UTC)

Ancient Astronomy, Calendars & Leap Years

Many ancient calendars such as the Sumerian calendar include leap-year days to prevent the calendar from drifting out of sync with the seasons. My question is: how did these cultures measure time so accurately that they knew they needed a leap year? --94.197.127.152 (talk) 17:51, 17 February 2012 (UTC)[reply]

Calendars are important for all agricultural societies, since determining the proper time to plant and harvest pretty much determines the success or failure of any given year's crop. Without artificial light and light pollution, it is much easier to regularly observe the sky and notice that the constellations change in a regular pattern. Just counting the days between identical astronomical configurations will, over time, give you a good indication of the length of a year, even without any external timekeeping. In other words, you count the days between "Orion is first visible over that mountain yonder", and you will notice that this will be 365, 365, 365, 366, 365, 365, 365, 366, ... days. You never need to measure that quarter day. --Stephan Schulz (talk) 18:12, 17 February 2012 (UTC)[reply]
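Stephan's counting argument can be simulated. This sketch uses the modern mean tropical year (~365.2422 days) purely to generate the pattern of whole-day year lengths an observer counting days would record; the exact positions of the 366-day years depend on where in the cycle you start:

```python
TROPICAL_YEAR = 365.2422  # modern mean tropical year, in days

def observed_year_lengths(n_years):
    """Whole-day counts between successive repeats of the same sky configuration."""
    # The k-th repeat of the configuration falls on the nearest whole day to k*year.
    marks = [round(k * TROPICAL_YEAR) for k in range(n_years + 1)]
    return [b - a for a, b in zip(marks, marks[1:])]

print(observed_year_lengths(8))  # [365, 365, 366, 365, 365, 365, 366, 365]
```

No fractional-day measurement is involved anywhere: the ~0.2422-day remainder shows up by itself as an extra 366th day roughly every fourth year.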
That must mean, however, that the absolute height of the shortest and longest shadows would change, would it not? Also, do you have a citation for this? --188.220.46.47 (talk) 19:45, 17 February 2012 (UTC)[reply]
And apart from observing constellations, the sun is also a good reference point. The summer solstice and winter solstice were important in some ancient cultures and they can be found simply by measuring when the shadow is shortest and longest respectively. -- Lindert (talk) 18:51, 17 February 2012 (UTC)[reply]
Indeed, 1/4 day is a rather large error considering that people would want to do things like plant their crop on the same day every year, and start the harvest on the same day. It wouldn't take but a decade or two to notice that you're planting on the wrong day. Once you realize that you lose 1/4 day each year, you just tack that extra day on the year every fourth year sometime during the fallow season (like winter) when nothing interesting is going on. Once that practice is established, that knowledge is so simple and so ingrained it is unlikely to ever be forgotten. So leap days have been with us essentially continuously for as long as we've had civilization, at least as far as we can tell. Even surprisingly accurate leap-day calculations (such as the practice of skipping the leap day in three out of every four century years) date from about 2000 years ago, when people realized that it wasn't exactly 1/4 of a day. Such accuracy was clearly in place by the time of the Alfonsine tables, which were based on the Ptolemaic year of 365 days, 5 hours, 49 minutes, 16 seconds. You don't need a calculator or computer to figure this stuff out, just lots of free time, accurate measurements of the sun and stars, and a modicum of intelligence. --Jayron32 19:17, 17 February 2012 (UTC)[reply]
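A quick sketch of how fast each leap rule drifts against the seasons; the year lengths below are standard textbook approximations, not figures taken from the thread:

```python
# Average calendar-year length under each leap rule, vs the tropical year.
TROPICAL = 365.2422                    # days in a mean tropical year (approx.)

julian = 365 + 1 / 4                   # one leap day every 4 years
gregorian = 365 + 1 / 4 - 3 / 400      # minus 3 leap days per 400 years

print(f"Julian drift:    {julian - TROPICAL:.4f} day/yr (~1 day per 128 yr)")
print(f"Gregorian drift: {gregorian - TROPICAL:.4f} day/yr (~1 day per 3300 yr)")
```

The ~1 day per 128 years under the plain every-fourth-year rule is the slow drift that eventually forced a correction to the century years.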
To get that level of accuracy you would also need observations over centuries, recorded precisely. As was said before, all you need to do is count the number of days between the longest shadow days or shortest shadow days, but just a few years won't give you the accuracy needed. Of course, there is a simpler process. Don't attempt to come up with a calendar ahead of time beyond the current year, and just start a new year on either longest shadow day or shortest shadow day. In this case the last month would occasionally get an extra day. StuRat (talk) 19:53, 17 February 2012 (UTC)[reply]
No. The more accurate your astronomical measurement, the less time you need to take it. The naïve method of waiting a full cycle to measure cycle-time can be improved by measuring accurately within one period, and extrapolating. The better your mathematical toolkit, the more capabilities this provides. For example, when Uranus was first discovered, Herschel hadn't seen it lap a full period - but he was equipped with calculus and Newtonian theory of gravitation, and so he was able to reckon its orbit pretty darned accurately without waiting around 84 years for the planet to lap the sun. Similarly, if you're observing Earth's orbit, you can naïvely sample on the solstice each year: or you can intelligently measure every night - or every second, if you have modern equipment - and compute a good curve-fit. In fact, much of ancient mathematics was dedicated to the science of curve-fitting astronomical measurements, accounting for error, drift, and so on. It amazes me that this work was performed before formal algebra and calculus. Many archaeologists suspect that rudimentary knowledge of pre-Newtonian calculus and algebra existed in ancient times, evidenced by such accurate reckoning; but the records are sparse. Nimur (talk) 18:53, 19 February 2012 (UTC)[reply]
No. The ancients didn't have "modern equipment", so measuring the exact angle of the Earth in its orbit wasn't possible. The best they could do is to measure the days on which the solstices occur, which means around a half-day margin of error per cycle. Extrapolating from a single cycle would be wildly inaccurate. With such a low accuracy, measuring over long periods would be needed to resolve it further. StuRat (talk) 21:43, 19 February 2012 (UTC)[reply]
Perhaps you've heard of an astrolabe? The archaeological record is full of such devices; simple inclinometers and diopters and sighting tubes extend through to ancient Sumerian times. Here's a nice review-article: Archaeoastronomical analysis of Assyrian and Babylonian monuments... (2003). It discusses methodologies and accuracies, as well as issues of "projecting" modern astronomical knowledge onto ancient artifacts, but it cites a lot of additional papers and references specific major archaeoastronomical artifacts. Nimur (talk) 04:49, 20 February 2012 (UTC)[reply]
That page took forever to load, and didn't seem to include any discussion of the accuracy of an astrolabe. If you know the accuracy, please just list it here. (Also note that our article lists the oldest astrolabe at 150 BC, far later than the Sumerian civilization in this Q.) StuRat (talk) 05:58, 20 February 2012 (UTC)[reply]
Unfortunately, our articles are not always the most authoritative sources available! For this reason, I linked to a few other sources. Anyway, we do have articles on the history of astronomy. You may be interested to learn that our "360 degree" circle comes from Babylonian astronomy - so, I would presume to say that they were able to measure at least to one degree of accuracy! (We have more at Babylonian mathematics and Babylonian astronomy - I make my assumptions in good company). The MUL.APIN has its own article; it is one of many similar tablets, whose accuracy varies considerably. Needless to say, literally thousands of texts have been written devoted to the study of Babylonian and Sumerian archaeoastronomy. I can dig through my bookshelf for some more discussion of the topic if you're very interested. Nimur (talk) 08:10, 20 February 2012 (UTC)[reply]
Well, I already put the accuracy from determining the solstices at about a half day, which is 0.5/365.25, or a bit under half a degree of accuracy. Do you have a source that gives a higher accuracy for this method or for another device/method the Sumerians had ? As for the 360 degree choice, I attribute that to it being a nice composite number (divisible by 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 18, 20, 24, 30, 36, 40, 45, 60, 72, 90, 120, and 180) in the approximate vicinity of the number of days in a year. StuRat (talk) 20:21, 20 February 2012 (UTC)[reply]
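For what it's worth, the half-day figure above converts to degrees directly (a trivial check, using the same approximate year length as the post):

```python
# A half-day uncertainty in timing the solstice, expressed as an angle of
# the sun's 360-degree annual circuit along the ecliptic.
days_error = 0.5
year_length = 365.25                  # approximate, as used in the post
angle_deg = days_error / year_length * 360
print(angle_deg)                      # a bit under half a degree
```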
To measure a year to the nearest day, the constellations are a terrible idea. The only things that change about them are their distance and direction from the sun (which can't be measured, as you can't see them both at the same time) and what time they rise, culminate or set on what day; unless you have accurate clocks, you won't know the time accurately enough to use that (being only 4 minutes off makes a calendar inaccuracy of one day). Or you could set your calendar by the day you first or last see them in dawn or dusk twilight, but I'd think you'd have a problem doing that to the minute or two needed to find the exact day. And even if you made a perfect sidereal (= stars) calendar, it would get out of sync with the seasons by a day every lifetime. The Sun is the reason for the seasons and the year; you should use the Sun.
Jayron32, the "every four years" inaccuracy was ignored for over a thousand years, and the three-out-of-four-centuries practice wasn't actually implemented until the 16th century; I think it may not even have been thought of until the 16th century (as a fix to the long-ignored inaccuracy). And yes, StuRat, to learn of that small inaccuracy without modern tools would take much longer records than to learn the 1/4 day inaccuracy. However, it is not true that figuring out the every-4-years rule would take large amounts of records. Merely counting days between the sunset point passing a landmark would reveal your 365, 365, 365, 366 pattern in only a few years. As you did it over 2, then 3, 4, 5 cycles, you'd get even more sure of the rule's usefulness. And if I wanted accuracy, I would lock everything to points near the equinoxes, when things are changing the fastest. Within plus or minus 24 hours of the solstice, things change very, very slowly, and shadows are fuzzy. That is why a spring or autumn sunset point is far better than the day of longest or shortest shadow. See also Persian calendar for another awesomely accurate old calendar. Sagittarian Milky Way (talk) 20:43, 17 February 2012 (UTC)[reply]
Well, 5 cycles of 4 years is 20 years right there. Also consider that bad weather might interfere with accurate readings some years, as might wars, etc. And, yes, shadows are fuzzy, but as long as you consistently use the same part of the shadow (let's say the first point where there is any shading at all), determining the dates on which they are shortest or longest should work. StuRat (talk) 20:50, 17 February 2012 (UTC)[reply]
Lindert is more on the money. If one sets up sight lines on the north/south meridian, then at the end of the winter solstice the sun will suddenly move over the 'local' noon meridian. This happens on the modern calendar on the 25th of December (a date that rings a bell). Counting 365 days from this point for 5 years, the crossing point will drift out by a day. So then skip a day, and the sun will then cross the north/south 'local' meridian and line up 'almost' exactly as it did 5 years earlier. However, after 30 years, such skips will have overdone that 'almost' error. So wait a further 3 years and skip a day. Then the sun lines up perfectly again. This will give you a 33-year cycle which stays in step far better than today's calendar, and it is something that can be witnessed by all. It suggests itself by observation alone, hence the introduction of leap years. Unfortunately, the Romans' tax collectors found it made their sums more difficult and messed around with it. John Dee tried to reintroduce this 33-year cycle back in Queen Elizabeth's day. However, that's another story. The article on Christmas really could do with input from an archaeoastronomer; at present it reads like a kindergarten fairytale. The narrative was formulated at a time when most people could not read or write and were unlearned, yet it enabled them to learn and remember the new 'calendar' that came out of Egypt, if you can make the connection with the biblical narrative. Anyway, I digress. --Aspro (talk) 20:58, 17 February 2012 (UTC)[reply]
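One way to check the claim that a 33-year cycle tracks the sun more closely than today's calendar; the 8-leap-days-per-33-years count is filled in here as an assumption (it is the pattern behind the Persian calendar, not a number stated explicitly above):

```python
# Average year length of a 33-year cycle containing 8 leap days,
# compared with the Gregorian rule and the mean tropical year.
TROPICAL = 365.2422                    # days, approximate
cycle_33 = (33 * 365 + 8) / 33         # 8 leap days per 33 years
gregorian = 365 + 1 / 4 - 3 / 400      # today's calendar, for comparison

print(abs(cycle_33 - TROPICAL))        # ~0.0002 day/yr of drift
print(abs(gregorian - TROPICAL))       # ~0.0003 day/yr of drift
```

By this crude measure the 33-year cycle does indeed come out slightly closer to the tropical year than the Gregorian average.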
The Sun crosses the noon meridian every single day, at local noon. If you meant crossing the 18-hour right ascension line, how are you going to know that? There are no lines painted on the sky. And actually, I have another idea: once you figure out that eclipses are the Earth's shadow and not a dragon eating the moon or something, then you know where the antisun is. With an astrolabe to measure degrees, maybe you'll eventually be able to get your accurate calendar. You cannot say the 5/33 cycle is more obvious than once every four years when the error is .2422 days, which is so close to .25 days. Why would you not correct all of that error, so that you can build up enough error to correct the rest later, when there is a very obvious thing (1/4) that shows up after only a few years? Not everyone will do it. For example, just last year at NYC, on December 21st the Sun at noon was 25 degrees, 48 minutes, and 48 seconds high; on December 22nd the Sun was 25 degrees, 48 minutes, and 48 seconds high; on December 23rd the Sun was 25 degrees, 49 minutes and 15 seconds high. That's not visible to the naked eye on the sky. How do you expect to notice a difference in a shadow with even 27 arcseconds difference? Or less than half of that in some years. And you expect to know where to measure. A shadow will have 1800 arcseconds of fuzziness, no matter how big it is. How do we suppose we split the shadow fuzziness to under 1% accuracy? And Sumeria is Iraq, right, which, rain-wise, is almost a desert, so they would have few observational gaps. Sagittarian Milky Way (talk) 22:56, 21 February 2012 (UTC)[reply]
Yes, the winter solstice was used by earlier civilisations, but the current system uses the mean time between vernal equinoxes which is currently approximately 365.2424 days (and gradually increasing), compared with the mean tropical year of 365.24219 days. Sir John Herschel's correction (for the year 4000) might never be necessary (even if we are still here and still use a calendar then). The accuracy claim at Iranian calendars seems to be in error because it bases its calculation on the mean tropical year, whereas that calendar is designed (like ours) to keep the vernal equinox on the same date (which their actual calendar does by observation). Dbfirs 10:16, 18 February 2012 (UTC)[reply]
I think the premise of your question is wrong. Leap days, as we know them now, were invented only in the 1st century BC. That is, it was near that time when people determined that neither 365 days nor 366 days is an accurate enough length for a year, so they had to vary the number of days in a year. The Sumerian calendar you reference is different: it's a lunar calendar (somewhat like the Hebrew calendar), which means the months of this calendar are 29 or 30 days long so that they align with the phases of the moon, that is, each month shall start at new moon. Now, a year is longer than 12 lunar months, but shorter than 13 months. While it might be difficult to determine that a 365 day year is too short, because the difference is only about a quarter of a day, it's much easier to see that 12 months are too short for a year, for here the difference is about a dozen days. Thus, if the Sumerians wanted to design a calendar that aligned months to the phases of the moon but also aligned years to the seasons, they clearly had to add a 13th month to some years, but not all years. (Contrast this to the Islamic calendar, which aligns the months to the moon but does not align the years to the seasons, thus all years can be 12 months long.) – b_jonas 14:18, 19 February 2012 (UTC)[reply]

1) What causes cloudiness on X-rays in the lungs of people with TB ? Is it iron in the cells fighting the disease ?

2) For that matter, why do some elements (like calcium, presumably) stop X-rays better than others ? Is it simply the atomic mass that matters ? StuRat (talk) 20:17, 17 February 2012 (UTC)[reply]

I expect that the cloudiness is due to matter denser than air, which is what you would expect to have in lungs.
These elements absorb better because the electrons are at a much higher density, especially around the nucleus. You can imagine the X-ray photon as a sledgehammer coming and striking various things, say a fly or a rat. Which will absorb the most energy? It will be the one with more force keeping it where it is. Graeme Bartlett (talk) 04:53, 19 February 2012 (UTC)[reply]
Not being a physicist (as I tend to make obvious when replying to questions here), I'm a little confused by the suggestion that your hypothetical X-ray photon is affected by the 'force keeping the electrons in place' being stronger in more dense materials. Isn't it just the case that more density means more electrons, and therefore more chance of our photon colliding with one on its way from the source to the detector? (assuming that it is only photon-electron collisions that are significant - I've no idea if this is correct) Admittedly, this is based on a naive mechanistic model of such things as photons, electrons, and physicists, rather than the probabilistic 'reality' we've constructed in an attempt to explain things better. But if I'm wrong, and you can explain this in terms I can understand ('If'), I'd like to see such an explanation. AndyTheGrump (talk) 05:03, 19 February 2012 (UTC)[reply]
As for "cloudiness is due to matter denser than air", there must be more to it, because the entire human body should always appear cloudy on X-rays, then. Clearly we can see that there is tissue denser than air which is relatively transparent to X-rays. So, my question remains, what about TB is it that blocks X-rays ? StuRat (talk) 20:07, 20 February 2012 (UTC)[reply]
The absorption does not only depend on the density of electrons. For an equivalent mass, platinum absorbs 23 times as much as aluminium. X-rays are "absorbed" in two ways: either by scattering off electrons in another direction, or by photoelectric ionization of the atom. Graeme Bartlett (talk) 10:34, 21 February 2012 (UTC)[reply]
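As a very rough illustration of why the heavy element wins by far more than its electron count alone: photoelectric mass absorption at diagnostic X-ray energies scales roughly as Z cubed over A. That scaling is a textbook rule of thumb assumed here, not something established in this thread, and the real ratio depends strongly on photon energy:

```python
# Crude Z**3 / A rule of thumb for relative photoelectric absorption
# per unit mass (arbitrary units; energy dependence ignored).
def rel_mass_absorption(Z, A):
    return Z**3 / A

pt = rel_mass_absorption(78, 195)   # platinum
al = rel_mass_absorption(13, 27)    # aluminium
print(pt / al)                      # order of 30: same ballpark as the quoted 23x
```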

Chernobyl on the Moon

If we had a major nuclear fission reactor on the Moon, and it malfunctioned in the worst possible way, what problems, if any, would it cause for colonists on the Moon (besides the obvious loss of power) and for people on Earth ? (I'm thinking that the lack of an atmosphere and oceans would mean the radiation would not be able to travel far from the reactor.) StuRat (talk) 20:26, 17 February 2012 (UTC)[reply]

I don't see why lack of atmosphere would keep radioactive gases from moving around. However, any colonists on the Moon are already highly protected from radiation by whatever structure it is that they live in, so I suspect that a little more would make no difference. Without having done any calculations, I think it's fair to say just by the magnitudes involved that the impact on the Earth would be entirely negligible.
Bigger question is, how do you get a nuclear power plant on the Moon in the first place? Uranium is heavy, and reaching escape velocity is extremely expensive per kilogram. I assume you would have to find the fissile material on the Moon (or perhaps in asteroids); does anyone know whether that's available in any noticeable concentration? --Trovatore (talk) 20:40, 17 February 2012 (UTC)[reply]
A few ppm concentration ought to be enough, since uranium is dense in energy. However I suspect uranium will be brought from Earth for a long time before anybody will take on the start-up cost of a mining and enrichment industry on Moon. --145.94.77.43 (talk) 21:40, 17 February 2012 (UTC)[reply]
Hmm, well here's a calculation someone should be able to do (actually I could probably do it if I had time and sufficient interest): How many kilowatt-hours can you get from a simple power plant from the fuel rods equal in mass to one colonist plus that colonist's share of other payload needed to support him, and how long would that amount of electrical energy support said colonist's average share of the requirements of the colony? Obviously there are a lot of unknowns, but an order-of-magnitude estimate should be possible. If it's less than a year or so, it seems unlikely to be practical, unless there aren't any decent alternatives. --Trovatore (talk) 21:51, 17 February 2012 (UTC)[reply]
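A hedged back-of-envelope attempt at the calculation Trovatore proposes. Every figure below is an assumed round number (typical light-water-reactor fuel burnup, a guessed fuel-mass allowance per colonist, a guessed per-colonist power draw), not data from the thread:

```python
# Order-of-magnitude estimate: how long fuel rods massing as much as one
# colonist-plus-payload-share could power that colonist's share of a colony.
BURNUP_GWD_PER_TONNE = 45     # typical LWR fuel burnup, thermal (assumed)
EFFICIENCY = 0.33             # thermal-to-electric conversion (assumed)
FUEL_MASS_KG = 500            # colonist plus supporting payload share (assumed)
POWER_PER_COLONIST_KW = 10    # continuous electrical draw per colonist (assumed)

kwh_th_per_kg = BURNUP_GWD_PER_TONNE * 1e6 * 24 / 1000   # GW*d/tonne -> kWh(th)/kg
kwh_e = FUEL_MASS_KG * kwh_th_per_kg * EFFICIENCY
colonist_years = kwh_e / (POWER_PER_COLONIST_KW * 24 * 365)
print(f"~{kwh_e:.1e} kWh(e): roughly {colonist_years:.0f} colonist-years")
```

Under these assumptions the answer comes out in the thousands of colonist-years, i.e. comfortably past the "less than a year" threshold suggested above, though the guessed inputs could easily be off by an order of magnitude.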
(ec) StuRat, how do you expect a referenced answer for this? It's entirely speculative - a hypothetical scenario about the implications of a hypothetical disaster on hypothetical technology. For what it's worth, consider reading about radioisotope thermoelectric generators, which are actually used to provide energy to spacecraft; but have not yet been used on manned missions. Such devices do not have a reactor chamber, and could not have a steam explosion or a runaway fission event, like the disaster at Chernobyl. Nimur (talk) 20:43, 17 February 2012 (UTC)[reply]
A fission reactor is hardly hypothetical technology, we've had them for half a century now. Yes, they would need to be adapted to the lunar environment, such as not releasing steam to cool them, but the basic concept would still work. Calculating how far various gases would travel on the Moon before being lost to space or deposited on the surface also seems doable with some math, no speculation required. StuRat (talk) 21:10, 17 February 2012 (UTC)[reply]
The impact for moon colonists would no doubt depend on what kind of moon colony scheme you're proposing. Is the reactor part of a large, sealed complex? That would be a problem. Is it miles away with just power lines between it? Probably less of a problem than on Earth, then, since folks aren't going outside without some kind of major shielding anyway, aren't breathing in particulate matter, aren't growing crops, and don't have water flowing around. Air and water transport make for a lot of the dispersal issues of radioactive particles. --Mr.98 (talk) 21:07, 17 February 2012 (UTC)[reply]
Placing the reactor in a heavily populated area would seem unwise, yes (but then again, they place them in heavily populated areas here on Earth, which also seems unwise). StuRat (talk) 21:12, 17 February 2012 (UTC)[reply]
From memory, the mean path distance for neutrons on Earth is about 4 miles. So if the containment vessel got breached, then neutron flux would be higher at any given distance due to the lack of moisture-laden air. The only effect it would have on Earthlings would probably be limited to them being bombarded by regular bulletins from Fox News about the cock-up on the Chinese moon-base (well, the US is unlikely now to have a lunar reactor this side of the next ice age, aren't they). Note: Apollo took Radioisotope thermoelectric generators to de Moon!--Aspro (talk) 22:34, 17 February 2012 (UTC)[reply]
A few of the Apollo Lunar Surface Experiments Package programs used RTGs, but they were very small. Some nice photos here, from NASA - Apollo Lunar Surface Experiments Package. More details and links can always be found about specific instruments at the ALSEP main pages, and the main Lunar Surface Journal website. From what I understand, the earliest of the RTGs were strictly for thermal regulation; later (Apollo 12 and beyond) missions used them for thermoelectric energy. Nimur (talk) 01:45, 18 February 2012 (UTC)[reply]
The escape velocity of the moon is 2.4 km/s. The 'daytime' lunar temperature is about 370 K. We can rearrange the Maxwell speed distribution equation to find the mass of particles whose average speed at this temperature would exceed the escape velocity. This is about 2.3 × 10⁻²⁷ kg, or about 1.4 atomic mass units. I don't know much about the particle sizes that emerge from nuclear fallout, but the lightest gases will 'boil off' the moon during the lunar day, and heavier gases will still leak away over time through the high-speed tail of the distribution. They'd probably end up in earth orbit. LukeSurl t c 00:29, 18 February 2012 (UTC)[reply]
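For anyone wanting to reproduce the figure: solving the Maxwell–Boltzmann mean-speed formula for mass does give roughly 2.3 × 10⁻²⁷ kg, which converts to about 1.4 atomic mass units (hydrogen-scale), so only the very lightest gases have a mean thermal speed above lunar escape velocity:

```python
import math

k_B = 1.380649e-23     # Boltzmann constant, J/K
T = 370.0              # lunar daytime temperature, K (figure from the post)
v_esc = 2400.0         # lunar escape velocity, m/s (figure from the post)
AMU = 1.66054e-27      # atomic mass unit, kg

# Mean speed of a Maxwell-Boltzmann gas is sqrt(8*k_B*T / (pi*m));
# solve for the mass whose mean speed equals escape velocity.
m = 8 * k_B * T / (math.pi * v_esc**2)
print(m, m / AMU)      # kg, and the same mass in atomic mass units
```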
Yeah, but at what density? If the earth hits a single nucleus every decade or so, then I wouldn't consider it a problem... --Jayron32 00:21, 18 February 2012 (UTC)[reply]
Hmm, actually, it seems that at the lunar surface the escape velocity of the earth/moon system is only marginally more than the escape velocity of the moon itself. I'd guess then that the vast majority of radioactive gas would escape into interplanetary space. Unless you got really unlucky and had an explosion directed at the earth I'm thinking the radioactive fallout from this lunar disaster would probably not hurt earthlings or lunar colonists outside the immediate vicinity of the site. LukeSurl t c 00:40, 18 February 2012 (UTC)[reply]
I'm not sure what it would mean for the explosion to be "directed" at the Earth. It's not going to be a coherent beam of radiation, it's going to be a cloud of radioactive dust and smoke spreading out. The Earth is just under 2 degrees in angular diameter when viewed from the moon. That makes it about 0.01% of the Moon's sky. Assuming the dust and smoke spread out equally in all directions, and ignoring the effects of gravity, that's approximately the proportion of the dust and smoke that would hit the Earth. It's a tiny proportion and would be spread out over the entire Earth. I can't see the radiation being significantly higher than the natural background. --Tango (talk) 20:06, 18 February 2012 (UTC)[reply]
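Tango's ~0.01% figure can be checked with a small solid-angle calculation; the 1.9-degree angular diameter used below is an assumed round value for Earth seen from the Moon, matching the "just under 2 degrees" above:

```python
import math

# Fraction of all directions (full sphere) subtended by Earth's disc,
# as seen from the lunar surface.
ang_radius = math.radians(1.9 / 2)                       # angular radius, rad
solid_angle = 2 * math.pi * (1 - math.cos(ang_radius))   # steradians
fraction = solid_angle / (4 * math.pi)                   # share of full sphere
print(f"{fraction:.2%} of all directions")
```

So an isotropically expanding debris cloud would send only of order one part in ten thousand of its material toward Earth.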
Thinking a little more, a lot of the fallout from Chernobyl was spread by fires, which wouldn't be an issue on the airless moon. LukeSurl t c 00:46, 18 February 2012 (UTC)[reply]
Well, unless entirely automated, the reactor would need some oxygen for the human operators. If it was free air throughout the reactor, then fires would be possible, until it blew off the containment dome. If they wore breathing masks with oxygen tanks, hopefully they would take those with them when they evacuate. StuRat (talk) 00:56, 18 February 2012 (UTC)[reply]
I remember somebody published a claim (I think it was in Nature?) that an ordinary moonbase would unacceptably contaminate the entire Moon with its air, because it is such a hard vacuum and there are things that can be done in it which tiny traces of oxygen would make more difficult. And of course on our own Earth the atmospheric testing of nuclear bombs and explosion of reactors has led to health issues over huge areas and puts an end to the data sequences that can be obtained from core samples. I think it is safe to say that "Greens" would object to a mildly radioactive lunar surface, as this would make various measurements based on natural radioactivity much harder. Wnt (talk) 16:27, 20 February 2012 (UTC)[reply]
True, but they would also object to any people living on the Moon. StuRat (talk) 20:02, 20 February 2012 (UTC)[reply]

Thanks, everyone. It looks like my initial thought was correct (that a conventional fission reactor would be a relatively safe way to power a major Moon base). Of course, the object would be to avoid a Chernobyl, but it's good to see that even if it occurred, it wouldn't be so bad there. StuRat (talk) 20:02, 20 February 2012 (UTC)[reply]


February 18

Heterocycle-pi bonds

Cation–pi interaction states that "Heterocycles are often activated towards cation-π binding when the lone pair on the heteroatom is in incorporated into the aromatic system (e.g. indole, pyrrole). Conversely, when the lone pair does not contribute to aromaticity (e.g. pyridine), the electronegativity of the heteroatom wins out and weakens the cation-π binding ability." What if there is no lone pair, as in silabenzene or pyridinium? Whoop whoop pull up Bitching Betty | Averted crashes 05:20, 18 February 2012 (UTC)[reply]

Obviously if there is no lone pair, it does not contribute to anything. But the lone pair isn't what causes the atom itself to be electronegative. We have an extensive article about electronegativity, where you can learn about what atoms (and other situations) can affect it. DMacks (talk) 18:45, 18 February 2012 (UTC)[reply]

Swallowing (human) blood causes nausea. Why?

According to our epistaxis article "Swallowing excess blood can irritate the stomach and cause vomiting." Why is this? What about blood does this? Dismas|(talk) 05:43, 18 February 2012 (UTC)[reply]

I've placed a {{citation needed}} on that sentence. →Στc. 07:14, 18 February 2012 (UTC)[reply]
While a citation would be good, I have heard physicians say this as well, though I don't know why it happens. -- ToE 07:45, 18 February 2012 (UTC)[reply]
Our articles hematemesis and coffee ground vomiting discuss the vomiting of undigested and partially digested blood, but do not explain the direct cause of the vomiting. -- ToE 10:41, 18 February 2012 (UTC)[reply]
Some articles that might be relevant include Blood as food and Hematophagy. If raw blood irritates the stomach, nobody seems to have told the Maasai, who include raw cow blood in their diet. Matt Deres (talk) 15:50, 19 February 2012 (UTC)[reply]

I keep seeing the same thing: "vaccination only works if you do it to everybody."[1] Is this discussed in this article (Vaccination) in detail somewhere? It is kind of confusing to me: it implies that vaccination will not work if the vaccinated person is exposed to what they are vaccinated against... I mean, to me it implies that if everyone does not get vaccinated, there is no point to get vaccinated because it is useless. (And one can be fairly sure not everyone gets vaccinated.) Is the BBC misguided? Am I missing something? Int21h (talk) 05:40, 18 February 2012 (UTC)[reply]

herd immunity -- Finlay McWalterTalk 12:35, 18 February 2012 (UTC)[reply]
Below that "herd immunity" level, there is still a clear benefit for the individual being vaccinated for a disease covered by the models, as they are then less-likely to get the disease. Noting further that the mathematical models for "herd immunity" do not allow for micro-organisms which have non-human hosts as well. Thus mosquito-borne diseases would not have any "herd immunity" level for humans, even if vaccinations were nearly universal. Collect (talk) 13:04, 18 February 2012 (UTC)[reply]
Mosquitoes transmit diseases, they don't host them. A mosquito eats the blood of an infected person and then still has some infected blood on it when it eats the blood of an uninfected person, infecting them. You still catch it from a human, just via the mosquito. As long as there is a high vaccination level over a large enough area that you won't have significant numbers of mosquitoes flying from outside the area to inside it, the herd immunity will still apply. There are very few viruses and bacteria that can reproduce in both human and non-human hosts. --Tango (talk) 13:33, 18 February 2012 (UTC)[reply]
One of the main aims of vaccination is to minimise, and ideally eradicate, a disease from the entire population (as has been achieved with Smallpox). Governments and other organisations who sponsor or pay for these programs are far more interested in whole-population statistics than whether or not any given individual gets the disease, and base the success or failure of the vaccination programs on the population statistics. In this sense it could be said that 'vaccination only works if you do it to everybody', or at least near enough to everybody as doesn't matter, because those not vaccinated will benefit from the herd immunity mentioned above. This is because if only a few people are vaccinated then the population statistics won't be greatly affected and it will look like it hasn't worked. On an individual level, however, vaccination certainly works if it's not done to everybody. For example, that's why you need to 'get your shots' before travelling to areas where certain diseases are widespread; you won't change the population statistics for the disease in the population where you're travelling, but you can minimise your own chances of getting it, as well as of bringing it back to your home turf and potentially spreading it within your own population. FWIW anti-vaccination campaigners love playing with all these sorts of misunderstandings (and presumably also love benefiting from the herd immunity effect mentioned, despite their vocal protestations to the contrary). --jjron (talk) 14:18, 18 February 2012 (UTC)[reply]
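The population-level idea above is usually quantified with the standard SIR-model herd-immunity threshold, 1 − 1/R0 (a textbook result, not something stated in the thread; the R0 values below are rough literature figures, assumed for illustration):

```python
# Herd-immunity threshold: the immune fraction above which each case
# infects fewer than one new person, so outbreaks die out.
def herd_immunity_threshold(r0):
    """R0 = basic reproduction number of the disease."""
    return 1 - 1 / r0

for disease, r0 in [("measles", 15), ("smallpox", 6), ("seasonal flu", 1.3)]:
    print(f"{disease}: ~{herd_immunity_threshold(r0):.0%} immune needed")
```

This is why highly contagious diseases like measles need near-universal coverage, while smallpox could be eradicated with somewhat lower coverage plus ring vaccination.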
For your own health, the ideal situation is for everybody else to be vaccinated, but not you. This would protect you from contracting contagious diseases, and also from any side effects of the vaccine. An exception exists if the disease can be transmitted from other animals (unless they can all be vaccinated, too). Of course, in the real world, not everybody is vaccinated. So, then, it comes down to a risk/benefit analysis of every vaccine. For flu vaccines, for example, certain high-risk groups are encouraged to be vaccinated, but not all healthy people. StuRat (talk) 17:13, 18 February 2012 (UTC)[reply]
Actually in the U.S. we encourage everyone to get flu vaccine (high-risk groups first, then everyone else). It just isn't a very effective vaccine. Rmhermen (talk) 23:33, 18 February 2012 (UTC)[reply]
With attenuated live virus vaccines (the Sabin polio vaccine being the classic), it is not necessarily true that an individual has the best option as the only person not vaccinated in a community. These vaccines are made by forcing mutation away from the active human form by injecting into "almost-compatible" animals, e.g. monkeys, usually in several steps. Once in a while, a vaccinated individual will, for whatever reason, have a compromised immune system. It is extremely, extremely unlikely that he will develop polio, but he provides a vehicle by which the attenuated virus can re-mutate back to the human form. It is conjectured that there is as much, if not more, polio virus in the community as there ever was, and it's generated by live virus vaccine use. Sometimes the herd immunity principle does not work. Australia currently has a minor problem with whooping cough, as about 2% of mothers fail to get their babies vaccinated. The rate at which unvaccinated babies get very sick and sometimes die is significant. Ratbone120.145.28.184 (talk) 02:22, 19 February 2012 (UTC)[reply]
It is not unlikely that an individual will develop polio as a direct consequence of the vaccine. [2] In fact, most of the recorded cases now are due to administration of the vaccine. [3]--nids(♂) 01:07, 20 February 2012 (UTC)[reply]
The article Viscious cited actually states that the vaccine causes around 1 case per 2.5 million. I reckon that agrees with my "extremely extremely unlikely" statement. Vespine's comment below is very apt. Without the vaccine, the incidence of polio in that 2.5 million persons could have been much greater. Ratbone124.182.4.153 (talk) 02:31, 20 February 2012 (UTC)[reply]
I second the above; the comment made by User:Viscious81|nids is exactly the alarmist "anti-vax" rubbish that scares people into not vaccinating their kids. During the 20 years between 1980 and 1999 there were 154 cases of polio caused by the Oral Polio Vaccine (which was less safe than the shots used today). That's about 8 people a year, out of a population of 220 million. Compared to TENS OF THOUSANDS of cases every year before the vaccine. If that falls into the definition of "not unlikely", then it's not unlikely I'm Mother Teresa. Vespine (talk) 22:08, 20 February 2012 (UTC)[reply]
Yes, Australia does have a problem with whooping cough (I knew several people who got it last year, mainly because their parents didn't get them vaccinated due to the anti-vaccination campaigners, and became very sick thereby (the exception was an older gentleman whose immunity had worn off as it had been so many decades since he was vaccinated)). But that wasn't a failure of the herd immunity principle, it was an indication that since the disease was never eradicated, and since enough of the herd have now failed to be vaccinated, that the disease has again become a problem within the population. --jjron (talk) 04:03, 19 February 2012 (UTC)[reply]
OR here. I was vaccinated against pertussis (whooping cough), but still got the disease. The doctor told my mother that the vaccine didn't stop you getting the disease, but it made the disease less serious. This obviously has implications for the herd immunity principle. OTOH, as this was nearly 50 years ago, the doctor could have been wrong and the vaccine may not have worked properly... --TammyMoet (talk) 11:41, 19 February 2012 (UTC)[reply]
Vaccines are never 100% effective; as stated in the Pertussis_vaccine article, the current vaccine is 80%-85% effective, therefore it is still quite possible to catch the disease, just far less likely. One slightly counterintuitive situation occurs in outbreaks of a disease like pertussis where (for example) 20 people might catch pertussis and 12 of them WERE vaccinated. "Anti vaccers" frequently misrepresent this information to say MORE people who were vaccinated caught the disease, so vaccines actually INCREASED the risk of catching the disease; in reality, if 90% of the population was actually vaccinated, you would expect 18 vaccinated people to catch the disease if their chance of catching it was the same as the unvaccinated population's. Vespine (talk) 01:19, 20 February 2012 (UTC)[reply]
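The outbreak arithmetic in the comment above can be checked with a back-of-the-envelope sketch. The 90% coverage figure comes from the comment itself; the 10% attack rate for unvaccinated people and the equal-exposure assumption are purely illustrative, not data from any real outbreak.

```python
# Back-of-the-envelope check of the "more vaccinated people caught it" fallacy.
# Assumptions: 90% coverage (from the comment above), 85% vaccine
# effectiveness, and a purely illustrative 10% attack rate with equal exposure.
pop = 10_000
coverage = 0.90
effectiveness = 0.85
attack_rate = 0.10                      # hypothetical risk for an unvaccinated person

unvaccinated = pop * (1 - coverage)     # 1,000 people
vaccinated = pop * coverage             # 9,000 people

cases_unvaccinated = unvaccinated * attack_rate
cases_vaccinated = vaccinated * attack_rate * (1 - effectiveness)

# More *cases* occur among the vaccinated simply because there are nine
# times as many of them, yet each vaccinated person's individual risk
# (1.5%) is far below the unvaccinated risk (10%).
print(f"unvaccinated cases: {cases_unvaccinated:.0f}")
print(f"vaccinated cases:   {cases_vaccinated:.0f}")
```

With these numbers there are about 135 vaccinated cases against 100 unvaccinated ones, so a raw case count misleads while the per-person risk tells the real story.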
Herd immunity works in principle, but accomplishing it in practice is difficult. For example, suppose you haven't been vaccinated for the flu, and 50% of the population is vaccinated. The first time you would have encountered the flu, you get lucky - that person was vaccinated, so he didn't contaminate the surface you touched after all. But the second time you encounter the flu, that person wasn't vaccinated, so you catch it anyway. Of course, if you get enough of the population vaccinated, not only might you get lucky every single time - other people might get lucky every single time and not be transmitting it either. It's a nonlinear effect, like putting out a fire. To take a practical example, consider that New Jersey and Connecticut both mandated flu vaccinations for preschoolers, who are particularly likely to transmit the disease in day care centers. Nonetheless, if they have any special herd protection now, I don't see it in the "National Flu Report" [4]. Wnt (talk) 16:10, 20 February 2012 (UTC)[reply]
I'm actually struggling to work out what you are trying to say with the above. Flu doesn't just "float around" a population of people who don't have it, waiting for someone to "encounter it". It's typically transmitted directly person to person, or in some cases can survive at most a couple of days on a surface where someone can pick it up. I agree that "herd immunity" is not a linear effect, it's like a "tipping point", it won't be precisely the same point in each case, but like with a lot of "statistics", there is a point over which it just seems to work; where the chance of someone who is not immunized meeting someone who has the flu becomes exceedingly slim, therefore they won't become infected, therefore they can't infect other people, etc, sort of a snow ball effect. Vespine (talk) 21:49, 20 February 2012 (UTC)[reply]
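The "tipping point" described above has a standard back-of-the-envelope form in epidemiology: if each case infects R0 others in a fully susceptible population, transmission dies out once more than 1 − 1/R0 of contacts are immune. A minimal sketch, with illustrative round R0 values rather than measured ones:

```python
# Herd-immunity threshold: the fraction of the population that must be
# immune before each case infects fewer than one new person on average.
# The R0 values below are illustrative round numbers, not measurements.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of contacts that must be immune to halt spread."""
    return 1.0 - 1.0 / r0

for label, r0 in [("flu-like, R0 = 2", 2.0), ("pertussis-like, R0 = 15", 15.0)]:
    print(f"{label}: roughly {herd_immunity_threshold(r0):.0%} must be immune")
```

This also shows why highly contagious diseases like pertussis need very high coverage: at R0 = 15 the threshold is about 93%, so even a few percent of unvaccinated mothers, as mentioned earlier in the thread, can matter.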

Adiabatic cooling and magma

According to Adiabatic process, magma will undergo adiabatic cooling right before eruption. Does this mean that most of the magma will then become a solid when erupted? I'm also a bit confused: adiabatic means that there should be no net heat transfer to or from the fluid. It seems to me that magma erupting from a volcano is releasing quite a bit of heat to the surrounding environment. How does this qualify as an adiabatic process?

Additionally, if it is true that a liquid under pressure that is suddenly released will become cooler, is it possible to keep water at a high enough pressure so that when it's released, it will freeze and become ice? ScienceApe (talk) 15:32, 18 February 2012 (UTC)[reply]

The effect is particularly marked for magmas that rise quickly from great depths, such as those associated with kimberlites, rather than a magma chamber not far below the surface, where the effect would be very small. In the case of kimberlites the magma has undergone a dramatic reduction in pressure during the ascent from about 400 km depth over a period of hours to days and adiabatic cooling is a significant effect [5], although loss of volatile components and interaction with the wall rock are also important. Mikenorton (talk) 16:06, 18 February 2012 (UTC)[reply]
Is the effect more similar to why an air duster becomes cool when used? From what I understand the air duster becoming cooler is not an adiabatic process, but an effect of the liquid inside the can becoming a gas due to loss of pressure and absorbing heat from the environment as an endothermic process. ScienceApe (talk) 16:42, 18 February 2012 (UTC)[reply]
The paper that I linked to above, treats the melt and the exsolving gases of the ascending magma separately. For the melt component, about 150° of pure adiabatic cooling is predicted for magmas originating at 400 km depth. There is no phase change involved, the gases come out of solution in the melt phase as the pressure reduces. Mikenorton (talk) 17:40, 18 February 2012 (UTC)[reply]
So going back to my original question (or one of them), is it then possible to shoot water out and have it be adiabatically cooled to freezing as long as it's kept in a high enough pressure tank? ScienceApe (talk) 19:45, 18 February 2012 (UTC)[reply]
That seems plausible, although I haven't yet found anything that describes that precise scenario. High pressure shift freezing (basically what you're describing) is a technique used in food processing, see here. Mikenorton (talk) 05:42, 19 February 2012 (UTC)[reply]

Road Dust Reduction

Hi. My wife and I live on a dirt road, and during the somewhat dry season here (Western Washington, USA) the dust from vehicles traveling on the road is horrendous. My wife claims that stacking cut, dead, or fallen branches between the trees that separate our house from the road cuts down on the dust particles. I claim that the particles would actually increase due to the deterioration of the flora stacked between the trees. Can someone please settle this debate for us? Thank you — Preceding unsigned comment added by Wigginonwhidbey (talkcontribs) 16:40, 18 February 2012 (UTC)[reply]

Debatable, but I'll side with your wife on this. You are correct, in that the branches/debris will generate particles (dust). But I think they will be much larger, and won't travel very far. Most importantly, this shedding will be fairly slow and continuous; a brush/branch "fence" should last several years, no? The problem with the road dust is that it gets kicked up in large pulses, which can then drift to your house. As the dust and air from the road blows through the fence, it will act like a filter, because of all the surfaces that dust can get caught on. The fence will also cut down on noise a little, especially if it is both high and deep. You could also plant in some vines (preferably non-aggressive natives) to hold the fence together, make it look nicer, and clean more air. SemanticMantis (talk) 16:54, 18 February 2012 (UTC)[reply]
Spraying some type of oil on the dirt road in front of your house would help a lot more. Water also works, but quickly evaporates, so you'd need a sprinkler system. People might not appreciate splashing mud on their cars, though. StuRat (talk) 17:01, 18 February 2012 (UTC)[reply]
Yes, oiling the road would be effective, if you didn't care about 1) polluting the water table, including wells that supply nearby houses 2)The cost and labor involved with repeated applications 3) the state and local laws, which may ban such things for any number of reasons. SemanticMantis (talk) 17:44, 18 February 2012 (UTC)[reply]
Oiling the road is common in some jurisdictions. I remember a few years ago riding on roads with signs noting that they had been treated this way (maybe in Alaska?). There are apparently several other options as well. Googling around, I found some places that do this have banned petroleum-based oil and instead found plant-based oil as a safe alternative[6] and other chemicals that might work instead [7]. Can't find any pages on WP about it (road oil would have been my guess). DMacks (talk) 18:39, 18 February 2012 (UTC)[reply]
Modern dust reducing agents for use on roads are often based on lignin which is perfectly safe for the environment. Whether this is an economic or practical option for the OP, however, I cannot say. DI (talk) 23:14, 18 February 2012 (UTC)[reply]
Perhaps they could get a vat of used cooking oil from the local fast food place for free. That stuff is only unhealthy when eaten, and smells nice, although it might make them hungry. StuRat (talk) 23:17, 18 February 2012 (UTC)[reply]
Spreading used cooking oil on the street would count as an illegal hunting method where I live. Deer licking the road salt is one problem but now cooking oil too... Rmhermen (talk) 23:28, 18 February 2012 (UTC)[reply]
Note that placing dried branches like that could pose a fire hazard. I can imagine a motorist flicking a lit cigarette into them, for example, and the fire then spreading to the trees. Another suggestion is that you fill in the area between the trees with live bushes. StuRat (talk) 20:53, 18 February 2012 (UTC)[reply]
Yeah, being from where I come from that was my immediate fear too. I'm not sure how common wildfire is in rural Washington, but I know it's common in California and other not too distant areas (Montana is another that seems to spring to mind). In fact in Australia I suspect the local council or other regional authorities would likely come and tell you to clear away such a potential fire hazard. I would tend to side with your wife on the (marginally) cutting down dust thing, but would avoid it regardless due to fire risk. As Stu suggested, why not just plant some (fire resistant) shrubbery instead? Safer, more attractive, and less labour intensive long-term. --jjron (talk) 03:51, 19 February 2012 (UTC)[reply]

Dissolving soap scum

I have a shower mat in my bath. It's made of what I presume to be translucent silicone rubber, and sticks to the bath surface using more than a hundred sucker pads. After only a few showers, a buildup of dark soap scum becomes evident around many of the sucker pads (because the mat is translucent, these are quite visible). These are unsightly, so I'd like to be able to remove them easily. Holding the undersurface of the mat up to the shower is only marginally effective, and manually cleaning off the scum, while straightforward, is unduly time-consuming, as I have to manually clean under dozens of little sucker pads. I did a little test last week, and concluded that it takes longer to clean off the scum than the showers which incurred it took. I've tried using vinegar and a shop-bought anti-calcium agent - while these work fine at shifting genuine calcium buildups (my water is somewhat, but not especially, hard), they don't make much difference. What I need is an inexpensive agent that will remove soap scum without damaging the mat (or the bath, or me). Some Google searching finds people suggesting ammonia or caustic soda, neither of which seems like a safe or proportionate solution to a small ongoing problem. Is there something safe and cheap that will gently release the soap accumulation, without my resorting either to hard labour or chemical warfare? 87.114.249.141 (talk) 19:11, 18 February 2012 (UTC)[reply]

Soap scum isn't usually dark, that sounds more like mold. If so, use bleach to kill it. It could also be hard water residue. If so, use something like CLR. Most mats are opaque, with the "out of sight, out of mind" philosophy on the mold and other residue. Also note that the whole concept of a water-proof mat suctioned to the tub is a poor one, as that means dirty bath water will stay under there for a very long time, giving time for many nasties to grow. If the goal is to make you less likely to slip in the tub, perhaps installing rails or another type of anti-slip surface (epoxy embedded with sand, perhaps ?) would be a good alternative. StuRat (talk) 20:30, 18 February 2012 (UTC)[reply]
Agree with StuRat and would further suggest removing the mat after every shower and hanging it up to dry. Also if you are literally using soap, which often leaves a residue, switch to shower gel.--Shantavira|feed me 07:12, 19 February 2012 (UTC)[reply]
This might seem counterintuitive, but if the substance truly is soap, try scrubbing it with oil or some greasy substance. Then you just have to wash off the oil with something else, which is easier to do. Plasmic Physics (talk) 07:48, 19 February 2012 (UTC)[reply]
This reminds me of dogs, who, to get that horrid floral scent off themselves after a bath, like to roll in manure. StuRat (talk) 21:02, 20 February 2012 (UTC) [reply]
When the OP says 'soap', does he mean soap or one of these modern 'feel good' shower gels? These are not soaps but detergents. They get that feel-good soap factor from the addition of glycols, glycerins and a whole lot of other stuff. These things leave residues which won't wash off. Read the label on whatever you use and if it isn't soap, throw it away. The old traditional stuff is cheaper (and feels the same) but has a lower profit margin for the supermarkets, so they don't like to promote it. Real soap is easier to find in smaller shops. It rinses off quickly with the aid of a soft cloth and does not promote mould. In addition, you will not have to buy moisturisers to replace the natural oils that the detergents take out of your skin with greater vengeance; nor endless and expensive spray bottles of some Wizzo-type bathroom cleaner – which never works as advertised.--Aspro (talk) 22:50, 20 February 2012 (UTC)[reply]

Formula doesn't work why?

When I used this formula for number 13, it doesn't give me the correct answer. The correct answer is supposed to be 9 solar masses. Can someone tell me why it doesn't work? And how can I get the correct answer? Pendragon5 (talk) 20:32, 18 February 2012 (UTC)[reply]
Please show us your work. StuRat (talk) 20:43, 18 February 2012 (UTC)[reply]
Look at the second picture. Pendragon5 (talk) 22:13, 18 February 2012 (UTC)[reply]
OK, thanks for adding that. StuRat (talk) 23:21, 18 February 2012 (UTC)[reply]
Hmmm... your formula looks equivalent to the one in orbital period and your working looks correct... it's possible the official solutions are wrong (or we're both being really thick...). --Tango (talk) 23:17, 18 February 2012 (UTC)[reply]
It's possible but I doubt that. This is part of the astronomy event at the national Science Olympiad in 2009, which means this was used 3 years ago at the national SO event. So if there was a mistake then they must have found it out in 2009, because I'm sure many students would have reported it if it didn't seem to be correct. Plus I'm certain that any errors should have been fixed by the time they published it as the practice test. Anyway, I think number 14 is related to number 13 in some way. The question is what is the distance from Star A to the center of mass of the system. What mass of the system are they talking about? I don't know how to do number 14 either, so perhaps figuring out number 14 will give us a clue on number 13. The answer for number 14 is supposed to be 1 AU. Any help is appreciated! Pendragon5 (talk) 00:18, 19 February 2012 (UTC)[reply]
See centre of mass for question 14 - by "the system" they just mean the two stars. Q14 is easy - the answer is definitely 1 AU. --Tango (talk) 01:21, 19 February 2012 (UTC)[reply]
For a system of two masses, mA and mB, the center of mass lies between them at a distance of rA from mA and rB from mB, such that mA rA = mB rB. This should be very familiar and fairly obvious to you, either as a consequence of centering your coordinates at the center of mass by setting <math>\mathbf{R} = 0</math> in the equation for the center of mass for a system of particles, <math>\mathbf{R} = \frac{\sum_i m_i \mathbf{r}_i}{\sum_i m_i}</math>, or a more kinesthetic sense of balancing the moments of mass of the two objects. You may also wish to read our Barycentric coordinates (astronomy) article.
I concur with Tango that your answer to Q 13 of 27 solar masses and the answer key's answer to Q 14 of 1 AU are both correct. (Un?)fortunately, the answer to Q 14 does not build on the answer to Q 13, but only uses the 3 AU separation given in that problem. -- ToE 02:28, 19 February 2012 (UTC)[reply]
Sorry that I'm not familiar with those things yet. I still have a lot to learn about astronomy. There are no teachers in my school with any knowledge of astronomy, so I have to learn it from scratch. It is by far the subject that has attracted me the most since I was little. So anyway: based on the luminosity-mass relationship, Star A's mass is supposed to be twice as much as Star B's mass. Can someone explain to me why it's 1 AU? Do the math for me as an example for this one please; that would be a lot of help to me in learning it. Thanks! Pendragon5 (talk) 05:07, 19 February 2012 (UTC)[reply]
Star A is 8 times the luminosity of the Sun. Using the mass-luminosity relationship, 8 = (M/Msun)^3, so M=2*Msun. From question 13, you know the total mass of the system, so you can figure out the mass of star B. With both masses as well as the separation between them, how do you find the distance of A to the center of mass? --140.180.6.154 (talk) 07:34, 19 February 2012 (UTC)[reply]
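For anyone following along, the last step can be sketched numerically using the thread's own numbers: a total mass of 27 solar masses (the corrected Q13 answer), a mass ratio of 2 from the mass-luminosity relationship, and a 3 AU separation.

```python
# Center-of-mass distance for star A, using the numbers from this thread.
total_mass = 27.0      # solar masses (corrected Q13 answer)
mass_ratio = 2.0       # m_A / m_B, from the mass-luminosity relationship
separation = 3.0       # AU (r_A + r_B)

m_B = total_mass / (1 + mass_ratio)    # 9 solar masses
m_A = mass_ratio * m_B                 # 18 solar masses

# From m_A * r_A = m_B * r_B together with r_A + r_B = separation:
r_A = separation * m_B / (m_A + m_B)
print(r_A)   # 1.0 AU, matching the answer key for Q14
```

Note that r_A depends only on the mass *ratio* and the separation, which is why Q14's answer of 1 AU holds regardless of whether Q13's answer key is right.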
You have calculated that mA / mB = 2, from the definition of center of mass you know that mA rA = mB rB, and you were given that rA + rB = 3 AU. That is only three equations in four unknowns, but is that enough for you to solve for rA and rB?
Also, I apologize if my "should be very familiar and fairly obvious to you" language came across as harsh. What I intended to convey is that a student working these types of problems would be expected to understand the concept of center of mass sufficiently well that mA rA = mB rB should be very familiar and fairly obvious, and if it is not, then you should study the concept some more and not simply accept the formula as a given. "More power to you!" for what you are doing.
Was the initial formula you wrote provided to you, or is it something you expect to memorize? Do what's best for you, but personally, I'd hate to have to remember something like that, whereas the equivalent formula in our Orbital period#Two bodies orbiting each other which Tango mentioned:
<math>P = 2\pi\sqrt{\frac{a^3}{G(M_1 + M_2)}}</math> seems easier and gives you more. When M1 >> M2 (M1 is much larger than M2), such as with the sun and the earth, this reduces to the equation given in Orbital period#Small body orbiting a central body. From the sun-earth orbit, you know that when a = 1 AU and M (equivalent to M1 + M2) = 1 solar mass, then P = 1 year. You are really only concerned with the proportionality for Q 13:
<math>M_1 + M_2 \propto \frac{a^3}{P^2},</math>
and what you earlier calculated now becomes obvious: if P stays the same (1 year) and a triples (3 AU), then M1 + M2 must go up by a factor of 27 to 27 solar masses. Easy, right? (Sorry that we haven't given you anything better than "the answer key is wrong". I keep hoping that someone is going to point out what we are all doing wrong, but I don't think that is going to happen.)
Finally, you might be interested in reading The Physics of Binary Stars which derives the formula for the orbital period. -- ToE 10:25, 19 February 2012 (UTC)[reply]
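The Q13 calculation discussed in this thread reduces to one line once Kepler's third law is written in solar units (P in years, a in AU, masses in solar masses), so that the constants drop out:

```python
# Kepler's third law in solar units: P**2 = a**3 / (M1 + M2),
# with P in years, a in AU, and masses in solar masses.
# Plugging in the thread's Q13 numbers, P = 1 year and a = 3 AU:
P = 1.0
a = 3.0
total_mass = a**3 / P**2
print(total_mass)   # 27.0 solar masses, as the thread concludes
```

The convenient unit system is the whole trick: calibrating on the sun-earth orbit (a = 1 AU, P = 1 year, M = 1 solar mass) makes the proportionality constant exactly 1.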
Well, they don't expect students to memorize formulas; we can bring formulas and notes along with us. What matters is that you understand the concepts and know how to use them. I'm not a big fan of memorizing stuff either, though I have no problem memorizing when I need to. Like most people, I prefer a shorter formula. Always looking for the fastest way to solve a problem is to one's advantage during competition due to the limited amount of time, and it's also an advantage to memorize formulas to save time. Thanks for introducing me to a really easy formula to memorize; it's going to help me for sure. And 27 solar masses seems to be the only answer that makes sense; I'm really surprised that no one has ever reported this error. It's possible that we have misunderstood the question, but I'm going to report it and see what happens. I've got everything down now, thanks a lot! Pendragon5 (talk) 19:57, 19 February 2012 (UTC)[reply]

Abundance of chemical elements - graph

This graph of chemical element abundance in the Earth's upper crust seems to display a zigzag pattern of abundances for many (though not all) of the elements. As atomic number increases, abundance often decreases then increases then decreases again. Why does this pattern occur? 92.28.240.138 (talk) 22:03, 18 February 2012 (UTC)[reply]

Did you read the caption under the graph at Abundance of elements in Earth's crust? Deor (talk) 22:28, 18 February 2012 (UTC)[reply]
The zigzag effect between odd and even atomic numbers is known as the Oddo-Harkins rule. Thincat (talk) 23:13, 18 February 2012 (UTC)[reply]
Thanks for your helpful replies to my question. 92.28.240.138; 19:18, 21 February 2012 (UTC)

name that tree

[8] What kind of tree is this? Photo was taken in Kenya sometime in the past few weeks. Thanks. 67.117.145.9 (talk) 22:36, 18 February 2012 (UTC)[reply]

Wildly guessing, acacia? Rmhermen (talk) 23:38, 18 February 2012 (UTC)[reply]
That was my guess too, although other trees have that umbrella-like habit. Perhaps Acacia tortilis? I don't think there's enough in that photo for a positive ID. Deor (talk) 23:43, 18 February 2012 (UTC)[reply]
And my guess, too. The shape is typical. Exact identification requires more information. Dominus Vobisdu (talk) 23:47, 18 February 2012 (UTC)[reply]
No expert knowledge of African trees, but the linked photo bears an uncanny resemblance to an illustration of the Apple Ring Acacia shown on the Trees of Kenya website. "Generally found from Senegal in West Africa across the sahel and south to northern Ghana, Nigeria and to Sudan, then south to Kenya... It has the unique habit of losing its nutrient-rich leaves at the beginning of the rainy season." We have an article Faidherbia albida. Trees of Kenya's drawing of Acacia tortilis has less similarity to my eye, but it is "one of the most abundant species" in Kenya. Alansplodge (talk) 23:51, 18 February 2012 (UTC)[reply]
Thanks, that's probably as good as I can hope for. It was just a tourist photo that I thought looked cool, and I doubt that the person who photographed it has further info. 67.117.145.9 (talk) 01:35, 19 February 2012 (UTC)[reply]

February 19

Why doesn't the common cold infect nonhumans?

Why doesn't the common cold infect nonhuman mammals? Comet Tuttle (talk) 01:55, 19 February 2012 (UTC)[reply]

It does. Not exactly the same viruses as infect humans (they have evolved to use us as host and wouldn't be very effective in other hosts), but animals can certainly get colds. I just Googled "animals getting colds" and found loads of sites talking about it. --Tango (talk) 02:09, 19 February 2012 (UTC)[reply]
Note that outside the world of farming, you're very unlikely to get large numbers of strange animals packed together as closely as humans get in shops, offices, cinemas, trains, etc, which limits the potential for a disease like the cold to spread, which is why you might not see dogs or cats catching colds as often as humans do, and why cage-bound animals like hamsters may appear to never catch infections. Smurrayinchester 09:24, 20 February 2012 (UTC)[reply]

Mating the winner of a fight

Although it's intuitively clear that females prefer to mate with the winner, I want to know how common is that in general among mammals? In humans, it's certainly not always the case. 88.8.66.12 (talk) 02:32, 19 February 2012 (UTC)[reply]

I think most animals that engage in mating contests fight until one contestant submits rather than having either contestant get significantly injured (although there are certainly exceptions). In those cases, I expect the contest would just restart if one of the contestants tried to mate with the female after submitting. --Tango (talk) 02:52, 19 February 2012 (UTC)[reply]
Is there any animal in which it's "always" the case? --140.180.6.154 (talk) 07:29, 19 February 2012 (UTC)[reply]
It would be fairly safe to say in species whose main reproductive groups are usually one male and a group of females, where the male holds his place through fending off intruding solo males from time to time, this would always be the case ("always" being a very strong word, as there can be sneaky exceptions). Things like lions, gorillas, and even kangaroos tend to hold to the 'mate only with winning male' rule. In humans this may have been more common in more historic societies where a powerful chief or similar could maintain a large group of females, such as the stereotypical depiction of a harem. --jjron (talk) 07:54, 19 February 2012 (UTC)[reply]
It "may have been" - but it almost certainly wasn't. In any case, biologically speaking 'historical societies' are a blip: we are the result of 'prehistorical' ones. And what little evidence we have from contemporary hunter-gatherer societies suggests that something approximating to 'serial monogamy' was more likely to have been the norm. And if you are going to compare us with other mammals, it seems logical to compare us first with our nearest relatives, the apes. Not that this helps much, given that one can find more or less anything from the indifferent promiscuity of Bonobos to the (apparent) monogamy of Gibbons. As for Gorillas, I well recall a lecture on primate sociobiology, which demonstrated all too well that neither the 'dominant male' silverback nor the 'submissive' females seemed to get much excitement out of mating - hardly surprising, given the silverback's equipment (think of the cap on a ballpoint pen), or the duration of the encounter, which rarely exceeded ten seconds. If you want to understand human sexuality, study people... AndyTheGrump (talk) 08:21, 19 February 2012 (UTC)[reply]
Gorillas don't pair up - generally one male that can dominate other males will collect a harem. I recall an article in Scientific American a few years ago where the author had established that if a species pairs up to raise young to independence (e.g. humans, many species of birds), sex is highly pleasurable for both sexes, and sex is frequent, playing a bonding role, and rape is unusual - and where species do not pair up to raise young (domestic cats being an excellent example), sex is only marginally pleasurable for the male, and probably not at all for the female. For such species, sex is only for reproduction and probably constitutes rape. The frequency and pleasurability of sex does not map at all to the closeness of one species to another, but extremely well to how the young are raised. Generally, in species that pair up, the pairings are less likely to be influenced by how dominant the male is with other males, and almost all males get partners. In species that do not pair up, weaker males may never get any sex. All this seems to hold, more or less, for reptiles, birds, and mammals. So, yes, you can learn something of human sexuality by studying completely different species. Ratbone121.215.64.242 (talk) 10:12, 19 February 2012 (UTC)[reply]
Elephant seals, horses (actually all extant equids), springbuck (and many other antelope), and many other herding species have dominant territorial males who fight to maintain control of a territory and the sole right to mate with females in the territory. Roger (talk) 09:29, 19 February 2012 (UTC)[reply]
But can females "move abroad" to another male? Is there some kind of unpredictability in the process? 188.76.228.174 (talk) 14:39, 19 February 2012 (UTC)[reply]
Having watched Red Deer in London's royal parks often enough during the rutting season, I'd say that the hinds are well-motivated to 'move abroad', often taking advantage of confrontations between stags to break away from the herd - whether they'd behave in the same manner in a more natural environment, where there was a risk of predation involved with being alone, I'm not sure. It is also questionable as to whether the herds have 'territories' at all - instead they wander around as a group within an area shared with other herds. It would be interesting to learn just how cohesive a 'herd' is over time anyway, during the rut. AndyTheGrump (talk) 18:43, 19 February 2012 (UTC)[reply]

Occurrence of multiple cancer types

This is NOT a request for medical advice - I am just curious. A friend has been told she has 3 different types of breast cancer (in one breast!) - i.e. each tumour is of a different type. How often does this occur? Ratbone120.145.28.184 (talk) 03:32, 19 February 2012 (UTC)[reply]

The best data is for the US state of Connecticut, in which slightly under 2% of cancer patients were diagnosed with at least two primary tumors simultaneously [9]. I'm not sure what the probability is for three simultaneous cancers in the same breast, but presumably it's even smaller. Although if a person is genetically predisposed to developing breast cancer, then perhaps the chance is actually not small at all. Someguy1221 (talk) 10:34, 19 February 2012 (UTC)[reply]
Agreed. If all of the cancers were independent events, this would be extremely rare. However, whichever factors caused one of the cancers are likely to have also contributed to the others. A genetic predisposition is one factor, but there could also be exposure to radiation, carcinogenic chemicals (perhaps due to smoking), poor nutrition, age, obesity, a sedentary lifestyle, and a depressed immune system (due to stress, disease, etc.). StuRat (talk) 21:28, 19 February 2012 (UTC)[reply]
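The "extremely rare if independent" point above can be made concrete with a toy calculation. The 2% figure is the Connecticut rate linked earlier in the thread; treating each additional simultaneous primary tumour as an independent event with that same probability is an illustrative assumption only, and, as noted, shared risk factors make real co-occurrence more likely than this.

```python
# Toy independence calculation: if each additional simultaneous primary
# tumour were an independent event with probability p, three at once would
# require two further independent events beyond the first cancer.
# p is the ~2% Connecticut figure linked above; independence is assumed
# purely for illustration.
p = 0.02
p_three_if_independent = p ** 2
print(p_three_if_independent)   # 0.0004, i.e. roughly 4 in 10,000 patients
```

The gap between this tiny figure and what clinicians actually see is exactly the point StuRat makes: the events are not independent, because genetics and shared exposures raise all the probabilities together.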

Many thanks, especially to Someguy. The paper shows that it is unusual to get 2 simultaneous tumours. Since this would include simultaneous tumours of the same type, tumours of different types must be a subset, so three different types must be quite rare. Ratbone121.221.28.225 (talk) 14:49, 20 February 2012 (UTC)[reply]

How does intake of food acids affect the human body?

How does intake of food acids affect the human body? Thanks. 112.118.205.3 (talk) 09:18, 19 February 2012 (UTC)[reply]

Humans cannot survive without proper intake of essential amino acids --SupernovaExplosion Talk 12:03, 19 February 2012 (UTC)[reply]
Certain essential vitamins are also acids. Without Ascorbic acid, for example, you can get scurvy. The name "ascorbic acid" even means, broadly, "prevents scurvy". --Jayron32 14:02, 19 February 2012 (UTC)[reply]

Carnitine

Does carnitine have any real effect in reducing body fat? This says "studies do show that oral carnitine reduces fat mass". Then why does it say there is no scientific evidence for this fact? If a person starts taking a carnitine supplement, will he become dependent on the supplement, i.e. face problems with normal fat metabolism after supplement withdrawal? --SupernovaExplosion Talk 15:35, 19 February 2012 (UTC)[reply]

That link says "Although L-carnitine has been marketed as a weight loss supplement, there is no scientific evidence to show that it works. Some studies do show that oral carnitine reduces fat mass, increases muscle mass, and reduces fatigue, which may contribute to weight loss in some people." So, the increase in muscle mass may be more than the decrease in fat mass, so won't necessarily lead to weight loss. And "some studies" hardly sounds definitive. StuRat (talk) 21:24, 19 February 2012 (UTC)[reply]

Acetyl-L-carnitine and lipoic acid are recommended supplements for older people; the older you are, the less of them your body produces. See e.g. here:

Finally, I take 400 mg of lipoic acid and 1,000 mg of acetyl-L-carnitine (ALCAR) daily. This is based on the research by LPI's Tory Hagen on the role of these "age-essential" micronutrients in improving mitochondrial function and energy metabolism with age, and the research in my own laboratory indicating that lipoic acid has anti-inflammatory properties and lowers body weight and serum triglycerides in experimental animals. In addition, lipoic acid is well known to stimulate the insulin receptor and improve glucose metabolism, and is used in Europe to treat diabetic complications.

Count Iblis (talk) 00:43, 20 February 2012 (UTC)[reply]

In that case, it's quite possible that the supplement will have no effect on those who already have enough. StuRat (talk) 05:47, 20 February 2012 (UTC)[reply]
Yes, but people over the age of 35 are biologically "old", so they may already need it at that age. Someone at age 20 doesn't biologically age much over the course of a year, but someone at age 40 will biologically age much more significantly in a period of one year. If you could keep the rate of aging the same as it is for 20-year-olds regardless of age, then you would live for many thousands of years. It has been suggested that taking cheap over-the-counter supplements like vitamin D, acetyl-L-carnitine, fish oils etc. could significantly slow the aging process. In a recent article some persons were interviewed who have taken such supplements for many years; one of them is 85 years old, but he is way above average fitness for someone of even age 60, let alone 85. He still competes in athletics; he participates in the 110 metres hurdles. Count Iblis (talk) 15:02, 20 February 2012 (UTC)[reply]

Orbits of stars within globular clusters

File:Astronomystuff3.JPG

Number 26 is what i'm trying to solve. I have calculated that Cluster II has a diameter twice that of Cluster I. The answer for number 26 is the square root of 2, or approximately 1.4142 times smaller. I was trying to solve it by approaching it via the orbital period formula. I know that it has something to do with how fast the star rotates around the cluster. It doesn't seem to me that the formula for orbital period can be applied to this. Can anyone show me how to get the answer and what formula to use? Thanks! Pendragon5 (talk) 20:06, 19 February 2012 (UTC)[reply]

Watch your "rotations" and "revolutions"! While a galaxy rotates, its individual stars revolve about the galactic center. Globular clusters can't even be said to rotate as their star motion is pretty random (which is why they are not discs), though an individual star may revolve (or orbit) about the cluster. I've also changed your section title to reflect the nature of your question; hope you don't mind. -- ToE 00:52, 20 February 2012 (UTC)[reply]
This strikes me as a poorly worded question as its setup discusses the angular diameter (as observed from earth) of the globular clusters, leading to the conclusion that Cluster II has twice the diameter of Cluster I, though we are told they are of the same mass. It then asks about the ratio of the angular velocities of stars at the outer edge of the two clusters -- presumably with respect to the center of their respective cluster. Perhaps that should go without saying, but having just discussed the observed angular diameter, I would prefer if they explicitly indicated that they were not asking about the observed angular velocity.
Motion of stars within globular clusters is described at Globular cluster#N-body simulations, but that is certainly not what they are expecting to be solved here. I assume, as does the OP, that they are discussing stars which remain on the edge of the clusters in fairly circular orbits. (In highly elliptical orbits, their orbital angular velocities would be greatly reduced in the vicinity of apoapsis.) If Orbital period#Small body orbiting a central body applies (should it?), then the period is given by T = 2π√(a³/(GM)) and the ratio of their periods would be 2^(3/2), not the 2^(1/2) suggested by the answer key.
But wait! Perhaps they have been asking about the observed angular velocity of stars moving across the face of the clusters all along -- would that throw in the factor of two we need? Alas, the faster star is in the closer, denser cluster. Its orbital angular velocity is 2^(3/2) times that of the star in the more distant, less dense cluster, so with an orbit half the size, its orbital velocity is 2^(1/2) times greater, and being half the distance from sol, its proper motion would be 2^(3/2) times greater. (The coincidence coming from the setup of the problem.)
Were I the OP, I'd be rather perturbed that the two apparent errors encountered in the answer key (see #Formula doesn't work why?) both involve calculations of orbital periods. -- ToE 00:23, 20 February 2012 (UTC)[reply]
Do you mean "orbital period" when you said OP? I'm kind of lost on the wording you used, sorry if i'm slow at understanding, when you said "Perhaps that should go without saying,". What should go without saying? And you said "I would prefer if they explicitly indicated that they were not asking about the observed angular velocity." then at the bottom you said "Perhaps they have been asking about the observed angular velocity of stars moving across the face of the clusters all along". So are they asking about the observed angular velocity or not? And what do you mean by moving across the face of the cluster? Do you mean it's moving in a straight line across the diameter of the cluster? And some more info i forgot to add, perhaps it would help you explain to me better. Since the angle of Cluster II is half of the angle of Cluster I and they both have the same angular diameter, that means Cluster II is twice as far away as Cluster I. Therefore Cluster II has twice the diameter of Cluster I. And i'm also confused by this statement: "the faster star is in the closer, denser cluster". We don't know which Cluster is denser than which. And sorry that i don't understand the second-to-last paragraph of yours. Alright, based on our calculation we know that Cluster I is rotating 2^(3/2) times faster than Cluster II, then the next few lines... I have been trying to understand this but eventually i still don't get it. It's ok if you can't resolve my confusion, i know you have tried your best to explain to me. If there is something you can explain further, please do. Pendragon5 (talk) 01:42, 20 February 2012 (UTC)[reply]
"OP" means "original poster", ie. the person that asked the question (so that's you!). --Tango (talk) 01:49, 20 February 2012 (UTC)[reply]
Angular velocity is always with respect to some point, and I assume that the question is asking about the orbital angular velocity of the star and not the observed angular velocity from earth, and "perhaps that should go without saying" because if they were interested in the latter, they would probably have referred to it as proper motion. It is a bit confusing because I think they must be making a lot of unspoken assumptions. From my understanding of globular cluster, their stars typically do not follow elliptical orbits, but instead move about in complicated (chaotic?) patterns as they interact with the other bodies in the cluster until they are eventually ejected, and that this ejection process leads to a very slow evaporation of the cluster. For any calculation to be possible, I figure that they must be speaking of stars that don't just happen to be at the outer edge of the clusters at this moment, but which maintain a fairly circular orbit about the cluster at its outer edge. I don't know if this is the common behavior of stars at the periphery of globular clusters, but what else could they expect you to calculate?
Next, I'm assuming that if the two stars don't pass too close to their neighbors, a "raisin embedded" shell theorem should allow us to calculate the orbital period via the Orbital period#Small body orbiting a central body formula, and this yields the 2^(3/2) ratio. I understand that this is the value you got as well. (This is the point where I'm hoping that an astronomer will butt in and point out what we are doing wrong.)
Since the answer key says 2^(1/2), I then tried to figure out how to make that work, possibly by using the fact that one cluster was twice as far away as the other, and hoping that this 2 might cancel out a 2 in the 2^(3/2), yielding 2^(1/2). This could only come into play if we were supposed to be comparing the observed angular velocities -- the proper motion -- all along, and since no other configuration was specified, I considered that in which the stars' motions were entirely transverse (at right angles to the line of sight). This happens as the stars are moving across the face of the clusters as they are presented to earth. There is no real reason to assume any of this; I am grasping at straws, trying to make the answer key's 2^(1/2) work.
So we know that the star at the outer edge of Cluster I (the smaller, denser cluster -- denser simply because it is of the same mass but half the diameter) is moving at an orbital angular velocity 2^(3/2) times that of the star at the outer edge of Cluster II, right? But the orbit around the outer edge of Cluster I is half the size of that for Cluster II, so to maintain that 2^(3/2) orbital angular velocity advantage, it only needs to be moving 2^(1/2) times as fast as the other star in terms of actual orbital velocity (in km/s or furlongs per fortnight or whatever). But any motion in the closer cluster is twice as apparent as the same motion in the one twice as far away, so the faster star has 2^(3/2) times the proper motion of the slower star, not the 2^(1/2) I was grasping at. (So no, I don't think they were asking about observed angular velocity; it was a wild goose chase for me, but a red herring for you.) Arriving at the same 2^(3/2) value is just a coincidence set up by the problem, in which the two clusters have the same observed angular diameter, and thus the one which is twice as far away has twice the physical diameter as well.
Thus I fail to find any reasonable (or only moderately unreasonable) interpretation of the question yielding the answer in the key and must assume that either it is wrong or we are both making some mistake. -- ToE 20:10, 20 February 2012 (UTC)[reply]
Oh, I see how my "moving across the face of the cluster" would be confusing, as their "at the outer edge" probably meant the observed circular (2D) edge of the cluster, not the spherical (3D) edge. In that case, my proper motion wild goose chase would apply only to those stars whose orbital plane was perpendicular to the line of observation. -- ToE 22:24, 20 February 2012 (UTC)[reply]
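The arithmetic ToE describes can be sketched numerically. This is a minimal check, assuming circular orbits at the cluster edge and the small-body period formula T = 2π√(a³/(GM)); all quantities are relative, since only the ratio matters.

```python
import math

# Relative quantities only -- G, the mass and the radii cancel in the ratio.
def period(a, M, G=1.0):
    # T = 2*pi*sqrt(a^3 / (G*M)), small body orbiting a central mass M
    return 2 * math.pi * math.sqrt(a**3 / (G * M))

M = 1.0                 # both clusters have the same mass
a_I, a_II = 1.0, 2.0    # Cluster II has twice the diameter, so twice the radius

ratio = period(a_II, M) / period(a_I, M)
print(ratio)            # 2**1.5 ~ 2.828: the Cluster I star's orbital angular
                        # velocity is 2^(3/2) times that of the Cluster II star
```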

Securing data physically

An alternative to encryption (one immune to quantum computing) would be a storage device that can be freely written to but is physically impossible to read without a password. Furthermore it would be good that at the time of writing it could be verified that (1) the device is set to use the (unknown to the verifier) password of the intended recipient and (2) the device will function as intended, i.e. the manufacturer has not rigged the device. Does a device of this kind exist, at least in theory? --145.94.77.43 (talk) 20:27, 19 February 2012 (UTC)[reply]

I wouldn't count a password as physically securing data. A physical switch you must toggle to put the device into read mode and another switch for write mode would be better. And they have to be real hardwired switches, not just ones that set some flag the software is supposed to use, because software can't be trusted. Being able to write to a device without reading from it does pose some challenges, though, like knowing if there's room for the data, verifying that it was written correctly, etc. The device would need to have the intelligence to check for such things itself. StuRat (talk) 21:19, 19 February 2012 (UTC)[reply]
"Being able to write to a device without reading from it does pose some challenges..." Well, if you can find a way to force users to write in Perl. . .--Atemperman (talk) 13:16, 20 February 2012 (UTC)[reply]
You seem to be describing something else. From their mention of encryption, it seems clear the OP wants some method to stop unwanted parties reading the data (e.g. after finding, stealing or confiscating the device) but doesn't want to use encryption out of fear that quantum computing will break it. I don't see how what they're describing is going to work; they're probably worrying too much about the encryption being broken rather than about simply accidentally revealing the password or being forced to reveal it, but that's more of an aside. As for what you're describing, there seems little point. A hardwired write switch may have some use (although I don't think it's as foolproof as you believe). However, there's not going to be much reason to use a switch to allow the device to be written to but not read. There may be some limited use in strange scenarios, like where you're copying data to the device but it also has info you don't want read. But in most cases it would make far more sense to have 2 or more devices and not attach the device with sensitive data to something you don't trust. It's definitely not going to help in the OP's case unless your purported adversary is incredibly stupid (and if they are, they're not going to be able to break encryption even with quantum computers). Nil Einne (talk) 21:33, 19 February 2012 (UTC)[reply]
Yes, they used the word "physical", but what they were asking really had nothing to do with physical security. StuRat (talk) 00:36, 20 February 2012 (UTC)[reply]
Honestly, it looks fairly similar to public key encryption: the author of the file can't actually decrypt what he's writing; only the recipient can. I don't know of a device that bakes that sort of thing in, but there's no reason in practice that you couldn't rig up a software solution to apply that sort of thing. Note, though, that "quantum computing" won't enter into this at all. All encryption is, at some level, password-based, and so replacing one password (symmetric encryption) with another (public key encryption) isn't fundamentally altering that idea. — Lomn 01:32, 20 February 2012 (UTC)[reply]
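Lomn's public-key analogy can be illustrated with the standard textbook-RSA toy example (tiny, insecure numbers -- purely to show that a writer holding only the public key cannot read back what they wrote; only the private-key holder can):

```python
# Toy textbook RSA (insecure, demonstration only).
p, q = 61, 53
n = p * q                 # 3233, the public modulus
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % lcm(p-1, q-1) == 1

message = 65
ciphertext = pow(message, e, n)      # "writing": needs only (n, e)
recovered = pow(ciphertext, d, n)    # "reading": needs the private d

print(ciphertext)  # 2790
assert recovered == message
```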
What I was after was making something that is impossible to crack (even when the pass-phrase is much shorter than the data), as opposed to overwhelmingly improbable. --145.94.77.43 (talk) 04:05, 20 February 2012 (UTC)[reply]
I don't understand how you expect this to work at all. Even if you can only read it with the password, that doesn't make it impossible to brute-force. Just keep trying with a different password. (That's brute-forcing!) In fact, if you really in some way can't 'read it' without a password, that makes it even easier, since you know each attempt is a failure and can quickly move on to the next candidate. (Depending on how you use encryption, you may only know you failed to decrypt by the data not making sense.) Any sort of imposed limitation on the number of tries, whether in software or hardware, is of limited purpose, since it can be overridden by using hardware or software which doesn't impose such a limitation. Perhaps if it physically takes a while to read, brute-forcing would be made more difficult, but still not impossible. If reading is physically destructive and too many tries kill the device, it may seem this would work. However, we get back to the first flaw in the concept: how do you design a device which can only be read with a password? However you store the data (magnetic, holographic, whatever), ultimately there must be some way to read the data which the device contains and store it without bothering with any 'password'. Then you can just simulate in software whatever happens when the 'password' is used to read the device. Nil Einne (talk) 05:26, 20 February 2012 (UTC)[reply]
A good point about having a chance to guess the password anyway, but you are forgetting that the device is hardware in itself and cannot be overridden. Simulation is not a weakness either, since the state of the device is unknown and by definition the device must somehow fend off any unwanted attempt to probe or tamper with it. --145.94.77.43 (talk) 06:05, 20 February 2012 (UTC)[reply]
That doesn't make sense. "Hardware" vs "software" is irrelevant for this (it's just implementation detail). If you have a password, that password can be guessed. No information is more secure than the password. "Hardware" vs "software" is functionally meaningless here; cryptographically, all that's relevant is that you have password-protected data, and any means you have to legitimately enter a password constitutes a means for an attacker to enter a password. As for "the state of the device is unknown", obscurity is not a substitute for security. The world is rife with systems that were "secure" by virtue of a supposedly hidden secret (see: most industry DRM over the last decade). They're typically cracked within a year -- and those are industries with billions of dollars at stake! Now, tamper-resistant is its own thing, and it's very real, but you trade that against accessibility. You could make a box that self-destructs if someone tries to open it, or enters the wrong password, but you have to be willing to accept that you might destroy your own data if you type the password wrong. Of course, if they can bypass the self-destruct, or if you store the data in a second place for "backup", it's as if you didn't have any of that security at all. — Lomn 14:43, 20 February 2012 (UTC)[reply]
Indeed, a key point 145 seems to be missing is that there's arguably no way you can make your device's tamper resistance perfect. Sure, you may make things more difficult for your adversary, but ultimately there's always the possibility they will break whatever safeguards you put in place and can therefore override any restrictions you attempt to enforce. Nil Einne (talk) 15:06, 20 February 2012 (UTC)[reply]
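The brute-forcing point made in this subthread can be made concrete with back-of-envelope arithmetic. The alphabet size, password length, and guessing rate below are illustrative assumptions, not measurements:

```python
# Illustrative assumptions, not measurements:
alphabet = 95              # printable ASCII characters
length = 8                 # assumed password length
guesses_per_second = 1e9   # assumed offline guessing rate

keyspace = alphabet ** length          # 95**8 ~ 6.6e15 candidates
worst_case_days = keyspace / guesses_per_second / 86400

print(f"{keyspace:.2e} candidates, ~{worst_case_days:.0f} days worst case")
# Any try-limiting done in software or firmware doesn't shrink this
# keyspace; it only matters until the attacker bypasses the limiter.
```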

Physically impossible--no not in any known way. Difficult in practice--sure, see security token. 67.117.145.9 (talk) 22:59, 19 February 2012 (UTC)[reply]

I have the impression that most people worrying about the security of encryption are just being paranoid. — Preceding unsigned comment added by Ib30 (talkcontribs) 00:16, 20 February 2012 (UTC)[reply]

Nonsense. There are many examples in practice of nonsecure encryption and the negative consequences thereof. Modern secure algorithms are such only because people are constantly and professionally paranoid about them. — Lomn 01:32, 20 February 2012 (UTC)[reply]
Indeed. The pure mathematical underpinnings of most modern widely-used cryptographic schemes have been pretty thoroughly scrutinized, and if implemented correctly are secure against any current brute-force computational attack. However, it's actually quite difficult to translate that mathematics into a useful software implementation without inadvertently introducing serious weaknesses. Bruce Schneier's blog is a great way to get daily bite-sized snippets of news about security and risk management in the modern era, including fairly frequent stories related to (sometimes good, but usually poor) uses of cryptographic algorithms and technology. Three days ago, he covered this story, which involved a discovery by cryptography researchers that something like 1 in 250 public keys in the wild share a common factor, making them inherently insecure. (This was probably due to the use of flaky, not-quite-random random number generators in the creation of the keys.) TenOfAllTrades(talk) 05:24, 20 February 2012 (UTC)[reply]
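The shared-factor weakness described above can be sketched with tiny numbers: if two RSA moduli happen to share a prime (because a flaky random number generator produced it twice), a single gcd factors both. Real moduli are hundreds of digits long, but the attack is the same:

```python
import math

# Tiny illustrative primes; a "flaky RNG" reuses p_shared in two keys.
p_shared, q1, q2 = 101, 103, 107
n1 = p_shared * q1                 # 10403, first public modulus
n2 = p_shared * q2                 # 10807, second public modulus

recovered = math.gcd(n1, n2)       # one gcd reveals the shared prime
print(recovered)                   # 101
assert recovered == p_shared
assert n1 // recovered == q1 and n2 // recovered == q2
```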
"Most people" have almost no information of value that deserves any effort to be protected against "quantum computing", or that deserves any effort to be cracked. And most people cannot protect their most valuable information like credit cards and bank accounts themselves, since their respective banks are in charge. So, in this sense it's nonsense to worry about common algorithms or about common programs. Just use them correctly and you are safe. On the other hand, if you are in an industry which heavily depends on security (to protect your digital TV transmission, credit card processing information, etc.) then you'll have to be up-to-date to know what others are doing to crack your data. 88.8.66.12 (talk) 14:24, 20 February 2012 (UTC)[reply]
Again, nonsense. "Most people" (and why the scare quotes?) rely on secure encryption for sensitive data on a daily basis, whether they realize it or not. — Lomn 14:44, 20 February 2012 (UTC)[reply]
What? If they do not realize the use, why being paranoid? Most people do not have any need or possibility to manage the encryption they use. I doubt you understood the previous comments. Take a look at: that's Lomn. 88.8.66.12 (talk) 16:10, 20 February 2012 (UTC)[reply]
Save data to flash drive. Physically remove flash drive when done. Store in vault. How's that sound? (Yeah, I know, flash memory degrades eventually. But it does survive a trip through the wash). Guettarda (talk) 21:00, 20 February 2012 (UTC)[reply]

February 20

In an unstirred bulk polymerization that proceeds to 100 percent conversion, how can the effects of shrinkage and heat of polymerization be handled?

Just wondering what your take on this would be. — Preceding unsigned comment added by 96.238.137.168 (talk) 01:23, 20 February 2012 (UTC)[reply]

With safety gloves. ←Baseball Bugs What's up, Doc? carrots03:04, 20 February 2012 (UTC)[reply]
If you know how much shrinkage to expect, you can make it larger than needed, to compensate for the shrinkage. As for excess heat, fans could be used, or other cooling methods. StuRat (talk) 05:42, 20 February 2012 (UTC)[reply]

Rechargeable battery

Why does keeping a laptop plugged in constantly ruin the battery life? --108.225.117.174 (talk) 03:02, 20 February 2012 (UTC)[reply]

Who says it does? ←Baseball Bugs What's up, Doc? carrots03:03, 20 February 2012 (UTC)[reply]
"if you keep your laptop plugged in, you force your battery to remain at 4.2V continuously and these side reactions continue to happen and slowly kill the battery" "Laptop batteries may last a little longer if you let them fully discharge occasionally". Clarityfiend (talk) 03:15, 20 February 2012 (UTC)[reply]
See Memory effect. If you never discharge a battery all the way, it sort-of "forgets" what "all discharged" means, and you lose battery life. --Jayron32 03:44, 20 February 2012 (UTC)[reply]
The memory effect is a feature of NiCd batteries. Most laptops have a lithium-ion battery which has battery-management circuitry to prevent both overcharge and over-discharge. Both are harmful to battery life, with deep discharge making the battery unusable. The general advice seems to be to avoid leaving the power supply permanently connected, but many users leave laptops permanently plugged in with little noticeable effect on battery life. Ideally, the battery management circuitry should have a software link so that it can be adjusted to a "permanently plugged-in" setting where it stops charging just below the maximum power to extend battery life, and with a maximum charge setting for those who need many hours of use from battery alone. None of the laptops that I've used have this facility. I wonder why they don't. Dbfirs 13:46, 20 February 2012 (UTC)[reply]

Infectious non-contagious diseases

From Marburg virus disease:

Marburgviruses are highly infectious, but not very contagious.

After looking through Infectious disease and Contagious disease, I can't figure out what this means — especially since the infectious disease article says "Infectious diseases that are especially infective are sometimes called contagious." How can a highly infectious disease not be very contagious? Or how can a less-contagious disease be characterised as highly infectious? Does it keep reinfecting hosts that already have it? Nyttend (talk) 04:44, 20 February 2012 (UTC)[reply]

Contagious is a measure of how easily a disease spreads in a human population. Ebola is another virus which is infectious but not very contagious, owing to the fact that victims quickly become incapacitated and die after infection, before really having a chance to infect other people. On the other hand, HIV is given as an example of a highly contagious disease, since victims can walk around for years without even knowing they're infected, passing it on to people far away from where they initially caught the infection. Vespine (talk) 05:19, 20 February 2012 (UTC)[reply]
For completeness, diseases like Lyme disease are infectious but not contagious at all, since you can't catch them from other people. Vespine (talk) 05:20, 20 February 2012 (UTC)[reply]
So highly infectious basically means "if you get even a few pathogens, they have a high chance of infecting a substantial portion of the body and causing observable effects"? Nyttend (talk) 05:25, 20 February 2012 (UTC)[reply]
Well, I'm not a doctor but that's the way I understand it. I think it's more of a "threshold", as opposed to infecting "portions" of the body, but yeah... Also the Contagious disease article does say The boundary between contagious and non-contagious infectious diseases is not perfectly drawn. It contradicts me and says HIV is a non-contagious disease since it is not easily transmitted by physical contact, but then goes straight on to say In the present day, most sexually transmitted diseases are considered contagious. So I think the term has a slightly "loose" meaning. Vespine (talk) 05:28, 20 February 2012 (UTC)[reply]
I'm pretty sure "contagious" means "can be transmitted between individuals in a population". Malaria is infectious but not contagious -- you get malaria from mosquitos, not from other humans.--Atemperman (talk) 13:13, 20 February 2012 (UTC)[reply]

which insect is this ?

http://www.mediafire.com/imgbnc.php/cddd07abfa9b042f748ea3294428b71476d5fba1075598ef99d7079528e1c3a26g.jpg which insect is this ? — Preceding unsigned comment added by 182.178.249.145 (talk) 15:24, 20 February 2012 (UTC)[reply]

This will be tough without more info. Things that will help, if you know them (e.g. if you took the photo): Where was the photo taken? In what season? What kind of tree is that? How large is the specimen? Nevertheless, I'll start with some guesses:

Spectrum of stars

File:Astronomystuff0.JPG
you mean this angle?

— Preceding unsigned comment added by Pendragon5 (talkcontribs) 23:16, 20 February 2012 (UTC)[reply]

File:Astronomystuff4.JPG

Can anyone help me on number 16? A drawing to go with the explanation would be highly appreciated! The answer for number 16 should be from 50-70 degrees (all they want is a rough calculation), so what matters is how to do it. Thanks! Pendragon5 (talk) 19:50, 20 February 2012 (UTC)[reply]

The redshift will give you the radial velocity (the component of velocity directly away from or towards you). You calculated the total velocity in the previous question. A bit of trigonometry will then give you the angle. --Tango (talk) 20:57, 20 February 2012 (UTC)[reply]
Can you do this problem as an example for me please? I don't even know what angle they are looking for. If possible can you draw a picture to help me visualize and post it on here? It would help me a lot to solve a problem if i can visualize it. Pendragon5 (talk) 22:10, 20 February 2012 (UTC)[reply]
Draw a line between the observer and the star, then draw an arrow from the star pointing in the direction the star is moving. They want the angle between that line and that arrow. --Tango (talk) 23:09, 20 February 2012 (UTC)[reply]
No, not that angle. The star isn't going to move a significant distance in your lifetime, so considering it at two points isn't helpful. You're just interested in the instantaneous velocity. --Tango (talk) 23:35, 20 February 2012 (UTC)[reply]
I don't understand this at all. I can't even visualize what angle i am working on. I don't get what they mean by measured wavelength and laboratory wavelength. I have no knowledge of how to connect the information they give to find the angle. Can you just do it as an example problem for me? Then i can do a similar problem if i am given one. I'm totally blacked out on this problem, sorry. Pendragon5 (talk) 23:55, 20 February 2012 (UTC)[reply]
I linked to an article, have you read it? --Tango (talk) 00:26, 21 February 2012 (UTC)[reply]
Yea i have been working on it and trying to understand it for the last few hours. Alright let just say that i'm being stupid on this ok.Pendragon5 (talk) 00:42, 21 February 2012 (UTC)[reply]
You know the radial velocity -- the projection of the star's total velocity onto the direct line between you and the star. (You measured the red-shift, right? That's the velocity relative to you. And you calculated the orbital velocity, which is the total velocity of the star, right?) You're looking for the angle between the line that connects you to the star, and the line that the star is moving along. In physics parlance, the redshift tells you the projection of the total velocity onto your radial line. I'd use a dot product formula any time I have to calculate a projected vector; you can rearrange this formula. Have you studied much trigonometry yet? We can walk you through the details in a very step-by-step fashion, but even if we do, it might not be very helpful until you've got a really solid background in algebraic trigonometry. Nimur (talk) 01:52, 21 February 2012 (UTC)[reply]
i don't know how much is needed to be considered as having a solid background in algebraic trigonometry. The highest math class i have done is precalculus, that's all. I remember learning about the dot product before, but we didn't apply it to complicated ideas; we basically did easy stuff and obvious problems. And again i still can't picture what angle we are trying to find. Can someone just draw it on a piece of paper and upload it here please, that would be extremely helpful to me.
And no i still don't even know how to calculate the redshift. When i look at the formula, i don't know which one is which. Where should the 656.5386 nm measured value be in the formula? Where should the 656.3 nm laboratory value be in the formula? I still don't understand this statement: "the line that connects you to the star, and the line that the star is moving along". The line that the star is moving along should be changing continuously since the star is always moving, or what do you mean by moving along? What angle exactly is this? Pendragon5 (talk) 02:19, 21 February 2012 (UTC)[reply]
Calculation of redshift:
Based on wavelength: z = (λ_observed − λ_emitted) / λ_emitted
Based on frequency: z = (f_emitted − f_observed) / f_observed
And i strongly doubt that they expect students to know advanced math. All they require is basic math like algebra. Is there a faster way that can give a really rough answer? That's why the answer key says any answer from 50 degrees to 70 degrees is correct. So the rough calculation should be fine. Pendragon5 (talk) 02:26, 21 February 2012 (UTC)[reply]

You can either use math or memorization. The major tool that higher math like calculus gives you is the ability to manipulate equations in complex ways to get new equations that tell you new things about the world. You can either start from one equation and do calculus to derive all the equations you need, or you can memorize a whole bunch of equations. For example, you can derive essentially all of your Newtonian mechanics equations from a few basic starting points and a one-semester calculus class. Or you can memorize fifty equations and their applications. That's why people learn higher-order math: it simplifies your life to know one equation and the ways to manipulate it into fifty new equations, rather than having to memorize all fifty with no idea how they relate to each other. --Jayron32 04:54, 21 February 2012 (UTC)[reply]
As with several of the other astronomy questions Pendragon5 has been asking about, this one implies a couple of unstated assumptions. First is that the binary system has no significant overall radial motion with respect to us, so that the entire redshift is a function only of the radial (with respect to our line of sight) component of the star's orbital velocity. If we want to come up with the actual orbital inclination and not just an upper limit on the inclination, then we must assume that the wavelength given represents the maximum redshift observed during the star's 10 day orbit. Calculating the redshift yields a radial velocity of almost exactly half of the orbital velocity, so we know that at least the answer key is correct this time (though I don't know why they bracket the answer with a range of 50°-70°). If the second assumption doesn't hold, and this is just a single random spectral measurement of Star C, then all we can say is that 60° is the maximum inclination. If the first assumption doesn't hold, then all bets are off and nothing can be calculated. While I'm griping at the statement of the problem, I'll point out how strange it is to specify the observed Hα wavelength to 7 significant figures, but to round off the reference value to 4 significant figures. Hmph. -- ToE 13:40, 21 February 2012 (UTC)[reply]
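To make the arithmetic above concrete, here is a sketch of the calculation in Python. The wavelengths are the ones quoted in this thread; the star's orbital velocity is not given here, so the 218 km/s below is an assumed placeholder, chosen only to match ToE's remark that the radial velocity comes out to about half the orbital velocity.

```python
import math

C_KM_S = 299_792.458          # speed of light, km/s

lam_obs = 656.5386            # observed H-alpha wavelength, nm (from the thread)
lam_rest = 656.3              # laboratory (rest) wavelength, nm (from the thread)

z = (lam_obs - lam_rest) / lam_rest   # redshift: (observed - rest) / rest
v_radial = z * C_KM_S                 # non-relativistic Doppler, valid for z << 1

v_orbital = 218.0             # km/s -- ASSUMED value, not stated in this thread

# Angle between the line of sight and the star's velocity vector:
# the measured radial velocity is the projection v_orbital * cos(theta).
theta = math.degrees(math.acos(v_radial / v_orbital))

print(f"z = {z:.6f}, v_radial = {v_radial:.1f} km/s, theta = {theta:.1f} deg")
```

With these inputs the redshift is about 3.6 × 10⁻⁴, the radial velocity about 109 km/s, and the angle about 60 degrees, consistent with the answer key's 50°-70° range.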

Flying a tethered quadrocopter

Would it be possible to fly this RC quadrocopter indefinitely (or until mechanical failure) while having it connected to AC power via the use of a long extension cable? That is, can I remove the battery from the quadrocopter, make a long extension of the thinner wire before the power supply and fly the drone without a battery while tethered to its power supply? It says the payload is 700g, what are the implications as far as drawing out more wire? I mean, does this put more load on it and if so what is the relationship or rate?

As a side note, I do realize that it should include a small backup battery to stop it from crashing in case of disconnect and that I am not clear on how the drone will be autonomously hovering in the same spot indefinitely, but that doesn't matter yet. Also, I have seen a few videos of the Parrot AR.Drone being flown while tethered to its standard short-cabled AC adapter. I can see that the stability is affected even while flying indoors and that this is a potential problem. Even so, I would like to know the feasibility of hovering much further away whilst tethered, regardless of this potentially causing significant load on the copter.

In summary, I'd like to know:

  1. Can the quadrocopter be flown without a battery while directly connected to the power supply?
  2. Is it possible to extend the less weighty wire before the power brick?
  3. What is the rate that (I assume) load increases as distance from the power supply increases?
  4. I also found this pay load/power consumption chart. Lhcii (talk) 20:19, 20 February 2012 (UTC)[reply]


1) Your RC chopper is unlikely to work directly off A/C, without the battery, unless modified.
A few other thoughts:
A) 700 grams isn't much length for an electrical cord, but would be more if you eliminated the insulation (you would need minimal insulation to keep the positive from shorting to the negative, unless they were somehow kept physically apart). However, uninsulated wire would be unsafe, and would also be less effective as a tether, having only the strength of the copper to keep the chopper in place. So, overall, leaving the full insulation in place would make sense, which would only allow for a very short tether.
B) You might find that the time between mechanical failures is less than you might think, since it's designed for only short flights. It might overheat or lose lubrication rapidly.
C) I also share your concern over the stability of tethered flight, but attaching the tether directly to the CG point is the least likely to disrupt that stability.
D) Introducing a spring into the system (preferably at the bottom of the tether) would help keep the chopper from breaking its tether on updrafts.
E) Constructing a tower to support most of the weight of the wire would allow the chopper to fly higher. StuRat (talk) 20:49, 20 February 2012 (UTC)[reply]
Thanks, StuRat, I appreciate the suggestions. I think that building a tower will be a perfect solution to the problem for many situations, attaching it with a spring is a great idea as well! I was thinking of using a gimbal, but that is much simpler and thus lighter. I am disappointed that 700g is so little wire, but I'm sure I could also increase my range by upgrading the chopper or adding additional choppers to ferry the power cables and possibly a camera as well(also increasing weight, I know).Lhcii (talk) 23:13, 20 February 2012 (UTC)[reply]
Something else to consider is ground effects. That is, your chopper will get more lift when closer to the ground. Thus, you could have a fairly stable spot where, if it drops lower, the ground effects will lift it back up, or, if it rises up, the lack of ground effects will bring it back down. To use this form of stability, it would need to be quite close to the ground though, no more than a few feet up. And, of course, the up/down stability is only one of many types to worry about (with pitching, rolling, moving forward/back or right/left, and twisting the cord being a few others). To address all of those, you might want 4 cables, one attached below each rotor, at maybe a 45 degree angle. Two of those cables could be the positive and negative wires. You'd want the cable weights to match, and the + and - wires to be on opposite sides, in case they don't match. This would mean 4 towers, and maybe a stepladder in the center to launch and retrieve the chopper (or maybe scaffolding with the cables tied off at each corner).
BTW, why do you want to do this ? (Your answer might affect our suggestions.) StuRat (talk) 23:20, 20 February 2012 (UTC)[reply]
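To put rough numbers on questions 2 and 3 above, here is a back-of-the-envelope sketch. The wire gauge (24 AWG), hover current (10 A), and bare-copper-only weight are all assumptions for illustration, not specs of this particular drone.

```python
# Assumed figures: 24 AWG copper (~0.205 mm^2 per conductor), 10 A hover
# current, and bare copper only (insulation would add considerably more mass).
COPPER_DENSITY = 8960.0       # kg/m^3
COPPER_RESISTIVITY = 1.68e-8  # ohm*m
AREA = 0.205e-6               # m^2 cross-section, 24 AWG
PAYLOAD_G = 700.0             # stated quadrocopter payload, grams

mass_per_m = 2 * COPPER_DENSITY * AREA * 1000   # grams per metre, two conductors
max_len = PAYLOAD_G / mass_per_m                # metres of bare pair the payload lifts

r_per_m = 2 * COPPER_RESISTIVITY / AREA         # ohms per metre, round trip
drop_10m = r_per_m * 10 * 10.0                  # volts lost over a 10 m tether at 10 A

print(f"{mass_per_m:.2f} g/m -> ~{max_len:.0f} m max; {drop_10m:.1f} V drop over 10 m")
```

The voltage drop grows linearly with both tether length and current, which is why practical tethered drones send a much higher voltage up the cable and convert it down on board rather than running battery voltage through a long thin wire.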

Dissolution of Biphenyl

Hello. I would like to synthesize triphenylmethanol via a Grignard reaction. A common side-product is biphenyl, which prefers to dissolve in hexanes over ether. How can adding hexanes purify a mix of triphenylmethanol, biphenyl, and ether? The boiling point of biphenyl is higher than of triphenylmethanol. Rotary evaporation will not separate biphenyl from my desired product, eh? Thanks in advance. --Mayfare (talk) 22:19, 20 February 2012 (UTC)[reply]

Have you tried chromatography methods to separate your products? If thin layer chromatography indicates a reasonable difference in Rf values (i.e. if the spots separate enough), then you stand a good chance of column chromatography working to separate the products from each other. Collect all of your fractions, identify which fractions contain your desired product using a combination of the TLC results (the compounds come off of the column in the same order they moved up the TLC plate) and analytical techniques (NMR and/or mass spectrometry should work) and then collect those fractions and isolate your desired products, perhaps by distillation or crystallization or something like that. --Jayron32 23:02, 20 February 2012 (UTC)[reply]
This isn't something I know well, but a simple Google search turns up many hits - apparently it's a common classroom exercise.[10] For example [11],[12],[13], and [14]. To quote this last (with a little imperfection of grammar), "The petroleum ether will dissolve the non-polar biphenyl, while the product triphenylmethanol does not." (The point is, the product is an alcohol) Using chromatography... someone's thinking like a biochemist. ;) Wnt (talk) 04:01, 21 February 2012 (UTC)[reply]
I did, in my misspent youth, spend about 18 months working in an organic synthesis lab. It was the most misspent part of my youth, in hindsight. But hey, it prepared me to answer this question. --Jayron32 04:48, 21 February 2012 (UTC)[reply]

Aluminum pot

I have an aluminum pot which, when filled with water and allowed to sit, gets white nodules forming on the bottom of the interior. Are these some type of salt ? I use this pot to replace water in my "humidifier" (a larger pot I leave on the stove all day). I don't think I want to eat anything out of this pot. Interestingly, I have another aluminum pot which doesn't seem to do this. What might be different in their constructions ? StuRat (talk) 22:20, 20 February 2012 (UTC)[reply]

Pure aluminium quickly develops an impervious, transparent oxide layer which prevents further oxidation. It seems likely that your pot was formed from aluminium which contained impurities. These (probably iron) prevent this protective film of oxide from forming. Nevertheless the aluminium (which is very reactive) still insists on oxidizing, but in a different form. --Aspro (talk) 23:37, 20 February 2012 (UTC)[reply]
If one looks on page 33 one will see Fig 7 showing such an iron inclusion.[15]--Aspro (talk) 00:14, 21 February 2012 (UTC)[reply]

Sounds like calcification deposits. I expect the one pot has tiny roughnesses that serve as nucleation centers to allow the dissolved solids in the water (which is probably pretty hard) to come out of solution. Looie496 (talk) 00:11, 21 February 2012 (UTC)[reply]

I second this explanation (calcification). Aluminum, as noted above, is highly reactive. This means that it forms its protective oxide layer instantly, pretty much regardless of inclusions or impurities. This makes aluminum, in practice, almost inert. David Spector (user/talk) 02:27, 21 February 2012 (UTC)[reply]
We have city water, which isn't particularly hard. Also, why does it form nodules, instead of a continuous film ? StuRat (talk) 03:25, 21 February 2012 (UTC)[reply]

Pregnant cats

Hi. Our tabby female cat recently gave birth to three kittens who have since been purchased and rehomed. A short time after our other cat became pregnant and this evening gave birth to five kittens. The other cat seems to be taking a motherly interest in this new batch of kittens. The new mother doesn't seem to mind but we're concerned that this will have an adverse effect on the connection between the mother and her litter? If so how do we prevent the other cat from exhibiting this maternal behaviour? --Hadseys (talk) 22:31, 20 February 2012 (UTC)[reply]

I imagine some type of hormone therapy could remove the "maternal" hormones from the first momcat, but, of course, no vet would do that. Keeping them separated would work, but really, I don't see the danger. It's far better to have two mothers than have one trying to kill them, after all. The only physical risk I see is if the first momcat has stopped producing milk, but prevents the kittens from getting milk from their mother. If this is the case, then yes, keep them locked in different rooms until the kittens are weaned. If you do have to separate them, I suggest some surrogate kitten for the kittenless one. Rolled up socks seem to work. StuRat (talk) 23:44, 20 February 2012 (UTC)[reply]
Don't worry about the other female kitten taking a maternal interest, from my experience with cats and kittens it's perfectly normal. In feral groups, female cats will raise the kittens between them. The time to worry is if an entire male cat takes interest in kittens as he will regard them as a threat and potential source of food. Oh and I'd get all your cats neutered as soon as they are old enough. The last thing the world needs is more cats (and I'm a cat lover). --TammyMoet (talk) 11:10, 21 February 2012 (UTC)[reply]
I think you mean "intact" male. Not entire. Dismas|(talk) 11:13, 21 February 2012 (UTC)[reply]
I've seen male cats take a keen interest in others' kittens. It's often perfectly harmless. Kittens like to play, and will naturally try to interest other cats in playing with them. Some cats (including adult males) are perfectly happy to oblige. Unless you sense aggression from the adult male, I wouldn't worry. And as to your situation, I highly doubt it will have any "adverse effect on the connection between the mother and her litter" in a detrimental way. Be thrilled that they're all on good terms. 58.111.178.170 (talk) 15:58, 21 February 2012 (UTC)[reply]
  • Don't worry, we will spay her; the other one's already been done. They just both escaped out of the house one time and got knocked up =/. The kittens, though, if anyone's interested, are beautiful: four with white fur with random black tortoiseshell markings scattered about, and a pure black one
Have you got pictures? Whoop whoop pull up Bitching Betty | Averted crashes 15:24, 21 February 2012 (UTC)[reply]

Heliocentrism

Is this an observable fact? 203.112.82.128 (talk) 23:15, 20 February 2012 (UTC)[reply]

Yes. You can observe the sun, moon and planets moving in the sky and they move in exactly the way the heliocentric model of the solar system predicts. --Tango (talk) 23:38, 20 February 2012 (UTC)[reply]
You can distinguish the modern heliocentric model from the Ptolemaic model by the phases of Venus, from the Tychonic model by the stellar parallax, and from Newtonian orbits by careful observation of the perihelion of Mercury, among many other careful observations that align with the modern understanding (I have picked out only a few historically significant ones). Observations do not prove theories, but they do disprove them, leaving only those theories which fit the data and remain falsifiable. --Mr.98 (talk) 23:47, 20 February 2012 (UTC)[reply]
And, of course, we've sent ships to all the planets and the Sun. I don't think they would hit their targets if our heliocentric model of the solar system was wrong. StuRat (talk) 23:49, 20 February 2012 (UTC)[reply]
I agree with the above conclusions, but I think the above answers are not quite accurate. The pre-Copernican Ptolemaic system was amply verified by observations and made pretty much the same predictions. The main problem with the Ptolemaic system was that it assumed circular orbits, which the Copernican model also assumed to begin with. From the geocentric model article: The geocentric system was still held for many years afterwards, as at the time the Copernican system did not offer better predictions than the geocentric system, and it posed problems for both natural philosophy and scripture. The Copernican system was no more accurate than Ptolemy's system, because it still used circular orbits. This was not altered until Johannes Kepler postulated that they were elliptical (Kepler's first law of planetary motion). If Kepler's laws were applied to the Ptolemaic system, there is no reason it couldn't have been used to send spacecraft to the sun and planets. Fundamentally it's a question of relativity and perspective. Also, strictly speaking, heliocentrism is the theory that the sun is at the center of the UNIVERSE, which we obviously no longer believe to be the case, but locally to our solar system it holds true. (EDIT: I posted this before I saw Mr.98's reply, which I don't have any issues with.) Vespine (talk) 00:01, 21 February 2012 (UTC)[reply]
Would going with elliptical orbits improve the geocentric model beyond what epicycles did ? For example, could they explain the apparent retrograde motion of Mars ? Or do you mean a geocentric model with both elliptical orbits and epicycles ? Also, I don't think they would be accurate enough for space shots, unless you are assuming local corrections to navigation. StuRat (talk) 00:09, 21 February 2012 (UTC)[reply]
I'll admit I have no idea:) I could be wrong, but you COULD make an accurate working model from geocentrism, even if you had to add "planetary constants" for each orbit or whatever, it might make it exceedingly complex, but the point is that it doesn't necessarily make it false and conversely doesn't necessarily make heliocentrism a fact. I'm probably reading WAY too much into probably just a casual question. If the OP is interested I think this is a question about Philosophy of science, Scientific realism and even Model dependent realism is an interesting article. Vespine (talk) 00:15, 21 February 2012 (UTC)[reply]

OP here. I think we could still send ships to the sun if we had a good geocentric model; what I'm interested in is whether we establish heliocentrism as fact, analogous to the fact that birds can fly or a basketball is a sphere. 203.112.82.128 (talk) 00:45, 21 February 2012 (UTC)[reply]

In which case, as I suspected, this is a philosophical question the answer of which depends on which stand point you accept. The articles I linked above are a good starting point. I'm only a novice philosopher but for example if you subscribe to Scientific realism I think you would argue that heliocentrism can be a fact, if however you subscribe to Model dependent realism then, if I understand correctly, you don't really believe in facts, just the usefulness of the models we adopt. Vespine (talk) 01:18, 21 February 2012 (UTC)[reply]
Maybe what I'm really interested in is whether we have a probe that can actually see the planets move around the sun, or something like that. 203.112.82.128 (talk) 01:33, 21 February 2012 (UTC)[reply]
We can see the planets move around the Sun from Earth! For example, almost every year I watch Mars undergo apparent retrograde motion. This is something I can see as plainly as snow in winter. This particular motion makes more sense if you realize that Mars is exterior to Earth and orbits the Sun, consistent with an orbit dominated by Newtonian gravity. I watch Jupiter and Venus orbit the Sun, and I watch Moon orbit the Earth. You don't have to look very hard to see these things: they're big, bright, and move slowly. Nimur (talk) 01:43, 21 February 2012 (UTC)[reply]
So that means nothing can disprove heliocentrism? 203.112.82.129 (talk) 01:58, 21 February 2012 (UTC)[reply]
It's not that nothing can disprove it, but that it's consistent with all observations, which the previous models were not. And with all the observational data we now have, it would take a truly elaborate and complicated geocentric model to also match those observations. The modern heliocentric model also benefits from the fact that it can be constructed directly from universal laws of physics, which have themselves been extensively demonstrated to agree with observations unrelated to planetary motion. You reach a point where asking if something is a fact just becomes meaningless. See brain in a vat, for example. It's trivial to construct a theory centered on that concept that is consistent with all human observations. If you're asking whether it's safe to consider heliocentrism to be a fact, the answer is yes. If you're asking what it means to be a fact, you have wandered into the realm of philosophy. Someguy1221 (talk) 02:20, 21 February 2012 (UTC)[reply]
Philosophically, we could still have a geocentric system - just take all of our standard equations, convert to a coordinate system centered on the Earth's surface (or to be precise, some specific part of it), and in that coordinate system they still revolve around the Earth. Either way you just work the numbers; what they mean is merely "an interpretation", like the many-worlds interpretation or the Copenhagen interpretation of quantum mechanics. That does seem kind of absurd though, doesn't it? Wnt (talk) 02:49, 21 February 2012 (UTC)[reply]
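The coordinate-conversion point above can be sketched numerically: express heliocentric orbits in an Earth-centred frame, and the retrograde loop of Mars mentioned earlier in the thread falls out automatically. Circular, coplanar orbits and a starting alignment at opposition are simplifying assumptions here.

```python
import math

def helio_pos(a, period, t):
    """Position on an assumed circular heliocentric orbit at time t (years)."""
    ang = 2 * math.pi * t / period
    return a * math.cos(ang), a * math.sin(ang)

def mars_geo_longitude(t):
    # Subtract Earth's heliocentric position to re-express Mars's motion
    # in a geocentric frame (radii in AU, periods in years).
    ex, ey = helio_pos(1.000, 1.000, t)   # Earth
    mx, my = helio_pos(1.524, 1.881, t)   # Mars
    return math.atan2(my - ey, mx - ex)   # geocentric ecliptic longitude

dt = 0.001  # years
rate_at_opposition = mars_geo_longitude(dt) - mars_geo_longitude(0.0)
rate_half_year_on = mars_geo_longitude(0.5 + dt) - mars_geo_longitude(0.5)

print("retrograde near opposition:", rate_at_opposition < 0)
print("prograde half a year later:", rate_half_year_on > 0)
```

Near opposition Mars's apparent longitude decreases (retrograde); half a year later it increases again (prograde), exactly the looping motion visible from Earth.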
Of course, to accept a geocentric system, we would need some mechanism by which the far more massive Sun would orbit the Earth. Does the Earth have hidden mass, like a black hole inside a hollow sphere ? If so, why isn't the force of gravity more for us ? Does the Sun have much less mass than we think ? If so, how can it support nuclear fusion ? We would have to toss out much of physics to come up with a universe where geocentrism is still possible. StuRat (talk) 03:20, 21 February 2012 (UTC)[reply]
There are plenty of observations that could have disproven it, but in fact, they disproved other alternatives. --Mr.98 (talk) 13:15, 21 February 2012 (UTC)[reply]
Also note that heliocentrism isn't 100% right. The Sun does wobble a bit as it is also affected by the gravitational attraction of the planets. In a two-body system it's more correct to say that both objects orbit about the barycenter of the system. With three or more bodies, it gets even more complex. StuRat (talk) 03:23, 21 February 2012 (UTC)[reply]
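A quick illustration of the barycentre point above, using standard round values for the Sun-Jupiter pair (the one planet massive enough to pull the two-body barycentre outside the Sun's surface):

```python
M_SUN = 1.989e30      # kg
M_JUP = 1.898e27      # kg
A_JUP = 7.785e11      # m, mean Sun-Jupiter distance
R_SUN = 6.957e8       # m, solar radius

# Distance from the Sun's centre to the Sun-Jupiter barycentre.
r_bary = A_JUP * M_JUP / (M_SUN + M_JUP)

print(f"barycentre at {r_bary / 1e3:.0f} km; solar radius {R_SUN / 1e3:.0f} km")
print("outside the Sun:", r_bary > R_SUN)
```

The barycentre comes out around 742,000 km from the Sun's centre, i.e. just above the solar surface, which is the "wobble" referred to above.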

Is Heliocentrism an observable fact? No! The center of the universe is not a well defined place. To Aristotle the center of the world was the point towards which heavy objects are attracted, which is the center of the Earth. To Aristarchos of Samos and Copernicus the center was the Sun. To Kepler the Sun is not in the center of the elliptical planetary orbit, but rather at a focal point. To Newton any point may serve as the center, and there is not even an absolute zero velocity - any constant velocity displaced coordinate system will do - but there is an absolute zero point for acceleration, which is the acceleration of a particle that is subject to no force. To Einstein even the absolute zero acceleration is undefined, as nonzero acceleration may be identified with gravitation. The Cosmic microwave background radiation#CMBR dipole anisotropy seems to indicate a zero point of velocity, but no geometrical center of the universe can be identified. Bo Jacoby (talk) 12:51, 21 February 2012 (UTC).[reply]

It all depends what one really means by "heliocentrism." --Mr.98 (talk) 13:15, 21 February 2012 (UTC)[reply]
One thing here: the OP is referring to "facts" like "birds can fly" or "a basketball is a sphere" as the gold standard of truth. But this is misleading. There's a preference for visual epistemology here which is very problematic: "seeing" something is not necessarily better or more iron-clad evidence than many other forms of observation. Secondarily, as with those examples, life gets complicated. Basketballs are not perfect spheres; not all birds can fly; and the definition of "bird" and "flight" can vary quite a bit once one starts hashing out very precise definitions (is gliding flying? if we put an ostrich on an airplane, can it fly?). Establishing even basic "facts" from apparently raw and unadulterated sense data is more tricky than it seems — aside from the fact that your sense data may be flawed or unreliable (you could be crazy, or dreaming, or inhibited in some way), you also have to make the initial selection of what sense data to retrieve in the first place, which shapes your entire outlook (relying solely on a visual spectrum of light will cause you to miss quite a lot of observable phenomena). If you want to start parsing out interesting and complicated epistemological questions about how we know anything and what is a fact, there is a rich history of these issues in the philosophy of science that one might explore. --Mr.98 (talk) 13:15, 21 February 2012 (UTC)[reply]
... and remember that, in the words of the Galaxy Song, "The sun and you and me and all the stars that we can see, Are moving at a million miles a day" around the centre of our galaxy. Actually, according to orders of magnitude (speed), it is more like 10 million miles a day. But the point is that it is several times greater than the Earth's orbital speed relative to the Sun. Gandalf61 (talk) 15:25, 21 February 2012 (UTC)[reply]

February 21

blocking unsolicited junk email

I'm tired of receiving unsolicited junk email from online dating services. Is there any way to block them? 24.90.204.234 (talk) 04:56, 21 February 2012 (UTC)[reply]

The Computer Ref Desk would be a better choice for this post. If they always use the same email address, it's simple to blacklist that address in most email systems. If not, then it gets trickier, and some type of spam filter must be used instead (which might also block other emails). StuRat (talk) 06:46, 21 February 2012 (UTC)[reply]

Single-atom transistors: the end of "Moore's Law"?

Greetings!

For a few years now, I've noticed that transistors in electronic devices keep getting smaller, and, correspondingly, said devices keep becoming cheaper, more robust, and more powerful. I've also noticed the current brouhaha over Intel's miniaturizing (desperately, it seems) its end-user transistors to 22 nanometers this year, 14 nanometers by 2014, and perhaps 8 nanometers by 2016.

Just two days ago, however, this Purdue University Report came out stating that some researchers over there have produced a functioning transistor consisting of a single phosphorus atom. I'm curious, what (if anything) may this mean for "Moore's Law," and the near future of end-user electronics?

At 100 picometers in diameter, a phosphorus atom is about 1/220 of what one now considers cutting-edge. May Intel—or one of its competitors—now drop its current roadmap, and instead pursue further research on this? Also, what does this mean for the possible advent of quantum computing?

And lastly, is this (as one of the sources in the article proclaims) really the lower limit? There are eight elements that are both smaller than phosphorus and solid at room temperature. Carbon, for instance, is less than half as massive as phosphorus—12 grams per mole and 31 grams per mole, respectively. Why not, one day, a carbon-atom transistor, or the like? Pine (talk) 06:59, 21 February 2012 (UTC)[reply]

This was reported in Australian media, in somewhat better prose - see for example http://www.abc.net.au/news/2012-02-20/team-designs-world27s-smallest-transistor/3839524. This article states that if the technology is ready for commercial exploitation by 2020, then Moore's Law will be matched. Researchers have been working on this for some time, and don't hold your breath expecting to see something in the shops. High-tech and a real achievement though it may be, with respect to large-scale commercial applications it is not even equivalent to Bell Labs researchers figuring out the physics of point-contact transistors during World War 2, from which (just stretching a little bit) you can trace Intel's latest products. Just because the active (i.e. switching) element is based on a single atom does not mean that a usable transistor is that small, any more than the size of current transistors is just the size of the gate or PN junction. You have to surround it with essentially clear field space many times larger. And don't forget it is so small that quantum effects are significant (read: a single transistor is unreliable), and it only works at temperatures requiring liquid helium as a coolant. In this context, your second question about a carbon-atom transistor is not actually a useful question. In any case, phosphorus is a donor atom for silicon or germanium. Carbon, analogous to Si or Ge, would be the substrate. Keit124.182.138.90 (talk) 07:57, 21 February 2012 (UTC)[reply]

Accuracy of the speed of light in relation to FTL Neutrinos

So, we've all heard the headlines about the faster-than-light neutrinos. Everyone's busy checking the timing and distance with super-accurate atomic clocks, GPS coordinates, etc., all for a difference of 60 nanoseconds, yet no one has mentioned the possibility of c itself being off.

Why has no one discussed the possibility that c is just 60 ns faster than previously thought? Perhaps neutrinos travel at the TRUE, faster c, and the vacuum (or the quantum foam of said vacuum) slows light down by a very small amount?

I'm merely wondering why this hasn't been discussed at all. Humans measuring c as 60 ns off seems a lot more plausible than faster than light travel with all the paradoxes that it introduces like breaking causality. — Preceding unsigned comment added by Ehryk (talkcontribs) 09:06, 21 February 2012 (UTC)[reply]

I'll just say that any science relating to tachyons, right now, is embryonic at best; namely, we haven't even established their existence with any reliable certainty. It may be that c was miscalculated by 60 ns (highly unlikely), or that somebody at CERN's calculations were off (somewhat likelier, although the results were the same every time).
This is just an anecdote, but it may (partly) answer your question:
As far as I know, Albert Einstein himself never said—one way or the other—that FTL travel was impossible; rather, that as an object approached c, its mass would increase to the point that it could never achieve said speed, but only travel at a speed arbitrarily close to it. Also, Niels Bohr (to whom—along with Ernest Rutherford—we owe the current, subatomic model) even went so far as to suggest that c may not be so much an "upper limit" as an "asymptote". Viz., an object can travel either faster or more slowly than c, just not at c.
At any rate, FTL neutrinos, right now, remain highly speculative, so please take all this with a grain of salt. Pine (talk) 09:31, 21 February 2012 (UTC)[reply]


If c were only important for the speed of light, this might be an option. However, the quantity is interrelated with other physical constants and formulas: for example, E=mc2, which governs radioactive decay, and c2 = 1/(ε0μ0), which ties c to electromagnetism. It's not that simple to redefine c; even our standard unit of length (the meter) is defined relative to the speed of light. I would think 'fixing' the neutrino problem in this way 'breaks' a whole lot of other scientific models. -- Lindert (talk) 09:36, 21 February 2012 (UTC)[reply]
This possibility was actually discussed here in November. See item #1. 98.248.42.252 (talk) 09:52, 21 February 2012 (UTC)[reply]

Falling balls

Since gravity is always the same, would a ball dropped from a certain height hit the ground at the same time as a ball rolling down an inclined plane from that height? Whoop whoop pull up Bitching Betty | Averted crashes 13:44, 21 February 2012 (UTC)[reply]

No. The rolling ball has to deal with rolling resistance and with more wind resistance than the vertically plummeting ball. --Tagishsimon (talk) 13:49, 21 February 2012 (UTC)[reply]
What about a frictionless inclined plane? Whoop whoop pull up Bitching Betty | Averted crashes 13:50, 21 February 2012 (UTC)[reply]
Well, no: consider a plane "inclined" so that it was a kilometer long and dropped by a meter (ignoring Earth's curvature). How would your puck (as it's not rolling, no reason to use a ball) cover that distance in the 0.45 s it takes to fall a meter? (The mathematical answer is that your speed at any given height is the same between sliding and falling, but since you obviously slide sideways, you're not moving down as fast.) --Tardis (talk) 13:59, 21 February 2012 (UTC)[reply]
(edit conflict) No. For a ball in free fall the potential energy is converted into kinetic energy, so you can calculate the vertical velocity from the height dropped (if you neglect air resistance). Rolling down the inclined plane you also need to provide rotational energy, and to reach the ground at the same time the speed down the plane would need to exceed the free-fall vertical component, hence you'd need more energy to get there at the same time. - David Biddulph (talk) 13:55, 21 February 2012 (UTC)[reply]
Or in force terms: the inclined plane is always pushing upwards on the ball (its reaction has an upward component), which slows the acceleration downwards. --BozMo talk 14:36, 21 February 2012 (UTC)[reply]
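The energy argument above can be checked numerically. A minimal sketch, assuming a 1 m drop, a 30° slope, and a uniform solid sphere (moment of inertia 2/5 m r²), with air and rolling resistance neglected:

```python
import math

g = 9.81          # m/s^2
h = 1.0           # drop height, m
theta = math.radians(30)

t_free = math.sqrt(2 * h / g)                       # straight down
L = h / math.sin(theta)                             # distance along the slope
t_slide = math.sqrt(2 * L / (g * math.sin(theta)))  # frictionless: a = g sin(theta)
t_roll = math.sqrt(2 * L / ((5 / 7) * g * math.sin(theta)))  # rolling: a = (5/7) g sin(theta)

print(f"free fall {t_free:.2f} s, sliding {t_slide:.2f} s, rolling {t_roll:.2f} s")
```

The sliding time is t_free/sin θ, and rolling adds a further factor of √(7/5) because some of the potential energy goes into spinning the ball, so the dropped ball always lands first.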

I see whoop whoop is back after a few days absence asking weird questions he probably already knows the answer to, or does he just have a giant book full of daft questions to ask at a geek party, and just trots out a couple each time to see if there is anyone out there to talk to? Stop it Sir. Wickwack60.230.221.101 (talk) 15:24, 21 February 2012 (UTC)[reply]

Discharge tubes

Source of Questions A and B
Source of Questions C and D

About discharge tubes:

A) Why is it that, for helium, neon, and argon, the central portion of the tube is brighter than the ends, but for krypton and xenon the ends are brighter than the central portion?

B) What does a discharge tube containing radon look like?

C) Why do hydrogen and deuterium have different spectra when their electron configuration is exactly the same?

D) Would the discharge tube for tritium look as different from the ones for hydrogen and deuterium as the deuterium tube is from the hydrogen tube? And what about the tubes for quadium and quintium?

Whoop whoop pull up Bitching Betty | Averted crashes 14:12, 21 February 2012 (UTC)[reply]

Answering (c): probably because those are two different pictures (H2, D2), and the camera settings are not the same. The hydrogen exposure time is shorter than the deuterium exposure time; thus, it appears dimmer. — Lomn 14:29, 21 February 2012 (UTC)[reply]
...Though for the sake of completeness I'll note that the atomic emission spectra of hydrogen-1 and deuterium really are slightly different (but you wouldn't expect to see it without sensitive instruments). The derivation of the Rydberg constant assumes a stationary nucleus—that is, that the center of the nucleus sits at the center of mass (barycenter) of the nucleus-electron system. In practice the nucleus is not infinitely massive with respect to the electron, and so there are (very small) adjustments to the apparent Rydberg constant made when using the Rydberg formula for each isotope of hydrogen. The lowest-energy Balmer line is centered at 656.3 nm for hydrogen, and 656.1 nm for deuterium, for instance. There are also appreciable differences in the ultraviolet spectra of the molecular H2 versus D2, though this contribution to the photographs should be small for discharges imaged through UV-opaque glass. TenOfAllTrades(talk) 15:40, 21 February 2012 (UTC)[reply]
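The isotope shift described above can be reproduced from the reduced-mass correction to the Rydberg constant. A sketch using standard mass ratios; note it yields vacuum wavelengths, whereas the ~656.3 nm figure usually quoted for hydrogen is the in-air value:

```python
R_INF = 10973731.568          # Rydberg constant for infinite nuclear mass, m^-1
ME_OVER_MP = 1.0 / 1836.15267  # electron/proton mass ratio
ME_OVER_MD = 1.0 / 3670.48297  # electron/deuteron mass ratio

def balmer_alpha_nm(me_over_M):
    """Vacuum wavelength (nm) of the n=3 -> n=2 Balmer line,
    with the Rydberg constant corrected for a nucleus of
    finite mass M: R_M = R_inf / (1 + m_e/M)."""
    R_M = R_INF / (1.0 + me_over_M)
    inv_lambda = R_M * (1.0 / 2**2 - 1.0 / 3**2)  # Rydberg formula
    return 1e9 / inv_lambda                        # metres -> nanometres

lam_H = balmer_alpha_nm(ME_OVER_MP)  # ~656.47 nm in vacuum
lam_D = balmer_alpha_nm(ME_OVER_MD)  # ~656.29 nm in vacuum
print(f"H: {lam_H:.3f} nm, D: {lam_D:.3f} nm, shift: {lam_H - lam_D:.3f} nm")
```

The heavier deuteron moves the line about 0.18 nm to the blue, matching the ~0.2 nm difference quoted above.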

event horizon

How far away would I have to be from the event horizon of a black hole with an event horizon of 100,000,000 light years so as not to get sucked into it? — Preceding unsigned comment added by 165.212.189.187 (talk) 15:47, 21 February 2012 (UTC)[reply]