Wikipedia:Reference desk/Science
Welcome to the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
February 10
black hole event horizon
hello, this is hursday. Does a black hole event horizon always have to be spherical, or can it be other shapes? (Dr hursday (talk) 01:07, 10 February 2010 (UTC))
- They have been known to share the topology of cup cakes and muffins. (Nurse ednesday.) —Preceding unsigned comment added by 58.147.58.179 (talk) 01:29, 10 February 2010 (UTC)
- I believe they are always spherical. Even spinning and/or electrically charged black holes have spherical event horizons. However, the spinning ones also have another kind of "horizon" called the "ergosphere" that takes on the shape of a flattened sphere. SteveBaker (talk) 03:04, 10 February 2010 (UTC)
- Probably a stupid theoretical question: Can't other nearby massive objects affect the shape of the event horizon? If I had a black hole that happened to be near another black hole, wouldn't the event horizon extend farther out on the side that faced away from the second black hole? And what about the point at the center of gravity of the two black holes: couldn't that point be outside the event horizon, even if it happened to be close enough to fall within a spherical event horizon? Or do I badly misunderstand? APL (talk) 03:26, 10 February 2010 (UTC)
- I don't have time for a long answer right now, but yes, interacting black holes can give rise to a transient event horizon that is distorted into other shapes beyond the simple sphere. Dragons flight (talk) 04:18, 10 February 2010 (UTC)
- Hello, this is hursday again. Yes, I was wondering if the presence of other black holes nearby might distort the gravitational field enough to change the event horizon. If so, could a particle that has passed the event horizon of one black hole then have another large black hole zoom by it very fast and free it? The particle would have to be inside the event horizon of the passing black hole, but the first black hole couldn't be inside that horizon or it would get sucked in as well. Also, just after a large mass is absorbed through the event horizon, wouldn't that shift the center of mass of the black hole away from the geometric center, since the mass inside the event horizon is not equally distributed? (~~~~)
- Replying to hursday and APL: as far as I know, once you stop tossing stuff into a black hole, the event horizon becomes spherical, period. There's no equatorial bulge, no distortion by nearby masses, not even any Lorentz contraction (the horizon is spherical in any coordinates). This doesn't apply if there are gravitational waves around or if black holes merge (both of those count as tossing stuff in). It also doesn't apply to the black hole's gravitational field (outside the horizon), which can be permanently asymmetrical. It's never possible to free a particle from inside the event horizon because the event horizon is defined in the first place as the boundary of the region from which things can't get out. This does imply that event horizons are unphysical; they know the future in a way that physical objects can't. -- BenRG (talk) 06:55, 10 February 2010 (UTC)
- Are you sure about the 'no lorentz contraction' bit? Dauto (talk) 13:55, 10 February 2010 (UTC)
- Pretty sure. The black hole metric restricted to the event horizon is just ds² = rs² dΩ². There's no dt² or du² or whatever, because the third direction is lightlike. If you take any spacelike slice through the full geometry and restrict it to the horizon, it becomes a constraint of the form t(Ω) = something, but it doesn't matter because t never occurs in the metric. Contrast this with a sphere in Minkowski space, where you have ds² = R² dΩ² − dt² on the sphere and ds² = R² dΩ² − t'(Ω)² dΩ² on the sliced sphere. That factor of 1 − t'(Ω)²/R² is where the Lorentz contraction comes from. -- BenRG (talk) 05:38, 12 February 2010 (UTC)
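BenRG's restricted metric can be read off the Schwarzschild line element; a sketch (my addition, standard notation with r_s the Schwarzschild radius; a careful treatment would use horizon-penetrating coordinates, since Schwarzschild coordinates are singular at r = r_s):

```latex
ds^2 = -\left(1 - \frac{r_s}{r}\right) dt^2
       + \left(1 - \frac{r_s}{r}\right)^{-1} dr^2
       + r^2 \, d\Omega^2
```

On the horizon r = r_s, so dr = 0 and the coefficient of dt^2 vanishes, leaving the induced metric ds^2 = r_s^2\, d\Omega^2: a round sphere of radius r_s, with the remaining third direction lightlike.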
- Slight clarification - you don't just have to have stopped tossing stuff in, but you have to never be going to toss anything in in the future. As I understand it, any object that is going to cross the event horizon at any point in the future creates a (usually very small) dimple in the event horizon from the moment the black hole comes into existence (as you say, event horizons know the future). --Tango (talk) 10:02, 10 February 2010 (UTC)
- Yes, the horizon is defined globally, not locally. And by globally we mean the mass distribution everywhere: past, present and future. The horizon is not a real physical entity. We might be crossing the horizon of a huge black hole right now as we speak, and not even be aware of it. Dauto (talk) 13:55, 10 February 2010 (UTC)
- Also, the singularity is hypothesized to be ring-shaped if the black hole rotates. ~AH1(TCU) 00:09, 14 February 2010 (UTC)
Blue gold
I read in a wiki article that blue gold exists. What does blue gold look like? (Dr hursday (talk) 01:10, 10 February 2010 (UTC))
- Bluish. See Blue gold (jewellery) (and the cited ref) for info about it. Sadly, I can't find an actual picture. See [1] ("thanks, google-image search!") Would be great to get a free set of pictures of colored gold for the article. DMacks (talk) 01:24, 10 February 2010 (UTC)
- Depending on the context, it could have several other meanings as well. What article did you see the term in? A quick google search shows that water is sometimes called "blue gold", especially in political or economic contexts (see this, for example). And there are numerous references to blue and gold. Buddy431 (talk) 01:38, 10 February 2010 (UTC)
EPR_paradox and uncertainty
if you make these twin particles with opposite whatever, then why can't you just measure the position of one with high certainty (sending the velocity of that one into uncertainty), which position the other half of the twin then "will have had" all along, while doing the same thing for the other one but for the velocity? Then you know the initial velocity and position of both (though the current position/velocity of each respectively is uncertain). Isn't that disallowed by Heisenberg's uncertainty principle (the HUP)? Also thank you for clearing up any misconceptions I might have had. —Preceding unsigned comment added by 82.113.106.88 (talk) 02:15, 10 February 2010 (UTC)
- Well, for one, the uncertainty principle isn't directly relevant to the EPR paradox, which has been experimentally tested and resolved (see Bell's theorem). The article ought to refer to the observer effect. The uncertainty principle is for continuous values, not the discrete quantities (photon polarizations) of the EPR paradox. However, to address why the uncertainty principle can't be abused in such fashion:
- In short, the product of the uncertainty of the position and the uncertainty of the momentum must be at least some positive value. If you were to set one uncertainty to zero (say position), then the other (possible momentum) would span an infinitely large range of values, or more accurately, would not be defined. It's not simply sufficient to say that you can't measure the definite position and momentum of an object at the same time. Rather, it is better to say that particles do not have definite positions or momentums to be measured, merely probabilities. — Lomn 04:14, 10 February 2010 (UTC)
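Lomn's point about the product of uncertainties can be illustrated numerically; a minimal sketch (my own example, using the standard value of the reduced Planck constant):

```python
# A numeric illustration (my own sketch): the bound dx * dp >= hbar / 2
# forces the momentum spread to blow up as the position spread shrinks.
HBAR = 1.054571817e-34  # reduced Planck constant, in J*s

def min_momentum_uncertainty(dx):
    """Smallest momentum spread (kg*m/s) allowed for a position spread dx (m)."""
    return HBAR / (2 * dx)

for dx in (1e-9, 1e-10, 1e-12):
    print(f"dx = {dx:g} m  ->  dp >= {min_momentum_uncertainty(dx):.3g} kg*m/s")
# Letting dx -> 0 sends the bound on dp to infinity: a particle with a
# perfectly defined position has no defined momentum at all.
```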
- Yes, but in this case there are two twin particles which originated at the same location and left it with exactly opposite (times -1) velocities. I see what you mean about one of them, call it A: if you know A's position perfectly, its momentum could span an infinitely large range of values. So let's do that. Meanwhile, if you know B's momentum perfectly, then B's location would span an infinite range of values. But combining A and B, you can get the original location AND velocity of the particles at the starting point... Can you address this? Everything you wrote above seems to be about one (either A or B) of the particles, rather than both... 82.113.106.91 (talk) 12:34, 10 February 2010 (UTC)
- At the level involved, however, you don't know that the particles originated at exactly the same place (nor exactly where that place is), nor that they left with exactly opposite momentums (even where c is a given, the direction is not known with certainty). The Schrödinger equation, which describes the nature of a particle's position and momentum, is what illustrates that these concepts are not distinct values that can be known. It's not that you can't measure them because of some limitation in measurement -- it's that they don't exist as single answers to be measured. For an introductory book on the subject, I recommend How to Teach Physics to Your Dog. — Lomn 13:40, 10 February 2010 (UTC)
- Your question is exactly the question that the original EPR paper asked. The obvious interpretation of the uncertainty principle is that the physical process of doing a measurement on a system must alter it in some way. For example, if you bounce a large-wavelength photon off the system, you alter the momentum only slightly, but get only a vague idea of the position; if you use a small-wavelength photon, you get a better idea of the position, but alter the momentum a lot. EPR had the idea (obvious in hindsight) of making two measurements that are separated in space so that no physical influence from one can reach the other, even at light speed. If the uncertainty principle is a side effect of physical "measurement interference" then it has to fail in this case. If it works in this case, it must work by some weird mechanism that's unrelated to ordinary physical interaction, which Einstein famously dismissed as "spooky action at a distance". Bell was the first to point out that the correctness of the uncertainty principle in this situation could actually be tested in real life. Subsequent real-life tests have concluded that it does hold—i.e., the bouncing-photon explanation of the uncertainty principle is wrong and the "spooky action at a distance" is real. The philosophical implications of this still aren't understood. -- BenRG (talk) 06:16, 10 February 2010 (UTC)
- Thank you for the answer. Can you explain in simple terms to me what the actual "spooky" results are? What happens when you learn that of A .... B, which had been at one point but now have opposite velocities, A is moving due left at exactly "1230 units per second" (hence B will have been moving with that velocity all along), meanwhile B is in such-and-such a place, exactly? Couldn't you combine the 1230 units per second and the such-and-such place to get both components (momentum and location) perfectly? What actually happens when you measure the velocity of one half of the particles and the location of the other, with a high degree of accuracy? 82.113.106.91 (talk) 12:40, 10 February 2010 (UTC)
- They are hard to break down in simple terms, unfortunately. (I've tried to do it for a few years now in teaching non-science undergraduates.) The less hand-wavy approach is to show how the statistics work. Mermin's classic article on this (referenced in the article on Bell's theorem) does this a bit; this page is derived from it and has a similar line of argumentation. Science people find that a useful way to simplify, but those less quantitatively inclined do not, in my experience. The basic answer is that your "I can have all the information approach," which was what Einstein thought, turns out not to work out correctly if you run your test over time. There is a testable difference between the quantum outcome and the EPR prediction.
- Another approach is to jettison all the actual explanation and just explain why it is spooky. Basically what you find is that when you measure one of the properties, it does modify the other property you are measuring in the paired particle even if there is no "communication" between the particles (in other words, even if there is no reason that it should). That's the "spooky" bit. One professor I had put it as such: Imagine there were two twins, separated by an ocean. Both happen to enter a bar at the exact same time and order wine. When one orders red wine, the other one, instantaneously, knows they want white wine. If the first had instead ordered white, the second would instead order red. There is no communication between the two twins, no way that we know of to "know" what the other is ordering, yet each and every time their orders are exactly the opposite of their twin's. Spooky, no? They (your particles) are truly entangled—what you do to one does affect the other—even if there doesn't seem to be any obvious reason why that should be. Again, this is testable (and is the basis for quantum cryptography). Unfortunately getting a deeper understanding of how this works is non-trivial. (I understand maybe half of it, having sat through quite a few lectures on it at this point.) --Mr.98 (talk) 13:33, 10 February 2010 (UTC)
- Expounding on the spooky action: the part that leaves it consistent with the rest of physics (specifically, with c being the speed limit of information transfer) is that nothing can be learned by observing only the local one twin. I'd suggest that it's more accurate if you remove the bit about "both entering the bar at once" and instead go with this:
- Imagine there are two twins, separated by an ocean, who both like wine. When one decides that he prefers white wine, the other's preference immediately becomes for red wine. When one decides that he prefers red, the other immediately prefers white. One twin enters a bar and orders red, and immediately, faster than the speed of light, the other prefers white. There's your spooky action. Now to fix information transfer: A scientist follows around Twin B, periodically asking him what wine he would prefer. The scientist doesn't know when Twin A enters a bar and orders wine. When Twin B answers "white", is it because Twin A just ordered red, or is it because Twin B had to answer something (because even if he could prefer either, he must answer one or the other)? The two are indistinguishable unless some other means of communication (which is limited by the speed of light) is used to inform the scientist about Twin A. — Lomn 14:17, 10 February 2010 (UTC)
- The point is that the spooky action cannot be used to send a message. Dauto (talk) 15:36, 10 February 2010 (UTC)
- Which means it doesn't violate special relativity, but it is still spooky! --Mr.98 (talk) 16:40, 10 February 2010 (UTC)
- What you're describing here is Bertlmann's socks, which is easily explained classically. I know that you're trying to use an analogy that a layperson will understand, but I don't think it helps to describe "quantum weirdness" with an example that doesn't actually contain any quantum weirdness. The gambling game I describe below is a genuine example of nonclassical correlation, not an analogy, and I think it's simple enough to replace Bertlmann's socks in every situation. -- BenRG (talk) 20:54, 10 February 2010 (UTC)
You guys are not getting my question. Say the twins had two properties, like position and velocity, that you couldn't simultaneously know. As an analogy, say they had a height and a weight, but the more exactly you measured the height, the less sure you could be of the weight. However, you do know that their height and weight are the same. So, you just measure the height of one very exactly (at the moment you do that, its weight becomes anything from 0.0000000001 to 1000000000 units and more - you've lost all certainty); let's say you get 5'10". Then you measure the weight of the OTHER with high certainty. Let's say you get 150 pounds. That means the height of that one becomes anything from 1 mm to 1 km and more. However, you do know that well before your measurements, the height and weight of each was the same. So, you can deduce that the height of each was 5'10.56934981345243432642342 exactly, to arbitrary precision, and that the weight of each was 150.34965834923978847523 exactly, also to arbitrary precision; this violates the idea that the more precisely you know one, the less precisely you know the other. It depended on the fact that the twins were guaranteed to be the 'same'. It's not about communication; you guys are misunderstanding. It's not faster-than-light communication that worries me. It's that you can just compare the two readings and come up with the above high-precision reading for both components. Now without the analogy: it was stated that you can get particles that have the exact opposite velocity when they leave each other from a specific place (entangled or not). My question is, using the analogy of height and weight, why can't you learn the height of one, and the weight of the other, after they have left their common source at the EXACT opposite 'height' (one is guaranteed to grow downward, below the ground or something)? Is my assumption that two particles can be guaranteed to leave with opposite velocity from a fixed position wrong?
Or what am I missing? It seems to me you could deduce the exact velocity /and/ position of both, by measuring the other component in the other.
Please guys, I'm not asking about information transfer! I'm asking, when you physically bring the two notepads with the readings next to each other, you now have both components of the original particles... how is that possible? Thanks for trying to understand what my question actually is, instead of answering more of what I am not asking. Thank you. 82.113.106.99 (talk) 16:49, 10 February 2010 (UTC)
- Quote: "Is my assumption that two particles can be guaranteed to leave with opposite velocity from a fixed position wrong?" Yes, it's wrong. In order for them to have exactly opposite velocities the pair would have to be exactly at rest before, and in order for you to be able to pinpoint the position of A after measuring the position of B you would have to know exactly where the pair was before. So you would need to know exactly both where the pair was and how fast it was moving before the split, and Heisenberg's uncertainty principle doesn't allow that. Note that this has nothing to do with the EPR paradox. It's a completely separate thing and doesn't require entanglement or spooky action at a distance. Dauto (talk) 17:53, 10 February 2010 (UTC)
- Thank you, you're exactly right: my question had nothing to do with EPR, or entanglement or spooky action at a distance. The only connection was the experimental setup, getting particles explosively popping off of each other... Thank you for your answer, I think it makes everything very clear. Sorry that I didn't phrase my question more explicitly, this is very hard stuff for me :). 82.113.106.99 (talk) 18:09, 10 February 2010 (UTC)
- (ec) I think we're getting your question—but I don't think you're quite getting the answer. (Which I sympathize with, because it is not the easiest thing to explain.) The point of Bell's inequality is that you do not actually get total information about the particles—that your measurement of one of them does affect your measurement of the other, and this is experimentally testable (it is not just a philosophical distinction). The issue of "communication" comes in because you then naturally say, "so how is that possible if the particles are different and unconnected—how could measuring one affect the other, if there is no connection between them?" --Mr.98 (talk) 18:00, 10 February 2010 (UTC)
What's wrong with this loophole?
By the way, what's wrong with this loophole? I mean, where is the flaw in that math? Thanks. 82.113.106.99 (talk) 17:12, 10 February 2010 (UTC)
- It's an issue of interpretation of what Bell's inequality means for quantum mechanics. You're going to have to understand Bell's inequality though before you are going to understand the different interpretations of local hidden variable theory... it is not a problem with the math, it is a question of interpreting the results. Bell's theorem rules out certain types of theories but not others. --Mr.98 (talk) 18:00, 10 February 2010 (UTC)
- Here's a refutation of the paper, for what it's worth. The paper's conclusion is obviously wrong. What its publication really shows is that there are a lot of physicists who don't understand Bell's theorem.
- Here's an example of what people like Hess and Philipp are up against. Imagine two people are playing a gambling game that's a bit like The Newlywed Game. They play as allies against the house. They're allowed to discuss a strategy first; then they're moved into separate rooms where they can't communicate with each other. Each one is then asked one of three questions (call them A, B, and C) and must give one of two answers (call them Yes and No). Then they're brought back together and receive a payoff depending on whether their questions and answers were the same or different. The payoffs are shown in the table below. "−∞" is just a penalty large enough to wipe out any other gains they've made and then some.

Game payoffs:
            same Q   diff. Q
  same A       0       −2
  diff. A     −∞       +1
- What's the best possible strategy for this game? Any strategy must, first and foremost, guarantee that they'll give the same answer when asked the same question. This means that there can't be any randomness; they have to agree beforehand on an answer to every question. There are eight possible sets of answers (NNN, NNY, NYN, …, YYY). NNN and YYY are obviously bad choices, since they will always lose when the questions are different. The other six choices all give different answers with probability 2/3 when the questions are different. Because of the way the payoffs are set up, this leads to the players breaking even in the long term. So it's impossible to do better than break even in this game.
- But in a quantum mechanical world, the players can win. In the initial discussion stage, they put two electrons in the entangled spin state (|↑↑〉 + |↓↓〉) and each take one. Then they measure their electron's spin in one of three different directions, mutually separated by 120°, depending on the question, and answer Yes if the spin points in that direction or No if it points in the opposite direction. According to quantum mechanics, measurements along the same axis will always return the same value, while measurements along axes separated by 120° will return different values with probability 3/4, which is more than 2/3. Thus the players will win, on average, 1/4 of a point on each round where they're asked different questions.
- Can you come up with a classical model that explains this? You are allowed to model each electron with any (classical) mechanical device you like; it may have an internal clock or an internal source of randomness if you want. It takes the measurement angle as input and returns Up or Down as output. The two electrons are not allowed to communicate with each other between receiving the question and returning the answer (that's the locality requirement).
- Hess and Philipp think that they can win the game by putting an internal clock in their electron devices. I hope it's clear now that they're wrong... -- BenRG (talk) 20:54, 10 February 2010 (UTC)
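The arithmetic in BenRG's game can be checked directly; a small sketch (my addition, assuming each player's question is drawn uniformly and independently each round). Enumerating the eight deterministic classical strategies shows the players break even at best, while the quantum statistics quoted above give +1/4 per different-question round:

```python
# A quick arithmetic check of the game above (my own sketch; assumes the
# two questions each round are drawn uniformly and independently).
import itertools
from math import cos, radians

def payoff(same_q, same_a):
    # Payoff table from the thread.  The -inf penalty never fires for
    # deterministic shared strategies, which always agree on same questions.
    if same_q:
        return 0 if same_a else float("-inf")
    return -2 if same_a else 1

# Classical play: the only safe strategies commit to fixed answers for A, B, C.
best = float("-inf")
for answers in itertools.product([True, False], repeat=3):
    total = sum(payoff(qa == qb, answers[qa] == answers[qb])
                for qa, qb in itertools.product(range(3), repeat=2))
    best = max(best, total / 9)  # expectation over the 9 question pairs
print("best classical expected payoff per round:", best)  # breaks even: 0.0

# Quantum play: same axis -> always the same answer; axes 120 degrees apart ->
# the same answer with probability cos^2(60 deg) = 1/4, per the thread.
p_same = cos(radians(60)) ** 2
ev_diff_q = p_same * (-2) + (1 - p_same) * 1   # +1/4 on different-question rounds
ev_per_round = (6 / 9) * ev_diff_q             # questions differ 2/3 of the time
print("quantum expected payoff per round:", ev_per_round)  # about +0.167
```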
Burning down stone buildings?
Reading Herostratus makes me wonder: how is it possible to burn a stone building? I get the impression that ancient Greek temples were made of solid marble, not wood faced with stone, and of course I know that marble doesn't generally burn. Nyttend (talk) 05:24, 10 February 2010 (UTC)
- Ancient Greek buildings certainly had a framework and detailing done in marble or other stone, but there was also likely lots of flammable stuff (wood, tapestries, decorations, etc.) all over them, so they could very easily burn. Furthermore, thermal stresses caused by the fire could cause the integrity of the marble structure to fail, ending up with a pile of rubble. Said pile would likely be recycled for use in other buildings, so in the end there wouldn't be much of the "burned" building left. --Jayron32 06:09, 10 February 2010 (UTC)
- Cyrus, the King of Persia, commanded that a row of wood be built between every three rows of stone in the walls of the Second Temple in Jerusalem, in order to make it easier to burn down the Temple in the event of a revolt. (Ezra 6:4; Babylonian Talmud, Rosh Hashanah 4a) Simonschaim (talk) 08:17, 10 February 2010 (UTC)
- Roofs would often have been made of wood as well. Large spans like a roof covered only with stone are not simple. The Pantheon was (and is still) considered a marvel and its roof is concrete. 1 Kings 6 gives details of the large amount of wood (and where it was used) in the First Temple at Jerusalem. 75.41.110.200 (talk) 14:42, 10 February 2010 (UTC)
- Yes, but the gold-plated cedar wood isn't typical of Greek temples; thanks for the point about the roof, however. Nyttend (talk) 15:52, 10 February 2010 (UTC)
- I think IP 75.41. means Parthenon rather than Pantheon. -- Flyguy649 talk 16:26, 10 February 2010 (UTC)
- No, he means the Pantheon, Rome, in many ways more impressive than the Parthenon. Mikenorton (talk) 16:31, 10 February 2010 (UTC)
- I don't think anyone except the Romans used concrete in antiquity; nobody else could have built something with a concrete roof. Nyttend (talk) 21:10, 10 February 2010 (UTC)
- Greek temples were built of wood until around the 6th century BC (see Temple (Greek)#Earliest origins) - although there had been a lot of stone temple construction by Herostratus' time, there would still have been some wooden temples around (one theory suggests that wooden elements were often replaced one at a time, as required). Indeed, an earlier Temple of Artemis was built of wood, but it was rebuilt as a grand, fairly early example of construction in marble. However, it had wooden beams in the roof and contained a large wooden statue. Warofdreams talk 16:21, 10 February 2010 (UTC)
- According to the lime mortar article, it is made of calcium hydroxide, which decomposes at 512 °C. Silica sandstone melts at about 1700 °C, but I can't tell the softening point. Wood fires can easily reach the melting point of copper (1085 °C), but not iron (1538 °C). So the mortar (which the Romans used, and probably the Ancient Greeks as well) would fail and the building is likely to collapse. However, this is not my area of expertise, so I'm just taking an educated guess. CS Miller (talk) 23:55, 10 February 2010 (UTC)
- Also, marble (and chalk) is calcium carbonate, which decomposes at around 840 °C. Do you know what stones the building was made from? CS Miller (talk) 00:59, 11 February 2010 (UTC)
- The article says that the temple he burned was built of marble. Is that what you mean? Nyttend (talk) 05:07, 11 February 2010 (UTC)
- Oops, didn't see that you stated that the building was made of marble. A wood fire can reach the decomposition temperature of lime mortar and marble. Both decompose to calcium oxide (quicklime). These decompositions are endothermic, meaning that heat energy is absorbed, not released, during the reaction. Calcium oxide has about half the molar volume of calcium hydroxide and calcium carbonate (CaO is 16.74 mL/mol, calcium hydroxide (lime mortar) is 33.51 mL/mol, calcium carbonate in calcite form is 36.93 mL/mol), so the solid shrinks to roughly half its volume on decomposition while the carbon dioxide or water vapour escapes as gas, which destroys its structural integrity. So in short, a wood fire could reduce a marble building to a large pile of quicklime, if there was enough wood. CS Miller (talk) 14:24, 14 February 2010 (UTC)
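The molar-volume figures quoted above can be rederived from molar masses and densities; a quick sketch (my own calculation, standard handbook values):

```python
# Sketch (my own numbers, standard handbook values): molar volumes of the
# calcium compounds discussed above, computed as molar mass / density.
compounds = {
    # name: (molar mass in g/mol, density in g/cm^3)
    "CaO (quicklime)":        (56.08, 3.34),
    "Ca(OH)2 (lime mortar)":  (74.09, 2.211),
    "CaCO3 (calcite/marble)": (100.09, 2.71),
}
molar_volume = {name: m / rho for name, (m, rho) in compounds.items()}
for name, v in molar_volume.items():
    print(f"{name}: {v:.2f} mL/mol")

# CaO occupies roughly half the volume per mole of calcium that calcite does,
# so decomposition shrinks and crumbles the solid (the CO2 escapes as gas).
ratio = molar_volume["CaO (quicklime)"] / molar_volume["CaCO3 (calcite/marble)"]
print(f"CaO / CaCO3 volume ratio: {ratio:.2f}")  # about 0.45
```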
heat pump
Are chillers and heat pumps the same? —Preceding unsigned comment added by 180.149.48.81 (talk) 08:43, 10 February 2010 (UTC)
- See chiller and heat pump. The answers are there.--Shantavira|feed me 13:08, 10 February 2010 (UTC)
- My house has one unit that is either considered to be a heat pump or an air conditioner depending on which way it's driven. It cools the house by heating the back yard and heats the house by air-conditioning the back yard. So in at least some situations, they are the same thing. However, there are air conditioners and heat pumps that are optimised for one specific function that are not intended to be reversible - so, in general, they are not necessarily the same thing. I suppose it would be most correct to say that a "chiller" is one application for heat pump technology. SteveBaker (talk) 20:14, 10 February 2010 (UTC)
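SteveBaker's reversible-unit point can be made quantitative with the ideal (Carnot) coefficient of performance, which bounds both modes of the same machine. A sketch with illustrative temperatures (the specific temperatures are my assumption, and real units achieve only a fraction of the ideal figures):

```python
# Ideal (Carnot) coefficient of performance for a reversible heat pump.
# Temperatures are in kelvin; the example values are illustrative assumptions.

def cop_heating(t_hot_k: float, t_cold_k: float) -> float:
    """Heat delivered per unit work, running as a heat pump."""
    return t_hot_k / (t_hot_k - t_cold_k)

def cop_cooling(t_hot_k: float, t_cold_k: float) -> float:
    """Heat removed per unit work, running as a chiller/air conditioner."""
    return t_cold_k / (t_hot_k - t_cold_k)

# Winter: house at 20 C (293 K), back yard at 5 C (278 K).
print(cop_heating(293.0, 278.0))   # ~19.5 ideal; real units manage more like 3-4

# Summer: house at 25 C (298 K), back yard at 35 C (308 K).
print(cop_cooling(308.0, 298.0))   # ~29.8 ideal

# Same hardware either way: for one temperature pair,
# cooling COP = heating COP - 1.
```

The last identity is why one reversible unit can serve as both: it always pumps heat from the cold side to the hot side, and only the side you care about changes.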
soft iron
I was reading about transformers and toroids and came to the conclusion that both are nearly the same: an emf is induced in the second coil when current passes through the first coil. One more similarity is eating my mind: why is soft iron used in both of them to wind the wire on, and not some other metal or ceramic? Does soft iron have any role in producing the emf or anything? --Myownid420 (talk) 09:23, 10 February 2010 (UTC)
- We have a brief explanation at Soft_iron#Common_magnetic_core_materials. I found this, as I hoped, by typing soft iron into the search bar. If you don't understand the explanation there, or want more detail, feel free to ask a more specific question. 86.183.85.88 (talk) 13:52, 10 February 2010 (UTC)
- The article says nothing about its "softness" (it would make a lousy pillow). Is it the carbon content? Is it really "softer" in a physical sense than steel? Is pig iron from an iron furnace "soft" before it becomes "hard" by being worked or alloyed with carbon? How do wrought iron, cast iron, mild steel, tool steel, and stainless steel compare in "softness"? If I want to buy "soft iron", where would I go and what product would I ask for? I only know that steel retains magnetism better than a nail, which is supposed to be "iron." Edison (talk) 16:18, 10 February 2010 (UTC)
- You could try Googling "soft iron" and looking at the third result. --Heron (talk) 19:24, 10 February 2010 (UTC)
- I suggest it is safer to give the actual link since Google search results order can vary. Cuddlyable3 (talk) 20:31, 10 February 2010 (UTC)
- The "third result" is for "curtain rods with a 'soft iron' finish." I expect that such rods are steel, which is strong and cheap, and that the "soft iron" only refers to the appearance. Edison (talk) 05:05, 11 February 2010 (UTC)
- The third result for me is a company that supplies science equipment to schools advertising that it sells packs of affordable soft iron rods for making electromagnets. I'm a little surprised that, with your username and the personality you choose to project, you appear to know nothing about soft iron, even 2 years after a similar conversation. 86.183.85.88 (talk) —Preceding undated comment added 16:09, 11 February 2010 (UTC).
- Yes, that's the one I meant. Sorry, I forgot that Google is localised. --Heron (talk) 18:44, 11 February 2010 (UTC)
- Here are some relevant articles, with their Wikipedia spellings. A torus is a doughnut shape. A toroid is also round but more like a sawn-off piece of pipe. Two windings on a toroid- or torus-shaped core make up a toroidal transformer; it is exactly a transformer, but not all transformers are toroidal.
- Materials such as steel that are good for making permanent magnets are poor choices of core material for a power transformer because their retention of magnetism after a field is removed causes power loss to heating by Hysteresis. "Soft" iron[2] has low hysteresis and is almost pure iron. It is a cheap low-grade structural metal and is also used to make Clothes hangers. Cuddlyable3 (talk) 20:31, 10 February 2010 (UTC)
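The power wasted to hysteresis, which Cuddlyable3 describes, can be sketched with the classical Steinmetz estimate. The coefficients below are rough illustrative assumptions, not measured material data; real figures depend on the exact alloy:

```python
# Classical Steinmetz estimate of hysteresis loss in a magnetic core:
#   P = eta * f * B_max**1.6 * volume
# where eta is a material constant. The eta values used below are rough
# illustrative assumptions chosen to show the soft-vs-hard contrast.

def hysteresis_loss_w(eta: float, freq_hz: float,
                      b_max_t: float, volume_m3: float) -> float:
    """Approximate hysteresis power loss in watts."""
    return eta * freq_hz * (b_max_t ** 1.6) * volume_m3

core = 1e-3  # one litre of core material, in m^3

soft_iron = hysteresis_loss_w(eta=250.0, freq_hz=50.0,
                              b_max_t=1.0, volume_m3=core)
hard_steel = hysteresis_loss_w(eta=2500.0, freq_hz=50.0,
                               b_max_t=1.0, volume_m3=core)

print(soft_iron, hard_steel)
# The low-hysteresis ("soft") material wastes far less power per cycle,
# which is why permanent-magnet steels make poor transformer cores.
```

Since the loss scales linearly with frequency, a core that is merely warm at 50 Hz can be unusable at higher frequencies, which is one reason ferrites and laminations exist.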
- I really would expect clothes hangers to be made of recycled steel rather than elemental iron. Edison (talk) 04:44, 11 February 2010 (UTC)
- Plus ça change. You could get soft iron from a company that makes soft iron for the cores of electromagnets, for example. You can get iron of varying degrees of hardness, mostly fairly hard, by buying some iron nails. If you buy them from a variety of companies, you'll probably be able to demonstrate to yourself that they have differing retentivities. 86.183.85.88 (talk) 23:00, 10 February 2010 (UTC)
- What specialized company which supplies transformer cores to ABB or GE would bother to sell a pound of their product to a hobbyist? Seems unlikely. A certain alloy number or name would be a more useful thing to look up online. Salvaging a core from a transformer or motor is another idea with better prospects. I have made a madly powerful electromagnet by sawing apart a motor stator. Edison (talk) 04:59, 11 February 2010 (UTC)
- This does seem to be a typical example of something that everyone who learns about this stuff in the real world knows, but the Internet population doesn't seem to bother its head about. This paper from the late 50s gives an idea of the basics. Perhaps it is neither basic enough, nor new enough, to feature in pop science? 86.183.85.88 (talk) 23:35, 10 February 2010 (UTC)
The Future
In your opinion(s), what/which areas of active scientific research will make the greatest impact on our technological development, say, by 2060? In my uninformed opinion I would think nanotech, but that's just me... —Preceding unsigned comment added by 173.179.59.66 (talk) 10:50, 10 February 2010 (UTC)
- We don't do opinions, we do references. You can try going through this list of people that make a living out of giving opinions on the subject and see what they've been saying. I, however, would say that it is impossible to predict technological development 50 years in the future with any meaningful accuracy. I'd say the biggest technological development in the last 50 years is the personal computer (and, more recently, various other devices that can perform the same functions - e.g. mobile phones) and I don't think many people predicted the importance of PCs in 1960 (the microchip had only just been invented and no-one had put it in a computer yet, and that was one of the major developments that made small-scale computers possible). --Tango (talk) 11:09, 10 February 2010 (UTC)
- In fact, I think most people in 1960 would have predicted nuclear reactors as the most important development over the next 50 years, and it turned out they played a very minor role. --Tango (talk) 11:14, 10 February 2010 (UTC)
- Yes, and extensive human space travel (e.g. moon colonies and the like). And jetpacks. AndrewWTaylor (talk) 12:01, 10 February 2010 (UTC)
- Oh, yes, I forgot about space travel. I think nuclear energy was expected to have a bigger impact overall, though. --Tango (talk) 12:05, 10 February 2010 (UTC)
- I agree nuclear power didn't live up to the predictions of energy too cheap to be worth metering, but it still delivers 15% of the world's electricity. If it ever becomes a practical reality, fusion power will have a huge impact. Room temperature superconductors would also have a huge impact, but are a more remote possibility. See our Timeline of the future in forecasts for some more ideas. Gandalf61 (talk) 12:06, 10 February 2010 (UTC)
- I was about to say - fusion power would have huge consequences. Noodle snacks (talk) 12:13, 10 February 2010 (UTC)
- Energy too cheap to meter was one part of predictions about the atomic age, the other was small-scale reactors in individual devices (nuclear powered cars, for example). The former happened to a small extent, but not really a significant one (nuclear energy is cleaner than fossil fuels, but not really cheaper), the latter didn't happen at all (we have nuclear submarines, which existed before 1960, but that's it). Fusion power would probably have a major impact, but there is no way to know if it will be invented in the next 50 years. --Tango (talk) 12:22, 10 February 2010 (UTC)
- To be sure, scientists, politicians, and industrialists all spent a lot of time trying to distance themselves from the "too cheap to meter" fantasy (which frankly I doubt Strauss really thought would be about fission, anyway). The difficulty is that the popular imagination grabs onto such bountiful ideas and holds them tightly, no matter what cautions are put up by those more informed. Similarly the reactors in cars, etc., which even the more optimistic scientists (e.g. Edward Teller) recognized as having likely insurmountable technical difficulties, along with potentially just being bad ideas from the standpoint of safety. Even the more optimistic of the realistic possibilities—e.g., that widespread nuclear power would allow you to just convert your cars to being electric—was simply not as "interesting". --Mr.98 (talk) 16:09, 10 February 2010 (UTC)
- Checking Google Book Search for "future" and "technology" in 1959-1961 publications, there is discussion of electrification in developing countries, nuclear fusion, space exploration, "atomic cars," the "energy cell" for portable electricity, hydrogen fuel produced by solar energy and used in fuel cells, satellite reconnaissance, "roller roads" loaded with cars on carriers, moving at 150 mph by 1970, and negative ions introduced in the air in our living spaces. One Nobel Prize-winning scientist, Nikolai Nikolaevich Semenov (1896–1986), in 1959 predicted that in the future the electricity available per person worldwide would increase from 0.1 kilowatt to 10 kilowatts, allowing greater comfort and productivity, and that nuclear fusion would increase the power by another factor of ten, allowing climate control. He predicted that by 2000 synthetics would largely replace natural materials (fiber, animal products, wool, metal) not just in clothing but in buildings and machines. Automation would shorten the workday to 3 or 4 hours (true enough now, if you average in the under/unemployed), irrigation and technology would allow ample food for everyone, and understanding of "heredity" would revolutionize medicine. Edison (talk) 16:12, 10 February 2010 (UTC)
- He deserves his Nobel prize - those are some good predictions. Not perfect, but good. --Tango (talk) 16:30, 10 February 2010 (UTC)
- Among the difficulties with futurology is that 1. current trends often do not extrapolate (for various reasons), 2. new things come up that are unexpected, and 3. even when current trends do extrapolate, it's very hard to figure out what will really matter. Computers are a great example—in 1950, you could definitely have extrapolated that computers would become more and more prevalent. You might have figured out something akin to Moore's law (which came 15 years later, but that doesn't change too much), which, if true, would lead you to think that computer processors would be getting incredibly powerful in 50 years. On the other hand, you probably wouldn't have foreseen some of the other important developments in, say, LCD technology and battery technology which makes these things so beautiful, cheap, and portable. And even if you did expect computers to become a "big thing", anticipating how they would be used by individuals is still a massive jump. I've read wonderful articles from as late as the 1970s that predicted that home computing would be a big thing, but what would people use them for? The article authors (all avid home computer people) either imagined way too much (artificial intelligence, etc.) or way too little (people would use their computers almost exclusively for spreadsheets and word processing). Now multiply all the ways you can go wrong by the number of possible authors and commentators, and you have some people who will look prescient in 50 years, but most will look off to a fairly great degree. It's fairly impossible to know which of the commentators today are going to be the right ones, retrospectively, and focusing on those people who were "correct" by some interpretations is an exercise in cherry-picking. --Mr.98 (talk) 17:41, 10 February 2010 (UTC)
- Always keen to point out my little quip about Wikipedia, I'd suggest that many futurists of the last century predicted that a machine would exist that would serve as a great encyclopedic compendium to democratically distribute information to the masses. H.G. Wells predicted it in 1937, Vannevar Bush predicted it in 1945, and Doug Engelbart demonstrated one in 1968. The exact incarnation of Wikipedia takes its present form as a distributed, internetworked set of digital-electronic-computers with a graphic terminal interface - but even in 1945 that idea was pretty well predicted (the significance of digital information was not yet understood until at least the 1960s). Yet, the concept of the usage-case and the societal impact that it would have is the really innovative and predictive leap. Whether the implementation is by photographic plate or by integrated transistor circuit is less important (in fact, how many users of the internet could tell you which technology actually makes their computer run? Therefore, it's irrelevant to them). So, when you look back at prior futurists' claims, you have to distinguish between predicted concepts and predicted implementations - implementations are for engineers to deal with, while concepts are the marks of revolutionary leaps of human progress. "Quietly and sanely this new encyclopaedia will, not so much overcome these archaic discords, as deprive them, steadily but imperceptibly, of their present reality. A common ideology based on this Permanent World Encyclopaedia is a possible means, to some it seems the only means, of dissolving human conflict into unity." ... "This is no remote dream, no fantasy. It is a plain statement of a contemporary state of affairs. It is on the level of practicable fact. 
It is a matter of such manifest importance and desirability for science, for the practical needs of mankind, for general education and the like, that it is difficult not to believe that in quite the near future, this Permanent World Encyclopaedia, so compact in its material form and so gigantic in its scope and possible influence, will not come into existence." Nimur (talk) 18:12, 10 February 2010 (UTC)
- The question isn't whether people could have predicted things (I would personally say that the examples you are mentioning are, in many ways, fairly selectively chosen anyway—you're ignoring all the many, many ways that they have nothing to do with Wikipedia whatsoever in focusing on the few, narrow ways they have anything to do with it), because in a world with enough people speculating about the future, some of them are going to be right even if we assume the speculation is entirely random. The question is how you can possibly have any confidence in picking out the good predictions before you know how things worked out. It's easy to trace things backwards—it's pretty difficult to project them forward. And if you leave out the error rate (the number of things these same people predicted which did not come to pass), then their predictive ability becomes much less dodgy. --Mr.98 (talk) 18:30, 10 February 2010 (UTC)
- Firstly, making a long list of all the possible things there might be in 25 to 50 years isn't "predicting the future" - that's a scattergun approach that's guaranteed to score a few hits - but if you don't know which predictions are the right ones and which ones will be laughable by then - then you haven't predicted a thing. But Nimur's example is a good one: 70 years ago, H. G. Wells said that it "...is difficult not to believe that in quite the near future, this Permanent World Encyclopaedia, so compact in its material form and so gigantic in its scope and possible influence, will not come into existence" - I don't think 70 years is "quite the near future". That was one of those "more than 5 years means we don't know" kinds of prediction. Sure it came true eventually - but nowhere close to how soon he expected. Then we look at Bush's 1945 prediction - and again "The perfection of these pacific instruments should be the first objective of our scientists as they emerge from their war work." - 60 years later we have it...again, this was a vague prediction that was wildly off in timing. Engelbart did better - but even so, he was off by 30 to 40 years. If my "five years away" theory is correct then we should be able to find the first confident, accurate prediction: Wikipedia has been around for 9 years - but it's only really taken off and become more than a curiosity for maybe 6 years - and that means that the first accurate, timely prediction of Wikipedia ought to have been about 11 years ago - which is when Wales & Sanger started work on Nupedia. That fits. When something is utterly do-able, desirable and economically feasible, you can make a 5 year prediction and stand a good chance of being right on the money...but from 25 years away, all you can say is "This might happen sometime in the future - but I have no idea when." - which is what the previous predictors have evidently done. SteveBaker (talk) 03:55, 11 February 2010 (UTC)
- So we can know pretty well that most of the stuff promised to us in Back to the Future 2 is not going to happen. Googlemeister (talk) 20:50, 11 February 2010 (UTC)
- When scientists and technologists say "We'll be able to do this in 50 years" - they mean "This is theoretically possible - but I have no idea how". When they say "We'll be able to do this in 10 years" - they mean "This is technologically possible - but very difficult" and only when they say "We'll be able to do this in 5 years" should you have any expectation of actually seeing it happen. Five years is therefore the limit of our ability to predict the future of science and technology with any degree of precision at all. Flying cars and personal rocket packs are always 10 years away - and have been 10 years away since the 1950s. SteveBaker (talk) 20:10, 10 February 2010 (UTC)
- Here is Steve's post summarized in table form. —Akrabbimtalk 20:17, 10 February 2010 (UTC)
- I'd say we can make pretty reliable negative predictions 10 years in advance - if something isn't already in development, then it won't be widespread in 10 years time. The development process for new technology takes about 10 years, so what is widespread in 10 years time will be a subset of what is in development now - it is essentially impossible to know which items in development now will become widespread, though. (Note, I'm talking about completely new technology. Incremental changes happen much quicker - about 2 years development, often.) --Tango (talk) 20:21, 10 February 2010 (UTC)
- I'll take 'greatest impact' to mean 'most impressive' and go with
- transcontinental flights in two or three hours
- quiet, environmentally friendly, autopiloted cars
- highly immersive, interactive, tactile online gaming
- Vranak (talk) 21:48, 10 February 2010 (UTC)
- There are problems with all three of those - although they aren't technological ones:
- We could build a 3 hour transcontinental airliner - but as Concorde conclusively proved, people are not prepared to spend the extra for a shorter intercontinental flight in sufficient numbers to be profitable...given the inevitability of exotic and expensive technology required to do it.
- Quiet and environmentally friendly cars already exist - but their range is a little short and they aren't cheap. But I think it'll be a while until we get "autopiloted" because of the liability laws. If "you" are driving the car and something goes wrong and there is a crash, then "you" are responsible. If the car is driving itself, then the car company becomes liable - and that's not the business they want to be in. We're starting to see cars that help you drive - eg by warning you when you drift out of your lane or get too close to the car in front - and (like my car) by applying the brakes to the right wheels to help you avoid a skid...but although we have the technology to do much more than that - the car makers are being very careful to make these systems help out the driver and NOT be responsible in the end.
- The problem with immersive (like virtual reality) systems is a chicken and egg situation. On PC platforms, the additional hardware has to appear before games companies can take advantage of it - and nobody will buy the hardware unless there are games for it - so things have to follow an evolutionary rather than revolutionary path. That leads to gradually better and better graphics cards - but it doesn't lead to a sudden jump to helmet-mounted displays, tactile feedback, position-measuring gloves, etc. The way this would have to happen is via the game console market - and as we've seen with the Wii's clever Wiimote and balance board, people like this and will buy it. But the problem with game consoles is the way they are sold. The business model is to sell the machine for much less than it costs and make the money back on the games. But if the hardware suddenly costs a lot more - there is no convenient way to get that money back without pushing games up to $100 to $150 (I can tell you don't like that idea!) - or coming out with a $1000 console that nobody will buy. The Wii managed to take the tiny step it did by using exceedingly cheap technology. The accelerometers and low-rez IR camera cost only a couple of bucks to make - so that was easy. Worse still, the public are heading rapidly the other way. People are buying $5 iPhone and Android games in massive numbers - and the PC and console market are drying up - except in these Wii-like areas where they widened the demographic. Developers like this trend because employing several hundred people for three years to produce a $60-a-pop game is an incredibly risky business. Only 35% of them ever turn a profit and that's a major problem for the people with the capital to fund them. On the other hand, if you have 100 three-man teams each writing an iPhone app, the law of averages means that you get a very accurate idea of the return on your investment...the risks are vastly lower. 
The idea of pushing consumers and developers into $1000 consoles and $150 games is really not very likely in the near future. Microsoft, Sony & Nintendo know this - which is why there is no Xbox-720, PS-4 or Nintento-Poop (or whatever stoopid name they'd call it) expected either this Xmas or the next.
- So sorry - while the technology certainly exists for all three of the things on your "wish list" - I'd be really very surprised to see any of them happen within 10 years. SteveBaker (talk) 23:30, 10 February 2010 (UTC)
- Both Microsoft and Sony are of course following Nintendo with the PlayStation Motion Controller and Project Natal. Nil Einne (talk) 09:29, 11 February 2010 (UTC)
- There are problems with all three of those - although they aren't technological ones:
- We already had #1 forty years ago.. see Concorde. We don't do it any more for a variety of economic and political reasons. --Mr.98 (talk) 23:28, 10 February 2010 (UTC)
- Concorde actually took three and a half hours to get across the Atlantic - not "two to three" - but because everyone flew first-class and they had specially expedited boarding, luggage reclaim and customs, you'd probably save an hour at the airport compared to a commercial flight! If the Concorde-B had ever been built (which had 25% more powerful engines), I think they would have broken the 3 hour mark. SteveBaker (talk) 23:41, 10 February 2010 (UTC)
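As a rough sanity check on those crossing times, here is a back-of-envelope sketch (the distance, cruise speed and climb/descent allowance are my own round-number assumptions, not figures from this thread):

```python
# Great-circle London-New York is roughly 5,560 km; Concorde cruised at
# about 2,180 km/h (Mach 2.04). The overhead figure is an assumed lump
# for subsonic climb-out, acceleration and descent.
great_circle_km = 5560
cruise_kmh = 2180
cruise_hours = great_circle_km / cruise_kmh   # time if it cruised the whole way
overhead_hours = 1.0                          # assumed climb/descent penalty
total_hours = cruise_hours + overhead_hours
print(f"cruise-only: {cruise_hours:.1f} h, with overhead: {total_hours:.1f} h")
```

With those assumptions the total lands near the three-and-a-half-hour figure quoted above, and shaving the overhead (or adding 25% more thrust) plausibly gets under three hours.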
- First of all, let's refer back to the original question. 2060. Not 2020. Second, I was thinking about those spaceship-plane hybrids that go into the near reaches of outer space. Nothing to do with conventional aeronautics. Something that'll take you from Paris to Sydney in a couple hours. Vranak (talk) 00:41, 11 February 2010 (UTC)
- I thought you meant sub-orbital hops. There may be something in that. While London to New York in 2 or 3 hours isn't sufficiently better than the 5 or 6 hours you can do it in a conventional plane to warrant the extra cost, London (or Paris if you prefer) to Sydney in 2 or 3 hours would be sufficiently shorter than the current 20 hours (or whatever it is) that people would pay quite a lot for it. --Tango (talk) 01:03, 11 February 2010 (UTC)
- You mean like Scramjet#Applications [3]? I remember a book from the late 80s IIRC suggesting something like that. I can't remember if they had a time frame but it didn't sound particularly realistic at the time and I'll freely admit I'm still far from convinced we'll see it even in 2060. We'll see in 50 years whether I'll be eating my words I guess. Nil Einne (talk) 09:08, 11 February 2010 (UTC)
- I expected that higher definition television would come along in my lifetime to replace the 1941 black and white system and the 1950s color add-on (NTSC=never twice same color), but I am stunned to read that 3D TV will be in stores with at least a couple of broadcast channels plus DVDs in less than a year. I never really expected that to be a consumer item in my lifetime. (Maybe I'll hold out for "Smellovision" with olfactory output.) If car GPS units had an order of magnitude more spatial resolution, or if lane markers were embedded in the pavement, it should be possible with straightforward extension of existing technology, demonstrated in the DARPA Challenge, to tell your car of the foreseeable future to drive you to a certain address, or to drive you home, as surely as if you had a chauffeur. I am amazed by the iPod, considering the reel-to-reel tapes or LPs of a few years ago. The coming practical LED light bulb is amazing (as yet a bit dim/expensive). So for scientific advances affecting our technology and lives by 2060, medical science should benefit from study of genetics, with ability to predict who is likely to get a disease, and what medicines are best for that individual, and with genetic solutions to cancer, diabetes and other diseases with a hereditary component, perhaps at the cost of engineering out "undesirable" traits such as Asperger's, leaving a shortage of techies and scholars. Militaries and sports empires might genetically engineer super soldiers and super athletes, just as super brains might be engineered. The ability to speak articulately might be given to some animals like chimps and dolphins who are somewhat intelligent, or even to domestic pets (the cat, instead of "Meow", says "Pet me. Don't stop. Now feed me." The dog, instead of "Woof! Woof! Woof!", says "Hey!" "Hey!" "Hey!"). Breakthroughs in batteries or other energy storage technologies could make fossil-fueled cars a relic of the past, just as matches made tinder-boxes obsolete. 
Nuclear proliferation, or new technologies for destruction, could render the metropolis obsolete, as too handy a target. Terrorists with genetically engineered bioweapons could try to exterminate their perceived enemies who have some different genetic markers, ending up with a much smaller global population. Crime solving of 2060 would be aided by many more cameras and sensors which track our movements and activities, and more genetic analysis will be done on trace evidence, based on a larger database of DNA. Zoos might include recreated mammoths and dinosaurs, by genetically manipulating their relatives, the elephant and the bird. Astronomy will certainly have vastly improved resolution for detecting smaller planets, including Earth-like ones. More powerful and efficient generators of electricity could make unmanned interstellar probes possible. Mind reading or interrogation will be facilitated by cortical evoked potentials and MRI. There will be more robotic airplanes, foot soldiers and submersibles in future wars. Cyber warfare will be an important part of major conflict. Computer intelligence could far outstrip that of humans. Improved robotics could create robots who were for a few years the best of servants and thereafter the worst of masters for the remnant of humans. All themes that have been explored by futurists and sci-fi writers. Edison (talk) 04:22, 11 February 2010 (UTC)
- 3D TV (in the sense that it's about to be dumped onto the airwaves) is really no big deal - the technology to do it has been around since the invention of liquid crystal displays and infra-red TV remotes. You have a pair of glasses with horizontal polarization...and behind that a set of liquid crystals that polarize vertically when they are turned on - and horizontally when they are off. By applying voltage to the two lenses alternately, you block the view from one eye and then the other. The glasses have a sensor built into the frame that detects infrared pulses coming from the TV. The TV says "Left eye...now Right eye...now Left eye" in sync with the video that displays left-eye then right-eye images. That's about a $5 addition to an absolutely standard TV...I've had a pair of those 3D "LCD shutter" glasses for at least 15 years - they were $30 or so back then - but with mass production, they could have sold them for $10. These days, they probably cost a couple of bucks a pair. We could easily have made this exact system for a very reasonable price back in the 1970's...it simply required the will on the part of the movie and TV companies to make it happen - and a sense that the public are prepared to pay the price to see it. Of course these 3D TVs are gonna cost a packet to start with - but within a couple of years, all TVs will have it. SteveBaker (talk) 03:18, 12 February 2010 (UTC)
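The alternating left/right shutter scheme described above can be sketched as a toy schedule (purely illustrative pseudocode of the timing logic; no real hardware protocol or signal format is implied):

```python
# Toy model of LCD shutter-glasses sync: the TV flags which eye's image is
# on screen each video field, and the glasses black out the opposite lens.
def shutter_schedule(n_fields):
    """Yield (field_index, eye_shown, left_lens, right_lens) per video field."""
    for field in range(n_fields):
        eye = "left" if field % 2 == 0 else "right"
        # The lens for the eye being shown stays clear; the other goes opaque.
        left_lens = "clear" if eye == "left" else "opaque"
        right_lens = "clear" if eye == "right" else "opaque"
        yield field, eye, left_lens, right_lens

for field, eye, left, right in shutter_schedule(4):
    print(field, eye, left, right)
```

Each eye therefore sees only every other field, which is why these systems double the display's refresh rate to avoid visible flicker.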
- SB already mentioned the perpetually coming flying cars. There's also of course the Arcology which some may believe we're now moving away from with the proliferation of the internet and other factors like those Edison mentions which make metropolises less desirable. In a similar line, it's usually funny how many of these predictions of the future fail to predict social changes.
- Perhaps few here would have heard of it but Anno Domini 2000 – A Woman's Destiny receives some attention particularly for its predictions of female involvement in politics and other aspects of life and some of the other predictions also have a ring of truth [4] yet if you read it (it's in the public domain so is available online if you wish to, e.g. [5]), much of it comes across as at best 'quaint' particularly when it comes to the importance of the British Empire and the monarch in the imperial federation and in the US rejoining the Imperial federation.
- Similarly many of those predictions of the future in the 40s, 50s and 60s like these [6], [7], [8], [9] & [10] include numerous predictions of how the housewife of the future will manage the home. (Incidentally, those examples may be interesting to read in general to see how right and how wrong they are.)
- This may not seem particularly relevant to the question, but I would say it is, since predicting the future does require some knowledge of social & political changes that have taken place (a related example: I've heard the story before that 30 or so years ago few were predicting the current demand for lean meat, which makes things difficult since selective breeding takes a long time).
- BTW, in case people didn't notice, there are two links [11] [12] which may be of interest for those looking at predictions of the past.
- Nil Einne (talk) 09:08, 11 February 2010 (UTC)
Artificial intelligence and Raymond Kurzweil's technological singularity. Intelligent organic life, if it survives long enough, probably inevitably becomes an organic-cybernetic hybrid -- a cyborg. When the same thing happens to the human race, perhaps intelligent life from elsewhere in the universe (very long-lived and ancient, because cybernetic) will communicate with it. The first step in that direction will be artificial intelligence and organic-cybernetic hybrids (cyborgs), both of which could evolve exponentially in the next century.
how many moons dwarf planets have total if we found all of them
How many moons would Pluto, Eris, Makemake and Haumea have at most if we found all of them? They won't have more than 5. Pluto formerly had only Charon; in 2005 two more were found. Eris has two moons. Does Makemake have any moons at all? Could Haumea have 3 moons? Jupiter technically could have over 100 moons - just many hidden moons.--209.129.85.4 (talk) 21:00, 10 February 2010 (UTC)
- We cannot possibly answer how many moons will be found around dwarf planets; indeed, we cannot answer if or when all such moons will be or are found. At present, the five dwarf planets have six known moons. — Lomn 21:18, 10 February 2010 (UTC)
- Why won't they have more than 5 moons? --Tango (talk) 21:20, 10 February 2010 (UTC)
- Why not? Duh! Because they are small and they have low gravity--209.129.85.4 (talk) 21:35, 10 February 2010 (UTC)
- Very little gravity is required to have very small moons (particularly on the extremely distant DPs that don't/won't interact with Neptune). There is no theoretical upper limit. — Lomn 21:46, 10 February 2010 (UTC)
- But why should the limit be 5? Why not 4 or 6? --Tango (talk) 23:12, 10 February 2010 (UTC)
- Why would Pluto be a dwarf planet but not Charon? I thought they were a binary dwarf planet pair? Googlemeister (talk) 21:21, 10 February 2010 (UTC)
- Per our Pluto article, the IAU has not yet formalized a definition for binary dwarf planets. If you prefer to ignore the IAU, then by all means consider Charon a dwarf planet in its own right (I find that a reasonable position). — Lomn 21:46, 10 February 2010 (UTC)
Charon doesn't count as a dwarf planet under the IAU rules because the point around which Pluto and Charon orbit is (just) below the surface of Pluto - so by a rather small margin, Charon is (like our Moon) still a "moon" no matter how big it is. SteveBaker (talk) 23:10, 10 February 2010 (UTC)
- Our article, Charon#Moon or dwarf planet?, seems to disagree with you. --Tango (talk) 23:52, 10 February 2010 (UTC)
- Yeah - you're right. My mistake. So why is there any debate? Charon is larger than many other bodies described as "dwarf planet" under the new rules. I can't think of any reason not to describe Pluto/Charon as a binary dwarf-planet. SteveBaker (talk) 03:11, 11 February 2010 (UTC)
- I don't think there is debate, that's the problem. There needs to be some debate to settle on an official definition of a double planet. The "barycentre outside both bodies" definition is unofficial in the same way the (very vague) definition of "planet" was unofficial before 2006. There are other possible definitions to be considered, although I doubt any of them would be chosen - mostly things to do with the smaller body's interactions with the Sun being more important than its interactions with the larger body. Those definitions sometimes make the Earth-Moon system a double planet. That's probably the biggest reason they won't be chosen - people can get used to Pluto no longer being a planet, but the Moon no longer being a moon? That would be hard! --Tango (talk) 12:09, 11 February 2010 (UTC)
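For concreteness, the "barycentre outside both bodies" test can be run on Pluto-Charon (the masses, separation and radius below are approximate round numbers assumed for illustration, not figures taken from this thread):

```python
# Approximate values: masses in kg, distances in km.
m_pluto = 1.30e22
m_charon = 1.59e21
separation = 19600        # mean Pluto-Charon centre-to-centre distance
pluto_radius = 1190       # Pluto's mean radius

# Two-body barycentre sits at d * m2 / (m1 + m2) from the larger body's centre.
barycentre = separation * m_charon / (m_pluto + m_charon)
print(f"barycentre at {barycentre:.0f} km from Pluto's centre; "
      f"Pluto's radius is {pluto_radius} km")
print("outside Pluto" if barycentre > pluto_radius else "inside Pluto")
```

With these figures the barycentre falls around 2,000 km out, well clear of Pluto's surface, which is why the article cited above treats Pluto-Charon as a candidate double planet.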
- There isn't a formal definition for the words "Moon" and "Moonlet" - so in theory, you could claim that every single one of the trillions of little chunks of ice orbiting Saturn was a moon. In the absence of a decent definition, if we found even the most tenuous ring system around any of those dwarf planets, then the number of teensy little "moons" would be hard to pin down. Right now, there are 336 named objects that are labelled "moons" - but with over 150 more ill-classified bits and pieces awaiting accurate orbital calculations flying around Saturn alone - it's a silly game. Until we have a formal definition, it would be tough to answer this question even if we had perfect knowledge of what is orbiting these exceedingly dim and distant dwarf planets. With the best telescopes we have, the dwarf planets themselves are just a couple of pixels across - finding moons that are maybe 1/1000th of that size would be very hard indeed without sending a probe out there - and doing that is a 10 to 20 year mission. SteveBaker (talk) 22:47, 10 February 2010 (UTC)
- Which we launched 4 years ago: New Horizons Rmhermen (talk) 23:42, 10 February 2010 (UTC)
- But since most of those dwarf planets (Eris, Sedna, Makemake, etc) were only discovered a matter of months before the launch, I don't think New Horizons will be able to visit them. Our New Horizons article explains that a flyby of Eris has already been ruled out. It's basically a Pluto/Charon/Kuiper-belt mission. Since it's not going to reach even Pluto until 2015, and those other dwarf planets are WAY further out there - we may discover other dwarf planets for it to visit in the meantime. SteveBaker (talk) 13:47, 11 February 2010 (UTC)
- If New Horizons originally plan to go to other planet dwarfs (Makemake, Eris), cancelling the visits is probably once again budget problems.--209.129.85.4 (talk) 20:48, 11 February 2010 (UTC)
- I think it was technical problems, actually. It wasn't feasible to give the probe enough fuel for such a large course change. --Tango (talk) 20:53, 11 February 2010 (UTC)
- How exactly could they 'originally plan to go to other planet dwarfs' if (according to SB) they were only discovered a few months before the launch? Nil Einne (talk) 18:28, 13 February 2010 (UTC)
- There is an active search for other dwarf planets within the cone New Horizons can reach after visiting Pluto. Hopefully someone will find one in time. --Tango (talk) 16:31, 11 February 2010 (UTC)
earth-moon double planet
How long will it be until the barycenter of the earth-moon system is located outside the surface of the earth, which would, under current definitions, promote the moon from a moon to the second body of a binary planet? (Imagine the Wikipedia article "moon (dwarf planet)".) Googlemeister (talk) 21:28, 10 February 2010 (UTC)
- Per our orbit of the Moon article, assuming a steady solar system, the Moon would eventually stabilize at an orbit with a semi-major axis of some 550,000 km. The barycenter of the Earth-Moon system would move into free space at a semi-major axis of about 525,000 km. Since it would take ~50 billion years to reach the maximum, I'd guess it would take ~40 billion years to reach the double planet portion. Of course, the "steady solar system" thing won't hold for 40 billion years. As the article notes, in two or three billion years, the Sun will heat the Earth enough that tidal friction and acceleration will be effectively eliminated. As such, it's not clear that the Earth-Moon system ever will have a free-space barycenter, even though it's theoretically possible. — Lomn 22:03, 10 February 2010 (UTC)
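As a quick sanity check on the ~525,000 km figure above, a simple two-body estimate (masses and Earth's radius are standard approximate values, assumed here for illustration; eccentricity is ignored):

```python
# Approximate values: masses in kg, Earth's mean radius in km.
m_earth = 5.972e24
m_moon = 7.342e22
r_earth = 6371

# The barycentre sits at a * m_moon / (m_earth + m_moon) from Earth's centre;
# it leaves Earth's surface when that distance exceeds Earth's radius, i.e.
# when the semi-major axis a exceeds r_earth * (m_earth + m_moon) / m_moon.
a_threshold = r_earth * (m_earth + m_moon) / m_moon
print(f"barycentre exits Earth's surface at a ~ {a_threshold:,.0f} km")
```

The result comes out close to 525,000 km, matching the figure quoted from the orbit of the Moon article.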
- Per our orbit of the Moon article, you could argue the Moon is already half of a double planet, since the Moon's orbit around the Sun is everywhere concave. --Michael C. Price talk 12:08, 14 February 2010 (UTC)
teleportation
Is there any actual research in the field of teleportation? I mean obviously Star Trek style teleportation is pretty unlikely (especially living things), but has there been experimentation involving transporting inert material between, say, two wired locations? Googlemeister (talk) 21:35, 10 February 2010 (UTC)
- There's plenty of actual research in quantum teleportation, but that's not really "teleportation". I don't believe there's been any meaningful work in turning atoms into bits back into atoms. — Lomn 21:51, 10 February 2010 (UTC)
- Ob.XKCD is at: http://xkcd.com/465/ SteveBaker (talk) 22:34, 10 February 2010 (UTC)
- Current research (that I know of) is in teleporting energy, not matter. This is performed by taking advantage of entanglement to put energy into a particle in one place and get energy out of a particle in another place (which is such a generalized statement that it isn't completely true - but easily understood). Because energy and mass are very closely related in physics, it may be possible in the future to go from matter to energy, teleport the energy, and convert it back to mass again. So far, once you convert matter to energy (think atomic bomb), it isn't very easy to get the matter back again. -- kainaw™ 22:39, 10 February 2010 (UTC)
- Instantaneous, literal teleportation is a problem that physics may not ever overcome except for the most trivial examples of moving single particles or packets of energy. Conservation laws of all kinds would have to be mollified before we could possibly do this for 'macro-scale' objects.
- But the other approach is to scan and measure the object at one location - destroy it utterly - then transmit the description of the object and recreate it at the other end. Star Trek teleportation is sometimes like that (depending on the whim of the authors of a particular episode - it's not very consistent) because teleporter errors have on at least two occasions resulted in winding up with two people instead of one...however, the difficulty is that there is typically only one machine at one end of the link and to do the scan/destroy/transmit/receive/recreate trick, you need two machines, one to scan/destroy/transmit and another to receive/recreate. However, with two machines, this starts to look almost do-able. If you take a fax machine and bolt its output to a shredder - you have a teleporter (of sorts) for documents with present-day technology. Interestingly, if the shredder should happen to jam, you end up with one Will Riker left behind on the planet and another one on the Enterprise. Doing that with things as complicated and delicate as people is a lot harder. You have to scan at MUCH higher resolution, and in 3D - and the data rates would be horrifically large - and we have absolutely no clue how to do the "recreate" step (although I'd imagine some nanotechnological universal replicators would be involved). There is also the ethical issue of the 'destroy' step and the extreme unlikelihood of anyone having the nerve to step into one!
- The thing I do think may one day be possible is the idea that we scan people's brains into computers when their physical bodies are about to conk out - and allow them to continue to live as computer programs that simulate their former biological brains so precisely that the persons can't tell the difference. If you could do that then future robotic-based humans could teleport at the speed of light to remote locations fairly easily by sending their software over a digital communications link and reinstalling it on a remote computer/robot. This would be an amazing thing and would allow you to walk into a teleportation booth in New York, dial a number and pop up in Australia a few seconds later, being completely unaware of the (brief) journey time. Because you're unaware of the journey time, distance would be little obstacle - providing there is a suitable robot/computer waiting for you at the other end.
- SteveBaker (talk) 23:06, 10 February 2010 (UTC)
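To give a feel for why the data rates in a scan/transmit/recreate scheme would be "horrifically large", here is an order-of-magnitude estimate (both the atom count and the bits-per-atom figure are rough assumptions for illustration only):

```python
# A human body contains very roughly 7e27 atoms. Suppose recording each
# atom's species, position and bonding state takes ~100 bits (an assumed
# figure; real requirements could differ by orders of magnitude).
atoms = 7e27
bits_per_atom = 100
total_bits = atoms * bits_per_atom
total_yottabytes = total_bits / 8 / 1e24
print(f"~{total_bits:.1e} bits (~{total_yottabytes:,.0f} yottabytes) per scan")
```

Even with these generous simplifications the scan runs to tens of thousands of yottabytes, dwarfing any existing storage or transmission capacity by a huge margin.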
- "Star Trek style teleportation is pretty unlikely (especially living things)" While such teleportation is pretty unlikely, I don't see how teleporting a living thing would be any more difficult than teleporting a finely detailed non-living thing, such as a microprocessor. 58.147.58.179 (talk) 04:06, 11 February 2010 (UTC)
- A microprocessor doesn't move. A living being does. So, if you scan a microprocessor from top to bottom, you can get the whole thing scanned in atom by atom. Yes, the electrons are whizzing around, but the atoms are rather stable. If you scan a living being from top to bottom, the atoms are not remotely stationary. They are moving here and there, all over. You will scan some atoms multiple times and some not at all. The scan will be like a photo that is blurred when a person moves during the photo. So, you have, on top of teleportation, the concept of a shutter speed for the scan. -- kainaw™ 04:24, 11 February 2010 (UTC)
- Also, for most objects (a teacup, for example) you could "summarize" the object rather simply by saying "the shape of the object is such-and-such and it is made entirely of a uniform ceramic of such-and-such average composition with a surface coloration made of such-and-such chemicals laid out such as to produce an image like this" - and what the 'reconstruct' stage could conceivably do would be to reproduce a perfectly good, identical-looking, fully functional teacup at the other end - although it might differ in irrelevant, microscopic ways. But with a living thing, teeny-tiny errors at the atomic level in something like a DNA strand or tiny structural details about where every single synapse in the brain connects to are utterly crucial - and no reasonable simplification or summarization is possible. When you photocopy or fax a document, the copy isn't 100% identical down to the individual fibres of the paper - but that doesn't matter for all of the usual practical purposes. However, if that detail DID matter, then faxing and photocopying documents would be impossible. When you copy a human, they are really going to want every neural connection and every base-pair on their DNA perfectly reproduced and not being regenerated from a "bulk description" such as we could use for a teacup. SteveBaker (talk) 13:38, 11 February 2010 (UTC)
- I always assumed this is why Captain Picard always complained about the quality of replicator food (and tea). The engineers that designed the thing were probably too aggressive with their compression algorithms. APL (talk) 15:22, 11 February 2010 (UTC)
- I thought he complained about the food because he was French. Googlemeister (talk) 17:39, 11 February 2010 (UTC)
- There's also the problem of replicating electrical charges, and probably other sort of dynamic aspects of the body like pressures, which would be necessary in order for the 3D printer to print out a living being instead of a dead one. A computer would be easier to teleport because you could replicate it switched off and then switch it on again later. 81.131.39.120 (talk) 21:44, 11 February 2010 (UTC)
The answer to "blurring" would be to freeze them-though that's another questionable technology.Trevor Loughlin (talk) 09:24, 11 February 2010 (UTC)
- Indeed - cryogenics is more likely to be possible than teleportation, but it is still very difficult. --Tango (talk) 12:11, 11 February 2010 (UTC)
- The amount of energy required to create matter at the destination end is stupendously huge. 67.243.7.245 (talk) 15:35, 11 February 2010 (UTC)
- You could conceivably have giant tanks of raw material.
- Alternatively, if you have an easy way to convert between energy and matter, you could feed random junk into the machine, or save the energy from the last out-going transport. APL (talk) 16:52, 11 February 2010 (UTC)
Diethyl ether
How did they use diethyl ether for surgery if it forms peroxides? Any peroxides over 3% are extremely caustic and would burn the patient's skin. —Preceding unsigned comment added by 67.246.254.35 (talk) 21:50, 10 February 2010 (UTC)
- "Don't use it if it's got a high peroxide content."? It's easy to detect them and it's easy to remove them from the liquid. How is/was ether administered?--does that method result in transfer of substantial amounts to the skin? DMacks (talk) 00:07, 11 February 2010 (UTC)
Yes, they put it on a cloth and put it over their mouth and nose. —Preceding unsigned comment added by 67.246.254.35 (talk) 00:18, 11 February 2010 (UTC)
- (edit conflict) I recall it depicted (TV/Movies) as being dripped manually onto a pad or mask over the nose/mouth. See also Diethyl ether, Diethyl ether (data page) and Diethyl ether peroxide —220.101.28.25 (talk) 00:24, 11 February 2010 (UTC)
- In particular see Diethyl ether#Safety re peroxide formation. No information that I can see however about how it was applied, though pads as mentioned seems likely. --220.101.28.25 (talk) 00:33, 11 February 2010 (UTC)
Yes, I read those articles; they did not answer my question.--67.246.254.35 (talk) 00:31, 11 February 2010 (UTC)
- When it comes to medical chemicals, any impurity might be cause for concern, and 3% of another non-inert contaminant is very high--I would be concerned well before that level is reached. For example, physiological effects (blood pressure) apparently become noticeable already at 0.5% ether peroxide concentration.("Pure Ether and Impurities: A Review". Anesthesiology. 7 (6): 599–605. November 1946.) DMacks (talk) 01:00, 11 February 2010 (UTC)
so how do you tell if there are peroxides in the ether? —Preceding unsigned comment added by 67.246.254.35 (talk) 05:51, 11 February 2010 (UTC)
- Answering an above question: the article states that diethyl ether is sold containing a trace amount of BHT, which will mop up any oxygen before it can form an ether peroxide. I expect that by storing the ether in well-sealed, dark bottles it would be possible to make the amount of peroxides formed over the lifetime of the bottle (as an anaesthetic) negligible. Also, 67, please do not blank your talk page if I have written on it. Brammers (talk) 08:08, 11 February 2010 (UTC)
- The diethyl ether peroxide page that you (67) had read includes a whole section about testing for its presence. DMacks (talk) 15:56, 11 February 2010 (UTC)
Chemistry question in regards to chemical bonds
What chemical bond is most vulnerable to an acid attack and why? I think it may be ionic but I am unsure. —Preceding unsigned comment added by 131.123.80.212 (talk) 23:12, 10 February 2010 (UTC)
- What are your choices? Is this a homework problem? DMacks (talk) 00:46, 11 February 2010 (UTC)
- I was just given that question. It is a lab question for my Petrology class (undergraduate geology class). I have scoured all possible resources to find the answer (i.e. Google, Ebscohost, Google Scholar, etc.) but with no such luck. —Preceding unsigned comment added by 131.123.80.212 (talk) 01:15, 11 February 2010 (UTC)
- Acids don't really attack chemical bonds, per se. See Lewis acid-base theory for an explanation; but in general acids produce H+ ions in solution, and bases (which is the stuff that acids react with) all generally tend to have unbonded pairs of electrons in them, (see lone pair). So the acid doesn't attack the bond, it attacks electrons that aren't currently involved in bonds. Now, once the acid forms a new bond with that lone pair, other bonds in the molecule may break at the same time, but one would have to see a specific reaction to know exactly what would happen. The best I can say is that the question, as the OP has worded it, is so vague as to make it very difficult to answer. --Jayron32 02:58, 11 February 2010 (UTC)
The most basic thing in solution. H+ usually goes after lone pairs, as they are the highest energy electrons present generally (they tend to form HOMOs). That's because unshared electrons tend to be in a higher energy well. There are cases where some chemical bonds are higher in energy than a lone pair, but you really have to work hard to get rid of most other bases first. In fact for example I think sulfuric acid attacks alkenes directly -- you don't want too much water because H2O may in fact be more basic than say, cyclohexene. John Riemann Soong (talk) 03:25, 11 February 2010 (UTC)
- This question is addressing weathering as in soils, when highly acidic conditions increase the solubility of certain minerals (because of the dissociation of water to H+ and OH-). I was trying to decide between ionic/metallic/covalent bonding of certain minerals in soil, which would be most vulnerable to what this professor calls an "acid attack". Any more help would be greatly appreciated!
—Preceding unsigned comment added by 131.123.80.212 (talk) 23:48, 10 February 2010 (UTC)
- (ec with post above) (Responding to John Riemann) Right. One answer that would be correct would be "double and triple (covalent) bonds". See, for example, Electrophilic addition. A pi bond is generally weaker than a sigma bond, so it's easier for an electrophile (aka Lewis acid) to go after.
- (Responding to 131's post above) Reading your post above, though, I don't think that's what you're looking for. You're basically asking why certain minerals dissolve better in acidic conditions. Many of the things in the earth are oxides, especially basic oxides (which is a pitiful article, really). I'm not really sure of the chemistry of these things, but oftentimes an acid will convert a basic oxide into a hydroxide, which tends to be more soluble in water. Carbonates are especially susceptible to acid (they react with it to give carbon dioxide and water). That's why acid rain can do so much damage to marble (that's calcium carbonate) statues and buildings. Buddy431 (talk) 05:06, 11 February 2010 (UTC)
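- To make the marble example concrete, the net reaction of a carbonate mineral with acid can be written as follows (calcite shown; any sufficiently strong acid supplies the H+):

```latex
\mathrm{CaCO_3 + 2\,H^+ \longrightarrow Ca^{2+} + H_2O + CO_2\!\uparrow}
```

The escaping CO2 gas is what makes the reaction effectively irreversible, so the mineral keeps dissolving as long as acid is supplied.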
February 11
What kind of tree is this?
I'm thinking of getting a tree tattooed on my back. This is the tree in which I spent a lot of my childhood; I don't want to make this particular tree my tattoo, but if I could remember what kind of tree it was, I could sort of start from there. Can anyone else ID what kind of tree I might have been playing in? It was so long ago, if I ever knew, I've forgotten ... -FisherQueen (talk · contribs) 01:34, 11 February 2010 (UTC)
A big tree to be carryin' around, an Oak by the looks of the bark but a leaf would be helpful. hydnjo (talk) 01:55, 11 February 2010 (UTC)
- A closeup picture of a leaf, a flower, or a fruit would definitely help. --Dr Dima (talk) 02:02, 11 February 2010 (UTC)
- I know- I suck! This is the only picture of the tree I have, and it's three hundred miles away in the backyard of a stranger... -FisherQueen (talk · contribs) 02:18, 11 February 2010 (UTC)
- Judging by the lobes on the leaves, which I can kinda sorta make out, my top three guesses are, in order:
- An oak of some sort, likely in the "red oak" type, see List of Quercus species, Section Lobatae.
- A Liriodendron tulipifera, aka American tuliptree, aka Tulip poplar, aka Yellow poplar
- A Liquidambar styraciflua, aka American sweetgum, though these often have very straight trunks with no lower branches when mature; so this doesn't look much like that.
- Those are my three best guesses. If we knew where this was from, or could get a better look at the leaves or seeds, it would help! --Jayron32 02:52, 11 February 2010 (UTC)
- The 'where' I can do: northwestern Pennsylvania. -FisherQueen (talk · contribs) 02:55, 11 February 2010 (UTC)
- In that case, throw out the Sweetgum; they are endemic in the South but their range ends hundreds of miles from any part of Pennsylvania. The tulip poplar is also doubtful, as its range kinda dies out near the Pennsylvania/Maryland line, though it may be this. I would explore some of the red oak species. --Jayron32 03:10, 11 February 2010 (UTC)
Oak, unquestionably, but likely currently covered in snow, so Quercus niveobrutus. alteripse (talk) 03:13, 11 February 2010 (UTC)
- Thanks! That sounds plausible; I will look at pictures of oak trees as I seek tattoo inspiration. -FisherQueen (talk · contribs) 03:17, 11 February 2010 (UTC)
- I have to issue a note of caution about tattoos: the problem with them is that they last longer than the original symbolism. Today, a peaceful oak tree is a nice statement - and if you like that kind of thing, not such a terrible thing to have on your back. But how do you know that in (say) 10 to 20 years' time, the oak tree won't become the symbol of a terrorist group or an evil mega-corporation, or perhaps some other social group that you'd prefer not to be associated with? Suppose, for example, that you're not gay and do not wish to be assumed to be so - and that in the 1980s you'd decided on a tattoo of a rainbow over the Greek letter lambda (I dunno - maybe your name begins with an "L" and you like Greek lettering). Since that pretty much says "I'm gay and I want you to know it" in the years 2000 and onwards, that would be a major problem for you. So think carefully. SteveBaker (talk) 14:10, 11 February 2010 (UTC)
Does this group of compounds exist?
Are they stable enough to exist in say, a bottle you would purchase from Sigma-Aldrich? What do you call these compounds?
Also, is the decarboxylation step shown possible, maybe in the presence of the appropriate catalyst, to generate carbenes? I'm just wondering why it's so hard to find discussion of aromatic cyclic carbonates, carbamates and ureas, etc. The keywords I have been using so far have been along the lines of "cyclic carbonate ester" + aromatic, etc. The molecule should be aromatic and therefore fairly stabilised, right? John Riemann Soong (talk) 03:08, 11 February 2010 (UTC)
- Apparently you can buy them. See for example this.
- Hmmm ... such unwieldy names. Why aren't these compounds more common, and why do molecules with this motif seem to always have both sites substituted? This seems to tell me that something interesting happens if there's a hydrogen on those C=C atoms! John Riemann Soong (talk) 04:59, 11 February 2010 (UTC)
I don't know off the top of my head, but I had a look on Web of Knowledge for you, and here are a few papers:
Ben (talk) 01:50, 12 February 2010 (UTC)
Do any of the chemists here have comments? The cyclic carbonate ester is basically like a permanent "enol ester". Trying to restore a C=O results in decarboxylation ....? Or is it possible to carry out aldol reactions without decarboxylation? John Riemann Soong (talk) 17:16, 13 February 2010 (UTC)
- Search for vinylene carbonates - it's listed as a synonym in the Sigma link I gave you. Use your initiative!
- Thanks for the tip. I'm not good at finding synonyms. Why doesn't wiki have an article for it? I'm not a grad student. John Riemann Soong (talk) 18:43, 14 February 2010 (UTC)
Paxil
What happened to the Paxil official website (http://paxilcr.com)? I typed Paxil into Google and it isn't there anymore. —Preceding unsigned comment added by Mybodymyself (talk • contribs) 03:32, 11 February 2010 (UTC)
- They seem to have pulled quite a lot of pages relating to this drug - but going to the http://www.gsk.com site and typing "Paroxetine", "Seroxat" or "Paxil" (which are the same drug under different names) into their search engine produces hundreds of pages on the subject - so I don't think they are suppressing anything - it's possibly just a rearrangement of their site. http://www.gsk.ca/english/html/our-products/paxil.html seems to be the main entry point. SteveBaker (talk) 04:31, 11 February 2010 (UTC)
Thank you.--Jessica A Bruno (talk) 21:03, 11 February 2010 (UTC)
Shape of Human foot
Please look at the accompanying picture. On the left is the normal human foot we all have (approximate shape). My question is: is there any record of a human sub-race, living anywhere on the planet, that has a foot shaped like the one on the right? Unless I am terribly wrong, I think I have seen men with feet like that! Did I see it on a long-forgotten TV program, or is it somewhere in my racial memory....
Jon Ascton (talk) 04:58, 11 February 2010 (UTC)
- There are a number of genetic and developmental disorders that result in missing or fused digits. See syndactyly and ectrodactyly for example. --Dr Dima (talk) 05:12, 11 February 2010 (UTC)
- I understand what you are saying, but I am not talking about individual specimens; rather, I want to know whether there is any country or community where everyone has this as the 'normal' condition. Jon Ascton (talk) 05:42, 11 February 2010 (UTC)
- I think you are asking about the Vadoma tribe. Here[13] is a photo of one of their members which I found with a Google image search. Richard Avery (talk) 07:08, 11 February 2010 (UTC)
- Wow! That's interesting. I wonder whether the improved ability to climb trees that's mentioned in our article has resulted in some kind of selection pressure for that particular gene? Are we actually seeing an evolutionary event? SteveBaker (talk) 14:01, 11 February 2010 (UTC)
- Parapatric speciation? Possible, but not very likely. Even under conditions ideal for speciation it would take many, many generations before the 5-toe and 2-toe Homo species can no longer produce fertile children with each-other. Also, as noted by Comet (see below), claim of the improved ability to climb trees lacks verifiable reference so may be inaccurate. --Dr Dima (talk) 18:04, 11 February 2010 (UTC)
- You can have evolution without speciation. Lactose tolerance and alcohol tolerance both evolved in humans fairly recently, but lactose intolerant people and those that experience the Alcohol flush reaction have no difficulty breeding with the rest of us. That doesn't mean it wasn't evolution. (I don't know if the people mentioned actually lack that tolerance due to not having a certain mutation, rather than due to having a different mutation, but it's entirely plausible.) --Tango (talk) 19:14, 11 February 2010 (UTC)
- You can certainly have evolution without speciation. However, that usually requires the existence of selection pressure on the entire species (predator-prey coevolution, changing environment, etc.), although random genetic drift is also possible. In the case of foot ectrodactyly, I understood that SteveBaker was asking specifically about speciation, as the selection pressure to climb trees (if true) only exists for a particular sub-population of the Homo sapiens species. Indeed, the majority of H. sapiens experience zero or negative evolutionary pressure to climb trees; and, more generally, for the majority of H. sapiens foot ectrodactyly offers no reproductive advantage I can think of. --Dr Dima (talk) 17:45, 12 February 2010 (UTC)
- I fact-tagged the uncited claim that the condition may help in tree climbing. Comet Tuttle (talk) 17:45, 11 February 2010 (UTC)
- Aside from the actual point of your question, there's no such thing as racial memory. Comet Tuttle (talk) 17:45, 11 February 2010 (UTC)
- Right. Now that is proved beyond doubt. I saw it on TV etc. long ago - that's why I had a problem recalling the actual source of information. I mentioned racial memory just because I had serious doubts about the contemporary existence of these people. But they do exist. Wikipedia is a wonderful thing....
Jon Ascton (talk) 21:31, 12 February 2010 (UTC)
how do ants get into a house
and can they damage fiberglass insulation and make a house drafty? —Preceding unsigned comment added by 67.246.254.35 (talk) 05:53, 11 February 2010 (UTC)
- Well, when they get into my house they either walk in under the back door or they slip through a crevice where the pipe for the outdoor faucet comes through the brick. Any little opening like that will do. Fiberglass is inedible and I can't imagine ants being large enough to damage it by pushing it around, though other intruders like squirrels and raccoons certainly can. --Anonymous, 06:58 UTC, February 11, 2010.
- An entire ant nest (the kind that form a mound maybe a foot across and half as high) can push fiberglass insulation around - and I suppose a subsequent abandonment of the nest might make it collapse and thereby cause a draft - but it's a bit of a stretch. But fibreglass insulation isn't there to stop drafts anyway - it's merely a thermal barrier and air can certainly pass through it. Draft protection is what the actual walls are supposed to do. SteveBaker (talk) 13:23, 11 February 2010 (UTC)
stages of urine decomposition
Where can I find info on the stages of urine decomposition? I tried Google. —Preceding unsigned comment added by 67.246.254.35 (talk) 07:48, 11 February 2010 (UTC)
Urine is a simple aqueous solution. The term decomposition is not very applicable, since most of the components are not degradable. There are very small amounts of protein in normal urine and the term might be used for the further breakdown of these components by bacteria in sewage or soil (depending on where you pee). Or am I not understanding your question? alteripse (talk) 17:39, 11 February 2010 (UTC)
- Urine mainly has Urea and water in it. Everything else is just trace amounts. So read about Urea to find out more. Ariel. (talk) 04:24, 12 February 2010 (UTC)
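- For what it's worth, the main chemical change stale urine does undergo is the bacterial hydrolysis of urea to ammonia (the source of the characteristic ammonia smell), catalysed by the enzyme urease:

```latex
\mathrm{CO(NH_2)_2 + H_2O \;\xrightarrow{\text{urease}}\; 2\,NH_3 + CO_2}
```

The released ammonia also makes stale urine noticeably alkaline, which is about as close to "stages of decomposition" as this simple solution gets.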
h1n1 swine flu
what is the reason of swine flu? —Preceding unsigned comment added by BHADAW SOREN (talk • contribs) 08:18, 11 February 2010 (UTC)
- Are you asking about the Influenza A virus subtype H1N1? 58.147.58.179 (talk) 10:31, 11 February 2010 (UTC)
- See swine flu; not everything has a reason.--Shantavira|feed me 11:56, 11 February 2010 (UTC)
- . . . but most things have a cause. If there were no production of meat we would not have seasonal flu or several other diseases. -Craig Pemberton 19:13, 11 February 2010 (UTC)
- What is the basis for your claim that flu does not exist where there is no meat production? -- kainaw™ 19:17, 11 February 2010 (UTC)
- Well, there is very strong evidence that domestication of animals resulted in the introduction of a lot of diseases to early humans. I don't think getting rid of animals bred for meat now would get rid of seasonal flu, though, we would need to go back in time and do it. --Tango (talk) 19:26, 11 February 2010 (UTC)
- What is the basis for your claim that flu does not exist where there is no meat production? -- kainaw™ 19:17, 11 February 2010 (UTC)
- There is evidence that past strains and current strains of flus have come from animals. It appears that Craig Pemberton is claiming that all strains of flu have come from meat production. Claims of "all" or "none" really need to be backed up. -- kainaw™ 19:34, 11 February 2010 (UTC)
- Hmm, I seem to recall seeing a study that traced the origin of most seasonal mutations in the flu back to nations where people raise chickens domestically and have poor sanitation (the Philippines seems to ring a bell?) but I can't seem to find it. I could be mistaken. But a lot of disease comes from meat. The zoonosis page is interesting. Even AIDS may have come through the consumption of bushmeat, but that's just a theory. -Craig Pemberton 20:13, 11 February 2010 (UTC)
- This is not quite right, but it describes Asia as a flu "reservoir". -Craig Pemberton 20:17, 11 February 2010 (UTC)
- Close association of domesticated birds and men does aid the development of strains that are more virulent and transmissible in humans, but type A influenza is endemic in the wild waterfowl communities of Southeast Asia, and even if we all stopped eating birds, the flu wouldn't simply go away. Dragons flight (talk) 20:59, 11 February 2010 (UTC)
- The OP's claim that "AIDS may have come through the consumption of bushmeat" really needs to be backed up with hard evidence -- the AIDS virus is not transmitted through food consumption. 24.23.197.43 (talk) 08:15, 12 February 2010 (UTC)
- I think the theory is that it came from butchering the animal and all the blood that is released. And AIDS can be transmitted through food if you consider blood to be food. Probably raw meat will transmit it too. Ariel. (talk) 09:15, 12 February 2010 (UTC)
- Consumption is the wrong word, but the general belief is bushmeat is the most likely cause, as supported by our Simian immunodeficiency virus and Origin of AIDS. Whether this was from butchering the animal or injuries sustained while hunting the animals, we will likely never know, just as we can never be sure bushmeat is the cause, but it is one of the simplest and most plausible explanations. More details are discussed in the articles, particularly the latter. Nil Einne (talk) 17:23, 12 February 2010 (UTC)
- [citation needed] 146.74.231.39 (talk) 22:51, 12 February 2010 (UTC)
- Why ask Ariel when I already provided an article with citations? Nil Einne (talk) 18:18, 13 February 2010 (UTC)
How do the satellites move on their orbits?
how do the satellites move on their orbits after being launched from earth?
how do they get into their orbits?
thank you —Preceding unsigned comment added by 117.197.244.248 (talk) 13:17, 11 February 2010 (UTC)
- They are launched on large rockets (I'm betting you knew that!) that push the satellites so high that there is (almost) no air resistance to slow them down - and so fast that they are falling towards the earth at exactly the same rate that the curvature of the earth carries the ground away beneath them. They fall continually - but because they are going so fast sideways, they end up going in a complete circle. This is why we call the "zero-g" environment inside the International Space Station "free fall": strictly speaking, there is gravity there - but the space station and the astronauts inside it are falling freely around the earth. Our article Newton's cannonball illustrates this rather nicely. SteveBaker (talk) 13:30, 11 February 2010 (UTC)
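- To put rough numbers on "how fast sideways": for a circular orbit, gravity must supply exactly the centripetal acceleration, which gives v = sqrt(GM/r). A back-of-the-envelope Python sketch (the constants and the ISS altitude are approximate figures, not from the thread):

```python
import math

# For a circular orbit, gravitational acceleration equals centripetal
# acceleration: GM/r^2 = v^2/r, hence v = sqrt(GM/r).
GM = 3.986e14        # Earth's gravitational parameter, m^3/s^2 (approx.)
R_EARTH = 6.371e6    # mean Earth radius, m (approx.)
altitude = 400e3     # roughly the ISS altitude, m

r = R_EARTH + altitude
v = math.sqrt(GM / r)          # circular orbital speed, m/s
period = 2 * math.pi * r / v   # time for one complete orbit, s

print(f"orbital speed  ~ {v / 1000:.1f} km/s")
print(f"orbital period ~ {period / 60:.0f} minutes")
```

That works out to roughly 7.7 km/s sideways, circling the Earth in about an hour and a half - which is why "so fast sideways" is the hard part of reaching orbit, not the altitude.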
- Despite everyday experience, objects will continue to move unless something stops them (in our lives there is always friction and air resistance stopping things). In space there is nothing to stop the satellites moving so, once they are in orbit, they stay there going round and round (satellites in Low Earth Orbit do need to be boosted occasionally because of the tenuous atmosphere that is still at those altitudes). Getting the satellite up to the right altitude and moving fast enough not to fall back requires a big rocket. --Tango (talk) 13:39, 11 February 2010 (UTC)
- The rockets steer themselves to the speed and direction the satellite needs to stay in orbit, release it, and then fall back to earth. There's a fine point here: if you just "threw" the satellites straight up above the atmosphere, they would not enter orbit - you need to steer the rockets sideways. EverGreg (talk) 19:02, 11 February 2010 (UTC)
- As per Kepler's laws of planetary motion, all satellites (whether natural [i.e. moons around a planet, or planets around a star] or man-made) will orbit in an ellipse around the parent object. One of the foci will be the centre of mass of the combined system; as most satellites are far smaller than their parent, this is essentially the centre of the parent. A beam of light emerging from one focus of an ellipse will converge on the other. Also, if you draw an ellipse by hammering two nails into a bit of wood, looping a string around them, and then drawing with a pen that pulls the string tight, the nails are at the foci. CS Miller (talk) 14:52, 13 February 2010 (UTC)
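- The nails-and-string property can be checked numerically - every point of the ellipse is the same total "string length" 2a from the two foci. A small Python sketch with arbitrary illustrative axes (a = 5, b = 3):

```python
import math

# For an ellipse with semi-major axis a and semi-minor axis b, the foci
# sit at (+/-c, 0) with c = sqrt(a^2 - b^2), and every point on the curve
# is a constant total distance 2a from the two foci (the taut string).
a, b = 5.0, 3.0
c = math.sqrt(a * a - b * b)    # focus offset from the centre
f1, f2 = (-c, 0.0), (c, 0.0)

for t in range(0, 360, 30):
    theta = math.radians(t)
    x, y = a * math.cos(theta), b * math.sin(theta)    # point on the ellipse
    string = math.dist((x, y), f1) + math.dist((x, y), f2)
    assert abs(string - 2 * a) < 1e-9                  # constant everywhere

print("string length is constant:", 2 * a)
```

In the orbital picture, a circle is just the special case b = a, where the two nails coincide at the centre.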
Time Travel
I know this wouldn't qualify as time travel in the scientific sense, but I'm sure we all face this while travelling backwards across time zones. I left Dubai International Airport at 18:05 on a given day and reached Doha International Airport at 18:00 on the same day. Does this mean I travelled back in time? Technically speaking, yes?? —Preceding unsigned comment added by 213.130.123.12 (talk) 14:35, 11 February 2010 (UTC)
- No, you haven't traveled back in time. You've simply changed your watch. — Lomn 14:37, 11 February 2010 (UTC)
- Indeed, when you left Dubai it was 17:05 in Doha, and when you arrived it was 18:00. That is 55 minutes forward in time by all accounts. —Preceding unsigned comment added by 129.67.116.217 (talk) 14:44, 11 February 2010 (UTC)
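- The 55-minute arithmetic can be made explicit with timezone-aware datetimes. A sketch with fixed UTC offsets (Dubai is UTC+4, Doha is UTC+3, and neither observes daylight saving; the date shown is an arbitrary illustration):

```python
from datetime import datetime, timedelta, timezone

dubai = timezone(timedelta(hours=4))   # Gulf Standard Time
doha = timezone(timedelta(hours=3))    # Arabia Standard Time

departure = datetime(2010, 2, 11, 18, 5, tzinfo=dubai)   # 18:05 local, Dubai
arrival = datetime(2010, 2, 11, 18, 0, tzinfo=doha)      # 18:00 local, Doha

# Subtracting aware datetimes compares them on the common UTC timeline:
# departure is 14:05 UTC, arrival is 15:00 UTC.
elapsed = arrival - departure
print(elapsed)   # 0:55:00
```

Once both clocks are reduced to UTC, the "time travel" disappears: the flight simply took 55 minutes.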
- It is only "time travel" in the sense that our conception of the hours of the day are correlated with one's position on the Earth. If you rapidly change your position on the Earth (say, by jet), then you affect what hour of the day it is. You have not in any meaningful scientific sense "traveled in time," but you have changed what time, say, the Sun will be observable directly over head, as compared to your previous location. Time zones are just a way to standardize and simplify that issue so that every town doesn't have its own arbitrary definition of what "noon" is. --Mr.98 (talk) 14:53, 11 February 2010 (UTC)
- I guess if it makes you feel cool you could say you "traveled backwards through time-of-day". It'd be a strange way of describing the act of traveling east. APL (talk) 15:08, 11 February 2010 (UTC)
- (You want "west", in most cases, including from Dubai to Doha.) --Tardis (talk) 15:45, 11 February 2010 (UTC)
- Yes. West, not east. I get those two confused. APL (talk) 16:44, 11 February 2010 (UTC)
thanks for the answers, I knew I hadn't travelled back in time; however, that day, for what it was worth, I experienced 18:00, 18:01, 18:03, 18:04 and 18:05 twice, which was a funny feeling. Call it resetting the watch or whatever... —Preceding unsigned comment added by 213.130.123.30 (talk) 15:53, 11 February 2010 (UTC)
- When you say that you "experienced" 18:00 twice, you don't mean that there was any similarity between those two minutes (other than what the nearby clocks said, and that's because you had changed to a different set of nearby clocks). What you really mean is that you assigned the same name ("18:00") to two different minutes of time. The use of the label, not the time, is what was repeated, and as it's a purely artificial construction it's no surprise that we can use a label repeatedly if we want to. --Tardis (talk) 16:22, 11 February 2010 (UTC)
- Well, I'd argue this goes a little too far by calling it purely artificial. The sun was in exactly the same position relative to his head for a long time that day. That counts for something. Comet Tuttle (talk) 17:40, 11 February 2010 (UTC)
- And it was in other positions relative to his head for much less time than normal. We should also note that it wasn't even the same "1800" twice; one was "1800 (UTC+X)" and one was "1800 (UTC+[X+1])". — Lomn 18:24, 11 February 2010 (UTC)
- If you live somewhere with daylight saving time then you experience an hour repeated like that once a year anyway. It happens in the early hours of the morning, so you probably sleep through it, but it's the same basic idea. --Tango (talk) 19:07, 11 February 2010 (UTC)
- The point is that the idea of changing your watch every time you move around the planet is merely a human quirk. If the world operated on GMT then nothing odd would have happened. We don't do that because some people find it confusing to have to get up and go to bed at different times depending on where they happen to be living - but, as I said, that's just an odd human quirk. Someone from another planet might find it a VERY strange thing to do. (And don't get me started on Daylight Saving Time!) Anyway - if you are a lover of your particular form of time travel, take a trip to the south pole and walk around the pole in a circle. You can cross 24 time zones in as many seconds... just think how young you'll get then! (HINT: No, you won't.) SteveBaker (talk) 21:11, 11 February 2010 (UTC)
- If you try the experiment Steve suggested, remember that the Ceremonial South Pole is actually a short ways off from the real pole.
- This is one of those great disappointing truths of science. APL (talk) 22:14, 11 February 2010 (UTC)
- Steve, it's not "merely a human quirk" but a quirk of nearly every complex lifeform that lives where the sun ever shines. The sun regulates almost everything on the Earth's surface, and there's really nothing artificial about that. As much as we like to believe that we don't have to bow to the whims of mother nature anymore, we still get up when the sun rises and go to bed when the sun sets (well, most of us do). I agree with Mr. Tuttle; the position of the sun counts for something, though maybe not exactly time travel. Buddy431 (talk) 00:36, 12 February 2010 (UTC)
- Ok, but it would be entirely possible to use GMT everywhere and just remember that local sunrise is at roughly 1:30am or something. Or simply not bother with any sort of absolute timekeeping. Choosing to handle this with clock changing is entirely a human thing. Certainly polar bears wouldn't have any sensation of going back in time if they happened to cross a couple of time zones. Farther from the poles, the date line is an inevitable consequence of setting our watches to match the sun, but animals (Ok, fish) presumably don't think they've gone back a day when they cross it. APL (talk) 00:56, 12 February 2010 (UTC)
- Yep - if we simply numbered the GMT hours 1 through 24 then you'd just need to get used to getting up at 20 and going to bed at 8, or getting up at 12 and heading to bed at 24 - or whatever times got you to work in daylight and home in time for supper. The quirk is to insist that you choose the same numbers on the clock no matter where you happen to live. We can easily set our alarm clocks for any arbitrary number - you'd be used to the 'new' numbering in a week. SteveBaker (talk) 02:35, 12 February 2010 (UTC)
- I would bet that other animals probably do suffer from jetlag too if they travel fast enough. I'd consider that the "traveling in time" sensation. I don't think it's a human quirk that it throws our systems off a bit to travel that rapidly. I find it quite physically disconcerting to travel many time zones, so that my body's rhythms think it should be dark and sleepy time, but everyone else thinks it is the middle of the day. I would say that puts it just beyond quirk or assigning of arbitrary numbers to phenomena. The numbers themselves are at this point arbitrary (noon no longer means when the sun is at its zenith), but what they symbolize more broadly is not (changing position of sun). --Mr.98 (talk) 02:39, 12 February 2010 (UTC)
On this theme, Richard Brautigan wrote a poem called "Land of the Rising Sun" (referenced in the title of the collection containing it, "June 30th, June 30th") after flying home from Japan; part of it reads:
I greet the sunrise of July 1st
for my Japanese friends,
wishing them a pleasant day.
The sun is on its
way.
Tokyo
June 30th again
above the Pacific
across the international date line
heading home to America
Thanks for all the answers. Appreciated.
Gravitomagnetic gauge conditions
Sorry, I am not well acquainted with tensor notation, but in this paper the author imposes a gauge condition on the gravitomagnetic field in equation 10. I was wondering what this equation is, and what symmetry it is preserving. Thank you. —Preceding unsigned comment added by 129.67.116.217 (talk) 14:41, 11 February 2010 (UTC)
- For anyone who can't access the paper, equation (10) is \(\partial_\mu h^{\mu}{}_{\nu} = \tfrac{1}{2}\,\partial_\nu h\), where h is the h from linearized gravity. I hadn't seen this before, but it seems to be the linearized form of the harmonic coordinate condition. Like any gauge condition it has no physical significance; what it does is restrict your choice of coordinates in some way, in this case to harmonic coordinates. The electromagnetic counterpart of this is the Lorenz gauge condition \(\partial_\mu A^\mu = 0\), which lets you simplify \(\partial_\nu F^{\nu\mu} = -\,\partial^\mu(\partial_\nu A^\nu) + \Box A^\mu\) by dropping the first term on the right hand side. In the gravitational case, without the gauge condition you would have
- \(R_{\mu\nu} = \tfrac{1}{2}\left(\partial_\sigma\partial_\mu h^{\sigma}{}_{\nu} + \partial_\sigma\partial_\nu h^{\sigma}{}_{\mu} - \partial_\mu\partial_\nu h - \Box h_{\mu\nu}\right)\)
- (I got this from linearized gravity#Derivation for the Minkowski metric and rewrote it in notation closer to the paper's; note that indices are raised and lowered with η, not g). With the harmonic gauge condition the first three terms on the right side cancel, leaving only the fourth. -- BenRG (talk) 23:10, 11 February 2010 (UTC)
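- A quick sketch of why the harmonic condition produces that cancellation (standard linearized-gravity manipulation, not taken from the paper itself):

```latex
% Linearized Ricci tensor, prior to any gauge choice:
R_{\mu\nu} = \tfrac{1}{2}\left(
      \partial_\sigma\partial_\mu h^{\sigma}{}_{\nu}
    + \partial_\sigma\partial_\nu h^{\sigma}{}_{\mu}
    - \partial_\mu\partial_\nu h
    - \Box h_{\mu\nu} \right)
% The harmonic condition \partial_\sigma h^{\sigma}{}_{\nu} = \tfrac{1}{2}\partial_\nu h
% turns each of the first two terms into \tfrac{1}{2}\partial_\mu\partial_\nu h, so
\tfrac{1}{2}\,\partial_\mu\partial_\nu h
  + \tfrac{1}{2}\,\partial_\nu\partial_\mu h
  - \partial_\mu\partial_\nu h = 0 ,
% leaving only the wave-operator term:
R_{\mu\nu} = -\tfrac{1}{2}\,\Box h_{\mu\nu}
```

That is exactly the simplification the gauge buys you: the field equations collapse to a wave equation for \(h_{\mu\nu}\).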
Question on Photography
Need answers from people who can give me a comparison of two cameras and their performance. I googled it but the details are very specific and technical; I need an explanation in layman's terms. Essentially I'm comparing the Nikon D90 and the Canon IXUS. We are planning our first trip to Europe - we would be travelling through England, Scotland, France, Austria, the Netherlands, Germany, Switzerland and Italy. We would be visiting some exotic locales which I would like to capture on my camera forever. Though I have a DVD camcorder which can take still pics, I also bought another 12-megapixel camera, the Canon IXUS. I'm not a photography expert (I'm a medical doctor) and I'm just picking up the camera to take decent pics that I can preserve forever as a digital copy to remind me of our European holiday. Money isn't the issue and I could buy a Nikon D90 as well. But will the Nikon D90 be of any further use to an amateur like me, or can I still get good pics with my Canon? —Preceding unsigned comment added by 213.130.123.12 (talk) 17:42, 11 February 2010 (UTC)
- For casual pics on a holiday you don't really need a DSLR. It's more expensive and prone to damage and theft. Without investing about 15 hours learning the basics of photography, you run the risk of actually taking worse images than you would with a point-and-shoot. That said, we own the D300 and absolutely love it. Wish it took video but otherwise it's great. We went with Nikon because they are backwards compatible with our old lenses. If you feel like you might one day soon get into macro or telephoto photography or something then consider springing for the fancy ones, but be ready to make a small investment of time. -Craig Pemberton 19:19, 11 February 2010 (UTC)
- The biggest advantage of a DSLR for an amateur is that it takes better pictures in low light, because the sensor is much larger. If that matters to you, the D90 might be worth it. Otherwise, probably not.
- You mentioned megapixels, so I'll mention that you shouldn't buy based on megapixels. When they increase the pixel count without increasing the sensor size, it reduces the amount of light hitting each pixel, which increases noise, which they compensate for with aggressive digital noise reduction, which reduces the effective image resolution, leaving you with questionable overall benefit. The only reason pixel counts keep increasing is that they have to increase something to make the cameras seem better than the previous generation. This is not such a problem with DSLRs because of the larger sensor, but anything more than 5 megapixels is not much use anyway unless you plan to crop the images or make huge prints. -- BenRG (talk) 20:31, 11 February 2010 (UTC)
- How many megapixels you need depends on the purpose you are going to put the photos to. If you want to make posters out of them, you need really high resolution. If you just want to view them on a computer screen, bear in mind that a typical screen resolution is 1280x800. That is about 1 megapixel. That means you won't be able to tell the difference between any resolutions greater than 1 megapixel (without zooming in, anyway). --Tango (talk) 22:17, 11 February 2010 (UTC)
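Tango's arithmetic can be spelled out in a couple of lines (Python; the 1280x800 screen and 12 MP camera figures are the ones quoted in this thread):

```python
# A typical screen shows about 1 megapixel, so extra camera pixels are
# invisible on screen without zooming in.
screen_pixels = 1280 * 800
print(screen_pixels / 1e6)            # ~1.0 megapixel

# A 12 MP image (e.g. the Canon IXUS mentioned above) carries roughly
# 12x more pixels than the screen can display at once.
camera_pixels = 12e6
print(camera_pixels / screen_pixels)  # ~11.7x
```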
- Another benefit of a DSLR is its distinctive "look". From what I have seen, a photo taken with a DSLR will look far more natural than one from a point-and-shoot, and much closer to film and to your eye (much higher dynamic range, perhaps?). Also, modern compact cameras often sacrifice light sensitivity for megapixels, which drives up noise and necessitates aggressive de-noising algorithms. This means high-frequency signals (the fine details) are all obliterated and smeared, giving a painting-like effect. Don't rely on your DSLR to replace both your IXUS and video cam though; video recording on DSLRs isn't very practical (they are, however, good for proper, controlled, theatrical work). --antilivedT | C | G 06:31, 12 February 2010 (UTC)
- I think it's a combination of the high dynamic range (producing nicer colors), and the fact that higher-quality sensors produce prettier noise (which gives you the DSLR "texture" on solid areas). 210.254.117.185 (talk) 12:50, 12 February 2010 (UTC)
- I would add shallow depth of field to the list of contributors to the "DSLR look", due to the larger sensor and generally faster lenses. -- Coneslayer (talk) 13:18, 12 February 2010 (UTC)
- Are you going during winter or summer? The European winter storms of 2009-2010 are currently severe enough to hamper travel in many of those countries. ~AH1(TCU) 23:47, 13 February 2010 (UTC)
Thanks guys for all the answers. Truly appreciated. I went ahead and bought the D90. I will handle the fancy big one while wifey will handle the easier-to-use Canon IXUS. I have a feeling my wife will end up taking better pics though ;)) We are going in spring (April-May); we would like to check out the bloom of tulips in the Keukenhof gardens in Amsterdam, the snow or whatever is left behind in the Swiss Alps, and the hustle and bustle of Rome, London and Paris. Thanks for the answers and I hope I have some good pictures to remember and some pleasant memories.
Mains transformer with open circuit
I have a mains transformer which used to power a doorbell, but is now disconnected on the doorbell side. The other end of the transformer is still connected to the mains, 220 volts, 50Hz. How much energy is the transformer likely to be using up per year? Thanks 89.243.182.24 (talk) 18:04, 11 February 2010 (UTC)
- It depends on the transformer. You'll need to actually measure the current being drawn. Doing that on something connected to the mains sounds dangerous to me (if it is connected by a regular plug, there are devices to do it, but if that were the case I'm sure you would just unplug it). If you can touch the transformer (through appropriate insulation, of course) then you can get an idea from the heat it gives off. If it is very hot, it is using lots of power. --Tango (talk) 19:03, 11 February 2010 (UTC)
- Noting the heat is a fine way to get an approximate idea of the energy given off. It is even better than measuring the current, since volts times amps will be greater than actual watts for a transformer only drawing its exciting current. I expect that the energy consumed per year by a disconnected doorbell is extremely close to that consumed by a doorbell in normal use, exclusive of perhaps a lighted doorbell. A doorbell in standby mode (just energizing the transformer) draws 2.1 to 2.2 watts, for an annual electric use of 19 kilowatt-hours or less. Where I live, that would cost under $1.90 (US) per year, actually much less because I have time-of-use billing, so the nighttime and weekend usage is very cheap. Standby electric use, page 8/16. Edison (talk) 19:47, 11 February 2010 (UTC)
A transformer with an open-circuit secondary winding will just draw magnetic core magnetising current. If this is a cheap and nasty transformer (with low primary inductance, i.e. not many primary turns), this current may be appreciable. The power dissipated in the primary winding will, of course, be I^2 * R, where R is the primary winding resistance and I is the magnetising current. If we neglect core losses in the transformer, then this will be equal to the total power loss. It is definitely true to say that all the energy dissipated in the transformer will be given off as heat (maybe plus some sound), so if you can fry an egg on it, it's probably worth disconnecting from the mains. —Preceding unsigned comment added by 79.76.205.40 (talk) 23:48, 12 February 2010 (UTC)
- I doubt that you can fry an egg with 2.1 watts. Maybe you could keep it warm enough to hatch a chick if it were fertile, but only in a well insulated incubator. Edison (talk) 01:33, 13 February 2010 (UTC)
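Edison's standby figures above can be reproduced with a short script (Python; the 2.2 W idle draw comes from the thread, while the $0.10/kWh price is only an assumed illustrative flat rate, not Edison's actual tariff):

```python
standby_watts = 2.2                  # idle draw quoted above
hours_per_year = 24 * 365            # 8760 h
kwh_per_year = standby_watts * hours_per_year / 1000
print(round(kwh_per_year, 1))        # ~19.3 kWh/year, the ballpark of
                                     # Edison's "19 kilowatt hours or less"

price_per_kwh = 0.10                 # assumed flat US$/kWh rate
print(round(kwh_per_year * price_per_kwh, 2))  # ~$1.93/year at that rate
```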
Bread made with yeast
According to the article Leavening agent, the yeast used to make bread has ethanol as a waste product, although not all bread is made with yeast. 1) How much alcohol is there in yeast-made bread, when consumed fresh? 2) If your recipe says put half a teaspoon of (dried) yeast in your bread dough, but you put in, say, two teaspoonfuls, what will happen? Will the extra yeast be wasted because the higher concentration of waste ethanol or carbon dioxide will simply kill it off? Does a law of diminishing returns apply? Or will you get extra-fluffy bread? 3) Does yeast have a taste? When I first started using bread yeast, it seemed to have a chemical taste. But now it seems to have no taste at all. Was the early yeast batch defective, or have I habituated to the taste? Thanks 89.243.182.24 (talk) 21:28, 11 February 2010 (UTC)
- Alcohol, being a volatile liquid, evaporates pretty readily, so the process of baking evaporates almost 100% of the alcohol out of bread. Residual flavor may still be present, but I'd be surprised if there are even trace quantities of ethanol in the bread (the same is true for many other uses of alcohol in cooking, e.g. cooking wine, beer batter, and beer chicken). Yeast certainly does have a taste; in addition, commercially available yeasts sometimes also contain other substances (e.g. nutrients or pH-balancing chemicals) which also have a flavor. Adding too much yeast will possibly have negative consequences for flavor, and may cause undesirable rise characteristics for the dough (typically rising too fast, but in some cases the yeast consume the food too quickly, bloom, and die rapidly, resulting in not enough rise). Nimur (talk) 21:44, 11 February 2010 (UTC)
- The amount of alcohol that "survives" cooking is probably higher than most people expect; see Cooking with alcohol#Alcohol_in_finished_food. -- Coneslayer (talk) 13:14, 12 February 2010 (UTC)
Non-sodium baking powder
Contains monocalcium phosphate and potassium bicarbonate. Are there any known health risks associated with these ingredients, please? Apart from the carbon dioxide, what chemicals are left over after use in cooking? 89.243.182.24 (talk) 21:47, 11 February 2010 (UTC)
- The chemicals left over are carbon dioxide, water, potassium ions, calcium ions, and inorganic phosphates. Potassium ions can have some toxic effects at high doses (see this archive, as well as potassium cations in the body and hyperkalemia), but if your kidneys are working fine, the little amount in the baking powder won't matter. Humans get way more phosphates than they need, and usually this isn't a problem. Again, if the kidneys don't work, some problems can develop, but most people don't need to worry about this. Calcium's even less problematic, and most people could stand to get more of it. If you are getting too much calcium (which is unlikely), hypercalcaemia may develop (which our article suggests isn't a big deal in itself, but if you do have it you'd better see a doctor to find out what the underlying cause is), and you may increase your risk of getting kidney stones, though this is disputed. Buddy431 (talk) 02:38, 12 February 2010 (UTC)
Plotting lines of force (computer graphics)
I want to plot lines of magnetic force near structures such as a solenoid. The lines show the direction of force on a tiny isolated magnetic north pole, and the spacing between the lines is proportional to the magnitude of the force. It looks like the lines of force in the figure at left are hand-drawn and not calculated. I can describe the vector field by calculating the force vector at each pixel by integration. But how do I go from a field of vectors to lines of force? Cuddlyable3 (talk) 22:16, 11 February 2010 (UTC)
- The vector field points in the same direction as the lines of force, so you can draw a line of force by starting at any arbitrary point and adding increments of some multiplier delta times the vector field at the current point. The only tricky thing is to get a reasonable spacing between lines of force -- the usual method is to space them evenly across some chosen line, and then let them go whither they choose from there. Looie496 (talk) 22:28, 11 February 2010 (UTC)
- (ec) The brute force approach is numerical integration. You start at any arbitrary point, and move by a small increment whose direction and step size is proportional to the vector field at that point. Repeat ad nauseam and you will eventually trace out a field line. You need to make the step size small, though, if you want a good approximation of coming back to where you started without numerical error getting in the way. To draw additional lines, integrate the vector field intensities while moving perpendicular to your initial point. Once your integrated intensity reaches some preset increment for your spacing factor, start a new field line at your new location. In this way the density of lines is determined by local intensity.
- Of course that is all rather messy. I'm sure there is a much neater way to draw 2-D magnetic field lines, but I don't recall what it is at the moment. Something like the level sets of the z-component of the magnetic vector potential, perhaps. Dragons flight (talk) 22:44, 11 February 2010 (UTC)
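The stepping scheme described above can be sketched as follows (Python; the field of an infinite straight wire is used as a stand-in test field, since its field lines are known to be circles, and any computed solenoid field B(x, y) could be substituted):

```python
import math

def wire_field(x, y):
    """Illustrative 2-D test field (an infinite straight wire along z);
    its field lines are exact circles, so tracing errors are easy to see."""
    r2 = x * x + y * y
    return -y / r2, x / r2

def trace_field_line(bfield, x0, y0, step=1e-3, n_steps=6283):
    """Forward-Euler tracing as described above: repeatedly move a
    fixed small distance along the local field direction."""
    x, y = x0, y0
    path = [(x, y)]
    for _ in range(n_steps):
        bx, by = bfield(x, y)
        mag = math.hypot(bx, by)
        x += step * bx / mag      # normalized step along the field
        y += step * by / mag
        path.append((x, y))
    return path

# Trace one loop of the unit circle; 6283 steps of 0.001 ~ 2*pi of arc.
path = trace_field_line(wire_field, 1.0, 0.0)
```

With plain Euler stepping the traced circle drifts slightly outward on every step (by roughly step^2/2r), so for cleanly closed lines you need a smaller step or a higher-order integrator.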
- The trouble is that numerical integration may be unstable - i.e. your result if you simply apply Euler integration may not converge, and your magnetic field lines will not close themselves (behaving as though ∇·B ≠ 0, violating Maxwell's equations). You may need some fancier integration math. In fact, this is a known problem for integration of paths along 1/r^2 vector fields - even though the solution is analytically stable, its numerical approximation by the Euler method is not. We discussed this a while back but I can't find it in the archives. Nimur (talk) 23:05, 11 February 2010 (UTC)
- August 9, 2009 - regarding planetary dynamics. Again, here is a nice demo of the failure to converge in even simple problems. It can be proven that the integral of certain 1/r^2 vector fields will not converge for any step-length. I'm not 100% certain if that's the case for the magnetic field setup Cuddlyable has started with, but it's an issue to be aware of. Nimur (talk) 23:10, 11 February 2010 (UTC)
- For this problem I hope to develop a pixel-by-pixel algorithm similar to Bresenham's line algorithm. Cuddlyable3 (talk) 13:58, 12 February 2010 (UTC)
- There is now a program at Wikimedia Commons which produces field-line plots. It uses the Runge-Kutta method. See VectorFieldPlot. Geek1337 (talk) 17:33, 12 June 2010 (UTC)
Square Wave
Imagine a generator produces a square wave. Does the signal switch from no voltage to a certain voltage, or does the voltage go from positive to negative? —Preceding unsigned comment added by 173.179.59.66 (talk) 23:00, 11 February 2010 (UTC)
- That is a matter of preference; it's a specification of the type of square wave. Technically, that would be called the DC bias: if the signal switches from +MAX to -MAX with 50% duty cycle, it has 0 DC bias. If it switches from 0 to +MAX, then it has MAX/2 dc bias. Nimur (talk) 23:17, 11 February 2010 (UTC)
- It is unnecessary to say 50% duty cycle because that is in the definition of a square wave (as opposed to a general rectangular wave). Cuddlyable3 (talk) 23:41, 11 February 2010 (UTC)
- (I was being pedantic to preempt any nitpickers, but I failed!) Nimur (talk) 00:11, 12 February 2010 (UTC)
- Has anyone seen a nitpicker here? File:Ape shaking head.gif Cuddlyable3 (talk) 00:22, 12 February 2010 (UTC)
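Nimur's bias figures can be checked by time-averaging sampled waveforms, since the DC bias is just the mean value over one period (a minimal Python sketch with MAX = 1 V):

```python
# One period of each square wave, sampled at n points, 50% duty cycle.
n = 1000
square_0_to_max = [1.0 if k < n // 2 else 0.0 for k in range(n)]    # 0 to +MAX
square_symmetric = [1.0 if k < n // 2 else -1.0 for k in range(n)]  # +-MAX

print(sum(square_0_to_max) / n)    # 0.5: DC bias of MAX/2
print(sum(square_symmetric) / n)   # 0.0: zero DC bias
```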
Okay cool, that makes sense. Another question: imagine an RC circuit is attached to a square-wave generator (as was the case in my signal processing lab). Would having a DC bias affect the change in current when the voltage switched from max to min? I would guess not, because the lab manual doesn't mention DC bias, but I can't see why it would be true. —Preceding unsigned comment added by 173.179.59.66 (talk) 03:09, 12 February 2010 (UTC)
- You guess right. The DC bias is there "all" the time, which is much longer than the time for the voltage on the capacitor C to settle. Cuddlyable3 (talk) 13:51, 12 February 2010 (UTC)
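The answer above can be checked numerically: only the voltage swing sets the current step, not the bias. A minimal forward-Euler sketch (Python; the 1 kOhm / 1 uF values and the voltage levels are arbitrary illustrations, not taken from the lab in question):

```python
R, C = 1e3, 1e-6              # 1 kOhm, 1 uF -> tau = 1 ms (arbitrary values)
dt = 1e-6                     # simulation step, much smaller than tau
steps_half = 10000            # half-period 10 ms = 10*tau: vc fully settles

def peak_current(v_lo, v_hi, half_cycles=6):
    """Forward-Euler simulation of a series RC driven by a square wave;
    returns the current magnitude just after the final low->high step."""
    vc = v_lo                 # start settled at the low level
    peak = 0.0
    for k in range(half_cycles * steps_half):
        vin = v_hi if (k // steps_half) % 2 else v_lo
        i = (vin - vc) / R    # current through R charges C
        vc += i * dt / C
        if k >= (half_cycles - 1) * steps_half:   # final (high) half-cycle
            peak = max(peak, abs(i))
    return peak

print(peak_current(-1.0, 1.0))   # ~0.002 A for a zero-bias +-1 V wave
print(peak_current(0.0, 2.0))    # ~0.002 A for the same 2 V swing, 1 V bias
```

Both calls return the same peak current: the capacitor settles to the low level before each transition, so a constant bias only shifts the capacitor voltage and leaves the 2 V step, and hence the current jump, unchanged.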
Possible to make a small quantity of harmless "smoke" for assessment of PC airflow?
Is it possible to safely produce a quantity of smoke/condensed vapour as might come from dry ice, so that it may be sucked into my PC and observed from a window in the side? Dry ice and smoke machines aren't really options. Would one of these ultrasonic foggers work without potentially damaging the computer? If the computer has been running continuously and is therefore warm, I figure that the vapour ought not to condense on anything. ----Seans Potato Business 23:47, 11 February 2010 (UTC)
- Maybe try the canned smoke used for testing smoke detectors. I have no information on whether that will do anything bad to the internals of the PC. Don't use a (hot) fogging machine - the fog is made from oil, which will coat your computer. But water poured on top of dry ice will work. Ariel. (talk) 23:53, 11 February 2010 (UTC)
- Maybe what you need is something called a "smoke pencil". Look it up and see if it's suitable for your purpose. --173.49.16.103 (talk) 00:34, 12 February 2010 (UTC)
- Careful - some smoke pencils use glycol or glycerin (a kind of oil). I wouldn't put it in a computer, particularly near the hard drives. Ariel. (talk) 05:44, 12 February 2010 (UTC)
- A stick or two of incense?--Shantavira|feed me 08:59, 12 February 2010 (UTC)
- Buy a cigarette. One won't harm you! —Preceding unsigned comment added by 79.76.205.40 (talk) 23:30, 12 February 2010 (UTC)
- ...except the one that gives you cancer. As well as heart disease, stroke, and other circulation problems. 89.240.201.172 (talk) 12:41, 15 February 2010 (UTC)
February 12
Apollo 13 and teflon
The article on Apollo 13 says:
"...Damaged Teflon insulation on the wires to the stirrer motor in oxygen tank 2 allowed them to short and ignite the insulation. The resulting fire ..."
"...power passed through the bare wires which apparently shorted, producing sparks and igniting the Teflon."
First of all, teflon will not burn, even in an oxygen atmosphere. So that can't be true.
Now, while teflon does decompose at about 350 °C, the analysis said that:
"... This raised the temperature of the heater to an estimated 1,000 °F (540 °C)."
So if the teflon was going to do anything, it already did so, and a spark is not going to have any additional effect.
So what actually happened? A short maybe? I have a hard time believing they had no fuses though. Ariel. (talk) 00:00, 12 February 2010 (UTC)
- NASA Spacecraft Incident Investigation, Panel 1, MSC Apollo 13 Investigation Team (June 1970) report states: "The first short-circuit might have contained as much as 160 joules of energy, which is within the current-protection level of the fan circuits. Tests have shown that two orders of magnitude less energy than this is sufficient to ignite the polytetrafluoroethylene insulation on the fan circuits in the tank." Your assertions about Teflon do not match the statements made by the official accident report, or any of its followups. As far as circuit-breakers or fuses, as just cited, the energy was sufficiently low as to not trip a fuse or over-current sensor, but still enough to ignite the fire. If you're interested in details, these accident reports are a great place to start - they are publicly available in their entirety, and you can see all the ground test data, recorded telemetry and spacecraft measurements, and speculations to fill in the rest of the gaps. Some amount of uncertainty will forever plague all accident investigations, but NASA spent a lot of effort tracking down what happened here, and had the good forethought to instrument everything possible on their Apollo spacecraft to assist in diagnostics. Nimur (talk) 00:16, 12 February 2010 (UTC)
- Well, that calls for a test. So I took some teflon and put it in a fire on my stove - and it doesn't burn (it does decompose and glow, but it doesn't burn). Obviously they said what they said, but it doesn't make sense. Just by looking at the chemistry of teflon you can see that trying to burn it consumes energy, it doesn't release it. And if there was anything flammable there (besides the teflon), why didn't it burn when the tank was heated to 1000 °F? Ariel. (talk) 00:27, 12 February 2010 (UTC)
- I wonder why you've completely ignored the previous answer by Nimur which says that Teflon was not involved? 89.243.182.24 (talk) 00:50, 12 February 2010 (UTC)
- Huh? Where did he say that? If teflon was not involved, that would mostly answer my question - but he didn't say that. Perhaps you don't know that polytetrafluoroethylene and teflon are the same thing. Ariel. (talk) 01:04, 12 February 2010 (UTC)
- My apologies for misunderstanding. Sorry. 92.29.136.128 (talk) 15:33, 12 February 2010 (UTC)
- That's right - PTFE is teflon. As far as your experiments, I hardly advocate burning teflon, or even trying - because it can release some really toxic fumes - but if you're going to test this, you need to get yourself a supercritical oxygen tank at high pressure to replicate the conditions at hand. I've said it before and I'll say it again (only lightly exaggerated) - everything burns in the presence of strong oxidizer. Do you want to look at the room-temperature chicken combustion video again? Now imagine that oxidizer at thousands of psi instead of atmospheric pressure. Strong oxidizers have very unusual, unintuitive chemical behaviors. At some point, after igniting the teflon insulation, it seems that the tank walls also ignited. It may seem "impossible" to you, but this is a major problem for spacecraft or any other oxidizer-carrying tank. When you have 100% cryo oxygen at high pressure, solid steel burns like gasoline. All it takes is a small spark. Nimur (talk) 01:08, 12 February 2010 (UTC)
- You are right that I need an oxygen environment to test this properly. But you are ignoring something: teflon will not burn! Teflon is carbon burned with fluorine. Oxygen will do nothing to it. Fluorine is much more electronegative than oxygen (4 vs 3.44). Ariel. (talk) 01:14, 12 February 2010 (UTC)
- (ec)It occurs to me that your argument is addressing whether the oxygen would replace fluorine, but it doesn't address whether oxygen would attack the C-C bonds. Why shouldn't (CF2)n react with oxygen, giving carbonyl fluoride? I'm not a chemist and this could easily be nonsense, but if it is, I'd like to know why it's nonsense. --Trovatore (talk) 01:27, 12 February 2010 (UTC)
- The exothermic reaction is replacing the carbon-carbon bonds with carbon-oxygen bonds. You can do that quite easily with liquid oxygen, but it generally doesn't happen spontaneously in air because the rate at which oxygen gets around the fluorine to attack the carbon backbone is too low at atmospheric pressures. Dragons flight (talk) 01:23, 12 February 2010 (UTC)
- So much for my being sure teflon will not burn. Thank you (both of you). Ariel. (talk) 01:35, 12 February 2010 (UTC)
- Maybe the teflon is not 100% PTFE - it may contain any number of binders, polymers, adhesives, etc. In any case, unless you have a sample of the exact material used on the Apollo 13 Service Module, (which WAS tested by NASA, and DID burn in post-accident tests), your test results are of limited use for comparison. As the accident report spells out, the teflon (or whatever the insulation was actually made of) burned for about 80 seconds; after this, the inconel began to burn (yes, that's not something we typically think of as flammable, but it did burn), and ultimately, the vacuum dome failed structurally (a combination of burnthrough and overpressure) - even though the heat was insufficient to over-pressure an intact tank, the structurally damaged tank was less sturdy and may have had a gas leak as a result of the fire at the insulator entrypoint to the tank interior. Nimur (talk) 01:19, 12 February 2010 (UTC)
- Page 82 of the accident report I linked above goes through a very rigorous breakdown of every likely, possible, and unlikely burn source, ignition source, mechanical stress, etc. If you're unconvinced of the teflon burning, there are dozens of other "unlikely" options to choose from, and a few "possible" options; it will be productive to read through that report and bring yourself up to speed on what NASA concluded was possible/probable. These guys are counting the number of milli-Joules of energy available from every possible source - chemical, mechanical, propulsion, RF... in order to account for every wobble measured by the spacecraft's roll, pitch, and yaw sensors. If you're absolutely sure that teflon is chemically inert, you'd best check your numbers against the extraordinarily thorough accident report. After Apollo 13's disaster, NASA sunk a lot of man-hours into this investigation. Nimur (talk) 01:24, 12 February 2010 (UTC)
- I wasn't trying to challenge, I was trying to understand. And yes, I'm quite sure teflon is inert. I'll have to read the report when I have a bit more time. When I asked this I thought it was an error in the article, not something from the report. Ariel. (talk) 01:30, 12 February 2010 (UTC)
- I hope I'm not sounding dismissive. It's good that you're scientifically skeptical. I wouldn't expect you to take NASA's word on faith alone - but the report really is a good read. It will give you some perspective about how many decimal-points-after-the-zero they investigated. For example, the spacecraft was observed from at least three independent ground observatories (optical telescopes!) who measured a brightness change corresponding to the nebulous cloud of oxygen that was vented; from this, a rate and quantity of vented oxygen were calculated. These images were compared with a 1/6th scale model which was suspended and photographed with a variety of different damage simulations. The strength of the RF signal from the high gain antenna (destroyed) was used to tweak the numbers on the roll and pitch rate and extrapolate data following the loss of telemetry. I would hope that with all this diligence, with so much investigation into miniscule details, that something as obvious as "that material wouldn't burn" has not been overlooked. If it does turn out that teflon will not burn under any such circumstances, then I would apply Occam's razor and suggest that the terminology "teflon" refers instead to "a mostly teflon-based insulator which contains other materials, that, as a whole, can burn, and burns with a measured rate confirmed by ground experiment." Nimur (talk) 01:41, 12 February 2010 (UTC)
- Turns out I was wrong about teflon not burning. (See higher up in this question.) I still might read the report. Ariel. (talk) 01:47, 12 February 2010 (UTC)
Time Travel question
I was booking a flight recently and I noticed that on one of the sites I could pick a return flight that was earlier than my departing flight. This seems to create a paradox, in that I can return before I depart and prevent myself from departing in the first place. (Dr hursday (talk) 00:07, 12 February 2010 (UTC))
- What exactly is your question? If you are wondering how this could be, it is because of time zones. For example, 12pm in the eastern US is exactly equivalent to 5pm in England - they are the same time. Things going on in New York at this time are not occurring before those in London, it is just the local expressions of time being normalized to the daylight. (I renamed the section since there has already been a time travel section recently) —Akrabbimtalk 00:16, 12 February 2010 (UTC)
- Isn't the answer simply that flight booking software is stupid? Dragons flight (talk) 01:03, 12 February 2010 (UTC)
- To avoid confusion, flight depart/arrive times are always printed in local time. So, I wouldn't call it stupid. It is just to avoid confusion. -- kainaw™ 01:19, 12 February 2010 (UTC)
- I'm not sure what the original post is referring to, but I took it to mean that when taking a return flight on the same day as the outgoing flight, the user may be presented with flights that are genuinely earlier than the original flight arrives. I've seen this happen with day trips on regional flights (i.e. my intent is to fly into LA in the morning and fly out in the evening, but the software will happily inform me about the very cheap flights leaving LA at 6 AM even though my incoming flight doesn't get there until 9 AM.) Dragons flight (talk) 05:02, 12 February 2010 (UTC)
- Oh, on second read, it seems you are right. Never mind my answer then. —Akrabbimtalk 15:09, 12 February 2010 (UTC)
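Akrabbim's time-zone point is worth illustrating even though, per the later replies, it turned out not to be what the questioner actually saw: crossing time zones really can make a flight "land" at a local clock time earlier than its local departure time, with no paradox in absolute terms. A minimal sketch (the 4-hour duration is an illustrative assumption, not a real flight):

```python
# Westbound London -> New York: local arrival time can precede local
# departure time even though the flight moves forward in UTC.
from datetime import datetime, timedelta, timezone

LONDON = timezone(timedelta(hours=0))     # GMT (winter)
NEW_YORK = timezone(timedelta(hours=-5))  # EST

depart = datetime(2010, 2, 12, 10, 0, tzinfo=LONDON)  # 10:00 local, London
arrive = depart + timedelta(hours=4)                  # hypothetical 4-hour hop
arrive_local = arrive.astimezone(NEW_YORK)

print(depart.strftime("%H:%M"))        # 10:00 -- departure, local time
print(arrive_local.strftime("%H:%M"))  # 09:00 -- arrival, local time
assert arrive > depart                 # forward in absolute time: no paradox
```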
Moon Question
astronaunts traveling to the moon in a spacecraft that travels at the rate of 6,260 km per hour, at this rate, how long will it take for the astronaunts to reach the moon, while assuming a straight line and average distance. —Preceding unsigned comment added by 72.85.214.110 (talk) 00:29, 12 February 2010 (UTC)
- Sorry, we don't do homework questions. APL (talk) 00:57, 12 February 2010 (UTC)
- Two options:
- find out how far the Moon is away (look it up; try Moon) and find out (look it up; your math(s) teacher may be available) how to calculate a time from a velocity over a given distance.
- Give the answer "about 162 years" (then complain about the complete uselessness of Wikipedia as a research tool).
- By the way, it's "astronauts". An astronaunt is your mother's cosmic sister. Tonywalton Talk 02:09, 12 February 2010 (UTC)
- Well, if the Moon were 12,520 km away (2 X 6,260) it would take about 2 hours to get there, assuming we didn't stop too many times at rest areas for peanut butter and jelly sandwiches. How far away is the Moon? Answer: I don't know. But if we could find out how far away the Moon is, we could divide that distance by 6,260 (assuming that distance were expressed in units of kilometers). The answer (to that calculation) will be the number of hours (possibly including a fraction of an hour) of travel time. What we really need to know is how many kilometers the Moon is away as an average distance. Bus stop (talk) 02:25, 12 February 2010 (UTC)
- While we're at it, assuming a straight line and constant velocity for travel from the earth to the moon is a bad approximation - even if it is the homework assignment. Orbital mechanics makes such a trajectory very difficult - not impossible, but very difficult to achieve in practice. Nimur (talk) 04:07, 12 February 2010 (UTC)
- Don't let these smart people and their sandwiches distract you. You will get the answer by dividing one number by another number, and you already have one of the numbers. Good luck. Cuddlyable3 (talk) 13:35, 12 February 2010 (UTC)
- It will depend on where you are. If you are sitting on the launchpad, then you would never get there, and if you were 2 miles from the moon, you will reach it momentarily. Even if we wanted to answer this HW question for you, we don't have enough information. Googlemeister (talk) 13:59, 12 February 2010 (UTC)
- The indicated speed would be under the Earth's escape velocity assuming that the spacecraft is travelling at a near-constant velocity, and you'd never get off the Earth. To escape from the Earth's gravity requires a speed of over 40,000 km/h. ~AH1(TCU) 23:44, 13 February 2010 (UTC)
- No, the OP specified the velocity, not that it was a ballistic trajectory after a rocket motor had shut off. Perhaps God or Xenu or Lex Luthor is maintaining the spacecraft at a constant velocity from the beginning to the end of the voyage. The article Moon confuses the issue. Did the astronaunts depart from the center of the Earth to go to the center of the Moon? Then they have to travel 384,403 km. If they left from the surface of the Earth to travel to the surface of the Moon, then the distance will be decreased by the mean radius of the Earth (6,371 km) and the mean radius of the Moon (1737 km). If they departed from the highest or lowest point on the Earth's surface and arrived at the highest or lowest point on the Moon's surface, that would require a small correction. Edison (talk) 05:03, 14 February 2010 (UTC)
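The arithmetic the thread dances around can be sketched in a few lines, using the mean distance and radii Edison quotes and the questioner's constant 6,260 km/h (a straight line at constant speed, which Nimur notes is not how real trajectories work):

```python
# Hedged back-of-envelope: travel time = distance / speed, using
# Edison's figures for the mean Earth-Moon distance and the two radii.
MEAN_DISTANCE_KM = 384_403
EARTH_RADIUS_KM = 6_371
MOON_RADIUS_KM = 1_737
SPEED_KMH = 6_260

center_to_center_h = MEAN_DISTANCE_KM / SPEED_KMH
surface_to_surface_h = (MEAN_DISTANCE_KM - EARTH_RADIUS_KM - MOON_RADIUS_KM) / SPEED_KMH

print(round(center_to_center_h, 1))    # 61.4 hours, centre to centre
print(round(surface_to_surface_h, 1))  # 60.1 hours, surface to surface
```

Either way, the answer is roughly two and a half days, not 162 years.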
Capacitors
Yes, this is a homework question, but I have no idea where to even begin. Can someone at least point me in the right direction and give me a few useful equations please?
Two 13.0 cm -diameter electrodes 0.59 cm apart form a parallel-plate capacitor. The electrodes are attached by metal wires to the terminals of a 15 V battery. What is the charge on each electrode while the capacitor is attached to the battery? --24.207.225.13 (talk) 01:26, 12 February 2010 (UTC)
- This is nothing that Capacitor doesn't answer. It really isn't all that complicated, so giving you an equation directly would practically be akin to doing the problem for you. —Akrabbimtalk 01:29, 12 February 2010 (UTC)
- First, you need to figure out the capacitance of the capacitor from its geometry and the dielectric constant of the material separating the plates. In a steady state, the voltage across the terminals of the capacitor must equal that of the battery—that's how you get zero current flow in a steady state. With that knowledge, you can use the relationship between stored charge and voltage in a capacitor to work out the answer you're looking for. --173.49.16.103 (talk) 02:05, 12 February 2010 (UTC)
- You will certainly be expected to assume for simplicity that the area of the plates is so much larger than the space between them that edge effects can be ignored. Treat the electrical field as though it is uniform over the whole plate area. Cuddlyable3 (talk) 13:30, 12 February 2010 (UTC)
- The charge on one electrode is minus the charge on the other electrode.. —Preceding unsigned comment added by 79.76.205.40 (talk) 23:33, 12 February 2010 (UTC)
- What tickles my fancy is what happens to the voltage across the capacitor if the battery is disconnected after charging the capacitor, and the spacing between the plates is changed to 10 times or 1/10 the original spacing. Quite amazing, and there are demo devices which do this. (Assume charge is conserved, since there is no path for current between the plates). Edison (talk) 04:50, 14 February 2010 (UTC)
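For readers of this archive: the replies above point at the standard parallel-plate relations (capacitance from geometry, then charge from voltage) without stating them. A minimal worked sketch with the question's numbers, assuming air between the plates and ignoring edge effects as Cuddlyable3 suggests:

```python
# Parallel-plate capacitor: C = eps0 * A / d, then Q = C * V.
import math

EPS0 = 8.854e-12          # F/m, vacuum permittivity
radius = 0.130 / 2        # m, from the 13.0 cm diameter
d = 0.0059                # m, plate separation
V = 15.0                  # volts

area = math.pi * radius**2
C = EPS0 * area / d
Q = C * V

print(f"C ~ {C*1e12:.1f} pF")   # 19.9 pF
print(f"Q ~ {Q*1e9:.2f} nC")    # 0.30 nC on each plate (+Q and -Q)

# Edison's follow-up: disconnect the battery (Q now fixed) and change
# the spacing. Since C ~ 1/d and V = Q/C, the voltage scales with d.
V_at_10x_spacing = Q / (EPS0 * area / (10 * d))
print(round(V_at_10x_spacing, 1))  # 150.0 -- ten times the original voltage
```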
mic
i wanna know how does a mic work? or just tell me how to change sound energy into electrical energy. does a mic contain a piezo crystal (a crystal that can change pressure into electricity)? can't this idea be used to change the sound energy around us into electrical energy - put mics on the muffler tip of a car or in front of a horn? - answer my first question, don't start telling me about the piezo crystal! i don't want to know about it now - §§§§§§§
- Have you checked microphone? -- kainaw™ 03:25, 12 February 2010 (UTC)
- Also, microphones do not create energy. They alter an electrical signal according to sound waves. -- kainaw™ 03:27, 12 February 2010 (UTC)
- Some microphones use piezoelectric crystals, but others use a solenoid or a mass-spring system, and still others use a capacitor diaphragm. Each of these has advantages and disadvantages; each works according to a different principle; but in general the idea is to convert pressure changes in the air (sound waves) into a time-varying electric signal. Most microphones also need an amplifier to boost and condition that electric signal for later use and eventual playback. Nimur (talk) 04:11, 12 February 2010 (UTC)
- Kainaw: Some microphones do create electricity. You can use a mic to generate electricity from the sound around you - but not very much. It's not enough to be worth it. If you scream into headphones you can measure electricity in the jack. Ariel. (talk) 04:22, 12 February 2010 (UTC)
- Yes, they all create electricity, but they do not create energy. Kainaw was referring to conservation of energy; it's unlikely that a microphone could harness waste energy from a car's exhaust or air-drag in any meaningful or productive way. Nimur (talk) 04:40, 12 February 2010 (UTC)
- Mics do harness energy, i.e. convert energy from sound to electricity. I assumed Kainaw was saying mics only vary resistance (meaning they must be externally powered), and I was saying some can generate their own signal. Maybe that's not what he was saying, sorry if I misunderstood him.
- Anyway, the energy they convert is in tiny amounts. Mainly because the mic is very small. Wave power is exactly the same thing, except with water instead of air. Ariel. (talk) 05:35, 12 February 2010 (UTC)
- See also Carbon microphone which works on the principle of slightly varying resistance, so requiring a current to be supplied to them to work. —220.101.28.25 (talk) 07:10, 12 February 2010 (UTC)
- In every case a microphone absorbs some energy from the sound. Yes, microphones could extract tiny amounts of power (energy) from noisy parts of cars. The power would be tiny but it could be maximised if the microphone resonated at a frequency related to the engine r.p.m. Similar attempts are made to extract power from a radio antenna. Cuddlyable3 (talk) 13:23, 12 February 2010 (UTC)
- The complete answer here is that yes, it should be possible to harness energy from sound as you describe...and yes, with some (but not all) kinds of microphone, you could do that. However, the amount of energy would be so amazingly tiny - and the efficiency with which it could be converted would be so poor that it would certainly not be worth the effort to do so. There just isn't much energy there to be had in the first place. SteveBaker (talk) 17:13, 12 February 2010 (UTC)
- Thomas Edison in the 1870s built a device called the "Phonomotor," which collected sound energy in a horn, caused a diaphragm to vibrate, and used the vibration to operate a fine gear train via a ratchet to create rotary motion. It got some press around the world, with humorous comments about the power of a mother-in-law's voice to bore a hole in a piece of wood, but had no practical application. (Maybe people lost interest because it was a "boring machine"). If you had a device which converted sound to electricity with 100% efficiency, and it had a 1 square meter collector, it would require 138dB (deafeningly loud) to power a 60 watt light bulb, per [14], which says that would require the combined voices of a million people. Edison (talk) 03:01, 13 February 2010 (UTC)
- Strategically placed Phonomotors could convert noise pol-pol-pol-pollution into useful energy. Cuddlyable3 (talk) 18:08, 13 February 2010 (UTC)
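Edison's 138 dB figure checks out against the standard sound-intensity formula, which converts a decibel (SPL) level back to watts per square metre via the usual 1e-12 W/m² reference intensity:

```python
# Sound intensity from a dB level: I = I0 * 10**(dB/10).
I0 = 1e-12  # W/m^2, standard reference intensity

def intensity(db):
    return I0 * 10 ** (db / 10)

print(round(intensity(138), 1))       # 63.1 W/m^2 -- roughly a 60 W bulb per square metre
print(round(intensity(60) * 1e6, 1))  # ordinary conversation: about 1 microwatt per m^2
```

The six orders of magnitude between conversation and 138 dB is why harvesting ambient noise is not worth the effort, as SteveBaker says above.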
Sunscreen and acne
Does sunscreen increase acne? My dermatologist told me not to use sunscreen, and he said that it increases acne. I don't want to seek any medical advice; I just want to know the scientific basis behind this claim and the physiological mechanism through which sunscreen increases acne. --Qoklp (talk) 04:09, 12 February 2010 (UTC)
- Oil-based sunscreens can increase acne. It isn't complicated. Putting oil on your face can increase acne if you are already prone to it. There are many oil-free sunscreens that will cause a noticeably smaller increase in acne than the oil-based ones. How your specific face will react to any specific sunscreen will not be known until you try it. -- kainaw™ 04:13, 12 February 2010 (UTC)
- The term to look for is "noncomedogenic". (Hmm, it looks like we don't have that page yet. Okay, try Comedogenic instead.) -- 174.21.247.23 (talk) 04:17, 12 February 2010 (UTC)
- Hmm, Comedogenic redirects to Acne cosmetica. Perhaps Noncomedogenic and Non-comedogenic should as well. 58.147.58.179 (talk) 04:42, 12 February 2010 (UTC)
Dr Kevorkian
where can i watch the video of when he videotaped himself performing a lethal injection on some old dude on 60 Minutes? i tried google. after it aired, he was charged with and convicted of second-degree murder. —Preceding unsigned comment added by 67.246.254.35 (talk) 05:24, 12 February 2010 (UTC)
- The 60 Minutes video and reactions from the MDA and in comedy(1), comedy(2) Cuddlyable3 (talk) 13:10, 12 February 2010 (UTC)
Scar removal/ Erase Scars
Hi, not medical advice, but is it absolutely impossible to remove a scar?
A while ago, about ten years back, I went to touch a pie, but the filling got stuck on my hand; when my hand jerked back as a natural reaction, the filling was flung onto my face, leaving a small scar.
I was wondering if there is any way to remove a scar in the future, or will it be impossible forever?
Also, what causes scars to be scars, and why doesn't normal skin form after an injury?
- Scar should contain an explanation for why you get scars and not normal skin. There are certainly methods to reduce scarring, but I believe it's on a case-by-case basis - your doctor will certainly be able to give you the information you need. Vimescarrot (talk) 06:46, 12 February 2010 (UTC)
- You already asked about scar removal on the 8th. Was there a problem with TenOfAllTrades's answer? You're certainly not going to get anyone here to comment on your specific case. APL (talk) 08:09, 12 February 2010 (UTC)
About burden of proof and scientific method
Burden of proof#Burden of proof in epistemology and scientific methodology says that it is "the responsibility of the person who is making the bold claim to prove it".
On the other hand, Scientific method#Introduction to scientific method says that "Note that this method can never absolutely verify (prove the truth of) 2. It can only falsify 2.[8] (This is what Einstein meant when he said "No amount of experimentation can ever prove me right; a single experiment can prove me wrong."[9])"
It seems to me that there is a mismatch between the two. Like sushi (talk) 08:22, 12 February 2010 (UTC)
Your confusion stems from two different interpretations of the word "proof". In the first sentence, it doesn't mean "proof" in the absolute, mathematical sense, but rather "provide sufficient evidence for his claim". On the other hand, a single experiment can indeed disprove (in the absolute mathematical sense) a hypothesis. (note that theories that have been proven wrong in this way do not always lose their usefulness, not if they were good theories that had lots of "proof" in the first place, classical physics being the prime example of this) —Preceding unsigned comment added by 157.193.173.205 (talk) 08:31, 12 February 2010 (UTC)
Thank you. Then it is that the defender must provide enough evidence, while the accuser must disprove it (in the absolute sense). Like sushi (talk) 09:00, 12 February 2010 (UTC)
- You might also be interested to know that when a "single experiment" appeared to prove Einstein wrong (I think it was some measurement of the transit of Mercury, or something), he wrote, and I quote loosely from memory: "I can't understand why some people are so impressed by these measurements and figures; the elegance of the theory is enough to convince me it must be right." And of course, we now know Einstein was right and the measurement was in some way wrong, but at the time it was a serious result that a lot of serious scientists thought meant serious trouble for relativity. That is to say, it's never really very clear-cut what constitutes a disproof and what can be simply brushed aside, and many scientific theories have been abandoned without ever being "absolutely" disproved. There are also any number of other possible complications that violate supposedly sacrosanct principles of scientific inquiry: sometimes an experiment is so expensive it can't be freely reproduced elsewhere (detecting neutrinos with a giant underground tank); sometimes the mathematics is so dense that even many experts get it wrong or just take it on faith (as in the early days of general relativity, and possibly string theory today?), etc. etc. All in all: any very broad generalization in the philosophy of science should be taken with a pinch of salt. But they can still serve as useful guides.--Rallette (talk) 10:01, 12 February 2010 (UTC)
- (edit conflict) When someone makes a claim to truth, it is their responsibility to offer some evidence or justification that makes the claim seem reasonable - that is the 'burden of proof' in the first case. it doesn't mean they have to 'definitively' prove it (in fact, it's impossible to definitively prove anything outside of pure mathematics), it just means they have to do something more than say it's true, so that other people might be convinced by it. --Ludwigs2 10:06, 12 February 2010 (UTC)
- "a pinch of salt", and "so that other people might be convinced by it".
- How can I become able to say so? I seem to have to learn a lot to convince other people, and the right amount of salt is so difficult to judge (I still don't have anything to convince other people of, in the first place). Well, it is just a useless complaint, never mind!
- Like sushi (talk) 10:45, 12 February 2010 (UTC)
- Like sushi, yours is not a useless complaint. "How can I be able to say (that the person has provided sufficient evidence for his claim)" is a question that deserves an answer. In order of priority the claim should 1) be demonstrable by repeatable experiments with unambiguous interpretations, 2) have an impact on the current theories in an explicable way, 3) add to the total understanding of the field and 4) achieve this in a way that is "elegant" in the sense of Occam's razor. Cuddlyable3 (talk) 12:42, 12 February 2010 (UTC)
- Rallette's response is absolutely excellent. These are broad guidelines that serve to shape some kinds of activity and belief, but are more often applied in retrospect than they are at the time. We tell very nice compact histories of science in which a single experiment disproves a century's worth of theory, but this is the exception, not the rule, and even when that does happen, it can take a long, long time for everybody to agree that is what happened. Example: August Weismann cuts the tails off rats, disproves Lamarckism, etc. Problem: True Lamarckians would never have thought that what Weismann was doing actually violates Lamarckism—it only falsifies a very straw-man version of the theory. Example: John Snow shows with a single map that cholera is obviously water-borne. Problem: Almost no doctors actually believed that Snow's experiment was correct, and after a century of coming up with other complicated explanations for cholera (miasma, morality, etc.), they dismissed Snow's theory and evidence very easily. It took a good generation for the turn-over in thinking about cholera to happen, great evidence be damned.
- All of which brings up the quotation of Max Planck: "A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." None of which says that falsification is incorrect or that the burden of proof shouldn't lie with those making the stronger claim. But in practice, disagreement about whether something has truly been falsified, and the question of who is making the stronger claim, are both somewhat subjective matters. Scientists are not pure logic in vats—they are human beings, organized into professional communities, and bear all of the handicaps of psychology and sociology just like the rest of us. --Mr.98 (talk) 13:22, 12 February 2010 (UTC)
- I agree that there are two meanings of the word 'proof' being used here. If you have a totally wild and crazy hypothesis - then it is without doubt your responsibility to provide enough proof to convince other people to take you seriously. So, for example, if you claim that you can teleport objects across the room with the power of thought alone - then serious scientists are not even going to bother to talk to you until you can provide a pretty serious demonstration of that happening. But if you are an established astronomer and you claim to have found yet another extrasolar planet orbiting some distant star - then there would be only a small amount of skepticism and you could probably get other astronomers to double-check your results so you can publish your findings in a serious journal and have them come to be accepted as fact. A better way to think about this is "Extraordinary claims require extraordinary evidence". The more outrageous the thing you're claiming, the harder it is to get the attention of others and the better the evidence you have to provide. But this doesn't have to be mathematically perfect proof. If you walk into the office of a serious scientist and demonstrate your ability to teleport a pencil from one end of his desk to another - then that's probably enough proof to get him interested enough to do a controlled experiment and either prove or disprove it to a higher standard.
- A practical example was the case of the Mpemba effect - which is the very odd finding that cold water freezes more SLOWLY than warm water. This is a pretty outrageous claim - and worse still, it was made by a high school student in Tanzania. It took six years of work just to get that result published - and it's still not widely accepted.
- On the other side though - it is true that there is very rarely absolute proof of anything. Einstein was right - it would only take one scientist to demonstrate how to make a golfball travel 10% faster than the speed of light to demolish relativity theory overnight. However, each experiment that confirms relativity pushes it asymptotically towards complete acceptance. But such demolitions do happen. Prior to Einstein, everyone assumed that Newton's centuries-old laws of motion were correct - but one experiment which established the constant speed of light (plus a good deal of clever thinking on behalf of Einstein) was all it took to demolish it utterly in a matter of just a few years. Newton's laws were never proved conclusively - they never could be, because at any time one more experiment could bust them wide open.
- However, what gives us confidence in well-established laws (Thermodynamics, Evolution, etc) is that if they are wrong - then just like Newton's laws of motion, they can only be a little bit wrong over the domain that they've been tested. Maybe thermodynamics breaks down inside a black hole - maybe evolution turns out not to be true amongst yet-to-be-discovered life under the ice of Europa - but that doesn't make them useless. Sure, Newton was wrong - but he was sufficiently right to allow almost all of modern technology to successfully rely on his laws.
- SteveBaker (talk) 17:05, 12 February 2010 (UTC)
- Strictly speaking, a golf ball traveling at 1.1c would not disprove relativity (it requires infinite energy to accelerate a massive particle from sub-light speed to the speed of light, but there is no instantly obvious reason it couldn't always have gone faster than the speed of light). But it would have other highly problematic consequences, assuming relativity still worked. The one you seem to be hung up on is that the usual formulas would imply that its rest mass was an imaginary number (assuming its energy is a real number) — that one doesn't much bother me; the derivations for the energy-mass relationship don't go through above the speed of light anyway, or at least none of the ones I've seen go through, so maybe it's just a different equation in that regime.
- The more serious consequence is that if you could modify the behavior of such golf balls, and detect the effect of someone else's modification, and assuming relativity were still true (so that all this worked the same in every reference frame), then you could send signals back in time, as per the tachyonic antitelephone. Then you get the grandfather paradox and all that. These are seriously counterintuitive consequences. But I would not take them as "disproving relativity" in and of themselves. --Trovatore (talk) 05:03, 13 February 2010 (UTC)
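Trovatore's point about the imaginary rest mass can be seen directly from the Lorentz factor: for v > c the quantity under the square root goes negative, so with a real energy the usual formulas force an imaginary mass. A quick illustrative check:

```python
# Lorentz factor gamma = 1 / sqrt(1 - (v/c)^2); cmath handles the
# complex result when v exceeds c.
import cmath

def gamma(beta):  # beta = v/c
    return 1 / cmath.sqrt(1 - beta**2)

print(gamma(0.5))  # ordinary sub-light case: real, about 1.155
print(gamma(1.1))  # 10% over light speed: purely imaginary
```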
Theory of justification might be worth a look. Quite a lot of people in this thread are being justificationists, I think, and Karl Popper would probably disapprove. I'm never very at ease with trying to guess what he would say, but he might propose a burden of showing how your new theory could be disproved if it was wrong, rather than a burden of proof. This is certainly the case in Occam's razor situations like the celestial teapot, where the problem with theory is not a lack of proof but a lack of testability. Epistemology is a work in progress, anyway, so it's not all that surprising if wikipedia articles contradict each other. 213.122.51.100 (talk) 04:41, 13 February 2010 (UTC)
Popping ears
If you inhale, block your nostrils and wait, there is a sensation of 'popping' from both ears, not necessarily simultaneously. What is happening please? If, in fact, one ear does not 'pop', over a long period, does that indicate a blockage? How does that relate to hearing ability? With many thanks. —Preceding unsigned comment added by 93.24.238.214 (talk) 08:39, 12 February 2010 (UTC)
- Ears pop to equalize the pressure between the outside of the ear and the inner ear. Unequal pressure can potentially lead to damage in the ear, along with general discomfort, so ear popping is the body's way of resolving the pressure difference before it becomes an issue. Colds can cause ear popping because mucus secretions block the Eustachian tubes, making it difficult to normalize the pressure. When someone with a cold blows his or her nose, clearing the sinuses of mucus, the Eustachian tubes can open up so that the pressure will normalize and the ears pop. Incidentally, it is a very bad idea to pinch the nose shut and blow hard to pop ears if they are uncomfortable, because this can cause mucus to blow up the Eustachian tubes, causing an infection in the ear. Source: Wisegeek. -Avicennasis 09:02, 12 February 2010 (UTC)
- See also Valsalva maneuver. TenOfAllTrades(talk) 14:48, 12 February 2010 (UTC)
Civil Engineers Only .....reinforced concrete column behavior under load
we all know how to reinforce a concrete column by arranging steel bars around the perimeter and confining them with steel stirrups. the question is how the steel works in columns... i tried to figure it out, and this is the result:
when we start loading the column, it responds by getting shorter; in other words, it begins to shrink. as we all know, strain can't happen in only one direction (Poisson's ratio), so the concrete must expand horizontally in order to allow vertical shortening... so in order to stop vertical strain, we have to prevent lateral strain.
now let's think about it... steel bars around the perimeter can resist compression loads a lot better than concrete while undergoing the same strain (the two materials are bonded, so their strains match)... according to that, the steel will hold the concrete around it in position, through the adhesion between concrete and steel, forming something like a fence which can hold the concrete within. as the load increases, the concrete in the middle is subjected to a big load and forces the fence to deform so the inner concrete can expand laterally... here is where the stirrups come in... they transfer the lateral deformation of the fence into tensile strain in the stirrups, so the stirrups hold the fence in position, preventing lateral strain, which prevents vertical strain, which in the end will prevent the column from collapsing...
well... the question is... am i right? and another question: why is there a maximum reinforcement ratio for columns? --Mjaafreh2008 (talk) 09:58, 12 February 2010 (UTC)
- Remember that concrete is strong under compression. The steel reinforcement is there to provide strength under tension. Astronaut (talk) 15:16, 12 February 2010 (UTC)
- Incidentally: Insisting that a question is only answered by Civil Engineers pretty much guarantees you'd get no answers because the odds of there being such people here on any given day is small. However, the questions you are asking could be answered by almost anyone with an engineering background. SteveBaker (talk) 16:29, 12 February 2010 (UTC)
- Maybe he just wants the engineers to be well behaved? Googlemeister (talk) 21:35, 12 February 2010 (UTC)
Hi there... I looked for the answer in other places, including some civil engineering sites, without any luck... I'm already thankful for any help I can get. --79.173.235.183 (talk) 18:14, 12 February 2010 (UTC)
- Could be something to do with the shear strength of concrete! —Preceding unsigned comment added by 79.76.205.40 (talk) 00:57, 13 February 2010 (UTC)
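The mechanism the questioner describes can be put into rough numbers. The sketch below uses the Poisson relation for the lateral expansion and the common thin-ring approximation for the confining pressure a circular hoop provides, f_l = 2·A_s·f_y/(s·d_c); all of the material values are illustrative assumptions, not a design calculation:

```python
# Axial strain drives lateral expansion (Poisson's ratio), and the
# stirrups resist it like a thin hoop in tension.
POISSON_CONCRETE = 0.2   # typical assumed value for concrete
axial_strain = 0.002     # assumed strain near peak stress

lateral_strain = POISSON_CONCRETE * axial_strain
print(round(lateral_strain, 6))  # 0.0004 -- what the stirrups must restrain

A_s = 78.5e-6   # m^2, one leg of an assumed 10 mm stirrup
f_y = 420e6     # Pa, assumed stirrup yield strength
s = 0.15        # m, assumed stirrup spacing
d_c = 0.30      # m, assumed confined core diameter

f_l = 2 * A_s * f_y / (s * d_c)
print(round(f_l / 1e6, 2))  # 1.47 MPa of lateral confining pressure
```

Closer spacing or thicker stirrups raise f_l, which is why confinement detailing matters so much near column ends.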
Does a flame cast a shadow?
Well, does it? It's kinda hard to tell because the flame itself generates a lot of light that would tend to light up any area that it would shadow - but just how opaque is the flame itself to (say) sunlight? SteveBaker (talk) 16:31, 12 February 2010 (UTC)
- Well it depends on the relative brightness of the two light sources. I have certainly seen the shadow of a candle flame cast by bright sunlight, but it's a kind of translucent shadow. It's very easy to check this out for yourself if the sun is shining and you have a candle.--Shantavira|feed me 16:46, 12 February 2010 (UTC)
- I guess the question here is this: does flame absorb light? This is an excellent question, and I suspect that the answer is yes, but this is pretty far out of my comfort zone. Is there a plasma physicist in the house? – ClockworkSoul 16:54, 12 February 2010 (UTC)
- A yellow flame consists of glowing soot. Soot is black. Go figure. Cuddlyable3 (talk) 16:57, 12 February 2010 (UTC)
- I've observed the "shadow" from a flame as well. I assumed it was not from the flame blocking light, but from the hot gas inside the flame acting as a lens, redirecting the light. Whatever portion of light is not absorbed should certainly be refracted. If you take a flame into bright sunlight, look for a brighter fringe of light around the flame's "shadow." 146.186.131.95 (talk) 17:05, 12 February 2010 (UTC)
- A fundamental law of physics says that the probabilities of absorption and emission of a photon are strictly related. Anything that can emit light can also absorb light at the same wavelengths. Anything that can absorb light can also emit light at the same wavelengths. Flame -- a hot mixture of chemically reacting gases -- emits light (mostly in the infrared and in the visible range), therefore it also absorbs visible and infrared light. So yes, any flame can cast a shadow. It may not always be easy to visualize and record that shadow technically, though; but it can always be done in principle. Steve certainly knows this, but for the general audience it is important to make the science clear first. Now, if you want to visualize the shadow of the flame, you can take a candle out into the sunlight - it will make a clear shadow of the flame. Some of that shadow will be because of refraction rather than absorption, though. --Dr Dima (talk) 18:07, 12 February 2010 (UTC)
- Steve, you should be ashamed of yourself... get a candle and a torch and find out for yourself... (The answer, however, is "yes" - a flame is a plasma, meaning there are lots of free electrons floating around. Those electrons will absorb light.) --Tango (talk) 20:30, 12 February 2010 (UTC)
- It does depend on whether the flame is sooty and yellow or premixed and blue, so I think it is the soot, not the electrons, which does most of the absorption. Certainly if you pick the right laser there is enough transparency in premixed flames for laser tomography. Flames also have a massive temperature difference, and therefore density difference, across them. I have seen the flame and violin demo a few times (sound has a huge impact on flame structure) and have even tried probing flames ultrasonically. The violin one you could do in a school lab with a big floppy bunsen, I reckon. --BozMo talk 20:48, 12 February 2010 (UTC)
- I don't have a candle anywhere in the house...but in any case, I was hoping perhaps for some kind of quantitative feel for how much a flame is likely to attenuate the light passing through it...sooty yellow and unsooty blue flames are both interesting. FWIW, I'm doing some computer graphics fires and things like rocket exhausts and trying to understand at what 'density' to draw the shadow of a flame (if at all). If it were possible to estimate the ratio of light emitted by the flame to that passing transparently through it, I could make a call as to when the shadow would be negligibly visible and save a bunch of computer time by not drawing it when it's too tenuous to be noticeable. The smoke produced by the flame is a different matter - that obviously does cast a shadow - and I have that much nailed. I just can't think of any source that tells me the transmissivity of a flame...and it's insanely difficult to measure (even assuming I could set a building on fire to get a nice BIG flame to measure)! SteveBaker (talk) 05:15, 13 February 2010 (UTC)
- You can have the entire range of absorption, from fully opaque flame of poorly burning oil to practically invisible flame of pure ethanol. As you said, experiment is crazy hard, but it may be the only option. Take a green laser pointer and try to measure if something comes through. There is not much green in the flame (unless you add some copper compound to the fuel), but the eye is most sensitive to the green; so shadow for green is like shadow for white. You will need to find a bandpass filter for your photovoltaic cell that blocks most light from the flame but allows the green to pass. And a cell big enough to compensate for the refraction of light by flame. I never tried this, so I'm not sure it will work at all. --Dr Dima (talk) 06:11, 13 February 2010 (UTC)
- Since eclipsing binary stars cause a dimming of the overall brightness of the system when one star passes in front of the other, I'd say that the opacity of fire and plasma are enough as to cause a "shadow" in the way that it blocks some of the light from the other light source. ~AH1(TCU) 23:34, 13 February 2010 (UTC)
thoracentesis
As a practicing general surgeon, I am asked to do thoracentesis regularly. I go for a complete drainage of the pleural space, removing all the fluid I can, safely. This often amounts to over the 1 liter your article notes. This causes consternation among my patients. Your article does not describe the cough reflex associated with reinflation of the alveoli (as a newborn does during the immediate inflation of the lungs (wah wah cough cough, waaahhh), a native, protective reflex of air-breathers), nor does it mention the pleuritic pain associated with reapproximation of the separated pleural surfaces. It is that second pain that caused the patient most recent in my memory to look at your article and come in all hot and bothered. The chest x-ray showed persistent clearing of the pleural effusion with no infiltration or pneumothorax. Exam did show decreased inspiratory effort (pain) and slight rb. I ask that your editors make some mild changes. That business of "removing more than 1 liter at a time associated with complications" needs a modifier that not removing it all means having to repeat the procedure, also at increased risk. —Preceding unsigned comment added by Tlfmd (talk • contribs) 17:41, 12 February 2010 (UTC)
- We have a guideline here called Be Bold, which says: Go ahead and make the changes yourself! You're the expert. Click "edit this page" at the top of any article and go ahead and make any changes that are needed. Normally we want "inline citations" on every claim. Formally it is preferred that articles have inline citations from top to bottom, but in practice on Wikipedia this is rare — though it's needed if the article is to be considered high enough quality to be a "Good Article" or a "Featured Article". Comet Tuttle (talk) 17:52, 12 February 2010 (UTC)
- I'd just like to add a note that Wikipedia Talk:WikiProject Medicine (aka WT:MED) is an excellent place to bring up concerns of this sort. Looie496 (talk) 19:03, 12 February 2010 (UTC)
- Tlfmd, thanks for pointing that out. Although I'm surprised that you, as a general surgeon, are performing thoracentesis. My experience (in the UK) is that general surgeons do not deal with pleural effusions (indeed they shouldn't deal with pleural effusions). Axl ¤ [Talk] 09:43, 13 February 2010 (UTC)
- From the British Thoracic Society guidelines: "The amount of fluid evacuated will be guided by patient symptoms (cough, chest discomfort) and should be limited to 1–1.5 litres." "Large pleural effusions should be drained in a controlled fashion to reduce the risk of re-expansion pulmonary oedema." Axl ¤ [Talk] 09:54, 13 February 2010 (UTC)
ant poison methanol
I'm allergic to boric acid. I have an ant problem. Can I use methanol to kill them? I heard it tastes sweet. I don't want to use any regular ant poison. —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 20:18, 12 February 2010 (UTC)
Can I get an answer please? I don't want to call an exterminator. —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 21:09, 12 February 2010 (UTC)
- Methanol? As in the poisonous alcohol (AKA wood alcohol)? Do you mean menthol maybe? Either way, get some bait traps instead. Or remove the food source. Ants are attracted to food. You can use tons of traps and poison and they'll keep coming back till you get rid of the food source. Ariel. (talk) 21:45, 12 February 2010 (UTC)
Yes, I mean methanol, as in the poisonous alcohol. I already removed the food source. I'm not going to use bait traps; I want to use methanol. Will it work? Is it toxic to ants? And will they eat it? —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 21:52, 12 February 2010 (UTC)
- Perhaps you'd like to tell us why you think it might be poisonous to ants? And by the way, demanding an answer from the volunteers who frequent this desk fifty-one minutes after you originally posed the question is not a good strategy for encouraging people to answer you. --ColinFine (talk) 23:12, 12 February 2010 (UTC)
- Please sign your posts. It's the sweet thing to do. Cuddlyable3 (talk) 23:21, 12 February 2010 (UTC)
- The practicality of methanol will be limited because it is volatile and will quickly evaporate. I think it is highly unlikely that they will eat it anyway. If you are insistent on using methanol, your best results will come from simply drowning them in it. (But don't inhale the vapors due to the toxicity!) —Preceding unsigned comment added by 72.94.164.21 (talk) 23:24, 12 February 2010 (UTC)
WILL THEY EAT IT —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 23:39, 12 February 2010 (UTC)
Orange oil and neem oil are far safer than methanol, and may work better, too. Of the two oils, orange oil smells much better :) . However, you will absolutely need to inquire whether either of those oils is recommended for use against ant infestations. I do not and cannot give you advice on what to use; but I strongly suggest you do NOT use methanol indoors. --Dr Dima (talk) 23:49, 12 February 2010 (UTC)
- I second that. Methanol is not only highly toxic, but its volatility is troublesome for this application -- its vapor pressure is high enough to allow it to give off hazardous toxic vapors (and to evaporate too quickly to have any effect on ants), but low enough to allow traces of the stuff to persist for a long time. (I know this firsthand; we use methanol at the refinery to scrub sulfur out of the flue gases, and we actually have to wear gas masks any time we're working with the stuff!) Worse, it's also water-soluble and can soak into the soil, so using it anywhere near water supplies or even household plants (or even on your lawn) is a very bad idea (quite apart from the fact that it's a violation of at least three EPA and OSHA regulations). Also, methanol is highly flammable -- not quite as flammable as gasoline, but pretty close -- so you should be careful using it around electrical equipment, because it's particularly prone to ignition by sparks. My advice to you and to everyone else is, don't screw around with methanol unless you REALLY know what you're doing. And last but not least, it prob'ly won't even work against ants because (1) it will evaporate too quickly, and (2) its toxicity to humans is due to it oxidizing to formic acid, which ants make naturally and are therefore immune to. Oh, and did I mention that it could damage furniture and wood floors by dissolving the lacquer coating? 24.23.197.43 (talk) 00:29, 13 February 2010 (UTC)
- Firstly, even if they are inside your house - they don't live there. If you can wipe out the nest (which is outdoors someplace) then within a day or two, there won't be any more ants indoors. Knowing that gives you more treatment options. Sure, you're allergic to boric acid - but if you wear a mask and gloves, you can apply it outdoors without coming into contact with it. There are many other options too. I had good luck with vicious Texas fire ants by kicking the top off of the nest and pouring a gallon of boiling water mixed with washing up liquid (to help it get into the small burrows). Merely warm water won't do. And, yes, you need to be sure that all of your food is in proper containers or they'll come back. SteveBaker (talk) 02:52, 13 February 2010 (UTC)
I don't want advice on other things to use. Just: will methanol kill them? —Preceding unsigned comment added by Thekiller35789 (talk • contribs) 04:49, 13 February 2010 (UTC)
- It seems like you already got a clear "NO!" on that - we're just trying to offer alternatives that you may not have thought of. SteveBaker (talk) 05:05, 13 February 2010 (UTC)
- or, as pointed out above, the answer could be "yes" if you pick up the ants and hold them under the surface of your methanol. Plain water would be safer but would take longer. Seriously, though, take advice and look for alternatives. We think that methanol might do you more harm than real ant killer, despite your allergy. Dbfirs 22:14, 13 February 2010 (UTC)
- If methanol vapor were so deadly, I doubt Sterno would be so widely used in chafing dishes or alcohol lamps supplied in chemistry sets. Just don't try to drink it. Edison (talk) 04:44, 14 February 2010 (UTC)
Human field of view
How many degrees of view can humans see horizontally and vertically? 92.29.82.48 (talk) 20:48, 12 February 2010 (UTC)
- Our Field of view article claims it's about 180 degrees horizontally, and that "it varies" vertically. The article could use some improvement. Comet Tuttle (talk) 22:29, 12 February 2010 (UTC)
- The article Peripheral vision is in poor shape too. Cuddlyable3 (talk) 23:19, 12 February 2010 (UTC)
- 180 degrees is the statistical average for most people, but some folks can see as many as 240 degrees horizontally. (My own field of view, BTW, is about 220 degrees.) FWiW 24.23.197.43 (talk) 00:33, 13 February 2010 (UTC)
- Our area of distinct vision, in which we can see details or read normal print without eye movements, is tiny, confined to the Fovea of the eye, subtending less than 2 degrees of visual angle horizontally (or about twice the width of the thumbnail at arm's length) and less vertically. The edges of the claimed 180 degree field are nearly blind, able only to detect things like movement of objects or flashing lights, whereupon we would normally direct the fovea to that region of visual space by eye and head movements. The horizontal field of view is obviously greater if you combine the fields of view of the two eyes. Prey animals like rabbits or horses have their eyes set more to the side than do predator animals like cats, so they have a 360 degree horizontal field of view, at the expense of the depth perception in the forward direction that the predators enjoy. As an experiment, look at the computer screen with your two thumbs extended next to each other, at arm's length, below a line of text. Without eye movements, can you read words to the left and right of the thumbs? Now turn your chair at a 90 degree angle to the computer screen, and move your fingers in front of the screen, at about 90 degrees from your visual fixation point. Rotate until the movement disappears. You have mapped your visual field. Eye doctors do tests for the extremes of the visual field called Perimetry. Some diseases cause the visual field to shrink. Edison (talk) 02:16, 13 February 2010 (UTC)
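- The thumbnail comparison above can be checked with simple trigonometry. A minimal sketch, where the arm length and thumbnail width are assumed illustrative values, not figures from the discussion:

```python
import math

ARM_LENGTH_M = 0.70   # assumed typical arm's length
THUMBNAIL_M = 0.012   # assumed typical thumbnail width

def width_subtending(angle_deg, distance_m):
    """Physical width that spans angle_deg when viewed from distance_m."""
    return 2.0 * distance_m * math.tan(math.radians(angle_deg / 2.0))

# The ~2 degree foveal field at arm's length:
foveal = width_subtending(2.0, ARM_LENGTH_M)
print(foveal)                  # ~0.024 m
print(foveal / THUMBNAIL_M)    # ~2 thumbnail widths, consistent with the comparison
```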
- There are many numbers here because there are many ways to measure it. From extreme head-turn plus eye swivel to opposite head turn plus eye swivel is very close to 360 degrees. With eye motion only, it's more like the 220 that '24 suggests. Without eye motion - and including the outer regions of the retina where we can only see in monochrome (but are very sensitive to motion), it's more like 90 degrees - and for full color vision, more like 50 to 60 degrees. But our ability to concentrate our gaze on a smaller region still comes from brain function rather than actual vision so it's possible to concentrate only on (say) the middle 10 degrees - without noticing what's going on outside of that small region (eg when reading). These numbers are all slightly variable from individual to individual - and also depend on your eyesight (and whether you wear glasses). SteveBaker (talk) 02:08, 13 February 2010 (UTC)
- At 90 degrees or so from fixation, only gross changes are seen, so glasses are less important than in the areas nearer fixation. The lens does not focus that sharply nor is the retina that acute at visual angles far from fixation. Edison (talk) 02:29, 13 February 2010 (UTC)
- Your coloured field of view is also smaller than your black and white field of view. You don't normally notice, as your brain is very good at making up what colour things in your peripheral vision are. To test this, take a few coloured pens or pencils, hold one behind someone so that they don't know which colour you are using, then, with the person still looking forwards, bring the pencil slowly round the side of their head towards the front of their head. They can only accurately describe the colour when it's about 45 degrees away from their front. If you do the same thing but show someone which pencil you're using, they'll be able to tell you the colour much sooner than before. Maybe someone can remind me as to why this happens. Smartse (talk) 17:42, 13 February 2010 (UTC)
Chemistry diagrams
http://en.wikipedia.org/wiki/Penicillin
On the right hand side of the page there is a diagram depicting the structure of penicillin. Between the HN and C atoms there is a black shaped wedge. Between the C and H bond (top middle) and the C and C bond (bottom right) there are wedges with parallel lines. Please could you explain the meaning of these bonds - I can't find reference to them anywhere. 90.216.189.67 (talk) 20:56, 12 February 2010 (UTC)
Thank you
- Our article Skeletal formula describes how this type of chemical structure depiction can be interpreted. -- Ed (Edgar181) 21:03, 12 February 2010 (UTC)
- Put simply the black wedge shows that the molecule is 3D and that that bond is coming out of the paper/screen towards you. Smartse (talk) 17:44, 13 February 2010 (UTC)
- Minor quibble - all molecules are 3D. The wedges are there to indicate chirality - that is, how the molecule is structured around the atom with the wedges matters. Chiral compounds can have very different properties from their mirror-image compounds. The classic example is thalidomide, where one compound is effective against morning sickness, while the other causes severe birth defects. -- 174.21.247.23 (talk) 19:25, 13 February 2010 (UTC)
Cryogenics
Hello, I was wondering what has been successfully done in the field of cryogenics. Have scientists been able to freeze mice and revive them with full function? I read the article on cryogenics but did not see information about the current state of our science in this area. (Dr hursday (talk) 22:53, 12 February 2010 (UTC))
- The article you want is Cryonics. Cuddlyable3 (talk) 23:14, 12 February 2010 (UTC)
- The short answer to your question is "No". The reason is, when mice are frozen, all the water in the tissues freezes and forms ice crystals -- and since ice expands during freezing, those crystals usually poke holes in cell membranes, killing the cells. It's the same reason why you don't put fruits and veggies in your freezer -- the ice crystals destroy the cells and make the veggies (or fruits) soften when thawed. In the case of mice or other animals, the blood also freezes and its expansion bursts blood vessels -- sort of like a water main bursting in sub-zero weather. You could of course put chemicals into the blood that would keep it from freezing, but the problem is, those chemicals are toxic enough to kill or to cause irreversible damage. So far, the only living organism to have been successfully frozen in dry ice and revived with full function is Han Solo in Star Wars; but it's only a movie, it's not anything like real life. 24.23.197.43 (talk) 00:55, 13 February 2010 (UTC)
- Wood Frogs freeze solid and thaw out with no apparent ill effects all the time. Perhaps we could learn from them. Edison (talk) 01:49, 13 February 2010 (UTC)
- I have fixed your link, Edison - internal links inside of external links confuse the MediaWiki engine. Nimur (talk) 16:49, 13 February 2010 (UTC)
- Well, those few animals that can do this have a natural antifreeze in their bodies. It might be possible to do some genetic engineering and gain that benefit. However, it's not clear how well things like higher brain functions would survive, for example. If a frog lost all of its memories through the process, it wouldn't really matter a damn - but with humans...not so good! I don't think it's impossible that we could do that one day - but don't bank on it. SteveBaker (talk) 02:03, 13 February 2010 (UTC)
- There are quite a few memories I would not mind losing. Edison (talk) 04:37, 14 February 2010 (UTC)
- I have seen no references that say the wood frogs lose their memories each time they freeze and thaw. If the memories are stored as new synapses, they might survive. Edison (talk) 04:39, 14 February 2010 (UTC)
- This article in 2008 discussed work that Israeli scientists were doing on pig livers that could be frozen and then still functioned in another pig. Maybe read some papers in Rejuvenation Research if you want more up-to-date information. Smartse (talk) 17:55, 13 February 2010 (UTC)
- As Steve says, it's the brain that's the tricky bit. Livers have amazing regenerative capabilities. --Tango (talk) 19:31, 13 February 2010 (UTC)
- You may be thinking of the successful cloning from a frozen mouse, which was reported a few years back. [15] See also this interesting, if somewhat tangential, story about super-cold squirrels. [16] Rockpocket 02:21, 14 February 2010 (UTC)
February 13
Travelling to the future
I believe that the following is correct:
- If you approach the speed of light, then relative to a slower moving body, time will seem to move more slowly
- Therefore, if you circled the Earth at close to the speed of light, time on Earth would appear to pass more quickly than time on your "spaceship"
- So, if you left now, got up to the speed of light quickly and then came back in five years, far more than five years could have elapsed on Earth (so you are in a sense travelling into the future)
Also:
- With the lack of friction in space, you could potentially use a form of propulsion to gradually accelerate yourself to the speed of light
- It would be within the reach of current technology to create a spaceship that could withstand being accelerated to close to the speed of light (because the acceleration forces would be very low) - although if it hit anything it could be destroyed
- With the likes of ion drives and a nuclear power source it is feasible to build a propulsion system that could accelerate the spacecraft to close to the speed of light
My questions are:
- Are all of the above assumptions correct?
- Is there anything else that makes this completely unfeasible?
- If it is unfeasible, what kind of technological advances would be needed to make it feasible? (or will it likely always be impossible)?
- Could the spacecraft be accelerated close enough to the speed of light, quickly enough, that somebody launching tomorrow would be able to travel a significant way into the future, and how far could they travel?
- For example, if a 20-year-old man with a life expectancy of 80 left today, how far could he travel into the future, taking into account that he would have to accelerate and decelerate again, within the 60 years of his remaining lifespan?
Any thoughts on this would be really appreciated. Thanks Blooper Watcher (talk) 01:23, 13 February 2010 (UTC)
- I did some quick calculations myself, based upon a thrust figure of 88,000 mN (which seems to be the highest achieved by an ion engine). Assuming a mass of 10,000 kg for the spacecraft, I reckon that over 1,000 years you would accelerate to approximately 278,000 km/s (which is just shy of the speed of light). The calculation being:
- (88 N * (1,000 * 365 * 24 * 60 * 60)) / 10,000 = 277,516,800 m/s
- Obviously that is too long for a human lifespan (although I am not sure how close you need to get to the speed of light to see significant variations in relative time), and the engine would need to be 100 times more powerful to get to the speed of light in 10 years (and 10 more years to decelerate). But a 100 fold increase in power does not seem beyond the realms of possibility. Blooper Watcher (talk) 01:42, 13 February 2010 (UTC)
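- The estimate above is Newtonian; a minimal Python sketch of the same calculation, plus the relativistic speed you would actually reach if the same impulse went into relativistic momentum (constant 88 N thrust on a fixed 10,000 kg mass is assumed throughout, ignoring propellant consumption):

```python
import math

C = 299_792_458.0   # speed of light, m/s

def newtonian_v(thrust_n, mass_kg, seconds):
    """Newtonian delta-v for constant thrust acting on a constant mass."""
    return thrust_n * seconds / mass_kg

def relativistic_v(thrust_n, mass_kg, seconds):
    """Speed if the impulse F*t becomes relativistic momentum:
    p = F*t, v = (p/m) / sqrt(1 + (p/(m*c))**2), which never exceeds c."""
    u = thrust_n * seconds / mass_kg   # this is just the Newtonian figure
    return u / math.sqrt(1.0 + (u / C) ** 2)

seconds_1000y = 1000 * 365 * 24 * 60 * 60
v_newton = newtonian_v(88.0, 10_000.0, seconds_1000y)
v_rel = relativistic_v(88.0, 10_000.0, seconds_1000y)
print(v_newton)   # ~2.78e8 m/s (the Newtonian estimate)
print(v_rel)      # ~2.04e8 m/s, about 0.68c once relativity is accounted for
```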
- There is no doubt that if you could somehow accelerate yourself to somewhere close to the speed of light - then return to Earth again - then you would have "travelled to the future" - the faster you went, and the longer you stayed at that speed, the further into the future you'd go. In as much as you would seem to others to have lived more than your expected lifespan - and you would be "in the future" - complete with flying cars, personal rocket packs and a small fortune sitting in your bank account thanks to the one penny you left in there and the actions of compound interest!
- That's a scientific fact that almost all scientists would agree on.
- However, we don't have a way to get something as large as a human up to anything remotely close to that speed...just look at the hardware in the Large Hadron Collider, needed just to get some teeny-tiny sub-atomic particle up to close to light speed! But it's been calculated that the Apollo astronauts (who travelled faster than anyone else on earth ever has) did travel a tiny amount into the future. Apollo 8 astronaut Frank Borman is said to have actually demanded 400 microseconds of overtime pay from NASA because of the time dilation incurred in his trip.
- So yeah - we have the technology to push someone 400 microsecond into The World Of The Future! SteveBaker (talk) 01:54, 13 February 2010 (UTC)
- Thanks Steve. Do my figures for the acceleration using the most powerful current ion drive look correct? It would seem to me that a 100-fold improvement in the technology would be feasible which, again, if my figures are correct, would allow acceleration up to the speed of light in 10 years, and then deceleration back down in another 10 years. The thing is, how close do you need to get to the speed of light to get a noticeable effect?
- Let's say that we used the current drive, and over 10 years got up to 1% of the speed of light, stayed there for 40 years and then decelerated back down to stationary. How far, beyond the 60 year journey, would we have travelled? If we could accelerate up to 99% using a more powerful drive, and did the same 10 years accelerating, 40 years travelling, 10 years decelerating, how much time travelling would we have done?
- Also, aside from getting up to speed. Is there anything else that would always make this unfeasible? Blooper Watcher (talk) 02:04, 13 February 2010 (UTC)
- One more thing, shouldn't NASA have been chasing Borman to pay some money back? Seems to me that he worked 400 microseconds *less* than the guys on the ground. Blooper Watcher (talk) 02:08, 13 February 2010 (UTC)
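- For the cruise phase asked about above, the answer is just the Lorentz factor: Earth time elapsed is the ship's cruise time multiplied by gamma. A minimal sketch using the round numbers from the question, ignoring the acceleration and deceleration legs:

```python
import math

def gamma(beta):
    """Lorentz factor for a speed of beta times the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def earth_years_for_cruise(ship_years, beta):
    """Earth time elapsed while the ship experiences ship_years at beta*c."""
    return ship_years * gamma(beta)

# 40 ship-years of cruise at the two speeds discussed:
print(earth_years_for_cruise(40, 0.01))   # ~40.002 years: 1% of c buys almost nothing
print(earth_years_for_cruise(40, 0.99))   # ~283.6 years: 0.99c is a real jump ahead
```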
- You should remember that mass is also relative. As you accelerate, the mass increases and you'll need more force to achieve the same acceleration (that's why you cannot reach the speed of light - see Speed of light#Upper limit on speeds about that). Your calculations do not seem to take that into account. Also, about "Therefore, if you circled the Earth at close to the speed of light, time on Earth would appear to pass more quickly than time on your "spaceship"" - it is going to be very hard to circle the Earth at such speed as it is going to be much higher than the escape velocity... --Martynas Patasius (talk) 02:42, 13 February 2010 (UTC)
- For sure there is no way you could do this in orbit. The effects of relativity as you approach the speed of light make it increasingly difficult to push your speed that little bit faster. Even ion drives require reaction mass - and if you made them 100 times better, they'd need 100 times the amount of reaction mass - which would drastically increase the mass of your spacecraft and thereby eat most of the benefit you got from better engines. This has been thought about many times before by many smarter people than us - and it's quite utterly impractical. SteveBaker (talk) 05:03, 13 February 2010 (UTC)
- As you say, there would be a big problem if the spacecraft hit anything when going at relativistic velocities. Unfortunately, it is guaranteed to hit something - the interstellar medium. In the vicinity of the Sun, the interstellar medium contains about 50,000 atoms (we'll assume they are all atoms of hydrogen, which is close enough to the truth) per metre cubed. If we assume a speed of 0.99c, that's a gamma factor of about 7. That means that, from the perspective of the spacecraft, there are 50,000*7=350,000 atoms per metre cubed (due to length contraction) and each has a mass of 7 times that of hydrogen (due to relativistic mass) and they are all travelling at 0.99c. That means that, in every second, for every metre squared of cross sectional area, the ship will hit 350,000*300,000,000*7 u of gas moving at 0.99c, which applies a force of 0.00037 N. That compares to a car travelling at 30 m/s (70mph) having air resistance of 1080 N. Clearly, that isn't a big problem (although it is still a problem, since ion drives typically produce far far less force than a car's engine), but if we increase the velocity to 0.999999997c, it becomes roughly the same resistance as the car. So, as you can see, the resistance from the interstellar medium does provide a limit to how fast you can travel, although that limit is pretty high. (All of these calculations are assuming perfectly elastic collisions - if the hydrogen doesn't just bounce off the ship then you need to look at the energy of the atoms, which is extremely high since it is proportional to velocity squared, so we would probably find the ship being significantly eroded by the constant collisions.) --Tango (talk) 13:26, 13 February 2010 (UTC)
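- The 0.00037 N figure above can be reproduced from the momentum flux of the oncoming hydrogen in the ship's frame. A minimal sketch (the factor of 2 for a perfectly elastic rebound is omitted, matching the figure as quoted):

```python
import math

C = 299_792_458.0     # speed of light, m/s
M_H = 1.67e-27        # mass of a hydrogen atom, kg
N_ISM = 5.0e4         # interstellar atoms per cubic metre near the Sun

def drag_per_m2(beta):
    """Momentum delivered per second per square metre by the oncoming medium:
    number density is boosted by gamma, and each atom carries gamma*m*v."""
    g = 1.0 / math.sqrt(1.0 - beta ** 2)
    v = beta * C
    return N_ISM * g ** 2 * M_H * v ** 2

print(drag_per_m2(0.99))         # ~3.7e-4 N per m^2, as in the estimate above
print(drag_per_m2(0.999999997))  # ~1.3e3 N per m^2, comparable to the car's 1080 N
```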
- Tango, all that hydrogen can be seen as an asset instead of a liability. In principle it could be collected and used to power a fusion reactor that supplies energy to the ion drive. Dauto (talk) 15:40, 13 February 2010 (UTC)
- Bussard ramjet perhaps? 220.101.28.25 (talk) 16:36, 13 February 2010 (UTC)
- That article discusses the feasibility. Most studies seem to show that it wouldn't work (you get greater drag from the collector than you can generate thrust). Our article gives an example where it would work, but assumes an extremely high exhaust velocity with no justification for it being possible. --Tango (talk) 17:09, 13 February 2010 (UTC)
- The now-cancelled Project Orion was to get a spacecraft to a theoretical upper limit of 10% of the speed of light. ~AH1(TCU) 23:25, 13 February 2010 (UTC)
- As a torchship scoots through space at near the speed of light, a scoop on the front funnels any molecules encountered into the reaction chamber where they are accelerated to provide additional thrust. The scoop and the channel the molecules pass through are made of Unobtanium, a metal which is not readily available at present (but neither is Plutonium), but which has 157 hits at Google Book Search. Edison (talk) 04:35, 14 February 2010 (UTC)
Carbon dioxide and plants at night
A question has come up on a sleep forum, in regard to plants in the bedroom. I've tried to read the photosynthesis articles (difficult here, too simple at simple), and haven't found an answer. Do (some? all?) plants release carbon dioxide in the dark? Thank you. - Hordaland (talk) 10:36, 13 February 2010 (UTC)
- Plants release carbon dioxide continuously, since they respire in the same way animals do. However, during the day (when they are in light) they also photosynthesise, and this converts carbon dioxide to oxygen. Therefore, it is generally true that the amount of carbon dioxide they take in during the day exceeds the amount they give out. This is not true at night time. However, the amount of CO2 they release should not be enough to cause any problems to people in the same room. --Phil Holmes (talk) 11:20, 13 February 2010 (UTC)
- Yes, all plants do this - but the amount they produce is completely negligible compared to the amount a person sleeping in that room would produce. There are families in poorer countries who sleep 10 people to a room - and they don't suffocate! This is SO far from being a possible problem that might concern a "sleep forum" that it's almost laughable - so tell the people there "Don't worry - it's such a tiny effect that it's completely unimportant." SteveBaker (talk) 15:09, 13 February 2010 (UTC)
- Thank you both! The forum (listserv, actually) is for sufferers of severe Delayed sleep phase syndrome, and we take each other's questions seriously. There is no effective treatment, so we question any little thing that might help -- or not. I will politely explain that the questioner needn't consider the CO2 from the "humungus bamboo tree" next to the bed. Thanks again! Hordaland (talk) 16:24, 13 February 2010 (UTC)
- Steve, "suffocate" is your word. We haven't contemplated any such thing. Hordaland (talk) 16:31, 13 February 2010 (UTC)
- The room would have to be airtight if something like hypercapnia were to occur. ~AH1(TCU) 23:20, 13 February 2010 (UTC)
magnetic vector potential actual meaning?
What exactly is meant by the magnetic vector potential? Is it only a conceptual idea? What is the physical meaning of the magnetic vector potential? —Preceding unsigned comment added by Dakshu (talk • contribs) 10:51, 13 February 2010 (UTC)
- The vector potential is definitely real. Read, for instance, Aharonov–Bohm effect. Dauto (talk) 13:54, 13 February 2010 (UTC)
- Well... the Aharonov–Bohm effect depends only on the integral of A around a closed loop. E and B tell you the integral of A around all infinitesimal loops. Assuming space is simply connected, you can integrate the infinitesimal loops inside the finite loop to get the integral around the finite loop, unless there's a singularity inside. So the only case in which the Aharonov–Bohm effect depends on something beyond the E and B fields is when there's a singularity in the field. Even then you can get it from the E and B fields alone if you treat them as generalized functions.
- On the other hand, to calculate the Aharonov–Bohm effect from the E and B fields you have to integrate them over a region where the electron doesn't go. This isn't even a quantum-mechanical takes-all-paths situation, because this region is excluded from the path integral too—the electron really isn't there. But if you use A, the effect depends only on the region where the electron does go. In that sense, A is closer to the reality. But A has a large gauge freedom, meaning that only part of it is real, while the E and B fields do a perfect job of capturing the part of A that's real. What Aharonov–Bohm shows is that they capture it in a way that's nonlocal with respect to (some of) the physics.
- There's a GR counterpart to Aharonov–Bohm in lensing around cosmic strings. The exterior gravitational field of a cosmic string is zero. Spacetime is flat around the string; objects do not fall towards the string. But if two objects moving in parallel pass on opposite sides of a cosmic string, they'll end up heading towards each other on the other side, because the integral of the metric on a loop around the string is less than a full circle. Instead of using the metric, you can get the same result by integrating the scalar curvature over a surface that intersects the cosmic string, as long as you don't mind that you're integrating over a region where the particles don't go. -- BenRG (talk) 23:07, 13 February 2010 (UTC)
- Think of it as the magnetic field's version of a voltage (or, electric potential). As the voltage is just a mathematically different way to express an electric field, the vector potential is simply another way to represent the magnetic field, in terms of relative potential energy. Then there are just different mathematical conveniences and uses than the B and H fields. Magnetic potential goes into more detail with math and stuff if you haven't read it yet. —Akrabbimtalk 14:15, 13 February 2010 (UTC)
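- To put the analogy above in symbols (SI units, with φ the scalar potential and A the vector potential), the standard relations between the potentials and the fields are:

```latex
\mathbf{B} = \nabla \times \mathbf{A}, \qquad
\mathbf{E} = -\nabla \phi - \frac{\partial \mathbf{A}}{\partial t}
```

so in the static case E reduces to the familiar gradient of the voltage, while B is always the curl of A.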
- The reason a magnetic field's energy must be described with a vector potential, as opposed to a scalar potential, is that the force imparted by a magnetic field does not depend only on position (it also depends on relative velocities and orientations). In order to quantify this field in a way that satisfies conservation of energy, the potential field must be described by a vector. This is a direct consequence of the Lorentz force law, which is empirically observed. It helps to thoroughly understand what a potential field means in general before trying to apply that mathematical concept to the somewhat pathological case of magnetism (whose force is defined by a vector cross product and a time derivative - not difficult concepts, but enough to make the math much harder than the electrostatic potential case!) Nimur (talk) 16:44, 13 February 2010 (UTC)
- Nimur, does an isolated positive electric charge move if you place it in a scalar electric potential field? Cuddlyable3 (talk) 17:49, 13 February 2010 (UTC)
- A meaning of pathological is Relating to or caused by a physical or mental disorder.[17]. Nimur, is there another word you could use to characterise the case of magnetism? Cuddlyable3 (talk) 17:19, 13 February 2010 (UTC)
- Sorry. I meant it in the sense that the mathematical convention seriously breaks from the normal form of a scalar potential field. I use "pathological" in the sense of any instance that diverges from the norm in a way that breaks simplifications. I think this usage is very common in math, physics, and computer programming - it has evolved and is distinct from the meaning in the medical field. I don't think I'm coining a neologism - let me find some prior usage. Nimur (talk) 17:27, 13 February 2010 (UTC)
- There's nothing wrong with using the word 'pathological' per se. But I don't agree with what Nimur is saying, since the electric field expression also depends on the vector potential. Dauto (talk) 17:25, 13 February 2010 (UTC)
- Sure - but we can construct a totally scalar electric potential field - that's the simplification I'm referring to above. It is possible (although it limits the situations you can consider) to construct the electric potential in scalar form. No possible simplification exists to represent the magnetic potential in scalar form. Nimur (talk) 17:29, 13 February 2010 (UTC)
- Electrostatics - therefore, there is no time variation. I think I mentioned this before. Nimur (talk) 17:30, 13 February 2010 (UTC)
- Read Magnetic potential#Magnetic scalar potential. Dauto (talk) 17:43, 13 February 2010 (UTC)
- Nimur, does an isolated positive charge move if you place it in a scalar electric potential field? Cuddlyable3 (talk) 17:49, 13 February 2010 (UTC)
- Yes. That does not mean the field is time-varying - that does not mean a full electrodynamics treatment is necessary. See test particle if you don't understand why. This is called electrostatics and it is a well-developed, mathematically rigorous approach. Obviously, it is inapplicable in situations where the time-variance is non-negligible - that is, by definition, electrodynamics - and it is more general, requires harder math, and requires the vector potential Dauto has spelled out above. Nimur (talk) 00:07, 14 February 2010 (UTC)
- The vector potential does have a physical meaning, but it's somewhat difficult to visualize. I'll try to explain it by a toy example. Imagine you have a cylinder of wire mesh, kind of like this. But it's twisted: the circles are still circles, and they're still parallel to each other, but they've been rotated relative to each other so the wires running between them aren't straight any more. No matter how twisted the cylinder is, you can always untwist it if you're strong enough. Now imagine you have a torus of wire mesh, kind of like this. Again, it's twisted. You can always locally untwist any part of it, but in contrast to the cylinder, you can't necessarily untwist the whole thing. If the sum of all the relative rotations of adjacent circles all the way around the torus is zero, then you can untwist the whole thing; otherwise, you can't.
- This "freedom to twist" is called "gauge freedom". The only properties of the wire mesh that we care about are those that you can't untwist. This means that in the case of the cylinder, none of the twisting matters; it's all "just gauge". In the case of the torus, the only thing that matters is a single number giving the total amount of twist around the whole torus.
- Now (here's where it gets hard) increase the number of dimensions. You can think of it this way: if you take a line and replace each point with a circle, you get a cylinder. If you take a loop and replace each point with a circle, you get a torus. What you want to do now is take space and replace each point with a circle. As you go from point to point in space, there's a corresponding twisting of the circles, and the question is to what extent you can untwist them and to what extent you can't. The answer is that what you can't untwist is completely captured by the total twist around all closed loops in space (the total twist around the loop being defined in the same way as it was for the torus). You can get the twist around large loops by adding the twist around smaller loops inside the large loop, as illustrated at Stokes' theorem#Underlying principle. This means that if you know the twisting around infinitesimal loops then you can find the twisting around all loops, and hence you know everything there is to know about the twisting. This is what the B field tells you. The magnitude of the B field at a point is the maximal amount of twisting of any infinitesimal loop at that point, and the direction of the B field tells you which loop is maximally twisted (namely, the one in the plane perpendicular to the field).
- What the A field tells you is the amount of twisting as you move from one point to another. The direction of the A field at a point is the direction in which the twisting is maximal, and the magnitude of the A field is the amount of twisting in that direction. The A field has a gauge freedom corresponding to your freedom to twist. The B field doesn't; it tells you just the part of A that matters.
- This analogy is much closer to reality than it might seem. If you imagine there really is a circle at every point in spacetime, twisted in the way I described, and write down the general relativistic equations for 5D gravity in that 5D space, you actually get Maxwell's equations and 4D gravity, the A and B fields literally mean what I just said they mean, and electric charge corresponds to momentum around the extra dimension. This idea dates back to the 1920s and is called Kaluza–Klein theory. -- BenRG (talk) 23:07, 13 February 2010 (UTC)
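- The "gauge freedom" described above can be written out explicitly: for any smooth scalar function χ, the transformation

```latex
\mathbf{A} \to \mathbf{A} + \nabla \chi, \qquad
\phi \to \phi - \frac{\partial \chi}{\partial t}
```

leaves both B and E unchanged, because the curl of a gradient vanishes. This is the precise sense in which "only part of A is real": the part that survives every such transformation.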
Did Jung have any faith in the supernatural? As far as I know, Freud did not believe that any entity survives bodily death. But Jung's collective unconscious is somewhat near and dear to spooks... no?
Jon Ascton (talk) 11:39, 13 February 2010 (UTC)
- Many interpretations of Jung's writing do exist - it wouldn't be too far-fetched to consider his archetypal unconscious a "supernatural" concept - even though he surely considered it "science". Because the collective unconscious was not based on empirical evidence, though, it's not really a scientific conclusion. Later generations of psychologists categorically discredit that line of reasoning in favor of observation-based theories of psychology. Nimur (talk) 16:39, 13 February 2010 (UTC)
- See our article on synchronicity -- most people think of that as supernatural. Looie496 (talk) 18:15, 13 February 2010 (UTC)
- Also look at collective unconscious. ~AH1(TCU) 23:17, 13 February 2010 (UTC)
Why did the Challenger shuttle explode in a white cloud?
When watching the explosion again recently, I was struck by the fact that the "smoke" was all white, rather than black/grey that I would expect to see from an explosion. Is this because the white stuff is actually refrigerated fuel that did not combust and instead turned to a ball of vapour that obscured any of the smoke? Blooper Watcher (talk) 14:42, 13 February 2010 (UTC)
- The fuel in the external tank is about 800 tons of liquid hydrogen and liquid oxygen. When that exploded, the reaction produced 800 tons of water vapor - which is white. What you're seeing is a small cloud - in the sense of a "rain cloud". (Um - actually, less than 800 tons - it would have used a good chunk of that during the launch sequence - but still a heck of a lot.) SteveBaker (talk) 15:00, 13 February 2010 (UTC)
- (ec) Small quibble. The Shuttle itself did not explode; it was the external fuel tank that 'deflagrated'(?). Aerodynamic forces broke up the orbiter. See Shuttle challenger disaster. The long continuous trails are exhaust from the solid rocket boosters. As per SteveBaker & cloud, see Shuttle_challenger_disaster#Post-breakup_flight_controller_dialog 220.101.28.25 (talk) 15:09, 13 February 2010 (UTC) - This NASA link can probably answer any questions about the Challenger STS-51-L --220.101.28.25 (talk) 15:19, 13 February 2010 (UTC)
- As an interesting aside, since it produces so much water, the launches of shuttles like these cause sudden downpours a few hours later. Vimescarrot (talk) 16:27, 13 February 2010 (UTC)
- (citation needed?) Eight hundred tons of water spread over a path 100 km long and 100 meters wide (let's say) works out to a lousy 80 grams per square meter -- less than a tenth of a millimeter of liquid water, so I don't imagine it's just the Shuttle exhaust falling out of the sky. What I could believe is that the long trail of water vapour could nucleate cloud formation and trigger rainfall that way.... TenOfAllTrades(talk) 17:02, 13 February 2010 (UTC)
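- The back-of-envelope figure above checks out; a quick Python version of the same estimate (using TenOfAllTrades' assumed 100 km by 100 m path):

```python
# 800 metric tons of water spread over a 100 km long, 100 m wide path.
mass_g = 800 * 1_000_000      # 800 tons in grams
area_m2 = 100_000 * 100       # 100 km x 100 m
g_per_m2 = mass_g / area_m2   # grams of water per square metre

# 1 g of water is ~1 cm^3, so 1 g/m^2 spreads to a layer 0.001 mm deep.
depth_mm = g_per_m2 / 1000

print(g_per_m2, depth_mm)     # 80.0 g/m^2, 0.08 mm
```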
- A (probably overly-dramatic) show presented by James May. Probably James May's 20th Century. Vimescarrot (talk) 17:07, 13 February 2010 (UTC)
- Even smaller quibble. Water vapor itself is basically colorless in the visible spectrum. The white stuff, like any cloud, is condensed water (i.e. tiny droplets of liquid). Buddy431 (talk) 16:33, 13 February 2010 (UTC)
drag vs Reynolds #
There is a relationship between coefficient of drag and Reynolds number. When the object is sphere, the relationship can be pictured like this. I've seen versions of this figure in a number of chemical engineering textbooks. What I haven't been able to find out is whether this relationship is an empirical correlation, or an analytical solution, i.e. is this a relationship that is just observed to be true in experiments, or is it a relationship that must be just so according to the theory in this area? ike9898 (talk) 19:23, 13 February 2010 (UTC)
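- Briefly, it's both: the low-Reynolds-number portion of that curve is analytical, while the rest is an empirical fit to experimental data. A minimal Python sketch contrasting the two regimes - Stokes' law (Cd = 24/Re, derived from theory, valid only for Re ≲ 1) against the Schiller-Naumann correlation (an empirical fit commonly quoted for spheres up to Re ≈ 1000); the coefficients are the standard published ones, but treat this as illustrative rather than definitive:

```python
def cd_stokes(re):
    """Analytical drag coefficient for a sphere in creeping flow (Re << 1)."""
    return 24.0 / re

def cd_schiller_naumann(re):
    """Empirical sphere drag correlation, usually quoted for Re up to ~1000."""
    return (24.0 / re) * (1.0 + 0.15 * re**0.687)

# At Re = 0.1 the two agree closely; by Re = 100 the empirical fit
# lies well above the extrapolated Stokes line.
print(cd_stokes(0.1), cd_schiller_naumann(0.1))
print(cd_stokes(100.0), cd_schiller_naumann(100.0))
```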
February 14
boiling
When water and milk are boiled, water boils but milk overflows. Why?