
Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


April 24

Magnetohydrodynamic drive

Can't a magnetohydrodynamic drive be used in jet engines? If it requires a high-voltage current, couldn't that be provided by battery power for a passenger jet liner like an Airbus? If we could achieve this, we could drastically reduce the cost of air travel and the risk to passengers in crashes. — Preceding unsigned comment added by Mihinduep (talkcontribs) 01:49, 24 April 2013 (UTC)[reply]

The reason why hydrocarbon fuels are used in transportation, especially air transportation, is that the energy stored per kg of fuel is vastly greater than the energy you can store per kg of battery weight. Toy aircraft that are battery powered work because their mass is very low for the wing area (if you reduce the size of something, say by 50% in length, weight goes down 0.5 cubed, ie drops 88%, but wing area only goes down 75%, so the wing loading has improved 88/75 ie 17%), and they only fly for a few minutes between charges anyway. As a general rule, if you are wondering "why don't they ...." on any commercial subject, there's a darn good reason - and it's usually pretty obvious if you look. Wickwack 120.145.21.97 (talk) 02:07, 24 April 2013 (UTC)[reply]
Wickwack: I see what you are trying to prove, but your math is way off. You don't compare 88 to 75, you compare what's left of wing area and weight: 12.5% vs. 25%. The result is not even close to 17%, it's a factor of two... - ¡Ouch! (hurt me / more pain) 09:33, 25 April 2013 (UTC)[reply]
Correct - I don't know how I made such a silly error. Wickwack 121.221.28.17 (talk) 00:36, 29 April 2013 (UTC)[reply]
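For what it's worth, a quick numerical restatement of the square-cube scaling argument above, with ¡Ouch!'s correction applied (the 50% length reduction is the example figure used in the thread, everything else is just arithmetic):

```python
# Square-cube scaling sketch: halve every linear dimension of an aircraft.
scale = 0.5
weight_left = scale ** 3    # weight scales with volume -> 12.5% remains
area_left = scale ** 2      # wing area scales with area -> 25% remains
wing_loading = weight_left / area_left   # relative wing loading
print(f"weight: {weight_left:.1%} of original, wing area: {area_left:.1%}, "
      f"wing loading: {wing_loading:.0%} of original")  # loading halves, not a 17% improvement
```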
Even if you had batteries capable of supplying the power needed, a magnetohydrodynamic drive wouldn't be any use - they only work with electrically-conducting fluids. AndyTheGrump (talk) 02:32, 24 April 2013 (UTC)[reply]
Actually, they can work with gases not normally regarded as conducting - they just need to be ionised (which absorbs energy, of course). Wickwack 121.215.143.151 (talk) 13:04, 24 April 2013 (UTC)[reply]
And also, ionizing air requires extremely high local temperatures (on the order of 3000 C or so) -- which means that an MHD drive engine using atmospheric air would only be capable of running for a few minutes before burning out. 24.23.196.85 (talk) 22:53, 24 April 2013 (UTC)[reply]
Any system capable of heating air to 3000 C would hardly need a magnetohydrodynamic drive to create thrust - one could simply use the ramjet principle, in the way that the nuclear powered Project Pluto system did. Assuming you could find something to build it from... AndyTheGrump (talk) 23:12, 24 April 2013 (UTC)[reply]

Robot like thing for children learning by movement

About 2 or 3 years ago (I think), I came across a robot/automaton-like thing for kids. It wasn't humanoid; instead, I think it generally moved like a caterpillar or worm, and you put the parts together (but I could be remembering wrong and there may have been other things it could do). A key fact I do remember is that you programmed or trained it by movement. So if you moved it a certain way, it would learn to do that. I also remember it was quite expensive; I think a decent kit of stuff cost something like US$500. I believe it was from the US. I have wanted to read about it again, twice now, but both times failed to find it. Does anyone know what I'm referring to? Cheers Nil Einne (talk) 03:04, 24 April 2013 (UTC)[reply]

I bet you could build that with Lego Mindstorms...and for less than $500. SteveBaker (talk) 13:23, 24 April 2013 (UTC)[reply]

Maximum Burning Temperature

Where does this concept come from? I've only heard about it in the context of 9/11 conspiracy pages (in the sense that jet fuel has a maximum burning temperature of x°, while steel needs 2x° to melt). Can't you reach any temperature if you keep adding heat and have a good oven to keep it from dissipating? OsmanRF34 (talk) 13:06, 24 April 2013 (UTC)[reply]

(ec)I googled [melting point of steel] and the first thing that came up was this,[1] which demolishes the 9/11 conspiracists by pointing out that steel is sufficiently weakened by heat well below the jet fuel burning temperature. ←Baseball Bugs What's up, Doc? carrots13:25, 24 April 2013 (UTC)[reply]
It might come from a misunderstanding that treats the ignition temperature of a material as a fixed value, the way melting points and boiling points are fixed temperatures. But fire is far more complex than that. Rmhermen (talk) 14:58, 24 April 2013 (UTC)[reply]
The answer to the last question is at Adiabatic flame temperature. Long story short, no, you can't reach any temperature, even without heat loss to the environment, because the best you can do is transfer all of the energy of the molecular bonds into the kinetic energy of the combustion products. As you pile in more energy, you are also distributing it over more mass, so the temperature doesn't tend to infinity. However, as the other posters have said, you don't need either an adiabatic flame or a puddle of molten steel to make a building collapse. --Heron (talk) 20:00, 24 April 2013 (UTC)[reply]
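As a rough illustration of why a flame's temperature is capped, here is a back-of-envelope adiabatic flame temperature estimate. Methane is used for its simple stoichiometry rather than jet fuel, and the heating value and constant heat capacities are round illustrative assumptions (real calculations use temperature-dependent heat capacities and account for dissociation), but the point stands: burning more fuel also produces more hot gas to carry the heat.

```python
# Rough adiabatic flame temperature estimate for methane burned in air:
# CH4 + 2 O2 + 7.52 N2 -> CO2 + 2 H2O + 7.52 N2.
# All the heat of combustion goes into warming a fixed amount of product gas,
# so the flame temperature is bounded no matter how much fuel you burn.
dH = 802e3    # J released per mol of CH4 (lower heating value, approximate)
T0 = 298.0    # starting temperature, K

# moles of product per mol of fuel, and rough mean heat capacities (J/mol/K)
products = {"CO2": (1.0, 54.0), "H2O": (2.0, 45.0), "N2": (7.52, 33.0)}
total_cp = sum(n * cp for n, cp in products.values())

T_ad = T0 + dH / total_cp
print(f"estimated adiabatic flame temperature ~ {T_ad:.0f} K")  # roughly 2300 K
```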

Gulf stream

Is there a Gulf stream or some stream similar that goes from Ireland to Canada (or the U.S.) that ships might take advantage of?LordGorval (talk) 13:38, 24 April 2013 (UTC)[reply]

Not exactly, but sort-of. Check out Ocean current. ←Baseball Bugs What's up, Doc? carrots13:43, 24 April 2013 (UTC)[reply]
O.K. Thank you. Now I see the "N. Atlantic Drift", and if a ship (or sailboat) went on this from Ireland to the Canary Islands and then to the West Indies, about how far would that be?LordGorval (talk) 13:53, 24 April 2013 (UTC)[reply]
In other words, is it possible to travel on a small sail boat from Ireland to the U.S.? How long would that take? Miles?? Which stream would help the sail boat the most on such a crazy voyage?LordGorval (talk) 14:38, 24 April 2013 (UTC)[reply]
Not only is it possible, it may have happened over 1000 years ago - see the Brendan Voyage. --TammyMoet (talk) 15:19, 24 April 2013 (UTC)[reply]
I have doubts about establishing possibility using voyages that probably never happened. Anyway, Columbus actually followed that route on each of his voyages, except starting from Spain rather than Ireland. See Voyages of Christopher Columbus. Generally speaking, though, winds are a more important factor than currents. Looie496 (talk) 16:08, 24 April 2013 (UTC)[reply]
You could design a vessel that would have ocean currents as the biggest factor. A submarine, for example, would be carried around by the current but would be unaffected by the wind. SteveBaker (talk) 18:36, 24 April 2013 (UTC)[reply]
See Tim Severin; in 1976 he sailed across the atlantic in a leather boat.--Gilderien Chat|List of good deeds 18:40, 24 April 2013 (UTC)[reply]
Map of labeled major ocean currents.
Indeed: and Tim Severin did that journey to prove that Brendan could have made his journey all that time ago. --TammyMoet (talk) 19:08, 24 April 2013 (UTC)[reply]

I see Voyages of Christopher Columbus. Unless I missed it, it doesn't show the distance (one way) from Spain to West Indies on his 1st, 2nd, and 3rd trips. Does someone have an estimate? LordGorval (talk) 19:03, 24 April 2013 (UTC)[reply]

Of course it is possible. The seafaring people of Southeast Asia traveled in small boats to faraway places including Hawaii, New Zealand, and Madagascar. Dauto (talk) 19:07, 24 April 2013 (UTC)[reply]
You're looking for Triangular trade. Indeed, it worked... though it would have been better if it hadn't. Wnt (talk) 04:55, 25 April 2013 (UTC)[reply]
Better for who? ←Baseball Bugs What's up, Doc? carrots11:14, 25 April 2013 (UTC)[reply]
Better for the slaves, but not so good for the British and American economy. 24.23.196.85 (talk) 00:36, 26 April 2013 (UTC)[reply]

medicine

Overuse of a nasal spray causes what? — Preceding unsigned comment added by Titunsam (talkcontribs) 14:33, 24 April 2013 (UTC)[reply]

There are a variety of different medications that can be delivered via nasal spray. What the potential side effects are would depend on the specific medication. Red Act (talk) 14:59, 24 April 2013 (UTC)[reply]
There's a start on the topic at nasal spray. Gross overuse would cause drowning, I'd have thought ;-) Dmcq (talk) 15:47, 24 April 2013 (UTC)[reply]
I think the OP is probably referring to rebound congestion. This isn't a very good source but it explains it simply: Nasal Sprays Can Bring on Vicious Cycle--Aspro (talk) 18:24, 24 April 2013 (UTC)[reply]
There are worse complications than that...recall the Zicam controversy, when hundreds of people permanently lost their sense of smell. That was a homeopathic remedy delivered as a nasal spray - the non-spray version of it was perfectly safe (although, being homeopathic, it doesn't work worth a damn to reduce your cold symptoms) - but when you sprayed this relatively innocuous stuff up your nose, the consequences were pretty severe! SteveBaker (talk) 18:34, 24 April 2013 (UTC)[reply]

Light and gravity

If light is partially made of particles, and those are affected by gravity, then why doesn't light go faster than "light speed" when headed down because of gravity? — Preceding unsigned comment added by GurkhaGherkin (talkcontribs) 18:40, 24 April 2013 (UTC)[reply]

According to the theory of relativity, massless particles always travel precisely at the speed of light. Their energy can however change, despite their traveling at the same speed. Massive particles approaching the speed of light behave approximately the same way. E.g. the protons in the LHC will be accelerated to an energy of a few TeV, which is thousands of times the proton rest mass. The protons then travel at practically the speed of light, but you can still add more and more energy to the proton beam without that having much effect on the speed.
A dumbed down analogy is boiling water. The temperature is very close to the boiling point. If you turn on the heat more and more the water will boil more and more intensely, however the temperature will hardly increase. Count Iblis (talk) 18:48, 24 April 2013 (UTC)[reply]
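To put numbers on the point above, here is a small sketch; the beam energies are illustrative round values rather than quoted LHC figures. Once the total energy is far above the proton's rest energy, huge increases in energy barely change the speed.

```python
import math

m_p = 0.938272   # proton rest energy in GeV
for E in (1.0, 10.0, 100.0, 4000.0):          # total energy in GeV (round values)
    gamma = E / m_p                            # Lorentz factor
    beta = math.sqrt(1.0 - 1.0 / gamma**2)     # speed as a fraction of c
    print(f"E = {E:6.0f} GeV  ->  v/c = {beta:.9f}")
```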

Thanks. I am not Jimbo Wales (talk) 18:59, 24 April 2013 (UTC)[reply]

Vertical bright lines from flashlights

Why is it that when in a movie someone points a flashlight directly or almost directly at the camera, bright vertical lines appear extending from the center of the flashlight bulb to the top and bottom of the screen?

For an example see "Day of the Moon" starting at approximately 17:46. Whoop whoop pull up Bitching Betty | Averted crashes 20:22, 24 April 2013 (UTC)[reply]

There are two possibilities...well, three actually:
  1. In "the real world", those kinds of lines can be formed due to refraction of light through your eyelashes - and that happens when you look at a very bright light in an otherwise dim scene. That feel of staring into a bright light is absent when you look at a TV screen - which never gets bright enough to reproduce that effect...so sometimes, the effect is added as a special effect in order to produce the feeling of looking at a very bright light in the audience. This is my best bet as to what's going on here.
  2. Lens flare is another common thing you see in poor optical systems - which is also sometimes added as a special effect to make something look very bright. There is also a specific kind of flare that comes about when using an "anamorphic" lens to take wide-screen images with a standard 4:3 aspect ratio sensor or film that produces a single line artifact...although the ones I've seen have tended to be horizontal lines.
  3. Some older digital cameras generate bright, perfectly vertical lines that are exactly one pixel wide when the image is too bright for them to handle. I'm not quite sure why. I'd hope that Dr.Who episodes would be filmed with better cameras - so that's probably not the reason. I used to have a 10 year-old Canon camera that did this all the time when you got a glint of sunlight reflecting off of a shiny car...very annoying!
SteveBaker (talk) 21:12, 24 April 2013 (UTC)[reply]
Steve - your #3 item is called "CCD smear" and it occurs because old CCDs transferred charge (in the analog domain) column-wise to a sense amp. (The read-out logic piped data through other pixels using analog charge transfer, before it was finally digitized at the bottom of the sensor.) One single saturated pixel (over-exposed by light) would therefore cross through every other pixel in the column, saturating every diode in the entire column. It was the bane of many a CCD camera engineer - very ugly, very visible, and representing total data loss, so it was difficult to digitally correct or "bleep." CMOS sensors (in newer-technology cameras) do not work this way, and so there is no column-wise saturation. CMOS sensors therefore have different characteristic artifacts. Here is a nice cartoon from Hamamatsu, Interline Transfer CCD architecture, written by an Olympus Corp. engineer. Nimur (talk) 21:48, 24 April 2013 (UTC)[reply]
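A toy model of the column smear described above, just to illustrate the geometry (the sensor size, well depth and scene values are made-up numbers, not a real CCD model): one grossly over-exposed pixel contaminates its entire read-out column, giving the one-pixel-wide vertical line.

```python
import numpy as np

full_well = 255                                  # saturation level of a pixel
rng = np.random.default_rng(0)
img = rng.integers(10, 40, size=(8, 8))          # dim night-time scene
img[3, 5] = 50_000                               # one grossly over-exposed pixel

# During column-wise read-out, excess charge from a saturated pixel is
# dragged through every pixel in the same column.
excess = np.clip(img - full_well, 0, None).sum(axis=0)
smeared = np.clip(img + excess, 0, full_well)    # the whole of column 5 saturates
print(smeared)
```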
My reading of the lens flare article (and personal experience) leads me to believe that lens flare can also occur in the human eye -- no camera necessary. SemanticMantis (talk) 21:21, 24 April 2013 (UTC)[reply]
Looking at the pictures, it's almost certainly lens flare, Whoop whoop pull up Bitching Betty | Averted crashes 21:57, 24 April 2013 (UTC)[reply]
It's not #1 - when that effect occurs, the lines are as wide as the light source. In the scene with Amy Pond's flashlight, the lines are much narrower than the light source. Whoop whoop pull up Bitching Betty | Averted crashes 21:33, 24 April 2013 (UTC)[reply]
Resolved


April 25

Proper motion of stars

According to the article on Proper motion, these motions are measured as coordinate changes on the celestial sphere, and celestial coordinates are in turn, as far as I can gather, defined based on projections of the Earth's Equator and point(s) where the sun crosses the plane of the Equator. It seems to me that this entire coordinate system will shift as the Earth's orientation wobbles and precesses in various ways over time. Given that stars' proper motions are usually tiny anyway, why are their measurements not distorted or even swamped by such effects? 86.160.222.208 (talk) 02:39, 25 April 2013 (UTC)[reply]

Minor historical note, as I don't know the answer to your question (I have a suspicion I know what the reason is, but I have nothing to cite for now) -- the celestial coordinate system actually came first; the latitude and longitude lines on the earth's surface are, IIRC, a projection of those onto our planet. Evanh2008 (talk|contribs) 03:04, 25 April 2013 (UTC)[reply]
Quoting from the equatorial coordinate system article "...these motions necessitate the specification of the equinox of a particular date, known as an epoch, when giving a position." The current International Celestial Reference Frame standard specifies the equinox of J2000.0. I'm not an astronomer though, so if there is any more to this to followup on, I'm sure others will chime in. -Modocc (talk) 04:07, 25 April 2013 (UTC)[reply]
See epoch (astronomy). As Modocc said, celestial coordinates are always specified with respect to an epoch, and the standard epoch is redefined every 50 years. The most recent one is J2000.0 (the beginning of Julian year 2000). You can convert between epochs by accounting for the effect of precession, which is by far the most important cause of axial drift. When a new epoch is about to begin--in the 1990's, for example--some people decide to be ahead of the game and specify coordinates using the new epoch. This is highly annoying because it forces other astronomers to convert back and forth. --140.180.249.226 (talk) 04:47, 25 April 2013 (UTC)[reply]
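For anyone wanting to see the equinox/epoch conversion in practice, a hedged sketch using the astropy library (assuming it is installed; the example star position and the J1975.0 equinox are made up for illustration):

```python
from astropy.coordinates import SkyCoord, FK5
import astropy.units as u

# A (made-up) position quoted against an older equinox...
old = SkyCoord(ra=10.0 * u.deg, dec=20.0 * u.deg, frame=FK5(equinox="J1975.0"))
# ...re-expressed against the standard J2000.0 equinox. The star has not moved;
# only the coordinate grid, dragged along by Earth's precession, has.
new = old.transform_to(FK5(equinox="J2000.0"))
print(new.to_string("hmsdms"))
```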
Thank you. 86.160.85.235 (talk) 17:49, 25 April 2013 (UTC)[reply]

About electrification

I'd like to hear others' opinions regarding these:

a. Why don't we get a shock when just touching an exposed conducting wire, despite the fact that current does flow through us (while the touching body is kept isolated and unearthed)?
b. When an exposed wire is touched by a bare hand, can the hand (in principle) light a bulb while unearthed?

c. The danger in touching a wire while earthed is due to the immense current caused by the voltage across the body, according to Ohm's law (since the voltage can reach hundreds to thousands of volts). Does this explanation describe the situation correctly and fully?

Thank you, BentzyCo (talk) 04:42, 25 April 2013 (UTC)[reply]

a) You do get a bit of a jolt from the spark, but it's only a fraction of a second, until the charge in your body equals the charge in the wire. That's not nearly as bad as the continuous flow of electrons through you which would occur if you were grounded ("earthed" to you).
b) Not an incandescent bulb, as that would require a continuous flow of electrons. Fluorescent bulbs can be lit by just being near a live electrical wire though, due to the way they react to the fields around the wire.
c) I prefer my continuous flow of electrons explanation.
BTW, if you've ever seen one of those science demonstrations where a (well-insulated) kid puts his hand on the metal ball, then his hair stands on end from the electrical charge, you will note that the ball doesn't have any charge when he first touches it, or he would get a wicked spark. They slowly turn up the charge after he has steady contact with it. StuRat (talk) 05:35, 25 April 2013 (UTC)[reply]
a. "until the charge in your body equals the charge in the wire" ???
c. Your declaration regarding "continuous flow" needs clarification & substantiation. BentzyCo (talk) 06:58, 25 April 2013 (UTC)[reply]
Regarding a, what Stu means is that current only flows until the voltage of your body equals the voltage of the wire, not charge. Someguy1221 (talk) 07:05, 25 April 2013 (UTC)[reply]
a) I meant the "relative charge". So, if there was 1 free electron for every trillion atoms in the wire (totally made up number), then the flow of electrons continues until you have one free electron per trillion atoms in your body, more or less.
c) If you hold a live wire in one hand and the other hand is grounded, and the voltage/current are sufficient, then electrons will continue to flow through your body, heating it and causing damage, until you let go (or your charred body becomes an effective insulator). StuRat (talk) 07:16, 25 April 2013 (UTC)[reply]
If the wire is carrying an AC current, then you do get a continuous flow of AC current through you even if you aren't touching anything but the wire, due to your body's self-capacitance. However, human body capacitance is only about 100pF, which at 60Hz produces a capacitive reactance of about 25MΩ, which at 120V results in a current of about 5µA, which is about 200 times smaller than the 1mA minimum that can be felt. Red Act (talk) 06:58, 25 April 2013 (UTC)[reply]
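A quick check of the arithmetic in the post above (values taken from that post; this is just Ohm's law with a capacitive reactance):

```python
import math

C = 100e-12   # human body self-capacitance, farads
f = 60.0      # mains frequency, Hz
V = 120.0     # mains voltage, volts

Xc = 1.0 / (2.0 * math.pi * f * C)   # capacitive reactance, ohms
I = V / Xc                            # current through the body's self-capacitance
print(f"Xc ~ {Xc / 1e6:.1f} Mohm, I ~ {I * 1e6:.1f} microamps")  # ~26.5 Mohm, ~4.5 uA
```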
... though a current as small as that can be "felt" by Electrovibration. (For forty years, I wondered why I could detect electricity in my fingertips at currents much too small to be "felt" according to conventional theory.) Dbfirs 12:03, 25 April 2013 (UTC)[reply]

Finding planets around smaller sized star

I wonder if brown dwarfs have their own planetary systems. I am not sure exactly how big an average brown dwarf would be. They say an average brown dwarf is around 10-60 Jupiter masses; I thought a Y-type brown dwarf is smaller than Saturn and an average brown dwarf is about the size of Jupiter. If brown dwarfs have planets around them, are astronomers able to measure correctly the distance of the planets from their brown dwarf star? I hear a lot of Earth-like planets have been found around red dwarfs, and I have seen a lot of Earth-comparable planets around red dwarfs in science magazines. Based on what I thought, scientists are still able to measure correctly how far the planets are from their red dwarfs. I thought an average red dwarf is four times the size of Jupiter. Are M7 or later dwarfs brown dwarfs, or red dwarfs? I thought M0-M9V are all red dwarfs. But according to what I hear, when scientists try to study planets around white dwarfs to predict what will happen to our solar system in the future, we just get more confused and everything becomes more anomalous. Could it be because white dwarfs are too small to measure planet-star distances accurately? If a star is too small, will it be harder to measure the distance from a planet to its parent star? Is it even hard to tell whether a white dwarf is a white dwarf, or do some scientists confuse white dwarfs with sdB or sdO subdwarfs?--69.233.255.83 (talk) 05:26, 25 April 2013 (UTC)[reply]

You are confusing the spectral classification (i.e. M, L, T, Y) with the classification by mass. For instance, a young brown dwarf of ~15 Jupiter masses will have an M spectral class. As it cools down it will eventually become Y-type. The average size of all brown dwarfs is about the size of Jupiter, irrespective of mass. Ruslik_Zero 09:37, 25 April 2013 (UTC)[reply]
(ec) There seems to be some confusion of size and mass.
If you could add mass to Jupiter, it wouldn't grow in size; Jupiter is about as big as planets get. Its mass and therefore its gravity would increase, and compress the mass. Depending on how much you add, Jupiter would even shrink, because compression would more than offset the volume of the additional mass. If unclear fusion didn't ensue at one point or the other, you could add the mass of the Sun and end up with something the size of planet Earth (ballpark estimate).
And that's what happens when stars like the Sun run out of fusion fuel. After some burping and farting, their dead body gets all pale and wrinkly - and a Brown dwarf is a stillborn star in comparison; it never gains the mass required to "rise and shine."
Also keep in mind that you need radiation to analyze if you want to find exoplanets. Brown dwarf radiation is just too faint to analyze (yet?), and while Red dwarfs aren't exactly bright either, they are far more common than any other type. - ¡Ouch! (hurt me / more pain) 10:44, 25 April 2013 (UTC)[reply]
White dwarfs do radiate, although faintly. If scientists try to detect the planets around white dwarfs, can they mis-estimate the distance? Will the star being too small make finding planets hard? They can still see the planets though.--69.226.44.190 (talk) 00:14, 26 April 2013 (UTC)[reply]
The misspelling of the word "nuclear" as "unclear" makes the answer above a bit unclear. Dauto (talk) 17:42, 25 April 2013 (UTC)[reply]
Yes, my mother's brother's husband Kenny once tried to have me engage in "uncle-ar fusion" with him, but I told him he wasn't my type. A little too av-nucular for my taste. -- Jack of Oz [Talk] 04:46, 26 April 2013 (UTC) [reply]
Oops, epic typo, on the scale of "marital arts" on IRC. Thanks for pointing that one out. (You say "unclear", you lose, right? :S )
<still smaller> That one was hard to catch but sometimes I wonder why some non-words pass just as easily...</still smaller> - ¡Ouch! (hurt me / more pain) 11:38, 26 April 2013 (UTC)[reply]
p.s. WHAT? Those bastards killed Kenny AGAIN? ;)
Yes, if the star is smaller there is less chance of an eclipse happening, with a planet in line with the Earth and the star. However, a planet will be a similar size to the star, and so the dimming of light will be much more dramatic. Detection by gravitational microlensing should not make a difference with the size. But for the very dim stars you will struggle to get enough light to measure the radial velocity very accurately. Reflected light detection will be extra hard. Graeme Bartlett (talk) 07:47, 29 April 2013 (UTC)[reply]

Does energy have mass?

In the article Conservation of mass, it is stated that energy has mass. But I thought only matter has mass. If energy has mass, what is the difference between energy and matter? — Preceding unsigned comment added by G.Kiruthikan (talk • contribs) 06:18, 25 April 2013 (UTC)[reply]

You are mostly right. In classical mechanics, only matter has mass. In the more advanced mechanics discovered by Albert Einstein, mass and energy are found to be equivalent rather than totally different entities. The principle of the equivalence of mass and energy shows that mass can be converted to energy in accordance with the famous equation E = mc²; and energy also has mass. See Mass-energy equivalence and Mass in special relativity. Dolphin (t) 06:48, 25 April 2013 (UTC)[reply]
Indeed. Mass-energy equivalence means that mass is energy and energy is mass - you can't have one without the other, because they are just different ways of measuring the same property. Matter is a less well defined concept in modern physics - as you will see from our article, there are various working definitions depending on context. One definition is that matter is anything that both has mass/energy and occupies volume. An alternative definition in terms of elementary particles is that matter is composed of fermions (rather than bosons). Gandalf61 (talk) 07:43, 25 April 2013 (UTC)[reply]
In some contexts the Higgs boson is considered matter even though it is not a fermion. Dauto (talk) 18:18, 25 April 2013 (UTC)[reply]
Often confusion arises because the force-carrying bosons photons and gluons are massless. In this context, these massless particles have mass-energy (as explained above by Dolphin51 and Gandalf61), but they do not have invariant mass (a mass invariant with respect to different reference frames). See the section on massless particles though, in which a system of two massless particles has an invariant mass iff their momenta form a nonzero angle. -Modocc (talk) 14:25, 25 April 2013 (UTC)[reply]
Not all force-carrying bosons are massless. See W and Z bosons. Dauto (talk) 17:24, 25 April 2013 (UTC)[reply]
Thanks. I meant only that the gluons and photons are massless particles so I corrected that. --Modocc (talk) 20:12, 25 April 2013 (UTC)[reply]
There is also a definition based on whether a particle's energy is larger than its rest mass (AKA invariant mass), in which case it is considered radiation, or whether its energy is smaller than the rest mass, in which case it is considered matter. Those are the definitions used when contrasting a matter-dominated universe (currently the case) with a radiation-dominated universe (the early universe). The cold hard truth is that the word matter doesn't really mean anything. Dauto (talk) 17:34, 25 April 2013 (UTC)[reply]
So then matter doesn't matter? What's the matter with that? --Jayron32 00:07, 26 April 2013 (UTC)[reply]
"You must lie upon the daisies and discourse in novel phrases of your complicated state of mind, The meaning doesn't matter if it's only idle chatter of a transcendental kind". -- Jack of Oz [Talk] 04:39, 26 April 2013 (UTC) [reply]
"This particularly rapid unintelligible patter / Isn't generally heard, and if it is, it doesn't matter."AlexTiefling (talk) 10:41, 26 April 2013 (UTC)[reply]

What we call matter fairly well aligns with what you might consider "tangible". Or at least that distinction fit for a long time. A brick is matter - you can hold it in your hands. A brick's thermal or kinetic energy are not. You can't isolate and hold "a gram of kinetic energy". Same goes for a gram of gravitational potential energy, or a gram of heat. Some forms of energy are not so abstract, but are intangible nonetheless, such as the electromagnetic energy carried by photons. But as has been mentioned above, the definition of "matter" doesn't matter. It's just a choice of how to classify things. Someguy1221 (talk) 04:45, 26 April 2013 (UTC)[reply]
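A small sketch of the two-photon example mentioned earlier in this thread (units with c = 1 and unit photon energies are assumptions for illustration): each photon is individually massless, yet the pair has a nonzero invariant mass unless their momenta are parallel.

```python
import math

def invariant_mass(E1, E2, angle):
    """Invariant mass of two massless particles with a given opening angle (c = 1)."""
    px = E1 + E2 * math.cos(angle)            # total momentum components
    py = E2 * math.sin(angle)
    m2 = (E1 + E2) ** 2 - (px ** 2 + py ** 2)  # M^2 = E_total^2 - |p_total|^2
    return math.sqrt(max(m2, 0.0))

for angle in (0.0, math.pi / 2, math.pi):
    print(f"opening angle {angle:.2f} rad -> invariant mass {invariant_mass(1.0, 1.0, angle):.3f}")
```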

Momentum operator in energy basis

What is the momentum operator in the energy basis?

150.203.115.98 (talk) 11:33, 25 April 2013 (UTC)[reply]

If H commutes with p (e.g. if H is just the kinetic energy p²/(2m)), then p is already diagonal in the energy basis. Otherwise, it's just given by the matrix elements of p. You can write the identity operator as:

$$1 = \sum_n |E_n\rangle\langle E_n|$$

where the sum is over any complete set of states. You can thus take these states to be the energy eigenstates. You can then write:

$$p = p \cdot 1 = \sum_m p\,|E_m\rangle\langle E_m|$$

If you then multiply by the identity on the left, you get:

$$p = \sum_{n,m} |E_n\rangle\langle E_n|\,p\,|E_m\rangle\langle E_m| = \sum_{n,m} \langle E_n|p|E_m\rangle\,|E_n\rangle\langle E_m|$$

Count Iblis (talk) 13:08, 25 April 2013 (UTC)[reply]

And if you are dealing with a continuum of states instead of quantized states, you simply replace the sums with integrals

Dauto (talk) 17:19, 25 April 2013 (UTC)[reply]

All right. So, for example, what would it be for a particle in a box? The discrete energy levels being $E_n = \frac{n^2\pi^2\hbar^2}{2mL^2}$, I think. So I guess the (n,m)th entry of this matrix is $\langle E_n|p|E_m\rangle$; how do you calculate that?
I'm not really sure if this is the right way of doing it, but can you define $p = -i\hbar\,\partial/\partial x$ as in the position basis, and then compute $\langle E_n|p|E_m\rangle = \int_0^L \psi_n^*(x)\,(-i\hbar)\,\psi_m'(x)\,dx$? I get a complex value though, so this can't be right.
150.203.115.98 (talk) 21:54, 25 April 2013 (UTC)[reply]

There are some subtle issues here. This has to do with the fact that the infinite potential barrier means that you are necessarily going to look at only the interior of the box. There H = p²/(2m); however, p actually does not commute with H due to the boundary conditions, which break translational invariance. What you can do here is consider some arbitrary plane wave exp(i k x); this can always be expanded in the region within the box in terms of the energy eigenstates, and clearly this amounts to a Fourier expansion. The physical interpretation would be that if you start out with a momentum eigenstate which is upon a measurement found to be inside a box (without there being an infinite potential well), and we then lock the particle up there by switching on the potential well, you can then consider expanding the state in terms of the energy eigenfunctions. Count Iblis (talk) 12:43, 26 April 2013 (UTC)[reply]
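A minimal numerical sketch of the matrix elements discussed above (not from any poster; units with ħ = L = 1 and the grid size are assumptions for illustration). It evaluates ⟨n|p|m⟩ for the infinite square well by quadrature and compares with the standard closed form; the off-diagonal entries come out purely imaginary, which is exactly what a Hermitian matrix allows and is not by itself a sign of error.

```python
import numpy as np

hbar, L = 1.0, 1.0                       # convenient units (assumption)
x = np.linspace(0.0, L, 20001)           # integration grid
dx = x[1] - x[0]

def psi(n):
    """Energy eigenfunction of the infinite square well."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def dpsi(m):
    """d/dx of the m-th eigenfunction."""
    return np.sqrt(2.0 / L) * (m * np.pi / L) * np.cos(m * np.pi * x / L)

def p_element(n, m):
    """<n|p|m> = -i*hbar * integral of psi_n * d/dx psi_m (trapezoid rule)."""
    integrand = psi(n) * dpsi(m)
    return -1j * hbar * np.sum((integrand[:-1] + integrand[1:]) / 2.0) * dx

for n in range(1, 4):
    for m in range(1, 4):
        # closed form: zero when n+m is even, -4i*hbar*n*m/(L*(n^2-m^2)) otherwise
        exact = 0.0 if (n + m) % 2 == 0 else -4j * hbar * n * m / (L * (n**2 - m**2))
        print(n, m, np.round(p_element(n, m), 6), exact)
```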

Simple Amplifier

Can I build a simple sound amplifier using a simple microphone, a transistor (what type?), a battery, etc.? In what order shall I place them? 124.253.255.39 (talk) 13:58, 25 April 2013 (UTC)[reply]

The design of such an amplifier depends on what you want it to do. What type of microphone? What do you want it to drive - headphones? loudspeaker? With normal dynamic (electromagnetic) microphones, a single transistor will not give enough amplification even for headphones. However, the output of carbon and electret microphones is much greater, and a single transistor will be sufficient to drive sensitive headphones. Driving a loudspeaker will require a more complex circuit - an audio amplifier integrated circuit will be a better choice. You can google "simple amplifier" and look for circuits. However, the wording of your question indicates a knowledge of electronics at a level where you may require a lot of help. I suggest you subscribe to one or more electronics magazines. Keit 120.145.168.40 (talk) 14:08, 25 April 2013 (UTC)[reply]
This page (linked from our common emitter article) has an example circuit (with component values) that the OP might find useful. Tevildo (talk) 19:14, 25 April 2013 (UTC)[reply]
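For a feel of the sums involved in biasing a single-transistor common-emitter stage of the kind linked above, here is a back-of-envelope sketch; the supply voltage and component values are illustrative assumptions, not taken from that page.

```python
# Rough DC bias calculation for a voltage-divider-biased common-emitter stage.
Vcc = 9.0                  # supply voltage (assumed)
R1, R2 = 100e3, 22e3       # base bias divider, ohms (assumed values)
Re, Rc = 1e3, 4.7e3        # emitter and collector resistors, ohms (assumed)
Vbe = 0.65                 # typical silicon base-emitter drop, volts

Vb = Vcc * R2 / (R1 + R2)  # base voltage from the divider (ignoring base current)
Ie = (Vb - Vbe) / Re       # emitter current, roughly equal to collector current
Vc = Vcc - Ie * Rc         # quiescent collector voltage
gain = Rc / Re             # rough small-signal voltage gain with Re un-bypassed
print(f"Ic ~ {Ie * 1e3:.2f} mA, Vc ~ {Vc:.2f} V, |gain| ~ {gain:.1f}")
```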

Length of minor and major axis of the earth's elliptical orbit

Earth revolves around the sun in an elliptical orbit, and we know every ellipse has two axes (one minor and the other major). I have read in many places that the distance of the Earth from the Sun is 150 million kilometers, but I am confused about whether this distance refers to the major axis or the minor axis. So, I want to know the lengths of both the major and minor axes of the Earth's orbit. Scientist456 (talk) 14:14, 25 April 2013 (UTC)[reply]

That number is an approximation to an average. The article Earth's orbit has more details and more precise values. RJFJR (talk) 14:54, 25 April 2013 (UTC)[reply]
Also, you should be aware that the sun is at one focus of the ellipse, not at its geometric centre. Dbfirs 16:12, 25 April 2013 (UTC)[reply]
Using the numbers from that article, we find that the semimajor axis is 149,598,261 km with eccentricity 0.01671123. The semiminor axis is therefore 149,577,371 km. As you can see, these numbers are extremely close to each other, and both can be rounded up to give 150 million km. --140.180.249.226 (talk) 16:29, 25 April 2013 (UTC)[reply]
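Checking that figure from the quoted orbital elements (values taken from the post above; the only formula involved is the semi-minor axis of an ellipse):

```python
import math

a = 149_598_261.0        # semi-major axis, km (from the Earth's orbit article)
e = 0.01671123           # orbital eccentricity
b = a * math.sqrt(1.0 - e * e)   # semi-minor axis
print(f"semi-minor axis ~ {b:,.0f} km")   # ~149,577,371 km, within 0.02% of a
```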
Note, though, that the variability in Earth's distance from the Sun is much larger -- 147.1 Mkm for the point of closest approach (perihelion), 152.1 Mkm for the point of greatest distance (aphelion). Looie496 (talk) 16:51, 25 April 2013 (UTC)[reply]
That's still only about a 3% difference. --Jayron32 00:39, 26 April 2013 (UTC)[reply]
But the radiation received goes down as the square of the distance, so it is a 6% difference that way. And temperature is relative to space, which is near absolute zero, so it should mean about a 16°C difference I'd have thought, which is enormous. I wonder why it makes so little difference. Dmcq (talk) 03:47, 26 April 2013 (UTC)[reply]
The Greenhouse effect. 202.155.85.18 (talk) 06:55, 26 April 2013 (UTC)[reply]
That should just be a constant factor as far as this is concerned, the only sort of thing I can see affecting it much is the heat capacity of the earth, especially the oceans. Dmcq (talk) 11:25, 26 April 2013 (UTC)[reply]
I've read somewhere that this has to do with the fact that most of the land area is in the Northern hemisphere, while the Earth is farthest from the Sun during the summer there. Count Iblis (talk) 12:21, 26 April 2013 (UTC)[reply]
Thanks, that gave me a search term that got this [2]. Incredibly the earth as a whole is on average 2.3°C warmer when it is farther from the sun because of the effect you said! It also says northern summers are about 3 days longer than southern ones because the earth moves faster when it is closer to the sun. Dmcq (talk) 12:40, 26 April 2013 (UTC)[reply]
Look, we don't need to be speculating on this, especially because the OP's question has already been answered. The effective temperature of a planet can be calculated by determining how much radiation it absorbs from the Sun, and balancing that with the blackbody radiation the planet emits. The result is shown here--temperature changes as the inverse square root of the Sun-planet distance. This is because the power emitted from a blackbody goes as the fourth power of temperature, so if Earth moves 1% farther from the Sun, it receives 2% less radiation, but only needs to decrease its temperature by a tiny amount to decrease its emitted radiation by 2%.
So since Earth's distance to the Sun changes by 3%, its temperature is expected to change by 1.5%, or 4 degrees. But this calculation assumes that Earth has no thermal inertia. In reality, the oceans and landmasses will take a long time to respond to changes in solar irradiation, so the actual temperature change is smoothed over a few months and is expected to be less than 4 degrees. --140.180.240.146 (talk) 19:54, 26 April 2013 (UTC)[reply]
That should be the other way round, 6% not 1.5%. Dmcq (talk) 22:51, 26 April 2013 (UTC)[reply]
Did you read the article I linked to? Temperature is proportional to the square root of the distance. If the distance changes by 3%, temperature changes by 1.5%. --140.180.240.146 (talk) 00:37, 27 April 2013 (UTC)[reply]
Sorry didn't spot that. So the square root is due to the power being radiated at a rate proportional to the fourth power of the temperature. Thanks that explains everything. Dmcq (talk) 06:26, 27 April 2013 (UTC)[reply]
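Putting the inverse-square-root scaling discussed above into numbers (the 255 K effective temperature is a standard round figure, used here only for illustration; the perihelion and aphelion distances are the ones quoted earlier in the thread):

```python
# Equilibrium blackbody temperature scales as 1/sqrt(distance from the Sun).
T_eq = 255.0                              # Earth's effective temperature, K (approx.)
perihelion, aphelion = 147.1e6, 152.1e6   # km

ratio = (aphelion / perihelion) ** 0.5    # T(perihelion) / T(aphelion)
print(f"temperature swing ~ {(ratio - 1) * 100:.1f}%  ~ {(ratio - 1) * T_eq:.1f} K")
# a ~3.3% distance swing gives only a ~1.7% (roughly 4 K) temperature swing
```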

Li-ion batteries in new electronics

I heard that Li-ion batteries only last 2-3 years even if unused. Does that mean if a brand new, unopened electronic item, working on a Li-ion battery, just sat there for 2-3 years, it would become useless, assuming the item doesn't allow you to change the battery? Clover345 (talk) 14:51, 25 April 2013 (UTC)[reply]

That's not a simple question. There are many variations of battery technology that are swept together as "Li-ion" - and their shelf-life varies immensely. Also much depends on how much charge is in the battery as it is stored. SteveBaker (talk) 15:23, 25 April 2013 (UTC)[reply]
Li-ion batteries usually have a controller with a safety cut-out that disconnects the battery if it falls below a certain minimum charge. This might possibly make the battery unchargeable if it has been allowed to discharge below this minimum. One doesn't hear of this happening very often, either because dealers open the packaging and re-charge after three years on the shelf, or because the batteries hold their charge for longer than 2-3 years. As Steve says above, there is a wide variation in battery specifications. If you are going to store Li-ion powered equipment unused for many months, it is usually advised that you leave it at least partially charged. Dbfirs 16:10, 25 April 2013 (UTC)[reply]
Properly stored batteries (in a low temperature and low humidity environment) will have a longer shelf-life, whatever it is. A new battery will also be charged with the right amount to last longer. Add to this that good supply chain and shop management won't let any electronics sit on a shelf for 2-3 years. The life cycle of most products is probably much shorter than that nowadays. OsmanRF34 (talk) 17:09, 25 April 2013 (UTC)[reply]
See Lithium-ion_battery#Disadvantages. Just sitting there isn't as bad as using it, but even if you use it what typically happens is the battery only lasts half the time on charge after three years. However my experience is that quite a number also do fail totally in that time. Dmcq (talk) 11:36, 26 April 2013 (UTC)[reply]

Kangaroos hit by cars

I understand this is quite a problem in Australia. Why is this ? Their eyes are high enough to see cars coming with plenty of warning, and they seem fast enough to get off the road in time. Also, most of Australia is flat and treeless, allowing them to see for miles. (Do collisions occur mainly where there are trees and buildings, blocking their view ?) StuRat (talk) 16:51, 25 April 2013 (UTC)[reply]

Maybe they freeze and stare at headlights like deers. So, the question is probably why some animals do it. OsmanRF34 (talk) 17:13, 25 April 2013 (UTC)[reply]
In the case of deer, coming out from the woods without looking where they are going seems to be the problem, along with the freezing behavior you describe. They also seem to have a problem in that they often travel in a group (say mom and a fawn), and the followers assume the path is safe if the leader crossed the street, while the leader barely makes sure the path is clear for herself, much less others. Then there are creatures like badgers, with both eyes low to the ground and a slow speed, meaning that once they see the car, they can't get out of its way. StuRat (talk) 17:15, 25 April 2013 (UTC)[reply]
I doubt you are right about deers being hit by cars because they don't look when crossing roads. Where did you get that from? Walt Disney? It's actually instinct that controls the deer's behavior, until the deer's brain has a chance to understand what a car's headlights mean and what the deer should do about it. Often, the deer doesn't get that time. The environment in which deer evolved did not have cars with headlights. So unless a deer has had a chance to learn about cars with headlights, it registers in the deer's brain as an "unknown possible danger" (UPD). Before cars, the safest thing any animal could do when faced with a UPD was to freeze. This is because the UPD is likely to be a predator, and predators have trouble seeing still objects, while the most dangerous ones (wolves, big cats, bears) react to retreating animals by chasing them. If the UPD turns out to be a non-predator danger (fire, earthquake, falling tree), what's the point of running away until the deer can figure out which way to run? On the other hand, if the UPD turns out not to be dangerous, or to be another deer that wants to horn in on whatever the deer is eating, why waste energy running and giving up on whatever objective the deer was following? The UPD might even be something the deer wants to scare away--which the deer can do by stamping its front feet or by charging at a smaller animal. OsmanRF34 (talk) 17:24, 25 April 2013 (UTC)[reply]
(Pssst... the plural of "deer" is still "deer".) Deer often run into the sides of cars on roads, and it's hard to explain that unless they didn't see the car at all. StuRat (talk) 04:17, 26 April 2013 (UTC)[reply]
Kangaroo behaviour does seem to have some parallels with that of deer. There's probably also the fact that, unlike deer, kangaroos haven't historically had much in the way of predators, so watching for threats the size of a car hasn't been part of their evolution. And, most of Australia is not flat and treeless, or at least the parts where most of the cars are aren't. Even where there are not large trees, there's usually plenty of scrub. It's also just possible that movie and photographic imagery of Australia doesn't give a true picture of the country to the rest of the world. HiLo48 (talk) 17:27, 25 April 2013 (UTC)[reply]
Australia is not flat and treeless? Everybody knows it's a big desert. Have you ever been there? OsmanRF34 (talk) 17:35, 25 April 2013 (UTC)[reply]

Where I live in the UK I regularly have to avoid deer suddenly crossing the road. When I drove in Australia (I spent around seven months there), I'd often have a 'roo jump out from the trees/bushes on the side of the road and "near miss" me. A lot of this has to do with the fact that deer (in the UK) and 'roos (in Oz) are prevalent so the odds of seeing them try to cross the road right in front of you in certain places (e.g. the "countryside") is high. The Rambling Man (talk) 17:45, 25 April 2013 (UTC)[reply]

I live and drive in Australia and I concur with HiLo48. The parts where most of the cars are aren't flat and completely empty of fauna. Deserts_of_Australia has a decent map which shows that the majority of the eastern half of Australia does not have any desert in it. I live in Victoria, where plenty of kangaroos get hit on the roads, and we have no deserts, similarly with New South Wales and Queensland, which only have small parts of their westernmost borders in desert. Well over half the population of Australia live in Victoria and the eastern parts of the other two states. Vespine (talk) 23:40, 25 April 2013 (UTC)[reply]
(Oi. Victoria has the Little Desert and the Big Desert, but I'm not after a fight over it.) HiLo48 (talk) 06:57, 26 April 2013 (UTC)[reply]
Deserts of Australia has a few maps that contradict each other on exactly where there is desert and where there isn't, but this map shows that most of the eastern half of Australia is either desert or grassland, and if you have a look at some of the grassland areas using google maps (such as here, here and here), you'll see that while there is grass, a layman could be forgiven for calling it a desert. 202.155.85.18 (talk) 07:39, 26 April 2013 (UTC)[reply]
I don't know anything about deer, but if you want to know about kangaroos, ask an Australian, such as me. They are indeed a significant problem - several have done serious damage to my car. If you hit a big red at highway speeds, your car can be a write-off.
While I don't know anything about deer, I imagine that as ruminants that run in herds, their behaviour would be something like goats, which my parents farmed. Goats are not very intelligent, but they have a range of instincts. One of which is that if one senses possible danger, it lifts its head and points in the direction of the suspected danger, while watching to see what other goats in the herd do (they can do this because the position of their eyes gives about a 320 degree all-round view). If another goat does the same, both goats run AWAY from the danger. All this happens in less than a second. When the two goats run, the rest of the herd run with them - they do not bother with assessing the danger. If for some reason a second goat does not react, nothing further will happen. If a goat gets out on a road and is blinded by headlights, it literally has no idea what to do and just stands there looking into the lights. As a child I occasionally amused myself by getting a school friend to come with me into the goat paddock and suddenly break into a run with me. The result would invariably be the entire herd running in the same direction in panic.
Kangaroos are completely different.
While kangaroos like to gather with other roos, they spend a lot of time on their own. And while they are biologically VERY sophisticated (their calorific requirements are very low, and their hormone and reproduction system is amazing), they are truly the stupidest of anything that has fur. So their instincts/pre-programmed reactions to danger have adapted to suit. When a kangaroo senses danger, it immediately hops away at top speed IN A COMPLETELY RANDOM DIRECTION. I have often been walking in the State forest and, snapping a twig on the ground, startled a roo. More than once the stupid thing has run directly toward me - I've had to get out of the way.
This is what happens if you are driving at night and there is a roo on the road, or just happens to want to cross the road as you approach: He's blinded by your lights, and at first doesn't know what lights mean, so he just stands there looking at them. As you get closer, he sees the lights getting bigger and brighter and he hears the sound of the tires and motor. He then decides "Danger" and hops at maximum speed IN A RANDOM DIRECTION. If he turns and goes directly away, fine. You can brake heavily and wait for him to get well clear. If he hops away at right angles to the road, fine. But sometimes they hop directly toward you, and then there is going to be a crash.
I had a big one that ran directly toward me, and ran at full speed directly into my car, after I had braked to a complete stop! The damn thing caused $2000 worth of damage.
Sometimes they will hop off the road at right angles and become unblinded, and then, presumably concerned about hitting a tree, turn and run back onto the road. They certainly are stupid.
Incidentally, Osman clearly has not been to Australia. It's a bit like the USA. About the same land area, it certainly does have a lot of desert in the centre, but within 200 to 400 km of the coast it is pretty lush. In my State, we have the Jarrah forest, which is dense forest with very large trees covering about the area of one average US state. The wood from Jarrah trees has been extensively used for building construction and furniture both here and in the UK, and has been exported to the USA and other countries. However, its use is now restricted to high quality furniture only.
Wickwack 121.221.220.87 (talk) 00:03, 26 April 2013 (UTC)[reply]
As someone who doesn't "know anything about deer", you perhaps should have avoided speculating so much about them. Unlike caribou or bison or some Indian deer, North American deer are not generally herd animals. Males are usually solitary, except at mating times, and a female is usually accompanied only by a few children less than a year of age. So the assumptions based on herd instincts don't really apply. Dragons flight (talk) 01:00, 26 April 2013 (UTC)[reply]
Whatever. What I said about goats is valid. Deer are farmed here - kept together in large paddocks. Whatever sort of deer they are, they clearly like to be together. Wickwack 124.178.174.155 (talk) 01:21, 26 April 2013 (UTC)[reply]
  • "Deers"? (Rggggh....venison) Both times I almost hit a deer it was because they were crossing the road as a group at the darkest spot in otherwise well-lit roads. Don't know diddly about kangaroo behavior. μηδείς (talk) 00:51, 26 April 2013 (UTC)[reply]

One time when I was driving from Mataranka to Katherine (where Australia is a big, flat desert) shortly after sunrise, I saw two kangaroos (actually more like wallabies I guess) on the road over a kilometre up ahead. I was doing about 130km/h in a 4x4 Toyota Hilux, and the kangaroos saw me straight away. One hopped promptly off into the bush to the left side of the road, but the other hopped along the road towards me, then darted away again, never actually moving off the road. Then it stopped, and by this stage I was getting rather close. I was just about to ease off the accelerator to give a bit more time when it darted off into the bush on the right side of the road. I continued along without slowing until the kangaroo that originally ran straight off the left side of the road came back straight towards the driver's side tyre. Given the car's bull bar and high suspension, the roo went straight under the wheel. To us in the cab, it felt like a small bump in the road. In summary, roos get hit by cars because they're extremely stupid animals. You also see this when shooting at them (I wouldn't call it hunting, because there's pretty much no skill involved); they often hop towards where the shots are coming from and I've even seen them not react at all as their mob falls around them. 202.155.85.18 (talk) 01:10, 26 April 2013 (UTC)[reply]

Yes, all that is typical roo behavior. For those who don't know, a wallaby is a small variant of kangaroo. Wallabies/roos vary from 600 mm high to over 2 m (2 to 6 feet). They are all equally stupid. Hit a 2 m big red at highway speeds and you've just had a serious accident. Wickwack 124.178.174.155 (talk) 01:24, 26 April 2013 (UTC)[reply]
We all know what a wallaby is. Did IP 202 not beep his horn? μηδείς (talk) 01:25, 26 April 2013 (UTC)[reply]
Don't know if he did. But it would make no real difference. What roos do when startled is completely random. If you see a roo on or near the road, the only thing to do, is stop until it goes away. Wickwack 124.178.174.155 (talk) —Preceding undated comment added 01:29, 26 April 2013 (UTC)[reply]
My point is that beeping while the roo's on the road at a distance, and continuing to beep if it stays on the road, might work. It works somewhat for geese, which are the daytime hazard in the US N.E. Or are they as stupid as opossums? Oh, god, the idea of leaping opossums is indeed a scary one. μηδείς (talk) 01:38, 26 April 2013 (UTC)[reply]
I don't know how stupid opossums are. But, no, beeping as you suggest doesn't work. Roos don't understand what it means - it can only scare them, which provokes hopping at speed in a random direction. The best approach is as I said, stop until they go away. The idea is to try and avoid scaring them - if they are not startled and don't feel threatened, they MIGHT do something sensible like standing still or hopping away. This is not always possible, as when you are travelling at speed and a roo hops out from the bush at roadside, you may not have enough time. Emergency brake if there is any possibility of hitting one assuming it comes straight at you, and minimise the risk. Wickwack 124.182.174.187 (talk) 02:39, 26 April 2013 (UTC)[reply]
I didn't use my horn and never have with roos. It has never occurred to me before that it might work, and based on my experience with them I think Wickwack is probably right about it having no helpful effect. Also, a car travelling at 130km/h makes a fair bit of noise without honking the horn, especially compared to the cicadas and bird calls in the bush, so if noise would work, I think the car should do it on its own. Because of the size of the wallabies in the NT and the size of the car I owned at the time, I wasn't too concerned that they might damage the vehicle. The most dangerous thing I could have done would be to try and turn to evade the things, since that risks running off the road. Emus are also a bit of a hazard because they can try to "race" the car and cut in front of it. Often you see them slowing down as they approach the road ahead, then dart quickly across when you get close. In the NT, road trains can be up to 4 trailers long (>40 wheels, >50 metres long and up to 120 tonnes) and many drive through the night. With their huge size and heavy bull bars, they just mow down the wildlife that they frequently encounter on the highways, including kangaroos, dingos, emus and various feral animals. When you drive along the highway during the day you see the results littered all down the road. The smell is so bad it makes rest stops very unpleasant. 202.155.85.18 (talk) 03:45, 26 April 2013 (UTC)[reply]
I'd agree with "stop until they go away". I should have taken my own advice for the only time I hit a roo, back in 2007. I saw it quite clearly up ahead, munching on some grass on the side of the road. So I slowed right down as I approached it, hoping it would hop out of my way. It stayed put as I continued to approach very slowly. I actually came to a dead stop for about 10 seconds. It seemed like it was going nowhere fast, so I gingerly started up again. Just as I was almost past it, it suddenly jumped towards the other side of the road, right in front of my car. I had no time to stop or swerve, and we made contact. I lost a headlight and had some damage to the body and paint work. The roo was injured, but still able to limp away. It would have been far worse for the car, the roo and me, if I'd been moving faster.
I'd still take issue with Stu's premise, in that it very much depends where you are in Australia. We have this image of the "real" country people and the golden sunsets over unending plains of wheat and sheep and cattle etc, but the truth is that we're one of the most urbanised countries on Earth. Most people live in the state capitals, and would never see a kangaroo from cradle to grave except in zoos, or while touring in the countryside. Canberra is a little different, as it was designed to have elements of the bush intermingled with a cityscape, so there are definitely places there where roos can be an issue (the back road to the airport, for example, is notorious; and they occasionally gambol over the top of Parliament House, while the politicians gamble with our lives below). I lived there for 27 years and luckily never had a collision (with a roo, that is). But it took only a few months after moving to Gippsland for the inevitable to happen. -- Jack of Oz [Talk] 03:55, 26 April 2013 (UTC)[reply]
Well, looking at the climate maps of Australia, most of it is desert and grassland. However, it's also true that most of the people, and presumably roos, live in parts that actually have trees. So, most Australians might well think of Australia as a lush green land (except for a few opal miners in Coober Pedy). StuRat (talk) 04:15, 26 April 2013 (UTC)[reply]
Roos are essentially very shy creatures. Ones bred in captivity or tamed for touristic purposes are fine, but wild ones will shun humans. Ergo, a conurbation is not generally the place to expect to find a roo. -- Jack of Oz [Talk] 04:35, 26 April 2013 (UTC)[reply]
I used to live on the green patch you can see here. As you can see, there's plenty of other places for the roos to go, but they chose to stay in the camp and eat our nice green grass. They wouldn't stir if you walked past within a metre of them, but they did if you stopped walking near them. They're not really that shy, but since the major cities are always in areas with high rainfall, I guess keeping away from people is more of a viable prospect. 202.155.85.18 (talk) 04:49, 26 April 2013 (UTC)[reply]

Follow-up: What's the evolutionary reason roos sometimes run towards danger ? I can see it possibly being helpful for the pack, as a predator would be afraid to attack a pack that might slam into them at top speed and cause an injury. It also might be harder to attack a roo that's charging right at them. And while full-sized adult roos might not have many predators, a pack of dingos must be a potential threat to wallabies or to roo joeys. StuRat (talk) 04:23, 26 April 2013 (UTC)[reply]

I'm not 100% sure, but I think a large kangaroo charging at a dingo would probably succeed in getting away most of the time. Dingos hunt roos in pairs or larger groups and try to catch them alone or single them out. They try to herd the roos toward fences (before white fellas showed up, presumably toward rivers or whatever else was impassable - forget cliff faces, the roos would just jump), so simply refusing to be herded probably mucks up the dingos' plan. 202.155.85.18 (talk) 04:33, 26 April 2013 (UTC)[reply]
An adult red kangaroo can jump fair over a dingo; apparently they can jump 1.8m high [3], which I suspect is too high for the dingo to even jump and take hold. 202.155.85.18 (talk) 04:41, 26 April 2013 (UTC)[reply]
Now we've beaten kangaroos to death, does anyone, particularly eastern Victorians, want to talk about wombats and cars? HiLo48 (talk) 06:57, 26 April 2013 (UTC)[reply]
Squirrels have an interesting brain defect that results in many dying on roads. They seem to be indecisive. I will be driving up to one in the street; it will start to run to one side, change its mind, turn and run for the other side, then turn around again. StuRat (talk) 07:21, 26 April 2013 (UTC)[reply]
Wow, so kangaroos are stupid and squirrels have a brain defect that makes them indecisive - what a complete load of unscientific crap! Both these species have developed escape strategies that have served them well for hundreds of thousands of years, but unfortunately humans have used their intelligence to break free of evolution and have developed ways of killing, and even wiping out, other species faster than they can evolve into something with more suitable strategies for escape. Clearly the seemingly random escape patterns described above worked well for the situations these animals found themselves in before cars and guns were developed, but they aren't suited to the world we have created over the last couple of hundred years, as evolution doesn't work that fast. There are examples of species evolving because of our effect on the environment, such as the peppered moth becoming darker during the industrial revolution and lighter again as pollution lessens (see: Peppered moth evolution). Given enough time I would think that squirrels and kangaroos would evolve, by natural selection, into animals with different escape strategies, but it won't happen for a long time yet. Richerman (talk) 09:34, 26 April 2013 (UTC)[reply]
In both cases they seem to be strictly going on instinct, rather than assessing the situation and coming up with a strategy to fit the situation. This is not a good indication of intelligence. StuRat (talk) 09:38, 26 April 2013 (UTC)[reply]
I would suggest you read Anthropomorphism#In science. "Stupid" is a term we use for humans who lack intelligence or common sense - applying it to other species is meaningless - and squirrels do not all have a brain defect. Richerman (talk) 09:51, 26 April 2013 (UTC)[reply]
What point are you trying to make? -- Jack of Oz [Talk] 09:49, 26 April 2013 (UTC)[reply]
(ec) That answers on this page are supposed to be scientific. Richerman (talk) 09:55, 26 April 2013 (UTC)[reply]
He's suggesting that he is smarter than a roo. We won't know for certain until we throw a car at Stu and see how he reacts. Someguy1221 (talk) 10:02, 26 April 2013 (UTC) My comment makes the assumption that Jack is replying to Stu. With all the edit conflicts, I'm not certain who is talking to whom. Someguy1221 (talk) 10:03, 26 April 2013 (UTC)[reply]
Yes, I was asking Stu. But Richerman then added an extra colon to Stu’s post, which put the cat among the pigeons, sense-wise. -- Jack of Oz [Talk] 10:54, 26 April 2013 (UTC)[reply]
Oops! - now that was me being stupid...or clumsy...or both :-) Richerman (talk) 11:40, 26 April 2013 (UTC)[reply]
Stu asked what's the evolutionary reason why roos sometimes head toward danger. The key to it is that the direction they take is random as I said. I thought I had covered it, but here it is again:
While roos like to gather together, as they do if there is a spot with nice food, they spend a lot of time on their own. So instincts that serve well in a herd are not so good. Compare with goats, which are herd animals that use a crude but very rapid way of voting on what to do (see my 1st post above). By immediately running at top speed in a RANDOM direction, a roo does fine most of the time, because almost any direction not directly at a natural predator is a good direction. And no time is wasted trying to make a decision. And sometimes heading toward danger is not a bad idea either. I can tell you, from personal experience, that having a kangaroo going full tilt directly at you reliably provokes an immediate emergency reaction from a human: GET OUT OF THE WAY!
Having said that, if a group of roos is startled, they don't act like goats, sheep, and other herbivores and all run in the same direction. Roos don't waste time deciding direction. Roos in groups, on sensing danger, scatter and head off on all points of the compass. That's not a bad strategy. With natural predators, only one roo might be killed - the rest will get away. And, when food is plentiful, nothing as large as a roo can reproduce as fast and as effectively. That's why farmers pay people to shoot them and why they are shot for food - they reproduce so well (with hormonal and reproductive processes that are far more sophisticated than those of standard mammals) there's not the slightest risk of them becoming extinct.
A comment on whether or not roos are stupid: StuRat is right - an animal that relies on pre-programmed instincts is not an intelligent animal that can tailor a response to suit any new situation. If, like me, you have encountered quite a few roos on the road while driving, and encountered quite a few while bushwalking or rogaining, you can only come to one inescapable conclusion: roos ARE stupid. Very effectively evolved, but definitely stupid.
Wickwack 124.178.131.148 (talk) 12:20, 26 April 2013 (UTC)[reply]
Why do humans keep damaging their vehicles by banging into deer and kangaroos? They have good eyesight and are aided by lights at night, and they supposedly have high intelligence. One would have thought they would avoid such collisions instead of blithely driving along assuming everything will be okay. You can only come to one conclusion: humans ARE stupid. Dmcq (talk) 13:04, 26 April 2013 (UTC)[reply]
It's a matter of risk assessment. If you want to be 100% certain of never hitting a deer, you'd probably have to drive everywhere at 20 miles per hour. However, consider the personal cost: I recently drove 1,000 miles from Austin, Texas to Phoenix, Arizona - doing that at 20 miles per hour in complete deer-proof safety would require 50 very tedious hours, whereas if you drive at the speed limit you can do it in 14 hours...but the price for doing that is a statistical chance of hitting a deer.
Let's crunch some numbers:
  • 1.5 million deer are killed on US roads every year.
  • Americans drive around 3 trillion miles per year.
  • That's one deer impact for every 2 million miles driven (presumably most of those miles were driven at or near the speed limit).
  • If I drive slowly enough (let's say, 20mph) to be sure that I won't ever hit a deer - then I need to spend 100,000 hours driving those 2 million miles rather than 50,000 hours if I drive at 40mph or 25,000 hours if I push it up to 80mph.
  • Let's say that right now I drive at 40mph and accept the risk of deer impact; to reliably avoid a single deer impact by halving my speed to 20mph, I'd have to spend an extra 50,000 hours behind the wheel. Even if I drive 2 hours a day on average right now, at 40mph I only cover those 2 million miles - one expected deer impact - every 68 years of driving. I doubt I'll drive for that many years! Since there are an average of 1,400 human deaths each year from deer impacts (and 1.5 million such impacts), roughly one impact in 1,000 is fatal - so you're asking me to spend an extra 50,000 lifetime-hours behind the wheel to reduce my risk of death by deer by one part in 1,000 per impact avoided. It's simply not worth it. Doubling the amount of time I'm on the road doubles the chances of being killed by a drunk driver...that VASTLY outweighs the risks of deer impact.
Conclusion is that only an idiot drives slowly enough to reliably avoid hitting deer.
SteveBaker (talk) 14:48, 26 April 2013 (UTC)[reply]
Well put. And, as has been stated above, kangaroos will even collide with cars and put dents in them after the car has been brought to a stop. In nearly 50 years of driving I've hit about 5 to 6 roos. Do you want me to never go anywhere? Wickwack 124.178.131.148 (talk) 15:49, 26 April 2013 (UTC)[reply]
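As a cross-check on SteveBaker's arithmetic above, here is a minimal sketch in Python. The inputs (1.5 million collisions, 3 trillion miles, 1,400 deaths per year, and the 2-hours-a-day / 40 mph assumptions) are taken straight from his post; everything else is simple arithmetic on those figures, not independent data.

```python
# Rough recheck of the deer-collision arithmetic above (inputs from the post, not independent data).
collisions_per_year = 1.5e6      # deer hit on US roads per year (figure quoted above)
miles_per_year = 3e12            # total US vehicle-miles per year (figure quoted above)
deaths_per_year = 1400           # human deaths from deer impacts per year (figure quoted above)

miles_per_collision = miles_per_year / collisions_per_year
print(f"Miles driven per deer impact: {miles_per_collision:,.0f}")   # ~2,000,000

for mph in (20, 40, 80):
    hours = miles_per_collision / mph
    print(f"Hours to cover that distance at {mph} mph: {hours:,.0f}")

# Driving 2 hours/day at 40 mph, how long until you cover 2 million miles (one expected impact)?
hours_per_day = 2
years_per_expected_impact = miles_per_collision / 40 / hours_per_day / 365
print(f"Years per expected impact at 40 mph, 2 h/day: {years_per_expected_impact:.0f}")  # ~68

# Chance that any given impact is fatal to a human occupant
print(f"Fatalities per impact: {deaths_per_year / collisions_per_year:.4f}")  # ~0.001
```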
Guys, the notion of intelligent adaptation is simpler than that. See deer/roo/possum/turtle. Slow down and avoid impact. Help the poor critter out if you must. But then go on your merry way full throttle. The risk is minimized. Road engineers and maintenance crews help out too by improving road conditions. But road conditions are marginal at times and there are too many distracted and inattentive drivers that simply don't care what or who they hit. But perhaps in a few decades we will be smart enough to develop ways to do without paved roads and improve our lot. -Modocc (talk) 16:10, 26 April 2013 (UTC)[reply]
Exactly. Hopefully stupid inattentive drivers will be replaced by robots and this sort of thing will be a thing of the past. Dmcq (talk) 16:43, 26 April 2013 (UTC)[reply]
I don't know where Steve Baker got that 1.5 million deer collisions figure. In 2000, there were 247,000 deer-vehicle collisions in the US. Could the number have increased by a factor of 6 within one decade? To avoid hitting one you also don't have to drive at 20 miles per hour. There are deer whistles that make noise when attached to your vehicle and are supposed to drive deer away. Anyway, I'm sorry to have triggered a discussion about deer vs. car collisions when the question was clearly about kangaroos. OsmanRF34 (talk) 17:26, 26 April 2013 (UTC)[reply]
Well...
  1. This says that "According to the Insurance Institute for Highway Safety, a driver hits a deer an estimated 1.5 million times each year in the U.S.[1], up from 200,000 just 25 years ago"...so yeah - the number has gone *way* up.
  2. I'm intrigued that you referenced our deer whistles article in the same sentence where you suggest using them! I guess you didn't look where you linked! That article clearly states: "Scientific studies of these devices have indicated that they do not in fact reduce collisions."...so *NO*, do not under any circumstances get deer whistles! Our article points out: "Deer are highly unpredictable, skittish animals whose normal reaction to an unfamiliar sound is to stop, look and listen to determine if they are being threatened."...so far from scaring them off - these devices would more likely cause them to stop in their tracks in order to figure out where this weird noise is coming from!
  3. That same article mentions the "Roo Shoo" device that does the same thing to scare off kangaroos - but there is no indication of whether they work for that species. My bet would be "no"...but there doesn't seem to be much information about that.
Anyway, I'm sure the answer I gave earlier is equally relevant to kangaroo collisions - the underlying message is the same, though the numbers are clearly going to come out differently. SteveBaker (talk) 19:56, 26 April 2013 (UTC)[reply]
OK, on a second glance, the thing with the deer whistle seems about as effective as ultrasound against cockroaches or cholesterol-lowering food or anti-barking products or whatever other poorly thought-out product. On the other hand, your 1.5 million figure could be wrong. It comes from the Insurance Institute for Highway Safety, and it's from the year 2004. The Centers for Disease Control and Prevention, a federal agency, cites a figure of 247,000 accidents that "involved incidents in which the motor vehicle directly hit an animal on the roadway." Mostly, these animals were deer. The figure is from 2002, so I doubt they went up by that much in such a short time. OsmanRF34 (talk) 00:34, 27 April 2013 (UTC)[reply]
While I agree it's unlikely the figure changed so much from 2000 (or 2002) to 2004, I wouldn't conclude it's wrong. Ultimately, whether a federal agency or not, it's difficult to conclude their figure is inherently more reliable without a careful consideration of the methodology in comparison to the other methodology. From what I understand from the source [4], they are relying on police reports of accidents. I don't know much about the US, but are police reports required for every single accident even if they only involve one vehicle and without injury to any person in all states? Or for all insurance claims? If no to both, then one obvious possibility is that some of these minor accidents are going unreported. BTW while a minor point, I don't know where 2002 is coming from. The source and our article clearly says the estimate is for 2000. They provide some estimates of hospitalisations and fatalities for 2001-2002 as well as percentage of accidents which involved a deer in one state for 2002, but I didn't see any estimate of the number of accidents in 2002. Nil Einne (talk) 17:06, 27 April 2013 (UTC)[reply]
There's a couple of other points that haven't been mentioned yet that StuRat might be interested in. One is that there are a lot of areas where the only fresh grass shoots (known as "green pick") are on the road verges. I was told that this was caused by dew forming at night in the air above the roadway. This means that the kangaroos are more often feeding right beside the road than randomly distributed through the bush. Also, I am a strong believer in the theory that if they are not dazzled by the oncoming headlights, they will often jump into the only area where they can see the ground clearly, i.e. the roadway in front of your car, because it's lit up by your headlights. Certainly my observations from many years as a Wildlife Ranger are that when they are hopping along in front of you at night, they are much more likely to leave the road if you turn off your headlights. 122.108.189.192 (talk) 07:13, 27 April 2013 (UTC)[reply]
I agree that the only green grass, or the most abundant dry grass, is often found alongside a paved road, cycle path or footpath. This is unlikely to be related to dew forming in the air above the roadway. It is largely because whatever rain falls on a paved surface mostly runs off the surface on to the adjacent soil. Consequently, when rain falls in the vicinity, the soil alongside a paved surface receives many times the amount of water received elsewhere. Dolphin (t) 07:38, 27 April 2013 (UTC)[reply]

And now for a little light entertainment. For reasons that have nothing to do with this thread but which would take too long to explain, this afternoon I had cause to listen to a recording of the classic version of The Carnival of the Animals with verses by Ogden Nash spoken by Noël Coward. The section called "Kangaroos" is at 8:55 here, but the whole piece is well worth a listen for those who don't know it. To hear Noël Coward pronounce "kangaroo meringues" to rhyme with "boomerangs" is an experience like no other, really. (The video has a bet each way with the spelling; it's 'kangaroo' initially, then it deteriorates to 'kangeroo'.) -- Jack of Oz [Talk] 07:46, 27 April 2013 (UTC)[reply]

Probably no good, but it might work. Dingos make three kinds of noise. Short barks, which they make only once as a warning - the nature of this noise is such that to a kangaroo it's probably just another noise. Howls, to call other dingos - a sound that travels long distances in the bush/forest. Kangaroos hear howls routinely and take no notice unless one is very close; a howling dingo is a dingo on its own. At highway speeds a car making a howl noise will sound distant, but it might work. The third noise is a quiet "ahhhgggghhhhh" noise, useless for the purpose. There may be a serious technical problem in reproducing a howl such that a roo will think "dingo". It is a mistake to think animals hear sounds as we do. What is an obvious difference to a human may be very subtle to an animal and vice versa, and other unknown factors may apply. I had a dog that would come when I called his name with total reliability. I tried an experiment a few times with a good quality microphone, an amplifier, and a very good hifi loudspeaker. I called his name over the speaker positioned some distance away. I expected the dog to come to me, or maybe look at the speaker. He did absolutely nothing - totally ignored it. Wickwack 121.221.28.17 (talk) 00:57, 29 April 2013 (UTC)[reply]

Why can't LCDs be made capable of displaying interlaced video?

As the title says - why can't LCDs be made capable of displaying interlaced video, when the progressive scan used on modern LCDs produces far inferior video quality to that of interlaced video? Whoop whoop pull up Bitching Betty | Averted crashes 23:29, 25 April 2013 (UTC)[reply]

The design of LCD's does not lend itself to interlaced scanning, but you wouldn't want it anyway.
Interlaced scanning was done with CRT displays because the response of the screen to the scanning electron beam is very fast, but the energy required to force enough current through the scanning coils for a high scanning speed, and the cost of achieving the high video bandwidth required for a non-interlaced display, were too great. LCD's have a slow response, so they can be scanned at a lower speed without flicker.
Interlaced scanning is not superior in quality to non-interlaced - it is a compromise that was acceptable for low-definition television. You were misinformed. In systems where there is not much movement, and the user sits close, as with computer displays, interlaced scanning causes an annoying line-crawling effect. While analog television always used interlaced scan, only the earliest CRT computer displays used it, as it was not really acceptable. For computer displays, the cost of high-energy wideband non-interlaced scanning had to be accepted.
Keit 120.145.177.181 (talk) 00:45, 26 April 2013 (UTC)[reply]
That answer is incorrect in almost every regard!
  • LCD's can (and are) easily used to display interlaced images.
  • You might (and indeed do) want it when bandwidth is limited.
  • Interlaced scanning was done on CRT's to save bandwidth and for no other reason. The BBC broadcast non-interlaced television using the Baird system in 1936 - and the few people who watched it evidently used non-interlaced CRT's. Non-interlaced CRT's were made in the billions as computer monitors - they work just fine.
  • LCD's can be made with all sorts of response rates - managing 60Hz is relatively easy - so can CRT's. Indeed "Storage tube" CRT's could hold an image on the phosphor more or less indefinitely without re-scanning.
  • Interlaced scanning DOES produce superior video quality when compared to non-interlaced in an environment such as analog broadcast TV or limited capacity videotapes and such where bandwidth is at a premium.
SteveBaker (talk) 15:34, 26 April 2013 (UTC)[reply]

When video editing in the digital age, little is more of a pain in the butt than interlaced video. — Preceding unsigned comment added by 217.158.236.14 (talk) 09:57, 26 April 2013 (UTC)[reply]

It's a slightly more nuanced argument than that. Using non-interlaced rendering at 60 frames per second produces much better video than interlaced frames at the same display resolution. HOWEVER: If you have limited amounts of bandwidth, it is much better to use that bandwidth to deliver interlaced images than to halve the resolution to send a non-interlaced video stream.
It's a bit less clear-cut than that...for very dense information (text, graphics), the flicker you get with thin horizontal lines in interlaced systems can be very annoying...and for fast-action motion, the interlacing can be objectionable...but for most image content, interlacing at higher resolution produces a far superior picture than lower resolution non-interlaced video.
So for bandwidth-limited systems, interlacing is a good thing for picture quality...which shouldn't be a surprise - the guys who invented broadcast television weren't idiots. They thought very carefully about things like that.
What changes the game isn't LCD versus CRT. It's fancier video compression systems made possible with computers doing the work of decoding the video stream. With compression standards like MPEG, there is nothing much to be gained in terms of bandwidth or resolution in using interlaced images. With video being transmitted over the Internet, via Satellite or Cable TV, and via Digital broadcast TV, we have pre-compressed images that use even less bandwidth than interlacing saves us.
This really has nothing to do with LCD versus LED versus Plasma versus Cathode-ray tubes. You can display non-interlaced images on a cathode-ray tube - or interlaced pictures on LCD/LED/Plasma screens. Computer displays were non-interlaced even when we were still using CRT displays - and analog broadcast TV is still interlaced even on fancy flat-panel LCD/LED/Plasma displays.
So it's all about compression and bandwidth - it has nothing whatever to do with the display technology.
SteveBaker (talk) 14:22, 26 April 2013 (UTC)[reply]
MPEG isn't a compression standard, it's a group of experts. — Preceding unsigned comment added by 217.158.236.14 (talk) 14:47, 26 April 2013 (UTC)[reply]
That's the most overly pedantic statement I've heard on the ref desks in many weeks! Sure, MPEG is a committee which produces compression standards, numbered MPEG-1, MPEG-2, and so forth. These standards are collectively known as "The MPEG video compression standards"...which (being a bit of a mouthful) is universally reduced to "MPEG" by all practitioners of the video transmission arts. Pointing out this completely trivial distinction may be a great ego-inflating thing for you - but it doesn't help in answering this question in the slightest degree. Please learn when a clarification is needed - and when it's a muddification - as it clearly is here. (Since he who lives by the pedantry should also die by it: "MPEG" is only the informal name for the group - it is properly called "ISO/IEC JTC1/SC29 WG11" or "Coding of moving pictures and audio, ISO/IEC Joint Technical Committee 1, Subcommittee 29, Working Group 11.") SteveBaker (talk) 15:13, 26 April 2013 (UTC)[reply]
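Leaving the MPEG naming aside, the bandwidth trade-off SteveBaker describes above can be made concrete with a small sketch. The frame size and refresh rate below are illustrative PAL-like values (assumptions, not figures from this thread): at the same frame size and the same refresh rate, an interlaced signal carries only half the lines per refresh, so its raw (uncompressed) pixel rate is half that of the progressive version - which is exactly the saving that mattered before digital compression.

```python
# Illustrative comparison of raw (uncompressed) pixel rates; the numbers are assumptions, not from the thread.
def pixel_rate(width, lines_per_refresh, refreshes_per_sec):
    """Pixels transmitted per second."""
    return width * lines_per_refresh * refreshes_per_sec

width, height = 720, 576        # PAL-like frame size (illustrative)
refresh = 50                    # fields (interlaced) or frames (progressive) per second

progressive = pixel_rate(width, height, refresh)        # every line sent on every refresh
interlaced  = pixel_rate(width, height // 2, refresh)   # odd lines, then even lines, alternating

print(f"576p50 raw pixel rate: {progressive / 1e6:.1f} Mpixel/s")
print(f"576i50 raw pixel rate: {interlaced / 1e6:.1f} Mpixel/s")   # half of the progressive figure
```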
It has almost EVERYTHING to do with display technology, and a fair bit to do with viewing distance. Sorry SteveBaker, but you do not know what you are talking about.
Firstly, as I said before, CRT phosphors respond to the scanning electron beam very fast - much faster than the eye responds to light. LCD's respond very slowly. For a field rate (the rate at which the picture is updated) fast enough for motion picture display, you'll see more flicker with a CRT than you will with an LCD. So for a CRT system, the bandwidth, which is directly proportional to field rate, must be high enough to minimise flicker. Interlacing is a method of tricking the eye by replacing the flicker with a line-crawling effect, which is acceptable if you sit far enough away that you can't make out the lines - which is what you normally do when watching TV. People sit close enough to computer screens to see the line-crawling effect. The flickering with a low field rate does not occur with LCD's, as the pixels are slow to respond - slower than the human eye.
Secondly, when TV was developed in the late 1930's, the cost of receivers had to be minimised. Tube technology was expensive - a typical TV set in 1939 was a very different proposition compared to today's solid state sets. It was like purchasing a car - husbands discussed it with wives, saved up for a deposit, and bought the thing on hire-purchase, to be paid off over several years. Even in the 1970's, a (colour) TV set cost me over $800, compared to my annual salary then as a professional engineer of only ~$4,000. What affected the cost was the scanning rate, as scanning a CRT consumes considerable energy directly proportional to the scanning rate, due to use of magnetic scanning coils. Also, with tube (valves to the Brits, not to be confused with the picture tube) circuitry, one single tube/valve could easily provide the ~5 MHz bandwidth required for an interlaced picture, but to provide the 10 MHz bandwidth for a non-interlaced picture would have required several tubes or a very large tube, either way a lot more power. All this means that a non-interlaced tube and CRT TV set would have cost a lot more.
With computers, the interlaced display was not acceptable, due to users sitting close, so, except for the very first computer CRT displays, non-interlaced displays have always been used. When the first PC's and displays with a resolution good enough for serious graphics and CAD work became widely available in about 1984, they were EXPENSIVE. In 1984 I bought a 17 inch colour CRT display of 640 x 480 resolution, about the same resolution as a good TV set, for about $900. 21 inch colour TV's were by then only half that. The cost penalty of complexity and energy requirements does not apply with today's solid state electronics and LCD displays because they do not have magnetic scanning, and because it costs much the same to make a video chip with 1,000 transistors as it does to make one with 10. The cost is instead determined weakly by the resolution, and is so low it really doesn't matter. A few months ago I bought an LCD TV, 20 inch, full HD, for $149.99. The salary for a professional engineer of the same level as I was in the 1970's is now around $140,000 a year.
So, in the 1970's, a (CRT) TV set cost ~2.5 months' wages, but today's TV sets of about the same screen size (LCD), with a far better picture, cost a bit less than half a day's wages. That's why it was important, when CRTs were used, to minimise the cost, and interlacing was the way to do it.
Keit 124.178.131.148 (talk) 15:22, 26 April 2013 (UTC)[reply]
I strongly disagree - and before you impugn my qualifications, I worked as a researcher in this field at Philips Research Labs in the displays and television group between 1977 and the mid 1980's. I even hold a patent for a novel TV scanning system...and I've been a researcher in the field of computer graphics ever since then.
So: You can choose fast or slow phosphors for a CRT to get almost any reasonable response rate you want. In NTSC, each pixel gets lit up 30 times a second in interlaced rendering and 60 times in non-interlaced - so you actually need a somewhat longer phosphor response time to do good interlacing with less flicker...which is the opposite of what you're asserting! There are PERFECTLY GOOD non-interlaced CRT's - and PERFECTLY GOOD analog TV's that use LCD, LED and Plasma technologies. I own at least two of each of those things!
The flicker due to interlacing is most objectionable in computer displays because the data is produced digitally with abrupt brightness changes between consecutive lines. Analog TV camera and film scanners produce soft transitions from one scanline to the next - so when viewing natural scenes, you don't have terrible line-crawl or interlace flicker.
Computer screens were always more expensive than TV's but that's down to economy of scale - this was a big bone of contention in the Philips research group where we were concerned with promoting home computers (my team invented the CD-ROM) and we didn't want to get stuck with connecting them to interlaced TV's. Back then, TV's sold in the billions - and computer screens in hundreds of thousands...certainly not in the millions. It was only with the arrival of the cheap PC and home computers that the market for non-interlaced screens got big enough to drop the prices. LCD and LED prices have been almost the same between TV's and computer screens because these days they sell in comparable quantities. SteveBaker (talk) 16:00, 26 April 2013 (UTC)[reply]
Well, if you were an expert display technology researcher at Philips, then I am Bernard Tellegen come back from the grave (look him up in Wikipedia!). Increased production volume reduces unit cost - but there is a definite law of diminishing returns. The 640 x 480 NEC monitor I bought was made in volumes exceeding those of many TV models. The TV prices I quoted for comparison are for TV's that were made for use in Australia only, but the NEC monitor was for the world market. Both TV's and monitors came in a multitude of brands, but under the hood there has always been a limited number of component manufacturers, so the production volume is effectively higher.
The less frequent line refresh rate with interlacing, which causes line crawl, is only important if you sit close enough to (just) make out the individual lines, which you do with computer monitors, but not when watching TV - how many times must I say this obvious fact? I haven't got it the wrong way round - you have. Yes, a manufacturer can select a phosphor with a range of persistence times. They do just that with radar displays and oscilloscopes. However the low cost but good colour and brightness phosphors used have a fast response. The decay in light output is gradual and somewhat inverse exponential, so a phosphor that is slow enough to eliminate flicker tends to cause "comet trailing" on moving objects depicted. LCD decay, once it starts, is more abrupt, although comet trailing was a problem until the technology was mastered.
If you worked at Philips you should be aware, for example, that the Philips 19AX, 20AX, and later variants of colour TV display component systems were used by dozens of set makers. I also have CRT and LCD TV's and monitors of good quality - but what does that prove? Nothing. Both will do the job if enough money is spent. If you think that minimising cost was not the object when TV standards were developed, go research the then quantity cost of a 6CM5 (a power vacuum tube/valve most commonly used for TV line output until solid state took over, considered a marvel of modern tech in its day, and only just capable of handling the power required to scan a 625 line picture on a 21 inch 110 degree TV set) - you'll find it was more, in real terms, than an entire LCD set costs retail today. Then get a good 1950's engineering textbook on TV technology from the library - you'll find it's all about compromises to keep the darn cost down. You certainly would not, back then, want to almost double the power dissipation and cost by using non-interlaced scan when TV's were the third biggest cost to the average family (after purchase of a home, and purchase of a car). What was your patent title? Keit 121.215.9.85 (talk) 16:33, 26 April 2013 (UTC)[reply]
I agree with Keit. I worked for a TV repair company while going through college. It was pretty obvious that interlaced scan was selected when TV's were tube and CRT-based, as the cost of selecting non-interlaced scan would have made TV sets so expensive that they would not have been viable as a mass consumer product back then. It isn't just that the line output tube (6CM5 or 6DQ6B and European equivalents) would need to handle double the power; the video circuits would also be double power and the set power supply would need to be almost double in size. Steve Baker claims that sharp edges do not occur in natural scenes in television. What utter poppycock! If edges going from full black to full white or vice versa from one line to the next (or from one pixel equivalent to the next), such as the edge of a dark suit against a light coloured background, did not occur frequently in television, then the number of lines required, and the video bandwidth, could have been considerably reduced. When TV started in the USA, 525 lines was chosen as a compromise between an acceptably sharp picture and not too much cost. When TV started in European countries and Australia, 625 lines was chosen, as technology had advanced a bit, American experience could be drawn upon, and thus a slightly better cost vs picture quality trade-off was viable. The UK, which started broadcast TV earlier than everyone else, chose 405 lines in order to get the cost really down so folk in a poor economy could stretch themselves to buy sets. The 405 line picture was never really acceptable though, especially in colour, and the UK scrapped it and went to 625 lines in the early 1960's, rendering millions of sets obsolete at considerable cost. One suspects that if Steve was actually a researcher in Philips' Display and Television Group, then from the misinformed nonsense he wrote, he was probably in cabinet design or something. Ratbone 121.215.33.216 (talk) —Preceding undated comment added 02:15, 27 April 2013 (UTC)[reply]
In the UK at least, 405 lines never existed in colour. Transmission on UHF/625 lines started around 1964, with the launch of the third channel BBC2, and BBC1 and ITV were transmitted in both UHF/625 and VHF/405 for quite some years, all in black and white. Colour was introduced a few years later only on UHF/625. "Dual standard" sets were sold during the 60s so those didn't become obsolete unless you wanted to switch to colour. Not really relevant to the question but to the detail above. Sussexonian (talk) 10:16, 27 April 2013 (UTC)[reply]
405 line colour was actually seriously proposed though, in order to allow existing sets to remain in use. The BBC spent vast sums of money adapting the NTSC method to 405 lines, and built a complete line of colour cameras, studio processing equipment, studio monitors, and 405 line colour receivers. When they got it all going end to end, and could test viewer reaction to pilot shows, it was very evident that, while 405 lines gave an almost acceptable black and white picture, it was never going to be acceptable in colour. It was this large scale, expensive experiment by the BBC that was a major factor in the British decision to scrap 405 lines and go to 625 lines. Other factors were the economics of making set components for the world market, a (mistaken) belief that with 625 lines British firms could help the economy by exporting complete sets, and the severe restrictions on artistic camera shot direction due to the low resolution of 405 lines, which reduced the export value of British TV shows (back then a lot of shows were shot on 35 mm film, facilitating showing in 625 lines in other countries, and because shooting on film was more tolerant of lighting, but artistic direction still had to allow for showing in 405 lines at home). The BBC work on 405 line NTSC colour was extensively documented in contemporary professional engineering journals and in the electronics magazine Wireless World. The published information on the BBC experiment makes fascinating reading.
You are misinformed about UHF broadcasting in the UK. Because the VHF channels were taken up by 405 line transmission, 625 line had to be started on UHF, an unfortunate circumstance as more transmitters are required for coverage in hilly areas and in cities, and UHF tuners are more expensive to make. The major reason to change to 625 lines was to enable a good colour picture, as I said. 625 line PAL BBC broadcasts, using colour studio equipment, links, and colour transmitters right from the word go, started on 1 July 1967. However, not all shows were available immediately in colour, even though some that were exported had been shot on colour film for years. Also, colour sets were for a few years very, very expensive, and prone to poor picture and trouble due to lack of experience in the trade. So for several years many consumers continued to buy black and white sets, albeit 625 line UHF capable. Many folk in the USA continued to buy b&w sets for years after their earlier colour introduction for similar reasons.
Ratbone 58.169.243.248 (talk) 10:50, 27 April 2013 (UTC)[reply]

April 26

Experimental results following theory rather than vice versa

I don't have much for you to go on, but I'm hoping my description of this will jar someone's memory.

I read a chapter in a book that was written in English at least a decade ago by a theoretical physicist, if I remember correctly Japanese. It was a cautionary tale about how our expectations can color our science. A parameter in particle physics (I forget which) had a possible value from 0 to 1. At first theorists expected zero, and experimentalists got results compatible with that. However, theoretical expectations of the value gradually increased until they reached one, and the experimental results were always compatible with theory. Each result agreed within the margin of error with that of the previous team of researchers and with theoretical expectations, but the results at the end were incompatible with where they started. This pathological relationship was the opposite of what one would hope for in science, and the chapter was an instructive illustration that the hard sciences are still human and not necessarily objective accounts of the world.

Does anyone recognize where this came from? I only ever had a copy of the one chapter. — kwami (talk) 01:32, 26 April 2013 (UTC)[reply]

This sounds very similar to Richard Feynman's account of Millikan's experiment as an example of psychological effects in scientific methodology. (However, not Japanese, not a decade ago, and not between 0 and 1 :) ) --Dr Dima (talk) 01:58, 26 April 2013 (UTC)[reply]
Yes, that's another good example. In this case I think there was a graph plotting both theoretical expectations and experimental results over time; you could see that the error bars always overlapped with each other and with expectations. I think that would make it somewhat more accessible than Feynman, even though the parameter was more obscure, though I don't remember how technical the text was. And it may have been two decades ago (I don't recall the publication date), though I think it was more recent than 1974. — kwami (talk) 02:06, 26 April 2013 (UTC)[reply]
The only parameter I can think of that was thought to be 0 and turned out to be 1 was the magnitude of the parity violation in the weak interaction. But the experimental history doesn't match what you described.
"A book in English by a Japanese theoretical physicist" should narrow it down quite a bit already. Was it any of these people? -- BenRG 16:32, 26 April 2013 (UTC)
This reminds me of when I was an undergraduate at uni. We were required to do a great number of experiments in the lab - always with the same process: before coming to the lab, master the theory and pre-calculate the expected results. On arrival at the lab, set up the equipment and run the experiment. More often than not, the results did not agree with the pre-calculation. Must be something wrong, so we would all then check the apparatus looking for mistakes in assembling and connecting up, maybe bang and wriggle a few connectors in case of bad electrical instrument connections. Of course doing so could just as well make a good connection bad. Run the experiment again. Repeat as necessary until the results agreed with theory. Overall, a good way to never discover a valid anomaly in the theory. Better than having no idea what to expect though. Wickwack 124.182.174.187 (talk) 02:55, 26 April 2013 (UTC)[reply]
Except that you aren't in undergraduate lab to test a theory. The point is a) to learn techniques for working with equipment and methodology and b) to physically demonstrate a well-established concept. Those are both valid educational goals. If you don't have those two concepts down, you aren't prepared to make novel discoveries. --Jayron32 04:00, 26 April 2013 (UTC)[reply]
True. I wonder, however, if that sets folk up for life to just retain that habit. My education stopped after graduating at bachelor level, so I don't know if they teach higher degree students to be more careful in lab work - but I suspect not. Wickwack 124.182.174.187 (talk) 10:41, 26 April 2013 (UTC)[reply]
I heard a story (2nd hand) about a 1st year bio lab where the students were counting the yellow and black corn kernels on cobs of corn to verify Mendelian genetics (not being a biologist myself I could have this wrong). The tutors knew from a cheat sheet what the correct ratio of black kernels to yellow kernels should be, and the students inevitably got the ratio wrong every time for years and years before someone finally noticed that whoever wrote the sheet didn't understand the principles of inheritance themselves and had made a logical error that resulted in the wrong expected ratio. Turns out most of the kids were right after all, had been marked down, and sent away convinced that they had to relearn something they had actually already mastered. 202.155.85.18 (talk) 04:28, 26 April 2013 (UTC)[reply]
Instructors being idiots notwithstanding, there is still valid educational value in demonstrating known principles. You can't extrapolate that these type of labs aren't valid from a stupid instructor. --Jayron32 04:34, 26 April 2013 (UTC)[reply]
It doesn't sound like the uni I went to anyway. As students we did occasionally (not very often) get answers different to what teaching staff expected. In any class you get three sorts of student: 1) those who really don't know the subject but may be able to regurgitate; 2) those who when marked down just accept it; and 3) those who have mastered the topic, are confident, and will challenge the lecturer/professor and show where he/she got it wrong. What impressed me at the time was that teaching staff were quite happy to be challenged, in fact they welcomed it, and would acknowledge they had it wrong. Quite different to school teachers. Wickwack 124.182.174.187 (talk) 10:52, 26 April 2013 (UTC)[reply]
Not all school teachers are afraid of challenges or refuse to admit when they are wrong, and some university lecturers get very huffy when challenged! Dbfirs 11:34, 26 April 2013 (UTC)[reply]
Anyone who goes into teaching without expecting to learn just as much as they teach, if not more, has no idea of what teaching is about. But if that's the model they got from their "teachers" or "parents", it's probably no wonder. -- Jack of Oz [Talk] 12:36, 26 April 2013 (UTC) [reply]
I had a friend in college who was training to become a math teacher. He painstakingly 'loaded' about 30 dice by drilling out the middle spot of the '5' and inserting a piece of lead shot into the hole. He gave these to the kids in his class without telling them that they were rigged. So when they did the tedious statistics experiment of graphing the number of 1's, 2's, 3's, 4's, 5's and 6's over dozens of rolls, the answers would show a disproportionately large number of 2's and far too few 5's. He wanted the kids to get results that differed from theory. They were being challenged to actually discover something non-obvious by doing their experiment - and to actively dispute the theoretical results that he'd previously presented in class. As I recall, he said that quite a few of the smarter kids deduced that the dice were loaded - and even figured out how he'd done it - but too many others either fudged the results or were un-curious about the anomalous results. SteveBaker (talk) 14:07, 26 April 2013 (UTC)[reply]
This. ^^^^ --Jayron32 14:58, 26 April 2013 (UTC)[reply]
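A quick simulation along the lines of that classroom experiment is easy to write. The sketch below is hypothetical: the face weights (making the '2' opposite the drilled-out '5' come up more often) and the number of rolls are invented for illustration, not taken from the anecdote.

```python
import random
from collections import Counter

# Simulate a die loaded so that '2' (opposite the drilled-out '5') comes up more often.
# The weights are invented for illustration; a fair face would come up with probability 1/6.
weights = {1: 1.0, 2: 1.5, 3: 1.0, 4: 1.0, 5: 0.5, 6: 1.0}
faces = list(weights)
probs = [weights[f] for f in faces]

rolls = random.choices(faces, weights=probs, k=600)
counts = Counter(rolls)
for face in sorted(counts):
    print(face, counts[face])   # expect roughly 150 twos and only about 50 fives out of 600 rolls
```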

So, no idea who the author was? — kwami (talk) 19:10, 26 April 2013 (UTC)[reply]

Quoting from Hyperphysics

I suppose many here are familiar with this good academic physics site. I'm trying to find out whether the material contained therein, including figures, is free to cite and use within the framework of writing a paper. My previous appeals to addresses found there haven't been answered as yet. Thank you, BentzyCo (talk) 04:24, 26 April 2013 (UTC)[reply]

I suppose that it depends what sort of paper you are writing. Citing small extracts is standard practice as long as the source is acknowledged, but, quoting from the site: "HyperPhysics (©C.R. Nave, 2012) is a continually developing base of instructional material in physics. It is not freeware or shareware. It must not be copied or mirrored without authorization. The author is open to proposals for its use for non-profit instructional purposes. The overall intent has been to develop a wide ranging exploration environment which could be of use to students and teachers." I suggest that you wait for a reply from the author if you want to use a significant proportion of the material. Dbfirs 08:34, 26 April 2013 (UTC)[reply]
a. The intention is a scientific-educational paper, to be published in a couple of periodicals, e.g., 'The Physics Teacher'.
b. I've already addressed Dr. Nave, more than a month ago, but nada - no reply. What do you suggest ? BentzyCo (talk) 09:04, 26 April 2013 (UTC)[reply]
No one can stop you from citing them. But regarding reuse, there is such a thing as fair use, which allows you to republish content without permission, but whether that applies in your case is a legal question we can't provide an answer to. As far as Dr. Nave failing to return your correspondence, well, that's just tough. Copyright holders have no obligation to do anything in response to reuse requests. Someguy1221 (talk) 09:12, 26 April 2013 (UTC)[reply]
It's the fact that your paper is going to be published in periodicals that constitutes the problem. If Dr Nave sees exact copies of his diagrams published elsewhere, then he might well have cause to sue for breach of copyright (whether he would do so is another question, and we don't give legal advice). If your publication is not for profit, and you are unable to get permission to reproduce the content, then either make your own diagrams and re-word the text, or acknowledge the source in your paper, then, at least, you cannot be accused of plagiarism. If your publications are for educational purposes then you have a better chance of getting permission to reproduce content (per the cited statement above). If your publication is a purely commercial venture, then don't breach copyright. It could be expensive! Dbfirs 11:23, 26 April 2013 (UTC)[reply]
Basically, you do need permission to copy large chunks of this stuff - and if Dr. Nave chooses not to give you that permission - then you can't legally do it. If he doesn't reply - then he didn't give that permission. You might be able to copy some of this material under the "fair use" provisions of copyright law - but we aren't allowed to give you the legal advice to tell you the limits on that. It's particularly problematic because "fair use" isn't a "bright line" rule - it's a matter of balancing pros and cons - and ultimately, unless you have explicit permission, only a court can tell you with absolute certainty whether you copied fairly or illegally. SteveBaker (talk) 13:48, 26 April 2013 (UTC)[reply]

physics

What is the difference between an electronic filter and an electronic circuit? — Preceding unsigned comment added by Titunsam (talkcontribs) 06:39, 26 April 2013 (UTC)[reply]

An electronic filter is one type of electronic circuit. A calculator is an example of an electronic circuit which is not an electronic filter. StuRat (talk) 07:24, 26 April 2013 (UTC)[reply]
Completing the previous answer - while "circuit" refers to the structure, "filter" refers to its function. BentzyCo (talk) 09:07, 26 April 2013 (UTC)[reply]
A filter has the specific function of changing one kind of electrical signal into another - typically by reducing the amount of power in some range of frequencies. A good example of that would be the bass and treble controls on a music replay system...when you turn down the treble control, some of the high frequencies in the music are reduced in volume...when you turn down the bass control, some of the low frequencies are reduced.
You can think of it like a coffee filter - which removes the large particles of coffee grounds while allowing the microscopic particles to flow through it...a "bass" audio filter removes the "large" wavelengths while letting the "small" wavelengths flow through it.
An electronic filter can be a very simple circuit - or it may be something complicated and sophisticated - but it is (by definition) an electronic circuit.
Electronic circuits can be almost anything with wires and electronic components like resistors, capacitors, coils, diodes and transistors. The computer you're using to read this text is a gigantic, almost unimaginably complex, electronic circuit - and it almost certainly contains several electronic filters within it. But it's easy to imagine simple electronic circuits that don't contain filters.
SteveBaker (talk) 13:34, 26 April 2013 (UTC)[reply]
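To make the bass-control analogy above concrete, here is a minimal sketch of a first-order low-pass filter, the digital counterpart of a simple resistor-capacitor (RC) circuit. The cutoff frequency, sample rate and test signal are made-up example values; they are only there to show low frequencies passing through while high frequencies are attenuated.

```python
import math

# Minimal first-order low-pass filter (the digital equivalent of a simple RC circuit).
# Cutoff frequency and sample rate below are made-up example values.
def low_pass(samples, cutoff_hz, sample_rate_hz):
    """Attenuate frequency components above cutoff_hz; pass lower ones largely unchanged."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)   # each output sample moves a fraction of the way toward the input
        out.append(y)
    return out

# Example: a 100 Hz tone passes largely intact, a 5 kHz tone is strongly attenuated (cutoff at 500 Hz).
rate = 44100
t = [n / rate for n in range(2000)]
signal = [math.sin(2 * math.pi * 100 * x) + math.sin(2 * math.pi * 5000 * x) for x in t]
filtered = low_pass(signal, cutoff_hz=500, sample_rate_hz=rate)
```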

Gym trainers

What's the difference between a personal trainer and fitness consultant in gyms? Most gyms have both. Clover345 (talk) 11:19, 26 April 2013 (UTC)[reply]

A personal trainer is for ill disciplined people who make a mess of their lives and need help to sort them out. Count Iblis (talk) 12:15, 26 April 2013 (UTC)[reply]
That statement would require a reference, wouldn't it now. -- Jack of Oz [Talk] 12:32, 26 April 2013 (UTC)[reply]
In my experience, these internal job descriptions can mean almost anything. What is the difference between a secretary and an office manager? Both are the same, and have different functions at different companies. OsmanRF34 (talk) 17:55, 26 April 2013 (UTC)[reply]

frogs eggs in lawn

I live in the fenland area of East Anglia, UK. I was mowing my lawn a couple of days ago when a frog jumped out of a clump in my grass. It hopped to the edge of the lawn and hid behind a compost bin. I left out a shallow dish of water, as there is no pond in my garden, and later found it basking in the dish. Each day I have been going out to see if it is still there, and I have discovered it has been laying eggs in bald patches of my lawn. I now need to recut my lawn, but I do not want to destroy the eggs or harm the frog. How can these eggs survive if not in water, and how should I treat them? 90.201.129.169 (talk) 11:32, 26 April 2013 (UTC)[reply]

What makes you think those are frog eggs? Most types of frog, including the common frog of Europe, lay their eggs in water, where they hatch into tadpoles. Is there something special about this frog? Looie496 (talk) 15:23, 26 April 2013 (UTC)[reply]
Place flags or stakes where you find the patches and mow around them. If this is a species of direct-developing frog that lays eggs on land, they won't take more than a week or two to develop. You can water them with a watering can if you are afraid they will dry out. If you try picking them up you are likely to disrupt them. It's curious what species this might be; I thought direct-developing frogs were all tropical. μηδείς (talk) 15:28, 26 April 2013 (UTC)[reply]

Eternal and infinite time-continuum (Universe/Multiverse/Meta Universe)

In this section I would like not to have a discussion about God or extra-terrestrial life, if possible.

The question is: Are there any reliable sources that propose a hypothesis that our time-continuum (Universe/Multiverse/Meta Universe) is eternal in time and infinite in volume/space?

For the purpose of this question, the Universe I have in mind is not just what developed from the Big Bang and exists in its current form, but extends beyond the Big Bang, to what was before, to the earliest possible event or starting point, or back through eternity. By a Multiverse, I don't mean one of the specific definitions such as a collective name for Universes that are far away from each other and might not be connected. By all 4 names I mean an all-encompassing definition of everything there is, and it's not of much importance what name the reliable source uses for it. Ryanspir (talk) 11:37, 26 April 2013 (UTC)[reply]

Conformal cyclic cosmology, for example. Or Eternal inflation. In fact there are lots of theories, but distinguishing between "very big" and "infinite" is never going to be fully possible if it is infinite; it will just be a matter of which theory or theories seem to fit the currently known facts best. Dmcq (talk) 11:44, 26 April 2013 (UTC)[reply]
There are plenty of theories - either way - but we don't know for sure, and worse still, there are reasons to believe that we may never know.
  • Time: The Big Bang is "known science" - by far the most widely accepted explanation for the beginning of the universe and confirmed by experimental evidence such as careful measurement of the cosmic background radiation. Because the big bang started with a "singularity" (an infinitely small dot), no information from "before" the singularity can ever be passed through to the time period after it. We cannot experimentally measure anything about the "before"...and indeed, a strong possibility is that there was no "before". More importantly, this origination of absolutely everything through this singularity means that even if there was a "before", it cannot possibly have any influence on the "after"...so in a physics sense, it doesn't matter whether there was a "before" or not.
  • Space: Again, we don't know whether the universe is infinite or not. The problem is that the speed of light limits the rate at which information can travel across the universe and the rate of expansion of space at larger distances exceeds that. This means that we are at the center of a sphere around 92 billion light years in diameter. Everything outside of that spherical volume of space is forever unknown to us. This means that we cannot directly measure whether the universe is infinite or not - we can only know that it's larger than 92 billion light years.
So from a practical standpoint, the amount of time in the past is finite and the useful, measurable, knowable, "observable" universe is most definitely finite. But from a theoretical standpoint, time in the past may or may not be infinite and the size of the universe may or may not be infinite - and in neither case can we observe anything to prove that.
That leaves one other matter - time extending into the future. Time within the universe may well be finite in the past - but is it finite into the future too? Well, as far as I know, the current lead theories all suggest that time will be infinite into the future - but again, we don't know for sure. However, we can (in principle) figure that out and know for sure at some time in the future.
It's a frustrating answer - we are irrevocably constrained to the finite - and fundamentally prevented from seeing whether there is a greater infinite. The best we can hope is to use evidence from what we can see to extrapolate out to what we cannot see. We'll have theories about what's out there - but we won't know for sure.
SteveBaker (talk) 13:22, 26 April 2013 (UTC)[reply]
Steve: "...so in a physics sense, it doesn't matter whether there was a "before" or not" may well be the single most useful, clarifying (to me), whatever-you-calls-it sentence you've ever written here. Thanks!
--DaHorsesMouth (talk) 16:51, 27 April 2013 (UTC)[reply]
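For anyone curious where a figure like "92 billion light years" comes from, it can be reproduced numerically from the standard expanding-universe model. The sketch below assumes Planck-like cosmological parameters (the H0 and Omega values are assumptions, not taken from this thread) and integrates the comoving distance out to the particle horizon; it should print a radius of roughly 46 billion light years, i.e. a diameter in the low 90s.

```python
import math

# Toy numerical estimate of the comoving radius of the observable universe.
# H0 and the Omega values below are assumed Planck-like parameters, not figures from the thread.
H0_km_s_Mpc = 67.7
omega_m, omega_r, omega_l = 0.31, 9e-5, 0.69

GLY_PER_MPC = 3.2616e-3                                   # 1 Mpc ≈ 3.2616 million light years
hubble_dist_gly = (299792.458 / H0_km_s_Mpc) * GLY_PER_MPC  # c / H0, in billions of light years

# Comoving distance to the particle horizon: integrate da / sqrt(Om*a + Or + Ol*a^4) from a=0 to a=1
# (midpoint rule; the integrand is finite at a=0 because of the radiation term).
n = 1_000_000
total = 0.0
for i in range(1, n + 1):
    a = (i - 0.5) / n
    total += 1.0 / math.sqrt(omega_m * a + omega_r + omega_l * a**4)
integral = total / n

radius_gly = hubble_dist_gly * integral
print(f"Comoving radius ≈ {radius_gly:.0f} Gly, diameter ≈ {2 * radius_gly:.0f} Gly")
```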

Floss first or brush first?

Has the debate over flossing or brushing first stabilized yet? Is it recommended that people floss and then brush or brush and then floss? My understanding is that there is no agreement in the literature about this. What is the current state of the research? Thanks. Viriditas (talk) 12:25, 26 April 2013 (UTC)[reply]

The advantage of flossing first is obvious, the brush bristles can then get between the teeth in places previously occupied by bits of food. But what's the advantage of brushing first ? StuRat (talk) 18:00, 26 April 2013 (UTC)[reply]
If I've eaten something that causes a lot of stuff to stick in between my teeth (e.g. fish), then I prefer to first brush my teeth to get rid of quite a large fraction of the stuff, then floss, and then brush again. If I try to floss first in such a case, then it takes a lot of time to get everything removed. Count Iblis (talk) 18:18, 26 April 2013 (UTC)[reply]
StuRat, I think most medical associations outside the U.S. agree with you. For some strange reason, U.S. dental associations won't take sides. For example, the Canadian Dental Association comes right out and says, "Brush your teeth after you floss - it is a more effective method of preventing tooth decay and gum disease." I would be very interested if you could find a similar statement from a respected U.S. authority. The U.S. medical associations take the position that "it doesn't matter", but that seems to be a strange position to take. Viriditas (talk) 05:48, 27 April 2013 (UTC)[reply]
...and "it doesn't matter" is a phrase rarely seen in Wikipedia, and certainly less on the Ref Desk. Richard Avery (talk) 14:40, 27 April 2013 (UTC)[reply]
further, you'll note that that phrase occurs in two consecutive posts now. Woohoo...
--DaHorsesMouth (talk) 16:53, 27 April 2013 (UTC)[reply]

Postnatal clinic

explanation of postnatal clinic? — Preceding unsigned comment added by 49.138.37.29 (talk) 14:14, 26 April 2013 (UTC)[reply]

Start reading the Wikipedia articles titled postnatal and clinic. If you have any more specific questions after reading those articles, we can try to help. --Jayron32 14:56, 26 April 2013 (UTC)[reply]

Tilt

Why does the northern hemisphere tilt towards the Sun during the summer? Pass a Method talk 15:37, 26 April 2013 (UTC)[reply]

Well, according to common usage, summer is opposite for the northern and southern hemispheres. For each hemisphere, summer is defined as the season during which the Sun is highest in the sky, which means the season during which that hemisphere points toward the Sun. Looie496 (talk) 16:00, 26 April 2013 (UTC)[reply]
The axial tilt of the Earth means that days are longer and the sun is higher in the sky for longer periods of time during one half of the year than the other half. Whichever hemisphere you are in, the part of the year with longer days etc. is summer in that hemisphere. See Seasons#Axial tilt for more details. Gandalf61 (talk) 16:00, 26 April 2013 (UTC)[reply]
The more interesting question about the axial tilt is why there is so little of it! When the earth was formed, it could have been spinning at almost any angle. Venus has a tilt of 177 degrees, Mars 25 degrees, Jupiter just 3 degrees, Pluto 122 degrees and Uranus has 97 degrees. It's a seemingly random thing, so the fact that the earth's tilt is 23 degrees could just be a fluke. The axis of rotation would also change over time, but it's stabilized by the moon orbiting around us. SteveBaker (talk) 16:12, 26 April 2013 (UTC)[reply]
I should think that when the planets formed, they would have no tilt at all relative to the ecliptic, and that tilt was later added due to collisions. However, there may not have been any gap between "formation" and "collisions" (the two might well have overlapped). StuRat (talk) 18:02, 26 April 2013 (UTC)[reply]
In my graduate physics study, we actually had a homework problem modeling the probability distribution of planetary axes assuming they formed via the accretion of asteroid sized chunks. One generally assumes the disk of debris from which planets form is relatively flat and on average the angular momentum will be coaligned with the axis of the orbits, which would tend to predict that the planets have axes also aligned with the orbit. However, each accretionary impact event between a protoplanet and an asteroid is a little bit random and can kick the axis of the planet by a little bit. Even though the average impulse is small and effectively neutral, this series of small kicks still results in a random walk that can cause the planetary axis to have large deviations by the time planet formation is essentially finished. For a more interesting question, one might wonder why the sun's axis is tilted 7 degrees relative to the orbital plane. It is actually not entirely clear what formation processes would have the ability to move the sun (or alternatively the planets) by that much. Dragons flight (talk) 18:47, 26 April 2013 (UTC)[reply]
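As a rough illustration of that random-walk effect, here is a minimal Python sketch; the number of impacts and the kick size per impact are illustrative assumptions, not values from any real accretion model:

import math, random

# Toy model: a protoplanet's spin axis gets many small random "kicks" from
# accretion impacts. Each kick tilts the axis by a small angle in a random
# direction, so the net tilt grows roughly like sqrt(N) - a 2-D random walk.
random.seed(1)
n_impacts = 100000   # assumed number of significant impacts (illustrative only)
kick = 0.05          # assumed tilt per impact, in degrees (illustrative only)

x = y = 0.0          # components of the axis offset from the orbital pole
for _ in range(n_impacts):
    theta = random.uniform(0.0, 2.0 * math.pi)
    x += kick * math.cos(theta)
    y += kick * math.sin(theta)

print("final tilt from this run: %.1f degrees" % math.hypot(x, y))
print("expected r.m.s. tilt:     %.1f degrees" % (kick * math.sqrt(n_impacts)))

Even though each individual kick is tiny, the accumulated tilt after enough of them can easily reach tens of degrees, which is the point being made above.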
Further complicating matters, isn't it the case that the earth has an "overspin", like a top, which rotates once every 23,000 years or so? I have a vague recollection of that from 9th grade science or something. ←Baseball Bugs What's up, Doc? carrots22:08, 26 April 2013 (UTC)[reply]
Every 26,000 years to be precise, but close for a vague recollection. See Axial precession (astronomy) --NorwegianBlue talk 23:24, 26 April 2013 (UTC)[reply]
Very good. So, if I'm picturing it correctly, 13,000 years from now the seasons will be at opposite times of the year from what they are now? ←Baseball Bugs What's up, Doc? carrots23:47, 26 April 2013 (UTC)[reply]
No, the axis doesn't reverse, it just rotates (the small circle). Dbfirs 07:11, 27 April 2013 (UTC)[reply]
In 13000 years, midsummer will occur 180° around the orbit from where it does now. Bazza (talk) 10:30, 27 April 2013 (UTC)[reply]
Oh, I see what you mean. That's not the seasons being at opposite times of the year, but the perihelion occurring in July instead of January in 10,500 years. This will make summers in the northern hemisphere slightly warmer, and winters colder. See Milankovitch cycles for various other effects, including this 21,000-year cycle of the aphelion and perihelion against the seasons (actually a combination of two cycles). Dbfirs 12:01, 27 April 2013 (UTC)[reply]
I'm not talking about perihelion. I'm talking about the tilt of the earth relative to the sun. As Bazza said. In 13,000 years the earth should be tilted the "opposite" of how it's tilted now, hence the northern hemisphere would have winter weather in July, and summer weather in January. Or maybe I'm not picturing it correctly. ←Baseball Bugs What's up, Doc? carrots05:01, 28 April 2013 (UTC)[reply]
No, you are not picturing it correctly from the point of view of us Earth-dwellers. The year and the calendar are both defined by the tilt, so we will not see this difference, though I agree that astronomers from elsewhere in the solar system might take your point of view, and they would indeed observe the change in tilt over 13,000 years (not the 10,500-year effect on the perihelion). Dbfirs 06:49, 28 April 2013 (UTC)[reply]
Yes, I'm talking about the observer sitting at some fixed point in the solar system, watching this process over 13,000 years (he takes a lot of Geritol). If I understand what you're saying, Orion would be a summer constellation instead of a winter constellation, right? Also, we would have to adjust our calendar every once in a while, to ensure that summer still starts on June 21, right? ←Baseball Bugs What's up, Doc? carrots07:08, 28 April 2013 (UTC)[reply]
We are not really disagreeing here, except on the point of having seasons at the opposite time of the year. They will occur at the opposite position of orbit, but not at a different time of year. This is because we already adjust our calendars to match the mean tropical year (or, more correctly, the mean period between vernal equinoxes which is not quite identical), so we already adjust our year to maintain summers in the same months over thousands of years (see Gregorian calendar). This is what we Earth-dwellers mean by "year". Observers elsewhere in the solar system would see the tilt change over 12,886 years in Earth-time (see Axial precession), as well as observing the slower 112,000 year-Apsidal precession of the orbit. The 10,818-year reversal effect on the perihelion that we observe here on Earth is a combination of the two. Of course, Earth's astronomers observe the two effects independently against the stars. Dbfirs 07:30, 28 April 2013 (UTC)[reply]
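To put the precession figures quoted above on one line (simple arithmetic only):

axial_precession = 25772.0   # years, full cycle of Earth's axial precession
print("precession rate: %.1f arcseconds per year" % (360.0 * 3600.0 / axial_precession))
print("half a cycle:    %.0f years" % (axial_precession / 2.0))

This gives about 50 arcseconds per year, and 12,886 years for the half cycle mentioned above.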

Abandoned Tower Crane rotation

Tripoli has more abandoned tower cranes than anywhere I've ever seen (and approximately none currently doing anything). Hundreds of construction projects lie in suspended animation at whatever stage they were at when the 2011 revolution shut things down.

Several cranes near where I work normally point north but from time to time I see a couple of them (always the same two) pointing another way. The site is abandoned but anybody could clamber in and probably anybody could climb the crane if they wanted to. But surely they won't have the keys. (Do cranes need keys? Anyway, I doubt these ones have power.) How and why are they sometimes turned around? I wondered if it's the wind (perhaps these two are not locked). What's the most likely explanation?

Hayttom 15:41, 26 April 2013 (UTC) — Preceding unsigned comment added by Hayttom (talkcontribs) [reply]

Clearly, these cranes don't have people sitting in them 24/7 the whole time they are out there - so there must be some kind of mechanism to allow them to be safely parked. I guess that the last time the crane operator left to go home, he'd have engaged whatever mechanism there is - just like he'd do every night at the end of his shift. Doing a Google image search on "Tower crane cab" produces photos like this one: http://ic.pics.livejournal.com/nord_operator/16414762/33225/33225_original.jpg that clearly show a key...so it's reasonable to assume they have them. Of course it might be that the crane operators just leave the key up in the cab ready for the start of the next shift...so that's not really proof of anything. It would make sense to point the crane into the prevailing wind at the end of a shift - and according to several sources I looked at, the prevailing winds in Tripoli are from the north...so this is a plausible thing. SteveBaker (talk) 16:28, 26 April 2013 (UTC)[reply]
From [5]: "When not in use the rotation / slew brake must be released to allow the crane weather vane thus the wind then poses no danger to the crane". It sounds like the standard procedure is to allow an empty crane to rotate freely in the wind. Whether or not any given crane rotates would then depend on the strength of the wind and the amount of internal friction resisting rotation. Dragons flight (talk) 17:10, 26 April 2013 (UTC)[reply]
Resolved

Another crackpot theory question

Hi all, I have another question based on a crackpot theory of mine, so please be kind and moderate, and don't abuse me too much. It's all very sketchy, so someone is going to want to take the mickey, but please go easy. It goes like this: the laws of the universe are time symmetric (or nearly so, but I'll get to that). My theory is that time could just be a kind of disturbance, and the laws are symmetric simply because any function can be resolved into a symmetric component and an antisymmetric component, as in f(x) = (f(x) + f(-x))/2 + (f(x) - f(-x))/2. Now if time is a disturbance, essentially consisting of vibrations forwards then backwards, then the antisymmetric operators would cancel (this is how I picture it) and the symmetric operators would amplify. But every operator (or function) can be reduced to a symmetric component and an antisymmetric component, so any operator would have its antisymmetric component removed.

But: in reality, the universe is governed by CPT symmetry. You have to reverse time, conjugate charges, and reverse parity, for symmetry to be perfect. Under the covers, mathematically, any matrix can be decomposed into a Hermitian matrix and an anti-Hermitian matrix. My theory is that the transpose operation on a matrix is analogous to reversing parity, and the complex conjugate operation for the matrix is analogous to charge conjugation. Then, the same concept follows: any operator is the sum of a Hermitian operator and an anti-Hermitian one, so if time is some kind of disturbance in a Hermitian space, the CPT-symmetric laws will amplify, and the CPT-antisymmetric laws will cancel.

This depends on a couple of things: firstly, the mathematical operations must be analogous in some way to the physical operations I mentioned, and secondly, the mathematical proof of CPT symmetry (which I do not know) must (presumably) incorporate this analogy in some way. Is there anything that would count for or against this theory? I know it is suitably vague, so if you want an unfalsifiable theory, you've got one, but whilst it may not be strict science, it seems as though it is open to further analysis. And yes, perhaps it is strictly falsifiable, but I just can't show this. IBE (talk) 15:41, 26 April 2013 (UTC)[reply]
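Setting the physics aside, the purely mathematical claim - that any matrix splits into a Hermitian part plus an anti-Hermitian part - is easy to check numerically. A minimal NumPy sketch (the matrix here is just a random example):

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # arbitrary complex matrix

H = (A + A.conj().T) / 2    # Hermitian part: equals its own conjugate transpose
K = (A - A.conj().T) / 2    # anti-Hermitian part: equals minus its conjugate transpose

print(np.allclose(A, H + K))        # True: A is exactly the sum of the two parts
print(np.allclose(H, H.conj().T))   # True: H is Hermitian
print(np.allclose(K, -K.conj().T))  # True: K is anti-Hermitian

The same one-line split works for the symmetric/antisymmetric decomposition of a real function, f(x) = (f(x)+f(-x))/2 + (f(x)-f(-x))/2; whether either decomposition has the physical meaning suggested above is a separate question.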

I can't make any sense of this. If time is a disturbance, what is it a disturbance of? Time is a dimension -- something that can be measured using a numerical scale. How can a disturbance be a dimension? Looie496 (talk) 15:56, 26 April 2013 (UTC)[reply]
I take your point, and I admit that it's a problem. I've often wondered that, but the theory seemed so neat when I thought it up, and it connected with things I found out later - things that have been known for ages, but which I myself didn't know. The idea is closest to Max Tegmark's mathematical universe hypothesis, so the universe is just made up of numbers. Essentially it is just a complex number field, or matrix field of some sort, and it consists of spatial dimensions, and an imaginary time dimension. Now what happens with the imaginary dimension? Imaginary numbers cannot be pictured in spatial terms, so let us say the universe has this dimension and doesn't know what to "do" with it. So it oscillates between states - producing time as a by-product. Of course one can imagine a universe that doesn't need to do anything with the imaginary dimension - it just sits there. Such a universe would exist, however, for zero seconds. So the universe exists because it is the thing that doesn't know what to do with the imaginary numbers. I know that is vague, so the theory is intended as a starting point, so people can either refute it, or show how it links to some other theory, and maybe has a grain of truth in it somewhere. IBE (talk) 16:25, 26 April 2013 (UTC)[reply]
The problem is that time is the anchor that we use for experience. If time speeded up, slowed down, or even reversed, we'd be completely oblivious of it happening. I don't see what it even means to have time itself change.
Worse still for your theory is that time itself isn't a constantly marching thing. Relativity says that bodies in motion (or inside gravitational fields) experience time dilation...so the rate of passage of time is slightly different for every person, every molecule in the universe...it doesn't seem like your theory works too well with every fundamental particle's experience of time oscillating around separately. The universe doesn't have one single, universal clock...there is no "true" measure of time - it's different for every particle.
For example, an earth-bound fundamental particle called a "muon" has a lifetime of about 2 microseconds before it decays away. But muons created in the upper atmosphere due to cosmic ray interactions move at more than 99% of the speed of light - and they decay over about 20 microseconds. Time for a muon created that way is passing about 10 times slower than it does for us. Its clock runs slower than ours does. But if our time and its time are running at different speeds - what does it mean for time to oscillate? There is no "master clock"...no "true time" for the universe.
The biggest problem is that theories shouldn't come from nowhere. You need to find what's wrong with our current theory - find experimental evidence that it's broken - then propose something new that not only predicts that new evidence - but (most importantly) reproduces everything we already know under the old theory. I don't see that you're doing that here. Occam's razor says that we don't make things any more complicated than we have to - and we simply don't need your hypothesis to explain what we already know.
SteveBaker (talk) 16:53, 26 April 2013 (UTC)[reply]
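For concreteness, the Lorentz factor behind the muon example above can be computed directly; the speeds below are assumed round values for illustration:

import math

def gamma(beta):
    """Lorentz factor for a particle moving at beta times the speed of light."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

tau = 2.2e-6   # muon lifetime at rest, seconds
for beta in (0.99, 0.995, 0.999):
    print("beta = %.3f  gamma = %5.1f  dilated lifetime = %4.1f microseconds"
          % (beta, gamma(beta), gamma(beta) * tau * 1e6))

A factor-of-ten stretch in the lifetime corresponds to a speed of roughly 99.5% of the speed of light.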
Ok, two main points, both worth noting. Firstly, lack of "true time" or universal time, secondly, lack of a need or basis for the theory. The need is easiest - it is philosophical as much as anything. There isn't a theory it is replacing, and I admitted it is not pure science - it hits philosophy as much as anything. It is an overarching explanation that may add to existing science, and would be judged by its productivity, which we can't decide here - it could, however, be refuted by something specific, or (partially) justified by some specific connection. I'm not sure it comes from nowhere, but I won't go into it. The lack of a cosmic timekeeper is a bigger practical issue - I cannot see why it is a deal-breaker. The variable nature of time places it on a mathematical footing the same as space, and it is always (I think) the imaginary component. Time is not haphazardly variable, but rather, changes according to precise mathematical relationships, worked out by Einstein a century ago. It is not too big a problem, I don't think. IBE (talk) 17:42, 26 April 2013 (UTC)[reply]
I think maybe I see your point; it occurred to me way back in quantum physics class that you could presumably model the universe as a somewhat complex equation, one of the variables being time; and what we see as events unfolding in time is just the values of the various other variables at various values of T for time. 206.213.251.31 (talk) 19:31, 26 April 2013 (UTC)[reply]

How do I call it?

30-pounder without rigging.

What is the name for the wheeled cart for a navy gun? -- Toytoy (talk) 16:15, 26 April 2013 (UTC)[reply]

Gun carriage. TenOfAllTrades(talk) 16:22, 26 April 2013 (UTC)[reply]
Specifically a "garrison carriage" or "ship's carriage" depending on whether you were using it on land (generally in a fortress) or sea.[6] A "field carriage" or "travelling carriage" was a more mobile affair with a pair of large cart wheels, designed to be towed about by a team of horses.[7] Alansplodge (talk) 00:11, 27 April 2013 (UTC)[reply]
How do you call it? I'd say "Gun ho!" or "I predict it will be a gun". Plasmic Physics (talk) 02:42, 28 April 2013 (UTC)[reply]
We're being a bit mean. For example, in Spanish you ask someone their name by asking como se llama ("how do you call yourself") rather than que es su nombre. Although my first thought was like calling a pet: "Here, cannon, cannon...!" ←Baseball Bugs What's up, Doc? carrots04:56, 28 April 2013 (UTC)[reply]
Well, I guess we differ in opinion about the meaning of 'malice/mean'. In any case, if the OP was offended, then I apologise. Plasmic Physics (talk) 06:55, 28 April 2013 (UTC)[reply]
Not very mean, just a bit. He got the question grammatically right, it's just the heading that reads kind of funny to a native English speaker. ←Baseball Bugs What's up, Doc? carrots07:02, 28 April 2013 (UTC)[reply]

The whole thing would normally be called a cannon. Martin Hogbin (talk) 12:43, 28 April 2013 (UTC)[reply]

Holes/tears/cracks in spacetime

Is it actually possible to create a hole/tear/crack in spacetime? If so, what if anything would happen to something that passed through the crack? Whoop whoop pull up Bitching Betty | Averted crashes 17:01, 26 April 2013 (UTC)[reply]

Nobody really knows, but see wormhole. Looie496 (talk) 17:06, 26 April 2013 (UTC)[reply]
In either The Elegant Universe or The Fabric of the Cosmos by Brian Greene, the author describes his own mathematical discovery of "space-tearing flop transitions" or something like that, based on some mathematical model of space including Calabi-Yau manifolds. IBE (talk) 17:45, 26 April 2013 (UTC)[reply]
According to known physics, there is no such thing as a hole/tear/crack in spacetime. Greene's work on string theory is speculative and has no empirical support. --140.180.240.146 (talk) 19:13, 26 April 2013 (UTC)[reply]
Are you referring to the subset of physics which you know yourself, or the totality of physics Brian Greene and his peers know? General relativity was contrary to the "known physics" of the 1920's so far as most physics students and many leading physicists were concerned. Edison (talk) 00:48, 27 April 2013 (UTC)[reply]
I am referring to the subset of physics that has been empirically verified and is considered uncontroversial by the scientific community. I don't have the expertise to judge whether string theory, loop quantum gravity, or other speculative physics have any validity, but they certainly have no empirical support, and Greene himself would be the first to admit this. --140.180.240.146 (talk) 09:38, 27 April 2013 (UTC)[reply]
There is absolutely nothing out there that suggests that this is remotely possible. There isn't really any clear concept of what those words actually mean. It's a pure science-fiction idea and searching for ways to make science fiction work isn't really science! Even wormholes are entirely speculative. SteveBaker (talk) 19:40, 26 April 2013 (UTC)[reply]
Can I call "citation needed" on your claim that it is pure science fiction? It is mathematical physics, which will often mean that it is not easy to test for the time being, but your claim looks pretty strong. IBE (talk) 02:23, 27 April 2013 (UTC)[reply]
From the article that you linked: "Andrew Strominger and Edward Witten have shown that the masses of particles depend on the manner of the intersection of the various holes in a Calabi-Yau", so the basic holes in that space are around quark level in size. I haven't investigated ""space-tearing flop transitions", but I think any such claim that Mathematical Physics predicts large-scale tears in the fabric of space-time definitely requires citations. Most scientists regard this idea as "pure science fiction" (it brings to mind the last episode of Star Trek: Voyager), but the best answer is the first one (given by Looie496):"Nobody really knows" the answer to either question. Dbfirs 11:52, 27 April 2013 (UTC)[reply]
Most scientists regard this idea as "pure science fiction" (it brings to mind the last episode of Star Trek: Voyager) --Not what I was thinking of; I was thinking of the Doctor Who series 5 finale. Whoop whoop pull up Bitching Betty | Averted crashes 08:11, 28 April 2013 (UTC)[reply]
Wasn't that the collapse of the whole universe? Dbfirs 09:25, 28 April 2013 (UTC)[reply]
Ooh, I wouldn't mind a ringside seat to that event. From a safe distance outside, of course. -- Jack of Oz [Talk] 10:26, 28 April 2013 (UTC) [reply]
Seats are still available at Milliways. Dbfirs 12:36, 28 April 2013 (UTC)[reply]

Do female hormones make a man weaker?

I was just reading this news article, here, which talks about a transsexual man who is fighting in the MMA against women. I was wondering if female hormones injected into a man over the course of a year or so really make a man weaker. I seem to recall reading something about how men have greater grip strength regardless of how well a woman works out or something. ScienceApe (talk) 17:02, 26 April 2013 (UTC)[reply]

Nope, because as a teenager he had enough male hormones to build his muscles. Besides that, genetic men competing against genetic women have an advantage due to several reasons. It's not just the muscle, but also the adrenalin, height, weight and size. Any serious sport league should avoid this scenario. OsmanRF34 (talk) 17:30, 26 April 2013 (UTC)[reply]
This is pure OR, but female hormones do change a man's body. I have a friend who is a MTF transsexual, and she told me that her shoe size dropped 2 sizes after she started taking female hormones, with no sign of it stopping (it may have continued dropping in the last year, in other words). I'm sure there will be proper references somewhere but I don't have access to the medical sites. --TammyMoet (talk) 18:01, 26 April 2013 (UTC)[reply]
I wonder how. Are the bones in her feet actually shrinking ? StuRat (talk) 18:06, 26 April 2013 (UTC)[reply]
Yes, that story seems extremely dubious and in contradiction of medical science. If her feet really are shrinking, that could indicate a serious medical condition like neuropathic arthropathy, and she should immediately seek medical advice. --140.180.240.146 (talk) 19:08, 26 April 2013 (UTC)[reply]
StuRat: her shoe size has dropped, not the size of her feet. Women will often wear shoes that look good even if they're the wrong size. – b_jonas 20:27, 27 April 2013 (UTC)[reply]
my experience is that female hormones definitely make a man weaker, if they are packaged inside a female 206.213.251.31 (talk) 19:34, 26 April 2013 (UTC) [reply]
  • Insofar as they encourage fat deposition, especially breast enlargement, they won't make him stronger. There was a lot of press about marijuana causing an increase in female hormone in men and the adverse effects about a decade back. I always assumed it was drugwarrior pseudoscience, but it's a place to look. μηδείς (talk) 19:39, 26 April 2013 (UTC)[reply]

Male-to-female hormone replacement therapy reduces physical strength, muscle mass, and bone density according to these interviews with physicians who specialize in sex reassignment. This link (scroll halfway down) shows Fallon Fox fighting a cisgender female MMA fighter, Erika Newsome. Fox is on the left, Newsome on the right. Newsome's muscles are huge compared to Fox's. Note that development of muscle mass can vary greatly even among cisgender women. Fighters are paired by weight class. 173.49.177.27 (talk) 21:48, 26 April 2013 (UTC)[reply]

The older the male, the less the effect will be. Estrogen is an (now obsolete) effective treatment for prostate cancer. I had a friend at work who developed prostate cancer at age 50. They inserted radioactive pellets into his prostate gland tumour to slug it down, and put him on estrogen for 5 years to stop any tumour spread. Apart from growing a nice pair of little schoolgirl tits, there was no real effect on him. He did not change size and he retained full muscle strength. I had an uncle who also was put on estrogen to treat prostate enlargement. It had no effect on him physically. His wife claimed it made him a bit silly, but given he was nearly 90 at the time, his silliness was probably going to happen anyway. I don't know what the cancer treatment dosage is/was compared with what transsexuals take. Wickwack 58.170.141.242 (talk) 03:02, 27 April 2013 (UTC)[reply]

Yes, treatment with estrogen for more than a brief period of time will weaken a man's muscle strength. Estrogen will suppress GnRH, which will suppress LH, which will reduce the testosterone produced by the Leydig cells, which will gradually result in some reduction of muscle mass and strength. And, sorry guys, there are errors in almost every single comment so far posted to this question. alteripse (talk) 03:26, 27 April 2013 (UTC)[reply]

GMSK modulation vs. 8PSK modulation

Why measure frequency error in GMSK modulation as opposed to EVM? Is EVM not relevant in GMSK modulation? Why measure EVM in 8PSK modulation? Is frequency error not as important? Thanks. — Preceding unsigned comment added by 192.240.14.2 (talk) 17:24, 26 April 2013 (UTC)[reply]

(See EVM, Frequency-shift keying and QAM for our articles on the subject). The answer is because those are the relevant parameters for the signals in question. For a GSM signal, you're not particularly interested in the (demodulated) amplitude of the waveform, only the frequency - there isn't a definite "target" point on the polar diagram from which one can measure the error vector, which means that the EVM isn't a useful parameter. On the other hand, for a QAM signal, the frequency error doesn't give you enough information - you need to know how far off the amplitude, as well as the phase, is, in order to characterize the signal quality. Tevildo (talk) 20:06, 26 April 2013 (UTC)[reply]
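As a small illustration of what EVM actually measures, here is a sketch for an 8PSK constellation; the noise level and symbol count are arbitrary assumptions:

import numpy as np

rng = np.random.default_rng(0)

# Ideal 8PSK constellation: eight points evenly spaced around the unit circle.
ideal = np.exp(1j * 2.0 * np.pi * np.arange(8) / 8.0)

# Simulated received symbols: random ideal symbols plus additive complex noise.
sent = rng.choice(ideal, size=1000)
received = sent + 0.05 * (rng.normal(size=1000) + 1j * rng.normal(size=1000))

# EVM: r.m.s. magnitude of the error vectors, relative to the r.m.s. ideal magnitude.
error = received - sent
evm = np.sqrt(np.mean(np.abs(error) ** 2) / np.mean(np.abs(sent) ** 2))
print("EVM = %.1f%%" % (100.0 * evm))

For a constant-envelope scheme like GMSK there is no such grid of amplitude/phase target points, which is why frequency (and phase) error is the more natural figure of merit there, as described above.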

Best office plant for artificial light only

I want to buy a plant as a gift for an office worker whose desk has 8-12 hours a day of artificial light only. Google searches have suggested jade plants. But looking up jade plants direct sun for part of the day is recommended. None of the sources I have found is above the level of a blog. Can anyone provide information about jade plants or other alternatives? Thanks. μηδείς (talk) 20:18, 26 April 2013 (UTC)[reply]

I didn't realise these "reference desks" were just a substitute for five seconds on Google!! This is a good start. Depends what sort of office environment your plant will be exposed to. Try this, or this. The Rambling Man (talk) 20:24, 26 April 2013 (UTC)[reply]
Jades should survive fine, but they won't grow much. Sansevieria are a common choice, as they are virtually indestructible. They can literally go months with zero light and water, so are great for people who don't know how to care for plants. They can't get too much light in an office, and are also very hard to over water if planted properly. The lists posted above outline many of the common choices. They are also all pretty boring, IMO. Additional good choices are Schefflera, Zamioculcas, Gardenia and Epiphyllum (it might not bloom, but I find them very interesting). Contrary to one of the links above, I'd recommend against trying to grow Azaleas in an office. SemanticMantis (talk) 21:50, 26 April 2013 (UTC)[reply]
Why? The Rambling Man (talk) 21:52, 26 April 2013 (UTC)[reply]
(assuming you mean "why not grow azaleas as an office plant?") Well, first, I've never seen it done, and I check out interior plants wherever I go. Yet they are a very common landscaping plant across vast swaths of the USA... Secondly (and relatedly), my hunch is that they need a seasonal cycle of dormancy to trigger repeated blooming (e.g. vernalization), but I could be wrong there. Finally, if you google "indoor azalea", you'll find lots of discussion of overwintering them inside, but nobody really talks about keeping them inside. Anyway, that's just my recommendation, take it or leave it. SemanticMantis (talk) 22:06, 26 April 2013 (UTC)[reply]
Would you think that it depends where your office is located around the globe? The Rambling Man (talk) 22:09, 26 April 2013 (UTC)[reply]
This will be in the Tri-State Area. μηδείς (talk) 17:04, 27 April 2013 (UTC)[reply]
Assuming "artificial light only" as medeis has, then no, I assume offices around the world have very little seasonal variation in temp, lighting, or humidity (more precisely, vapor pressure deficit). Sure, there is some variation, but not on the scale that temperate plants are adapted to. It would be a different story if there were no climate control, or if there were a large window. Of course, there are zillions of species, strains, and hybrids of Azaleas, with different cold tolerances and overall habits. I'm sure someone has coaxed some type into long-term cultivation and blooming indoors. But I still wouldn't recommend one as gift. I've seen them sold in small pots with decorative wrapping, presumably for just that purpose. But I suspect most people only keep them inside for a month or so, then either throw them out or plant them outside if they are hardy to that particular zone. Anyway, It's Friday evening here, and I have no time for further refs. For what it's worth, I've been practicing plant husbandry for ~15 years, and currently maintain ~100 plants in ~50 pots, across a few dozen species. Some indoor, some outdoor, some move. SemanticMantis (talk) 22:47, 26 April 2013 (UTC)[reply]
The Aspidistra, aka Cast-iron-plant, was very popular in Victorian homes. Not only did they have to cope with deep shade but also gas lighting fumes and cigar smoke (although I don't find the latter a problem myself). Failing that, the other popular deep shade plants are the ferns. [8] I have seen these growing deep inside caves (Cheddar Gorge et al.) with only the electric light bulbs to provide illumination. If you get really desperate, there are always the pretty Psilocybin mushrooms. They need no light whatsoever and turn a pleasing blue when crushed... or so I am reliably told. Aspro (talk) 22:50, 26 April 2013 (UTC)[reply]
Thanks, all. I think I will stick with a bonsai jade which shouldn't grow too much, prickle, or topple. The plant will be on a large desk manned by multiple people and within reach of the public. μηδείς (talk) 17:14, 27 April 2013 (UTC)[reply]
Resolved

Hydrogen manufacture

Can carbon nanotube material be used to filter water into hydrogen and oxygen, with pressure and/or temperature applied? — Preceding unsigned comment added by 174.100.57.235 (talk) 22:40, 26 April 2013 (UTC)[reply]

You can't "filter" hydrogen out of water anymore than you can filter all of the left arms out of a crowd of people. You need to break apart the water molecules to produce hydrogen and oxygen, and that breaking requires a substantial input of energy. Most commonly this energy comes from electricity. See Electrolysis of water. No need for any elaborate nanotubes. A couple of jumper cables and a 12 volt battery will do it. --Jayron32 02:46, 27 April 2013 (UTC)[reply]
To be very, very clear about this: Any effort whatever by any means whatever to split water into hydrogen and oxygen will take more energy than you'll get by burning, reacting or otherwise recombining the hydrogen with oxygen again. If anyone tells you otherwise (and especially if they try to sell you a handy gadget that can do this) - kick them hard in the shins. If they specifically claim to be able to power a car with this gizmo - then kick them somewhere where it hurts more than that. You'll be doing the world a favor in the long run. SteveBaker (talk) 03:44, 27 April 2013 (UTC)[reply]
Also to be clear, using hydrogen to power a car, in (for example) a fuel cell, is not a shin-kickable offense. There are many proposed solutions to generating low cost (and low carbon footprint) hydrogen, but what it boils down to is coming up with a low-carbon-footprint source of electricity to electrolyze the water to produce the hydrogen to put in your fuel cell. The problem with, say, your average "home hydrogen electrolysis kit" (they exist; Honda makes one that runs off your house current to supply hydrogen for its production fuel cell vehicle, the Honda FCX Clarity) is that if you're using your house power supply to make hydrogen, and your house gets its electricity from a dirty coal power plant (and chances are it does if you're in the U.S. or many other countries), the upshot is that your supposedly "clean" fuel cell car isn't so clean, as you're essentially burning coal to run your car, and doing it fairly inefficiently in the sense that you're burning coal several hundred miles away to do it. If, perchance, you had a solar-electric system that was generating your hydrogen, and you used that hydrogen to power your car, then, aside from the capital cost of setting the system up, you would be running your car for free. Honda is working on that too, see [9]. However, SteveBaker's caveat is valid: if someone is saying that energetically you can get hydrogen from water for free (that is, in a way that you don't need a massive input of energy in excess of the energy you get from recombining the hydrogen with oxygen in, say, a fuel cell) they deserve a kick in the shins. If, perchance, you could get a reliable source of cheap or free energy (like sunlight) to do so, then it may be cash free, but of course, you still need that energy. --Jayron32 04:04, 27 April 2013 (UTC)[reply]
Getting energy by burning coal in a power plant several hundred miles away and then sending it along a line then powering an electric motor is more efficient than burning petrol or diesel locally in a car's petrol or diesel engine, plus it is cleaner overall and needs less processing to produce the fuel. For cars there is also the added advantage that electric motors can get a bit of the energy back going down a hill or braking. However generating hydrogen as an intermediate step wastes quite a bit of the energy and batteries make cars heavy and so use up more energy. Dmcq (talk) 06:13, 27 April 2013 (UTC)[reply]
Coal is radioactive, you know? Plasmic Physics (talk) 11:18, 27 April 2013 (UTC)[reply]
That was a scare headline for dramatic effect. Nuclear power plants don't have smoke stacks at all, and petrol production does the same thing. Dmcq (talk) 14:40, 27 April 2013 (UTC)[reply]
What headline? I have no idea what you're talking about. Plasmic Physics (talk) 15:04, 27 April 2013 (UTC)[reply]
Sorry I thought you probably got it from Scientific American Coal Ash Is More Radioactive than Nuclear Waste. Dmcq (talk) 16:44, 27 April 2013 (UTC)[reply]
No, that is just ridiculous. At most, coal ash has traces of radioactive material. I was just pointing out that coal, and coal ash is radioactive. Plasmic Physics (talk) 23:43, 27 April 2013 (UTC)[reply]
"No" to the article, not to you. Plasmic Physics (talk) 02:29, 28 April 2013 (UTC)[reply]
I'm more concerned with the environmental accumulation of radioactive contaminants from fly ash than the simple presence of the radioactive contaminants in the fly ash. Plasmic Physics (talk) 02:35, 28 April 2013 (UTC)[reply]
These are what the molecules involved look like
H-H    O=O     O
              / \
             H   H
When hydrogen is burned with oxygen to form water, two moles (a fixed number of molecules) of hydrogen react with one mole of oxygen to form two moles of water. More specifically, two hydrogen-hydrogen single bonds are broken, and one oxygen-oxygen double bond is fully broken to form free atoms, and then 4 hydrogen-oxygen single bonds are formed.
Bond-dissociation energy lists the energy required to do this: it is 2 × 104 kcal/mol + 1 × 119 kcal/mol (327 kcal) to split the hydrogen and oxygen, and 4 × 110 kcal/mol (440 kcal) released when the oxygen and hydrogen combine to form water, giving a net surplus of about 113 kcal per two moles of water formed.
To split water back into hydrogen and oxygen, you need to break the water back into free atoms (440 kcal required), and then form them into hydrogen and oxygen molecules (releasing 327 kcal); this gives a net deficit of about 113 kcal, which needs to be supplied for the reaction to proceed. In practice, forcing a reaction against its natural direction generally needs more than the minimum amount of energy; the excess is wasted as heat.
Also, hydrogen and free oxygen atoms are highly reactive; they will probably react with the carbon nanotubes as well. CS Miller (talk) 11:25, 27 April 2013 (UTC)[reply]
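For anyone who wants to check the bookkeeping, here it is as a few lines of Python; the bond energies are the approximate kcal/mol figures used above:

# Approximate bond-dissociation energies, in kcal per mole of bonds
E_HH = 104   # H-H single bond
E_OO = 119   # O=O double bond
E_OH = 110   # O-H single bond

# Burning hydrogen: 2 H2 + O2 -> 2 H2O
broken = 2 * E_HH + 1 * E_OO   # energy needed to break the reactant bonds
formed = 4 * E_OH              # energy released forming the O-H bonds
print("released by burning:      %d kcal per 2 mol of water" % (formed - broken))

# Electrolysis is the reverse reaction, so at least the same energy must be supplied:
print("needed to split it again: %d kcal per 2 mol of water (plus losses)" % (formed - broken))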

April 27

In the era of digital medical records, Quantified Self, home genomics testing, patient directed laboratory blood tests available at shopping malls and kiosks, and the upcoming Scanadu tricorder, it seems strange that government and medical associations would continue to stand in the way of progress, efficiency, and lower health costs. However, I have just discovered this seems to be the case in sleepy Hawaii, where, "According to State and Federal Regulations" a laboratory "must receive an order from your physician/health care provider before any laboratory tests can be performed". How many states is this still true for anyway and why are these backwards laws still on the books? Viriditas (talk) 05:56, 27 April 2013 (UTC)[reply]

Laws vary locally. In some states you can go to a laboratory and have many basic tests done for yourself. You may even be able to do it from Hawaii. http://www.walkinlab.com/?gclid=CLLAhqjg6rYCFa5DMgodgUkANw However, do not confuse insurance coverage or the specific lab policy with legal constraints; these are three different things. alteripse (talk) 11:34, 27 April 2013 (UTC)[reply]

Honestly, I don't think this is a science question, and I can't really give you very good information. But I honestly believe you're up against a racket, and that medical ethics is defined as whatever maximizes profits for the medical industry. In the continental U.S. it doesn't seem like any trouble to go to Quest Diagnostics for "screening" tests, as long as they're not "diagnostic" (they measure the same things, and are cheaper). From a search I can see that there are locations of this company in Hawaii - if you want, you could ask by phone if you can get an "omega health screen" there; if not, then they should be able to tell you the specific regulation that impedes them. Because of Hawaii's isolation preventing people from alternatives, I suppose their state legislature would be a particularly high-value target, but I don't know anything about their laws. Wnt (talk) 15:53, 27 April 2013 (UTC)[reply]
You can also do blood tests via mail, e.g. ZRT Labs does such tests. You just order a test on their website, and they then send you a blood spot card with needles. You have to put some drops of blood on the card, let it dry and send it back to them via post. Count Iblis (talk) 16:05, 27 April 2013 (UTC)[reply]
I'm curious to know why someone would feel the need to get their blood tested, in the absence of a doctor having called for it. ←Baseball Bugs What's up, Doc? carrots03:16, 28 April 2013 (UTC)[reply]
Diabetics do it all the time. Someone might want to cross-check the accuracy of a reading if they have reason to mistrust a specific lab or interpretation. Bielle (talk) 04:18, 28 April 2013 (UTC)[reply]
Yes, they use equipment to measure their own blood sugar. That kind of thing has been around for a while. And if they think something's wrong, they should see the doctor that's treating them already, not try to be their own physician. ←Baseball Bugs What's up, Doc? carrots04:53, 28 April 2013 (UTC)[reply]
That's the old way of doing medicine. The new way of doing medicine is for healthy individuals to monitor their health for any unusual problems and then to visit their physician with hard data showing the results of their health monitoring. In the long run, it saves everyone time and money and forces the patient to take control of their own health. Laws that prohibit patient testing in lieu of a physician are out of date. Viriditas (talk) 05:47, 28 April 2013 (UTC)[reply]
There are gazillions of different blood tests. I'd like to see some examples of the type of blood test you might ask for. And supposing you got it, what would you do with the results? Also, if anybody could just walk in off the street and ask for a blood test, by what percent would the staff need to be increased? ←Baseball Bugs What's up, Doc? carrots06:44, 28 April 2013 (UTC)[reply]

Are Chilean Wineberry seeds or seedlings available in the United States, specifically online or in California at all? It is also known as Maqui or Aristotelia chilensis. They are great antioxidants, and my Chilean grandfather was telling me about their anti-cancer effects and medicinal qualities. I know guarana and yerba mate and coca tea and other herbal products from South America are now available, so I was wondering if anyone knows if they are approved for import into the United States by the FDA or the department of customs and department of agriculture. If so, does anyone know where one may buy some seeds or a clone or a sapling? I am an avid homesteader and grow my own berries, avocados, grapes, lemons, oranges, potatoes, onions, carrots, lettuce, corn, tomatoes, basil, wild onions, artichokes, saffron, cannabis, roses, nasturtium, jasmine, peppermint, mint, oregano, thyme, agapanthus, carnations, calla lilies, gladiolas, dill, marjoram, beets, pumpkins, watermelons, alcayotas, cucumbers, mushrooms, and the list goes on, at home, in addition to foraging, fishing, and, hopefully when I have a hunting cabin, some hunting, trapping, logging, husbandry, bee hives and bat houses too. I would love to have this Chilean plant here in California and possibly sell it at a farmer's market as an exotic fruit such as lúcuma or cherimoya, both Chilean delicacies, cherimoya being grown in California now as well. Thank you for any info, or if you can lead me to the right place to ask. Thanks! Maybe I can send you a bottle of maqui (wine). Cheers. — Preceding unsigned comment added by 108.212.70.237 (talk) 07:06, 27 April 2013 (UTC)[reply]

I put "chilean wineberry usa" into Google, and two out of the first three results led me to growers' websites offering Maqui seeds for sale. Both appeared to be mainstream retailers publishing plenty of information about themselves; one was based in Oregon and the other in California, and both had email contact details on their websites. The answer to "are seeds available in the USA/CA?" would seem to be yes - I can't offer any information about the law in your jurisdiction, but contacting a potential supplier directly and asking them would probably be the logical next step. Karenjc 09:30, 27 April 2013 (UTC)[reply]
If you can manage coca and cannabis you must know more than we do about the legalities. You might want to try the Humanities desk about how DSHEA deals with wines or other alcohol-containing supplements, being careful to avoid requesting specific legal advice ... beyond the basic questions for orientation, this does sound like something where genuine expert consultation would be useful... Wnt (talk) 15:42, 27 April 2013 (UTC)[reply]

Static electricity powering my lights?

Each winter, the static electricity is not to be believed and thoughtlessly touching a light switch with the tip of one's finger can cause the loudest "snap" and quite painful shock for every few steps I take crossing a room. For the sake of simplicity, if the light switch is only powering a 100 watt bulb, am I helping to power that bulb, if even for an infinitesimal amount of time? Or is the static electricity zipping away from the light to some other source? Am I causing a power surge (even if tiny)? – Kerαunoςcopiagalaxies 18:39, 27 April 2013 (UTC)[reply]

Almost certainly you'll have created a (very brief) current flow to the ground wire...so probably not. Even if you did somehow discharge the electricity through the bulb, the amount of energy involved is in the realms of millijoules. To power a 100 watt bulb for one second requires 100 Joules of energy - so you'd be able to supply power to the bulb for a few millionths of a second...nowhere near enough time for it to heat up to the temperature required to emit light. That's not to say that this brief and tiny amount of power has no effects though. If you create a static discharge into a computer chip or other electronic device, even that tiny amount of power can destroy the device because of the huge voltages involved. SteveBaker (talk) 18:48, 27 April 2013 (UTC)[reply]
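A back-of-envelope version of that estimate, with the body capacitance and voltage as typical assumed values rather than measurements:

# Order-of-magnitude estimate of a human-body static discharge
C = 150e-12   # assumed body capacitance, farads (about 100-200 pF is typical)
V = 5000.0    # assumed charge voltage, volts (a clearly painful winter spark)

energy = 0.5 * C * V ** 2   # energy stored on the "body capacitor"
bulb_power = 100.0          # watts

print("spark energy: %.1f millijoules" % (energy * 1e3))
print("enough to run a 100 W bulb for about %.0f microseconds" % (energy / bulb_power * 1e6))

A couple of millijoules delivered over microseconds is nowhere near enough to warm the filament, which is the point made above.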
... you might like to avoid metal switches (try plastic) so that the static you generate has nowhere to discharge to (though you will still get a shock when you touch water taps). Adding humidity might allow the static to discharge naturally, or you might prefer to add a slow discharge circuit (experiment with a connection to earth through a resistor of many megohms). What sort of carpets do you have? They are probably generating the static. I have a similar problem with my car in dry weather (rare here). Dbfirs 19:00, 27 April 2013 (UTC)[reply]
Interestingly, all the switches here are plastic, and there aren't carpets where the static electricity is most abundant, just hardwood floors. I notice I discharge more static electricity when wearing nylon (or polyester?) wind/jogging pants (not sweat pants; the kind that make a sort of "zippy" sound as you walk), with less static electricity when wearing jeans. I do scrape along the floor with socks though. I can also discharge static electricity with the granite (or gneiss) countertop, which connects to the metal sink. The rooms with carpeting don't seem to create static electricity. Weird, eh? – Kerαunoςcopiagalaxies 20:28, 27 April 2013 (UTC)[reply]
Yes, nylon is known for generating static. I'm surprised that plastic switches discharge the static, but it must be tracking along the surface to the wires inside. Have you tried putting your hand on the wall to discharge slowly before touching the switch? Installing some humidification equipment (even just a dish of water on a radiator) might help. Dbfirs 20:48, 27 April 2013 (UTC)[reply]

Extending Kerαunoςcopia's idea, would it be realistic to build outdoors panels in a particular material, being able to accumulate static electricity simply from air friction (triboelectric effect)? Could then this electricity be retrieved? 141.30.214.203 (talk) 19:43, 27 April 2013 (UTC)[reply]

(edit conflict) Theoretically possible but, as SteveBaker mentioned above, the amount of current generated is very tiny, and harvesting a static charge at high voltage is tricky, so it is not a practicable option. Wind and sun can be used in more conventional ways to produce thousands of times as much power. I assume Jayron's reply (above) was a joke! The Van de Graaff generates static but runs on mechanical power (a handle or a motor). Dbfirs 19:59, 27 April 2013 (UTC)[reply]

Radiation hazards in drilling, mining and aviation

Hello everyone,

I'd like to know if the exposure to radiation for humans and/or equipment in the areas of oil drilling (presence of trace amounts of natural radionuclides), mineral and coal mining (naturally occurring radiation in the earth's crust) and aviation (cosmic radiation) is high enough to require monitoring with particle detectors or atomic emission spectroscopy.

Thanks in advance, 141.30.214.203 (talk) 19:27, 27 April 2013 (UTC)[reply]

Wow. That was one hell of a question. Let us break it down. Yes, both mineral oil and natural gas will come out of the lower strata with radionuclides. Natural gas is probably the most important in this context, because in processing for the domestic market the radioactive components can get concentrated in parts of the gas processing equipment. This is known about. Thus, radon etc. can be vented so that the domestic gas supply poses no radiation hazard. Other interesting gases like helium can also be recovered, but I don't think this is what the OP is asking.--Aspro (talk) 19:50, 27 April 2013 (UTC)[reply]
As for mining, I would have thought that minerals other than coal would have a greater need for radionuclide assay of their spoil. The percentage of thorium, for instance, can increase as the sought-for elements are removed from the ore. Yet don't panic. Gas mantles are impregnated with the same radioactive thorium. OK, unless you spend your life sleeping on a mattress stuffed with gas-mantle fabric there is no health hazard. A thorium gas mantle, however, can make a Geiger counter buzz like a hive of inquisitive bees. Go round a very old house that once had gas lighting with a Geiger counter and you can find where the original gas lamps were situated. Oh, I have forgotten what your question was.--Aspro (talk) 20:15, 27 April 2013 (UTC)[reply]
As for aviation, Our article radiation concerns may answer it.--Aspro (talk) 20:15, 27 April 2013 (UTC)[reply]
Oh yes, back to your question. These industries you mention monitor radiation, not because it's 'high enough' but to ensure that it doesn't become high enough to matter. Even scrap metal yards have scintillators now to ensure that radionuclides don't end up in your recycled cast iron patio-chairs. --Aspro (talk) 20:44, 27 April 2013 (UTC)[reply]

I was previously the Radiation Safety Officer at a large fly-in fly-out mining operation in Australia. There was essentially no radiation hazard from the minerals we were mining (sphalerite, pyrite, galena, silica, chalcopyrite, etc). The three major sources of radiation were (in order of the doses received by the workers); cosmic radiation from flying to and from work, ionising radiation from the industrial radiography instruments used on site (density gauges, XRF, multi-stream analysers), and radiation from the X-ray scanners and explosives detectors used at airport security screening. All of these doses had to be calculated from known source strengths, distance from the source, and time spent near them, as direct measurement with the TLD badges was impossible due to the extremely small levels of radiation. 202.158.112.87 (talk) 01:30, 28 April 2013 (UTC)[reply]

Atomic emission spectroscopy is used to determine the chemical composition of a substance. It's not generally useful as a radiation dosimeter. As for the radiation from cosmic rays on aircraft, this link has a decent amount of measured data. A typical dose on a transoceanic flight is 0.1 millisievert. That's far higher than what you get by spending the equivalent amount of time on the ground, but it's around the dose from a skull X-ray and several times lower than the dose from a CT scan. --Bowlhover (talk) 06:03, 28 April 2013 (UTC)[reply]

Do sea horses really 'burp' in order to change their buoyancy?

My 5-year-old son asked, after watching an episode of Octonauts, do sea horses really 'burp' in order to change their buoyancy? I looked at our article on sea horses and found no answer there.82.31.133.165 (talk) 23:38, 27 April 2013 (UTC)[reply]

Sea horses are fish. So they have swim bladders. Therefore, ergo, etc., if they need to lose buoyancy, the gas has to escape from somewhere. Burp! --Aspro (talk) 00:05, 28 April 2013 (UTC)[reply]
That's not common in salt-water fish, they often have closed air bladders which adjust only through blood absorption and release. (I.e., there's no direct connection from the bladder to the esophagus.) We'll need a better source than an animated children's TV show. μηδείς (talk) 00:11, 28 April 2013 (UTC)[reply]
Don't try to confuse the issue. Those are demersal fish. Sea horses inhabit shallower waters – so what I said still goes. The pressure differential per foot diminishes as one goes deeper. (Partial pressures come into this as well, but let's not complicate things.) That is why divers (of the human type) pause during their ascent, staying below the surface for a while, to let the nitrogen diffuse out of their blood. Water pressure increases by about half a pound per square inch for each foot of depth, so when feeding time comes at the aquarium, little Nemo, who swims up from the bottom of a two-foot tank and opens his mouth in anticipation of those lovely mealworms you are about to feed him, lets out two or three bubbles. Those are the burps!--Aspro (talk) 00:49, 28 April 2013 (UTC)[reply]

April 28

Why don't overweight people get treated with Adiponectin?

I wonder if it could help in some cases. Thanks. Ben-Natan (talk) 02:20, 28 April 2013 (UTC)[reply]

It's a protein, and proteins can't be taken by mouth because the stomach breaks them down. And even if somebody wants to go through the ordeal of injecting it into the bloodstream, it might be hard to maintain a steady level that way. In short, you would probably need something like an IV drip to administer it. Looie496 (talk) 02:44, 28 April 2013 (UTC)[reply]
Adiponectin#Pharmaceutical therapy lists some of the problems that would need to be overcome. Red Act (talk) 02:45, 28 April 2013 (UTC)[reply]
Actually I have just removed that whole section from the article. The text was written in 2004 (although the refs were added later), and although it was clearly written using the literature, it is equally clear that the person who wrote it did not understand what they were reading. Looie496 (talk) 03:44, 28 April 2013 (UTC)[reply]

Question about progress of time in world line?

1- Less than speed of light 2- At speed of light 3- Greater than speed of light — Preceding unsigned comment added by 75.152.167.86 (talk) 04:45, 28 April 2013 (UTC)[reply]

4. None of the above. Time does not have a speed. It might be easier to give a useful answer if you would tell us what the question is. Looie496 (talk) 04:59, 28 April 2013 (UTC)[reply]
Speed, or let's say velocity, is a function of distance and time. Something like v = d / t (example: 120 miles in 2 hours = 60 miles per hour). Which can also be expressed as t = d / v. That is, time is a function of distance and velocity (example: 120 miles at 60 miles per hour = 2 hours). Time doesn't have a standalone velocity. ←Baseball Bugs What's up, Doc? carrots06:57, 28 April 2013 (UTC)[reply]

Heat capacity

Background:

"Early biophysical experiments revealed that the rate of induced mutations in the genetic material of cells is proportional to their radiation exposure. Extrapolating to the smallest possible dose, it might be concluded that even a single photon of UV light has the ability to cause genetic damage to the skin cell.
...An ultraviolet photon carries an equivalent energy of about 10 eV.
You hear a proposal for the damage mechanism: that the photon delivers the energy into a volume the size of a cell nucleus and heats it up, and the increased thermal motion knocks the chromosome apart in some way.
(b) Estimate the temperature rise and hence describe why or why not you think this is a reasonable proposal."

What is the mean volumetric heat capacity of a skin cell nucleus? Could I substitute the volumetric heat capacity of water? What is the most important factor to consider to answer the second part of the question? Plasmic Physics (talk) 06:44, 28 April 2013 (UTC)[reply]

This is a not uncommon question in undergrad physics these days, and a good one, as the photons may actually be 1800 MHz electromagnetic radiation from a cell phone, instead of your ultraviolet, penetrating your brain - some idiots think cell phones cause brain tumours.
The average density of animal flesh is about that of water, as most of it IS water, so it is reasonable to use the specific heat of water - per kilogram or per litre, the numbers are about the same, since a litre of water has a mass of about 1 kg. Googling "density of flesh" results in pages quoting all sorts of unlikely values, but it is easy to spot common errors, e.g. not allowing for the lower density of bones, air in lungs etc. However, a site I find usually reliable is Engineering Toolbox - it gives the specific heat of human flesh as 3470 J/kg.K vs 4178 J/kg.K for water at 40 deg C.
What is important in the second part is that to cause chromsome damage, there must be sufficient energy to break a chemical bond - you need to know this bond energy and estimate what fraction of photons of sufficient energy penetrate the nucleus and hit DNA, DNA being by volume a fraction of the nucleus volume, and taking into account transparency of cell plasm.
In estimating the effects of cellphone radiation, most people assume the human head 100% absorbs all radiation emitted in a direction so as to enter the head - estimating the temperature rise is then trivially simple, as the heat is carried away by the blood stream (thermal capacity assumed to be roughly that of water) flowing at a known rate (and relating the resulting miniscule temperature rise, the inevitable conclusion is that cellphones simple do not have enough power to be a problem)
However you do the claculation, the actual probability of permanent damage or a tumour is very very much less as each cell has effective mechanisms to repair DNA, and faults that do result in rogue cells are usually dealt with by cell mediated mitosis, most that get past secrete odd things that trigger by inter-cell mediated mitosis, and on top of that the body has further means of supressing tumours, which can't grow anyway beyond a few mm unless they figure out how to secrete a substance that brings in blood vessels.
Wickwack 121.221.214.246 (talk) 09:18, 28 April 2013 (UTC)[reply]
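A rough Python sketch of that cellphone estimate; the absorbed power and blood flow rate below are assumed round numbers for illustration, not measured values:

    # Steady-state temperature rise of blood carrying away all the RF power
    # a phone could deposit in the head. Assumed, order-of-magnitude values only:
    phone_power_w = 1.0            # assume ~1 W absorbed, a generous upper bound
    blood_flow_l_per_min = 0.75    # assumed cerebral blood flow, ~750 mL/min
    c_water = 4186.0               # J/(kg*K), specific heat of water
    rho_blood = 1000.0             # kg/m^3, treating blood as water

    mass_flow_kg_per_s = blood_flow_l_per_min / 1000.0 * rho_blood / 60.0
    delta_T = phone_power_w / (mass_flow_kg_per_s * c_water)
    print(delta_T)   # roughly 0.02 K - the minuscule rise the post describes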
That being said, this question deals with a single photon, with a known energy, and the nucleus of a cell, as opposed to the whole cell. Plasmic Physics (talk) 09:59, 28 April 2013 (UTC)[reply]
I've covered that, but in a roundabout way without key data, as I think you've posted a homework question. Normally I would have responded "Do your own homework" to this not-uncommon question, but I know you make lots of good contributions to Ref Desk and know a thing or two, so I've compromised. Wickwack 124.182.140.226 (talk) 10:42, 28 April 2013 (UTC)[reply]
I'm not asking for the question to be done for me, so it doesn't really fall under that category. I'm not familiar with cellular biology, which is why I'm getting a bit lost. Does that mean that I can substitute the HC of water for the HC of the nucleus? I thought you were referring to the whole cell. It just sounds to me like you've made the problem too complicated by introducing too many variables. It seems that, by the nature of this question, assuming perfect conditions, I must figure out what the probability is that the increase in temperature causes damage. How do I relate those two values? I don't even know which bond is being broken, or how to incorporate the value into the relation. Plasmic Physics (talk) 11:13, 28 April 2013 (UTC)[reply]
OK - so first of all, you need to calculate the volume (and from that, the mass) of a nucleus. You then need to translate the photon energy into SI units. You then use the figure for the specific heat of flesh (or water) to calculate the amount that the nucleus will be heated by the photon. You then compare that with normal temperature variations in living creatures to ascertain whether chromosomal damage will be likely. HTH. --Phil Holmes (talk) 11:59, 28 April 2013 (UTC)[reply]
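A minimal Python sketch of those steps, under stated assumptions: the nucleus size is an assumed typical value of about 6 μm, water's density and specific heat stand in for the nucleus, and the 10 eV photon energy is the figure from the problem.

    import math

    # Assumed, typical values - not figures from the textbook problem itself:
    d_nucleus = 6e-6            # m, assumed nucleus diameter (~6 micrometres)
    rho = 1000.0                # kg/m^3, treating the nucleus as water
    c = 4186.0                  # J/(kg*K), specific heat of water
    E_photon = 10 * 1.602e-19   # J, a 10 eV UV photon

    volume = (math.pi / 6.0) * d_nucleus ** 3   # volume of a sphere of that diameter
    mass = rho * volume                         # ~1.1e-13 kg
    delta_T = E_photon / (mass * c)             # rise if all the energy stayed in the nucleus
    print(delta_T)   # ~3e-9 K, far below normal temperature variation in living tissue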
It's not as simple as that. Unstated in Phil's approach is an assumption that the photon is captured somewhere in the nucleus, which is mostly NOT chromosome material. The heat energy obtained by capturing the photon spreads out through the nucleus and is largely lost to the rest of the cell and the tissue mass generally. This means that the probability of a photon breaking a DNA chemical bond by being absorbed in the nucleus generally is very low, but the probability of a photon breaking a DNA chemical bond by actually striking a DNA molecule is very considerably greater.
The idea of comparing the nucleus temperature rise with the range otherwise experienced naturally is a good one, but unfortunately it may not be valid - that depends on the energy required to break DNA chemical bonds and the fraction of nucleus volume occupied by DNA. So, yes, you do need to know the energy required to break bonds. It may turn out that the probability of a bond break from a direct strike is low enough to ignore, but I think you need to show that. However, calculations I've seen that estimate the probability of chromosome damage from cellphone radiation, which are based on knowing the bond energy, assume it is negligible anyway. Note that except for skin, the extremities of limbs, and the testes, tissue temperature is held to a very precise value (in humans, 37.4 ± 0.5 deg C as I recall) except in illness.
Wickwack 121.221.81.83 (talk) 12:23, 28 April 2013 (UTC)[reply]
OK, so back to square two. I'll assume that the genetic material is at the same temperature as the nucleus, because of thermal equilibrium; and I'll assume that the nucleus is a closed system because of the timescale of this scenario. Is that not a reasonable method? Plasmic Physics (talk) 12:50, 28 April 2013 (UTC)[reply]
Perfect! Thanks. Plasmic Physics (talk) 12:08, 28 April 2013 (UTC)[reply]
It's strange though: denaturation (which I assume is a key factor) is only introduced much later in the textbook. Plasmic Physics (talk) 12:12, 28 April 2013 (UTC)[reply]

Relativity: Moving faster than light

An electron is approaching the Earth with a velocity very close to that of light [e.g., its velocity is c - g (x 1 sec)], while the Earth's gravitation accelerates the electron with an acceleration which does not decrease (and even increases) as the distance between the Earth and the electron gets shorter. What prevents the velocity from crossing that of light? Please note that the acceleration caused by the gravitational force is not influenced by the electron's (increasing relativistic) mass. HOOTmag (talk) 06:45, 28 April 2013 (UTC)[reply]

Keep in mind that the gravitational effect from the earth diminishes with distance. I'm thinking it's an inverse-square ratio, but I don't recall for sure. But the electron (or whatever particle) would be going so fast that it wouldn't be within the earth's gravitational pull long enough for there to be any measurable effect. And I hasten to add that I'm just trying to reason this thing - I'm sure there's a more rigorous explanation - beyond just the axiom that nothing can exceed light speed. ←Baseball Bugs What's up, Doc? carrots07:12, 28 April 2013 (UTC)[reply]
Yes, it's an inverse-square ratio, but the gravitational effect from the earth increases as the distance gets shorter. Anyways, this effect remains rather constant (approx. g) - at sea level. HOOTmag (talk) 08:22, 28 April 2013 (UTC)[reply]
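For reference, a small sketch of that inverse-square behaviour using the standard constants:

    G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
    M_earth = 5.972e24  # kg
    R_earth = 6.371e6   # m

    def g_at(r):
        """Gravitational acceleration a distance r from Earth's centre (inverse square)."""
        return G * M_earth / r ** 2

    print(g_at(R_earth))      # ~9.8 m/s^2 at sea level
    print(g_at(2 * R_earth))  # ~2.5 m/s^2 one Earth radius up: weaker with distance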
Correct me if I'm wrong, but isn't an object's acceleration due to gravity a consequence of its increase in energy? If that is so, then there is more than one way for an object to exhibit such an increase in energy, other than increasing velocity. Instead of increasing in velocity, the electron could simply be getting hotter. Plasmic Physics (talk) 07:18, 28 April 2013 (UTC)[reply]
Yes, the electron gains energy from gravity, and this energy goes to increase the momentum and energy of the electron, but only some of the increase appears as an increase in velocity. The rest appears as an increase in "relativistic mass". As the velocity approaches the speed of light, less of the momentum increase appears as velocity, and more appears as "mass". Another way of looking at the situation is to say that as the velocity approaches that of light, it takes more and more energy to produce a small increase in velocity. Please note that this is only a rough layman's view. No doubt an expert will be along soon to give a more formally-correct explanation. Dbfirs 07:48, 28 April 2013 (UTC)[reply]
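A small numerical illustration of that point (a sketch only; the constants are the standard electron rest mass and speed of light):

    import math

    c = 2.998e8       # m/s, speed of light
    m_e = 9.109e-31   # kg, electron rest mass

    def momentum(v):
        """Relativistic momentum p = m*v / sqrt(1 - v^2/c^2)."""
        return m_e * v / math.sqrt(1.0 - (v / c) ** 2)

    # Each step closer to c needs a disproportionately larger momentum (and energy):
    for frac in (0.9, 0.99, 0.999, 0.9999):
        print(frac, momentum(frac * c))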
Darn, I forgot about RM. Embarrassing, considering I studied cosmology. Plasmic Physics (talk) 07:56, 28 April 2013 (UTC)[reply]
I guess the old adage is applicable: "If you don't use it, you lose it". Plasmic Physics (talk) 07:57, 28 April 2013 (UTC)[reply]
Yes, the electron's mass increases as its velocity increases, but the gravitational force increases as the electron's mass increases, so the acceleration is not influenced by the electron's mass and remains rather constant (approx. g, at sea level), and even increases as the distance between the electron and the Earth gets shorter; so I don't see how your comment explains what prevents the electron's velocity from crossing that of light. HOOTmag (talk) 08:22, 28 April 2013 (UTC)[reply]
No, an object's acceleration due to gravity is a consequence of the curvature of spacetime, which in turn is due to the energy distribution of the universe (as represented by the stress-energy-momentum tensor). Temperature has no meaning for a single electron. For a large collection of particles, temperature represents the kinetic energy--i.e. the velocities--of said particles. --Bowlhover (talk) 08:21, 28 April 2013 (UTC)[reply]
I was using the very loose meaning of temperature (random thermal motion, fluctuations in its energy distribution among its degrees of freedom). Plasmic Physics (talk) 09:12, 28 April 2013 (UTC)[reply]
FYI, 'g' does not stand for velocity, but acceleration due to gravity at sea level. Plasmic Physics (talk) 07:18, 28 April 2013 (UTC)[reply]
I didn't say that g is a velocity; g is the gravitational acceleration at sea level. However, if the electron is at sea level and its velocity is c - g (x 1 sec), then it's expected to cross that of light within one second. HOOTmag (talk) 08:22, 28 April 2013 (UTC)[reply]
If you're not saying that 'g' is velocity, then why are you attempting to subtract it from a velocity? You can't take three apples away from five oranges. Plasmic Physics (talk) 08:50, 28 April 2013 (UTC)[reply]
I was using c and g as pure numbers. Anyways, thanks to your comment, I've just replaced the expression "c - g" by "c - g (x 1 sec)". HOOTmag (talk) 09:26, 28 April 2013 (UTC)[reply]
Let's forget gravity, because that brings in general relativity, and replace it with a constant force pushing the particle. That way, we can answer the question in a purely special relativistic framework.
In layman's terms, when you exert force on an object, you give it momentum. In special relativity, the momentum of an object approaches infinity as its speed approaches the speed of light. So you can never get it to the speed of light, because you can never give an object infinite momentum.
To see what happens to an accelerating object: by definition, force is equal to the time derivative of momentum, F = dp/dt.
In special relativity, the momentum of an object is equal to p = γmv, where γ = 1/√(1 − v²/c²).
Without loss of generality, let's assume that the momentum, force, and velocity are all in the x direction. Then we can take the time derivative of p, and after some algebra, find that F_x = γ³ma, where a is the acceleration.
So the closer v is to c, the higher F_x needs to be to achieve a given acceleration. As v approaches c, F_x needs to be infinitely large to achieve a given acceleration, or alternatively, the acceleration would be close to 0 for any force. --Bowlhover (talk) 08:21, 28 April 2013 (UTC)[reply]
I haven't been talking about a constant force, because the acceleration caused by any constant force decreases as the electron's (relativistic) mass increases. I've been talking about the gravitational force, for which the acceleration it causes is not influenced by the electron's (increasing relativistic) mass. HOOTmag (talk) 08:36, 28 April 2013 (UTC)[reply]
Obviously, the problem is that your assumption is false. The acceleration caused by gravity does depend on the relativistic mass. Or more precisely, the apparent acceleration due to gravity, expressed as the change in velocity of the particle in the rest frame of an inertial observer, appears to decrease as the speed of the particle approaches the speed of light. For a particle at rest it is certainly true that F = mg = dp/dt = ma, where m is the mass, p is the momentum, a is the acceleration, and g is the gravitational field. However, only the second equality holds in the relativistic limit. Specifically, if you start in the frame of the accelerating particle then mg = dp'/dt' = ma', so in its own frame the particle simply accelerates at g.
However, in the inertial frame of the lab the right hand side becomes dp/dt = d(γmv)/dt = γ³m(dv/dt).
The left hand side stays mg, as before.
Setting these two pieces equal tells us that, given a constant gravitational force, the apparent acceleration measured in an inertial frame is not constant, but rather a = dv/dt = g/γ³ = g(1 − v²/c²)^(3/2).
Hence the effect of a constant gravitational field on relativistic particles is what one would expect at low speeds (a ≈ g), but decreases towards zero as the speed of the accelerated particle approaches the speed of light. Dragons flight (talk) 11:30, 28 April 2013 (UTC)[reply]
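A small sketch of the consequence, stepping an electron forward in time with that expression for the apparent acceleration (g is treated as constant and the starting speed is the one from the question):

    c = 2.998e8      # m/s, speed of light
    g = 9.81         # m/s^2, taken as constant for the illustration

    v = c - g * 1.0  # starting speed from the question: c minus (g x 1 second)
    dt = 1.0         # s, time step
    for _ in range(10000):                      # ten thousand seconds of "falling"
        a = g * (1.0 - (v / c) ** 2) ** 1.5     # apparent acceleration in the lab frame
        v += a * dt

    print(c - v)  # still about 9.81 m/s below c: the gap shrinks, but never closes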

What species of bird is this?

http://commons.wikimedia.org/wiki/File:Seabird,_Elwood_Beach.JPG Flaviusvulso (talk) 09:47, 28 April 2013 (UTC)[reply]

Data capacity of vinyl record

How much data capacity (in MB equivalent) is there on a vinyl LP record? 86.179.119.114 (talk)