Wikipedia:Reference desk/Science
Welcome to the Science section of the Wikipedia reference desk.
Main page: Help searching Wikipedia
How can I get my question answered?
- Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
- Post your question to only one section, providing a short header that gives the topic of your question.
- Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
- Don't post personal contact information – it will be removed. Any answers will be provided here.
- Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
- Note:
- We don't answer (and may remove) questions that require medical diagnosis or legal advice.
- We don't answer requests for opinions, predictions or debate.
- We don't do your homework for you, though we'll help you past the stuck point.
- We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.
How do I answer a question?
Main page: Wikipedia:Reference desk/Guidelines
- The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.
March 1
Error correction for analog signals
Is there a way to use error correction for analog signals to remove noise? I.e., is it possible to trade bandwidth for noise, as you can with digital error-correcting codes?
I am aware of Dolby noise reduction for audio tapes, and spread spectrum, but neither one removes noise completely.
Or in other words, given a specific level of environmental noise, with analog is it possible to have zero noise in the final signal like you can with digital? Ariel. (talk) 02:34, 1 March 2013 (UTC)
- Unlike a digital signal, analog signals are not quantized. If we define the noise as the error between the desired signal and the received signal - then the analog domain means that you can produce an arbitrarily-small but still-measurable noise-signal; for any sufficiently-small noise, a sufficiently-more-sensitive measurement can detect it. Contrast this to the digital domain; the minimum error for any given sample is exactly one bit; anything smaller than exactly one bit is defined as zero. So, it's easier to have a "perfect reconstruction" for a digital signal; you can't make your measurement more accurate than the actual signal. (A short numerical sketch of this quantization floor follows this reply.)
- Now that this definition is cleared up, we can answer your first question. Analog signal processing is full of tricks to reduce noise: this is summarized in the power/gain/bandwidth product. If you spend more power, you can achieve lower noise. If you use more bandwidth, you can achieve lower noise.
- Some tricks that analog signal processors use: you can double the amount of signal by creating a differential pair. This adds the benefit of redundancy and also takes advantage of the electromagnetic propagation properties of coupled differential elements.
- You can double the bandwidth; compare single side band, which uses less bandwidth, to dual side-band, where redundant information can be used to improve signal-to-noise ratio. In principle, an arbitrary amount of redundancy can be added to a transmission by modulating at multiple frequencies, and constructing a special radio-receiver that down-mixes and combines each signal. This is rarely done in practice; there are better ways to make use of available spectrum. Frequency modulation in the analog domain is one such technique; an arbitrarily-large frequency spectrum can be used to frequency-modulate a signal of a given input bandwidth; here, SNR is traded against spectrum availability.
- Typically, the best way to reduce noise in the analog domain is to prevent it in the first place. Modulate a signal so that it operates in the best regime that your actual devices work at: this is the premise for the superheterodyne and the multiple intermediate frequency stages on some complex radios. Select better parts: lower noise amplifiers are made out of higher-quality semiconductor. Better tolerances on passive components, and more stringent filtering of parasitics, all make for a better analog signal. Nimur (talk) 05:38, 1 March 2013 (UTC)
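A minimal numerical sketch (Python; the waveform, bit depths and sample count are arbitrary illustrative choices, not anything from the discussion above) of the one-bit quantization floor mentioned at the start of the reply above:

```python
import numpy as np

# Illustrative only: quantization puts a hard floor on how closely the digital
# samples can follow the analog waveform, however carefully the samples are measured.
t = np.linspace(0.0, 1.0, 10_000)
analog = np.sin(2 * np.pi * 5 * t)                 # "analog" reference signal
for bits in (4, 8, 12):
    scale = 2 ** (bits - 1) - 1                    # largest code for a signed n-bit sample
    digital = np.round(analog * scale) / scale     # quantize to n bits, then rescale
    rms_error = np.sqrt(np.mean((digital - analog) ** 2))
    print(bits, rms_error)                         # the floor shrinks by roughly 6 dB per extra bit
```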
- Thank you. I was trying to understand what makes digital immune to noise, and now I see the unavoidable noise is in the quantization, so digital is not in fact immune to noise - you just see it elsewhere. How do you do the resolved checkbox? 17:04, 1 March 2013 (UTC)
I know nothing about this subject and first learned electronics a few hours ago, so this is just speculation: but can't we say that analog signals certainly are quantized in one respect: no electrons passing means exactly no electrons passing, not an arbitrarily small amount. Thus a completely open circuit, or a capacitor that is charging, might literally let nothing through. The question did not say that what ISN'T noise has no added tolerance due to the noise reduction - so if you include something like a capacitor in the circuit, it might still qualify. Therefore, it seems to me quite likely that if you know something about the characteristics of what the signal is supposed to be, you can indeed reduce noise. For example, imagine a microphone that changes voltage between 1-5 volts depending on the sound (totally arbitrary and probably wrong numbers). Well, if you know that a voltage of 1-2 represents physical hertz that do NOT exist in your environment, you are guaranteeing that there is no signal there. You can then "trade" that extra bandwidth for lower noise by dropping it to 0. Let's go further: if you know half of your bandwidth is empty, can't you broadcast twice in analog, reverse one of the waves, and then filter out the noise that is outside the "0" of combining the waves? If they are in sync, aren't tricks like this possible totally in analogue? This seems to me certainly to be bandwidth-for-noise: you can trade half your bandwidth (one of two lines, for example) to broadcast twice and get a better signal. As I said, I know nothing about this subject and first learned any electronics a few hours ago. 91.120.48.242 (talk) 07:56, 1 March 2013 (UTC)
- Re: "can't we say that analog signals certainly are quantized in one respect: no electrons passing means exactly no electrons passing, not an arbitrarily small amount", that's not how it works. An analog electrical signal in a wire doesn't just have 0 or 1 electrons. It can have 0.9999999 or 1.0000001 electrons per unit of time. Of course, being a statistical property, it takes time to gather enough samples to tell those two apart. This too breaks down when you get to the point where the quantum nature of time matters. --Guy Macon (talk) 08:36, 1 March 2013 (UTC)
- Can you really say that about a completely open circuit though? This is akin to saying that RAM is not really digital, because in fact the bit can be flipped by cosmic rays before it's read - so it really doesn't have a value of "1" but more like 0.99999999999999995487 to 0.999999999999999999871 and it doesn't really have a "0" so much as 0.00000000000000000442 - 0.00000000000000079325. I think that's a pretty thin/ridiculous argument though. An open circuit really is a digital 0, and a bit really is a digital 0 or 1. It's not analogue by any stretch of the imagination. 91.120.48.242 (talk) 14:48, 1 March 2013 (UTC)
- Voltage is not quantized. A circuit doesn't work just from the number of electrons moving, but also from how "hard" they move (voltage). Ariel. (talk) 17:04, 1 March 2013 (UTC)
- Again, except for no amps. An open circuit has no amps through it, and it doesn't make sense to talk about how hard the electrons AREN'T moving across it. 86.101.32.82 (talk) 19:27, 1 March 2013 (UTC)
- Sure you can. That's what voltage IS! It's the potential difference in an open circuit. You can't measure voltage in a closed circuit - the voltage difference across a wire is zero (ignoring the tiny resistance in the wire). You can only measure voltage when there is resistance to current, and an open circuit is the ultimate resistance. Ariel. (talk) 19:36, 1 March 2013 (UTC)
- See Johnson noise, incidentally. To completely remove _any_ analogue noise, you'd need to cool the device to absolute zero. Tevildo (talk) 20:55, 1 March 2013 (UTC)
Re: Re: "can't we say that analog signals certainly are quantized in one respect: no electrons passing means exactly no electrons passing, not an arbitrarily small amount". See shot noise; the quantized nature of current does actually produce noticeable noise in some amplifiers when very low-level signals are being processed. Re "digital is immune to noise": this is, in fact, not correct. Random noise, even at very low levels, has a small, but finite, probability of causing a digital error. Over a short message with low noise on the channel and/or good error correction there is a good chance that the message will be received uncorrupted. However, over a long enough message (or many short messages) there will eventually be a sufficient number of errors to overwhelm the error correction and part of the message will be lost. This is because there will always be a finite limit to the number of bits the error correction algorithm is able to deal with. An analogous process to digital error correction can indeed be achieved with analog signals. One simple way of doing this is to send multiple copies of the same message (either in space over different routes or in time over the same route). In the presence of random noise the copies of the signal will add in proportion to N, but the noise will add only in proportion to √N (because it adds or subtracts randomly), and hence the signal-to-noise ratio increases in proportion to √N. SpinningSpark 21:28, 1 March 2013 (UTC)
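A minimal simulation sketch of that averaging argument (Python; the test waveform, noise level and copy counts are arbitrary choices):

```python
import numpy as np

# Illustrative only: averaging N independently-noisy copies of the same analog
# signal improves the amplitude signal-to-noise ratio roughly in proportion to sqrt(N).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 2_000)
signal = np.sin(2 * np.pi * 3 * t)

def snr_after_averaging(n_copies, noise_rms=1.0):
    copies = signal + rng.normal(0.0, noise_rms, size=(n_copies, t.size))
    residual = copies.mean(axis=0) - signal
    return np.sqrt(np.mean(signal ** 2)) / np.sqrt(np.mean(residual ** 2))

for n in (1, 4, 16, 64):
    print(n, snr_after_averaging(n))   # SNR roughly doubles each time N is quadrupled
```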
- On the other hand, the Shannon limit establishes that however noisy the digital channel is, there will still be a data rate that you can push through it with vanishingly small error probability, if you use sufficient error correction. See also channel capacity for a worked example, and Shannon–Hartley theorem. Jheald (talk) 21:40, 1 March 2013 (UTC)
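For reference, a quick sketch of the Shannon–Hartley formula just linked (the 3 kHz bandwidth and 30 dB SNR are purely illustrative figures):

```python
import math

# Shannon-Hartley: the maximum error-free data rate C (bit/s) for a bandwidth B (Hz)
# and a signal-to-noise power ratio SNR is C = B * log2(1 + SNR).
def capacity(bandwidth_hz, snr_power_ratio):
    return bandwidth_hz * math.log2(1 + snr_power_ratio)

print(capacity(3_000, 10 ** (30 / 10)))   # ~29,900 bit/s for a 3 kHz channel at 30 dB SNR
```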
When dimensional analysis fails
Can someone give an example where a dimensional analysis/order-of-magnitude estimate gives a very wrong answer?
This isn't a homework exercise or anything. Someone used an order of magnitude estimate to arrive at a conclusion that I think is totally wrong, and I'd like to substantiate my belief that these kinds of estimates are far from fool-proof. 65.92.4.236 (talk) 05:02, 1 March 2013 (UTC)
- I think I gave an example way back: in this March 2009 discussion, I intentionally abused dimensional analysis to show that a 60 mph car is also traveling at 3.99×10^-4 ... (no units). This was intended to illustrate that if you use invalid analysis, you can make a valid equation that is totally meaningless. Surprisingly, another reference-desk regular used a different and purportedly better method of analysis to show that the 60 mph car is in fact moving at 8.947×10^-8 ... evidently, my failure to use the correct universal constant caused my calculation to err by four orders of magnitude... but there's no point in re-opening a four-year-old debate! Nimur (talk) 05:17, 1 March 2013 (UTC)
- Trying to correct Dauto or anyone on this one would be maddening, since there is always someone wrong on the internet. OsmanRF34 (talk) 13:36, 1 March 2013 (UTC)
- That discussion back in 2009 was not about dimensional analysis. Dauto (talk) 20:14, 1 March 2013 (UTC)
- If you re-read that discussion in entirety, you will find that on 4 March 2009, you referenced the Wikipedia article on dimensional analysis to support your point. Regardless of the point, or its merit, do you still believe that the discussion was not about dimensional analysis? Nimur (talk) 23:00, 1 March 2013 (UTC)
Interesting discussion! I side with Dauto: in natural units you don't throw away any information whatsoever, as the choice of units is completely arbitrary and physically completely irrelevant. In fact you can still do "dimensional analysis" because, as I explained below, this is fundamentally nothing more than using a scaling argument. I also explain this here, where I consider the classical limit of special relativity in natural units. Count Iblis (talk) 23:19, 1 March 2013 (UTC)
- If you read the past RD discussion you'll see that one thing is to exchange different equivalent units (like meters or inches or miles), and another is to convert from meters to seconds! 00:25, 2 March 2013 (UTC)
- Well yes, I can "analyze" the gravitational acceleration on Earth to be 2.1512, by pulling a number out of my ass. That's also an invalid dimensional analysis, and also doesn't count as an example to the OP's question.
- I don't know what "very wrong" is supposed to mean, but the power radiated by Hawking radiation has a constant factor of 15360π, so a dimensional analysis result would be wrong by a factor of 48,000. Is that wrong enough? --140.180.255.158 (talk) 10:24, 1 March 2013 (UTC)
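To make that comparison explicit, a sketch using the standard form of the Hawking power law (dimensional analysis fixes the combination of constants but not the dimensionless prefactor):

```latex
P_{\text{dimensional}} \sim \frac{\hbar c^{6}}{G^{2} M^{2}},
\qquad
P_{\text{Hawking}} = \frac{\hbar c^{6}}{15360\,\pi\, G^{2} M^{2}},
\qquad
\frac{P_{\text{dimensional}}}{P_{\text{Hawking}}} = 15360\,\pi \approx 4.8 \times 10^{4}.
```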
- I am not sure that is really such a case; there is always an arbitrary constant, which could be a lot bigger than that (Avogadro's number for example). No one claims the constants will all be 1. I think dimensional analysis fails in general when a problem depends on any set of variables which of themselves can form a dimensionless group. So choose a problem with two lengths, for example, or a length, a speed and an acceleration. How much water will flow down an X m long pipe with a diameter of d and a head of pressure of h? The subtler forms of failure are when you do not spot the additional variable on which the problem depends (viscosity, density, gravity, pressure etc). --BozMo talk 10:44, 1 March 2013 (UTC)
Dimensional analysis is nothing more than using a scaling argument. But most people don't know this, and then they get surprised if it doesn't work. Any dimensionally incorrect equation can be fixed and made to be dimensionally correct as follows. If y = f(x1,x2,x3,...xn), where y, x1, x2, etc. are physical quantities which can have incompatible dimensions, and f is some arbitrary function that computes y from the xi in an illegal way (e.g. by adding up a length and a time, while y has the dimension of mass, so that it doesn't make any sense whatsoever), then you can fix this problem by dividing each xi by the Planck unit for its dimension (which is always just a combination of hbar, G and c). This has the effect of making f(x1,x2,...,xn) dimensionless. And then you multiply the function f by the Planck unit for y.
E.g. suppose we postulate the equation:
T = L^2
for the period of a pendulum. To make this dimensionally correct, you have to divide L^2 by the square of the Planck length and multiply by the Planck time, you then get:
T = L^2 sqrt[c/(hbar G)]
Then, obviously, the difference between the "correct equation" and this one is that here c, hbar and G appear. So, the reason why dimensional analysis works is because of the hidden assumption about the appearance of these dimensionful constants. The values these constants take depend on the units you choose; you can interpret changing the values of these constants in terms of rescaling your variables. Taking the limit c to infinity amounts to rescaling toward the nonrelativistic limit. So, if we want a relation between variables that is valid in some scaling limit, e.g. the classical, nonrelativistic, non-quantum world, then you're seeking an equation between physical variables that remains non-singular if you take the limit of c to infinity and hbar to zero. In that scaling limit, these constants then don't appear at all. Count Iblis (talk) 12:40, 1 March 2013 (UTC)
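A small numerical sketch of that patch-up (Python, using SciPy's values for c, ħ and G; purely illustrative):

```python
import math
from scipy.constants import c, hbar, G

# The postulated T = L^2 is made dimensionally consistent by dividing L by the
# Planck length and multiplying by the Planck time, which is the same as
# multiplying L^2 by sqrt(c / (hbar * G)), as stated above.
planck_length = math.sqrt(hbar * G / c ** 3)    # ~1.6e-35 m
planck_time = math.sqrt(hbar * G / c ** 5)      # ~5.4e-44 s

print(planck_time / planck_length ** 2)         # ~2.1e26 s/m^2
print(math.sqrt(c / (hbar * G)))                # the same number, as claimed above
```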
- For example, suppose a clock only has an hour hand - how many times will this hour hand pass an hour marker in 24 hours?
- Well the correct answer is 86400. Just consider:
- 86400 seconds = 1440 minutes. No one would argue with that! Likewise
- 1440 minutes = 24 hours. No one would argue with that!
- Well a clock's hour hand by definition passes 1 hour marker per hour.
- So to answer the question of how many times this happens in 24 hours, we simply multiply 1 by 24, yielding 24. But notice! We already have shown that 24 hours = 86400. So the answer to the question is that in 24 hours the hour hand must pass the hour marker 86400 times.
- How many complete revolutions around the face of a clock is this? Well, clock faces normally don't show 24 hour markers, but only 12. So dividing by 12 (the number of hour markers that fit on the face of a clock) we see that in a twelve-hour period, an hour hand will turn all the way around the face of a normal clock 86400 / 12 = 7200 times. Each of those revolutions is half of a day, so if we wanted to see how many years fly by we could divide by the number of half-days per year, which is (360*2). That's 720. So any fool can see that about ten years fly by during every twelve-hour period. Time really does fly! 91.120.48.242 (talk) 14:35, 1 March 2013 (UTC)
Getting back to actually helping the Original Poster....
- I think the classical case where it fails is the energy of a moving object. It's 1/2 mv^2 - but if you do dimensional analysis you will not know about the 1/2. The other examples you were given where the final answer is unitless are cheating, since you can do anything to the number if it has no units. As far as I know the only errors you will have when doing a normal dimensional analysis are scaling errors - where you need to multiply by a constant, and you don't know you need to. It's also possible to calculate things that have no physical meaning - the analysis is not wrong exactly, but doesn't mean anything useful. Ariel. (talk) 17:01, 1 March 2013 (UTC)
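As a worked illustration of that point: requiring the dimensions of energy from a product of powers of m and v pins down the exponents, but says nothing about the dimensionless prefactor:

```latex
[E] = M L^{2} T^{-2} = [m]^{a} [v]^{b} = M^{a} \left(L T^{-1}\right)^{b}
\;\Rightarrow\; a = 1,\; b = 2,
\qquad E = k\, m v^{2}, \quad k \text{ undetermined (in fact } k = \tfrac{1}{2}\text{)}.
```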
- There's two aspects of the question: dimensional analysis, and order of magnitude estimation.
- Dimensional analysis is a powerful and effective method to use in two situations: a) making sense of recorded data, and b) checking a derived formula. Often, someone working in a new field is trying to deduce a mathematical formula to fit recorded data. Dimensional analysis will give you a possible formula, but of course if there are constants it won't tell you the constants. But in fitting the curve deduced from the possible formula, either a constant will be required to get it to fit the data, or you won't be able to fit the data regardless of what constant you choose - that means the data or the formula is wrong, so you take action as appropriate. It happens that if you are working in the SI metric system, the constants are almost always simple - 2, 2π, 1/2 and the like - that's the beauty of the SI system - it's designed that way. When the constants are not these simple factors, they are generally combinations of the well-known fundamental constants of physics. In this use, dimensional analysis is not infallible. But the reason it's not infallible is, almost entirely, that in practice you often do not have enough measured data over a large enough range. The data is the problem, not the dimensional analysis.
- Ariel says in his example that dimensional analysis will miss the constant (1/2 in his example). That's not the point. The point is, in dimensional analysis, you KNOW you may need a constant, so it is up to you to go find it (by fitting to data or by mathematical reasoning).
- In dimensional analysis in use (b), we humans often make errors in deriving formulas from first principles. If dimensional analysis says the formula is wrong, then it IS wrong - you can count on it. If dimensional analysis checks out ok, it could still be wrong. In my experience (>40 years in engineering), it has never happened that way though - it is in practice a very reliable check. Note that here I am referring to checking a derived formula for some real purpose, not an exercise in trying to prove silly things or prove things with no physical meaning.
- Order of magnitude estimation (i.e., working out the correct power-of-ten magnitude, assumed to be a task quicker and easier than calculating accurately) is something else. Having done a lot of engineering, and supervised both other engineers and scientists on contract to do research, it seems to me that order of magnitude estimation is something that physicists like to do. Engineers quickly learn to do all calcs to 3-place accuracy unless there is a specific need to be more accurate, which sometimes is the case.
- The risk of stuffing things up with order of magnitude estimation is very high. It might serve the OP best to quote a famous example:-
- When Admiral Rickover and his team had just about got the first nuclear submarine ready, Dr Edward Teller, at that time well known as the father of the hydrogen bomb and an internationally famous nuclear physicist commanding considerable respect from Presidents down, came close to derailing the whole thing. He claimed that he had shown by order of magnitude estimation of radiation levels that the sub could not be safely refueled in port, and wanted nuclear subs to be refueled/serviced in the middle of the ocean, which of course is simply not practical. He put out a press release. This triggered a debate between Teller and Rickover's team. It went something like this: a team member showed engineering calculations that showed radiation would be well within agreed safe limits. Teller said, well, I can't see where you made a mistake, but you must have made a mistake, because my order of magnitude calcs show a very different magnitude. The team then got Teller to walk them through his calcs. Every now and then, as Teller went through his calcs line by line, they would say things like "Dr Teller, you left out a π/4 term. Let's put that in." Teller: "Why bother, we only need a rough answer here." "Let's put it in anyway, please." A little later: "Dr Teller, you assumed a sphere, but radiation into the ground only hurts earthworms. Let's assume a hemisphere of exposure and divide by 2." And so on. At the end of it Teller agreed refueling in port would be safe after all. It had turned out that all his approximations erred in the same direction.
- Reference: The Rickover Effect: How Admiral Rickover Built the Nuclear Navy, by Theodore Rockwell, John Wiley 1992, Page 312.
- Ratbone 58.164.229.83 (talk) 02:24, 2 March 2013 (UTC)
The right citation for the Rayleigh scattering equation?
I consulted the Wikipedia article http://en.wikipedia.org/wiki/Rayleigh_scattering to get one formula, the Rayleigh scattering equation. It is shown in attachment 1, and the given citation is "3 ^ Seinfeld and Pandis, Atmospheric Chemistry and Physics, 2nd Edition, John Wiley and Sons, New Jersey 2006, Chapter 15.1.1". However, I turned to the book and found none of the formula there (see attachment 15.1.1). So I want to know the right citation, as I need the original documents.
Rayleigh's equation: I = I0 · (1 + cos^2 θ) · ((n^2 − 1)/(n^2 + 2))^2 · (2π/λ)^4 · (d/2)^6 / (2R^2) (http://pan.baidu.com/share/link?shareid=316021&uk=3590423494)
— Preceding unsigned comment added by Yanyan1992111 (talk • contribs) 07:05, 1 March 2013 (UTC)
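For what it's worth, a small Python sketch of the equation as quoted above (the droplet size, refractive index, wavelengths and distance in the example are arbitrary illustrative values, and all lengths must be in the same unit):

```python
import math

def rayleigh_intensity(I0, theta, n, wavelength, d, R):
    """Scattered intensity from a single small sphere (diameter d << wavelength),
    following the equation quoted above."""
    angular = (1 + math.cos(theta) ** 2) / (2 * R ** 2)
    polarizability = ((n ** 2 - 1) / (n ** 2 + 2)) ** 2
    return I0 * angular * polarizability * (2 * math.pi / wavelength) ** 4 * (d / 2) ** 6

# Example: a 100 nm droplet with n ~ 1.33, viewed at 90 degrees from 1 m away;
# the ratio of the two results shows the familiar 1/lambda^4 dependence.
for lam in (450e-9, 650e-9):
    print(lam, rayleigh_intensity(1.0, math.pi / 2, 1.33, lam, 100e-9, 1.0))
```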
- The expression most likely is in the Seinfeld and Pandis reference, but that particular page is not available online. It is given in numerous other books [1][2][3][4] (although the last one seems to have got the sign of the cos^2 term wrong). Or were you looking for Lord Rayleigh's original paper from 1899? SpinningSpark 20:34, 1 March 2013 (UTC)
Thank you for your reply. But I still have something that puzzles me. You say the last one seems to have got the sign of the cos^2 term wrong.
Do you think our equation's sign is right or wrong?
We found the particular page of Seinfeld and Pandis: http://pan.baidu.com/share/link?shareid=316981&uk=3590423494 Actually this expression is not in the book: http://pan.baidu.com/share/link?shareid=316983&uk=3590423494 http://pan.baidu.com/share/link?shareid=316985&uk=3590423494 http://pan.baidu.com/share/link?shareid=316986&uk=3590423494
Please tell me the authentic citation of my equation! Thank you! — Preceding unsigned comment added by 218.249.94.132 (talk) 07:31, 2 March 2013 (UTC)
- The equation in the article is correct. SpinningSpark 22:08, 2 March 2013 (UTC)
Thank you for your reply. But you still haven't told us the right citation for our equation. Can you help us find the citation, as we don't know how to find it?
Orbital mechanics
My local newspaper's article about Dennis Tito's planned 2018 mission to Mars states that it is "timed to take advantage of the once-in-a-generation close approach of the two planets' orbits". I thought the optimal alignment for a Hohmann transfer orbit to Mars happened once every 18 months or so. Is there something special about this particular one, or is the newspaper wrong as usual? --Carnildo (talk) 10:44, 1 March 2013 (UTC)
- Their plan includes a Venus flyby, and the low-energy transfers that can do that are rare; there is another opportunity in (off the top of my head) 2036 or so. And yes, normal Hohmanns are indeed every 18 months or so. Fgf10 (talk) 11:22, 1 March 2013 (UTC)
- Mars' orbit is quite eccentric (elongated), so it makes a difference whether Earth approaches Mars close to Mars' perihelion or not. That approach coincides with the perihelion about once every 15 years. Combine that with the desired location for a Venus flyby and you get a relatively rare event. By the way, the Hohmann transfer orbit window comes around about every 26 months or so. Dauto (talk) 19:51, 1 March 2013 (UTC)
- Where did you and Fgf10 get the idea that there's a Venus flyby? Tito never mentioned it in his press conference, and neither does the paper he got his trajectory from. If you look at the first graph on page 19, you'll see that the lowest time-of-flight trajectory occurs every 15 years, with the next one in 2017 and the last one in 2002. Tito's mission is planned for launch in January 2018. --140.180.255.158 (talk) 22:31, 1 March 2013 (UTC)
- Did somebody dust off their Spirograph ? :-) StuRat (talk) 03:58, 2 March 2013 (UTC)
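A quick arithmetic check of the launch-window spacing mentioned above, assuming circular, coplanar orbits (a rough sketch only):

```python
# The Hohmann-style Earth-Mars geometry repeats once per synodic period.
T_earth = 365.25   # days, Earth's orbital period
T_mars = 686.98    # days, Mars' sidereal orbital period
synodic_days = 1 / (1 / T_earth - 1 / T_mars)
print(synodic_days)          # ~780 days
print(synodic_days / 30.44)  # ~25.6 months, i.e. roughly the 26-month window quoted above
```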
carbol
Would you inform me, please: what are the contents of carbol and how is it made? Generally people put it in the toilet. Thank you — Preceding unsigned comment added by 114.79.0.170 (talk) 13:00, 1 March 2013 (UTC)
Most peaceful hominid
I know chimpanzees and humans are the most violent hominids, but I'm wondering which is the most peaceful hominid (bonobo, gorilla, or orangutan)? Previously I thought bonobos are the most peaceful, but studies claim otherwise. --PlanetEditor (talk) 13:54, 1 March 2013 (UTC)
- By which means are we to quantify peacefulness? --Jayron32 14:15, 1 March 2013 (UTC)
- In the sense that they exhibit the least intra-specific aggressive behavior. --PlanetEditor (talk) 14:21, 1 March 2013 (UTC)
- The most peaceful hominids are the dead ones. ←Baseball Bugs What's up, Doc? carrots→ 14:39, 1 March 2013 (UTC)
I would be interested in what the second-most peaceful hominid is, so that I can pretend to be that while hunting the most peaceful one. 91.120.48.242 (talk) —Preceding undated comment added 14:36, 1 March 2013 (UTC)
- A serious, referenced answer in good faith, using the notion of intra-species aggression as a way of defining "peacefulness": Of the extant hominids, I'd go with the Orangs, simply because they are the most solitary, and have the smallest groups when they do aggregate. See Orangutan#Social_life. While I'm sure some aggression occurs, they simply don't bump into each other as often. The preceding is considering aggression generally, e.g. for both sexes, disputes over territory, food, etc. However,
almost all non-human hominids display some male aggression in coercing females to copulate, orangs included; see this overview, "Male Aggression and Sexual Coercion of Females in Nonhuman Primates...", here [5]. SemanticMantis (talk) 15:31, 1 March 2013 (UTC)
- Thank you for the reference. Very interesting article. --PlanetEditor (talk) 15:42, 1 March 2013 (UTC)
- You're welcome, glad you like it. After reading it further, I think you'll have to limit the type of aggression to get a simple answer. For instance, if we restrict to sexual aggression, then Bonobos are far more "peaceful" than Orangs... SemanticMantis (talk) 15:49, 1 March 2013 (UTC)
- Also, the link posted by the OP is not about intra-species aggression in Bonobos, it is about Bonobos eating monkeys. So that doesn't really make them less peaceful under the working definition. To rigorously compare incidence of intra-species aggression between Bonobos and Orangs would require a careful literature search, and possibly WP:OR. But I'll be interested if anyone turns up some more relevant data on the topic. SemanticMantis (talk) 15:35, 1 March 2013 (UTC)
- I think the fundamental cause of aggression is domination instinct, which is seen in chimps, gorillas, and humans. Species which do not want to dominate others lack aggression. This is why bonobos and orangs are peaceful. Orangs exhibit, as you said, sexual violence and male-male competition for access to females. So bonobos are the most peaceful. Not sure whether male-male competition exists among bonobos. --PlanetEditor (talk) 15:58, 1 March 2013 (UTC)
- According to this, bonobos exhibit less male-male competition. This makes bonobos the most peaceful hominid. --PlanetEditor (talk) 17:41, 1 March 2013 (UTC)
- Are you including humans in that comparison? I'd like to know what data the OP is using to assert that humans and chimpanzees are the most violent hominids. The statement is definitely false for modern humans living under a government, and while archeological evidence suggests 15% of humans died of violent causes in pre-Neolithic times, I haven't found any analogous statistics for other hominids. --140.180.255.158 (talk) 18:36, 1 March 2013 (UTC)
- "The statement is definitely false for modern humans living under a government" - that is a joke, right? If you look at the history, you will see human history is history of conflicts and wars. While Paleolithic hunter-gathers had far less violence compared to post-Neolithic humans, the rise of civilization and resulting division of labor (origin of full-time warriors) only strengthened organized violence. Have a look at this list. As I said, humans have domination instinct and it is this domination instinct which make them violent. It is this domination instinct because of which one human can kill millions of individuals of their species. It is because of this domination instinct one human is not willing to give up their dominating power even if thousands of other humans are dying. I agree modern democracies are relatively peaceful, but humans are genetically violent [6]. The fundamental question is human nature. If you want to know more whether humans are instinctively violent or not, I would suggest the book The Murderer Next Door: Why the Mind Is Designed to Kill by University of Texas evolutionary psychologist David Buss. --PlanetEditor (talk) 19:06, 1 March 2013 (UTC)
- See also Homicidal ideation. "50-91% of people surveyed on university grounds in various places in the USA admit to having had a homicidal fantasy." Horrible. I lament that I was not born a bonobo. --PlanetEditor (talk) 19:24, 1 March 2013 (UTC)
- While violent behavior does have an evolutionary purpose, absolute violence would lead to species extinction. So evolution made humans in such a way that they can suppress their violent instinct. This is why humans developed a unique cognitive faculty called morality. While the concept of morality can reduce violence, it can equally facilitate violence. People construct morality to serve their personal goals and dominate others. The unique feature of human violence, which distinguishes it from chimpanzee violence, is that humans always use a moral excuse before perpetrating violence. The perpetrators of violence always label their victims as immoral. For example, the Nazis labeled the Jews as oppressors, and the Communist regimes labeled their victims as "enemies of the people". --PlanetEditor (talk) 19:36, 1 March 2013 (UTC)
- No, that is not a joke. It's the central theme of The Better Angels of Our Nature. This is corroborated by Jane Goodall, who writes openly (for instance in Reason For Hope) about how chimpanzees show intra-species aggression far beyond that of the average modern human. — Sebastian 20:47, 1 March 2013 (UTC)
- PlanetEditor, your claims are a classic example of cherry-picking data. You linked to a list of casualties in wars, but you haven't linked to a list of non-casualties in wars (99.7% of the population in Syria, much higher globally), a list of casualties in non-wars (see this World Health Organization report, in which violence only accounts for 0.9% of deaths), or a list of non-casualties in non-wars. The fact remains that 15-20% of people died violently in prehistoric times, whereas that number is currently 0.9% globally and much lower in developed countries. This 20-fold discrepancy is actually too small because it measures the percentage of deaths that are due to violence, not the percentage of the population that dies violently per year. Due to technology, the modern mortality rate itself is lower than its prehistoric counterpart.
- Here is another relevant statistic: in the Middle Ages, the European homicide rate was 30-100 per 100,000 per year. Currently, only 1 EU member has a rate exceeding 2 per 100,000 per year.
- I'm surprised you would call that Scientific American article "scientific", as its author seems to lack an elementary knowledge of statistics. He accuses Pinker of confirmation bias, but gives no evidence or contradictory data. He accuses Pinker of cherry-picking, by pointing out 2 data points (out of hundreds) from Pinker's book that don't support his point (!) He also doesn't understand why percentages are meaningful in social analysis:
- "Of greater concern is the assumption on which Pinker's entire case rests: that we look at relative numbers instead of absolute numbers in assessing human violence. But why should we be content with only a relative decrease?"
- By this logic, France's citizens are not wealthier than India's because India has a greater GDP. I think both the French and the Indians would beg to differ. Also by this logic, the US is not safer than Mali because the former, due to its much larger population, has more homicide victims. Again, I think Malians would beg to differ.
- The author's final point is even more bizarre, to the point of being sophistic: "The biggest problem with the book, though, is its overreliance on history". Wow, Pinker dares to use historical evidence in a book about historical violence rates! How presumptuous! --140.180.255.158 (talk) 03:41, 2 March 2013 (UTC)
- I'm not going to argue with you, and the ref desk is not a place for debate. Statistics can never shed light on human nature. The cause of war, from an evolutionary perspective, is more important than the number of individuals killed in a war. There may be one victim, and multiple aggressors. The claim that the number of casualties has decreased does not prove that the number of individuals who may engage in violence has also decreased. War is not the only indicator of humans' violent instinct. Bike rage, Road rage, Bullying, Legislative violence, Torture - all are evidence of the violent nature of the average modern human. Humans are far more violent than chimpanzees because the chimps engage in aggression only during territorial disputes; they don't get pleasure from violence, but modern humans engage in violence to gain pleasure. The main point here is that whether Homo sapiens engage in violence or not, they have the instinct of violence. Some engage in real violence, others engage in virtual violence. Blood sports and gladiatorial combat are classic proofs that modern humans gain pleasure from violence. Graphic violence, popular combat sports, and the popularity of WWE are proof that average modern humans are preoccupied with violence; they are emotionally attracted to violence. --PlanetEditor (talk) 05:02, 2 March 2013 (UTC)
- And don't forget edit wars and endless discussions on Wikipedia! If your point is that modern human still has a lot of violent traits, then I don't think anyone would disagree with you. If your point is to convince everyone else that humans are worse than other hominids, then I feel this is indeed not the task of this page. You are the OP; we're here to answer your question. I think we have done an honest attempt at it, and I don't see any further question, so I suggest to close this discussion. — Sebastian 06:05, 2 March 2013 (UTC)
- Yeah fine, actually this post was meant to find the ape with least violent behavior, which was determined to be the bonobos. But the discussion altered its course. --PlanetEditor (talk) 06:13, 2 March 2013 (UTC)
- Although this isn't a place for debate, you're the OP, and I feel that the following is relevant to your question. How do you know that chimpanzees don't engage in violence for pleasure? The fact that they can't communicate this pleasure doesn't mean they don't perceive it. It is also not true that chimps only engage in violence for territorial purposes--in addition to competing with males for mates, they also beat and rape females into submission. --140.180.255.158 (talk) 07:35, 2 March 2013 (UTC)
- You can't judge a book by its cover. Industrialized war may look extremely violent but it's far less so than ancient and hominid endemic violence. Modern human individuals are less likely to die from intra-species violence than any other hominid. The nuclear war and genocide of the 20th century killed barely 2% of the societies involved. According to this source murder is a negligible, tiny fraction of one per cent. Compare this to hunter-gatherer societies where more than 60% of males can die from violence. Even gorillas have a much higher level of violence than modern humans.--178.167.236.242 (talk) 23:50, 2 March 2013 (UTC)
Calculating the day of conception
Given that someone already knows his/her birthday (month, day, year, and approximate time of the event), is it possible to calculate that person's moment of conception, or the day his/her parents presumably had sexual intercourse, assuming that the parents did not use artificial insemination to become pregnant, that they only engaged in reproductive sexual intercourse (not realistic, since parents may engage in recreational sexual intercourse), and that the intercourse was a one-time event? I had to make it simpler by making so many assumptions. It might become too complicated if I add additional variables. 140.254.226.226 (talk) 15:19, 1 March 2013 (UTC)
- No. You can come up with an estimate, but nothing more. For example, the existence of preterm birth completely undermines your premise. Consider also that, in cases of artificial insemination and other known-conception data points, babies aren't universally born on their expected due dates. — Lomn 15:22, 1 March 2013 (UTC)
- Oh. Then, is it possible to do a similar sort of thing for non-human animals, particularly for those species whose offspring are usually born during a specific season and whose parents only mate during a particular season? I wonder if a mouse-breeder could calculate the day of conception given only the birth date of the baby mice. 140.254.226.226 (talk) 15:31, 1 March 2013 (UTC)
- Most definitely in the case of mice. Although if the timing of conception is important (in developmental experiments) you will work the other way round by doing a timed mating, and checking for a vaginal plug to indicate mating has taken place during the previous night. Fgf10 (talk) 15:41, 1 March 2013 (UTC)
- How do you know that mice only mate during night time? I wonder if they mate during the day time, or if daylight/moonlight/time of day has anything to do with the mating habits.140.254.226.226 (talk) 17:29, 1 March 2013 (UTC)
- Even for highly inbred (genetically homogeneous) strains of laboratory mice, there's still a variation of about +/- one day in the lengths of their gestation periods [7]; you'll get even more variation in wild mice, or cross-breeding different mouse strains. TenOfAllTrades(talk) 15:52, 1 March 2013 (UTC)
- I think the more important thing is whether or not the difference is really significant. If it's a +/- day variability and that value is not enough to significantly affect the results, then there's no need to worry. 140.254.226.226 (talk) 15:55, 1 March 2013 (UTC)
- That's not what that paper says at all, for instance it's 462.2 +/- 1 hour (NOT day) for standard Bl/6s. Within a strain it's very reproducible. Of course between strains all bets are off. But like I said, you normally work the other way around anyway. Fgf10 (talk) 15:59, 1 March 2013 (UTC)
- I'm assuming that you're looking at figure 2a. The error bars on that plot are for the standard error of the mean, not for the standard deviation. SEM is useful for determining (at a rough glance) which population means are significantly different, whereas SD tells you about the actual breadth of the distribution of values that went into the mean. To calculate the SD from the SEM, you have to multiply the SEM by the square root of the number of gestations measured.
- The means and standard errors for the most popular strains (BL/6, and A/J, for instance) are calculated from rather large pools of gestations (139 for the BL/6 mice), which means that the error bar shown is roughly one-twelfth the standard deviation. Given a standard error of one hour, the standard deviation is twelve hours. To catch 95% of gestations – two standard deviations, assuming a roughly normal distribution – one has to open the window to plus-or-minus one full day. TenOfAllTrades(talk) 16:16, 1 March 2013 (UTC)
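A one-line numerical check of that conversion (Python; the SEM of one hour and the pool of 139 gestations are the figures quoted above):

```python
import math

sem_hours = 1.0                       # reported error bar, taken as the SEM
n = 139                               # number of BL/6 gestations pooled
sd_hours = sem_hours * math.sqrt(n)   # SD = SEM * sqrt(n)
print(sd_hours)                       # ~11.8 hours, i.e. roughly half a day
print(2 * sd_hours / 24)              # ~1 day window for +/- 2 SD (~95% of gestations)
```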
- (ec) If intercourse was really a one-time event, the answer to that part of the question is trivial—ask the couple for the one time that they had sex. Conception (if taken to mean fertilisation) can occur anywhere from roughly 4 hours to roughly 72 hours after intercourse, however: [8]. That three-day window means that even if you have the date of intercourse and the date of birth, you still can't precisely pin down the date of conception. TenOfAllTrades(talk) 15:37, 1 March 2013 (UTC)
- Which is probably why it's easier to do experiments on mice than humans. At least they are more predictable. According to the fertilisation article, fertilisation is just another word for conception. 140.254.226.226 (talk) 15:50, 1 March 2013 (UTC)
Room-sealed gas fire in chimney?
Mum wants to have a gas fireplace installed in the boarded-up fireplace of her ~1920s terraced house, to use as an alternative to the central heating. My concern regards the requirement for constant, unobstructed ventilation (about 200 cm^2), regardless of whether the fireplace is in use, which I think will negate any improvement in efficiency over heating the whole house via the central heating.
I've read in an old thread on a forum that it may be possible to draw air for combustion via the chimney, in the space surrounding the flue. Has anyone here heard of that? --78.144.199.159 (talk) 16:01, 1 March 2013 (UTC)
- I'm not quite clear on your Q, but there are generally two types of fireplace, those which draw the warm air out of the living area for combustion, and send it up the chimney, and those which draw the air for combustion from outside, and are sealed on the inside, which are far more efficient. You could combine this with a flue to keep the outside air from flowing after the gas is shut off. However, with this design, you do need to keep the air intake on the outside of the house clear. Another helpful addition is air intakes from the room, which take the air near the flames, but not exposed to the fumes, and return that heated inside air to the room.
- Also, one thing to be aware of is that a gas fireplace can make an annoying hissing noise. I think there's a cure for this, if a device is used to lower the gas pressure before it's released, but they don't all have this. StuRat (talk) 16:23, 1 March 2013 (UTC)
- Here's a side view (although the air recirculated to the room typically is on the sides of the fireplace, not the front, as shown, to provide an unobstructed "romantic" view of the fire):
[Flattened ASCII side-view diagram: OUTSIDE on the left, INSIDE on the right, with the FIRE at the base of the chimney drawing combustion air from outside and venting up the flue.]
- I'm hoping to avoid the type of fireplace which draws air from the living area because those fires require a certain amount of ventilation to comply with UK building regulations. However, I can't use those fireplaces which are designed to use a balanced flue through an outside wall because the house is terraced (the neighbours would not appreciate it). I think most balanced flues have a limited height. I would like a solution which draws air from the outside, via the chimney. 78.144.199.159 (talk) 16:30, 1 March 2013 (UTC)
- So you're looking for this system: [9], where it draws outside air down the outside of the chimney ? StuRat (talk) 16:42, 1 March 2013 (UTC)
- Neither of the systems in that picture are what I'd like. The balanced flue system shown has to be fitted on an outside wall. Our house is terraced and the chimney is on the party wall.
[Flattened ASCII side-view diagram: the chimney sits on the party wall, with NEIGHBOUR on the left, INSIDE on the right, and the FIRE at its base.]
78.144.199.159 (talk) 17:08, 1 March 2013 (UTC)
- I once renovated a Victorian house and installed a coal effect gas fire in the fireplace of the living room. To do this we had to have a flue put in the old chimney, rather than relying just on the old chimney. Both with that one, and when replacing the gas fire in the current house, I had to have a home visit from the supplier to establish how the gas fire would work and what was needed to meet regulations, and I suggest your mother relies on the supplier for information rather than some random guys on the internet. --TammyMoet (talk) 17:04, 1 March 2013 (UTC)
- I'm aware that there are building regulations to be met, and the fireplace will be fitted and tested by a qualified person in accordance with them, but I want to find out if there's a supplier of the system I'm looking for, rather than risk getting lied to by a supplier who doesn't stock what I'm looking for and just wants to make a sale. 78.144.199.159 (talk) 17:13, 1 March 2013 (UTC)
- Fireplaces are not usually energy efficient. They are installed for looks, not efficiency. A house from 1920 will probably be leaky enough that you don't need to worry. To check use a Blower door and measure it. But central heating can be 95-98% efficient if you use a condensing unit (and I believe all such units in the UK are required to be condensing). No fireplace can match that. Ariel. (talk) 17:16, 1 March 2013 (UTC)
- What fireplaces do offer, though, is the ability to heat only one room. So, if everyone is in that room, you can turn down the house thermostat, and save on gas that way. StuRat (talk) 17:21, 1 March 2013 (UTC)
- You can do that with central too, with zone heating (although admittedly that's harder to retrofit into a house). Ariel. (talk) 17:35, 1 March 2013 (UTC)
- OK, so we are looking at a design like this, then ?
[Flattened ASCII side-view diagram: NEIGHBOUR on the left of the party wall, INSIDE on the right, with the FIRE at the base; outside air is drawn down the chimney around the flue (v) while the exhaust rises up the flue (^).]
- (I only showed the air intake on the back of the fireplace, but it could be a full tube surrounding the chimney exhaust tube.) StuRat (talk) 17:23, 1 March 2013 (UTC)
- They sell those, they are called "concentric chimney vent". Usually used for retrofitting condensing units to use the existing chimney. (The units need the source and vent at the same height to avoid wind pressure messing with the heater, so you can't just use the chimney as vent only.) They are usually made out of PVC for condensing units, but you can find metal ones too. If your fireplace is too efficient you will need an insulated type or the flue gases will not stay hot enough to exit the chimney. (Since the gases are cooled by the incoming air.) Ariel. (talk) 17:35, 1 March 2013 (UTC)
- Or how about an electric exhaust fan ? StuRat (talk) 17:40, 1 March 2013 (UTC)
- If you do that make sure to interlock it with the flame, so a failure of the fan doesn't asphyxiate everyone. (It's not enough just to interlock power, you also have to detect rotation.) They actually do sell those BTW - I've seen them on domestic hot water systems (but they are a bit noisy). Ariel. (talk) 17:50, 1 March 2013 (UTC)
- Agreed. Also, if it's properly sealed, then any carbon monoxide (created by a flame deprived of sufficient oxygen) won't make it into the living area. StuRat (talk) 18:29, 1 March 2013 (UTC)
- Are these concentric flues rigid or flexible? The problem is that UK building regulations require inspection of all flue joints to be possible which would mean a lot of inspection holes would need to be created along the height of the chimney. 78.144.199.159 (talk) 18:21, 1 March 2013 (UTC)
- Couldn't a camera on a flexible tube be sent down it for inspection ? StuRat (talk) 18:26, 1 March 2013 (UTC)
- They are usually sold as segments that are locked together and fed into the chimney one at a time. There are flexible ones as well - but most of those are plastic for high efficiency systems. There may be flexible metal ones, I don't know. But I think even the flexible ones are sold as lengths that are joined together (they are intended for chimneys with curves).
- But, are you sure you need inspection? The flue is inside the chimney after all - the chimney will block any gases from entering into living space, and can realistically be called the flue. Ariel. (talk) 18:39, 1 March 2013 (UTC)
- There are catalytic flueless gas fires available but they are quite expensive, and I did read a comment on a forum that no gas installer would use one of these as, if anything goes wrong, you're dead from carbon monoxide asphyxiation. There's a bit of discussion about them here. Richerman (talk) 14:41, 2 March 2013 (UTC)
Longest lived known species of single-celled organism
What species of single-celled organism is #1 on the list of longest lived single celled organisms according to reliable source H. sapiens who keep such records, and how long is that record lifespan? 20.137.2.50 (talk) 16:42, 1 March 2013 (UTC)
- Defining a species in an organism which reproduces asexually is particularly problematic, as the normal standard of two individuals which can reproduce together and create fertile offspring doesn't apply. StuRat (talk) 16:46, 1 March 2013 (UTC)
- Yes, and even defining life-span in this context is problematic! To wit: when a bacterium splits into two, does the parent die and produce two new offspring? Or does the parent survive, and make one clone? Biologically and philosophically, both interpretations are defensible. If you take the first interpretation, no individual lives very long. If you take the second interpretation, then there is some single bacterium out there that has survived millions of years... See Biological_immortality#Bacteria for some scant coverage. SemanticMantis (talk) 17:01, 1 March 2013 (UTC)
- If we define cell division as death of the parent cell, then you and I are mostly dead or mostly recently born, depending on how you look at it. I feel younger just thinking about it! :) --Guy Macon (talk) 20:29, 1 March 2013 (UTC)
Stupid idea about using Kickstarter to fund building a nuclear power station
Maybe this is a silly idea, but I was thinking about the cost of nuclear power and why more nuclear power stations aren't built in the UK. Would it be possible to use Kickstarter to collect an amount of money from each of, say, one million people and then provide electricity to those people at no cost for however many years? My understanding is that most of the cost is in the initial outlay. --78.144.199.159 (talk) 21:59, 1 March 2013 (UTC)
- It depends what costs you count. Mention Chernobyl or Fukushima, and other costs become obvious. IMHO, even if a reactor doesn't explode, the costs of managing the nuclear waste "forever" must be factored into its total cost. Others may argue differently, but the fact that anyone feels the way I do tells you that it isn't as simple as you propose. HiLo48 (talk) 22:08, 1 March 2013 (UTC)
- Well, the vast majority of reactors ever didn't explode, and those you mention were due to stupid mistakes, besides which coal-fired power stations pollute much more per unit energy generated. Are you suggesting that the reason for lack of new nuclear power stations in the UK is due to public disapproval rather than the cost? 78.144.199.159 (talk) 22:20, 1 March 2013 (UTC)
- Hey, it was you who mentioned costs, and I pointed out the cost involves a lot more than building them. And yes, the vast majority of reactors don't explode, and those that did may have been due to stupid mistakes, but do you really trust every government, corporation and individual who will be involved with your proposed reactor for the next fifty years to never make a mistake? And then there's the waste...
- Why shouldn't I trust all those organisations? Their counterparts do the job just fine all over the globe, including those already operating in the UK. Also, in the UK, the government has agreed to take care of any waste so that isn't a concern either. 78.144.199.159 (talk) 04:02, 2 March 2013 (UTC)
- LOL. The waste is dangerous for 1000 years. And your faith in corporations and government is greater than mine. HiLo48 (talk) 04:30, 2 March 2013 (UTC)
- Coal-fired power plants don't just pollute more per unit energy generated. They actually release more radioactive material into the environment. See http://www.physics.ohio-state.edu/~aubrecht/coalvsnucMarcon.pdf and http://www.scientificamerican.com/article.cfm?id=coal-ash-is-more-radioactive-than-nuclear-waste --Guy Macon (talk) 23:18, 1 March 2013 (UTC)
- The cost of building a nuclear plant is in the billions of dollars (or pounds, if you prefer). That means you would need millions of people to give you thousands of dollars each. It doesn't seem plausible to me. Looie496 (talk) 22:25, 1 March 2013 (UTC)
- Doesn't seem plausible via Kickstarter. But there are lots of public corporations (in the U.S. English usage of the phrase) that actually do this. For example, Duke Energy has thousands - millions, perhaps - of share-holders who each contribute hundreds - or thousands - of dollars. Instead of using Kickstarter to rally people and raise funds, they use the New York Stock Exchange. They have a huge team of financial experts who make sure that everything is above-board with the money collection. They have a huge team of lawyers and legal experts who make sure that everything the corporation does is in compliance with lots and lots of applicable rules. And then ... they raise billions of dollars - and build a nuclear power plant or three. When you look at the whole ecosystem of the energy economy, it actually is very much like what the original poster asked: except that because so much money and so many people and so many laws and regulations and rules are involved, Kickstarter just isn't the right forum to raise the funds. Nimur (talk) 03:50, 2 March 2013 (UTC)
- Okay, but what if you consider that the annual household energy bill in the UK is about £1,250? If one million people invested £2,500 each, that'd be £2.5 billion. Maybe it's not realistic, but I'd invest my money for two or three decades of unlimited (probably "within reason") electricity. Maybe it would be easier to sell to companies consuming large amounts of electricity. — Preceding unsigned comment added by 78.144.199.159 (talk) 22:41, 1 March 2013 (UTC)
- Economics of new nuclear power plants explains all the costs. Nope, not an idea for Kickstarter. Besides that, the people who populate the site are more of the hippie-alternative type, who would be more interested in a wind turbine. OsmanRF34 (talk) 00:30, 2 March 2013 (UTC)
- The cost of nuclear power is regulatory, not capital. Build your plant in China. μηδείς (talk) 01:57, 2 March 2013 (UTC)
- Legal costs are very high (more than actual regulation, strictly speaking) for starting new plants in the USA, but the capital costs are pretty high, too. Even in China it costs billions of dollars to construct these things. It is probably also worth noting that the US nuclear industry has been heavily subsidized by government funding in the past, and that they don't need to fully fund their own insurance against accidents — sometimes regulation makes things cheaper for energy companies, too. The economics of all energy generation is somewhat complicated, but nuclear is especially so because of its combination of high capital costs and high regulatory/legal issues. The result is that it's very hard to be profitable with nuclear except in the very long term. --Mr.98 (talk) 04:10, 2 March 2013 (UTC)
- And the high regulatory cost is because they insist on building them above ground, in populated areas, which, when combined with bad design, operator incompetence and regulatory capture, is inherently dangerous. If they would build them at the bottom of abandoned mines, far from population centers, with only the cooling towers above ground, they could leak right and left and only endanger the workers. They could also leave the spent radioactive fuel down in the mine, and just fill it in with concrete when the plant is decommissioned. Sure, transporting electricity a longer distance increases costs, but the reduced objections and subsequent delays from the public would more than make up for this added cost. StuRat (talk) 03:54, 2 March 2013 (UTC)
- There are a lot of concerns with siting nuclear reactors. "At the bottom of abandoned mines" doesn't really address them, and introduces entirely prohibitive construction costs. I don't think you realize how physically large a modern nuclear reactor power station is, and how hard it would be to excavate that amount of space in a safe way deep underground, much less the issues involved in making such a facility safe to operate. It's a silly idea. --Mr.98 (talk) 04:10, 2 March 2013 (UTC)
- I don't think you realize how big abandoned mines are. StuRat (talk) 05:05, 2 March 2013 (UTC)
- It's an intriguing possibility. You could offer some really interesting rewards (eg $1,000 reward: "Free electricity for life"). But the trouble with Kickstarter is that it's a matter of trust. It's fairly easy to trust someone you've never met to follow through at $1,000 to $10,000 - but beyond that, it gets tougher and tougher. The other problem with large construction projects that take a long time to complete is that such projects tend to overrun costs badly. That's bad enough with conventional venture capital - but if you collect $1000 from a million people and fail to deliver...I don't want to imagine the consequences. Coming back and requesting another $500 to get it finished ten years from now would be astronomically hard.
- There is another practical problem. The Kickstarter rules for technology projects seem to kill your idea stone dead. http://www.kickstarter.com/help/guidelines says: "In addition, Design and Technology projects that are developing new hardware or products must show on their project pages a functional prototype — meaning a prototype that currently does the things a creator says it can do". So you need to have a working nuclear reactor before you start!
- You don't have to use Kickstarter though - there are MANY other crowd-funding sites out there - and (of course) we have an article on that at Comparison of crowd funding services.
- So I agree that this is a bold and intriguing idea - but very, very difficult to execute.
- SteveBaker (talk) 04:00, 2 March 2013 (UTC)
Geranium care
I have a gorgeous Pelargonium hortorum that my mother bought and hung in her backyard until Thanksgiving, when she was going to throw it out. I found it at my parents' curb and adopted it, and it has bloomed constantly in my south-facing window. The only problem I have run into keeping it inside is that, even though it is quite lush, and definitely not overwatered, it is constantly dropping about half-a-dozen yellow leaves out of maybe 12 dozen total. The leaves are not limp, and they are largely from the bottom edges. Is this just a consequence of the constant growth pattern, or can I address it with some sort of care? The article suggests there are growing societies, but the link didn't seem recent. Thanks. μηδείς (talk) 22:07, 1 March 2013 (UTC)
- A few questions: Is there new growth (i.e. small leaves emerging), and is it still blooming? If those are both true, then change nothing. If neither are true, I'd give it more light if at all possible, or possibly a very weak shot of N fertilizer. If only one is true, I'd probably also do nothing differently for a few weeks... Realistically, if it limps along until spring, just be patient and give it a good pruning before you set it out in indirect light in the spring. Even when "healthy" plants are successfully wintered indoors, they usually wane and get a bit sad, unless you have very good light and humidity infrastructure. Remember to change any parameter slowly if possible ;) SemanticMantis (talk) 01:24, 2 March 2013 (UTC)
- Yes, there is constant new growth and blooming. It has three nice full red flower heads and three sets of green flower buds, and terminal growth on each branch. It's not limping at all. It has the hardiness of a perennial and the vitality of an annual. If this were outside I would just pluck the yellow leaves. It's the fact that they are constantly dropping (even as it grows overall) that annoys me--as far as I know it did this over the summer as well, but it wasn't kept well watered and tended to wilt then. (Frankly, I think it's happier now than it has ever been except when purchased.) It has great light, and is not dry at all. I have thought of repotting it since it might be twice the volume it was when Mom bought it. Would a dilute fertilizer solution be an alternative to that? I just doubt that crowding is the reason for the leaf dropping, because, as I said, it's been doing this all along. μηδείς (talk) 01:55, 2 March 2013 (UTC)
- It might be slightly N (or Fe?) deficient, but I would only use either at very small doses until spring. Putting it in a bigger pot will result in it growing much larger and not blooming for a long time. I would do that next fall, and only if you have the space for a much bigger plant. Otherwise just prune the top (and bottom if it is very tight). They are certainly OK to be a little root bound. Honestly, your plant sounds happy and healthy to me. It is normal for it to shed senescent leaves, and it will translocate nutrients when it does so. Just pick off unsightly leaves and enjoy it! SemanticMantis (talk) 02:31, 2 March 2013 (UTC)
- Yes, I actually did (half of) my undergrad work in plant science, and have a greener thumb than my sister who did her doctorate in it, but I worked at the crop science department, and they were all outdoor grown monocots. (Although I did grow some pretty, sticky, nutty, funky dicots in my closet.) The fact that nothing on the plant is wilting is reassuring. I will try the dilute NPK fertilizer. I have had success with it with my rebloomed poinsettias.
- One other question. If I prune it, where should I prune it? It has a nice 28"-32" diameter bun-shaped growth habit. μηδείς (talk) 03:18, 2 March 2013 (UTC)
- Think thinning, not shearing like a sheep. Cut a few branches down to 2-3" long, then cut 2-3" off of a few other stems, and leave the overall shape the same. Make cuts ~1/4" above the target Node_(botany) on each branch. It's hard to kill a healthy houseplant by pruning, and you can root the cuttings to give to friends or keep as insurance. SemanticMantis (talk) 04:31, 2 March 2013 (UTC)
- When I first read this Q, I thought you were asking about germanium. But, never mind, as this comment isn't germane to the topic. StuRat (talk) 03:46, 2 March 2013 (UTC)
- because you're a man? μηδείς (talk) 02:42, 3 March 2013 (UTC)
- Definitely go easy on the fertilizer, both because very little is needed and also because, if the pot dries out, the increased concentration can kill it like salt would. Polypipe Wrangler (talk) 22:07, 5 March 2013 (UTC)
March 2
Analytical mechanics for Russian meteorite
How do the specialists calculate the mass, speed and path of such a meteorite? --Akbarmohammadzade (talk) 04:27, 2 March 2013 (UTC)
- All they currently have is various videos of the event. By comparing the time stamps and locations of the meteor in each video, it should be possible to determine the speed. Unfortunately, many of the time stamps are off, complicating matters. Still, with enough videos, they can figure it out. The path is a bit simpler. Various vids from various angles allow them to fairly accurately establish the path, at least once it was bright enough to appear on videos. There's also a possible impact crater on a frozen lake. Working backward from these, they can extrapolate to find the location in space, but this gets less accurate the farther back they go. The mass is perhaps the hardest. They can infer the mass from the magnitude of the explosions and amount of light, but other factors besides mass also affect this, like the materials from which the object was composed. StuRat (talk) 05:02, 2 March 2013 (UTC)
- Just for information of others reading this: This apparently refers to the 2013 Russian meteor event. — Sebastian 06:16, 2 March 2013 (UTC)
- Mass is definitely the most difficult parameter to estimate; that paper does not even make an attempt. NASA put it at 10,000 tons based on data from a worldwide network of infrasound sensors. The most common and straightforward method of estimating asteroid masses is from their absolute magnitude, which is, of course, impossible in this case unless some telescope has coincidentally captured a picture of it while still in orbit. Even then, it is an extremely inaccurate method because the albedo and composition of asteroids vary enormously and are rarely known with any accuracy. SpinningSpark 12:26, 2 March 2013 (UTC)
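For illustration only, here is a minimal Python sketch of the absolute-magnitude method mentioned above; the H value, albedo and density are made-up assumptions of the kind that would have to be guessed for a body like this, which is exactly why the method is so inaccurate:

import math

H = 26.0        # assumed absolute magnitude (hypothetical, for illustration)
albedo = 0.15   # assumed geometric albedo; real values span roughly 0.05-0.25
density = 3300  # assumed bulk density in kg/m^3, typical for stony bodies

# Standard relation between absolute magnitude, albedo and diameter (in km)
D_km = 1329.0 / math.sqrt(albedo) * 10 ** (-H / 5)
radius_m = D_km * 1000 / 2
mass_kg = (4.0 / 3.0) * math.pi * radius_m ** 3 * density
print(f"Diameter ~ {D_km * 1000:.0f} m, mass ~ {mass_kg:.2e} kg")
# ~22 m and ~1.8e7 kg for these guesses, the same order as the infrasound-based estimate.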
- StuRat — they have more than the dashboard videos. The infrasound sensor network, designed to detect nuclear detonations, got a huge amount of data from it. They also have satellite information of various sorts (weather satellites, of course, military satellites, probably). --Mr.98 (talk) 19:27, 2 March 2013 (UTC)
- I doubt if any of those sources would be as good as live camera footage. Satellites, for example, may only scan an area periodically, so would catch the trail after it had been blown around in the wind a bit. The time stamps might be more reliable, though. StuRat (talk) 16:46, 3 March 2013 (UTC)
- They actually have pretty good satellite footage of the plume. Which you'd know if you'd bothered to look it up. --Mr.98 (talk) 02:53, 5 March 2013 (UTC)
- The plume is what I'm talking about. That's not as good as having live video of the meteor itself, which is what the ground cameras provided. StuRat (talk) 03:13, 5 March 2013 (UTC)
EMP
Suppose a terrorist group sets off a small non-nuclear EMP generator with the following parameters: Total energy 56.25 kJ (very small indeed as such things go); average power 281.25 MW; pulse length 200 microseconds; capacitance 64.5 pF; inductance decreases linearly with time from 1 microhenry at t=0 to 2.5 nanohenry at t=200 microseconds (this is a simple CFG-type device with the main coil itself acting as the antenna); frequency increases as the square root of time from 20 MHz at t=0 to 400 MHz at t=200 microseconds (the "Valkyrie battlecry", as I call it :-) ). Also assume that they detonate the device right next to a standard step-down transformer that reduces the voltage from 34.5 kV to 240 V, and the device couples into both the primary and the secondary windings of this transformer. What's the radius in which the device will fry most plug-in electrical devices? 24.23.196.85 (talk) 05:53, 2 March 2013 (UTC)
- I gather the intent is to induce a spike or surge in the transformer, thus "frying" electronic devices in the circuit, right? I haven't done the math, but I suspect the transformer would either handle the pulse (which is weak compared to a lightning strike) - or the pulse would be strong enough to trip it, or otherwise cause a shunt or power failure. ~:74.60.29.141 (talk) 06:45, 2 March 2013 (UTC):~
- Right, this is precisely how an EMP generator works. So, I gather that 56 kJ is too small for any serious damage to the transformer or anything else connected to it? 24.23.196.85 (talk) 07:19, 2 March 2013 (UTC)
- At the timings nominated, the transformer should be modelled as a transmission line, however, the standard lumped model will give us a ball park idea. Unfortunately you have not nominated the size of the transformer, so I'll use the smallest (and most sensitive) size commonly used in street distribution: 10 kVA (per phase load 14 A). This can be expected to have a secondary side leakage inductance of ~~1 mH. This considerably exceeds your EMP equiv inductance, so we can simply take direct connection of the capacitance to get worst case voltage. With 200 pF capacitance, the peak voltage from 56 kJ is 30 MV. With a 200 uSec discharge time, this will certainly damage the transformer insulation. So we do need to calculate the range at which transformer breakdown will occur.
- Now for a bit of order of magnitude estimation that I poured scorn on in an earlier question. In this case this is all we can do as you have not supplied a lot of critical information. The initial electrostatic field will be ~~30 MV/m (from 1 uH initial inductance) decreasing with the cube of the distance in meters. Thus, to get within the transformer's secondary insulation voltage breakdown limit, which I'll assume to be 6 kV, the maximum range to ruin the transformer will be ~~ D_wire_separation × (V_init/V_trans)^(1/3) ~~ 6 m with typical wire separation in open wire systems. For underground distribution, it would thus seem that 74.60.29.141 is correct, it is not sufficient to destroy the transformer. This estimate can only be a very very rough guide.
- Ratbone 124.182.176.152 (talk) 10:38, 2 March 2013 (UTC)
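As a rough cross-check of the peak-voltage figure above, a minimal Python sketch assuming, as in the reply, that all of the pulse energy ends up on a 200 pF lumped capacitance:

import math

E = 56.25e3   # pulse energy in joules, from the question
C = 200e-12   # lumped capacitance in farads, the figure assumed in the reply above

# Energy stored in a capacitor: E = (1/2) * C * V^2, so V = sqrt(2E/C)
V_peak = math.sqrt(2 * E / C)
print(f"Peak voltage ~ {V_peak / 1e6:.0f} MV")   # ~24 MV, the same ballpark as the ~30 MV quoted above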
- Re: "This will certainly damage the transformer insulation", try modeling it again with lightning arresters at the input to the transformer and at the service entrance to the house. (And possibly at the plug strip the equipment is plugged in to.) You might also want to model the fact that "assume that they detonate the device right next to a standard step-down transformer" does not mean that 100% of the energy will couple into the transformer. There will be losses because some energy goes in other directions, losses because of reflections caused by impedance mismatch, and -- this is the big one -- losses because the transformer is surrounded by a grounded faraday cage. --Guy Macon (talk) 12:39, 2 March 2013 (UTC)
- If you read my post more carefully, it was in two parts: the first part was to see if damaging the transformer was at all possible - calculation with a direct connection is the easiest to model. This showed that destruction is possible, and so further work is required. In the second part I did that further work and established that the EMP device needs to be within roughly 6 m, at which range the coupling is indeed well below 100% - about 0.02% in fact. The fact that the transformer housing can act as a faraday cage is not relevant - I assumed all coupling was via the open wire distribution, allowing a coupling factor for the wire spacing. Due to series inductance in the distribution wiring, which in practice will be very large compared to the flashed over transformer, not much energy will go down the distribution into customers' equipment, and any surge/lightning protection at customer premises is not relevant as far as the transformer goes. It is normal practice in the small distribution transformers I assumed for calculation purposes not to have lightning/surge protection fitted. In any case, such devices are intended to shunt lightning to earth (where it wants to go), which has absolutely no relevance to any nearby EMP device (which has no need to go to earth). Like other posters, as far as larger transformers are concerned, and underground plant, I don't think such an EMP device would cause any damage. However, I have established that for small transformers in open wire installations, damage is possible with the EMP device a few meters away. Ratbone 120.145.182.161 (talk) 14:44, 2 March 2013 (UTC)
- Re: "This will certainly damage the transformer insulation", try modeling it again with lightning arresters at the input to the transformer and at the service entrance to the house. (And possibly at the plug strip the equipment is plugged in to.) You might also want to model the fact that "assume that they detonate the device right next to a standard step-down transformer" does not mean that 100% of the energy will couple into the transformer. There will be losses because some energy goes in other directions, losses because of reflections caused by impedance mismatch, and -- this is the big one -- losses because the transformer is surrounded by a grounded faraday cage. --Guy Macon (talk) 12:39, 2 March 2013 (UTC)
- It certainly doesn't sound like it would have much effect to me. I'd have thought there would be far more efficient ways to destroy a transformer with the explosives you need for such an EMP weapon, and the surges on the power lines would probably not get far with the various devices to stop lightning strikes having an effect. Why would one waste it on such an enterprise when there are far simpler and better targets around? Dmcq (talk) 13:10, 2 March 2013 (UTC)
- That it would be easier to just blow the transformer up with standard explosives is blindingly obvious. Any half-wit terrorist or local red-neck nutcase can make a decent bang with ordinary explosives. But EMP devices are high-tech advanced devices that will cost a heck of a lot more. Ratbone 120.145.182.161 (talk) 15:00, 2 March 2013 (UTC)
- You might be interested in Report of the Commission to Assess the Threat to the United States from Electromagnetic Pulse (EMP) Attack. They do not highlight transformer insulation breakdown as being a particular problem. For large transmission transformers the greatest threat comes from damage to protective relays and other protective devices [10] for E1 pulses (the shortest duration and most relevant to this question). For small distribution transformers they point to arcing across the power line insulators possibly causing transformer explosion [11]. As for frying electronic equipment, they seem far more concerned with the direct effects of EMP on electronics rather than induced currents in the supply. For a larger (nuclear) EMP than described here, longer lasting ground induced currents could be a problem from the E3 pulse. SpinningSpark 16:05, 2 March 2013 (UTC)
- Thanks, everyone! So, the bottom line is that the EMP attack will cause little or no damage in this case, at most burning out the transformer, right? In that case, here's a second scenario: the bad guys set off a similar device, but this time the total energy is 100 MJ (the highest energy practicable for a man-portable device) and the average power is correspondingly 500 GW, while all the other parameters remain the same. (Also assume that in this case the 34.5 kV power line supplying power to the targeted transformer in turn draws power from a 115-kV line via a substation just a few blocks away, which in turn delivers power from SoCalEd's Rinaldi Street substation that receives power directly from Path 26 -- a real worst-case scenario!) How much mayhem will happen in this second case? 24.23.196.85 (talk) 20:42, 2 March 2013 (UTC)
- Just curious: is this background for a novel, or should you expect a 2 am knock on your door [friendly visit] from the San Jose branch of Homeland Security? ~:74.60.29.141 (talk) 21:26, 2 March 2013 (UTC):~
- Not to worry, this is for a detective novel. In fact, I don't think I could build such a device myself if I wanted to. 24.23.196.85 (talk) 21:47, 2 March 2013 (UTC)
- And in any case, I'm an American patriot, so the LAST thing I would want to do is to cause widespread indiscriminate destruction to my own country's infrastructure! 24.23.196.85 (talk) 21:54, 2 March 2013 (UTC)
I would like to address the claim that "such devices are intended to shunt lightning to earth (where it wants to go), which has absolutely no relevance to any nearby EMP device (which has no need to go to earth)".
To keep things simple, I will assume a 1 kV single-phase two-conductor line with no connection to earth ground.
Assume that your EMP device can cause that 1kV line to jump to 100kV between the two conductors.
Now add a lightning arrester from each line to earth ground that limits the voltage to 2kV. Those lightning arresters would also limit the voltage between the two conductors to 4kV.
In real life there would usually be a third lightning arrester that would limit the line-to-line voltage to 2kV. --Guy Macon (talk) 22:37, 2 March 2013 (UTC)
- But would it have time to activate (the whole pulse is only 200 microseconds long, remember)? 24.23.196.85 (talk) 22:57, 2 March 2013 (UTC)
- [e/c] You might want to consider: line filters, surge protectors and Uninterruptible Power Supply; and check out: The EMP threat: fact, fiction, and response and Section 7 in this PDF which has info specific to your inquiry. ~E:[last modified]74.60.29.141 (talk) 23:59, 2 March 2013 (UTC)
There is a classic joke that goes something like this:
Two men see, in the distance, an angry bear heading towards them. One of the men begins to put on his running shoes. The second man says, "What are you doing? Are you crazy? Bears can run at 30 mph! Your sneakers won’t help you run faster than that bear!" To which the first man replies, "I don’t have to run faster than the bear, I only have to run faster than you."
Your protection circuit does not have to withstand the full voltage of an EMP or lightning strike. It only has to withstand the maximum voltage beyond which an arc forms between the conductors of your wiring. Your protection circuit does not have to be faster than the risetime of an EMP or lightning strike. It only has to be faster than the maximum risetime that the capacitance and inductance of your wiring allows. --Guy Macon (talk) 14:43, 3 March 2013 (UTC)
- That is not right. Any circuit consisting of lumped or distributed inductance, capacitance, and resistance has a response amplitude directly proportional to the excitation voltage. This means that while the risetime, measured from (say) 10% to 90% of final amplitude, is determined by the inductance, capacitance, and resistance, the rate of change is not. The rate of change is directly proportional to excitation amplitude. Therefore, the time taken to reach a given voltage (e.g. breakdown) is inversely proportional to excitation amplitude. The bigger the EMP, the faster the time to reach breakdown. Ratbone 124.182.147.20 (talk) 03:49, 4 March 2013 (UTC)
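Ratbone's point can be illustrated with the simplest first-order (RC) step response; the component values below are arbitrary assumptions, chosen only to show the scaling:

import math

R = 50.0    # assumed source resistance, ohms
C = 1e-9    # assumed lumped capacitance, farads
Vb = 6e3    # assumed breakdown voltage, volts

def time_to_breakdown(V0):
    """Time for V(t) = V0 * (1 - exp(-t/(R*C))) to first reach Vb, for a step of amplitude V0."""
    if V0 <= Vb:
        return float("inf")   # never reaches breakdown
    return -R * C * math.log(1 - Vb / V0)

for V0 in (10e3, 100e3, 1e6):
    print(f"V0 = {V0 / 1e3:>6.0f} kV -> t = {time_to_breakdown(V0) * 1e9:.2f} ns")
# The 10-90% risetime stays ~2.2*R*C regardless of amplitude, but the time to hit Vb
# shrinks roughly as R*C*Vb/V0 once V0 >> Vb, i.e. inversely with excitation amplitude.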
(un-indent) Thanks for the info, everyone. So I gather (especially from the documents provided by 74 IP) that the main danger is to sensitive electronics (computers, cell phones, etc.), while the heavier-duty components that actually handle the power distribution (power-line wires, switchgear, circuit breakers, etc.) will remain intact? I guess then that I'll have to revise quite a number of details: for example, the air won't be "thick with the acrid stench of burning insulation" like I thought it would be; when one of the detectives goes to check the fuse box, he won't see "a mess of half-molten copper" sitting at the bottom; and the transformers at the Orange County Airport substation probably won't be burning all night long like a bonfire. On the other hand, the detectives' cell phones will indeed get fried and might even suffer thermal damage up to and including partial melting of the outer casing (especially if turned on while recharging) -- and it goes without saying that all power to the affected area will get knocked out for up to several days. 24.23.196.85 (talk) 05:54, 4 March 2013 (UTC)
- Where did you get all that nonsense from? Let's take it one item at a time:-
- Effect on home electronics, computers, and the like: Not likely to be affected, as they are isolated from an EMP device to your specification near a distribution transformer by the inductance of the street distribution wires. Nothing supplied by 74 IP refutes this.
- Effect on cell phones: Quite unlikely, as the circuit board and inbuilt antenna are simply not long enough to intercept sufficient EMP field. However, if the EMP device is set off adjacent to a cell phone base station (the little hut next to a cell tower containing the electronics), the cell base could be knocked out by damaging its power supply or the land line connection to the phone company. But bear in mind they have battery backup, and the loss of one base does not prevent phones from working - it only reduces service.
- Effect on power company distribution transformers: If the transformer is a small one, and the street distribution is not underground cabling, then an EMP device within a few meters will damage it, as I showed by simple calculation.
- Will any smells or molten copper be noticed? I doubt it.
- Would power to the affected area be knocked out for several days? Not likely, unless perhaps you are talking about a very backward area in a third world country. The most likely outcome is just a damaged transformer. Most power companies will detect (and in any case customers will ring them up when power fails) and replace a distribution transformer within less than an hour. Only if damage is widespread will they take longer, and even then you are only looking at a few hours' outage at most.
- Note that for the larger distribution transformers, and substations generally, and transformers of any size in an underground network, no damage will accrue from the EMP device you specified.
- You should also note that in the case of significant public infrastructure, such as feeds to hospitals, airports, public utilities, electric railway supplies, and major substations, knocking out one substation will have at most only momentary (eye-blink time) effect on power available to customers, as redundant infrastructure is provided. For example, in my city, the hospitals, the telephone exchanges, the police headquarters, and certain other important buildings, are on the "priority ring" - this is a duplicated high voltage ring fed from 4 major substations. At least two substations must be knocked out (on both busses) to kill power on these rings. No single fault on either ring can kill power on that ring. Two simultaneous faults on a ring can at worst only cut power between the two faults, and the customers still have power from the other ring. You can expect the power company to attend to any priority ring faults immediately, not hours, not days, but immediately. Apart from satisfying service guarantees, quite a lot of revenue, far outstripping the technician's wages, will be lost if a second fault occurs and major customers don't have power. Outside the city, important things like the international airport, hospitals, etc, are fed by two sets of cables from two geographically diverse substations.
- Ratbone 58.170.164.191 (talk) 09:54, 4 March 2013 (UTC)
- So in other words, the damage from non-nuclear EMP will be minimal and localized even in the worst-case scenario, right? In that case, maybe I should scrap this novel altogether and concentrate on my other three titles -- the plot of this one absolutely depends on the EMP causing severe and widespread damage. All right then, if what you're saying is true, then I guess it's back into the cockpit/back into the Forties for me as far as my subject matter. 24.23.196.85 (talk) 03:13, 6 March 2013 (UTC)
- That is your judgement as an author. I am not a great fan of science fiction, being an electrical engineer, though I liked most of Arthur C Clarke's novels - with his own electronic engineer's background, his scenarios were at least plausible. And like all good fiction, his books weren't about scenarios, they were about human behavior. I don't much like science fiction novels where the author obviously has no scientific appreciation, as I find the crook science an annoying distraction. I've noticed though, that my wife (a legal officer) rather enjoys the fiction books I reject on this basis, such as Matthew Reilly's books, which sell very well, absolutely stupid science in them notwithstanding. So your decision. Since you are writing not science fiction but a terrorist/detective novel, can't you just have the bad guys blow up one or more substations with explosives? While the average power company will replace a damaged transformer within an hour or less, and their systems can in many cases automatically reconfigure around a single point of failure, having a few entire substations wrecked with explosives would try them. Ratbone 124.182.44.76 (talk) 04:03, 6 March 2013 (UTC)
- Not really -- the whole storyline revolves around an EMP superweapon that the bad guys steal from a top-secret lab in order to sell it to Pakistan. So if there's no superweapon, there's no story. Besides, the scenario you suggested has already been used by a famous Canadian author who developed it so well that any attempt to reuse it will probably look like a cheap imitation. And BTW, I don't like stupid pseudoscience any more than you do -- hence my obsession with making sure that all the facts check out (and hence my intense loathing for modern sci-fi). 24.23.196.85 (talk) 04:54, 6 March 2013 (UTC)
Force carrier particles in a black hole
If the fundamental forces of nature are carried in particles/waves, as quantum physics maintains that they are, then how are gravitons (for mass) and charge carriers able to exert influence on the outside world, seeing as they cannot escape the black hole? Thus, a black hole should not be able to exert the full amount of force due to its mass or its charge. Magog the Ogre (t • c) 15:45, 2 March 2013 (UTC)
- A (theoretical) graviton is itself massless, so is not affected by gravity. How can it "carry mass" yet itself be massless ? Think of it as carrying information, like if you carry a bankbook with your bank balance written on it, but that doesn't mean you have the actual cash on you. StuRat (talk) 16:38, 2 March 2013 (UTC)
- StuRat, sometimes the answers you give are very useful, but I wish you would stop answering questions where you don't have the slightest idea what you are talking about. Massless particles are indeed affected by gravity, for example photons are redshifted when they pass through a gravitational field. Regarding Magog's question, there is no accepted theory of quantum gravity currently, but if one is ever developed, it will probably have the gravitons created in the region outside the event horizon. The story is clearer for charge carriers. Quantum field theory allows for the creation of an electron-positron pair out of the vacuum, with one of them falling into the black hole and the other escaping to infinity -- this mechanism allows electric field propagation without anything actually escaping across the event horizon. Looie496 (talk) 16:53, 2 March 2013 (UTC)
- To clarify, you mean particles with no rest mass can be affected by gravity, due to the mass they gain due to relativity effects at high speeds, right ? Yes, I neglected to include that (and our article on gravitons really should say it has 0 "rest mass", not just 0 "mass", but I can't figure out how to change it). StuRat (talk) 17:26, 2 March 2013 (UTC)
- They don't gain mass, but they are affected by gravity because space time is curved. Light has a mass of exactly 0, but it cannot escape a black hole due to the curving of space time. Magog the Ogre (t • c) 17:38, 2 March 2013 (UTC)
- But, again, photons have zero rest mass, but do have relativistic mass. StuRat (talk) 20:55, 2 March 2013 (UTC)
- Gravity affects everything equally (equivalence principle); the mass really doesn't matter, whether it's rest mass or relativistic mass. Also, relativistic mass has never been a very useful concept, and isn't usually taught any more. It leads people to think that rapidly moving objects should collapse into black holes, for example, which they certainly don't. -- BenRG (talk) 23:30, 2 March 2013 (UTC)
- So, in theory then, a black hole could have a hugely positive charge, but only the content which lay on the event horizon would be able to affect the outside world in terms of charge? (I realize that incoming material does not fall into a black hole but forever stays on the event horizon... but what about material that was present when the black hole expanded, or material that was at the center of the original star, say a magnetar, when the black hole formed) And what does this say about the speed of gravity? Magog the Ogre (t • c) 17:08, 2 March 2013 (UTC)
- Matter does cross the event horizon. You can't see it cross from outside, because no light from the crossing can ever reach you, but it does cross. The electromagnetic field of a black hole has the same strength as the total field of the matter that went into the black hole, but it doesn't come from inside; it is a "fossil field" (retarded potential) from before the matter crossed the event horizon. The same is true of the gravitational field. This doesn't change (as far as anyone knows) in quantum mechanics; nothing that happens inside the event horizon affects anything outside. As for the virtual particle picture, I think this is just an example of a case where it isn't very helpful (as Count Iblis sort of says below). -- BenRG (talk) 23:30, 2 March 2013 (UTC)
Virtual particles are just mathematical tools to do computations; they are unphysical and don't stick to the equations of motion of real particles. E.g. in the case of Compton scattering, two Feynman diagrams are needed to compute the amplitude. In one of these diagrams, the emitted photon leaves the electron before the photon that is going to be absorbed has arrived. Count Iblis (talk) 18:45, 2 March 2013 (UTC)
Exactly, what a great question .... absolute favorite question!!
LCD panel colours from the side
Why do LCD panels look very red, green or blue from the side? Clover345 (talk) 17:03, 2 March 2013 (UTC)
- To clarify, you aren't asking about those LCD monitors with "ambient display" (that is, they intentionally shine lights out the sides), right ? StuRat (talk) 17:38, 2 March 2013 (UTC)
- No, I mean LCD TVs, PC monitors, phones etc. When you look at them side-on or at an angle, dark screens mostly seem to change colour to red, blue or green depending on the display. Clover345 (talk) 17:48, 2 March 2013 (UTC)
- LCDs, in particular, seem to only provide a good image over a limited angle (both up-down and right-left). Apparently some of the colors can be viewed over a somewhat wider angle than others, giving the color mismatch you describe at the fringe. I'm not sure why this is, though. Could it be a prism effect ? StuRat (talk) 17:51, 2 March 2013 (UTC)
- The way I understand it (no ref, may not be accurate...) is that the LCD masks over the subpixels only look right when viewed from the right angles. Otherwise, they mask out adjacent subpixels, switching which color they seem to block from your viewpoint. 38.111.64.107 (talk) 13:23, 4 March 2013 (UTC)
how practical are diy 3d laser scanners for specific industrial applications?
I'd like to understand whether I could expect to be able to cheaply hand-assemble a relatively real-time 3D recognition engine with cheap DIY parts. (Relatively real-time meaning, I don't know, a 200-500 ms delay or whatever. Cheap means under $100.) My problem though is that I don't know the fundamentals of the different possible 3D recognition approaches!!!
I know of several:
-> stereoscopy, meaning two cameras a known distance apart. You match pixels between the two images, then work out depth from how far each matched feature shifts (the disparity), given the known separation between the cameras (a rough numeric sketch follows below this question)
-> then I know there is "structured light", meaning you shine some light like a straight line or something, and from how it deforms you can tell the shape of the object it hit
-> there is even 3D from motion processing! (parallax).
how practical are any of these in a diy 3d project? Thanks!! 86.101.32.82 (talk) 19:27, 2 March 2013 (UTC)
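For a feel for the stereoscopy option, here is a minimal sketch of the standard rectified pinhole-camera disparity-to-depth relation; every number in it is a made-up assumption, not a hardware recommendation:

# Depth from stereo disparity for an ideal rectified pinhole camera pair:
#   Z = f * B / d
# f = focal length in pixels, B = baseline (camera separation), d = disparity
# (the pixel shift of the same feature between the two images).

f_px = 800.0         # assumed focal length in pixels
baseline_m = 0.10    # assumed 10 cm between the two cameras
disparity_px = 20.0  # assumed measured pixel shift for some matched feature

depth_m = f_px * baseline_m / disparity_px
print(f"Estimated depth: {depth_m:.2f} m")   # 4.0 m for these made-up numbers

# One pixel of disparity error changes the estimate by roughly Z^2 / (f * B),
# which is why a wider baseline or longer focal length improves depth precision,
# at the cost of making the pixel matching harder.
error_per_px = depth_m ** 2 / (f_px * baseline_m)
print(f"Depth change per pixel of disparity error: ~{error_per_px:.2f} m")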
- I think we'll need a more detailed description of the application before we can help. As described, determining the position of the user's fingers on a table is a 2D rather than a 3D problem, and could be done with a single camera; or, indeed, you could use a touchpad and not require any optical sensors at all. Why do you need 3D? Tevildo (talk) 21:13, 2 March 2013 (UTC)
- The application is of a resolution like trying to recognize the relative positions of hot dogs on a conveyor belt in 3D (so you can represent it as some cylinders in a coordinate system - this is about recognizing the xyz values of the ends of those cylinders) -- it doesn't really matter as this is a generic DIY project. this is just to indicate the level of granularity: i.e. we are not scanning a statue in mm resolution or anything like that, nor is it about a landscape or room or something like that. Basically, I am more interested in what kind of results I could get from this kind of DIY project and, importantly, what the best approach is. I'm unclear about even basic pros/cons, such as whether polyscopy (more cameras) would help, whether a greater distance between cameras would give me a better result trying stereoscopy, and so forth. Basically, I'd like basic references geared toward the practicalities I've mentioned. Thanks! 86.101.32.82 (talk) 23:29, 2 March 2013 (UTC)
- See 3D scanner, Laser scanning, and Structured-light 3D scanner for our articles on the subject. A quick Google search on "laser scanner" comes up with lots of out-of-the box solutions at around the $300 mark, so a DIY system for $100 might be feasible. Tevildo (talk) 23:42, 2 March 2013 (UTC)
- This is a fairly common conveyor belt application and the usual, and much simpler, solution is electronic detection using either a radar technique or distortion of an AC magnetic field. Since hot dogs have both a moisture content and ingredients that have a dielectric constant very different to air, detection is simple and reliable. Ratbone 124.178.153.245 (talk) 02:05, 3 March 2013 (UTC)
that's just the thing though
- http://en.wikipedia.org/wiki/Laser_scanning
- http://en.wikipedia.org/wiki/Structured-light_3D_scanner
- http://www.google.com/search?q=stereoscopy+depth+recognition (where is our specific article)?
- the just mentioned: http://en.wikipedia.org/wiki/3D_radar
- http://en.wikipedia.org/wiki/Structure_from_motion
There are so many techniques!! I would like references, if possible, to help me compare them on a practical level. I realize there are "many ways of doing this" - so, please, what are the pros and cons? (In a mobile device that can detect fingers and the like). 86.101.32.82 (talk) 09:54, 3 March 2013 (UTC)
- This site might be a better place to ask, although I would suspect their first question will be "What specific application do you have in mind?" (See this thread on the same forum for an example). Tevildo (talk) 16:11, 3 March 2013 (UTC)
- The Kinect is an interesting 3D scanning device. It projects an IR pattern over the area and images it, along with taking color images, and is designed for gaming so it is real-time. Microsoft has a free SDK available for developing PC software that uses it. It isn't laser scanning, and I don't know if it is right for your application, but it seems a great entry-level way to get real-time 3d imaging. 38.111.64.107 (talk) 13:09, 4 March 2013 (UTC)
- The problem with the Kinect in some of these applications is that the IR emitter and the camera that goes with it are quite far apart. This produces good results for large objects at long ranges - but starts to cause real difficulties for small objects at close ranges. The result is generally that you have to keep even small objects quite a distance from the Kinect - and then the poor spatial precision starts to become an issue. A new version of the Kinect is due to appear soon, it's claimed to be a big improvement in this specific situation, but it remains to be seen how good it really is. SteveBaker (talk) 14:37, 4 March 2013 (UTC)
- For the literal case of hotdogs on a conveyor belt, I'd get the area as dark as you can and generate a laser line (shining a laser pointer into an appropriate lens works well - I used the guts of a discarded hand-held bar-code reader) pointing down onto the conveyor from vertically above. Use a regular digital camera to look at that line from a few inches away from the laser pointer along the length of the conveyor. What it sees is a straight line when there is nothing on the belt and a "cross-section" of the 3D shape when something intersects that line. If you know how fast the conveyor is moving, you'll get a bunch of cross-sections at known separations - each of which will have the shape of half of an ellipse. From that you can compute approximate elliptical cross-sections for each object as it crosses the laser line. The ratio of the diameters of those semi-ellipses gives you a hint as to the orientation of the object on the belt - which you can strengthen by stitching together multiple cross-sections over time and averaging to get the true orientation of the cylinder. It should be easily possible to do this in realtime - even at fairly fast belt speeds.
- Total hardware cost should be around $40. Lots of software effort needed though.
- SteveBaker (talk) 14:37, 4 March 2013 (UTC)
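A minimal sketch of the triangulation step in the setup Steve describes, assuming a vertical laser sheet viewed by a camera at a known angle; the angle and the calibration factor are illustrative assumptions:

import math

camera_angle_deg = 30.0   # assumed angle between the camera's line of sight and the laser sheet
mm_per_pixel = 0.5        # assumed calibration: belt-plane millimetres per image pixel

def height_from_shift(shift_px):
    """Convert the sideways shift of the laser line in the image (pixels) to object height (mm)."""
    shift_mm = shift_px * mm_per_pixel
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: with this geometry a 23-pixel shift corresponds to roughly 20 mm of height.
print(f"{height_from_shift(23):.1f} mm")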
Pump & water question
Say you have a water pump with two independent pipes of the same diameter coming off of it. The pipes go up about 50 feet and each has a valve at the end. Pressure at the pump is 37 psi for both pipes, and 22 psi at their ends. Opening the valve for one pipe changes the pressure in the second pipe by very little - say 0.5 psi.
If you were to join these two pipes with a "T" fitting (same internal diameter as the 2 pipes), and read the pressure coming out of the "T", would you expect it to read 22 psi, 37 psi, or something in between?
(No this isn't a homework question - I'm hoping to come up with a clever solution for a real plumbing problem.)
Thanks! 142.68.14.116 (talk) 20:43, 2 March 2013 (UTC)
- It will be somewhere in between, decreasing linearly with height. Are you trying to improve the plumbing in a high-rise condo? 24.23.196.85 (talk) 20:47, 2 March 2013 (UTC)
- Why not tell us what the real plumbing problem is ? StuRat (talk) 20:50, 2 March 2013 (UTC)
- Similar - trying to make a toilet flush on the bridge of a ship. I'm guessing the linear component would be .433 psi/foot due to gravity, plus some small friction-type losses? The toilet requires 35 psi of water pressure for optimal operation; it's getting 22. 142.68.14.116 (talk) 20:53, 2 March 2013 (UTC)
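For the figures in this thread, a quick hydrostatic sketch (fresh water assumed; friction losses under flow would eat a little more):

psi_per_ft = 0.433   # hydrostatic head of fresh water per foot of rise
pump_psi = 37.0
top_psi = 22.0

# Rise implied by the two static readings quoted in the question:
implied_rise_ft = (pump_psi - top_psi) / psi_per_ft
print(f"Implied rise: ~{implied_rise_ft:.0f} ft")   # ~35 ft

# Pressure available at a given height above the pump, ignoring friction:
def pressure_at_height(h_ft, source_psi=pump_psi):
    return source_psi - h_ft * psi_per_ft

print(f"At 4.6 ft above the pump: ~{pressure_at_height(4.6):.1f} psi")   # ~35 psi, the toilet's requirement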
- Simplest solution, add a tank above the toilet. This approach is used for most land toilets, as it doesn't require much water pressure. Other than that, you'd need another pump, either replacing the current one with a more powerful pump, or adding a supplementary pump. Either sounds more complex than just adding a tank, to me. A tank will make the ship somewhat more top-heavy, but I can't imagine this tiny difference being significant. StuRat (talk) 20:57, 2 March 2013 (UTC)
- StuRat, we're talking about this kind of tank, not that one. ;-) 24.23.196.85 (talk) 21:13, 2 March 2013 (UTC)
- Tanks for clarification. :-) StuRat (talk) 02:08, 4 March 2013 (UTC)
- With similar problems involving equalizing pressure differentials in racing engines (oil/fuel), the solution involves the use of pressure regulators (adjustable or static). (I.e.: decrease pressure to one fork of 'T' to increase pressure on other) ~E:74.60.29.141 (talk) 21:45, 2 March 2013 (UTC)
- The only way I can see that working is to close off one of the pipes, sending all of its water pressure into the other pipe. That should increase pressure somewhat, but I doubt if it would go from 22 to 37 PSI. This is basically the same as reducing the diameter of a single pipe to increase pressure. StuRat (talk) 05:28, 3 March 2013 (UTC)
- Also, we should clarify. Atmospheric pressure is around 15 PSI, so, when you say 22 PSI and 37 PSI, do you mean that much pressure beyond atmospheric pressure (so 37 and 53 PSI, total) ?
- Bernoulli's equation is easy, reliable and covers all questions and solutions on pressure in pipes and more. --Kharon (talk) 13:18, 3 March 2013 (UTC)
- Considering that this question is about pressure drop along pipes and T-junctions, where the flow rate is a non-linear non-monotonic function of pressure drop, and a decidedly non-linear function of pipe diameter (4th power for streamline flow; 19/7-th power for turbulent flow) depending on the Reynolds number, please explain how Bernoulli's equation, which models the pressure as a linear function of flow rate, can be used to cover this question. Please explain how Bernoulli can explain the comparatively very high pressure drop in T-junctions and 90 degree pipe bends. Ratbone 124.182.147.20 (talk) 03:32, 4 March 2013 (UTC)
- If the pump is only producing 37 psi then it looks like 35 psi is only obtainable up to 4 feet or so higher than the pump. A DC pump the size of a mango will only go to about 60 feet by itself, and not pump a large enough volume fast enough. I suspect you really want a certain volume of water at the toilet within a short time for a good flush. Polypipe Wrangler (talk) 22:26, 5 March 2013 (UTC)
March 3
Bell's inequality
Is the following statement about the experiment used in Bell's inequality correct?
We have a pair of entangled particles. One particle encounters a detector with a polarizer at some angle and one of two things happens (1)The particle is measured collapsing the wave function and leaving the other particle in the correlated state, or (2) the particle is not measured but is absorbed, breaking the entanglement and leaving the other particle in an unknown state.
(In addition to whether it is correct, should it end with the words 'unknown state' or 'random state'?)
Thank you. RJFJR (talk) 00:11, 3 March 2013 (UTC)
- It's a measurement whether the particle is absorbed by the polarizer or not. If it's not absorbed it's an interaction-free measurement, but it doesn't matter—the particle states are correlated either way. Also, you don't get a nonclassical result from a single pair of entangled particles or with a single kind of polarization measurement. You need to repeat the experiment many times, randomly reorienting the polarizer each time (just two orientations are enough). -- BenRG (talk) 01:41, 3 March 2013 (UTC)
- There's been more than one experiment measuring Bell's inequality. See Bell test experiments. In the "typical" two-channel experiment, a coincidence counter only activates if both entangled particles got through their respective polarizers. If one photon got absorbed, it simply isn't counted. --140.180.251.41 (talk) 22:52, 3 March 2013 (UTC)
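To make the "two orientations each" point concrete, here is a minimal numerical sketch of the CHSH form of Bell's inequality, using the textbook correlation E(a,b) = cos 2(a-b) for polarization-entangled photons and the standard angle choices (nothing here is specific to any particular experiment):

import math

def E(a_deg, b_deg):
    """Quantum-mechanical correlation for polarization-entangled photon pairs."""
    return math.cos(math.radians(2 * (a_deg - b_deg)))

# Standard CHSH settings: each side chooses randomly between its two polarizer angles.
a, a2 = 0.0, 45.0
b, b2 = 22.5, 67.5

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(f"S = {S:.3f}")   # ~2.828 = 2*sqrt(2); any local hidden-variable model obeys |S| <= 2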
How accurately can we predict the properties of an element and its compound by trends from other elements of a series? How to prove it?
- I am just curious whether Mendeleev's method (if any) of predicting the chemical and physical properties of elements such as germanium does apply to other elements. I am currently confused about some original research issues on francium and francium hydroxide.--Inspector (talk) 03:02, 3 March 2013 (UTC)
- I don't really understand the question, but for anything it could possibly mean, the answer is yes. I suspect you're looking for more information than that, but you'll have to clarify what you really want to know. Looie496 (talk) 03:08, 3 March 2013 (UTC)
- I'm not sure how to quantify how accurately the periodic table predicts properties. How does 9 out of 10 prediction units sound ? StuRat (talk) 03:39, 3 March 2013 (UTC)
- Let's say: by which method did Mendeleev come up with the prediction for germanium and its compounds? Just simple linear regression? Can we apply such methods to other elements that are undiscovered or too unstable to experiment on? Is it original research to make such predictions?--Inspector (talk) 05:50, 3 March 2013 (UTC)
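As an illustration of the crudest version of the interpolation Mendeleev used for eka-silicon (germanium), here is a sketch that simply averages the corresponding properties of its group-14 neighbours silicon and tin (values rounded):

# Approximate property values for germanium's vertical neighbours in group 14.
si = {"atomic_weight": 28.09, "density_g_cm3": 2.33}
sn = {"atomic_weight": 118.71, "density_g_cm3": 7.27}
ge_actual = {"atomic_weight": 72.63, "density_g_cm3": 5.32}

for prop in ("atomic_weight", "density_g_cm3"):
    predicted = (si[prop] + sn[prop]) / 2
    print(f"{prop}: interpolated {predicted:.2f}, actual {ge_actual[prop]:.2f}")
# Atomic weight: ~73.4 vs 72.63; density: ~4.8 vs 5.32 (Mendeleev himself predicted about 5.5).

Mendeleev's actual predictions also drew on horizontal trends and chemical analogy rather than just a straight vertical average, which is part of why they came out better than this toy version.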
Magnet strength
You have two magnets, same size and same strength. Are they stronger when they are repelling each other rather than attracting each other? My tests show they are stronger when repelling each other. ```` — Preceding unsigned comment added by Westfall272 (talk • contribs) 04:50, 3 March 2013 (UTC)
- I assume you mean "repelling" each other. Well, the force varies dramatically with the distance, and you would probably place them in contact to test the repelling force, so that would then seem higher. StuRat (talk) 04:52, 3 March 2013 (UTC)
if they was .015 apart would repelling still be stronger ```` — Preceding unsigned comment added by Westfall272 (talk • contribs) 05:03, 3 March 2013 (UTC)
- .015 what? Evanh2008 (talk|contribs) 05:05, 3 March 2013 (UTC)
15 thousands apart — Preceding unsigned comment added by Westfall272 (talk • contribs) 05:10, 3 March 2013 (UTC)
- I know how to read fractions, yes. I have no idea what it is a fraction of in this context. Millimetres? Inches? Centimetres? Parsecs? Evanh2008 (talk|contribs) 05:14, 3 March 2013 (UTC)
- I think this "clarification" was hilarious :). 86.101.32.82 (talk) 06:08, 3 March 2013 (UTC)
- Yes, like when Sheldon Cooper requested alcohol from the bartender, and was asked which type, to which he responded "ethyl alcohol". StuRat (talk) 16:40, 3 March 2013 (UTC)
.015 of a inch````` — Preceding unsigned comment added by 4.131.77.179 (talk) 06:01, 3 March 2013 (UTC)
- What's with all the digressions? If they're the same distance apart, then the forces should be the same strength, just in opposite directions. Clarityfiend (talk) 07:20, 3 March 2013 (UTC)
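- To put a formula behind the distance point (an idealized sketch that treats both magnets as point dipoles aligned on a common axis, which is only a rough model for real magnets at a 0.015-inch gap): the force between them is
  \[ F = \frac{3\mu_0 m_1 m_2}{2\pi r^4}, \]
  falling off as the fourth power of the separation r. Attraction and repulsion at the same separation have the same magnitude, just opposite sign, so any measured difference is more likely down to how well the gap and alignment were controlled in the test.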
This is the hardest I've laughed looking at this board. Had to comment. – Kerαunoςcopia◁galaxies 08:56, 5 March 2013 (UTC)
"This machine destroys EVERYTHING"
[12] What is this machine called, and what is its normal diet? — Preceding unsigned comment added by 75.35.96.60 (talk) 09:13, 3 March 2013 (UTC)
- An industrial shredder - specifically, one intended for processing industrial solid waste. Horselover Frost (talk · edits) 09:58, 3 March 2013 (UTC)
- Lots more videos here. The one doing refrigerators is pretty cool. SpinningSpark 10:00, 3 March 2013 (UTC)
- So, what would happen if you fed one of those machines into another? ←Baseball Bugs What's up, Doc? carrots→ 23:03, 3 March 2013 (UTC)
- Feeding you into one would be much more entertaining. SpinningSpark 23:41, 4 March 2013 (UTC)
- Actually, this is a good question. It is the basis of the Chinese term "矛盾" - see our article Irresistible force paradox. — Sebastian 00:27, 5 March 2013 (UTC)
- @Baseball Bugs: check out March 2008 Robinh (talk) 02:32, 5 March 2013 (UTC)
How long will a brass key retain functionality if used as a doorknob?
The door requires high force to open, enough to deform a well-built ~1.5mm gauge 2-turn key coil, but elastically. So what is that, a couple dozen pounds? It's efficient and allows one-handed entry, but something tells me this was outside its intended use. Sagittarian Milky Way (talk) 13:26, 3 March 2013 (UTC)
- If it's truly within the elastic deformation range (no plastic deformation), then you're talking about eventual failure from metal fatigue. The closer it is to the plastic deformation range, the fewer cycles it can withstand before fracture (could be thousands, millions, billions, or trillions of cycles). Unfortunately, the micro-cracks which lead to fracture can't always be spotted before failure. And, when it does fail, it might break off in the lock, requiring a locksmith to fish it out. And, it's more likely to break off on the coldest day of the year, due to metal being more brittle at low temps (not to mention Murphy's Law). So, I recommend getting a stronger key, and putting the brass key in your wallet as a backup. StuRat (talk) 16:32, 3 March 2013 (UTC)
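- For a sense of the numbers behind that (a textbook sketch; the actual load on a key twisted in a lock is hard to estimate): high-cycle fatigue life is often summarized by Basquin's relation,
  \[ \sigma_a = \sigma_f' \, (2N_f)^b, \]
  where \( \sigma_a \) is the stress amplitude, \( N_f \) the number of cycles to failure, \( \sigma_f' \) the fatigue strength coefficient, and b a small negative exponent (roughly -0.05 to -0.12 for many metals). Because b is small, a modest increase in how hard the key is flexed each time can cut its fatigue life by orders of magnitude, which is the point about staying well inside the elastic range.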
- Most keys seem to be made of brass, though some have a silvery plating. The lock might need lubrication (WD-40 or a graphite product), adjustment so binding is corrected, or replacement if worn out or damaged. A new lock can generally be keyed like the old one if that is desirable. Edison (talk) 02:21, 4 March 2013 (UTC)
followup aerodynamics question
(On a purely theoretical level, just to help me understand aerodynamics).
If we take a normal glider wing that is somehow in sections, and manually (for the sake of argument) rotate and fix the sections to have different angles of attack, and perhaps push the start of the wing to be a bit away from the center (as in the designs here - http://www.youtube.com/watch?v=MkZ2bTWvRns - where the wings are held away from the center of turn), then is the new wing something like optimal for a helicopter rotor? Or are there STILL going to be issues that make it completely wrong on a theoretical aerodynamic basis? (Not a practical basis - I'm not even suggesting a specific mechanism for turning the angle of attack of the different sections, nor do I have one in mind). Just trying to understand aerodynamics. If there are still issues, what are they? I am just trying to increase my theoretical understanding, thank you. 86.101.32.82 (talk) 16:57, 3 March 2013 (UTC)
- Most aircraft already have a varying angle of attack to improve stall behavior. This causes the stall to form at the wing root first, improving controllability and recovery at the stall onset. By now, you know where to look for more information on aerodynamics fundamentals, right? The Pilot's Handbook of Aeronautical Knowledge is available for free from the FAA - http://www.faa.gov/regulations_policies/handbooks_manuals/ - or you can buy a hard copy for about thirty dollars. If you need a more advanced aerodynamics textbook, I'm sure we can recommend some more. Nimur (talk) 22:00, 3 March 2013 (UTC)
- Perhaps you have misunderstood. I am asking about varying it from one moment to another moment possibly a few minutes later, when it should be in a different configuration. (Kind of like how the wing is in sections here: http://en.wikipedia.org/wiki/Boeing_X-53_Active_Aeroelastic_Wing if I have interpreted that photo correctly). I couldn't find more information about that photo. So, this is a very esoteric question I'm asking about, and not at all the typical change in AoA from tip to stem of a fixed wing. Rather, about the possibility of changing between two radically different "fixed" states (perhaps over the course of a few minutes) where one is like a glider, and the other state is like a pair of helicopter rotors, meaning that AoA angle is radically different between stem and tip in the latter configuration, not to mention that one of the wings is now facing the due opposite direction! 86.101.32.82 (talk) 22:10, 3 March 2013 (UTC)
- Note: I think I misinterpreted that photo. If you click on it and zoom in (direct link: https://upload.wikimedia.org/wikipedia/commons/5/57/EC03-0039-1.jpg ), it looks like it's just stripes painted on, not movable sections. Well, imagine it's movable sections instead... 86.101.32.82 (talk) 22:12, 3 March 2013 (UTC)
- What is your question, exactly? You asked whether there are "issues." Of course there are issues - as with any engineered system, there are tradeoffs, including weight, reliability, aerodynamic efficiency... a full enumeration of all issues is covered in our article on airfoils. Using today's technology, we can build sophisticated machines that deform in a variety of interesting ways, and we could build an airfoil that does almost anything. But, we typically build the airfoil that falls in our design envelope - trading against the lightest, safest, cheapest, and most reliable design. Usually, lightest- and cheapest- and safest- are all far more important criteria than "most aerodynamically efficient." Your proposal vaguely suggests that if we made a very sophisticated design and analyzed its performance, we might improve efficiency; I'm sure we could derive from first principles and solve airfoil equations for your proposed idea; but that won't solve the problems that aeronautical engineers need solved in this decade. (You don't see a lot of deformable-wing aircraft flying around that just need optimizations for better deformable-wing performance!) So, what exactly is your question? Are you just seeking mathematical modeling techniques for analyzing airfoil performance in the general case? Or do you want a specific run-down of the top problems for an unspecified design that does not currently exist, and is only vaguely specified by your question? Nimur (talk) 22:29, 3 March 2013 (UTC)
- Lightest, cheapest, safest and so forth might be more important criteria than "most aerodynamically efficient", but those are not the issues of interest to me at the moment. As I hoped to mention, my only real aim is to improve my personal understanding of aerodynamics. As such, the trade-offs that go into actual systems are of less interest to me, though of course I am happy to read about those as well. I am not really interested in improving the efficiency of existing systems or anything like that: there is no practical interest inherent in my question, and no interest in actual design constraints faced by aeronautical engineers this decade or any other.
- In all, I would say the characteristics / run-down of the top problems for a partially specified hypothetical model that does not exist (nor has reason to exist, nor is practical, nor is posited to be practical, in the sense that I am not requiring the wing to be able to support the weight even of its power plant, nor asking about how that might be possible). It really is exactly as you say: trying to improve my understanding of general principles through this mental exercise.
- So, the question is, given an attempt to turn external torque into lift, where does a glider that is posited to be able to deform in segments to approximate the shape of a helicopter rotor (possibly pushed away from the center of rotation as in the videos here http://www.youtube.com/watch?v=MkZ2bTWvRns ) meet its primary problems? What are the primary characteristics of such a deformation?
- Would we need to imagine something like a T-1000 shapeshifting material if we wanted to talk about an efficient wing shape - or would turning sections of the wing approximate the correct new shape?
- "shape memory alloys" exist. If we somehow posit that by applying or removing an electric field, a wing would deform or return to one of two configurations, but obviously the weight and volume of metal had to remain the same, then could the same material be a good glider wing and good helicopter rotor?
- Basically I'm trying to understand the relationship between these two things, but positing a way to turn between the two by turning sections of a wing through some unspecified mechanism (that I don't intend to discover): in this case (with this conceit) could both shapes be close to optimal for generating lift, as a glider (very large glide ratio) and helicopter (great lift from rotation) respectively? Or what "issues" are there, and so forth. Just help me understand some basics. 86.101.32.82 (talk) 22:48, 3 March 2013 (UTC)
- The problem Gamera's airfoil faces is that no material is known to exist that is both very light and able to sustain very high dynamic loads, particularly shear stress and bending moment. Because no such material exists that meets the required parameters, the team had to design a complicated truss structure - summarized in their Rotor Blade Fact Sheet. If the power source were something other than a human, the inputs to the equations would be different, and the required parameters would be different: a different rotation speed and torque would be available; and so a different aerodynamic loading would be present; and a different gross takeoff weight would be possible. If so, the team could use aluminum or metallic blades - as does a conventional helicopter - with corresponding reasonable sizes and weights - and these stress and strain issues would not be problematic. Hopefully this helps explain why their aircraft looks so unconventional. Nimur (talk) 23:09, 3 March 2013 (UTC)
More on the tradeoff between weight and stiffness: Gamera Structures, part of the full project publication listing. Nimur (talk) 23:12, 3 March 2013 (UTC)
Can you talk about this inflatable glider?
Can you talk about this inflatable glider, http://www.eaa.org/apps/galleries/gallery.aspx?ID=305&p=2 - as we do not have an article on it? It is a glider called "woody jump". How does an inflatable glider work in theory?
Aren't the stresses that a glider meets something that an inflatable design is totally inappropriate for? If not, then could you talk about the theoretical possibility of my previous question in terms of a rubber outside shell that snaps shut for one configuration (either helicopter rotor or glider, whichever is the profile that fits inside the other) or can be inflated to change into the larger-cross-section optimal shape (helicopter rotor or glider)?
This is not intended to be practical or anything like that. I ask these questions simply to gain understanding of fundamental principles of aerodynamics. Thank you. 86.101.32.82 (talk) 17:07, 3 March 2013 (UTC)
- That's quite a light design, so the stresses are much less than in, say, a metal plane. However, inflatable wings can handle more stress than you might think, provided they have the proper cross-members. Those aren't just big hollow balloons, I bet, but rather separate chambers. And remember that the very first planes were made of canvas, string, and a bit of wood, so not all that strong, either. But, during heavy winds, you should avoid flying either those early planes or the inflatable ones. I take it it's designed for hobbyists, not serious flyers. StuRat (talk) 17:35, 3 March 2013 (UTC)
- So, what's the worst that happens in strong wind? And without the strong wind - can the form of the inflated wing be optimal? (for completely still wind)? Or is it still going to be worse for some reason than a real fixed wing? 86.101.32.82 (talk) 17:59, 3 March 2013 (UTC)
- In a strong wind the glider would be difficult to control, and the wings might rip apart. It is difficult to get thin wings with such an inflatable design. However, the reduced lift from having short, fat wings is compensated for by the lower weight. Most lighter-than-air (or nearly so) craft have similar trade-offs.
- The flexibility of inflatable wings also provides the possibility for wing warping, but I'm not sure if this design takes advantage of that. StuRat (talk) 18:24, 3 March 2013 (UTC)
- But who says "short, fat wings"? I mean, in the specific design I asked you about, we have a long wing that is a normal wing (either a normal pair of helicopter rotors or a normal glider wing) with a rubber shell. You can blow up the rubber shell (and possibly slightly turn the whole wing) to take on a new shape. Who says that has to be short and fat? Especially given that it can be several separate components blown up along the length of the wing like this (-)(-)(-) (cross-section). I realize this is not practical in the slightest, but theoretically, what's wrong with the new long, inflated wings? 86.101.32.82 (talk) 20:01, 3 March 2013 (UTC)
- They wouldn't be strong enough to support the lift if they were long and thin. If you add heavier materials to support them, then you lose the whole advantage of light, inflatable wings. One thing I might change is making the inflatable wings clear, with a black surface in the center, so that solar heating would make them warmer, and thus lighter. Of course, they would need to be flexible enough to allow this expansion. StuRat (talk) 20:18, 3 March 2013 (UTC)
- I don't understand what you're referring to when you say "One thing I might change"... you mean on the Woody Jump? Surely the volume-to-surface-area ratio is so small that, even if it were totally empty inside, the lift would be negligible. I don't think heating it up (which reduces the density of the air inside) makes much of a difference, and neither would filling it with helium etc.; it's just not big enough. (I think.)
- But secondly, where you say that "they wouldn't be strong enough to support the lift if they were long and thin" - how do you know, I mean if there is a rigid structure inside? (as in my example) For example, the rigid structure could specifically be joined to the surface by as many cross-members as you need that are also inflated. We're not talking about one long rigid inflated wing that has to keep rigid through its inflation, but rather an inflatable portion over a fixed wing. That means it can be joined by supports/cross-members wherever you want. In this case, do you think there could be a viable design that is firstly a fixed wing with a rubbery uninflated portion over it, but if inflated, the rubbery portion would expand AND be supported by cross-members? Or would this not work for some theoretical reason you can tell me now? Thanks. This is all just theory, no intention to build anything like this. 86.101.32.82 (talk) 20:56, 3 March 2013 (UTC)
- I wouldn't expect the lift from hot air inside to be much, but enough to justify the design change (a few ounces make a difference on a glider). Filling it with helium would also help a bit, but that gets expensive quickly, since you can expect it to leak out over time, so you'd need to top it off periodically. Alternatively, you could try to use a compressor to suck it back out after each flight, but you'd only get so much back.
- As for a design with rigid wings that have balloon surfaces attached, what's the advantage there over just a fixed wing glider ? The whole point of making it out of inflatable parts is to avoid the weight from rigid wings (with another benefit being that it's more portable when uninflated). StuRat (talk) 21:13, 3 March 2013 (UTC)
- StuRat, in practical terms my whole line of questioning is simply meant to increase my theoretical understanding of aerodynamics, so don't read anything more into it - no intention to build this or anything like that. So my question about "As for a design with rigid wings that have balloon surfaces attached," is whether it could be used as a theoretical framework for my exercise that I raised earlier: http://en.wikipedia.org/wiki/Wikipedia:Reference_desk/Science#second_theoretical_question_about_aerodynamics . Basically, as a theoretical question, I want to understand if starting with a long glider wing, and then selectively changing the angle of attack in sections along the length of the wing, and possibly pushing the whole thing off of center, we could build an aerodynamically 'efficient' helicopter rotor. It's not intended to really be practical or anything like that. Just "aerodynamically efficient". Any thoughts? 86.101.32.82 (talk) 21:33, 3 March 2013 (UTC)
- I'm afraid I don't know much about helicopter design, which is why I didn't answer that Q. Hopefully somebody who does know about it will answer soon. StuRat (talk) 21:53, 3 March 2013 (UTC)
- There is something else I can add about gliders, though. You can also place solar panels on top, and use those to power small props. While this isn't strictly a glider, it doesn't require any fuel. Here's several NASA built: NASA_Pathfinder (they also had batteries, so they could fly at night). I see no reason why you couldn't combine an inflatable design with flexible solar panels and a small prop, to extend flight time more. StuRat (talk) 22:03, 3 March 2013 (UTC)
- Well, let's not complicate the question by adding solar panels, which are quite heavy. I think it's a quite appropriate question to ask: if the power source, motor, etc. didn't even have to be on the plane, but you received a direct line of mechanical power (compressed air, whatever), then in terms of simple aerodynamic efficiency, could a glider wing turn in sections and inflate to become an efficient helicopter rotor? (possibly being pushed off a bit away from center). Or if not, why not? I realize we will have to let someone else answer. 86.101.32.82 (talk) 22:15, 3 March 2013 (UTC)
I recently had the privilege of getting up close to some Super Cobras. My first impression was, "wow, the rotor blades are the size of my Citabria wings!" (You can check the actual dimensions to be precise; this was just a first-impression reaction). My point is, some helicopter blades already look a lot like a glider wing - some even look like they've got the aspect ratio of Citabria wings. But, the thing to remember is that helicopter blades, fan blades, propeller blades, and fixed wings are all just airfoils. And when we talk about efficiency of these airfoils, we have to be careful to define "efficiency": the method we use to evaluate an airfoil's efficiency depends on what we want to use the airfoil for. A glider is intended to have a very very very high lift-to-drag ratio, so that the aircraft maximizes its glide slope. A Citabria wing is designed to provide high lift at low speed; this means that it has a poor lift-to-drag ratio (inefficient!) in comparison, but it also means I can get my aircraft off the ground before I hit the numbers (the POH lists a takeoff roll of 340 feet at gross takeoff weight, and you can find videos of Citabrias taking off or landing with ten-foot rolls in strong headwinds!) A helicopter rotor is different from both of these airfoils; it is designed to provide lift and remain stable while the helicopter is operating in its normal flight envelope. Moral of the story: "efficient" isn't good enough to summarize airfoil performance. Efficiency is a good summary for Carnot engines, but not for aircraft performance. You need a lot more numbers: including, but not limited to, stall speed, wing loading, lift to drag ratio, drag coefficient, ... and of course, it will help to sweep each of these parameters across a variety of conditions: atmospheric conditions like density altitude; attitude (angle of attack, including non-ideal attitudes like rolls, especially if you care about stall/spin performance); and of course, reliability, safety, weight, and so on. If aerodynamics were reducible to a single parameter, aerodynamical engineers wouldn't have earned a reputation for solving some of the hardest mathematical and engineering design problems ever surmounted! Nimur (talk) 22:54, 3 March 2013 (UTC)
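- To make the "define efficiency" point concrete (standard definitions from basic aerodynamics, not figures for any of the aircraft named above): lift and drag are usually written
  \[ L = \tfrac{1}{2}\rho V^2 S C_L, \qquad D = \tfrac{1}{2}\rho V^2 S C_D, \]
  and for an unpowered aircraft the best glide ratio equals the maximum lift-to-drag ratio \( L/D = C_L/C_D \). A glider designer chases that single ratio; a rotor designer instead has to manage lift along the blade, where the local airspeed \( V \) runs from near zero at the root to very high at the tip, which is one reason a rotor blade wants twist (a different angle of attack along its length) rather than the roughly constant setting of a glider wing.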
Note: the name "woody jump" is incorrect. Laurent de Kalbermatten of Switzerland has developed two inflatab;e wing fliers; the "Woopy-Jump" hang glider and the "Woopy-Fly" ultralight. See [ http://woopyjump.com/ ] A Google search on those two terms will give you lots of information. --Guy Macon (talk) 12:06, 4 March 2013 (UTC)
- Also note that Woopy has an internal aluminum and carbon fiber spar. See [ http://www.generalaviationnews.com/2010/05/woopy-and-the-one-hour-concept/ ] --Guy Macon (talk) 12:11, 4 March 2013 (UTC)
How to tell type 1 diabetes from type 2 ?
I ask because Diabetes#Diagnosis seems to neglect this, and I'd like to fix the article. StuRat (talk) 20:21, 3 March 2013 (UTC)
- According to Diabetes mellitus type 2#Diagnosis, "If the diagnosis is in doubt antibody testing may be useful to confirm type 1 diabetes and C-peptide levels may be useful to confirm type 2 diabetes, with C-peptide levels normal or high in type 2 diabetes, but low in type 1 diabetes." Tevildo (talk) 20:32, 3 March 2013 (UTC)
- Thanks. Perhaps that info should be copied into the main article. (I wonder why separate articles are even necessary.) StuRat (talk) 20:58, 3 March 2013 (UTC)
- Not quite. There is no single gold standard test, but rather a combination of parameters. Type 1 diabetes is easier to define and is characterized by insulin deficiency with normal insulin sensitivity at all prodromal and early treatment stages, and by far the most common variety of type 1 is caused by autoimmune destruction of the beta cells. There is less relationship with obesity and a higher incidence in childhood. The insulin deficiency reaches more complete lack in fewer years. The genes so far discovered associated with type 1 diabetes are involved in regulation of the immune system. In contrast, type 2 diabetes is characterized by both insulin resistance and by insulin deficiency. In the early years before and after the diagnosis, the insulin deficiency is partial, but reversible insulin depletion after prolonged hyperglycemia can be as complete as in type 1, which is why low C-peptide levels are poor discriminants between the two. Type 2 has a strong association with obesity, especially in young patients. The genes thus far identified as associated with type 2 are nearly all involved with metabolic pathways of energy metabolism or islet development. Heritability of type 2 is stronger than type 1. The incidence increases throughout life, with relatively few cases in childhood. However, children and adults with type 2 can have positive antibodies (IAA, GAD, ICA), but typically only one or two and at lower titers. Children and adults with type 2 can develop ketoacidosis because of the insulin deficiency. So you can make a set of parallel columns describing heritability, age, weight, ketosis, acidosis, and antibodies, and perhaps a third of newly diagnosed diabetic patients will have characteristics from both columns. Furthermore, as the number of mechanistically distinct types of diabetes continues to proliferate (dozens now), the old designation of type 1 or 2 is becoming obsolete. alteripse (talk) 21:09, 3 March 2013 (UTC)
- Thanks. Also, is it possible to have both types ? Would the tests show this ? StuRat (talk) 21:16, 3 March 2013 (UTC)
- This page claims that you can have both. --Guy Macon (talk) 22:06, 3 March 2013 (UTC)
- Yes. Neither is a rare disease. I call it "type 1+2", and use it for adolescents who got ordinary type 1 but became heavy with time, have a strong family history of type 2, and have other features of metabolic syndrome. It's not a formal designation. alteripse (talk) 03:08, 4 March 2013 (UTC)
March 4
where does dried water (molecules) go to?
His bathroom mirror is covered in condensation - he points a hot fan towards it - it vanishes. Where have the water molecules gone? Thanks! 79.183.98.234 (talk) 01:42, 4 March 2013 (UTC)
- The water molecules evaporate, which means they turn to steam, and then go into the air. 86.101.32.82 (talk) 01:45, 4 March 2013 (UTC)
- Agreed. You may wonder why you can't see the steam (water vapor). Well, if the droplets are small enough, it becomes invisible. In fact, there is water vapor in the air all the time, called humidity. It's only when it forms bigger droplets that it becomes visible, and forms visible clouds. StuRat (talk) 02:05, 4 March 2013 (UTC)
- It should be noted that "steam" in common parlance can refer to water vapor and also to the mist formed above boiling or evaporating water. Water vapor is invisible; mist or fog is visible under normal conditions. 75.164.249.10 (talk) 04:35, 4 March 2013 (UTC)
- To be clear: water vapor (evaporated water) is invisible (it's a colorless gas). Condensed water may form droplets - but then it's not water vapor. -- Scray (talk) 05:55, 4 March 2013 (UTC)
- At what point does a water-in-air liquid aerosol become distinct from a water-and-air gas mixture? Plasmic Physics (talk) 06:23, 4 March 2013 (UTC)
- At the point where there ceases to be an identifiable phase boundary (my god, is that a horrid article) between the liquid droplet and the surrounding gas. -- 71.35.110.219 (talk) 06:32, 4 March 2013 (UTC)
- That just rephrases my question into: at what point does there cease to be an identifiable phase boundary. Plasmic Physics (talk) 00:33, 5 March 2013 (UTC)
- You can think about it this way: Take a spoonful of sugar (or salt) and place it in a glass of water. Wait for a while, or if you're impatient, heat and/or stir it. You'll find that the sugar (or salt) disappears. Where did it go? It dissolved into the water. The same thing happens with water and air. The water "dissolves" (evaporates) into the air in much the same way the sugar dissolved into water. And you can also speed up the process by heating the air or by stirring the air (e.g. by using a fan), or both. -- 71.35.110.219 (talk) 06:29, 4 March 2013 (UTC)
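- To attach a number to "how much water the air will take" (a standard definition, not specific to the bathroom example): relative humidity is
  \[ \mathrm{RH} = \frac{p_{\text{water vapour}}}{p_{\text{sat}}(T)} \times 100\%, \]
  the ratio of the actual water-vapour partial pressure to its saturation value at that temperature. Since \( p_{\text{sat}}(T) \) rises steeply with temperature, blowing warm air at the mirror lowers the relative humidity right at the glass, so the condensed film evaporates into the air faster.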
- As I recall from science class long ago, the term "dry" is relative; there's actually a thin layer of moisture on pretty much everything, even "dry" objects. ←Baseball Bugs What's up, Doc? carrots→ 13:58, 4 March 2013 (UTC)
- Really, even on lithium tetrahydridoaluminate(1-)? Plasmic Physics (talk) 09:54, 5 March 2013 (UTC)
Self-defense question
Can a woman of average (or somewhat greater than average) strength stun a man, even briefly, by hitting him on the head with an umbrella as hard as she can? Can she do the same to an attack dog (or at least deter it from ripping out her throat)? Thanks in advance! 24.23.196.85 (talk) 04:54, 4 March 2013 (UTC)
- The first would depend on the umbrella - the standard cheapie umbrella you buy for a few dollars is made of thin alloy tubing and is simply not heavy enough or rigid enough to even raise a bruise. However, it might be possible to find an umbrella strong enough - I haven't seen one. The second obviously depends on the dog. Most sizeable dogs who can be a threat simply have strength and reaction speed that few humans (male or female) can defend themselves against. If you protect your face, any dog with half a brain will simply go for the tendons at the back of your knees and bring you down, and then spring round for another attack while you are still falling. The best way to defend oneself against a threatening dog is the same way that you can handle a mugger, though - show confidence and no fear. Both like easy targets, but a vicious dog is more difficult. In any case, in my experience, many women (who have strength and speed under normal circumstances not dissimilar to men) simply freeze under sudden threat, or panic and do something stupid. (Some men will do that too.) Wickwack 58.169.246.228 (talk) 05:39, 4 March 2013 (UTC)
- This gal works for the SOE, so she won't freeze or panic -- but she's normally unarmed. 24.23.196.85 (talk) 06:21, 4 March 2013 (UTC)
- I didn't know Sony Online Entertainment employees were so tough. But perhaps you really meant the Slavko Osterc Ensemble, in which case I suggest hitting the dog with her euphonium might be more effective than an umbrella. SpinningSpark 23:36, 4 March 2013 (UTC)
- (EC) Probably not. The mass is too low, the length is too short, and the fabric provides a cushion. Now, if you put a mace ball at the end of the umbrella, and reinforced the shaft, then you might have a reasonable self-defense weapon. As far as existing umbrellas go, some have pointy tips, so could possibly be used to poke an attacker in the eye. StuRat (talk) 05:41, 4 March 2013 (UTC)
- Thanks! So, she should poke that Gestapo fink in the eye, then? 24.23.196.85 (talk) 06:21, 4 March 2013 (UTC)
- Yep, a direct hit to the eye ought to stun anyone, especially if she manages to puncture it. If she knows about the danger ahead of time, perhaps she can sharpen the tip, specifically to make it a better weapon. StuRat (talk) 06:26, 4 March 2013 (UTC)
- If your heroine really needs to kill the attack dog, a knee drop on its chest will do a lot of damage very quickly. Google that. 124.191.176.117 (talk) 07:18, 4 March 2013 (UTC)
- I would say good luck to any human who thinks they can poke an attack dog in the eye, with an umbrella or any other thing. What do you think the dog will do, just stand there and take it? The reaction time of any decent dog is far quicker than that of a human. For the same reason, she would have Buckley's chance of doing a knee job to a dog too. A good knee job ought to kill a dog, but the dog will dodge. And if you just poke it in one eye, you'll just enrage it, so that it will now go for the kill instead of just disabling you. There are two ways of defeating a dog without an overt weapon such as a gun or spear: 1) prevent it from approaching by using something like a large shovel with a sharp edge or a garden rake with sharp prongs, with a handle at least 2 m long; 2) spray its eyes and nose with some sort of serious irritant. Kerosene-based fly spray would do, but that would not have been carried by anyone unless they were in an area subject to flies, such as Australia. Maybe you could think of something else that is not an overt weapon. Maybe you could have a scenario where she throws a couple of darts (as in the dart board game), but that seems too difficult to me - she'd have to score a first-time bulls-eye in both eyes, otherwise the dog is not incapacitated, but enraged.
- Poking an ordinary man in the eye seriously will stop him. But a Gestapo officer? I don't know what training they got, but any two-bit policeman or soldier will not be stopped by eye-poking.
- Wickwack 58.169.246.228 (talk) 09:23, 4 March 2013 (UTC)
- A punctured eyeball is certainly a more serious irritant than kerosene sprayed in the eye. It would be difficult to do that to a dog, but if the dog is lunging at you and doesn't consider the umbrella to be a weapon, you have a better chance. StuRat (talk) 16:20, 4 March 2013 (UTC)
- A more serious injury long term, but kerosene will temporarily blind both eyes, thereby immediately disabling the attacker, whether human or dog. An eye totally destroyed by poking/stabbing will not disable, as the attacker still has the other eye fully functional. It would stop your average mugger, but not a dog, and not a trained policeman or soldier - they are trained to get the upper hand even when injured. I expect the same would apply to a Gestapo officer. Wickwack 124.178.41.155 (talk) 00:40, 5 March 2013 (UTC)
- If we're talking about standard predator behavior, then any serious (painful) injury should be enough to persuade the attacker to look for easier prey, as risking one's life for a meal rarely makes sense. With that in mind, can these dogs really be trained to continue to attack, even after sustaining such a serious injury ? How would they so train them, since this would require seriously injuring them (or perhaps just making them think they were) ? Also, having one eye punctured ought to cause both eyelids to close, reflexively. I doubt if the other eyelid could be opened quickly after. StuRat (talk) 01:08, 5 March 2013 (UTC)
- How about a cane gun or a swordstick disguised as an umbrella? 196.214.78.114 (talk) 13:16, 4 March 2013 (UTC)
- WWII secret agents in British boys' comics of the 1950s and 60s generally carried a pot of pepper to deal with guard dogs. Whether they actually carried these, I have no idea. Alansplodge (talk) 13:19, 4 March 2013 (UTC)
- Is it her own umbrella? If so perhaps she could have one like this, maybe supplied to her by the SOE, it would likely be less incriminating than a cane gun or swordstick if discovered, although I don't know if suitable materials would have been available at that time to make it without increasing the weight excessively. Equisetum (talk | contributions) 13:32, 4 March 2013 (UTC)
- Actually, it turns out their premium model is made from steel and aluminium, so the materials would have been around. Equisetum (talk | contributions) 13:37, 4 March 2013 (UTC)
- She can certainly hit him with the handle on the temple and plain knock him out. It may not be reliable, but it's not implausible, either. --Stephan Schulz (talk) 15:25, 4 March 2013 (UTC)
- Given that the weight of the whole umbrella is listed by the website as less than a kilogram, their pretensions of use in self-defence are optimistic to say the least. I certainly would not expect it to knock anyone out. To have any hope of doing so, she would have to make a pretty vigorous large swing, like wielding an axe. A Gestapo agent would see it coming and simply duck or parry it with a hand. I should think that in this respect you should consider Gestapo officers the same as policemen. That is, you either knock him out or blind him first time with complete certainty, or he will retaliate, violently. The pepper idea is the best so far. Wickwack 120.145.200.139 (talk) 15:47, 4 March 2013 (UTC)
- Every time I have killed a dog that was attacking me it was by shoving the tip of the umbrella down its open mouth, and then a left knee drop to the chest while pulling its head down to the right with the umbrella in its gullet. Sheesh, that's just dog-defense 101. μηδείς (talk) 18:12, 4 March 2013 (UTC)
- And just how many dogs have you killed? Just one maybe? None? Dog attacks actually are not very common. And what sort of dog was it? Miniature poodle? Chihuahua? Or some poor old tottering dog just about ready to drop dead with old age? Wickwack 124.178.41.155 (talk) 00:27, 5 March 2013 (UTC)
- ROFL. This makes my month in terms of comedy. Unintentional or not. Any post that begins with "just how many dogs have you killed".... Shadowjams (talk) 01:04, 5 March 2013 (UTC)
- "And just how many dogs have you killed?" Well, duh, every one I have had to use this method on. As for the ones who haven't needed killing--I can think of one, a smallish German Shepherd--a swiping blow to the head that transitions into grabbing the bitch by the scruff, lifting her enough so she loses purchase with the front legs, and then a rolling side toss followed by full body compression on the upper torso works wonders. (A swift punt return will always work on toys; but I like chihuahuas.) Of course, your results may vary. μηδείς (talk) 01:26, 5 March 2013 (UTC)
- Well, come on, how many then? So you have killed every single dog you had to kill - zero, most likely - you haven't needed to kill any. If you think you can bash a German shepherd that doesn't regard you as a friend, you are simply living in fantasy land. It will have its jaws clamped round your forearm before you can move it even halfway. That's why police use them to bring down violent crims. Wickwack 121.215.74.126 (talk) 02:06, 5 March 2013 (UTC)
- Wickwack, you so crazeh. I said smallish. μηδείς (talk) 02:17, 5 March 2013 (UTC)
- Enjoyed the video clip. You still haven't said how many you killed. Must be none. And that smallish one you didn't need to kill - an untrained puppy who thought you were a friend eh? Not an attack trained dog. Wickwack 124.178.62.87 (talk) 02:49, 5 March 2013 (UTC)
- I wonder whether suddenly opening the umbrella in its face (especially if it's one of those spring-loaded ones) might surprise the dog for long enough for one to escape or something? I could easily imagine that even trained attack dogs have never seen something jump in size so abruptly...kinda like the defense a puffer-fish puts up to deter predators. Even if the dog doesn't run off, I could imagine it grabbing the umbrella and tossing it around, ripping it to shreds for long enough to allow its owner to get away.
- But I can't imagine an umbrella doing much physical damage to either dog or human...for all of the reasons given above. But Google "SOE unarmed combat training" - and you'll see several references to declassified documents about exactly the training they went through and the techniques they would have mastered. I bet you could find something interesting and surprising that your heroine can do against the enemy if she's been through the SOE unarmed combat classes. Another thing is that you describe "a woman of average (or greater than average strength)" - several sources for the SOE indicate that they selected only the most physically capable individuals - and then trained them to be better, so "average" isn't possible - and "greater than average" should mean *MUCH* greater than average. (I'm thinking: a stiletto heel dug into the knee and scraped hard down to the ankle, followed by that old saw, the knee to the groin and then garrotting with her headscarf.) SteveBaker (talk) 21:47, 4 March 2013 (UTC)
- There's a reason groin hits and eye gouges are banned in every professional fighting sport... cause they work a little too well. Shadowjams (talk) 22:40, 4 March 2013 (UTC)
- It's not because of their effectiveness so much as the possibility of causing permanent damage. SpinningSpark 23:34, 4 March 2013 (UTC)
- They tend to be related, but that's a good point. There are two practical pieces to self-defense... physically stopping an attack (tasers seem to be really good at this, if you can penetrate heavy clothing, can hit on the first shot, are within close range, etc.) And then there's the deterrent effect. The fact that if you get hit by a bullet you could die is a powerful incentive to avoid situations where you could be hit by a bullet. There's something to be said for that notion. Groin hits and eye gouges I tend to think of in the "stop the attacker" category, but obviously also in the permanent injury category. Spinning has a really good point on this... although, I think that if you get a thumb in your eye or a knee in your groin, the long term damage is secondary to the immediate pain. Shadowjams (talk) 01:01, 5 March 2013 (UTC)
- She'll definitely kick him in the balls before poking his eye out with the umbrella (which comes with a sharp tip as issued, specifically for contingencies like this one). But she won't have time to choke him with her scarf, because other Nazi thugs will arrive and she (and her partner) will have to climb out of the third-floor window onto the cornice in order to escape. Thanks, Steve! 24.23.196.85 (talk) 06:14, 5 March 2013 (UTC)
- One last suggestion. Having a sharpened tip on the umbrella might make it obvious that she has a weapon. If she has a rubber cap on the end, which makes it look innocuous, she could then remove that cap at the first sign of trouble (perhaps by holding it on the ground, and stepping on the rubber cap with her shoe). StuRat (talk) 06:20, 5 March 2013 (UTC)
HIV/AIDS deaths
List of causes of death by rate, sourced to this WHO document, lists HIV/AIDS as causing just under 5% of deaths in 2002. I thought that HIV/AIDS broke down the immune system so that other things, which are normally repelled by the body easily, are instead able to take over. Does HIV/AIDS kill anyone directly? If so, we need to point this out in its article, unless I missed it when reading it. If it doesn't kill people directly, I'm confused: why don't they count the AIDS deaths according to the immediate cause of death (e.g. infection, cancer), and how do they decide which HIV+ deaths are AIDS-caused and which aren't? Presumably they include as an HIV/AIDS death someone who dies of an HIV/AIDS-enabled bacterial pneumonia infection, but presumably when someone with HIV/AIDS shoots himself, they count it as Intentional injuries (Suicide, Violence, War, etc.). Nyttend (talk) 06:34, 4 March 2013 (UTC)
- The vast majority of that 5% consists of people dying of HIV/AIDS related diseases. You are correct that the vast majority of HIV patients die from secondary infections. They may still be listed as having died of HIV/AIDS, regardless. There are ways you can die from HIV in and of itself, but they are rare. HIV-associated nephropathy could conceivably kill a person, and AIDS dementia complex can leave a patient as good as dead. Someguy1221 (talk) 06:44, 4 March 2013 (UTC)
- (edit conflict) As you say, HIV causes immune dysfunction that renders the infected person susceptible to a variety of immediate causes of death. When someone has heart failure due to (e.g.) coronary artery disease, they may die from low oxygen in their blood, but we count the death as related to the heart failure - and it could be counted as due to coronary artery disease. In a similar way, if someone dies from Pneumocystis jirovecii pneumonia (PJP) in the setting of HIV infection, the death can be attributed to HIV (because prevention or treatment of HIV would have avoided the PJP altogether). The WHO most likely has a list of proximate causes of death that, when found in persons with HIV infection, would be counted as AIDS-related deaths. -- Scray (talk) 06:51, 4 March 2013 (UTC)
- Thanks to both of you. I'd never heard of the topics that Someguy links, while Scray's heart analogy and notes about a potential list made the concept much simpler. Nyttend (talk) 14:39, 4 March 2013 (UTC)
- I, too, would expect to see the fact that AIDS doesn't kill directly in the lead of the article - and I don't see it either. Anyone want to improve it? Rmhermen (talk) 15:44, 4 March 2013 (UTC)
- I think it's pretty clear already, and should not be overstated. If someone is shot and loses a great deal of blood, resulting in a massive stroke or heart attack, would you (or reliable sources) say prominently that the gunshot did not kill them? I realize that this discussion should probably continue over there rather than here. -- Scray (talk) 16:12, 4 March 2013 (UTC)
- In the US (and probably most first-world countries), death certificates allow for listing both an immediate cause of death and an underlying cause of death (as well as other contributing factors). In the AIDS example, the immediate cause of death might generally be an infection, but the underlying cause of death might be listed as AIDS. For US data, the typical database will contain both listed causes, so even though AIDS doesn't typically kill people directly, one can still easily compile statistics on cases where AIDS was listed as the underlying cause of death (or as a contributing factor). For the US anyway, it isn't necessary to make any special inference based on the type of the infection because the doctor / medical examiner filling out the death certificate should have already made that determination and noted it on the form if appropriate. Dragons flight (talk) 16:28, 4 March 2013 (UTC)
- Here's an example of a representative US form showing multiple causes of death: [13]. Dragons flight (talk) 17:53, 4 March 2013 (UTC)
- Those suggestions are far better than the ones I've seen on actual death certificates. Of course, even those didn't go all the way back to the root causes, which were likely a poor diet and lack of exercise. StuRat (talk) 18:03, 4 March 2013 (UTC)
- (EC) Note that having multiple causes of death is by no means unique to AIDS. For example, you could have poor nutrition + sedentary lifestyle -> obesity -> diabetes -> kidney failure, or poor nutrition + smoking -> high blood pressure -> stroke. As far as I can tell, there's no universal way of listing the multiple causes of death. The person filling out the death certificate often seems to just pick one, most likely the last one in the chain, and ignore the rest, which makes us not appreciate how serious the root causes are. StuRat (talk) 16:30, 4 March 2013 (UTC)
- I read once that as far as our definitions go, all death is caused by lack of oxygenated blood to the brain, regardless of how that is "caused". Vespine (talk) 03:14, 5 March 2013 (UTC)
Color reproduction
Why is there so much variety in color reproduction on different monitors/tv/displays? Is it a lack of standard, sample to sample variation or something else? bamse (talk) 10:39, 4 March 2013 (UTC)
- There are plenty of standards out there (eg CIE 1931 color space) - and I don't think sample-to-sample variation accounts for much of the problem - although that kind of variation does exist. On CRT displays, the age and amount of usage of the tube would dramatically affect the color quality - but that issue has largely disappeared with modern flat-panel LED/LCD/Plasma displays. Lack of adherence to the standards by the manufacturer is one possibility - displays used as televisions are often tweaked to produce a "hyper-real" color space because it looks good in the store where you buy the thing from - but which is nowhere near what it should be. (My new Visio TV has a "STORE DISPLAY/HOME" toggle in the menu that seems to do exactly that!).
- But in a lot of cases it is simply that the device is not correctly set up. There are devices that one can buy that use a Tristimulus colorimeter coupled to a small 'black box' that adjusts the video signal into the display to produce the most accurate rendition of color that the device is capable of producing. Our Color calibration article covers some of that.
- When I worked at a computer games company a few years ago, we had a guy who would come around once in a while with a colorimeter and set up our monitors according to a common standard so that our artists, designers and programmers would all be looking at the same image brightness, hue and saturation. There are gizmos you can buy that do that adjustment continuously and automatically using a little sensor that sticks onto one corner of your screen.
- Some cheaper flat panel displays show a tendency to shift color and brightness depending on the angle you're looking at them at. This poses a serious problem for color quality.
- Here is another reason why some monitors show different colors: [ http://compreviews.about.com/od/multimedia/a/LCDColor.htm ] --Guy Macon (talk) 15:06, 4 March 2013 (UTC)
- Thanks for the replies. So it is mostly a lack of common interest and producers trying to make their displays look "good" by deliberately setting them up incorrectly. bamse (talk) 19:10, 4 March 2013 (UTC)
- What would you do if every time you made a monitor with accurate color nobody bought it? I have a Samsung LCD TV that has an "accurate color and brightness" setting for when you get it home and a "make it look good next to the other monitors in a brightly-lit store" mode. --Guy Macon (talk) 19:18, 4 March 2013 (UTC)
- Yep - that's exactly what my new VISIO TV has. The "HELP" function for that button says that it's necessary to switch it into "HOME" mode in order to save electricity and to meet the "Low Energy" sticker that the TV has. So depending on which way you set that option, what you see in the store is a TV that's brighter than it would be home - or it's a TV that's less energy efficient than it claims. But truly, this is a minefield - far too many reasons or half-reasons not to follow the standards. In the end, if you need color precision and repeatability - you have to get a colorimeter and an accurately calibrated test pattern generator - and adjust the display accordingly. In most cases, you can get away with just adjusting the gamma-correction settings. Also, many modern displays have settings for different "color temperature" - and others (like my VISIO) has settings that purport to set the TV up better for Movies versus TV shows versus sport. I have no idea what those actually *do* but you know that whatever effect they have isn't making the display follow any kind of color standard.
- If you're talking specifically about computer displays, then your graphics card probably has a bunch of color tweaks in its control panel which can "fight" the controls on the display itself, making accurate color setup an absolute nightmare! nVidia cards (for example) have a "Digital Vibrance" control which basically looks at which of the three color components (red, green or blue) is the largest at each pixel and makes it larger still. This definitely changes muddy-looking pictures into more colorful ones...but it circumvents the intent of whoever produced the images in the first place. SteveBaker (talk) 20:07, 4 March 2013 (UTC)
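- Since gamma correction keeps coming up: here is a minimal Python sketch of the sRGB transfer function that most computer displays are supposed to target (the constants are the standard sRGB ones; this is an illustration of what "gamma" means, not a calibration tool):
    def srgb_encode(linear):
        # Linear light in [0, 1] -> sRGB-encoded value in [0, 1].
        if linear <= 0.0031308:
            return 12.92 * linear
        return 1.055 * (linear ** (1 / 2.4)) - 0.055

    def srgb_decode(encoded):
        # Inverse transform: sRGB-encoded value -> linear light.
        if encoded <= 0.04045:
            return encoded / 12.92
        return ((encoded + 0.055) / 1.055) ** 2.4

    # A "half brightness" pixel value of 0.5 is only about 21% of peak luminance:
    print(round(srgb_decode(0.5), 3))
  A display that applies a different effective curve (or a graphics-card "vibrance" tweak on top of it) will render the same pixel values at visibly different brightness and saturation, which is a large part of the monitor-to-monitor variation being discussed.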
- Important to this discussion, because of the phenomenon of color constancy, the sort of differences you all are noting usually aren't perceptible unless you have multiple TVs or monitors showing the exact same scene simultaneously; something most people don't have in their homes. For most people, the sort of differences in color reproduction that exist between various display devices just aren't all that noticeable unless you're deliberately training yourself to look for it, or you've carefully constructed a set up to highlight it. --Jayron32 04:54, 5 March 2013 (UTC)
Ground penetrating radar and sink holes
Can't they use ground penetrating radar to detect cavities forming under buildings and things to know when a sinkhole is going to occur? ScienceApe (talk) 16:02, 4 March 2013 (UTC)
- Yes, but "when" might be tricky. — Preceding cryptic message added by 74.60.29.141 (talk) 16:28, 4 March 2013 (UTC)
- From Ground-penetrating radar:
- "Optimal depth penetration is achieved in ice where the depth of penetration can achieve several hundred meters. Good penetration is also achieved in dry sandy soils or massive dry materials such as granite, limestone, and concrete where the depth of penetration could be up to 15 m. In moist and/or clay-laden soils and soils with high electrical conductivity, penetration is sometimes only a few centimetres."
- Guess what kind of soil most sinkholes form in (hint: rhymes with "vet"...) --Guy Macon (talk) 16:34, 4 March 2013 (UTC)
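- The physics behind that limitation, in rough form (the formula below is the good-conductor limit, and soils at radar frequencies are only partly in that regime, so treat it as an order-of-magnitude sketch): an electromagnetic wave in a medium of conductivity \( \sigma \) decays over a skin depth of about
  \[ \delta \approx \sqrt{\frac{2}{\omega \mu \sigma}}, \]
  so the higher the conductivity (wet, clay-rich, salty ground) and the higher the radar frequency \( \omega \), the shallower the useful penetration - centimetres in wet clay versus metres to tens of metres in dry sand or rock, and hundreds of metres in ice, as the article quoted above says.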
- The trick would be in knowing when to suspect it, to have the ground penetrating radar used. There might be signs, like a cracked foundation, but you also get these just from normal settling. StuRat (talk) 16:35, 4 March 2013 (UTC)
- There are signs, e.g. [14] [15], but the time they take to develop does vary, as the case which must have prompted this question highlights (although I think it's fairly well established that such a sudden development is rare, hence why this case involving a fatality is so unusual despite the frequency of sinkholes in Florida [16]; of course there may have been signs that were missed, particularly given the time). BTW, as discussed [17], ground penetrating radar is one of a number of techniques used to assess risk, and there is ongoing research on this [18] and other techniques [19]. However, these techniques aren't necessarily cheap, and I don't know if they're of much use once you see the signs; by that time the general advice seems to be to avoid the area until the sinkhole develops. The techniques seem to be of greater interest in evaluating risk for insurance purposes or when deciding whether to purchase a property or develop in the area. Note that in the case that probably resulted in the question, it was widely reported that someone came to check for sinkholes a few weeks before but didn't appear to find anything. I don't know what they did, and I doubt ground penetrating radar was used, but I don't think it's in any way certain they would definitely have found something. Nil Einne (talk) 18:11, 4 March 2013 (UTC)
- [e/c 3x] Since you're looking for a differential, a gap in "wetness" (the soon-to-be hole) should be relatively easy to find in moist soil. (I used to be a field research technician in petroleum exploration.) A microgravity survey (using a microgal-resolution gravimeter) might be better for finding potential sinkholes. — Preceding modified for clarity comment added by 74.60.29.141 (talk) 18:41, 4 March 2013 (UTC)
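- To put a rough number on why a microgal-level gravity survey can see a cavity at all, here is a back-of-envelope sketch; the void size, depth and rock density are invented example figures, not data from any real survey:

```python
# Back-of-envelope: gravity deficit directly above a buried spherical void.
# All numbers below are made-up examples, not measurements from any survey.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
rho_rock = 2300.0    # assumed density of the surrounding limestone, kg/m^3
R = 5.0              # assumed radius of the (spherical) void, m
z = 10.0             # assumed depth to the centre of the void, m

missing_mass = rho_rock * (4.0 / 3.0) * math.pi * R**3   # rock that is "not there", kg
delta_g = G * missing_mass / z**2                        # anomaly directly above, m/s^2

print(round(delta_g / 1e-8))   # in microgals (1 uGal = 1e-8 m/s^2) -> about 80
```

- An anomaly of a few tens of microgals should be within reach of modern field gravimeters, which can resolve on the order of a few microgals under good conditions.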
- Seismic imaging might be the most practical means of locating potential sinkholes: [20] ←[Where I used to work] 74.60.29.141 (talk) 19:45, 4 March 2013 (UTC)
- See also: "What is a Seismic Survey?" (Schlumberger) and Thumper trucks. 74.60.29.141 (talk) 20:08, 4 March 2013 (UTC)
Does Helium-2 jump off the ledge, or is it pushed?
Has anybody tried changing the "impact velocity" of the Proton–proton chain reaction to see if excess energy makes deuterium more likely?
If this were found not to be the case, wouldn't it mean that deuterium is produced purely by diproton decay rather than by the decay of an excited proton? Hcobb (talk) 18:14, 4 March 2013 (UTC)
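- Not an answer, but for framing: the first step of the chain is usually written with the (unbound) diproton shown explicitly. The exact branching figures are omitted here since they aren't given in this thread:

```latex
\[
  p + p \;\to\; {}^{2}\mathrm{He} \;\to\; p + p
  \qquad \text{(almost always: }{}^{2}\mathrm{He}\text{ is unbound and falls apart)}
\]
\[
  p + p \;\to\; {}^{2}\mathrm{H} + e^{+} + \nu_{e}
  \qquad \text{(very rarely: a weak-interaction }\beta^{+}\text{ step while the two protons interact)}
\]
```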
Does drinking water right after a meal make us fatter?
Humans who drink water (let's say a glass or two) right after eating a bowl of pasta or a sandwich: would they theoretically gain more fat than those who don't drink? And does this phenomenon have a specific name? 79.183.98.234 (talk) 03:51, 5 March 2013 (UTC)
- Seems very unlikely. What's the source of that bizarre idea? Looie496 (talk) 05:23, 5 March 2013 (UTC)
- No primary source, it's just something I heard which at first sight made some sense (the idea being that the water interrupts the enzymes at their work).
- Probably because they will gain weight, that being the weight of the water, at least until they pee it back out. StuRat (talk) 05:43, 5 March 2013 (UTC)
- There are a bunch of theories running around (Google [ water with meals ]): that it dilutes digestive acid and thus hinders digestion; that it dilutes digestive acid and causes more to be produced, thus helping digestion; that it causes the food to empty into the intestines faster and thus hinders digestion; that it causes the food to empty into the intestines faster and thus makes you become hungry sooner; and the fascinating theory that if your body really needs food you will have no trouble eating enough to meet that need without extra water, but once you start overeating you need to "wash the food down".
- As far as I know, there are no scientific studies supporting any of these theories, but we do know that simply keeping a log of what you eat helps with weight loss, so anything that makes you think about what you are eating rather than finishing off a large bag of chips while watching TV is probably a Good Thing.
- The best unproved diet theory I have heard of is a fellow who kept all his food in a guest house a mile from the main house, and any time he wanted to eat anything, from a full meal to a snack, he had to walk two miles to fetch the food, and another two miles if he had leftovers he wanted to put in the refrigerator afterwards. If anyone wants to buy me some property with a lot big enough to try this, let me know. I suppose having the food at the top of ten or twenty flights of stairs would work as well. :) --Guy Macon (talk) 09:32, 5 March 2013 (UTC)
March 5
Is there any research on compounds of francium?
I am just struggling with some information that seems to be original research on the Chinese Wikipedia.--Inspector (talk) 10:25, 5 March 2013 (UTC)