Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


May 5

can one derive a language's grammar from sample code

Hello, this is out of idle curiosity, but can one derive a language's grammar from sample code? In other words, is it possible to write a program that, if fed code in some language, would write the EBNF that parses that code, like this:
program xxx; var x: integer; begin x:=17; write(x); end. ===>
program ::= "program" ident ";" var-section main-block "."
var-section ::= "var" ident ":" type ";" etc., etc. (the names of the rules themselves need not be meaningful, obviously)
I suspect the program needs to have a way of knowing what the tokens are, so it can tell keywords apart from user-supplied identifiers. If the code sample were really big (it need not be a real program, either), such that it contained every non-terminal, and/or there were several snippets for when you can't put everything into one (one with no variable declarations, one with a single declaration and one with several, for example), could one derive a complete grammar (or one of them, if there can be many) of the language? Thank you everyone in advance Asmrulz (talk) 01:42, 5 May 2013 (UTC)[reply]

Grammar induction is the article, though unsurprisingly the research focuses on natural languages. I don't know anything about this but I doubt you could find exactly the correct grammar of a programming language this way. Many programming languages don't have context-free (EBNF) grammars. For example the expression (A)(b) in C-like languages could be a type cast or a function call depending on how A was previously declared. -- BenRG 04:40, 5 May 2013 (UTC)
Hmmm...yes and no.
Strictly - no, it's not possible because you'd never know whether there might be some kind of construct that's not present in the example programs that you fed it. For example, you could pick an enormous corpus of C and C++ programs and never find a single one that uses the "goto" statement. I have written millions of lines of C/C++ code - and read tens of millions more written by co-workers - and never once encountered a "goto". Even if you did find a few rare examples, it wouldn't be clear what the rules about jumping into and out of loops and subroutines are, because you simply wouldn't have enough examples to deduce those things. I bet that it would be impossible to deduce the arcane rules for "goto" even if you took every C and C++ program ever written and analysed the whole lot.
But imperfectly - yes: Children can learn to understand and speak any language on the planet simply by listening to examples - and (in principle) anything that the human brain can do, you can do with a sufficiently powerful computer. However, children (and the adults they grow into) learn the language imperfectly...nearly everyone has some kind of failure of linguistics built into their brain.
SteveBaker (talk) 15:37, 5 May 2013 (UTC)[reply]
If the system only has source code, no, it cannot extract the grammar. Children are able to learn language because they are exposed to two things: the language and its results. For instance, Mother may say the word "custard" a few times while serving custard, so the child can associate the word custard with the actual stuff. Children also learn to speak because when they try it, their parents correct them. (Child: "wan custard"; Parent: "No, I want custard, please")
A computer program that deduces the grammar could in theory be constructed if it has access to the source code, can run the source code, and can alter the source code. It mostly cannot work out the complete grammar, for the reasons others have posted. Neither do humans ever completely master the vocabulary and grammar of spoken and written languages.
In practice, constructing a program that can deduce the grammar of a computer language is probably not possible, because of the difficulty in understanding the meaning and scope of the output. I once wrote a program for an embedded processor in a hand-held device. Each time you press the single "go" switch on it, after entering the 6-digit challenge number issued by the secure server you are trying to log into, it displays a 6-digit keycode that, with your own personal password, will enable you to log in. The keycode is generated by mashing the challenge (which was generated from a random number) and to the user appears to be another random number. No two challenges will ever be the same within a certain very low probability, and no two keycodes will ever be the same. Keying in the same challenge twice will give you two different keycodes. What is the proposed grammar extraction program to make of that?
Wickwack 120.145.68.194 (talk) 23:47, 5 May 2013 (UTC)[reply]
If you're correct about children needing parental feedback when they attempt to produce language from what they've learned about the grammar - then all that's needed to provide this for a grammar-learning program is access to a compiler for that language and a means to detect whether the resulting program compiled correctly. That's pretty much all a child gets.
But in this case, the computer has several advantages. Firstly, it's only being asked to learn the rules of the grammar - not to interpret what programs written in that grammar actually do. Your example of a tricky program is irrelevant...it's only tricky use of the grammar (like x=y++++z; or something) that's going to throw it. For example, it just needs to deduce that if a statement begins with the keyword "while", it is always followed by an expression surrounded by round brackets - followed by a statement. The program would have no idea that "while" is a looping construct or that the statement will not be executed once the expression evaluates to 'false'.
That's quite different from a parent pointing at a bowl of custard and saying "custard" to a child...the child is learning vocabulary in that case. I don't think parents ever explicitly teach things like "words that are verbs that end in "ed" are in the past tense".
Secondly: I also don't believe that children learn syntax by producing statements and being corrected. Our grandchild was quite able to understand "Fetch grandpa the duplo" long before she could speak a single word...and I'm fairly sure that children who do not have the ability to speak are perfectly able to learn enough grammar to understand what people say to them.
Thirdly, the program is only (presumably) being fed legal programs - whereas children hear all kinds of broken sentences, incorrect grammar and so forth.

SteveBaker (talk) 19:51, 6 May 2013 (UTC)[reply]

I agree, if the grammar extraction program has access to the compiler code, it can work out the grammar. It is an alternative to accessing the output. In fact, if you (or a machine) have the compiler source code, then you (or the machine) HAVE the grammar to hand - all that may need to be done is to write it more concisely. I don't think that was what the OP had in mind though. He was talking about feeding the grammar extractor only examples of source code. Re your next point, understanding the vocabulary is essential to determining grammar - as BenRG pointed out. You have a point regarding the ability of children to understand spoken language with limited ability to speak themselves (and thus limited opportunities to hear corrections from others) - this has been an interesting field of study for certain linguists and psychologists. It appears that it is because children are born with a built-in understanding of a "prototype" grammar, though it isn't English grammar. This is likely an advantage that a computer program cannot have, as there is an infinite range of possible computer program grammars. Finally, a grammar extraction machine that can deduce things like "statements can begin with while", as they do in some languages, or that "programs begin with program and end with end" as they do in Pascal, is not going to be generally useful. Many computer languages simply don't work that way - almost all assembly languages included. Ever tried to work out what someone else's assembly language does, having obtained a listing from ROM, which does not include any comments or pretty printing? That's a task fundamentally easier than deducing a grammar; nevertheless it isn't easy, depending in practice on good guesses and exploring dead ends, and often very, very difficult. Wickwack 60.230.238.42 (talk) 00:21, 7 May 2013 (UTC)[reply]
You're completely missing the point (again!) - the idea isn't to understand what a program DOES - the goal is to understand the grammar of the language that the program is written in. Sure, machine code programs can be hard to understand - but the grammar of the assembler is trivial: <label>:<opcode><argument>,<argument><comment> ...with a few of those parts being optional. I didn't say that the grammar extractor would have access to the source code of the compiler - just a means to generate a program and test whether it's legal or not...a boolean "good/bad" flag. But I don't think this is necessary. This is a difficult problem - but with enough variety of source code to examine, I think it's possible. However "enough variety of source code" might be a very tough barrier to making this work...as I explained with the difficulty of deducing the peculiar rules for the "goto" statement in C and C++ - given that almost nobody uses that horrible construct anymore. Just as children have "protogrammar" hard-wired, we could give our program knowledge such as "Most programming languages allow variables that start with a letter and are followed by any number of letters, digits and underscores" or "Most programming languages allow numbers that start with a +, - or a digit..." or "Some languages allow whitespace anywhere, others don't - figure out which of those is true before you start". There are all sorts of things that are common to so many languages - that you'd be able to figure them out. Of course there are deliberately obtuse languages (like Whitespace (programming language)) that would defeat these rules...so this has limited value. SteveBaker (talk) 14:07, 7 May 2013 (UTC)[reply]
Sounds like you haven't worked with assembly code, Steve. The construct <label>:<opcode><argument>,<argument><comment> only exists at the time of writing. What gets stored in ROM (or whatever other form of memory is used) does not contain the labels or the comments. In any but the most trivial assembly routine, figuring out what are constants and what are opcodes can be a major effort in itself - they look the same, have the same range of values. Opcodes and constants both vary in byte length. And some smart-arse programmers not only use recursive techniques, some use self-modifying code. I can remember finally nutting out one bit of code where the author called the routine with two different entry points - one entry point restored and ran the routine pretty much as is, another entry point used a trick to use an opcode byte as a constant to change another opcode and completely change what the routine did!
I realise that the OP's question does not require the grammar extractor to understand what the program does, however it does need more than just source code of the target language. Yes, you didn't say that it needed access to the source code of the compiler - I misread that bit. However it did make me realise that being able to run and alter the source code is not the only way. However, your idea of just being able to run it thru the compiler to see if there are no compiler errors, or being able to run the program and see that there are no run time errors is not sufficient. If it was, there would have been no need for all the thousands of hours I've spent debugging code that did compile ok, and debugging code that ran ok too, until I or a colleague hit it with the right test case. And in some cases, after a couple of years use, the customer reported a bug! Even very strongly typed languages like Pascal can compile ok when the grammar has been violated. For example, passing a constant and then mistakenly treating it as a var (or a pointer) will compile ok, but the runtime result will usually be very different, and only sometimes trigger a runtime error. And as for that horrible nasty thing used for embedded control, the FORTH language, which is not typed at all and with which you can do anything one's smart arse lateral thinking heart desires..... That's why I misread what you said - I automatically assumed you didn't intend a simple go/no go run test because it is insufficient.
Wickwack 121.215.9.73 (talk) 14:59, 7 May 2013 (UTC)[reply]
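For the curious, here is a minimal sketch of the first step touched on above: separating a language's fixed vocabulary (keywords and punctuation) from the parts that vary (identifiers and literals) using nothing but a handful of samples, then emitting one crude flat rule per sample shape. It is only an illustration of the idea, not a real grammar-induction algorithm; the token pattern, the toy Pascal-like samples and every name in it are made up for the example.

    # A deliberately naive sketch: tokens that occur in every sample (and are
    # not numbers) are treated as fixed vocabulary (likely keywords and
    # punctuation); everything else is labelled ident or number. One crude
    # flat rule is printed per distinct sample shape.
    import re

    TOKEN = re.compile(r"[A-Za-z_]\w*|\d+|:=|[();.,:]")

    def tokenize(src):
        return TOKEN.findall(src)

    def fixed_vocabulary(samples):
        # Tokens common to every sample are assumed to be fixed vocabulary.
        common = set.intersection(*(set(tokenize(s)) for s in samples))
        return {t for t in common if not t.isdigit()}

    def crude_rules(samples, fixed):
        rules = set()
        for s in samples:
            shape = []
            for t in tokenize(s):
                if t in fixed:
                    shape.append(f'"{t}"')
                elif t.isdigit():
                    shape.append("number")
                else:
                    shape.append("ident")
            rules.add(" ".join(shape))
        return rules

    samples = [
        "program a; begin x := 17; end.",
        "program b; begin y := 2; end.",
    ]
    fixed = fixed_vocabulary(samples)
    for rule in sorted(crude_rules(samples, fixed)):
        print("rule ::=", rule)

Run on the two toy samples, this prints a single flat rule, rule ::= "program" ident ";" "begin" ident ":=" number ";" "end" ".", which illustrates both the promise and the limitation discussed above: with only a few samples you recover a shape, not the recursive structure of the real grammar.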

Sodium hydroxide for washing dishes?

If I add sodium hydroxide to the water when washing up, I can convert triglycerides to the dark side and have the cleanest dishes ever? — Preceding unsigned comment added by 78.144.202.9 (talk) 02:07, 5 May 2013 (UTC)[reply]

Sure, go ahead, wash your dishes with drain opener, what do we care? Just make sure you wear gloves and googles and rinse very thoroughly. Looie496 (talk) 03:13, 5 May 2013 (UTC)[reply]
Internet search engines have the most surprising uses.  :) -- Jack of Oz [Talk] 03:18, 5 May 2013 (UTC) [reply]
Which triglycerides? Plasmic Physics (talk) 04:36, 5 May 2013 (UTC)[reply]
It doesn't matter, the lye will take care of any of them. --Jayron32 04:53, 5 May 2013 (UTC)[reply]
That depends on whether the lye is added in excess. Plasmic Physics (talk) 07:04, 5 May 2013 (UTC)[reply]
I'm just trying to figure out if the OP actually knows what they are talking about, because, by not defining 'triglycerides', there is no way to unambiguously interpret the question. Plasmic Physics (talk) 07:08, 5 May 2013 (UTC)[reply]
Triglycerides that make my dishes greasy? What are my options? 78.144.202.9 (talk) 07:15, 5 May 2013 (UTC)[reply]
Ordinary dishwashing liquid. NaOH is total overkill, and dangerous. -- Jack of Oz [Talk] 07:19, 5 May 2013 (UTC)[reply]
Doesn't that depend on how much I use? I could make it up to a less threatening concentration, or just add it to the water in its solid form... would it pop if I did that? — Preceding unsigned comment added by 78.144.202.9 (talk) 07:22, 5 May 2013 (UTC)[reply]
And when I said, what are my options, I meant re: the different types on triglycerides. — Preceding unsigned comment added by 78.144.202.9 (talk) 07:49, 5 May 2013 (UTC)[reply]
Sodium hydroxide is equally corrosive at all concentrations, except at 0 and 100%. Varying the concentration only changes the reaction rate. Changing the amount shifts the reaction equilibrium. Regardless of concentration or amount of hydroxide used, it will still corrode glazes on ceramics such as crockery; repeated use will strip the glaze off completely. While it does accomplish the task of degreasing, it also attacks the very thing you're trying to clean.
Note: it does not 'pop' when dissolved; that would be sodium metal igniting the elemental hydrogen it gives off. Plasmic Physics (talk) 08:26, 5 May 2013 (UTC)[reply]
Not to mention how difficult it is to rinse off. If not rinsed completely, all food consumed from its surface will taste bitter and brackish, or similar to baking soda, and it will damage your shelves. Plasmic Physics (talk) 03:38, 6 May 2013 (UTC)[reply]
It will also damage aluminium, zinc and tin. Rivets will tend to be eaten away. Graeme Bartlett (talk) 12:15, 5 May 2013 (UTC)[reply]
(Assuming those elemental forms and that type of object are participants in this system.) Plasmic Physics (talk) 12:58, 5 May 2013 (UTC)[reply]
Adding to the list of other things you probably shouldn't wash your dishes in, you might enjoy this series from a professional chemist. Shadowjams (talk) 20:09, 5 May 2013 (UTC)[reply]
An alternative alkali to use is ammonia solution. It turns fats into soap, and cuts through high grease build up very fast. Try not to breathe the vapour. Graeme Bartlett (talk) 11:36, 9 May 2013 (UTC)[reply]
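In chemical terms, both of the suggestions above rely on saponification: a strong base hydrolyses the three ester linkages of a triglyceride to give glycerol plus the salts of the fatty acids, i.e. soap. Schematically, with R standing for a generic fatty-acid chain:

    C3H5(OOCR)3 + 3 NaOH → C3H5(OH)3 + 3 RCOONa

which is why a strong enough alkali does cut grease, and also why it attacks skin and some cookware along the way.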

Pope Alexander of the Borgia family used vitriol or sulfur as a mood enhancer

I cannot find anything anywhere on the internet about the ancient use of vitriol or sulfur as a mood enhancer, nor how it was prepared and what the side effects were. Can anyone please help me?

5/4/2013 10:28pm... — Preceding unsigned comment added by Jaime71355 (talkcontribs) 02:28, 5 May 2013 (UTC)[reply]

I'm afraid you might be mixed up -- I've never heard of anything like that and can't imagine how it would work. Is it possible that you're thinking of arsenic, which was sometimes used as a stimulant? Looie496 (talk) 03:18, 5 May 2013 (UTC)[reply]
I thought the Borgias only used arsenic against their enemies? 24.23.196.85 (talk) 06:06, 7 May 2013 (UTC)[reply]

Beef vs. chicken

Why is it alright to eat rare/medium beef but not chicken? Is salmonella not a problem with beef? DRosenbach (Talk | Contribs) 03:48, 5 May 2013 (UTC)[reply]

Because rare chicken tastes terrible. --Jayron32 04:04, 5 May 2013 (UTC)[reply]
Rare chicken is eaten in Japan, mostly in Kagoshima prefecture, in sashimi style like [1] and [2]. The taste is OK. But there's always a possibility of Campylobacteriosis. I've eaten rare chicken twice, but nothing happened. Oda Mari (talk) 07:43, 5 May 2013 (UTC)[reply]
You can also get sick from undercooked beef. ←Baseball Bugs What's up, Doc? carrots09:24, 5 May 2013 (UTC)[reply]
Neither is "alright" - you can get sick from any type of food. It's really about statistics; and you've no doubt been exposed to more educational campaigns advising you about poultry - because that's where the majority of the reported infections come from. You are more likely to find hazardous bacteria in commercially-raised chicken than in commercially-raised beef. You can get sick from any harmful bacteria; and those bacteria can live almost anywhere - fruits, vegetables, meats, poultry, and non-food sources... but according to a report I found by browsing the USDA.gov website's food safety pages, An Economic Assessment of Food Safety Regulations: The New Approach to Meat and Poultry Inspection, salmonella accounts for more than half of all food-borne illness death, and accounts for about the same percentage of total health-care expense. E. coli infections, typically found in beef but also including all other sources, accounts for less than one tenth that amount. There is much speculation about why this is true: it is plausible that in the "natural environment," more chickens have illness, per capita, than cows. It is plausible that chickens are raised in conditions that are less sanitary or less healthy than cows. It is plausible that different regulations concerning beef and poultry contribute to a different epidemiology. It is plausible that testing for salmonella is less effective - or that hazardous quantities of poultry-borne salmonella are harder to detect than hazardous quantities of other food-borne pathogens. (In fact, the report I linked details each of these possibilities, with additional data). Nimur (talk) 18:27, 5 May 2013 (UTC)[reply]
And it's not just about eating rare vs. well-cooked meat. There's also the issue of defrosting frozen cooked meat. You can do that once, but if there's any left unused, you should get rid of it. If you refreeze it and defrost it a second time, it would no longer be safe to eat. This is particularly a risk for chicken, but it's true for all meat to some degree. -- Jack of Oz [Talk] 20:02, 5 May 2013 (UTC)[reply]
Different species harbor different organisms and the risk/transmission profile for them differs. There's not a lot of design beyond that. The one thing Nimur didn't mention is processing practices. Chickens are processed in bulk differently than cattle (the cleaning and defeathering for instance), and that can facilitate cross contamination. That said, even a single chicken from the backyard, dressed individually should probably be fully cooked. Shadowjams (talk) 20:07, 5 May 2013 (UTC)[reply]
  • Great, but I think the OP is asking for illumination on “is it alright to eat rare/medium beef but not chicken.” This puts it so simply that even a child can understand: “Poultry roasts, however, have an added complication: Unlike solid beef roasts, chickens and turkeys have internal cavities that can be contaminated with bacteria. The central cavity will be the last part of a roast to cook, so to be safe, it's important to cook a whole chicken or turkey until a thermometer inserted into the thickest part of the thigh registers 165°F.” [3] So, in other words (and to answer the OP's question), it is not so much the bacteria but the nature of the meat that requires a cooking temperature to ensure that all pathogens become 'toast.' Personally, I think raw meat is fine, provided that the chef buys it himself and knows what s/he is doing. But those artisans are few and far between. Having experienced food poisoning, I now trust only people who have a solid local recommendation. Even if they comment: “Uyee loook at mi daughter one more time like that … an' I'll smasher you face... OK? Bon appetite.” Aspro (talk) 17:12, 10 May 2013 (UTC)[reply]

Does frequent reading/writing affect neurology of spoken language?

The phenomena described in this article seem like they could be explained more or less like so: "Fans speak by reading their mental writing aloud; mundanes write by transcribing their mental speech." Have any studies examined how frequent reading and writing, and infrequent speaking, affect the brain structures or activation patterns associated with language? NeonMerlin 05:43, 5 May 2013 (UTC)[reply]

Athletes and alcohol

This is a hypothetical question, not medical advice, because I'm not a professional athlete. Before athletes compete, they follow a strict diet; but say, for example, they had a heavy night of drinking a few days before - not the night before - how would that affect their performance on the day? I'm sure a few athletes must have tried this before, whether intentionally or not. — Preceding unsigned comment added by Clover345 (talkcontribs) 12:06, 5 May 2013 (UTC)[reply]

None directly; the body has plenty of time to metabolize alcohol in a few days' time. The most likely avenue of affecting performance, in my opinion, is for the drinking and associated behavior to result in an arrest. Being in jail, or being suspended for having been in jail, is a sure means of preventing an athlete from performing normally. — Lomn 13:09, 5 May 2013 (UTC)[reply]
What about junk food? Do you think that would affect them more a few days before, since they follow such a strict diet normally? Clover345 (talk) 14:33, 5 May 2013 (UTC)[reply]
It doesn't have a significant effect; the diet is relevant only for the long term. In the short run, what counts is only whether you are getting enough calories. Count Iblis (talk) 15:34, 5 May 2013 (UTC)[reply]

As this is a reference desk, here's one: The effects of Alcohol on Endurance Performance. It highlights these effects on the day after drinking:

  • Dehydration
  • Potassium and sodium depletion
  • Impaired temperature regulation
  • Impaired balance and co-ordination
  • Reduced total work output

"... the ACSM recommends skipping anything beyond "low amount social drinking" for 48 hours prior to the event. It can take your body up to three days to purge itself of alcohol. One drink (sorry) over the course of an evening is your best bet." Alansplodge (talk) 16:46, 5 May 2013 (UTC)[reply]

You might find this article fascinating. Shadowjams (talk) 19:54, 5 May 2013 (UTC)[reply]

Optimizing learning habits

I've recently started playing games that are meant to improve the players' skill at solving simple arithmetic operations (such as 5×4 or 448÷32), which made me think about the way our brains study. For example, we will likely feel a bit uncomfortable whenever we make a mistake, which I suppose is the brain's way to make us more cautious. On the other hand, being too cautious would make us slower and therefore the learning process would take more time.

In order to accelerate my learning, I wanted to know what considerations there are to the act of studying. What is the best time to study new information, or to practice information that I've already learned? In the calculation games, should I prefer practicing each operation individually (first a list of additions, then subtractions and so forth) or all combined? Should I waste time recalculating wrong results, or should I move on? And how significant is it anyway?

In case the subject is too wide to detail, describing just the considerations of one habit (e.g. studying time) would also be great. I'm also interested in recommended books. Thanks! 79.181.175.168 (talk) 12:11, 5 May 2013 (UTC)[reply]

On whether to practice each operation individually or all combined, it really depends on what you want to achieve. If one particular operation is giving you problems, it would be a good idea to do that one individually. If you aren't having specific problems with any operation, it would be a good idea to do all combined, although I cannot quite remember the reason. Double sharp (talk) 15:11, 5 May 2013 (UTC)[reply]
Opinions differ widely on these sorts of things - from my own experience, there is no direct link from educational theory to educational practice to perfectly designed textbooks and educational systems. When you get to the act of "doing", there are just too many variables, and your own enjoyment of a system is one of them. But read up on Paul Pimsleur and spaced repetition. Basically, repeating immediately, then increasing the delay between each repetition, is the proven technique for memorising. This may or may not relate to solving mathematical puzzles, but I suggest it is a place to start. IBE (talk) 11:54, 6 May 2013 (UTC)[reply]
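A minimal sketch of the expanding-interval idea mentioned above, for anyone who wants to play with it. It is illustrative only: the doubling rule, the starting interval and the card prompts are made up for the example and are not Pimsleur's actual schedule.

    # Minimal expanding-interval (spaced repetition) sketch.
    # Intervals roughly double after each correct answer and reset on a miss;
    # these numbers are illustrative, not a published schedule.
    from datetime import datetime, timedelta

    class Card:
        def __init__(self, prompt, answer):
            self.prompt = prompt
            self.answer = answer
            self.interval = timedelta(minutes=1)      # first repeat almost immediately
            self.due = datetime.now()

        def review(self, correct):
            if correct:
                self.interval *= 2                    # stretch the gap after each success
            else:
                self.interval = timedelta(minutes=1)  # start over after a miss
            self.due = datetime.now() + self.interval

    deck = [Card("5 x 4", "20"), Card("448 / 32", "14")]
    deck[0].review(correct=True)                      # practise a card, then reschedule it
    for card in sorted(deck, key=lambda c: c.due):    # always take the most-due card first
        print(card.prompt, "next due", card.due)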

Is the blood test influenced by water?

I saw a nurse who said that if someone doesn't drink water before coming in for a blood test, their blood won't come out properly. My questions are: 1. Is it true? 2. Assuming it is true, how long does it take for the water to enter the blood circulation from the moment of drinking? מוטיבציה (talk) 21:10, 5 May 2013 (UTC)[reply]

What 'blood test' would that be referring to? Richard Avery (talk) 21:40, 5 May 2013 (UTC)[reply]
Drinking a lot of water can dilute the blood - which I suppose would affect measurements of the amount of some substance per milliliter of blood...so yes, I assume it's true. Healthy kidneys are able to excrete 1 litre of water per hour...so the time it takes to return to normal will be at least one hour for every liter you drank. SteveBaker (talk) 22:23, 5 May 2013 (UTC)[reply]
It seems to me the question refers more to cases where people are (mildly) dehydrated and have low blood pressure as a result, causing problems with drawing blood... Count Iblis (talk) 23:08, 5 May 2013 (UTC)[reply]
As the OP supposes, it is not a question of affecting lab measurements, it is a question of how easy or difficult it is for the nurse to get blood out of the patient. Mild dehydration in basically healthy patients does not affect difficulty in drawing blood. In elderly patients, and in patients who have been given intravenous chemotherapy, the peripheral veins that are normally chosen to draw blood, which are the same ones they tend to choose for intravenous drips, tend to harden and partially close down. In such patients, drawing blood can be quite difficult, but if they drink plenty of water 20 to 60 minutes before, it can be easier. In my experience, (as an elderly patient who has had chemo) it seems to depend on the experience and skill of the nurse. Older nurses, and nurses that have specialised in taking blood (phlebotomists) don't seem to have a problem regardless, as they will choose a better vein, and they are better at getting the canula in. Time of day also matters. If a doctor orders a blood test that requires fasting (for which the normal way is to get the blood taken before breakfast), on a cold morning nurses cannot get blood out of my arm veins at all - blood has to be taken from a foot vein, which requires a doctor's direct supervision for some reason. Wickwack 120.145.68.194 (talk) 23:27, 5 May 2013 (UTC)[reply]
Just a slight correction, the term is phlebotomist, it has nothing to do with plebs or botany :) Vespine (talk) 02:28, 6 May 2013 (UTC)[reply]
Last summer my GP practice had a notice up to the effect that patients who have booked morning blood tests should drink a pint of water after they get up and before they come for their blood test, so the phlebotomists there obviously think it affects the process. --TammyMoet (talk) 10:33, 6 May 2013 (UTC)[reply]
Interesting. I have about 6 blood tests a year these days, and the issue has never been raised. When it's a fasting test, they always ask when was the last time I ate; but otherwise, no questions. -- Jack of Oz [Talk] 10:41, 6 May 2013 (UTC)[reply]
You have to book an appointment to get blood taken, Tammy?? Another reason why I'm glad I don't live in the UK and have to put up with the NHS. We in Australia (not noted for a good caring medical system either) just rock up when we get round to it. I've never had to wait more than 5 minutes. Wickwack 121.221.85.77 (talk) 14:19, 6 May 2013 (UTC) [reply]
We do have a choice: I can go and sit in the out-patients at my local hospital and queue to have my blood taken without an appointment, or I can make an appointment to see the phlebotomist at my GP's surgery, or at one of several pharmacies around town, or I can arrange for a private phlebotomist to come to my house for a fee of £15 and take my blood. The advantage of the GP option is they have free parking. The disadvantage is that, when I faint (which I do from time to time) while having the blood taken, they have to call an ambulance and get my husband out of work and I go to A&E. If I'm at the hospital, I get charged parking fees of £2 per hour. If I faint they wheel me round to the recovery area and make an internal call to my husband, who works in IT there. Guess which I prefer. --TammyMoet (talk) 19:00, 6 May 2013 (UTC)[reply]
We have these choices: 1) go to the phlebotomist at a GP's premises (all are private) - if you faint, they'll just get the doctor to sort you out - calling an ambulance to take you away from a doctor on hand and to A & E where you'll have to get past the triage is just plain silly. No parking fee, due to competitive pressure; 2) Go to a laboratory's blood-taking outlet - all private - there are heaps of them all over the place, typically in shopping centres - no parking fee. What would happen if you faint is unknown to me, I guess they would have to call an ambulance, but maybe not - all phlebotomists are fully trained State Registered Nurses. 3) Go to the phlebotomist at a private hospital - no parking fee usually; 4) Go to the phlebotomist at a public (ie Government owned) hospital - pay an outrageous parking fee and wait. Public hospital phlebotomists are the most skilled though. If you genuinely can't leave home, a nurse from the Silver Chain will come, take blood, administer treatments etc. Silver Chain are a not-for-profit organisation and there is no fee regardless of what service is provided. However, if you can think rationally and talk, they will ask for a donation and can be quite aggressive about it. For those with terminal cancer who want to spend their last months or days in their own bed at home, away from hospital noise and hospital food, Silver Chain are absolutely marvelous. Overall, yep, I'm glad I don't live in the UK and have to put up with the NHS. Wickwack 60.230.238.42 (talk) 00:47, 7 May 2013 (UTC) [reply]


May 6

A few chemistry questions

A few chemistry questions… Please explain how to solve (no calculator allowed).

Assume that gaseous substance A undergoes a first order reaction to form gaseous substance B. At a certain temperature, the partial pressure of gas A drops to 1/8 of its original value in 242 seconds. What’s the half life for this reaction at this temperature?
I just plugged in numbers for the original value, calculated the new value, then looked at the answer choices and figured out the half-life. But how would this be done mathematically (without answer choices)? The answer is 80.7 seconds.
Equal masses of He and Ar are placed in a sealed container. What is the partial pressure of He if the total pressure in the container is 11 atm?
The answer is 10 atm. How?
A 160 mg sample of NaOH (MM=40) is dissolved to prepare an aqueous solution with a volume of 200 mL. What is the molarity of sodium hydroxide in 40 mL of this solution?
The answer is .0200 M. The only explanation I can come up with for this is that the molarity in 200 mL is .02 M, but then why wouldn't it change when looking at it for 40 mL? Does it not change? Molarity changes with volume though?
If .15 mol of K2CO3 and .10 mol of KBr are dissolved in sufficient water to make .20 L of solution, what is the molar concentration of K+ in the solution?
The answer is .40 M. How?
Vessel A contains 32 grams of O2 gas while Vessel B contains 32 grams of CH4 gas. Find the ratio of the pressures of the gas in Vessel A to Vessel B and the ratio of the average kinetic energies in Vessel A to Vessel B.
For the pressures, I just found the mole ratio was 1:2 and said moles is proportional to atm, so it’s 1:2. That’s correct answer. Is that correct logic?
How do you find the ratio of the average kinetic energies? I thought it was r_1/r_2 = √(M_2/M_1), but that answer would be 1:1.4, yet the answer key says the ratio is 1:1.

Thank you!!! --Jethro B 00:42, 6 May 2013 (UTC)[reply]

Generally, a functional knowledge of simple arithmetic and introductory algebra is considered a prerequisite before students begin stoichiometry. Each of these questions is essentially solved by trivial application of simple algebra. (With a little rounding and liberal application of critical thinking, you can solve much more difficult problems than these with no calculator - even pencil and paper are unnecessary for these problems.) Other questions are spot-checks to verify that you understand terminology (like the difference between concentration and volume). Can you help us understand your background so we can lead you in the right direction? Nimur (talk) 01:35, 6 May 2013 (UTC)[reply]
If you believe that the stoichiometry and algebra is simple, then please provide some answers for me... I don't have an issue with algebra or stoichiometry. However, you can't just do stoichiometry in each of these problems - they test information across various units and different things need to be applied for each one to understand how to do the stoichiometry. The picture here isn't stoichiometry, but rather the specific units each question deals with, whether it's solubility, gas laws, kinetics, etc...
So if someone could provide an explanation of how to solve them (or some of them), and I note that they are not all the same stoichiometry, I'd appreciate it.
P.S. If you're really interested in my background, you can send me an email - it's not something I will publicly reveal. I will say that I consider myself skilled in the subject area and have proven this in the necessary courses by scoring highest, but there are always specific little things that you either don't know or forget (to put this in context, these are selected from a document of 75 questions, and are the ones I have trouble with. It's just a few, but I'd like to know how to solve them). --Jethro B 02:06, 6 May 2013 (UTC)[reply]
I have no idea how much chemistry you, as an otherwise-anonymous poster, already know. If you're a new student of chemistry, you deserve some directed conceptual explanations. If you're an advanced grade-schooler, you need some detailed help with mathematics you wouldn't know yet. And if you're a physics undergraduate, you need a stern talk about life-choices, and we need to yell "apply the equipartition theorem" at you (for the last problem). Without knowing what type of help you need, it's difficult to help you.
For example, "Equal masses of He and Ar are placed in a sealed container. What is the partial pressure of He if the total pressure in the container is 11 atm?" Look up the atomic mass of Helium and Argon if you don't already know them. Helium's atomic mass is four, and argon's is forty (for the purposes of our discussion, without a calculator, and ignoring some irrelevant decimal places). The ratio 4:40 is simplified to 1:10; and the question gives you a total partial pressure of 11... one plus ten is eleven. The math is alarmingly simple - but only if you already know that partial pressure is proportional to the molar mass ratios. That is a simple fact, but it's one you need to learn somewhere (presumably in a chemistry class). Do you need help with these concepts or do you just need a reminder to apply them?
Every other problem had a similarly simple arithmetic answer, as long as you recognized the concept that was being asked. Nimur (talk) 02:52, 6 May 2013 (UTC)[reply]
These questions look like they're on the level of College level chem/AP Chem, which I know. Yes, if you can do what you did with the pressure question - state what concept is applied here and how - that's great. I don't need a detailed explanation, I should be able to understand it. The helium, with the lower mass, would exert more pressure than the Argon? --Jethro B 03:18, 6 May 2013 (UTC)[reply]
Here's my attempt to point you in the correct direction for these questions. An additional overall hint is don't be afraid to start slinging algebraic equations around. You might have an idea of how to solve things "if only I knew T (or V, or ...)". Don't get discouraged - just try representing it by a variable and calculate through algebraically. It's quite possible that the T's or V's will cancel and you'll find you don't actually need to know them to solve.
  • Equal masses of He and Ar ... as Nimur discusses above - the key point is that partial pressures are distributed like the molar ratio of the gases (each individual molecule contributes equally to the pressure for ideal gases).
  • Assume that gaseous substance A undergoes a first order reaction ... You'll need to understand what a first order reaction is. Drawing from examples of first order reactions, it should be clear what the reaction and stoichiometry are. From that, you should be able to calculate final amounts of A & B from the given partial pressures, and from the starting and ending amounts and the rate equation determine the half life.
  • A 160 mg sample of NaOH ... Questions can have superfluous information to catch out those people who are blindly combining numbers. If you're confident in your understanding of what molarity is, the 40 mL shouldn't throw you.
  • If .15 mol of K2CO3 ... Start by calculating what the molarity of the K2CO3 and KBr would be separately in the final solution. Then figure out what each would contribute with respect to K+ ions. The final K+ concentration is simply the total contribution of the K+ ions.
  • Vessel A contains 32 grams... This is a straightforward application of the ideal gas law. You can do PV=nRT for both vessels to find the ratio of pressures. Likewise, you can also write the equation for the average kinetic energy, then take the ratio and cancel like terms. One catch is that they're likely talking about the per-molecule or per-mole average, rather than a per-mass average or something like that, so keep that in mind as you write the expression for the average kinetic energy.
Hopefully that should be sufficient to get you on your way. -- 71.35.116.214 (talk) 04:22, 6 May 2013 (UTC)[reply]
  • A half life reduces the amount of something by half. The statement that it is a first order reaction merely emphasizes that there is a half-life, i.e. no matter how much is present, half of it is gone in the same amount of time. Half of half of half is 1/8, so the half life is 1/3 the "eighth life".
  • Helium weighs 4 amu, argon weighs 40. To make equal masses you need 10x as many particles of helium. So there are 10 times as many particles of helium as argon flying around in the gas. Ideally all the particles have the same range of energies, so the helium particles will sock a wall ten times as often as argon particles and therefore be exerting 10x the pressure.
  • The next one is a dirty trick. An aliquot of a solution has the same molarity as the stock it is taken from. Molarity is moles / volume, so you could make up 0.02 with 160 mg in 200 ml or 32 mg in 40 ml or (easiest for calculation) 800 mg = 0.2 mol in one liter!
  • 0.15 mol of K2something contains 0.30 mol of K. Add 0.10 mol of K from the other and you get 0.40.
  • Your logic is right. But if you put the two gases in the same vessel, they won't push a membrane (or the invisible boundary between them) one way or the other, nor will they transfer energy from one to the other, because - more importantly - they are at the same temperature. This implies they're carrying the same kinetic energy. See kinetic theory, which says the energy per particle depends only on the temperature. Wnt (talk) 03:12, 7 May 2013 (UTC)[reply]
For the potassium ion question, it's true that there are 0.40 moles of K+ in the solution, but the question asks for molar concentration, not the amount of substance, and gives the volume as 0.20 L. Unless you specify units of moles/200 mL (a rather strange choice), you need to divide by the volume. This gives a concentration of 2 M, so it seems the answer given in the OP's answer book is incorrect. Please correct me if I am missing something obvious here... Equisetum (talk | contributions) 09:39, 7 May 2013 (UTC)[reply]
Reading my answer, I feel I need to engage in a little auto-pedantry and point out that you still divide by the volume even if you do choose units of moles/200 mL; it's just that the volume is then 1 in those units (if you don't conceptually divide, then the units don't come out right, which even five years after doing my last dimensional analysis still makes me nervous). Equisetum (talk | contributions) 09:44, 7 May 2013 (UTC)[reply]
Sorry! I was rushing near the end and didn't notice the incorrect "right answer" was in M not mol. Wnt (talk) 15:57, 7 May 2013 (UTC)[reply]
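For reference, a compact worked version of three of the calculations discussed above, using the same rounded atomic masses as in the answers (4 for He, 40 for Ar); written in LaTeX notation:

    \frac{p}{p_0} = \left(\tfrac{1}{2}\right)^{t/t_{1/2}} = \tfrac{1}{8} \;\Rightarrow\; t = 3\,t_{1/2} \;\Rightarrow\; t_{1/2} = \tfrac{242}{3}\ \mathrm{s} \approx 80.7\ \mathrm{s}

    n_{\mathrm{He}} : n_{\mathrm{Ar}} = \tfrac{m}{4} : \tfrac{m}{40} = 10 : 1 \;\Rightarrow\; P_{\mathrm{He}} = \tfrac{10}{11} \times 11\ \mathrm{atm} = 10\ \mathrm{atm}

    [\mathrm{K^+}] = \frac{2(0.15\ \mathrm{mol}) + 0.10\ \mathrm{mol}}{0.20\ \mathrm{L}} = 2.0\ \mathrm{M}

The last line matches Equisetum's figure rather than the answer key's 0.40 M, which appears to be the amount in moles rather than a concentration.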

I want to change a reference on the Elephant Cognition page.

I posted the following comment on the Elephant Cognition talk page but nothing has been done to change it.

The source [Dubroff, M Dee (August 25, 2010). "Are Elephants Smarter than Humans When It Comes to Mental Arithmetic?". Digital Journal. Retrieved 2010-08-29.] is just a flake article talking about the journal article [Irie-Sugimoto, Naoko; Kobayashi, Tessei; Sato, Takao; Hasegawa, Toshikazu. "Relative quantity judgment by Asian elephants (Elephas maximus)". Animal Cognition, 2009, Vol. 12(1), pp. 193-199]. Shouldn't the actual journal article be cited, with the link being something like <http://journals.ohiolink.edu/ejc/article.cgi?issn=14359448&issue=v12i0001&article=193_rqjbaem>? — Preceding unsigned comment added by reku68 (talk) 19:57, 22 April 2013 (UTC) — Preceding unsigned comment added by 129.110.5.89 (talk)

You can edit the article yourself! I'll put the article on my watchlist, and if you mess something up, I'll help fix it. Regards, Looie496 (talk) 14:49, 6 May 2013 (UTC)[reply]

What kind of bug is this?

Hi there. For the last few weeks, I've been finding little bugs, about one every day or so. I get rid of it, but there's usually one there the next day anyway. It's very rare for me to see more than one at once, but it has happened. They almost always appear in the same area of the room, and whenever I see them they're always on a wall, and rather low to the ground. I don't recall ever seeing them outside of this room, either. I was wondering if anyone could help me identify what it might be? I took a Photo of the bug (apologies for image quality, it's the best I could get out of my old iPhone 3GS camera). I live in England, in case this helps narrow it down. Thanks for any help I might receive. 86.134.231.216 (talk) 12:29, 6 May 2013 (UTC)[reply]

It looks like a shield/stink bug. I don't know which species though. Plasmic Physics (talk) 12:46, 6 May 2013 (UTC)[reply]
To test it: does it stink when you try and catch it, or otherwise disturb it? Plasmic Physics (talk) 12:50, 6 May 2013 (UTC)[reply]
I don't think so. They move so slowly (hell, if I look straight at one it's hard to tell whether it's moving at all - it's only when I look away for a bit and then look back that I can tell it's moved, usually), and they're incredibly tiny (three millimetres top estimate). I can't rule it out for certain though, as I didn't try actively sniffing it or anything like that. If I see another one, I'll update. The last time I had a visitor I asked her, and she guessed at woodworm, though she stressed it was just a guess. I looked at all the insects linked in the woodworm article, though, and they all looked too elongated, whereas these are all rounder. I haven't noticed any wood damage on anything wooden nearby, either. 86.134.231.216 (talk) 13:07, 6 May 2013 (UTC)[reply]
I don't know much about bugs (I'm sure there's a word for the logical study of bugs, probably not bugology), but I do know that species can vary considerably in size within the same superfamily. Take Tessaratomidae: they are a family under stink bug, and they are giants compared to the other families. I would not be surprised if there was a family of dwarf stink bugs, of which this one is a member. In any case, that's just my opinion. Plasmic Physics (talk) 13:20, 6 May 2013 (UTC)[reply]
The study of insects is entomology. Anyway, the picture has very little detail, but PP's guess of a shield bug is pretty good. That's good enough for casual purposes, but only narrows it down to ~7000 species... Anyway, as for why this is a good guess: note that the elytra seem to be incomplete, and the pronotum has the general shape we expect in shield bugs. Note that many stink/shield bugs will not display any odor when disturbed. Even the (recently very common) brown marmorated stink bug only rarely produces odors, in my experience - so that's not a very good criterion. Finally, when trying to ID an insect, remember that things like color, markings and size are not very informative. The pros usually focus on gross insect morphology to get to Category:Orders_of_insects, and then use fine features to get to family or genus. In general, getting any insect ID down to species is very difficult, even for experts. The exceptions are things like honey bees and monarch butterflies, but even then, there are several close look-alike relatives that can easily fool amateurs. SemanticMantis (talk) 15:43, 6 May 2013 (UTC)[reply]
Are we looking at the same picture? What I see looks like a carpet beetle, family Dermestidae. --Dr Dima (talk) 19:17, 6 May 2013 (UTC)[reply]
Agreed. Looks nothing like a stinkbug. --jpgordon::==( o ) 19:33, 6 May 2013 (UTC)[reply]
What something 'looks like', is down to opinion, and by definition, an opinion cannot be false or true. So, it may not look like a stink bug to you, but it does not inherently invalidate my opinion. Plasmic Physics (talk) 01:13, 7 May 2013 (UTC)[reply]
Well, I only said shield bug was a good guess :) Anyway, if you think the elytra are completely covering the hindwings, then I suppose it could be a beetle. I thought the elytra looked incomplete, which would rule out beetles. But the photo is pretty bad. I was mainly trying to point out some of the diagnostic features that we could look for. "Looks like" doesn't hold much water for insect ID. There are several thousand species of beetles and shield bugs, and many of them will look rather similar when photographed in this manner. SemanticMantis (talk) 21:26, 6 May 2013 (UTC)[reply]
The posterior dorsal section of the insect appears more tapered than rounded, which is why I recognise it as a shield bug. Plasmic Physics (talk) 01:30, 7 May 2013 (UTC)[reply]
That's probably more down to the poor quality of my photograph than anything else. As SemanticMantis said, it's hard to really make any kind of judgement on such a photograph. Not really your fault. Going off the tips from Dr Dima and Jpgordon above, I did a bit of Googling and found quite a number of council websites mentioning carpet beetles, and in particular they tend to mention the Varied carpet beetle, which I think this is. Considering a lot of these pages (and the WP article) mention bird nests, I might have to get on the phone with the council (I live in a council flat) and see what they can find out. Thanks again everyone for your help, and apologies once more for the poor quality of my photograph. 86.134.231.216 (talk) 05:41, 7 May 2013 (UTC)[reply]
Ah, I concur, the dorsal perspective of the varied carpet beetle does indeed look similar to the photo. Plasmic Physics (talk) 08:01, 7 May 2013 (UTC)[reply]
I also go with Dr Dima and JPG, this image helps to confirm it is likely to be a Varied Carpet Beetle. Richard Avery (talk) 06:47, 7 May 2013 (UTC)[reply]

Meteor trail color

Why does the trail color of this meteor shift from green on the left to blue? A kind of redshift, since it approaches from the left? Brandmeistertalk 16:23, 6 May 2013 (UTC)[reply]

A meteor would need to travel something like a couple of thousand times faster than they do to give visible redshift. I don't see much blue, I see green which comes from the meteor's copper or magnesium, and red which happens when Earth's atmospheric gases are heated. At higher altitudes there is more oxygen and the metal emission dominates, lower altitudes have more nitrogen which gives a deeper red. This green-to-red is not uncommon, see here. The article on meteors mentions color, and there is more at Nasa and good ol' web searches on meteor color. 88.112.41.6 (talk) 17:07, 6 May 2013 (UTC)[reply]
The image was captured with a consumer digital camera, and according to the file's metadata, the image was further post-processed in Adobe Photoshop. Be very very careful drawing scientific conclusions from such images. Digital cameras are incredibly complicated, and if you aren't sure what digital processing has been applied (as well as a very good understanding of all the optical and electronic characteristics of the camera), you should not jump to conclusions about things like color. In other words, a digital camera with color is not the same as a spectrophotometer. Nimur (talk) 17:12, 6 May 2013 (UTC)[reply]
Didn't do google search this time, but thanks anyway, solved. Brandmeistertalk 17:48, 6 May 2013 (UTC)[reply]

Tetrahydrofuran

If a site is contaminated with tetrahydrofuran (specifically, I am talking about the Seymour Hazardous Waste Site), would removing all contaminated soil at the site remove all contamination from the site and/or prevent the future spread of contaminants from the site?--149.152.23.33 (talk) 22:34, 6 May 2013 (UTC)[reply]

Removing all of the contaminated soil, by definition, removes all of the contamination as well. 24.23.196.85 (talk) 22:57, 6 May 2013 (UTC)[reply]
Note also that THF is only mildly toxic, highly volatile and rapidly biodegradable -- so digging up all that soil might not even be necessary or worthwhile, it might be just as effective to simply allow the THF to evaporate/leach out/break down over time while taking strict measures to prevent any further contamination. 24.23.196.85 (talk) 23:13, 6 May 2013 (UTC)[reply]

May 7

Practically insoluble

Is there a defining characteristic, according to any particular school of thought, that distinguishes between 'partically insoluble' and 'insoluble'? My question is inspired by the defining characteristic of an 'existent isotope': that it must have a half-life greater than the time it takes for the nucleus to internally differentiate (~10⁻¹⁴ s). Plasmic Physics (talk) 01:23, 7 May 2013 (UTC)[reply]

Please clarify. By "partically insoluble", did you mean "partially insoluble", or "practically insoluble"? And what does the half-life of an isotope have to do with this? 24.23.196.85 (talk) 04:12, 7 May 2013 (UTC)[reply]
I mean 'practically insoluble'. The isotope statement demonstrates a pragmatic approach to semantics in an area of science. Plasmic Physics (talk) 04:22, 7 May 2013 (UTC)[reply]
Terms like "practically insoluble" and merely "insoluble" are imprecise terms. If you want precision, you would use defined numerical measures of solubility such as solubility product and molar solubility and mass percent solubility as shown at solubility table. There aren't "hard and fast" cut-offs between terms like "slightly soluble" "practically insoluble" and "totally insoluble". It's a bit fuzzy around the edges. Generally, something is "soluble" if the solubility product indicates that it will dissolve extensively, that is the solubility product is significantly higher than 1. Things which are considered insoluble have solubility products which are very tiny, while something "slightly soluble" would have a solubility product near 1. But yet again, there are no hard cut-off lines here. It's somewhat subjective. --Jayron32 05:09, 7 May 2013 (UTC)[reply]
Personally, I'd say a substance is "practically insoluble" if no decrease in weight of the solid phase can be detected after placing it in the solvent, and "completely insoluble" if the dissolved phase doesn't even show up in spectroscopic analysis. But that's just me. 24.23.196.85 (talk) 05:43, 7 May 2013 (UTC)[reply]
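To make the "tiny solubility product" point concrete, a standard textbook example (values rounded, quoted from memory): for a 1:1 salt such as AgCl, the solubility product relates to the molar solubility s by

    K_{sp} = [\mathrm{Ag^+}][\mathrm{Cl^-}] = s^2 \;\Rightarrow\; s = \sqrt{K_{sp}} \approx \sqrt{1.8\times 10^{-10}} \approx 1.3\times 10^{-5}\ \mathrm{M}

which is small enough that most tables simply label AgCl "insoluble", even though, as the discussion above notes, the cut-off is a matter of convention rather than a sharp line.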

Explosive experiments

Why is there no speciality laboratory glassware for experiments that involve explosively unstable compounds, like nitroglycerine? Such experiments seem to involve ordinary glassware, which readily turns to shrapnel in these explosions. Is it not possible to create thick-walled composite glassware that is heavy and shock resistant? Take a test tube as an example: to enable efficient heating and cooling, a thermally conductive metal bottom can be fused into the underside of the test tube. Of course, this will result in a makeshift cannon, but it should be possible to attach a force dissipator. I'd like to think that chemists would try and keep their lab in one piece. Plasmic Physics (talk) 01:50, 7 May 2013 (UTC)[reply]

Is this actually a widespread problem? How many scientists do you know who have been injured or killed by explosions? Or how many labs have been blown up? I suspect anyone working with explosive compounds has strict operating and safety procedures in place already. If they are working with quantities large enough to blow up the lab, I hazard a guess they have bunkers or similar special areas to which the compounds are restricted. Vespine (talk) 02:12, 7 May 2013 (UTC)[reply]
It's not a problem very often encountered. For scenarios, see [4] as linked in a previous discussion on this reference desk. Plasmic Physics (talk) 02:38, 7 May 2013 (UTC)[reply]
There are bombs and hoods and vessels and even ranges to handle explosives. Why would someone with more than one year's chemistry think a test tube would be an ideal or idealizable place to conduct such experiments? μηδείς (talk) 02:48, 7 May 2013 (UTC)[reply]
Concerning chemical experimentation of such explosives. Plasmic Physics (talk) 02:52, 7 May 2013 (UTC)[reply]
I see that I probably should never have given you advice on how to make that nitric acid after all... 24.23.196.85 (talk) 04:00, 7 May 2013 (UTC)[reply]
No, that was fun, but I did not make nitroglycerine, if that's what you're inferring. Plasmic Physics (talk) 04:17, 7 May 2013 (UTC)[reply]
Have you had two years of chemistry, Plasmic Physics? Knowing that (and the meaning of your last sentence fragment) will help us guide our suggestions. μηδείς (talk) 04:21, 7 May 2013 (UTC)[reply]
Yes, I've had two years. My last sentence fragment reasserts the context of my query, since you seem to have missed that from the link I gave. You speak of bombs and such, which is hardly applicable. Plasmic Physics (talk) 04:26, 7 May 2013 (UTC)[reply]
Actually, do you know what a bomb is? μηδείς (talk) 05:00, 7 May 2013 (UTC)[reply]
Hint: This word has a different meaning in chemistry than in everyday usage. 24.23.196.85 (talk) 06:00, 7 May 2013 (UTC)[reply]
I see: you're talking of a bomb calorimeter. It is still not applicable. I'm talking about experiments where detonation (or deflagration) is an undesirable reaction. Plasmic Physics (talk) 06:53, 7 May 2013 (UTC)[reply]
No, a bomb calorimeter is a specific type of bomb, there are others about which we don't seem to have articles, although the sense is mentioned at wiktionary. μηδείς (talk) 16:24, 7 May 2013 (UTC)[reply]

Thomas Klapötke is a professor in Munich; the link above also mentions him. He has a large interest in highly energetic materials and green bombs. One of his favorites is hydrazine azide (N5H5). I saw the material, which is highly explosive, and the people used normal glassware to handle it. They were behind a several-layer glass window, reaching around with their arms. They used a set of leather and chainmail gloves for their hands. The use of more than a few hundred milligrams was not considered a problem, but nobody I knew did it. Uranium hexaazide and the selenium and tellurium azides were handled in a very similar way. Klapötke also told the story of having a bromoazide in a flask when somebody opened a door, and the resulting pressure difference was enough to induce the explosion. The high speed of the explosion leaves only fine glass dust, mostly incapable of penetrating a normal lab coat. He said only a few glass particles reached him, making it very painful to close the safety belt when he went home that evening.--Stone (talk) 07:48, 7 May 2013 (UTC)[reply]

For lack of better words: Bingo! Plasmic Physics (talk) 07:53, 7 May 2013 (UTC)[reply]

I think Plasmic might like [5] this series, not to seem like I'm spamming it, but I did find it good reading, and I don't know a thing about chemistry. Shadowjams (talk) 07:54, 7 May 2013 (UTC)[reply]

I do appreciate a good read, however, I've already linked to that site near the start of this discussion. Plasmic Physics (talk) 09:54, 7 May 2013 (UTC)[reply]

The high-energy working group was never a place for me to join. The place was dangerous. One PhD student had a hole in his palm and nearly died; the small round flask simply exploded in his hand. Although doing the fall-hammer test and the other tests to get physical data on the explosives would have been nice work. --Stone (talk) 13:38, 7 May 2013 (UTC)[reply]

My late father-in-law was an industrial glassblower, and worked for a polytechnic which became a university. His job was to make such specialist glassware if it was possible. So you may not be able to buy such stuff off the shelf, but it would be bespoke. --TammyMoet (talk) 17:49, 7 May 2013 (UTC)[reply]
Can such extraordinary requirements be met by a glassblower's skill? Considering that the walls of such a vessel would be on the order of several centimetres thick (2-3?) and would be laminated with two different types of glass? Plasmic Physics (talk) 23:56, 7 May 2013 (UTC)[reply]
Using an air compressor to inflate the glass "gather" instead of a blowpipe is still a glass-blowing skill, and the man doing it is still known as a glassblower. And with air compressors and support tools there's no real limit. Wickwack 58.169.245.20 (talk) 11:06, 8 May 2013 (UTC)[reply]
Good, so my question is: why is that not the usual case? Why do these experimenters make do with standard glassware, albeit with massive protection measures? Plasmic Physics (talk) 13:20, 8 May 2013 (UTC)[reply]
Do they? Where's your evidence that they do? In any field there will always be some idiot who does something stupid, because he doesn't know any better, or doesn't have a real feel for the subject, or is just plain nuts, but that does not mean they are all stupid. Wickwack 124.182.51.119 (talk) 12:19, 9 May 2013 (UTC)[reply]
Don't miss the "albeit..." part, which implies the use of chain-mail gloves, blast-proof windows, etc. Ergo, I'm not calling them stupid, or making any sort of judgement on their modus operandi for that matter. Plasmic Physics (talk) 12:28, 9 May 2013 (UTC)[reply]
It is stupid if somebody uses a complex or expensive solution to a problem where a simpler or cheaper solution will do the job. Again, why do you think they don't use appropriate vessels? Your evidence? Can you cite an example? Wickwack 124.178.140.2 (talk) 23:40, 9 May 2013 (UTC)[reply]
Rather, I can't find evidence that they do. Plasmic Physics (talk) 00:46, 10 May 2013 (UTC)[reply]
So, cite some evidence or examples that they don't, then. You've been asked 5 times by 3 different people, over 3 days, so it looks like you can't. Shall I ask the Ref Desk why don't aliens have the good manners to write us a thankyou note for our hospitality each time they depart Earth? Heck, there's no evidence that they've ever bothered to write each time. Wickwack 121.215.3.250 (talk) 15:46, 10 May 2013 (UTC)[reply]

Aeronautical Engineering

Why do the technicians, space scientists and engineers wear white coveralls, masks, boots and gloves while working on a spacecraft? — Preceding unsigned comment added by 117.200.240.10 (talk) 05:44, 7 May 2013 (UTC)[reply]

Same reason that nurses or lab techs wear white coats -- so that any dirt or contamination on their clothes would be immediately visible. Why do British soldiers wear red coats, and French ones wear brown pants?  ;-) 24.23.196.85 (talk) 05:56, 7 May 2013 (UTC)[reply]
The colour is not that important. In front of my office there is an image of the integration of the Dawn spacecraft, and two of the engineers wear black suits. The colour could be used as a colour code: say, the white ones do the checking while the green ones do the documentation and the black ones do the actual work. The clean-room levels, like class 10000, force you to wear the protective garment. Most of the dirt in a clean room is introduced by the people working there, and the clean-room garment is there to seal it in and make it stay with the person. We had the choice for our clean room of green, light and dark blue, pink, black and white.--Stone (talk) 07:29, 7 May 2013 (UTC)[reply]
There can be psychological reasons. Not all spacecraft work will require clean-room conditions. The company I worked for in the 1990s had several large data centres at different locations. The supervisor of one centre set about using every opportunity to tell his team how good they were, and had them all issued with white coats. It worked - the fault rate in his data centre went down, significantly lower than the other centres, and equally lower than before he took over. The colour did not matter, and maybe he could just as well have issued them with uniform shirts or something. What did matter is that wearing a white coat reinforced in their minds the concept that they were good at their job and took care in it - and people rise or fall to their own perceptions.
Some factories I've worked in go to a lot of trouble to keep the place neat and clean, with everything in excellent condition, and issue different sorts of coats to different sorts of workers. It's the same thing - a proud worker is a good worker. A sloppy workplace tends to get sloppy workers.
In many places, wearing a white coat or whatever is what you do - just as managers wear a tie, and senior managers wear a dark suit.
Not to be forgotten is that workers like wearing lab coats as they reduce wear and tear on their own clothes, and the cost of cleaning the coats is a tax deduction. — Preceding unsigned comment added by 60.230.238.42 (talk) 12:16, 7 May 2013 (UTC)[reply]
Wickwack 60.230.238.42 (talk) 09:57, 7 May 2013 (UTC)[reply]
I am skeptical. How do you know it wasn't some other practice by the same manager? The mere fact that he was apparently eager to come up with new ideas/directives might have convinced other people there that somebody was paying attention. I do believe the first response that it is meant to make dirt show up - no matter how clean an area is supposed to be, the way it gets dirty may well be something very obvious. Wnt (talk) 16:43, 7 May 2013 (UTC)[reply]
There's simply NO issue with dirt in a data centre. There's no issue with dirt in most places where lab coats and the like are worn. In most cases it's about convention and image. Suppose you are a customer (whether a representative of a government buying a satellite, or an individual getting a vet to look at your sick dog). All other things being equal, who are you going to deal with - the outfit whose staff are dressed in jeans, t-shirts, and thongs (= flip-flops in USA-speak), or the outfit whose staff are dressed in neat coats, proper leather shoes, and maybe wearing ties? I knew the data centre guy very well, and I knew everyone in his team. How he got his team, comprised as it was of techs just the same as the techs in the other centres, to do a lot better did not go unnoticed by management. As I said, the white coats were not the only thing he did - praise was the key to the performance improvement, by making them feel they were the best. The white coats were a part of his strategy to get them to perceive themselves as top class, and through that, improve. It's Leadership 101 actually. Wickwack 120.145.46.40 (talk) 00:35, 8 May 2013 (UTC)[reply]
I'm sure I've read about other companies with a deliberately casual culture, and of course the same goes for academia. Being forced into uniform, stereotypical clothing seems to speak less of an employee being elite than of being of a low status. Wnt (talk) 04:23, 8 May 2013 (UTC)[reply]
I bet you have too. So have I. It all depends. What works for one workplace will not work somewhere else. Look up Hawthorne Effect. Essentially it is this: in the Hawthorne factory of Western Electric in the 1920s, their best efforts resulted in only barely adequate quality and productivity, due to the limitations of the technology of the day. Management decided to do a study on what would motivate the factory workers. The lighting was improved. Work output improved. They increased lighting again. Work output again improved. Lighting was then degraded somewhat. Again work output improved. The Hawthorne experience has been studied ever since, and just about anyone who does management courses in the Western world gets to look at it. The overall conclusion from Hawthorne and studies at other factories is that workers do better if they think management cares about them, not so much because of what management actually does. Workers work better if they see management spending money on them. Generally, it is good practice to give workers uniforms or "white coats". But equally valid is showing you care about workers by allowing them some freedoms - it may be flexibility in dress, flexibility in work hours, or whatever. That does not invalidate what my data centre colleague did, and does not invalidate that giving workers uniforms or white coats elsewhere can improve morale and pride.
Horses for courses. I worked quite a while for a very large company (the same one with multiple data centres) that was a bit like IBM - wearing a tie and a conservative suit was the thing in Head Office, and a sort of natural selection meant we liked wearing suits. However, the company had a sort of skunk works/think tank in a separate location deliberately staffed by radical bright young university graduates who were expected to think up new ideas and challenge accepted technology solutions. They got to dress in flip-flops, jeans, and teeshirts. Right for them but not for me. More Leadership 101 for you.
Wickwack 124.182.9.143 (talk) 05:28, 8 May 2013 (UTC)[reply]

Biplane vs. fighter jet

Suppose there's a dogfight between a World War 1 fighter biplane (for example, the Fokker Dr. 1 -- which was actually a triplane, but you get the idea) and a modern fighter jet (like the Mig-29). What would be the outcome of such a battle? Would it be an easy kill for the Mig, as I believe it would be? Or would both planes be unable to harm each other, as I've been told by a fellow aviation fan? (For the sake of argument, let's suppose both planes are piloted by top aces -- for example, Red Baron in the Fokker, and Ivan Kozhedub in the Mig -- so pilot skill is not a factor.) 24.23.196.85 (talk) 05:53, 7 May 2013 (UTC)[reply]

I'm not sure the Fokker Dr. 1 has the maneuverability necessary for evasion of an R-73 fired from ten kilometers away. The Fokker also couldn't return fire at that range, as the MG 08 that was its sole armament has a range of 2-3.5 kilometers. --Jayron32 06:02, 7 May 2013 (UTC)[reply]
That's what I'm thinking too -- provided that the missile can track the Fokker (which is the argument the other guy used -- he claimed that neither heat-seeking nor radar-guided missiles would have enough of a signal to track, and that the Mig wouldn't be able to get a shot with its Gatling gun because its higher stall speed and wider turning radius will make it overshoot. Personally, I don't buy it, but I'm looking for confirmation from someone more knowledgeable.) 24.23.196.85 (talk) 06:15, 7 May 2013 (UTC)[reply]
Why wouldn't radar-guided missiles have something to track? The Fokker should be large enough to show up, shouldn't it? It is much smaller than a Jet (5m long x 7 m wingspan for Fokker vs. 17m long x 11m wingspan for the MiG) but the resolution of the radar should be good enough not to confuse the Fokker with a duck... --Jayron32 06:31, 7 May 2013 (UTC)[reply]
...the Mig wouldn't be able to get a shot with its Gatling gun because its higher stall speed and wider turning radius will make it overshoot... That sounds like wishful thinking. Aircraft routinely destroy slow-moving and stationary surface targets. While it's plausible that our hypothetical MiG can't sit on the Fokker's six and follow it oh-so-slowly around, I don't see any practical problem with the MiG strafing the Fokker at its leisure. And it really shouldn't take that many 30mm cannon rounds to finish off the Red Baron. TenOfAllTrades(talk) 06:46, 7 May 2013 (UTC)[reply]
The OP's friend exhibits (as Spock says of Khan) two-dimensional thinking. The MiG can strafe the Fokker while diving on it or climbing up at it; he does not need to patiently sit right behind it as it chugs along. The MiG's astonishing power means it can trivially climb to a point a mile above the Fokker and then can dive down at its leisure. -- Finlay McWalterTalk 09:50, 7 May 2013 (UTC)[reply]
The Fokker would be helpless, Red Baron or not. The differences in technology, capabilities, and armament are so vast that I don't think there is any scenario where the older plane would pose any sort of threat. This is, after all, a plane that first flew a mere 14 years after the Wright Flyer. To even things up slightly, you'd have to take away the MiG's radar and missiles (or add them to the Fokker, although that might make it too heavy to fly), but I still don't think there's much the Fokker could do. You may as well ask a Model T to outrun a Corvette. --Bongwarrior (talk) 07:05, 7 May 2013 (UTC)[reply]
I'm going to go out on a limb here, and suggest that none of us are fighter pilots (the 8 year old me would be disappointed)... however that said, I bet if a Mig29 flew over the top of a biplane it might make it crash just because of its wake. Who needs a 30 mm when you have 1000 knots. Shadowjams (talk) 07:52, 7 May 2013 (UTC)[reply]
Indeed: the MiG's engines generate a massive jetwash and the wings a sizeable vortex - safe operating distances to avoid another aircraft's wake turbulence are measured in nautical miles. The MiG can easily get its wake to envelop the Fokker, which would probably shatter its airframe. -- Finlay McWalterTalk 09:59, 7 May 2013 (UTC)[reply]
During the Falklands War I believe Harrier jump jets were able to avoid Exocet missiles by staying still; this fooled the Exocet missiles into considering them as decoys and looking for something else. I'd guess a Fokker would be considered practically stationary by such a missile. Dmcq (talk) 09:34, 7 May 2013 (UTC)[reply]
Exocet is an anti-ship missile. Argentine air forces in the Falklands War#Armament lists the AAF's offensive missile capability in 1982. -- Finlay McWalterTalk 09:42, 7 May 2013 (UTC)[reply]
Sorry, one of their anti-aircraft missiles anyway - it's about the only name of a missile I heard of in the war! I'm no expert on military weaponry; I know the NRA goes on at length about assault rifles and automatic rifles, but it all passes over my head. Dmcq (talk) 10:31, 7 May 2013 (UTC)[reply]
It's good that you mention the MiG 29 specifically, because that leaves open (at least theoretically) the possibility of the most Tom Clancy-ish, fighter-jockily silly strategy the MiG can employ. He can fly ahead and beneath the Fokker, perform a Pugachev's Cobra, and strafe the Fokker as it passes in front of him. There's some chance that might not be safe (particularly that the gun exhaust, expelled into such an unusual air envelope, would stall the port engine), and it'd be idiotic to do in an actual conflict. But I'd put good money that somewhere, when his boss was off golfing, that some VVS Colonel has tried this to see if it works. -- Finlay McWalterTalk 10:11, 7 May 2013 (UTC)[reply]
The Fokker can come down to treetop level and jink about between tall trees and hills. This would make it just about impossible for a jet fighter to strafe, and it may make it difficult to hit with rockets as well. Israeli Mirage fighters found it very hard to hit MiGs, as the lower stalling speed of the MiGs allowed them to come down to low altitudes and jink about. The Americans had a similar experience in the Korean War - the slower older-generation MiGs could avoid being hit by coming down lower, and if the American overflew, pull back and hit the American. Not that I expect a Fokker with WW1 guns could hit a departing and rotating MiG-29. It wasn't all that easy to hit other WW1 fighters. And modern fighters can take a lot more punishment anyway. The "dirty" stalling speed of a MiG-29 is thought to be about 230 km/h whereas the Fokker Dr.1 triplane stalled at 72 km/h. As far as strafing goes, imagine trying to hit a door-sized target while passing it at 160 km/h - pretty damn difficult - for both. So I tend to think the OP's friend is more right than wrong. Wickwack 60.230.238.42 (talk) 11:57, 7 May 2013 (UTC)[reply]
Let's do the math. I don't know the stall speed of the MiG-29, but its landing speed is higher than that. Looking at a roughly equivalent US fighter, the F/A-18 Super Hornet, it lands at about 240 km/h (arresting gear says US equipment can stop an aircraft at 130 kt). The maximum speed of the Fokker is 185 km/h; let's say he's sustaining 170 km/h. So if the MiG is chasing the Fokker, as slowly as he can, he's gaining at about 1 km/minute. The MiG's Gryazev-Shipunov GSh-301 cannon has an effective range of between 1.2 and 1.8 km. So the MiG comfortably has a full minute to strafe the Fokker. The MiG carries 150 rounds, which (at the 301's cyclic rate) he can fire off in as little as 5 seconds; he only needs a handful to kill the Fokker. This isn't an unusual mission for an air-superiority fighter - it would be expected to be able to shoot down cruise missiles, light aircraft, helicopters, and reconnaissance drones. -- Finlay McWalterTalk 12:13, 7 May 2013 (UTC)[reply]
You've missed the point. The Fokker can evade by flying as SLOWLY as possible, and he can turn on a sixpence, due to flying slowly and because it's a triplane designed for manoeuvrability. That's what the MiG pilots did in the Israeli/Arab war and the Korean War - they slowed down as much as they could and darted about at treetop level, thus turning their slow speed to advantage. The MiGs would sometimes even lower wheels, as the "dirty" stall speed is lower than the "clean" stalling speed (with more drag you can fly a bit nose up with more throttle - the aircraft is then stable at a lower speed). What matters then is not just the MiG gun range but the rate of fire and the spread. The MiG-29 in this case won't have a steady target for 1 minute; he has a moving target passing through his sights, if he's lucky, for a second or so. Even if they are flying straight and level, which would only happen if the Fokker pilot is a complete fool, the closing speed is not 230-170 = 60 km/h, it's 230-72 = 158 km/h, so the MiG pilot still has only 23 seconds, not one minute. In practice he has even less, because he'll want to pull up to avoid a collision. He'll have maybe 8 seconds at best if the Fokker pilot is a complete fool. Wickwack 60.230.238.42 (talk) 12:31, 7 May 2013 (UTC)[reply]
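For what it's worth, the disagreement above comes down almost entirely to the assumed closing speed, and a rough straight-and-level estimate (ignoring manoeuvring and the pull-up) is just

    time in gun range ≈ closing distance / closing speed

At 70 km/h closing (≈ 19 m/s) it takes about 77 seconds to close 1.5 km; at 158 km/h closing (≈ 44 m/s) it takes about 34 seconds; the exact figure depends on how much of the 1.2-1.8 km gun envelope you count and on when the MiG has to break off. Any turning by the Fokker shortens the usable firing window much further, which is the real point in dispute.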
To all intents and purposes, engaging an elderly triplane would present the same problems as engaging a small observation helicopter, a scenario which I imagine fast jet pilots practice regularly. To Dmcq, although it is possible for the Harrier to rotate its jet nozzles to cause rapid deceleration, a tactic known as Vectoring in Forward Flight or "Viffing", I have read that this tactic was not employed during the Falklands War, although there was much press speculation that it would be. Our Vectoring nozzles article disagrees, so more research needed. Alansplodge (talk) 13:30, 7 May 2013 (UTC)[reply]


Argh - stupid argument. The Mig wins very, very easily...and in at least three or four different ways. The Fokker will be destroyed before it even knows that the Mig is nearby.
Simplistically - the Mig has the capability of deploying nuclear bombs - so it flies at high altitude above the Fokker - drops a nuke and it's game-over for everything within several miles of the target point - but the shock wave would shred the cloth-covered wings from a distance of ten miles. Accuracy isn't needed.
OK - let's assume that the Mig doesn't have that weapon load - or that it simply isn't desperate enough to use such a drastic solution.
Flying low and slow isn't going to help the Fokker. Its speed range is 45 to 115 mph - about the speed range of a car. It can turn pretty tight for a plane - but nowhere near as tightly as a modern car. Its radar and thermal signature is also comparable to a car's. So can the Mig destroy a car? Sure - it has a bunch of air-to-surface missile options... taking out a moving car is child's play for those kinds of systems. Truck convoys are classic modern military targets - and a Fokker is not much different.
The point is that "dogfighting" simply doesn't enter into the equation...the Fokker will be destroyed from five miles away by any one of dozens of possible guided or unguided missile options.
Even if the Mig is somehow forced to use its cannon - the range of that weapon is about one and a half kilometers... it's perfectly capable of strafing stationary targets - and it has laser guidance... taking out a car (or the Fokker) would be child's play. That gun can take out most "soft" targets with a three-round burst.
If your friend is still skeptical - ask this: Is the Mig capable of shooting down helicopters? The speed, altitude and maneuverability range of a military helicopter is a close match for the Fokker. Helicopters can fly at any speed from zero to over 100 mph - they can literally turn on a dime... maneuverability and slow flying won't help the Fokker one iota. I absolutely guarantee that there is a whole range of weapons that this aircraft can carry that are ideally suited to taking out helicopters.
Even without weapons of any kind - a combination of hot jet exhaust, supersonic shock wave and massive air vortices would disrupt the flight path and the health of the pilot enough to do it great harm... being in the open cockpit of a plane made of cloth in the wake of a Mach 2 flyby would not be a survivable thing! You can try to argue that the Fokker can turn fast enough to avoid being close to the Mig - but the math says otherwise. Let's suppose the Mig lines up on the Fokker from a mile away. At 1500 mph, the Mig will cover that mile in a little over 2 seconds. The Fokker (at top speed) can cover about 300 feet in that time - and if it's going slowly enough to turn tightly, then it'll be under 100 feet away... so even with the best possible reaction time, and the most direct flightpath away from the oncoming jet, the Fokker will be hit by supersonic shockwaves at something like 200 feet from the Mig. Recall the Mythbusters episode where a Blue Angels aircraft trashes stuff on the ground in a 200-foot-altitude, Mach 1 flyby? Well, double the speed and you get four times the energy... yeah... that's what happens to the Fokker.
Silly argument - trivially dismissed on multiple grounds.
SteveBaker (talk) 13:49, 7 May 2013 (UTC)[reply]
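As a sanity check on the flyby arithmetic (straight-line figures only, using the speeds assumed above): 1500 mph is about 2200 ft/s, so a mile (5280 ft) is covered in 5280/2200 ≈ 2.4 seconds, during which a Fokker at its 115 mph top speed (≈ 170 ft/s) moves only a few hundred feet. And since kinetic energy scales as the square of speed, doubling the flyby speed quadruples the aircraft's kinetic energy, which is the scaling being invoked for the shock wave.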
A very convincing argument by Steve. I had forgotten about laser-guided weapons - something not available in the Israeli-Arab and Korean wars I used as examples of successful evading by slow craft. My first thought was that you shouldn't include nuclear weapons, on the basis that nobody is going to approve the deployment of such an expensive weapon just to destroy a triplane. But on second thoughts, you should include it, as the scenario of a Fokker triplane or any of its contemporaries being used in a war today is silly anyway. Wickwack 120.145.46.40 (talk) 00:51, 8 May 2013 (UTC)[reply]
My $0.02: The Mig can kill the Fokker and possibly its pilot with its sonic boom, and its vortex will disrupt, if not damage or destroy, its control surfaces. Remember that a "Mach 2" plane can often only fly slightly supersonic at low altitudes. Still the cheapest option, and one of the easiest.
Cannon might be less than effective. The Fokker may not be heavy enough to make the cannon rounds explode. Enough rounds will be bad news (eventually, one will hit something hard, like the engine), but three hits to the wings will probably do no more than punch holes. How probably, I can only guess. (How effective are they against dirigibles and balloons?)
Radar Missiles will probably be all OK. They have been upgraded to be useful against "stealth" planes, so...
IR missiles, I don't know. Not sure if there's enough of a signature to trigger the long-rod warhead at the correct time. The blast will still be Bad News(TM), though.
Eh, Freudian slip. Continuous-rod warhead. - ¡Ouch! (hurt me / more pain) 07:52, 8 May 2013 (UTC)[reply]
Nukes: Just NO. Too expensive.
Laser-guided Whatever: still on the expensive side. Not sure if the laser will track it that reliably, either
Rockets? Like cannon rounds, and not so good overall. More punch, but heavier, slower, more expensive, fewer of them, etc.
Free-fall bombs. I think that's where it's at. If the Red Baron stays low, just dive-bomb him, and he's utterly [fokked]. ;) - ¡Ouch! (hurt me / more pain) 07:50, 8 May 2013 (UTC)[reply]
While it is true that in general supersonic planes cannot go as fast near sea level, and that applies to the MiG-29, it is still rated to fly at Mach 2 (~2400 km/h) at low level. So SteveBaker is probably right - one close fly-past and the Fokker is history. But you'd need a bloody big free-fall bomb to do it. Cannon should work well if it's laser-guided from kilometers away, as SteveBaker said. One hit in either the pilot's head or torso, the fuel tank (no self-sealing tank in a WW1 Fokker) or the engine and it's history. Why worry about cost when a MiG-29 against a Fokker triplane is a nonsense scenario anyway? Wickwack 60.230.209.84 (talk) 08:17, 8 May 2013 (UTC)[reply]
Well, sure, if you start adding in more constraints like price and collateral damage - you can make the answer come out any way you want. If you say that the entire mission has to cost less than $100 then the Mig can't even start its engines without consuming $100 of jet fuel... so the Fokker swoops low over the stationary Mig and strafes it until it hits something important. Conversely, without constraints, the total inability of the Fokker to place even a single bullet into the Mig means that the Mig pilot can keep trying over and over again until he does something the Fokker can't survive. If nothing else, he can just wait for the triplane to run out of fuel (the Fokker can only stay aloft for about 90 minutes - the Mig is good for at least a couple of hours)... then, when it's a sitting duck - he can use any kind of ground attack weaponry and will kill the Fokker in about as long as it takes to push the right button.
But within the given parameters, and with any reasonable assumptions, the answer is very clear.
SteveBaker (talk) 17:16, 8 May 2013 (UTC)[reply]
Exactly! How can you sensibly set constraints, cost or otherwise, or apply rules of engagement, when the basic scenario of a MiG-29 against a WW1 triplane has no realistic basis at all - it is just plain silly. It's a situation that can only arise in a pub conversation, so the only way you can answer this is to allow the Fokker to do anything a Fokker could do (in 1916), and the MiG-29 to do anything a MiG-29 can do (today). Wickwack 120.145.32.250 (talk) 04:10, 9 May 2013 (UTC)[reply]

Zero point energy and the volume of the universe.

Why doesn't the zero-point energy density of the vacuum change with changes in the volume of the universe? And related to that, why doesn't the large constant zero-point energy density of the vacuum cause a large cosmological constant?

Is it allowed to postulate / hypothesize on this topic on the reference desk, or is there a separate science forum / talk page for that? Robert van der Hoff (talk) 06:49, 7 May 2013 (UTC)[reply]

Robert, the idea is that a header is a **short** (up to about 7 words) pointer to what the question is about, the meat of which then appears below the header. -- Jack of Oz [Talk] 07:32, 7 May 2013 (UTC)[reply]
(I made a more concise title and transferred the L-O-N-G title into the message body). SteveBaker (talk) 12:59, 7 May 2013 (UTC)[reply]
We don't encourage using the reference desk simply to initiate discussion - especially when it's to discuss some idea that you had. However, there is a reasonable question here that we can possibly answer. SteveBaker (talk) 13:01, 7 May 2013 (UTC)[reply]
You hit the nail right on the head. That is indeed a very deep mystery that is yet to be satisfactorily answered by modern physics. Naive calculations show that the cosmological constant is about 120 orders of magnitude off (if memory serves). Supersymmetry improves that to "only" 60 orders of magnitude. Dauto (talk) 22:28, 7 May 2013 (UTC)[reply]
And did you read the Zero-point energy article, where the energy per particle is ½hν - not necessarily a very high density by cosmological standards? And this article also mentions renormalization to deal with the possibility of the lowest energy level of fields also containing energy. Graeme Bartlett (talk) 08:05, 8 May 2013 (UTC)[reply]
It's not ½hν per particle. It's ½hν per vibration mode of each bosonic field, which strictly speaking is infinite, hence the need for renormalization. Dauto (talk) 17:18, 8 May 2013 (UTC)[reply]
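For reference, the textbook expression behind Dauto's point: the vacuum energy of a free quantum field is the sum of the ground-state energies of all its modes,

    E_0 = \sum_{\mathbf{k}} \tfrac{1}{2}\hbar\omega_{\mathbf{k}}

which diverges as arbitrarily high-frequency modes are included. Cutting the sum off at the Planck scale (one common, if crude, choice) is what yields the roughly 120-orders-of-magnitude mismatch with the observed cosmological constant mentioned above.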

Why do we need omega-3 fatty acids?

What would the evolutionary basis be for the need for omega-3 fatty acids? As a land-based species it doesn't make sense that we require a nutrient which is most prevalent in oceanic fish. 11:17, 7 May 2013 (UTC) — Preceding unsigned comment added by 137.224.252.10 (talk)

Just a relevant thought: omega-3 supplements are a waste of money, since they have a poor bioavailability. To quote a certain Big Bang character, "All you're buying is expensive urine". Plasmic Physics (talk) 12:20, 7 May 2013 (UTC)[reply]
There are many plant sources of omega-3 fatty acids, so since the premise of your question is false, your question is ultimately unanswerable. It is certainly very highly present in fish, but it's also bioavailable in many foods that would have been food sources for people for thousands of years. --Jayron32 12:41, 7 May 2013 (UTC)[reply]
Also, if you go back far enough, we were oceanic fish! If Omega-3 fatty acids became an essential part of our piscine metabolism back then, they may have persisted as such after we (i.e. the ancestors of all the tetrapods) crawled up on land. {The poster formerly known as 87.81.230.195} 212.95.237.92 (talk) 13:32, 7 May 2013 (UTC)[reply]
  • Omega-3 is important for brain development. [6] et al. That is why the Aquatic ape hypothesis has gained traction over recent years. Later on, when farming was learnt, other sources were found. Plasmic Physics appears to be confusing these with water-soluble vitamins. The excess of those gets excreted in urine, but dietary fatty acids are broken down and used by the body as fuel. The kidneys don't filter them out; they get metabolized instead into useful fuel in most cases. An excess of Vitamin A (fatty-acid based) however can lead to vitamin poisoning, and you can't piss that out either; hence hypervitaminosis. Therefore, I can't see what benefit Plasmic's comment was to this OP's question. Perhaps he knows something I don't?--Aspro (talk) 15:24, 7 May 2013 (UTC)[reply]
Not at all, Aspro. The study that I read specifically refers to omega-3 acids administered orally. It states that they generally have low bioavailability, or are not in a form which is readily absorbed through the digestive tract. Plasmic Physics (talk) 23:32, 7 May 2013 (UTC)[reply]
Don't see your point, PP. You state that the study "specifically refers to fatty acids administered orally". So, did our ancestors have access to hypodermic-syringe administration of fatty acids not administered orally? - I don't think so... orally, yes. You are here, and I am here - now! - communicating with you via the internet! You are an issue of your ancestors, who 'must have' had an adequate bio-available source of omega-3 so that your lineage has survived into the modern day. So engage brain before engaging fingers to keyboard. Then you stated that they are not "in a form which is readily absorbed through the digestive tract"???? Fatty acids are fatty acids. The human body looks at all fatty acids as fuel... - mind you... PP might just have some superior knowledge (scientifically proven in the last few months-weeks-days) that we have all missed. The Reference desk is about answering the OP's questions the best we can... not about confusing them with scientific piddle. Just stick to physics, PP. As always, Wikipedia has an article for guidance in this instance. Sutor, ne ultra crepidam.--Aspro (talk) 18:31, 9 May 2013 (UTC)[reply]
[7]. Here is a nice summary of the investigation. Plasmic Physics (talk) 22:59, 9 May 2013 (UTC)[reply]
Thanks for posting a report which actually supports my drift. Quote from the article: "Because of the wide variability in the results between individual subjects and other factors, acute studies appear not to be as dependable as chronic studies in making conclusions with respect to bioavailability." There is a WP article to explain (methinks) your last inclusion - Obfuscation. From memory (but I am sure others will back me up) the DHA/EPA mentioned in the article you posted can be manufactured in the body from alpha-linolenic acid (LNA 18:3w3) found in seeds etc. But how many unprocessed seeds do you have in your diet? Going back to your first comment: it is the efficiency with which your body converts alpha-linolenic acid that your survival depends on. If your ethnic stock is of Irish or Norwegian or some other European descent then your body may well have trouble doing this alpha-linolenic acid conversion. Thus you will need DHA & EPA dietary sources (whether they be fish, or bought over-the-counter supplements, etc.) to keep you healthy. Most medical doctors are not interested in this because they sell "health care". Can I make myself any more clear!??? Just stick to commenting on queries about levers, force, vectors and other physical things. Aspro (talk) 14:23, 10 May 2013 (UTC)[reply]

We produce chemicals we need internally with enzymes (coded for by genes) that create a metabolic pathway for the chemical to be produced. Some such pathways may be unique and metabolically costly. In any case, if a nutrient is highly available in your natural diet (like vitamin C in the vegetables our ancestors ate) then the gene for the enzyme necessary to make that nutrient may be mutated or lost without harm. Once the gene is lost, people whose diet becomes unnaturally restricted can become ill (scurvy, beri beri) due to the lack of the nutrient. μηδείς (talk) 16:21, 7 May 2013 (UTC)[reply]

I found another source - seal oil - while searching omega-3 and Namibia (I continue to suspect an ancient origin of human feet and hairlessness in the wildly variable conditions of the Okavango, even if very good evidence traces the known origin further north) - anyway, apparently Namibia exports, or recently used to export, substantial quantities of seal oil as an omega-3 supplement. Here's one manufacturer's claim from Canada. [8] Apparently high omega-3 levels are true of all fish-eating mammals [9]. (It wouldn't surprise me if fish-eating birds were another source for less finicky palates...) Wnt (talk) 16:31, 7 May 2013 (UTC)[reply]

bioremediation

Can Tetrahydrofuran be phytoremediated? According to http://apps.echa.europa.eu/registered/data/dossiers/DISS-9d87f855-86db-4392-e044-00144f67d249/AGGR-9e7de3e4-b992-48c1-b99b-cbbabe912a48_DISS-9d87f855-86db-4392-e044-00144f67d249.html tetrahydrofuran is not very bioaccumulative, which leads me to think phytoremediation is not effective. — Preceding unsigned comment added by 149.152.23.34 (talk) 15:25, 7 May 2013 (UTC)[reply]

Also, can tetrahydrofuran be removed from the Seymour Hazardous Waste Site by Air sparging?--149.152.23.34 (talk) 18:26, 7 May 2013 (UTC)[reply]

Based on the properties of THF, I think some form of bacterial remediation would be the most promising method. BTW, what exactly is the nature of contamination at the Seymour waste site? 24.23.196.85 (talk) 00:00, 8 May 2013 (UTC)[reply]

Remote Sensing

I am trying to learn how remote sensing developed over the course of time beginning in the seventies. It would be helpful to find the Proceedings for the Annual Meetings of The American Society for Photogrammetry during that period of time. Where can I find them? Clues: not WorldCat and not the American Society for Photogrammetry and Remote Sensing. — Preceding unsigned comment added by Bobgustafson1 (talkcontribs) 19:21, 7 May 2013 (UTC)[reply]

While trying to answer this question I came across the Monroe Institute. Are you familiar with it? --TammyMoet (talk) 20:47, 8 May 2013 (UTC)[reply]

Venus fly trap digestion

I understand that the digestion process of the prey of a Venus fly trap may take several days, but how long does it take for the prey to actually die? What is the process that kills it? Against the current (talk) 20:07, 7 May 2013 (UTC)[reply]

If no one is able to answer here, you could ask at Wikipedia talk:WikiProject Carnivorous plants.
Wavelength (talk) 21:30, 7 May 2013 (UTC)[reply]
I don't know how long it takes for the bug to die, but the process that kills it is almost certainly chemical poisoning. 24.23.196.85 (talk) 00:18, 8 May 2013 (UTC)[reply]
Insects breathe through trachea that open via spiracles on the sides of their abdomen. Once digestive juices get in these they will smother, if not already. μηδείς (talk) 00:56, 8 May 2013 (UTC)[reply]

Why does stirred water reverse direction slightly just before reaching a stop?

Stirred me tea with a tea bag in it. Round and round the tea bag goes, in the direction stirred. Eventually the tea bag comes to rest... well, almost: before it does it goes back a small amount in the opposite direction. What causes this reverse? It's as if the water is slightly elastic. At first I thought it may be an illusion caused by watching the bag going round. But on careful observation I am sure it does go backwards. --bodnotbod (talk) 22:44, 7 May 2013 (UTC)[reply]

Maybe the teabag strayed across the centre of the vortex, and the remaining angular momentum created a torque in just the right way to turn the teabag in the other direction. Plasmic Physics (talk) 23:19, 7 May 2013 (UTC)[reply]
Water is most certainly not elastic, it does not retain a memory of a previous state. Plasmic Physics (talk) 23:23, 7 May 2013 (UTC)[reply]
It is worth considering that we are dealing with a three-dimensional flow here. It might be worth trying it in a glass rather than a cup, to see if that helps explain it. AndyTheGrump (talk) 23:28, 7 May 2013 (UTC)[reply]
I think this could be a backflow caused by the vortex collapsing on itself as the flow velocity drops toward zero. FWiW 24.23.196.85 (talk) 00:12, 8 May 2013 (UTC)[reply]
The teabag could be "elastic".... It would almost certainly have less mass than the water; could it be somehow rebounding off the water? Vespine (talk) 00:14, 8 May 2013 (UTC)[reply]
Isn't this how they power the TARDIS? μηδείς (talk) 00:52, 8 May 2013 (UTC)[reply]
You're thinking of the Heart Of Gold, but close. Tevildo (talk) 01:04, 8 May 2013 (UTC)[reply]
Actually, I was just making sure people were paying attention. :D μηδείς (talk) 01:44, 8 May 2013 (UTC)[reply]
The teabag may be an impediment to the circular motion of the stirred water. If it is an impediment to the circular motion of the stirred water perhaps we can assume that to a slight degree the water level is higher at the trailing edge of the teabag than at the leading edge. At the point at which circular motion of the water ceases, the higher water level "falls", and in so doing it momentarily falls below the water level of the cup of tea at complete rest. This overshooting of the higher level of water to a lower level of water creates a lower level which the teabag itself falls into. This "falling" of the teabag creates a slight backward motion. This is just an explanation that seems likely to me, but it might be wrong. Bus stop (talk) 02:19, 8 May 2013 (UTC)[reply]
It occurs to me that the opposite may be occurring at the leading edge of the teabag. A "depression" may exist in the surface of the liquid while the teabag is in forward motion. When motion ceases, water may "fall" into this slightly lower water level at the "leading edge" of the teabag. This filling in may result in an overshooting of the amount of water to result in a completely uniform water level throughout the body of water. That momentary overshooting of water into a depression at the leading edge of the teabag may result in a momentary higher water level at this point, which may serve to push the teabag in a small backward motion. Bus stop (talk) 02:46, 8 May 2013 (UTC)[reply]
The way to really see this effect is during a large plasmid preparation. For some reason, perhaps the elasticity of the DNA (?), a flask filled with bacterial lysate in 10% SDS will very noticeably rebound and come back the other way. Wnt (talk) 04:20, 8 May 2013 (UTC)[reply]
Thank you for the many replies thus far. I think I shall have to read some of them a few times more to grasp them. I wondered whether someone might say "ah, this is the '[Scientist's name Effect]'". Thanks again everyone. --bodnotbod (talk) 11:25, 8 May 2013 (UTC)[reply]

Another possibility is that this is an optical illusion. If you look intently at something that's visibly rotating for about 20 seconds, then transfer your gaze to a textured surface (concrete works well), the textured surface appears to start rotating in the opposite direction. (Well, it does for me!) 122.108.189.192 (talk) 07:24, 9 May 2013 (UTC)[reply]

Sodium carbonate electrolysis?

Carbon dioxide scrubber says that you could electrolyze a sodium carbonate solution to drive the CO2 out of the carbonate, returning it to sodium hydroxide. But there is very little about that in Google (like searching for sodium carbonate electrolysis), so I have a question: what gases will be generated at the cathode and the anode while electrolyzing sodium carbonate in water? 118.136.5.235 (talk) 23:28, 7 May 2013 (UTC)[reply]

Anode -- CO2 only; cathode -- none (if the product is NaOH). However, it is possible to electrolyze molten sodium carbonate all the way to sodium metal, in which case both CO2 and oxygen will be generated at the anode, and hydrogen at the cathode. Also, converting sodium carbonate to sodium hydroxide does not require electrolysis -- simply distilling a hot solution of the carbonate with steam under vacuum can produce the hydroxide by stripping out the CO2 (this is actually done on an industrial scale in some coke plants). 24.23.196.85 (talk) 23:55, 7 May 2013 (UTC)[reply]
This isn't strictly electrolysis, as this is not a redox reaction. The oxidation states of all elements stay the same on both sides of the reaction. It is merely a non-redox Chemical decomposition of a metal carbonate. You get sodium hydroxide merely because sodium oxide pretty much instantly forms sodium hydroxide in water. You need to use electric current to generate enough energy in a water based solution, as you can't get enough energy from heating it, but you can use heat to decompose solid sodium carbonate into sodium oxide and carbon dioxide just fine. The water present in solution, however, prevents this from happening when heated: you can't get the solution much above the boiling point, which is not a high enough temperature to cause the decomposition. --Jayron32 00:01, 8 May 2013 (UTC)[reply]
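Written out, the non-redox route Jayron describes looks like this (schematic only; conditions and side reactions are glossed over):

    Na2CO3 → Na2O + CO2        (thermal decomposition of the dry solid)
    Na2O + H2O → 2 NaOH        (the oxide hydrates immediately in water)

so the net change in solution, however the energy is supplied, is

    Na2CO3 + H2O → 2 NaOH + CO2

Every element keeps the same oxidation state throughout, which is why this isn't electrolysis in the strict redox sense.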
Seems really cool to know that we can strip out CO2 just by distilling it, but after a quick search in Google (carbonate distillation) it returns home-brewery stuff... so if you can provide the source, that would be good. Anyway, thanks for the answer. 118.136.5.235 (talk) 23:23, 8 May 2013 (UTC)[reply]
Sorry, I was wrong about that -- what I was referring to was stripping out excess CO2 from sodium bicarbonate solution to regenerate sodium carbonate (Seaboard process), NOT converting sodium carbonate to sodium hydroxide. BTW, the source is the book Coke, Tar and Coal Chemicals, which is available at our city library (I don't remember the author's name). 24.23.196.85 (talk) 04:47, 9 May 2013 (UTC)[reply]
Seems good enough to scrub CO2 from air... or might not... thanks. (It's the vacuum carbonate process you're describing, not the Seaboard process.) 118.136.5.235 (talk) 00:44, 10 May 2013 (UTC)[reply]

May 8

Predicting the history can be wrong

Is there any other field in science, besides astrophysics and the prediction of far-future events in astronomy, that involves speculation and unreliable guesses? My mind is straightforward-thinking; that is why I thought academic research papers were just well written - I was never aware they contained guesses and unreliable speculation. Plate tectonics: I have seen several different websites discussing how the continents will rearrange themselves hundreds of millions of years in the future, and the scientists come up with tens of different results. I thought reconstructing geological history was quite accurate, because of the fossils we have. I thought reconstructions going back 500 to 600 million years, and dating things that far back, were still pretty accurate. Between the Mesozoic and Jurassic periods, and from 70 million years ago up to today on the geological scale, I never hear of any observational error in where the continents were placed. If there are errors found in the Mesozoic or Jurassic, where may they have messed up?--69.233.254.115 (talk) 00:29, 8 May 2013 (UTC)[reply]

It's much easier to go backward than forward. Going backward is like putting together a jigsaw puzzle. For example, the east coast of South America fits neatly into the west coast of Africa, and when you line them up, the surface rock formations also match nicely. But going forward requires understanding the mechanisms that make plates move, and the simple fact is that we don't. We can project forward about 100 million years by assuming that plates will continue to move at their current rates, but beyond that, everything is guesswork. We know that plates sometimes change their direction of motion, but we don't know why. (The most popular theory is that the motion is driven by mantle plumes, but that theory is controversial and in any case nobody knows for sure how mantle plumes change over time.)
Even I, who like to think I understand this stuff to some degree, have been tricked by all the speculations. I have a National Geographic atlas that confidently shows the state of the Earth 250 million years in the future, with all the continents coalescing into Pangaea Ultima. It took me a lot of reading to figure out that this is really no more than one person's wild-ass guess.
But to step back from this, making predictions of this sort is simply part of the process of science. Scientists know that every prediction is based on theory and data, and the weaker the quality of the theory or data, the less meaningful the prediction. Geologists know enough not to take these plate tectonic predictions very seriously. The problem comes when predictions are fed to the general public, who don't have enough background knowledge to judge them. Looie496 (talk) 01:24, 8 May 2013 (UTC)[reply]
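To put that roughly 100-million-year horizon in scale terms (rough arithmetic, assuming typical present-day plate speeds of a few centimetres per year):

    5 cm/yr × 100,000,000 yr = 5,000 km

which is comparable to the width of an ocean basin. Simply extrapolating today's velocities that far forward already redraws the whole map; beyond that, the unknown changes in plate speed and direction swamp everything else.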


Probably the best, and these days the most important, given that governments are making decisions about it, is the science of climate change. Climate change boffins use computer models to predict what temperatures, sea level, and storm activity will be like 50, 100, or more years from now. Almost all expect all three to increase, but the range of predictions does not inspire confidence that this is a well understood science. All modelling is done by making simplifying assumptions - that is, what are at best judgement calls, and at worst no more than wild guesses, as to what factors to include and what factors to ignore, in order to make computer modelling simple enough to actually do.
The Australian Govt in 2010, recognising that most Australians live and work in coastal areas, directed the Bureau of Meteorology to issue maps showing the flooding expected, with high and low limits, so that town planning, investment, and planning for major remedial works can proceed, with climate change in mind, on some sort of rational basis. After some pondering, the BOM boffins decided on a sea level rise low limit of 800 mm and a high limit of 1100 mm by Year 2100. Quite precise, you might say, but this is only a small fraction of the range of predictions made by various experts around the world. And the assumption of a 300 mm range in sea levels had an enormous effect on the magnitude of city areas computed to be flooded. It is difficult to say whether the BOM had some reasonable basis for their assumptions, or whether they decided they shouldn't frighten us too much.
Most computer models have disregarded variation in the Sun's output, which is a risky thing considering we don't fully understand the Sun's weather. Some boffins just ignored it or didn't think of it, some have reasoned that it probably can be ignored; a few think it cannot. But, until we have at least a few thousand years of good data (we do not), it's really just a guess. Heck, there is not even a broadly accepted view as to whether the last sunspot cycle was normal or not.
As far as I know, all climate modelling has proceeded on the basis of natural phenomena plus the normal activities of Man in peacetime. The common result of such modelling is a global temperature rise, which will cause a rise in sea level and an increase in storm activity (because there is more energy in the system). But what if there is a nuclear war, either local or large scale, sometime in this century? It is a quite possible, even somewhat likely, scenario. It would result in long-term global COOLING due to high-atmosphere dust, much as many (but not all) climate experts think that the saturation bombing of German cities in WW2 caused unusually cold winters in the Northern Hemisphere for a couple of years.
Some expert boffins (only some) in relevant fields consider the ocean current directions in the North Atlantic to be unstable, and a change from one quasi-stable state to the other could produce global climate change far outweighing anything Man is doing. Almost all climate change modellers have ignored this, or simply are not aware of it. Who is right?
I have read journal articles that say volcanoes are a natural source of greenhouse gases and act to increase temperatures. Other articles have said that dust from volcanoes can lower temperatures. Which one will predominate over the next 50 or 100 years? Seems to me the answer can at best be a good guess, as predicting when a given volcano will erupt, and what sort of eruption will occur, is a science not yet mastered.
Wickwack 120.145.46.40 (talk) 02:02, 8 May 2013 (UTC)[reply]
What other types of science, besides Pangaea Ultima and far-future astronomy stuff, involve speculation? People told me earlier that the whole purpose of science is speculation and guesstimates; otherwise there would be no point to science.--69.233.254.115 (talk) 02:34, 8 May 2013 (UTC)[reply]
People gave me an example of cold fusion. Does chemistry involve theory and future prediction? I never hear chemistry argue about any future events. I never hear predictions of the future and speculation in the biology area. What other scientific areas involve prediction debates and speculation arguments?--69.233.254.115 (talk) 02:40, 8 May 2013 (UTC)[reply]
In the sense that scientists seek to understand phenomena, develop working hypotheses and test them against measured data, and use these hypotheses to predict what new experiments will be worth doing (as distinct from engineers, who make practical decisions using the theories provided by scientists), it is correct to say the purpose of science is speculation and guestimation. However, in most fields what gets predicted is known to within very close limits and is for all practical purposes a certainty. For example, I have been a practicing electrical engineer for over 50 years. The theory has never let me down, though I have long learnt to watch for my own human error. Wickwack 120.145.46.40 (talk) 02:48, 8 May 2013 (UTC)[reply]
It's true that the purpose of almost all science is to produce explanations or theories from which meaningful predictions of the future can be made; but the range and accuracy of those predictions is highly variable. "What happens if I let go of this apple?" has an obvious and fairly specific answer; "What happens if I let go of these dice?" does too, but it may lack the specific detail the questioner wants; "What happens to the whole world in 50, or 50 million, years' time, if current trends continue?" has no one specific answer, but rather a range of varyingly likely guesstimates. AlexTiefling (talk) 11:22, 8 May 2013 (UTC)[reply]


You have to look at what is based on indisputable physics. If you focus on that, you can even look into the future and get an accurate picture. E.g., we may not know the exact configuration of the continents 300 million years from now, but you have to ask if that is relevant. We actually do know quite well, in broad outline, how things will look here on Earth over the next hundreds of millions of years. The most important factor is that the Sun will gradually get hotter. We can quite accurately predict the induced decline in CO2 levels and when this becomes so low that photosynthesis will stop. We can predict quite accurately when the oceans will evaporate away. That will then lead to a runaway greenhouse effect. Count Iblis (talk) 11:54, 8 May 2013 (UTC)[reply]

I think that what you're running into here is chaos theory. There are systems (and the weather is one of them) where the equations for what's going on are well known - but the final outcome can be drastically different depending on microscopic variations in the initial numbers. So in the case of the weather, if your thermometers are off by even a millionth of a degree - then your ability to predict the path of that hurricane sometime in the future can be off by hundreds of miles. This is popularly known as "the butterfly effect" - but it affects much more than the weather.

My favorite example of that is this: Take a free-swinging pendulum with a magnet on the end. Place a sheet of paper beneath it and place two magnets onto the paper labelled 'A' and 'B'. Let's do it in a perfect vacuum with totally frictionless bearings. Set the pendulum swinging, and it'll wind up pointing towards either A or B. Given the initial position of the pendulum, we can write down the equations for its motion with complete accuracy and confidence. Now, let's experimentally determine which set of places we can launch the pendulum from to have it end up over A - and which set leave it over B. Let's put a colored dot onto the paper where we launched the pendulum from - a red dot if launching the pendulum from that point leaves it ending up over A and a blue dot if it ends up over B.

If you were to do that experiment over and over again, you'd discover large areas of the paper that were all red, large areas that were all blue and a lot of areas where the red and blue dots seem to be all over the place. If you investigated those areas by placing the pendulum between those dots and filling in the gaps between them, you'd still find regions with a mixture of red and blue dots. If you plotted the positions perfectly, then the smallest all-blue and all-red areas would be smaller than the size of an atom...infinitely small in fact. So in those regions, mispositioning the pendulum by even the width of an atom is enough to screw up the results.
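(For anyone who wants to see that red-and-blue map without building the apparatus, here is a minimal numerical sketch of the same idea in Python. The magnet positions, spring pull, damping and step size are all made-up illustrative values, not measurements of any real rig; a small damping term stands in for the idealised "let it run forever" condition so the simulation actually settles.)

 # Minimal sketch of the "which magnet does the pendulum settle over?" experiment.
 # All constants are illustrative guesses, not values from the discussion above.
 import numpy as np

 MAGNETS = np.array([[-1.0, 0.0], [1.0, 0.0]])   # magnets A and B on the paper
 SPRING = 0.2      # pull of the pendulum bob back towards the centre point
 DAMPING = 0.1     # a little friction so the motion eventually settles
 DT = 0.01

 def final_magnet(x0, y0, steps=40_000):
     """Release the bob at rest from (x0, y0); return 0 if it settles over A, 1 for B."""
     pos = np.array([x0, y0], dtype=float)
     vel = np.zeros(2)
     for _ in range(steps):
         acc = -SPRING * pos - DAMPING * vel
         for m in MAGNETS:
             d = m - pos
             r2 = d @ d + 0.05           # softening term keeps the force finite
             acc += d / r2 ** 1.5        # inverse-square-like pull towards each magnet
         vel += acc * DT
         pos += vel * DT
     return int(np.argmin([np.sum((pos - m) ** 2) for m in MAGNETS]))

 # In the speckled regions of the map, launch points this close together
 # can settle over different magnets:
 print(final_magnet(0.30, 1.00), final_magnet(0.3001, 1.00))

Colour a grid of launch points by that return value and you get the red/blue map described above: large solid regions, plus boundary regions that stay speckled however finely you zoom in.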

When science is faced with chaotic systems like that - our ability to predict the past or the future from a given set of data becomes sharply limited by the precision of that data. Even though we know the precise equations governing magnets and pendulums, we sometimes can't predict where it'll end up. That's why we can't predict the weather - or know precisely how the continents drifted or any of a wide range of things.

That doesn't mean that we can't predict anything at all. Many systems are not chaotic - many others are chaotic but can be controlled such that we're working in non-chaotic regions of the "parameter space". There are places on the red and blue map for the pendulum where we can predict with 100% confidence where the pendulum will end up.

Even in cases where we do have chaotic systems, we can often come up with statistical answers that are still quite useful.

Astrophysics is an especially difficult area. We might measure the spectrum from a distant star, find some distinctive spectral lines, use that to calculate the red-shift, from that calculate the distance to the star, from that calculate its true intensity, from that calculate what kind of star it is, from that and its intensity, estimate its mass, from that... well, you get the picture. Every one of those stages contains measurement errors and approximations. Making solid deductions from those results (taking into account the size of the error bars) is rather tricky - and you have to expect there to be more than one set of hypotheses that covers the same set of observations.

SteveBaker (talk) 16:58, 8 May 2013 (UTC)[reply]
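(To put a number on the "every stage adds error" point, here is a rough Monte Carlo sketch of just the first links in that chain - redshift to distance via the Hubble law, then distance plus measured flux to luminosity via the inverse-square law. The input values and error bars are invented for illustration; only the relations d ≈ cz/H0 (for small z) and L = 4πd²F are standard.)

 # Toy error propagation for the redshift -> distance -> luminosity chain.
 # All input values and error bars are invented for illustration only.
 import numpy as np

 rng = np.random.default_rng(0)
 N = 100_000

 c = 299_792.458                        # speed of light, km/s
 H0 = 70.0                              # assumed Hubble constant, km/s/Mpc
 z = rng.normal(0.02, 0.002, N)         # measured redshift with ~10% error (made up)
 flux = rng.normal(1e-15, 1e-16, N)     # measured flux, W/m^2, ~10% error (made up)

 d_mpc = c * z / H0                     # Hubble-law distance (small-z approximation)
 d_m = d_mpc * 3.086e22                 # metres per megaparsec
 lum = 4 * np.pi * d_m**2 * flux        # inverse-square-law luminosity

 print(f"distance:   {d_mpc.mean():.1f} +/- {d_mpc.std():.1f} Mpc")
 print(f"luminosity: spread of about {100 * lum.std() / lum.mean():.0f}%")
 # The luminosity spread comes out larger than either input error,
 # because the uncertainties compound as they propagate down the chain.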

The OP is misunderstanding how science works. New research papers are, by their very nature, unproven, speculative, and unreliable. On Wikipedia, they're considered primary sources and should generally not be used in articles. If a paper seems to have merit, it is included in review articles and meta-analyses. If the scientific community manages to replicate the paper's results repeatedly, or otherwise find independent evidence of its validity, the discovery might become part of the scientific consensus, and might make it into textbooks or Wikipedia.
Now for astrophysics, which is actually my field. Just like in any science, there are predictions about the future that are absolutely certain, and those that are highly speculative. I'm absolutely sure that I can tell you where Mars will be a year from now, to within a few meters. I'm sure that a transit of Venus will happen in December 2117. Scientists are almost as certain that the Sun will turn into a red giant, shed a planetary nebula, and leave behind a white dwarf. There are 100 billion roughly Sun-like stars in our galaxy; we can see that plenty of other stars of similar mass and chemical composition to the Sun have either gone through this process or are going through it right now. On the speculative side, we don't really know if the Sun will swallow the Earth after turning into a red giant--existing models aren't accurate enough to predict that.
Regarding simulations: in the 21st century, computer simulations are an indispensable tool of almost every science. Humans are simply not smart enough to work out the equations of stellar evolution and predict what a star will do through its lifetime, because that involves more computation than a human could possibly handle. I'm sure Wickwack would agree that they're an indispensable tool in electrical engineering as well. The difference is that engineering is meant to use well-tested, well-understood and uncontroversial scientific principles to make a reliable product, whereas research is meant to discover those principles in the first place. Just like all of science, simulations involve simplified models and don't account for every possibility. For example, we could simulate where Mars will be tomorrow, but Wickwack seems to be worried that an alien could come and whack it off its orbit today. If someone else wants to model the alien-Mars interaction, they're free to do so, but the non-alien model has worked exceptionally well in the past, accounts for all likely scenarios, and is based on known physical laws. The same is true for climate science. A model based on known physical laws that accurately predicts past and near-future temperatures is our best guess for what will happen in the future. All realistic models that meet these criteria predict easily measurable warming, flooding of low-lying areas, and more extreme weather. Climate models probably don't account for the Sun suddenly changing brightness, but the Sun hasn't changed brightness by more than 0.1-0.2% in the past 2000 years, there's no astrophysical reason for it to change brightness, and other main-sequence stars show the same level of stability. They also don't account for a nuclear war, or for the chance that humans suddenly decide to stop emitting CO2 and manually sequester it out of the atmosphere. The best response to that complaint is "my model tells you what happens when there's no nuclear war. Somebody else can tell you what happens during a nuclear war."
In the end, computer models are judged in the same way as other hypotheses. If they predict things correctly, they're accepted. If not, they're improved or rejected. --Bowlhover (talk) 19:09, 8 May 2013 (UTC)[reply]
Some comments:-
  • Bowlhover's description of science is correct but rather idealistic. The World's body of scientists is just as contaminated with incompetents, tricksters, and honest mistake-makers as any other field. Sometimes wrong theories get thrown out right smartly; some persist for years. Where there is a vacuum of knowledge, people fill it with dubious theories, but until the right experiment happens, or the error is discovered, such theories can be accepted. Just look at some of the theories on why the dinosaurs disappeared. Lots of theories not properly confirmed make it into textbooks. In the field of applied science, as distinct from pure research, newly discovered knowledge is often published first in books.
  • I would indeed agree that computer modelling is an indispensable tool in electrical and especially electronic engineering. There is an important difference between electrical/electronic modelling and climate modelling though: electric and electronic modelling is very much simpler and on VERY solid theoretical ground. The implications of the necessary simplifications and assumptions are VERY well understood. Climate modelling has yet to acquire a useful amount of accurate measured data.
  • I have no idea why Bowlhover thinks I worry about aliens coming and whacking Mars off its orbit. Not only is it clearly quite unlikely, but if they do, they'll perturb Earth's orbit as collateral damage, and cause us all death and destruction. So, in the unlikely event of it happening, we'll be so stuffed that nothing will have been gained by pondering it. No good worrying about whether your favorite TV programme will continue if you are going to die. More seriously, that's exactly what proper scientific (or engineering) modelling is about - making intelligent decisions about what to leave in and what to leave out. And that's exactly where climate modelling is weak.
  • Nuclear war (and even large-scale conventional war) should be considered by those who make decisions based on climate change science. It is NOT satisfactory to model natural and peacetime conditions separately from war conditions, as one thing all the boffins agree on is that there are positive and negative feedback systems in climate, and significant non-linearities. This means that there is coupling between the two - you simply cannot arithmetically add the effects of war from a war model to the natural and peacetime effects from natural and peacetime models. If governments believe global temperatures will increase, they may enforce things that will put us in economic or actual peril should it happen that temperatures decrease. Strategies like reducing population (as China has been doing) will be a good strategy in either case. Strategies such as altering the atmosphere to reduce the greenhouse effect just might make things worse before we realise it. Other strategies, such as setting up infrastructure to assist farming in new areas expected to become viable because of temperature increase (as has been proposed in Australia), and abandoning areas that are expected to become unviable (as the Australian Govt seems to be doing by default), will just be an economic penalty should temperatures not rise as expected.
  • The variation in the Sun's output, as measured at the average Earth-Sun distance (but not as attenuated by the Earth's atmosphere), is about 1365.5 to 1366.5 W/m², i.e., about 0.07% variation (cf. the 0.1 to 0.2% Bowlhover claimed "over 2000 years"). However, accurate measurements are available for only ~30 years. Plenty of reasonable theory has been advanced to suggest that the average output over thousands of years may be significantly less. I will admit that this is controversial. We know that solar output is affected by and linked to sunspot activity, but we simply do not have enough data on sunspot activity to understand it. In the last decade the Sun has confounded experts who predict sunspot activity. (A quick numerical check of the 0.07% figure follows after this list.)
Wickwack 120.145.32.250 (talk) 10:03, 9 May 2013 (UTC)[reply]
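(The quick check promised above, using only the 1365.5 to 1366.5 W/m² range quoted in the last bullet:)

 # Peak-to-peak variation implied by the quoted solar-output range.
 low, high = 1365.5, 1366.5                        # W/m^2, from the bullet above
 variation = (high - low) / ((high + low) / 2) * 100
 print(f"{variation:.2f}%")                        # prints 0.07%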
Responding point by point:
  • Bowlhover's description of science is correct but rather idealistic. - Yes, there are plenty of errors made - but that's the point of not using primary sources. A single experiment with a single result is not generally accepted as "The Truth" until it's been independently reproduced at least once and published in some kind of higher level review.
  • I would indeed agree that computer modelling is an indispensable tool in electrical and especially electronic engineering. There is an important difference between electrical/electronic modelling and climate modelling though: electric and electronic modelling is very much simpler and on VERY solid theoretical ground. -- The problem for climate modeling is the complexity of the interactions. However, the goal here is not to get a perfect year-by-year or month-by-month temperature reading - it's to gain enough precision to capture the trend and approximate rate. That's all that ought to be needed to spur governments to take drastic action. You don't need to know whether the polar bears will be extinct in 5, 10, 50 or 100 years - you just need to know that they are going extinct. You don't need to know whether 5, 10, or 15 tonnes of CO2 per capita is enough to cause serious problems. Your claim that we don't have enough accurate data is utter nonsense. You first have to define what you expect to get from this data. If you're trying to figure out the precise year at which rising sea levels would cause the Thames flood barrier to over-top, then yes - we need more data. If you just need to know that it's going to be OK out until its planned replacement in 2030, then we have all the data we need.
  • I have no idea why Bowlhover thinks I worry about aliens coming and whacking Mars off its orbit. - Whatever.
  • Nuclear war (and even large-scale conventional war) should be considered by those who make decisions based on climate change science. It is NOT satisfactory to model natural and peacetime conditions separately from war conditions... - You can't say that scientific modeling is useless until every single possibility is taken into account. It's perfectly valid (and useful) to produce a simulation that says "this is what we predict will happen assuming there is no nuclear war". That produces a useful result and advances the discussion. Once you have a solid model that is limited in that it excludes this special-case event - then someone else can come along and improve on it later by adding that possibility. You don't have to solve the entire problem in one bite of the cherry.
  • The variation in the Sun's output, as measured at the average Earth-Sun distance (but not as attenuated by the Earth's atmosphere), is about 1365.5 to 1366.5 W/m², i.e., about 0.07% variation (cf. the 0.1 to 0.2% Bowlhover claimed "over 2000 years"). However, accurate measurements are available for only ~30 years. - But if you say "we don't have enough data - so we can't calculate anything" - then science will cease. You merely need to be careful to adjust your error bars to account for the data you don't have - and apply suitable caveats to your answers. It's perfectly valid to come out with some prediction - with the caveat "This result assumes that solar activity remains as it has been for the last 30 years". Nothing whatever wrong with that. All of human endeavor is based on prediction with assumptions. You drive to work in the morning on the assumption that a giant meteor will NOT come crashing down on the freeway, without getting really solid data about meteor impact frequencies. Sure, that's not a perfect assumption - but it's sufficiently good for the purpose.
Using small approximations and small data gaps to justify inaction is a truly stupid way to proceed. One has to weigh the probabilities against the cost. In the case of climate science, the probability that the climate science is wrong and that global climate change isn't going to happen is very close to zero...but it's not zero. Do we wait to close that 0.1% gap in our knowledge - or do we take action anyway? Do you decide "I won't drive to work today because I don't know the probability of getting smooshed by a meteorite impact to three decimal places." or do you say "Well...I've never seen one of those happening, and nobody I ever met saw one either - so it's probably unlikely enough to not affect the outcome of my decision - so I'll ignore it for the purposes of making this decision."?
SteveBaker (talk) 21:24, 9 May 2013 (UTC)[reply]
The common theme in Steve's responses is that he thinks that I claim that attempting to model climate change is not worthwhile, and that I haven't realised that in scientific development you have to crawl before you can walk. In doing that he has read into my post something that was decidedly not there, and he has missed the point with regard to the OP's question - basically, does any field of science use speculation and unreliable guesses to make predictions? The OP possibly used too strong and emotive language, but the answer is essentially yes, and the most significant example in terms of political and economic importance is climate change science.
Nowhere did I say that researching and developing models for climate change is not worthwhile. In fact you can most reasonably, and should, infer from what I wrote that I believe that MORE effort needs to go into this. A lot more.
The difference between electronic engineering computer modelling and climate change computer modelling is that electronic modelling is much simpler and it is mature. Climate change modelling is not. Yet governments are making decisions about it NOW. Some decisions were put into law and implemented a while ago. I didn't say they shouldn't. I said their decisions are on shaky ground and might turn out to be not good decisions.
Decisions made by leaders, governments, businesses, military generals and professional engineers often have to be made in a timely way, and that often means basing decisions on incomplete, imprecise, or unreliable information. As a professional Engineer and engineering manager, I've made lots of just such decisions. But what you do when you have imprecise or unreliable information is assess the range of possibilities and choose actions that will work over that range, or at least be a sensible temporary course until better information becomes available. That's not what the Australian Government (my example) is doing - they have instead picked a narrow range and gone for a solution they think will suit that range (assuming it IS a solution to suit that range - but that is another topic). That is the essence of what I said, and is directly contrary to what Steve said using his driving-to-work example.
In Steve's comment about modelling based on assuming the Sun's output is the recently measured 1366 W/m² with 0.07% variation at mean Earth distance, for example, he completely missed the point. Yes, it would be stupid to wait for more data - that could take thousands of years, maybe longer. But it is ALSO stupid to assume that that is the value, and won't be anything different, seeing as we cannot yet accurately predict sunspot cycles. We need a two-pronged strategy: be flexible while improving the theory until we can accurately predict sunspot cycles. When we can, that will give good assurance that we know what we are doing, and it most probably will not take anything like thousands of years. Steve thinks that we should just go on the measured data. That would be analogous to a gambler noting a run of early wins and continuing to place bets on that basis. He doesn't need to make more observations (though in the case of gambling that might help). He needs to develop a good understanding of the random or pseudo-random process theory behind whatever he's betting on. This is not saying that climate science is a gamble to that degree, but it is partly a gamble. — Preceding unsigned comment added by 124.178.140.2 (talk) 02:48, 10 May 2013 (UTC)[reply]
Wickwack 124.178.140.2 (talk) 02:01, 10 May 2013 (UTC)[reply]
I'll focus on solar activity, since SteveBaker has covered the rest pretty well. It's simply not true that we only have 30 years of records on the Sun. Sunspot cycles have been tracked by European observers since the invention of the telescope in the 17th century. We have various proxies for solar activity that stretch back over 10,000 years, mostly from cosmogenic isotopes like ¹⁴C and ¹⁰Be: [10]. There is no evidence that solar luminosity will decrease enough in the near future to counteract the effects of CO2, no evidence that such a decrease has occurred in the past several thousand years, no evidence that other Sun-like stars have such drastic changes in brightness, and nothing from theory to suggest it is probable. Even the 0.1% variation we talked about earlier is due to the solar cycle; the long-term average is much less variable.
It is entirely true that EE simulations can be done much more accurately than climate simulations, or for that matter, almost any simulation in any science. But suppose you need to build a probe that lands on Europa. Testing it on Europa beforehand is impossible, so the best you can do is use simulations with many uncertain parameters to model Europa's atmosphere and the behavior of your design in an alien environment. Some simulations say the probe will last 4 km, some say 8, some in between the two, but no simulation with realistic parameters predicts the probe can make it to the surface intact. Do you launch the probe anyway and hope that some unmodelled effect will make it land (instead of making it even more likely to crash)? Or do you redesign it, at some financial cost, so that simulations don't predict disaster? That's the situation with climate modelling right now. --Bowlhover (talk) 09:01, 10 May 2013 (UTC)[reply]
Bowlhover, you have missed the point too. Yes, we have sunspot records going back hundreds of years. And we have enough evidence to know that sunspot activity is linked to solar energy output. But what we don't have is sufficient understanding of sunspot activity to make accurate predictions of sunspot cycles. It's much like predicting the weather on Earth, as SteveBaker mentioned. We know enough to know winter will follow summer, but predicting the mid-winter month's mean temperature for next winter is a little difficult. The last sunspot cycle did not follow predictions. Having said that, we understand sunspot variation better than we understand solar energy output. You are correct in saying we have sunspot records going back to the 17th century. But we have accurate records of solar energy output for only the last 30 years. We are not in a position to predict what solar output for the next 100 or 1000 years will be. We can reasonably assume it will be about 1360 W/m², but we are not in a position to take it as accurate as measurements suggest, and we are not in a position to know what a reasonable range might be. It's a 2-part problem: we can't yet accurately predict sunspot cycles, which means we don't fully understand them, and we don't have enough data to fully understand the linkage between sunspots and energy output, and whatever other factors there might be that affect output.
Your analogy of planning a probe to Europa is exactly what I have been saying. I never said, as SteveBaker tried to claim, that climate modelling is not worthwhile. I said it is, but we need to do better. To adjust your Europa probe analogy a little: it is much as if the simulations predict that the spacecraft will need heat shielding to protect it for between 30 and 90 minutes of descent through the atmosphere, every minute of shielding costs a staggering amount of money, and some expert opinion says that their models predict that with too much shielding, shield ablation will contaminate the atmosphere, which we would prefer not to do. It is bad strategy to decide to split the difference and put in 60 minutes of shielding. It is better strategy to work more on the simulations and improve the accuracy. It is better strategy to redesign the spacecraft so that less shielding is needed. Given that time is marching on, it is even better strategy to do both at the same time.
Wickwack 120.145.219.196 (talk) 10:08, 10 May 2013 (UTC)[reply]

Nasal congestion and sleep

Whenever I have a cold I notice that my congestion is completely gone when I wake up in the morning, and that it comes back within a few minutes of waking up. Does this phenomenon have a name / has it been studied in the literature? Could it potentially be exploited in medications? DTLHS (talk) 01:19, 8 May 2013 (UTC)[reply]

It nearly always works the other way round for me. But either way, you can buy decongestant medicines from chemists. Wickwack 120.145.46.40 (talk) 01:25, 8 May 2013 (UTC)[reply]
Sinuses will drain differently depending on your attitude, but you really need to take it up with a doctor. μηδείς (talk) 01:42, 8 May 2013 (UTC)[reply]
My attitude becomes quite surly when my sinuses are blocked. Sometimes a sinus will open up and drain while I am sleeping, other times the blockage becomes worse at night. Edison (talk) 03:15, 8 May 2013 (UTC)[reply]
Nasal cycle might have something to do with it. Or getting up, holding your head in a different position. Ssscienccce (talk) 06:27, 9 May 2013 (UTC)[reply]

Can the sun actually magnify electromagnetic waves as depicted in Three Body (science fiction)?

--朝鲜的轮子 (talk) 01:19, 8 May 2013 (UTC)[reply]

There are many scientific inaccuracies and terminology abuses in the Three Body (science fiction) article. The lead paragraph claims that the three-body problem is analytically unsolvable, which is false. The question about magnification does not make sense: waves are not magnified; only images get magnified. In principle, a wave can be amplified when it interacts with a plasma, such as the ionized portion of the heliosphere; this is uncommon. For the budding plasma physicists, here is Particle Acceleration at the Sun and in the Heliosphere, a good review-article from NASA Goddard, the center that operated the Ulysses spacecraft during the 1990s to study solar plasma wave/particle interactions. There has been much research into wave/particle amplification by energetic plasmas, in the context of solar, terrestrial, and other physics.
I would ignore the scientific claims made in this science-fiction story - at least, based on the descriptions in our article. Like many works of fiction, scientific correctness has taken a second seat to dramatic license. Nimur (talk) 16:10, 8 May 2013 (UTC)[reply]
The three-body thing isn't exactly untrue. Henri Poincaré proved (back in the 1880s) that there is no analytic solution given by algebra and calculus - except for certain special cases. Solutions have to be arrived at by successive approximation. So it is "analytically" unsolvable - i.e., there is no system of equations into which you can plug the initial positions, masses and initial velocities of three or more bodies and get out their exact positions and velocities at some time in the past or future. But that doesn't stop you from producing an arbitrarily accurate solution given enough time to iterate through the calculations.
But it's science fiction - taking liberties with reality in order to make a good story is the core of almost all science fiction. SteveBaker (talk) 16:34, 8 May 2013 (UTC)[reply]
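(To illustrate the "successive approximation" point: there is no general closed-form solution, but a few lines of stepwise integration will approximate the motion as accurately as you have patience for. The masses, positions and velocities below are made up, the units are arbitrary with G = 1, and the integrator is about the crudest one that works - this is a sketch of the idea, not anything from the novel.)

 # Bare-bones numerical three-body integrator (illustrative values, G = 1).
 import numpy as np

 m = np.array([1.0, 1.0, 1.0])                        # three equal masses (made up)
 pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])
 vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])
 dt = 1e-3

 def accelerations(pos):
     """Pairwise inverse-square gravitational accelerations."""
     acc = np.zeros_like(pos)
     for i in range(3):
         for j in range(3):
             if i != j:
                 d = pos[j] - pos[i]
                 acc[i] += m[j] * d / np.linalg.norm(d) ** 3
     return acc

 for _ in range(50_000):               # smaller dt and more steps -> better accuracy
     vel += accelerations(pos) * dt    # semi-implicit (symplectic) Euler step
     pos += vel * dt

 print(pos)                            # approximate positions after 50 time units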
In an easy sense, if there actually is such a thing, it should already have been applied in the search for extraterrestrial intelligence, I guess.--朝鲜的轮子 (talk) 02:15, 9 May 2013 (UTC)[reply]
From the plot description, this book is so full of science errors as to really be not worth discussing here. Even if this "magnification" of electromagnetic waves were something real - the speed of light would still apply to these "magnified" waves - and the message would still take 4.5 years to get there. The synopsis said the aliens came from the nearest star system (at 4.5 light years) - but the nearest is Proxima Centauri - which is at 4.24 light years. While Alpha Centauri (4.36 ly) is a double star, and Proxima Centauri is gravitationally affected by it, this isn't any kind of horribly chaotic triple-star system. The two Alpha Centauri stars orbit each other quite stably - and Proxima is too far away to have much effect on that. The idea that planets could be gravitationally ripped to shreds like this is unlikely! It's also evident that in a system capable of causing such horrible gravitational disruption to planets, no planets could possibly have accreted in the first place! The errors and screwups in this book go on and on - but in the end, it's a work of fiction - so put doubts away and just enjoy it...but don't worry about whether any of the crazy ideas in there are real - they aren't. SteveBaker (talk) 20:55, 9 May 2013 (UTC)[reply]

Bird question

"I am Tawny Frogmouth. Much bigger than a sparrow. So gorgeous they named a pinup girl after me!"

What bird species consume (1) the most insects by weight, and/or (2) the most flying insects by weight (in absolute terms, not relative to their body size)? 24.23.196.85 (talk) 01:55, 8 May 2013 (UTC)[reply]

As an individual bird of that species or as the whole population of that species? Vespine (talk) 02:18, 8 May 2013 (UTC)[reply]
As an individual bird. 24.23.196.85 (talk) 03:34, 8 May 2013 (UTC)[reply]
This is very tough to answer with scientific references. Do you have access to journals through a library, school or work? For now, I'll mention that purple martins are highly praised for their ability to eat/control insects. Farmers across the USA install martin houses (like this [11]) to attract the birds and keep down insect pests (though they don't eat many mosquitoes, despite the common claim). Swallows in general are famous bug-catchers, another candidate would be the barn swallow. Note swallows are specialized on aerial feeding. I think they are very efficient by relative weight, but some heavier birds that eat ground-dwelling insects might eat more total weight per day, such as robins or jungle fowl. I'm assuming per-bird, Vespine also has a good point of clarification above. SemanticMantis (talk) 02:30, 8 May 2013 (UTC)[reply]
If we're doing guesses, my candidate would be the Tawny Frogmouth, based purely on the fact that it is quite large (the largest bird in its order) and, unlike the jungle fowl, is almost exclusively insectivorous. I live in tawny frogmouth territory and have been fortunate enough to see and even photograph a couple of them. They are nocturnal and extremely silent fliers, so not usually easy to spot. We had one land right on our balcony once, while we were on it, and the only sound that alerted us to its arrival was when its claws clasped the rail, which gave us all quite a start. Vespine (talk) 03:52, 8 May 2013 (UTC)[reply]
Up to 21 inches long, too. It's weird how that infobox picture makes it look... almost sparrow-sized. Evanh2008 (talk|contribs) 06:11, 8 May 2013 (UTC)[reply]
The picture lower down, under the heading Description, gives a more accurate impression. The camouflage effect is real too. I have some in my garden occasionally. They like sitting in a melaleuca tree, maybe four or five metres above the ground, and staring at us. HiLo48 (talk) 07:42, 8 May 2013 (UTC)[reply]
Gulls are known for being able to eat a large amount of food in one sitting. I don't know any figures, but I suspect that a large gull eating its fill of something like swarming alkali flies (yes, they will do this) would end up with a fair weight packed in his crop. --Kurt Shaped Box (talk) 22:12, 8 May 2013 (UTC)[reply]
Part of the story about the Salt Lake seagulls is that they ate all the locusts they could handle, disgorged them, and then ate more, and so on. ←Baseball Bugs What's up, Doc? carrots23:03, 8 May 2013 (UTC)[reply]
And another part of that story (or so the article says) was that the gulls had some human help, too -- the farmers would form a line and thresh the field in unison, driving the locusts toward the edge of the field where the gulls were gathered. 24.23.196.85 (talk) 04:38, 9 May 2013 (UTC)[reply]

Arthritis

Are Marine people mostly affected by arthritis than normal people? — Preceding unsigned comment added by Titunsam (talkcontribs) 11:06, 8 May 2013 (UTC)[reply]

Do you mean Marines, or people living in marine environments? AlexTiefling (talk) 11:18, 8 May 2013 (UTC)[reply]
And what makes either group not "normal"? -- Jack of Oz [Talk] 12:05, 8 May 2013 (UTC)[reply]
The OP's previous questions are constructed as if by one who does not speak English natively, so we could charitably assume that he meant "average". ←Baseball Bugs What's up, Doc? carrots12:26, 8 May 2013 (UTC)[reply]
I don't see any statistics on that but I'd guess they would suffer more from osteoarthritis because of undue stress sometimes. Normal exercise is supposed to be okay, or if anything good, but jumping down from walls can be counted as trauma, I'd have thought! Miners and construction workers have an increased chance of rheumatoid arthritis but that's thought to be more because of stuff they breathe in. Dmcq (talk) 12:38, 8 May 2013 (UTC)[reply]
  • I'd take it as asking, are people who interact with the sea more likely than others to suffer arthritis? 12:39, 8 May 2013 (UTC)
If that is so then I don't know of any reason for a link between living by or on the sea and arthritis, and we're always being told how good fish is for all of us instead of beef or pork. Dmcq (talk) 12:52, 8 May 2013 (UTC)[reply]
on the other hand, it's a cliche that people are always complaining that the dampness is making their rheumatism act up. Gzuckier (talk) 16:59, 8 May 2013 (UTC)[reply]
I suspect that's a different kind of dampness. The kind one gets in rainy environments, which are not necessarily the same as marine or coastal ones. -- Jack of Oz [Talk] 20:58, 8 May 2013 (UTC)[reply]

How would a Sulfur Aluminum battery compare

To standard car batteries, or batteries meant to provide long term propulsion for river-navigating trading vessels? μηδείς (talk) 12:36, 8 May 2013 (UTC)[reply]

Interestingly, Wikipedia doesn't yet have an article on the Aluminum sulfur battery; however, there's more than enough literature to start one: [12]. I haven't looked too deeply into the articles in question, but it looks like most of the literature on the topic came out in 1993 - there's a furious burst of articles from that time period, and then almost nothing - indicating that the technology never really made it out of the R&D phase. The only mention of them I can find at Wikipedia is at Aluminium–air battery, which states (uncited) that "Aluminium-sulfur batteries worked on by American researchers with great claims, although it seems that they are still far from mass production. It is unknown as to whether they are rechargeable." So, there you go. There was some research in the area 20 years ago, there hasn't been much since, and so we don't have a lot of performance to compare. You can comb through the research I noted above in the Google search to see if anything turns up. --Jayron32 12:43, 8 May 2013 (UTC)[reply]
Is the strength of a cell related to the difference in electronegativity of the substances chosen? μηδείς (talk) 21:29, 8 May 2013 (UTC)[reply]
That determines the voltage, but not the cell electrical resistance, tendency to polarise (voltage drop under continuous load), ampere-hour capacity, capacity/weight ratio, and many other aspects of cell performance. Wickwack 120.145.65.205 (talk) 23:27, 8 May 2013 (UTC)[reply]
Can you or other readers summarize each of those concepts in a sentence on one foot? I could probably understand that response clearly at one level deeper of concretization from the abstract. For example, what is "tendency to polarise (voltage drop under continuous load)"? I can imagine that it has to do with the fact that a continuously flowing current realigns something. But I would be bullshitting in the same way I'd pass an exam on Dickens, and just as unsure. μηδείς (talk) 04:38, 9 May 2013 (UTC)[reply]
As I am recovering from consuming a brewer's happy substance, I'm not sure I can answer while on only one foot, but here goes, sitting down as I type:-
  • Voltage: More correctly in the case of a cell or battery, an Electromotive Force (EMF) - EMF is the electric tension between two connections, somewhat analogous to the pressure of water at the output of a pump.
  • Resistance: Any electrical conductor is not a perfect conductor - it resists the flow of electric current. The magnitude of this resistance to flow is called resistance and is measured in ohms. Thicker, shorter conductors have less resistance than long, thin ones, somewhat analogous to short fat pipes compared to long thin pipes (but don't take the analogy too far - the flow of fluid in pipes is a non-linear function, whereas electrical resistance is a linear property wrt the dimensions of the conductor)
  • Polarisation: Drawing current from most types of cell causes temporary changes in the cell that increase resistance. For example, in a simple cell made of metal electrodes and a liquid electrolyte, the flow of current causes gas to collect around one or both electrodes, lowering the area of electrode in contact with the electrolyte. This increases the resistance of the cell and lowers the voltage, but if you switch off the current, the gas will disperse, and on reconnection the cell will again supply full output. The EverReady brand used to make this disadvantage of their torch cells and radio batteries into a virtue with their marketing slogan "Nine Lives - bounces back for extra use."
  • Ampere-hour capacity: This is the concept that a cell of given size has a finite capacity to supply charge (and hence energy), such that current multiplied by time is roughly constant. It is a yardstick to compare cells by, but in practice it is far from constant. (A rough worked example follows below this list.)
Wickwack 120.145.32.250 (talk) 09:12, 9 May 2013 (UTC)[reply]
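(Putting rough numbers on two of those bullet points - terminal voltage under load versus EMF, and ampere-hour capacity - may help make them concrete. The cell values below are invented purely for illustration; they are not data for any real cell, aluminium-sulfur or otherwise.)

 # Illustrative numbers only -- not measured data for any real cell.
 emf = 1.5                  # open-circuit EMF, volts (assumed)
 r_internal = 0.3           # internal resistance, ohms (rises as the cell polarises)
 capacity_ah = 2.0          # nominal ampere-hour capacity (assumed)

 load_current = 0.5         # amps drawn by the load
 terminal_v = emf - load_current * r_internal    # voltage the load actually sees
 runtime_h = capacity_ah / load_current          # rough runtime, treating Ah as constant

 print(f"terminal voltage under load: {terminal_v:.2f} V")   # 1.35 V
 print(f"approximate runtime: {runtime_h:.1f} h")            # 4.0 h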
Also to clarify something else for Medeis (and others), electronegativity is merely a way to quantify bond polarization in a molecule, it has limited applicability outside of that very narrow usage. The underlying physics behind electronegativity is related, but only in a very broad sense, to the physics behind electrochemistry. The actual measurement relevant to the discussion here (and perhaps Medeis confused this term with electronegativity) is the standard reduction potential, which is kind of like electronegativity in the sense that both measure the "desire" of a particular substance to "attract" electrons towards itself, but the context of the two measurements is very different: electronegativity measures chemical bond polarization, while reduction potential measures the ability to generate a voltage in an electric circuit, and there's really no actual connection between the two concepts. --Jayron32 16:34, 9 May 2013 (UTC)[reply]
That's helpful. I did mean standard reduction potential, I just haven't studied this as chemistry chemistry since the 80's. How would that concept apply in relation to Al/S as opposed to other potential species? μηδείς (talk) 21:52, 9 May 2013 (UTC)[reply]

drafting - mechanical drawings

if an angular dimension is called out on a drawing (i.e. 30 degrees, hold) and no tolerance is given, is there any standard used? — Preceding unsigned comment added by 12.1.83.2 (talk) 13:54, 8 May 2013 (UTC)[reply]

As with any dimension on an engineering drawing, if no tolerance is given anywhere on the drawing, normal "good engineering practice" for the lowest-cost suitable manufacturing technique applies. You can (mostly) expect that the tradesman will mark out 30 degrees on a small object with a standard 60-30 set-square. With ordinary care that will produce marking out to within +/- 0.5 degree or so. But what happens next? Is it to be cut with a hacksaw? A hand-held oxy torch? A profile cutter? Or is the drawing in electronic form to be fed into an automatic machining centre? These factors influence enormously what accuracy you get. The tooling used will depend on what finish is specified. Something cut with a milling machine can be expected to provide both an excellent finish and excellent accuracy. But say you've specified some sort of plate to be cut from 18 mm steel plate by means of a hand-held oxy torch, and the side that is at 30 degrees is about 50 mm long. The resulting surface roughness will be about 1 mm, so worrying about cutting it to an accuracy any better than 2 degrees is pointless, and the tradesman won't worry too much about it.
Is the drawing for construction work? Mechanical parts? Construction work has standard tolerances, but I have almost no understanding of them.
Are you certain the drawing has no tolerancing information? Default tolerances and finishes are typically set out either as text or as symbols in the title block for mechanical part drawings.
Wickwack 124.182.22.141 (talk) 14:33, 8 May 2013 (UTC)[reply]
If the machining method is specified - then the tolerances may remain unspecified since they are "whatever that machine produces". SteveBaker (talk) 16:22, 8 May 2013 (UTC)[reply]
It's not clear whether Steve meant "there is no need to specify tolerances if the machining method is specified" or whether he meant "if you don't specify, you'll get whatever is produced". Neither is really true. Each machining process has its normal "good practice" tolerance, plus what it will do with a sloppy worker, and what it will do with special care. Where tolerances are not specified, it's the "good practice" tolerance that you get. For example, if the machined surface is a cylinder turned in an ordinary centre lathe, a tolerance of 0.02 mm is normally achieved by taking normal care, checking with a vernier caliper, and not making any special effort. You can achieve 0.002 mm if you take special care, check with a well-maintained micrometer, etc. If the tolerance is not specified, but turning is specified or implied, then a tolerance of 0.02 mm is what you'll get. So, specifying the machining method does in effect specify the tolerance. Note though that in the production of one-off lab equipment, hand-made prototypes, and one-off orders, the tolerances of hand marking-out add to the machining tolerances. For example, drilling a hole: marking out the centre by hand will rarely be better than 0.2 mm; centre-popping will add another 0.2 mm or so of error, and drilling with a worn twist drill will add a bit more error. A good tradesman can use techniques to dramatically reduce such errors, but he'll only do it if you ask. Note also that specifying a dimension as an integer (as in 30°) specifies the normal good-practice tolerance, while specifying it with decimals, as in 30.00°, tells the machinist that that is the precision you expect - however, doing it this way is not good practice. If precision is required, write it with explicit limits, as in 30.00° +0.01/−0.01.
How to specify tolerances, the symbols used, and how tradesmen should interpret drawings is specified in detail in ISO Standards and in harmonised equivalents in other countries, e.g. AS 1100 in Australia. Wickwack 120.145.65.205 (talk) 23:18, 8 May 2013 (UTC)[reply]
The influence of the skill of the machinist isn't always a factor though - if you're using a CNC machine, a laser cutter, a 3D printer or any kind of computer-controlled machining system - then the precision you get is entirely independent of the operator. So if you're preparing CAD drawings for laser cutting (which I do all the time) - then writing down the precision is entirely pointless if you've already determined that the part will be laser-cut.
Worse still, the concept of "precision" for an angle is likely to be entirely meaningless for a computer controlled machine tool because they (mostly) use X,Y,Z drives and the angle of a long line will be as accurate as the mechanical construction of the frame of the machine. Although there will always be a positional error of some amount due to backlash in the transmission and the resolution of the X,Y,Z positioning system - that's not going to affect the angular precision.
These old-school technical-drawing standards are really kinda meaningless for these kinds of machine tools. For a 3D printer, the parameters of nozzle size and layer thickness are the things you need to specify, not angular precision. For a CNC tool, the tool selection parameters are more critical, and for a laser cutter, you'll get whatever is the best precision the machine produces, no matter what you ask for because there is no cost or speed benefit for demanding less.
In a modern design environment, the designer doesn't so much demand tolerances from the machine shop - but rather knows the parameters of the manufacturing system (s)he's choosing to use - and makes the design work within those limitations.
Furthermore, since the CAD drawing will almost always end up being the thing that actually controls the tool directly - the idea that other people are "interpreting" your drawing is fast becoming archaic too.
SteveBaker (talk) 13:34, 9 May 2013 (UTC)[reply]

May 9

When was the cactus introduced to China?

This recent question informed me that cacti are native to the Americas. They are very popular in China, and the Chinese have bestowed an awesome name upon them: 仙人掌 (literally "immortal's palm"). I'm now very curious to learn whenabouts the first cacti made it to China. Given their prodigious trading history, I assume it was rather early... but even then, it could not have predated Columbus, could it? The Masked Booby (talk) 02:05, 9 May 2013 (UTC)[reply]

They could predate Columbus. Try googling "Storm-driven maritime dispersal". --Digrpat (talk) 09:36, 9 May 2013 (UTC)[reply]
Most likely it would refer to a cactus-like plant in the Euphorbiaceae - see [13] for example. See [14]. Wnt (talk) 19:24, 9 May 2013 (UTC)[reply]
The only reliably sure pre-Columbian contacts between Eurasia and the Americas broadly are Vinland and whatever early Polynesian seafarers brought the sweet potato to the Pacific Islands and New Guinea. The Vikings of Vinland didn't land anywhere that cacti grew, but as there are cacti ranging down the Pacific coast from Canada to Patagonia, it is technically possible that those same Polynesians may have taken cacti to China. Though unlikely, because I don't know that they dropped off cacti in the same places they dropped off the sweet potato. It's much more likely that any true cactus in China got there as a result of the post-Columbian trans-Pacific trade of the 16th-17th centuries. --Jayron32 19:32, 9 May 2013 (UTC)[reply]

Is there actually a trend in the size of fragments of a broken object or is it a hoax?

Just in some Chinese internet articles I saw something like a "雅各布·博尔碎片规律" (Jacob Bohr's? trend of fragments - I cannot find his exact English name), which roughly says:

In 1942, a Danish college student, Jacob Bohr, accidentally broke a glass bottle. He studied the sizes of the glass fragments, and found that the sizes roughly fall into 4 groups, with each group's average size 16 times that of the next group. This trend applies to other objects of different material and size too, though there are some differences in the proportion between the sizes of each group (16 for cups and vases, 11 for sticks, and 40 for balls). This trend is now applied in archaeology and meteor studies to estimate the positions of fragments in the original piece.

Another version says that Jacob Bohr categorized the pieces by size (largest, medium, smaller, smallest) and calculated the average mass of each group, but that seems too rough and arbitrary. Still another unsourced version says that he categorized the fragments into mass groups of 10 g to 100 g, 1 g to 10 g, 0.1 g to 1 g, and less than 0.1 g.--朝鲜的轮子 (talk) 02:12, 9 May 2013 (UTC)[reply]

Seems genuine. Or at least, it appears to have been published in a scientific journal: [15] AndyTheGrump (talk) 04:10, 9 May 2013 (UTC)[reply]
Actually, that seems to refer to something else - but by the same author, I'd imagine. AndyTheGrump (talk) 04:13, 9 May 2013 (UTC)[reply]
Are there any other studies in this area?--朝鲜的轮子 (talk) 06:07, 9 May 2013 (UTC)[reply]
And are there more details to this study and its conclusion? I have seen text whose full text I have no access to[16], such as "pieces between one-tenth of a gram and a gram will be 16 times greater still. The number 16 is the "scaling factor"". It looks like what I have seen in derived articles in Chinese. I am not sure what it means: greater in number, or greater in mass or volume?--朝鲜的轮子 (talk) 06:33, 9 May 2013 (UTC)[reply]
There has been a lot of research into fragmentation because it's important in many fields (and of interest to the military). If you google size distribution of fragments you'll find many interesting papers. Sean.hoyland - talk 06:34, 9 May 2013 (UTC)[reply]
Still, I have struggled to understand "pieces between one-tenth of a gram and a gram will be 16 times greater still". According to the idea in the abstract given above, the probability density function for finding a fragment of mass m should be something like f(m) = A·m^(−C), where C is the scaling exponent; taking C = 16, an integration shows that if the mass range is reduced by a factor of 10, the probability of finding fragments of this size will be 10^15 times greater?--朝鲜的轮子 (talk) 07:06, 9 May 2013 (UTC)[reply]
No, not 10^15 times greater. Just 16 times greater. C is NOT equal to 16. Dauto (talk) 17:45, 9 May 2013 (UTC)[reply]
"The probability of finding a fragment scales inversely to a power of the mass; the power, or scaling exponent, was found to depend on the shape of the object rather than on the material." Do I get it right?--朝鲜的轮子 (talk) 00:09, 10 May 2013 (UTC)[reply]
I see. Scaling factor and scaling exponent are just two different terms. It is rather confusing without reading the whole article.--朝鲜的轮子 (talk) 01:41, 10 May 2013 (UTC)[reply]
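(For anyone still puzzling over the two terms: with the probability-density form written above, f(m) = A·m^(−C), the expected number of fragments in each successively smaller decade of mass grows by a constant factor of 10^(C−1) - presumably the "scaling factor" the abstract mentions, with C being the "scaling exponent". On that reading, a factor of 16 per decade corresponds to C ≈ 2.2, not C = 16. A small numerical check, with the exponent value assumed purely for illustration:)

 # Numeric check: with f(m) proportional to m**(-C), how do per-decade counts scale?
 from math import log10

 C = 2.2                                   # assumed exponent, illustration only

 def count(lo, hi, a=1.0):
     """Integral of a*m**(-C) from lo to hi: expected fragments in that mass band."""
     return a / (C - 1) * (lo ** (1 - C) - hi ** (1 - C))

 decades = [(0.1, 1.0), (0.01, 0.1), (0.001, 0.01)]
 counts = [count(lo, hi) for lo, hi in decades]
 print([round(counts[i + 1] / counts[i], 1) for i in range(2)])   # ratios between decades
 print(round(10 ** (C - 1), 1))                                   # same number: 10**(C-1)
 print(round(1 + log10(16), 2))   # the exponent implied by an observed factor of 16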
The fragmentation of material has also been described in terms of fractals. This ref gives examples from both naturally and experimentally fragmented materials, many of which have a fractal size distribution with a fractal dimension of about 2.5. I'm not clear whether this is describing a similar distribution to the OP's example. Mikenorton (talk) 19:34, 9 May 2013 (UTC)[reply]

Does the sun seen from planets look bigger during sunset?

At 6:30 PM Pacific Time, I can actually look at the sun setting in the sky; the sun looks big going down at the horizon. But in the afternoon, I am too afraid to look at the sun. I can estimate the sun at noon is much smaller than it is during sunset. I don't know why is that? Do other planets follow the same way? If we look at the sun from Mercury during sunset, will the sun on the horizon look bigger than the sun during the noon hours? What about seen from Jupiter's four major moons?--69.233.254.115 (talk) 04:24, 9 May 2013 (UTC)[reply]

Only if they have refracting atmospheres. μηδείς (talk) 04:32, 9 May 2013 (UTC)[reply]
(edit conflict) The concept is covered at the article Moon illusion which discusses why both the sun and moon appear to be larger when close to the horizon. The sun and moon are not larger; if you measure the size of the sun on the horizon and compare it to the size of the sun at noon, they will be the same. You can do this with the full moon easier since a) it isn't blindingly bright and b) it is subject to the same illusion as the sun. Just hold out a ruler at arms length and measure how big the moon is when it is on the horizon and looks abnormally large (say, near 6-7 PM on the night of a full moon). Then do the exact same thing at around midnight. You'll find them exactly the same size. There is some debate as to what the source of the optical illusion is (and it is just that: an optical illusion), though I suspect the illusion is caused primarily by the proximity to the horizon as a reference. When the sun or moon is higher in the sky, you don't have the horizon nearby, so your brain perceives it to be smaller at the zenith and larger on the horizon. (post EC response to Medeis) The "refraction" hypothesis has been around since at least Aristotle and Ptolemy, and has been summarily discredited since the real size can be very accurately measured, and does not vary. Actually, both the sun and moon are very slightly smaller on the horizon because they are literally farther from the viewer; the difference for the sun is impossibly small to measure, while the difference for the moon can only be detected on very accurate instruments. In reality, to any reasonable measure, the sun is the same size at all points in the sky. It's just an optical illusion. --Jayron32 04:36, 9 May 2013 (UTC)[reply]
So is the human eye just a crappy way to measure size? Is that the angle of perception? I looked up the refraction article.--69.233.254.115 (talk) 04:54, 9 May 2013 (UTC)[reply]
It's not refraction, it's an optical illusion. The illusion is in your brain, not your eyes. Take the classic crazy tables illusion: your eyes aren't diffracting anything or seeing anything "incorrectly"; it's your brain's interpretation of the input from your eyes that's causing the illusion. Vespine (talk) 05:41, 9 May 2013 (UTC) (edit, sorry I don't know how I got above Jayron32's reply even though my edit time is 3 minutes later; it did take me more than 3 minutes to write the above)[reply]
It's not your eye. The lenses in your eyes aren't to blame. It's your brain. In order to make sense of the world, your brain has to take the input from your eyes and give it meaning; in doing so it takes contextual clues to determine things like relative size and distance. That is, your brain judges the size of an object by the environment that it is in. This is because your brain has to decide between "big and far" and "small and close", and there are ways to construct "optical illusions" which confuse the visual processing center in your brain into messing up the sizes of objects. This is a necessary side effect of being able to (in most cases) accurately judge distance and size; that is, the same processes that cause the illusion come "part-and-parcel" with the processes that allow you to see in the first place. Some common illusions that play with your visual perception of size and distance are the Ebbinghaus illusion, the Delboeuf illusion, the Ames room illusion, the Jastrow illusion, the Müller-Lyer illusion, and the Ponzo illusion. Whatever processing glitch is responsible for these common illusions, the same thing is likely happening with the size of the sun on the horizon. They're all caused by our brain misinterpreting size due to the contextual clues around the object of study; the brain perceives the size of the object from these cues, and sometimes it gets it wrong. What is striking is how consistently it gets it wrong: nearly all people experience these illusions in the same way, which makes it clear that this is something universal to how the human brain constructs a visual image of the world using the raw data from our eyes. --Jayron32 05:38, 9 May 2013 (UTC)[reply]
I (more or less) agree with (nearly) everyone else here - this is just like the moon illusion: the sun looks larger near to the horizon because of a simple optical illusion. However, as the image at right shows, atmospheric refraction does play a very, very small part in distorting the shapes of the sun and moon - but not by enough to account for the size changes we perceive. In fact, you can only see the distortion effect at all when the sun or moon is very close to the horizon over the ocean or dead-flat ground. Under those circumstances, you can sometimes see that the orb of the sun is squashed at the bottom just as it touches the horizon, due to atmospheric refraction. Worse still for the atmospheric distortion theory - and as you can see in that picture - even if atmospheric distortion were a factor, it would make the sun and moon look smaller at the horizon...not bigger!
Incidentally - I do 3D computer graphics for a living, and am occasionally asked to include the sun and/or moon in my pictures. It's interesting to note that the moon illusion shows up in computer-graphics sun and moon rises and sets too. Even though we have not simulated any atmospheric distortion in the graphics system - and we know for 100% sure that the diameter of the circle we're drawing for those bodies is precisely the same size no matter what the time of day is - you still get the powerful impression that the sun and moon are twice the size at the horizon as at the zenith.
SteveBaker (talk) 13:07, 9 May 2013 (UTC)[reply]
The OP seemed to be asking about distortion at sunset, not relative appearance high and low in the sky. I am curious why you say the sun would appear smaller at sunset? μηδείς (talk) 19:26, 9 May 2013 (UTC)[reply]
I think you misunderstand.
What we're being asked is: "I can estimate the sun at noon is much smaller than it is during sunset. I don't know why is that?" - so our OP believes (as many people do) that the sun (and moon) are smaller at noon and larger at sunset. We know that this is because of the "moon illusion" effect. The argument that you (wildly incorrectly!) made that this is somehow due to refraction is untrue for two very good reasons:
  1. Because the moon illusion is the cause - proven by the fact that the illusion persists even in computer graphics where there is no refraction.
  2. Because if it were due to refraction, it would only happen within about a third of a sun/moon diameter of the horizon - *and* the effect of refraction is to decrease the apparent size, not to increase it.
So for both of those reasons, your explanation and the OP's evident suspicion (that this is related to atmospheric composition) are entirely false - and therefore the follow-up about other planets is probably not relevant.
If you look at the pair of photos I posted (above), you can see that the top half of the sun is more or less a semi-circle (following the blue line) - and the bottom half is drastically squished upwards (following the green line). You can clearly see from that photograph that the effects of atmospheric distortion only happen VERY close to the horizon (too close to encompass the entire sun or moon) - and the effect is to squash the object, not to stretch it...so even if refraction were somehow the cause - the sun/moon would look SMALLER at the horizon, not bigger.
Despite this explanation, many people refuse to believe that this effect is just an illusion (because it's such a powerful one). To those people, I suggest this practical experiment: A US one-cent coin (or a UK penny), held at arm's length more or less covers the sun (or moon) exactly (depending on how long your arms are!). Knowing this, hold the penny up right next to the sun at noon - then again at sunset - and it'll be obvious that the true size of the sun is indeed exactly the same on both occasions.
QED.
SteveBaker (talk) 20:29, 9 May 2013 (UTC)[reply]
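For anyone who wants to put rough numbers on the refraction argument above, here is a back-of-envelope sketch using Bennett's published approximation for atmospheric refraction as a function of apparent altitude. The exact amount varies with temperature and pressure, so treat the output as indicative only:

```python
import math

def refraction_arcmin(apparent_alt_deg):
    """Bennett's (1982) approximation for atmospheric refraction, in arcminutes,
    at a given apparent altitude in degrees (standard conditions)."""
    return 1.0 / math.tan(math.radians(apparent_alt_deg + 7.31 / (apparent_alt_deg + 4.4)))

sun_diameter_arcmin = 32.0                                    # roughly half a degree

# Lower limb sitting on the horizon, upper limb about 0.53 degrees higher.
lift_lower = refraction_arcmin(0.0)                           # ~34'
lift_upper = refraction_arcmin(sun_diameter_arcmin / 60.0)    # ~28'
flattening = lift_lower - lift_upper                          # lower limb is lifted more

print(f"lower limb lifted {lift_lower:.1f}', upper limb lifted {lift_upper:.1f}'")
print(f"vertical diameter shrinks by about {flattening:.1f}' out of {sun_diameter_arcmin:.0f}' "
      f"({100 * flattening / sun_diameter_arcmin:.0f}% squash)")

# A few degrees up, refraction is nearly uniform across the disc, so the squash vanishes:
higher = refraction_arcmin(5.0) - refraction_arcmin(5.0 + sun_diameter_arcmin / 60.0)
print(f"at 5 degrees altitude the squash is only ~{higher:.1f}'")
```

So refraction squashes the vertical diameter by roughly a fifth when the sun is touching the horizon, and the effect has all but disappeared a few degrees up - consistent with the photos, and nothing like a magnification.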
I took the OP's question to be, why does the sun look wider at sunset. Your answer seems to be that it is not larger, but squished top-to-bottom. Unless that distortion in proportions is caused by something other than the refractive power of an atmosphere, I am satisfied. μηδείς (talk) 21:47, 9 May 2013 (UTC)[reply]
The answer, as many other editors have said, is that it's an optical illusion. The Sun is not actually wider at sunset; it just looks that way because the brain is misinterpreting what the eyes are telling it. --Bowlhover (talk) 23:22, 9 May 2013 (UTC)[reply]
Not larger over all, squatter. Wider, but not taller to the same extent. μηδείς (talk) 01:02, 10 May 2013 (UTC)[reply]
As Jayron32 alluded to in the first post on this question, the brain perceives objects high in the sky as smaller than they are. The brain uses objects on the horizon (trees, buildings, hills) to calibrate size perception. When objects of roughly known size are nearby, the brain can accurately judge the size of the sun or moon. When no such objects are nearby, the brain defaults to perceiving the sun or moon's disk, or anything else in the sky, as much smaller than reality. The phenomenon is well known to movie directors, and was well illustrated in the Howard Hughes movie The Aviator (2004), starring Leonardo DiCaprio. Hughes was depicted making a movie featuring a biplane dogfight, and wanting audiences to feel immersed in the action. It didn't work, because the planes in the sky looked tiny, not the size expected from the lens focal length and shooting distance. Hughes realised that to provide the eye with scale, he had to include ground objects in each scene, and have the planes visually near those objects. Wickwack 124.178.140.2 (talk) 01:13, 10 May 2013 (UTC)[reply]
The Moon Illusion changes the apparent size of the moon, not its relative width. The sun gets squat, then squatter, and finally shows a green flash as it transits the horizon. μηδείς (talk) 02:25, 10 May 2013 (UTC)[reply]
None of that has to do with the illusion we're talking about here. We keep saying "apples, apples, apples" and you keep saying "But oranges!!!". Try to keep up. The moon illusion doesn't have anything to do with the odd effects happening at sunset/sunrise or moonset/moonrise. It has to do with the full circle of the moon/sun appearing bigger when close to the horizon, and smaller at the zenith. This is a real, documented effect and optical illusion (that is, caused purely in the visual processing of the human brain, not by the actual image itself) which is quite independent of and unrelated to all the effects you are talking about. The ones you are talking about are what happens to the disc of the sun as it moves past the horizon itself. Different. --Jayron32 02:54, 10 May 2013 (UTC)[reply]
Green flashes?? Medeis needs to get outside more. Wickwack 124.178.140.2 (talk) 02:57, 10 May 2013 (UTC)[reply]
The green flash is actually a documented phenomenon, so Medeis is (probably) not pulling this out of his ass. (Or his ass's ass - :) ) Whoop whoop pull up Bitching Betty | Averted crashes 03:33, 10 May 2013 (UTC)[reply]
I'm pretty sure it's her ass, but regardless, I've conceded that the green flash is a real phenomenon, as are the other actual optical distortions noted; just that they aren't relevant to the discussion at hand. --Jayron32 05:02, 10 May 2013 (UTC)[reply]
One more observation: I live on the western edge of a mountain range, so when the sun or moon rises, it has to come up quite high to clear the mountains. I've driven home in the city when the moon is 20-30 degrees up in the sky and looks quite normal, but when I get close to home and the mountains dominate the horizon, the moon will look huge even though it's higher in the sky than when I started the trip. Vespine (talk) 05:54, 10 May 2013 (UTC)[reply]
Excellent observation, but don't forget that it isn't that the moon looks bigger than reality when against the horizon (or mountain range); it's that it looks smaller than reality when high in the sky. Wickwack 121.215.25.35 (talk) 08:01, 10 May 2013 (UTC)[reply]
It's actually pointless to decide which perspective on the moon is "reality". The entire issue is that the moon looks larger on the horizon and smaller at the zenith, not that either position represents "reality". Both are merely collections of neurons firing in your visual cortex giving you a particular perception of the moon, the illusion is the mistaken belief that the two don't match each other, when in fact, they do. It makes no sense in this context to speak of one collection of firing neurons as "more real" than the other. --Jayron32 15:42, 10 May 2013 (UTC)[reply]
I disagree - it may be "pointless" - but it's certainly possible to find out which one is "correct". I've actually tried this with many people (although not a statistically valid sample). If you talk to people about holding various objects at arm's length and using them to cover the sun or moon - they will generally concede that a penny - or at most a golf-ball held at that distance would cover the disk of the sun or moon at zenith - and that a shirt-button would definitely not be big enough and a tennis ball would be much too large (in fact, a penny is big enough). However if you ask them how big the sun/moon looks at the horizon, they tell you things like that your whole hand would only just cover it, or a tennis ball held at arm's length is the right size - only people who know about the illusion, or who have actually tried it will easily believe that even a golf ball is sufficient to cover a full moon at the horizon. So the illusion is definitely inflating the "true" size of the object when it's close to the horizon rather than shrinking it at zenith. That's a testable fact...and "too big" at the horizon and "about right" at zenith is the clear result from my informal testing...although I have to say that people do also assume the sun and moon are larger than they really are at zenith too...just not by such a crazily huge amount!
The fact that they over-estimate the size of the sun and moon everywhere is another interesting thing...coming back to my computer graphics experience, when I draw the sun and moon at the correct size - most people say that they look ridiculously too small - so I tried giving them a slider to adjust that size - and at zenith, down to maybe 30 degrees above the horizon, they generally wanted the sun or moon to be about twice as large as it should be...which is about the size of a golfball held at arm's length...which nicely ties in with what many people tell me when I ask them to estimate the size that way. That suggests that some part of the over-estimate is not so much an optical illusion (which ought to be the same in the computer display as in reality) - but more a matter of memory. SteveBaker (talk) 16:58, 10 May 2013 (UTC)[reply]

Electromagnetic radiation effect

in memory chip, we can save several information/file. like credit card, pass card for employee, etc. My question is does it right that electromagnetic radiation can affect dammaged on memory chip? — Preceding unsigned comment added by 202.152.199.34 (talk) 04:56, 9 May 2013 (UTC)[reply]

The short answer is yes, and it's not just memory chips that can be affected. An Electromagnetic pulse can induce currents in delicate electronics far greater than the device is designed to handle. What kind of pulse, and how strong a pulse, it would take to damage any particular piece of electronics depends on a very large number of factors. Typically, sensitive electronics are shielded at least to some degree to prevent damage from the electromagnetic fields they might encounter during normal use - from audio speakers, generators, mobile phones, wifi transmitters, etc. Vespine (talk) 05:23, 9 May 2013 (UTC)[reply]
Er. If the answer is "yes," what was the question? Evanh2008 (talk|contribs) 05:27, 9 May 2013 (UTC)[reply]
Paraphrased from non-native English, the question appears to be "Is it correct that EM radiation can damage the information on a memory chip?" SemanticMantis (talk) 14:55, 9 May 2013 (UTC)[reply]
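To put a very rough number on the "induced currents" answer above, here is a back-of-envelope Faraday's-law sketch. The pulse parameters and loop area below are purely illustrative assumptions (real EMP, lightning and ESD events vary enormously), but they show why even a small loop of circuit-board traces can pick up damaging voltages:

```python
# Faraday's law: the EMF induced around a loop equals the rate of change of
# magnetic flux through it, so for a small planar loop emf ~ A * dB/dt.

C = 3.0e8                      # speed of light, m/s

loop_area = 1.0e-3             # m^2 -- a ~10 cm^2 loop of PCB traces (assumed)
e_field_peak = 5.0e4           # V/m -- illustrative fast-transient field strength (assumed)
rise_time = 5.0e-9             # s   -- illustrative nanosecond-scale rise time (assumed)

b_field_peak = e_field_peak / C            # plane-wave relation between E and B
db_dt = b_field_peak / rise_time           # crude estimate of dB/dt during the rise
emf = loop_area * db_dt

print(f"peak B ~ {b_field_peak:.1e} T, dB/dt ~ {db_dt:.1e} T/s")
print(f"induced EMF across the loop ~ {emf:.0f} V")   # tens of volts vs. 3.3 V logic
```

Tens of volts appearing on a line designed for a few volts is easily enough to punch through gate oxides or corrupt stored charge, which is why shielding and transient suppression matter.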

Earth's orbital plane and its revolution

The moon revolves around the earth, the earth revolves around the sun, and the sun revolves around the center of the Milky Way galaxy. Do these three celestial bodies have the same orbital plane or different orbital planes? The moon always revolves around the earth, and every day it should come into the path of the sun and earth. I think the moon comes (once in a day) onto the straight line joining the centers of the earth and sun. If it does, then why don't we see a solar eclipse every day? Concepts of Physics (talk) 08:40, 9 May 2013 (UTC)[reply]

The rotation of the Sun around its axis, the orbits of the planets around the Sun, and the orbit of the Moon around the Earth all lie roughly in the plane of the ecliptic (well, the orbit of the Earth lies exactly there, by definition). The key to your question is in the roughly. The Moon's orbital plane is inclined a bit more than 5° to the ecliptic. So we only get an eclipse when the Moon happens to be near one of the points where its orbital plane and the ecliptic intersect, and when it is also on a line with the Earth and the Sun. The orbit of the Sun in the galaxy is unrelated to the ecliptic. --Stephan Schulz (talk) 08:54, 9 May 2013 (UTC)[reply]
(edit conflict)
No, all three bodies have different orbital planes, and all three have different orbital periods. Therefore, they do not align every day. Plasmic Physics (talk) 08:57, 9 May 2013 (UTC)[reply]
(edit conflict) There are three different orbital planes here. The plane of the Earth's orbit around the Sun is called the ecliptic plane. The Moon's orbital plane is at an angle of about 5 degrees to the ecliptic plane, which explains why we don't see a lunar and a solar eclipse every month. The orbital plane of the Sun around the centre of the galaxy is at an angle of about 60 degrees to the ecliptic plane. Gandalf61 (talk) 09:01, 9 May 2013 (UTC)[reply]
And to correct one thing, it takes a month for the Moon to orbit the Earth. Even if the orbital planes were exactly aligned, we'd see a solar eclipse once a month (at New Moon), not every day. Rojomoke (talk) 10:23, 9 May 2013 (UTC)[reply]
Speaking of which, an annular eclipse is starting in about half an hour's time in northern Australia and parts of the Pacific. C'mon down. -- Jack of Oz [Talk] 20:24, 9 May 2013 (UTC)[reply]
Wait a minute there! A lunar month is not the time it takes the Moon to orbit the Earth - it's the (average? variable?) time between two equivalent phases of the Moon. Because the Earth is revolving around the Sun, the point in the orbit where the Moon is full changes from month to month. See sidereal month. Wnt (talk) 23:24, 9 May 2013 (UTC)[reply]
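If it helps to see the 5-degree tilt argument numerically, here is a deliberately crude toy model - not an ephemeris. The starting phase relative to the node is an arbitrary assumption, and the eclipse limit is a rough round figure; the sketch simply steps through successive new moons and checks how far the Moon sits from the ecliptic at each one:

```python
import math

SYNODIC = 29.531     # days between new moons
DRACONIC = 27.212    # days for the Moon to return to the same node
INCLINATION = 5.14   # degrees, tilt of the Moon's orbit to the ecliptic
ECLIPSE_LIMIT = 1.5  # degrees of ecliptic latitude, rough limit for a partial solar eclipse

node_phase0 = 0.25   # arbitrary starting position relative to the ascending node (assumption)

for k in range(25):                       # about two years of new moons
    t = k * SYNODIC                       # days since the first new moon
    # fraction of a draconic month elapsed -> angle from the ascending node
    u = 2 * math.pi * (node_phase0 + t / DRACONIC)
    latitude = INCLINATION * math.sin(u)  # Moon's ecliptic latitude at this new moon
    tag = "eclipse possible" if abs(latitude) < ECLIPSE_LIMIT else ""
    print(f"new moon {k:2d}: Moon {latitude:+5.2f} deg from the ecliptic  {tag}")
```

Run it and only a handful of the 25 new moons get flagged, clustered into "eclipse seasons" roughly every six months; at all the others the Moon passes a few degrees above or below the Sun, which is why most new moons produce no eclipse at all.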

Question about Science career choice in India

Hello,

I am an Indian student currently looking at going for a 4-year undergraduate course in B.Science at Delhi University. I've heard from some that doing the UG course is equivalent to doing a Masters course, as you can get Masters-level jobs or go straight into a PhD after doing the UG. But at the same time, the course has just been introduced at IISc and will be introduced at DU from this year onwards.

So could you please tell me whether I should be looking at doing a Masters after the UG course or not, and what the advantages of doing that would be?

Thanks! 117.194.88.176 (talk) 14:33, 9 May 2013 (UTC)[reply]

I don't know much about the system in India, but my general advice for students is to not worry too much about masters/Ph.D before you've even started your undergraduate education! Many things can change during your degree program. Your interests, your goals, your finances, your relationships, etc. Waiting all the way until the end of your second year UG to think about MS/Ph.D is certainly not too late. Just focus on your studies, and discovering what your true interests and motivations are. Good luck! SemanticMantis (talk) 14:48, 9 May 2013 (UTC)[reply]
You can ask the following resources.
Wavelength (talk) 15:18, 9 May 2013 (UTC)[reply]
I have no idea how it works in India, but in other countries it depends on what sort of career you want. For example, in Australia (the USA is much the same) if you want to be an Engineer, you first do a Bachelor degree - that is the minimum requirement to work as a professional Engineer. With a Bachelor degree, you can assimilate just about anything in peer reviewed journals (with a bit of effort) and that's really all the academic ability you need. Most Engineers have only a Bachelor degree (4 years), but a few go on and get a Masters or even a Ph.D. It's more useful though to get a Masters in Business Management, not in Engineering, and do it after some years of professional work experience - that is what will help get an Engineer promoted. However, if you want to be a psychiatrist, the minimum requirement to be allowed to see patients is a Ph.D - so students stay at university for the required 7 or 8 years in one long stretch until they either drop out and do something else, or they get their Ph.D.
In any case, the best thing is to make contact with practitioners in fields that you might like, talk to them and find out what they really do, and how they got there. Consult staff at universities and review their syllabus and student handbooks. My advice is pick the course that you think, after diligently investigating (not just do, say, town planning because some school teacher who thinks it is about drawing plans has noticed you are good at art) that will be the most fun, at the best university you can get into. Relying on a few posts on Reference Desk and/or perusing a few websites will not give you enough to make a good decision. Wickwack 124.178.140.2 (talk) 00:47, 10 May 2013 (UTC)[reply]

Genetic diversity of dogs vs. other mammals

This blog post makes what I think is an absurd claim about the relative genetic diversity of dogs compared to other mammals. It says "The differences between a Great Dane and a Pug are greater than the differences between a weasel and a walrus." Now, they don't quite specify genetic diversity, but on this point, the claim can't be true. Unless I'm much mistaken, a Great Dane and a Pug are the same species, and thus could reproduce (better make it a male Pug, though). And of course, a weasel can't do this with a walrus. If the claim is more about phenotypes, the dog breeds would still seem to have more in common than the weasel and walrus. Is this claim true in any meaningful way? --BDD (talk) 18:22, 9 May 2013 (UTC)[reply]

Some of the genetic changes favored by artificial selection are alterations in really fundamental proteins that evolution normally doesn't touch (that is, doesn't touch and live for long). You could find individual nonconserved positions on specific proteins that would be more similar in weasel and walrus than between dog breeds, but only on the minutest fraction, a few individual basepairs, out of the whole genome. I ought to go hunting for examples in these dog breeds (Manx cat is one I actually remember offhand) but somehow I'm just not up to chasing the squirrel right now. Wnt (talk) 19:10, 9 May 2013 (UTC)[reply]
  • The claim is utter nonsense. The differences between dog breeds are caused by a few sets of genes that regulate size and relative size of organs, hair color and pattern, and a few things which are rather superficial from a developmental point of view. There is no difference at the cellular or metabolic level; they still eat and digest their food in the same way, can interbreed, react the same to the same medicines, etc. On the other hand, weasels and walruses are not close enough to produce offspring, and I highly doubt a walrus would do well on a weasel's diet. It's utter nonsense. μηδείς (talk) 19:20, 9 May 2013 (UTC)[reply]
  • Another vote for total nonsense. I've been trying to think of some odd sense in which it might be true, but you'd have to bend over backward like Wnt has above :) I suppose you could hand-type in every url at the bottom of the graphic, and read everything you find in an attempt to discover how they went so wrong, but I'm not that charitable toward lazy "info" graphic makers. SemanticMantis (talk) 20:52, 9 May 2013 (UTC)[reply]
Good catch! I didn't even notice the references at the end. I checked three of them and found this Scientific American blog post, which says the skulls of the Pug and Great Dane have more differences than the walrus and weasel. I suppose it's not exactly surprising that a blog took a fun fact and just ran way too far with it. --BDD (talk) 21:56, 9 May 2013 (UTC)[reply]
Heh. By that logic, you and I are more different than me and my dog are; you haven't slept as many days in the same house as we have ;) SemanticMantis (talk) 23:04, 9 May 2013 (UTC)[reply]

How did our Stone Age ancestors get enough vitamin A?

Some of our foods are fortified with vitamin A, and some foods we eat that contain a lot of beta-carotene, like carrots, are not really natural foods - they are the result of centuries of selective breeding. I have checked that I get enough vitamin A, but only about 1.8 times the RDA, despite eating about twice the recommended amount of vegetables. If I subtract the extra vitamin A I get from carrots and from fortified foods, I end up below the RDA for vitamin A. So, I don't see how I could get enough vitamin A if I lived in the Stone Age. Count Iblis (talk) 19:00, 9 May 2013 (UTC)[reply]

Liver (food). Internal organs were held in quite high regard as foods in ancient times, and the use of liver for night blindness was known to Dioscorides (see [17]). Wnt (talk) 19:05, 9 May 2013 (UTC)[reply]
Indeed, carnivore livers can cause vitamin A overdose (it's possible to overdose on vitamin A itself; beta-carotene taken in excess is excreted without being converted, and isn't poisonous). CS Miller (talk) 19:08, 9 May 2013 (UTC)[reply]
(edit conflict) Liver. Some animal livers, like Polar Bear, IIRC, have so much vitamin A that they are toxic even in small amounts. But the liver of any animal contains enough vitamin A to keep you going for some time. Vitamin A is a fat soluble vitamin so you can store it in the long term, to handle those weeks when you don't take down any buffalo for a while. --Jayron32 19:09, 9 May 2013 (UTC)[reply]
Apparently Western culture is a bit of an outlier when it comes to diet, meat-wise in particular. We don't eat the good(-for-you) bits. Organ meat is where all the nutrients are. The Inuit diet is a decent parallel to how they got nutrients back then. Mingmingla (talk) 22:16, 9 May 2013 (UTC)[reply]
I see, so we are closer to being carnivores than our present typical diets would suggest. Count Iblis (talk) 22:38, 9 May 2013 (UTC)[reply]
No, that is not a valid conclusion. Ancient people ate lots of non-meat. What is true is that modern people (and modern USA in particular) eat far less organ meat than our ancestors did. SemanticMantis (talk) 22:59, 9 May 2013 (UTC)[reply]
That's what I was going for, yes. Mingmingla (talk) 01:22, 10 May 2013 (UTC)[reply]
Yes, I guess I was focussing too much on what I usually eat :) Count Iblis (talk) 12:40, 10 May 2013 (UTC)[reply]
It wasn't just liver. The Paleolithic diet included substantial amounts of leafy vegetables (modern day examples of which are spinach, kale and collard greens), a couple hundred grams of which is enough to supply all the vitamin A you need in a day. See Vitamin A#Sources. Red Act (talk) 23:57, 9 May 2013 (UTC)[reply]
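As a rough sanity check on the liver and leafy-greens answers, here is a small arithmetic sketch. The per-100 g vitamin A figures below are round illustrative numbers of about the right order of magnitude, not values looked up from a nutrient database, so swap in real data before relying on them:

```python
RDA_UG_RAE = 900.0   # adult male RDA, micrograms of retinol activity equivalents per day

# Illustrative (assumed) vitamin A content, micrograms RAE per 100 g:
foods = {
    "beef liver":     7000.0,   # preformed retinol - extremely rich
    "cooked spinach":  500.0,   # from beta-carotene
    "kale":            400.0,   # from beta-carotene
}

for name, per_100g in foods.items():
    grams_for_rda = 100.0 * RDA_UG_RAE / per_100g
    print(f"{name:>15}: ~{grams_for_rda:5.0f} g covers one day's RDA")
```

With these assumed figures, a thumb-sized piece of liver (~15 g) or a couple of hundred grams of dark leafy greens covers a day's requirement, which is consistent with both answers above.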
One can also ask, did they get enough vitamin A? I don't see any reason to assume that they did. They got enough to survive as a species, or we wouldn't be here, but how we know they got enough for the best health outcomes is obscure to me. (It's not impossible that someone here does know that, in which case it would be interesting to hear.) --Trovatore (talk) 00:06, 10 May 2013 (UTC)[reply]
Our Life expectancy article gives an Upper Paleolithic lifespan of 33 years. So perhaps they really didn't live long enough to worry about all our modern middle-age health issues. But I agree with the points above, that every edible scrap of the animal would have been eaten. Alansplodge (talk) 07:27, 10 May 2013 (UTC)[reply]
Life expectancy at birth is almost useless for that sort of consideration. The table says that a Paleolithic person who lived to 15 would on the average die at 54, which is more relevant to your point. But in any case it doesn't address my point particularly. There's no obvious reason to exclude the possibility that it would have been better for them to get more vitamin A, even in their prime, but it just wasn't available. --Trovatore (talk) 08:52, 10 May 2013 (UTC)[reply]
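The point about life expectancy at birth is easy to illustrate with a toy cohort; the numbers below are invented purely to show the arithmetic, not taken from the article:

```python
# A made-up cohort of 100 births with heavy early-childhood mortality.
cohort = (
    [5]  * 40 +     # 40 people die young, at an average age of 5 (assumption)
    [54] * 60       # 60 people survive childhood and die at an average age of 54 (assumption)
)

at_birth = sum(cohort) / len(cohort)
survivors = [age for age in cohort if age >= 15]
at_fifteen = sum(survivors) / len(survivors)

print(f"life expectancy at birth:  {at_birth:.1f} years")    # ~34 - dragged down by child deaths
print(f"life expectancy at age 15: {at_fifteen:.1f} years")  # 54 - what adults could expect
```

So a "33-year lifespan" figure mostly reflects infant and child mortality, not adults routinely dropping dead in their early thirties.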
The following text comes from Douglas Mawson#Australasian Antarctic Expedition:
It was unknown at the time that Husky liver contains extremely high levels of vitamin A. It was also not known that such levels of vitamin A is poisonous to humans. With six dogs between them (with a liver on average weighing 1 kg), it is thought that the pair ingested enough liver to bring on a condition known as Hypervitaminosis A. However, Mertz may have suffered more because he found the tough muscle tissue difficult to eat and therefore ate more of the liver than Mawson. It is of interest to note that in Eskimo tradition the dog's liver is never eaten.
Dolphin (t) 08:20, 10 May 2013 (UTC)[reply]

May 10

collars and tie outs

Why are dog collars not supposed to be used with tie-outs?Curb Chain (talk) 00:43, 10 May 2013 (UTC)[reply]

What's a tie-out? ←Baseball Bugs What's up, Doc? carrots03:41, 10 May 2013 (UTC)[reply]
I think it's British English for "a chain you nail to the ground via a stake to keep your dog from running out of the yard and eating small children." In general, I believe that it is often recommended that you don't attach a leash or a chain to the dog's collar, for the potential to choke or injure the dog that way. It is usually recommended that a Dog harness is used instead if you're going to be tying him up in the yard. --Jayron32 04:55, 10 May 2013 (UTC)[reply]
Not a common British usage - I've never heard of it. Alansplodge (talk) 07:15, 10 May 2013 (UTC)[reply]
Googling "tie-out cable" returns heaps of sites that show that it is a cable or chain used to tie up a dog - the name being particularly used in the USA and Australia. It appears to be recently introduced marketing terminology. This Australian, who is a dog owner who would never tie a dog up, as it is not nice for the dog (dog owners should have an appropriately sized and fenced yard) had bever heard of the term before seeing it in this question. Wickwack 121.215.25.35 (talk) 07:56, 10 May 2013 (UTC)[reply]

Why have there not been visitors from the future?

If time travel is possible, and we can visit the future, why hasn't anyone from the future ever visited us yet? — Preceding unsigned comment added by 72.70.202.28 (talk) 03:35, 10 May 2013 (UTC)[reply]

How do you know they haven't? Whoop whoop pull up Bitching Betty | Averted crashes 03:37, 10 May 2013 (UTC)[reply]
(edit conflict) Ergo, time travel is not possible. QED. That was easy. --Jayron32 03:37, 10 May 2013 (UTC)[reply]
Specifically, backward time travel is not possible. Time only goes one direction. ←Baseball Bugs What's up, Doc? carrots03:40, 10 May 2013 (UTC)[reply]
Those both depend on the assumptions that A: if time travel were possible, it would not be considered a military secret (unlikely); B: time travellers would be easily recognisable as such (also unlikely); and C: there are no obvious time travellers (how do we know what time travellers would look like?). Your argument is as holey as a colander.
Also, tell me what if anything precludes the possibility of time travel, Jayron Pond and Baseball Williams. Whoop whoop pull up Bitching Betty | Averted crashes 03:50, 10 May 2013 (UTC)[reply]
1)If time travel were possible, the Bible would mention that the Crucifixion was observed by millions of strangely clad people speaking strange languages, for just one example.
2)We tend to think of a time machine like a car; you get in and drive to whatever time you want. But it may be more like an elevator, in that it covers a defined span; from the time it was built to the time it is destroyed, and you can no more go to a time before it was built than you can ride an elevator to a floor that is below the bottom floor the elevator goes to. This would be true for the Tipler cylinder, for instance.
3) If time travel does indeed allow the past to be altered which then alters the future, which then causes a difference in the time travel to the past which alters it yet again, which then alters the future yet again, etc.; then the only stable reality where this will finally stop is one where time travel never gets invented, and everything then continues unchanged. Gzuckier (talk) 03:59, 10 May 2013 (UTC)[reply]
The problem with using Tipler cylinders (or any other kind of closed timelike curve) for backward time travel is that they're hard to stop. That is, they are essentially self-contained, and there isn't even a hypothetical way of "jumping off" and actually being able to do anything of consequence in the past. I believe there are also some serious ontological issues with creating them (I remember reading something years ago about Kerr black holes as a possible solution to that, but I don't recall the specifics). Evanh2008 (talk|contribs) 04:12, 10 May 2013 (UTC)[reply]
Tipler cylinders might as well be "wave your magic wand and say presto!" as far as practical solutions to time travel go. After all, a cylinder of infinite length would take an infinite amount of time to construct, and would reach the entire breadth of the universe. --Jayron32 04:51, 10 May 2013 (UTC)[reply]
No, it wouldn't reach the breadth of the universe - it would only tend to the breadth of the universe. (Infinity is not a number.) Although, the number 'orez', which I invented, is. (Not that anyone cares.) Plasmic Physics (talk) 05:00, 10 May 2013 (UTC)[reply]
If the cylinder is infinite, then it's infinite — that's just a tautology. As for "infinity is not a number", that's meaningless. Infinity is not, say, a real number or a complex number, but there is no well-defined notion of "number, full stop" from which infinity can be excluded. Complex infinity is, for example, an extended complex number.
As for the "tending to" thing, see potential infinity versus actual infinity. Pre-Georg Cantor, actual infinity had a poor reputation, but actually infinite structures are now well accepted. --Trovatore (talk) 05:05, 10 May 2013 (UTC)[reply]
Then what was all that about on the Maths Reference Desk, when I raised the topic of infinity? Plasmic Physics (talk) 05:20, 10 May 2013 (UTC)[reply]
(edit conflict) The problem is not the existence of infinite structures, the problem is the construction of infinite structures in a finite amount of time. That's why Tipler cylinders are magic: if you don't have one already, you need to build one, and you'd need magic to do so. And if you have magic, why not use your magic to time travel and save yourself the trouble of building an infinite length cylinder. Tipler cylinders may be an interesting thought experiment, and may be mathematically consistent with current physics in allowing time travel of a sort, but that doesn't mean that we can actually build one and use it for time travel. People seem to always forget the step where they have to actually make the object and then use it to send someone back to, I don't know, kill Hitler or try to convince your father to not be so much of a dweeb or something. In the end, there's still no practical way yet proposed to do this, so it's a moot point. Fun to sit around and bullshit about, but it's really not anything more than magic wearing a physics disguise. --Jayron32 05:22, 10 May 2013 (UTC)[reply]
Well, maybe there already is one somewhere. Or maybe Hawking is wrong about the "weak energy criterion" or whatever he claims keeps very long finite cylinders from working. --Trovatore (talk) 06:05, 10 May 2013 (UTC)[reply]
Undoubtedly. If the universe is infinite in scope and infinite in matter, then every possible object that could exist does already exist somewhere right now. However, the issue is still a practical impossibility. In an infinite universe, the Tipler cylinder does already exist, but it would take an infinite amount of time to search the infinite universe to find it, and then an additional sufficiently long period of time past infinity to travel to it to actually use the thing to travel backwards in time. Still pointless to consider it a practical time machine. --Jayron32 13:48, 10 May 2013 (UTC)[reply]
As usual, Mr Munroe has an answer: http://xkcd.com/1203/ 196.214.78.114 (talk) 08:03, 10 May 2013 (UTC)[reply]
Time travel FORWARDS is possible. People do that everyday when they sit on a bullet train. ☯ Bonkers The Clown \(^_^)/ Nonsensical Babble12:24, 10 May 2013 (UTC)[reply]
People do that every day sitting still. --Jayron32 13:48, 10 May 2013 (UTC)[reply]
I'm afraid I don't get what the cartoon is trying to say, but I should take a moment to mention negative mass as a crucial concept.
Another notion: any use of time machines to rewrite history is going to keep going on and changing things until... what? Until nobody ever invents a time machine. In this model we'd occasionally get hit by one-off shrapnel of future time travellers, but only if their efforts have the effect of preventing time travel from being invented, which would tend to skew the odds of their actions to be heavily in the favor of not telling us the technical details. Wnt (talk) 12:28, 10 May 2013 (UTC)[reply]
Like your statement, the cartoon is a bit of a paradox. Self-contradictory, and therefore the world implodes. The end. ☯ Bonkers The Clown \(^_^)/ Nonsensical Babble12:32, 10 May 2013 (UTC)[reply]
"I'm afraid I don't get what the cartoon is trying to say": There's a wiki for that. AndrewWTaylor (talk) 12:44, 10 May 2013 (UTC)[reply]
Explanation here: http://www.explainxkcd.com/wiki/index.php?title=1203 196.214.78.114 (talk) 12:46, 10 May 2013 (UTC)[reply]


Time doesn't exist in the sense we intuitively think about it. The past and the future simply exist, they are the present moment for the people who live there. The question should really be formulated as why we can explain the present state of the universe in terms of only a past initial condition instead of a mixed one (the other possibility of explaining it in terms of only the future would de-facto redefine the future as the past and vice versa). My view on this is that what we're actually doing here is to take our present state and then find a set of rules that allows the information in our present state to be compressed. This then explains our present states in terms of hypothetical "past states" that should be considered to be alternate universes. This gives rise to a one parameter family of parallel universes, the parameter can then be identified with what we conventionally call "time". Entropy then decreases in the negative time direction by definition. We then perceive time in a way that makes it look like we're traveling in time in only one direction, in this intuitive picture, time travel to the past is impossible. Count Iblis (talk) 13:09, 10 May 2013 (UTC)[reply]

Time flies like an arrow. Fruit flies like a banana. --Jayron32 13:43, 10 May 2013 (UTC)[reply]
and Green flies like a lettuce. SteveBaker (talk) 16:35, 10 May 2013 (UTC)[reply]

Sometimes I visit the past and have my déjà vu. Thanks, water nosfim — Preceding unsigned comment added by 81.218.91.170 (talk) 18:28, 10 May 2013 (UTC)[reply]

Mountain prominences and parentage

Yesterday's main page linked to lists of mountains which made me think about peaks and cols and heights and prominences (yet) again. I find it confusing. So I have drawn a diagram:

Diagram of a mountain range showing peaks and cols, from which mountain parentage and prominences can be determined.

In particular, each col (lowercase letters) joins two peaks (uppercase letters) together, and "ownership" below the col is assigned to the higher-peaked mountain, which is deemed the parent of the other mountain. Prominence is the vertical distance from the peak to the col which joins that peak to its parent peak. The mountain height is the vertical distance above sea level (I suppose above "a" and "n" here). The colours in the diagram show "ownership" and how the tallest mountain gradually gets to include everything.

Is this diagram "correct"? Have I misunderstood anything? -- SGBailey (talk) 07:39, 10 May 2013 (UTC)[reply]

I think you have this right. However, prominence is measured above the key col (as you say) but the key col is with respect to any terrain higher than the peak in question. "Parent peak" is not essential to the concept. Indeed there are several definitions of "parent peak" with various advantages and disadvantages. I find the easiest way to think about prominence is as it says in Topographic prominence, "Suppose that the sea level rises to the lowest level at which the peak becomes the highest point on an island. The prominence of that peak is the height of that island. The key col represents the last isthmus connecting the island to a higher island, just before they become disconnected." What the mountains are like on this other island does not matter provided the land is higher somewhere. Thincat (talk) 14:49, 10 May 2013 (UTC)[reply]
OK That seems to match. Thanks. -- SGBailey (talk) 16:38, 10 May 2013 (UTC)[reply]
Neat diagram. From "The colours in the diagram show "ownership"" I would have expected, for example, A, B, C, and D to all be some shade of pink. Instead it looks like peaks whose color is other than blue have blue peaks as daughter peaks (and blue peaks have no daughter peaks), but which peak is the daughter peak (ownership) is not indicated by color.--Wikimedes (talk) 17:31, 10 May 2013 (UTC)[reply]
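Since the thread is about reading prominences off a terrain profile, here is a small illustrative sketch (my own code, not taken from the article) for a one-dimensional ridge line like the one in the diagram: walk outwards from each peak until you reach ground higher than the peak, remember the lowest point crossed in each direction, and take the higher of those two candidates as the key col.

```python
def peak_prominences(h):
    """Prominence of every local peak in a 1-D elevation profile h (a list of heights).

    For each peak, walk left and right until terrain strictly higher than the
    peak is reached; the lowest point crossed on each walk is a candidate col.
    The key col is the higher candidate, and prominence = peak - key col.
    The highest peak has no higher ground, so its prominence is its own height.
    """
    n = len(h)
    peaks = [i for i in range(1, n - 1) if h[i - 1] < h[i] > h[i + 1]]
    result = {}
    for p in peaks:
        candidate_cols = []
        for step in (-1, 1):                      # walk left, then right
            lowest, i = h[p], p + step
            while 0 <= i < n and h[i] <= h[p]:    # keep going until strictly higher ground
                lowest = min(lowest, h[i])
                i += step
            if 0 <= i < n:                        # we did reach higher terrain this way
                candidate_cols.append(lowest)
        result[p] = h[p] - max(candidate_cols) if candidate_cols else h[p]
    return result

# Tiny example: two peaks (heights 3 and 5) joined by a col at height 1.
print(peak_prominences([0, 3, 1, 5, 0]))   # -> {1: 2, 3: 5}
```

For the overall highest peak there is no higher ground to walk to, so by the usual convention its prominence is simply its height above the lowest surrounding level - sea level in the diagram.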

List of Products that contain triclosan

Triclosan (edit | talk | history | protect | delete | links | watch | logs | views)

Why hasn't anybody responded to my talk page section re: this topic?165.212.189.187 (talk) 19:13, 10 May 2013 (UTC)[reply]

I'll respond. AndyTheGrump (talk) 19:16, 10 May 2013 (UTC)[reply]

Double minor

What do you think would be the best minor for an environmental geologist?