Wikipedia:Reference desk/Science

From Wikipedia, the free encyclopedia

Revision as of 00:59, 7 August 2010

Welcome to the science section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia

   

How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


August 1

Question 9, part (d)(i)

http://www.tqa.tas.gov.au/4DCGI/_WWW_doc/008614/RND01/PHY5C_paper.PDF The right-hand grip rule tells me that current is flowing clockwise, which means the electrons are flowing anticlockwise, but apparently that is not correct. Am I missing something?--220.253.172.214 (talk) 01:08, 1 August 2010 (UTC)[reply]

The grip rule refers to the field created by moving charges. However, in this case the field is externally applied (i.e. not related to the electrons), and you are asked to explain how the charges move in response to the field (rather than the other way around). If you apply F = qv × B and the right hand rule you'll arrive at the correct answer that the electrons move clockwise in the applied field. Dragons flight (talk) 01:22, 1 August 2010 (UTC)[reply]
I don't see how F=qvB is usable here since you are not given values for F, v or B. I also fail to see the correct application of the right hand rule, sorry.--220.253.172.214 (talk) 01:35, 1 August 2010 (UTC)[reply]
You don't need the values, just the relationship and signs. In order to move in a circle, F must point towards the center. B points into the page. q is an electron, so it is negative and flips the sign. Given those factors, you use the right hand rule (specifically this) to decide whether clockwise or anti-clockwise v fits the other constraints. Dragons flight (talk) 01:45, 1 August 2010 (UTC)[reply]
To be clear, you need F = qv × B (the full vector cross product), not just F = qvB. Dragons flight (talk) 01:48, 1 August 2010 (UTC)[reply]
You don't want or need to use the right hand grip rule - you need Fleming's left hand rule for motors (again remember that current is in the opposite direction to electron flow). You have the field, and the force (the force must be directed into the centre of the circle to make the electrons turn) - one of the two directions of current gives a force inwards (the other outwards) - from this you can find out whether the current is clockwise or anticlockwise. Sf5xeplus (talk) 12:35, 1 August 2010 (UTC)[reply]
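For reference, a worked version of the direction argument in the replies above; it is only a sketch, assuming the setup described in the thread (field into the page, electron charge q = -e, and a force that must point toward the centre of the circle for circular motion):

```latex
% Setup assumed from the thread: \vec{B} into the page, electron charge q = -e.
\[
  \vec{F} = q\,\vec{v}\times\vec{B}, \qquad q = -e, \qquad \vec{B} = -B\,\hat{z}\ (\text{into the page}).
\]
% Take the electron momentarily at the top of its circle, moving with \vec{v} = v\,\hat{x}:
\[
  \vec{F} = (-e)\,(v\,\hat{x})\times(-B\,\hat{z}) = evB\,(\hat{x}\times\hat{z}) = -\,evB\,\hat{y}.
\]
% The force points in -\hat{y}, toward a centre below the electron, so the electron
% curves clockwise, consistent with the answers above.
```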

Identical twins and children

Suppose a woman has a threesome with two identical male twins and gets pregnant from this. Is there any way to find out which twin is the father? 68.237.21.90 (talk) 01:27, 1 August 2010 (UTC)[reply]

Theoretically, yes. But you'd have to use a more sensitive technique than traditional paternity testing which only looks at a few loci at which the identical twins are almost certain to be identical. There is a certain mutation rate inherent during gametogenesis, so if you had an appropriate "sample" from each of the potential fathers you could potentially identify positions at which the offspring might differ between them using whole genome sequencing. --- Medical geneticist (talk) 01:38, 1 August 2010 (UTC)[reply]
How does that work? Why would one sperm from a man have the same mutations during gametogenesis as another sperm from that man? And, if that is the case, why would it be different mutations in an identical twin? --Tango (talk) 01:46, 1 August 2010 (UTC)[reply]
The gametes of the two identical twins will contain different de novo mutations. By chance, some of those mutations will be present in the majority of that individual's sperm (while others may be present in only a fraction of the sperm). This is called gonadal mosaicism. If the two sperm samples were sequenced deeply enough using "next-generation" methods, it would be theoretically possible to identify new mutations unique to either of the twins (those would be the positions at which their respective offspring would differ). Then you could just sequence those sites using standard methods in the child. Please note, however, that the OP asked if there was "any way to find out which twin is the father", not whether that method was "practical", "likely to succeed", "admissible in court", or "achievable by the average person". Even though what I've outlined is possible, it would require an extraordinary effort by today's paternity testing standards. --- Medical geneticist (talk) 11:27, 1 August 2010 (UTC)[reply]
I'm sorry, but you aren't making sense. You seem to be using "gamete" and "sperm" as if they are different things. Sperm are the male gametes. When you say "gamete" do you actually mean "gonad"? --Tango (talk) 15:10, 1 August 2010 (UTC)[reply]
Spermatozoa are the mature male gametes. However, de novo mutations occur in the earlier gametic progenitors (during DNA replication), which is why I called them gametes and not sperm. See spermatogenesis and meiosis. The gonad is the sex organ (testis or ovary) that contains and supports the developing gametes, along with having endocrine functions. There is certainly a difference between "gamete" and "gonad". When we talk about "gonadal mosaicism" or "germline mosaicism", the implication is that there can be gametes (progenitors and mature gametes) with different genetic compositions -- either chromosomal or at the level of individual nucleotides. In the context of the OP's question, the most likely sample to be analyzed would be the mature spermatozoa as opposed to a testis biopsy specimen, which is why I specified that it would be the sperm sample that would be sequenced in this highly improbable scenario. Does it make sense now? --- Medical geneticist (talk) 18:00, 1 August 2010 (UTC)[reply]
No, the same problem exists: if the mutation happens in the production of individual gametes then there will be no correlation between the mutations in different gametes. Without such a correlation, you can't identify the father from new sperm. I think you are trying to say that the mutations happen in the production of gametocytes, not gametes. --Tango (talk) 23:16, 1 August 2010 (UTC)[reply]
Point taken. It was sloppy for me to use the word "gamete" to refer to gametic progenitors. However, gametocyte isn't quite right either. The proportion of sperm carrying a given de novo mutation will depend on how early in the process of gametogenesis the mutation was introduced. In order to generate a significant degree of germline mosaicism, the new mutation would have to occur in the primordial germ cell or spermatogonium. If the mutation occurred as late as the gametocyte, there would not be enough affected mature gametes to allow determination of paternity. --- Medical geneticist (talk) 02:31, 2 August 2010 (UTC)[reply]
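A schematic sketch of the comparison outlined above: once each twin's sperm sample has been sequenced deeply enough to call twin-specific de novo variants, paternity reduces to asking which twin's private variant set the child shares. The variant labels below are invented placeholders, not real genomic positions.

```python
# Invented placeholder variant labels, purely illustrative.
twin_a_private = {"chr3:1002001 A>G", "chr8:554120 C>T", "chr17:90210 G>A"}
twin_b_private = {"chr1:777777 T>C", "chr12:123456 G>C"}
child_variants = {"chr8:554120 C>T", "chr17:90210 G>A", "chr1:999999 A>T"}

shared_with_a = child_variants & twin_a_private
shared_with_b = child_variants & twin_b_private
if len(shared_with_a) > len(shared_with_b):
    print("child shares more private variants with twin A:", sorted(shared_with_a))
elif len(shared_with_b) > len(shared_with_a):
    print("child shares more private variants with twin B:", sorted(shared_with_b))
else:
    print("inconclusive")
```

As the thread notes, the hard part is producing reliable twin-specific variant calls in the first place, not this final comparison.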
(edit conflict) AFAIK, not with current technology. Children of identical twins have the same genetic similarity as half-siblings (rather than as cousins, which is what would be expected for the children of non-identical siblings). While one could find genetic differences between identical twins at the full genome level, genetic fingerprinting and paternity testing use a tiny, tiny fraction of the total genome, such that one cannot tell the difference. Remember that the Human Genome Project took over 10 years to complete the first full sequencing of a human genome, and that was just a generalized genome for humans in general. An individualized full genome for even one person is a practical impossibility with current technology. (Ed note: striking that last claim. It appears it has been done, but it is still generally impractical to do for purposes such as this.) Hypothetically, if you COULD obtain a full genome from the child and both twins, you MAY be able to tell the difference. But that isn't possible under modern methods of DNA fingerprinting. See Twin#Genetic_and_epigenetic_similarity for a (IMHO) altogether too-brief discussion of twins and genetics. --Jayron32 01:42, 1 August 2010 (UTC)[reply]
You've blinked and missed the revolution. Sequencing a full individual human genome only costs a few thousand dollars right now, and is expected to drop to less than $1000 in a year or two. (Although I think it's correct that this problem would be extremely difficult even given both full genomes.) Looie496 (talk) 02:01, 1 August 2010 (UTC)[reply]
Note that "a few" still means around $20-15 thousand. But yeah, getting cheaper all the time. --Mr.98 (talk) 02:17, 1 August 2010 (UTC)[reply]
Part of the reason why DNA fingerprinting is legally admissible is that there is solid experimental backing that it works; there are now millions of successful uses which allow us to say that matching two samples via DNA fingerprinting methods is as reliable as can practically be. Full genome sequencing indicates that the current number of published full genomes of individuals runs somewhere in the dozens at the outside; while commercialization in the next few years may increase this number, it is not in any way currently a reliable method of identification, which is what is needed to use it to determine which of two identical twins is the father of a child. --Jayron32 02:33, 1 August 2010 (UTC)[reply]
Well, it hasn't been tested in court because of the minimal number of people whose genomes have been sequenced, but I can't imagine it would be seen as unreliable. Basically DNA fingerprinting is sort of like saying two books must be the same because all 20 chapters have the same number of pages. A match based on full genome sequencing is like saying two books must be the same because they match word for word and letter for letter. There just isn't any way it could conceivably give a wrong answer. Looie496 (talk) 06:40, 1 August 2010 (UTC)[reply]
Oh, I am with you on that point. However, I am also married to a forensic scientist, so there is also the other end of it. Being reliable for a scientist isn't the same as being reliable to a lawyer. They are overlapping, but not necessarily identical, sets of data... --Jayron32 03:41, 2 August 2010 (UTC)[reply]

Where is the center of the universe?

An explosion radiates energy equally in all directions. As the energy loses heat, it decays into matter. The Big Bang was such an explosion, and it created a universe, which continues to expand, after nearly 14 billion years. So one would assume that the center of the universe is empty, since all its matter continues to move outwards, and away from other matter. But I am told that our solar system is at the center of the universe, as is our galaxy, the Milky Way. How is this possible? On the other hand, Penzias and Wilson were able to measure the residual microwave radiation of the Big Bang, in frequency and temperature, at the outer edges of the universe. They found that the values were equal in all directions, to within minuscule variations. How would this be possible, if some of their measurements were taken from one side of the universe to the other, across its empty center? Or from closer to one edge of the universe than to another, more distant edge? Where are we, in relation to the center of the universe? ---- —Preceding unsigned comment added by Geepod2 (talkcontribs) 01:57, 1 August 2010 (UTC)[reply]

The universe has no center; an alternative perspective is that everywhere is part of the universe's center. The universe does not expand from a single point, like an explosion, it expands from every point, like a rising loaf of bread. One standard analogy is to think of the universe as being like polka dots on a balloon surface. If you draw polka dots on a balloon, and blow it up, all of the dots move away from each other, yet no one of the dots is actually the "center" dot. This is because the universe is not expanding into empty space, it is creating the space itself. The article shape of the universe discusses some of the common theories about the shape of the universe, and many of the best fit shapes are "edgeless". --Jayron32 02:02, 1 August 2010 (UTC)[reply]
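A tiny numerical illustration of the "no centre" point above: under uniform expansion every observer sees every other point recede in proportion to its distance, so no observer is singled out as central. The 1-D positions and scale factors below are arbitrary, chosen only for the sketch.

```python
points = [-3.0, -1.0, 0.5, 2.0, 4.0]   # arbitrary comoving positions on a 1-D line
a_before, a_after = 1.0, 1.1           # the "balloon" grows by 10%

for observer in points:
    ratios = set()
    for p in points:
        if p == observer:
            continue
        d_before = abs(a_before * (p - observer))
        d_after = abs(a_after * (p - observer))
        ratios.add(round((d_after - d_before) / d_before, 6))
    # every observer sees the same fractional recession of all other points
    print(f"observer at {observer:+.1f}: fractional recession of neighbours = {ratios}")
```

Each line prints {0.1}: every dot sees every other dot move away by 10% of its distance, with no dot privileged as the centre.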
When thinking on the scale of the universe, Newtonian physics is no longer a good approximation. HiLo48 (talk) 02:21, 1 August 2010 (UTC)[reply]
You said you've been told the solar system is at the center of the universe. Whoever told you this is completely off the mark since, as described above by Jayron, the universe has no center. You are probably better off disregarding anything else that person told you about cosmology. Dauto (talk) 05:55, 1 August 2010 (UTC)[reply]
That person maybe meant the Observable universe: since the universe is expanding uniformly and light has had an equal time to travel in all directions, we are of course in the centre of our observable universe. Every point is in the centre of the universe observable from that point. Gr8xoz (talk) 10:11, 1 August 2010 (UTC)[reply]

The "center" of the universe isn't in the universe at all! Think of it like an expanding balloon, but a 3-dimensional surface on a 4-dimensional sphere instead of a 2-dimensional surface on a 3-dimensional sphere. --138.110.206.100 (talk) 12:54, 1 August 2010 (UTC)[reply]

Let's pretend that there are only two dimensions of space for a minute. If the universe were infinite in size, then clearly there is no boundary and nowhere can be called "the centre". But if the universe were finite in size (like a piece of paper) then there is a clear boundary and one can define a central point. The same goes in 3D: if the universe is infinite then there is no centre. If it is finite, there is one.
There is an interesting alternative however. Imagine we're back in 2D again. Imagine curving the piece of paper into a sphere. The universe (the surface of the paper) is still finite, but now there is no boundary and so no centre on the piece of paper. Or you could form the paper into a doughnut shape or something even crazier. These "curved back on themselves" model universes are said to have "non-standard topologies".
So what do we think applies to our universe? Well, some cosmologists are working on non-standard topology theories, and there are astronomers looking at the sky for signs that we live in such a universe. If light could go "all the way around", we might be able to see the same star at opposite points in the sky. So far we haven't found any evidence for non-standard topology but we certainly haven't ruled it out. But the "standard model" (our best theory of the universe at the moment) says either that the universe is infinite, or that it is so huge that each point in our observable universe is so far from the boundary that we can't tell which is the central point. And of course if the universe is finite in size (but much much bigger than the observable universe) then the centre of the universe will not be in our observable universe anyway. Most cosmologists don't think that the universe has a wrap-around topology like in the balloon example, so that's not a good explanation as to why there is no centre in my opinion. 152.78.128.149 (talk) 13:51, 1 August 2010 (UTC)[reply]
Can you give some examples of cosmologists that seriously think the universe has (or is likely to have) a boundary? My understanding of modern cosmology is that we either live in an infinite universe, or a finite universe without boundary (such as a torus). I've never heard anyone seriously suggest the universe might have a boundary - what would that boundary be like? --Tango (talk) 15:16, 1 August 2010 (UTC)[reply]
True. I don't think I've ever heard of anyone considering a universe with a boundary. Having said this, I work in particle physics (I've just started a PhD, so I'm no expert!) and a model that I've looked at proposes that there is a fourth spatial dimension of (small) finite length that is simply a line segment and not compact like a circle. Technically though, it can be formed by orbifolding a circle. This extra dimension thus has two boundaries. But the orbifolding process naturally leads to boundary conditions on the fields that can live in the dimension so that excitations (i.e. particles) travelling into either one of the boundaries are reflected. So a boundary isn't a completely crazy idea if you add some extra postulates (like those boundary conditions) that effectively tell you what happens to things travelling into a boundary. I definitely haven't heard any cosmologist talking about this stuff though. I'm the poster you replied to, despite the different IP! 86.137.169.18 (talk) 15:49, 1 August 2010 (UTC)[reply]
Inflation makes a large homogeneous region, but not (necessarily?) an infinite one. I remember one of my undergraduate professors sketching on the blackboard a flat region of space surrounded by a crazy wavy region on which he wrote "here be dragons". That's an accurate if sardonic representation of the present state of knowledge in cosmology. The problem is that you can't experimentally distinguish models that only differ outside the observable universe. If the flat region were smaller than the observable universe then we'd see huge inhomogeneities in the CMBR, so that much is ruled out. But whether the flat region is finite and larger than that is anybody's guess. Compact spatial topologies have the same problem. When people propose that space wraps around, they always have it wrap around on a scale smaller than the observable universe, simply because that's the only way the model can make any new predictions. So far, all testable models of this type have been ruled out, but whether space wraps around on a larger, untestable scale is an open question, and may always be. -- BenRG (talk) 22:06, 1 August 2010 (UTC)[reply]
Sure, the inflated region may well be finite, but what would happen at the edge? There would still be spacetime outside it, would anything actually happen when you crossed the edge other than the average density increasing (a lot)? You are right, though, what we are discussing is largely unscientific: by definition, what happens in the unobservable universe is not empirically verifiable. --Tango (talk) 23:24, 1 August 2010 (UTC)[reply]
For the universe to have a shape, that "shape" would take place on a "plane" that would be the "boundary" of said universe. The existing space within the boundary would be the universe, and the non-existing emptiness is what's outside the boundary. Since the universe has an age, it must have had such a boundary while it had a finite size early in its life, and the same should be true when multiverses are created. If such a boundary does exist, it would likely be moving faster than light, since matter cannot move faster than light but the expansion of space can. The centre of the universe could be considered the origin point of the Big Bang, but that point might no longer meaningfully exist as the universe itself could have moved relative to that point, yet there is no outside frame of reference to compare the location of the universe to. ~AH1(TCU) 23:31, 1 August 2010 (UTC)[reply]
I'm afraid that's all nonsense. It just doesn't work like that. When we talk about the universe being a torus (for example) we don't mean a torus embedded in some higher-dimensional space, we're just talking about its inherent shape. It's a little difficult to get your head around, but thinking of the universe as being embedded leads to completely nonsensical conclusions. The big bang does not mean the universe is finite - the universe didn't necessarily start as a point, it could well have been infinite at the time of the big bang and then expanded at all points. --Tango (talk) 23:38, 1 August 2010 (UTC)[reply]
To expand on Tango's idea for a bit; the idea of defining the universe as expanding within something has the problem that this merely redefines what the Universe is, as does the idea of "Multiverses". The Universe is, by definition, everything; if you define the Universe as part of everything, then you aren't talking about the Universe, but something smaller. If the Universe is expanding into something, that something is the real Universe and the thing doing the expanding is a smaller subunit of it. --Jayron32 03:39, 2 August 2010 (UTC)[reply]
Indeed, well put. I like to define "universe" as what you get by taking your current point in spacetime and then adding every point that is causally connected (two points are causally connected if something that happens at one point can have an effect at the other, which basically means light can travel between them) to that point (which results in the observable universe), and then every point that is causally connected to any of those points and repeating ad infinitum. By that definition, any "other universe" in the "multiverse" can either have no effect on us whatsoever, so we might as well assume it doesn't exist, or is part of our universe. You can then forget about multiverses as anything more than a mathematical convenience. --Tango (talk) 12:05, 2 August 2010 (UTC)[reply]
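Tango's construction above is essentially a transitive closure: start from a point, keep adding everything causally connected to what you already have, and stop when nothing new appears. A toy graph version of that idea, with made-up event labels, is sketched below.

```python
from collections import deque

# Made-up events and causal links, purely illustrative.
causally_connected = {
    "here-now": {"A", "B"},
    "A": {"here-now", "C"},
    "B": {"here-now"},
    "C": {"A"},
    "elsewhere": {"X"},   # a causally disconnected "other universe"
    "X": {"elsewhere"},
}

def universe_of(start):
    """Everything reachable by repeatedly adding causally connected points."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in causally_connected[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(universe_of("here-now")))   # ['A', 'B', 'C', 'here-now']; "elsewhere" never appears
```

By this definition, anything in the "elsewhere" component can never affect us, which is exactly the sense in which it can be ignored.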

Placebo Questions

I was reading the placebo article on wikipedia and I was wondering a few things...

1. Why does the placebo effect wear off over time?
2. It also says that a placebo may not always give immediate relief or improvement while the real drug will; why is that?
3. Why does the placebo effect only work in 30% of people? Is it a certain group of people? People that really trust their doctor, or people who really think it's going to work? I know the article talks about personality and the placebo effect and says there is no difference in placebo effect based on personality, but does anyone know of any studies that show a difference? —Preceding unsigned comment added by 76.91.30.156 (talk) 08:02, 1 August 2010 (UTC)[reply]

Placebos work on the premise that psychology can affect physiology. There is a wide physiologic range among people, and there is likely a wide psychological range as well. Even physiologically speaking, a common dose of medication is usually referred to as the ED50, the dose at which 50% of drug takers will manifest the effect. (To judge the lethal dose, a similar LD50 is used.) And there's a lot of psychology involved in almost everything one does -- numerous studies have shown that people who engage in social interaction, exercise regularly, etc. can be shown to complain of less pain. So there are numerous factors involved in things like post-operative pain/sensitivity/complaints. Perhaps that's a start for you. DRosenbach (Talk | Contribs) 13:05, 1 August 2010 (UTC)[reply]
I corrected your ED50 link. Dose thresholds are meaningful where there is a correlation between dosage and effect. There is no consistent correlation for placebos. Cuddlyable3 (talk) 13:23, 1 August 2010 (UTC)[reply]
There can be a relationship between dosage and effect with placebos, if only insofar as the patient believes that two different placebo pills contain different dosages of the illusory drug... --Jayron32 03:34, 2 August 2010 (UTC)[reply]
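For the ED50 mentioned above, a minimal illustration: the ED50 is the dose at which half of subjects respond, which can be read off a dose-response curve. The Hill/logistic shape and the parameter values below are assumptions for the sketch, not real drug data.

```python
def response_fraction(dose, ed50=10.0, hill=2.0):
    """Fraction of subjects responding at a given dose (assumed Hill/logistic curve)."""
    return dose ** hill / (ed50 ** hill + dose ** hill)

for dose in (1, 5, 10, 20, 50):
    print(f"dose {dose:>2} -> {response_fraction(dose):.0%} of subjects respond")
# The dose that prints 50% is the ED50; here it is 10 by construction.
```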

Computer-controlled cars

I think it might have been in Time Cop; I recall a driverless car that took its occupant to the desired location. Assuming there is no track in the road guiding such a car, could such a system work on the GPS technology we currently have, assuming we got vehicular technology up to such a level? Cars would be speeding around and would never crash because the GPS map would know where each car is at every moment. Assuming streets and highways would be off limits for pedestrians (or some kind of sensors would allow for them to be integrated into the GPS map), is such a system possible, or can, for instance, GPS not detect objects to such precise measurements? DRosenbach (Talk | Contribs) 13:10, 1 August 2010 (UTC)[reply]

Yes, it has been done. See DARPA Grand Challenge for an example. The third challenge in 2007 took place on the streets of a disused airbase, although the only other cars were other challengers. 62.56.61.163 (talk) 13:38, 1 August 2010 (UTC)[reply]
(ec) GPS satellite navigation can give adequate location precision though safety would require keeping generous spacing between vehicles and provision for signal shadow areas. There would be concerns about the vulnerability of the radio navigation to interference, whether it is accidental or deliberate jamming. The radio controlled traffic lanes would have to be isolated from all other traffic, comparable to creating a new railway system. With central control, traffic could move very efficiently but it would be constricted by the limited number of entrance/exit points. These points would have to include adequate spaces for acceleration, deceleration and queuing. All possible failure modes need study. Cuddlyable3 (talk) 13:42, 1 August 2010 (UTC)[reply]
You wouldn't just use GPS — the car itself would have localized sensors to detect other vehicles, walls, pedestrians, dogs, etc. It takes more than GPS to navigate through a real-life environment for fairly obvious reasons (GPS can't tell you when an old lady is in the street). Anyway, this is potentially something out there for the future, though aside from the rather copious technical hurdles involved, and the problematic legal ones — how would such a thing be insured? who is at fault when they crash? make no mistake, there will be crashes, no matter how clever the technology is, because that's how things work in the "real world" — there is also a high psychological barrier to being driven around in such a fashion by a computer.
Getting out my crystal ball, I would suggest that unmanned cars will probably only be used in rather limited situations, like guarding a border fence or a military base, where the conditions can be relatively controlled for and the car itself can defer to a remotely-controlling human in the case of anything anomalous. I doubt they will be used for general transportation, especially since if you want someone to drive you somewhere, it is not that expensive to just hire an actual human being. I suspect automatic transportation for general use would only be used on tracked systems, like subways or rail, where the conditions can be easily automatically limited. --Mr.98 (talk) 14:22, 1 August 2010 (UTC)[reply]
(ec) The autonomous vehicle article goes into more detail. Basically GPS is used only for deciding which junctions to take. Each car has short-range LIDAR (IR laser based RADAR) and video-cameras so it can detect what other vehicles are doing, and avoid collisions. There is a car platoon protocol; this allows cars to form convoys. Each car indicates to the car behind that it is about to brake, so that the second car can brake earlier. This allows the separation distance to be less. The lead car decides what speed to travel at and how to avoid collisions. If each car communicates its acceleration/braking profile to the others, then the platoon can follow the least capable car, and the separation distance can be very close. Of course, there needs to be a way for cars to announce that they are about to leave the platoon; if a car in the middle needs to leave, then the car behind it will become a temporary lead and drop back a bit to make some room. The leaving car will then drop back a bit itself, and then leave at the next junction. The car behind will then rejoin the platoon. CS Miller (talk) 14:31, 1 August 2010 (UTC)[reply]
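A toy sketch of the platooning point above: with a constant-deceleration model and made-up braking figures, the platoon has to be driven to its least capable braker, and broadcasting braking intentions is what lets the following gaps shrink. The numbers are assumptions, not figures from any real protocol.

```python
def stopping_distance(speed_mps: float, decel_mps2: float) -> float:
    """Distance to stop from speed_mps under constant deceleration decel_mps2."""
    return speed_mps ** 2 / (2 * decel_mps2)

platoon_decels = [7.5, 6.0, 8.0, 5.5]   # assumed braking capability of each car, m/s^2
speed = 30.0                            # m/s, roughly motorway speed
latency = 0.1                           # s, assumed broadcast-plus-actuation delay

weakest = min(platoon_decels)
print(f"platoon brakes at {weakest} m/s^2, stopping distance {stopping_distance(speed, weakest):.0f} m")
# If braking intentions are broadcast ahead of time, the per-car gap only needs to
# cover the communication/actuation delay rather than a full human reaction time.
print(f"per-car following gap can shrink toward about {speed * latency:.1f} m")
```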
An Italian university is scheduled to have a driverless car driving from Italy to China in October. http://www.dw-world.de/dw/article/0,,5829135,00.html Impressive if true - but I thought it was impractical for a human driven car to do the same route due to wars? 92.29.127.162 (talk) 17:11, 1 August 2010 (UTC)[reply]
Wars? Between who? The link and my searches don't seem to discuss the route, but mention Siberia and Mongolia. In any case, the obvious route from what's been described would be reaching Russia somehow from Italy (many practical ways) then on to Mongolia and China. There are of course other routes, e.g. [1]. All these options have a variety of safety risks, likely a lot of paperwork & money & time & other facets of bureaucracy and probably a bunch of bribes too (for example you'll probably need a 'guide' to drive in China http://www.lonelyplanet.com/thorntree/thread.jspa?threadID=1780257), so while not for the faint-hearted it is doable, and there must be a reasonable number of people who do that sort of thing, i.e. driving from Western Europe to China, every year. Nil Einne (talk) 19:18, 1 August 2010 (UTC)[reply]
Technically, the Italian car isn't exactly driverless. There will be someone sitting behind the wheel the entire time - they'd never get permission to have an entirely unmanned machine to drive in those places. How many times will he have to take control? Lots, I suspect. SteveBaker (talk) 01:46, 2 August 2010 (UTC)[reply]
Audi designed and built a driver-less Audi TTS to do the climb up Pikes Peak at full tilt [2]. Episode 10x08 of Top Gear featured an autonomous BMW doing a full-ball lap of the track [3]. As mentioned by others, such systems won't only use GPS. Some of the other systems that are already in use today include radar guided cruise control, park distance control, lane departure warning, traction control, vehicle stability control, ABS, infra-red and visible light cameras featuring facial, vehicle, pedestrian and road sign recognition, self-parking systems etc. It's amazing how many of the necessary systems are already in place in the (top-of-the-range) cars you can buy today. Zunaid 18:21, 2 August 2010 (UTC)[reply]

A few questions about glycolysis

1) What is the logic of glycolysis? I mean, its main purpose is to provide energy, and it also feeds multiple pathways, i.e. its intermediates are used in many other pathways; but does it have any other functions besides the above?

2) Glucokinase is synthesized about 2 weeks after birth. After feeding, insulin stimulates the glucokinase system. What kind of regulatory mechanism is operating here?

Please help me by replying fast, I'm waiting! —Preceding unsigned comment added by Priyankajoshi7 (talkcontribs) 14:59, 1 August 2010 (UTC)[reply]

As a point of netiquette, it is generally considered impolite to ask for a quick response; we are all volunteers here. For future reference, please sign your questions and replies by typing in ~~~~. The tilde key is to the left of the '1' (one) key on US keyboards and three keys to the right of the 'L' on UK keyboards, or press the pen-like button on the standard editing toolbar.
Back to your question, bio-chemistry is not my discipline, but our glycolysis article gives some of the other uses of the intermediates, especially the biochemical logic and intermediates for other pathways sections. CS Miller (talk) 15:46, 1 August 2010 (UTC)[reply]

What is convergent evolution?

If an octopus and I have a common ancestor who possessed what I might call 'proto-eyeness', by which I mean not eyes themselves, as we know them, but everything necessary to the production of descendants with eyes, then can my own eyes and the octopus's eyes be said truly to have evolved independently, and is there really any such thing as convergence? By analogy (no pun intended), a woman with no formal education but wealth and social connections has three daughters, two of whom go to university. The university degrees did not evolve independently of one another: they are each the result of the mother's wealth and social connections (or 'proto-universitiness'). I mention the third daughter because I know some of our cousins have no eyes. 91.107.28.138 (talk) 20:13, 1 August 2010 (UTC)[reply]

Did you read Convergent evolution? A good example: sharks and dolphins share many similarities and look a lot alike. However they are not closely related at all. They converged on many of the same traits because they lived in similar environments and benefited from many of the same adaptations. Friday (talk) 21:17, 1 August 2010 (UTC)[reply]

Friday, I did read Convergent evolution. It says of bats and birds that 'their last common ancestor did not have wings'. But I think their last common ancestor did have 'wingedness', or everything necessary for the production of descendants with wings. 91.104.164.37 (talk) 17:04, 2 August 2010 (UTC)[reply]

You are describing parallel evolution rather than convergent evolution. --Tango (talk) 23:27, 1 August 2010 (UTC)[reply]
The term convergent evolution doesn't carry any implication about causes, it only says that two species are more similar now, with respect to a certain feature, than their ancestors were. It is entirely possible to have convergent evolution at the organism level in spite of divergent evolution at the gene level. In the case that you describe, convergence would result from unveiling of a previously hidden similarity. That isn't how convergence normally arises, in the opinion of most biologists, but if it did, it would still be called convergent evolution. Looie496 (talk) 00:32, 2 August 2010 (UTC)[reply]
An interesting recent paper argues that the distinction between "parallel evolution" and "convergent evolution" is artificial and should be discarded. See Convergence and parallelism reconsidered: what have we learned about the genetics of adaptation? Adrian J. Hunter(talkcontribs) 09:28, 2 August 2010 (UTC)[reply]
The eyes of an octopus evolved from skin cells. Your eyes evolved from brain cells. They're not modifications of the same 'proto-eye'. — DanielLC 22:47, 2 August 2010 (UTC)[reply]
91, I don't think it's fruitful to consider this issue of an organism having "everything necessary to the production of descendants with *some feature*". Given enough time and evolutionary pressure, pretty much all that's necessary is being alive, and you can produce an astounding array of descendants with novel features. It seems very impressive that the vast array of life on earth almost certainly came from a common ancestor, but hey, life is very impressive. Friday (talk) 15:13, 3 August 2010 (UTC)[reply]

can loud music (esp. prominent electric guitars, drum, bass) kill cells on a microscope slide?

When imaging cells for sometimes up to six hours at a time, I of course take advantage of my laboratory's nifty sound system ;-)

but I want to ask if thundering bass in the same room can affect my experiments? John Riemann Soong (talk) 21:28, 1 August 2010 (UTC)[reply]

Sonication usually uses ultrasonic frequencies. It seems very unlikely that audio frequency music would be able to disrupt cell membranes. Nimur (talk) 21:42, 1 August 2010 (UTC)[reply]
Well I wasn't thinking disrupting cell membranes, but maybe activating sensitive mechanoreceptors that would set off apoptosis? John Riemann Soong (talk) 22:45, 1 August 2010 (UTC)[reply]
I read one paper where apparently sustained exposure to "low-frequency sonication" could kill cells. Would that be in the audible range? John Riemann Soong (talk) 23:09, 1 August 2010 (UTC)[reply]
Maybe something like this, this or even this? ~AH1(TCU) 23:19, 1 August 2010 (UTC)[reply]
The first link is from 1932 (!) and the others describe killing cells with ultrasound. Bass and sub-bass are infrasound so I don't think they can kill cells (I'm glad of this, although I do get a strange headache after listening to drum and bass). This paper states that "low-frequency sonication" is of 25 kHz, just outside of our hearing range and the opposite end of the spectrum compared to bass. I think that sonication must also be very loud sound, considering that bats produce ultrasound at 130 decibels but don't drop out of the sky as their cells die. Considering all of this I agree with Nimur that it is very unlikely that loud music could kill cells on a slide. Smartse (talk) 00:45, 2 August 2010 (UTC)[reply]
Sound waves have two measures of strength: their intensity (or amplitude) and their frequency (or tone or note). Both can do damage. High-frequency sound waves carry more energy than lower-frequency sounds of the same intensity, AND louder sounds have more energy. Your hearing can be damaged by high-volume infrasound, even if you cannot hear it. Presumably, very high-volume but low-frequency sound waves could cause damage even at the cellular level. --Jayron32 03:31, 2 August 2010 (UTC)[reply]
Yes, but noise-induced hearing loss occurs because the cells that are damaged have evolved specifically to absorb sound energy - normal cells won't be affected. My physics is pretty rusty but AFAIK the wavelength of low frequency music would be too large to interact with something as small as a cell. What frequencies and decibels do you mean by "very high volume, but low frequency", and can you find a source to back up your presumption? Smartse (talk) 08:30, 2 August 2010 (UTC)[reply]
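A quick order-of-magnitude check of the wavelength point above, assuming sound travels at roughly 1500 m/s in water or culture medium and a cell is about 10 micrometres across; the values are assumptions for the sketch.

```python
speed_in_water = 1500.0   # m/s, assumed speed of sound in water / culture medium
cell_size = 10e-6         # m, a typical mammalian cell is roughly 10 micrometres across

for freq in (50, 1_000, 25_000, 1_000_000):   # bass, midrange, "low-frequency sonication", ultrasound
    wavelength = speed_in_water / freq
    print(f"{freq:>9} Hz -> wavelength {wavelength:10.4f} m, about {wavelength / cell_size:,.0f} cell diameters")
```

Even at 1 MHz the wavelength is still around 150 cell diameters, which is consistent with sonication acting mainly through cavitation and intensity rather than any wavelength match to the cell.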
Personally I would be more concerned about the stability of the optics during imaging than killing cells - excessive vibration could shift or blur your images (how long is each exposure?). However if you are not finding this a problem already it's probably ok - but you should check if anyone else in the lab is doing vibration sensitive experiments, or you might find yourself quite unpopular when they check their images/results. Equisetum (talk | email | contributions) 08:32, 2 August 2010 (UTC)[reply]
I usually take time lapse images -- I use a 55-75 msec exposure time but for space constraints (to avoid taking 1 terabyte of data each day) I use a 500-800 msec delay between each capture. Focal drift is always an issue, so I always have to play with the knob now and then, even to stay on the same focal plane (and particles I'm tracking often shift focal planes as well), but I did notice that with music on, I seemed to have to adjust the knob more frequently. John Riemann Soong (talk) 14:20, 2 August 2010 (UTC)[reply]


August 2

Units of Time

Are there any natural units for measuring time not based on the movements of the Earth?

Like, say you were far out in space trying to communicate to an alien how long a day on Earth is, what unit of measurement could you use that's universal? Are there any longer than the amount of time it takes hydrogen to perform a hyperfine transition? Just curious. 108.3.173.100 (talk) 00:26, 2 August 2010 (UTC)gejl[reply]

The most natural unit of time is the Planck time. There's a bit of arbitrariness to do with factors of pi, but otherwise it's something that aliens with a similar understanding of physics to ours should share. The Planck time is very short, though, so doesn't answer your second question. The Hubble time is very long, but also pretty fundamental. --Tango (talk) 00:30, 2 August 2010 (UTC)[reply]
It's "natural" in a physics-y sense of "being based of fundamental physical quantities". However, it's not particularly "natural" in the colloquial sense of being familiar and easy to recognize. I would also point out that the Planck time has a standard uncertainty of 5×10−5 which is pretty crappy. This is mostly due to our difficulty in measuring G. By comparison, the best atomic clocks have a precision ~5×10−16 at measuring a second as defined to be "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom". If we wanted a really rigorous comparison, I think we would definitely tell them something based on an atomic clock. For a quick and dirty comparison, it would probably make more sense to use a simpler physical process that is easily accessible such as the half-life of a common radioactive decay. Dragons flight (talk) 03:17, 2 August 2010 (UTC)[reply]
Assuming the alien can understand multiplication/exponents, is there anything wrong with applying those to the tiny units? Lenoxus " * " 00:49, 2 August 2010 (UTC)[reply]
Accuracy. As mentioned above, Planck time is only measured accurately to around 5×10⁻⁵ (confirm here at NIST.gov). 50 ppm might seem tiny - but for frequency stability, it's actually really terrible - we've had better frequency-stability technology for literally thousands of years, even using shoddy "oscillators" like pendulums and water clocks. If a clock drifted with 50 ppm of timing error, it would accumulate about a half-hour of error each year. This would definitely become problematic. A modern run-of-the-mill off-the-shelf digital watch uses a quartz oscillator with an (electronically compensated) accuracy of around 1e-8 (and you can still tell a few seconds of drift per month or so). A modern scientific timing system can achieve many orders of magnitude better frequency stability by using even better oscillators, like atomic resonances or cesium decay rates. For practical purposes, this means that we can trust our clocks accurately enough to synchronize distant electronics and computer systems to a very high degree of precision and accuracy - making possible things like phase shift encoding in GPS and mobile telephones. Better frequency stability in such technologies directly corresponds to higher achievable data rates. Nimur (talk) 15:41, 2 August 2010 (UTC)[reply]
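A rough back-of-the-envelope check of the drift figures quoted above; the only inputs are the fractional errors already mentioned and the length of a year.

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_drift(fractional_error):
    """Accumulated timing error per year for a given fractional frequency error."""
    return fractional_error * SECONDS_PER_YEAR

print(annual_drift(5e-5) / 60)      # ~26 minutes per year at 5e-5 (the Planck-time uncertainty scale)
print(annual_drift(1e-8))           # ~0.3 s per year for a typical quartz watch
print(annual_drift(5e-16) * 1e9)    # ~16 ns per year for a good caesium standard
```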
The primary unit of time, the second, is now defined using an atomic clock. Atomic clocks don't rely on movements of the Earth, Sun, moon or any other celestial body. Dolphin (t) 02:58, 2 August 2010 (UTC)[reply]
...but first you would have to tell the alien which atomic clock to use. To be understood, a message usually implies a particular time base e.g. to demarcate successive words or symbols. See Arecibo message which was an exercise in constructing a message that an alien might be able to interpret. Cuddlyable3 (talk) 18:36, 2 August 2010 (UTC)[reply]
A message can contain its own definition of temporal units: "The duration of the transmission of this sentence is one grondlewipple." - since the aliens can directly measure the duration, they now know how long a grondlewipple is. Obviously you'd have to allow for relativity - but that should be a relatively simple thing for the aliens to estimate. SteveBaker (talk) 00:08, 3 August 2010 (UTC)[reply]
In a practical sense, many digital communications protocols use the above technique described by SteveBaker, incarnated by engineers in the 1950s and 1960s in the form of clock sync protocols. For example, an ethernet PHY message usually starts with something like 41 sets of eight-bit "10101010"s, so that the timing and phase can be established exactly; the message starts with the last set, "10101011" - so even if the signal is garbled or your receiver didn't know when the actual message began, you have a clear indication of when the timing information is done and the message is about to start. More sophisticated schemes, like autonegotiation, also exist, with the ability to specify frequency and other communication parameters. And in the packet radio community, a variety of more noise-resilient techniques exist, as well as better ways to distinguish a message from non-message noise. Whether these techniques would be useful for conveying timing information to a receiver who was unfamiliar with the protocol is up for debate. Nimur (talk) 17:05, 3 August 2010 (UTC)[reply]
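A minimal sketch of the self-clocking idea described above: scan a bit stream for an alternating preamble that ends in a start-of-frame marker, then read whatever follows. The stream layout and marker handling here are simplified assumptions for illustration, not the actual Ethernet PHY format.

```python
def extract_payload(bits: str, marker: str = "10101011") -> str:
    """Return whatever follows the start-of-frame marker in a string of bits."""
    i = bits.find(marker)
    if i == -1:
        raise ValueError("no start-of-frame marker found")
    return bits[i + len(marker):]

# alternating preamble, then the marker, then the message itself
stream = "1010101010101010" + "10101011" + "110011101"
print(extract_payload(stream))   # -> 110011101
```

The alternating run gives the receiver its timing and phase; the single trailing "11" is what unambiguously marks where the message begins.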
Given several relevant elements in a large number of objects plus a good understanding of statistics, radioactive half-life could probably be used to define a standard "basket" unit with a low margin of error. 63.17.39.134 (talk) 03:53, 6 August 2010 (UTC)[reply]

Precambrian rabbits and natural fossil exposure

A thought just occurred to me on the subject of Precambrian rabbits. Occasionally, natural events like erosion will expose fossils which date from tens of millions of years prior. Given this, couldn't an organism die immediately on top of or next to such a fossil site, after which the whole thing gets covered in sediment, resulting in "distant neighbor" fossils?

To put it another way: Given this small possibility (unless, as is far more likely, it's not a possibility), why is the fossil record so consistent? Why are there (at least) no Triassic rabbits? Lenoxus " * " 00:46, 2 August 2010 (UTC)[reply]

Because the experts are aware of this issue and take it into account. The rock the recent organism is fossilised in would be different from the rock the old fossils are in, so you would be able to tell that something had happened. Working out the ages of different layers of rock can be a bit of a challenge, but it can be done (in various ways). --Tango (talk) 00:55, 2 August 2010 (UTC)[reply]
As stated by Tango, geologists understand the processes by which rock formation and erosion can occur; in fact rocks are often dated by index fossils, which determine their age. Thus, finding a rabbit lying in a rock layer which was immediately on top of a trilobite wouldn't lead people to conclude that the rabbit was really old, but rather that the rock layer the rabbit was in was much younger than the one the trilobite was in. --Jayron32 03:27, 2 August 2010 (UTC)[reply]
I had a strong feeling it would be something like that (since that relates to how they're able to tease apart folded strata); thank you both! So, now I'm wondering… has the situation I described ever happened? That is, are there any known "distant neighbors" (where the rocks are obviously quite different, etc)? If not, any particular reason why not? Lenoxus " * " 04:48, 2 August 2010 (UTC)[reply]
You'll want to follow the threads that lead from the article titled Stratigraphy, which is the science of reading and interpreting rock layers. Undoubtedly, examples exist of local areas where two non-sequential rock layers lie side by side. I cannot think of one specific example, but that's more a function of there being so many rather than being so few. --Jayron32 04:59, 2 August 2010 (UTC)[reply]
The specific term, BTW, for such a situation is an Unconformity. Examples given in that article show pictures of adjacent rock layers which actually represent gaps of up to 1 billion years. --Jayron32 05:02, 2 August 2010 (UTC)[reply]
It does happen that sometimes fossils get into layers they don't belong in. Most commonly it is a case of older fossils getting incorporated into younger rock -- this can happen if the soft rock they are embedded in erodes away, leaving the fossil behind to be covered by younger sediment. It's much harder for younger fossils to get incorporated into older rock. Looie496 (talk) 05:44, 2 August 2010 (UTC)[reply]
A closely related phenomenon happens all the time in archaeology. In those cases, you end up with different soils rather than different rocks, but it's still (usually) quite obvious when it's occurred. During a proper excavation, you make note of all visible stratigraphic layers, as well as any and all discontinuities you find. There are a variety of pedological assays you can have done to help out, but for the most part a detailed visual inspection is all that's required. Matt Deres (talk) 14:07, 2 August 2010 (UTC)[reply]

Depression Medication

A while back I saw an article that showed which type of antidepressant is more likely to work in which type of people. I can't remember the different categories they had for the types of people. Has anyone seen it? —Preceding unsigned comment added by 76.169.33.234 (talk) 05:04, 2 August 2010 (UTC)[reply]

That's rather vague, and could describe every single antidepressant; each probably works better in some types of depression than others. You could explore Category:Antidepressants yourself to see if you can dig it up. --Jayron32 05:07, 2 August 2010 (UTC)[reply]
The most common types of depression are major depressive disorder and bipolar disorder, and they are usually treated with different types of drugs -- the articles will give you more information. Looie496 (talk) 05:39, 2 August 2010 (UTC)[reply]
There's also a list of antidepressants which might be of use. Smartse (talk) 08:48, 2 August 2010 (UTC)[reply]

Contact between shaft and bearing

Hey, I want to know on what basis the contact (clearance, angle) between shaft and bearing (journal, liner) is decided, because I am installing a new roller with a shaft diameter of 320 and my liner is also of diameter 320; it is semi-spherical (I only have a liner at the bottom half). And why? —Preceding unsigned comment added by 82.129.222.71 (talk) 09:25, 2 August 2010 (UTC)[reply]

There's some basic info here on the hows and whys of journal bearings [4]. Does yours match any of these types? 77.86.94.177 (talk) 12:53, 2 August 2010 (UTC)
It's not clear if the bearing is a Plain bearing - it might be - one reason for only having half a cylinder would be if the bearing was supporting a force that was predominantly in one direction - e.g. a heavy weight - is your bearing of this type? 77.86.94.177 (talk) 15:50, 2 August 2010 (UTC)

Electrolysis

What are the typical values of current and voltage required for the electrolysis of water? In the current industry, what power input do they use; is it mostly from national grid systems? Thanks very much--91.103.185.230 (talk) 11:15, 2 August 2010 (UTC)[reply]

You need to use DC if you want to separate the hydrogen and oxygen. A minimum voltage would be around 1.5 volts DC. In my experience, 24 volts DC works fine; add some baking soda, sodium hydroxide, or sulfuric acid for the electrolyte. Current might be 100 milliamps. --Chemicalinterest (talk) 12:11, 2 August 2010 (UTC)[reply]
The minimum voltage required for water electrolysis is 1.23V (see Electrolysis of water). However the overpotential needed at the electrodes is ~1V (depending on electrodes). Additionally producing the gas at above 1atm pressure also increases the voltage.
The current directly controls how much water is electrolysed per second.
The voltage used will therefore be roughly V ≈ 2 + IR, where I is the current and R is the internal resistance of the electrolysis cell.
Also see http://www.hydrogenassociation.org/general/faqs.asp and this google books Hydrogen fuel: production, transport, and storage By Ram B. Gupta p.162-163
Commercial cells work at high pressure. This link http://www.nrel.gov/docs/fy04osti/36705.pdf gives typical figures - there are high and low pressure cells, and an electrolyte of KOH is used to reduce cell resistance. As with all processes that use electricity as a feedstock, hydroelectric power is common.
Using the wattage figures in the nrel.gov link above you can work out the typical current values using V ≈ 2 V, after taking the efficiency into account. However, note that typical cells are connected in series, not in parallel, so the figure may be a sum of the currents going through the cells rather than the current at the supply voltage. 77.86.94.177 (talk) 12:15, 2 August 2010 (UTC)[reply]
Here are some commercial low pressure cells from Statoil : [5] they use ~4000Amps (max) - voltage depends on number of cells.77.86.94.177 (talk) 13:12, 2 August 2010 (UTC)[reply]
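A small worked example of the point above that the current, not the voltage, sets how much water is electrolysed per second: Faraday's law at an assumed 100% current efficiency, using a lab-scale current and the ~4000 A figure from the commercial cells linked above purely for scale.

```python
FARADAY = 96485.0   # C/mol, Faraday constant

def h2_mol_per_second(current_amps: float) -> float:
    """Moles of H2 produced per second at 100% current efficiency (2 electrons per H2)."""
    return current_amps / (2 * FARADAY)

for amps in (0.1, 4000.0):   # a small lab cell vs a ~4000 A commercial cell
    rate = h2_mol_per_second(amps)
    litres_per_hour = rate * 3600 * 22.4   # approximate molar volume at STP
    print(f"{amps:>7.1f} A -> {rate:.2e} mol H2/s, roughly {litres_per_hour:,.2f} L/h at STP")
```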
It depends whether the OP is talking about commercial electrolysis or laboratory electrolysis -- I was talking about the latter. --Chemicalinterest (talk) 14:16, 2 August 2010 (UTC)[reply]
Seems like the question quite directly tells you his context... DMacks (talk) 14:41, 2 August 2010 (UTC)[reply]

Fennec Hare

Is there any such animal as a Fennec Hare? This page says they are critically endangered, and one has just been born in captivity. However the photo looks like a kitten 'shopped to have bunny ears, and it's supposed to have been born in North Korea. Is this nonsense, propaganda, or maybe possibly true? 213.122.216.120 (talk) 12:53, 2 August 2010 (UTC)[reply]

In case you aren't aware, the Fennec fox does exist, but I still very much doubt the authenticity of the article (the misspelling in the URL doesn't fill me with confidence). The "Iperian steppe" doesn't exist, and here is also doubtful. In short: no. - Jarry1250 [Humorous? Discuss.] 13:21, 2 August 2010 (UTC)[reply]
So this was our April Fool's joke in 2009. Amusingly, it inspired a number of would-be conservationists and some additional Photoshopping (try Google Image searching "Fennec Hare"). Here is our April Fool's joke from 2010 - http://www.zooborns.com/zooborns/2010/03/rare-baby-skeksis-chick-born-at-franklin-park-zoo.htmlDeKreeft27 [Discuss.] 14:09, 2 August 2010 (UTC)[reply]
Keep in mind: Wikipedia has an article on everything: cabbit. Nimur (talk) 16:44, 2 August 2010 (UTC)[reply]
That is excellent. Was a baby mystic also born simultaneously to the skeksis? APL (talk) 19:43, 2 August 2010 (UTC)[reply]

documentary

Anyone know of a good documentary on bees / wasps / ants? 82.43.88.151 (talk) 13:10, 2 August 2010 (UTC)[reply]

This is maybe too specific, but Queen of the trees is about fig wasps and is the best documentary I've ever seen. For army ants, see this BBC page, you should find some other videos on there too (sorry not sure if they work outside of the UK). There have been a couple of recent films about colony collapse disorder which I imagine would discuss bees more generally as well. Smartse (talk) 15:05, 2 August 2010 (UTC)[reply]
(ec) Below are links to video documentaries about bees, sand wasp, Ichneumon Wasp, paper wasp, fairy wasp, ants (old film), and ants (new film). Cuddlyable3 (talk) 15:24, 2 August 2010 (UTC)[reply]

So are you vouching for them as being good? (Did you even watch them?) Because I'm sure the original poster can use Google/YouTube as well as you can. They're asking about quality. --Mr.98 (talk) 16:35, 2 August 2010 (UTC)[reply]
I do not take it for granted that all videos at YouTube are good, and therefore I looked at all of them. Several are by David Attenborough or carry the National Geographic logo, which IMO also speaks for quality. Cuddlyable3 (talk) 18:24, 2 August 2010 (UTC)[reply]
This one of wasps attacking a bee hive is so amazing it's hard to even believe it's real. --Sean 16:59, 3 August 2010 (UTC)[reply]

Real life lightsaber

In this video, http://www.youtube.com/watch?v=6G8VztSWnVo Michio Kaku describes how to make a real life lightsaber. Of course being the geek I am, I loved it. But are there any problems with his design? Any way to improve upon it? Would 12,000 degree plasma be sufficient to cut/burn through almost anything? What would the plasma look like? Color? 148.168.127.10 (talk) 14:24, 2 August 2010 (UTC)[reply]

It would likely function, but you will be holding 12,000 degree plasma in your hands. How thick will the insulation need to be to avoid burning yourself? Remember, his design fills the "insulation" area with batteries - which aren't great insulators. In fact, he doesn't explain how we keep the batteries from melting. In the end, it is a heat saber, not a light saber. -- kainaw 15:49, 2 August 2010 (UTC)
The plasma can be bright but composed of very little mass, so it is almost impossible to get burned. A static electricity spark is an example. To make a large amount of matter in the plasma state requires much more energy than a few batteries can supply. --Chemicalinterest (talk) 17:36, 2 August 2010 (UTC)[reply]
Then, how will it cut things? Nimur (talk) 18:57, 2 August 2010 (UTC)[reply]
Could it be similar to a CCFL? With maybe some colorants? --Chemicalinterest (talk) 17:45, 2 August 2010 (UTC)[reply]
Not really. He isn't trying to make something that looks like a lightsaber. He is trying to make a pretty hand-held plasma cutter. From his design, which is poorly shown, the plasma cutter begins looking like a sort of flashlight. Then, a fan blows the arc out from the handle to make the "blade". So, the user is literally holding the handle within an inch of the cutting heat. He could make it safer, but you'd end up with a normal plasma cutter - which already exist and are used every day. -- kainaw 17:52, 2 August 2010 (UTC)[reply]
We should just come out and say it: the light saber is not an effective design, from an engineering point of view. It was imagined by a filmmaker, because it looks neat, and makes for great special effects. But any small amount of practical consideration invariably yields the same conclusion: infeasible. It has little to no utility as a weapon (if the enabling technologies for a "light saber" existed, they would be better employed to make more efficient weapons or shields). It has little advantage as a tool, compared to existing technologies; and again, if the mysterious fictional technologies necessary to make the saber did exist, they could be packaged in a more convenient form-factor if the objective was a cutting-tool or welding tool. As Kainaw has pointed out, we already have plasma cutters, cutting torches, and so on. What exact advantage would the light saber actually have? Nimur (talk) 19:01, 2 August 2010 (UTC)[reply]
Can a plasma cutter bounce energy weapon ammo back at the shooter? Googlemeister (talk) 20:05, 2 August 2010 (UTC)[reply]
That is a fictional capability and a fictional scenario. But following my above statements, if a lightsaber could do it, wouldn't it make more sense to build a big array of light-saber "light" and carry it around in front of you? Anyway, we already have a technology that does provide pretty good protection against bullets, projectiles, and so on; what is the advantage of a lightsaber over a solid, resistant light-riot-shield? Nimur (talk) 20:26, 2 August 2010 (UTC)[reply]
It's worth remembering that even if it could, we end up with the same problem we had with the katana question a few days back. Sure, the sword may be able to bounce or deflect bullets if it happens to be in the right position. It doesn't help you if you can't use the force to actually get the sword or lightsaber into exactly the right position for every shot. Nil Einne (talk) 16:59, 3 August 2010 (UTC)[reply]
I disagree Nimur. If we were to have a real lightsaber, and it worked the same way it does in the Star Wars movies, I can see it being very useful. It would be the size of a knife, but have the blade length of a sword, and supreme cutting ability. I can see this being quite useful in combat situations for cutting down doors, walls, obstacles. Sure there are other devices that can do those things, but we're assuming the lightsaber works fine, and it's small so why not? It can be used to start fires, or heat up water which can find some uses. Yes we already have cutters, and torches, but we're assuming the lightsaber works fine right? Well in that case, the lightsaber is smaller, and has a longer blade which makes it more useful as a weapon and cutting device. Yes I'm sure the technologies could be implemented in projectile weapons or shields, but why not have both? ScienceApe (talk) 02:31, 3 August 2010 (UTC)[reply]
You missed Nimur's point completely. Assuming the technology exists to make a light saber as understood from Star Wars, it (the lightsaber) would likely be the least useful thing that technology could produce. 218.25.32.210 (talk) 04:48, 3 August 2010 (UTC)[reply]
A little one would make a great cheese knife though... Googlemeister (talk) 13:09, 3 August 2010 (UTC)[reply]
I understood his point perfectly well IP. ScienceApe (talk) 22:32, 3 August 2010 (UTC)[reply]


BTW, if you want something that looks like a lightsaber, there are many ways to make one. LEDs are common nowadays, probably because they're cheap, small, and well suited for something handheld. Some use a strip of LEDs, but I think it's more common to use a power LED in the base with some sort of diffuser for the blade. E.g. [6] [7] [8] [9] [10] [11] [12] [13]. Some also use electroluminescent sheet or wire. I would guess people have been doing it with neons and various other lights for a very long time. There are forums dedicated to this sort of thing [14] [15]. You can of course get official versions [16] too. In other words, if you just want a lightsaber lookalike it's really not a problem, but as kainaw says, that's not what the OP is discussing. Nil Einne (talk) 17:48, 3 August 2010 (UTC)[reply]
While we're nitpicking, there are a lot of problems with the "science" explained in that video. I think the designer needs a solid refresher course in physics! Futuristic technologies notwithstanding, some things are fundamental properties, and while we can invent better materials and technologies, we can't break physical laws.
The video's proposal is to make a "light saber" by jetting hot ionized gas out of a handle. When you expel a jet of hot gas (ionized plasma or neutral), the gas is going to expand as soon as it contacts the atmosphere. The "angle" of the expansion is characterized by the expansion ratio, and illustrated here: hot gas expands out of a nozzle. If the plasma is low-pressure, it's not going to leave the tube, or it will just waft out like smoke from a pipe so slowly that it will neutralize and cease to be an incandescing plasma. If we increase the pressure to compensate for this, the second the jet of plasma encounters atmospheric pressure around it, it's going to balloon out like a cone or fireball and disperse most of its energy through adiabatic expansion, contact conduction, and convection (leaving precious little energy left over for "cutting" anything - though it'll probably be like a steam explosion! Lots of nasty burns). The only way to make a fluid (gas or plasma) exit a nozzle in anything remotely resembling a "beam" is to increase its fluid velocity and carefully match its exit pressure by designing a non-expanding nozzle geometry. Unfortunately, such a design also has the effect of maximizing the kinetic energy (and momentum transfer) of the exiting plasma; the gas will be moving at such a high velocity that it jets out in a collimated beam and dissipates its energy only radiatively, until eventually turbulence kicks in and disrupts the beam. (It won't look like a light saber as much as, say, a candle or a rocket plume). But keep in mind that if the hot gas is shooting out, the amount of momentum transfer will be so high that you will basically be holding a rocket motor - hardly convenient for control as a hand-held device!
The proposed "solution" for this problem, plasma confinement, also seems to misrepresent some fundamental principles. If the described method of "unrolling" an electromagnetic confinement chamber even worked, the light saber would not function because all of the plasma would be confined to the inside of this confinement tube. How would the plasma get out of the confinement? Diffusion? Or will there be "holes" in the electromagnetic confinement to let small jets of gas out?
As far as even creating a plasma: the quantity of electrical energy needed to ionize this quantity of air will be enormous - fictional "nano-batteries" notwithstanding, the energy density of the battery will have to be so ridiculously high that you might as well use that as your cutter or weapon. I'm surprised the designer didn't think to use a chemical reaction to ionize gas - there are some pretty easy ways to do this, especially if you've already decided to blow atmospheric air full of oxygen into the bottom of the device.
Finally, let's discuss the energy balance of the device. The designer has assumed a fictional magic "nano-battery" can power both a turbo pump and an ionization chamber. We can calculate how much gas pressure is needed to extend the plume out to a 3-foot long "beam," accounting for the hydrodynamics. The fact that the designer tries to generate this energy by blowing atmospheric air via an inlet, through a "fan," indicates a misunderstanding of fluid flow - where will the inlet air come from? (Turbo pumps don't like to operate in this kind of condition, anyway. Try swinging one around! They're pretty fragile). The design will require a pressure chamber to store a large quantity of gas at high pressure; think of it as sort of a fluid dynamics "flux capacitor" - all joking aside, the laws of physics do allow us to store mass flux and PV energy in a tank, using energy delivered at reasonable power levels from the pump, until we're ready to use it. This lets us use a small amount of pump power over a very long period of time, and release it later very rapidly. But, we need such a pressure reservoir. PV energy storage is great, because it's easy to store huge quantities of energy; but it's also terrible for a hand-held device, because it is not extremely energy-dense (per mass or per volume) - especially when you consider the necessary characteristics (mass, thickness) of a pressure vessel.
I think it would be better to let the sci-fi buffs take a little poetic license here and just explain the light saber as a magical "light sword" that works by "The Force". Trying to find any correct physical or engineering explanation for it will just lead you to the disappointing conclusion that a light saber is an imaginary magical invention and can not actually be built. What has been described in the video really wouldn't look or work much like a light saber. More like a propane-barbecue-grill burner. A very heavy, hot, electrically charged rocket motor/grillburner with a bunch of gas inside of it, with a jet engine and a pressure cylinder mounted on the bottom. Nimur (talk) 20:34, 3 August 2010 (UTC)[reply]
But you can't say the lightsaber only works because of the force, because Han Solo borrowed and operated Luke's LS once. Googlemeister (talk) 13:29, 5 August 2010 (UTC)[reply]
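To put a rough number on the energy-balance problem raised above, here is a back-of-the-envelope sketch in Python. Every figure in it is an assumption chosen only for illustration (blade volume, refresh rate of an open jet, battery energy density), and it ignores ionization energy, radiation and confinement entirely, so it is only a lower bound on the heating side:

```python
# Back-of-the-envelope estimate; all numbers are illustrative assumptions,
# not taken from the video. It asks only how much energy it takes to HEAT
# the gas in a lightsaber-sized plume, ignoring ionization and radiation.

rho_air = 1.2          # kg/m^3, air at room conditions (assumed)
cp_air = 1005.0        # J/(kg K), heat capacity, assumed constant (it isn't at 12,000 K)
blade_volume = 1e-3    # m^3, roughly a 1 m long, ~3.5 cm wide blade (assumed)
T_ambient = 300.0      # K
T_plasma = 12000.0     # K, the temperature quoted in the video

mass = rho_air * blade_volume                              # about 1.2 g of air
energy_per_fill = mass * cp_air * (T_plasma - T_ambient)   # J, lower bound

refresh_rate = 100.0   # 1/s, how often an open jet must be replenished (assumed)
power = energy_per_fill * refresh_rate                     # W of continuous heating

battery_energy = 0.7e6                    # J/kg, rough Li-ion energy density (assumed)
runtime_per_kg = battery_energy / power   # seconds of operation per kg of battery

print(f"Energy per blade volume: {energy_per_fill / 1e3:.1f} kJ")
print(f"Continuous power for an open jet: {power / 1e6:.2f} MW")
print(f"Runtime per kg of battery: {runtime_per_kg:.2f} s")
```

Even with those generous simplifications the open jet comes out in the megawatt range and a kilogram of battery lasts about half a second, which is why the discussion above keeps circling back to the power source rather than the blade.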

Hematite color change

I have a small bead of hematite. It has a beautiful dark, shiny gray color and I'd like to keep this color. I've read that hematite can turn red through contact with oxygen though. It's in a plastic container right now. Can I keep it in the open while still preventing this? Should I maybe coat it with a bit of oil like with steel? --85.145.56.218 (talk) 15:20, 2 August 2010 (UTC)[reply]

Sensitive gemstones such as malachite or haematite usually have a coating, something similar to furniture polish or wax. This should protect it from air. Unfortunately I don't know the trade name of a suitable product to use. 77.86.94.177 (talk) 15:46, 2 August 2010 (UTC)[reply]
Looks like you have a product similar to iron(II,III) oxide and want to prevent it from oxidizing to iron(III) oxide. I would recommend keeping it in mineral oil, the same procedure used to store sensitive alkali metals. --Chemicalinterest (talk) 17:38, 2 August 2010 (UTC)[reply]
No. Cleaning mineral oil off of a piece of jewelry or a display piece is fantastically impractical. Instead, why not coat it with some sort of clear lacquer, like clear nail polish... --Jayron32 02:07, 3 August 2010 (UTC)[reply]
Well yes if you want to use it as decoration do not store it in mineral oil. But what if the OP wants it to have a natural look rather than a shiny, artificial look? By all means store it in ascorbic acid solution ;) --Chemicalinterest (talk) 11:13, 3 August 2010 (UTC)[reply]
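For what it's worth, if the gray bead really does behave like the iron(II,III) oxide described above (an assumption; commercially sold "hematite" beads vary a lot in composition), the slow reddening being discussed is just air oxidation, roughly:

4 Fe3O4 + O2 → 6 Fe2O3

Any of the suggested fixes (wax, lacquer, oil) works the same way: it simply keeps oxygen away from the surface.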

The hematite is already very shiny so I'm going to find a polish or something for it. Thanks for the answers everyone. --85.145.56.218 (talk) 20:14, 4 August 2010 (UTC)[reply]

Wind Erosion on Venus

There's a discrepancy in the first section of the article Geology of Venus. It states

Long rivers of lava have been discovered, as well as evidence of Aeolian erosion and tectonic shifts which have played an essential role in making the surface of Venus as complex as it is today.

but later

These winds exist at high altitudes, but the atmosphere at the surface is relatively calm, and images from the surface reveal no evidence of wind erosion.

Aeolian erosion is just a fancy name for wind erosion, isn't it? Is there evidence of it happening on Venus, or not? I've asked on the article's talk page, but got no response. Rojomoke (talk) 15:28, 2 August 2010 (UTC)[reply]

I think the clue is in the word "relative". According to the article Atmosphere of Venus, the overall atmosphere of Venus circles the planet in around 4 days, which works out to around 400 km/h, and the article mentions high-level winds of 100 ± 10 m/s (= 360 ± 36 km/h) at altitudes of 60-70 km, in the same ballpark. By contrast, it says that
"the breeze barely reach[es] the speed of 10 km/h on the surface", and elsewhere
"the winds near the surface of Venus are much slower than that on Earth. They actually move at only a few kilometers per hour (generally less than 2 m/s and with an average of 0.3 to 1.0 m/s), but due to the high density of the atmosphere at the surface, this is still enough to transport dust and small stones across the surface, much like a slow-moving current of water."
Bearing in mind that the atmospheric pressure at the surface is about 92 bar (i.e. 92 times Earth's), and that these comparatively low average figures do not preclude occasional gales and gusts, you can probably see that these surface winds are, though "relatively" slow, still capable of erosion. 87.81.230.195 (talk) 17:19, 2 August 2010 (UTC)[reply]
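Putting rough numbers on that (all of these are round, assumed figures of the kind quoted above, so treat it as a sketch): the snippet below checks the ~400 km/h super-rotation estimate and compares the dynamic pressure of a 1 m/s Venusian surface breeze with the Earth wind that would push equally hard.

```python
import math

# Round figures assumed for illustration; see the articles for sourced values.
R_venus = 6052e3            # m, mean radius of Venus
period = 4 * 24 * 3600.0    # s, ~4-day atmospheric super-rotation
v_rotation = 2 * math.pi * R_venus / period
print(f"Super-rotation speed: {v_rotation * 3.6:.0f} km/h")   # roughly 400 km/h

rho_venus = 65.0   # kg/m^3, surface air density on Venus (~92 bar of hot CO2)
rho_earth = 1.2    # kg/m^3, sea-level air density on Earth
v_surface = 1.0    # m/s, typical Venusian surface breeze

q = 0.5 * rho_venus * v_surface ** 2      # dynamic pressure of that breeze
v_equiv = math.sqrt(2 * q / rho_earth)    # Earth wind with the same dynamic pressure
print(f"Dynamic pressure: {q:.0f} Pa, like a {v_equiv * 3.6:.0f} km/h wind on Earth")
```

On those numbers a 1 m/s breeze in Venus's dense surface atmosphere pushes on loose grains about as hard as a 25-30 km/h wind does on Earth, which fits the claim that slow surface winds can still move dust and small stones.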
That's a good scientific explanation, but it doesn't address the issue - the article contains two contradictory claims, and we need a source, I think, to tell us which is accurate. Vimescarrot (talk) 18:54, 2 August 2010 (UTC)[reply]
I've adjusted the wording ("most images" and "little evidence") to avoid the contradiction, but can anyone find a source to determine whether my compromise guess was correct? Dbfirs 08:23, 3 August 2010 (UTC)[reply]

aspartame

My friend believes that aspartame is in too many things ("everywhere") and is a poison and is only on FDA's GRAS list because of a conspiracy, and that aspartame is somehow making everyone dumber. While an organic food nut, she's also a really nice person who's a musician and is quite influential among my peers. I'm afraid the more orgo terms I use, the more I'll scare everyone. Any suggestions? John Riemann Soong (talk) 16:23, 2 August 2010 (UTC)[reply]

Ignore? --Stephan Schulz (talk) 16:26, 2 August 2010 (UTC)[reply]
Personally I've found "confrontation with facts" to not work terribly well, because people quickly close their ears up. When talking about potential low-level risks (like cell phone towers, which my mother is kind of afraid of), I generally emphasize that the risks we are talking about here are quite low, so low as to be indistinguishable from other sources of risk, and are a lot safer than, say, the risk one takes when getting into an automobile. I tend to believe that whatever risks aspartame brings, they probably are offset by the risks that come with excessive sugar consumption. (One can argue that this can be accounted for otherwise, and indeed it can, but we don't have many means of enforcing that and we should not let the perfect edge out the good.) The evidence for it being dangerous is not very strong, in the end. Even in the cases of honest-to-god big conspiracies (e.g. tobacco), there were always lots of qualified outsiders willing to question the conclusions. --Mr.98 (talk) 16:34, 2 August 2010 (UTC)[reply]
(edit conflict)Aspartame controversy is a pretty comprehensive article that does not delve too deeply into the orgo itself. It covers both sides (i.e., hers too:) and has lots of cites from reliable-sounding sources that are not gov/industry-tied, so it's not dismissable as just more of the same conspiracy she fears. Stephan also raises a good point...I fear that some conspiracy nuts are too far gone to be helped by any amount of fact or logic. They refuse to question what they know because they already have seen the light and it's blinded them. Consider this a formal warning not to push this discussion towards religion. There are obviously lots of reasons why people behave this way...we probably even have an article about the rise and social popularity of scientific ignorance. DMacks (talk) 16:35, 2 August 2010 (UTC)[reply]
I agree with Mr.98's point. Low level risks, like aspartame or cell phone towers or living next to high-voltage transmission lines, make much less difference than higher level risks, like riding in automobiles and overeating of trans fats. One should notify the person that their views are out of perspective. If they listen, explain. If they do not listen, ignore what they say. --Chemicalinterest (talk) 17:44, 2 August 2010 (UTC)[reply]
Well it's hard to ignore someone who's trying to convert everyone to the cause over fb (quite successfully). John Riemann Soong (talk) 19:52, 2 August 2010 (UTC)[reply]
Another very easy approach is to say, "Hey, Wikipedia has a pretty good article on this, that seems pretty balanced. It seems that in the end, the evidence for it being harmful is pretty weak, and that if there is harm, it is basically indistinguishable from the background level of things that happen to people." I've used this for some of the other "controversies" before (like whether Jesus existed as an actual human being, or whether cell phones are dangerous), when the Wiki article is actually good and balanced. It tends to be a way to say, "Hey, this is a tough question, and if we take a balanced view of it, we see it doesn't readily give an unambiguous black or white answer." That's generally the place I try to get such people to arrive at — not so much believing, "this is safe," which it might not be, I don't know, but rather to "oh, this isn't a simple-minded thing, it's a complicated one," which is a lot better than the propagandistic approach in any case, and removes the missionary zeal. --Mr.98 (talk) 20:25, 2 August 2010 (UTC)[reply]
"People cannot be reasoned out of a position they did not arrive at via reason". The only response to people who believe in outrageous conspiracy theories is to either ignore them or ridicule them for your own amusement. No amount of actual evidence and reasoned discussion can convince them out of their position. --Jayron32 02:04, 3 August 2010 (UTC)[reply]

Human consumption of uncooked cereal grains

Just curious what would be the result of eating uncooked cereal grains (like raw oatmeal, rice or rye). Cooking converts the grains into human usable starches, right? Would the human digestive system be able to get any nutritional value out of raw grain? --70.167.58.6 (talk) 16:47, 2 August 2010 (UTC)[reply]

A short answer would be: "Yes it can, but not too well". Cooking gelatinizes starch which makes it a lot more absorbable to the digestive system. Raw grains are not digested easily and the body only takes up a tiny amount of the nutrients. The undigested and unabsorbed nutrients in the grain will go into the intestines and end up just feeding the gut flora, likely causing lots of gas, and then get expelled from the body more or less whole in the feces. -- Sjschen (talk) 18:03, 2 August 2010 (UTC)[reply]
You would be able to get more energy if you chewed it for a long time, and similarly, if you were to eat a paste made from flour, you would also get more energy compared to eating whole grains. The reason for this would be that the surface area of the grains would be increased, allowing enzymes such as amylase to degrade the starch more effectively. Smartse (talk) 18:23, 3 August 2010 (UTC)[reply]
Still, the best way to get the energy and nutrients out of the food and into the body is to cook it in liquid, which makes the starch easier to absorb than chewing it to a pulp does. Chewing and digesting raw grains takes up far more energy than the amount absorbed from the same grains themselves. -- Sjschen (talk) 21:07, 4 August 2010 (UTC)[reply]

wasp-like insects

Last year when I was cycling on the common I saw these tiny holes in the ground. This was during the summer, and the ground was very dry. Then I saw these flying wasp-like insects going into them. What were they? —Preceding unsigned comment added by CuteLesbianPossum (talkcontribs) 18:29, 2 August 2010 (UTC)[reply]

Digger wasps? Looie496 (talk) 18:36, 2 August 2010 (UTC)[reply]
It could be any of a thousand different wasps. Many wasp species burrow. For example beewolf wasps and cicada killer wasps. Without knowing more about the wasp in general, we can only say it may be some random species of burrowing wasp. --Jayron32 03:03, 3 August 2010 (UTC)[reply]

Queen Ant

When the ants were flying this year, I caught a Lasius niger queen because I wanted to start an ant colony. Only today I discovered that you're meant to get them after they shed their wings. So I have a queen who hasn't mated and will never lay eggs to start a colony. She looks sad and vacant most of the time, just sitting in the container with her antennae slowly twitching, except when I feed her a tiny drop of honey on the end of a pencil - then she jumps into life, climbs onto the pencil and happily licks the honey. But afterwards she just goes back to sitting alone in the container. I feel very sad, because she is going to be all alone. Is there anything I can do to help her? I read that the worker ants are meant to feed the queen, but since there are no workers I will have to do it. Is feeding her honey the right food? Rebmetpes27 (talk) 18:44, 2 August 2010 (UTC)[reply]

If you know she will never lay, then why are you keeping her? Just release her. --Tango (talk) 19:35, 2 August 2010 (UTC)[reply]
Without worker ants to forage for food and protect her, she would die Rebmetpes27 (talk) 19:55, 2 August 2010 (UTC)[reply]
Well, she's not going to do a whole lot better in your cage, without a mate. She has basically one purpose in life as she sees it — breed, and lay eggs like mad. That's all she cares about. --Mr.98 (talk) 20:19, 2 August 2010 (UTC)[reply]
But she can't lay eggs; she is entirely dependent on me now, and my actions determine her fate. I do not want to kill her by putting her into the wild knowing she won't survive. My question is about how to help her. How can I care for her? What food does she need? etc Rebmetpes27 (talk) 20:43, 2 August 2010 (UTC)[reply]
My point is that you have already disrupted her only real purpose in life. She does not care about life itself and probably does not care about death in the slightest. She cares about laying eggs and that's it. I do not think you should feel bad about putting her out to certain death under these circumstances — at least in such a case, she will be food for something else! The queen is not an individual — the colony itself is the "organism," the queen is just the egg-laying part of it. The colony is already dead. (You needn't feel distraught about this — in all likelihood it wouldn't have survived anyway, only very few of the new queens end up forming successful colonies. That's how nature — and evolution — works.) --Mr.98 (talk) 01:17, 3 August 2010 (UTC)[reply]
According to List of animals by number of neurons, ants have only 10,000 neurons, cats have 300,000,000 and you have 100,000,000,000. The ant has a spectacularly small number - it's ten times more stupid than a house-fly, 100 times more stupid than a cockroach. Almost all of those 10,000 cells will be there to process light and pheromone scent and to run basic instincts, feeding, walking, etc. It is basically impossible that she is able to feel happy or sad or anything else for that matter. She's simply obeying a scent and instinct-driven imperative to do whatever comes next in her life cycle - like a robot - and when it doesn't happen she shuts down, conserves energy and waits. You certainly shouldn't feel sad about it. You sealed her fate the moment you removed her from the environment - whether you keep her, let her go, squish her - none of that makes a difference. It's already over for her - she feels precisely nothing about anything. SteveBaker (talk) 23:59, 2 August 2010 (UTC)[reply]
I don't care whether scientists think she feels emotions or not, I still want to help her. What foods do they eat? Is honey enough, or do I need to give protein and vitamins and other stuff? Rebmetpes27 (talk) 00:16, 3 August 2010 (UTC)[reply]
Googling "ant farm food" seems to indicate that honey and water are OK foods for an ant farm. I doubt the queen by herself can deal with more complicated foods (which are, if I recall, predigested by workers usually). --Mr.98 (talk) 01:19, 3 August 2010 (UTC)[reply]
If you are committed to helping her, it seems you might have to take care of your ant for quite a long time. Googling "queen ant life span" I saw a site (not sure how reliable it is, so take it with a grain of salt) claiming that "a queen of Lasius niger, a common ant found in Europe, lived for 29 years in captivity." --- Medical geneticist (talk) 13:50, 3 August 2010 (UTC)[reply]
Am I missing something (probably me lacking in knowledge on ants), or couldn't he just go find the queen some mates and put them in the ant farm with the queen? Ks0stm (TCG) 02:52, 3 August 2010 (UTC)[reply]
Probably not. Ants can generally recognize their own nest by smell (else, how would ants from nests of the same species find their own nest), and may kill the queen from a different nest as an intruder. It also may depend a LOT on the specific species of ant as to what their reaction would be, but it may not go well for the queen... --Jayron32 02:59, 3 August 2010 (UTC)[reply]
And the timing is important. There is only a narrow window between which they can find mates. See Nuptial flight, which tells us that males die right after mating, and that there is a very high failure rate anyway (which is good, because otherwise there'd be wwaaayyy too many ants!). And the photos do a good job of indicating how different the males and females of a given species can look. --Mr.98 (talk) 11:39, 3 August 2010 (UTC)[reply]

Fluid balance - production of water by the human body through metabolism

I'm puzzled about how exactly the human body produces water, as stated in our Fluid balance article: "In the normal resting state, input of water through ingested fluids is approximately 1200 ml/day, from ingested foods 1000 ml/day and from metabolism 300 ml/day, totaling 2500 ml/day". Can anyone explain this? 89.72.128.27 (talk) 19:41, 2 August 2010 (UTC)[reply]

Water is a byproduct of breaking down simple sugars during cellular respiration. Dragons flight (talk) 19:51, 2 August 2010 (UTC)[reply]
Simply put, when you take a molecule of sugar like C6H12O6 and add six O2 molecules to extract energy, you end up with six CO2s (which you expel when you breathe out) and six H2Os, which add to your water "input". SteveBaker (talk) 20:44, 2 August 2010 (UTC)[reply]
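As a rough cross-check on that 300 ml/day figure, the same stoichiometry can be turned into a quick estimate. The water yields per gram oxidized are standard physiology numbers; the daily macronutrient amounts are assumed round values for a typical diet, so this is a sketch rather than a measurement:

```python
# Rough check of the ~300 ml/day "metabolic water" figure.
# Water produced per gram oxidized (standard physiology values):
water_per_g = {"carbohydrate": 0.60,   # e.g. glucose: 6*18 g water per 180 g sugar
               "fat": 1.07,
               "protein": 0.41}

# Assumed round numbers for a ~2000 kcal daily diet:
intake_g = {"carbohydrate": 300, "fat": 65, "protein": 80}

total_ml = sum(water_per_g[m] * intake_g[m] for m in intake_g)  # 1 g water ~ 1 ml
print(f"Metabolic water: about {total_ml:.0f} ml/day")
```

That lands at roughly 280 ml/day, the same ballpark as the 300 ml/day quoted from the Fluid balance article.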

Silicone Bakeware

I am trying hard to find out what the symbols on some new silicone bakeware mean. I have used search engines and looked all over this site's main pages, archives, and reference desk. The best thing I've gotten so far is my own account here! Seriously, though, I would like to suggest that a "chart" section be added. With more and more technology and international cooperation in just about every field, there are more and more symbols being used. I find them from clothing labels to my new bakeware. There are three different circles with the universal "do not" slash across them. I do not have any idea what the pictures inside the circles are so, therefore, don't know what I'm supposed to avoid. I think charts like this can be found in paper encyclopedias, and would be very useful to many people. (posted on behalf of User:Designed4Him from his talk page) —  Hamza  [ talk ] 20:57, 2 August 2010 (UTC)[reply]

I've never seen something with "just logos" like that...there's either wording with it or a separate printed note about them. What we would need for an article (or to even answer a question like this:) is to actually see the symbols. No idea how standardized these are, but could have a gallery like at Hazard symbol. DMacks (talk) 02:04, 3 August 2010 (UTC)[reply]
at a guess (from glancing at the net) they would be warnings not to use knives, not to place the bakeware on direct heat, and not to use abrasive cleansers. those appear to be no-nos for silicone. --Ludwigs2 03:55, 3 August 2010 (UTC)[reply]

Tipler Cylinder

Why would an infinitely long Tipler cylinder allow time travel, but not any spinning object? Is there more to it than just making a strong enough gravitomagnetic field?

Why couldn't this be done with regular electromagnetism? Why would general relativity only apply to gravity?

While I'm at it, the Tipler cylinder page mentions a conjecture Stephen Hawking made, but then says he proved it, so it would be a theorem. Should I change it? — DanielLC 22:19, 2 August 2010 (UTC)[reply]

Of course, there is the problem of building an infinitely long object... --Jayron32 02:01, 3 August 2010 (UTC)[reply]
Regarding your second paragraph: general relativity and gravity are the same thing. Possibly you could do this with electromagnetic fields, since they gravitate. You can't do it with electromagnetism in a gravity-less universe because the lack of closed timelike curves is already built into the spacetime background. A closed timelike curve is a distortion of spacetime, and spacetime distortions are gravity. Regarding the other two paragraphs, I'll have to guess. Third paragraph: the abstract of Hawking's paper (which is all I've read) says "This shows [i.e., proves] that one cannot create closed timelike curves with finite lengths of cosmic string." I think a rotating cylinder falls under his definition of cosmic string. But he also says that this only suggests (doesn't prove) chronological censorship, which he defines as the hypothesis that "the laws of physics do not allow the appearance of closed timelike curves". He might simply be saying that, although he's proved this result starting from general relativity plus the averaged weak energy condition, he hasn't proved it about the real world. But I'm not sure. First paragraph: as Tipler mentions, there are closed timelike curves in the Kerr vacuum that describes a spinning black hole, but in that case the CTCs are behind the event horizon. So the question is really whether you can have CTCs that aren't inside a black hole. Tipler's infinite cylinder has infinite mass, so perhaps you could say that the Schwarzschild radius is infinite and therefore all the weirdness of that geometry is technically inside the event horizon. When you reduce the cylinder to a finite size, the Schwarzschild radius becomes finite but all the weirdness is still confined. But I'm even less sure about that. -- BenRG (talk) 03:33, 3 August 2010 (UTC)[reply]
From my understanding of General Relativity, it's all just what happens if you think of gravity as an accelerating reference frame. If an electron falling to the Earth from gravity isn't really accelerating, why would an electron falling to a proton be? Electromagnetism and gravity seem to otherwise follow the same principles.
the fundamental difference is that the electron falling towards the proton can detect its acceleration using an accelerometer, while the electron falling towards earth can not. 157.193.175.207 (talk) 09:42, 3 August 2010 (UTC)[reply]
If you made an accelerometer with charge but no gravitational mass, it would work the other way. By assuming the accelerometer has to have no charge but have gravitational mass, you're already assuming there's a difference. In fact, you don't even have to split up the forces as gravitational and electric. You could say that one force is the forces between electrons (including gravity and charge), and the other is some other combination of gravity and charge (though it has to be a certain other combination if you don't want the forces to interact with each other). — DanielLC 19:56, 3 August 2010 (UTC)[reply]
Do you have any evidence that matter with charge but without gravitational mass exists, which would imply that the equivalence principle is false? 157.193.175.207 (talk) 11:07, 4 August 2010 (UTC)[reply]
The difference is that there are different kinds of charge that are deflected in different ways in an electromagnetic field, but everything is deflected in the same way in a gravitational field. You can detect an electromagnetic field by keeping differently charged objects around and measuring their relative acceleration, but you can't detect a gravitational field that way (or any way). It is possible to take a geometric view of electromagnetism, by adding a fifth dimension which loops around on itself with a fixed radius, as I described here. To get the "electromagnetism-only" version of that, you restrict the shape of the big four dimensions to be Minkowski space (or some other fixed spacetime). In the electromagnetism-only version, though you can still deform the 5D spacetime, you can't deform it such that the time dimension loops around on itself, because the shape of the big four dimensions is fixed. Re the comment below ("If it was just gravitomagnetism..."), the upshot of Hawking's paper is that a finite cylinder spun that fast will collapse into a black hole (if it doesn't break apart first). -- BenRG (talk) 06:56, 4 August 2010 (UTC)[reply]
If it was just gravitomagnetism that caused the closed time-like curves, any object would work, so long as it spins fast enough. 67.172.112.226 (talk) 05:11, 3 August 2010 (UTC)[reply]
That was me. I forgot to log in. — DanielLC 07:30, 3 August 2010 (UTC)[reply]

What sort of mood disorder is the following?

Does there exist/is there a name for a type of mood disorder where a depressive period is caused afterwards by a period of happiness? Like, for instance, if someone were to hang out with friends for a while, feeling really happy, and then afterwards go home to find themselves in the opposite mood. Not necessarily reacting to the fact that they had to leave, but rather like some sort of a strange emotional balancing out? Does that sort of thing ring a bell to anyone? 68.160.243.61 (talk) 22:57, 2 August 2010 (UTC)[reply]

We aren't allowed to offer medical advice - and in particular, we're not allowed to offer a diagnosis. If you (or whoever you might be referring to) is concerned, see a doctor. Our Mood swing article suggests some possibilities. SteveBaker (talk) 23:41, 2 August 2010 (UTC)[reply]
There are several disorders that might feature behavior like this: manic depression, certain forms of schizophrenia, a number of neuroses. This is also a fairly normal reaction to stress, a normal part of development in adolescents and young adults, and a typical response to certain life events. Normally no diagnosis would be made unless the behavior is consistent over a long period of time, unrelated to overt biological and environmental factors, and in some way damaging to the health or well-being of the person involved, and that decision would need to be made by a qualified therapist (because it is next-to-impossible to diagnose someone with whom you have a close relationship, and completely impossible to diagnose yourself). --Ludwigs2 23:50, 2 August 2010 (UTC)[reply]
Manic depression is often called bi-polar disorder, and it sounds like what you are describing. Ariel. (talk) 03:40, 3 August 2010 (UTC)[reply]
Sorry, but I have to say that I think Ariel's spot diagnosis is very irresponsible. What the OP describes sounds like a normal change of mood to me - if you are doing something that you enjoy, and it stops, then you will naturally feel sad, and this feeling may be intensified by tiredness or other environmental factors. However, if the OP is concerned that it may be more than that, then they need to discuss their feelings with a family member and/or see a doctor (as others have said). Gandalf61 (talk) 10:45, 3 August 2010 (UTC)[reply]
Ariel's observation appears to me not to be a diagnosis but a simple statement of fact, which in any case merely summarized some of what Ludwigs2 had already said (Bad Ariel! *Smack* :-) ). Note that at no time did the OP explicitly state that he/she was him/herself experiencing such moods or feelings; he/she merely asked for information impersonally, and has been directed to possibly relevant articles - it's hard to look things up in an encyclopaedia if one doesn't know the nomenclature. 87.81.230.195 (talk) 13:32, 3 August 2010 (UTC)[reply]
Ariel's correct - Bi-polar disorder is the current term, while manic depression is an older, outmoded term. my mistake. --Ludwigs2 14:01, 3 August 2010 (UTC)[reply]
But it can't be that because the mood swings in bipolar disorder happen over months...not hours! This is precisely why we don't allow diagnoses! SteveBaker (talk) 14:07, 3 August 2010 (UTC)[reply]
This really didn't read like a diagnosis request to me - he just wanted a name for this kind of disorder. Ariel. (talk) 16:59, 3 August 2010 (UTC)[reply]
There is a variant of bipolar disorder called ultra-ultra rapid cycling (PMID 9702745) in which moods cycle with a period of less than 24 hours sometimes. There are also drug manipulations that cause a person with bipolar disorder to switch very rapidly from depressed to manic or vice versa. (None of this is relevant to the current question; I'm just clarifying a point here.) Looie496 (talk) 17:53, 3 August 2010 (UTC)[reply]
The term "mood disorder" carries an implication that the depression is serious enough to call for treatment. I doubt that's what the question intended to mean -- for a mild drop in mood, I would just use a term like "letdown" or "mood swing". Looie496 (talk) 17:46, 3 August 2010 (UTC)[reply]

August 3

How to attach ceramic capacitor to solderless breadboard?

Ceramic capacitors are very small and have no wire legs, so how are they usually connected to a solderless breadboard? I could solder some wires on to form legs, but is that the usual solution? ----Seans Potato Business 01:05, 3 August 2010 (UTC)[reply]

No wire legs? I take it that you're using surface-mount components, then...? In principle, there's no technical reason not to solder legs on to them for breadboarding (the usual cautions about being careful not to overheat the component while soldering apply). But honestly, you can probably save yourself a fair bit of time and frustration (and possibly some slightly singed fingers) by investing in a few handfuls of assorted small ceramic capacitors with leads. They're pennies apiece if you can find a suitable store; they'll look like the brown one in the upper right corner of the top photo accompanying the capacitor article. (Hint — you're looking for a shop that has a wall of unpackaged electronic bits in little trays and drawers; a good place to look is near a university with an active electrical engineering program.) TenOfAllTrades(talk) 01:57, 3 August 2010 (UTC)[reply]
Right - the fact that it's potted in ceramic has nothing to do with the fact that it's packaged as a surface mount device. Potting and packaging are related but different parts of the mechanical design of electronic components. You can buy ceramic capacitors in all sorts of different packages. (I guess in the specific case of capacitors, "potting" is probably not even exactly right. Unlike other ceramic potting (e.g., for an IC), the dielectric layers in a cap are super-thin - they're probably CVD'ed, sputtered, or grown as an epitaxial layer). Nimur (talk) 03:25, 3 August 2010 (UTC)[reply]
Grr, I hate that problem. For the really tiny SMT ones, it's very difficult just to hand solder on leads, as the component, and the terminals, are so tiny. When I've done that in the past I've needed a little fragment of board (I think I had hardboard to hand) to hold things in place. I poked holes in it for the leads to give some stress relief (the solder bonds being so tiny they're not trustworthy to mechanically hold the leads on), threaded the leads through those, and soldered two little pools of solder at the far end, onto which I dropped and quickly resoldered the SMT component. The trick is preventing a bridge forming between the two pools; I think I needed more flux (proportionately) than usual, as the little spheres that solder wants to form into are boulders compared with a little 3 mm SMT part. -- Finlay McWalterTalk 11:01, 3 August 2010 (UTC)[reply]
One of my colleagues suggests the following hack: take two insulated wires and tape them together, forming a slightly rigid wire object. Hotglue the SMT component lengthways on this. At one end of the component, burn the insulator off one of the two wires (with an old soldering-iron tip), put a sliver of solder in the gap (between the exposed wire and the terminal of the SMT component) and squish the wire-solder-component sandwich together with the heated iron, forming a bond. Do likewise with the other lead at the other end. -- Finlay McWalterTalk 12:09, 3 August 2010 (UTC)[reply]
A picture of this is here. -- Finlay McWalterTalk 12:12, 3 August 2010 (UTC)[reply]
Adapters exist for using SMT components on breadboards. Digi-Key (etc.) carry these, search for 'Surf-boards'. 128.95.172.173 (talk) 00:59, 7 August 2010 (UTC)[reply]

What benign medical conditions are easily misdiagnosed as a deadly incurable condition?

For a short story I’m working on I’m looking for a chronic benign condition which is often misdiagnosed as a lethal, incurable condition. To fit the needs of the story, the lethal condition should also be typically relatively non-debilitating for a few months to a year. This should be the kind of mistake which could actually be made, even with analysis by a specialist and a variety of tests. Thank you for helping me make my writing as scientifically accurate as possible. --S.dedalus (talk) 02:42, 3 August 2010 (UTC)[reply]

[17] suggests Aortic dissection - which can be mistaken for heartburn. Dunno that it would still be mistaken after specialists and tests had been done though.
[18] suggests that strokes in people under the age of 45 are misdiagnosed as a bunch of relatively mild diseases.
SteveBaker (talk) 02:55, 3 August 2010 (UTC)[reply]
Thanks Steve, but I'm actually looking for benign conditions which could be misdiagnosed as deadly ones. A traumatic but fortunate turn of events to be sure! --S.dedalus (talk) 03:03, 3 August 2010 (UTC)[reply]
Doesn't it follow that if A can be misdiagnosed as B - then B can be misdiagnosed as A? The situation doesn't often happen the way you describe because if something benign (like indigestion) gets misdiagnosed as something serious (a heart attack) then immediately, everyone rushes around, gets in experts, does detailed testing - and eventually discover that it's really something benign. It's only in the reverse case where the misdiagnosis tends to go undiscovered until the condition gets much worse. SteveBaker (talk) 14:03, 3 August 2010 (UTC)[reply]
Perhaps it's different in places with good state funded medical care. If you attend Casualty at an Australian hospital with indigestion and the symptoms of anxiety from the fear that you may be having a heart attack, you'll be treated as if you're having a heart attack - several hours of monitoring followed by an appointment with a cardiologist. --203.202.43.53 (talk) 06:35, 4 August 2010 (UTC)[reply]
Haha. I find it funny you slip some politics into your non-response. Shadowjams (talk) 06:36, 4 August 2010 (UTC)[reply]
I'm not sure there is anything that meets all your conditions. If there is something that is "often" misdiagnosed, specialists quickly become aware of it and get more careful with their diagnoses. The closest thing I'm aware of is that benign forms of prostate cancer are pretty frequently misdiagnosed as aggressive forms -- but at the stage where this can happen, even the aggressive forms are still treatable. Looie496 (talk) 04:17, 3 August 2010 (UTC)[reply]
True. I didn't mean to imply this had to be a common occurrence. It could be something extremely unusual. A rare genetic condition or something perhaps. --S.dedalus (talk) 06:17, 3 August 2010 (UTC)[reply]
Your best chance is to just go with a technical error. True story - when I was 22 I blew out my right knee's ACL in a basketball game, but didn't realize it (thought it was just a sprain). The swelling went away pretty quick and I got on with my life. 3 months later I went to see an orthopedic specialist because that knee had some persistent swelling in the rear. He ordered an MRI, after which he informed me I had cancer in the lymph node(s?) in my knee. A second opinion revealed the blown ACL (and the first doctor to be an idiot), but for a full 72 hours my entire family thought their otherwise healthy 22 year old son was dying of KNEE CANCER. Bizarre. 218.25.32.210 (talk) 04:56, 3 August 2010 (UTC)[reply]
Wow, sorry to hear that! It must have been quite a scare. I had a roommate a while back who was told by a doctor that his wracking coughs and chest pain were "allergies." A more competent doctor realized that he was actually suffering from a spot of pneumonia. :) --S.dedalus (talk) 06:30, 3 August 2010 (UTC)[reply]
I can't really think of anything benign that can be misdiagnosed as a lethal condition except, perhaps, rarer forms of benign skin carcinomas/melanomas? A tumor can be benign (i.e. not invasive to surrounding tissues) or cancerous (where it starts to invade other tissues, and perhaps metastasize around the body). I imagine there are certain benign tumors, probably the rarer ones, that can initially appear as potentially cancerous but which later, after biopsy, are revealed to be benign. Problem (or not!) with this is that no doctor would say "you've got cancer", they'd say something along the lines of "this looks suspicious, so we'll check it out with a biopsy". So they're not going to misdiagnose you on the spot with cancer of the skin.

Thanks for the help everyone! How about multiple sclerosis? From what I've been reading, it sounds like MS diagnoses are extremely difficult to make and rather subjective. Is it possible for some, perhaps rare, condition to have all the symptoms of MS but slip through the battery of tests they put suspected patients through? --S.dedalus (talk) 06:30, 3 August 2010 (UTC)[reply]

I don't think MS fits the bill for a benign medical condition. People with MS do live long lives with proper care, but the life expectancy is still shorter than people without MS. Keeping that aside, I can't think of any diseases that have the same symptoms of MS without causing harm, because the very nature of MS is demyelination which is bound to have bad side effects. Regards, --—Cyclonenim | Chat  11:19, 3 August 2010 (UTC)[reply]
Here's a short-term one: marathon runners and other endurance athletes sometimes develop left ventricular hypertrophy, where the left ventricle of their heart grows abnormally powerful (to assist them in their exercise); this changes their cardiac rhythm, but is perfectly healthy. But LVH is also caused by heart disease, and also manifests itself as an unusual cardiac rhythm. So it's not unheard of for a scenario like this to happen: during a race, a marathon runner develops a minor injury (such as a scraped knee following a trip) which takes them to the on-course medics. These folks give them the once-over (which is wise, as that trip could have happened when the person felt faint, even though they don't remember it now) and listen to their hearts. People who work, or volunteer, as on-course medics are often nurses or EMTs during the day, so they're not used to dealing with serious athletes but very often see heart attacks and arrhythmias in the general public. They listen to the athlete's heart, hear that weird left ventricular sound, and think it's heart failure. And as the runner has other symptoms that they've come to associate with heart problems (tachycardia, fatigue, breathlessness, sweating) they're prone to treat the person as if they're in the midst of a serious coronary episode. And it's hard to tell these people to ignore their training and experience, as genuine heart patients often deny they're seriously ill, and people really do die from genuine heart attacks during marathon races. The medics want to ship the runner off to hospital for proper investigations (which will quickly determine he's a freakish, but healthy, specimen), but the runner doesn't want his race ruined. I've heard of at least one case where a runner, still with a bleeding minor injury, escapes the tent and leads a band of medics off in pursuit, yelling to them that if he's really got heart failure then they should be able to catch him. Which is all very funny, unless he really has had a heart attack. -- Finlay McWalterTalk 11:32, 3 August 2010 (UTC)[reply]
To meet your full criteria (that of misdiagnosis for a lengthy time) would require some pretty bad, and very under-resourced, doctors. -- Finlay McWalterTalk 11:39, 3 August 2010 (UTC)[reply]
An easier approach — depending on the plot — is to make the patient do their own research on it. "Oh, hmm, chest pains — oh my! I'm dying!" ... and then it turns out to just be gas. Not a great example but you can see the gist. It would also ring familiar — how many of us have Googled funny little symptoms ("why do I have a pain here?") and then find that one of the possible reasons (with no indications given of probability) is, you know, that we have Ebola or something. Obviously that would require some retooling of the plot, but somebody slipping into a bit of hypochondria can be a wonderful character development... --Mr.98 (talk) 12:24, 3 August 2010 (UTC)[reply]
How about a complicated set of benign conditions which when taken together provide all of the symptoms of something serious? That ought to be relatively easy to figure out. Just pick out a suitably serious medical condition, then look at all of the symptoms it produces and hypothesize that your character has half a dozen minor ailments that happen to add up to meeting all of the symptoms. It might make a neat plot point to have some of the individual symptoms disappear one at a time. SteveBaker (talk) 14:03, 3 August 2010 (UTC)[reply]
That's sometimes known as medical student syndrome. The problem is that the question also requires specialists to be fooled after performing tests. Without that requirement it's easy. Looie496 (talk) 17:42, 3 August 2010 (UTC)[reply]
This is a great question though. I'm sorry, I don't have the background to really answer this well, but I bet if I rewatched enough House I could come up with one... if you don't get a good answer here I'd actually suggest reasking it again soon because I'm sure there're some interesting answers to this from med students / very bored doctors. Shadowjams (talk) 04:39, 4 August 2010 (UTC)[reply]

Pheochromocytoma (PCC) is a rare, but potentially lethal (if unnoticed and it leads to malignant hypertension or hypertension) condition and can be misdiagnosed as straightforward hypertension. In my case, my doctor explored the possibility of PCC due to my relatively young age and high blood pressure. I can imagine a situation where a doctor wouldn't consider it for an older patient, for the purposes of fiction, of course! --Rixxin (talk) 15:56, 4 August 2010 (UTC)[reply]

Interesting problem with AM Radios...

I wasn't quite sure whether or not to post this in the Computing Reference Desk, since it seems too low-tech for that page. Anyway, here goes.

I live in a 3,000 square foot 2-story house that's on a lot that's about 90 feet wide by 120 feet long. (From me looking out my window, but my sense of perspective is sometimes very off). The hill that our house is on is about 325 feet above sea level, and it slopes down gently to river level (I don't believe it's sea level) over the course of just over a mile. The town overall gets fairly decent AM reception on most local channels. However, since a week or two ago, the AM reception in our house's lot has completely disappeared. Whenever we are driving home from anywhere and we have the AM radio on, the radio works fine as we are driving down the road, but all of the channels go from clear to static (over the course of 10 feet or so) right as we pull into our driveway. This is the same for any other AM radio that is on our lot. FM works just fine, and so does any other sort of wireless communication except AM. The local AM channels also work on the properties of the 30+ other houses on our street. It's almost like there is an "AM black hole" that encompasses our property. I thought that it might have been our wireless router (whose antennas we had repositioned around the same time this ordeal started) causing the trouble, but the other members of my household (one with a ham radio license) believe this is not the case. It is nothing severe (it's not like we are in a place where the only way we get news is from AM stations), but it just seems a little strange. Could there be anything (electronic or geological) that is causing interference with the AM band? Any insight (be it professional or not) would be greatly appreciated. Hmmwhatsthisdo (talk) 04:18, 3 August 2010 (UTC)[reply]

This is a case of radio frequency interference. You can try to determine what it is by powering down everything in your house, and possibly your neighbours' houses too. Nowadays switch-mode power supplies, computers, motors, video equipment and compact fluorescent lighting can all put out interference in the megahertz range. You may have to change your light globes to tungsten to reduce the interference. Graeme Bartlett (talk) 06:12, 3 August 2010 (UTC)[reply]
... or ask yourselves and possibly your immediate neighbours "What was switched on for the first time a week or two ago?" You should be able to tell the difference between RF interference and just weak signal (the static usually sounds harsher for RF). I notice that certain houses suffer this problem as I drive past whilst listening to my car radio. Christmas lights are a common cause, but freezers, motors and lots of other appliances can cause the problem, especially if they are run at some distance from the electricity supply. Fluorescent lights should cause only a very local interference unless they are faulty. Dbfirs 08:11, 3 August 2010 (UTC)[reply]
Yes, but neither my household nor my neighbors (as far as I know) installed anything new around the time the problem started. Also, their houses do not have any sort of interference to speak of. However, I just noticed today while driving home that the area of interference has decreased ever so slightly, by about 2 feet or so. It is definitely NOT weak signal, as all of the channels (not just one) drop out to static over the course of 5 feet or so. However, the list of possible causes for interference is helpful, and I will check with my neighbors later and see if they did happen to install anything. It just seems odd that such interference would happen so suddenly, and it seems unlikely that something in a neighbor's house could cause interference in our house. Hmmwhatsthisdo (talk) 00:37, 4 August 2010 (UTC)[reply]
It's possible that the cause is some appliance that has just started to malfunction. Have you tried carrying round a portable radio to determine where the interference is strongest? Dbfirs 06:32, 4 August 2010 (UTC)[reply]
It indeed sounds like there is a new source of radio frequency interference near your house. The source is often power company equipment. I have seen two utility ground wires touching and the small voltage between them was enough to interfere with radio reception. The FCC in the US is in charge of getting the creator of such "harmful interference" to shut it down, but they are grossly understaffed. The local power company might have personnel who could track it down if you call in and complain, but only if you insist the interference is coming from their equipment. I have known them to track it down to a light fixture in the neighbor's attic, or the new pizza oven in the corner restaurant. It is extremely hard to do "direction finding" and locate the interference. It is more process of elimination. It could be a malfunctioning doorbell, any electric appliance, a furnace ignitor, a fluorescent light, dimmer switch, low power radio transmitter such as a garage door control, or an aquarium heater. You might listen on your car radio, then kill the power in the house (taking precautions that no one is startled and nothing essential gets shut off) and see if reception improves. Then try to get the neighbors to do the same if it is not on your premises. Edison (talk) 03:27, 5 August 2010 (UTC)[reply]

Brand name drugs vs generics

I read the wikipedia article about generic drugs, but I am having a hard time understanding why a generic drug can really differ from the brand name. I have customers who use all generic drugs and don't have any problems or complaints with any of them. But those same people, when switching from brand to generic on certain drugs, have a problem. An example would be generic Wellbutrin XL, both the 150mg and 300mg. It's a widespread problem. Almost everyone complains about it, not just one or two people. So I know there is a real issue. There are a few other drugs out there that really seem different than the brand name. I'm convinced it's not just in the head of my customers because they use other generics with no problems. Since the FDA regulates all generics and tests them also, and I'm sure by now they have heard about it and would have done something if there was a chemical difference, what could be an explanation for the problem? —Preceding unsigned comment added by 76.169.33.234 (talk) 04:59, 3 August 2010 (UTC)[reply]

Tell the FDA, not us. Sf5xeplus (talk) 05:23, 3 August 2010 (UTC)[reply]
It is quite possible that the generic is manufactured at a different plant or by a different company than the brand name drug. Differences in manufacture could have differing standards of quality control, which may lead to problems. --Jayron32 05:44, 3 August 2010 (UTC)[reply]
From the "generic drugs" wikipedia article, the difference between two batches of the same brand name drug is about 3.5%, which is about the same between a batch of a brand name drug and a batch the its generic. So I don't think that's it either. Also, many of these generic manufacturers are very reputable and their products are tested and they get audited. So even though it could be a very short term problem, it wouldn't be a long term problem as this one is.
It could perhaps be an example of the Placebo (or Nocebo) effect, based on the patient's belief that a generic drug is somehow not the real thing. AndrewWTaylor (talk) 09:42, 3 August 2010 (UTC)[reply]
I mentioned that they have no problems with the other generics they are taking for this reason...
I once had a problem with a generic that used different inactive ingredients than the brand name drug, and it turned out I had an allergic reaction to one of them. Since the inactive ingredients are generally chosen to be pretty common and benign substances, I would assume it is pretty rare to see such a reaction, but I don't know how rare. Dragons flight (talk) 10:41, 3 August 2010 (UTC)[reply]


This is just a guess, but two things come to mind. The first is the inactive ingredients, as Dragons flight mentioned. The packaging of generic medications often says something like "the same active ingredient as (some brand name drug)", implying that the inactive ingredients are not the same. A second factor is quality control. I've read an anecdote from someone who supposedly was in the pharmaceutical industry. The person said that the internal quality control standards were tighter for the brand name drugs they made than the generic ones they made. In that story, the person was talking about variation in the amount of active ingredients in the manufactured drugs. I guess lower quality control standards may also affect the types and levels of impurities in the products. --173.49.16.4 (talk) 11:42, 3 August 2010 (UTC)[reply]
While the active ingredient is the same between a brand name and a generic drug, the pill itself is not the same. A pill is not just the drug. It is also a coating that dissolves in the body to release the drug at the right time. For many drugs, the release is performed in small increments over time. The FDA does not strictly regulate when and how a drug is released in the body.
Of course it does. It must be equivalent and have the same bio-availability.
So, consider something like a blood pressure medication that is marked as "extended release" (I am purposely avoiding a specific brandname because of my job). A brand name drug is more expensive. The expense is not just because it is a brand name drug. The expense is also for quality. That expensive pill may have 100mg of a drug that releases 10 times over 24 hours in 10mg doses. Then, consider the cheap generic. It also has 100mg of the same drug and it is extended release, but it releases 50mg right away and 50mg 12 hours later. In a 24 hour period, they release the same amount of medication, but the brand name drug is clearly better at control. The goal is to spread the release of the drug into smaller and smaller quantities over a long period of time.
If your argument were true, people would die from a generic form of a blood pressure medication.
So, why is the generic so bad? Why don't they make it release multiple times instead of two big doses? The same drug companies that make the brandname drugs also make the generics. They want people to buy the brandname drugs. So, they purposely make the generic ones much cheaper and lower quality. For many people, the generic is acceptable. For others, the generic is not. Thus, a market for the brandname drug is maintained after the generic hits the market. -- kainaw 12:08, 3 August 2010 (UTC)[reply]
"The same drug companies that make the brandname drugs also make the generics." While this is sometimes true, often it is not: most generics (by volume) are made by other companies than the originator of the licenced brand-name drug, when the latter's patent has expired. Generic manufacturers can do so more cheaply because they're not amortising any of the very considerable research, trialling and licencing expenses the brand-name originator had both for that drug ("molecule" in the trade) and around nine others that never made it all the way to market, which averages out to (I believe) around a billion US$ per marketed drug.
Apart from the considerations that Kainaw and others have mentioned, a possible further factor is counterfeiting. Making fake drugs and inserting them into the legitimate pharmaceutical trade (as well as selling directly online) is very big international business. While licenced brand name drugs are dearer and therefore more lucrative to fake, they are protected by anti-counterfeiting measures, which include (but are not limited to) markings on the tablets/capsules and coded information on the packaging, as well as procedural checks. The standards of such protections for generic drugs are likely somewhat lower and thus easier to overcome, and the markets for generics in some instances less alert, so generics too will be widely - perhaps more widely - faked and thus may have inappropriate levels (from too much to none) of the supposed active ingredients and possibly other ingredients actively harmful or more widely allergy-provoking. [Disclosure: I used to work in (non-pharmaceutical) administration at a large pharmaceutical manufacturing site.] 87.81.230.195 (talk) 13:58, 3 August 2010 (UTC)[reply]
"So, they purposely make the generic ones much cheaper and lower quality" — citation needed. Do you even have any evidence that it is the same companies making both generics and brand name? This strikes me as highly unlikely on the whole. If you are going to make these kinds of assertions and generalizations (which smell of old wives' tales), please cite them, lest we fall into rumor-repeating. --Mr.98 (talk) 13:55, 3 August 2010 (UTC)[reply]
From Merck here, "In fact, generic drug makers manufacture many trade-name products for companies that control the trade names. Sometimes, more than one generic version of a drug is available." Merck is a drug company that produces both brandnames (trade names) and generics. -- kainaw 17:09, 3 August 2010 (UTC)[reply]
A bit of OR from Australia here.... I regularly use medication for hay fever, and am happy to use generics, usually having no problem with them at all. Once I tried a new generic, and found very reduced performance. A close read of the packaging showed "Made in India". Most of our medications are made locally. I spoke later to my pharmacist, who said I wasn't alone in my experience, and he was no longer stocking that product. Make of this story what you will. HiLo48 (talk) 12:38, 4 August 2010 (UTC)[reply]

Cold object

Suppose an object is cold but doesn't change temperature easily. Will it feel cold to the touch? 81.137.101.82 (talk) 13:25, 3 August 2010 (UTC)[reply]

If it is colder than your hand, it will feel cold. "Cold to the touch" means that it is colder than your hand. It is a very poor measure of temperature. -- kainaw 13:32, 3 August 2010 (UTC)[reply]
"Doesn't change temperature easily" is a somewhat confusing phrase. Let's consider instead materials that conduct heat easily (like metals) versus those that are good insulators (like maybe wood). If you chill down a block of steel and a block of wood to the same temperature and put your hands on each of them - then the steel feels much colder to the touch than the wood does even though they are at the same temperature. That's because your skin isn't measuring the temperature of the material you're touching so much as the rate at which the skin is giving up heat to that material. The steel, conducts heat away from your hand easily - so (assuming it's colder than you are), it feels very cold to the touch. It'll continue to conduct heat away until the whole chunk gets warmed up to your body heat. The wood, on the other hand, conducts away just enough heat to warm up the surface of the wood to body heat - but because it's a good insulator, that thin warm layer insulates you from the colder stuff inside. The heat from your hand doesn't easily spread out into the bulk of the material. So right where you're touching it, the wood rapidly reaches body heat and the sensation of it being cold to the touch disappears. The bulk of the wood is still quite cold though - and it would take an enormous amount of time for your body heat to warm the entire block up.
The opposite happens when you touch something hotter than skin temperature. A block of hot metal will transmit heat into your hand quite easily - where a block of wood won't - so the metal feels hotter than the wood.
SteveBaker (talk) 13:52, 3 August 2010 (UTC)[reply]
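To put rough numbers on the "rate of heat loss, not temperature" point, here is a minimal sketch using the standard contact-temperature formula for two semi-infinite bodies; the thermal property values are ballpark assumptions, not measured data:
```python
# Contact temperature between skin and a chilled block, treating both as
# semi-infinite bodies: T_contact = (e_skin*T_skin + e_obj*T_obj) / (e_skin + e_obj),
# where e = sqrt(conductivity * density * specific heat) is the thermal effusivity.
# All material values below are rough textbook-style assumptions.
from math import sqrt

def effusivity(k, rho, cp):
    """Thermal effusivity, sqrt(k * rho * cp), in W*s^0.5 / (m^2*K)."""
    return sqrt(k * rho * cp)

skin  = effusivity(0.37, 1000, 3500)   # human skin (approximate)
steel = effusivity(50.0, 7800, 490)    # plain carbon steel (approximate)
wood  = effusivity(0.15, 600, 1700)    # dry softwood (approximate)

T_skin, T_block = 33.0, 0.0            # deg C: skin surface vs chilled block

for name, e in (("steel", steel), ("wood", wood)):
    T_contact = (skin * T_skin + e * T_block) / (skin + e)
    print(f"touching cold {name}: interface sits near {T_contact:.1f} C")
# Steel drags the interface down toward the block's temperature (feels very cold);
# wood leaves it close to skin temperature (feels only mildly cool).
```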
Note that 'doesn't change temperature easily' could also mean the object has a high specific heat capacity. This is also likely to make a difference to how cold an object feels particularly over the long term. But I'll let someone else expand on that (my quick search didn't find anything but a lot of confused discussions) Nil Einne (talk) 16:52, 3 August 2010 (UTC)[reply]
Indeed, the heat capacity is the best interpretation of the difficulty in changing temperature. It is the amount of energy required to increase the temperature by a certain amount. If an object has high heat capacity (total heat capacity for a conductor, specific heat capacity for an insulator) then it will take longer to change temperature to match your hand, so it will feel colder/hotter for longer. For something like wood (an insulator with pretty low specific heat capacity), the bit you touch warms up to the temperature of your hand almost instantly, which is why it doesn't feel cold. If it had a much higher specific heat capacity, then it would take longer to warm up and you would initially feel it as cold. --Tango (talk)
Heat capacity is not enough. Much more important is heat transmission, i.e. insulation. Ariel. (talk) 17:17, 3 August 2010 (UTC)[reply]
That's more important in determining whether something feels cold, but it's not more important in answering the OP's question, since the question was clearly about ease of changing temperature, not ease of transferring heat within itself. --Tango (talk) 19:03, 3 August 2010 (UTC)[reply]
Still, the question was not this complicated. It was merely "Will it feel cold to the touch?" Discussing how cold is another topic all together. -- kainaw 19:07, 3 August 2010 (UTC)[reply]
The answer to your question is "No", it will not feel (as) cold. Skin does not measure absolute temperature - but it doesn't measure relative temperature either. It measures the change in temperature, specifically how fast heat is drawn off, or added to, the skin. This is why 75 degree air feels warm, while 75 degree water will feel cold. Note that the hypothalamus does measure absolute temperature, and works differently from the skin. Ariel. (talk) 17:14, 3 August 2010 (UTC)[reply]
Well, let's look at an example. What doesn't change temperature easily? Something like glass or ceramic, right? Does a cold glass or ceramic object feel cold to the touch? I don't think I need to answer that. Looie496 (talk) 18:22, 3 August 2010 (UTC)[reply]
They don't feel cold because they are thermal insulators, not because they have high specific heat capacity. --Tango (talk) 19:03, 3 August 2010 (UTC)[reply]
75F metal feels cooler than 75F plastic for the reason that the metal cools your hand faster. John Riemann Soong (talk) 19:16, 3 August 2010 (UTC)[reply]
Glass and ceramic still conduct heat away from skin better than wood and plastic do. --Chemicalinterest (talk) 19:50, 3 August 2010 (UTC)[reply]

sulfonates and cells

Do sulfonates have any signalling roles in cells? Ignoring pH considerations, can the presence of them inside a cell cause apoptosis? Are sulfonate ligands less likely to repel each other than carboxylate ligands? John Riemann Soong (talk) 16:26, 3 August 2010 (UTC)[reply]

What sulfonate and cells are you experimenting with? One source suggests that sodium tanshinone IIA sulfonate (STS) protects against stress-mediated apoptosis in cardiomyocytes. Regards, --—Cyclonenim | Chat  16:31, 3 August 2010 (UTC)[reply]
Wow thanks for the rapid response. =) Umm, I don't actually have the sulfonate -- rather I'm concerned about it being produced when I react hydrogen peroxide and gold nanorods with sulfur-attached carboxylate ligands of various alkyl lengths (I think 7 carbons maybe? I don't know if there are PEG motifs -- the company I bought these rods from seems to keep it a secret). I am trying to see if there's a side reaction between hydrogen peroxide (at 200 micromolar) and these thiol-attached linkers. It is my hope that the gold-sulfur bond inhibits oxidation, but if not, I suspect the bond may break and that the linker might be turned into a sulfonic acid, e.g. a sulfonate (since the rods are diluted and the linkers are found only on the surfaces I really doubt it will change the pH that much).
I'm working with a549 cells. I know many of the rods survive intact without aggregation (aggregation would be one symptom of the nanorods losing their sulfide-bound linkers) and cells endocytose them successfully. However I wonder if a minute amount of the linkers could be oxidising into sulfonates and poisoning the cells. One paper reports that a549 cells can actively grow at concentrations of 200 micromolar hydrogen peroxide, but that was without the presence of things like gold nanoparticles. John Riemann Soong (talk) 16:48, 3 August 2010 (UTC)[reply]
Sorry about the not so rapid response this time! I think that unless you have a reason to believe the gold nanoparticles are going to affect the growth of the cells from previous experiments, there's no reason to believe it'd have any effect on the growth with minute quantities of H2O2. I'm guessing either your stock of cells or gold nanoparticles are limited; if not, just give it a go? It's half the fun of experimentation. If it fucks up, it fucks up. That's science! Regards, --—Cyclonenim | Chat  18:41, 3 August 2010 (UTC)[reply]
Whoops, just realised I overlooked a sincere part of your first paragraph. This is above my head really, you're a year ahead of me and a biochemist as opposed to a biomedical scientist. I think the above is as far as I can speculate really! Regards, --—Cyclonenim | Chat  18:44, 3 August 2010 (UTC)[reply]

stoichiometric tests for hydrogen peroxide

What are some quick and dirty tests I can do to test for hydrogen peroxide and, if possible, measure how much is left in solution? I do not think I have any thiocyanate or phenolphthalein in my lab, but I could check... John Riemann Soong (talk) 16:38, 3 August 2010 (UTC)[reply]

I have some ascorbic acid and pH paper...I sense some nasty acid dissociation calculations coming up, but does ascorbic acid react with hydrogen peroxide instantaneously? John Riemann Soong (talk) 17:10, 3 August 2010 (UTC)[reply]

Are any of these any use? [19] 94.72.239.127 (talk) 19:17, 3 August 2010 (UTC)[reply]
Regarding mixture of hydrogen peroxide and ascorbic acid: I think it needs a catalyst. I occasionally mix tainted H2O2 and ascorbic acid and get a hot test tube almost instantaneously. But when I try to do it deliberately, it never works. --Chemicalinterest (talk) 19:45, 3 August 2010 (UTC)[reply]
You've never intentionally tried it in the presence of your iron(III) chloride? Not that I want to use a Lewis acid catalyst, seeing as how that might make the calculations harder. My orgo prof used to tell me that some quick and dirty recipes for Friedel-Crafts reactions used to be: add in your organic reagents...add HCl...drop in a rusty iron nail...cover reaction vessel, wait 10 minutes (or 2 hours, depending on reaction). John Riemann Soong (talk) 15:01, 4 August 2010 (UTC)[reply]

Not eating with tablets

The instructions on my medication say that I have to take them on an empty stomach, or one hour before eating. Why is this, what relevance will my pepperoni pizza have to my tablet? (I'm assuming here that it's a general rule and not a specific necessity of this drug, but just in case, it's flucloxacillin.) Vimescarrot (talk) 17:34, 3 August 2010 (UTC)[reply]

Some research has led me to note that it is not a tablet, but in fact, a capsule. Who knew. Vimescarrot (talk) 17:37, 3 August 2010 (UTC)[reply]
No it's specific to this drug. Specifically: "Floxapen is extremely well absorbed orally. A single 250 mg dose achieves an average peak serum level virtually equal to that achieved by an equivalent IM injection. The peak serum level is achieved half to one hour after administration. Floxapen should be taken 1 hour before meals to ensure that maximum absorption is achieved." Basically what they are saying is that by taking it with food you slow down the absorption, but for this drug it's better if it has a very high peak absorption (presumably to kill as many bacteria as possible with a sudden strong dose). Other drugs are different, for them you might want slow and steady. Sometimes they say to take with food because it causes stomach upset. Ariel. (talk) 18:05, 3 August 2010 (UTC)[reply]
Oh. Well, that clears that up. Thanks. Incidentally, does anyone know how I might stop burping up that awful taste afterwards? Vimescarrot (talk) 21:13, 3 August 2010 (UTC)[reply]
Tic Tac? DaHorsesMouth (talk) 22:58, 3 August 2010 (UTC)[reply]
In addition, or perhaps more accurately as a more extreme form of avoiding stomach upset: with some NSAIDs, taking them with meals helps reduce the chance of getting a peptic ulcer and other similar problems like bleeding. [20] Nil Einne (talk) 06:07, 4 August 2010 (UTC)[reply]

what happens when PBS cell buffer is exposed to air?

I left the cap open for 1 whole day ... it wouldn't be infected or be oxidised by now...would it? John Riemann Soong (talk) 18:43, 3 August 2010 (UTC)[reply]

If you are talking about phosphate buffered saline, then I think that there is nothing to worry about. It should be pretty unreactive, so it won't oxidize, and it is unlikely to grow any bacteria (if it could, it would probably do so even with the lid on). Unless something actually fell into the open bottle I think that it should be fine. 24.150.18.30 (talk) 02:12, 6 August 2010 (UTC)[reply]

Traveling to nearby stars

After looking at this site, http://kisd.de/~krystian/starmap/, I was asking myself the following question:

If we decide to travel to any of these nearby stars with current technology, is it an advantage if the star is not too far to the 'north' or 'south' of us?

I mean, so far, while traveling in our own solar system, we've never been 'up' or 'down' very much. Is this really more difficult without the slingshot acceleration you get from traveling past planets? Or is just that there is nothing special to see/do?

Of course, I understand that Proxima Centauri is the least far away, so it's quite logically the easiest star to travel to, just wondering if moving up or down is harder to do.

Thanks. Sealedinskin (talk) 19:43, 3 August 2010 (UTC)[reply]

The solar system is so miniscule compared to the distance between it and the nearest star that it will make very little difference. A down-to-earth equivalent would be a centimeter (inch)-high hill. --Chemicalinterest (talk) 19:47, 3 August 2010 (UTC)[reply]
(Pedentry attack! Can . . . not . . . resist!) The word is traditionally, and correctly everywhere but the USA, Minuscule :-) . 87.81.230.195 (talk) 20:53, 3 August 2010 (UTC)[reply]
"Pedantry". --Sean 21:09, 3 August 2010 (UTC)[reply]
One of my very common mispellings (oops, there's another one). --Chemicalinterest (talk) 22:07, 3 August 2010 (UTC) [reply]
It's not a question of distance, it's a question of whether a gravity assist would be useful for travel to a neighboring star. Intuitively I think it's unlikely you could get a big enough one to matter much, but I don't actually know. Looie496 (talk) 20:00, 3 August 2010 (UTC)[reply]
Relative to its size the milky way is flatter than a sheet of paper[21] (i.e. ratio of thickness to diameter). So it's not necessary to travel very far north/south (up/down? above/below?). Ariel. (talk) 20:36, 3 August 2010 (UTC)[reply]
Well, we're talking about nearby star systems. It's not flat on that scale. Looie496 (talk) 20:50, 3 August 2010 (UTC)[reply]
(ec) The most straightforward kinds of gravity assist, if used within our own solar system, would suggest a star on the plane of the ecliptic is easier to reach. If the plan is to travel quite distantly beyond that, using gravity assists, I would surmise that any gain in the local sense within our own solar system would be insignificant. Note that our ecliptic is not coplanar with the galactic equator, so a boost in the plane of our solar system would send us out of the galactic plane. Gravity assists don't have to be simple, though. Nimur (talk) 20:53, 3 August 2010 (UTC)[reply]
In any event, a gravity assist maneuver isn't bound to the plane of the ecliptic -- Voyager 1 was moderately deflected out of the ecliptic by Titan, and Voyager 2 is on a track 55° relative to the ecliptic. Probes such as Ulysses have been sent on solar polar orbits. The only reason we don't go outside the ecliptic very much is that there just isn't much there that's within a useful distance. In terms of interstellar travel, it both matters and it doesn't. On one hand, it matters because it's a significant proportional speed boost. On the other, it's still stupidly slow when you talk about interstellar travel, because arriving in 70,000 years (the length of time it would take Voyager 1 to reach Proxima Centauri, assuming that the Sun's gravity magically vanished) is functionally the same as never arriving at all. — Lomn 20:57, 3 August 2010 (UTC)[reply]
The Earth's orbital speed and gravity assists would help with travelling in or near the ecliptic, but I don't think they would be significant. The Earth's orbital speed is 0.01% the speed of light. I don't think it's really practical to do interstellar travel at much less than 10% the speed of light, so the Earth's orbital speed (and that of any planets you might use for gravity assists) is negligible. It makes far more sense to decide on stars to visit based on their distance from us and how interesting they are. --Tango (talk) 23:31, 3 August 2010 (UTC)[reply]
That is an interesting question; the answer depends on your definition of “travel” and “current technology”. With current technology we can not travel to other stars in the sense of living humans reaching another star, with or without gravity assists. If a non-functional space probe reaching another star system in some millions of years is counted as travel, it is easiest near the plane of the ecliptic; we need not do anything at all, since Voyager 1 will do that. Voyager 1 has a speed of 17 km/s and after exiting the Sun's gravity it will have a speed of 13 km/s. If it were directed at Proxima Centauri it would reach it in about 100 000 years. (Neglecting the relative motion between the Sun and Proxima Centauri.)
In order to reach another star, a spacecraft launched from Earth needs delta-v to leave the Earth's gravity (11 km/s), leave the Sun's gravity (ca. 12 km/s), reach a cruise speed, and decelerate at the other star if it is not a flyby mission.
The cruise speeds needed for different travel times to Proxima Centauri, with no compensation for acceleration time, the relative motion between the Sun and Proxima Centauri, or relativistic effects:
Travel time (years)    Cruise speed (km/s)
10 000                 120
1 000                  1 200
100                    12 000
10                     120 000
The greatest increase in speed from an unpowered gravity assist at the Sun and each of the four gas giants is:
Sun 0 km/s
Jupiter 26 km/s
Saturn 20 km/s
Uranus 14 km/s
Neptune 11 km/s
These can not be fully combined, and the inner planets could give some help. I have not worked out the math, but I expect the maximum increase in speed to be less than 50 km/s. With a somewhat lower increase in speed, a gravity assist can also be used to change the direction out of the ecliptic plane.
The greatest additional increase in speed from a powered gravity assist possible in the solar system is about 300 km/s, obtained by firing the engines just above the solar surface. The radiation temperature of about 4500 °C for some hours could be a problem.
A powered gravity assist around the Sun can, with minimal extra effort, be used for a mission in any direction.
For missions longer than 1000 years a gravity assist could have some utility, but even then the direction is of low significance; for faster missions, gravity assists within the solar system give insignificant extra speed.
--Gr8xoz (talk) 23:57, 3 August 2010 (UTC)[reply]
One note: Voyager 1 has a speed of 17 km/s and after exiting the Sun's gravity it will still have a speed of 17 km/s (rounded), see Specific orbital energy#Voyager 1.--Patrick (talk) 05:47, 4 August 2010 (UTC)[reply]
My mistake: the escape velocity at the position of Voyager 1 is 4 km/s, but the subtraction must be done with orbital energy, not speed.--Gr8xoz (talk) 07:54, 4 August 2010 (UTC)[reply]
I made some more mistakes: a powered gravity assist just above the solar surface could give an extra speed of 600 km/s, not 300 km/s, and the extra delta-v for escaping the gravity of the Sun and the Earth can not be directly added to the delta-v budget. Given this, a powered gravity assist could theoretically have some significance for missions longer than about 200 years; in that case 6000 km/s is needed (plus deceleration) and 600 km/s can be obtained for "free" by the powered gravity assist.--Gr8xoz (talk) 08:45, 4 August 2010 (UTC)[reply]
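As a rough cross-check of the cruise-speed table above, here is a minimal sketch that assumes a round 4 light-years to Proxima Centauri (the actual distance is nearer 4.24 light-years) and ignores acceleration time and relativistic effects:
```python
# Cruise speed needed to cover the Sun-Proxima distance in a given coast time,
# ignoring acceleration/deceleration phases and relativity (fine below ~0.1 c).
LIGHT_YEAR_KM = 9.4607e12
DISTANCE_KM = 4.0 * LIGHT_YEAR_KM      # assumed round figure; ~4.24 ly is closer to reality
SECONDS_PER_YEAR = 3.156e7

for years in (10_000, 1_000, 100, 10):
    speed = DISTANCE_KM / (years * SECONDS_PER_YEAR)
    print(f"{years:>6} years -> about {speed:,.0f} km/s")
# For comparison, a single unpowered Jupiter flyby adds at most ~26 km/s:
# even the 10,000-year trip needs several times that as cruise speed.
```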

August 4

Special relativity and Ender's Game

I just finished Ender's Game and have a question about the use of ultrahigh speed trips to extend someone's life. Obviously this is a work of science fiction, but assuming the premise of Ender's Game is true, would such a thing work? I tried to read the special relativity article but I didn't get enough information before it went into formulae and the like. If they needed Mazer Rackham (a general who was probably 40-60 years old) to be able to train new generals and they knew they wouldn't need him for 50 years but he would be dead when they did need him, can they send him on a starcruise that will take up 50 years of everyone's time except his own and then when he returns, he's just 2 years older? How does one's speed affect the physiology of one's biology (cells, tissues, organs, etc.) so that they don't shut down when they would ordinarily have (after 80 years, let's say)? Perhaps I'm just asking a basic question on special relativity -- I don't know -- but this book got me thinking about this. DRosenbach (Talk | Contribs) 02:11, 4 August 2010 (UTC)[reply]

The physics of the time dilation is spot on - we would certainly argue that getting a ship up to those speeds would be a spectacularly difficult challenge - but if you could, then this would work as advertised. The deal here is that time for the general seems to happen perfectly normally - it's just like someone pressed the 'fast-forward' button on the rest of the universe. His body chemistry/cells/tissues/organs and every other aspect of his life are perfectly normal. This isn't even a theoretical matter - astronauts have taken sensitive clocks on long space missions and measured this time distortion effect. Of course they aren't moving at anything like the speeds suggested in the book - and the time distortion is on the millisecond scale - not years. But the principle is well understood and quite in line with mainstream physics. SteveBaker (talk) 02:48, 4 August 2010 (UTC)[reply]
I'll add a 'me too' to SteveBaker's (as-usual-spot-on) response. Physicists are wont to say that "there are no privileged frames of reference" — in other words, the little bit of space inside a relativistic starship behaves no differently from a little bit of space at rest relative to the Solar System. There isn't any experiment that I could conduct inside the starship (at least, no experiment that didn't involve looking out the windows) that would tell me how fast I was going, nor how fast or slow my clock inside the ship would appear to an observer outside. On-board starship physics (and chemistry, and biology) are exactly the same as physics everywhere else. The cells in my body age normally, as far as the clocks on board ship are concerned. It's only when an observer looks in the window of my speeding starship and observes my apparently slowed clocks (and all the other sequelae of relativistic travel) that the magic happens.
As a matter of kinematics, I'll add as an aside that one year of acceleration at one gee (9.8 meters per second, per second) would take you just a hair (3%) over the speed of light in one year — if we lived in a universe without relativity. Since there are relativistic considerations, our starship never quite manages to exceed the speed of light, but about a year at one gee acceleration gets us just about close enough for government work. After that year of acceleration Rackham's aging would be slowed to a crawl. (Incidentally, was that a year of acceleration measured from the outside, or a year as seen on shipboard? Those two are actually going to be quite different durations....) TenOfAllTrades(talk) 05:46, 4 August 2010 (UTC)[reply]
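A quick back-of-the-envelope check of that "one gee for one year" figure, ignoring relativity and taking a year as roughly 3.16 × 10⁷ seconds:

$$v = g\,t \approx 9.8\ \mathrm{m\,s^{-2}} \times 3.16\times10^{7}\ \mathrm{s} \approx 3.1\times10^{8}\ \mathrm{m\,s^{-1}} \approx 1.03\,c$$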
... and, agreeing with everything said above, I'll just add that there are serious considerations of energy requirements to maintain one "g" acceleration, especially when one gets within a few percent of the speed of light, but presumably a future technology has almost unlimited energy at its disposal, even on a spaceship. Dbfirs 06:23, 4 August 2010 (UTC)[reply]
For the specific scientific discussion of this, see twin paradox. It has been a classic thought experiment of special relativity since the 1910s, and has since been experimentally verified in a number of different ways. --Mr.98 (talk) 12:08, 4 August 2010 (UTC)[reply]

I haven't read the novel, but I have some fair knowledge of special relativity. If we neglect gravitational fields, and if we need the general to age only 2 years while 50 years elapse on Earth, then using the time-dilation equation of SR we find that his space ship must be travelling continuously for those 2 years at a speed of 299552504.116 metres per second. That's just 239954 metres per second less than the speed of light. I have given you the idea. Now all you have to do is find a space ship that can go at such a speed for 2 years!!! harish (talk) 15:59, 4 August 2010 (UTC)[reply]
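A minimal sketch of that time-dilation arithmetic (using γ = 50/2 = 25; small differences from the figures quoted above come down to rounding):
```python
# Speed at which shipboard (proper) time runs 25 times slower than Earth time,
# i.e. 2 years aged per 50 Earth years: gamma = 1/sqrt(1 - v^2/c^2) = 25.
from math import sqrt

C = 299_792_458.0        # speed of light in m/s
gamma = 50 / 2           # Earth years elapsed per shipboard year aged

v = C * sqrt(1 - 1 / gamma**2)
print(f"required speed: {v:,.0f} m/s  ({v / C:.4%} of c)")
print(f"shortfall from c: {C - v:,.0f} m/s")
```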

I am pretty sure he aged 8 years, not 2. Googlemeister (talk) 16:31, 4 August 2010 (UTC)[reply]
The acceleration would be pretty killer, but iirc, the book does make vague mentions of technology for manipulating gravity and momentum being stolen from the Buggers in the First War, so that could probably be hand-waved well enough for the book's purposes. APL (talk) 16:51, 4 August 2010 (UTC)[reply]

From the topic, I was sure this was going to be about the ansible! APL (talk) 16:51, 4 August 2010 (UTC)[reply]

I wikilinked your ansible in case someone needs an article that explains what it is. Cuddlyable3 (talk) 21:45, 4 August 2010 (UTC)[reply]
I haven't read Ender's game, but it seems a similar "delaying your aging with relativity" plot device is used in Joe Haldeman's Forever War. The hero's lover spends some time travelling at relativistic speeds so that she doesn't age and die while waiting for him to return from the distant war. Astronaut (talk) 11:46, 6 August 2010 (UTC)[reply]

Just out of curiosity, I recently learned that high levels of copper in the blood cause a copper-colored ring around the eyes, and even more dramatically, silver causes not only your eyes but your skin too to change grey-blue. Are there any other elements, and maybe to a lesser extent compounds, that cause changes in coloration? 99.114.94.169 (talk) 03:05, 4 August 2010 (UTC)[reply]

This is most likely pseudoscience. See Iridology#Criticism. Dolphin (t) 03:11, 4 August 2010 (UTC)[reply]
Well, my question does not concern the actual color of the eye in judging health, but actual, well-documented conditions such as Kayser-Fleischer rings and argyrosis, and not only in the eyes but also in skin color, as the element builds up in the body. 99.114.94.169 (talk) 03:36, 4 August 2010 (UTC)[reply]
Carotenemia, Argyria, Chrysiasis, Lycopenodermia. I'm sure there are others. Ariel. (talk) 03:56, 4 August 2010 (UTC)[reply]
Thanks, I should probably just google every color and see :P. Well, since we don't have a wikipedia article on Lycopenodermia, maybe you can elaborate on the condition? 99.114.94.169 (talk) 04:02, 4 August 2010 (UTC)[reply]
Lycopenodermia is just like Carotenemia except it's from lycopene, and the color is more reddish. Ariel. (talk) 04:55, 4 August 2010 (UTC)[reply]

Mysterious movement of sediment in a wine glass

While on holiday in France recently, I happened to look into a wine glass which had been emptied some fifteen minutes earlier and not touched since. In doing so, I noticed that the sediment in the small amount of liquid that had gathered in the bottom was moving quite rapidly, in a roughly toroidal pattern (this was probably due to the raised dimple in the centre providing an obstruction). The glass was not in the sun, the table was hefty and not easily moved and there were no obvious sources of vibration anywhere around. My question then is, where was the energy for this movement coming from? At the time I didn't pay more than a few moments' attention, but now I find my curiosity is still piqued. I have tried to research this myself, but where to start?

Does anyone have any ideas, based on the scant information above, what was going on in that glass? The wine was an excellent Bordeaux red by the way!

Thanks Mark David Ward (talk) 08:16, 4 August 2010 (UTC)[reply]

Thanks, this sounds like something I can investigate. --Mark David Ward (talk) 12:22, 4 August 2010 (UTC)[reply]

How much smoke and smog would harm an engine?

Would regular driving in smoke from forest fires (say, 200 meters visibility - darned heat wave) harm a car's engine? VW 1.9 diesel, mildly turbo'ed if it matters. No one knows when the fires will settle down... East of Borschov 08:32, 4 August 2010 (UTC)[reply]

You might need to replace the air filter ahead of schedule, but otherwise I wouldn't think it would make any difference. 207.47.164.117 (talk) 10:47, 4 August 2010 (UTC)[reply]
I was close to the bushfires in Victoria, Australia 18 months ago, that took 173 lives and burned for many weeks afterwards. There have naturally been in depth investigations and many words written about them since, and I drove through the smoke to work for a month. I heard nothing at all about possible engine damage. My car is fine. HiLo48 (talk) 12:31, 4 August 2010 (UTC)[reply]
Volcanic Ash -- Effects on Transportation from the USGS suggests frequent oil changes, as well as increasing frequency for other things like seals and gaskets, improving air intake filters, and so forth. Volcanic ash is much denser than your average smog and smoke. Nimur (talk) 18:26, 4 August 2010 (UTC)[reply]
Volcanic ash bears no resemblance to smoke. Your (implied) comparison is like looking at the result of sandblasting your car and extrapolating to figure out what water from a garden hose will do. --Carnildo (talk) 00:36, 5 August 2010 (UTC)[reply]
Yeah - absolutely.
  • Volcanic ash is made of incombustible silicates and is incredibly abrasive - when it's sucked into the cylinder and gets heated in a combustion cycle, it melts - when the cylinder pressure drops, it condenses back onto the colder cylinder walls - and you have a nice solid rock coating the inside of your cylinder. It's like you coated your cylinder with sandpaper! That wrecks the piston rings in no time flat! The advice from the USGS to change oil frequently is an effort to flush out any of this sand-like stuff and to give your piston rings the best possible chance. But the only defense is really to have a really good air filter (something after-market - not the crappy paper ones that came with your car). Because of the propensity for the air filter to clog, you need to either change it or clean it very frequently.
  • Smoke is mostly water vapor and unburned carbon - which your engine will happily burn up if it gets into the engine. It's also soft, so the engine can clear it out easily. Modern gasoline often contains detergents that are there specifically to help the engine get rid of the carbon it might generate as it burns gasoline less than 100% efficiently - and this helps a lot to clear the particulates from the smoke.
  • Smog is smoke and fog - and we all know that our cars run just fine on a foggy day. (Actually, fog can improve your engine's running because the water mist slows down the combustion and makes it more even and complete...but the effect is admittedly minimal).
I suppose that if the nearby fires were able to suck a large enough fraction of the oxygen in the air, then your car's performance might be kinda poor - but it still wouldn't wreck the engine. SteveBaker (talk) 02:25, 5 August 2010 (UTC)[reply]

Spark plug disrupter ray

During World War II the Japanese scientist Hidetsugu Yagi apparently was working on some kind of "beam ray" (e.g. radio transmitter) that could stop automobile engines by disrupting the firing of their spark plugs. He apparently (according to some Congressional testimony from the 1940s) could get it to work if the hood of the engine was up on some Fords, but when the hood was down it didn't work, presumably because of reflections and so forth. He stopped working on it at that point.

My questions:

1. What's the likely explanation for his "success"? How would this really operate? Would the principle only apply to 1940s cars, or would it work on modern cars?

2. More speculatively, is there any technology or advancements since the 1940s that would lead one to expect us to be able to improve upon this sort of thing today? That is, Yagi found only very limited success at the time. Is there reason to think someone could do better now, in terms of whatever problems Yagi was presumably coming up against?

Any thoughts you had would be appreciated. I am not in the slightest an electrical engineer so any explanations that can be done without reams of equations would be appreciated. --Mr.98 (talk) 13:31, 4 August 2010 (UTC)[reply]

I doubt this was real. However, a strong electromagnetic pulse may disrupt modern cars, seeing how much electronics they contain. There's quite a lot to read in that article. EverGreg (talk) 14:31, 4 August 2010 (UTC)[reply]
Well, it was reported to Congress by Karl T. Compton in October 1945, who was part of the investigation into the Japanese scientific work during WWII, so it probably isn't wholly false. It certainly wasn't effective, as is clear from the description above, and was abandoned. It seems unlikely to me that the whole story is fabricated—Compton was no scientific rube, and Yagi was a pretty serious guy himself. (I am getting this information from the transcript itself, not secondhand through some kind of Tesla-nut website.) --Mr.98 (talk) 15:03, 4 August 2010 (UTC)[reply]
Electromagnetic pulse guns like this one get reported in the press from time to time. Disrupting the ignition system on a pre-electronic car would take a lot more energy than crashing the electronics in a modern car. --Heron (talk) 18:29, 4 August 2010 (UTC)[reply]
Cars from that era are incredibly chunky things - I know a guy with a fully restored c.1945 jeep - and the electrical system is so insanely simple - it's hard to believe you could do much to hurt them. I suppose you could induce a large voltage in the ignition coil - but the problem is that the wiring around them is designed to cope with huge voltages - so at best you might just cause a back-fire or some other kind of 'hiccup'. Even a car from the 1960's would be immune to most of those kinds of tricks. It's not until the era of electronic ignition that you could do something nasty to the low-voltage systems and stop the car. SteveBaker (talk) 22:03, 4 August 2010 (UTC)[reply]
Here's the exact quote from Compton:
Well, we talked to Professor Yagi, and this is what we found: Some years ago, in talking to Dr. Coolidge at our General Electric Co., Dr. Yagi had suggested that it might be possible to stop the action of an internal-combustion engine by focusing an intense beam of an electro-magnetic wave, which would cause sparking and interrupt the operation of the spark plugs. When he got back to Japan and he tried it, he said he could make it work on a Ford car if the hood was up, but if the hood was down, unfortunately the metal shielding prevented its work. [At what range did he say he could make it work?] Oh, 30 or 40 yards.
He then goes on to say that Yagi went from there to work on directed energy weapons (e.g. death rays), and could make them work to some degree for killing rabbits, but power consumption was enormous and even though it could kill rabbits, it couldn't kill muskrats. (I am not making this up!) Compton says the idea was dumb because you could use a rifle more effectively than whatever ray they came up with. Compton then says that they doubt they could get the power consumption they claimed they could anyway (they were claiming they could produce an oscillator to produce 80 cm radio waves with 200 kilowatts of continuous power output, but the oscillator manufacturer said they could get 40 kW at best).
Any of this sound feasible in the slightest? --Mr.98 (talk) 23:05, 4 August 2010 (UTC)[reply]
H. Grindell "Death ray Matthews" device could supposedly stop motorcycle engines. It appears to have been a Tesla coil discharge which was conducted via an ultraviolet spotlight beam. If so, then it could be defeated by metal shielding. —Preceding unsigned comment added by 128.95.172.173 (talk) 00:47, 7 August 2010 (UTC)[reply]

This story reminds me of the "engine-stopping ray" that Germany was thought to be developing in the late 1930s. Here's the story as told by Reginald Victor Jones in his book Most Secret War:

There was also, incidentally, the story that whatever was in the tower at the summit [of the Brocken, i.e. the Sender Brocken — Gdr] was able to paralyse internal combustion engines. As usually reported, the phenomenon consisted of a tourist driving his car on one of the roads in the vicinity, and the engine suddenly ceasing to operate. A German Air Force sentry would then appear from the side of the road and tell him that it was no use his trying to get the car going again for the time being. The sentry would, however, return and tell him when he would be able to do so. [...]
It was from my contact with one refugee that I found at last the explanation for the stories about the engine-stopping rays. This particular refugee had been an announcer at the Frankfurt radio station, and I therefore wondered whether he might know anything about the work on the nearby Feldberg television tower that was said to be one of the engine-stopping transmitters. When I told him the story he said that he had not heard it, but he could see how it might have happened. When the site for the transmitter was being surveyed, trials were done by placing a transmitter at a promising spot, and then measuring the field strength that it would provide for radio signals in the areas around it. Since the signals concerned were of very high frequency, the receivers could easily be jammed by the unscreened ignition system of the average motor car. Any car travelling through the area at the time of the trial would cause so much interference as to ruin the test. In Germany, with its authoritarian regime, it was a simple matter to decide that no cars should run in the area at the relevant time, and so sentries were posted on all the roads to stop the cars. After the twenty minutes or so of a test the sentries would then tell the cars that they could proceed. In retailing the incident it only required the driver to transpose the first appearance of the sentry and the stopping of the engine for the story to give rise to the engine-stopping ray.

Gdr 13:24, 5 August 2010 (UTC)[reply]

Cloning Einstein

Lets say some time in the future, we have perfected human cloning. Do we have enough of Einstein's DNA to make a clone of him? 148.168.127.10 (talk) 13:35, 4 August 2010 (UTC)[reply]

Apparently not. The book Einstein: his life and universe, says,
... the way Harvey had embalmed [Einstein's] brain made it impossible to extract usable DNA.
Sorry. --Sean 13:47, 4 August 2010 (UTC)[reply]
Apparently he extracted his eyes as well, and put them in a safe deposit box.[22] One wonders how they were preserved. Googling around, there are some people who claim to have locks of Einstein's hair, which is not an ideal place to get DNA from, though. --Mr.98 (talk) 15:09, 4 August 2010 (UTC)[reply]
Remember, an adult is determined by far more than just their DNA. A clone of Einstein wouldn't necessarily be a genius and he wouldn't necessarily have any interest in physics. --Tango (talk) 16:27, 4 August 2010 (UTC)[reply]
It's likely he'd have that hair, though. Staecker (talk) 16:55, 4 August 2010 (UTC)[reply]
Not really. His hair looked like that because he wasn't interested in making it look like anything else. If the clone chose to cut it short or tie it back or straighten it, it would look completely different. --Tango (talk) 17:27, 4 August 2010 (UTC)[reply]
Using modern Polymerase chain reaction methods, you really don't need more than one intact DNA molecule - so this isn't about whether there is "enough" DNA - it's whether there is "any" DNA left. I think it would be surprising if we couldn't track down even the tiniest amount of the stuff. However, I agree with Tango - your Einstein clone might be completely useless at physics. Einstein certainly had a flair for physics - but in almost every other respect of his life, he was a total jerk/loser. It's not that he had general intelligence - it's that he had it all focussed in one incredibly narrow field. If you read his biographies - it's hard not to be horrified at his dealings with wife, kids, relatives, etc. It's incredibly unlikely that there is a "Physics expertise" gene - but it might well be that there is a "single-minded narrow-skill-set obsession gene". Clones of Einstein might just become fanatical stamp collectors for all we know. SteveBaker (talk) 21:56, 4 August 2010 (UTC)[reply]
You need one intact DNA molecule per chromosome. Having some, but not enough, DNA would mean you either have non-intact molecules or some missing chromosomes. --Tango (talk) 00:43, 5 August 2010 (UTC)[reply]
Oh! Yes, of course, silly me! But still - it's not much. I presume it wouldn't have to be intact either - the first thing the gene sequencers do is to chop the stuff up into smaller bits. It only matters that somewhere in your sample of fragmented section you have enough pieces to make up the entire genome - and in long enough sections that you have overlaps that tell you how to stitch them back together again. Even if a few bits were totally missing, the odds are good that those sections wouldn't code for anything interesting about Einstein - so you could probably fill in the bits that were missing with sections of someone else's genes without much risk of significant non-Einstein bits cropping up in the finished clone. SteveBaker (talk) 02:10, 5 August 2010 (UTC)[reply]
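For illustration only, a toy sketch of the "stitch the fragments back together by their overlaps" idea; real genome assemblers are far more sophisticated, and the sequence and fragments here are invented:
```python
# Greedy toy "assembler": repeatedly merge the two fragments with the largest
# suffix/prefix overlap until a single contig remains. Illustration only --
# real sequence assembly is enormously more sophisticated than this.
def overlap(a, b):
    """Length of the longest suffix of a that is also a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(fragments):
    frags = list(fragments)
    while len(frags) > 1:
        # Find the ordered pair with the biggest overlap and merge it.
        n, a, b = max((overlap(x, y), x, y)
                      for i, x in enumerate(frags)
                      for j, y in enumerate(frags) if i != j)
        frags.remove(a)
        frags.remove(b)
        frags.append(a + b[n:])          # keep the shared overlap only once
    return frags[0]

# Invented overlapping "reads" of the invented 12-base sequence ATGGCATTACGT:
print(assemble(["ATGGCATT", "GCATTACG", "TTACGT"]))   # -> ATGGCATTACGT
```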

Diamonds are forever (?)

This problem needs someone who has in-depth knowledge about diamonds.

To make my point clear I have to summarize a story I read lately. Two Englishmen find a diamond, a rather big diamond (so big that they even compare it to the Koh-i-Noor). The first one who discovers it knows his stones, i.e. can tell beyond doubt if a diamond is real. He inspects it, firstly with the naked eye, then with a lens, and confirms it's genuine. The second also falls in line. Then both go to a jeweler who also tells them there is no doubt about it. It is real, it is big. He is willing to buy it for £10,000. But one of the sellers becomes stubborn and expresses his disbelief over the value of the stone. The jeweler says that he has been in this business for years and there is no room for doubt. But the idiot insists upon a test. So just to satisfy his whim the jeweler switches on a grinding wheel and rubs the stone on it. Immediately the stone shows "faults". Next moment it is lying there, valueless as dust!

Then the author explains to us: though the diamond was real, it had been lying under fire and pressure for so long that it became "splintered". That is, on inspection by an expert it will look real (and strictly speaking is real), but at the slightest touch it will collapse into its true state: nothingness.

Isn't diamond the hardest substance? Or is that just what the writer thinks? Or is it pseudoscience?  Jon Ascton  (talk) 14:27, 4 August 2010 (UTC)[reply]

I know nothing about whether faults could show up only after grinding, but as for the hardness of diamonds, I can tell you that tools used for grinding and cutting diamonds are often made from diamond themselves in order to be hard enough to affect the diamond. --Tango (talk) 16:30, 4 August 2010 (UTC)[reply]
Diamond has the highest hardness (Mohs 10) of any bulk material. The Wikipedia article Diamond notes that the most time-consuming part of cutting a gemstone is the preliminary analysis of the rough stone. It needs to address a large number of issues (see article) and can last years in the case of unique diamonds. It is possible that a major flaw would not be discovered in the initial rough state of a diamond, though the sudden reduction in value of a Koh-i-Noor-like diamond from £10,000 to valueless sounds incredible.
I have found no references to splintered diamonds apart from a diamond trader who uses that name.
Diamonds do not last forever because they can be burned. Cuddlyable3 (talk) 17:20, 4 August 2010 (UTC)[reply]
Sounds like nonsense to me (not pseudoscience - nonsense) for a number of reasons:
  • Diamonds are not tested by grinding - that's crazy. They are tested by temperature conduction.
  • Faults in the diamond would show up if you looked at it with a magnifying glass. For the scenario given, the diamond would have to be powder on the inside, with a thin sheet of diamond holding it together - that's just nuts. And would be very visible anyway. It's not like a Prince Rupert's Drop (very cool, watch the video) where there are tremendous internal stresses, covered with glass. Diamonds just aren't like that, glass is amorphous, but diamonds are crystal. An amorphous diamond would be graphite, i.e. black.
  • Was the diamond they found rough or cut? Probably cut since a jeweler was going to buy it. If it was cut that means it already was near a grinding wheel, so why would doing it a second time cause such a catastrophe?
Ariel. (talk) 17:57, 4 August 2010 (UTC)[reply]

What if there were some perchloric acid (or other oxidant impurities) that had somehow gotten into the diamond, and the temperature conduction test sets off a runaway reaction at a low temperature? John Riemann Soong (talk) 19:22, 4 August 2010 (UTC)[reply]

I don't think perchloric acid can oxidize carbon; try Piranha solution. --Chemicalinterest (talk) 19:50, 4 August 2010 (UTC)[reply]
Really? Won't it just make a lot of CO2? (For the metastability of the perchlorate -- maybe it was trapped inside a mineral impurity that gave the diamond a valuable coloured hue.) John Riemann Soong (talk) 20:37, 4 August 2010 (UTC)[reply]
But its CO2 made from diamonds... Wowwwwww... --Chemicalinterest (talk) 21:42, 4 August 2010 (UTC)[reply]

(original question) It's possible that the stone was a 'mash-up', e.g. made of smaller diamonds glued together with optical paste - that would fall apart... however I don't think there is an optical paste with the same refractive index as diamond (?) 77.86.119.98 (talk) 19:51, 4 August 2010 (UTC)[reply]

You might consider a 'scratch test' - try to scratch it with something a little less hard than diamond...if you were in a jeweller's shop - you could grab a handy Topaz or Corundum and just try to scratch the diamond with it...or try to scratch a topaz with the supposed diamond. Topaz and Corundum are pretty hard stones in their own right. It's even possible that the grinding wheel in the story would be a corundum wheel (they are pretty common - and a lot cheaper than diamond wheels)- but why turn the thing on? You could just drag the diamond across the wheel and see if it was scratched. I guess the theory in the story was that this diamond had many flaws - which would obviously be weaker and maybe cause it to break...but crumbling to dust just doesn't seem likely. A large diamond that was that massively flawed wouldn't be worth that much anyway. SteveBaker (talk) 21:46, 4 August 2010 (UTC)[reply]
With a disclaimer that I'm far from an expert on diamonds, the story seems ridiculous to me. Any fracture plane inside a diamond will produce internal reflections, so a diamond fractured so extensively would look like crap quartz -- translucent rather than transparent. Looie496 (talk) 01:11, 5 August 2010 (UTC)[reply]

Eyeglass prescription

I know how to tell a nearsighted prescription from a non-nearsighted prescription by looking through the lenses. Is it possible to estimate the prescription (of any kind) of a pair of eyeglasses by looking through the lenses? If so how? —Preceding unsigned comment added by 68.76.147.53 (talk) 15:10, 4 August 2010 (UTC)[reply]

Yeah. The more negative the number, the more concave the lens will be; the more positive the number, the more convex. I think lenses for the near-sighted are always concave (negative) while those for the farsighted are always convex (positive). -- Jon Ascton  (talk) 15:34, 4 August 2010 (UTC)[reply]
Eyeglass prescription#Lens power describes how the prescription strength (in diopters) relates to the focal length of the lenses. Briefly, the strength in diopters is just one over the lens' focal length in meters. If you are able to estimate the focal length of the lens, then invert to get the prescription's approximate strength. (This assumes spherical lenses with no correction for astigmatism; it gets more complicated if you want to be able to extract a significant cylindrical contribution as well.) TenOfAllTrades(talk) 15:36, 4 August 2010 (UTC)[reply]
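For example, a minimal sketch of that focal-length-to-diopters conversion (the focal lengths are invented for illustration):
```python
# Spherical lens power in diopters is the reciprocal of the focal length in metres.
def power_in_diopters(focal_length_m: float) -> float:
    return 1.0 / focal_length_m

# e.g. a farsighted-correcting lens that focuses a distant lamp at 0.50 m:
print(power_in_diopters(0.50))   # 2.0 D
# and a stronger one focusing at 0.25 m:
print(power_in_diopters(0.25))   # 4.0 D
```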
Hyperopia is the medical name for longsightedness, the opposite of nearsightedness (Myopia). The eyeglass lenses prescribed to correct hyperopia are easily recognized as Magnifying glasses. Cuddlyable3 (talk) 16:48, 4 August 2010 (UTC)[reply]

I don't need to estimate the astigmatism correction, but it'd be nice if I could tell whether there is some correction for it. How could I do that? Thanks 68.76.147.53 (talk) 15:55, 4 August 2010 (UTC)[reply]

An astigmatic lens has different focal lengths for vertical and horizontal lines. This diagram[23] demonstrates views through the lens. Cuddlyable3 (talk) 16:55, 4 August 2010 (UTC)[reply]
(This only works for farsighted prescriptions.) Using a distant lamp as the source (you can use the sun, but don't burn anything), find the distance from the lens to the wall where the projected image is clear and sharp, in meters. 1 over this number is the power in diopters. Ariel. (talk) 17:31, 4 August 2010 (UTC)[reply]
(original question) Simply put - lenses for nearsighted people (Myopia) make things look smaller. For the opposite type, "longsightedness" (Hyperopia), the lens is like a magnifying glass - close up to things it will magnify, or further away it will invert (turn upside down) things you look at through it. A lens for a nearsighted person won't do this. For people with weak nearsightedness it may be difficult to tell by looking at the lens. 77.86.119.98 (talk) 19:48, 4 August 2010 (UTC)[reply]
For a quick evaluation of eyeglasses: Hold them between you and a wall or desktop and move them back and forth. If the lenses have no correction, the background will not move. If they correct for nearsightedness, the image will move in the same direction you move the lens. If they correct for farsightedness, the image will move in the direction opposite from your movement of the lens. Now to check for astigmatism correction: Look through one lens and rotate it. If the image changes in its vertical and horizontal size with rotation, there is astigmatism correction. In each case, a stronger correction produces a larger effect. Bifocals/trifocals will show a gradation from top to bottom in the lens strength. In side to side movement, if the effect varies from top to bottom in a lens but no abrupt change is seen, then they are "progressive bifocals." For a simple lens correcting for farsightedness, you can form an image on paper of a distant light and measure the distance from the lens to the paper to determine the focal length, then the diopter rating is 1/f where f is the focal length in meters. With such a known positive lens, you can stack a weaker negative (nearsighted correction) lens next to it and from the combined focal length calculate the diopters of the negative lens. Despite what was said above, the curvature of the outside of the lens is largely unrelated to its power, since eyeglasses use meniscus lenses, with the inner and outer surfaces both curved to differing degrees. Edison (talk) 03:05, 5 August 2010 (UTC)[reply]
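(To make the arithmetic above concrete, here is a minimal sketch in Python, assuming ideal thin lenses held in contact; the focal lengths are made-up example numbers, not a real prescription.)

 def power_in_diopters(focal_length_m):
     # P = 1/f, with the focal length f in metres
     return 1.0 / focal_length_m

 # A distant lamp focuses to a sharp image 0.4 m behind the lens:
 print(power_in_diopters(0.4))        # 2.5 dioptres (a farsighted correction)

 # Stacking a known +2.5 D lens against an unknown negative lens: for thin
 # lenses in contact the powers approximately add, so measure the combined
 # focal length and subtract.
 combined = power_in_diopters(0.8)    # the stack focuses at 0.8 m -> 1.25 D
 print(combined - 2.5)                # -1.25 D for the unknown negative lens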

Positive and Negative Overlap

Positive overlap means overlap between two orbital lobes where the wave function is positive right?? Since on one lobe of the px orbital, the wave function is positive and on the other, the wave function is negative, there is equal chance that in two atoms about to bond with each other, the positive lobe of one atomic orbital may overlap with either the positive or negative lobe of the other atom to form positive or negative overlap, right? So is there equal chance for a formation of positive or negative overlap for a given pair of atoms? I am now just in Class eleven and I just know only basic quantum mechanics, so please explain clearly... Thank You. —Preceding unsigned comment added by 117.193.236.184 (talk) 15:39, 4 August 2010 (UTC) [reply]

The actual sign is not a real physical property, it's just a mathematical artifact of the way the orbitals are described. However, the sign does allow one to explain certain bonding types. First, remember that the sign itself isn't a real thing, so the actual "+" or "–" is arbitrary, but the relative sign ("same" vs "opposite") is a viable comparison. You have to be consistent, but you can pick an arbitrary starting point. :) So positive overlap (a bonding molecular orbital) means the "same sign" (both positive lobes, or both negative lobes) on the two atoms, whereas an antibonding molecular orbital (a high-energy thing that tends to make the atoms move apart rather than stay bonded) is "opposite signs". So if you're describing bonding, you pick the same signs of the lobes that overlap, and there's your stable electronic state. And once you have that, the "other way" is the unstable state. Sometimes a graphical representation (using arbitrary colors) helps avoid getting misled by the actual signs of the lobes--just pick two colors. See Molecular orbital diagram for some more information, and ask more if you get stuck. DMacks (talk) 19:30, 4 August 2010 (UTC)[reply]


Positive overlap usually means that the lobes have the same sign, so that the combined wavefunction is greater than either, rather than less, as happens when they have different signs. 77.86.119.98 (talk) 19:35, 4 August 2010 (UTC)[reply]
In terms of chances - take two H atoms, each containing one electron - the probability of either atom's 1s orbital having a given sign is normally 50% .. so yes. However in a magnetic field the "+" and "-" signs of the orbitals are split in energy - so the probability is not 50%.
When the "+" and "-" forms of the same orbital have the same energy, one can interconvert to the opposite sign with no barrier - effectively creating the 50:50 chance you describe.
Since the orbitals can change their sign under normal conditions, the 50:50 probability of bonding or antibonding orbital formation is obscured - the formation of a bonding orbital is favoured energetically - thus when 2 H atoms meet the probability of forming a bonding orbital is more than 50% (in zero magnetic field the "+" and "-" orbitals are practically equivalent). 77.86.119.98 (talk) 19:43, 4 August 2010 (UTC)[reply]


So, finally, it means that the chance of two opposite signed lobes approaching each other is 50-50, but since they are degenerate, they exchange their positions... so the probability that a positive overlap will be formed in H-H atoms close to each other is near 100%, is it?? and is this what we call bonding molecular orbital and anti-bonding molecular orbital? harish (talk) 01:17, 5 August 2010 (UTC)[reply]

Yes - positive overlap results in a bonding orbital .. negative = antibonding. The probability is better than 50%, but I can't be more specific than that - the product H2 has isomers (see Spin_isomers_of_hydrogen), which very slightly complicates things. If two hydrogens with arbitrary opposite orbital signs approached each other - they'd bounce off each other and not bond - this is a possibility - someone else may know how to better calculate the chances - but I guarantee it is better than 50% for the reasons given above. 77.86.119.98 (talk) 02:17, 5 August 2010 (UTC)[reply]
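(If it helps, the same-sign/opposite-sign idea can be seen in a toy two-level model - a sketch only, assuming NumPy is available, with made-up numbers, not a real H2 calculation.)

 import numpy as np

 alpha, beta = -13.6, -3.0           # illustrative on-site energy and coupling (eV)
 H = np.array([[alpha, beta],
               [beta,  alpha]])
 energies, vectors = np.linalg.eigh(H)
 print(energies)        # [alpha+beta, alpha-beta]: bonding level sits below antibonding
 print(vectors[:, 0])   # bonding combination: equal coefficients with the same sign
 print(vectors[:, 1])   # antibonding combination: equal coefficients, opposite signs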

Expense of oscilloscopes

I was wondering why oscilloscopes, even USB oscilloscopes, are so expensive? Is it a case of charging what the market will bear or is the consumer base so small that it is necessary to charge this much in order to make business viable? How complex are they, in comparison to a television? I don't study electronics but just wanted to make my own dynamo-powered bike lights and learn something as I go. ----Seans Potato Business 16:03, 4 August 2010 (UTC)[reply]

See the Wikipedia article Oscilloscope. It is a complex measuring instrument that needs accurate calibration. It is intended to be rugged, portable and to have a longer life than a domestic TV. It has many more user controls. Very few of its components are those that are mass produced for TV production. It contains beam deflection circuits that are capable of operating at a wide range of different scan frequencies compared to a TV that has only 2 scan frequencies. All these factors contribute to its relatively high price, as well as the small production volume that the OP has noted. Cuddlyable3 (talk) 16:38, 4 August 2010 (UTC)[reply]
It's also a much more specialized piece of equipment. Something designed to sell ten million units will be cheaper than something to sell ten thousand units, all else being equal.
However, if your needs are simple, there are pretty cheap scopes. Here is one with the form factor of an iPod for $99. And here is a kit for $60. You might be able to find these slightly cheaper elsewhere. However, notice that they are single-signal and lack advanced features like FFT. APL (talk) 16:45, 4 August 2010 (UTC)[reply]
You can also use the audio input of your computer for nothing, though the top frequency is usually about 200 kHz (the sampling rate), and you have to calibrate it yourself. Inputs shouldn't be over 1 V or 0.5 V, but resistors are cheap - if you know what to do. There are free programs that convert the computer into a simple oscilloscope, e.g. http://www.zeitnitz.de/Christian/scope_en
On the other hand ... all scientific instruments are expensive - a laboratory sonic bath supplied by a laboratory supply firm will typically cost 10x what an equivalent mass-produced sonicator costs. The reasons for this are purely economic. Sf5xeplus (talk) 17:39, 4 August 2010 (UTC)[reply]
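(A rough sketch of the sound-card approach in Python, assuming the third-party sounddevice and matplotlib packages are installed, and that the input has already been attenuated to safe line-level voltages as noted above.)

 import numpy as np
 import sounddevice as sd
 import matplotlib.pyplot as plt

 fs = 48000                                       # sample rate; usable bandwidth is at best fs/2
 samples = sd.rec(int(0.05 * fs), samplerate=fs, channels=1)   # capture 50 ms
 sd.wait()

 t = np.arange(len(samples)) / fs
 plt.plot(t * 1000, samples)                      # uncalibrated units, not volts
 plt.xlabel("time (ms)")
 plt.ylabel("sound-card units")
 plt.show()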
(ec) A few reasons why scopes (and other test equipment) are expensive. First, compared to consumer devices, test equipment like oscilloscopes is sold at very low volume. Economically speaking, this means that every unit must be sold at a higher margin in order to amortize the costs of engineering, manufacturing, and distribution (which are the same or greater than for an "equivalently-complex" high-volume consumer device like an HD TV). Furthermore, the costs of designing and manufacturing each unit are much higher than for an "equivalently complex" consumer device. The thing to keep in mind is that for most purposes, an oscilloscope must be extremely high quality. For example, on a consumer device like a music player, if the audio amplifier is out of spec by 0.2 dB, nobody cares, and you adjust the volume knob slightly and move on with life. If a company sells oscilloscopes that are out of spec by 0.2 dB, because it is test equipment, this is totally unacceptable. Every single voltage, every single frequency, every indicator knob, must be exactly to spec out to several decimal places, because the oscilloscope (or the multimeter, or the network analyzer, ...) is the device that is used to calibrate all the other devices. So it has to be the best piece of equipment on the bench. It can't be flakey, it can't have strange voltage spikes or frequency spurs or unwanted parasitics; and particularly for a 'scope, it's impossible to "hide" these sorts of parasitics "out of band" because a scope will operate over a very wide range of operating conditions (e.g., voltage, frequency, and even lab conditions like temperature and humidity). The design is therefore much more complicated. You can buy a cheap scope, or build a scope as a hobbyist project - but it will be nigh impossible to match the quality across all ranges of operation of a $1000 or $10,000 Agilent bench scope, a $50,000 wide-band scope, or a $350,000 mobile-telephony network analyzer with protocol-stack decoding. The way I have heard these outrageous product costs explained is something to the effect of "without this equipment, an entire team of 20 RF engineers are useless, and their cumulative salaries are much more expensive than the gear that they require to do their jobs." Nimur (talk) 17:44, 4 August 2010 (UTC)[reply]
That's silly! I need a keyboard to do my job, but I can get one for $10! APL (talk) 19:24, 4 August 2010 (UTC)[reply]
A little simplistic perhaps, but the general sentiment surely holds. If there is no advantage to the $350k scope then sure, they should go with the cheaper one. But if the more expensive scope reduces the number of person-hours, or does things that they need that the other one can't, then the price may be worth it.
Your keyboard example is interesting. A typist or someone who uses the keyboard a lot may find the $10 keyboard doesn't suit them well and slows down their typing (there may also be OSH concerns, something which doesn't fit the scope example so well); getting the $100 (or whatever) keyboard may then be worth the extra cost given the increased productivity.
I've seen similar arguments made e.g. for large monitors in the past when they were still relatively expensive (they increase productivity so even if it costs $2000 it may make up for it fairly quickly) or software, and while I'd agree that the analysis is sometimes a little simplistic I would say there is some validity in the reasoning. Even if you can 'make do' with cheaper equipment, the more expensive equipment could make up for it in the long run.
Note that for the keyboard example there may be some cases, e.g. when you need wireless, or perhaps in a hospital where you need to worry about contamination, or in an extreme environment, where the $10 keyboard is really not up to the job. I suspect there are some similar cases where the $1000 scope, or even the $50k scope, simply won't do.
Of course the price of even fancier keyboards is so small relatively speaking that if you're just talking a small number it's not really going to be significant. And the room to innovate on a keyboard is also a lot smaller, so the keyboard comparison somewhat falls through. There is, I presume, a very big difference between the $1000 and $350k scope, and the cost difference even for a largish business is not so small. But it seems some businesses either are very stupid, or do feel the gain in extra features and associated productivity or capability of their business makes up for the cost.
Nil Einne (talk) 16:33, 5 August 2010 (UTC)[reply]
Slashdot was just talking about this. Here is one for $189 Canadian. Ariel. (talk) 17:46, 4 August 2010 (UTC)[reply]
It really depends on what you need. There are programs for the PC ($0) that use the sound card to capture voltages and display them in an oscilloscope-like form. Sadly, they only work for voltages and frequencies that the sound card can handle - and you still need a 'probe' with the right impedance, etc to do a good job of picking up the signal without disrupting it. Second to that are some hardware gizmos that you can plug into a PC or into an Arduino board with an LCD display that do a somewhat better job. But those things are still going to be limited to maybe 8 to 10 bits of precision and maybe 100kHz of frequency. To see faster signals, there is really no substitute for a proper (and expensive) scope. They are the price they are because they don't sell many of them - and that's the price the market can stand. SteveBaker (talk) 21:27, 4 August 2010 (UTC)[reply]
I don't know if I would go for a sound card, but a pretty cheap solution is to get a basic A/D card and set up a software oscilloscope. Also I expect that there is a strong market in oscopes on Ebay (not that I've actually looked or anything). They've been around for decades and must pop up whenever an electronics repair shop goes out of business. Looie496 (talk) 01:05, 5 August 2010 (UTC)[reply]

"Eco" water bottles have more BPA?

At work, some health conscious types have warned me about water bottles. They say the new thinner, squishy "eco" water bottles contain higher amounts of Bisphenol A (BPA) than the older, thicker bottles. They say that eco water bottles are formed by injecting molten plastic (like normal bottles) but they are also filled at the same time. So the plastic molding and water filling take place simultaneously. The rapid cooling of the hot plastic supposedly releases more BPA into the water? Or something like that. They also mention eco water bottles that have been sitting in sunlight also have raised BPA levels due to some reaction with UV light. It sure sounds plausible to a non-science type. Is there any truth to any of this? --70.167.58.6 (talk) 18:45, 4 August 2010 (UTC)[reply]

The presence of BPA does not have anything to do with the "squishiness" of the plastic; what matters is the type of plastic it is. For instance, Polyethylene (HDPE, LDPE, PE, etc.) and Polypropylene (PP) are both squishy but should not contain the compound, since it's not manufactured with it or combined with it industrially. BPA however is present in Polycarbonate ("PC", the clear nalgene water bottles) since it is a monomer of certain types of this kind of plastic that was left unreacted in the production process. It is also added to PVC (faux leather, notebook binder plastics, pipes) as a plasticizer (to soften plastic). Check what type of "eco" water bottle you have to see if it's likely you have something containing larger amounts of BPA. -- Sjschen (talk) 20:56, 4 August 2010 (UTC)[reply]

The "squishiness" does matter because BPA is a plasticizer - meaning it's there to make the plastic more squishy. Some plastics need it, others don't. The resin code doesn't automatically mean it does/doesn't have it because it's possible to make the plastic either way, it's just a resonable shorthand for the more common uses. So for example polycarbonate most definitely can be made without BPA, even though it's on the "bad list". As for create and fill, that sounds very unlikely to me. The machines that make bottles are very different from the ones that fill them. Bottles are blown - with air (inflated like a balloon). They need to stay hot, there is no way you could do it with water unless you used hot water, which is very wastful (i.e. expensive) to heat all that water. So there is no way they do create and fill, so that part of what you heard is wrong. Heat will release more BPA. UV probably won't. Baseline: You can't tell by looking. You'll have to trust the manufacturer to give you accurate info (or a lab). For bottles that are made in massive quantities you can use the resin code since that's the more common use. However bottles that are spec made for a particular company you can NOT use the resin code. Ariel. (talk) 23:44, 4 August 2010 (UTC)[reply]
No. BPA is not/should not be used as a plasticiser - it's a monomer in the production of Polycarbonate plastics, and should not be present in anything other than microscopic amounts in polycarbonate plastics (or any other plastics), for the obvious reason of its toxicity.
It is possible that the bottles are more squishy due to a plasticiser, or they may be using lower molecular weight plastics.
Even this 'pro BPA' site [24] states "Bisphenol A (BPA) is not used as a plasticiser in plastics" - it is a very toxic compound - it is not and could not be used in the quantities required for plasticising plastics. 77.86.119.98 (talk) 01:24, 5 August 2010 (UTC)[reply]
My bad, BPA could be used in plasticizers (as an antioxidant), but not as a plasticizer. While in the end it does depend on what the manufacturer decides to add, plasticizer-containing plastics are usually not used for water bottles. As well, plasticizers are usually not added to PE plastics, the typically squishy plastics. As such, the "squishiness" of a plastic does not necessarily mean that it contains BPA or plasticizers. You should find the resin code on the bottle and decide if you want to trust it. -- Sjschen (talk) 14:40, 5 August 2010 (UTC)[reply]
Hope this doesn't change the discussion too much, but when I said 'water bottle', I meant single-use throw-away bottles of water (like Aquafina, Dasani, Dannon Purelife, Evian, etc), not reusable water containers. I hope my description of simultaneous plastic molding and water filling made this apparent. --69.148.250.94 (talk) 05:07, 6 August 2010 (UTC)[reply]
Anywho. The takeaway conclusion is: one can't tell from the recycle code. But the whole thing is bunk anyway. And even if it wasn't, the 'squishiness' of the bottle has no relation to BPA levels? And 'squishy' is not a new manufacturing process, but just a new plastic formula. Is that the gist?--69.148.250.94 (talk) 05:12, 6 August 2010 (UTC)[reply]
Yes/No. You can tell from the recycle code (ok, if it's 7 there are options) - if it's not 7 there should/will be absolutely no BPA in it. In my experience the bottles (disposable) are usually 1 (PET), not 7, but maybe it's different where you are. As to part of the earlier question - filling the bottle as it is blow molded sounds like hydroforming. This seems very unlikely. As far as I know the plastic has to be hot to be molded - any water would instantly cool it, resulting in cracking - cold polycarbonate is not ductile like, say, aluminium. (If it was done by hydroforming of hot polycarb, the likelihood of leaching would be increased.) I don't see an immediate reason why a thinner wall would result in increased leaching, unless it's incredibly thin. (Usually code 7 plastics have a letter code molded onto them as well - e.g. PC is polycarbonate, ABS is acrylonitrile butadiene styrene - so look for the letter code as well.)
As for UV light increasing levels of BPA in polycarbonate plastics - I can't find anything on this - but it is a possibility that can't be rejected. 87.102.72.153 (talk) 16:40, 6 August 2010 (UTC)[reply]
A disposable water bottle is likely Polyethylene_terephthalate not Polycarbonate. (unless this differs from country to country)87.102.72.153 (talk) 16:43, 6 August 2010 (UTC)[reply]

Retrieving sound of the past

Does anyone have some articles/websites related to studying the possibility of recovering sound signals from the past? I mean to hear some old information, like our voices in the past. Sorry for not logging in, (Email4mobile)--89.189.76.246 (talk) 20:07, 4 August 2010 (UTC)[reply]

What, like audio restoration? That's fairly straightforward. Or do you mean some sort of "recover the original sound of Lincoln's Gettysburg Address" thing? That's strictly fiction. — Lomn 20:15, 4 August 2010 (UTC)[reply]
There have been claims that horizontal grooves inscribed on ancient pots while they spun on the potter's wheel had inadvertently recorded (with very low fidelity) words or word fragments being spoken at the time, and that the words had been or might be recovered. This site suggests that some claims were hoaxes, and discusses the provenance of others, as well as giving further links of relevance. 87.81.230.195 (talk) 20:37, 4 August 2010 (UTC)[reply]
The phonautograph from 1857 could transcribe sound to a visible medium, but there was no means to play it back until 2008 using computers. Cuddlyable3 (talk) 21:20, 4 August 2010 (UTC)[reply]
(ec) I'd heard that claim too - but it's pure fiction. Fingers don't vibrate much in response to sound - they are too heavy and muscular. The gizmo that (for example) Edison used to make his phonograph was a delicate little contraption driven by a large diaphragm - so as to capture the maximum possible amount of sound energy and focus it to displace the minimum amount of wax by the smallest distance. Plus, a pottery wheel rotates about once a second - so at most you'd only grab one second of audio before the fingers came around again and erased it. Nah - it's a great idea - but it's not gonna work.
Having said that, we do have some recordings from 20 years before Edison invented the phonograph. Edison was the first person to invent a machine that could both record AND play back sound - but Édouard-Léon Scott de Martinville invented a machine that could record sound (but not play it back). Those recordings were first replayed just a couple of years ago - and the audio files that they recovered can be replayed right from within our article! That dates back to 9 April 1860. But that's really as far back as you can go. Aside from that, your only chance is listening to early Edison phonographs - and the oldest known one of those dates to 1888. SteveBaker (talk) 21:22, 4 August 2010 (UTC)[reply]
I was wondering if there were some claims about the possibility of restoring our own sounds from the medium (air, for example) by analysing the spectrum of vibration frequencies and tracing possible, even very weak, signals, since a fraction of the energy will always reflect back and forth in the form of echoes.--195.94.11.17 (talk) 22:16, 4 August 2010 (UTC)[reply]
Interesting concept - reverberation is real, and conceivably, even if the level of reverb is inaudible, it may still carry information. But, that information would be dispersed to very low energies, and the signal to noise ratio would make it impractical to reconstruct anything. The straw-man version is to be in a large, empty, echo-y room; obviously, if you shout, you will have an echo that could be recorded and used to estimate the "original" version of the shout. But after even a few seconds, the echo level has died away to such a low volume that the ambient noise would totally override any effort to reconstruct the source. In a non-ideal environment, like outdoors or in a non-echoing room, you'd have essentially no chance. Source estimation is an open research problem in signal processing - trying to figure out exactly what pulse was sent out, when all you have is a recording of the result. You would be setting up a very poorly-constrained source-estimation problem; it's safe to say we have no existing method that could work reliably. Nimur (talk) 22:53, 4 August 2010 (UTC)[reply]
Also, sound propagates by causing vibrations in the air particles. Moving those particles up and down is an expenditure of energy, and, by the laws of thermodynamics, some of that energy will be lost in the form of heat. That lost energy will be taken from the intensity of the wave. So, the wave will die out in the way from one echoing surface to other.
Also, the echoing surface will absorb a small part of the energy of the sound, unless it's a perfect echoing surface. For example, we have extremely good mirrors that are used to make laser beams, but they still don't reflect 100% of the energy. If you leave a laser beam bouncing indefinitely between two such mirrors, the beam will eventually get absorbed by the mirrors. A sound echoing between two "perfect" echoing surfaces will also die out eventually.
Also, unless the echoing surfaces are perfectly aligned, the sound will echo in a slightly different direction, and it will start reflecting out of other surfaces (other walls in the room, for example). You will eventually have thousands of reflections in many directions, canceling and augmenting each other at places. Since they echo at different places that are at different distances, they won't go back to the origin at the same time. This will cause a shift in the sound, increasing the cancellations and augmentations. In other words, the sound will garble itself.
Also, even if the echoing surfaces are perfectly aligned, the sound doesn't echo back in a narrow straight line like a laser against a mirror; it spreads out at an angle, so the echo will appear to come from a different place[25] and the sound will disperse over a wider area. This hastens the weakening of the sound: since it vibrates over a bigger area, it has to move more air particles, and each echo carries less energy in any given direction. Instead of preserving all the energy in one clear echo, the energy gets spread out, and any listener will hear a weaker echo that lasts less time before disappearing. It also increases the chance that the sound garbles itself.
Eventually, the sound will be so weak that it is at the same volume as the ambient noise, and then it will fall under the detection threshold of any instrument. Finally, the sound won't be strong enough to move any air particle at all, and it will stop.
Air particles bounce continuously against each other due to Molecular diffusion, so any trace left by the sound will disappear immediately. Once the sound loses all its original energy, you will be unable to recover it from the air (and the sound will lose all of its energy, because you can't fight the laws of thermodynamics). --Enric Naval (talk) 23:56, 4 August 2010 (UTC)[reply]
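(A back-of-the-envelope illustration of how quickly that happens, in Python; the levels and the reverberation time below are assumed numbers. RT60 is, by definition, the time for the reverberant sound field to decay by 60 dB.)

 source_level_db = 80.0     # a loud voice in the room (assumption)
 noise_floor_db  = 30.0     # quiet-room ambient noise (assumption)
 rt60_s          = 1.0      # a fairly "live" room (assumption)

 decay_rate_db_per_s = 60.0 / rt60_s
 print((source_level_db - noise_floor_db) / decay_rate_db_per_s)
 # ~0.83 s until the echoes fall below the ambient noise - long before
 # anything from "the past" could be recovered from the air.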
See Archaeoacoustics, "the discipline that explores acoustic phenomena encoded in ancient artifacts". As the article notes, Mythbusters tried to recover sound recordings from pottery, with poor results. Clarityfiend (talk) 22:54, 4 August 2010 (UTC)[reply]

The TV series "Science Fiction Theatre" from the 1950's told of a ceramic object from Pompeii from which could be played back the sounds of panic in the streets when the volcano erupted in 79 AD. Edison (talk) 02:57, 5 August 2010 (UTC)[reply]

For another fictional treatment, see The Stone Tape. 87.81.230.195 (talk) 17:05, 5 August 2010 (UTC)[reply]
Thank you very much. I think I've got a starting point to seek after.--Email4mobile (talk) 06:14, 5 August 2010 (UTC)[reply]
Information cannot, by definition, be destroyed. I'm not sure if that applies to information carried within bygone sound, though. ~AH1(TCU) 00:57, 7 August 2010 (UTC)[reply]

Weird Bug Redux

A while ago I asked a question about a strange insect I saw and someone incorrectly said I had seen the caterpillar of a tussock moth. I finally managed to take a picture of the mystery insect and have uploaded it to flickr. Can anyone help me identify it?

Americanfreedom (talk) 21:52, 4 August 2010 (UTC)[reply]

Looks like a velvet ant. Looie496 (talk) 00:55, 5 August 2010 (UTC)[reply]
I agree. Here are some more pics, which look very similar. Steer clear, Opie! --Sean 14:49, 5 August 2010 (UTC)[reply]

Recent CMEs and auroras

Hi. I have two questions related to the recent coronal mass ejections:

  • When was the last coronal mass ejection or equivalent geomagnetic storm (individual or multiple) of this intensity?
  • Would the possible auroras be visible through thin high cloud from, say, 44 north latitude?

Thanks. ~AH1(TCU) 23:01, 4 August 2010 (UTC)[reply]

Geographic latitude doesn't matter for seeing auroras. What matters is your geomagnetic latitude, which measures where you are relative to the magnetic poles. That said, the recent CME wasn't very strong (only a class C3), so it wouldn't have been visible from anywhere with a latitude of only 44 degrees. --Carnildo (talk) 00:56, 5 August 2010 (UTC)[reply]
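(For the curious, a rough dipole-model sketch of geomagnetic latitude in Python. The pole coordinates below are approximate values for around 2010 and are an assumption here; the real field is not a perfect dipole, so treat the result as indicative only.)

 import math

 def geomagnetic_latitude(lat_deg, lon_deg, pole_lat_deg=80.0, pole_lon_deg=-72.0):
     lat, lon = math.radians(lat_deg), math.radians(lon_deg)
     plat, plon = math.radians(pole_lat_deg), math.radians(pole_lon_deg)
     # 90 degrees minus the angular distance to the geomagnetic north pole
     cos_colat = (math.sin(lat) * math.sin(plat) +
                  math.cos(lat) * math.cos(plat) * math.cos(lon - plon))
     return 90.0 - math.degrees(math.acos(cos_colat))

 print(geomagnetic_latitude(44.0, -79.0))   # e.g. southern Ontario comes out around 54 degrees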
It was visible from plenty of locations at 45 latitude north, but I wasn't able to see the aurorae from my light-polluted location last night. ~AH1(TCU) 16:32, 5 August 2010 (UTC)[reply]

Number/measurement system

With the increasing prominence of computers, why hasn't human society adopted a number and measurement system based on powers of 16 (hexadecimal)? --138.110.206.101 (talk) 23:06, 4 August 2010 (UTC)[reply]

Easy, cause we have 10 digits. --Wirbelwindヴィルヴェルヴィント (talk) 23:27, 4 August 2010 (UTC)[reply]
Computers use the Binary numeral system, because it's easy to distinguish between "1" (I'm receiving energy in this cable) and "0" (I'm not receiving energy in this cable). A byte has 8 bits because the most successful computers happened to use 8 (see Byte#Size), and hexadecimal became popular because it can represent two 8-sized bytes in only two human-readable characters. In other words, hexadecimal exists for arbitrary reasons and not because it's better, and computers don't even use it.
Also, when you interact with a computer, there are layers of abstraction that separate you from how computers really represent their numbers internally (they use weird systems like Two's complement and Floating point, which are very difficult for humans to read). The computer doesn't care what system you use, because the layers of abstraction will always translate it to the internal system before touching it for anything. We should use the system most convenient for humans, since all systems will need a layer of abstraction anyway. --Enric Naval (talk) 00:30, 5 August 2010 (UTC)[reply]
Small quibble: Hex cannot represent two 8-sized bytes in only two human-readable characters, it can represent one 8-sized byte in only two human-readable characters (or 2 in 4, 3 in 6, 4 in 8 - you get my drift ;-). --Stephan Schulz (talk) 09:20, 5 August 2010 (UTC)[reply]
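(A tiny Python illustration of the points above - one 8-bit byte is exactly two hex digits, and the "weird" internal formats really are what the machine stores. Illustrative only.)

 n = -5
 print(n & 0xFF)                    # 251: the 8-bit two's-complement pattern for -5
 print(format(n & 0xFF, '08b'))     # '11111011' in binary
 print(format(n & 0xFF, '02x'))     # 'fb' - one byte fits in exactly two hex digits
 print(0.1 + 0.2)                   # 0.30000000000000004: binary floating point at work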
I think your origins for the 8 bit byte are a bit off. 8 bit bytes come about because to represent English-language text conveniently you need 26 uppercase letters plus 26 lowercase plus 10 digits plus at least half a dozen punctuation characters - preferably more. That's annoyingly just a bit more than 64 symbols. So 6 bit 'symbols' aren't really convenient (although there are 6 and even 5 bit character codes that use special 'shift' characters to indicate lowercase, etc). The next handy power of two is 128 - so basic ASCII uses seven bits. The reason we don't have 7 bit bytes is that back when this stuff was being thought out, characters were stored on media like paper tape and magnetic tape - and transmitted over unreliable serial data lines. You needed a way to check that your symbols had been transmitted successfully - so they added an 8th bit (called "parity") which was set in such a way to make sure that the number of '1' bits in the byte would always be an even number. If any single-bit error occurred in the byte, there would be an odd number of '1' bits - and you'd know that something screwed up. Hence 8 bit bytes. A few computers used more or fewer than that - but 8 bit bytes were just so very convenient that they became a de-facto standard. These days, things are much more reliable - and we have more sophisticated ways to do error checking...and 128 (or even 256) characters aren't enough for international character sets. But the 8 bit byte is here to stay.
SteveBaker (talk) 01:25, 5 August 2010 (UTC)[reply]
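(A small sketch of the even-parity scheme described above, in Python; the helper name is made up for illustration.)

 def add_even_parity(ch):
     bits7 = ord(ch) & 0x7F                   # the 7-bit ASCII code
     parity = bin(bits7).count('1') % 2       # 1 if the count of 1-bits is odd
     return (parity << 7) | bits7             # set bit 8 so the total count comes out even

 byte = add_even_parity('C')                  # 'C' = 1000011 has three 1-bits
 print(format(byte, '08b'))                   # '11000011' - parity bit set
 print(bin(byte).count('1') % 2 == 0)         # True; any single flipped bit breaks this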
You are right, I forgot about ASCII. --Enric Naval (talk) 08:48, 5 August 2010 (UTC)[reply]
Please add content (or link to content somewhere else) about that ASCII-origin to Byte#Size. Man, that article's sections are repetitive/overlapping! :( DMacks (talk) 14:24, 5 August 2010 (UTC)[reply]
I'd rather computers adapt to us instead of us adapting to computers. Also, the base you use doesn't matter much. (There are a couple of things that are easier in one base or another, but not many. Base 60 is good because it has lots of integer divisors; balanced ternary is pretty cool. But mostly it doesn't much matter.) Ariel. (talk) 23:50, 4 August 2010 (UTC)[reply]
Because we are pedantic on the Reference Desk, I have to correct one thing you wrote: Hex is "base 16", and doesn't use "powers of 16". Comet Tuttle (talk) 00:07, 5 August 2010 (UTC)[reply]
Er... yes it does... The rightmost hex digit is the 1's, the next one is the 16's, the next one is the 162's, etc.. Those are powers of 16. --Tango (talk) 00:46, 5 August 2010 (UTC)[reply]
Computers can easily adapt to any system that humans find convenient - base 16 is mildly useful for programmers - but the rest of the planet shouldn't have to be afflicted with them. Base 16 is also a pain to deal with, mathematically. If we were thinking of changing the base we work in, then base 12 would be much more convenient than either 10 or 16 because it has so many factors: 1,2,3,4 and 6. Much better than 1,2,5 or 1,2,4,8. Being able to divide by three in your head and getting an exact answer without all of those recurring digits would be a blessing!
Also, there is a system called BCD (binary-coded-decimal) that allows computers to do base-10 math with relative ease.
If we're revising the human number system (an inconceivable prospect!) then I'd like to switch to one in which we write only the digits 0,1,2,3,4 and 5 - and write 6,7,8,9 as 14̲,13̲,12̲ and 11̲. The underscore means 'minus' - but just for that digit. 5 can be written 15̲ - the two representations are interchangeable, just as 0 and -0 mean the same thing in normal arithmetic. Think of it a bit like roman numerals where IIX means 'ten minus two' - which is what 12̲ means. Doing this has dramatic effects on doing arithmetic. If you have a long column of numbers to add up - you can cancel matching pairs of 3 and 3̲ (for example) and you have far fewer problems with carrying digits. In roman numerals, you can add IIX and XII by cancelling the 'II's that occur either side of the X so IIX+XII=XX. So in this system, 12̲+12 = 20. Negative numbers are also more naturally represented - and the complications of adding and subtracting positive and negative numbers just go away, quite naturally. Negative numbers are written like positive numbers - but with the underscores flipped, so 0 - 1234 = 1̲2̲3̲4̲.
Feel free to spend the next hour trying to figure out what I just said! SteveBaker (talk) 01:30, 5 August 2010 (UTC)[reply]
You're talking about balanced decimal. --Tango (talk) 01:37, 5 August 2010 (UTC)[reply]
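(A quick sketch of the conversion in Python, printing negative digits as plain negative integers rather than underlined ones; the digit 5 is left as 5, matching the "interchangeable" remark above.)

 def to_balanced_decimal(n):
     digits = []
     while n != 0:
         r = n % 10
         n //= 10
         if r > 5:          # e.g. 8 becomes -2 plus a carry into the next place
             r -= 10
             n += 1
         digits.append(r)
     return list(reversed(digits)) or [0]

 print(to_balanced_decimal(8))       # [1, -2]   i.e. "1" then "minus 2"
 print(to_balanced_decimal(1234))    # [1, 2, 3, 4]
 print(to_balanced_decimal(-1234))   # [-1, -2, -3, -4] - every digit negated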
And at this point, I can't help pointing out our nice Duodecimal article, which discusses a base twelve system. It also has a bunch of fun links to other articles related to alternative counting systems. Buddy431 (talk) 02:00, 5 August 2010 (UTC)[reply]
We already do, when it's convenient for us. For example, debuggers show us memory addresses in base 16 because memory is organized in chunks whose size is multiples of large powers of 2. But for day-to-day use, there's no advantage to it. Paul (Stansifer) 02:05, 5 August 2010 (UTC)[reply]
For a substantial fraction of the history of computers, we've used octal (base 8) rather than hex. Radix 50 notation also had a brief following. SteveBaker (talk) 03:27, 5 August 2010 (UTC)[reply]

If we start by setting an example now, we can convert human society by next year 7DBh. Tax returns should be written with income in hexadecimal and deductions in binary notations. Cuddlyable3 (talk) 23:19, 5 August 2010 (UTC)[reply]

My personal rule is that computer programmers are officially entitled to state their ages in hex. I feel much more like a 37 year-old than my decimal age. OTOH, tax returns tend to feel like they are being written in base 1. SteveBaker (talk) 13:50, 6 August 2010 (UTC)[reply]
There are 10 kinds of people in this world. Those who understand Binary and those who don't. ~AH1(TCU) 00:54, 7 August 2010 (UTC)[reply]

August 5

history of skin cancer?

The cancer article has a pretty good overview of the history of awareness of the disease, but focuses mainly on breast cancer. The skin cancer article has no history section whatsoever. I'm very curious about the history of awareness of skin cancer. My guess is that before long distance transportation became feasible, skin cancer acted on geographically fixed (more or less) populations such that after X (or XX) generations the survivors didn't contract skin cancer in noticeable numbers. Following that, it was only once people began to really move around (primarily latitudinally, I suppose) that you end up with humans living in latitudes with amounts of sunlight exceeding that which their bodies can naturally protect against? I can't recall ever reading about a historical figure dying of skin cancer... though I suppose contemporary historians simply may not have known what it was that killed the person... 218.25.32.210 (talk) 01:23, 5 August 2010 (UTC)[reply]

Famous historical figures didn't spend much time outside. Poor (not-famous) people did that. So it seems unlikely that anyone from more than 100 years ago who was notable for anything would have had enough sun exposure to have skin cancer. It's likely lots of people had it, and some may have died from it, but those people didn't make the history books as individuals. --Jayron32 05:43, 5 August 2010 (UTC)[reply]
Caesar Rodney was plenty famous in the 1700s and he had skin cancer. Dragons flight (talk) 06:03, 5 August 2010 (UTC)[reply]
Oh, I'm sure if you dig far enough, you'll find someone. It's entirely possible to get skin cancer even if you don't spend any time in the sun. It's just that in general, sun exposure is a major risk factor for skin cancer, and most people who did something to get famous weren't out picking turnips in the field 12 hours a day... --Jayron32 06:09, 5 August 2010 (UTC)[reply]
I think you're probably exaggerating the lack of being in the sun. Whether or not it was fashionable (across very different periods and cultures) to be in the sun, plenty of important people have had to spend their time in the sun. And whether people are famous or not has nothing to do with whether doctors would have discussed the ailment, or would have discussed notable cases. --Mr.98 (talk) 14:51, 5 August 2010 (UTC)[reply]
This page does not exactly beam "reliability", but contains a plausible history of Western medical discussions of skin cancer. Apparently René Laennec first discussed it explicitly in a lecture in 1804, and gave it the name Melanoma. John Hunter operated on a melanoma in 1787 but apparently wasn't aware of what it was. He called it a "cancerous fungus". A certain Samuel Cooper (who does not seem to be any of the Wikipedia "Samuel Cooper"s) reported in 1840 that melanoma was basically untreatable in its advanced stages. It's not much but it's a start. In any case, it seems that awareness of it as a distinct health entity, much less a form of cancer, is pretty recent (i.e. 19th century). The understanding of the link between skin cancer and UV exposure is probably even more recent. If you don't know what causes something, it's hard to have much of a public health campaign about it. --Mr.98 (talk) 14:57, 5 August 2010 (UTC)[reply]
Skin cancer awareness probably increased following the discovery of the ozone hole. ~AH1(TCU) 00:52, 7 August 2010 (UTC)[reply]

what is the difference between cytonemes and filopodia?

Filopodia can fuse their membranes to make large pseudopodal or epithelial bridges between cells, right? But not traditional cytonemes? I just read this Nature paper that declared in its abstract: Hereafter, we refer to these filopodial bridges as viral cytonemes (neme meaning thread), because they share features with long-lived filopodia previously observed in the imaginal disc of Drosophila. Cytonemes constitute stable cell–cell bridges thought to mediate the long-range transport of signalling molecules between cells.

This confuses the hell out of me! Much thanks and gratitude to anyone who can explain this to me, because my current internship (which ends in 2 days) depends on this question. John Riemann Soong (talk) 02:09, 5 August 2010 (UTC)[reply]

My "simple" definition: Filopodia are thin, finger-like, dynamic actin-filled extensions of the plasma membrane that play roles in sensing the extracellular environment, cell-cell adhesion, and cell-substrate adhesion.
There are a whole lot of words that describe long thin protrusions of the plasma membrane, many of which probably have different functions than traditional filopodia (such as long-range cell-cell contact, intercellular communication, etc.) and I'm sure you will find experts who disagree on how to use the terms -- you're not the only one who is confused! One distinction might be whether the structure is short-lived and dynamic (filopodia) versus long-lived and stable (cytoneme). However, your best bet is to describe the structures you are observing as clearly as possible, without worrying excessively about what to call them. If you have time-lapse imaging that shows the protrusions extending, retracting, making cell-cell contacts, etc. it's probably safe to call them filopodia. If you can demonstrate that your filopodia fuse together or somehow allow intercellular movement of cytoplasm, it's probably safe to call them filopodial bridges. I can't say why the authors of the Nature paper decided to use the term "viral cytoneme," but from quickly skimming it, they seem to be suggesting that the viral envelope proteins are somehow facilitating the generation of filopodial bridges that allow the viruses to pass from one cell to the next. If you have a similar context where a treatment seems to facilitate the generation of long-term filopodial bridges allowing cell-cell transport, perhaps you could use a similar term. Keep in mind, however, that cell culture is a pretty abnormal situation for cells and some of the things we observe in a dish work a little differently in a three-dimensional organism. I'm not sure that was helpful, but the take home message should be to clearly define the structures you are talking about, and then use that term consistently. Good luck! --- Medical geneticist (talk) 13:42, 5 August 2010 (UTC)[reply]
That gave me some confidence. Thanks! John Riemann Soong (talk) 14:14, 5 August 2010 (UTC)[reply]
Does anybody have any clue about the different behaviours of filopodia and lamellipodia? I wonder if lamellipodia are in fact responsible for "trapping" my gold particle (in that animated gif) and bringing it in, since lamellipodia are found between filopodia? (According to the literature on growth cones in neurons...) I am also wondering if lamellipodia are responsible for the "bridge widening" I have seen. John Riemann Soong (talk) 00:47, 6 August 2010 (UTC)[reply]

Chimney

Here's the question I was given: Suppose a chimney of length L started to tip and fall. Where would it break?

Here's my attempt at a solution: The angular acceleration of the chimney should be (3/2)(g/L)cosθ (θ being the angle between the chimney and the ground). Consider a small portion of the chimney a distance x from the base. If just gravity were acting on this small portion, it would have an angular acceleration of (g/x)cosθ, so parts of the chimney near the top are rotating "faster than they should" were only gravity acting on them. Thus there must be internal torques, and thus internal forces, acting within the chimney. Wherever the internal forces are greatest, that's where the chimney will break. For a small portion of the chimney, τint + τext = τint + mgx·cosθ = (mx²)(3/2)(g/L)cosθ. Using this, the force becomes greatest at the very top of the chimney, which doesn't seem to make much sense. Can someone point out where I've gone wrong? Thanks. 76.69.241.253 (talk) 02:11, 5 August 2010 (UTC)[reply]

Well - do you trust your math or your instincts? Watch this (the interesting bit starts about a minute into the movie) - I'm pretty sure those two chimneys are both of length 'L' :-) SteveBaker (talk) 02:37, 5 August 2010 (UTC)[reply]

Don't forget the other internal forces, the ones due to the weight of the chimney. Gdr 17:18, 5 August 2010 (UTC)[reply]

Also don't forget that the inertia of the chimney is not the same along its length. Your math does not seem to take this into account: lower down, when it accelerates the top there is a ton of inertia working against doing so; in the middle it not only has to accelerate the top, it needs to decelerate the bottom, so you have double the force. At the top there is little inertia to fight. Ariel. (talk) 18:44, 5 August 2010 (UTC)[reply]
Does your chimney have any strength at all, or is it just stacked bricks? If it has any strength at all your math will not be right, since it will break wherever the force is greater than the strength (so at multiple locations). If it has no strength then it will break everywhere at once, and it will impart no force to the top, since it broke and it can't. Ariel. (talk) 18:38, 5 August 2010 (UTC)[reply]
The forces and breaking of a falling chimney is a common intro-physics topic. google:physics falling chimney has tons of hits with pictures and the math details. DMacks (talk) 19:01, 5 August 2010 (UTC)[reply]
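(A numeric check of the standard textbook answer for a rigid, uniform chimney pivoting about its base - a sketch only, since real chimneys are neither rigid nor uniform. With angular acceleration (3/2)(g/L)cosθ, the internal bending moment at height x works out proportional to x(L - x)², which peaks at x = L/3.)

 L = 30.0                                     # arbitrary chimney length in metres
 xs = [i * L / 1000 for i in range(1001)]
 moments = [x * (L - x) ** 2 for x in xs]     # proportional to the bending moment at height x
 x_peak = xs[moments.index(max(moments))]
 print(x_peak, L / 3)                         # both ~10.0: it breaks about 1/3 of the way up from the base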

Okay I see where I went wrong, thanks. 76.69.241.253 (talk) 20:48, 5 August 2010 (UTC)[reply]

Also, the size and number of bricks, as well as the width, hardness and cohesive strength (?) of the conglomerate holding them together, may affect your solution by small degrees. ~AH1(TCU) 00:49, 7 August 2010 (UTC)[reply]

gas central heating

can someone draw a diagram of how gas central heating works? and how it vents?--Tomjohnson357 (talk) 05:11, 5 August 2010 (UTC)[reply]

Central_heating#Electric_and_gas-fired_heaters has a very brief description. Air handler contains more. --Jayron32 05:40, 5 August 2010 (UTC)[reply]
In words, a description of the gas central heating system in my house: There is a large square vent in the ceiling, air goes in there, through a wide hose to a box in the ceiling space. A gas pipe also feeds that box. The box burns gas to heat air. It vents its exhaust through a chimney to outside. A fan in the box pushes the heated air out through another wide hose which splits into about eight narrower hoses which connect to vents in the ceiling around the house. So air is sucked into the heater from inside the house; it's heated by burning gas; the hot air is pumped into the house through one hose which splits into several. --203.202.43.53 (talk) 08:15, 5 August 2010 (UTC)[reply]
See also Furnace#Household furnaces. DMacks (talk) 14:17, 5 August 2010 (UTC)[reply]
This schematic shows the kind of setup in many UK homes. The boiler is a closed system heating water and pumping it round the radiators and through the heat exchanger coils in the hot water tank. Cold water is supplied to the cold taps around the house and the storage tank(s) in the loft space. Cold water is then gravity fed from the loft tank to the hot water tank (where it is heated by the heat exchanger coils) and then supplied to hot water taps around the house. There are a few variations using on-demand heating or pressurised systems. Astronaut (talk) 14:52, 5 August 2010 (UTC)[reply]

Fuss about Blackberrys and National Security

I'm not a Blackberry fan - I find it to be quite an eyesore in the aesthetics department and I haven't ever used one, so technically I wouldn't know what this security risk stuff everyone is talking about is. The Saudi and UAE governments have banned Blackberry services from yesterday. What is it about Blackberrys that makes them a security threat? I'm not asking a political question here, but what is the scientific reason that makes these handheld devices a risk? Is it restricted only to the Blackberry or is an iPhone too capable of such risky behaviour? And have these risks been a fallout of a new (recently introduced) feature, or has the risk been there since the Blackberry launched? Why this sudden brouhaha? I'm sorry for asking so many questions but I feel they are all interrelated. Thanks for the answers in advance--Fragrantforever 06:57, 5 August 2010 (UTC) —Preceding unsigned comment added by Fragrantforever (talkcontribs)

I can't see any risk that blackberry devices present that other smartphones don't (although I do know someone who suffered a repetitive strain injury due to heavy use of his blackberry's scrollwheel). One risk of such devices in a secure environment is that they can connect to computers, take photographs, record voice, store huge amounts of data and smuggle that data out of the secure environment. Another is a risk shared by laptop computers -- they're easy to steal. A stolen blackberry could be full of confidential e-mails. --203.202.43.53 (talk) 08:22, 5 August 2010 (UTC)[reply]
But it's not that at all. I just looked up news reports and found this. A quick reading suggests to me that they are banning the messenger function on Blackberrys because it provides secure communications, so it's hard for the government to monitor what people are saying when using it. And worse, Research in Motion (the company that makes Blackberrys) refuses to decrypt messages, even when governments ask nicely... unless the government in question has gone through proper processes. Both UAE and Saudi Arabia say they don't want evil terrorists having easy access to secure communications. My take on it is that those countries are unhappy about having to go after specific messages one at a time and through their courts; they'd much rather be able to simply monitor all communications to track down "naughty" people before they know they're naughty. --203.202.43.53 (talk) 08:30, 5 August 2010 (UTC) (edited a couple of times before 08:41, 5 August 2010 (UTC))[reply]
Of course, what the ruling regimes of the UAE and Saudi Arabia consider 'naughty' might not be restricted to terrorism and might not be entirely congruent with definitions prevalent in more, shall we say, liberal societies. 87.81.230.195 (talk) 16:51, 5 August 2010 (UTC)[reply]
From the reference you provide along with [26], it's entirely unclear under what circumstances ('proper processes' as you called them) RIM would decrypt messages and whether RIM would do it in the same circumstances in the UAE and Saudi Arabia that they would in China, Russia or even the US or Canada. I don't even see any evidence they would let them 'go after specific messages one at a time and through their courts'. In any case, I have strong doubts they've been willing to compromise with the UAE or Saudi Arabian governments as much as they have been with the Russian or Chinese ones and from the sources I'm not alone in this view.
If I was a cynic, I would say RIM is playing this for all it's worth, since they don't consider the UAE or Saudi Arabia that important and so are willing to give up those markets if necessary in exchange for the good publicity about protecting their customers. They're perhaps hoping that they can come to some kind of deal with the UAE or Saudi Arabian governments which said governments would let them spin as a backdown on their part (even if it was something said governments would have agreed to all along). On the flipside, the UAE and SA are, I think, reasonably friendly, so they may have been in contact and decided to apply mutual pressure.
I've heard of India having similar concerns recently (also mentioned in your ref), so I'm hardly surprised at the suggestions [27] that a deal is close. Who knows, perhaps news of an Indian deal reached the UAE and SA governments and, having perhaps negotiated for as long as the Indian government, they got rather pissed that RIM was happily playing ball with the Indian government, as they may have already done with the Russian and Chinese governments, and so decided there was no point negotiating any further. Or perhaps RIM decided to play hardball with the UAE and SA knowing they'd be cut off, so they could use it as part of their negotiations with India (if you want to cut us off, then so be it).
The simple truth is that, as in all of these sorts of secretive corporate-government negotiations, it's hard on the outside to really know what's going on, or why, behind the scenes. It's almost definitely incredibly complex, and the power of each side generally plays a big part. Trying to paint either side as 'bad guys' or the one causing problems is usually way too simplistic. And as much as anything, the lack of knowledge or imperfect knowledge about what's going on most likely doesn't help matters. It may be the Russian, Chinese and US deals are not that great, but as long as the UAE and SA governments think it's better than what they're getting, they're likely to be reluctant to agree to something less.
BTW it seems Indonesia has similar issues [28]; it would be interesting to see what happens there. Of note, a common thread is that many countries don't like the servers being in Canada. I wouldn't say this is that surprising or unusual; I think it's a common concern for some countries. While it's true that e-mail servers are commonly in another country, one of the things is that for the Blackberry the e-mail is AFAIK strongly integrated with the phone and also, it seems, usually encrypted. Compare that to MMS or SMS or phone calls, which are not encrypted and go through the local service provider.
While I mentioned RIM's possible publicity advantage with their consumers, as somewhat shown by the Indonesian case, these sorts of things do have a risk of cascading, since when other governments see what's going on they start to take notice too. This is of course likely to be a concern to RIM, and I saw a news report saying a similar thing.
It's also worth remembering that some of the people involved, particularly the people in high levels of government may have limited understanding of the technical details yet they will often be the people with the real say and may not always listen to what their more technically inclined advisors or civil service are telling them (if they even dare).
Nil Einne (talk) 10:11, 5 August 2010 (UTC)[reply]

According to CNN, http://www.cnn.com/2010/TECH/mobile/08/04/blackberry.fans/index.html?hpt=Sbin the Saudi and UAE governments like to spy on people's texts and e-mails. But the Blackberry's encryption is so good that they don't know how to crack it, so they just banned the thing entirely. 148.168.127.10 (talk) 20:16, 5 August 2010 (UTC)[reply]

Right. The security threat is that the Blackberry is too secure, as far as these governments are concerned. RIM is reluctant to "fix" the problem for obvious reasons. Looie496 (talk) 01:51, 6 August 2010 (UTC)[reply]
Of course any programmable phone could be provided with an 'app' that would encrypt your data with some ungodly good encryption and send it to someone with a similarly equipped device - any halfway decent programmer could toss that together in a day or two. Even if such applications were banned from phones (and I have no idea how you'd make that kind of a ban enforceable), it would still be possible to encrypt a message on a regular computer (or even on paper with a pencil) and key the resulting gobbledygook message into a completely dumb phone. Heck, you can use a one-time pad or a 'code book' and send a heavily encrypted message in what seems like plain text or speech: "Shall we meet tomorrow at around 2:15?"...look up number 215 in your code book...it says "Blow up the parliament building tomorrow at noon". Encryption is childishly easy. When I was a kid, my parents had me learn a bunch of fake names so that (in an era before cellphones) I could make a 'reverse charge call' (aka 'collect call') from a callbox and when the operator asked who I was, I'd give the fake name appropriate to the message I wanted to send. My parents would then refuse to accept the call and look up the name to discover that "Henry Plantagenet" really meant "Please pick me up from soccer practice"...or whatever.
The iniquitous thing about all of this is that the 'bad guys' can always do good encryption if they need to. But people who merely wish to prevent rivals from stealing industrial secrets, or to avoid telling the paparazzi where they're going to dinner, find it much harder to do so without the convenience of a securely encrypted data service like the Blackberry. SteveBaker (talk) 13:44, 6 August 2010 (UTC)[reply]
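(As an illustrative aside, not part of the original discussion: here is a minimal sketch, in Python, of the 'code book' idea described above, where an innocent-looking question hides a lookup key that only someone holding the same book can resolve. All phrases, numbers and function names here are invented for the example.)
 # Toy code-book scheme: the transmitted text looks ordinary, and only
 # the holder of the shared book can map it back to the real message.
 CODE_BOOK = {
     215: "Please pick me up from practice",
     930: "The meeting is cancelled",
 }

 def encode(entry):
     # Hide the code-book entry number inside an innocent-looking question.
     return "Shall we meet tomorrow at around %d:%02d?" % (entry // 100, entry % 100)

 def decode(message, book):
     # Pull the 'time' back out of the message and look it up in the book.
     hours, minutes = message.rstrip("?").split()[-1].split(":")
     return book[int(hours) * 100 + int(minutes)]

 print(decode(encode(215), CODE_BOOK))  # -> "Please pick me up from practice"
(An eavesdropper without the book sees only an unremarkable question about a meeting time.)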
On a related note, China Telecom started to screen cell phone text messages last year so the police could enforce a ban on solicited public demonstrations after the July 2009 Ürümqi riots. ~AH1(TCU) 00:46, 7 August 2010 (UTC)[reply]

Reason for adding synthetic colors in medicines

Hi,
The use of synthetic colors in food and other edible substances is an area of concern. I understand that the addition of colors to foods such as candies, chocolates, shakes, etc., is to increase the product's appeal.
But then what is the main reason for adding colors to medicines such as cough syrups, capsule shells, etc.? After all, medicines are not generally marketed based on their visual appeal. —Preceding unsigned comment added by Kanwar rajan (talkcontribs) 09:58, 5 August 2010 (UTC)[reply]

Having a distinct shape, size and colour would most likely help people recognise a pill, so they are less likely to take the wrong thing. It may also be a help to pharmacists. Also, most people already don't like taking medicines, so the manufacturers don't want them to look too revolting. This is particularly true, I suspect, for anything children are likely to take. (Although I've heard that nowadays many children's medicines have been sweetened and flavoured enough that children actually want to take them.) Marketing may sometimes come into it; for example, most everyone knows what the little blue pill is (well, apparently not Wikipedia, until I made the redirect). It may also be a useful way to distinguish your product from other generics and similar products.
Note that if you are taking medicines, even something like an OTC cough syrup, I would say any alleged or real potential effects of the tiny amount of colours you consume, in what one would hope is a tiny amount of medicine, pale in comparison to the side effects and other concerns of taking the medicine and the circumstances that require taking it. Even if you actively avoid colours as much as possible, it's unlikely the amount you consume from medicines while sick is going to be a big proportion of the colours you do consume.
Nil Einne (talk) 10:54, 5 August 2010 (UTC)[reply]
I still have a real urge to go buy some Calpol (a children's liquid paracetamol... tastes very good). I'm now 19, haven't had it in about 16 years, but still remember it being very, very nice. Just a random comment. Regards, --—Cyclonenim | Chat  11:25, 5 August 2010 (UTC)[reply]
I actually had the same urge a few months ago and bought some - it's just as nice as I remember it. Someone at that medicine company really knows what they're doing. ~ mazca talk 13:18, 5 August 2010 (UTC)[reply]
Studies show that the placebo effect is affected by the shape and colour of the pills [29]. Colour is also useful in telling pills apart; there are drug identification programs that work with the color, shape and text on the pills, for instance.[30] EverGreg (talk) 11:28, 5 August 2010 (UTC)[reply]
For a while, I was taking two different medicines regularly; the bottles always described the capsules' appearances, so even if I'd somehow forgotten which pill was which, I could have re-learned quite easily. Nyttend (talk) 13:45, 5 August 2010 (UTC)[reply]
In the case of cough syrups, adding food coloring to them makes the artificial flavoring (a little) more convincing. Imagine taking a colorless, orange-flavored cough syrup. It still tastes like (artificial) orange flavor, but it feels weird because you've been conditioned to expect orange-flavored food to have an orange color. --173.49.16.4 (talk) 12:46, 5 August 2010 (UTC)[reply]
Yup, I'd have to tag the original basis of the question as a false assumption: "After all, medicines are not generally marketed based on their visual appeal."[citation needed] Companies spend huge amounts of money on marketing and consumer analysis. If a company thought it could do better with a different approach, for example lower production cost (leading to lower consumer cost, hence higher sales, or a higher profit margin) or improved consumer appeal (leading to increased sales or the ability to raise the price, i.e. an increased profit margin), they would do so. Heck, even increased acceptability to consumers of "now with no artificial colors" is a potential corporate boost, yet that is not done too often in this industry...it's just not what consumers seem to want (enough to justify the cost of changing the product image and its manufacturing, plus other potential offsetting losses of sales for reasons others have mentioned). DMacks (talk) 14:14, 5 August 2010 (UTC)[reply]
For what it's worth, I have seen some infant's medication marketed as "Dye-free!" Here's an example from a googled website. Interestingly, before Kanwar rajan's question, I never thought this was a scheme to market to parents who are concerned about the health effects of synthetic dye; I had always thought it was meant to tell the parents: "When you spill this damn stuff onto your clothes because your kid shakes his or her head around while taking this damn stuff, the clothes won't get permanently stained with hot pink and orange dye." Comet Tuttle (talk) 16:28, 5 August 2010 (UTC)[reply]
I'll note that when pouring measured volumes of a liquid, a bit of color can make the medicine easier to see. TenOfAllTrades(talk) 23:28, 5 August 2010 (UTC)[reply]

It was a dark and stormy night to take a red pill or a blue pill. Morpheus explains. (video) Cuddlyable3 (talk) 23:04, 5 August 2010 (UTC)[reply]

Sides of leather

What is a side of leather? A source that I've used at Charles Wintzer Building says that this building, a former tannery, processed 2,500 sides of leather in a year, but it doesn't explain what it is. Perhaps a complete animal hide? Nyttend (talk) 13:55, 5 August 2010 (UTC)[reply]

Half an animal hide. The Oxford English Dictionary, sense 9.b., gives this quote: "1885 Harper's Mag. Jan. 274/2 After soaking, the hides are..cut through the middle of the back to separate them into 'sides'." Gdr 14:50, 5 August 2010 (UTC)[reply]

Mathematical Tools for General Relativity

I am currently in eleventh grade, and I wish to learn general relativity right from the basics up to the derivation of the equations, tensors, and so on. What mathematical knowledge is needed for this? Can you please recommend books for learning the mathematics needed for general relativity? Currently, I am well versed in differential calculus and basic integration, complex numbers, trigonometry, and some other basic mathematics. What further topics do I need to know in mathematics for general relativity? harish (talk) 15:06, 5 August 2010 (UTC)[reply]

You can try to study General Relativity any time you like. You'll probably find that it uses techniques you aren't familiar with until you've mostly completed a core curriculum of advanced math and physics. It would be possible to try to learn each concept piecemeal, but normally in an undergraduate physics or math program you wait until your junior or senior year of university before even taking a GR course, so that you've had time to build up the necessary background. Many programs have a sort of "teaser" course in "modern physics" taught at the freshman or sophomore level, covering the basic conceptual ideas of relativity, but you can't expect many freshmen and sophomores to have finished the requisite math for full-blown GR at such an early stage.
In the United States, a typical undergraduate course for you will follow something along the lines of:
  • two to three more courses in calculus, culminating in multivariable calculus
  • a solid course in advanced linear algebra, (more advanced linear algebra will help, too)
  • a course in differential equations
  • after completing your basic physics run, you will need an advanced mechanics course to learn about coordinate-independent representations of physical laws; and an advanced electromagnetics course
  • finally, you will have the tools necessary for a course in general relativity, suitable to describe spatial coordinates as they relate to mass, energy, and momentum; and the ability to describe how those things affect particles' and electromagnetic waves' energies and trajectories.
You might consider looking at your preferred university's physics curriculum to see what courses they offer. Many universities offer a specific course in General Relativity, but others lump this in to an advanced math course or cover it during a classical mechanics or electrodynamics course. Ultimately, consider what your objective in learning GR is, and use that objective to guide your coursework. Nimur (talk) 16:04, 5 August 2010 (UTC)[reply]
While I'd agree that all of the above could be useful, I don't think all of it is strictly necessary before tackling a mathematically rigorous treatment of GR, as some of it will be included in introductory GR texts. I'd say the fundamental prerequisites are multivariable calculus, partial differential equations, basic linear algebra, and an understanding of how classical mechanics and electromagnetism are handled through partial differential equations. That said, in many places a true course in GR isn't even offered before graduate school. And it's certainly true that building up a deeper understanding of math and physics will be genuinely helpful to understanding GR. Since the poster asked about texts, I'd suggest starting with the chapters on multivariable calculus in an introductory text, such as Ellis and Gulick's Calculus with Analytic Geometry. That could be followed by looking at the vector analysis, linear algebra, and differential equations chapters of an intermediate text like Arfken and Weber's Mathematical Methods for Physicists (if your goal is only GR, you can probably skip the half of the book on special functions, as that is more a quantum mechanics thing). That might be followed with an advanced undergrad book on electromagnetism, such as Griffiths's Introduction to Electrodynamics. This would probably get you to a point where you could look at a rigorous GR text without your head exploding, though you'd undoubtedly need to consult works on mechanics, linear algebra, and differential equations to fill in gaps as you go. Dragons flight (talk) 18:02, 5 August 2010 (UTC)[reply]
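(For context rather than as a study plan: the destination of all this machinery is the Einstein field equations, which relate spacetime curvature, written in tensor form, to energy and momentum,
 $$ G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu} , $$
which is why tensor calculus and partial differential equations dominate the prerequisite lists above.)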
You can get books on GR that teach you a lot of the maths as you go along (you don't need to know anything about tensors beforehand, for instance). You might need a bit more calculus (particularly vector calculus/multivariate calculus - they're the same thing) and you'll need at least some basic linear algebra. You'll need to have taken a basic course in mechanics as well. There is no harm in starting on GR now and, if you find something you don't understand, going back and learning it before carrying on. --Tango (talk) 17:25, 5 August 2010 (UTC)[reply]
The first proper book on it I read was Einstein's own book 'Relativity: The Special and General Theory' which I read when I was sixteen I think. It may be a bit out of date in places but I thought it was very good. It can be better to see things from near the beginning by the original person. Dmcq (talk) 17:53, 5 August 2010 (UTC)[reply]
A book that old won't include things like Penrose diagrams, which I found extremely useful for getting a feel for a solution to the EFE. --Tango (talk) 18:17, 5 August 2010 (UTC)[reply]
If you restrict yourself to special relativity (no gravity) it's easy to find books that will teach it. The math isn't that hard at that level.
If you want to press on to general relativity, I would strongly recommend the book "Introducing Einstein's Relativity" by Ray d'Inverno, which covers both special and general relativity. It introduces the tensor math as you go along and is an easy read. As in any curriculum involving some math, it's essential that you do the exercises in the book. EverGreg (talk) 07:54, 6 August 2010 (UTC)[reply]

I understood special relativity when I was 14, from Einstein's own book. But the book has very little on the mathematics of general relativity. That's why I need good books for learning GR from the roots. harish (talk) 11:08, 6 August 2010 (UTC)[reply]

Roger Penrose's book The Road to Reality contains a semi-popular introduction to general relativity, including the mathematical background. Gdr 17:24, 6 August 2010 (UTC)[reply]
My favorite book on the mathematics of relativity that is accessible to a high school student (at least it was accessible to me when I was in high school) is Lillian Lieber's The Einstein Theory of Relativity. Looie496 (talk) 18:00, 6 August 2010 (UTC)[reply]

chemistry

the question is that our chemistry teacher normally say when a tree has been more than 35 years it will form the sumation of coal that is it will change to charcoal is it true? —Preceding unsigned comment added by Abhay4life (talkcontribs) 17:03, 5 August 2010 (UTC)[reply]

I'm sorry, I don't understand the question. What has the tree been for 35 years? What does "sumation of coal" mean? --Tango (talk) 17:28, 5 August 2010 (UTC)[reply]
Any translation of this question that I can make still produces nonsense. A tree that is living is not coal and never will be. Once a tree dies, it will take far longer than 35 years to become coal - and then only if it is in just the right conditions. -- kainaw 17:33, 5 August 2010 (UTC)[reply]
For "changing trees to charcoal", on the other hand, see our article Charcoal. Deor (talk) 18:48, 5 August 2010 (UTC)[reply]
(ec) You can change wood from a tree to Charcoal whenever you want; just heat it out of contact with the air for a few hours. Coal is a sedimentary rock that began as layers of plant matter that have been covered by other strata for millennia, typically since the Carboniferous period about 330 million years ago. A visitor to a geologic museum asked an attendant how old the specimen of coal on display was. The attendant answered "It's 330 000 004 years old." The visitor said "Are you sure one can know the age so accurately?" The attendant said "Yes, I know because I was here when they brought in that specimen 4 years ago." Cuddlyable3 (talk) 22:45, 5 August 2010 (UTC)[reply]
I'm guessing there was supposed to be a 'dead' in the OP's question (when a tree has been dead more than 35 years) even if it's still rather confused. Nil Einne (talk) 01:52, 6 August 2010 (UTC)[reply]
On the other hand, peat, (which is basically "fresh coal") doesn't necessarily have to be millions of years old. It makes a pretty decent fuel. --Jayron32 03:54, 6 August 2010 (UTC)[reply]

Why can't Quantum Teleportation transmit information faster than light?

In easy to understand terms please. Tried to read the articles, but didn't really find anything that explains why QT doesn't transmit information. I do understand entanglement though. 148.168.127.10 (talk) 19:38, 5 August 2010 (UTC)[reply]

Nutshell version: Alice and Bob separate entangled particles. Bob watches his particle change state and knows that Alice's changed state right then, too! Instant FTL knowledge! However, Bob doesn't know what that means -- did the state change because Alice triggered it (a message) or because of natural processes (quantum mechanical noise)? There's no way to know until Alice sends a separate message, one way or another, by standard speed-of-light-limited means. Thus, no true information can be passed FTL via this method. — Lomn 21:22, 5 August 2010 (UTC)[reply]
Actually, Bob cannot watch his particle change state. He can only measure it once, and you can't tell if Alice has measured hers! Probably the easiest way to think of it is to imagine there are hidden variables. Your particle already knows if it's in one state or the other, and the other particle is guaranteed to be in the opposite state. But those states are already there - measuring them tells you nothing that was not already known. You are simply revealing the information to yourself, but transmitting nothing. Big caveat: hidden variables might not be a description of reality; it's much more complicated than that. But for an initial understanding it's a good start. Ariel. (talk) 21:32, 5 August 2010 (UTC)[reply]
Quantum teleportation works like this: Alice and Bob generate two qubits in a Bell state and each take one. Alice does a certain calculation with her half of the Bell pair and the qubit to be transmitted (the "message qubit"), obtaining two classical bits, which she sends to Bob. Bob does a certain calculation with his half of the Bell pair and the two bits from Alice, and obtains the message qubit. Here's the classical equivalent of that: Alice and Bob generate a pair of equal bits (both zero or both one) and each keep one. Alice does a certain calculation with her half of that pair and the message bit (namely, an exclusive-or of the two), obtaining one bit, which she sends to Bob. Bob does a certain calculation with his half of the pair and the bit from Alice (namely, another exclusive-or) and recovers the message bit. This protocol is simply encryption with a one-time pad. The bit that Alice sends to Bob contains no information about the message bit (that is, an eavesdropper learns nothing by observing it). Bob's original bit also contains no information about the message bit. Nevertheless, the message bit can be recovered by combining the two. This is also true of quantum teleportation. Bob needs both his half of the Bell pair and the two bits from Alice in order to learn anything about the message qubit. -- BenRG (talk) 03:22, 6 August 2010 (UTC)[reply]
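(A minimal sketch, added for illustration only, of the classical analogue described above: the shared random bit acts as a one-time pad, so the single bit Alice transmits carries no information by itself. The function name is invented for the example.)
 import random

 def classical_teleport(message_bit):
     shared = random.randint(0, 1)   # Alice and Bob each hold a copy (the 'Bell pair' analogue)
     sent = message_bit ^ shared     # Alice XORs and sends this one classical bit
     recovered = sent ^ shared       # Bob XORs with his copy to recover the message
     return sent, recovered

 for m in (0, 1):
     sent, recovered = classical_teleport(m)
     assert recovered == m           # Bob always recovers the message bit,
     # but 'sent' on its own is uniformly random, so it reveals nothing about m.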
Bob, Alice, whatever; they teleport statistical information. If you want it faster than light, you have to do the statistics fast; then you get the information. Thanks —Preceding unsigned comment added by 212.199.175.104 (talk) 07:18, 6 August 2010 (UTC)[reply]
[citation needed]/[unreliable source?] Nil Einne (talk) 12:10, 6 August 2010 (UTC)[reply]

If there was some way to predict the true random bit, and Bob and Alice had all the keys in advance, it might be possible to do this. The trouble is predicting the random encrypted bit is a bit-erm...challenging. But I'm still working on it. —Preceding unsigned comment added by 80.1.88.20 (talk) 14:03, 6 August 2010 (UTC)[reply]

Might the Quantum Zeno effect be relevant? ~AH1(TCU) 00:30, 7 August 2010 (UTC)[reply]

lunar eclipses and "blood red moons"

Is every total lunar eclipse a "blood red moon?"

If not, what are the differences?

Thank you!Trntcntysongbird (talk) 23:55, 5 August 2010 (UTC)[reply]

No. The colour and brightness can vary quite widely depending on what is going on in the upper atmosphere (where the sunlight has to go through to hit the moon during an eclipse). If there is lots of dust in the atmosphere (eg. following a volcanic eruption), you can get very dramatic lunar eclipses. --Tango (talk) 01:59, 6 August 2010 (UTC)[reply]
What's happening during the eclipse is that the only light that's reaching the moon has to pass through the earth's atmosphere. If you imagine the light from the sun just skimming the edge of the planet - you realize that the sunlight that makes it to the moon is light from a sunset - so it's always orangy-red but whether it's really that bloody red depends on which parts of the earth are in the way - what the atmosphere is doing. If it's passing mostly over the oceans then there probably isn't much dust or pollution filtering out the yellows and oranges...if it's passing over polluted areas or dusty deserts - then only the red light makes it through the air to the moon. SteveBaker (talk) 02:54, 6 August 2010 (UTC)[reply]
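(For reference, a standard result rather than anything specific to this thread: the atmospheric filtering described above is dominated by Rayleigh scattering, whose strength scales as
 $$ I_{\text{scattered}} \propto \frac{1}{\lambda^{4}} , $$
so short-wavelength blue light is scattered out of the long grazing path through the atmosphere far more strongly than red light, and mostly reddened light reaches the eclipsed moon.)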

August 6

Scent + washing

I kneaded some ground beef into patties this evening barehanded and then rinsed my hands with water. My hands still smelled of the ground beef (be it the actual beef or the spices I used). What generates the smell? Is it antigen? Is it actual beef/spice on my hands? And is washing enough to get rid of the smell a good indicator that any harmful bacteria from the raw beef is also gone? DRosenbach (Talk | Contribs) 03:07, 6 August 2010 (UTC)[reply]

Lots of spices and other food smells can dissolve in skin, and then remain behind for a considerable time afterwards. Onion is particularly potent for me; after cutting onions I can smell them for days. What I discovered works for me is to use undiluted automatic dishwasher liquid (like Cascade or the like) directly on my hands. Most hand soaps, in order to avoid drying out your hands, end up being as much moisturizer as soap (moisturizers and soaps work at cross purposes with regard to cleaning: one is trying to get grease off of your hands, the other is trying to put grease into them). Since automatic dishwasher liquid is not normally designed to wash hands, it tends to be harsher as a soap, which is generally what you need to clean these smells off. This is of course all [original research?] and YMMV. --Jayron32 03:52, 6 August 2010 (UTC)[reply]
I don't think you could use smell to indicate whether there are harmful bacteria on your hands. If the chemical making the smell has gotten right into your skin cells - in essence "staining" them - your hands might be perfectly clean (i.e. there's nothing "on" them), but still smell off. I mostly use ground chicken rather than ground beef, but I usually make use of disposable plastic gloves. Between the smell and the risk of contamination, to me it's worth the nickel to slip on a pair of cheap plastic gloves and toss them afterwards. Matt Deres (talk) 13:23, 6 August 2010 (UTC)[reply]
You might also try some of the hand-cleanser goop that they sell in DIY and auto-parts stores - it's grey, doesn't smell of fruit, and comes in one-gallon bright orange containers that don't look nice on your bathroom counter-tops - but it's a really powerful degreaser with ground pumice as an abrasive. It does a spectacularly better job at cleaning your hands than the namby-pamby girly stuff they sell in the soap aisle of a regular store...although you might want to apply some pretty scented moisturizer afterwards! SteveBaker (talk) 13:23, 6 August 2010 (UTC)[reply]
Yes, it's good stuff. See Swarfega. 87.81.230.195 (talk) 21:11, 6 August 2010 (UTC)[reply]
I don't know about ground beef, but I find the same can happen after chopping chillies (lick your fingers hours after washing your hands and it still burns your tongue), and that's because the capsaicin in the chilli isn't very soluble in water. If you wash your hands in oil (just normal cooking oil is fine), it gets them clean. It is possible some of the spices you were using are also oil-soluble. --Tango (talk) 16:28, 6 August 2010 (UTC)[reply]
You can also try using stainless steel soap, but it will only get rid of the scent and not any of the bacteria. ~AH1(TCU) 00:24, 7 August 2010 (UTC)[reply]

what is this plant?

It lives in my backyard in CA. It's got dandelion-like yellow flowers (but somewhat smaller and more compact, like that of the yellow starthistle, except more sunken in), and when the plant "dries up", the flowers turn into little irritating spikes that stick on one's clothing and skin. It's fairly green and leafy when not "dry". hello, i'm a member | talk to me! 03:44, 6 August 2010 (UTC)[reply]

It could be catsear or hawkweed or hawksbeard, any of which is commonly confused with true Dandelion. --Jayron32 03:48, 6 August 2010 (UTC)[reply]
It is aster-like, but it has leaves that are elliptic, almost ovate, kinda roundish-long. They are dark green, and they fall off when the plant dries and the little spikes form. I think that those spikes are the fruit (seeds); they don't seem to fly away with the wind...they stick on to whoever touches the plant. hello, i'm a member | talk to me! 04:21, 6 August 2010 (UTC)[reply]

microscopy terminology in a formal paper

I want to use terms like "north" and "south" instead of "above" or "below" to avoid ambiguity with structures below or above the focal plane. Is this permissible in a formal paper or a poster? John Riemann Soong (talk) 04:06, 6 August 2010 (UTC)[reply]

I would say no. I think that would annoy readers. Looie496 (talk) 04:43, 6 August 2010 (UTC)[reply]
Is there a good alternative? John Riemann Soong (talk) 05:30, 6 August 2010 (UTC)[reply]
Higher/lower? more/less gpe divided by mass? more/less attraction to the earth? —Preceding unsigned comment added by 91.103.185.230 (talk) 08:30, 6 August 2010 (UTC)[reply]
I would say that if there are no pre-existing conventions for describing such things and providing you define your terms clearly - then it should be acceptable. If you're going to do that, I would suggest that you use North/South/East/West to describe positions within the focal plane and Above/Below to describe positions along an axis perpendicular to the plane. But there are other approaches you might consider: X, Y, Z with (X,Y) being in the focal plane and Z being above/below it. You could also define a couple of acronyms: AFP and BFP for 'Above focal plane' and 'Below focal plane' (eg "In this image, the longitudinal structures are ~10um AFP"). So long as you are very clear about your conventions up-front, you should be OK. If there is an established convention that you're merely unaware of, then I'd expect the peer-reviewer or editor to suggest an alternative - I doubt that an otherwise meritorious paper would be rejected on those grounds so long as you take care to clearly state the conventions you are using. SteveBaker (talk) 13:17, 6 August 2010 (UTC)[reply]
I don't believe I've ever seen that nomenclature used in a formal poster, and certainly not in a paper. You might hear a more casual speaker use that sort of terminology with a friendly audience. While I completely understand why you'd want to use unambiguous terms here, I'd be afraid that such non-standard terminology would be very jarring to a reader or a reviewer. In most cases, you can write your text and figure captions in such a way that the intent of "above" is clear from context. In the remaining situations – as SteveBaker says – your best bet is to go with x, y, and z. The xy plane is almost always taken to be coplanar with a focal plane, while the z direction takes you through different focal planes. If there is any potential for confusion about the orientation of any images or renderings, you can even include a little two-arrow legend adjacent to or overlaid on the image that explicitly identifies the axes shown. TenOfAllTrades(talk) 13:52, 6 August 2010 (UTC)[reply]

"Closer" and "farther". Cuddlyable3 (talk) 20:01, 6 August 2010 (UTC)[reply]

resin

what's a resin chair —Preceding unsigned comment added by Tomjohnson357 (talkcontribs) 05:39, 6 August 2010 (UTC)[reply]

Resin in this case is just a kind of plastic. Looie496 (talk) 05:51, 6 August 2010 (UTC)[reply]
This picture (at right) is a typical kind of resin chair - they are cheap because they can be injection molded with a simple two-part mold, they are lightweight and stackable (which makes shipping them cheap), and they are made from an inexpensive resin-based plastic. From an industrial design perspective, they are the perfect solution. From the customer's perspective, they are cheap, moderately comfortable and waterproof - but the plastic gets brittle from prolonged exposure to UV light and tends to break catastrophically - so they actually do a fairly poor job as lawn furniture. SteveBaker (talk) 13:02, 6 August 2010 (UTC)[reply]
Probably this kind of chair. ~AH1(TCU) 00:09, 7 August 2010 (UTC)[reply]

Bacon Cheeseburger Armpits

This is rather odd, but I've noticed my underarms smell like bacon cheeseburgers sometimes, even though I haven't eaten one in months. (And I don't sleep with cheeseburgers nestled in my pits either.) And I bathe 1-2x daily with non-cheeseburger-scented body wash. Any ideas on how a person's body could make such a scent? (Please don't let this fall under the dreaded 'medical advice'.) I'm just curious how a cooked meat scent could be manufactured by a body. Or is my nose just picking up similar compounds and firing the same triggers in my brain that equal cheeseburgers? --69.148.250.94 (talk) 06:12, 6 August 2010 (UTC)[reply]

Odor in the underarms is not created by the body. It's created by bacteria. However, the bacteria eat food that is secreted by the body, so the body can influence them to some degree. It seems to me you were colonized by bacteria that happen to excrete that particular smell (which is not strange; the smell of lots of foods is created by bacteria, for example in cheese, wine, etc.) If you can culture them, maybe you can sell it :) If you want to change the smell, first you need to <s>serialize</s> sterilize your underarms (harder than it sounds), then colonize them with your choice of bacteria - another person is probably your best source. Ariel. (talk) 09:26, 6 August 2010 (UTC)[reply]
Serialization is really the best way to go? Vimescarrot (talk) 09:36, 6 August 2010 (UTC)[reply]
Probably sterilize. Bus stop (talk) 09:50, 6 August 2010 (UTC)[reply]
Yes, that's what I meant, I was tired and didn't proof read. Ariel. (talk) 19:35, 6 August 2010 (UTC)[reply]

I cannot fucking believe you just told this guy to culture his underarm flora and (presumably) proceed to sell it as all-natural cheeseburger flavoring. Would your mother be proud of you if she knew you were enabling a guy to feed his underarm flora to millions? 84.153.230.246 (talk) 12:38, 6 August 2010 (UTC)[reply]

Hmm, 84.153, I'm very impressed at how much your English has improved since your last edits!!! Caesar's Daddy (talk) 21:58, 6 August 2010 (UTC)[reply]
Woaah...calm down! I think everyone but you understands that this is a little gentle humor - which is permitted on the reference desk. We might, perhaps, encourage Ariel to toss a couple of <small> tags into an otherwise interesting and relevant answer. SteveBaker (talk) 13:07, 6 August 2010 (UTC)[reply]
I put a smiley on there! It was a joke, and like the best jokes it has an element of truth - he really could do that, and I bet people would buy it (but not to eat, to use in their own underarms). But it would be very difficult, many bacteria are hard to culture out of their home environments. Ariel. (talk) 19:35, 6 August 2010 (UTC)[reply]

oil paint

why is oil paint in ny illegal —Preceding unsigned comment added by Tomjohnson357 (talkcontribs) 06:45, 6 August 2010 (UTC)[reply]

Reasons include toxicity and odor of the fumes, as well as potential ozone-layer-damaging chemicals. But frankly, this is proving a pain in the ass to find good information on; my google searches are only turning up blogs and forums. Someguy1221 (talk) 08:42, 6 August 2010 (UTC)[reply]
Several states in that area banned oil paint in 2005 (the kind used for walls, etc.; I'm not sure if the ban also applies to artists' materials). See for example [31] DMacks (talk) 08:48, 6 August 2010 (UTC)[reply]
No, I believe artists' oil paint is still available. Bus stop (talk) 11:08, 6 August 2010 (UTC)[reply]

Professional paints are likely alkyd-based. In many countries they are indeed banned or discouraged because of the toxic fumes, to be replaced by water-based paints; see Latex#Uses_of_latex. --VanBurenen (talk) 11:19, 6 August 2010 (UTC)[reply]

Disposal of the stuff is very difficult - it clogs drains alarmingly easily and it's pretty toxic too. There are plenty of modern, water-based, paints that do a comparably good job and are much less harmful both to the environment and the city infrastructure. SteveBaker (talk) 12:54, 6 August 2010 (UTC)[reply]

Type of engineering degree

Help me please :) Chemical engineering or mechanical engineering at university - which one gives better career prospects? I would like a job involving foreign travel. Thanks! 86.144.112.57 (talk) 11:57, 6 August 2010 (UTC)[reply]

Both career tracks have reasonably good prospects over so many broad fields that it's hard to answer your question meaningfully. Though, with the decline of the auto-industry, there's been a bit of a deflationary trend in mechanical engineering, compared to other disciplines. The U.S. Bureau of Labor Statistics introduction to Engineers has some useful links and you can find hard numbers for number-of-hires across disciplines. (But, analyzing those kinds of statistics is not as straightforward as counting the number of new-hires). Also, whether these current trends will be relevant over the timescales of your entire career is all speculation. Chemical engineers can work in all kinds of fields; if you work in upstream petroleum, your prospects of foreign travel are very high; but chemical engineers more often find careers in refining and petrochemical companies. Chemical engineers can also work in materials, biological/medical/pharmaceutical industries, and like all engineers, can cross discipline-boundaries depending on individual specializations. Nimur (talk) 13:51, 6 August 2010 (UTC)[reply]
I was already thinking of oil, actually. Just wasn't sure which subject (or perhaps a totally different one?) would be most useful in that industry. So you think chemical engineering is a better bet? My problem is I really don't know how oil exploration or drilling works in detail, so not sure what the most important skills to pick up would be. There's not much chance of a school trip to an offshore rig in the North Sea (I'm in England btw). 86.144.112.57 (talk) 14:57, 6 August 2010 (UTC)[reply]
You might be surprised the kinds of programs and educational opportunities available in the oil industry. Society of Exploration Geophysicists' Student Education Program is geared toward advanced undergraduate and graduate students (so, wait a year or two), but they can provide funding for travel and training. So can EAGE's Student program - both are international, but EAGE has a definite European concentration. As a chemical engineer working upstream, you can be a valuable asset - geochemistry, mud logging, and assaying all happen in the field. As a chemical engineer working in the refinery, you'd be focused on process control, efficiency, and the world's largest stoichiometry problem: balancing the carbon in and the carbon out, and making sure you have enough hydrogen atoms (and energy) for the reaction. Also consider if your university offers a course in petrochemistry or petroleum engineering. Nimur (talk) 16:05, 6 August 2010 (UTC)[reply]
The major aircraft manufacturer I used to work for hires a lot of mechanical engineers and is often involved in long-term multinational projects. For some engineers, there is a great deal of travel between international partners and/or suppliers, though not all engineers get to travel.
Of course, you shouldn't pick your degree based on which one could result in international travel. If you are good enough in your stay-at-home job, the rewards could lead to you being able to travel for leisure, and I think most people will tell you it is nicer to travel the world as a tourist, than going great places and only ever seeing the inside of an office. On the other hand, whenever I've travelled on business, I've made a point of getting out of the hotel and taking weekends off (working extra hard during the week makes that easier to do :-)). Astronaut (talk) 21:54, 6 August 2010 (UTC)[reply]

Chronobiology of sunsets

If we assume that most people find sunsets pleasing in some way, perhaps even beautiful, we should ask why. Could it be that the pleasure derived from watching a sunset leads to calm and relaxation, and helps us sleep? Do sunsets have light effects on circadian rhythm in mammals? Viriditas (talk) 12:25, 6 August 2010 (UTC)[reply]

I think you've probably nailed it already. To humans living out in the wilds with no electric light, etc - the setting of the sun would indicate the end of the main work for the day - a time to start unwinding and getting ready for sleep. Such cues to behavior are ingrained into our bodies more deeply than many of us realize. But I'm not aware (and was not able to easily find) any research on the subject. SteveBaker (talk) 12:52, 6 August 2010 (UTC)[reply]
But we also find sunrises beautiful. I think it's an unintended consequence. As primates, we have good colour vision and we find bright colours pleasing (usually) as part of our "programming" to find ripe fruit. Sunrises and sunsets are beautiful for the same reason rainbows and bright blue skies and strawberries are beautiful - because they are full of bright colour. Matt Deres (talk) 13:29, 6 August 2010 (UTC)[reply]
Not everything needs an evolutionary explanation. I'd venture the guess that sunsets, like many natural phenomena with a dramatic visual component, are actually aesthetically pleasing. Paul (Stansifer) 16:26, 6 August 2010 (UTC)[reply]
Actually, everything has an evolutionary explanation, whether you think it's needed or not. Even if the explanation is "Because it randomly occurred in my ancestor and any survival disadvantage to the trait was not very significant". Comet Tuttle (talk) 17:07, 6 August 2010 (UTC)[reply]
Which is a pointless tautology. An explanation like that is too generic to be of use to anyone and can safely be disregarded. Matt Deres (talk) 18:09, 6 August 2010 (UTC)[reply]
But why do we have a sense of aesthetics that is such that sunsets are aesthetically pleasing? --Tango (talk) 23:20, 6 August 2010 (UTC)[reply]

The article Aesthetics considers why we appreciate Beauty. Some sunsets are shown in the article Sunset and in wide varieties here. (OR) No printed photograph captures the brilliance and surprise value of some real sunsets, and they make one aware of the temporary and unique nature of what one is seeing. It's a lovely way to round off the day. Cuddlyable3 (talk) 19:55, 6 August 2010 (UTC)[reply]

See beauty of nature. ~AH1(TCU) 23:58, 6 August 2010 (UTC)[reply]

Surface tension

I'm a bit confused about surface tension. I know that surface tension should act parallel to the surface, but the diagram of water molecules on the surface tension page seems to suggest that the force is perpendicular to the surface. Can someone help me visualize why, from the molecular picture of a liquid, the force would be parallel? Thanks. Related question: If water is placed in a capillary, why will the surface tension pull it up?76.69.241.253 (talk) 14:01, 6 August 2010 (UTC)[reply]

Surface tension#Cause offers an explanation. When an isolated (from gravity) drop of water attains a spherical shape, no inward movement of the surface molecules is possible and only forces parallel to the surface ( = tangential to the sphere) are demonstrable (think of a balloon). Surface tension#Liquid in a vertical tube explains capillary action. Example: the attraction between water and glass molecules is greater than that between water and water molecules. That results in a contact angle other than 90 degrees between the water surface and the inside of a glass tube. In a thin tube or capillary, the water column is pulled up. Cuddlyable3 (talk) 19:39, 6 August 2010 (UTC)[reply]
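(For reference, the standard textbook expression for the equilibrium rise h of a liquid in a thin tube, Jurin's law, with surface tension γ, contact angle θ, liquid density ρ, gravitational acceleration g and tube radius r, is
 $$ h = \frac{2\gamma \cos\theta}{\rho g r} . $$
A contact angle below 90 degrees gives cos θ > 0 and the liquid rises; above 90 degrees, as for mercury in glass, the column is depressed instead.)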
Okay, I get the balloon analogy. But about the capillary action; I would think that the surface tension would pull the liquid inward, and stop the surface from rising... 70.52.44.90 (talk) 21:15, 6 August 2010 (UTC)[reply]
For some combinations of liquid and tube material, it does. Cuddlyable3 (talk) 21:33, 6 August 2010 (UTC)[reply]

I know, but I'm talking about in all cases (like, say, water and glass). The adhesive forces of the glass are stronger than the cohesive forces of the water, so the meniscus is U-shaped. But if the surface tension pulls the water inward, then wouldn't the water be drawn in and not rise? 70.52.44.90 (talk) 22:33, 6 August 2010 (UTC)[reply]

I got a new water heater

This refers back to this question. Since the Sears store where I bought it didn't have one, they ordered it online for me, and I believe it's model 32636 on this page.

The plumber who installed it said, after I specifically asked that the water not be too hot, that he turned it up a little because it was set real low. It is the perfect temperature, as it turns out, so it makes me wonder what is "real low" and how did they (whoever set it before the plumber) decide to set it there in the first place?Vchimpanzee · talk · contributions · 20:02, 6 August 2010 (UTC)[reply]

I think the defaults are often set to be safe for babies, who can very easily be scalded by hot water. Looie496 (talk) 20:14, 6 August 2010 (UTC)[reply]
Any idea what that temperature is? And what if someone has a dishwasher and needs really hot water?Vchimpanzee · talk · contributions · 20:19, 6 August 2010 (UTC)[reply]
Hot water heater#Water heater safety has some info about temperature settings. I don't know if there are local codes prescribing specific licensed-installation standards (probably...there is for damn near everything else). DMacks (talk) 20:35, 6 August 2010 (UTC)[reply]
I did read that before asking my original question. I did miss the part about dishwashers heating the water further. That makes sense.Vchimpanzee · talk · contributions · 20:38, 6 August 2010 (UTC)[reply]
Not all dishwashers do that; you have to check your model specifically. But the biggest problem with low water temperature is the growth of Legionella bacteria (the cause of Legionnaires' disease). Personally, I would not risk it. Ariel. (talk) 23:17, 6 August 2010 (UTC)[reply]

Voltaic pile question

Is this true? It says that zinc ions oxidize copper to copper(II), themselves being reduced to zinc metal. I thought it was the other way around (copper sulfate reacts with zinc). If it is wrong, then how does it work? Thanks. --Chemicalinterest (talk) 22:46, 6 August 2010 (UTC)[reply]

Sub-woofer orientation

Should the sub woofer port point up or down? If down, then what clearance does it need from the floor, and is this affected by the type of floor covering: carpet, lino, wood etc? Also, can other structures in the room detract from the sub woofer output level?--88.104.88.126 (talk) 23:25, 6 August 2010 (UTC)[reply]

It is best to point it at the listener; second best would be into the largest volume of air. So pointing it down does not sound like a good idea if the subwoofer sits on the floor, as it is more likely to vibrate the floor and produce other rattling and shaking noises. Graeme Bartlett (talk) 00:31, 7 August 2010 (UTC)[reply]

August 7

Katha from acacia catechu tree

This material, katha, is used in making paan masalas and for other medicinal purposes. For manufacturing katha, rooms are built where specified temperatures are maintained, and there are two rooms used to produce the final material. First room: material in liquid form (water content 60 percent) is brought in in aluminium/steel containers and stored for 10 days at 1.5 °C and 90 percent RH; the liquid gets thicker as the water content is removed by providing air circulation with refrigeration.

Second room: material from the first room is converted into biscuit form (water content 44 percent), brought in on aluminium/steel trays, stacked in racks and stored for 4 days at 7 °C and 65 percent RH; the water content is removed by providing air circulation with refrigeration.
  • QUESTIONS:
To find the refrigeration load, I need the following.
  1. What is the specific heat of katha before and after freezing?
  2. What is the freezing point of katha?
  3. What is the latent heat of fusion of katha? —Preceding unsigned comment added by Mgkhanduja (talkcontribs) 00:08, 7 August 2010 (UTC)[reply]
I am going to guess the answer, and guess that your substance is gum acacia. The specific heat will be largely due to the water, so it will be proportional to the fraction of water. If you are spending a lot of money on this, you may not want to rely on Wikipedia volunteers! The solidification point will be closely related to the water content, but it will freeze at close to 0 degrees if a large amount of water is present, because the molecular weight of the gum is high. Graeme Bartlett (talk) 00:39, 7 August 2010 (UTC)[reply]
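(A rough sketch of the mass-weighted estimate suggested above, not a measured value for katha; the figure of about 1.3 kJ/(kg·K) for the dry solids is an assumed value typical of dry plant matter. With water mass fraction x,
 $$ c \approx x\,c_{\text{water}} + (1 - x)\,c_{\text{solids}} \approx 0.60 \times 4.18 + 0.40 \times 1.3 \approx 3.0\ \text{kJ/(kg·K)} $$
at 60 percent water, and about 2.6 kJ/(kg·K) at 44 percent. The latent heat of fusion scales the same way with the water fraction, roughly x × 334 kJ/kg, and the freezing point will sit only a little below 0 °C.)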

Uphill running

At the risk of asking a stupid question, do rivers ever run uphill? Maybe there are some quirks of geography such that some rivers have points at a higher elevation further along their course? Stanstaple (talk) 00:34, 7 August 2010 (UTC)[reply]

I think only for a short distance of a few meters: if the water has built up a bit of speed, it can push up and over a bar across the river. Normally, if there were a point of higher elevation along the course, it would split the river into two streams flowing away from the high point. Graeme Bartlett (talk) 00:43, 7 August 2010 (UTC)[reply]

Drinking diethyl ether

The diethyl ether page mentions that peasants in Silesia used to drink it. It doesn't say very clearly how dangerous that is, and neither does the reference link. Would a shot glass of ether do an adult any lasting damage? 86.140.52.244 (talk) 00:44, 7 August 2010 (UTC)[reply]