
Wikipedia:Reference desk/Archives/Science/2015 October 21



Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


October 21


Regarding the reactivity of francium


Is francium more reactive than caesium? Has anyone really done a test to prove it? --The mediocre Wikipedian (talk to me.) 05:06, 21 October 2015 (UTC)[reply]

Have you tried to google for something like reactivity of francium? I see plenty of good reading on the subject. Vespine (talk) 05:35, 21 October 2015 (UTC)[reply]
Nobody has ever done an experiment to test this, according to the sources found on Google. --The mediocre Wikipedian (talk to me.) 06:21, 21 October 2015 (UTC)[reply]
It's not that no one tried to. You can't! It's so radioactive that it's virtually impossible to collect any, and if you somehow managed to do so, the radioactivity is enough to vaporize it instantly. But we know enough about chemistry and physics to make a really good guess at what it would do based on the atomic structure. Ariel. (talk) 07:39, 21 October 2015 (UTC)[reply]
Yes I definitely agree that currently such an experiment is impossible. My focus, however, is on the logic behind the prediction. Many scientists claim that francium is less reactive than caesium. But why? Is it about the ionization potential or atomic structure as you've mentioned? --The mediocre Wikipedian (talk to me.) 07:53, 21 October 2015 (UTC)[reply]
The logic is based on electronegativity, but you raise an interesting question because the article on francium seems contradictory to me. On the one hand it says "It is the second-least electronegative element, behind only cesium", yet francium is 0.7 while cesium is 0.79, which is the opposite of what it says. It also says "Francium has a slightly higher ionization energy than caesium .... and this would imply that caesium is the less electronegative of the two". My best guess is that the value 0.7 is not correct, and no better value is available. Ariel. (talk) 08:13, 21 October 2015 (UTC)[reply]
Let me take an example from the periodic table. The electronegativities of potassium and rubidium are both 0.82, but experiments show that rubidium is more reactive than potassium. Why is that the case? And it seems unlikely that the value 0.7 is incorrect: the first five sources I found on a Google search all give 0.7 as the electronegativity of francium. --The mediocre Wikipedian (talk to me.) 08:49, 21 October 2015 (UTC)[reply]
0.7 is the original value predicted by Pauling, who also gave 0.7 to caesium: the caesium value has since been experimentally refined to 0.79, but for obvious reasons a similar experiment on francium has never been done. Hence there are good grounds to suspect that value, especially since theoretical calculations expect Fr to have a higher electronegativity than Cs. (This is because the 7s orbital is relativistically contracted: see relativistic quantum chemistry. The 7s electrons are actually orbiting closer to the nucleus than expected, and so are more difficult than expected to remove in chemical reactions. Hence Fr should be less reactive than Cs.) I haven't found a theoretical value for francium, but by eka-francium the value should be back up to Na's (about 0.93), so I strongly suspect that the electronegativity of Fr is closer to 0.85 than 0.7.
Thus Fr is probably less reactive than Cs, but no one appears to have ever proven this experimentally. Double sharp (talk) 10:17, 21 October 2015 (UTC)[reply]
I am rather curious about why the Pauling scale gives K and Rb the same electronegativity. Some other scales do give Rb a lower electronegativity as would be expected. Double sharp (talk) 10:31, 21 October 2015 (UTC)[reply]
The electronegativity of francium should range between 0.79 and 0.85. So caesium should be the most reactive element in the periodic table. --The mediocre Wikipedian (talk to me.) 11:50, 21 October 2015 (UTC)[reply]
Linus Pauling's work The Nature of the Chemical Bond is the landmark text for modern theories of chemical bonding; it is easily the most important textbook-length treatment of chemical bonding produced during the 20th century, and arguably the most important chemistry textbook of any topic produced in the past century. I'm pretty sure he lays out his methodology and the rationale behind his electronegativity scale in some detail. The text is readily available in (I'm fairly confident) nearly ANY university library, probably many public libraries, and online. You can easily look it up. --Jayron32 12:18, 21 October 2015 (UTC)[reply]
I looked it up. Indeed he does lay out his methodology and rationale, and gives an example for Be where he calculates its electronegativity from the (already calculated) electronegativities of Cl, Br, I, and S and the enthalpies of formation of BeCl2, BeBr2, BeI2, and BeS from the elements in their standard states. But this just pushes back the question on the specific case of K and Rb to those values. Double sharp (talk) 14:46, 22 October 2015 (UTC)[reply]
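To make the method concrete, here is a minimal sketch of Pauling's bond-energy relation as described above: the electronegativity difference is proportional to the square root of the "extra" stability of the heteronuclear bond over the mean of the homonuclear ones. The bond dissociation energies below are standard textbook values for H2, Cl2, and HCl, not figures from this thread:

```python
import math

def pauling_difference(e_ab, e_aa, e_bb):
    """Pauling's relation: |chi_A - chi_B| = 0.102 * sqrt(Delta), with Delta
    the excess A-B bond energy (kJ/mol) over the mean of A-A and B-B."""
    delta = e_ab - (e_aa + e_bb) / 2
    return 0.102 * math.sqrt(delta)

# Textbook bond dissociation energies (kJ/mol): H-H = 436, Cl-Cl = 242, H-Cl = 431.
diff = pauling_difference(431, 436, 242)
print(f"chi(Cl) - chi(H) ~ {diff:.2f}")  # ~0.98; Pauling's table gives 3.16 - 2.20 = 0.96
```

Run the same procedure on K and Rb halide data and, if the enthalpies are nearly identical, the derived electronegativities come out nearly identical too, which is the point at issue here.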
Well, the answer would be then, he did that procedure for those elements, and got those numbers. That the number came out the same is because those numbers are the product of the procedure he used to calculate them. I'm not sure what else needs to be said on the matter. --Jayron32 14:51, 22 October 2015 (UTC)[reply]
Yes, that certainly resolves why the electronegativities of K and Rb come out the same. It does raise the question about why the enthalpies of formation for K and Rb compounds are so similar, though. Double sharp (talk) 07:20, 25 October 2015 (UTC)[reply]

Alien life


Which planet/satellite in the solar system is most suitable for alien life? Yubarajsin30 (talk) 11:10, 21 October 2015 (UTC)[reply]

It depends on your definition of alien, but probably the Earth.--Phil Holmes (talk) 11:22, 21 October 2015 (UTC)[reply]
(ec) Earth. It currently has around 7 billion aliens living on it. -- Jack of Oz [pleasantries] 11:23, 21 October 2015 (UTC)[reply]
I favor Europa, because it has a massive subsurface ocean of liquid water plus tectonic activity. The next candidate would be Titan, but life there would have to be very different from on Earth. Looie496 (talk) 11:39, 21 October 2015 (UTC)[reply]
Assuming you mean apart from Earth, the Google search most suitable for alien life in the Solar System will quickly give you some reading, including Extraterrestrial life#Planetary habitability in the Solar System. Note that some sources prioritize life we may have the best chance of finding. PrimeHunter (talk) 11:47, 21 October 2015 (UTC)[reply]
This is really two questions that ought to be separated:
  1. Which planet/moon is most likely to have allowed the abiogenesis and subsequent evolution of life?
  2. Which planet/moon is the most likely to have alien life on it right now?
The answer to the first question should probably be Mars - because we know that it once had oceans and a somewhat reasonable atmosphere...but now that those oceans have vanished and the atmosphere is very thin, it's much less likely that life can still be clinging on there. But we still don't have a good idea of what process (if any) caused the initial abiogenesis event here on earth - so we don't know what set of conditions best suits that...which means that all we can say is what conditions life needs right now (liquid water, an energy supply, etc). But we don't know whether (for example) you need something like our moon, creating tides, to start life running in the first place - or whether life just managed to start despite those tides. Even when Mars had water, it didn't have significant tides - Europa, on the other hand, has plenty of tides...perhaps even an excess of them. Not knowing whether tides are important or hazardous makes the choice of Mars or Europa as 'best abiogenesis site' a difficult one.
The answer to the second question is likely to be Europa - because it's fairly certain that it still has an ocean under the ice and some reasonable energy sources, where Mars has turned into a fairly hostile place for life. But we don't know if the conditions for the initial creation of life are sufficient there - not because we don't know enough about Europa - but because we don't know enough about those necessary initial conditions. It's very possible that all we find on Mars are the long-dead fossils of life that was around when there were oceans.
I suppose that a third question would be to ask what constitutes "alien" life? We know that certain classes of extremophile bacteria can survive in space for considerable time - and we know that rocks ejected from one place can wind up on another. It might easily be that the solar system is a hot-bed of panspermia events and that life on Earth, Mars, Europa and elsewhere actually evolved from a common source - and whatever life is out there is related to life here on Earth. Heck, it's quite possible that all life on Earth is descended from a chunk of ice ejected from Europa by a meteor impact. We might easily be Europans or Martians - and we might be the aliens you're asking about!
SteveBaker (talk) 12:34, 21 October 2015 (UTC)[reply]
We do have some good guesses on how life can get started without panspermia, detailed at abiogenesis. Also, tardigrades are one of the most complex, multicellular life forms that can survive the vacuum of space pretty well. They can survive without food or water for at least 10 years in diapause, but they aren't technically considered extremophiles. SemanticMantis (talk) 13:27, 21 October 2015 (UTC)[reply]
Yes - we have guesses - but we have more than one set of guesses and no actual "truth". Furthermore, even if we figure out how (indeed 'if') abiogenesis happened on Earth, we won't really know whether other mechanisms are possible under different starting conditions. So I don't think we can conclusively state that abiogenesis could happen here but not there with any degree of confidence - which makes this question difficult to answer with any degree of confidence. SteveBaker (talk) 15:29, 21 October 2015 (UTC)[reply]
"Even when Mars had water, it didn't have significant tides . . . ." I question this. On Earth, the tidal force due to the Sun is, I believe, around 46% of that due to the Moon (hence Spring and Neap tides differing), so the Solar tidal force on Mars, while smaller than that on the Earth, seems likely to be of some significance (though I confess to not doing the maths.) {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 14:11, 21 October 2015 (UTC)[reply]
Mars is twice as far from the Sun as Earth is, and since the law of gravitation is an inverse square law, it means the tides due to the Sun should be on the order of 1/4th as strong as on Earth; the moons of Mars are insignificantly small, so they don't really contribute to any tidal effects. Also, looking at Earth_tide#Tidal_constituents, and focusing on the semi-diurnal contributions, it appears the Sun's contribution to vertical displacement is about 1/4th of the total tidal effects. That means that the tide on Mars contributed by the Sun (the primary component) would be about 1/16th of the total tides currently on Earth. So, significantly smaller, but probably not nothing. --Jayron32 15:17, 21 October 2015 (UTC)[reply]
Thanks...yes, I should probably have explained that! It's actually even less than 1/16th because tidal forces depend on the DIFFERENCE in gravitation on the two sides of the planet - and Mars is significantly smaller than Earth. SteveBaker (talk) 15:24, 21 October 2015 (UTC)[reply]
On the other hand, the lower gravity means that there is less opposing force. Then again, Mars still rotates at the same speed as Earth, so do the tides have a chance to move? But then consider, the water would move a shorter distance... it was around here I decided I wanted a real geologist, areologist, wait no hydrologist (or is it a meteorologist?), to do simulations and publish stuff before I'd believe it, so I searched out arxiv tides mars and came up with [1] [2] (omitting some others about a weird atmospheric tide or Phobos turning into a Saturn-like ring). Now if only I had read them, I might have an answer, but my gut feeling is I still don't. :) :( Wnt (talk) 15:33, 21 October 2015 (UTC)[reply]
While I suspect that the amount of the tide is not significant, only the fact that there is one, this is the Science Desk, so let's get the calculation right. First, as Steve says, tidal force depends on the difference in gravitation, and therefore it is proportional to the first derivative of gravitational force, so it varies as the inverse cube of the distance. Second, Mars is not twice as far from the Sun as Earth, but 1.524 times (at least, that's the ratio of the semi-major axes; of course the actual distances vary since the orbits are not circular). And 1.524³ = 3.54, so Jayron's number 1/4 in his first sentence was actually about correct, albeit for the wrong reason. Third, the whole diameter of the planet may not be significant, certainly not if we're talking about tidal motion within a body of water that does not surround the planet like our ocean. --174.88.134.156 (talk) 18:16, 21 October 2015 (UTC)[reply]
  • Consider the units. The first derivative of acceleration over distance is not itself an acceleration (it has units of inverse square time); to get back an acceleration you need to multiply by a distance, in this case the radius of the planet. To put that another way, we're interested not in a tidal field but in the difference between the acceleration due to solar gravity of a specific bit of water (at the subsolar point or its antipode) and the a.d.t.s.g. of the solid planet as a whole, which is proportional to the radius of the planet. So: 1/3.54 for the orbital distance, about 1/2 for the size of Mars relative to Earth, that gives us 1/7. As Wnt suggested, let's multiply that by 8/3 for the difference in local gravity, and we get 8/21. Solar tide on Earth is about one-third as strong as lunar tide, so a solar tide on Mars would have, I guess, about 1/8 as much effect as Earth's average aggregate tides. —Tamfang (talk) 22:12, 21 October 2015 (UTC)[reply]
Erm: 46% != one-third: it's about 1/3 of the total, not the lunar tide. Probably doesn't make a great deal of difference, though. Possibly relevant is that the current topography suggests that the whole of Mars' southern hemisphere could once have been ocean-covered: that sort of fetch could have resulted in some impressive swell and breakers. {The poster formerly known as 87.81.230.195} 185.74.232.130 (talk) 15:36, 23 October 2015 (UTC) [reply]
For 'southern' read 'northern' —Tamfang (talk) 04:08, 25 October 2015 (UTC)[reply]
  • You're agreeing with the value of 1/3.54 that I computed, but disagreeing (based on dimensional algebra) with my reason for computing it? That's a bit confusing. When I referred to the derivative of gravitational force, I was talking about a statement of proportionality; dimensional algebra is irrelevant because what we want is the ratio of two such quantities, and that's dimensionless. --174.88.134.156 (talk) 07:06, 22 October 2015 (UTC)[reply]
I agree that 1/3.54 is a necessary part of the computation, and disagree with stopping there. —Tamfang (talk) 05:16, 23 October 2015 (UTC)[reply]
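Putting the numbers from this exchange in one place: the peak tidal acceleration at a planet's surface from a body of mass M at distance d scales as 2GM·R/d³, with R the planet's radius. A quick sketch of the Mars/Earth ratio for the solar tide; the 1.524 AU figure is from the thread, while the planetary radii and constants are standard values added here:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
AU = 1.496e11      # m

def solar_tidal_accel(radius_m, dist_m):
    """Peak tidal acceleration at the surface: ~2*G*M_SUN*R/d^3."""
    return 2 * G * M_SUN * radius_m / dist_m**3

earth = solar_tidal_accel(6.371e6, 1.000 * AU)
mars = solar_tidal_accel(3.390e6, 1.524 * AU)
print(f"Mars/Earth solar tide ratio: {mars / earth:.2f}")  # ~0.15, i.e. about 1/7
```

This reproduces the 1/7 figure above, before the further adjustments for local gravity and the lunar/solar split.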
Addressing the original question, the NASA Astrobiology Institute hosts an FAQ webpage which covers a lot of these topics.
If you review the 2015 NASA Astrobiology Strategy (published this month!), you can see that we have many unanswered questions about our own solar system, and stars beyond. Top priorities for scientists today include detection and identification of abiotic organic compounds (which means developing new instruments and new scientific procedures); and identification and analysis of the physical environs that make this kind of development possible.
We still have a lot of basic science to perform before NASA will commit to saying something like "Star ___" or "Planet ___" or "Moon ___" is a top candidate for extraterrestrial life. We don't know what extraterrestrial life will be like, so we don't know if a specific location in or outside our solar system is suitable for it.
Some specialized research focuses on the physical conditions that enable liquid water; other researchers study the chemical makeup of stellar atmospheres hunting for good mixtures that might indicate Earth-like planets; some researchers study our own solar system looking for hydrocarbons and complex macromolecules. So far, each of these avenues points to different places - we don't know of any extraterrestrial planet where all of the physical and chemical factors coincide to make Earth-like life possible.
Chapter 5(iii) of the NASA report I linked is all about the top candidates within our own solar system. These research avenues are broken into conceptual groups: investigations of Earth-analogous environments; surface and subsurface environments; and icy worlds. Mars, Europa, and Enceladus are called out as specifically interesting because they apparently have, or previously had, liquid water.
Nimur (talk) 16:05, 21 October 2015 (UTC)[reply]
ALL THESE WORLDS ARE YOURS - EXCEPT EUROPA. ATTEMPT NO LANDINGS THERE. Justin15w (talk) 18:17, 21 October 2015 (UTC)[reply]
Clarke is overrated. And this movie wasn't his best. And while Europa may have water, Enceladus is spewing it out into space, which means maybe some human spermatozoan can actually get in there to spread his nasty invasive genomes into its pristine waters. Wnt (talk) 18:24, 21 October 2015 (UTC)[reply]

if all polar ice melts, what is the increase in sea level


OP is thinking of a global warming doomsday. Mahfuzur rahman shourov (talk) 15:23, 21 October 2015 (UTC)[reply]

Maybe OP should consider using a search engine like Google or Bing rather than just thinking? I did and found the answer in less time than it took to write this. Nil Einne (talk) 15:40, 21 October 2015 (UTC)[reply]
About 58 meters, according to Global warming#Sea level rise, although other sources give different values. John M Baker (talk) 18:53, 21 October 2015 (UTC)[reply]
In human units, that is 190 feet, although a 200 foot rise is what I am used to hearing of. μηδείς (talk) 02:39, 22 October 2015 (UTC)[reply]
Note that per our article and the cited source, the quoted figure is only for melting of the Antarctic ice caps/sheets, not the requested scenario of all polar ice caps melting. While much of the Arctic ice is floating, some, like that in Greenland, is on land as per our article. Although now that I recall, most figures I saw were for all ice melting rather than just what the OP may call polar ice caps. I'm also not sure why the OP is only interested in polar ice caps (not my area of expertise, but I presume there aren't many scenarios where you'll get melting of all polar ice caps but no other land ice), and I'm not sure how significant non-polar ice is to the sea level rise. It's possible any contribution from the latter is dwarfed by uncertainty in the estimates. However, the figures I saw were higher than 58 metres. I believe it's fairly unlikely that all Antarctic ice sheets will melt without Greenland ice also melting; from a quick look, I think the research was primarily intended to estimate how the Antarctic ice may melt under various circumstances, rather than to estimate all sea level rises under such scenarios. (The Antarctic ice sheet is the biggest possible contributor to sea level rises, but I think melting of the sheet in its entirety is considered probably the least likely scenario. So you'll need to include it, but you can't forget the other smaller contributors, which will happen first.) Nil Einne (talk) 22:06, 25 October 2015 (UTC)[reply]
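As a back-of-envelope check on figures like 58 metres: divide the water-equivalent volume of grounded ice by the ocean's surface area. A rough sketch with approximate published volumes supplied here (not from the thread); it ignores ice already sitting below sea level and the expansion of the flooded area, so it overshoots the refined estimates somewhat:

```python
ANTARCTIC_ICE = 26.5e6   # km^3 of grounded ice, approximate
GREENLAND_ICE = 2.9e6    # km^3 of grounded ice, approximate
ICE_TO_WATER = 0.92      # ice is less dense than liquid water
OCEAN_AREA = 3.61e8      # km^2

def rise_m(ice_km3):
    """Naive sea-level rise in metres if this ice melts into the ocean."""
    return ice_km3 * ICE_TO_WATER / OCEAN_AREA * 1000

print(f"Antarctica: ~{rise_m(ANTARCTIC_ICE):.0f} m")  # ~68 m (refined estimate: ~58 m)
print(f"Greenland:  ~{rise_m(GREENLAND_ICE):.0f} m")  # ~7 m
```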

What limits EPIC?


The Deep Space Climate Observatory will send back "at least 12" images of the Earth daily. As a result, the rotation of the Earth seems choppy in their animation, and it is hard to watch how the weather systems evolve, etc. Also, the article says that the camera has 2048x2048 resolution, but is averaged down to 1024x1024 "on-board" before sending the images, so this sounds like a limit on transmission.

Now, one limit is that there are actually 10 channels being sent back (multispectral imaging), not 3, for the routine images. Yet the section about the transit of the Moon says that one channel's image can be taken in 30 seconds; naively, I would think a full set of 10 could be taken in 300 seconds/5 minutes from that. But judging by the animation, there seem to be about 13 frames/90 degrees (roughly), so that would be maybe 52 frames/24 hours, or one frame per roughly 30 minutes... so there's something hokey in my calculation there, even before we wonder if some of the channels could be omitted during the transit.

So... is the 30 second figure just wrong? Or is the difference a matter of competing bandwidth from other instruments? Is there a limit on long-term power supply vs. peak power for transmission? Or am I just totally confused? Wnt (talk) 16:17, 21 October 2015 (UTC)[reply]
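For reference, the arithmetic in the question spelled out; the 30-second exposure and the 13-frames-per-90° estimate are taken from the post above, and everything else follows from them:

```python
# Exposure-limited cadence: 10 channels at ~30 s each per full image set.
exposure_per_set_s = 10 * 30
sets_per_day = 24 * 3600 / exposure_per_set_s
print(f"Exposure-limited: {sets_per_day:.0f} sets/day")  # 288

# Cadence implied by the animation: ~13 frames per 90 degrees of rotation.
frames_per_day = 13 * 4
minutes_per_frame = 24 * 60 / frames_per_day
print(f"Observed: {frames_per_day} frames/day, one per ~{minutes_per_frame:.0f} min")  # 52, ~28 min
```

The factor-of-five gap between 288 and 52 is the discrepancy the question is asking about.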

I was going to ask the same thing. It certainly seems as if it should be able to send back more data than that. Perhaps most of their communications bandwidth is dedicated to other things, and whatever is left over is what they use for the images ? Or maybe the camera is used for other things most of the time (like zoomed into a particular location) ? I suppose a third possibility is that it has limited energy. Broadcasting to Earth from that distance presumably takes more energy than from lower orbit, and if it runs off a small solar panel, it may not be able to send images continuously. Still, our spaceship at Pluto manages to find the energy to send plenty of pics back, even at that distance, and it was launched many years ago. Presumably that one is nuclear-powered though, which may make all the difference.
So, yes, I'd also like to know exactly why there is such a low limit on the number and res of pics sent back to Earth from EPIC. StuRat (talk) 19:26, 21 October 2015 (UTC)[reply]
There is always normal housekeeping data that must be sent back and forth to keep the spacecraft running, which consumes some of the time available for data transfer. In order to send data error-free, it is split up into packets. To transfer an image file, packet size may be, say, 1.5 kB. The first transmission contains this 1.5 kB packet of data. If it is received error-free, the ground station sends an acknowledgement (ACK), otherwise it sends a reject (REJ). If the ACK is received, the next 1.5 kB packet is sent. A frame check sequence, which is a type of checksum and is usually 16 bits (2 bytes), is appended to each packet so the ground station can calculate whether the packet has been received error-free. A single bit error in either the packet or the frame check sequence causes the ground station to reject the packet. That is an error of 1 bit in 12,016 in the 1.5 kB example given here.
Why does the data take so long? Well, there are timeout factors here. After sending each packet, the spacecraft must wait a reasonable time for it to reach Earth, be processed by the ground station, and for an ACK or REJ or mutilated ACK or REJ to return. If the timeout expires with a valid ACK not received, the packet must be retransmitted. Even if every packet were to be sent and received error-free each time, the process of sending the image would still take far longer than it would if the spacecraft simply blurted out the entire image without pause. Note that I have used 1.5 kB as an example packet size because it is commonly used in terrestrial links. There is a tradeoff between packet size and the quality of the radio link. Where the latter is exceptionally noise-free, large packet size transfers the file most efficiently. When the radio link is subject to interference, small packet size "gets the data through" between noise bursts, and is therefore most efficient. Terrestrial data networks usually adjust packet size and timeout periods dynamically, adjusting to conditions as they arise. Akld guy (talk) 00:36, 22 October 2015 (UTC)[reply]
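To quantify the delay described above: with one packet outstanding at a time ("stop-and-wait"), effective throughput is the packet size divided by transmit time plus round-trip time. A sketch using the 1.5 kB packet from the example, the 10-second round trip, and the ~140 kbps downlink quoted later in this thread (the combination is illustrative, not DSCOVR's actual protocol):

```python
PACKET_BITS = 1500 * 8 + 16  # 1.5 kB payload plus a 16-bit frame check sequence
LINK_BPS = 140_000           # downlink bitrate (figure quoted later in the thread)
RTT_S = 10.0                 # ~5 light-seconds each way

transmit_s = PACKET_BITS / LINK_BPS
throughput_bps = PACKET_BITS / (transmit_s + RTT_S)
print(f"Stop-and-wait throughput: ~{throughput_bps:.0f} bit/s")  # ~1200 bit/s

image_bits = 1024 * 1024 * 12  # one 12-bit 1k x 1k channel, uncompressed
print(f"Hours per channel image: {image_bits / throughput_bps / 3600:.1f}")  # ~2.9
```

At that rate the link would sit idle more than 99% of the time, which is why real deep-space links keep many frames in flight and only report back the missing ones, as discussed further below.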
I don't know... that reminds me of xmodem, which I recall as being pretty efficient, save only for the fact it was a modem. Wnt (talk) 15:58, 22 October 2015 (UTC)[reply]
I don't understand your response. Your original question asked why the quality of the images taken by Deep Space Climate Observatory needed to be reduced from 2048x2048 pixels to 1024x1024 prior to sending and why the images could not be sent more rapidly. The Deep Space Climate Observatory article states, "...and the camera will produce 2048x2048 pixel images, but to increase number of downloadable images to 10 per hour the resolution will be averaged to 1024x1024 on board." I interpreted the second part of your question about rapidity as a question as to why only 10 images per hour. I assumed you would realise that because the spacecraft is more than 900,000 miles from Earth, radio signals take 5 seconds each way. This means that whenever the spacecraft sends a packet, it must wait a minimum of 10 seconds for the packet to travel to Earth, be processed, and for the ground station's ACK or REJ to return. It's a slow process because of the distance travelled. Far longer than if the spacecraft simply sent the entire image in one transmission, but that's the price paid for receiving the image error-free. Akld guy (talk) 19:19, 22 October 2015 (UTC)[reply]
Uplink:downlink bitrate ratio is 1:70, round-trip delay is 10 seconds, and there are multiple receivers because the earth rotates. Not exactly ideal for a handshaking protocol. I'm not even sure if all the receivers transmit as well. The CCSDS has published recommended practices for space communication, TM Channel Coding Profiles, TM Synchronization and Channel Coding, etc. You don't necessarily want retransmission of every corrupted frame. For some time-critical data, like from the Plasma-Mag, retransmission may be pointless: space weather is like earth weather, you want the most recent (geomagnetic) storm data. The non-time-critical data doesn't require immediate retransmission; the system has 2.6 Gbit storage, enough for 11 hours of data (at 1k*1k resolution). Sending a list of missing frames every two hours, for example, is much more efficient than waiting for ACKs. Adaptive encoding, on the other hand, would require feedback about the signal quality (although, with the Plasma-Mag data..). But this is all speculation; one single source would tell us all, if only I could find one!!
The only decent report I found is 15 years old, prepared before the instruments were even finished. Quite a few pictures are missing, so it's not the final version, and there may be errors in some of the figures given (see below). The report lists higher specifications than the current system, which I can't explain (more data transfer despite the same downlink bitrate); maybe it's only a preliminary draft. (NAS.Triana.report.12.99.pdf):
According to that report:
  • 3 instruments: EPIC (images); 3 RGB channels taken every 15 minutes, all 10 channels every hour. NISTAR measures continuously in 4 wavelength bands. Plasma-mag: 1 sample per second transmitted to earth.
  • Transmitter: downlink 100-140 kbps (which gives a maximum of 1.5 GB per day).
  • Receiver: uplink 2kbps.
  • Data received is split in three groups: Telemetry data, time-critical science/image data and delayed science data.
  • Raw data NISTAR: 25 MB per day, Plasma-Mag: a few MB per day (1GB/y).
  • EPIC, processed data: Level 2 EPIC data (that's the data that is available to level-2 "customers"): "hourly frames of 4 million earth-located radiances and ancillary data related to orbit and earth geometry." The numbers given are a bit strange: 4k*4k*12/8*24= 0.6GB per radiance channel per day. 24 hours and 12 bit (or 12/8 byte) depth, but why 4k*4k instead of 2k*2k? And what about the visible channels every 15 minutes? The total data (10 channels) would be 6GB per day, at least 4 times the available downlink bitrate, seems much, even with compression. Or we assume that the unfinished report is simply wrong and do the calculation for 2k*2k pixels, 10 channels and 3*96+7*24 images per day: result = 2.7 GB/day.
That's still much more than the current specifications: Source 22 in our article states: To increase the downlink cadence of retrieved 10-channel image sets, the resolution will be averaged on board to 1024 x 1024 pixels.
Another source gives the same figures but a different reason: 2 by 2-pixel binning is employed, meaning that four active pixels are combined to one in order to increase the signal to noise ratio and correct for camera jitter that will be on the order of one pixel. Data is available as 1024 by 1024 raw bitmap and 12-bit JPG image format.
Can't find current figures for total data or data rate. In 1999: a 100-page report. In 2015: flashy web pages with nice pictures, press kits, and few hard facts. The dumbing-down popularization of science...
At 1024 by 1024 the amount of raw uncompressed data for 7-channel 1/h and 3-channel 4/h is 0.7 GB, half the max downlink bitrate. No idea if the specified 100-140 kbps (some sources 135kbps) is the net bitrate (for the encoded signal) or the "goodput" (the actual data). Ssscienccce (talk) 20:21, 22 October 2015 (UTC)[reply]
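Pulling those figures together, a sketch of the raw data budget at the published cadence; all inputs (12-bit depth, 1024², 3 channels every 15 minutes plus 7 every hour, 135 kbps downlink) are as quoted above, and no compression or protocol overhead is assumed:

```python
BITS_PER_PIXEL = 12
PIXELS = 1024 * 1024
IMAGES_PER_DAY = 3 * 96 + 7 * 24  # 3 channels 4x/hour, 7 channels 1x/hour

raw_bits = BITS_PER_PIXEL * PIXELS * IMAGES_PER_DAY
downlink_bits = 135_000 * 86_400  # 135 kbps over a full day

print(f"Raw images:  {raw_bits / 8 / 1e9:.2f} GB/day")       # ~0.72 GB
print(f"Downlink:    {downlink_bits / 8 / 1e9:.2f} GB/day")  # ~1.46 GB
print(f"Utilisation: {raw_bits / downlink_bits:.0%}")        # ~49%
```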
If error checking is the slow-down, then I would suggest they skip it when sending pics from the satellite to Earth. If an occasional pixel is bad, that's OK. Far better to have more frames than put so much effort into avoiding bad pixels. Also, some type of post-processing on Earth might be able to remove bad pixels, like if a red pixel is found in the middle of a bunch of blue pixels, remove it and replace it with the average color of the surrounding pixels. StuRat (talk) 21:14, 23 October 2015 (UTC)[reply]
No, no, no. Absolutely not. You don't go to the trouble of taking high quality images and then ignore bad data during transmission. And if you make a guess that a red pixel should be present in the midst of a bunch of blue pixels...well you're meddling with the original image and you then can't claim that the image is a true depiction. Is that red pixel a glitch, or is it some feature on the planet that's really there? Sorry but no! Error detection and correction is the only way to go. Akld guy (talk) 22:08, 23 October 2015 (UTC)[reply]
If you can send two pics without error checking in the same time it takes one pic with error checks, and 99.99% of the pixels are good, on average, then you are able to send almost twice as many good pixels with the error checking turned off. The effect is very similar to lossy compression, which allows more frames to be sent, at only a very slightly lower quality, than without any compression. And if the goal is to convey a sense of motion, then more frames are vitally important, and a few bad pixels don't matter. StuRat (talk) 05:15, 24 October 2015 (UTC)[reply]
You're assuming that 99.99% of the received pixels are good. Without error checking, how can that figure even be determined? So even in your scenario, the error checking must be built-in from the start. I can't imagine the engineers saying, "Well we hit 99% yesterday, so let's just turn off error checking and hope it'll stay that way." You're also assuming that the only function of the imaging is to display moving images to a worldwide internet and TV audience. I would suggest that there are military and weather entities as well as NASA that are keenly interested in high definition static images where every single pixel must be true. They cannot simply ignore a few bad ones. Akld guy (talk) 08:24, 24 October 2015 (UTC)[reply]
All this debate is supposition. I remember admiring the design of error correcting codes on 5 1/4" floppy disks. I imagine top-of-the-line NASA probes are built to something of a higher standard. :) Wnt (talk) 15:46, 24 October 2015 (UTC)[reply]
Yes, the error correcting that I described above is a very basic illustration for those who didn't appreciate that error checking over a 10 second round trip causes significant delay. I put it in the simplest terms to illustrate. There are very much more sophisticated systems that not only report a received error, but identify where in the packet it occurred so that only the bad segment and not the entire packet needs to be repeated. We aren't, or shouldn't be, discussing this as academics among ourselves to the highest level, but writing in the style of "Data Transmission for Dummies" so that those with no prior inkling can grasp the principles. Akld guy (talk) 18:44, 24 October 2015 (UTC)[reply]
If for some reason it was critically important to get every pixel perfect, then perhaps with such a long communication delay it might make more sense to send each image 3 times, and for every pixel take whichever value 2 out of 3 images agree on. (If all 3 are different you would have to do something else, like average them, or, if you still think any imperfect pixel is cause for panic, it could keep sending images until it gets a "quorum" on every pixel.) StuRat (talk) 03:53, 25 October 2015 (UTC)[reply]
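A minimal sketch of the 2-of-3 voting proposed here; the pixel values and the average fallback are purely illustrative, not anything DSCOVR actually does:

```python
def vote(a, b, c):
    """Per-pixel majority vote; fall back to the average if all three differ."""
    if a == b or a == c:
        return a
    if b == c:
        return b
    return round((a + b + c) / 3)

# Three noisy copies of the same 4-pixel scanline.
copies = [[10, 200, 30, 40], [10, 23, 30, 41], [10, 23, 99, 42]]
merged = [vote(*pixels) for pixels in zip(*copies)]
print(merged)  # [10, 23, 30, 41]
```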
Sending the image three times is extremely wasteful of transmission power (battery budget). Sending until you reach a "quorum" still implies the implementation of an error-detect/ACK/REJect system to report whether "quorum" has been achieved or not. What you may be advocating is along the lines of a Forward Error Correction (FEC) system, which is used in some circumstances where it is disadvantageous or impossible to implement error correction by means of retransmission. I may be wrong but I think FEC does not result in a guaranteed absolute zero error rate, whereas the packetized transmission with ACKs and REJects with resends that I described above does. Akld guy (talk) 07:55, 25 October 2015 (UTC)[reply]
Yes, we get back to the issue that ensuring that absolutely no pixel is ever bad is extremely expensive in terms of data transmission rate when you have a 10 second communication delay. How are pics from Pluto handled ? It must be prohibitively expensive to send a packet from Pluto to Earth, wait for confirmation to arrive from Earth that there was no error, then send the next packet. I imagine this could be sped up by sending many packets at once then resending those that don't make it through, at a later time. However, I suspect they would just skip error checking altogether at such a distance, at least on pics, to increase data transmission rates. Of course, navigation data must be precise, so there error checking would be needed. StuRat (talk) 18:59, 25 October 2015 (UTC)[reply]
Sending multiple packets in one transmission is the next step up from what I've described above. My one-packet-at-a-time with error checking and reporting and 10 second round trip was meant to be a simple illustration for the OP and other readers who could not comprehend why so few images could be sent per hour. There may be other factors, but that 10 second delay in the error checking is certainly one factor. Akld guy (talk) 21:14, 25 October 2015 (UTC)[reply]

"Sprite" Measurements on the Columbia Space Shuttle


In Columbia's last flight in 2003, the astronauts measured lightning "sprites" seen on earth. However, when I search for the experiments all I can find are theories about the shuttle crashing due to sprites. Can anyone link me to references about the measurements? 79.178.236.248 (talk) 18:31, 21 October 2015 (UTC)[reply]

Sprite (lightning) is the Wikipedia article on the topic. That may help you find your information, perhaps by following references from there. --Jayron32 18:56, 21 October 2015 (UTC)[reply]
Also, the mission itself was STS-107, so that may help in your searches also. Besides the Wikipedia article itself being a starting point for further research (follow the references, perhaps) if you search for "STS-107", instead of merely "Columbia", that may help your searches some. --Jayron32 18:58, 21 October 2015 (UTC)[reply]
Freestar experiment, [3] -- Finlay McWalterTalk 21:21, 21 October 2015 (UTC)[reply]

how deep is your brain?


What is the graph diameter (directed) of a mammal's nervous system? I'll be happy with either a measurement or a well-supported estimate. —Tamfang (talk) 21:22, 21 October 2015 (UTC)[reply]

What are the parameters you are looking for? I think you want Directed graph as your link, but the main problem is you have not said what your graph is mapping. What are the nodes in your nervous system? Synapses? --Jayron32 23:15, 21 October 2015 (UTC)[reply]
Vice versa would work if each neuron had no more than two synapses. —Tamfang (talk) 05:18, 23 October 2015 (UTC)[reply]
  • I'm going to direct you to the Scholarpedia article on brain connectivity. Basically the story is that there has been some work on the graph structure of the cerebral cortex, but I'm not aware of anything for the nervous system as a whole, and I think that would be hard to get a handle on (and actually not very meaningful). Looie496 (talk) 12:04, 22 October 2015 (UTC)[reply]
Thanks, I'll look. —Tamfang (talk) 05:18, 23 October 2015 (UTC)[reply]
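For concreteness, the directed graph diameter being asked about is the longest finite shortest-path length over all ordered node pairs, computable by a breadth-first search from every node. A sketch on a made-up toy network, with neurons as nodes and synapses as directed edges (real connectome data would simply replace the dictionary):

```python
from collections import deque

def directed_diameter(adj):
    """Longest finite shortest-path length over all ordered node pairs."""
    best = 0
    for source in adj:
        dist = {source: 0}
        queue = deque([source])
        while queue:  # breadth-first search from this source
            node = queue.popleft()
            for nxt in adj[node]:
                if nxt not in dist:
                    dist[nxt] = dist[node] + 1
                    queue.append(nxt)
        best = max(best, max(dist.values()))
    return best

# Toy network: neurons A..E, synapses as directed edges.
synapses = {"A": ["B"], "B": ["C"], "C": ["D", "A"], "D": ["E"], "E": []}
print(directed_diameter(synapses))  # 4 (A -> B -> C -> D -> E)
```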