
Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by Thepurplelefant (talk | contribs) at 19:50, 18 March 2012 (Cominatorics question: new section). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

Welcome to the mathematics section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

March 12

Quantifying heterogeneity

I'm looking for a formal mathematical/statistical measurement of heterogeneity within a set. Basically, what I'm asking is, how does one mathematically/statistically quantify heterogeneity within a set? Wikipedia's article on homogeneity and heterogeneity only talks about the concepts in general terms and from an applied science point of view, whereas I'm looking for an exact mathematical/statistical measurement.

To further explain, imagine a bucket consisting of 100 colored balls. In ascending order of heterogeneity:

  • All 100 balls are blue. (Completely homogeneous; no heterogeneity.)
  • 90 balls are blue, and the other 10 are 10 other different colors.
  • 50 balls are blue, and the other 50 are 10 other different colors.
  • 10 different colors are represented, each color by 10 balls.
  • All 100 balls are different colors. (Completely heterogeneous; no homogeneity.)

While the ordering of the above is fairly intuitive, more complex situations lead to ambiguity. For instance:

  • 50 balls are blue, and the other 50 are 10 other different colors.
  • 5 different colors are represented, each color by 20 balls.

Which of these two is more "heterogeneous"? It is not clear: the former has greater color variation, but the latter has less concentration in one color. So I'm looking for some sort of way to mathematically measure heterogeneity within a set, both for sets with a finite number of elements (like the ball bucket examples above) and for sets with non-countable constituent parts (for instance, a tank filled with various gases, each of which makes up a certain percentage of the total mass or volume of the tank). I assume there must be such a measure in common usage, since this seems like a fairly basic kind of statistics problem?

SeekingAnswers (reply) 00:55, 12 March 2012 (UTC)[reply]

I think the answer is that there are different ways to measure it, depending on what you're looking for. StuRat (talk) 04:02, 12 March 2012 (UTC)[reply]
So, could you give some of those ways? —SeekingAnswers (reply) 05:44, 12 March 2012 (UTC)[reply]
Perhaps total number of colors or average number of balls of each color. Percentage of balls in the biggest group would be another. StuRat (talk) 05:59, 12 March 2012 (UTC)[reply]
Borrowing from statistical mechanics and the concept of entropy, one measure of heterogeneity is the number of possible microstates a configuration allows. For example, 100 blue balls permits exactly 1 microstate (all balls are blue), but the 100 balls of different colors would allow you to have 100! ≈ 9×10^157 different ways to arrange the balls. Measuring the number of possible arrangements can be extended to all of your examples, though doing the math may be complicated in practice. Dragons flight (talk) 06:34, 12 March 2012 (UTC)[reply]
Very clever, but isn't the formula a simple one, i.e. N!/(a!b!c!...), where N is the total number, and a, b, c, ... are the numbers of each individual colour? N! is the total number of permutations (microstates), and then you divide by the number of microstates for each macrostate. The numbers might get horrendous, but I'm sure there are ways of estimating the total. IBE (talk) 07:36, 12 March 2012 (UTC)[reply]
The measure of heterogeneity that you are looking for is the Shannon formula for entropy (information theory):
H = −∑_i p_i log₂ p_i bits, where p_i is the probability that a random ball has color no. i.
  • All 100 balls are blue. H = 0 bits.
  • 90 balls are blue, and the other 10 are 10 other different colors. H ≈ 0.80 bits.
  • 10 different colors are represented, each color by 10 balls. H = log₂ 10 ≈ 3.32 bits.
  • All 100 balls are different colors. H = log₂ 100 ≈ 6.64 bits.
Bo Jacoby (talk) 08:20, 12 March 2012 (UTC).[reply]
Information theory barnstar for hitting the nail right on the head :) Rschwieb (talk) 14:32, 12 March 2012 (UTC)[reply]
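The entropy measure suggested above is easy to experiment with. A small sketch (my addition, not part of the thread) computing H = −∑ pᵢ log₂ pᵢ for the bucket examples, including the two "ambiguous" distributions from the original question:

```python
from math import log2

def entropy(counts):
    """Shannon entropy in bits of a color distribution given as ball counts."""
    total = sum(counts)
    return -sum((c / total) * log2(c / total) for c in counts if c > 0)

# The bucket examples, in ascending order of heterogeneity:
print(entropy([100]))             # all blue: 0.0 bits
print(entropy([90] + [1] * 10))   # 90 blue + 10 singleton colors: ~0.80 bits
print(entropy([10] * 10))         # 10 colors x 10 balls: log2(10) ~ 3.32 bits
print(entropy([1] * 100))         # all distinct: log2(100) ~ 6.64 bits

# The two "ambiguous" cases: entropy ranks the first as more heterogeneous.
print(entropy([50] + [5] * 10))   # 50 blue + 10 colors of 5: ~2.66 bits
print(entropy([20] * 5))          # 5 colors of 20: log2(5) ~ 2.32 bits
```

By this measure the 50-blue bucket is the more heterogeneous of the two ambiguous cases, which illustrates that entropy weighs both the number of colors and the concentration in any one color.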

Need the answer of this question in algebra

how many integer solutions for y and x does the equation x^2 - y^2 = 91 have? I thought it was 4, but the answer key says it's 16. — Preceding unsigned comment added by Thepurplelefant (talkcontribs) 03:10, 12 March 2012 (UTC)[reply]

Which 4 did you find ? StuRat (talk) 03:12, 12 March 2012 (UTC)[reply]

(10,3),(10,-3),(-10,3),(-10,-3) — Preceding unsigned comment added by Thepurplelefant (talkcontribs) 03:14, 12 March 2012 (UTC)[reply]

There's another set of 4 answers:
(-46,-45),(-46,45),(46,-45),(46,45)
However, that only gives 8 answers. I suspect that they doubled that since there are 2 numbers in each answer, but they really should be more clear than that. StuRat (talk) 03:31, 12 March 2012 (UTC)[reply]
You can use x^2 - y^2 = (x+y)×(x-y) to show that these are the only 8 answers. PrimeHunter (talk) 03:33, 12 March 2012 (UTC)[reply]
To clarify what StuRat said, the answer key is probably counting (3,10) and (10,3) as distinct answers, which would double the count. Dragons flight (talk) 04:24, 12 March 2012 (UTC) Okay, I'm dumb... Dragons flight (talk) 05:18, 12 March 2012 (UTC)[reply]
No. 3² - 10² = -91, so that's not a correct solution. They seem to be counting the (10,3) as two answers, being 10 and 3, which is a weird way to put it. StuRat (talk) 04:56, 12 March 2012 (UTC)[reply]
x² − y² = (x + y)(x − y) = 91, so (x + y) and (x − y) must be factors of 91. That gives you eight systems of linear equations:

  • x + y = 91, x − y = 1, giving (x, y) = (46, 45)
  • x + y = 13, x − y = 7, giving (10, 3)
  • x + y = 7, x − y = 13, giving (10, −3)
  • x + y = 1, x − y = 91, giving (46, −45)
  • x + y = −1, x − y = −91, giving (−46, 45)
  • x + y = −7, x − y = −13, giving (−10, 3)
  • x + y = −13, x − y = −7, giving (−10, −3)
  • x + y = −91, x − y = −1, giving (−46, −45)
Widener (talk) 05:29, 12 March 2012 (UTC)[reply]
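The divisor-pair argument can be checked mechanically. A quick sketch (my addition, not from the thread) enumerating all integer solutions of x² − y² = 91:

```python
# x^2 - y^2 = (x+y)(x-y) = 91, so iterate over divisor pairs (a, b) with a*b = 91.
solutions = set()
for a in [1, 7, 13, 91, -1, -7, -13, -91]:
    b = 91 // a
    # x = (a+b)/2, y = (a-b)/2; both are integers since a and b are both odd.
    solutions.add(((a + b) // 2, (a - b) // 2))

print(sorted(solutions))
print(len(solutions))  # 8
```

This confirms the eight solutions found in the thread, so the answer key's "16" can only come from counting each coordinate separately.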

Extending theorem to uncountable sets

One can show easily that bounded countable subsets of ℝ have a supremum, given that all bounded nondecreasing sequences have a limit. I'm trying to think of a way to extend this to uncountable subsets. One way I've thought of is to show that you can take any bounded countable subset A of ℝ and add an uncountable number of reals to that set, and as long as each element added is less than the supremum of A, the supremum of A will also be the supremum of this new set. It is obviously important to show that all bounded subsets of ℝ can be generated in this way. I'm sure they can, but I'm not sure how to prove it. Given a bounded subset S ⊆ ℝ, one can find a countable subset A ⊆ S which has a supremum greater than or equal to every element in S, but why? Widener (talk) 06:00, 12 March 2012 (UTC)[reply]

The short answer is that ℝ is a hereditarily separable space. If you want to prove it directly, you can exploit the density of the rationals. Let q₁, q₂, … be a listing of the rationals. Fix some s ∈ S. Define aₙ to be an element of S bigger than qₙ, if there is such an element, and s otherwise. Let A be the collection of the aₙ. Now verify that A is as desired.--121.74.121.82 (talk) 06:38, 12 March 2012 (UTC)[reply]

Tensors

T_ij is a Cartesian tensor with respect to A and also a Cartesian tensor with respect to B. Show that it is a Cartesian tensor with respect to BA. (A and B are orthogonal, but that might not be relevant.)
This is what I did:

T′_ij = A_ik A_jl T_kl
T″_ij = B_ik B_jl T′_kl = B_ik B_jl A_km A_ln T_mn = (BA)_im (BA)_jn T_mn

But now that I think about it, this depends on T′_ij being a Cartesian tensor with respect to B, not T_ij. Widener (talk) 10:46, 12 March 2012 (UTC)[reply]

"depends on T′_ij being a Cartesian tensor with respect to B, not T_ij": Well, T_ij and T′_ij are not different tensors, they are different components for the same tensor. What you have above looks fine, as long as you can see that the B_ij are components for B with respect to the image of your original basis under A. Specifically, if you start in the basis e_i, you go to the basis e′_i = Ae_i and then to e″_i = Be′_i. Orthogonality will ensure that the image of a basis is a basis. Rschwieb (talk) 11:53, 12 March 2012 (UTC)[reply]

who sets racetrack odds and how?

who sets racetrack odds and how? How do bookies have good enough knowledge of who is likely to win to set favorite/longshot odds to begin with, and how can this knowledge be SO good that a fluke doesn't leave them broke? If I were a bookie, I would have to have a 10× broker's fee on any bets placed (so that I lose only if a 10:1 underdog wins) before I could offer any odds I could live with.... 84.1.193.119 (talk) 13:38, 12 March 2012 (UTC)[reply]

They don't try to forecast the race or bet against you; they try to set their odds as new bets come in, so that whoever wins, they always make a profit. They do of course do some forecasting to set initial odds, and can lose because they can't balance the books if, for instance, everyone bets on the favourite and it wins - but in those cases people don't make much anyway, and the betting shop can reject the bet or offer lower odds for very large bids in advance of the day. There are also systems where people just get a cut of the total bet, and the winnings from those tend to be pretty similar to the fixed-price odds. There the betting shop takes no risk at all. Dmcq (talk) 14:03, 12 March 2012 (UTC)[reply]
are you telling me that for the most part it's like a market, and odds are really just 'prices' that float with some transaction cost? (the fees you mention) 84.1.193.119 (talk) 14:06, 12 March 2012 (UTC)[reply]
Also, why do you say "in those case people don't make much anyway" -- if a 100:1 underdog wins, wouldn't the people win the MOST in those cases? So what if, in setting those initial odds, too many people bet on the underdog and it wins? 84.1.193.119 (talk) 14:07, 12 March 2012 (UTC)[reply]
Yes, a comparison to a market is good. I see we have an article on all this: Mathematics of bookmaking. It is of course possible for someone to come in and win big on a 100:1 bet, but the bookmaker will change the odds pretty quickly if everyone starts betting on a particular underdog that way. Dmcq (talk) 14:11, 12 March 2012 (UTC)[reply]
your last sentence is particularly interesting. I wonder if a crowd of people could come in as a flash mob and start placing (cheap) bets on a particular longshot horse - making it, at spot prices (not in the forecast sense), the "favorite" - and pushing the current favorite (in the forecast sense) out of that position. Then the person who orchestrated all this could simply bet on the real favorite at favorable prices. How do bookies protect themselves from this, if they don't forecast, and if, as you say, they will "change the odds pretty quick if everyone started betting on a particular underdog that way"? --84.1.193.119 (talk) 14:56, 12 March 2012 (UTC)[reply]
Presumably, the amount that you'd win by getting slightly better odds for betting on the favorite would be offset by the cost of all those long-shot bets. For an example of a distillation of this system, see Intrade, which structurally operates like a bookie, but merely facilitates bets between users (they don't stand to gain or lose money on bets, so they charge a monthly fee). Paul (Stansifer) 15:42, 12 March 2012 (UTC)[reply]
wow, private betting, all this sounds terribly smoke-filled! I'm just a lowly mathematician interested in economics, but I feel like I'm smoking a cigar just reading this! --84.1.193.119 (talk) 16:20, 12 March 2012 (UTC)[reply]
There are other bets where they do try to predict the chance of an event happening. There's the 20 million-to-1 if Elvis crashed a UFO on top of the Loch Ness Monster. Bets on where Skylab would crash, the first man on the moon, and other bets that, for example, William Hill has accepted (the chance of the Olympic flame blowing out...). 84.197.178.75 (talk) 17:01, 12 March 2012 (UTC)[reply]
Any ONE of these things is far less likely than one in twenty million: Elvis is alive (he couldn't be seen by anybody, EVER); the Loch Ness Monster is real (the lake's been plumbed); and a UFO crashes. That all these things should happen independently, at once, has odds far worse than 8000 million million million to one, and should pay much better. You are getting ripped off! --84.1.193.119 (talk) 12:05, 13 March 2012 (UTC)[reply]

applications of linear functions

What are some direct applications of knowing how to solve linear functions--148.241.97.247 (talk) 14:36, 12 March 2012 (UTC)[reply]

depends what you mean; solving a linear equation in one variable is quite trivial - that's, for example, calculating how many apples you can buy at 20 cents a piece if you've got 200 cents, or how fast you ran if you did 25 km in three hours. If you mean a real linear function, one for which f(x)+f(y) = f(x+y) and f(ax) = a·f(x), then I'm not sure what there is to solve. Solving a System of linear equations is used a lot in science, economics etc. A direct application, hmm... solving riddles? Like: if you count 50 heads and 120 legs, how many people and how many dogs are there... 84.197.178.75 (talk) 17:57, 12 March 2012 (UTC)[reply]
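The heads-and-legs riddle above is a 2×2 linear system; a small sketch (my addition, not part of the thread) solving it by elimination:

```python
# 50 heads and 120 legs: p + d = 50 (heads), 2p + 4d = 120 (legs),
# where p = people and d = dogs.
# Eliminating p: subtract 2*(p + d = 50) from the legs equation to get 2d = 20.
heads, legs = 50, 120
dogs = (legs - 2 * heads) // 2
people = heads - dogs
print(people, dogs)  # 40 10
```

Check: 40 people and 10 dogs give 50 heads and 2·40 + 4·10 = 120 legs.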
Here's a more complex example than your usual one to really get your teeth into. Having hated school work and never doing any after leaving you get a job cleaning offices, you join another person who has been cleaning the offices in five hours and together you do the job in three. However they want a night off to escape and get a life - how long will it take you to clean the offices on your own? ;-) Dmcq (talk) 18:10, 12 March 2012 (UTC)[reply]
40,10 & 7.5, WTP? :)Naraht (talk) 21:50, 12 March 2012 (UTC)[reply]
I think one of the commonest uses outside of scientific disciplines is in linear programming. There is an example in that article of an application by a farmer in planting grain. They also use it to decide what feedstock to buy for their animals. Dmcq (talk) 13:47, 13 March 2012 (UTC)[reply]

in 7 hours and a half?--148.241.97.247 (talk) 20:33, 13 March 2012 (UTC)[reply]

That's right, very good. You can see why people don't like that happening. Having a group of cleaners go round a few offices means they can cope with holidays and illness better. Dmcq (talk) 23:49, 13 March 2012 (UTC)[reply]
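The office-cleaning puzzle is a standard work-rate equation: if the pair cleans the offices in 3 hours and the colleague alone takes 5, your own rate is 1/3 − 1/5 of the job per hour. A one-liner check (my addition, not from the thread):

```python
from fractions import Fraction

together = Fraction(1, 3)   # jobs per hour for the pair
colleague = Fraction(1, 5)  # jobs per hour for the other cleaner
alone = together - colleague  # your rate: 2/15 of the job per hour
print(1 / alone)  # hours working alone: 15/2, i.e. 7.5
```

This matches the 7.5 hours arrived at in the thread.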


March 13

Vectors

Suppose we take F_2^n, i.e. the n-dimensional vector space over the field of two elements. Now suppose we wish to construct a maximal set of these vectors such that each pair of vectors differs in at least d of their entries, by taking an initial vector and then continually adding another that is at least d away from all the others, until we can no longer iterate the procedure. Can we place a bound on the size of this set? Or, better still, determine precisely how big it will be? Thanks. 131.111.216.115 (talk) 12:31, 13 March 2012 (UTC)[reply]

I should add that at each stage, including the initial one, we want the vectors to be randomly generated and only added to the set if they satisfy the given criterion. Otherwise, we reject the vector and try a new one. Aside from that, we have no control over what the vector is. 131.111.216.115 (talk) 12:32, 13 March 2012 (UTC)[reply]
Singleton bound gives an upper bound of 2^(n−d+1). Hamming bound gives another upper bound. Those pages also link to some other bounds that may be useful. Not sure about the expected size for this random algorithm. Rckrone (talk) 15:45, 13 March 2012 (UTC)[reply]
The expected size for the random algorithm iirc is what you would hope for (this figures into the proof of the noisy channel coding theorem) but of course the worst case is, uh, pessimal. 67.117.144.57 (talk) 08:01, 17 March 2012 (UTC)[reply]
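The greedy procedure the question describes (keep drawing random vectors, accepting one only if it is at least Hamming distance d from everything kept so far) is easy to simulate. A sketch (mine, not from the thread), representing vectors in F_2^n as Python ints and assuming uniform random proposals with a fixed retry budget:

```python
import random

def greedy_code(n, d, tries=10000, seed=1):
    """Greedily build a set of vectors in GF(2)^n with pairwise Hamming distance >= d."""
    rng = random.Random(seed)
    code = []
    for _ in range(tries):
        v = rng.getrandbits(n)
        # Hamming distance between v and w is the popcount of v XOR w.
        if all(bin(v ^ w).count("1") >= d for w in code):
            code.append(v)
    return code

code = greedy_code(8, 3)
# The Singleton bound caps any such code at 2**(n - d + 1) = 64 codewords.
print(len(code))
```

Running this for various n and d gives an empirical feel for how close the greedy random construction gets to the Singleton and Hamming bounds mentioned above.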

Using Euler's theorem to find the last digits of a tetration

I am trying to solve a problem on Project Euler, which deals with finding some of the last digits of a tetration ⁿp, where p is a specific prime number given in the problem and n is another specific integer. To get some ideas on how to solve it, I skimmed through the article and stumbled on this sentence at the end of Tetration#Iterated powers:

When a and n are coprime, we can compute the last m decimal digits of ⁿa using Euler's theorem.

But is that true? I can see for myself that if a and 10 (but not necessarily n) are coprime (and they are in my case, since a is prime), then any tetration of a will also be coprime to 10^m, and I can use Euler's theorem recursively, starting from the top, and then eat the tower of exponentials (mod 10^m) top-down. So I am not asking for help to solve the problem, but questioning the statement in the article. --Slaunger (talk) 19:04, 13 March 2012 (UTC). My solution was accepted, confirming my own observations, but I still question the validity of the statement in the article.--Slaunger (talk) 19:55, 13 March 2012 (UTC)[reply]

I agree, it doesn't seem right. a and n being coprime doesn't seem to do anything. You only care if a and 10 are coprime. I changed the statement in the article, although maybe it should be removed entirely. Rckrone (talk) 03:17, 14 March 2012 (UTC)[reply]
Ok, thanks. A small victory for a number theory noob :-) --Slaunger (talk) 08:27, 14 March 2012 (UTC)[reply]
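The recursive Euler reduction described above can be sketched as follows (my code, using p = 3 as a small stand-in rather than the actual Project Euler inputs). The reduction a^e ≡ a^(e mod φ(m)) (mod m) is valid here because every modulus in the totient chain starting from 10^m is of the form 2^a·5^b, hence coprime to any prime p other than 2 and 5:

```python
def totient(m):
    """Euler's phi by trial-division factorization (fine for small moduli)."""
    result, p, n = 1, 2, m
    while p * p <= n:
        if n % p == 0:
            result *= p - 1
            n //= p
            while n % p == 0:
                result *= p
                n //= p
        p += 1
    if n > 1:
        result *= n - 1
    return result

def tet_mod(p, height, mod):
    """Last digits of the tower p^p^...^p (height levels), assuming gcd(p, mod) = 1."""
    if mod == 1:
        return 0
    if height == 1:
        return p % mod
    # Reduce the tower above this level modulo phi(mod), then exponentiate.
    return pow(p, tet_mod(p, height - 1, totient(mod)), mod)

print(tet_mod(3, 3, 10**4))  # 3^3^3 = 3^27 = 7625597484987, so 4987
```

The recursion descends the totient chain 10^m, φ(10^m), φ(φ(10^m)), ..., which is exactly the "eat the tower top-down" idea in the question.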

A week in Planck times

How many Planck times is a week? I guessed that it was 1,600,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 Planck times (1.6e+48) because Orders of magnitude (time) says that one is equal to approximately 10^−44 seconds, but I'm not at all sure that I've done the math right. Nyttend (talk) 20:38, 13 March 2012 (UTC)[reply]

A week is 7×24×60×60 = 604,800 seconds. The Planck time is 5.39106×10^−44 seconds. If we divide 1 week by 1 Planck time, we get 1.1218573×10^49. So you were close, just less than one order of magnitude out. (I think you did the maths right, but you used 10^−44 rather than 5.39106×10^−44, and must have had an approximation for the length of a week as well.) --Tango (talk) 21:02, 13 March 2012 (UTC)[reply]
I thought that I put -44 into my computer calculator, clicked the 10x button, multiplied by 5.4, put the result into memory, typed 604800, clicked the divide key, selected memory recall, and finished with the equals key. However, that gives me the number you tell me; I'm not sure where I went wrong. Thanks for the help; I've fixed some statements that I made at WP:RFPP, where I was using a different method of expressing the fact that I'd protected a page for a week :-) Nyttend (talk) 21:21, 13 March 2012 (UTC)[reply]
http://www.wolframalpha.com/input/?i=1+week+in+planck+times does the job. Staecker (talk) 14:45, 14 March 2012 (UTC)[reply]
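The arithmetic fits in a couple of lines; a throwaway check (my addition, not from the thread):

```python
week_seconds = 7 * 24 * 60 * 60   # 604800 seconds in a week
planck_time = 5.39106e-44         # Planck time in seconds
print(week_seconds / planck_time) # ~1.1219e+49 Planck times per week
```

Using the rounded 10^−44 instead of 5.39106×10^−44 overstates the count by a factor of about 5.4, which accounts for most of the discrepancy in the original guess.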


March 14

Names of these functions

Are there special names for functions that are of the form 1/(1 + x^(2n))? --HappyCamper 05:11, 14 March 2012 (UTC)[reply]

These functions are all rational functions, but not all rational functions are of the above form. There is no special name for the functions 1/(1 + x^(2n)), nor should there be, because the sum or product of such functions is not of the same form. Why do you need a name? Bo Jacoby (talk) 13:46, 14 March 2012 (UTC).[reply]
If you google 1/(1+x^2n) you'll get a youtube video about the integral of that function as first result. Weird that google is smart enough to find that one despite it having the 2n in brackets, but not one of the other search results seems to match... 84.197.178.75 (talk) 14:57, 14 March 2012 (UTC)[reply]

Intuitive interpretation of Lebesgue integrals

Please see 1, 2 and 3. Which image is wrong? --151.75.123.9 (talk) 12:47, 14 March 2012 (UTC)[reply]

The third link is wrong (I have reverted that edit). One way to define the Lebesgue integral is to divide the range horizontally and then calculate the area under the resulting simple function. That's what the image was supposed to convey. Sławomir Biały (talk) 12:55, 14 March 2012 (UTC)[reply]
Also, the text beside the image explains this point. Sławomir Biały (talk) 12:57, 14 March 2012 (UTC)[reply]
Please have a look here: Wikipedia talk:WikiProject Mathematics#Lebesgue_integration. I totally disagree with Slawomir.--Svebert (talk) 10:02, 19 March 2012 (UTC)[reply]

March 15

Probability question

Suppose there are N persons and each of them picks a random number from 1 ... M, where M ≥ N. A person picks a "unique" number if nobody else picks the same number. Is there a simple expression for the expected number of persons who pick a unique number? If not, is there a simple approximation for that expected value, assuming that M is larger than N by a margin, e.g. M ≥ 2N? By "simple", I'm really trying to avoid iterated summations/products. I'm trying to understand how the function behaves as M and N vary. Thanks. --173.49.10.183 (talk) 12:36, 15 March 2012 (UTC)[reply]

Fixing M, let f(N) denote the expected number of different numbers chosen by N people. When the Nth person chooses their number, the probability they pick a number that no one has chosen so far is 1 − k/M, where k is the number of different numbers chosen so far. The expectation of k is f(N−1), and since 1 − k/M is linear in k, its expectation is 1 − f(N−1)/M. So then f(N) = f(N−1) + 1 − f(N−1)/M. Solving the recurrence relation with initial condition f(0) = 0, we find f(N) = M − M(1 − 1/M)^N. The probability that the Nth person picked a unique number is 1 − f(N−1)/M, but it doesn't actually matter what order the people choose their numbers, so everyone should have the same chance. The expected number of people with unique numbers should be N(1 − f(N−1)/M) = N(1 − 1/M)^(N−1). Rckrone (talk) 13:43, 15 March 2012 (UTC)[reply]
And here's another way of getting to the same point. Binomial distribution deals with this. Just think of one number: what is the chance that exactly one person chooses it? Any of the N could be the person, and they choose it with probability 1/M; the N−1 other people each don't choose it with probability 1 − 1/M. So the probability of exactly one person choosing a particular number is
N × (1/M) × (1 − 1/M)^(N−1)
and that gives you the proportion of numbers that will be chosen on average by exactly one person. Multiply by the M numbers to get the number of people in the previous answer. When the numbers are large, the proportion can be approximated by (N/M)·e^(−N/M). Dmcq (talk) 13:51, 15 March 2012 (UTC)[reply]

Look in the "approximations" section of birthday problem. 67.117.144.57 (talk) 08:09, 17 March 2012 (UTC)[reply]
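The closed form N(1 − 1/M)^(N−1) derived above can be checked against exhaustive enumeration of a small case; a sketch (my addition, not part of the thread):

```python
from itertools import product

def exact_expected_unique(n, m):
    """Average number of people holding a unique number, over all m**n choice vectors."""
    total = 0
    for picks in product(range(m), repeat=n):
        total += sum(1 for p in picks if picks.count(p) == 1)
    return total / m**n

n, m = 3, 4
formula = n * (1 - 1 / m) ** (n - 1)
print(exact_expected_unique(n, m), formula)  # both 1.6875
```

The two values agree exactly because the formula comes from linearity of expectation, with no approximation involved; only the (N/M)·e^(−N/M) form is an approximation.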

Convert Hamiltonian path problem into subset sum problem and vice-versa

Since the Hamiltonian path problem (in a specific graph, is there a Hamiltonian path) is NP-complete, and the subset sum problem (given a set of integers, is there a non-empty subset whose sum is zero) is also NP-complete, by the definition of NP-complete, it should be possible to create a set of integers based on a graph, or construct a graph based on a set of integers, so that both have the same answer, no? How would you do that exactly? 84.197.178.75 (talk) 20:32, 15 March 2012 (UTC)[reply]

Found a reduction from the Hamiltonian path problem to subset sum. With V vertices and E edges you use 2·V·E numbers (each representing an edge, its direction, and its place in the path) and use binary coding so that V numbers representing a closed path result in V·E bit positions set to 1; add another V bits to represent the V possible start vertices of the edges, to verify that every vertex is in the path (the paper I found uses V·E positions and encodes both vertices of every edge; that last bit seems redundant imo). Maybe some extra zeros are needed to rule out sums of 3·E, 5·E etc. numbers giving the right answer, not sure...
Guess it works, but I get the feeling that reducing this problem back to a Hamiltonian path problem will give me more than the V vertices I started with. Anyone a link on how to do that? 84.197.178.75 (talk) 02:04, 16 March 2012 (UTC)[reply]
You don't have to look for a direct reduction. Normally in these proofs you don't care at all about the degrees of the polynomials involved, so you can show equivalence by (say) reducing both problems to 3SAT. 67.117.144.57 (talk) 08:07, 17 March 2012 (UTC)[reply]
Good point; guess I'm really looking for a transformation that preserves the number of possibilities, not for a reduction. Not sure if that exists or is implied by NP-completeness. 84.197.178.75 (talk) 15:09, 17 March 2012 (UTC)[reply]
Try to be precise with your question. If you're asking for a bijection between instances of the Hamiltonian path problem and the subset sum problem, where the bijection can be applied in polynomial time and preserves the yes/no answer, then this might be impossible. The existence of this is not implied by NP-completeness. If instead you're asking for transformations (not necessarily bijections) between the Hamiltonian path problem and the subset sum problem such that the number of Hamiltonian paths is equal to the number of subset sum solutions, then in general these are called parsimonious transformations. There is currently no Wikipedia article about the topic, but you can try Google (the singular form with quotes works best) and ignore search results that are unrelated to complexity theory. 98.248.42.252 (talk) 02:24, 18 March 2012 (UTC)[reply]

March 16

Dividing a square into triangles

Maybe I am missing something obvious, but here is an apparently simple problem that is baffling me. The problem is: divide a square into an odd number of triangles, each with the same area (the triangles do not need to be congruent). The problem is easily solved if we replace "odd" with "even", or replace "triangles" with "rectangles" or even "irregular pentagons". But I can't find a solution for an odd number of triangles, nor can I find a proof that this is impossible. Any thoughts ? Gandalf61 (talk) 08:59, 16 March 2012 (UTC)[reply]

Sperner's lemma and 2-adic valuation prove that it cannot be done. A search for those should give you the answer. 84.197.178.75 (talk) 09:58, 16 March 2012 (UTC)[reply]
Thank you ! Following your hints I found that this is known as Monsky's theorem, it was proved in 1970 by Paul Monsky, there is an explanation of the proof here, and it also appears in Chapter 20 of the latest edition of Aigner and Ziegler's Proofs from THE BOOK (although not, alas, in my 1999 edition). And we don't seem to have a Wikipedia article on this ! Anyway, I am somewhat relieved that the proof is not trivial. Gandalf61 (talk) 11:08, 16 March 2012 (UTC)[reply]

Done. Thank you.

There's also a 171 page pdf presentation/slide show that gives every step. www.math.osu.edu/~fowler.291/teaching/talks/cutting.pdf 84.197.178.75 (talk) 15:00, 17 March 2012 (UTC)[reply]

March 17

Name the terms

Products have factors, sums have summands. Do we have names for the individual terms in subtraction and division? (In the case of division, please don't say numerator and denominator.) I'm guessing that we don't because subtraction and division are not associative operations, and so a long string of "unbracketed" repeated subtractions or divisions is not always well defined. Fly by Night (talk) 18:33, 17 March 2012 (UTC)[reply]

See the lead sections of Division (mathematics) and Subtraction. Some of these terms are used more than others, though; I'd never heard of minuend or subtrahend before. Qwfp (talk) 18:43, 17 March 2012 (UTC)[reply]
Thanks for that. I'll take a look now.
Exponentiation is also nonassociative, but we have the common terms "base" and "exponent". Other obscure terms: The thing inside a radical is the radicand. The thing inside an integral is the integrand. Not sure if there are words like this for the object of a derivative or logarithm (or anything else). Staecker (talk) 22:01, 17 March 2012 (UTC)[reply]
I think most people are familiar with integrand. Fly by Night (talk) 22:44, 17 March 2012 (UTC)[reply]
Where would we be without useful mathematical terms such as abscissa? Rschwieb (talk) 00:39, 18 March 2012 (UTC)[reply]

March 18

Infinite sum

Please solve the following:

∑_{n=1}^{∞} (1/p_n − 1/p_{n+1})

where p_n is the n-th prime number. 113.178.27.141 (talk) 05:11, 18 March 2012 (UTC)[reply]

Please do your own homework.
Welcome to the Wikipedia Reference Desk. Your question appears to be a homework question. I apologize if this is a misinterpretation, but it is our aim here not to do people's homework for them, but to merely aid them in doing it themselves. Letting someone else do your homework does not help you learn nearly as much as doing it yourself. Please attempt to solve the problem or answer the question yourself first. If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help. If you need help grasping the concept of a problem, by all means let us know. --Trovatore (talk) 05:26, 18 March 2012 (UTC)[reply]
This is a telescoping series. The answer is 1/p_1 = 1/2. You can prove that inductively, and the reason is that all the terms cancel except the first and last: ∑_{n=1}^{N} (1/p_n − 1/p_{n+1}) = 1/p_1 − 1/p_{N+1} → 1/2 as N → ∞. Widener (talk) 06:18, 18 March 2012 (UTC)[reply]
Did you not agree that this looked like a homework question, or are you deliberately going against the accepted practice? --Trovatore (talk) 06:27, 18 March 2012 (UTC)[reply]
"If you need help with a specific part of your homework, feel free to tell us where you are stuck and ask for help."
He did tell us where he was stuck; any further and he would have solved it ;-) Besides, unless he copy/pasted, he did make the effort to use math notation, which may have taken some time to figure out. 84.197.178.75 (talk) 11:11, 18 March 2012 (UTC)[reply]
It is not a good thing to just give people the answer. Word will get around and we'll be inundated with this sort of request. Seriously, people. Just don't do it. --Trovatore (talk) 13:02, 18 March 2012 (UTC)[reply]
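Assuming the sum in question is the telescoping ∑(1/pₙ − 1/pₙ₊₁), as the cancellation remark above suggests, the partial sums can be checked exactly with rational arithmetic (my sketch, not from the thread):

```python
from fractions import Fraction

def primes(k):
    """First k primes by trial division against the primes found so far."""
    found = []
    n = 2
    while len(found) < k:
        if all(n % p for p in found):
            found.append(n)
        n += 1
    return found

p = primes(21)  # p[0] = 2, ..., p[20] = 73
partial = sum(Fraction(1, p[i]) - Fraction(1, p[i + 1]) for i in range(20))
# Telescoping leaves only the first and last terms: 1/2 - 1/73.
print(partial, Fraction(1, p[0]) - Fraction(1, p[20]))
```

As the number of terms grows, 1/p_{N+1} → 0, so the partial sums converge to 1/2.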

Minesweeper

Here is a position from Minesweeper. I am interested in the probability of hitting a mine when clicking a square. The probability when clicking one of the 8 squares around the "2" is 2 in 8, or 0.25. Around the "1" square it is 1 in 8, or 0.125. But what about the three squares that are common to both groups? Obviously, one can count all the possible permutations of positions of mines and determine how many have mines occurring in the overlap, but is there an analytical method of doing this?

Also, my result from the brute-force counting method is that 15 of 65 (= 3/13) permutations have mines in the overlap. The probability of hitting a mine if all three are clicked is 3/13. Since there is only one mine in the overlap, the probability of hitting a mine from one click is one third of this, 1/13 ≈ 0.08. Is this right? It is very counter-intuitive; I was expecting a figure between 0.125 and 0.25. It means that clicking in the overlap between already discovered mines is safer than clicking in the non-overlap, and way safer than clicking a random blank area (mine density ~ 0.2). SpinningSpark 19:22, 18 March 2012 (UTC)[reply]

Combinatorics question

in how many ways can I arrange the numbers 1, 2, 3, 4, 5, 6 such that the number 1 isn't in the first position, the number 2 isn't in the second, the number 3 isn't in the third position... --Thepurplelefant (talk) 19:50, 18 March 2012 (UTC)[reply]
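This is asking for the number of derangements of six elements. For checking one's own working, the count can be brute-forced in a few lines (my sketch, not part of the thread):

```python
from itertools import permutations

# Count permutations of 1..6 with no number in its own position (derangements).
count = sum(
    all(value != position for position, value in enumerate(perm, start=1))
    for perm in permutations(range(1, 7))
)
print(count)  # 265
```

The same number falls out of the inclusion-exclusion formula D(n) = n!·∑_{k=0}^{n} (−1)^k/k!, which is the standard analytical route for this kind of problem.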