Primality test: Difference between revisions

From Wikipedia, the free encyclopedia

Revision as of 23:27, 18 August 2011

A primality test is an algorithm for determining whether an input number is prime. Among other fields of mathematics, it is used for cryptography. Unlike integer factorization, primality tests do not generally give prime factors, only stating whether the input number is prime or not. As of 2010, factorization is a computationally difficult problem, whereas primality testing is comparatively easy (its running time is polynomial in the size of the input). Some primality tests prove that a number is prime, while others like Miller–Rabin prove that a number is composite. Therefore, the latter might be called compositeness tests instead of primality tests.

Naïve methods

The simplest primality test is as follows: Given an input number n, check whether any integer m from 2 to n − 1 divides n. If n is divisible by any m then n is composite, otherwise it is prime.

However, rather than testing all m up to n − 1, it is only necessary to test m up to √n: if n is composite then it can be factored into two values, at least one of which must be less than or equal to √n.

The efficiency can also be improved by skipping all even m except 2, since if any even number divides n then 2 does. It can be improved further by observing that all primes are of the form 6k ± 1, with 2 and 3 being the only exceptions. This is because all integers can be expressed as (6k + i) for some integer k and for i = −1, 0, 1, 2, 3, or 4; 2 divides (6k + 0), (6k + 2), (6k + 4); and 3 divides (6k + 3). So a more efficient method is to test whether n is divisible by 2 or 3, then to check through all the numbers of the form 6k ± 1 up to √n. This is 3 times as fast as testing all m.
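The trial-division scheme above (test 2 and 3, then only candidates of the form 6k ± 1 up to √n) can be sketched as follows; the function name is ours:

```python
def is_prime_trial_division(n: int) -> bool:
    """Trial division testing 2 and 3, then divisors of the form 6k ± 1 up to sqrt(n)."""
    if n < 2:
        return False
    if n < 4:
        return True  # 2 and 3 are prime
    if n % 2 == 0 or n % 3 == 0:
        return False
    m = 5  # candidates 5, 7, 11, 13, ... are exactly the numbers 6k - 1 and 6k + 1
    while m * m <= n:
        if n % m == 0 or n % (m + 2) == 0:
            return False
        m += 6
    return True
```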

Generalising further, it can be seen that all primes are of the form c#k + i for i < c#, where c# is the primorial of c, i represents the numbers that are coprime to c#, and k is an integer. For example, let c = 6. Then c# = 2 · 3 · 5 = 30. All integers are of the form 30k + i for i = 0, 1, 2, ..., 29 and k an integer. However, 2 divides 0, 2, 4, ..., 28; 3 divides 0, 3, 6, ..., 27; and 5 divides 0, 5, 10, ..., 25. So all prime numbers are of the form 30k + i for i = 1, 7, 11, 13, 17, 19, 23, 29 (i.e. for i < 30 such that gcd(i, 30) = 1). Note that if i and 30 are not coprime, then 30k + i is divisible by a prime divisor of 30, namely 2, 3 or 5, and is therefore not prime.

As c → ∞, the number of values that c#k + i can take over a certain range decreases, and so the time to test n decreases. For this method, it is also necessary to check for divisibility by all primes that are less than c. Observations analogous to the preceding can be applied recursively, giving the Sieve of Eratosthenes.

A good way to speed up these methods (and all the others mentioned below) is to pre-compute and store a list of all primes up to a certain bound, say all primes up to 200. (Such a list can be computed with the Sieve of Eratosthenes.) Then, before testing n for primality with a serious method, n can first be checked for divisibility by any prime from the list. If it is divisible by any of those numbers then it is composite, and any further tests can be skipped.
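A sketch of this pre-computation, using the bound 200 from the text (function names are illustrative):

```python
def primes_up_to(bound: int) -> list[int]:
    """Sieve of Eratosthenes: all primes up to and including bound."""
    is_prime = [True] * (bound + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(bound ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, bound + 1, p):
                is_prime[multiple] = False
    return [p for p, flag in enumerate(is_prime) if flag]

SMALL_PRIMES = primes_up_to(200)

def small_factor_check(n: int):
    """Return False if a small prime properly divides n (n is composite),
    True if n is itself one of the listed primes, and None if no small
    prime divides n, in which case a serious test is still needed."""
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p
    return None
```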

A simple but very inefficient primality test uses Wilson's theorem, which states that p > 1 is prime if and only if (p − 1)! ≡ −1 (mod p).

Although this method requires about p modular multiplications, rendering it impractical, theorems about primes and modular residues form the basis of many more practical methods.
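A direct (and, as noted, impractical) rendering of Wilson's theorem as a test, using about p modular multiplications:

```python
def wilson_is_prime(p: int) -> bool:
    """Wilson's theorem: p > 1 is prime iff (p - 1)! ≡ -1 (mod p)."""
    if p < 2:
        return False
    fact = 1
    for k in range(2, p):
        fact = fact * k % p  # build (p - 1)! reduced modulo p at each step
    return fact == p - 1  # -1 mod p is p - 1
```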

Probabilistic tests

Most popular primality tests are probabilistic tests. These tests use, apart from the tested number n, some other numbers a which are chosen at random from some sample space; the usual randomized primality tests never report a prime number as composite, but it is possible for a composite number to be reported as prime. The probability of error can be reduced by repeating the test with several independently chosen values of a; for two commonly used tests, for any composite n at least half the a's detect n's compositeness, so k repetitions reduce the error probability to at most 2^−k, which can be made arbitrarily small by increasing k.

The basic structure of randomized primality tests is as follows:

  1. Randomly pick a number a.
  2. Check some equality (corresponding to the chosen test) involving a and the given number n. If the equality fails to hold true, then n is a composite number, a is known as a witness for the compositeness, and the test stops.
  3. Repeat from step 1 until the required certainty is achieved.

After several iterations, if n is not found to be a composite number, then it can be declared probably prime.

The simplest probabilistic primality test is the Fermat primality test (actually a compositeness test). It works as follows:

Given an integer n, choose some integer a coprime to n and calculate a^(n−1) modulo n. If the result is different from 1, then n is composite. If it is 1, then n may or may not be prime.

The Fermat primality test is only a heuristic test; some composite numbers (Carmichael numbers) will be declared "probably prime" no matter what witness is chosen. Nevertheless, it is often used if a rapid screening of numbers is needed, for instance in the key generation phase of the RSA public key cryptographic algorithm.
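A minimal sketch of the Fermat test with several random bases (the `rounds` parameter is our choice; a base sharing a factor with n also reveals compositeness, so no coprimality check is needed here):

```python
import random

def fermat_test(n: int, rounds: int = 10) -> bool:
    """Return False if some base a proves n composite; True means 'probably prime'."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:  # three-argument pow: modular exponentiation
            return False  # a is a Fermat witness for compositeness
    return True  # n passed every round (Carmichael numbers always pass)
```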

The Miller–Rabin primality test and Solovay–Strassen primality test are more sophisticated variants which detect all composites (once again, this means: for every composite number n, at least 3/4 (Miller–Rabin) or 1/2 (Solovay–Strassen) of numbers a are witnesses of compositeness of n). These are also compositeness tests.

The Miller–Rabin primality test works as follows: Given an integer n, choose some integer a < n. Write n − 1 = 2^s·d where d is odd. If

a^d ≢ 1 (mod n)

and

a^(2^r·d) ≢ −1 (mod n)

for all 0 ≤ r ≤ s − 1,

then n is composite and a is a witness for the compositeness. Otherwise, n may or may not be prime.
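The test just described can be sketched as follows (the `rounds` parameter is our choice; each round with a random base catches a composite n with probability at least 3/4):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Miller–Rabin compositeness test with random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    # Write n - 1 = 2^s * d with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue  # a is not a witness
        for _ in range(s - 1):
            x = x * x % n  # compute a^(2^r * d) by repeated squaring
            if x == n - 1:
                break  # a is not a witness
        else:
            return False  # a is a witness: n is certainly composite
    return True  # probably prime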

The Solovay–Strassen primality test uses another equality: Given an odd number n, choose some integer a < n. If

a^((n−1)/2) ≢ (a/n) (mod n), where (a/n) is the Jacobi symbol,

then n is composite and a is a witness for the compositeness. Otherwise, n may or may not be prime.
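A sketch of this test, including a Jacobi-symbol routine based on quadratic reciprocity (the `rounds` parameter is our choice; each round catches a composite n with probability at least 1/2):

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, computed via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:
            a //= 2
            if n % 8 in (3, 5):
                result = -result  # (2/n) = -1 when n ≡ 3, 5 (mod 8)
        a, n = n, a
        if a % 4 == 3 and n % 4 == 3:
            result = -result  # reciprocity: flip sign when both ≡ 3 (mod 4)
        a %= n
    return result if n == 1 else 0  # 0 exactly when gcd(a, n) > 1

def solovay_strassen(n: int, rounds: int = 20) -> bool:
    """Solovay–Strassen compositeness test with random bases."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        j = jacobi(a, n)
        # Euler's criterion must hold for prime n; note -1 mod n is n - 1.
        if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
            return False  # a is an Euler witness: n is composite
    return True  # probably prime
```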

These two primality tests are often the methods of choice, as they are simple and much faster than other general primality tests. One method of improving efficiency further in some cases is the Frobenius pseudoprimality test; a round of this test takes about three times as long as a round of Miller–Rabin, but achieves a probability bound comparable to seven rounds of Miller–Rabin.

Leonard Adleman and Ming-Deh Huang presented an errorless (but expected polynomial-time) variant of the elliptic curve primality test. Unlike the other probabilistic tests, this algorithm produces a primality certificate, and thus can be used to prove that a number is prime. The algorithm is prohibitively slow in practice.

Fast deterministic tests

Near the beginning of the 20th century, it was shown that a result of Fermat's little theorem could be used to test for primality. This resulted in the Pocklington primality test.[1] However, as this test requires a partial factorization of n − 1 the running time was still quite slow in the worst case. The first deterministic primality test significantly faster than the naïve methods was the cyclotomy test; its runtime can be proven to be O((log n)^(c·log log log n)), where n is the number to test for primality and c is a constant independent of n. Many further improvements were made, but none could be proven to have polynomial running time. (Note that running time is measured in terms of the size of the input, which in this case is ~ log n, that being the number of bits needed to represent the number n.) The elliptic curve primality test can be proven to run in O((log n)^6), but only if some still unproven (but widely assumed to be true) statements of analytic number theory are used[which?]. Similarly, under the generalized Riemann hypothesis, the Miller–Rabin test can be turned into a deterministic version (called Miller's test) with runtime Õ((log n)^4).[2] In practice, this algorithm is slower than the other two for sizes of numbers that can be dealt with at all. Because the implementation of these methods is rather difficult and creates a risk of programming errors, the slower but simpler tests are often preferred.

In 2002 the first provably polynomial-time test for primality was invented by Manindra Agrawal, Neeraj Kayal and Nitin Saxena. The AKS primality test runs in Õ((log n)^12) (improved to Õ((log n)^7.5) in the published revision of their paper), which can be further reduced to Õ((log n)^6) if the Sophie Germain conjecture is true.[3] Subsequently, Lenstra and Pomerance presented a version of the test which runs in time Õ((log n)^6) unconditionally.[4]

Complexity

In computational complexity theory, the formal language corresponding to the prime numbers is denoted as PRIMES. It is easy to show that PRIMES is in Co-NP: its complement COMPOSITES is in NP because one can decide compositeness by nondeterministically guessing a factor.

In 1975, Vaughan Pratt showed that there existed a certificate for primality that was checkable in polynomial time, and thus that PRIMES was in NP, and therefore in NP ∩ coNP. See primality certificate for details.

The subsequent discovery of the Solovay–Strassen and Miller–Rabin algorithms put PRIMES in coRP. In 1992, the Adleman–Huang algorithm reduced the complexity to ZPP = RP ∩ coRP, which superseded Pratt's result.

The cyclotomy test of Adleman, Pomerance, and Rumely from 1983 put PRIMES in QP (quasi-polynomial time), which is not known to be comparable with the classes mentioned above.

Because of its tractability in practice, polynomial-time algorithms assuming the Riemann hypothesis, and other similar evidence, it was long suspected but not proven that primality could be solved in polynomial time. The existence of the AKS primality test finally settled this long-standing question and placed PRIMES in P. However, PRIMES is not known to be P-complete, and it is not known whether it lies in classes lying inside P such as NC or L.

Number-theoretic methods

Certain number-theoretic methods exist for testing whether a number is prime, such as the Lucas test and Proth's test. These tests typically require factorization of n + 1, n − 1, or a similar quantity, which means that they are not useful for general-purpose primality testing, but they are often quite powerful when the tested number n is known to have a special form.

The Lucas test relies on the fact that the multiplicative order of a number a modulo n is n − 1 for a prime n when a is a primitive root modulo n. If we can show that a is a primitive root modulo n, we can show that n is prime.
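This idea can be sketched as follows, assuming the distinct prime factors of n − 1 are supplied (the function name and signature are ours); a has order exactly n − 1 precisely when a^(n−1) ≡ 1 but a^((n−1)/q) ≢ 1 for every prime factor q of n − 1:

```python
def lucas_test(n: int, factors: list[int]) -> bool:
    """Lucas primality test: n > 2 is prime iff some a < n has multiplicative
    order exactly n - 1; `factors` must be the distinct prime factors of n - 1."""
    for a in range(2, n):
        if pow(a, n - 1, n) != 1:
            return False  # Fermat's little theorem fails, so n is composite
        if all(pow(a, (n - 1) // q, n) != 1 for q in factors):
            return True  # the order of a is exactly n - 1: a is a primitive root
    return False
```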

References

  1. ^ Weisstein, Eric W. "Pocklington's Theorem". MathWorld.
  2. ^ Gary L. Miller (1976). "Riemann's Hypothesis and Tests for Primality". Journal of Computer and System Sciences. 13 (3): 300–317. doi:10.1016/S0022-0000(76)80043-8.
  3. ^ Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P", Annals of Mathematics 160 (2004), no. 2, pp. 781–793.
  4. ^ Carl Pomerance and Hendrik W. Lenstra. Primality testing with Gaussian periods
