I'm pretty sure you can get rid of the 0xFFFFFFFF / p and get some more speedup by manually implementing the bitarray ops. You can get another boost by using BSF instruction [1] to quickly scan for the next set bit. And you really only need to store odd numbers; storing the even numbers is just wasteful.
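In C the find-first-set scan maps to `__builtin_ctz` (BSF/TZCNT on x86); as a sketch of the same idea, here is a pure-Python equivalent using the identity that `x & -x` isolates the lowest set bit (the function name is my own, not from the article):

```python
def next_set_bit(word, start=0):
    """Index of the lowest set bit of `word` at or above `start`, or -1 if none."""
    word >>= start
    if word == 0:
        return -1
    # word & -word isolates the lowest set bit; bit_length() gives its position
    return start + (word & -word).bit_length() - 1
```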
You can get even more speedup by taking into account cache effects. When you cross out all the multiples of 3 you use 512MB of bandwidth. Then when you cross out all multiples of 5 you use 512MB more. Then 512MB again when you cross out all multiples of 7. The fundamental problem is that you have many partially generated cache-sized chunks and you cycle through them in order with each prime. I'm pretty sure it's faster if you instead fully generate each chunk and then never access it again. So e.g. if your cache is 128k you create a 128k chunk and cross out multiples of 3, 5, 7, etc. for that 128k chunk. Then you do the next 128k chunk again crossing out multiples of 3, 5, 7, etc. That way you only use ~512MB of memory bandwidth in total instead of 512MB per prime number. (Actually it's only really that high for small primes, it starts becoming less once your primes get bigger than the number of bits in a cache line.)
[1] https://en.wikipedia.org/wiki/Find_first_set
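The chunk-at-a-time idea can be sketched as a segmented sieve in Python (a toy sketch of the technique, not the bit-level implementation being discussed; the 128 KB segment size is illustrative):

```python
import math

def cache_blocked_sieve(n, segment_size=128 * 1024):
    """Sieve primes up to n, finishing one cache-sized segment at a time."""
    # Small primes up to sqrt(n), found with a plain sieve.
    limit = math.isqrt(n)
    small = bytearray([1]) * (limit + 1)
    small[0:2] = b"\x00\x00"
    for i in range(2, math.isqrt(limit) + 1):
        if small[i]:
            small[i * i :: i] = b"\x00" * len(small[i * i :: i])
    primes = [i for i in range(2, limit + 1) if small[i]]

    result = list(primes)
    # Fully generate each segment before moving on, so every prime's
    # crossings for that segment happen while it is still in cache.
    for low in range(limit + 1, n + 1, segment_size):
        high = min(low + segment_size - 1, n)
        seg = bytearray([1]) * (high - low + 1)
        for p in primes:
            start = max(p * p, (low + p - 1) // p * p)
            seg[start - low :: p] = b"\x00" * len(seg[start - low :: p])
        result.extend(low + i for i, flag in enumerate(seg) if flag)
    return result
```

Each segment touches memory once in total, rather than once per sieving prime.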
I have a little tool called Prime Grid Explorer at https://susam.net/primegrid.html that I wrote for my own amusement. It can display all primes below 3317044064679887385961981 (an 82-bit integer).
So essentially it can test all 81-bit integers and some 82-bit integers for primality. It does so using the Miller-Rabin primality test with prime bases derived from https://oeis.org/A014233 (OEIS A014233). The algorithm is implemented in about 80 lines of plain JavaScript. If you view the source, look for the function isPrimeByMR.
The Miller-Rabin test is inherently probabilistic. It tests whether a number is a probable prime by checking whether certain number theoretic congruence relations hold for a given base a. The test can yield false positives, that is, a composite number may pass the test. But it cannot have false negatives, so a number that fails the test is definitely composite. The more bases for which the test holds, the more likely it is that the tested number is prime. It has been computationally verified that there are no false positives below 3317044064679887385961981 when tested with prime bases 2, 3, 5, ..., 41. So although the algorithm is probabilistic, it functions as a deterministic test for all numbers below this bound when tested with these 13 bases.
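The deterministic variant described above can be sketched in a few lines of Python (a sketch of the standard algorithm with those 13 prime bases, not the site's JavaScript `isPrimeByMR`):

```python
def is_prime(n):
    """Deterministic Miller-Rabin for n < 3317044064679887385961981."""
    if n < 2:
        return False
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41):
        if a % n == 0:
            continue  # n is one of the bases itself, hence prime
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False  # a is a witness: n is definitely composite
    return True
```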
senfiaj
There is also the segmented Sieve of Eratosthenes. It has similar performance but uses much less memory: it only needs to keep the primes from 2 to sqrt(n). For example, for n = 1000000, the RAM has to store only 168 additional numbers.
I use this algorithm here https://surenenfiajyan.github.io/prime-explorer/
You can combine the Sieve and Wheel techniques to reduce the memory requirements dramatically. There's no need to use a bit for numbers that you already know can't be prime. You can find a Python implementation at https://stackoverflow.com/a/62919243/5987
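As a sketch of the sieve-plus-wheel idea (here a mod-6 wheel, so flags are stored only for numbers coprime to 2 and 3, roughly a third of the range; the linked answer uses a larger wheel):

```python
def _idx(v):
    # Position of v in the sequence 5, 7, 11, 13, 17, 19, ... (v coprime to 6).
    m, r = divmod(v, 6)
    return 2 * m if r == 5 else 2 * m - 1

def _val(i):
    # Inverse of _idx.
    m, r = divmod(i + 1, 2)
    return 6 * m + (5 if r == 1 else 1)

def wheel_sieve(n):
    """Sieve that stores a flag only for numbers coprime to 2 and 3."""
    base = [p for p in (2, 3) if p <= n]
    if n < 5:
        return base
    size = (n - 5) // 6 + 1 + ((n - 7) // 6 + 1 if n >= 7 else 0)
    flags = bytearray([1]) * size
    for i in range(size):
        if flags[i]:
            p = _val(i)
            if p * p > n:
                break
            # Composites with smallest prime factor p are p*m,
            # where m runs over wheel values >= p.
            for j in range(i, size):
                c = p * _val(j)
                if c > n:
                    break
                flags[_idx(c)] = 0
    return base + [_val(i) for i in range(size) if flags[i]]
```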
forinti
If you take all 53 8-bit primes, you can use modular arithmetic with a residue base to work with numbers up to
64266330917908644872330635228106713310880186591609208114244758680898150367880703152525200743234420230
This would require 334 bits.
Do you know the https://en.wikipedia.org/wiki/Sieve_of_Atkin? It's mind-blowing.
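The residue-number-system idea above can be sketched with a handful of 8-bit primes (a toy setup with four moduli rather than all 53; arithmetic happens componentwise and the result is recovered with the Chinese Remainder Theorem):

```python
from math import prod

# Four pairwise-coprime 8-bit moduli; the full scheme would use all 53 primes.
MODULI = (251, 241, 239, 233)
M = prod(MODULI)  # any integer in [0, M) is uniquely represented

def to_residues(x):
    return tuple(x % m for m in MODULI)

def mul(a, b):
    # Multiply independently per modulus -- no carries between components.
    return tuple(x * y % m for x, y, m in zip(a, b, MODULI))

def from_residues(r):
    # Chinese Remainder Theorem reconstruction.
    x = 0
    for ri, m in zip(r, MODULI):
        mi = M // m
        x += ri * mi * pow(mi, -1, m)
    return x % M
```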
This got me through many of the first 100 problems on Project Euler:
n = 1000000  # must be even
sieve = [True] * (n // 2)
for i in range(3, int(n**0.5) + 1, 2):
    if sieve[i // 2]:
        sieve[i*i // 2::i] = [False] * ((n - i*i - 1) // (2*i) + 1)
…
# x is prime if x % 2 and sieve[x // 2]
Edit: I guess I irked someone. :/ Yes this is a memory hog, but to me beautiful because it’s so tiny and simple. I never tried very hard, but I wonder if it could be made a real one-liner.
davispeck
I always like seeing implementations that start from trial division and gradually introduce optimizations like wheel factorization.
It makes the trade-offs much clearer than jumping straight to a complex sieve.
reader9274
Very well written
ZyanWu
> There is a long way to go from here. Kim Walisch's primesieve can generate all 32-bit primes in 0.061s (though this is without writing them to a file)
Oh, come on, just use a bash redirection and be done with it. It takes 1 minute and you'd have another result for comparison.
marxisttemp
Why include writing the primes to a file instead of, say, standard output? That increases the optimization space drastically and the IO will eclipse all the careful bitwise math
Does having the primes in a file even allow faster is-prime lookup of a number?
logicallee
There are also very fast primality tests that work statistically. One such test is Miller-Rabin; I tested it in the browser here [1] and it can check them all in about three minutes on my phone.
[1] https://claude.ai/public/artifacts/baa198ed-5a17-4d04-8cef-7...