I have written Python code implementing the Miller-Rabin primality test.
The code finds the largest prime less than the input. Here is a snippet of the primality-testing function:
def rabinMiller(n,a) :
    if n%2 == 0 :
        if n == 2 :
            return 1
        else :
            return 0
    k = 0
    n1 = n - 1
    while n1%2 == 0 :
        k = k+1
        n1 = n1/2
    u = int(n1)
    b = pow(a,u,n)
    if b == 1 or b == n-1 :
        return 1
    for i in range(k-1) :
        b = pow(b,2,n)
        if b == n-1 : return 1
        if b == 1 : return 0
    return 0
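The driver around rabinMiller is not shown above; roughly, it counts down from the input and returns the first value that passes the test. Here is a self-contained sketch of that idea (not my exact code; the base list (2, 3, 5, 7) is only illustrative):

```python
# Self-contained sketch of a "largest prime below n" driver.
# is_probable_prime mirrors the structure of rabinMiller above, written
# with // so everything stays in exact integer arithmetic.

def is_probable_prime(n, bases=(2, 3, 5, 7)):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # write n - 1 as 2**k * u with u odd
    k, u = 0, n - 1
    while u % 2 == 0:
        k += 1
        u //= 2
    for a in bases:
        if a % n == 0:
            continue
        b = pow(a, u, n)
        if b == 1 or b == n - 1:
            continue
        for _ in range(k - 1):
            b = pow(b, 2, n)
            if b == n - 1:
                break
        else:
            return False       # no square reached n - 1: composite witness
    return True

def largest_prime_below(n):
    # step down from n - 1 until a value passes the test
    for m in range(n - 1, 1, -1):
        if is_probable_prime(m):
            return m
    return None

print(largest_prime_below(100))  # 97
```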
And here is C code implementing the Miller-Rabin primality test:
int rabinmiller(lld a,lld n){
    lld b,temp,k; // typedef long long int lld
    b = n - 1;
    while( b%2 == 0){
        b = b/2 ;
    }
    temp = power(a,b,n); //(a^b)%n by exponentiation by squaring algorithm
    if(temp == 1) return 1 ;
    for(;b < n-1;b*=2){
        k = mulmod(temp,temp,n); //k=(temp*temp)%n, care is taken so that there is no integer overflow
        if(k == 1 && temp != 1 && temp != n-1 ) return 0;
        temp = k ;
    }
    return temp == 1 ;
}
The above two functions give the same results up to 10^15, but for inputs around 10^17 and 10^18 the two programs give different results. I think they theoretically implement the same thing (I may be wrong), even though they are coded slightly differently. But I don't know why they behave differently for inputs above 10^15. Please help.
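One difference I noticed while writing this up (I am not sure whether it is the cause): in Python 3, / on integers is true division and returns a float, and above 2**53 a float cannot represent every integer, so n1 = n1/2 followed by int(n1) may not reproduce exact integer division at these magnitudes. For example:

```python
n = 10**17 + 7     # chosen only for its magnitude; no claim that n is prime
n1 = n - 1         # 100000000000000006

# Near 5*10**16, adjacent floats are 8 apart, so the true quotient
# 50000000000000003 gets rounded when computed with float division.
print(int(n1 / 2))   # 50000000000000000
print(n1 // 2)       # 50000000000000003  (exact integer division)
```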
Note: the seed (= a) passed to the rabinMiller(n,a) function is the same in both programs, and it is not randomly chosen (unlike what the original algorithm specifies).
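Since the base is fixed, even agreement between the two programs is not conclusive: a single fixed base can be fooled by strong pseudoprimes. For example, 2047 = 23 * 89 is composite but passes the base-2 round:

```python
# 2047 = 23 * 89 is composite, yet 2046 = 2 * 1023 with 1023 odd and
# 2**1023 % 2047 == 1, so a base-2 Miller-Rabin round reports it prime.
print(pow(2, 1023, 2047))  # 1
print(2047 == 23 * 89)     # True
```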