Modulo of negative integers in Go

I am learning Go and I come from a Python background.
Recently, I stumbled onto a behaviour of the % (modulo) operator which is different from the corresponding operator in Python. Contrary to the usual definition of the modulo operation and remainder, taking a negative integer modulo a positive integer in Go returns a negative value.
Example:
Python
a, b, n = -5, 5, 3
for i in range(a, b):
    print(i % n)
Output:
1
2
0
1
2
0
1
2
0
1
Go
a, b, n := -5, 5, 3
for i := a; i < b; i++ {
    fmt.Println(i % n)
}
Output:
-2
-1
0
-2
-1
0
1
2
0
1
After reading about the modulo operator and a few similar questions asked about the reason behind these differences, I understand that they are due to the design goals of the languages concerned.
Is there a built-in functionality in Go which replicates the modulus operation of Python?
Alternate: Is there an internal method for computing the "modulus" instead of the "remainder"?

See this comment by one of the language designers:
There are several reasons for the current definition:
the current semantics for % is directly available as a result from x86 architectures
it would be confusing to change the meaning of the elementary operator % and not change its name
it's fairly easy to compute another modulus from the % result
Note that % computes the "remainder" as opposed to the "modulus".
There is not an operator or function in the standard library which replicates the modulus operation of Python.
It is possible to write a function which replicates the modulus operation of Python:
func modLikePython(d, m int) int {
    res := d % m
    if (res < 0 && m > 0) || (res > 0 && m < 0) {
        return res + m
    }
    return res
}
Note that in Python 5 % -3 is -1 and this code replicates that behavior as well. If you don't want that, remove the second part after || in the if statement.

Is there an internal method for computing the "modulus" instead of the "remainder"?
Note that % computes the "remainder" as opposed to the "modulus".
These quotes are a bit misleading.
Look up any definition of "modulo"; by and large it will say that it is the remainder after division. The problem is that saying "the remainder" implies there is only one. When negative numbers are involved, there can be more than one distinct remainder. The Wikipedia page for Remainder differentiates between the least positive remainder and the least absolute remainder. You could also add a least negative remainder (least negative meaning negative, but closest to 0).
Generally for modulus operators, if it returned a positive value, it was the least positive remainder and if it returned a negative value, it was the least negative remainder. The sign of the returned value can be determined in multiple ways. For example given c = a mod b, you could define the sign of c to be
The sign of a (what % does in Go)
The sign of b (what % does in Python)
Non-negative always
Here's a list of programming languages and their modulo implementations defined in this way https://en.wikipedia.org/wiki/Modulo_operation#In_programming_languages
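Those three sign conventions are easy to compare side by side in Go. A small sketch (the function names are mine, not standard library names):

```go
package main

import "fmt"

// truncatedMod is Go's built-in %: the result takes the sign of the dividend.
func truncatedMod(a, b int) int { return a % b }

// flooredMod matches Python's %: the result takes the sign of the divisor.
func flooredMod(a, b int) int { return (a%b + b) % b }

// euclideanMod is always non-negative, whatever the signs of a and b.
func euclideanMod(a, b int) int {
    r := a % b
    if r < 0 {
        if b > 0 {
            r += b
        } else {
            r -= b
        }
    }
    return r
}

func main() {
    fmt.Println(truncatedMod(-5, 3)) // -2
    fmt.Println(flooredMod(-5, 3))   // 1
    fmt.Println(euclideanMod(-5, 3)) // 1
    fmt.Println(flooredMod(5, -3))   // -1
    fmt.Println(euclideanMod(5, -3)) // 2
}
```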
Here's a branchless way to replicate Python's % operator with a Go function
func mod(a, b int) int {
    return (a%b + b) % b
}
To reiterate, this follows the rule:
given c = a mod b, the sign of c will be the sign of b.
Or in other words, the modulus result has the same sign as the divisor

math/big does Euclidean modulus:
package main

import "math/big"

func mod(x, y int64) int64 {
    bx, by := big.NewInt(x), big.NewInt(y)
    return new(big.Int).Mod(bx, by).Int64()
}

func main() {
    z := mod(-5, 3)
    println(z == 1)
}
https://golang.org/pkg/math/big#Int.Mod

On Q2, you could use:
func modNeg(v, m int) int {
    return (v%m + m) % m
}
Would output:
modNeg(-1, 5) => 4
modNeg(-2, 3) => 1

In most cases, just add the second number to the result:
Python:
-8%6 => 4
Golang:
-8%6 + 6 => 4
So the function will be like this:
func PyMod(d int, m int) int {
    d %= m
    if d < 0 {
        d += m
    }
    return d
}
It works for some other situations such as a%-b in addition to -a%b.
But if you want it to work even for -a%-b, do like this:
func PyMod(d int, m int) int {
    // Add this condition at the top
    if d < 0 && m < 0 {
        return d % m
    }
    d %= m
    if d < 0 {
        d += m
    }
    return d
}

Related

What kind of operation is x%y in golang?

I'm going through some golang tutorials, and I came across this for loop:
for n := 0; n <= 5; n++ {
    if n%2 == 0 {
        continue
    }
    fmt.Println(n)
}
I'm confused by the n%2 statement.
The output of this is:
1
3
5
It looks like these are not multiples of 2, but I'm not understanding the == 0 part of the statement if that's the case? Is there a resource that talks about this operation, or something I should look up?
This is called the remainder operator, it returns the remainder of a division operation. Hence X % Y == 0 will be true when X can be evenly divided by Y.
This operator, and the use of % to represent it, is common in many languages.
See related question: Understanding The Modulus Operator %
It's the remainder/modulo operator. It returns the remainder of the division by the given number:
https://en.wikipedia.org/wiki/Modulo_operation
This code fragment prints all the odd numbers.

Given a number, produce another random number that is the same every time and distinct from all other results

Basically, I would like help designing an algorithm that takes a given number, and returns a random number that is unrelated to the first number. The stipulations being that a) the output number will always be the same for the same input number, and b) within a certain range (ex. 1-100), all output numbers are distinct. i.e., no two different input numbers under 100 will give the same output number.
I know it's easy to do by creating an ordered list of numbers, shuffling them randomly, and then returning the input's index. But I want to know if it can be done without any caching at all. Perhaps with some kind of hashing algorithm? Mostly the reason for this is that if the range of possible outputs were much larger, say 10000000000, then it would be ludicrous to generate an entire range of numbers and then shuffle them randomly, if you were only going to get a few results out of it.
Doesn't matter what language it's done in, I just want to know if it's possible. I've been thinking about this problem for a long time and I can't think of a solution besides the one I've already come up with.
Edit: I just had another idea; it would be interesting to have another algorithm that returned the reverse of the first one. Whether or not that's possible would be an interesting challenge to explore.
This sounds like a non-repeating random number generator. There are several possible approaches to this.
As described in this article, we can generate them by selecting a prime number p that satisfies p % 4 = 3 and is large enough (greater than the maximum value in the output range), and mapping inputs this way:
int randomNumberUnique(int range_len, int p, int x)
    if (x * 2 < p)
        return (x * x) % p;
    else
        return p - (x * x) % p;
This algorithm will cover all values in [0 , p) for an input in range [0 , p).
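A sketch of that mapping in Go, using p = 11 (my choice, small enough that the whole permutation can be printed; any prime with p % 4 == 3 that exceeds the output range works the same way):

```go
package main

import "fmt"

// permute maps each x in [0, p) to a distinct value in [0, p),
// provided p is a prime with p % 4 == 3.
func permute(x, p uint64) uint64 {
    r := x * x % p
    if x*2 < p {
        return r
    }
    return p - r
}

func main() {
    const p = 11 // prime, and 11 % 4 == 3
    seen := make(map[uint64]bool)
    for x := uint64(0); x < p; x++ {
        y := permute(x, p)
        fmt.Println(x, "->", y)
        seen[y] = true
    }
    fmt.Println("distinct outputs:", len(seen)) // 11: the mapping is a bijection
}
```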
Here's an example in C#:
private void DoIt()
{
    const long m = 101;
    const long x = 387420489; // must be coprime to m
    var multInv = MultiplicativeInverse(x, m);
    var nums = new HashSet<long>();
    for (long i = 0; i < 100; ++i)
    {
        var encoded = i * x % m;
        var decoded = encoded * multInv % m;
        Console.WriteLine("{0} => {1} => {2}", i, encoded, decoded);
        if (!nums.Add(encoded))
        {
            Console.WriteLine("Duplicate");
        }
    }
}

private long MultiplicativeInverse(long x, long modulus)
{
    return ExtendedEuclideanDivision(x, modulus).Item1 % modulus;
}

private static Tuple<long, long> ExtendedEuclideanDivision(long a, long b)
{
    if (a < 0)
    {
        var result = ExtendedEuclideanDivision(-a, b);
        return Tuple.Create(-result.Item1, result.Item2);
    }
    if (b < 0)
    {
        var result = ExtendedEuclideanDivision(a, -b);
        return Tuple.Create(result.Item1, -result.Item2);
    }
    if (b == 0)
    {
        return Tuple.Create(1L, 0L);
    }
    var q = a / b;
    var r = a % b;
    var rslt = ExtendedEuclideanDivision(b, r);
    var s = rslt.Item1;
    var t = rslt.Item2;
    return Tuple.Create(t, s - q * t);
}
That generates numbers in the range 0-100, from input in the range 0-100. Each input results in a unique output.
It also shows how to reverse the process, using the multiplicative inverse.
You can extend the range by increasing the value of m. x must be coprime with m.
Code cribbed from Eric Lippert's article, A practical use of multiplicative inverses, and a few of the previous articles in that series.
You cannot have completely unrelated output (particularly if you want the reverse mapping as well).
There is a concept of the modular inverse of a number, but this only works if the range bound is a prime; e.g., 100 will not work, you would need 101 (a prime). This can provide you a pseudo-random number if you want.
Here is the concept of modulo inverse:
If there are two numbers a and b, such that
(a * b) % p = 1
where p is any number, then
a and b are modular inverses of each other.
For this to be true, if we have to find the modular inverse of a with respect to a number p, then a and p must be coprime, i.e. gcd(a, p) = 1.
So, for all numbers in a range to have modular inverses, the range bound must be a prime number.
A few outputs for range bound 101 will be:
1 == 1
2 == 51
3 == 34
4 == 76
etc.
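Since 101 is prime, those inverses can be computed with Fermat's little theorem (a^(p-2) ≡ a^-1 mod p). A Go sketch (function names are mine):

```go
package main

import "fmt"

// powMod computes b^e mod m by binary exponentiation.
func powMod(b, e, m int64) int64 {
    r := int64(1)
    b %= m
    for e > 0 {
        if e&1 == 1 {
            r = r * b % m
        }
        b = b * b % m
        e >>= 1
    }
    return r
}

// modInverse returns a^-1 mod p for prime p, via Fermat's little theorem.
func modInverse(a, p int64) int64 {
    return powMod(a, p-2, p)
}

func main() {
    const p = 101
    for a := int64(1); a <= 4; a++ {
        inv := modInverse(a, p)
        fmt.Printf("%d == %d (a*inv mod p = %d)\n", a, inv, a*inv%p)
    }
}
```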
EDIT:
Hey... actually, you know, you can use the combined approach of the modular inverse and the method defined by @Paul. Since every pair will be unique and all numbers will be covered, your random number can be:
random(k) = randomUniqueNumber(ModuloInverse(k), p) //this is Paul's function

What's a more efficient implementation of this puzzle?

The puzzle
For every input number n (n < 10) there is an output number m such that:
m's first digit is n
m is an n digit number
every 2 digit sequence inside m must be a different prime number
The output should be m where m is the smallest number that fulfils the conditions above. If there is no such number, the output should be -1;
Examples
n = 3 -> m = 311
n = 4 -> m = 4113 (note that this is not 4111 as that would be repeating 11)
n = 9 -> m = 971131737
My somewhat working solution
Here's my first stab at this, the "brute force" approach. I am looking for a more elegant solution as this is very inefficient as n grows larger.
public long GetM(int n)
{
    long start = n * (long)Math.Pow(10, n - 1);
    long end = n * (long)Math.Pow(10, n);
    for (long x = start; x < end; x++)
    {
        long xCopy = x;
        bool allDigitsPrime = true;
        List<int> allPrimeNumbers = new List<int>();
        while (xCopy >= 10)
        {
            int lastDigits = (int)(xCopy % 100);
            bool lastDigitsSame = allPrimeNumbers.Contains(lastDigits);
            if (!IsPrime(lastDigits) || lastDigitsSame)
            {
                allDigitsPrime = false;
                break;
            }
            xCopy /= 10;
            allPrimeNumbers.Add(lastDigits);
        }
        if (n != 1 && allDigitsPrime)
        {
            return x;
        }
    }
    return -1;
}
Initial thoughts on how this could be made more efficient
So, clearly the bottleneck here is traversing through the whole list of numbers that could fulfil this condition from n.... to (n+1).... . Instead of simply incrementing the number of every iteration of the loop, there must be some clever way of skipping numbers based on the requirement that the 2 digit sequences must be prime. For instance for n = 5, there is no point going through 50000 - 50999 (50 isn't prime), 51200 - 51299 (12 isn't prime), but I wasn't quite sure how this could be implemented or if it would be enough of an optimization to make the algorithm run for n=9.
Any ideas on this approach or a different optimization approach?
You don't have to try all numbers. You can instead use a different strategy, summed up as "try appending a digit".
Which digit? Well, a digit such that
it forms a prime together with your current last digit
the prime formed has not occurred in the number before
This should be done recursively (not iteratively), because you may run out of options and then you'd have to backtrack and try a different digit earlier in the number.
This is still an exponential time algorithm, but it avoids most of the search space because it never tries any numbers that don't fit the rule that every pair of adjacent digits must form a prime number.
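That recursive append-a-digit search can be sketched in Go (names are mine). Trying smaller digits first guarantees the first complete solution found is the smallest m:

```go
package main

import "fmt"

// isPrime suffices for the two-digit numbers formed by adjacent digits.
func isPrime(p int) bool {
    if p < 2 {
        return false
    }
    for d := 2; d*d <= p; d++ {
        if p%d == 0 {
            return false
        }
    }
    return true
}

// solve extends digits to length n so that every adjacent digit pair forms
// a distinct prime, trying smaller digits first; nil means a dead end.
func solve(digits []int, used map[int]bool, n int) []int {
    if len(digits) == n {
        out := make([]int, n)
        copy(out, digits)
        return out
    }
    last := digits[len(digits)-1]
    for d := 0; d <= 9; d++ {
        p := last*10 + d
        if isPrime(p) && !used[p] {
            used[p] = true
            if res := solve(append(digits, d), used, n); res != nil {
                return res
            }
            used[p] = false // backtrack and try the next digit
        }
    }
    return nil
}

func getM(n int) int64 {
    digits := solve([]int{n}, map[int]bool{}, n)
    if digits == nil {
        return -1
    }
    var m int64
    for _, d := range digits {
        m = m*10 + int64(d)
    }
    return m
}

func main() {
    for n := 1; n <= 9; n++ {
        fmt.Println(n, getM(n)) // e.g. 3 -> 311, 4 -> 4113, 9 -> 971131737
    }
}
```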
Here's a possible solution, in R, using recursion. It would be interesting to build a tree of all the possible paths.
# For every input number n (n < 10)
# there is an output number m such that:
# m's first digit is n
# m is an n digit number
# every 2 digit sequence inside m must be a different prime number
# Need to select the smallest m that meets the criteria
library('numbers')
mNumHelper <- function(cn, n, pr, cm=NULL) {
  if (cn == 1) {
    if (n == 1) {
      return(1)
    }
    firstDigit <- n
  } else {
    firstDigit <- mod(cm, 10)
  }
  possibleNextNumbers <- pr[floor(pr/10) == firstDigit]
  nPossible <- length(possibleNextNumbers)
  if (nPossible == 1) {
    nextPrime <- possibleNextNumbers
  } else {
    # nextPrime <- sample(possibleNextNumbers, 1)
    nextPrime <- min(possibleNextNumbers)
  }
  pr <- pr[which(pr != nextPrime)]
  if (is.null(cm)) {
    cm <- nextPrime
  } else {
    cm <- cm * 10 + mod(nextPrime, 10)
  }
  cn <- cn + 1
  if (cn < n) {
    cm <- mNumHelper(cn, n, pr, cm)
  }
  return(cm)
}

mNum <- function(n) {
  pr <- Primes(10, 100)
  m <- mNumHelper(1, n, pr)
}

for (i in seq(1, 9)) {
  print(paste('i', i, 'm', mNum(i)))
}
Sample output
[1] "i 1 m 1"
[1] "i 2 m 23"
[1] "i 3 m 311"
[1] "i 4 m 4113"
[1] "i 5 m 53113"
[1] "i 6 m 611317"
[1] "i 7 m 7113173"
[1] "i 8 m 83113717"
[1] "i 9 m 971131737"
Solution updated to select the smallest prime from the set of available primes, and remove bad path check since it's not required.
I just made a list of the two-digit prime numbers, then solved the problem by hand; it took only a few minutes. Not every problem requires a computer!

Is it possible to compute the minimum of three numbers by using two comparisons at the same time?

I've been trying to think up of some way that I could do two comparisons at the same time to find the greatest/least of three numbers. Arithmetic operations on them are considered "free" in this case.
That is to say, the classical way of finding the greater of two, and then comparing it to the third number isn't valid in this case because one comparison depends on the result of the other.
Is it possible to use two comparisons where this isn't the case? I was thinking maybe comparing the differences of the numbers somehow or their products or something, but came up with nothing.
Just to reemphasize, two comparisons are still done, just that neither comparison relies on the result of the other comparison.
Great answers so far, thanks guys
Ignoring the possibility of equal values ("ties"), there are 3! = 6 possible orderings of three items. If a comparison yields exactly one bit, then two comparisons can only encode 2*2 = 4 possible configurations, and 4 < 6. IOW: you cannot decide the order of three items using two fixed comparisons.
Using a truth table:
a b c | min | a<b a<c b<c | condition needed using only a<b and a<c
------+-----+-------------+----------------------------------------
1 2 3 |  a  |  1   1   1  | (ab==1 && ac==1)
1 3 2 |  a  |  1   1   0  | ...
2 1 3 |  b  |  0   1   1  | (ab==0 && ac==1)
3 1 2 |  b  |  0   0   1  | (ab==0 && ac==0) <<--- (*)
2 3 1 |  c  |  1   0   0  | (ab==1 && ac==0)
3 2 1 |  c  |  0   0   0  | (ab==0 && ac==0) <<--- (*)
As you can see, you cannot distinguish the two cases marked by (*), when using only the a<b and a<c comparisons. (choosing another set of two comparisons will of course fail similarly, (by symmetry)).
But it is a pity: two bits could in principle encode up to four outcomes, yet we fail to distinguish the three possible answers ("a is min", "b is min", "c is min") with two fixed comparisons. (We could with a third comparison, or by choosing the second comparison based on the outcome of the first.)
I think it's possible (the following is for the min, according to the original form of the question):
B_lt_A = B < A
C_lt_min_A_B = C < (A + B - abs(A - B)) / 2
and then you combine these (I have to write it sequentially, but this is rather a 3-way switch):
if (C_lt_min_A_B) then C is the min
else if (B_lt_A) then B is the min
else A is the min
You might argue that the abs() implies a comparison, but that depends on the hardware. There is a trick to do it without comparison for integers. For IEEE 754 floating point it's just a matter of forcing the sign bit to zero.
Regarding (A + B - abs(A - B)) / 2: this is (A + B) / 2 - abs(A - B) / 2, i.e., the minimum of A and B is half the distance between A and B down from their midpoint. This can be applied again to yield min(A,B,C), but then you lose the identity of the minimum, i.e., you only know the value of the minimum, but not where it comes from.
One day we may find that parallelizing the 2 comparisons gives a better turnaround time, or even throughput, in some situation. Who knows, maybe for some vectorization, or for some MapReduce, or for something we don't know about yet.
If you were only talking integers, I think you can do it with zero comparisons using some math and a bit fiddle. Given three int values a, b, and c:
int d = ((a + b) - Abs(a - b)) / 2; // find d = min(a,b)
int e = ((d + c) - Abs(d - c)) / 2; // find min(d,c)
with Abs(x) implemented as
int Abs(int x) {
    int mask = x >> 31;
    return (x + mask) ^ mask;
}
Not extensively tested, so I may have missed something. Credit for the Abs bit twiddle goes to these sources
How to compute the integer absolute value
http://graphics.stanford.edu/~seander/bithacks.html#IntegerAbs
From Bit Twiddling Hacks
r = y ^ ((x ^ y) & -(x < y)); // min(x, y)
min = r ^ ((z ^ r) & -(z < r)); // min(z, r)
Two comparisons!
How about this to find the minimum:
If (b < a)
    Swap(a, b)
If (c < a)
    Swap(a, c)
Return a
You can do this with zero comparisons in theory, assuming 2's complement number representation (and that right shifting a signed number preserves its sign).
min(a, b) = (a+b-abs(a-b))/2
abs(a) = (2*(a >> bit_depth) + 1) * a (where bit_depth is one less than the width of the type, e.g. 31 for 32-bit integers, so the shift replicates the sign bit)
and then
min(a,b,c) = min(min(a,b),c)
This works because, assuming a >> bit_depth gives 0 for positive numbers and -1 for negative numbers, 2*(a >> bit_depth) + 1 gives 1 for positive numbers and -1 for negative numbers. This is the signum function, and we get abs(a) = signum(a) * a.
Then it's just a matter of the min(a,b) formula. This can be demonstrated by going through the two possibilities:
case min(a,b) = a:
min(a,b) = (a+b - -(a-b))/2
min(a,b) = (a+b+a-b)/2
min(a,b) = a
case min(a,b) = b:
min(a,b) = (a+b-(a-b))/2
min(a,b) = (a+b-a+b)/2
min(a,b) = b
So the formula for min(a,b) works.
The assumptions above only apply to the abs() function, if you can get a 0-comparison abs() function for your data type then you're good to go.
For example, IEEE754 floating point data has a sign bit as the top bit so the absolute value simply means clearing that bit. This means you can also use floating point numbers.
And then you can extend this to min of N numbers in 0 comparisons.
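Here is that idea as a runnable Go sketch (Go's int64 is two's complement and >> is an arithmetic, sign-preserving shift; the additions must not overflow):

```go
package main

import "fmt"

// branchlessAbs: mask is 0 for non-negative x and -1 for negative x,
// so (x + mask) ^ mask negates x exactly when x is negative.
func branchlessAbs(x int64) int64 {
    mask := x >> 63
    return (x + mask) ^ mask
}

// branchlessMin is (a+b-|a-b|)/2: the midpoint minus half the distance.
func branchlessMin(a, b int64) int64 {
    return (a + b - branchlessAbs(a-b)) / 2
}

func main() {
    fmt.Println(branchlessMin(3, 7))                    // 3
    fmt.Println(branchlessMin(branchlessMin(5, -3), 4)) // -3: min of three values
}
```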
In practice though, it's hard to imagine this method will beat anything not intentionally slower. This is all about using less than 3 independent comparisons, not about making something faster than the straightforward implementation in practice.
if cos(1.5 * atan2(sqrt(3)*(B - C), 2*A - B - C)) > 0 then
    A is the max
else if cos(1.5 * atan2(sqrt(3)*(C - A), 2*B - C - A)) > 0 then
    B is the max
else
    C is the max

What is the fastest algorithm for checking if a 14 digit number is prime?

I need the fastest possible program for a primality check on a 14-digit (or bigger) number. I searched on multiple sites but I'm not sure the ones I found will work with numbers as big as this.
A 14-digit number is not very big as far as prime testing is concerned. When the number has some special structure, specialised tests may be available that are faster (e.g. if it's a Mersenne number), but in general, the fastest tests for numbers in that range are
Start with trial division by some small numbers. If you plan to do many checks, it's worth making a list of the n smallest primes so that the trial division only divides by primes; for a single test, just avoiding even trial divisors (except 2) and multiples of 3 (except 3) is good enough. What "small" means is up to interpretation; choices between 100 and 10000 for the cutoff seem reasonable. That many (few) divisions are still done quickly, and they find the overwhelming majority of composite numbers.
If the trial division has not determined the number as composite (or prime, if it's actually smaller than the square of the cutoff), you can use one of the fast probabilistic prime tests that are known to be definitive for the range you're interested in, the usual candidates are
the Baillie/Pomerance/Selfridge/Wagstaff test, a strong Fermat test for base 2, followed by a test for being a square and a (strong) Lucas test. That has no false positives below 2^64, so it's definitive for numbers with 14-18 digits.
strong Fermat tests for a collection of bases known to be definitive for the range considered. According to Chris Caldwell's prime pages, "If n < 341,550,071,728,321 is a 2, 3, 5, 7, 11, 13 and 17-SPRP, then n is prime".
Somewhat slower, and considerably harder to implement, would be the fast deterministic general-purpose prime tests, APR-CL, ECPP, AKS. They should already beat pure trial division for numbers of 14 or more digits, but be much slower than the incidentally-known-to-be-correct-for-the-range probabilistic tests.
But, depending on your use-case, the best method could also be to sieve a contiguous range of numbers (if you want to find the primes between 10^14 - 10^9 and 10^14, for example, a sieve would be much faster than several hundred million fast individual prime tests).
As Daniel Fischer notes, a 14-digit number isn't particularly large for primality testing. That gives you several options. The first is simple trial division:
function isPrime(n)
    d := 2
    while d * d <= n
        if n % d == 0
            return Composite
        d := d + 1
    return Prime
The square root of 10^14 is 10^7, so that might take a little while. Somewhat faster is to use a prime wheel:
struct wheel(delta[0 .. len-1], len, cycle)

w := wheel([1,2,2,4,2,4,2,4,6,2,6], 11, 3)

function isPrime(n, w)
    d := 2; next := 0
    while d * d <= n
        if n % d == 0
            return Composite
        else
            d := d + w.delta[next]
            next := next + 1
            if next == w.len
                next := w.cycle
    return Prime
That should speed up the naive trial division by a factor of 2 or 3 times, which might be sufficient for your needs.
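The wheel pseudocode above could be rendered in Go roughly like this (a sketch that hard-codes a 2-3-5 wheel as a delta cycle instead of the wheel struct):

```go
package main

import "fmt"

// isPrime does trial division, but after 2, 3 and 5 it only tries
// divisors coprime to 30, stepping through a fixed cycle of deltas.
func isPrime(n uint64) bool {
    if n < 2 {
        return false
    }
    for _, p := range []uint64{2, 3, 5} {
        if n%p == 0 {
            return n == p
        }
    }
    deltas := [8]uint64{4, 2, 4, 2, 4, 6, 2, 6} // 7, 11, 13, 17, 19, 23, 29, 31, ...
    d, i := uint64(7), 0
    for d*d <= n {
        if n%d == 0 {
            return false
        }
        d += deltas[i]
        i = (i + 1) % len(deltas)
    }
    return true
}

func main() {
    fmt.Println(isPrime(7919)) // true: 7919 is the 1000th prime
    fmt.Println(isPrime(7917)) // false: 7917 = 3 * 2639
}
```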
A better option is probably a Miller-Rabin pseudoprimality tester. Start with a strong pseudoprime test:
function isStrongPseudoprime(n, a)
    d := n - 1; s := 0
    while d is even
        d := d / 2; s := s + 1
    t := powerMod(a, d, n)
    if t == 1 return ProbablyPrime
    while s > 0
        if t == n - 1
            return ProbablyPrime
        t := (t * t) % n
        s := s - 1
    return DefinitelyComposite
Each a for which the function returns ProbablyPrime is a witness to the primality of n:
function isPrime(n)
    for a in [2,3,5,7,11,13,17]
        if isStrongPseudoprime(n, a) == DefinitelyComposite
            return DefinitelyComposite
    return ProbablyPrime
As Fischer noted, for n < 10^14 this is perfectly reliable, according to a paper by Gerhard Jaeschke; if you want to test the primality of larger numbers, choose 25 witnesses a at random. The powerMod(b, e, m) function returns b^e (mod m). If your language doesn't provide that function, you can efficiently calculate it like this:
function powerMod(b, e, m)
    x := 1
    while e > 0
        if e % 2 == 1
            x := (b * x) % m
        b := (b * b) % m
        e := floor(e / 2)
    return x
If you're interested in the math behind this test, I modestly recommend the essay Programming with Prime Numbers at my blog.
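For Go specifically, the standard library already packages this combination: math/big's Int.ProbablyPrime runs a Miller-Rabin test with the given number of rounds plus a Baillie-PSW test, and is documented to be 100% accurate for inputs below 2^64, which comfortably covers 14-digit numbers. A minimal sketch:

```go
package main

import (
    "fmt"
    "math/big"
)

// isPrime is exact for any uint64, per the math/big documentation.
func isPrime(n uint64) bool {
    // 0 extra Miller-Rabin rounds: the built-in Baillie-PSW test
    // is already definitive below 2^64.
    return new(big.Int).SetUint64(n).ProbablyPrime(0)
}

func main() {
    fmt.Println(isPrime(104729)) // true: 104729 is the 10000th prime
    fmt.Println(isPrime(104730)) // false
}
```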
Loop a variable x, incrementing by 1, until it reaches the value of the number num.
While looping, check whether num is divisible by x using modulo. If the remainder is 0, stop.
Ex.
int x = 2;
int mod = 1;
while (mod != 0)
{
    mod = num % x;
    x++;
}
Tadah! Prime number checker... Not sure if there's a faster way than this though.
I made a much faster algorithm recently. It can easily handle a 14-digit number in a few seconds. Just paste this code anywhere that accepts JavaScript and run it. Keep in mind that the version of JavaScript must be recent as of this post, as it needs BigInt support to perform these operations. Typically the latest browsers (Chrome, Firefox, Safari) will support features like this, but it's anyone's guess whether other browsers such as Microsoft IE will support it properly.
--
The algorithm works by using a combination of some of the previously mentioned algorithmic ideas.
However...
This algorithm was actually GENERATED by visualizing sets of prime numbers and multiplying them by various different values and then performing various mod operations to those values and using those numbers to create a 3d representation of all prime numbers which revealed the true patterns that exist within prime number sets.
var prime_nums = [2n,3n,5n,7n,11n,13n,17n,19n,23n,29n,31n,37n,41n,43n,47n,53n,59n,61n,67n,71n,73n,79n,83n,89n,97n,101n,103n,107n,109n,113n,127n,131n,137n,139n,149n,151n,157n,163n,167n,173n,179n,181n,191n,193n,197n,199n,211n,223n,227n,229n,233n,239n,241n,251n,257n,263n,269n,271n,277n,281n,283n,293n,307n,311n,313n,317n,331n,337n,347n,349n,353n,359n,367n,373n,379n,383n,389n,397n,401n,409n,419n,421n,431n,433n,439n,443n,449n,457n,461n,463n,467n,479n,487n,491n];
function isHugePrime(_num) {
    var num = BigInt(_num);
    var square_nums = [BigInt(9), BigInt(25), BigInt(49), BigInt(77), BigInt(1), BigInt(35), BigInt(55)];
    var z = BigInt(30);
    var y = num % z;
    var yList = [];
    yList.push(num % BigInt(78));
    var idx_guess = num / 468n;
    var idx_cur = 0;
    while ((z * z) < num) {
        z += BigInt(468);
        var l = prime_nums[prime_nums.length - 1];
        while (l < (z / BigInt(3))) {
            idx_cur++;
            l += BigInt(2);
            if (isHugePrime(l)) {
                prime_nums.push(l);
            }
        }
        y = num % z;
        yList.push(y);
    }
    for (var i = 0; i < yList.length; i++) {
        var y2 = yList[i];
        y = y2;
        if (prime_nums.includes(num)) { return true; }
        if (prime_nums.includes(y) || y == BigInt(1) || square_nums.includes(y)) {
            if (y != BigInt(1) && (num % y) != BigInt(0)) {
                for (var t = 0; t < prime_nums.length; t++) {
                    var r = prime_nums[t];
                    if ((num % r) == BigInt(0)) { return false; }
                }
                return true;
            }
            if (y == BigInt(1)) {
                var q = BigInt(num);
                for (var t = 0; t < prime_nums.length; t++) {
                    var r = prime_nums[t];
                    if ((q % r) == BigInt(0)) { return false; }
                }
                return true;
            }
        }
    }
    return false;
}
