Sum of the largest odd divisors of the first n numbers - algorithm

I've been working on TopCoder recently and I stumbled upon this question, which I can't quite understand.
The question is to find F(n) = f(1) + f(2) + ... + f(n) for a given n, where f(x) is the largest odd divisor of x.
There are many trivial solutions for this; however, I found this one very intriguing.
long compute(long n) {
    if (n == 0) return 0;
    long k = (n + 1) / 2;
    return k * k + compute(n / 2);
}
However, I don't quite understand how to obtain a recursive relation from a problem statement such as this. Could someone help out?

I believe they are using the following facts:
f(2k+1) = 2k+1, i.e. the largest odd divisor of an odd number is the number itself.
f(2k) = f(k), i.e. the largest odd divisor of an even number 2k is the same as the largest odd divisor of k.
The sum of the first k odd numbers is equal to k^2.
Now split {1, 2, ..., 2m+1} into the odds {1, 3, 5, 7, ...} and the evens {2, 4, 6, ..., 2m}, and try to apply the above facts to each part.
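To convince yourself that these facts produce the recurrence in the question, here is a quick brute-force cross-check (a sketch in Python; `f` and `F_recursive` are names I chose):

```python
def f(n):
    # largest odd divisor: strip factors of two
    while n % 2 == 0:
        n //= 2
    return n

def F_recursive(n):
    # odd numbers 1, 3, ..., <= n contribute k*k where k = (n + 1) // 2;
    # even numbers 2, 4, ..., <= n contribute F(n // 2), since f(2m) = f(m)
    if n == 0:
        return 0
    k = (n + 1) // 2
    return k * k + F_recursive(n // 2)

# the recurrence matches the definition F(n) = f(1) + ... + f(n)
assert all(F_recursive(n) == sum(f(i) for i in range(1, n + 1))
           for n in range(100))
```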

You can also take a dynamic-programming approach, using an auxiliary array:

int sum = 0;
int a[n+1];
for (int i = 1; i <= n; i++) {
    if (i % 2 != 0)
        a[i] = i;
    else
        a[i] = a[i/2];
}
for (int i = 1; i <= n; i++) {
    sum += a[i];
}
cout << sum;
When a number is odd, the number itself is its greatest odd divisor, and a[i] stores it. When a number is even, a[number/2] is stored in a[i], because the greatest odd divisor of number/2 is also the greatest odd divisor of the number.
It can also be solved with three cases: if the number is odd, add the number itself; if it is a power of 2, add 1; otherwise (even but not a power of 2), divide it by 2 until you get an odd number and add that odd number to the sum.

I cannot see how that algorithm could possibly work for the problem you described. (I'm going to assume that "N" and "n" refer to the same variable.)
Given n = 12.
The largest odd divisor is 3 (the others are 1, 2, 4, 6 & 12)
F(12) is therefore f(1) + f(2) + f(3), or 1 + 1 + 3, or 5.
Using this algorithm:
k = (12+1)/2 or 6
and we return 6 * 6 + compute(6), or 36 + some number which is not going to be negative 31.

if this were Java, I'd say:
import java.util.*;
int sum_largest_odd_factors(int n) {
    ArrayList<Integer> array = new ArrayList<>(); // poorly named, I know
    array.add(1);
    for (int j = 2; j <= n; j++) {
        array.add(greatestOddFactor(j));
    }
    int sum = 0;
    for (int i = 0; i < array.size(); i++) {
        sum += array.get(i);
    }
    return sum;
}

int greatestOddFactor(int n) {
    int greatestOdd = 1;
    // i starts at n if n is odd, or at n-1 if n is even
    for (int i = n - ((n + 1) % 2); i >= 1; i -= 2) {
        if (n % i == 0) {
            greatestOdd = i;
            break; // stop at the first odd factor reached: it's the largest
        }
    }
    return greatestOdd;
}
This is admittedly tedious and probably an O(n^2) operation, but it will work every time. I'll leave it to you to translate to C++, as Java and J are the only languages I can work with (and even those at a low level). I'm curious what ingenious algorithms other people can come up with to make this much quicker.

If you are looking for the sum of all the odd divisors up to n...
Sum of all the odd divisors of the first n numbers:
...
for (long long int i = 1; i <= r; i = i + 2)
{
    sum1 = sum1 + i * (r / i);
}
For the sum of all odd divisors in a range l to r:
for (long long int i = 1; i <= r; i = i + 2)
{
    sum1 = sum1 + i * (r / i);
}
for (long long int i = 1; i < l; i = i + 2)
{
    sum2 = sum2 + i * ((l - 1) / i);
}
ans = sum1 - sum2;
Thank you!

Related

Any faster way to find the number of "lucky triples"?

I am working on a code challenge problem, "find lucky triples". A "lucky triple" is defined as a combination (lst[i], lst[j], lst[k]) in a list lst, where i < j < k, lst[i] divides lst[j], and lst[j] divides lst[k].
My task is to find the number of lucky triples in a given list. The brute-force way is to use three loops, but it takes too much time to solve the problem. I wrote one and the system responded "time exceeded". The problem looks silly and easy, but the array is unsorted, so general methods like binary search do not work. I have been stuck on the problem for a day and hope someone can give me a hint. I am seeking a way to solve the problem faster; at least the time complexity should be lower than O(N^3).
A simple dynamic-programming-like algorithm will do this in quadratic time and linear space. You just have to maintain a counter c[i] for each item in the list, representing the number of previous integers that divide L[i].
Then, as you go through the list and test each integer L[k] against all previous items L[j]: if L[j] divides L[k], you just add c[j] (which could be 0) to your global counter of triples, because that also implies that there exist exactly c[j] items L[i] such that L[i] divides L[j] and i < j.
c = array of n zeros
nbTriples = 0
for k = 0 to n-1
    for j = 0 to k-1
        if (L[k] % L[j] == 0)
            c[k]++
            nbTriples += c[j]
return nbTriples
There may be some better algorithm that uses fancy discrete maths to do it faster, but if O(n^2) is ok, this will do just fine.
In regard to your comment:
Why DP? We have something that can clearly be modeled as having a left to right order (DP orange flag), and it feels like reusing previously computed values could be interesting, because the brute force algorithm does the exact same computations a lot of times.
How to get from that to a solution? Run a simple example (hint: it should better be by treating input from left to right). At step i, compute what you can compute from this particular point (ignoring everything on the right of i), and try to pinpoint what you compute over and over again for different i's: this is what you want to cache. Here, when you see a potential triple at step k (L[k] % L[j] == 0), you have to consider what happens on L[j]: "does it have some divisors on its left too? Each of these would give us a new triple. Let's see... But wait! We already computed that on step j! Let's cache this value!" And this is when you jump on your seat.
Full working solution in Python:
def solution(l):
    c = [0] * len(l)
    count = 0
    for i in range(len(l)):
        for j in range(i):
            if l[i] % l[j] == 0:
                c[i] = c[i] + 1
                count = count + c[j]
    return count
Read up on the Sieve of Eratosthenes, a common technique for finding prime numbers, which can be adapted to find your "lucky triples". Essentially, you iterate your list in increasing value order, and for each value you multiply it by an increasing factor until the product is larger than the largest list element; each time one of these multiples equals another value in the list, that multiple is divisible by the base number. If the list is sorted when given to you, then the i < j < k requirement is also satisfied.
e.g. Given the list [3, 4, 8, 15, 16, 20, 40]:
Start at 3, which has multiples [6, 9, 12, 15, 18 ... 39] within the range of the list. Of those multiples, only 15 is contained in the list, so record under 15 that it has a factor 3.
Proceed to 4, which has multiples [8, 12, 16, 20, 24, 28, 32, 36, 40]. Mark those as having a factor 4.
Continue through the list. When you reach an element that already has a known factor recorded, then if you find any multiples of that element in the list, you have a triple. In this case, 20 (marked with factor 4) has the multiple 40 in the list, so you know that 40 is divisible by 20, which is divisible by 4. Whereas 15 has no multiples in the list, so there is no value that can form a triplet with 3 and 15.
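The marking scheme above can be sketched in Python (assuming a sorted list of distinct positive values; `count_lucky_triples` is a name I made up):

```python
def count_lucky_triples(lst):
    # assumes lst is sorted ascending with distinct positive values,
    # so that i < j < k is implied by the divisibility chain
    largest = lst[-1]
    in_list = set(lst)
    div_count = {v: 0 for v in lst}  # list values that divide v

    # sieve pass: walk each value's multiples and record divisibility
    for v in lst:
        for m in range(2 * v, largest + 1, v):
            if m in in_list:
                div_count[m] += 1

    # every pair v | m extends to a triple once per divisor of v
    triples = 0
    for v in lst:
        for m in range(2 * v, largest + 1, v):
            if m in in_list:
                triples += div_count[v]
    return triples
```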
A precomputation step can help reduce the time complexity.
Precomputation step:
For every element (i), iterate the array to find the elements (j) such that lst[j] % lst[i] == 0:
for (i = 0; i < n; i++)
{
    for (j = i + 1; j < n; j++)
    {
        if (a[j] % a[i] == 0)
            ; // mark those j's. You decide how to store this data
    }
}
This precomputation step takes O(n^2) time.
In the final step, use the data from the precomputation step to find the triplets.
Form a graph: an array mapping each index to the indices of its multiples that appear ahead of it in the list. Then calculate, via the graph, the collective sum of the multiple-counts of those indices. It has a complexity of O(n^2).
For example, for a list {1,2,3,4,5,6} there will be an array of the multiples. The graph will look like
{ 0:[1,2,3,4,5], 1:[3,5], 2: [5], 3:[],4:[], 5:[]}
So the total triplets are {0 -> 1 -> 3/5} and {0 -> 2 -> 5}, i.e., 3.
import java.util.LinkedList;
import java.util.List;

public class LuckyTriplets {

    public static void main(String[] args) {
        int[] integers = new int[2000];
        for (int i = 1; i < 2001; i++) {
            integers[i - 1] = i;
        }
        long start = System.currentTimeMillis();
        int n = findLuckyTriplets(integers);
        long end = System.currentTimeMillis();
        System.out.println((end - start) + " ms");
        System.out.println(n);
    }

    private static int findLuckyTriplets(int[] integers) {
        List<Integer>[] indexMultiples = new LinkedList[integers.length];
        for (int i = 0; i < integers.length; i++) {
            indexMultiples[i] = getMultiples(integers, i);
        }
        int luckyTriplets = 0;
        for (int i = 0; i < integers.length - 1; i++) {
            luckyTriplets += getLuckyTripletsFromMultiplesMap(indexMultiples, i);
        }
        return luckyTriplets;
    }

    private static int getLuckyTripletsFromMultiplesMap(List<Integer>[] indexMultiples, int n) {
        int sum = 0;
        for (int i = 0; i < indexMultiples[n].size(); i++) {
            sum += indexMultiples[indexMultiples[n].get(i)].size();
        }
        return sum;
    }

    private static List<Integer> getMultiples(int[] integers, int n) {
        List<Integer> multiples = new LinkedList<>();
        for (int i = n + 1; i < integers.length; i++) {
            if (isMultiple(integers[n], integers[i])) {
                multiples.add(i);
            }
        }
        return multiples;
    }

    /*
     * returns true if b is a multiple of a
     */
    private static boolean isMultiple(int a, int b) {
        return b % a == 0;
    }
}
I just wanted to share my solution, which passed. Basically, the problem can be condensed into a tree problem. You need to pay attention to the wording of the question: it treats numbers as distinct based on index, not value, so {1,1,1} has only 1 triple, but {1,1,1,1} has 4. The constraint is a triple (l[i], l[j], l[k]) such that l[i] divides l[j], l[j] divides l[k], and i < j < k.
def solution(l):
    count = 0
    data = l
    tree_list = []
    for p, element in enumerate(data):
        if element == 0:
            tree_list.append([])
        else:
            temp = []
            for el in data[p+1:]:
                if el % element == 0:
                    temp.append(el)
            tree_list.append(temp)
    for p, element_list in enumerate(tree_list):
        data[p] = 0
        temp = data[:]
        for element in element_list:
            pos_element = temp.index(element)
            count += len(tree_list[pos_element])
            temp[pos_element] = 0
    return count

Sample an index of a maximal number in an array, with a probability of 1/(number of maximal numbers)

This is one of the recent interview questions I faced: write a program to return the index of the maximum number in an array [note: the array may or may not contain multiple copies of the maximum number] such that each index containing the maximum number is returned with probability 1/(number of maximum numbers).
Examples:
[-1 3 2 3 3], each of positions [1,3,4] have the probability 1/3 to be returned (the three 3s)
[ 2 4 6 6 3 1 6 6 ], each of [2,3,6,7] have the probability of 1/4 to be returned (corresponding to the position of the 6s).
First, I gave an O(n) time and O(n) space algorithm, where I collect the set of max indexes and then return a random member of the set. But he asked for an O(n) time and O(1) space program, and then I came up with this.
int find_maxIndex(vector<int> a)
{
    max = a[0];
    max_index = 0;
    count = 0;
    for (i = 1 to a.size())
    {
        if (max < a[i])
        {
            max = a[i];
            count = 0;
        }
        if (max == a[i])
        {
            count++;
            if (rand < 1/count) // rand = a random number in the range [0,1]
                max_index = i;
        }
    }
    return max_index;
}
I gave him this solution. But my doubt is whether this procedure selects one of the indexes of the max numbers with equal probability. I hope I am clear. Is there any other method to do this?
What you have is reservoir sampling! There is another easy-to-understand solution, but it requires two passes.
int find_maxIndex(vector<int> a){
    int count = 1;
    int maxElement = a[0];
    for(int i = 1; i < a.size(); i++){
        if(a[i] == maxElement){
            count++;
        } else if(a[i] > maxElement){
            count = 1;
            maxElement = a[i];
        }
    }
    int occurrence = rand() % count + 1;
    int occur = 0;
    for(int i = 0; i < a.size(); i++){
        if(a[i] == maxElement){
            occur++;
            if(occur == occurrence) return i;
        }
    }
    return -1; // unreachable for a non-empty array
}
The algorithm is pretty simple: in the first pass, find how many times the max element occurs. Then choose a random occurrence and, in the second pass, return the index of that occurrence. It takes two passes, but it is very easy to understand.
Your algorithm works fine, and you can prove it via induction.
That is, assuming it works for any array of size N, prove it works for any array of size N+1.
So, given an array of size N+1, think of it as a sub-array of size N followed by a new element at the end. By assumption, your algorithm uniformly selects one of the max elements of the sub-array... And then it behaves as follows:
If the new element is larger than the max of the sub-array, return that element. This is obviously correct.
If the new element is less than the max of the sub-array, return the result of the algorithm on the sub-array. Also obviously correct.
The only slightly tricky part is when the new element equals the max element of the sub-array. In this case, let the number of max elements in the sub-array be k. Then, by hypothesis, your algorithm selected one of them with probability 1/k. By keeping that same element with probability k/(k+1), you make the overall probability of selecting that same element equal 1/k * k /(k+1) == 1/(k+1), as desired. You also select the last element with the same probability, so we are done.
To complete the inductive proof, just verify the algorithm works on an array of size 1. Also, for quality of implementation purposes, fix it not to crash on arrays of size zero :-)
[Update]
Incidentally, this algorithm and its proof are closely related to the Fisher-Yates shuffle (which I always thought was "Knuth's card-shuffling algorithm", but Wikipedia says I am behind the times).
The idea is sound, but the devil is in the details.
First off, what language are you using? It might make a difference. The rand() from C and C++ returns an integer, which isn't going to be less than 1/count unless it returns 0. Even then, if 1/count is evaluated with integer division, that result is always going to be 0 for any count greater than 1.
Also, your count is off by one: the occurrence that sets a new max is counted by the very next if statement (since it is not an else if), but the initial element a[0] never is. So if a[0] happens to be the maximum, index 0 can never be returned once a duplicate appears.
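Putting those fixes together, a corrected single-pass version might look like this in Python (a sketch; `sample_max_index` is my own name, and random.random() supplies the real number in [0,1)):

```python
import random

def sample_max_index(a):
    # single-pass reservoir sampling over occurrences of the running max
    max_val = a[0]
    max_index = 0
    count = 1                      # occurrences of the current max so far
    for i in range(1, len(a)):
        if a[i] > max_val:
            max_val = a[i]
            max_index = i
            count = 1
        elif a[i] == max_val:
            count += 1
            # keep the new index with probability 1/count
            if random.random() < 1.0 / count:
                max_index = i
    return max_index
```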

Finding the missing number in an array

An array a[] contains all of the integers from 0 to N, except one. However, you cannot access an element with a single operation. Instead, you can call get(i, k), which returns the kth bit of a[i], or you can call swap(i, j), which swaps the ith and jth elements of a[]. Design an O(N) algorithm to find the missing integer.
(For simplicity, assume N is a power of 2.)
If N is a power of 2, it can be done in O(N) using divide and conquer.
Note that there are logN bits in the numbers. Now, using this information - you can use a combination of partition based selection algorithm and radix-sort.
Iterate the numbers for the first bit, and divide the array to two
halves - the first half has this bit as 0, the other half has it as 1. (Use the swap() for partitioning the array).
Note that one half has ceil(N/2) elements, and the other has floor(N/2) elements.
Repeat the process for the smaller array, until you find the missing
number.
The complexity of this approach is N + N/2 + N/4 + ... + 1 < 2N, so it is O(N).
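Here is a sketch of the partitioning idea in Python, using list filtering in place of the get()/swap() interface (an assumption made to keep the sketch self-contained; the function name is mine):

```python
def find_missing_partition(a, N):
    # a holds the numbers 0..N except one; N is a power of 2
    m = N.bit_length()                  # e.g. 4 bits for N = 8
    # the top bit separates N itself from 0..N-1
    if not any((x >> (m - 1)) & 1 for x in a):
        return N
    candidates = [x for x in a if not ((x >> (m - 1)) & 1)]
    prefix = 0
    for b in range(m - 2, -1, -1):
        zeros = [x for x in candidates if not ((x >> b) & 1)]
        ones = [x for x in candidates if (x >> b) & 1]
        # the halves should be equal-sized; the deficient one
        # is the one missing a number, so descend into it
        if len(zeros) < len(ones):
            candidates = zeros
        else:
            prefix |= 1 << b
            candidates = ones
    return prefix
```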
O(N*M), where M is the number of bits:
N is a power of 2 and only one number is missing, so if you check each bit and count the numbers where that bit is 0 and where it is 1, you'll get counts such as 2^(M-1) and 2^(M-1) - 1; the smaller count belongs to the missing number. With this, you can recover all the bits of the missing number.
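That bit-counting idea, sketched in Python with plain list access standing in for get() (an assumption to keep it self-contained):

```python
def find_missing_by_bit_counts(a, N):
    # a holds 0..N except one value; for each bit position, compare the
    # count of set bits in a against the count for the full range 0..N.
    # A shortfall means the missing number has that bit set.
    M = N.bit_length()                  # bits needed, e.g. 4 for N = 8
    missing = 0
    for b in range(M):
        ones = sum((x >> b) & 1 for x in a)
        expected = sum((i >> b) & 1 for i in range(N + 1))
        if ones < expected:
            missing |= 1 << b
    return missing
```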
There is really no need to use the swap operation at all!
Use XOR!
First, calculate the binary XOR of all numbers from 0 to N:
long nxor = 0;
for (long i = 0; i <= N; i++)
    nxor = XOR(nxor, i);
Then we can calculate the XOR of all numbers in the array; that's also simple. Let K be the maximal number of bits across all the numbers.
long axor = 0;
long K = 0;
long H = N;
while (H > 0)
{
    H >>= 1; K++;
}
for (long i = 0; i < N; i++)
    for (long j = 0; j < K; j++)
        axor = XOR(axor, get(i,j) << j);
Finally you can calculate the XOR of the two results:
long result = XOR(nxor, axor);
And by the way, if N is a power of 2, then the nxor value will be equal to N ;-)
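The XOR idea reduces to a few lines in Python (again using plain list access instead of get(); the function name is mine):

```python
def find_missing_xor(a, N):
    # XOR of the full range 0..N, XORed again with every array element,
    # cancels everything except the missing number
    acc = 0
    for i in range(N + 1):
        acc ^= i
    for x in a:
        acc ^= x
    return acc
```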
Suppose that the input is a[]=0,1,2,3,4,5,7,8, so that 6 is missing. The numbers are sorted for convenience only, because they don't have to be sorted for the solution to work.
Since N is 8 then the numbers are represented using 4 bits.
From 0000 to 1000.
First partition the array using the most significant bit.
You get 0,1,2,3,4,5,7 and 8. Since 8 is present, continue with the left partition.
Partition the sub array using the 2nd most significant bit.
You get 0,1,2,3 and 4,5,7. Now continue with the partition that has odd number of elements, which is 4,5,7.
Partition the sub array using the 3rd most significant bit.
You get 4,5 and 7. Again continue with the partition that has odd number of elements, which is 7.
Partition the sub array using the 4th most significant bit you get nothing and 7.
So the missing number is 6.
Another example:
a[]=0,1,3,4,5,6,7,8, so that 2 is missing.
1st bit partition: 0,1,3,4,5,6,7 and 8, continue with 0,1,3,4,5,6,7.
2nd bit partition: 0,1,3 and 4,5,6,7, continue with 0,1,3 (odd number of elements).
3rd bit partition: 0,1 and 3, continue with 3 (odd number of elements).
4th bit partition: nothing and 3, so 2 is missing.
Another example:
a[]=1,2,3,4,5,6,7,8, so that 0 is missing.
1st bit partition: 1,2,3,4,5,6,7 and 8, continue with 1,2,3,4,5,6,7.
2nd bit partition: 1,2,3 and 4,5,6,7, continue with 1,2,3 (odd number of elements).
3rd bit partition: 1 and 2,3, continue with 1 (odd number of elements).
4th bit partition: nothing and 1, so 0 is missing.
The 1st partition takes N operations.
The 2nd partition takes N operations.
The 3rd partition takes N/2 operations.
The 4th partition takes N/4 operations.
And so on.
So the running time is O(N+N+N/2+N/4+...)=O(N).
Here is another answer, using the sum operation instead of the XOR operation.
Please find the code below.
long allsum = N * (N + 1) / 2;
long sum = 0;
long K = 0;
long H = N;
while (H > 0)
{
    H >>= 1; K++;
}
for (long i = 0; i < N; i++)
    for (long j = 0; j < K; j++)
        sum += get(i,j) << j;
long result = allsum - sum;
Without the XOR operation, we could answer this question in the following way:
package missingnumberinarray;

public class MissingNumber
{
    public static void main(String args[])
    {
        int array1[] = {1,2,3,4,6,7,8,9,10}; // we need to sort the array first
        System.out.println(array1[array1.length-1]);
        int n = array1[array1.length-1];
        int total = (n * (n + 1)) / 2;
        System.out.println(total);
        int arraysum = 0;
        for (int i = 0; i < array1.length; i++)
        {
            arraysum += array1[i];
        }
        System.out.println(arraysum);
        int mis = total - arraysum;
        System.out.println("The missing number in the array is " + mis);
    }
}

number of subarrays where sum of numbers is divisible by K

Given an array, find how many subsequences (not required to be contiguous) exist where the sum of the elements in the subsequence is divisible by K.
I know an approach with complexity 2^n, given below: it enumerates all nCi subsets for i = [0, n] and checks whether each sum is divisible by K.
Please provide pseudocode for something like a linear, quadratic, or n^3 solution.
static int numways = 0;

void findNumOfSubArrays(int[] arr, int index, int sum, int K) {
    if (index == arr.length) {
        if (sum % K == 0) numways++;
    }
    else {
        findNumOfSubArrays(arr, index+1, sum, K);
        findNumOfSubArrays(arr, index+1, sum + arr[index], K);
    }
}
Input: an array A of length n, and a natural number k.
The algorithm:
Construct an array B: for each 1 <= i <= n: B[i] = A[i] modulo k.
Now we can use dynamic programming:
Define D[i,j] = the number of subsequences of B[i..n] whose element sum modulo k equals j, for
1 <= i <= n,
0 <= j <= k-1.
Base case: D[n,0] = 2 if B[n] == 0 (the empty subsequence and {B[n]}); otherwise 1.
For j > 0:
D[n,j] = 1 if B[n] == j; otherwise 0.
For i < n and 0 <= j <= k-1:
D[i,j] = D[i+1,j] + D[i+1,(j - B[i] + k) modulo k]
(either skip B[i], or include it and take the remainder j - B[i] from the rest).
Construct D.
Return D[1,0] (subtract 1 if the empty subsequence should not be counted).
Overall running time: O(n*k)
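The same count can also be computed left to right with a table of remainder counts, which may be easier to follow than the top-down definition (a Python sketch; the function name is mine, and the empty subsequence is subtracted at the end):

```python
def count_divisible_subsequences(A, K):
    # counts[j] = number of subsequences seen so far whose sum % K == j
    # (the empty subsequence counts once toward remainder 0)
    counts = [0] * K
    counts[0] = 1
    for x in A:
        new = counts[:]
        for j in range(K):
            # every existing subsequence can be extended with x
            new[(j + x) % K] += counts[j]
        counts = new
    return counts[0] - 1          # drop the empty subsequence
```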
Actually, I don't think this problem can be solved in O(n^3), or even in polynomial time, if the ranges of K and of the numbers in the array are unbounded. Here is what I think:
Consider the following case: the N numbers in arr are something like
[1, 2, 4, 8, 16, 32, ..., 2^(N-1)].
In this way, the sums of the 2^N "subarrays" (which are not required to be contiguous) of arr are exactly all the integers in [0, 2^N),
and asking how many of them are divisible by K is equivalent to asking how many integers in [0, 2^N) are divisible by K.
I know the answer can be calculated directly, like (2^N - 1)/K (or something), in the above case. But if we just change a few (maybe 3? 4?) numbers in arr randomly, to "dig some random holes" in the perfectly contiguous integer range [0, 2^N), it looks impossible to calculate the answer without going through almost every number in [0, 2^N).
OK, just some thoughts... I could be totally wrong.
Use an auxiliary array A.
1) While taking input, store the running total in the corresponding index (this executes in O(n)):
int sum = 0;
for (int i = 0; i < n; i++)
{
    cin >> arr[i];
    sum += arr[i];
    A[i] = sum;
}
2) Now:
for (int i = 0; i < n; i++)
    for (int j = i; j < n; j++)
        // check whether (A[j] - A[i] + arr[i]), the sum of arr[i..j], is divisible by k
There you go: O(n^2)... (note that this counts contiguous subarrays, not subsequences).
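A runnable sketch of this prefix-sum check in Python (the function name is mine; it counts contiguous subarrays):

```python
def count_divisible_subarrays(arr, k):
    # prefix[i] = arr[0] + ... + arr[i]
    n = len(arr)
    prefix = [0] * n
    s = 0
    for i, x in enumerate(arr):
        s += x
        prefix[i] = s
    count = 0
    for i in range(n):
        for j in range(i, n):
            # sum of arr[i..j] == prefix[j] - prefix[i] + arr[i]
            if (prefix[j] - prefix[i] + arr[i]) % k == 0:
                count += 1
    return count
```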

Finding the list of prime numbers in shortest time

I have read many algorithms for finding prime numbers, and the conclusion is that a number is prime if it is not divisible by any of the primes preceding it.
I am not able to find a more precise definition. Based on this I have written code, and it performs satisfactorily up to a limit of 1,000,000. But I believe there are much faster algorithms to find all primes less than a given number.
Following is my code; can I have a better version of the same?
static List<Integer> primes = new ArrayList<>(); // primes found so far

public static void main(String[] args) {
    for (int i = 2; i < 100000; i++) {
        if (checkMod(i)) {
            primes.add(i);
        }
    }
}

private static boolean checkMod(int num) {
    for (int i : primes) {
        if (num % i == 0) {
            return false;
        }
    }
    return true;
}
The good thing in your primality test is that you only divide by primes.
private static boolean checkMod(int num) {
    for (int i : primes) {
        if (num % i == 0) {
            return false;
        }
    }
    return true;
}
The bad thing is that you divide by all primes found so far, that is, all primes smaller than the candidate. That means that for the largest prime below one million, 999983, you divide by 78497 primes to find out that this number is a prime. That's a lot of work: so much, in fact, that the work spent on primes in this algorithm accounts for about 99.9% of all the work when going to one million, and a larger share for higher limits. And that algorithm is nearly quadratic: to find the primes up to n this way, you need to perform about
n² / (2*(log n)²)
divisions.
A simple improvement is to stop the division earlier. Let n be a composite number (i.e. a number greater than 1 that has divisors other than 1 and n), and let d be a divisor of n.
Now, d being a divisor of n means that n/d is an integer, and also a divisor of n: n/(n/d) = d.
So we can naturally group the divisors of n into pairs, each divisor d gives rise to the pair (d, n/d).
For such a pair, there are two possibilities:
d = n/d, which means n = d², or equivalently d = √n.
The two are different, then one of them is smaller than the other, say d < n/d. But that immediately translates to d² < n or d < √n.
So, either way, each pair of divisors contains (at least) one not exceeding √n, hence, if n is a composite number, its smallest divisor (other than 1) does not exceed √n.
So we can stop the trial division when we've reached √n:
private static boolean checkMod(int num) {
    for (int i : primes) {
        if (i * i > num) {
            // we have not found a divisor up to √num, so num is prime
            return true;
        }
        if (num % i == 0) {
            return false;
        }
    }
    return true;
}
Note: That depends on the list of primes being iterated in ascending order. If that is not guaranteed by the language, you have to use a different method, iterate by index through an ArrayList or something like that.
Stopping the trial division at the square root of the candidate, for the largest prime below one million, 999983, we now only need to divide it by the 168 primes below 1000. That's a lot less work than previously. Stopping the trial division at the square root, and dividing only by primes, is as good as trial division can possibly get and requires about
2*n^1.5 / (3*(log n)²)
divisions, for n = 1000000, that's a factor of about 750, not bad, is it?
But that's still not very efficient, the most efficient methods to find all primes below n are sieves. Simple to implement is the classical Sieve of Eratosthenes. That finds the primes below n in O(n*log log n) operations, with some enhancements (eliminating multiples of several small primes from consideration in advance), its complexity can be reduced to O(n) operations. A relatively new sieve with better asymptotic behaviour is the Sieve of Atkin, which finds the primes to n in O(n) operations, or with the enhancement of eliminating the multiples of some small primes, in O(n/log log n) operations.
The Sieve of Atkin is more complicated to implement, so it's likely that a good implementation of a Sieve of Eratosthenes performs better than a naive implementation of a Sieve of Atkin. For implementations of like optimisation levels, the performance difference is small unless the limit becomes large (larger than 10^10; and it's not uncommon that in practice a Sieve of Eratosthenes scales better than a Sieve of Atkin beyond that, due to better memory access patterns). So I would recommend beginning with a Sieve of Eratosthenes, and only when its performance isn't satisfactory despite honest efforts at optimisation, delve into the Sieve of Atkin. Or, if you don't want to implement it yourself, find a good implementation somebody else has already seriously tuned.
I have gone into a bit more detail in an answer with a slightly different setting, where the problem was finding the n-th prime. Some implementations of more-or-less efficient methods are linked from that answer, in particular one or two usable (though not much optimised) implementations of a Sieve of Eratosthenes.
I always use the Sieve of Eratosthenes:

isPrime[100001] // initially contains only '1' values (1,1,1 ... 1)
isPrime[0] = isPrime[1] = 0 // 0 and 1 are not prime numbers
primes.push(2); // the first prime; 2 is special because it is the only even prime
for (i = 2; i * 2 <= 100000; i++) isPrime[i * 2] = 0 // remove all multiples of 2
for (i = 3; i <= 100000; i += 2) // check all odd numbers from 3 to 100000
    if (isPrime[i]) {
        primes.push(i); // add the new prime number to the solution
        for (j = 2; i * j <= 100000; j++) isPrime[i * j] = 0; // remove all multiples of i
    }
return primes
I hope you understand my comments
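For reference, the same sieve as runnable Python (a sketch; `primes_up_to` is my own name):

```python
def primes_up_to(limit):
    # classical Sieve of Eratosthenes
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_prime[i]:
            # cross off multiples starting at i*i; smaller multiples
            # were already crossed off by smaller primes
            for j in range(i * i, limit + 1, i):
                is_prime[j] = False
    return [i for i, p in enumerate(is_prime) if p]
```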
I understand a prime number to be a number that is divisible only by itself and by 1 (with no remainder). See the Wikipedia article.
That being said, I don't understand the algorithm in the second comment very well, but one small improvement to your algorithm would be to change your for loop to:

for (int i = 5; i < 100000; i = i + 2) {
    if (checkMod(i)) {
        primes.add(i);
    }
}

This is based on the assumption that 2 and 3 are prime (1 is not prime, by definition) and already in the list, and that no even number after them is prime. This at least cuts your algorithm's work in half.
I want to make a still slightly improved version of the one suggested by Benjamin Oman above.
This is just one modification to avoid checking the primality of numbers ending in the digit '5', because those numbers are certainly not prime, being divisible by 5.

for (int i = 7; i < 100000; i = i + 2) {
    if (i % 5 == 0) continue; // skip odd multiples of 5, i.e. numbers ending in 5
    if (checkMod(i)) {
        primes.add(i);
    }
}

This is based on the assumption that 2, 3, and 5 are primes and already in the list. The little change above skips all remaining multiples of 5 and improves performance.
Nicely explained by @Daniel Fischer.
An implementation in C++ based on his explanation:
#include <iostream>
using namespace std;

long* getListOfPrimeNumbers(long total)
{
    long* primes = new long[total];
    int count = 1;
    primes[0] = 2;
    primes[1] = 3;
    while (count < total - 1) // < total - 1, or the last write would run past the array
    {
        long candidate = primes[count] + 2;
        bool is_prime = false;
        while (is_prime == false)
        {
            is_prime = true;
            for (int i = 0; i <= count; i++)
            {
                long prime = primes[i];
                if (prime * prime > candidate)
                {
                    break;
                }
                if (candidate % prime == 0)
                {
                    is_prime = false;
                    break;
                }
            }
            if (is_prime == true)
            {
                count++;
                primes[count] = candidate;
            }
            else
            {
                candidate += 2;
            }
        }
    }
    return primes;
}

int main()
{
    int total = 10;
    long* primes = getListOfPrimeNumbers(total);
    for (int i = 0; i < total; i++) {
        cout << primes[i] << "\n";
    }
    return 0;
}
import math

# find the primes in the range [a, b)
a = 0
b = 5000
g = [2]
for I in range(a, b):
    c = 0
    for k in g:
        if k > 2:
            if k > math.sqrt(I):
                break
        if I % k == 0:
            c = c + 1
            break
    if c == 0:
        if I != 1:
            g.append(I)
print(g)
# this takes only about 19600 inner-loop iterations for the range 1-5000,
# which makes it one of the faster simple approaches, in my view
I found that mathematicians say that "prime numbers after 3 always sit on one side of a multiple of 6".
It means the primes 5 and 7 are next to 6.
11 and 13 are also next to 6*2.
17 and 19 next to 6*3.
23 and 25 next to 6*4 (only 23 is prime).
I wrote it both the normal way and like this up to 1 million, and I found this algorithm is also correct and quicker. 😁
num = 1000000
prime = [2, 3]

def test(i):
    for j in prime:
        if i % j == 0:
            break
        if j * j > i:
            prime.append(i)
            break

for i in range(6, num, 6):
    test(i - 1)  # candidate 6k - 1
    test(i + 1)  # candidate 6k + 1
print(prime)
