I was solving problems on Project Euler, and I am stuck on the 12th problem. The following code takes too long: it isn't done even after five minutes, and my CPU got warm.
Essentially, what I am doing is generating a sequence of triangular numbers by adding successive natural numbers, like:
1 -> 1 (i.e. 1)
2 -> 3 (i.e. 1+2)
3 -> 6 (i.e. 1+2+3)
And so on, then finding the first triangular number which has more than 500 factors (i.e. at least 501 factors).
fun main() {
    val numbers = generateTriangularNumbers()
    val result = numbers.first {
        val count = factorOf(it).count()
        // println(count) // just to see the count
        count > 500
    }
    println(result)
}

// Finds the factors of input [x].
private fun factorOf(x: Long): Sequence<Long> = sequence {
    var current = 1L
    while (current <= x) {
        if (x % current == 0L) yield(current++) else current++
    }
}

// Generates triangular numbers, like 1, 3, 6, 10, by adding numbers 1+2+3+...+n.
private fun generateTriangularNumbers(from: Long = 1): Sequence<Long> = sequence {
    val mapper: (Long) -> Long = { (1..it).sum() }
    var current = from
    while (true) yield(mapper(current++))
}
The count (the number of factors of the triangular numbers) hardly gets over 200. Is there a way to solve this problem efficiently, maybe within a minute?
Project Euler is about math. Programming comes second. You need to do some homework.
Prove that triangular numbers are of the form n*(n+1)/2. Trivial.
Prove that n and n+1 are coprime. Trivial.
Prove, or at least convince yourself, that d(n) is multiplicative.
Combine this knowledge to come up with an efficient algorithm. You wouldn't need to actually compute the triangular numbers, and you would only need to factorize much smaller numbers; memoization would also avoid quite a few factorizations.
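To make those hints concrete, here is a minimal sketch of where they lead (my own illustration, not the only way to do it): since n and n+1 are coprime and exactly one of them is even, d(T(n)) is the product of the divisor counts of the two coprime halves, and each factorization only touches numbers up to n+1 rather than the triangular number itself.

fun countDivisors(x: Long): Int {
    var n = x
    var count = 1
    var d = 2L
    while (d * d <= n) {
        var exp = 0
        while (n % d == 0L) { n /= d; exp++ }
        count *= exp + 1
        d++
    }
    if (n > 1) count *= 2 // a remaining prime factor contributes exponent 1
    return count
}

fun main() {
    var n = 1L
    while (true) {
        // Halve the even one first so the two parts stay coprime.
        val divisors = if (n % 2 == 0L)
            countDivisors(n / 2) * countDivisors(n + 1)
        else
            countDivisors(n) * countDivisors((n + 1) / 2)
        if (divisors > 500) {
            println(n * (n + 1) / 2)
            break
        }
        n++
    }
}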
Related
convert method "FINAL" to divide and conquer algorithm
The task sounded like this: the buyer has n coins in denominations H1, ..., Hn. The seller has m coins in denominations B1, ..., Bm. Can the buyer purchase an item of cost S so that the seller can give exact change (if necessary)?
fun Final(H: ArrayList<Int>, B: ArrayList<Int>, S: Int): Boolean {
    var Clon_Price = false;
    var Temp: Int;
    for (i in H) {
        if (i == S)
            return true;
    }
    for (i in H.withIndex()) {
        Temp = i.value - S;
        for (j in B) {
            if (j == Temp)
                Clon_Price = true;
        }
    }
    return Clon_Price;
}
fun main(args: Array<String>) {
    val H: ArrayList<Int> = ArrayList();
    val B: ArrayList<Int> = ArrayList();
    println("Enter the number of coins the buyer has:");
    var n: Int = readln().toInt();
    println("Enter their nominal value:")
    while (n > 0) {
        H.add(readln().toInt());
        n--
    }
    println("Enter the number of coins the seller has:");
    var m: Int = readln().toInt();
    println("Enter their nominal value:")
    while (m > 0) {
        B.add(readln().toInt());
        m--
    }
    println("Enter the product price:");
    val S = readln().toInt();
    if (Final(H, B, S)) {
        println("YES");
    } else {
        println("No!");
    }
}
Introduction
Since this is an assignment, I will only give you insights to solve this problem and you will need to do the coding yourself.
The algorithm
It receives two ArrayList<Int> parameters and an Int parameter.
If the searched element (S) can be found in H, then the result is true.
Otherwise, it loops over H:
It computes the difference between the current element and S.
It searches for a match in B, and if one is found, true is returned.
If the method has not returned yet, then it returns false.
Divide et impera (Divide and conquer)
Divide and conquer is the process of breaking a complicated task down into similar but simpler subtasks, repeating this breakdown until the subtasks become trivial (that was the divide part). Then, using the results of the trivial subtasks, we solve the slightly more complicated subtasks and move upwards through our layers of unsolved complexity until the problem is solved (this is the conquer part).
A very handy data structure to use is the stack. You can use your memory's call stack, which is a fancy way of saying recursion, or you can solve it iteratively by managing such a stack yourself.
This specific problem
This algorithm does not seem to require divide and conquer, given that you only have two array lists that can be iterated, so I guess this is an early assignment.
To make sure this is divide and conquer, you can add two parameters to your method (which are 0 and length - 1 at the start) that reflect the current problem space. Upon each call, check whether the starting and ending index (the two new parameters) are equal. If they are, you already have a trivial, simplified subtask, and you just iterate over the second ArrayList.
If they are not equal, then you still need to divide. You can simply
//... Some code here
return Final(H, B, S, start, (start + end) / 2) || Final(H, B, S, (start + end) / 2 + 1, end);
(there you go, I couldn't resist writing code, after all)
for your nontrivial cases. This automatically breaks down the problem into sub-problems.
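For illustration only, here is a minimal sketch of the recursive shape described above (my reading of the hints; the five-parameter signature is the suggested extension, and the trivial-case body is one plausible interpretation of the algorithm):

fun Final(H: ArrayList<Int>, B: ArrayList<Int>, S: Int, start: Int, end: Int): Boolean {
    if (start == end) {
        // Trivial subtask: a single buyer coin. It either matches the price exactly,
        // or the seller has a coin equal to the required change.
        val change = H[start] - S
        return change == 0 || B.any { it == change }
    }
    val mid = (start + end) / 2
    return Final(H, B, S, start, mid) || Final(H, B, S, mid + 1, end)
}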
Self-criticism
The idea above is a simplistic solution for you to get the gist. In reality, though, programmers are wary of recursion, as it can lead to trouble (deep recursion can overflow the stack). So, once you complete the implementation of the above, you are well-advised to convert your algorithm into an iterative one, which should be fairly easy once you have succeeded in implementing the recursive version.
Considering the implementation of the iterative binary search code:
// Java implementation of iterative binary search
class BinarySearch {
    // Returns index of x if it is present in arr[],
    // else returns -1
    int binarySearch(int arr[], int x)
    {
        int l = 0, r = arr.length - 1;
        while (l <= r) {
            int m = l + (r - l) / 2;
            // Check if x is present at mid
            if (arr[m] == x)
                return m;
            // If x greater, ignore left half
            if (arr[m] < x)
                l = m + 1;
            // If x is smaller, ignore right half
            else
                r = m - 1;
        }
        // If we reach here, the element was not present
        return -1;
    }

    // Driver method to test the above
    public static void main(String args[])
    {
        BinarySearch ob = new BinarySearch();
        int arr[] = { 2, 3, 4, 10, 40 };
        int n = arr.length;
        int x = 10;
        int result = ob.binarySearch(arr, x);
        if (result == -1)
            System.out.println("Element not present");
        else
            System.out.println("Element found at "
                               + "index " + result);
    }
}
The GeeksforGeeks website says the following:
"For example Binary Search (iterative implementation) has O(Logn) time complexity."
My question is: what does the division by 2 have to do with logarithms in base 2? What is the relationship between them? I will use the analogy of one pizza (an array) to make my question easier to understand:
1 pizza - divided into 2 parts = 2 pieces of pizza
2 pieces of pizza - divide each piece in half = 4 pieces of pizza
4 pieces of pizza - divide each piece in half = 8 pieces of pizza
8 pieces of pizza - divide each piece in half = 16 pieces of pizza
Logₐb = x
b = the argument (the number whose logarithm is taken)
a = the base
x = the result of the logarithm
aˣ = b
The pizza-piece counts 1, 2, 4, 8 and 16 look related to logarithms, but I still can't understand what the relationship is. What is the relationship between the argument (b), the base (a) and the result of the logarithm (x) on one side, and the repeated division of an array (pizza) by 2 on the other? Would x be the final number of pieces into which I can divide my array (pizza)? Or is x the number of divisions of my array (pizza)?
Contrary to your belief, O(log(n)) is independent of any base.
If you have a pizza consisting of 16 slices of unit size, how often can you halve it (and throw away one of the halves) until you get a single slice of unit size?
Answer: log2(16) = 4 times
If you have an array of length n, how often can you halve it (and throw away one of the halves) until you get an array slice of length 1?
Answer: log2(n)
More generally, how does an n-ary search algorithm relate to logarithms?
Logₐb = x
b = the size of the array to search
a = the number of slices you get after one cut (all but one are thrown away)
x = the number of cuts you need to make until you get a slice of size 1
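To make the relationship concrete, here is a small snippet (my own addition, not from the answer) that literally counts the halvings:

fun halvings(n: Int): Int {
    var size = n
    var steps = 0
    while (size > 1) {
        size /= 2   // keep one half, discard the other
        steps++
    }
    return steps    // roughly log2(n); halvings(16) == 4
}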
Let's use the same pizza analogy you have, and assume we have 1 whole pizza and we want 8 slices. Every time we cut, we divide by 2 as well.
The first cut means we will have 2 slices. The second cut gives us 4 slices. The third cut results in 8 slices. We made 3 cuts to get to 8 slices. Mathematically, there is a relationship between the numbers 2, 3 and 8, and the log function connects them: the factor by which we divide each time is the base (base = 2), the quantity we end up with is 8, and the number of operations was lg(8) = 3 (using lg as log base 2).
The same idea applies to binary search. We divide each section of the array we search by 2, the quantity is whatever our size of the array is, and the number of operations we perform is asymptotically lg(n).
Considering the answers, comments and the following video:
StackOverflow response 1
StackOverflow response 2
Binary Search Video
Mo B.'s comment:
The question is not: how many cuts are necessary to get 16 slices. But rather: how many cuts are necessary to get a slice of size 1? And that's 4. In other words, like in the algorithm, you cut in half and throw away one of the halves at each step. How often can you do that with an array of size 16?
Yves Daoust's comment:
The logarithm of a number is roughly the number of times you can halve it until you reach 1.
My conclusions are:
The logarithm of an array of size n is approximately the number of times we can divide it in half (with base = 2) until it reaches the smallest unit of size 1.
If x = Log₂n, then 1*2ˣ = n.
So x = the number of times you can multiply 1 by 2 until you get to n.
Reversing the logic: x = the number of times you can divide n by 2 until you get to 1.
The example in the figure would be Log₂10 = x, where x is not an exact result. However, if I had drawn the array with 16 positions, this would imply Log₂16 = 4, where the result 4 is the number of levels or divisions.
There are 70 coins, among which there is one fake coin. We need to detect the fake coin in the minimum number of weighings. You have only a balance scale, and you know that the fake coin is lighter.
I am not sure whether my simulation of the problem below is right or wrong, i.e. representing the coins as an array and doing the comparisons as I have done in my code. I am trying to simulate it with an array of all ones, except for a single zero, which represents the fake coin. Below is my code. Please let me know if I have got it wrong.
It would also be really helpful if someone could prove/explain in simple maths why a 3-way division is better.
Pseudo code for the below code:
INPUT: integer n
if n = 1 then
    the coin is fake
else
    divide the coins into piles of A = ceiling(n/3), B = ceiling(n/3),
    and C = n - 2*ceiling(n/3)
    weigh A and B
    if the scale balances then
        iterate with C
    else
        iterate with the lighter of A and B
Code:
import random

def getmin(data, start, end, total_items):
    # 'end' is exclusive throughout
    if total_items == 1:
        # only one coin left, so it must be the fake one
        return (0, start)
    elif total_items == 2:
        # one weighing decides between the two remaining coins
        if data[start] > data[end - 1]:
            return (1, end - 1)
        else:
            return (1, start)
    else:
        partition = total_items // 3
        a_weight = sum(data[start:start + partition])
        b_weight = sum(data[start + partition:start + 2 * partition])
        if a_weight == b_weight:
            # the fake coin is in pile C
            result = getmin(data, start + 2 * partition, end,
                            end - (start + 2 * partition))
            return (1 + result[0], result[1])
        elif a_weight > b_weight:
            # pile B is lighter, so the fake coin is there
            result = getmin(data, start + partition, start + 2 * partition, partition)
            return (1 + result[0], result[1])
        else:
            # pile A is lighter, so the fake coin is there
            result = getmin(data, start, start + partition, partition)
            return (1 + result[0], result[1])

n = int(input())
data = [1] * n
data[random.randint(0, n - 1)] = 0
total_weighing, position = getmin(data, 0, len(data), len(data))
print(total_weighing, position)
The complexity of this algorithm is O(log₃n), because you reduce the problem size to one third in each iteration. Complexity-wise, O(log₃n) == O(log₂n) == O(log₁₀n), so it doesn't matter whether you divide your problem size by 3 or by 10. The only better complexity is O(1), which would mean that, regardless of the size of the problem, you could find the fake coin in a fixed number of operations; that is quite unlikely.
Note that this algorithm assumes we can find the sum of a range of elements in O(1); otherwise the algorithm's complexity is O(n).
You ask "why 3-way division is better in simple maths." Better than what? In this problem, it's the best solution because it achieves the answer in the fewest weighings. The properties of a trivial balance scale yield three basic results: left is heavier, right is heavier, and equal weights. That's a 3-way decision, so information theory yields that the best algorithm is to divide the objects in three (if you can practically achieve it) at each phase.
You need 4 weighings for 28-81 coins.
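As a quick illustrative check of that count (my own snippet, not part of the original answer): the minimum number of weighings is the smallest k with 3^k >= n, and for n = 70 that is 4, since 3^3 = 27 < 70 <= 81 = 3^4.

fun minWeighings(n: Int): Int {
    var k = 0
    var capacity = 1L
    while (capacity < n) { capacity *= 3; k++ }  // smallest k with 3^k >= n
    return k
}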
Fortunately, your problem allows for exhaustive testing.
The code above performs one trial of random testing. That's okay for starters, but with only 70 cases to check, I recommend that you try them all. Wrap your main program in a loop over range(70), something like this:
n = 70
for bad_coin in range(70):
    data = [1] * n
    data[bad_coin] = 0
    total_weighing, position = getmin(data, 0, n, n)
    print("trial", bad_coin)
    if total_weighing != 4:
        print("Wrong number of weighings:", total_weighing)
    if position != bad_coin:
        print("Wrong ID:", position)
This will quickly show any error in your program for the assigned 70 coins.
BTW, replace the if statements with assert, if you're comfortable with that feature.
The Fibonacci sequence is obtained by starting with 0 and 1 and then adding the last two numbers to get the next one.
All positive integers can be represented as a sum of a set of Fibonacci numbers without repetition. For example, 13 can be the sum of the sets {13}, {5, 8} or {2, 3, 8}. But, as we have seen, some numbers have more than one set whose sum is the number. If we add the constraint that the sets cannot contain two consecutive Fibonacci numbers, then we have a unique representation for each number.
We will use a binary sequence (just zeros and ones) for that. For example, 17 = 1 + 3 + 13, so 17 = 100101. See figure 2 for a detailed explanation.
I want to turn some integers into this representation, but the integers may be very big. How do I do this efficiently?
The problem itself is simple. You always pick the largest Fibonacci number less than the remainder. You can ignore the constraint on consecutive numbers: if you ever needed two consecutive ones, the next Fibonacci number is their sum, so you should have picked that one instead of the initial two.
So the problem remains how to quickly find the largest fibonacci number less than some number X.
There's a known trick that starting with the matrix (call it M)
1 1
1 0
You can compute Fibonacci numbers by matrix multiplication (the x-th number is in M^x). More details here: https://www.nayuki.io/page/fast-fibonacci-algorithms . The end result is that you can compute the number you're looking for in O(log N) matrix multiplications.
You'll need large-number arithmetic (multiplications and additions) if the values don't fit into existing types.
Also, store the matrices corresponding to the powers of two you compute the first time, since you'll need them again for the results.
Overall this should be O((log N)² * cost of a large-number multiplication/addition).
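For concreteness, here is a minimal sketch along the lines of the fast-doubling variant described on the linked page (my illustration, not the answerer's code). It returns the pair (F(n), F(n+1)) and uses BigInteger so large values don't overflow:

import java.math.BigInteger

fun fib(n: Long): Pair<BigInteger, BigInteger> {
    if (n == 0L) return Pair(BigInteger.ZERO, BigInteger.ONE)
    val (a, b) = fib(n / 2)          // a = F(k), b = F(k+1), with k = n/2
    val c = a * (b + b - a)          // F(2k)   = F(k) * (2*F(k+1) - F(k))
    val d = a * a + b * b            // F(2k+1) = F(k)^2 + F(k+1)^2
    return if (n % 2 == 0L) Pair(c, d) else Pair(d, c + d)
}

Finding the largest Fibonacci number below X is then a matter of doubling the index until fib(index) exceeds X, then searching between the last two indices.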
First, I want to tell you that I really liked this question. I didn't know that all positive integers can be represented as a sum of a set of Fibonacci numbers without repetition; I saw the proof by induction and it was awesome.
To answer your question, I think we have to figure out how the representation is created. I think the easy way is to repeatedly find the closest Fibonacci number below the remaining value.
For example, if we want to represent 40:
We have Fib(9) = 34 and Fib(10) = 55, so the first element in the representation is Fib(9).
Since 40 - Fib(9) = 6, and Fib(5) = 5 and Fib(6) = 8, the next element is Fib(5). So we have 40 = Fib(9) + Fib(5) + Fib(2).
Allow me to write this in C#
using System;
using System.Collections.Generic;

class Program
{
    static void Main(string[] args)
    {
        List<int> fibPresentation = new List<int>();
        int numberToPresent = Convert.ToInt32(Console.ReadLine());
        while (numberToPresent > 0)
        {
            int k = 1;
            // find the smallest k whose Fibonacci number exceeds the remainder
            while (CalculateFib(k) <= numberToPresent)
            {
                k++;
            }
            numberToPresent = numberToPresent - CalculateFib(k - 1);
            fibPresentation.Add(k - 1);
        }
    }

    static int CalculateFib(int n)
    {
        if (n == 1)
            return 1;
        int a = 0;
        int b = 1;
        // Compute the Fibonacci sequence iteratively in n steps.
        for (int i = 0; i < n; i++)
        {
            int temp = a;
            a = b;
            b = temp + b;
        }
        return a;
    }
}
Your result will be in fibPresentation
This encoding is more accurately called the "Zeckendorf representation": see https://en.wikipedia.org/wiki/Fibonacci_coding
A greedy approach works (see https://en.wikipedia.org/wiki/Zeckendorf%27s_theorem) and here's some Python code that converts a number to this representation. It uses the first 100 Fibonacci numbers and works correctly for all inputs up to 927372692193078999175 (and incorrectly for any larger inputs).
fibs = [0, 1]
for _ in range(100):
    fibs.append(fibs[-2] + fibs[-1])

def zeck(n):
    i = len(fibs) - 1
    r = 0
    while n:
        if fibs[i] <= n:
            r |= 1 << (i - 2)   # bit k stands for fibs[k + 2]
            n -= fibs[i]
        i -= 1
    return r

print(bin(zeck(17)))
The output is:
0b100101
As the greedy approach seems to work, it suffices to be able to invert the relation N=Fn.
By Binet's formula, Fₙ = [φⁿ/√5], where the brackets denote rounding to the nearest integer. Then with n = floor(log_φ(√5·N)) you are very close to the solution.
17 => n = floor(7.5599...) => F7 = 13
4 => n = floor(4.5531) => F4 = 3
1 => n = floor(1.6722) => F1 = 1
(I do not exclude that some n values can be off by one.)
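A tiny sketch of that inversion (my addition; as the caveat above says, the index may occasionally be off by one, so a real implementation should check the neighbouring indices):

import kotlin.math.floor
import kotlin.math.ln
import kotlin.math.sqrt

val PHI = (1 + sqrt(5.0)) / 2

// Approximate index n of the largest Fibonacci number <= x, via n = floor(log_phi(sqrt(5) * x)).
fun fibIndex(x: Double): Int = floor(ln(sqrt(5.0) * x) / ln(PHI)).toInt()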
I'm not sure whether this is efficient enough for you, but you could simply use backtracking to find a (the) valid representation.
I would try to start the backtracking steps by taking the biggest possible Fibonacci number and only switch to smaller ones if the no-consecutive or the no-repetition constraint is violated.
I have written this code to check whether a number is prime (for numbers up to 10^9+7).
Is this a good method?
What will the time complexity be?
What I have done is build an unordered_set which stores the prime numbers up to sqrt(n).
When checking whether a number is prime, I first check whether it is less than the maximum number in the table.
If it is less, it is looked up in the table, so the complexity should be O(1) in this case.
If it is more, the number is put through a divisibility test against the numbers in the set of primes.
#include <iostream>
#include <set>
#include <math.h>
#include <unordered_set>
#define sqrt10e9 31623
using namespace std;

unordered_set<long long> primeSet = { 2, 3 }; // used for fast lookups

void genrate_prime_set(long range) // generates the primes up to sqrt(10^9+7)
{
    bool flag;
    set<long long> tempPrimeSet = { 2, 3 }; // a temporary ordered set is used for generation
    set<long long>::iterator j;
    for (int i = 3; i <= range; i = i + 2)
    {
        //cout << i << " ";
        flag = true;
        for (j = tempPrimeSet.begin(); *j * *j <= i; ++j)
        {
            if (i % (*j) == 0)
            {
                flag = false;
                break;
            }
        }
        if (flag)
        {
            primeSet.insert(i);
            tempPrimeSet.insert(i);
        }
    }
}

bool is_prime(long long i, unordered_set<long long> primeSet)
{
    bool flag = true;
    if (i <= sqrt10e9) // if the number is within the range of the lookup table
        return primeSet.count(i);

    // otherwise, trial-divide by the primes in the table
    for (unordered_set<long long>::iterator j = primeSet.begin(); j != primeSet.end(); ++j)
    {
        if (*j * *j <= i && i % (*j) == 0)
        {
            flag = false;
            break;
        }
    }
    return flag;
}

int main()
{
    //long long testCases, a, b, kiwiCount;
    bool primeFlag = true;
    //unordered_set<int> primeNum;
    genrate_prime_set(sqrt10e9);
    cout << primeSet.size() << "\n";
    cout << is_prime(9999991, primeSet);
    return 0;
}
This doesn't strike me as a particularly efficient way to do the job at hand.
Although it probably won't make a big difference in the end, the efficient way to generate all the primes up to some specific limit is clearly to use a sieve; the Sieve of Eratosthenes is simple and fast. There are a couple of modifications that can be faster, but for the small size you're dealing with, they're probably not worthwhile.
Sieves also normally produce their output in a more effective format than the one you're currently using. In particular, you typically dedicate just one bit to each candidate prime (i.e., each odd number), which ends up zeroed if the number is composite and set if it's prime (you can, of course, reverse the sense if you prefer).
Since you only need one bit for each odd number from 3 to 31623, this requires only about 16 Kbits, or about 2 KB; a truly minuscule amount of memory by modern standards (especially: little enough to fit in L1 cache quite easily).
Since the bits are stored in order, it's also trivial to test only the factors up to the square root of the number you're testing, instead of testing against all the numbers in the table (including those greater than the square root, which is obviously a waste of time). This also optimizes memory access in case some of the table is not in cache (i.e., you can access all the data in order, making life as easy as possible for the hardware prefetcher).
If you wanted to optimize further, I'd consider just using the sieve to find all primes up to 10^9+7 and looking inputs up directly. Whether this is a win will depend (heavily) upon the number of queries you can expect to receive. A quick check shows that a simple implementation of the Sieve of Eratosthenes can find all primes up to 10^9 in about 17 seconds. After that, each query is (of course) essentially instantaneous (i.e., the cost of a single memory read). This does require around 120 megabytes of memory for the result of the sieve, which would once have been a major consideration, but (except on fairly limited systems) normally wouldn't be anymore.
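Here is a minimal sketch of the bit-per-odd-number layout described above (my illustration using java.util.BitSet, not the answerer's code; 2 has to be handled separately, since only odd numbers are represented):

import java.util.BitSet

// Sieve of Eratosthenes over odd numbers only: bit i represents the number 2*i + 3.
fun oddSieve(limit: Int): BitSet {
    val size = (limit - 1) / 2
    val isPrime = BitSet(size).apply { set(0, size) }
    var i = 0
    while (true) {
        val p = 2 * i + 3
        if (p.toLong() * p > limit) break
        if (isPrime[i]) {
            // cross off odd multiples of p, starting at p*p;
            // a step of p in index space is a step of 2*p in value space
            var j = (p * p - 3) / 2
            while (j < size) { isPrime.clear(j); j += p }
        }
        i++
    }
    return isPrime
}

For the table in the question, oddSieve(31623) plus the special case for 2 reproduces the same set of primes in roughly 2 KB.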
The very short answer: do research on the subject, starting with the term "Miller-Rabin"
The short answer is no:
Looking for factors of a number is a poor way to check for primality
Exhaustively searching through primes is a poor way to look for factors
Especially if you search through every prime, rather than just the ones less than or equal to the square root of the number
Doing a primality test on each number is a poor way to generate a list of primes
Also, you should take in primeSet by reference rather than copy, if it really needs to be a parameter.
Note: testing small primes to see if they divide a number is a useful first step of a primality test, but should generally only be used for the smallest primes before switching to a better method
No, it's not a very good way to determine if a number is prime. Here is pseudocode for a simple primality test that is sufficient for numbers in your range; I'll leave it to you to translate to C++:
function isPrime(n)
    d := 2
    while d * d <= n
        if n % d == 0
            return False
        d := d + 1
    return True
This works by trying every potential divisor up to the square root of the input number n. If no divisor has been found, the input number cannot be composite, i.e. of the form n = p × q, because at least one of the two factors p or q must be less than or equal to the square root of n (if both were greater, their product would exceed n).
There are better ways to determine primality; for instance, after initially checking whether the number is even (and hence prime only if n = 2), it is only necessary to test odd potential divisors, halving the amount of work. If you have a list of primes up to the square root of n, you can use that list as trial divisors and make the process even faster. And there are other techniques for larger n.
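A minimal sketch of that even/odd refinement (my illustration, not the answerer's code):

fun isPrime(n: Long): Boolean {
    if (n < 2) return false
    if (n % 2 == 0L) return n == 2L   // the only even prime
    var d = 3L
    while (d * d <= n) {
        if (n % d == 0L) return false
        d += 2                        // odd divisors only
    }
    return true
}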
But that should be enough to get you started. When you are ready for more, come back here and ask more questions.
I can only suggest a way to use a library function in Java to check the primality of a number. As for the other questions, I do not have any answers.
java.math.BigInteger.isProbablePrime(int certainty) returns true if the BigInteger is probably prime, and false if it's definitely composite. If certainty is ≤ 0, true is returned. You should try to use it in your code, so try rewriting your program in Java.
Parameters
certainty - a measure of the uncertainty that the caller is willing to tolerate: if the call returns true the probability that this BigInteger is prime exceeds (1 - 1/2^certainty). The execution time of this method is proportional to the value of this parameter.
Return Value
This method returns true if this BigInteger is probably prime, false if it's definitely composite.
Example
The following example shows the usage of the java.math.BigInteger.isProbablePrime() method:
import java.math.*;

public class BigIntegerDemo {
    public static void main(String[] args) {
        // create 3 BigInteger objects
        BigInteger bi1, bi2, bi3;
        // create 3 Boolean objects
        Boolean b1, b2, b3;
        // assign values to bi1, bi2
        bi1 = new BigInteger("7");
        bi2 = new BigInteger("9");
        // perform isProbablePrime on bi1, bi2
        b1 = bi1.isProbablePrime(1);
        b2 = bi2.isProbablePrime(1);
        b3 = bi2.isProbablePrime(-1);
        String str1 = bi1 + " is prime with certainty 1 is " + b1;
        String str2 = bi2 + " is prime with certainty 1 is " + b2;
        String str3 = bi2 + " is prime with certainty -1 is " + b3;
        // print b1, b2, b3 values
        System.out.println(str1);
        System.out.println(str2);
        System.out.println(str3);
    }
}
Output
7 is prime with certainty 1 is true
9 is prime with certainty 1 is false
9 is prime with certainty -1 is true