It is a straightforward question: is there a faster alternative to all(a(:,i)==a,1) in MATLAB?
I'm thinking of an implementation that benefits from short-circuit evaluation throughout the whole process. I mean, all() definitely benefits from short-circuit evaluation, but a(:,i)==a doesn't.
I tried the following code,
% example for the input matrix
m = 3; % m and n aren't necessarily equal to those values.
n = 5000; % It's only possible to know in advance that 'm' << 'n'.
a = randi([0,5],m,n); % the maximum value of 'a' isn't necessarily equal to
% 5 but it's possible to state that every element in
% 'a' is a positive integer.
% all, equal solution
tic
for i = 1:n % repeating to scale the elapsed time up by orders of magnitude
    %%%%%%%%%% all and equal solution %%%%%%%%%%
    ax_boo = all(a(:,i)==a,1);
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc
% alternative solution
tic
for i = 1:n % repeating to scale the elapsed time up by orders of magnitude
    %%%%%%%%%%% alternative solution %%%%%%%%%%%
    ax_boo = a(1,i) == a(1,:);
    for k = 2:m
        ax_boo(ax_boo) = a(k,i) == a(k,ax_boo);
    end
    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
end
toc
but it's intuitive that any for-loop solution within the MATLAB environment will naturally be slower. I'm wondering if there is a MATLAB built-in function, written in a faster language, that does this.
EDIT:
After running more tests I found out that implicit expansion does have a performance impact when evaluating a(:,i)==a. If the matrix a has more than one row, all(repmat(a(:,i),[1,n])==a,1) may be faster than all(a(:,i)==a,1), depending on the number of columns n. For n = 5000, explicit expansion with repmat proved to be faster.
But I think that a generalization of Kenneth Boyd's answer is the "ultimate solution" if all elements of a are positive integers. Instead of dealing with a (an m-by-n matrix) in its original form, I will store and deal with adec (a 1-by-n vector):
exps = ((0):(m-1)).';
base = max(a,[],[1,2]) + 1;
adec = sum( a .* base.^exps , 1 );
In other words, each column will be encoded to one integer. And of course adec(i)==adec is faster than all(a(:,i)==a,1).
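For illustration, here is the same column-encoding idea sketched in Python/NumPy (my own translation; the question itself is about MATLAB):
import numpy as np

m, n = 3, 5000
a = np.random.randint(0, 6, size=(m, n))  # integers in [0,5], like the question's randi([0,5])
base = a.max() + 1
exps = base ** np.arange(m)               # base^0, base^1, ..., base^(m-1)
adec = exps @ a                           # each column encoded as one integer
ax_boo = adec == adec[0]                  # columns equal to column 0, one scalar comparison per column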
EDIT 2:
I forgot to mention that the adec approach has a functional limitation. Even storing adec as uint64, the inequality base^m <= 2^64 must hold, since the largest encoded value is base^m - 1 and it has to fit in a uint64.
Since your goal is to count the number of columns that match, my example converts the binary encoding of each column to a decimal integer; then you just loop over the possible values (with 3 binary rows there are 8 possible values) and count the number of matches.
a_dec = 2.^(0:(m-1)) * a;
num_poss_values = 2 ^ m;
num_matches = zeros(num_poss_values, 1);
for i = 1:num_poss_values
    num_matches(i) = sum(a_dec == (i - 1));
end
On my computer, using R2020a, here are the execution times for your first two options and the code above:
Elapsed time is 0.246623 seconds.
Elapsed time is 0.553173 seconds.
Elapsed time is 0.000289 seconds.
So my code is 853 times faster!
I wrote my code so it will work with m being an arbitrary integer.
The num_matches variable contains the number of columns that add up to 0, 1, 2, ...7 when converted to a decimal.
As an alternative you can use the third output of unique:
[~, ~, iu] = unique(a.', 'rows');
for i = 1:n
    ax_boo = iu(i) == iu;
end
As indicated in a comment:
ax_boo isolates the indices of the columns I have to sum in a row vector b. So, basically the next line would be something like c = sum(b(ax_boo),2);
This is a typical use case for accumarray:
[~, ~, iu] = unique(a.', 'rows');
C = accumarray(iu,b);
for i = 1:n
    c = C(i);
end
Working on Project Euler problem 26, I want to use an algorithm that looks for the prime p with the largest multiplicative order of 10 modulo p. Essentially the problem is to look for the denominator which creates the longest repetend in a decimal expansion. After a bunch of Wikipedia reading, it looks like the prime described above would fulfill that. But, unfortunately, it looks like taking the very large powers of 10 results in an error. My question then is: is there a way of getting around this error (making the numbers smaller), or should I abandon this strategy and just do long division (with the plan being to focus on the primes)?
[Of note: in the order_ten method I can get it to run if I limit the powers of 10 to 300, and probably a bit higher; that cutoff matches the range of a double-precision float, which is what math.pow works in.]
import math
def prime_seive(limit):
    seive_list = [True] * limit
    seive_list[0] = seive_list[1] = False
    for i in range(2, limit):
        if seive_list[i]:
            n = 2
            while i * n < limit:
                seive_list[i * n] = False  # get rid of multiples
                n = n + 1
    prime_numbers = [i for i, j in enumerate(seive_list) if j]
    return prime_numbers
def order_ten(n):
    for k in range(1, n):
        if (math.pow(10, k) - 1) % n == 0:
            return k

primes = prime_seive(1000)
max_order = 0
max_order_d = -1
for x in reversed(primes):
    order = order_ten(x)
    if order > max_order:
        max_order = order
        max_order_d = x
print max_order
print max_order_d
I suspect that the problem is that your numbers get too large when first taking a large power of ten and then computing the value mod n. (For instance, if I asked you to compute 10^11 mod 11, you could remark that 10 mod 11 is (-1), and thus 10^11 mod 11 is just (-1)^11 mod 11, i.e. -1.)
Maybe you could try programming your own exponentiation routine mod n, something like (in pseudo code)
myPow (int k, int n) {
    if (k == 0) return 1;
    else return ((myPow(k-1, n) * 10) % n);
}
This way you never deal with numbers larger than n.
As written, you get linear complexity in k for computing the power, and thus quadratic complexity in n for your function order_ten(n). If this is too slow for you, you could improve myPow with some smart (square-and-multiply) exponentiation.
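In Python specifically, the built-in three-argument pow already does modular exponentiation by repeated squaring, so order_ten can drop math.pow and never build the huge power at all. A minimal sketch of the question's function using that built-in (the test pow(10, k, n) == 1 is equivalent to (10**k - 1) % n == 0 for n > 1):
def order_ten(n):
    for k in range(1, n):
        # pow(10, k, n) computes 10**k mod n without forming 10**k,
        # so nothing overflows and each test stays fast
        if pow(10, k, n) == 1:
            return k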
Given a number K which is the product of two different numbers (A, B), find the maximum number (<= A and <= B) whose square divides K.
E.g.: K = 54 (6*9). Both numbers are available, i.e. 6 and 9.
My approach is fairly simple and trivial:
Take the smaller of the two (6 in this case); let's call it A.
Square the number and divide K by it; if the division is exact, that's the number.
Else A = A-1, until A = 1.
For the given example, 3*3 = 9 divides K, and hence 3 is the answer.
Looking for a better algorithm than the trivial solution.
Note: the test cases number in the 1000s, so the best possible approach is needed.
I am sure someone else will come up with a nice answer involving modulus arithmetic. Here is a naive approach...
Each of the factors can themselves be factored (though it might be an expensive operation).
Given the factors, you can then look for groups of repeated factors.
For instance, using your example:
Prime factors of 9: 3, 3
Prime factors of 6: 2, 3
All prime factors: 2, 3, 3, 3
The 3 appears (at least) twice, so you have your answer (the square of 3 divides 54).
Second example of 36 x 9 = 324
Prime factors of 36: 2, 2, 3, 3
Prime factors of 9: 3, 3
All prime factors: 2, 2, 3, 3, 3, 3
So you have two 2s and four 3s, which means 2x3x3 is repeated. 2x3x3 = 18, so the square of 18 divides 324.
Edit: python prototype
import math
def factors(num, dict):
    """ This finds the factors of a number recursively.
        It is not the most efficient algorithm, and I
        have not tested it a lot. You should probably
        use another one. dict is a dictionary which looks
        like {factor: occurrences, factor: occurrences, ...}
        It must contain at least {2: 0} but need not have
        any other pre-populated elements. Factors will be added
        to this dictionary as they are found.
    """
    while num % 2 == 0:
        num //= 2            # integer division so num stays an int
        dict[2] += 1
    i = 3
    found = False
    while not found and i <= int(math.sqrt(num)):
        if num % i == 0:
            found = True
            factors(i, dict)
            factors(num // i, dict)
        else:
            i += 2
    if not found and num > 1:  # num == 1 would add a spurious factor of 1
        if num in dict:
            dict[num] += 1
        else:
            dict[num] = 1
    return 0
# MAIN ROUTINE IS HERE
n1 = 37  # first number (6 in your example)
n2 = 41  # second number (9 in your example)
dict = {2: 0}      # initialise factors (start with "no factors of 2")
factors(n1, dict)  # find the factors of n1 and add them to the dictionary
factors(n2, dict)  # find the factors of n2 and add them to the dictionary
sqfac = 1
# now find all factors repeated twice and multiply them together
for k in dict.keys():
    dict[k] //= 2          # integer division: count the repeated pairs
    sqfac *= k ** dict[k]
# here is the result
print(sqfac)
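For the example inputs 37 and 41 (both prime, so no factor is repeated) this prints 1; with n1 = 6 and n2 = 9 it prints 3, matching the first worked example above.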
Answer in C++
#include <iostream>
using namespace std;

// Tests whether i squared divides k; prefers the larger of the two
// candidates and falls back to the other one if the first fails.
void func(int i, int j)
{
    const int k = 54;
    if (k % (i * i) == 0)                  // i*i divides k exactly
    {
        if (i < j && k % (j * j) == 0)
        {
            cout << "Number is correct: " << j << endl;
        }
        else
        {
            cout << "Number is correct: " << i << endl;
        }
    }
    else
    {
        cout << "Number is wrong" << endl;
        if (k % (j * j) == 0)              // try the other multiple
        {
            cout << "Number is correct: " << j << endl;
        }
    }
}
Explanation:
First test whether i squared divides k exactly. If it does, and the other multiple is greater and its square also divides k, prefer the greater one; otherwise i is correct. If i squared does not divide k, print "Number is wrong" and then test the other multiple j the same way.
If I got the problem correctly, I see that you have a rectangle of length A, width B, and area K,
and you want to convert it to a square while losing the minimum possible area.
If this is the case, the problem with your algorithm is not the cost of iterating through multiple iterations until you get the output.
Rather, the problem is that your algorithm depends heavily on the length A and width B of the input rectangle,
while it should depend only on the area K.
For example:
Assume A = 1, B = 25.
Then K = 25 (the rectangle's area).
Your algorithm will take the minimum value, which is A, and accept it as the answer after a single iteration. That is fast, but it leads to the wrong answer: a square of area 1, wasting the remaining 24 (whatever the unit, cm or m).
The correct answer here is 5, which will never be reached by your algorithm.
So, in my solution I assume a single input K.
My idea is as follows:
x = sqrt(K)
if x is an integer, x is the answer
else loop from x-1 down to 1 (x--):
    if K / x^2 is an integer, x is the answer
This might take extra iterations but will guarantee an accurate answer.
Also, there might be some concern about the cost of sqrt(K),
but it will be called just once, to avoid being misled by the length and width inputs.
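A minimal Python sketch of that idea, using math.isqrt so the square-root step is exact (the function name is my own):
import math

def largest_square_side(k):
    """Largest x such that x*x divides k."""
    x = math.isqrt(k)          # floor of sqrt(k), computed exactly
    while x > 1:
        if k % (x * x) == 0:
            return x
        x -= 1
    return 1

print(largest_square_side(54))   # 3
print(largest_square_side(25))   # 5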
https://projecteuler.net/problem=35
All problems on Project Euler are supposed to be solvable by a program in under 1 minute. My solution, however, has a runtime of almost 3 minutes. Other solutions I've seen online are similar to mine conceptually, but have runtimes that are orders of magnitude faster. Can anyone help me make my code more efficient?
Thanks!
#genPrimes takes an argument n and returns a list of all prime numbers less than n
def genPrimes(n):
    primeList = [2]
    number = 3
    while number < n:
        isPrime = True
        for element in primeList:
            if element > number**0.5:
                break
            if number % element == 0 and element <= number**0.5:
                isPrime = False
                break
        if isPrime:
            primeList.append(number)
        number += 2
    return primeList
#isCircular takes a number as input and returns True if all rotations of that number are prime
def isCircular(prime):
    original = prime
    isCircular = True
    prime = int(str(prime)[-1] + str(prime)[:len(str(prime)) - 1])
    while prime != original:
        if prime not in primeList:
            isCircular = False
            break
        prime = int(str(prime)[-1] + str(prime)[:len(str(prime)) - 1])
    return isCircular

primeList = genPrimes(1000000)
circCount = 0
for prime in primeList:
    if isCircular(prime):
        circCount += 1
print circCount
Two modifications of your code yield a pretty fast solution (roughly 2 seconds on my machine):
Generating primes is a common problem with many solutions on the web. I replaced yours with rwh_primes1 from this article:
def genPrimes(n):
    sieve = [True] * (n/2)
    for i in xrange(3, int(n**0.5)+1, 2):
        if sieve[i/2]:
            sieve[i*i/2::i] = [False] * ((n-i*i-1)/(2*i)+1)
    return [2] + [2*i+1 for i in xrange(1, n/2) if sieve[i]]
It is about 65 times faster (0.04 seconds).
The most important step I'd suggest, however, is to filter the list of generated primes. Since each circularly shifted version of an integer has to be prime, the circular prime must not contain certain digits. The prime 23, e.g., can be easily spotted as an invalid candidate, because it contains a 2, which indicates divisibility by two when this is the last digit. Thus you might remove all such bad candidates by the following simple method:
def filterPrimes(primeList):
    for i in primeList[3:]:
        if '0' in str(i) or '2' in str(i) or '4' in str(i) \
                or '5' in str(i) or '6' in str(i) or '8' in str(i):
            primeList.remove(i)
    return primeList
Note that the loop starts at the fourth prime number to avoid removing the number 2 or 5.
The filtering step takes most of the computing time (about 1.9 seconds), but reduces the number of circular prime candidates dramatically from 78498 to 1113 (= 98.5 % reduction)!
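Most of that 1.9 seconds goes into the repeated list.remove calls, which are O(n) each; a list-comprehension variant of the same digit filter (my rewrite, not part of the measured code above) avoids them:
def filterPrimes(primeList):
    bad = set('024568')   # a last digit from this set makes some rotation composite
    return [p for p in primeList
            if p in (2, 5) or not bad & set(str(p))]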
The last step, the circulation of each remaining candidate, can be done as you suggested. If you wish, you can simplify the code as follows:
circCount = sum(map(isCircular, primeList))
Due to the reduced candidate set this step is completed in only 0.03 seconds.
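Putting the three steps together, using the genPrimes, filterPrimes and isCircular functions from above:
primeList = genPrimes(1000000)
primeList = filterPrimes(primeList)        # cheap digit-based pre-filter
circCount = sum(map(isCircular, primeList))
print circCount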
I'm writing a recursive infinite prime number generator, and I'm almost sure I can optimize it better.
Right now, aside from a lookup table of the first dozen primes, each call to the recursive function receives a list of all previous primes.
Since it's a lazy generator, right now I'm just filtering out any number that is 0 modulo any of the previous primes, and taking the first unfiltered result. (The check I'm using short-circuits, so the first time a previous prime divides the current number evenly, it aborts with that information.)
Right now, my performance degrades around the search for the 400th prime (37,813). I'm looking for ways to use the unique fact that I have a list of all prior primes, and am only searching for the next one, to improve my filtering algorithm. (Most information I can find offers non-lazy sieves to find primes under a limit, or ways to find the nth prime p(n) given p(n-1), not optimizations to find p(n) given the primes 2...p(n-1).)
For example, I know that the nth prime p(n) must reside in the range (p(n-1) + 1)...(p(n-1) + p(n-2)). Right now I start my filtering of integers at p(n-1) + 2 (since p(n-1) + 1 can only be prime for p(n-1) = 2, which is precomputed). But since this is a lazy generator, knowing the terminal bound of the range (p(n-1) + p(n-2)) doesn't help me filter anything.
What can I do to filter more effectively given all previous primes?
Code Sample
#doc """
Creates an infinite stream of prime numbers.
iex> Enum.take(primes, 5)
[2, 3, 5, 7, 11]
iex> Enum.take_while(primes, fn(n) -> n < 25 end)
[2, 3, 5, 7, 11, 13, 17, 19, 23]
"""
#spec primes :: Stream.t
def primes do
Stream.unfold( [], fn primes ->
next = next_prime(primes)
{ next, [next | primes] }
end )
end
defp next_prime([]), do: 2
defp next_prime([2 | _]), do: 3
defp next_prime([3 | _]), do: 5
defp next_prime([5 | _]), do: 7
# ... etc
defp next_prime(primes) do
start = Enum.first(primes) + 2
Enum.first(
Stream.drop_while(
Integer.stream(from: start, step: 2),
fn number ->
Enum.any?(primes, fn prime ->
rem(number, prime) == 0
end )
end
)
)
end
The primes function starts with an empty list, gets the next prime for it (2 initially), and then 1) emits it from the Stream and 2) adds it to the top of the primes stack used in the next call. (I'm sure this stack is the source of some slowdown.)
The next_prime function takes in that stack. Starting from the last known prime + 2, it creates an infinite stream of integers, drops each integer that divides evenly by any known prime in the list, and then returns the first remaining occurrence.
This is, I suppose, something similar to a lazy incremental Eratosthenes's sieve.
You can see some basic attempts at optimization: I start checking at p(n-1) + 2, and I step over even numbers.
I tried a more verbatim Eratosthenes's sieve by just passing the Integer.stream through each calculation, and after finding a prime, wrapping the Integer.stream in a new Stream.drop_while that filtered just multiples of that prime out. But since Streams are implemented as anonymous functions, that mutilated the call stack.
It's worth noting that I'm not assuming you need all prior primes to generate the next one. I just happen to have them around, thanks to my implementation.
For any number k you only need to try division with primes up to and including √k. This is because any prime factor larger than √k would have to be multiplied with a factor smaller than √k.
Proof:
√k * √k = k, so (a+√k) * √k > k (for all 0 < a < (k-√k)). From this it follows that if (a+√k) divides k, the complementary divisor must be smaller than √k.
This is commonly used to speed up finding primes tremendously.
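As a concrete illustration of the √k bound (a generic Python sketch, not the asker's Elixir code):
import math

def is_prime(k):
    """Trial division by 2 and by odd numbers up to and including sqrt(k)."""
    if k < 2:
        return False
    if k % 2 == 0:
        return k == 2
    for d in range(3, math.isqrt(k) + 1, 2):
        if k % d == 0:
            return False
    return True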
You don't need all prior primes, just those below the square root of your current production point are enough, when generating composites from primes by the sieve of Eratosthenes algorithm.
This greatly reduces the memory requirements. The primes are then simply those odd numbers which are not among the composites.
Each prime p produces a chain of its multiples, starting from its square, enumerated with the step of 2p (because we work only with odd numbers). These multiples, each with its step value, are stored in a dictionary, thus forming a priority queue. Only the primes up to the square root of the current candidate are present in this priority queue (the same memory requirement as that of a segmented sieve of E.).
Symbolically, the sieve of Eratosthenes is
P = {3,5,7,9, ...} \ ⋃ { {p², p²+2p, p²+4p, p²+6p, ...} | p in P }
Each odd prime generates a stream of its multiples by repeated addition; all these streams merged together give us all the odd composites; and primes are all the odd numbers without the composites (and the one even prime number, 2).
In Python (can be read as an executable pseudocode, hopefully),
def postponed_sieve():                   # postponed sieve, by Will Ness,
    yield 2; yield 3;                    # https://stackoverflow.com/a/10733621/849891
    yield 5; yield 7;                    # original code David Eppstein / Alex Martelli
    D = {}                               # 2002, http://code.activestate.com/recipes/117119
    ps = (p for p in postponed_sieve())  # a separate Primes Supply:
    p = ps.next() and ps.next()          # (3) a Prime to add to dict
    q = p*p                              # (9) when its sQuare is
    c = 9                                # the next Candidate
    while True:
        if c not in D:                   # not a multiple of any prime seen so far:
            if c < q: yield c            # a prime, or
            else:     # (c==q):          # the next prime's square:
                add(D, c + 2*p, 2*p)     # (9+6, 6 : 15,21,27,33,...)
                p = ps.next()            # (5)
                q = p*p                  # (25)
        else:                            # 'c' is a composite:
            s = D.pop(c)                 # step of increment
            add(D, c + s, s)             # next multiple, same step
        c += 2                           # next odd candidate

def add(D, x, s):                        # make no multiple keys in Dict
    while x in D: x += s                 # increment by the given step
    D[x] = s
Once a prime is produced, it can be forgotten. A separate prime supply is taken from a separate invocation of the same generator, recursively, to maintain the dictionary. And the prime supply for that one is taken from another, recursively as well. Each needs to be supplied only up to the square root of its production point, so very few generators are needed overall (on the order of log log N of them), and their sizes are asymptotically insignificant (sqrt(N), sqrt(sqrt(N)), etc.).
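The code above is Python 2 (ps.next()); under Python 3 the two ps.next() calls become next(ps), with everything else unchanged. A quick check, assuming that rename:
from itertools import islice

print(list(islice(postponed_sieve(), 10)))
# -> [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]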
I wrote a program that generates the prime numbers in order, without limit, and used it to sum the first billion primes at my blog. The algorithm uses a segmented Sieve of Eratosthenes; additional sieving primes are calculated at each segment, so the process can continue indefinitely, as long as you have space to store the sieving primes. Here's pseudocode:
function init(delta)          # Sieve of Eratosthenes
    m, ps, qs := 0, [], []
    sieve := makeArray(2 * delta, True)
    for p from 2 to delta
        if sieve[p]
            m := m + 1; ps.insert(p)
            qs.insert(p + (p-1) / 2)
            for i from p+p to 2 * delta step p
                sieve[i] := False
    return m, ps, qs, sieve
function advance(m, ps, qs, sieve, bottom, delta)
    for i from 0 to delta - 1
        sieve[i] := True
    for i from 0 to m - 1
        qs[i] := (qs[i] - delta) % ps[i]
    p := ps[0] + 2
    while p * p <= bottom + 2 * delta
        if isPrime(p)       # trial division
            m := m + 1; ps.insert(p)
            qs.insert((p*p - bottom - 1) / 2)
        p := p + 2
    for i from 0 to m - 1
        for j from qs[i] to delta step ps[i]
            sieve[j] := False
    return m, ps, qs, sieve
Here ps is the list of sieving primes less than the current maximum and qs is the offset of the smallest multiple of the corresponding ps in the current segment. The advance function clears the bitarray, resets qs, extends ps and qs with new sieving primes, then sieves the next segment.
function genPrimes()
    bottom, i, delta := 0, 1, 50000
    m, ps, qs, sieve := init(delta)
    yield 2
    while True
        if i == delta                # reset for next segment
            i, bottom := -1, bottom + 2 * delta
            m, ps, qs, sieve := advance(m, ps, qs, sieve, bottom, delta)
        else if sieve[i]             # found prime
            yield bottom + 2*i + 1
        i := i + 1
The segment size 2 * delta is arbitrarily set to 100000. This method requires O(sqrt(n)) space for the sieving primes plus constant space for the sieve.
It is slower but saves space to generate candidates with a wheel and test the candidates for primality.
function genPrimes()
    w, wheel := 0, [1,2,2,4,2,4,2,4,6,2,6,4,2,4,
                    6,6,2,6,4,2,6,4,6,8,4,2,4,2,4,8,6,4,6,
                    2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10]
    p := 2; yield p
    repeat
        p := p + wheel[w]
        if w == 51 then w := 4 else w := w + 1
        if isPrime(p) yield p
It may be useful to begin with a sieve and switch to a wheel when the sieve grows too large. Even better is to continue sieving with some fixed set of sieving primes, once the set grows too large, then report only those values bottom + 2*i + 1 that pass a primality test.
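A direct Python rendering of the wheel generator above, with plain trial division standing in for isPrime (a sketch; in practice the primality test would be something stronger):
import math
from itertools import islice

def isPrime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, math.isqrt(n) + 1))

def genPrimes():
    wheel = [1,2,2,4,2,4,2,4,6,2,6,4,2,4,
             6,6,2,6,4,2,6,4,6,8,4,2,4,2,4,8,6,4,6,
             2,4,6,2,6,6,4,2,4,6,2,6,4,2,4,2,10,2,10]
    p, w = 2, 0
    yield p
    while True:
        p += wheel[w]
        w = 4 if w == 51 else w + 1   # wrap within the repeating 48-gap cycle
        if isPrime(p):
            yield p

print(list(islice(genPrimes(), 10)))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]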