Hi, I am trying to implement a software version of an analogue-to-digital converter (ADC) on Wolfram Cloud. The code is given below.
min = 0.0;
max = 15.0;
val = 5.0;
avg = 0.0;
ans = ConstantArray[0.0, 8];
i = 0;
while[i < 8, i = i + 1;
 avg = (max + min)/2;
 min = If[avg < val, min, avg];
 max = If[avg < val, avg, max];
 Insert[ans, If[val < avg, 0, 1], i];
 Print[avg]
];
Print[ans];
The problem I am facing is that the while loop only runs once; the output is shown below. I have also tried a For loop, but the results are the same.
7.5`
{0.`,0.`,0.`,0.`,0.`,0.`,0.`,0.`}
Any idea what's going on?
Use a capital 'W' for While. Note also that Insert returns a new list rather than modifying ans in place, so assign to the element with Part instead:
While[i < 8, i = i + 1;
 avg = (max + min)/2;
 min = If[avg < val, min, avg];
 max = If[avg < val, avg, max];
 ans[[i]] = If[val < avg, 0, 1];
 Print[avg]]
I have an ND array and, for each element in the array, I need to find the index of the largest element in a vector that is below it. I'm doing this many, many times, so I'm really interested in it being as fast as possible.
I have written a function locate that I call with some representative example data. (I use arrayfun to increase the number of times timeit runs the function to minimize random fluctuations.)
Xmin = 5;
Xmax = 300;
Xn = 40;
X = linspace(Xmin, Xmax, Xn)';
% iters = 1000;
% timeit(@() arrayfun(@(iter) locate(randi(Xmax + 10, 5, 6, 6), X), 1:iters, 'UniformOutput', false))
timeit(@() locate(randi(Xmax + 10, 5, 6, 6), X))
My original version of locate looked like this:
function indices = locate(x, X)
    % Preallocate
    indices = ones(size(x));
    % Find indices
    for ix = 1:numel(x)
        if x(ix) <= X(1)
            indices(ix) = 1;
        elseif x(ix) >= X(end)
            indices(ix) = length(X) - 1;
        else
            indices(ix) = find(X <= x(ix), 1, 'last');
        end
    end
end
And the fastest version that I can muster looks like this:
function indices = locate(x, X)
    % Preallocate
    indices = ones(size(x));
    % Find indices
    % indices(X(1) > x) = 1; % No need as indices are initialized to 1
    for iX = 1:length(X) - 1
        indices(X(iX) <= x & X(iX + 1) > x) = iX;
    end
    indices(X(iX) <= x) = length(X) - 1;
end
Can you think of any other way that would be faster?
What you need is basically the second output of histc:
[pos, bin] = histc(magic(5), X);
bin(bin == 0) = 1;
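For reference, here is a minimal sketch of how the locate function from the question could be built around histc's second output. The edge handling is an assumption meant to mirror the original branches (values below X(1) go to the first interval, values at or above X(end) to the last); histc bins matrices column-wise, so the input is flattened and reshaped:
function indices = locate(x, X)
    % Bin every element of x into the intervals defined by the edges in X.
    [~, bin] = histc(x(:), X);
    % Clamp out-of-range values the same way the original loop does.
    bin(x(:) < X(1)) = 1;
    bin(x(:) >= X(end)) = length(X) - 1;
    indices = reshape(bin, size(x));
end
On newer MATLAB releases, discretize offers similar binning (returning NaN rather than 0 for out-of-range values).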
I have run the following code in both Octave 4.0.0 and MATLAB 2014. The time difference is silly, i.e. more than two orders of magnitude. I am running on a Windows laptop. What can be done to improve Octave's computational speed?
startTime = cputime;
iter = 1; % iter is the current iteration of the loop
itSum = 0; % itSum is the sum of the iterations
stopCrit = sqrt(275); % stopCrit is the stopping criteria for the while loop
while itSum < stopCrit
    itSum = itSum + 1/iter;
    iter = iter + 1;
    if iter > 1e7, break, end
end
iter-1
totTime = cputime - startTime
Octave: totTime ~ 112
MATLAB: totTime < 0.4
It takes a lot of iterations in the loop to compute the result in your code. Vectorizing the code will help speed it up a lot. The following code does exactly what yours does, but vectorizes the computation quite a bit. See if it helps.
startTime = cputime;
iter = 1; % iter is the current iteration of the loop
itSum = 0; % itSum is the sum of the iterations
stopCrit = sqrt(275); % stopCrit is the stopping criteria for the while loop
step = 1000;
while (itSum < stopCrit && iter <= 1e7)
    itSum = itSum + sum(1./(iter:iter+step));
    iter = iter + step + 1;
end
iter = iter - step - 1;
itSum = sum(1./(1:iter));
for i = (iter+1):(iter+step)
    itSum = itSum + 1/i;
    if (itSum + 1/i > stopCrit)
        iter = i - 1;
        break;
    end
end
totTime = cputime - startTime
My runtime is only about 0.6 seconds using the above code. If you do not care about exactly when the loop stops, the following code is even faster:
startTime = cputime;
iter = 1; % iter is the current iteration of the loop
itSum = 0; % itSum is the sum of the iterations
stopCrit = sqrt(275); % stopCrit is the stopping criteria for the while loop
step = 1000;
while (itSum < stopCrit && iter <= 1e7)
    itSum = itSum + sum(1./(iter:iter+step));
    iter = iter + step + 1;
end
iter=iter-step-1;
totTime = cputime - startTime
My runtime is only about 0.35 seconds in the latter case.
You can also try:
itSum = sum(1./(1:exp(stopCrit)));
%start the iteration
iter = exp(stopCrit-((stopCrit-itSum)/abs(stopCrit-itSum))*(stopCrit-itSum));
itSum = sum(1./(1:iter))
With this method you will only need one or two iterations, but of course you sum the whole array each time.
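If memory is not a concern, the whole search can also be done without any loop at all. Here is a minimal sketch, assuming the same stopCrit and the 1e7 cap from the question; it reproduces the iter-1 value printed by the original code:
startTime = cputime;
stopCrit = sqrt(275);
% Build all partial sums of the harmonic series up to the cap in one go,
% then find the first one that reaches the stopping criterion.
partialSums = cumsum(1./(1:1e7));
iter = find(partialSums >= stopCrit, 1)   % number of terms summed, i.e. iter-1 in the original
totTime = cputime - startTime
The cumsum temporarily holds 1e7 doubles (about 80 MB), which is the trade-off for removing the loop entirely.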
I have been working on a project to simulate biologically inspired neural networks using ArrayFire. I got to the point of doing some timing tests and was disappointed with the results I was getting. I decided to try one of the fastest, dirt-simple models for a timing test case, the Izhikevich model. When I ran the new test with that model the results were worse. The code I am using is below. It is not doing anything fancy; it is just standard matrix algebra. However, it takes over 5 seconds to do a single evaluation of the equation for just 10 neurons! Every step after that takes roughly the same amount of time as well.
Code:
unsigned int neuron_count = 10;
array a = af::constant(0.02, neuron_count);
array b = af::constant(0.2, neuron_count);
array c = af::constant(-65.0, neuron_count);
array d = af::constant(6, neuron_count);
array v = af::constant(-70.0, neuron_count);
array u = af::constant(-20.0, neuron_count);
array i = af::constant(14, neuron_count);
double tau = 0.2;
void StepIzhikevich()
{
    v = v + tau*(0.04*pow(v, 2) + 5 * v + 140 - u + i);
    //af_print(v);
    u = u + tau*a*(b*v - u);
    //Leaving off spike threshold checks for now
}
void TestIzhikevich()
{
    StepIzhikevich();
    timer::start();
    StepIzhikevich();
    printf("elapsed seconds: %g\n", timer::stop());
}
Here are the timing results for different numbers of neurons.
results:
neurons    seconds
     10    5.18275
    100    5.27969
   1000    5.20637
  10000    4.86609
Increasing the number of neurons does not appear to have a huge effect; the time even goes down a little. Am I doing something wrong here? Is there a better way to optimize things with ArrayFire to get better results?
When I switched the v equation to use v*v instead of pow(v, 2), the time required for a step went down to 3.75762 s. That is still extremely slow, though, so something odd is happening.
[EDIT]
I tried to split the processing up into pieces and found something new. Here is the code I am using now.
Code:
unsigned int neuron_count = 10;
array a = af::constant(0.02, neuron_count);
array b = af::constant(0.2, neuron_count);
array c = af::constant(-65.0, neuron_count);
array d = af::constant(6, neuron_count);
array v = af::constant(-70.0, neuron_count);
array u = af::constant(-20.0, neuron_count);
array i = af::constant(14, neuron_count);
array g = af::constant(0.0, neuron_count);
double tau = 0.2;
void StepIzhikevich()
{
    array j = tau*(0.04*pow(v, 2));
    //af_print(j);
    array k = 5 * v + 140 - u + i;
    //af_print(k);
    array l = v + j + k;
    //af_print(l);
    v = l; //If this line is here time is long on second loop
    //g = l; //If this is here then time is short.
    //u = u + tau*a*(b*v - u);
    //Leaving off spike threshold checks for now
}
void TestIzhikevich()
{
    timer::start();
    StepIzhikevich();
    printf("elapsed seconds: %g\n", timer::stop());
    timer::start();
    StepIzhikevich();
    printf("elapsed seconds: %g\n", timer::stop());
}
When I run it without reassigning back to v (assigning to a new variable g instead), the time for the step on both the first and second runs is small.
results:
elapsed seconds: 0.0036143
elapsed seconds: 0.00340621
However, when I put v = l; back in, then the first time it runs it is fast, but from then on it is slow.
results:
elapsed seconds: 0.0034497
elapsed seconds: 2.98624
Any ideas on what is causing this?
[EDIT 2]
I still do not know why it is doing this, but I have found a workaround by copying the v array before using it again.
Code:
unsigned int neuron_count = 100000;
array v = af::constant(-70.0, neuron_count);
array u = af::constant(-20.0, neuron_count);
array i = af::constant(14, neuron_count);
double tau = 0.2;
void StepIzhikevich()
{
    //array vp = v;
    array vp = v.copy();
    //af_print(vp);
    array j = tau*(0.04*pow(vp, 2));
    //af_print(j);
    array k = 5 * vp + 140 - u + i;
    //af_print(k);
    array l = vp + j + k;
    //af_print(l);
    v = l; //If this line is here time is long on second loop
}
void TestIzhikevich()
{
    for (int i = 0; i < 10; i++)
    {
        timer::start();
        StepIzhikevich();
        printf("loop: %d ", i);
        printf("elapsed seconds: %g\n", timer::stop());
        timer::start();
    }
}
Here are the results now. The second run is still a bit slow, but after that it is fast. This is a huge improvement over before.
Results:
loop: 0 elapsed seconds: 0.657355
loop: 1 elapsed seconds: 0.981287
loop: 2 elapsed seconds: 0.000416182
loop: 3 elapsed seconds: 0.000415045
loop: 4 elapsed seconds: 0.000421014
loop: 5 elapsed seconds: 0.000413339
loop: 6 elapsed seconds: 0.00041675
loop: 7 elapsed seconds: 0.000412202
loop: 8 elapsed seconds: 0.000473321
loop: 9 elapsed seconds: 0.000677432
I have a piece of code here I need to streamline as it is greatly increasing the runtime of my script:
size=300;
resultLength = (size+1)^3;
freqResult=zeros(1, resultLength);
inc=1;
for i = 0:size
    for j = 0:size
        for k = 0:size
            freqResult(inc) = (c/2)*sqrt((i/L)^2+(j/W)^2+(k/H)^2);
            inc = inc + 1;
        end
    end
end
c, L, W, and H are all constants. As the size input gets over about 400, the runtime is too long to wait for, and I can watch my disk space draining by the gigabyte. Any advice?
Thanks!
What about this:
[kT, jT, iT] = ind2sub([size+1, size+1, size+1], [1:(size+1)^3]);
for indx = 1:numel(iT)
    i = iT(indx) - 1;
    j = jT(indx) - 1;
    k = kT(indx) - 1;
    freqResult1(indx) = (c/2)*sqrt((i/L)^2+(j/W)^2+(k/H)^2);
end
On my PC, for size = 400, version with 3 loops takes 136s and this one takes 19s.
For more "matlaby" way u could also even do as follows:
[kT, jT, iT] = ind2sub([size+1, size+1, size+1], [1:(size+1)^3]);
func = #(i, j, k) (c/2)*sqrt((i/L)^2+(j/W)^2+(k/H)^2);
freqResult2 = arrayfun(func, iT-1, jT-1, kT-1);
But for some reason, this is slower than the above version.
A faster solution can be (based on Marcin's answer):
[k, j, i] = ind2sub([size+1, size+1, size+1], [1:(size+1)^3]);
freqResult = (c/2)*sqrt(((i-1)/L).^2+((j-1)/W).^2+((k-1)/H).^2);
It takes about 5 seconds to run on my PC for size = 300
The following is even faster (but it doesn't look very good):
k = repmat(0:size,[1 (size+1)^2]);
j = repmat(kron(0:size, ones(1,size+1)),[1 (size+1)]);
i = kron(0:size, ones(1,(size+1)^2));
freqResult = (c/2)*sqrt((i/L).^2+(j/W).^2+(k/H).^2);
which takes ~3.5s for size = 300
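A version based on ndgrid reads more cleanly and should be in the same performance ballpark. This is just a sketch, assuming the same constants c, L, W, and H are in scope; k is taken from the first ndgrid output so that it varies fastest, matching the innermost loop of the original:
% Build the full 3-D grid of (i, j, k) values in one call, then apply the
% formula elementwise and flatten to a row vector like the original freqResult.
[k, j, i] = ndgrid(0:size, 0:size, 0:size);
freqResult = (c/2)*sqrt((i(:).'/L).^2 + (j(:).'/W).^2 + (k(:).'/H).^2);
Note that size here is the question's own variable (shadowing the built-in size function), so the sketch only works in that context.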
So, I've written this code, which should effectively estimate the area under the curve of the function defined as h(x). My problem is that I need to be able to estimate the area to within 6 decimal places, but the algorithm I've defined in estimateN seems to be too heavy for my machine. Essentially, the question is: how can I make the following code more efficient? Is there a way I can get rid of that loop?
h = function(x) {
    return(1 + (x^9) + (x^3))
}
estimateN = function(n) {
    count = 0
    k = 1
    xpoints = runif(n, 0, 1)
    ypoints = runif(n, 0, 3)
    while (k <= n) {
        if (ypoints[k] <= h(xpoints[k]))
            count = count + 1
        k = k + 1
    }
    # because of the range that I'm using for y
    return(3 * (count/n))
}
# uses the fact that err <= 1/sqrt(n) to determine size of dataset
estimate_to = function(i) {
    n = (10^i)^2
    print(paste(n, " repetitions: ", estimateN(n)))
}
estimate_to(6)
Replace this code:
count = 0
k = 1
while (k <= n) {
    if (ypoints[k] <= h(xpoints[k]))
        count = count + 1
    k = k + 1
}
With this line:
count <- sum(ypoints <= h(xpoints))
If it's truly efficiency you're striving for, integrate is several orders of magnitude faster (not to mention more memory efficient) for this problem.
integrate(h, 0, 1)
# 1.35 with absolute error < 1.5e-14
microbenchmark(integrate(h, 0, 1), estimate_to(3), times=10)
# Unit: microseconds
#               expr        min         lq     median         uq        max neval
# integrate(h, 0, 1)     14.456     17.769     42.918     54.514     83.125    10
#     estimate_to(3) 151980.781 159830.956 162290.668 167197.742 174881.066    10