python iterative loop gets slower as it continues - performance

I have a for loop like
for tot in range(0,100000):
At the first iterations it is quite fast, but as it gets toward the middle of the range it becomes extremely slow. I don't have any lines where something accumulates and grows so that its computation takes longer; the work inside the loop stays the same as in the first iteration, yet it gets processed more slowly.
Is it because tot has to go from the beginning of "range" each time to reach the proper iteration, so that as the loop continues it takes longer and longer to reach the middle of the "range"?
I have no idea why this happens!
Inside loop sample code:
for tot in range(0,nt):
    for k in range(0,nx):
        if k!=0 and k!=nx-1:
            for i in range(1,nz-1):
                phai0[i,k]=(1.0/dz**2)*(w0[i+1,k]-2.0*w0[i,k]+w0[i-1,k])+(1.0/dx**2)*(w0[i,k+1]-2.0*w0[i,k]+w0[i,k-1])
                phai1[i,k]=(1.0/dz**2)*(w1[i+1,k]-2.0*w1[i,k]+w1[i-1,k])+(1.0/dx**2)*(w1[i,k+1]-2.0*w1[i,k]+w1[i,k-1])
                phai2[i,k]=(2.0-Ncoef**2*dt**2)*phai1[i,k]-phai0[i,k]+1.0*(Ncoef**2)*(dt**2*1.0/dz**2)*(w1[i+1,k]-2*w1[i,k]+w1[i-1,k])
        if k==0:
            for i in range(1,nz-1):
                phai0[i,k]=(1.0/dz**2)*(w0[i+1,k]-2.0*w0[i,k]+w0[i-1,k])+(1.0/dx**2)*(w0[i,k+1]-2.0*w0[i,k]+w0[i,k-2])
                phai1[i,k]=(1.0/dz**2)*(w1[i+1,k]-2.0*w1[i,k]+w1[i-1,k])+(1.0/dx**2)*(w1[i,k+1]-2.0*w1[i,k]+w1[i,k-2])
                phai2[i,k]=(2.0-Ncoef**2*dt**2)*phai1[i,k]-phai0[i,k]+1.0*(Ncoef**2)*(dt**2*1.0/dz**2)*(w1[i+1,k]-2*w1[i,k]+w1[i-1,k])
        if k==nx-1:
            for i in range(1,nz-1):
                phai0[i,k]=(1.0/dz**2)*(w0[i+1,k]-2.0*w0[i,k]+w0[i-1,k])+(1.0/dx**2)*(w0[i,1]-2.0*w0[i,k]+w0[i,k-1])
                phai1[i,k]=(1.0/dz**2)*(w1[i+1,k]-2.0*w1[i,k]+w1[i-1,k])+(1.0/dx**2)*(w1[i,1]-2.0*w1[i,k]+w1[i,k-1])
                phai2[i,k]=(2.0-Ncoef**2*dt**2)*phai1[i,k]-phai0[i,k]+1.0*(Ncoef**2)*(dt**2*1.0/dz**2)*(w1[i+1,k]-2*w1[i,k]+w1[i-1,k])
    for N in range(0,N_inner_iteration):
        sum_max=0
        for k in range(0,nx):
            if k!=0 and k!=nx-1:
                for i in range(1,nz):
                    if i!=nz-1:
                        w21[i,k]=0.5*(dz**2)*(w20[i,k+1]+w20[i,k-1])/(dx**2+dz**2)+0.5*(dx**2)*(w20[i+1,k]+w20[i-1,k])/(dx**2+dz**2)-(0.5/(dx**2+dz**2))*(dx**2*dz**2)*phai2[i,k]
                    if i==nz-1:
                        w21[i,k]=1/(1+(p(k*dx,1)/(dx*p(k*dx,2)))+((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*(((p(k*dx,1)/(dx*p(k*dx,2)))+(p(k*dx,1)/dx/dz))*w0[i,k-1]+(((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*w0[i-1,k]-((p(k*dx,1)/dx/dz))*w0[i-1,k-1])
                    sum_max=sum_max+abs(w21[i,k]-w20[i,k])
                    # w20[i,k]=w21[i,k]
            if k==0:
                for i in range(1,nz):
                    if i!=nz-1:
                        w21[i,k]=0.5*(dz**2)*(w20[i,1]+w20[i,k-2])/(dx**2+dz**2)+0.5*(dx**2)*(w20[i+1,k]+w20[i-1,k])/(dx**2+dz**2)-(0.5/(dx**2+dz**2))*(dx**2*dz**2)*phai2[i,k]
                    if i==nz-1:
                        k=0.00000001
                        w21[i,k]=1/(1+(p(k*dx,1)/(dx*p(k*dx,2)))+((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*(((p(k*dx,1)/(dx*p(k*dx,2)))+(p(k*dx,1)/dx/dz))*w0[i,k-2]+(((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*w0[i-1,k]-((p(k*dx,1)/dx/dz))*w0[i-1,k-2])
                    sum_max=sum_max+abs(w21[i,k]-w20[i,k])
            if k==nx-1:
                for i in range(1,nz):
                    # w21[i,k]=w21[i,0]
                    # w20[i,k]=w21[i,k]
                    if i!=nz-1:
                        w21[i,k]=0.5*(dz**2)*(w20[i,1]+w20[i,k-1])/(dx**2+dz**2)+0.5*(dx**2)*(w20[i+1,k]+w20[i-1,k])/(dx**2+dz**2)-(0.5/(dx**2+dz**2))*(dx**2*dz**2)*phai2[i,k]
                    if i==nz-1:
                        w21[i,k]=1/(1+(p(k*dx,1)/(dx*p(k*dx,2)))+((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*(((p(k*dx,1)/(dx*p(k*dx,2)))+(p(k*dx,1)/dx/dz))*w0[i,k-1]+(((2/p(k*dx,2)*(p(k*dx,1))**2-p(k*dx,0))/dz)+(p(k*dx,1)/dx/dz))*w0[i-1,k]-((p(k*dx,1)/dx/dz))*w0[i-1,k-1])
                    sum_max=sum_max+abs(w21[i,k]-w20[i,k])
                    # w20[i,k]=w21[i,k]
        w20[:]=w21[:]
        if (1.0/(nx*nz))*sum_max<0.0000001:
            break
    print (1.0/(nx*nz))*sum_max, "N=",N
    w0[:]=w1[:]
    w1[:]=w20[:]
    w_final[:,:,tot]=w1[:]
    for ordstep in range(1,num_of_orders+1):
        integ_sum2=0
        for zstep in range (0,nz):
            for xstep in range (0,nx):
                integ_sum2=integ_sum2+w1[zstep,xstep]*np.sin(ordstep*(k_z*1.0/m)*zstep*dz)*np.sin(ordstep*(k_x*1.0/n)*xstep*dx-omega*tot*dt)*dx*dz
        Amp[ordstep-1,tot]=4.0/(l*h)*integ_sum2/Ampref
    if tot%1==0:
        Ampsave=np.reshape(Amp[:,tot],(1,5))
        with open('test.csv', 'a') as file:
            np.savetxt(file,np.array(Ampsave))
        # np.save(outfile,Amp)
    oldcol = wframe
    wframe = ax.plot_surface(X, Z, w1, rstride=2, cstride=2)
    if oldcol is not None:
        ax.collections.remove(oldcol)
    plt.pause(.001)
    ax.set_xlabel('(X)')
    ax.set_ylabel('(Z)')
    ax.set_zlabel('$ W_{Numerical} $')
    plt.figure('Amplitude Evolution')
    plt.axis([0, 40000, -2, 2])
    plt.scatter(tot, Amp[0,tot])
    plt.scatter(tot, Amp[1,tot])
    print Amp[0,tot]
    plt.draw()
    plt.legend(bbox_to_anchor=(1, 1), loc=1, borderaxespad=0.)
    plt.title('Amplitude Evolution')
    plt.xlabel('Time[s]',fontsize=25)
    plt.ylabel('Amplitude',fontsize=25)
    plt.savefig("res.png", transparent = True, pad_inches=0)
    plt.show()
I have also changed all range calls to xrange and avoided calling np. functions inside the loops by predefining them.

Are you using Python 2.7 or Python 3.x?
If you are using Python 2.7 you should use the xrange generator:
for tot in xrange(0,100000): print tot
It will be faster for sequences as large as this. :-)
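If xrange alone does not remove the slowdown, it can help to time the separate stages of each outer iteration and see which one actually grows with tot. The probe below is only a sketch, not part of the code above: solve_step, integrate_step and plot_step are hypothetical stand-ins for the three blocks of the loop body (the finite-difference/inner-iteration part, the amplitude integration, and the plotting calls).
import time

# Sketch only: wrap each stage of the loop body in a callable and report its duration.
# solve_step / integrate_step / plot_step are placeholders, not names from the question.
def profile_iteration(tot, solve_step, integrate_step, plot_step):
    t0 = time.time()
    solve_step(tot)
    t1 = time.time()
    integrate_step(tot)
    t2 = time.time()
    plot_step(tot)
    t3 = time.time()
    print "tot=%d  solve=%.3fs  integrate=%.3fs  plot=%.3fs" % (tot, t1 - t0, t2 - t1, t3 - t2)
Whichever stage keeps taking longer as tot increases is the one accumulating work; note, for example, that every plt.scatter call adds a new artist to its figure, so redrawing that figure gets slower on each iteration.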

Related

With ruamel.yaml how can I conditionally convert flow maps to block maps based on line length?

I'm working on a ruamel.yaml (v0.17.4) based YAML reformatter (using the RoundTrip variant to preserve comments).
I want to allow a mix of block- and flow-style maps, but in some cases, I want to convert a flow-style map to use block-style.
In particular, if the flow-style map would be longer than the max line length^, I want to convert that to a block-style map instead of wrapping the line somewhere in the middle of the flow-style map.
^ By "max line length" I mean the best_width that I configure by setting something like yaml.width = 120 where yaml is a ruamel.yaml.YAML instance.
What should I extend to achieve this? The emitter is where the line length gets calculated so wrapping can occur, but I suspect that is too late to convert between block and flow style. I'm also concerned about losing comments when I switch styles. Here are some possible extension points; can you give me a pointer on where I'm most likely to have success with this?
Emitter.expect_flow_mapping() probably too late for converting flow->block
Serializer.serialize_node() probably too late as it consults node.flow_style
RoundTripRepresenter.represent_mapping() maybe? but this has no idea about line length
I could also walk the data before calling yaml.dump(), but this has no idea about line length.
So where, and how, can I adjust the flow_style depending on whether a flow-style map would trigger line wrapping?
What I think is the most accurate approach: when you encounter a flow-style mapping during the dumping process, first try to emit it to a buffer, get the length of that buffer, and if that length combined with the column you are at exceeds the width, actually emit block style.
Any attempt to guesstimate the length of the output without actually trying to write that part of the tree is going to be hard, if not impossible, to do without doing the actual emit. Among other things the dumping process actually dumps scalars and reads them back to make sure no quoting needs to be forced (e.g. when you dump a string that reads back like a date). It also handles single key-value pairs in a list in a special way ([1, a: 42, 3] instead of the more verbose [1, {a: 42}, 3]). So a simple calculation of the length of the scalars that are the keys and values, plus the separating commas, colons and spaces, is not going to be precise.
A different approach is to dump your data with a large line width, parse the output, and make a set of line numbers for which the line is too long according to the width you actually want to use. After loading that output back you can walk over the data structure recursively, inspect the .lc attribute to determine the line number on which a flow-style mapping (or sequence) started, and if that line number is in the set you built beforehand, change the mapping to block style. If you have nested flow-style collections, you might have to repeat this process.
If you run the following, the initial dumped value for quote will be on one line.
The change_to_block method as presented changes all mappings/sequences that are too long and that are on one line.
import sys
import ruamel.yaml

yaml_str = """\
movie: bladerunner
quote: {[Batty, Roy]: [
    I have seen things you people wouldn't believe.,
    Attack ships on fire off the shoulder of Orion.,
    I watched C-beams glitter in the dark near the Tannhäuser Gate.,
  ]}
"""

class Blockify:
    def __init__(self, width, only_first=False, verbose=0):
        self._width = width
        self._yaml = None
        self._only_first = only_first
        self._verbose = verbose

    @property
    def yaml(self):
        if self._yaml is None:
            self._yaml = y = ruamel.yaml.YAML(typ=['rt', 'string'])
            y.preserve_quotes = True
            y.width = 2**16
        return self._yaml

    def __call__(self, d):
        pass_nr = 0
        changed = [True]
        while changed[0]:
            changed[0] = False
            try:
                s = self.yaml.dumps(d)
            except AttributeError:
                print("use 'pip install ruamel.yaml.string' to install plugin that gives 'dumps' to string")
                sys.exit(1)
            if self._verbose > 1:
                print(s)
            too_long = set()
            max_ll = -1
            for line_nr, line in enumerate(s.splitlines()):
                if len(line) > self._width:
                    too_long.add(line_nr)
                if len(line) > max_ll:
                    max_ll = len(line)
            if self._verbose > 0:
                print(f'pass: {pass_nr}, lines: {sorted(too_long)}, longest: {max_ll}')
                sys.stdout.flush()
            new_d = self.yaml.load(s)
            self.change_to_block(new_d, too_long, changed, only_first=self._only_first)
            d = new_d
            pass_nr += 1
        return d, s

    @staticmethod
    def change_to_block(d, too_long, changed, only_first):
        if isinstance(d, dict):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            # don't change keys if any value is changed
            for v in d.values():
                Blockify.change_to_block(v, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
            if changed[0]:  # don't change keys if value has changed
                return
            for k in d:
                Blockify.change_to_block(k, too_long, changed, only_first)
                if only_first and changed[0]:
                    return
        if isinstance(d, (list, tuple)):
            if d.fa.flow_style() and d.lc.line in too_long:
                d.fa.set_block_style()
                changed[0] = True
                return  # don't convert nested flow styles, might not be necessary
            for elem in d:
                Blockify.change_to_block(elem, too_long, changed, only_first)
                if only_first and changed[0]:
                    return

blockify = Blockify(96, verbose=2)  # set verbose to 0, to suppress progress output

yaml = ruamel.yaml.YAML(typ=['rt', 'string'])
data = yaml.load(yaml_str)
blockified_data, string_output = blockify(data)
print('-'*32, 'result:', '-'*32)
print(string_output)  # string_output has no final newline
which gives:
movie: bladerunner
quote: {[Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]}
pass: 0, lines: [1], longest: 186
movie: bladerunner
quote:
  [Batty, Roy]: [I have seen things you people wouldn't believe., Attack ships on fire off the shoulder of Orion., I watched C-beams glitter in the dark near the Tannhäuser Gate.]
pass: 1, lines: [2], longest: 179
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
pass: 2, lines: [], longest: 67
-------------------------------- result: --------------------------------
movie: bladerunner
quote:
  [Batty, Roy]:
  - I have seen things you people wouldn't believe.
  - Attack ships on fire off the shoulder of Orion.
  - I watched C-beams glitter in the dark near the Tannhäuser Gate.
Please note that when using ruamel.yaml<0.18 the sequence [Batty, Roy] will never be in block style,
because the tuple subclass CommentedKeySeq never gets a line number attached.

Detectron2- How to log validation loss during training?

I copied the idea from mnslarcher and wrote the following two functions for my keypoint detector (resnet50 backbone) algorithm.
def build_valid_loader(cfg):
    _cfg = cfg.clone()
    _cfg.defrost()  # make this cfg mutable.
    _cfg.DATASETS.TRAIN = cfg.DATASETS.TEST
    return build_detection_train_loader(_cfg)

def store_valid_loss(model, data, storage):
    training_mode = model.training
    with torch.no_grad():
        loss_dict = model(data)
        losses = sum(loss_dict.values())
        assert torch.isfinite(losses).all(), loss_dict
        loss_dict_reduced = {k: v.item()
                             for k, v in comm.reduce_dict(loss_dict).items()}
        losses_reduced = sum(loss for loss in loss_dict_reduced.values())
        if comm.is_main_process():
            storage.put_scalars(val_loss=losses_reduced, **loss_dict_reduced)
    model.train(training_mode)
Then in plain_train_net.py I am calling them as below.
val_data_loader = build_valid_loader(cfg)
logger.info("Starting training from iteration {}".format(start_iter))
with EventStorage(start_iter) as storage:
    for data, val_data, iteration in zip(data_loader, val_data_loader, range(start_iter, max_iter)):
        iteration = iteration + 1
        ..
        ..
        # At the end of the for loop:
        # calculate and log validation loss.
        store_valid_loss(model, val_data, storage)
After 1k iterations, loss_keypoint is increasing, but total_loss is the same as it is without the store_valid_loss call. What am I missing? Can anyone help me understand?
I am using 4 GeForce RTX 2080 Ti.

Reading in parallel from a generator in Keras

I have a big dataset divided into files.
I would like to read and process my data one file at a time, and for this I have the following Keras generator:
def myGenerator():
    while 1:
        rnd = random.randint(1,200)
        strRnd = str(rnd)
        lenRnd = len(strRnd)
        rndPadded = strRnd.rjust(5, '0')
        nSearchesInBatch = 100
        f = "path/part-" + rndPadded + "*"  # read one block of data
        data = sqlContext.read.load(f).toPandas()
        imax = int(data.shape[0]/nSearchesInBatch)  # number of batches that will be created sequentially from the generator
        for i in range(imax):
            data_batch = data[i*nSearchesInBatch:(i+1)*nSearchesInBatch]
            features = data_batch['features']
            output = data_batch['output']
            yield features, output
The problem is that the reading takes the biggest part of the time (each file is around 200 MB), and in the meanwhile the GPU sits waiting. Is it possible to pre-read the next batch while the GPU is training on the previous one?
At the moment one file is read and split into steps (the inner loop): the CPUs sit idle while the GPU trains, and as soon as the epoch finishes the GPU goes idle and the CPU starts reading (which takes 20-30 seconds).
Any solution to parallelize this?
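One way to overlap the file reading with GPU training is to run the generator in a background thread that keeps a small bounded queue of ready batches. The following is only a sketch using the standard library; the prefetch wrapper and its max_prefetch parameter are not Keras APIs, just an illustration wrapped around the myGenerator above.
import threading
try:                      # Python 3
    import queue
except ImportError:       # Python 2
    import Queue as queue

def prefetch(generator, max_prefetch=4):
    # Run `generator` in a daemon thread and buffer up to `max_prefetch` items,
    # so the next file can be read and decoded while the GPU trains on the current one.
    buffer = queue.Queue(maxsize=max_prefetch)
    sentinel = object()

    def producer():
        for item in generator:
            buffer.put(item)      # blocks while the buffer is full
        buffer.put(sentinel)      # signals exhaustion (never reached for a `while 1` generator)

    t = threading.Thread(target=producer)
    t.daemon = True
    t.start()

    while True:
        item = buffer.get()
        if item is sentinel:
            return
        yield item

# usage sketch:
# model.fit_generator(prefetch(myGenerator()), steps_per_epoch=..., epochs=...)
Depending on the Keras version, fit_generator's max_queue_size and workers arguments provide similar buffering behind the scenes, but a wrapper like this makes the prefetching explicit and easy to tune.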

Working with very big data faster in Matlab?

I have to deal with very big data (point clouds, generally more than 30,000,000 points) using Matlab. I can read the ASCII data using the textscan function. After reading, I need to detect invalid data (points with 0,0,0 coordinates) and then do some mathematical operations on each point or each line in the data. My current approach: first I read the data with textscan and assign it to a matrix, then I use for loops to detect the invalid points and do the mathematical operations on each point or line. A sample of my code is shown below. According to Matlab's profile tool, textscan takes 37% and the line
transformed_list((i:i),(1:4)) = coordinate_list((i:i),(1:4))*t_matrix;
takes 35% of all computation time.
I tried it with another point cloud (around 5,500,000 points) and the profile tool reported the same results. Is there a way of avoiding the for loops, or another way of speeding up this computation?
fileID = fopen('C:\Users\Mustafa\Desktop\ptx_all_data\dede5.ptx');
original_data = textscan(fileID,'%f %f %f %f %f %f %f', 'delimiter',' ');
fclose(fileID);
column = original_data{1}(1);
row = original_data{1}(2);
t_matrix = [original_data{1}(7) original_data{2}(7) original_data{3}(7) original_data{4}(7)
original_data{1}(8) original_data{2}(8) original_data{3}(8) original_data{4}(8)
original_data{1}(9) original_data{2}(9) original_data{3}(9) original_data{4}(9)
original_data{1}(10) original_data{2}(10) original_data{3}(10) original_data{4}(10)];
coordinate_list(:,1) = original_data{1}(11:length(original_data{1}));
coordinate_list(:,2) = original_data{2}(11:length(original_data{2}));
coordinate_list(:,3) = original_data{3}(11:length(original_data{3}));
coordinate_list(:,4) = 0;
coordinate_list(:,5) = original_data{4}(11:length(original_data{4}));
transformed_list = zeros(length(coordinate_list),5);
for i = 1:length(coordinate_list)
    if coordinate_list(i,1) == 0 && coordinate_list(i,2) == 0 && coordinate_list(i,3) == 0
        transformed_list(i,:) = NaN;
    else
        %transformed_list(i,:) = coordinate_list(i,:)*t_matrix;
        transformed_list((i:i),(1:4)) = coordinate_list((i:i),(1:4))*t_matrix;
        transformed_list(i,5) = coordinate_list(i,5);
    end
    %i
end
Thanks in advance
For loops with conditional statements like those will take ages to run. But what Matlab lacks in loop speed it makes up for with vectorization and indexing.
Let's try some logical indexing like this to solve the first step:
coordinate_list(coordinate_list(:,1) == 0 .* ...
coordinate_list(:,2) == 0 .* ...
coordinate_list(:,3) == 0)=nan;
And then vectorize the second statement:
transformed_list(:,(1:4)) = coordinate_list(:,(1:4))*t_matrix;
As EBH mentioned above, this might be a bit heavy on your RAM. If it's more than your computer can handle, ask yourself if the coordinates really have to be doubles; maybe single precision will do. If that still doesn't do it, try slicing the vector and performing the operation in parts.
A small example to give you an idea, because I had a 2-million-point cloud around here:
In R2015a
transformed_list = zeros(length(coordinate_list),5);
tic
for i = 1:length(coordinate_list)
    if coordinate_list(i,1) == 0 && coordinate_list(i,2) == 0 && coordinate_list(i,3) == 0
        transformed_list(i,:) = NaN;
    else
        %transformed_list(i,:) = coordinate_list(i,:)*t_matrix;
        transformed_list((i:i),(1:3)) = coordinate_list((i:i),(1:3))*t_matrix;
        transformed_list(i,5) = 1;
    end
    %i
end
toc
Returns Elapsed time is 10.928142 seconds.
transformed_list=coordinate_list;
tic
coordinate_list(coordinate_list(:,1) == 0 .* ...
coordinate_list(:,2) == 0 .* ...
coordinate_list(:,3) == 0)=nan;
transformed_list(:,(1:3)) = coordinate_list(:,(1:3))*t_matrix;
toc
Returns Elapsed time is 0.101696 seconds.
Rather than read the whole file, you'd be better off using a loop with
fscanf(fileID, '%f', 7)
and processing input as you read it.

Measure estimated completion time of ruby script

I've been running a lot of scripts lately that iterate over 10k - 300k objects, and I'm thinking of writing some code that estimates the completion time of the script (they take 20-180 minutes). I've got to imagine though that there's something out there that does this already. Is there?
To Clarify (edit):
Were I to write code to do this, it would work by measuring how long it takes to perform "the operation" on a single object, multiplying that amount of time by the number of objects left, and adding it to the current time.
Granted, this would only work in situations where you have a script involving a single loop that takes up 99% of the script's total run time, and in which you could reasonably expect to be able to calculate a semi-accurate average for each iteration of that loop. This is true of the scripts for which I'd like to estimate completion time.
Have a look at the ruby-progressbar gem: https://github.com/jfelchner/ruby-progressbar
It generates a nice progressbar and estimates completion time (ETA):
example task: 67% |oooooooooooooooooooooo | ETA: 00:01:15
You can measure the time of each method within your script at a granular level and then sum the components, as described here.
You let your process run, and after a set number of iterations, you measure the elapsed time. You then use that value as an estimate for the time left. This ensures that the estimate is always dynamically updated according to the current task.
This example is extra verbose, like a code double whopper with triple cheese:
# Some variables for this test
iterations = 1000
probe_at = (iterations * 0.1).to_i
time_total = 0
#======================================
iterations.times do |i|
  time_start = Time.now
  # you could yield here if this were a function
  5000.times do # <tedious task simulation>
    Math.sqrt(rand(200000))
  end # <end of tedious task simulation>
  time_total += time_taken = Time.now - time_start
  if i == probe_at
    iteration_cost = (time_total / probe_at)
    time_left = iteration_cost * (iterations - probe_at)
    puts "Time taken (ACTUAL): #{time_total} | iteration: #{i}"
    puts "Time left (ESTIMATE): #{time_left} | iteration: #{i}"
    puts "Estimated total: #{time_total + time_left} | iteration: #{i}"
  end
  if i == 999
    puts "Time taken (ACTUAL): #{time_total} | iteration: #{i}"
  end
end
You could easily rewrite this into a class or a method.
