I'm revisiting this topic because I forgot a method I found on the net a few months ago, and I don't know why I can't find it today; it was very simple and worked well.
So I tried another method, but I don't think it works correctly, or maybe my five-year-old computer is faster than today's computers...
import time
debut = time.clock()

def t(n):
    aaa = []
    b = n - 1
    c = 0
    if n == 0 or n == 1:
        return 1
    else:
        while n != 1:
            if n % 2 == 0:
                n = n // 2
                aaa.append(n)
            else:
                n = n + b
                aaa.append(n)
        return [b, b + 1] + aaa, len(aaa) + 2

fin = time.clock()
print(t(100000), fin - debut)
For n = 10,000,000 I can count roughly 5 seconds in my head, yet the computer always reports something like 3.956927685067058e-06... can someone explain this to me?
The method I had found used from time import perf_counter as pc, and at the end I had to print(pc() - t).
If someone can enlighten me, because I really don't remember the method.
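From what I remember, the pattern looked roughly like this; this is only a minimal sketch, and work here is a placeholder name, not my actual function:

from time import perf_counter as pc

def work(n):
    # stand-in for whatever function is being timed
    return sum(range(n))

t = pc()                 # timestamp taken just before the call
result = work(100000)    # run the function once
print(result, pc() - t)  # elapsed time in seconds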
Thank you in advance
Look at the timeit module, https://docs.python.org/3.0/library/timeit.html.
You would set yours up something like this:
from timeit import Timer

timer = Timer("t(100000)", "from __main__ import t")
print("time: ", timer.timeit(number=1000))
You are measuring the time it takes to define the function.
This will measure the execution of the function:
import time

def t(n):
    aaa = []
    b = n - 1
    c = 0
    if n == 0 or n == 1:
        return 1
    else:
        while n != 1:
            if n % 2 == 0:
                n = n // 2
                aaa.append(n)
            else:
                n = n + b
                aaa.append(n)
        return [b, b + 1] + aaa, len(aaa) + 2

start = time.time()
value = t(100000)
end = time.time()
duration = end - start
print(value, duration)
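As a side note, time.perf_counter() is usually a better choice than time.time() for measuring short durations, since it has higher resolution and is monotonic; a minimal variant of the timing part above:

start = time.perf_counter()   # high-resolution, monotonic start timestamp
value = t(100000)             # same call as above
duration = time.perf_counter() - start
print(value, duration)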
Related
I have two functions that are time-consuming to run. I want to run each of them using the multiprocessing library in Python. I have seen some examples, but I don't know how, once the processes have finished their calculations, to retrieve the output of each one and sum up the total results. Each function returns a value.
For example:
from multiprocessing import Process
import time

def f1(n):
    time.sleep(0.5)
    global f1out
    f1out = n**2
    return n**2

def f2(n):
    time.sleep(0.5)
    global f2out
    f2out = n**2
    return n**3

start = time.time()
if __name__ == "__main__":
    results1 = Process(target=f1, args=(range(10)))
    results2 = Process(target=f2, args=(range(10)))
    results1.start()
    results2.start()
    results1.join()
    results2.join()
    print(results1)
    print(results2)
end = time.time()
TT = end - start
print(TT)
I want to calculate (results1 + results2),
but results1 and results2 are not values!
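One common way to get the return values back from worker processes is multiprocessing.Pool; here is a minimal sketch of that pattern, with the function bodies simplified and a single argument of 10 chosen purely for illustration:

from multiprocessing import Pool
import time

def f1(n):
    time.sleep(0.5)
    return n ** 2

def f2(n):
    time.sleep(0.5)
    return n ** 3

if __name__ == "__main__":
    start = time.time()
    with Pool(processes=2) as pool:
        r1 = pool.apply_async(f1, (10,))   # schedule f1 in a worker process
        r2 = pool.apply_async(f2, (10,))   # schedule f2 in another worker
        results1 = r1.get()                # block until f1's return value is ready
        results2 = r2.get()                # block until f2's return value is ready
    print(results1 + results2)             # the two return values can now be summed
    print(time.time() - start)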
I have a simple model created with Keras and I need to measure the execution time for prediction per image. Right now I just do this:
start = time.clock()
my_model.predict(images_test)
end = time.clock()
print("Time per image: {} ".format((end-start)/len(images_test)))
But I noticed that the calculated time is bigger when len(images_test) is smaller. For example, when len(images_test) = 32 I get 0.06, and when len(images_test) = 1024 I get 0.006.
Is there a "right" way to do this ?
If you use TF there seems to be no asynchronicity problem,
but if you use PyTorch there is one (GPU work is launched asynchronously, so you have to synchronize before reading the clock).
In TF:
start = time.clock()
result = my_model.predict(images_test)
end = time.clock()
In PyTorch:
torch.cuda.synchronize()
start = time.clock()
my_model.predict(images_test)
torch.cuda.synchronize()
end = time.clock()
But I think you can loop the model prediction 10 times
and print the list of times
(the computer needs to load the Keras model, so the first run is slower than the later ones).
In TF:
pred_time_list = []
for i in range(10):
    start = time.clock()
    result = my_model.predict(images_test)
    end = time.clock()
    pred_time_list.append(end - start)
print(pred_time_list)
(Print pred_time_list and you may find out why the times look incorrect.)
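As a variant on the loop above, here is a minimal sketch that discards the first warm-up run and averages the rest; it uses time.perf_counter instead of time.clock (which was removed in Python 3.8), and my_model / images_test are the names from the question:

import time

times = []
for i in range(10):
    start = time.perf_counter()
    my_model.predict(images_test)
    times.append(time.perf_counter() - start)

steady = times[1:]  # drop the first run, which includes one-time setup cost
print("Time per image:", sum(steady) / len(steady) / len(images_test))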
References:
[1] https://discuss.pytorch.org/t/doing-qr-decomposition-on-gpu-is-much-slower-than-on-cpu/21213/6
[2] https://discuss.pytorch.org/t/is-there-any-code-torch-backends-cudnn-benchmark-torch-cuda-synchronize-similar-in-tensorflow/51484/2
I am studying this great Coursera course: https://www.coursera.org/learn/algorithmic-toolbox. In the fourth week, we have an assignment related to binary trees.
I think I did a good job. I wrote a binary search that solves this problem using recursion in Python 3. Here is my code:
#python3
data_in_sequence = list(map(int, input().split()))
data_in_keys = list(map(int, input().split()))

original_array = data_in_sequence[1:]
data_in_sequence = data_in_sequence[1:]
data_in_keys = data_in_keys[1:]

def binary_search(data_in_sequence, target):
    answer = 0
    sub_array = data_in_sequence
    #print("sub_array", sub_array)
    if not sub_array:
        #print("sub_array", sub_array)
        answer = -1
        return answer
    #print("target", target)
    mid_point_index = len(sub_array) // 2
    #print("mid_point", sub_array[mid_point_index])
    beg_point_index = 0
    #print("beg_point_index", beg_point_index)
    end_point_index = len(sub_array) - 1
    #print("end_point_index", end_point_index)
    if sub_array[mid_point_index] == target:
        #print("final midpoint, ", sub_array[mid_point_index])
        #print("original_array", original_array)
        #print("sub_array[mid_point_index]", sub_array[mid_point_index])
        #print("answer", answer)
        answer = original_array.index(sub_array[mid_point_index])
        return answer
    elif target > sub_array[mid_point_index]:
        #print("target num higher than current midpoint")
        beg_point_index = mid_point_index + 1
        sub_array = sub_array[beg_point_index:]
        end_point_index = len(sub_array) - 1
        #print("sub_array", sub_array)
        return binary_search(sub_array, target)
    elif target < sub_array[mid_point_index]:
        #print("target num smaller than current midpoint")
        sub_array = sub_array[:mid_point_index]
        return binary_search(sub_array, target)
    else:
        return None

def bin_search_over_seq(data_in_sequence, data_in_keys):
    final_output = ""
    for key in data_in_keys:
        final_output = final_output + " " + str(binary_search(data_in_sequence, key))
    return final_output

print(bin_search_over_seq(data_in_sequence, data_in_keys))
I usually get the correct output. For instance, if I input:
5 1 5 8 12 13
5 8 1 23 1 11
I get the correct indexes into the sequence (the first line), or -1 if the key is not in the sequence:
2 0 -1 0 -1
However, my code does not pass within the expected running time.
Failed case #4/22: time limit exceeded (Time used: 13.47/10.00, memory used: 36696064/536870912.)
I think this is not due to the implementation of my binary search itself (I believe it is correct). Rather, I think it comes from some inefficiency in a peripheral part of the code, like the way I build the final output. However, the way I assemble the final answer does not seem particularly heavy... I am lost.
Am I not seeing something? Is there another inefficiency I am not seeing? How can I solve this? Just trying to present the final result in a faster way?
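For comparison, here is a minimal sketch of the same search written over index bounds instead of slices; every slice such as sub_array[beg_point_index:] copies that part of the list, which is one possible source of the slowdown on large inputs (this is just one possible rewrite, not the course's reference solution):

def binary_search(sequence, target):
    low, high = 0, len(sequence) - 1
    while low <= high:
        mid = (low + high) // 2
        if sequence[mid] == target:
            return mid          # index in the original (sorted) sequence
        elif sequence[mid] < target:
            low = mid + 1       # search the right half, no copying
        else:
            high = mid - 1      # search the left half, no copying
    return -1                   # target not present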
I made this loop myself and I am trying to make it faster and better... but sometimes, after it repeats the search for existing images, it clicks a random place on the screen (I think so because it does not match any image I am using in Sikuli). Maybe you will know why.
Part of the loop is below:
while surowiec_1:
    if exists("1451060448708.png", 1) or exists("1451061746632.png", 1):
        foo = [w_lewo, w_prawo, w_dol, w_gore]
        randomListElement = foo[random.randint(0, len(foo) - 1)]
        click(randomListElement)
        wait(3)
    else:
        if exists("1450930340868.png", 1):
            click(hemp)
            wait(1)
            hemp = exists("1450930340868.png", 1)
        elif exists("1451086210167.png", 1):
            click(tree)
            wait(1)
            tree = exists("1451086210167.png", 1)
        elif exists("1451022614047.png", 1):
            hover("1451022614047.png")
            click(flower)
            flower = exists("1451022614047.png", 1)
        elif exists("1451021823366.png", 1):
            click(fish)
            fish = exists("1451021823366.png")
        elif exists("1451022083851.png", 1):
            click(bigfish)
            bigfish = exists("1451022083851.png", 1)
        else:
            foo = [w_lewo, w_prawo, w_dol, w_gore]
            randomListElement = foo[random.randint(0, len(foo) - 1)]
            click(randomListElement)
            wait(3)
I wonder if this is just a problem with the image recognition or whether I have made a mistake.
You call the exists method twice, intending to get the same match (the first time in your if statement, the second time to assign the value). You are asking Sikuli to evaluate the image twice, and it can return different results.
From the method's documentation
the best match can be accessed using Region.getLastMatch() afterwards.
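A minimal sketch of that pattern, calling exists only once and reusing its match (the image name is just one taken from the question):

# Evaluate the image once and keep the resulting match (or None).
hemp = exists("1450930340868.png", 1)
if hemp:
    click(hemp)   # click the match that was actually found
    wait(1)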
I'd like to implement a real time plot of my CPU and GPU load.
I already have a script that retrieve the data and echo it in a terminal.
What I want to do now is to plot this information to see the evolution with time.
I don't know if I need Python, for instance, and in that case which module should I use?
Is there any other alternative?
Does someone have any experience doing real time plotting?
Éric.
Here is what I did so far:
#! /usr/bin/python
import pylab
import time

t = 0
dt = .25
old = None

if __name__ == "__main__":
    pylab.ion()
    #pylab.xlabel('this is x!')
    #pylab.ylabel('this is y!')
    #pylab.title('My First Plot')
    while True:
        with open('/proc/stat') as stat:
            new = map(float, stat.readline().strip().split()[1:])
        if old is not None:
            diff = [n - o for n, o in zip(new, old)]
            idle = diff[3] / sum(diff)
            pylab.clf()
            pylab.plot(t, int(255 * (1 - idle)), '-b', label='cpu')
            #print t, int(255 * (1 - idle))
            pylab.draw()
        old = new
        time.sleep(dt)
        t = t + dt
Well, something happens when I execute it, but nothing is displayed.
Any suggestions?
Thank you,
Éric.
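For reference, one way this loop can be made to actually draw something is to accumulate the samples in lists (plotting a single point with a line style shows nothing) and call pylab.pause() so the figure gets a chance to refresh. A minimal sketch along those lines, with the /proc/stat parsing from the code above wrapped in list() so it also works under Python 3:

#! /usr/bin/python3
import pylab
import time

def cpu_times():
    # First line of /proc/stat: aggregate CPU counters since boot.
    with open('/proc/stat') as stat:
        return list(map(float, stat.readline().split()[1:]))

if __name__ == "__main__":
    pylab.ion()
    xs, ys = [], []
    t, dt = 0.0, 0.25
    old = cpu_times()
    while True:
        time.sleep(dt)
        t += dt
        new = cpu_times()
        diff = [n - o for n, o in zip(new, old)]
        old = new
        load = 100 * (1 - diff[3] / sum(diff))  # field 3 is idle time
        xs.append(t)
        ys.append(load)
        pylab.clf()
        pylab.plot(xs, ys, '-b', label='cpu %')
        pylab.legend()
        pylab.pause(0.001)  # gives the GUI event loop a chance to redraw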