I'm looping 1000 times with a 1 ms time delay and measuring the total time. Curiously, the total time comes out as 15.6 seconds instead of 1. When I opened Google Chrome and surfed some websites, it ran correctly, with about 1 second total. It also ran fine on a MacBook.
What do I need to do to fix this problem? Please try running the code without Chrome open and again with Chrome open to see the difference. On my system it ran normally when Quora, Reddit, or Stack Overflow was open.
from timeit import default_timer as timer
import time

start = timer()
for i in range(1000):
    time.sleep(0.001)
end = timer()
print("Total time: ", end - start)
Edit: I didn't change anything on the Python side. I just opened up Chrome and browsed some websites, and that was enough to speed up the time delays.
Update: It's about the Windows timer resolution. Basically, Chrome changed the timer resolution from 15.6 ms to 1 ms. This article explains it very well: https://randomascii.wordpress.com/2013/07/08/windows-timer-resolution-megawatts-wasted/
I finally figured it out. Thanks a lot for the comments; they gave me the hints I needed. To explain why this happened: Windows sets its default timer resolution to 15.625 ms (64 Hz), which is good enough for most applications. However, for applications that need very short sampling intervals or time delays, 15.625 ms is not sufficient. So when I ran my program on its own, it was stuck at 15.6 s for 1000 iterations. When Chrome was open, though, it had already requested the higher-resolution timer, changing the resolution to 1 ms instead of 15.6 ms, which made my program run as expected.
Therefore, to solve it, I needed to call the Windows function timeBeginPeriod(period) to change the timer resolution. Fortunately, Python makes this easy through the ctypes library. The final code is below:
from time import perf_counter as timer
import time
from ctypes import windll  # new

timeBeginPeriod = windll.winmm.timeBeginPeriod  # new
timeBeginPeriod(1)  # new: request 1 ms timer resolution

start = timer()
for i in range(1000):
    print(i)
    time.sleep(0.001)
end = timer()
print("Total time: ", end - start)
Warning: I have read that this high timer resolution can affect overall performance and battery life. I have not seen anything happen yet, and the CPU usage in Windows Task Manager doesn't seem overwhelming either. But keep it in mind if your application starts behaving strangely.
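If the 1 ms resolution is only needed around the sleep loop, the matching timeEndPeriod call can restore the default afterwards. This is just a minimal sketch of that idea (I'm assuming the same 1 ms period as above; winmm expects each timeBeginPeriod call to be paired with a timeEndPeriod call using the same value):

from ctypes import windll
import time

timeBeginPeriod = windll.winmm.timeBeginPeriod
timeEndPeriod = windll.winmm.timeEndPeriod

timeBeginPeriod(1)  # request 1 ms timer resolution
try:
    for i in range(1000):
        time.sleep(0.001)  # now sleeps close to 1 ms per iteration
finally:
    timeEndPeriod(1)  # restore the default resolution; must match the begin call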
Related
I'm writing a Fusion 360 Python add-in, which is an event-driven way to extend their product (their code calls my functions that are hooked into their events).
Inside my code, I would like to send a single HTTP GET (or POST) request to a remote server without making the user wait (e.g. if they're offline, I want no delay - it just needs to fail silently).
There are many dozens of async examples around, but all of them appear to require that you're running a "normal" program, and that every part of the program is async to start with (i.e. I can't find any examples of a regular program, with an async bit added).
I'm new to Python, and the asyncio docs are drowning me :-(
That said - I do kinda know what I'm doing in other languages, and I understand how processes work (not so much threads though).
I did manage to partly "solve" my own question with this:
subprocess.Popen([get_exec(), os.path.join(prog_folder, "send_data.py"), str(VERSION)])
and a second script - except that this pops open an ugly black "DOS" box which hangs around until the transfer completes and looks highly unprofessional. All my attempts at avoiding the black box failed (I don't get the luxury of specifying my user's environment, and there is no "Windows UI build" of Python shipped that works).
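(For anyone going down the same subprocess route: Python 3.7+ has a subprocess.CREATE_NO_WINDOW creation flag on Windows that is supposed to suppress the console entirely. I can't vouch for how it behaves inside Fusion 360, so treat the following as an untested sketch that reuses the same get_exec()/send_data.py names from above.)

import os
import subprocess

# Launch the helper script without allocating a console window (Windows-only flag, Python 3.7+).
subprocess.Popen(
    [get_exec(), os.path.join(prog_folder, "send_data.py"), str(VERSION)],
    creationflags=subprocess.CREATE_NO_WINDOW,
)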
So basically - two questions
a) Is it even possible for an event-driven Python function to "spawn" a thread at all? Perhaps imagine it this way: you've written a Python module, and any caller can call a function in your module, which returns immediately, but your function then continues to do work for another minute in parallel - and, crucially, the caller does not need to do anything special.
b) Assuming it more or less is possible - can anyone give me a hint, or a pointer to an example, or something that might give me a clue where to start?
Python 3.7.6+ is my minimum environment.
My main problem (pardon the pun) is that all examples I can find do this:
loop.run_until_complete(asyncio.wait(print_http_headers(url)))
or this:
asyncio.run(main())
both of which block. Even the asyncio docs' "hello world" example is effectively synchronous (if only they had printed "world" first, after a 1 s delay, and then printed "hello" second with no delay - that would have solved everything!!!)
All other suggestions gratefully received (there's bound to be an "outside the box" alternative I've not realized yet, I expect - so long as the box isn't black and in-your-face, that is :-)
Thanks! @user4815162342 - that totally did the trick!!
import _thread, time, socket

def YoBlably(stuff):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect(("example.com", 80))
    sent = s.send(b'GET /my_path/check_update.asp?u=u1.20200505&v=1.20200503&a=_aft&p=my_prog HTTP/1.1\x0d\x0aHost: example.com\x0d\x0aAccept-Encoding: identity\x0d\x0aUser-Agent: Python\x0d\x0aConnection: close\x0d\x0a\x0d\x0a')
    if sent == 0:
        print("s Problem")
    chunk = s.recv(1024000)
    if chunk == b'':
        print("r Problem")
    print('got {}.'.format(chunk))
    s.close()

print("Starting in 1s...")
time.sleep(1)
_thread.start_new_thread(YoBlably, ('foo',))
print("Started...")
for i in range(0, 6):
    time.sleep(1)
    print('{}...'.format(i))
print("The end...")
Output:
$ python pythreadsock.py
Starting in 1s...
Started...
got b'HTTP/1.1 200 OK\r\nDate: Wed, 20 May 2020 01:47:28 GMT\r\nServer: Apache/2.0.52\r\nExpires: Sun, 17 May 2020 23:58:28 GMT\r\nPragma: no-cache\r\nCache-Control: no-cache\r\nContent-Length: 35\r\nConnection: close\r\nContent-Type: application/json; charset=UTF-8\r\n\r\n{"current_version_xyz":1.20200502}\n'.
0...
1...
2...
3...
4...
5...
The end...
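For completeness, the same fire-and-forget idea can also be written with the higher-level threading module and urllib instead of _thread and a raw socket. This is only an equivalent sketch, not part of the fix above, and the URL and timeout are made up:

import threading
import urllib.request

def send_update_check(url):
    # Fire-and-forget HTTP GET: swallow network errors so the caller never notices a failure.
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print('got {}.'.format(resp.read()))
    except OSError:
        pass  # offline or unreachable - fail silently

threading.Thread(
    target=send_update_check,
    args=("http://example.com/my_path/check_update.asp?u=1.20200505",),
    daemon=True,  # don't keep the process alive just for this request
).start()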
I ran into a weird problem, but I wonder if I'm asking the correct question:
result = parLapply(cl, 1:4,
                   function(j, rho_list_needed, delta0_needed,
                            V_iter_s, Sigma_list_needed) {
                     rhoj = rho_list_needed[[j]]
                     delta0_in_cpp = delta0_needed
                     v = as.vector(V_iter_s[,,,j])
                     sigmaj = Sigma_list_needed[[j]]
                     sourceCpp('sample_Z.cpp')  # first compile is slow, then cached
                     return(Sample_Z(rhoj, delta0_in_cpp, v, sigmaj, A, Cmatrix))
                   },
                   rho_list_needed, delta0_needed,
                   V_iter[[s]], Sigma_list_needed)
When I was testing my sample_Z.cpp in parallel through parLapply, a single calculation takes around 1 sec. In parallel, my 4 iterations take around 1.2 secs, which is a big improvement over the unparallelized version, which takes 8 sec.
There was no problem at all when I ran my program yesterday. Just now I noticed a bug and revised my program. To give my PC a fresh environment, I restarted my computer. When I started to run my program, I only opened the .R file and ran it. But the parallel part took 9 sec, where it used to take 1.2 sec. The 9 sec was measured after warming up my cores, i.e., the cpp file had already been sourced before I timed it.
I just don't know where the bug is. I then tried to source the cpp file directly in my global environment, and I found out that there was no caching at all: the second time took just as long as the first.
But then I accidentally opened sample_Z.cpp in the RStudio editor, and after that everything works correctly.
I don't know what keywords to search Google for to find this kind of problem, and I don't know whether opening the cpp file is actually required - I never needed to before.
Can anyone tell me what the real issue is? Thanks!
After restarting your PC, you probably had extra processes running which competed for CPU cores and slowed down your algorithm. The fact that you're rebooting suggests to me you're not using Linux... but if you are, watch top while starting your code, or the equivalent tool for your platform.
I'm running into some considerable speed bottlenecks with a Python-Matplotlib-Xcode combination. I know some immediate responses will probably ask "Why are you doing Python stuff in Xcode, just man up and use vim" --> I like the organizing ability and the built-in version control; it makes elements of my work easier to deal with.
Getting Python to run in Xcode in the first place was a bit trickier than I had hoped, but it's possible. Now I have the following scenario:
A master file, 'main.py', does all the imports for me and sets up some universal formatting to make all the figures (for eventual inclusion in my PhD thesis) nice and uniform. Afterwards it runs a series of execfile commands to generate whichever graphics I need. Two things I can think of right off the bat:
1) At the very beginning of main.py, after I import all the normal Python stuff you tend to need, I call a system script which checks whether a certain filesystem is mounted. I keep all my climate model data on there since my local hard drive is too small to deal with all of it at once. Python pauses itself and waits until the filesystem has been found, then keeps going (a rough sketch of this waiting step is below, after 2)). Usually this only needs to happen once in the morning when I get to work, or if the VPN server kicked me off for whatever reason. (Side question: it'd be cool to know if there's a trick to automate a VPN login to reconnect as soon as it notices it's not connected.)
2) I'm not sure how much overhead Xcode adds on its own. Running the same program from the terminal is (somewhat) faster. I've tried to be memory conscious and turn off stuff I don't need while running the Python/Xcode combination.
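(The waiting step described in 1) is roughly equivalent to a polling loop like the one below - the mount point path is only a placeholder:)

import os
import time

MOUNT_POINT = "/Volumes/climate_data"  # placeholder path for the network volume

# Block until the volume is mounted, checking once per second.
while not os.path.ismount(MOUNT_POINT):
    print("Waiting for {} to be mounted...".format(MOUNT_POINT))
    time.sleep(1)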
Also, Python launches a little window whenever I call plt.show(), and this in itself takes time. I've considered just saving the figures as quick PNG files and opening them with some other viewer, although I guess that viewer would also take time to open. Given how often these graphics change as I add model runs or think of nicer ways of displaying the data, it'd be nice not to waste something on the order of 15 to 30 minutes (possibly more) out of the day twiddling my thumbs and waiting for a window to pop up.
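If I went the save-to-PNG route, my understanding is it would look roughly like this - selecting a non-interactive backend before pyplot is imported so no window is ever created (the data and filename are placeholders):

import matplotlib
matplotlib.use("Agg")  # non-interactive backend: figures are rendered without opening a window
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])  # placeholder data
fig.savefig("figure_01.png", dpi=150)  # write straight to disk instead of plt.show()
plt.close(fig)  # release the figure's memory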
Benchmark it!
import datetime

start = datetime.datetime.now()
# your plotting code
td = datetime.datetime.now() - start
print(td.total_seconds())  # requires Python >= 2.7
Run it in Xcode and from the command line, and see what the difference is.
I am using VB6 SP6
This code has worked correctly for years, but I am now having a problem on a Win7-to-Win7 network. It also works correctly on an XP-to-Win7 network.
Open file For Random As ChannelNum Len = 90
' the file is on the other computer on the network

RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
' (MyAcFile is a UDT that is less than 90 bytes long)

' .......... other code that does not reference the file or RecNum - then

RecNum = (LOF(ChannelNum) \ 90) + 2
Put ChannelNum, RecNum, MyAcFile
Close ChannelNum
The second record overwrites the first.
We had a similar problem in the past with OpportunisticLocking, so we turn that off at install time - along with some other keys that cause data errors on Windows networks.
However, we have had no problems like this for years, so I think MS has some new "better" option that they think will "improve" networking.
Thanks for your help
I doubt there is any "bug" here except in your approach. The file metadata that LOF() interrogates is not meant to be updated immediately by simple writes. A delay seems like a silly idea, prone to occasional failure unless a very long delay is used and sapping performance at best. Even close/reopen can be iffy: VB6's Close statement is an async operation. That's why the Reset statement exists.
This is also why things like FlushFileBuffers() and SetEndOfFile() exist at the API level. They are also relatively expensive operations from a performance standpoint.
Track your records yourself. Only rely on LOF() if necessary after you first open the file.
Hmmm... is file (as per the Open statement at the top of the code sample) a UNC filename, or something like x:\ where x is a mapped drive? Are you not incrementing RecNum? Judging by the code, RecNum is unchanged and hence appears to overwrite the first record... Sorry for sounding, ummm, no pun intended... basic... It would help to show some more code here...
Hope this helps,
Best regards,
Tom.
It could just be a timing issue. In some runs your LOF() call returns more up-to-date information than in others. The file system API is asynchronous; for example, when a write function is called, the increased size will not be reflected immediately.
In short: your code has exposed an old bug, which is just easier to reproduce on Windows 7.
The cheapest way to fix the bug: you could add a delay (it may need to be a significant delay of, say, 5 seconds).
A more elaborate fix is to force the size update by closing and reopening the file.
To clarify, I mean time spent while the system is suspended/hibernated, not time spent by the calling thread (GetTickCount() returns the number of milliseconds since system boot).
As far as I know, GetTickCount is unrelated to threads and counts the time since the system started. But it is better to use GetTickCount64 to avoid the 49.7-day rollover.
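If your Python bindings don't expose it, GetTickCount64 can be called through ctypes directly (Vista and later). A minimal sketch; the important detail is setting the return type to 64 bits:

import ctypes

kernel32 = ctypes.windll.kernel32
kernel32.GetTickCount64.restype = ctypes.c_ulonglong  # 64-bit result, so no 49.7-day rollover

print("{} ms since boot".format(kernel32.GetTickCount64()))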
By the way, to get what you want you need the GetThreadTimes function. It records the creation and exit times and the amount of time the thread has spent in user and kernel space, so you have a nice way to calculate the time spent.
OK, I missed the "system" part of the question. But that is simple: during hibernation GetTickCount keeps counting. That is why people have run into the 49.7-day bug even when the computer spent most of its time hibernating.
Short answer: Yes.
Longer answer: Read the GetTickCount() docs: It's the elapsed time since system startup, and even MS wouldn't suggest that time stands still while your computer is hibernating...
Yes, GetTickCount does include suspend/hibernate time.
In the following Python script I call the Sleep API to wait 40 seconds, giving me a chance to put the computer into hibernate mode, and I print the time before and after, plus the tick count difference at the end.
import win32api
import time

print(time.strftime("%H:%M:%S", time.localtime()))
before = win32api.GetTickCount()
print("sleep")
win32api.Sleep(40000)
print(time.strftime("%H:%M:%S", time.localtime()))
print(str(win32api.GetTickCount() - before))
Output:
17:44:08
sleep
17:51:30
442297
If GetTickCount did not include the time during hibernate it would be much less than the time I hibernated for, but it matches the actual time elapsed (7 minutes 22 seconds equals 442 seconds, i.e. 442000 millisecond "ticks").
For anyone looking for the answer on the Windows CE platform, from the docs:
http://msdn.microsoft.com/en-us/library/ms885645.aspx
you can read:
For Release configurations, this function returns the number of milliseconds since the device booted, excluding any time that the system was suspended. GetTickCount starts at 0 on boot and then counts up from there.
GetTickCount() gives you the time in milliseconds since the computer booted. It has nothing to do with the process calling it.
No, GetTickCount() does not include the time the system spends in hibernation.
A simple test proves this.
in Python:
import win32api

win32api.GetTickCount()
# -- hibernate the machine here --
win32api.GetTickCount()

and you'll see the result...