I am using wx.ProgressDialog to show the time between switching ports and the time between taking measurements. I am running this test for over 24 hours (repeating the same thing over and over while recording the data). The error that appears during hour 7 is:
Traceback (most recent call last):
File "C:\Users\localuser\Desktop\Thermal\Cheyenne_Antenna_Cal_PDA_Thermal_Test.py", line 2117, in take_measurements_at_interval
self.take_measurement(self)
File "C:\Users\localuser\Desktop\Thermal\Cheyenne_Antenna_Cal_PDA_Thermal_Test.py", line 2185, in take_measurement
self.Measure_Plot(self)
File "C:\Users\localuser\Desktop\Thermal\Cheyenne_Antenna_Cal_PDA_Thermal_Test.py", line 2231, in Measure_Plot
style=wx.PD_AUTO_HIDE | wx.PD_ELAPSED_TIME | wx.PD_REMAINING_TIME)
File "C:\Python27\lib\site-packages\wx-2.8-msw-unicode\wx_windows.py", line 2951, in init
windows.ProgressDialog_swiginit(self,windows.new_ProgressDialog(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "wxAssertFailure" failed at ....\src\msw\control.cpp(159) in wxControl::MSWCreateControl(): CreateWindowEx("STATIC", flags=52000100, ex=00000000) failed
Here is the code that is being used to 'delay time':
#Giving Time for switch to toggle next port
progressMax = 5
dialog = wx.ProgressDialog("A progress box", "Time to switch", progressMax,
                           style=wx.PD_AUTO_HIDE | wx.PD_ELAPSED_TIME | wx.PD_REMAINING_TIME)
keepGoing = True
count = 0
while keepGoing and count < progressMax:
    count = count + 1
    wx.Sleep(1)
    keepGoing = dialog.Update(count)
dialog.Destroy()
The code pauses 5 seconds to allow the switch hardware and the PNA to settle before data is recorded. All of this happens inside a 'for' loop for a period of time. If anyone needs more information I will be happy to provide it.
If the window creation fails after running for a long time, chances are you have simply run out of windows, which are still a very limited resource under Microsoft Windows (the exact limit depends on the Windows version, but can be as low as 16,384).
This can happen if you never return to the main event loop during all this time, because top-level windows are only really destroyed (and not just hidden) once you get back to it.
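For illustration, here is a minimal sketch of that idea (the run_sweep helper and its names are hypothetical, not taken from the original test program): build one ProgressDialog up front and reuse it with Pulse() for every port switch, so the loop never creates and destroys windows.

import wx

def run_sweep(parent, ports, settle_seconds=5):
    # One dialog for the whole run; Pulse() never reaches a 'finished' state,
    # so the same window can show every settling delay.
    dialog = wx.ProgressDialog("Port switch", "Time to switch",
                               maximum=settle_seconds, parent=parent,
                               style=wx.PD_ELAPSED_TIME)
    try:
        for port in ports:
            # ... toggle the switch to `port` here ...
            for _ in range(settle_seconds):
                wx.Sleep(1)                     # let the switch and PNA settle
                dialog.Pulse("Time to switch")  # update without creating new windows
            # ... take and record the measurement here ...
    finally:
        dialog.Destroy()

Keeping a single dialog alive for the whole run avoids accumulating destroyed-but-not-deleted top-level windows, which is what eventually exhausts the window limit.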
I am writing a Windows service in VB.Net that will go out to some devices and log data points of information. I am using a BackgroundWorker to do that so the service itself stays responsive. I have a timer that runs every second and checks the minute component of the current time. Each time the minute component changes I check which devices need to be checked; some are every minute, some every 5, some every 10, etc. These processes can take a few seconds or over a minute (I only rerun the worker if it's not already running, and log an error if the last process took longer than the data retrieval interval).
In my OnStop event for the service I want to make sure the workers all close down. I call CancelAsync on the worker and the worker checks for cancellation so it can hopefully exit cleanly (i.e., check cancellation, if false retrieve data, save data into the database, loop).
My problem is that I don't want to use a sleep statement, as it will lock everything up, but I also don't want the service to never shut down. For example, I currently have this:
Protected Overrides Sub OnStop()
    ' Add code here to perform any tear-down necessary to stop your service.
    My.Application.Log.WriteEntry("ServiceABC shutting down for device " & DeviceID)
    ServiceTimer.Enabled = False
    If DataRetrievalBackgroundWorker.IsBusy Then
        DataRetrievalBackgroundWorker.CancelAsync()
        Dim x As Integer = 0
        While ((DataRetrievalBackgroundWorker.IsBusy) Or (x < 15))
            Threading.Thread.Sleep(1000)
            x += 1
        End While
    End If
End Sub
This should work since the background worker is on another thread, correct? Is there a better way to handle this?
You're close. If you don't want Sleep(1000) locking things up, do a Sleep(1) instead.
'Dim x As Integer = 0
'While ((DataRetrievalBackgroundWorker.IsBusy) Or (x < 15))
'    Threading.Thread.Sleep(1000)
'    x += 1
'End While

Dim T As Date = Now.AddSeconds(15)
' Loop only while the worker is still busy, and give up after 15 seconds.
While DataRetrievalBackgroundWorker.IsBusy And Now() < T
    Threading.Thread.Sleep(1)
    Application.DoEvents()
End While
I'm developing a load testing tool with gevent.
I created a test script like the following:
while True:
    # send http request
    response = client.sendAndRecv()
    gevent.sleep(0.001)
The send/receive action completes very quickly, in about 0.1 ms.
So the expected rate should be close to 1000 per second.
But in practice I get only about 500 per second, on both Ubuntu and Windows.
Most likely the gevent sleep is not accurate.
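One quick way to check that suspicion (this snippet is mine, not part of the original script) is to time gevent.sleep directly:

import time
import gevent

# Measure the real duration of gevent.sleep(0.001) over many calls.
N = 1000
start = time.perf_counter()
for _ in range(N):
    gevent.sleep(0.001)
elapsed = time.perf_counter() - start
print("average sleep: %.3f ms" % (elapsed / N * 1000))

If the average comes out near 2 ms instead of 1 ms, timer resolution alone explains seeing roughly 500 requests per second instead of 1000.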
Gevent uses libuv or libev for its internal loop, and I found the following description of how libuv handles the poll timeout here:
If the loop was run with the UV_RUN_NOWAIT flag, the timeout is 0.
If the loop is going to be stopped (uv_stop() was called), the timeout is 0.
If there are no active handles or requests, the timeout is 0.
If there are any idle handles active, the timeout is 0.
If there are any handles pending to be closed, the timeout is 0.
If none of the above cases matches, the timeout of the closest timer is taken, or if there are no active timers, infinity.
It seems that gevent.sleep actually sets up a timer, and the libuv loop then uses the timeout of the closest timer.
I strongly suspect that is the root cause: the OS select timeout is not precise!
I noticed the libuv loop can run in UV_RUN_NOWAIT mode, which makes the loop timeout 0, i.e. no sleeping when there is no I/O event.
It may push one CPU core to 100% load, but that is acceptable to me.
So I modified the run function in gevent's hub.py as follows:
loop.run(nowait=True)
But when I run the tool, I get the complaint 'This operation would block forever', like the following:
gevent.sleep(0.001)
File "C:\Python37\lib\site-packages\gevent\hub.py", line 159, in sleep
hub.wait(t)
File "src\gevent\_hub_primitives.py", line 46, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src\gevent\_hub_primitives.py", line 55, in gevent.__hub_primitives.WaitOperationsGreenlet.wait
File "src\gevent\_waiter.py", line 151, in gevent.__waiter.Waiter.get
File "src\gevent\_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\_greenlet_primitives.py", line 60, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\_greenlet_primitives.py", line 64, in gevent.__greenlet_primitives.SwitchOutGreenletWithLoop.switch
File "src\gevent\__greenlet_primitives.pxd", line 35, in gevent.__greenlet_primitives._greenlet_switch
gevent.exceptions.LoopExit: This operation would block forever
So what should I do?
Yes, I finally found the trick.
If the libuv loop's run mode is not UV_RUN_DEFAULT, gevent does some extra checking, and if the loop is in 'nowait' mode it complains "This operation would block forever".
That's weird; actually it will not block forever.
Anyway, I just modified line 473 of the file libuv/loop.py as follows:
if mode == libuv.UV_RUN_DEFAULT:
    while self._ptr and self._ptr.data:
        self._run_callbacks()
        self._prepare_ran_callbacks = False
        # here, change from UV_RUN_ONCE to UV_RUN_NOWAIT
        ran_status = libuv.uv_run(self._ptr, libuv.UV_RUN_NOWAIT)
After that, running the load tool: wow, exactly what I expected. The TPS is very close to what I set, but one core's load is 100%.
That is totally acceptable, because it is a load testing tool.
So if we had a real-time OS kernel, we would not need to bother with any of this.
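As a side note, similar pacing can be had without patching gevent at all. The sketch below is my own (paced_loop is a made-up helper, and client.sendAndRecv is taken from the question): it busy-yields with gevent.sleep(0) until each deadline, which also pins one core but leaves gevent's source untouched.

import time
import gevent

def paced_loop(interval, body):
    # Call body() repeatedly, starting one call every `interval` seconds.
    # gevent.sleep(0) only performs a greenlet switch; it does not arm a
    # libuv/libev timer, so pacing is not limited by timer resolution.
    next_t = time.perf_counter()
    while True:
        body()
        next_t += interval
        while time.perf_counter() < next_t:
            gevent.sleep(0)  # cooperative yield; burns CPU until the deadline

# usage with the client from the question:
# paced_loop(0.001, lambda: client.sendAndRecv())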
I have a Windows service written in C++, and on startup, if the SERVICE_STATUS stays in SERVICE_START_PENDING too long, I end up with this error:
Error 1053: The service did not respond to the start or control request in a timely fashion
This happens while the progress bar dialog is kept open. It does not affect the service startup itself; the service continues in SERVICE_START_PENDING until the work is completed and I set SERVICE_RUNNING.
The Windows documentation on dwWaitHint here:
https://msdn.microsoft.com/en-us/library/windows/desktop/ms685996(v=vs.85).aspx
states that the service must call SetServiceStatus with an incremented dwCheckPoint before the dwWaitHint time elapses.
So, for example, I set dwWaitHint to 5 minutes and call SetServiceStatus every 10 seconds with an incremented dwCheckPoint, but I still get the 1053 error after 5 minutes. In other words, the SetServiceStatus calls don't seem to do anything (and these calls are NOT failing, I checked).
By doing the above, shouldn't the service startup be allowed to take longer than dwWaitHint?
UPDATE: I can reproduce this with Microsoft's service sample code. Here's a snippet.
{
    gSvcStatus.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
    gSvcStatus.dwServiceSpecificExitCode = 0;

    // Report initial status to the SCM
    ReportSvcStatus( SERVICE_START_PENDING, NO_ERROR, 300000 );

    int limit = 6; // 6 minutes total
    while (limit--)
    {
        Sleep(60000); // sleep 1 min
        ReportSvcStatus( SERVICE_START_PENDING, NO_ERROR, 300000 ); // 5 minute dwWaitHint
    }

    // We've completed startup, report RUNNING to SCM
    ReportSvcStatus( SERVICE_RUNNING, NO_ERROR, 0 );
}
VOID ReportSvcStatus( DWORD dwCurrentState, DWORD dwWin32ExitCode, DWORD dwWaitHint )
{
    static DWORD dwCheckPoint = 1;

    // Fill in the SERVICE_STATUS structure.
    gSvcStatus.dwCurrentState = dwCurrentState;
    gSvcStatus.dwWin32ExitCode = dwWin32ExitCode;
    gSvcStatus.dwWaitHint = dwWaitHint;

    if (dwCurrentState == SERVICE_START_PENDING)
        gSvcStatus.dwControlsAccepted = 0;
    else
        gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;

    if ( (dwCurrentState == SERVICE_RUNNING) ||
         (dwCurrentState == SERVICE_STOPPED) )
        gSvcStatus.dwCheckPoint = 0;
    else
        gSvcStatus.dwCheckPoint = dwCheckPoint++;

    // Report the status of the service to the SCM.
    SetServiceStatus( gSvcStatusHandle, &gSvcStatus );
}
Are you sure you are treating dwWaitHint as milliseconds and not seconds? (i.e., your dwWaitHint is 300000?)
My experience is that the docs are right on this point, that the wait hint only applies to the next SetServiceStatus call.
Although I would also say a 5-minute service start time is excessive, even if it actually takes that long to load or check data. Mostly I say that because the service control interface is stuck that entire time. SQL Server, for example, does a fairly quick service start even after a system crash that requires hours of validation.
Well, there are definitely limitations to the Microsoft Management Console (MMC) Services snap-in, specifically its "Service Control" progress dialog.
See this link from MS:
https://support.microsoft.com/en-ca/help/307806/the-services-snap-in-times-out-with-error-1053
When any control operation is initiated, the Services snap-in displays a progress dialog box with the title "Service Control". If a service requires a significant amount of time to process an operation, the progress bar will slowly increment as the Services snap-in waits for the operation to finish. After 125 seconds, the progress bar will be full and the Services snap-in will display the error 1053 (ERROR_SERVICE_REQUEST_TIMEOUT) message. The service process itself will continue its operation as usual even after the error message has appeared.
But the somewhat good news is that I've proven this 125-second statement to be false, at least on Windows 10 (I haven't tried other Windows versions). As stated in my question, when setting SERVICE_START_PENDING you can set dwWaitHint to something higher, and the progress bar will respect that. But you only get one chance at this: if you then update SERVICE_START_PENDING by calling SetServiceStatus again with a higher dwWaitHint, it will not affect the progress bar dialog.
The only downside to setting dwWaitHint really high is that the progress bar advances more slowly, so when you set the SERVICE_RUNNING status it might only be half way across. That is not a big deal, though, just aesthetics.
I am developing a Wikipedia bot to analyze editing contributions. Unfortunately, a single run takes hours to complete, and at some point during that time Wikipedia's database replication delay is sure to exceed 5 seconds (the default maxlag value). The recommendation in the API's maxlag parameter documentation is to detect the lag error, pause for X seconds and retry.
But all I am doing is reading contributions with:
usrpg = pywikibot.Page(site, 'User:' + username)
usr = pywikibot.User(usrpg)
for contrib in usr.contributions(total=max_per_user_contribs):
    # (analyzes contrib here)
How can I detect the error and resume? This is the error:
WARNING: API error maxlag: Waiting for 10.64.32.21: 7.1454429626465 seconds lagged
Traceback (most recent call last):
File ".../bot/core/pwb.py", line 256, in <module>
if not main():
File ".../bot/core/pwb.py", line 250, in main
run_python_file(filename, [filename] + args, argvu, file_package)
File ".../bot/core/pwb.py", line 121, in run_python_file
main_mod.__dict__)
File "analyze_activity.py", line 230, in <module>
attrs = usr.getprops()
File ".../bot/core/pywikibot/page.py", line 2913, in getprops
self._userprops = list(self.site.users([self.username, ]))[0]
File ".../bot/core/pywikibot/data/api.py", line 2739, in __iter__
self.data = self.request.submit()
File ".../bot/core/pywikibot/data/api.py", line 2183, in submit
raise APIError(**result['error'])
pywikibot.data.api.APIError: maxlag: Waiting for 10.64.32.21:
7.1454 seconds lagged [help:See https://en.wikipedia.org/w/api.php for API usage]
<class 'pywikibot.data.api.APIError'>
CRITICAL: Closing network session.
It occurs to me to catch the exception thrown in that line of code:
raise APIError(**result['error'])
But then restarting the contributions for the user seems terribly inefficient. Some users have 400,000 edits, so rerunning that from the beginning is a lot of backsliding.
I have googled for examples of doing this (detecting the error and retrying) but I found nothing useful.
Converting the previous conversation in comments into an answer.
One possible method to resolve this is to wrap the call in try/except and redo the piece of code that caused the error.
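A rough sketch of that approach (entirely hypothetical code, relying on the documented behaviour that contributions() yields (page, revid, timestamp, comment) tuples and that the maxlag failure surfaces as pywikibot.data.api.APIError): remember which revisions were already analyzed, so a failure partway through a large list does not force redoing the analysis from scratch.

import time
from pywikibot.data import api

def analyze_contributions(usr, max_per_user_contribs):
    seen = set()  # revision ids already analyzed
    while True:
        try:
            for page, revid, timestamp, comment in usr.contributions(total=max_per_user_contribs):
                if revid in seen:
                    continue
                seen.add(revid)
                # (analyze the contribution here)
            return
        except api.APIError as err:
            if err.code != 'maxlag':
                raise
            time.sleep(30)  # back off, then restart the listing and skip seen revids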
But pywikibot already does this internally for us! By default, pywikibot retries every failed API call 2 times if you're using the default user-config.py it generates. I found that increasing the following config values does the trick in my case:
maxlag = 20
retry_wait = 20
max_retries = 8
maxlag is the parameter the Maxlag parameter documentation recommends increasing, especially if you're doing a large number of writes in a short span of time. But the retry_wait and max_retries configs are useful when someone else is doing the heavy writing (as in my case: my scripts only read from the wiki).
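For completeness, here is a sketch (mine, with a hypothetical user name) of applying those values from inside the script via pywikibot.config instead of editing user-config.py; the attribute names mirror the config entries above.

import pywikibot

# Raise the thresholds before any API calls are made (values mirror the answer;
# tune them to your own workload).
pywikibot.config.maxlag = 20       # tolerate up to 20 s of replication lag
pywikibot.config.retry_wait = 20   # wait 20 s before retrying a failed request
pywikibot.config.max_retries = 8   # give up after 8 attempts

site = pywikibot.Site('en', 'wikipedia')
usr = pywikibot.User(site, 'ExampleUser')  # hypothetical user name
for contrib in usr.contributions(total=500):
    pass  # analysis of each contribution goes here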
I have an Electronic Workforce (EWF) application that records the caller speaking. The system needs to record for 120 seconds then play a message and hangup. I set a maximum length of 120 seconds and a minimum length of 1 second. I didn't want any input to disrupt the recording, so I checked "Discard Earlier User Input", "Tone Input Stops Recording" (with keys that stop recording = ""), and "Discard the Key".
I also added "VCE.RECORD.beeptime = 0" to the cta.cfg file to the remove the beep before the recording. To the cta file I also added "VCE.RECORD.gain = 2" to increase the volume of the recordings and "VCE.RECORD.silencetime = 120000" to allow up to 120 seconds of silence if the user doesn't say anything to be recorded.
These settings all worked fine in my testing, in that the only way I was able to get a file shorter than 120 seconds was to hang up early. Now that we have gone live, though, customers seem to have found a way to consistently produce a file exactly five seconds long. We have about 120 recordings a day, and about 10 a day are exactly five seconds long. The exception returned is "Voice Msg Too Short".
My question is how is this happening and what can I do (if anything) to prevent it?
User -BMM- on the Edify/Intervoice/Convergys customer forum gave me a good answer to this question. There are two settings that can cause a recording step to time out with the Voice Msg Too Short error:
VCE.RECORD.novoicetime = 0
VCE.RECORD.silencetime = 0
The value is in seconds, but zero disables the timeouts entirely so that silence at the start of a sound and silence at the end do not cause the exception to be thrown.