When I start a Volume Shadow Copy programmatically, what is the maximum time I should wait before the shadow copy is prepared, i.e., before I proceed with taking the backup? Right now it takes about 10 seconds on my machine, but I need a timeout when deploying elsewhere.
This article (which describes the flow of events that happen when you create a Shadow Copy using a "Hardware Provider") has a nice sequence diagram that illustrates the timeout in the 'I/O Flush & Hold' window: https://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx
The total timeout enforced by VSS in this window is 60 seconds. Hence, you can expect the IVssAsync::Wait() call, made on the IVssAsync pointer returned by DoSnapshotSet(), to time out automatically after 60 seconds.
As such, you do not need to implement a timeout yourself; VSS does this for you.
We're currently using H2 version 1.4.199 in embedded mode with the default nio file protocol and the MVStore storage system. The write_delay parameter is set to 60 seconds.
We run a batch of about 30,000 insert/update/delete statements within 2 seconds (in one transaction), followed by another batch of a couple of hundred statements only 30 seconds later (in a second transaction). The next attempt to open a DB connection (only 2 minutes later) shows that the DB is corrupt:
File corrupted while reading record: null. Possible solution: use the recovery tool [90030-199]
Since the transactions occur within a minute, we wonder whether the write_delay of 60 seconds might be contributing to the issue.
Changing write_delay to 60 s (from the default of 0.5 s) will definitely increase your risk of lost transactions, and I do not see a good reason for doing it. It should not cause database corruption, though. More likely some thread interruption causes that, since you are running a web server and who knows what else in the same JVM. Using the async file store might help in that area, and yes, it is stable enough (how much worse can it get for your app than a database corruption, anyway?).
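If you do want to put the delay back to the default, here is a minimal JDBC sketch; the jdbc:h2:./data/app URL and the sa credentials are placeholders for your own setup:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class WriteDelayReset {
        public static void main(String[] args) throws Exception {
            // Placeholder URL and credentials; point these at your embedded database.
            try (Connection conn = DriverManager.getConnection("jdbc:h2:./data/app", "sa", "");
                 Statement st = conn.createStatement()) {
                // Restore H2's default write delay of 500 ms (the value is in milliseconds).
                st.execute("SET WRITE_DELAY 500");
            }
        }
    }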
I have a progress bar in a desktop application user interface (written in JavaFX). In general, is there an ideal update interval (in milliseconds) for progress bars whose progress changes continuously (as in a file copy or download)? Right now I am getting good results with a 20 ms update interval; that is, I have a timer thread that updates the progress bar every 20 ms. My reasoning is that 20 ms corresponds to 50 updates per second, which is above 30 FPS, the rate at which the human eye supposedly stops seeing individual frames. Is there any reason to avoid lower intervals, such as 1 ms? What is the best practice for this?
Use a Task for your operation and call updateProgress on the task as often as you like. Bind your ProgressBar's progress property to the task's progress property. The JavaFX system will coalesce any superfluous updates so that the progress property is updated at most once per pulse; by default, JavaFX processes pulses 60 times per second.
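For illustration, here is a minimal runnable sketch of that pattern; the copy loop and byte counts are placeholders, not real I/O:

    import javafx.application.Application;
    import javafx.concurrent.Task;
    import javafx.scene.Scene;
    import javafx.scene.control.ProgressBar;
    import javafx.stage.Stage;

    public class ProgressDemo extends Application {
        @Override
        public void start(Stage stage) {
            // The worker may report progress as often as it likes; JavaFX
            // coalesces the updates to at most one per pulse.
            Task<Void> copyTask = new Task<Void>() {
                @Override
                protected Void call() throws Exception {
                    final long total = 10_000_000; // placeholder for the real file size
                    for (long done = 0; done < total; done += 4_096) {
                        // ... copy the next chunk here ...
                        updateProgress(Math.min(done + 4_096, total), total);
                    }
                    return null;
                }
            };

            ProgressBar bar = new ProgressBar();
            bar.progressProperty().bind(copyTask.progressProperty());
            new Thread(copyTask, "copy-worker").start();

            stage.setScene(new Scene(bar, 300, 50));
            stage.show();
        }

        public static void main(String[] args) {
            launch(args);
        }
    }

Because the binding delivers at most one update per pulse, there is nothing to gain from a 1 ms timer; the extra updates are simply coalesced away.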
I'm running ColdFusion 8 and have a CFC that loops through a set of database records.
Each record contains two fields: an image path and an image file. I construct a path for every image, upload it to a temp folder, resize it, and then store it to S3.
Depending on the number of records this may take quite some time, and I have not been able to finish the upload cycle with larger sets of images (it eventually times out).
I'm already setting my timeout threshold to 5000, but it still does not seem to be enough.
I can pick up where I left off, because I keep a media log to check against before uploading to S3. This way I can finish the task, but I have to trigger the function five times to upload 400 items.
Question:
Is there a way to avoid a timeout without setting (in the S3 case) httptimeout to some 50000000? And would it make sense to run this in a CFTHREAD, or will that be a problem if the user leaves the import page while the system is still uploading?
Thanks for some insights.
You can use a CFTHREAD to perform the task, but make sure you LOCK THE SCOPE! Otherwise you could end up running this memory-intensive process several times over and kill the server; you only want this process running once at a time if it's that intensive.
You have other options, though. If this is not something your application users will need to run and it's a one-off process you're doing, you could set up a scheduled task with an exceedingly long timeout to run overnight, when server load is low. This lets you set the timeout independently of the application, so the rest of the application is unaffected by global timeout changes.
Another option, if this is something users will be doing semi-regularly, is a thread that pushes a notification via email, a log, or other means (Ajax or WebSockets) letting the user know their task is complete. This has the upside that timeouts can be changed or calculated dynamically at thread creation based on the amount of data to be processed. However, if you're not careful you can overload your server with many threads processing large datasets (plus log-file read/write locks will be harder to manage).
I would encourage you, though, to take this away, see which solution works for you, and post your final solution so others can see the outcome.
Hope this helps.
I have hosted a state machine workflow as a WCF service, and the workflow is called from ASP.NET code. I used netTcpContextBinding for the workflow hosting. The problem is that if a send/receive activity within the workflow takes a long time (say, 1 minute) to execute, it shows a "transaction aborted" error and terminates. I have already set the binding values for the send, receive, open, and close timeouts to their maximum values in both web.config and app.config.
How can I overcome this issue?
A TransactionScope has a default timeout of 60 seconds, so if whatever you are doing inside it takes longer, it will time out and abort. You can increase the timeout on the TransactionScope, but quite frankly, 60 seconds is already quite long. In most cases you are better off doing any long-running work of collecting data before the transaction, keeping the transaction itself as short as possible.
I'm using the Scripting Bridge to query iTunes from my Cocoa application. Sometimes iTunes pops up a window (for example, if an iPod needs updating), and while that popup window is open I can't get any information from iTunes. So if I request information from iTunes while it's in this state, my application completely locks up until the popup window is dismissed.
So I need some sort of mechanism where I can ask iTunes something simple in a separate thread to see if I get a response; if that separate thread doesn't receive a response within a short period of time, my main thread will just kill it and thus know not to query iTunes at that particular time.
Any ideas on how to create such a mechanism? I've searched for ways to kill a thread but haven't found any.
Your problem is nothing to do with threads; it's that your timeout is too long. Whatever you're doing should fail after about a minute.
To fix this, send a setTimeout: message to the SBApplication object, passing the amount of time you want it to wait. The value is in ticks, of which there are exactly 60 per second.
(Some sources say 60.15, and Apple's own docs say “approximately” 60, but I just measured ten minutes' worth of TickCount, and the result of the division by 600 seconds is exactly 60.0. The code I used:
NSLog(@"Ticks per second: %f", (end - start) / (60.0 * numMinutes)); where end and start are results from TickCount.)
Check out NSOperation/NSOperationQueue.