I recently moved my background sync downloads to a view controller and need some advice on how best to handle them asynchronously. I have written all the code to show a progress view as the download occurs, but as you might have guessed, it's not that simple. Here's how it works.
The user sees a table view with two entries, one for each database. They can press a button to download a database, and when the download starts it fires off the async URL connection, etc. This works to a certain extent.
Here's what I want it to do:
1. Download the main update URL (works OK).
2. Then download a secondary URL.
3. Then apply the first URL's content to the SQLite store (code written for that).
4. Then apply the second URL's content to the SQLite store (code written for that).
(All the while showing progress to the user.)
When the downloads were synchronous this was easy, as I just waited for each one to finish before firing off the next activity, but with the async method I'm struggling with how to make them wait. Step 3 depends upon step 1 finishing, and step 4 depends on step 2 finishing; overall success relies on all of them finishing. Step 4 also needs to wait for step 3 to finish, otherwise the database locks will cause a clash.
The second complication is that if the user presses the second button while the first download is running, steps 3 and 4 for the two databases will clash if they execute at the same time, since the first row is still accessing the database.
Has anyone done anything similar, and if so, what strategy did you use to manage the flow of events?
Also, I wanted to wrap this all up in a background task with an expiration handler so it would survive the user pressing the home button, but the delegate methods don't get called when I do that.
OK, here is what I did to fix the problem:
1. Created an NSOperationQueue.
2. Added the URL operations as NSInvocationOperations.
3. Waited until the URL operations were complete (waitUntilAllOperationsAreFinished).
4. Then set the max concurrent operation count to 1, which forced the subsequent database operations to execute in series, one after the other, and thus prevented SQLite from locking itself out.
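A rough sketch of that sequence in Objective-C (the downloadMainUpdate/applyMainUpdate selectors are placeholders for your own methods, not code from the question; run this off the main thread since it blocks):

// Sketch only: selector names are hypothetical.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

// The two downloads are independent, so they may run concurrently.
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
    selector:@selector(downloadMainUpdate) object:nil]];
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
    selector:@selector(downloadSecondaryUpdate) object:nil]];

// Block until both downloads have finished.
[queue waitUntilAllOperationsAreFinished];

// Serialize the database work so SQLite is never hit by two operations at once.
queue.maxConcurrentOperationCount = 1;
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
    selector:@selector(applyMainUpdate) object:nil]];
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
    selector:@selector(applySecondaryUpdate) object:nil]];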
I am pretty new to Power Automate. I created a flow that triggers when an item is created or modified. It initializes some variables and then runs some switch cases to assign values to each of them. The variables then go into an array, and another variable is incremented to get the total of the array. I then have a conditional that assigns a value to a column in the list. I tested the flow by going into the modern view of the list and clicking the save button. This worked a bunch of times, so I sent it for user testing. One of the users edited multiple items by double-clicking into the item, which saves after each column change (and which I assume triggers a run of the flow for each save).
The flow seemingly works but appeared to get bogged down at a point, based on the run history. I let it sit overnight and then tested again, and now it shows runs for multiple item IDs at a time even though I only edited one specific item.
I had another developer take a look at my flow and he could not spot anything wrong with it. It never had a hard error in testing, only warnings about conditionals causing a loop, but all my conditionals resolve. Pictures included. I am just not sure what caveats I might be missing.
I am currently letting the flow sit to see if it finishes catching up. I have read about the concurrency control option as well as conditions on the trigger itself. I am curious why it seems to run on two (or more) records at once without me or anyone else editing each one.
You might be able to ignore the updates made by the service account (the account used in the connections of the flow's actions) by using the following trigger condition expression:
@not(equals(triggerOutputs()?['body/Editor/Claims'], 'i:0#.f|membership|johndoe#contoso.onmicrosoft.com'))
I want to test the interaction between two users, communicating through a remote server, using CasperJS. My app is not a chat app, but that is an easy way to illustrate what I want to do.
So I'll log in to browser window A, then log in to browser window B, then back in browser window A I'd input the chat message and call click() on the send button, and then back in browser B I'd wait for the message to appear. Then write a message there, and go back to browser A to make sure it arrives.
I found this discussion on parallel browsing, which turns out to be serial. Serial is fine for me, but it appears doing more than one action in each browser is going to get very messy. It'd be something like this:
A.start(...);
A.then(...);
A.then(...);
B.start(...);
B.then(...);
A.run(function(){
    B.run(function(){
        A.start(...);
        A.then(...);
        A.run(function(){
            B.start(...);
            B.run(function(){
                // and so on
            });
        });
    });
});
(I've not actually tested whether that will work; I started writing it that way and thought there must be a better way!)
I would do it this way:
Two scripts:
script A with A's login
script B with B's login
Then script A's first step (after login): writing in the chat.
Script B's first step: waiting for A's text, then sending its answer.
Script A's second step: waiting for B's answer, and so on.
You launch these two scripts in parallel using Node (child_process) and they will interact through their wait() statements, as in the sketch below.
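A minimal launcher sketch, assuming the two scripts are saved as userA.js and userB.js (hypothetical file names) and that the casperjs binary is on the PATH:

// launcher.js: spawn both CasperJS scripts in parallel.
var spawn = require('child_process').spawn;

['userA.js', 'userB.js'].forEach(function (script) {
    var child = spawn('casperjs', [script]);
    child.stdout.on('data', function (data) {
        console.log(script + ': ' + data);
    });
    child.on('exit', function (code) {
        console.log(script + ' exited with code ' + code);
    });
});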
There is just one delicate point: the two pages need to render (or log in) at approximately the same time, because if one of them freezes a little you could get a timeout error... So maybe increase waitTimeout to be safer (though for me the 5-second default timeout should be sufficient).
You could also use an external file to synchronize them, but I don't see how that would help, because you would have to wait for the data to be updated in that file anyway.
So this solution is asynchronous, but it works.
This will not work because of the step/scheduling nature of casperjs. See also Running multiple instances of casperjs.
In your code example the B instance is only started when A is finished, because the execution begins with the call to run.
The easiest approach would be to write two separate CasperJS scripts (or one script invoked with different data for the two sides) and let each of them run asynchronously from the command line. On Linux I would use nohup ... & for this.
As for the specific test steps: I think it is easier to let your application handle the events that are needed for synchronizing the two CasperJS clients. If it is a chat app and you want to let the two Caspers chat, you would write a dialog beforehand which defines what each client says at each step.
You can then synchronize the clients using waitForText:
A sends some fixed/known text while B waits for this fixed text to appear
B receives this fixed text while A is in the next step and waits for B's response (also known text)
B sends the next fixed text and A is still waiting
Of course you would need to play around with wait timeouts.
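A sketch of what client A's script could look like under that scheme (the URL, the '#message' and '#send' selectors, and the dialog lines are all placeholder assumptions):

// scriptA.js: client A's half of the scripted dialog.
var casper = require('casper').create({ waitTimeout: 15000 });

casper.start('http://example.com/chat', function () {
    // log in and send the first known line of the dialog
    this.sendKeys('#message', 'hello from A');
    this.click('#send');
});

// Block this client until B's known response shows up in the page.
casper.waitForText('hello from B', function () {
    this.sendKeys('#message', 'second line from A');
    this.click('#send');
});

casper.run();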
I run a Symfony 1.4 project with a very large amount of data. The main page and the category pages use pagers, which need to know how many rows are available. I'm passing a query containing joins to the pager, which leads to a loading time of one minute on these pages.
I configured cache.yml for the respective actions. But I think this workaround is insufficient, and here are my assumptions:
Symfony rebuilds the cache within a single request, which is made by some user. Let's call this user the "cache victim" to simplify things.
In our case, the data only needs to be reasonably up to date - a lifetime of 10 minutes would be sufficient. Obviously, the cache won't be rebuilt if no user is willing to be the "cache victim" and therefore just cancels the request. Are these assumptions correct?
So, I came up with this idea:
Symfony should fake the HTTP request after rebuilding the cache. The new cache entries should be written to a temporary file/directory and swapped with the previous cache entries as soon as the rebuild has finished.
Is this possible?
In my opinion, this is similar to the concept of double buffering.
Wouldn't it be silly if there were a single "gpu victim" in a multiplayer game who saw the screen building up line by line? (This is a lopsided comparison, I know... ;) )
Edit
There is no "cache-victim" - Every 10 minutes page reloading takes 1 minute for every user.
I think your problem is due to some missing or wrong indexes. I have a sf1.4 project for a large soccer site (about 2M pages/day) and the pagers aren't slow, even though our database has more than 1M rows these days. Take a look at your query with EXPLAIN and check where it goes bad, as in the example below...
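For instance (assuming MySQL; the tables and columns here are made up for illustration):

EXPLAIN SELECT i.id, i.title
FROM item i
JOIN category c ON c.id = i.category_id
WHERE c.slug = 'news'
ORDER BY i.created_at DESC
LIMIT 20;

Rows with type = ALL (full table scans) or very large row estimates in the output are the usual suspects for a missing index.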
Sorry for necromancing (is there a badge for that?).
By configuring cache.yml you are just caching the view layer of your app (that is, the CSS, JS and HTML) for requests WITHOUT parameters. Navigating the pager obviously puts a ?page=X on the GET request.
Taken from the symfony 1.4 config.yml documentation:
"An incoming request with GET parameters in the query string or submitted with the POST, PUT, or DELETE method will never be cached by symfony, regardless of the configuration."
http://www.symfony-project.org/reference/1_4/en/09-Cache
What might help you is caching the database results, but it's a painful process in symfony/Doctrine. Refer to:
http://www.symfony-project.org/more-with-symfony/1_4/en/08-Advanced-Doctrine-Usage#chapter_08_using_doctrine_result_caching
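A minimal sketch of what that looks like in Doctrine 1 (the APC driver and the 10-minute lifetime are assumptions based on the question):

// e.g. in ProjectConfiguration: register a result-cache driver
$manager = Doctrine_Manager::getInstance();
$manager->setAttribute(Doctrine_Core::ATTR_RESULT_CACHE, new Doctrine_Cache_Apc());

// ...then on the pager's heavy query: cache the result set for 10 minutes
$query->useResultCache(true, 600);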
Edit:
This might help you as well:
http://www.zalas.eu/symfony-meets-apc-alternative-php-cache
I am a newbie to Developer2000.
I have an Oracle PL/SQL procedure (say, proc_submit_request) that fetches thousands of requests and submits them to the dbms_job scheduler. The call to dbms_job is coded inside a loop, once for each request fetched.
Currently, I have a button (say, a SUBMIT button) on an Oracle Forms screen; clicking it calls proc_submit_request.
The problem here is that control does not return to my screen until ALL of the fetched requests have been submitted to dbms_job (this takes hours to complete).
The screen grays out and just the hourglass appears until proc_submit_request completes.
proc_submit_request then returns a message to the screen saying "XXXX requests submitted".
My requirement now is that once the user clicks the SUBMIT button, the screen should no longer gray out. The user should be able to navigate to other screens and not be stuck on the submit screen until the called procedure has completed.
I suggested running listeners (shell scripts and Perl stuff) that could listen for messages on a pipe and run the requests as background processes.
But the user is asking me to fix the issue in the application rather than by running listeners.
I've heard a little about the OPEN_FORM built-in.
Suppose I have two forms, namely Form-1 and Form-2, and Form-1 calls Form-2 using OPEN_FORM.
Now, are the following things possible using OPEN_FORM?
On calling open_form('Form-2', OTHER-ARGUMENTS...), control must stay in Form-1 (i.e. the user should not know that another form is being opened) and Form-2 should call proc_submit_request.
The user must be able to navigate to other screens in the application, but Form-2 must keep running until proc_submit_request has completed.
What happens if the user closes (exits) Form-1? Will Form-2 still be running?
Please provide answers or suggest a good solution.
Good thought on the Form-1/Form-2 scenario - I'm not sure whether that would work or not. But here is a much easier way, without having to fumble around with coordinating hidden forms, running stuff, and bringing things to the forefront when the function actually returns... etc.
Rewrite your function that runs the database jobs as an AUTONOMOUS_TRANSACTION. Have a look at the compiler directive PRAGMA AUTONOMOUS_TRANSACTION for more details on that. You must use this within a database function/package/procedure - it is not valid in Forms (at least Forms 10; not sure about 11).
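A minimal sketch of such a wrapper (proc_submit_request is from the question; the wrapper name is illustrative):

CREATE OR REPLACE PROCEDURE run_submit_job IS
  PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
  proc_submit_request;  -- does the heavy lifting in its own transaction
  COMMIT;               -- an autonomous transaction must end with COMMIT or ROLLBACK
END run_submit_job;
/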
You can then store the jobs' result somewhere from your function (package variable, table, etc.) and use the built-in CREATE_TIMER in conjunction with the form-level trigger WHEN-TIMER-EXPIRED to check your storage location every 10 seconds or so. You can then display a message to the user about the jobs and kill the timer using DELETE_TIMER.
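For the polling side, something along these lines in Forms (CREATE_TIMER, WHEN-TIMER-EXPIRED and DELETE_TIMER are real built-ins; the completion flag is a placeholder):

-- e.g. in WHEN-BUTTON-PRESSED, after starting the work: poll every 10 seconds
DECLARE
  tm TIMER;
BEGIN
  tm := CREATE_TIMER('job_check', 10000, REPEAT);
END;

-- form-level WHEN-TIMER-EXPIRED trigger
IF GET_APPLICATION_PROPERTY(TIMER_NAME) = 'JOB_CHECK' THEN
  -- check your storage location (package variable, table, ...) here
  IF jobs_are_done THEN  -- hypothetical flag read from that storage
    MESSAGE('Requests submitted.');
    DELETE_TIMER('job_check');
  END IF;
END IF;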
You could create a single DBMS_JOB to call proc_submit_request. That way your form only has to make one call, and the creation of all the other jobs is done in a separate session.
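A sketch of that one call (proc_submit_request is from the question):

DECLARE
  l_job BINARY_INTEGER;
BEGIN
  -- queue one background job that performs all the submissions
  DBMS_JOB.SUBMIT(job => l_job, what => 'proc_submit_request;');
  COMMIT;  -- the job is only handed to the job queue on commit
END;
/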
Here's the scenario:
Process 1 (P1) - reads in various flat files, one consequence of which is deleting or adding photo URLs to a database table to indicate that they need to be downloaded.
Process 2 (P2) - looks up the photo URLs that need to be downloaded, actually performs the download, then marks the record as downloaded.
P1 and P2 sometimes run concurrently, depending on the amount of data to process, and sometimes P1 removes records that P2 has already loaded and prepared to download.
This is not actually a problem: if an image is downloaded but no longer needed, the nature of these image URLs is such that it may end up being used at a later date anyway, and space is no concern. So the only trouble is that if P2 selects a group of records and some of those records cease to exist during the LINQ update, SubmitChanges() throws an error similar to:
"1 of 4 updates failed."
My question is: what happened when this update failed? As far as I can tell, there are 3 possibilities:
The entire transaction was rolled back
The transaction was not rolled back and all records that could be updated were
The transaction was not rolled back and the 1st record was updated but the 2nd failed so the rest of the updates weren't attempted.
The actual call was as follows - no ConflictMode set:
this.SomeDataContext.SubmitChanges();
How would this call be altered so that any updates that could be executed would be, and the others would be ignored? Does the following do the trick:
this.SomeDataContext.SubmitChanges(ConflictMode.ContinueOnConflict);
I don't see anything on MSDN indicating the default ConflictMode of the parameterless call:
http://msdn.microsoft.com/en-us/library/bb292162.aspx
...though there is an indication on the single-parameter overload that the default is FailOnFirstConflict:
http://msdn.microsoft.com/en-us/library/system.data.linq.datacontext.submitchanges.aspx
With LINQ to SQL, all changes that are pending when SubmitChanges() is called are saved or rolled back as a single transaction. So if you have 2 inserts, 2 updates and 2 deletes pending when SubmitChanges() is called, they are all saved or rolled back as a single unit of work.
It is possible to make many SubmitChanges() calls and have all inserts/updates/deletes from all SubmitChanges() treated as a single unit of work by wrapping all the SubmitChanges() calls within a TransactionScope object.
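A sketch of that pattern (SomeDataContext is from the question; the second context is illustrative):

using System.Transactions; // requires a reference to System.Transactions.dll

using (var scope = new TransactionScope())
{
    // Each SubmitChanges() enlists in the ambient transaction...
    this.SomeDataContext.SubmitChanges();
    this.OtherDataContext.SubmitChanges(); // hypothetical second context

    // ...and nothing is committed until the scope completes.
    scope.Complete();
}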
I would have commented, but I don't have enough reputation.
I came here looking for an answer about the default ConflictMode used. Later I found http://msdn.microsoft.com/en-us/library/bb345081(v=vs.90).aspx which seems to have it documented now.