I have a Laravel project that receives session data via an API.
A client will send data to the API for storage every 30 seconds during a session.
If the client does not send data for say 2 minutes, then I would like to trigger a job to process the session data with the assumption that the session has finished.
My initial thought was to dispatch a queued job with a two-minute delay every time data comes in, with some sort of reference to the client/session. However, it seems that once I dispatch the job, there is no way to update the delay when new data arrives within the 30 seconds.
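For reference, the delayed-dispatch idea looks something like the sketch below (ProcessSessionJob and $sessionId are hypothetical names, not from an existing codebase); the issue is that the delay on a job already sitting on the queue cannot be pushed back when the next payload arrives.

```php
<?php

use App\Jobs\ProcessSessionJob; // hypothetical job class

// Called each time the client posts session data.
// The job is pushed with a 2-minute delay, but once it is on the
// queue there is no built-in way to extend that delay when the
// next payload arrives 30 seconds later.
ProcessSessionJob::dispatch($sessionId)
    ->delay(now()->addMinutes(2));
```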
Is there an easy way to achieve this, or does anyone have suggestions for another approach?
I hope this is clear.
EDIT
To provide an example:
Let's say game players accumulate points, and their database record is updated with the new score every 30 seconds.
There is no way of knowing that a player has stopped accumulating points, in other words that they have finished the game and can be processed, e.g. sent an email notification, etc.
Related
I've built a system based on Laravel where users are able to begin a "task" which repeats a number of times, with a delay between each repetition. I've accomplished this by queueing a job with an amount argument, which then recursively queues an additional job until the count is up.
For example, I start my task with 3 repetitions:
A job is queued with an amount argument of 3. It runs, and the amount is decremented to 2. The same job is queued again with a delay of 5 seconds.
When the job runs again, the process repeats with an amount of 1.
The last job executes, and now that the amount has reached 0, it is not queued again and the tasks have been completed.
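A minimal sketch of that pattern, using hypothetical names (RepeatingTask, $userId) rather than the actual implementation, might look like this:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

// Hypothetical self-requeueing job: runs once, then queues itself
// again with a smaller amount until the count is used up.
class RepeatingTask implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public $userId;
    public $amount;

    public function __construct($userId, $amount)
    {
        $this->userId = $userId;
        $this->amount = $amount;
    }

    public function handle()
    {
        // ... perform one repetition of the task here ...

        $remaining = $this->amount - 1;

        if ($remaining > 0) {
            // Queue the same job again on the "tasks" queue with a
            // 5-second delay and the decremented amount.
            self::dispatch($this->userId, $remaining)
                ->onQueue('tasks')
                ->delay(now()->addSeconds(5));
        }
    }
}
```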
This is working as expected, but I need to know whether a user currently has any tasks being processed. I need to be able to do the following:
Check if a particular queue has any jobs started by a particular user.
Check the value that was set for amount on that job.
I'm using the database driver for a queue named tasks. Is there any existing method to accomplish my goals here?
Thanks!
You shouldn't be using delay to queue multiple repetitions of the same job over and over. That functionality is meant for something like retrying a failed network request. Keeping jobs in the queue for hours at a time can lead to memory issues with your queues if the count gets too high.
I would suggest you use the php artisan schedule:run functionality to run a command every 1-5 minutes that checks the database to see whether it is time to run a user's job. If so, kick off that job and set a status flag on the user table (or whatever table you want to use to keep track of these things). When the job finishes, mark that same row as completed and wait for the next cron run to do it again.
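A rough sketch of that approach, assuming a hypothetical tasks table with status and run_at columns and a hypothetical RunTaskRepetition job (none of these names come from the question); the command would be registered in app/Console/Kernel.php with something like $schedule->command('tasks:process-due')->everyMinute();

```php
<?php

namespace App\Console\Commands;

use App\Jobs\RunTaskRepetition; // hypothetical job class
use Illuminate\Console\Command;
use Illuminate\Support\Facades\DB;

class ProcessDueTasks extends Command
{
    protected $signature = 'tasks:process-due';
    protected $description = 'Dispatch any task repetitions that are due to run';

    public function handle()
    {
        // Find rows whose next run time has passed and that are not finished.
        $due = DB::table('tasks')
            ->where('status', 'pending')
            ->where('run_at', '<=', now())
            ->get();

        foreach ($due as $task) {
            // Flag the row so the same repetition is not picked up twice,
            // then hand the actual work off to the queue.
            DB::table('tasks')->where('id', $task->id)->update(['status' => 'running']);

            RunTaskRepetition::dispatch($task->id)->onQueue('tasks');
        }
    }
}
```

Because the pending work lives in a plain database table rather than in serialized queue payloads, checking whether a user still has repetitions outstanding (and how many) becomes an ordinary query against that table.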
I'm using a load test in Visual Studio to test our Web API services, but to my surprise I can't seem to test what I want to. I have a single URL in my .webtest file and try to send the same URL again and again to see what the average response time is.
Here are the details
1. I use a constant load of 1 user.
2. The test duration is 1 hour.
3. The think time is 10 seconds (not the think time between test iterations).
4. The average response time I get is 1.5 seconds.
5. So the average test time comes out to be 11.5 seconds.
6. Requests/sec are 0.088.
7. I'm using Sequential Test Order among 4 different types of tests.
So these figures make me think that every time a virtual user sends a request, besides the specified think time it also waits for the request to complete before sending a new one. Thus, technically, the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already-sent request to come back and only then, after the specified think time, send a new one. I need to configure the load test so that, if the think time is 10 seconds, the user sends the next request every 10 seconds, rather than waiting for the first one to come back, thinking for another 10 seconds, and then sending a new request (which brings the effective interval to 11.5 seconds in my case, matching the observed rate of roughly 1/11.5 ≈ 0.087 requests per second). No matter which of the 4 test types I choose, Visual Studio always forces the virtual user to wait for the request to complete, then add the specified think time, and then send a new one.
I know that what the Visual Studio load test does is the more realistic approach, where the user sends a request, waits until it comes back, thinks about or interacts with the website, and then sends a new one.
Any help or suggestions towards what I'm trying to achieve would be appreciated.
In the properties of the scenario, set the "Test mix type" to "Test mix based on user pace" and set the "Tests per user per hour" as appropriate. See here.
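For the pacing described in the question, for example, one request every 10 seconds corresponds to 3600 / 10 = 360 tests per user per hour; with that mix type the tool should pace each virtual user to hit the configured rate rather than chaining the think time onto the response time.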
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not produce a useful result. The two values on the right are what they say they are: think time simulates the time a user spends reading the page, deciding what to do next, and typing/clicking their response, while response time is the "turnaround" time between sending a request and getting the response back. Adding them does not increase the think time in any sense; it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note that many web pages cause more than one request and response to be issued; JavaScript and other technologies allow web pages to do many clever things.
I am new to jBPM. I am working on jBPM version 6.2.0. I want to perform the following tasks:
Send reminder email to user / group.
Remind the user again after 1 business day if the task is not yet complete, and continue sending a reminder every day until the task is done.
Also, what happens if the JBoss/Tomcat server restarts after sending one reminder email? Will the later emails still be scheduled?
I am able to add deadlines (Escalation - Notification), but they run once and send only 1 email. I need to keep reminding the user on a daily (or hourly) basis to complete the task.
I tried looking in the jBPM 6 user guide, but it is not clear about boundary timer events and intermediate catch timer events, and when I use either of them it only runs once.
Any help is much appreciated.
Here is an example of something that I did recently for sending periodic emails.
This should loop until a user finally completes the task. You might have trouble with the one-business-day rule, since I do not know whether the ISO 8601 spec is flexible enough to know about weekends/holidays/business days. You could add that logic to your service task for sending the email.
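As an aside on the notation itself: an ISO 8601 repeating interval has the form R[n]/duration, so a timer cycle such as R/PT24H expresses "repeat every 24 hours" and R5/PT1H would mean "five repetitions, one hour apart"; the notation has no concept of weekends or holidays, which is why any business-day logic is better placed in the email service task as suggested above.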
Be aware that this loop will continue forever until the task is complete, so you might want to consider adding an additional timeout. You could add a loop count so that after X repetitions the process is cancelled. Some of my processes have a rule that if the process is not complete within Y days, it should be cancelled. I accomplished that by having a process variable CancelDate and setting a Timer Event definition to Date/Time with the value #{CancelDate}.
My understanding of the parse.com API rate limit is that it's not a concurrent-job limit; it's just the number of requests started in a given second. So if a user is, say, uploading a file over a slow network and it takes 30 seconds, that does not take up 1 of my 30 req/s for that whole time. It's just one request, counted in the first second.
On my team, though, is a wonderful security guy whose job it is to worry. He thinks that if 30 users upload a file each, for 30 seconds, at a 30 r/s limit, no one else will be able to use our app until they are done.
Which one is correct?
Your understanding is correct. It's the number of requests started per second; the duration of the request does not come into play.
Source: I work at Parse.
I think you are right. I've done some experiments with Parse; for example, I reloaded a UITableView 10 or 20 times in one second (I can't remember exactly) for 3-4 minutes and checked the requests in the admin panel. The maximum value was always less than 30, but that doesn't really matter; the point is that you can test it this way and get more information.
Just create a test project and reload the SampleViewController.m (which contains a Parse query) 30 times in one second; after this you can check the data browser, which will display the traffic in req/sec.
As a second option, you can upload a bunch of images as the current user every second; since the upload time is longer than 1 second, you can check what happens when you start uploading a bunch of images (or other data) in every second.
I have hosted a state machine workflow as a WCF service, and the workflow is called from ASP.NET code. I used netTcpContextBinding for workflow hosting. The problem is that if a Send/Receive activity within the workflow takes a long time (say 1 minute) to execute, it shows a transaction aborted error and terminates. I have already set the binding values for the send, receive, open, and close timeouts to their maximum values in both web.config and app.config.
How can I overcome this issue?
A TransactionScope has a default timeout of 60 seconds, so if whatever you are doing inside it takes longer, it will time out and abort. You can increase the timeout on the TransactionScope, but quite frankly 60 seconds is already quite long. In most cases you are better off doing any long-running work to collect data before the transaction and keeping the transaction time as short as possible.
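If a longer transaction really is unavoidable, the TransactionScope constructor has overloads that accept a TimeSpan timeout, and the machine-wide ceiling is set by the maxTimeout attribute under system.transactions in machine.config; still, the cleaner fix is usually to move the slow Send/Receive exchange outside the transaction as described above.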