Laravel: hold a queued job

We are creating an application using Laravel, like example.com. Our application has a POST API, "example.com/api/order-place". In this API we store some data in our database and send a success response to our customer. On the same request we also call a third-party application (third-party.com/api/get-data) to get some other data. We use a queued job to fetch this data so the main order-place request journey is not slowed down.
But sometimes the third-party API service is down. In that case we want to store the third-party API call somewhere (a queue) and, when the third-party service is up again, process all the queued jobs.
How could we achieve this? Is it possible to solve this problem using the Laravel queue? That is, when the third-party application is down we hold our queue, and when it is up again we process the jobs.
We could do this by retrying failed jobs, but we don't want that. We just want to hold the queue while the third-party application is down.

It should work like this:
1. Create a helper function to detect whether the 3rd-party API is up or down.
2. Create a class or helper function for processing the 3rd-party API request; throw an error if the request to the 3rd-party API fails (you might want to call #1 here).
3. Create a job with no retries (it must only run once) and call #2 in its handle() method. (A sketch of steps 1-3 follows the sample below.)
4. Push your job onto a different queue, e.g. WhatEverJob::dispatch()->onQueue('apirequest');. You should also process this queue in your Supervisor worker, e.g.
php artisan queue:work --queue=default,apirequest
5. Create a Task Scheduler (cron) entry that runs every hour, minute, or whatever. It should first run #1 (or query failed_jobs first, whatever suits you) and exit if the API is down. If it is up, proceed to query the failed_jobs table, pulling only rows whose queue column is 'apirequest'. You can filter further on payload->displayName, which should be your job class, and you can also check payload->maxTries and do something if it exceeds the number of tries. Then manually retry each failed entry.
Sample:
// exit early if the third-party API is still down (the helper from #1)
if (ThirdPartyAPI::is('down')) {
    return;
}
$failedEntries = DB::table('failed_jobs')->where('queue', 'apirequest')->pluck('uuid');
// pluck() returns a Collection, which is always truthy, so test isEmpty()
if ($failedEntries->isEmpty()) {
    return;
}
\Artisan::call('queue:retry ' . $failedEntries->implode(' '));
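For steps 1-3, a minimal sketch might look like the following, using Laravel's Http client (Laravel 7+; use Guzzle directly on older versions). ThirdPartyAPI and FetchThirdPartyData are hypothetical names, and the health probe simply reuses the get-data endpoint from the question; adapt both to your app:

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Http;

// #1: helper to detect whether the 3rd-party API is up or down
class ThirdPartyAPI
{
    public static function is(string $state): bool
    {
        try {
            $up = Http::timeout(5)->get('https://third-party.com/api/get-data')->successful();
        } catch (\Exception $e) {
            $up = false;
        }
        return $state === 'up' ? $up : ! $up;
    }
}

// #2 and #3: a job with no retries whose handle() throws on failure,
// so a failed run lands in failed_jobs for the scheduler to retry later
class FetchThirdPartyData implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    public $tries = 1; // must only run once

    public function handle()
    {
        // throw() raises an exception on any non-2xx response, failing the job
        Http::get('https://third-party.com/api/get-data')->throw();
    }
}

// #4: dispatch onto the dedicated queue
FetchThirdPartyData::dispatch()->onQueue('apirequest');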

Related

Azure web app - how to avoid multiple users triggering the same endpoint at a time

Is there any way to set a limit on the number of requests for an Azure web app (ASP.NET web API)?
I have an endpoint that runs for a long time, and I'd like to avoid multiple triggers while the application is processing one request.
Thank you
This would have to be a custom implementation, which can be done in a few ways:
1. Leverage a Queue
This involves separating the background process into a separate execution flow by using a queue. So, your code would be split up into two parts
API Endpoint that receives the request and inserts a message into a queue
Separate Method (or Service) that listens on the queue and processes messages one by one
The second method could either be in the same Web App or could be separated into a Function App. The queue could be in Azure Service Bus, which your Web App or Function would be listening in on.
This approach has the added benefit of durability: if the web app or function were to crash before completing a message, the queue would redeliver it, ensuring that every request is eventually processed in order.
2. Distributed Lock
This approach is simpler but lacks durability. Here you would use an in-memory queue to process requests, ensuring only one is processed at a time by having the method acquire a lock that subsequent requests wait on before being processed.
You could leverage blob storage leases as an option for distributed locks.

How can I trigger a Spring Batch execution from Vue.js

I am trying to trigger a Spring Batch execution from an endpoint. I have implemented a service at the backend, so from Vue I am trying to make a call to that endpoint.
async trigger(data) {
    let response = await Axios.post('')
    console.log(response.data.message)
}
My service at the backend returns the response "Batch started" and does the execution in the background, since it is async, but it does not report back once the job has executed (I see the status only in the console). In such a scenario, how can I await the call from Vue until the service execution completes? I understand that the service sends no response once execution is complete/failed. Are there any changes I need to make at the backend or frontend to support this? Please let me know your thoughts.
It's like you said: the backend service is asynchronous, which means that once a line of code has been executed, it moves on to the next one. If no next line exists, the function exits, the script closes, and the server sends an empty response back to the frontend.
Your options are:
Implement a websocket that broadcasts back when the service has been completed, and use that instead.
Use a timeout function to watch for a flag change within the service that indicates that the service has finished its duties, or
Don't use an asynchronous service.
how can i await the call from vue for the service execution to complete
I would not recommend that, since the job may take too long to complete and you don't want your web client to wait that long for a reply. When configured with an asynchronous task executor, the job launcher immediately returns a JobExecution with an ID, which you can inspect later on.
Please check the Running Jobs from within a Web Container section of the Spring Batch documentation for more details and code examples.
My suggestion is that you should make the front-end query for the job status instead of waiting for the job to complete and respond because the job may take very long to complete.
Your API to trigger the job start should return the job ID; you can get the job ID from the JobExecution object, which is returned when you call JobLauncher.run.
You then implement a query API in your backend to get the status of the job by job ID. You can implement this using the Spring JobExplorer.
Your front-end can then call this query API to get the job status. You should do this at an interval (e.g. 30 secs, 5 mins, etc., depending on your job). This prevents your app from getting stuck waiting for the job, and from time-out errors.

Service synchronization issue

I've created two services.
One of them (scheduler) only sends requests to the other (backoffice) to perform some "large" operations.
When backoffice receives a request:
it first creates a mark (a key in Redis) to record that the process has started.
Each time a request arrives:
backoffice checks whether the mark exists.
If it exists, it means the previous process has not yet finished, and the request is skipped.
Otherwise, it performs the large process.
When the process is finished, the key in Redis is removed.
It would be something like this:
if (key exists)
return;
make long process... (1);
remove key;
The problem arises when the service is destroyed before the process has finished, so it never removes the mark in Redis. That means the process will never run again.
Is there any way to solve this kind of problem?
The way to solve this problem is to use an existing engine, as building a custom, scalable, and robust solution for reliable service orchestration is really hard.
I recommend looking at Uber Cadence Workflow, which would allow you to convert your pseudocode into a real production application with minor changes.
You can fire a background job that updates a timestamp under the key, e.g. every minute.
When the service attempts to start the process, it must verify the key's existence (as it does now) plus the timestamp under the key. If the timestamp is more than 1 minute old, the previous attempt is stale and you can start over.
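As a rough sketch of that heartbeat idea in PHP (the key name and 60-second threshold are assumptions, and $redis is assumed to be a connected client such as Predis; note that the get/set pair is not atomic, so a SET NX or Lua script would be needed for strict safety):

// skip if a live process refreshed the heartbeat less than a minute ago
$heartbeat = $redis->get('backoffice:heartbeat');
if ($heartbeat !== null && time() - (int) $heartbeat < 60) {
    return;
}
// stale or absent: take over the mark
$redis->set('backoffice:heartbeat', time());
// ... long process, with a background timer re-running
//     $redis->set('backoffice:heartbeat', time()) every minute ...
$redis->del('backoffice:heartbeat');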
Sounds like you should be using a messaging queue to schedule tasks for the backoffice service. Queuing solutions like RabbitMQ allow you to manually acknowledge (or "ack") that the process is complete. Whenever a subscriber crashes, the queue detects that the connection dropped without acknowledgement and re-enqueues the same task, which will be picked up by the next available subscriber. Here's another thread about this problem, focused specifically on messaging queues:
What happens to fetched messages when RabbitMQ consumer crashes?

How reliable is delaying a Mail in Laravel?

I want to inform the seller, that the buyer is coming soon (about 2 hours before pickup time) via mail.
I would normally do it the hard way with cron and a database table: checking hourly whether there is an order whose pickup time is two hours away, and only then sending the mail out.
Now, I would like to know if you would recommend using Queueing Jobs for sending Mails out.
With
$when = now()->addDays(10); // I would dynamically set the date
Mail::to($order->seller())
    ->later($when, new BuyerIsComing($order));
I can delay the delivery of a queued email message.
But how safe would this be? Especially if someone orders something but picks it up in, let us exaggerate, two months?
Is the Laravel queueing system robust enough to behave correctly after long delays (i.e. 2 months)?
Edit
I'm using Redis for Queueing
You actually have nothing to worry about. Sending mail usually increases the response time of your application, so it's a good thing you want to delay the sending.
Queues are the way to go, and they're pretty easy to set up in Laravel. Laravel supports a couple of drivers out of the box. I would advise you to start with database and then try Beanstalkd, etc.
Lastly, and somewhat more importantly, use a process manager like Supervisor to monitor and maintain your queue workers...
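For reference, a typical Supervisor program entry for a Laravel queue worker looks something like this (connection, paths, and process count are assumptions to adapt):

[program:laravel-worker]
process_name=%(program_name)s_%(process_num)02d
command=php artisan queue:work redis --sleep=3 --tries=3
autostart=true
autorestart=true
numprocs=2
redirect_stderr=true
stdout_logfile=/var/www/example.com/storage/logs/worker.log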
Take a look at https://laravel.com/docs/5.7/queues for more insight.
Cheers.
If by safe you mean reliable, then it would be little different from sending an email immediately. If there's ever a possibility that your server "hiccups" and doesn't send an email, that possibility is the same now as it is ten minutes from now. Once the job is in the queue, it is persisted until completion (unless you use a memory-based driver, like Redis, which could get reset if the server reboots).
If you are using a database or remote queue driver, the log of queued jobs will remain even if the server is unavailable for a short period of time. The queued job will still be honored even if its exact scheduled time has passed. For instance, if you schedule an email for 1:00pm but your server is down at that exact moment, then when it comes back online it will still see the job, because it is stored as incomplete with a time in the past, and that will trigger execution the next time your queue worker checks the job list.
Of course, this assumes that you have your queue worker set up to always check jobs and automatically restart, even after a server failure, but that's a different discussion with lots of solutions...such as those shown here.
If you're using the database driver with Laravel queues to process your email, then you don't need to worry about anything.
Jobs are only removed from the jobs table if they complete successfully; otherwise their next attempt time is set a few minutes in the future and they are executed again (if your queue worker is online).
So it's completely safe to use Laravel queues.
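If you do opt for the database driver, the setup is just the standard artisan commands plus an .env change (shown here for completeness):

php artisan queue:table
php artisan migrate
# then set QUEUE_CONNECTION=database in your .env (QUEUE_DRIVER on older Laravel versions)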

Parse Server with independent workers

Imagine we want to check, two weeks after a user's registration, whether she has been active, and to notify her otherwise.
To achieve this we currently use the following setup (this runs on Heroku):
The Parse Server puts a task into the Redis queue. The worker fetches tasks from that queue and then performs checks on the user's activity. For this it needs to access the Parse Server to fetch that information, which puts additional load on our API.
I imagine the following scenario to be better:
I wonder: is it possible to achieve this scenario using parse server? (The worker dynos don't have a HTTP interface to run a parse server...)
