Laravel - Setting up the right Jobs per function

I have a question about setting up some Jobs for my functions.
The case:
1. A customer uploads images to an FTP environment
2. A function will check if there are any images in the folder the client has to upload to
3. Check if the image is older than 5 minutes (to be sure there is no image still in the "upload" process)
4. Check if the image is larger than 0 kB (yes, it happens that the client uploads 0 kB images)
5. Reduce image with "intervention/image"
6. Copy image to local website
7. Move image to "uploaded" folder as backup
So these are all individual tasks that have to be done.
My question is: do I have to make a single job per function/task, or can I put all functions in one job?
Thanks!

You don't need to use cron jobs; you can dispatch a job from the controller:
ProcessImage::dispatch($image);
I would approach this using a class that contains all of those tasks, with each task represented by a public function (additional private helper functions can be added according to your needs).
Afterwards, create a Laravel Job which gets the class you've created earlier by dependency injection using the Service Container. In the handle method of your Job, call the functions from your class in any order you desire.
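A minimal sketch of that structure, assuming made-up names (ImageProcessor, ProcessImage), placeholder paths and sizes, and version 2 of intervention/image; adjust all of these to your setup:

// app/Services/ImageProcessor.php
<?php

namespace App\Services;

use Intervention\Image\Facades\Image;

class ImageProcessor
{
    // 3. Older than 5 minutes, so the upload is definitely finished.
    public function isFullyUploaded(string $path): bool
    {
        return (time() - filemtime($path)) > 300;
    }

    // 4. Not a 0 kB upload.
    public function isNotEmpty(string $path): bool
    {
        return filesize($path) > 0;
    }

    // 5. Reduce the image with intervention/image.
    public function reduce(string $path): void
    {
        Image::make($path)
            ->resize(1200, null, function ($constraint) {
                $constraint->aspectRatio();
            })
            ->save($path);
    }

    // 6. Copy the image to the local website.
    public function copyToWebsite(string $path): void
    {
        copy($path, public_path('images/'.basename($path)));
    }

    // 7. Move the original to the "uploaded" backup folder.
    public function moveToBackup(string $path): void
    {
        rename($path, storage_path('app/uploaded/'.basename($path)));
    }
}

// app/Jobs/ProcessImage.php
<?php

namespace App\Jobs;

use App\Services\ImageProcessor;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class ProcessImage implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    private $path;

    public function __construct(string $path)
    {
        $this->path = $path;
    }

    // The Service Container injects ImageProcessor into handle() automatically.
    public function handle(ImageProcessor $processor)
    {
        if (! $processor->isFullyUploaded($this->path) || ! $processor->isNotEmpty($this->path)) {
            return; // still uploading, or a 0 kB file: try again on the next run
        }

        $processor->reduce($this->path);
        $processor->copyToWebsite($this->path);
        $processor->moveToBackup($this->path);
    }
}

A scheduled command (or the controller dispatch shown above) can then loop over the FTP folder and call ProcessImage::dispatch($path) for each file it finds.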

Related

Avoid processing same file twice in Spring Batch

I have to write a Spring Batch job as follows:
Step 1: Load an XML file from the file system and write its contents to a database staging table
Step 2: Call Oracle PL/SQL procedure to process the staging table.
(Comments on that job structure are welcome, but not the question).
In Step 1, I want to move the XML file to another directory after I have loaded it. I want this, as much as possible, to be "transactional" with the write to the staging table. That is, either both the writes to staging and the file move succeed, or neither does.
I feel this necessary because if (A) the staging writes happen but the file does not move, the next run will pick up the file again and process it again and (B) if the file gets moved but the staging writes do not happen, then we will have missed that file's processing.
This interface's requirements are all about robustness. I know I could just put a step execution listener to move all the files at the end, but I want the approach that is going to guarantee that we never miss processing data and never process the same file twice.
Part of the difficulty is that I am using a MultiResourceItemReader. I read that ChunkListener.beforeChunk() happens as part of the chunk transaction, so I tried to make a custom chunk CompletionPolicy to force chunks to complete after each change of resource (file) name, but I could not get it to work. In any case, I would have needed an afterChunk() listener, which is not part of the transaction anyway.
I'll take any guidance on my specific questions or an expert explanation of how to robustly process files in Spring Batch (which I am only just learning). Thanks!
I have a pretty similar Spring Batch process right now.
Spring Batch fits your requirement well.
I would recommend starting to use Spring Integration here.
In Spring Integration you can configure it to monitor your folder and trigger the batch job when a file arrives. There is a good example in the official documentation.
Then you should use a powerful Spring Batch concept: identifying parameters. A Spring Batch job runs with unique parameters, and if you mark a parameter as identifying, then no other job instance can be spawned with the same parameter (though you can restart your original job).
/**
 * Add a new String parameter for the given key.
 *
 * @param key - parameter accessor.
 * @param parameter - runtime parameter
 * @param identifying - indicates if the parameter is used as part of identifying a job instance
 * @return a reference to this object.
 */
public JobParametersBuilder addString(String key, String parameter, boolean identifying) {
    parameterMap.put(key, new JobParameter(parameter, identifying));
    return this;
}
So here you need to ask yourself: what is the uniquely identifying constraint for your batch job? I would suggest the full file path, but then you need to be sure that nobody provides a different file under the same path.
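A hedged sketch of launching the job that way (the job, parameter key and launcher wiring are assumptions, not taken from the original post):

import java.io.File;

import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParameters;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class XmlFileJobLauncher {

    private final JobLauncher jobLauncher;
    private final Job importXmlJob;

    public XmlFileJobLauncher(JobLauncher jobLauncher, Job importXmlJob) {
        this.jobLauncher = jobLauncher;
        this.importXmlJob = importXmlJob;
    }

    public void launchFor(File xmlFile) throws Exception {
        // The absolute path is the *identifying* parameter: a second launch for the
        // same file maps to the same JobInstance, so it can only be restarted after
        // a failure, never processed twice from scratch.
        JobParameters params = new JobParametersBuilder()
                .addString("input.file.path", xmlFile.getAbsolutePath(), true)
                .toJobParameters();
        jobLauncher.run(importXmlJob, params);
    }
}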
Spring Integration can also detect whether a file has already been seen by the application and ignore it; please check the documentation on AcceptOnceFileListFilter.
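A hedged sketch of that folder monitoring with the Spring Integration Java DSL (the directory, poll interval and launcher bean are assumptions, and the exact DSL packages depend on your Spring Integration version):

import java.io.File;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.file.dsl.Files;
import org.springframework.integration.file.filters.AcceptOnceFileListFilter;

@Configuration
public class IncomingXmlFlow {

    @Bean
    public IntegrationFlow incomingXmlFiles(XmlFileJobLauncher launcher) {
        return IntegrationFlows
                // Poll the inbox directory; AcceptOnceFileListFilter keeps the adapter
                // from handing the same File to the flow twice within this run.
                .from(Files.inboundAdapter(new File("/data/inbox"))
                                .filter(new AcceptOnceFileListFilter<>()),
                        e -> e.poller(Pollers.fixedDelay(5000)))
                // Hand each new file to the launcher sketched above.
                .handle(launcher, "launchFor")
                .get();
    }
}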
If you want guaranteed 'transactional-like' logic in the batch job, then don't put it into listeners; create a specific step which will move the file. Listeners are good for supplemental logic.
This way, if that step fails for any reason, you will still be able to fix the issue and retry the job.
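A hedged sketch of such a dedicated step as a Tasklet (the parameter key matches the launcher sketch above; the archive directory is an assumption):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.repeat.RepeatStatus;

public class MoveInputFileTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) throws Exception {
        String inputPath = (String) chunkContext.getStepContext()
                .getJobParameters().get("input.file.path");

        Path source = Paths.get(inputPath);
        Path target = Paths.get("/data/processed").resolve(source.getFileName());
        // If this move fails, the step (and the job) fails, and the job can be
        // restarted from this step once the problem is fixed.
        Files.move(source, target, StandardCopyOption.REPLACE_EXISTING);

        return RepeatStatus.FINISHED;
    }
}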
This kind of process can easily be done with a job of 2 steps and 1 listener:
1. A standard (read from XML -> process? -> write to DB) step; you don't have to care about restartability because SB is smart enough to avoid re-reading data it has already processed
2. A listener attached to step 1 that moves the file after successful step execution (example 1, example 2 or example 3); a sketch follows below
3. A second step with the data processing
Step 3 may also be inserted into step 1 as its process phase.
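A hedged sketch of the listener from point 2 (the parameter key and destination directory are assumptions carried over from the sketches above):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;

public class MoveFileAfterStepListener implements StepExecutionListener {

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // nothing to do before the step
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        // Only move the file once the staging write has completed successfully.
        if (stepExecution.getStatus() == BatchStatus.COMPLETED) {
            String inputPath = stepExecution.getJobParameters().getString("input.file.path");
            Path source = Paths.get(inputPath);
            try {
                Files.move(source, Paths.get("/data/processed").resolve(source.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            } catch (IOException e) {
                return ExitStatus.FAILED;
            }
        }
        return stepExecution.getExitStatus();
    }
}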

Fineuploader retrieve qquuid before actual post?

I'm using the jQuery version of FineUploader v5.3 and chunk all my uploaded files. Thanks for the help yesterday; this is working nicely now.
Here's my current issue:
On the server I currently look at the qquuid within the Request and create a folder as necessary to house the temporary chunk files that are coming in. This was originally designed when only one file was incoming, but now it's unnecessary code on the second and subsequent chunks (the folder already exists, but the index order of the chunks is unknown). Is there a way to determine the qquuid that will be used for the file within an event BEFORE the actual POST, so I can set up the needed infrastructure? I tried using the onSubmitted event, but all I can get from there is the ID (chunk index) or the name. Please let me know if you require any clarification. Thanks!
v3.5 is incredibly old - about 2.5 years old to be exact. Looking at the API doc from that time, I see that a getUuid method existed. If you pass in the file ID, you will be able to get a handle on the file's UUID. You can make this call in your onSubmitted callback handler.
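A hedged sketch using the jQuery plugin syntax (the element ID and the /prepare-upload endpoint are placeholders; check the API docs for your exact version):

$('#fine-uploader').fineUploader({
    request: { endpoint: '/upload' },
    chunking: { enabled: true }
}).on('submitted', function (event, id, name) {
    // The UUID is assigned as soon as the file is submitted, before any chunk is POSTed.
    var uuid = $(this).fineUploader('getUuid', id);
    // e.g. ask the server to create the temporary chunk folder up front
    $.post('/prepare-upload', { qquuid: uuid, name: name });
});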

How to modify current chunk data during upload process with FineUploader

We need to call a third party lib, ideally at the time of onUploadChunk callback.
As shown in the documentation (http://docs.fineuploader.com/branch/master/api/events.html#uploadChunk), we can have some parameters in order to identify the chunk and do stuff with the javascript slice method.
But the question is: how do we give the updated chunk back to the Fine Uploader upload process?
Thanks a lot for the help.
You cannot modify the chunks created by Fine Uploader, nor should you be able to as it may change the size of the chunk, the expected total number of chunks, and require adjustments of internal state and sent parameters. If you'd like to modify any files, you have two options:
Modify the file before it is sent to Fine Uploader
Modify the file before the file upload begins. You can cancel the original file, and then submit the changed version via the addFiles API method (see the sketch below).
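A hedged sketch of the second option with the core API (transformFile stands in for your third-party processing, and the endpoint and element are placeholders):

var resubmitted = [];   // Blobs we created ourselves, so we don't transform them twice

var uploader = new qq.FineUploader({
    element: document.getElementById('fine-uploader'),
    request: { endpoint: '/upload' },
    chunking: { enabled: true },
    callbacks: {
        onSubmit: function (id, name) {
            var file = uploader.getFile(id);
            // Assumption: getFile returns the same Blob instance we re-submitted;
            // if it does not in your version, track processed files another way.
            if (resubmitted.indexOf(file) !== -1) {
                return true;    // our transformed copy: let it be chunked and uploaded normally
            }
            transformFile(file).then(function (modifiedBlob) {
                resubmitted.push(modifiedBlob);
                uploader.addFiles({ blob: modifiedBlob, name: name });
            });
            return false;       // reject the original, untransformed file
        }
    }
});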

Immediately Display New Metrics

I am using Graphite and Coda Hale Metrics to try and track the number of times particular APIs are called and also the top 10 callers. I have assigned a metric to each user who calls the API and use Graphite to bring back the top 10.
The problem is, if it is a new user, i.e. a new metric, it will only be displayed in Graphite when the tool is refreshed. Has anyone come across a workaround for this? Is there some way Graphite can automatically detect new meters?
Just to be clear: I can see the top ten API callers for the last 30 minutes, unless it is a brand new user that has never logged in before.
It seems that graphite-web uses an on-disk index generated by a glorified find command. Another script is available which you can run from cron to update the metric index file.
Whenever you update the index file, the graphite-web process will detect it and reload it.
Since reloading the index might be heavy for a large number of metrics (around 1M), I would advise modifying the update script a bit to conditionally update the file (only if it has changed, for instance).
EDIT: after testing, graphite does not seem to call the reloading code.
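For reference, a minimal sketch of that conditional index update, meant to be run from cron (the whisper and index paths are assumptions for a default install; adjust them to yours):

#!/usr/bin/env python
import os

WHISPER_DIR = "/opt/graphite/storage/whisper"
INDEX_FILE = "/opt/graphite/storage/index"

def build_index():
    # The "glorified find": walk the whisper tree and emit dotted metric names.
    metrics = []
    for root, _dirs, files in os.walk(WHISPER_DIR):
        for name in files:
            if name.endswith(".wsp"):
                rel = os.path.relpath(os.path.join(root, name[:-4]), WHISPER_DIR)
                metrics.append(rel.replace(os.sep, "."))
    return "\n".join(sorted(metrics)) + "\n"

def update_if_changed():
    new_index = build_index()
    try:
        with open(INDEX_FILE) as f:
            old_index = f.read()
    except IOError:
        old_index = None
    # Only rewrite the file when the metric list actually changed, so the
    # (potentially heavy) reload is not triggered on every cron run.
    if new_index != old_index:
        tmp_file = INDEX_FILE + ".tmp"
        with open(tmp_file, "w") as f:
            f.write(new_index)
        os.rename(tmp_file, INDEX_FILE)

if __name__ == "__main__":
    update_if_changed()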

How to handle multiple asynch downloads

I recently moved my background sync downloads to a view controller and need some advice on how best to handle them asynchronously. I have written all the code to show a progress view as the download occurs, but as you might have guessed it's not that simple. Here's how it works.
The user sees a table view with two entries, one for each database. They can press a button to download the database, and when the download starts that fires off the async URL connection, etc. This works to a certain extent, however it's not that simple.
Here's what I want it to do:
1. Download the main update URL (works OK)
2. Then download a secondary URL
3. Then apply the first URL's content to the SQLite store (code written for that)
4. Then apply the second URL's content to the SQLite store (code written for that)
(All the while showing progress to the user)
When the downloads were synchronous it was easy, as I just waited for them to finish in order to fire off the next activity, but when using the async method I'm struggling with how to get them to wait. Step 3 depends on step 1 finishing and step 4 depends on step 2 finishing, and overall success relies on all of them finishing. Step 4 needs to wait for step 3 to finish, otherwise the database locks will cause a clash.
The second complication is that if the user presses the second button while the first download is in progress, then steps 3 and 4 will clash if they execute at the same time as the first row is accessing the database.
Has anyone done anything similar, and if so, what strategy did you use to manage the flow of events?
Also, I wanted to wrap this all up in a background task with an expiration handler so it would survive the user pressing the home button, but the delegate methods don't get called when I do that.
OK, here is what I did to fix the problem:
1. Created an NSOperationQueue
2. Added the URL operations as NSInvocationOperations
3. Waited until the URL operations were complete (waitUntilAllOperationsAreFinished)
4. Then set the max concurrent operation count to 1, which forced the subsequent database operations to execute in series, one after the other, and thus prevented SQLite from locking itself out.
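Roughly, in code (a hedged sketch; the download/apply selectors are placeholders for your own methods):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];

// 1-2. Run both downloads concurrently as invocation operations.
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
                                                         selector:@selector(downloadMainUpdate)
                                                           object:nil]];
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
                                                         selector:@selector(downloadSecondaryUpdate)
                                                           object:nil]];

// 3. Block until both downloads have finished before touching the database
//    (note: this blocks the calling thread, so do it off the main thread).
[queue waitUntilAllOperationsAreFinished];

// 4. Force the SQLite work to run one operation at a time so the two
//    apply steps can never fight over the database lock.
queue.maxConcurrentOperationCount = 1;
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
                                                         selector:@selector(applyMainUpdateToStore)
                                                           object:nil]];
[queue addOperation:[[NSInvocationOperation alloc] initWithTarget:self
                                                         selector:@selector(applySecondaryUpdateToStore)
                                                           object:nil]];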
