I have a question about multiple file transfers with QFtp. There is no direct way to transfer multiple files with the QFtp class. I tried using the arbitrary FTP command "mput" with rawCommand() in QFtp, but it doesn't work for me.
Please let me know how I can do a multiple file transfer with QFtp.
Thanks,
Use a for-loop, and start a new transfer for every iteration. Then collect all the commandFinished() signals at the end.
QFtp works asynchronously. If an operation cannot be executed immediately, the operation will automatically be scheduled for later execution. The results of scheduled operations are reported via signals.
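For illustration, a minimal sketch of that loop (Qt 4-style, relying on the asynchronous behavior described above). The function name and file list are placeholders, and the QFtp object is assumed to be already connected and logged in:

#include <QtNetwork/QFtp>
#include <QtCore/QFile>
#include <QtCore/QFileInfo>
#include <QtCore/QStringList>

// Queue one put() per file; QFtp schedules the commands and runs
// them one after another, reporting each result via signals.
void uploadAll(QFtp *ftp, const QStringList &files)
{
    foreach (const QString &path, files) {
        QFile *file = new QFile(path, ftp); // parented to ftp for cleanup
        if (!file->open(QIODevice::ReadOnly))
            continue;
        ftp->put(file, QFileInfo(path).fileName());
    }
}

// Elsewhere, collect the result of every queued transfer:
//   connect(ftp, SIGNAL(commandFinished(int,bool)),
//           receiver, SLOT(onCommandFinished(int,bool)));
// and count the finished ids to know when all uploads are done.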
I have a batch job that reads hundreds of images from an SFTP location, encodes them into base64, and uploads them via an API using the HTTP connector.
I would like to make the process run quicker and am therefore trying to split the payload into 2 via scatter-gather, sending payload1 to one batch job in a subflow and payload2 to another batch job in another subflow.
Is this the right approach?
Or is it possible to split the load within just one batch process, i.e. have one half of the payload processed by batch step 1 and the second half processed by batch step 2 at the same time?
Thank you
No, it is not a good approach. Batch jobs are always executed asynchronously (i.e. using different threads), so there is no benefit in using scatter-gather, and it has the downside of increased resource usage.
Splitting the payload in different batch steps doesn't make sense either. You should not try to scale by adding steps.
Batch jobs are naturally meant to work in parallel by iterating over an input. The job may be able to handle the splitting itself, or you can split the input payload manually beforehand; then let it handle the concurrency automatically. There are some configurations you can use to tune it, like the block size; a hedged config sketch follows below.
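As an illustration only, a minimal Mule 4-style batch outline (adjust if you are on Mule 3); the job name, step name, blockSize, and maxConcurrency values are hypothetical placeholders to tune for your payload:

<!-- Hypothetical batch job: blockSize sets how many records each
     worker thread takes per block; maxConcurrency caps the threads -->
<batch:job jobName="encodeAndUploadImages" blockSize="50" maxConcurrency="4">
    <batch:process-records>
        <batch:step name="encodeAndUpload">
            <!-- base64-encode the image and call the HTTP API here -->
        </batch:step>
    </batch:process-records>
</batch:job>

With this, the runtime parallelizes record blocks for you, so there should be no need for scatter-gather or extra steps.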
From this Stack Overflow question I understand that batches are sent out one at a time (leaving pipelining out of this discussion), meaning a second batch won't be sent until the first one is delivered.
My follow-up question is: what condition starts a batch creation process? If I understand correctly (I could obviously be wrong), a batch is created/cut, i.e. the batch creation process completes, when BATCHSZ is reached, BATCHLIM is reached, BATCHINT (if non-zero) is reached, or the XMIT-Q is empty. But what starts a batch creation process? Is the batch creation process synchronous or asynchronous with respect to batch transfer? Does batch creation start only after the previous batch is delivered (synchronous), or is it totally decoupled from the previous batch (e.g. started while the previous batch is still in transfer)?
This is a sibling/follow-up question to 1. The intention is to estimate our QRepl-MQ-transfer upper limit. As documented in the entry "[added on Dec.20]" in the first (self-)answer in 1, our observation seems to support that the batch creation process starts synchronously AFTER the previous batch transfer is complete, but I couldn't find IBM references documenting the details.
Thanks for your help.
our observation seems to support that the batch creation process starts synchronously AFTER the previous batch transfer is complete, but I couldn't find IBM references documenting the details.
Yes, that is how it works. If a 2nd batch started before the 1st batch finished, you would have newer messages jumping in front of older messages, which could cause all kinds of issues.
Yes, I know, applications are not supposed to rely on messages arriving in a logical order (i.e. 1, 2, 3, etc.), but they do.
Think of the MCA (Message Channel Agent), the process that gets messages from the XMITQ, as a security guard at a store on Black Friday. He lets in 50 people from the line (a batch). After many people leave the store, he lets in another 50 people. Would you want ASYNC batching of the line at the store? Absolutely not. The security guard wants order, not chaos.
The same is true for MQ's MCA. It creates a batch of "n" messages, sends them, acknowledges them, then goes onto the next batch.
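For reference, the batching knobs mentioned in the question live on the sender channel definition. A hedged MQSC sketch, with a hypothetical channel name and the default values shown only as examples:

* BATCHSZ caps messages per batch, BATCHLIM caps the batch size in KB,
* and BATCHINT (milliseconds) lets a batch wait for more messages
* instead of being cut as soon as the XMITQ is empty
ALTER CHANNEL('TO.REMOTE.QM') CHLTYPE(SDR) +
      BATCHSZ(50) BATCHLIM(5000) BATCHINT(0)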
I want to write to a single log file (which gets created on a daily basis) from multiple stored procedures running in different sessions.
This is what I have done.
create or replace package body PKG_LOG as

  procedure SP_LOGFILE_OPEN is
  begin
    -- step 1) Open the daily logfile in append mode
    -- (LF_LOG is a package-level UTL_FILE.FILE_TYPE)
    LF_LOG := UTL_FILE.FOPEN(LV_FILE_LOC, O_LOGFILE, 'A', 32760);
  end SP_LOGFILE_OPEN;

  procedure SP_LOGFILE_WRITE is
  begin
    -- step 1) Write the logs as per application need
    UTL_FILE.PUT_LINE(LF_LOG, 'whatever I want to write');
    -- step 2) Flush the content, as I want the logs written in real time
    UTL_FILE.FFLUSH(LF_LOG);
  end SP_LOGFILE_WRITE;

end PKG_LOG;
Now, whenever I want to write to the log from any stored procedure, I first call SP_LOGFILE_OPEN and then SP_LOGFILE_WRITE (as many times as I want).
The problem is this: if there are two stored procedures, say SP1 and SP2, and both of them try to open the file concurrently, it never throws an error or waits for the other to finish. Instead the file gets opened in both sessions where SP1 and SP2 are executing.
The content from SP1 (if it started running first) is completely written to the log file, but the content from SP2 is only partially written. SP2 starts writing only when SP1's execution stops, and the initial content SP2 was trying to write to the log file gets lost due to FFLUSH.
As per my requirements, I don't want to lose the content from SP2 while SP1 is running.
Any suggestions, please? I don't want to drop the idea of FFLUSH, as I need the logs in real time.
Thanks.
You could use DBMS_LOCK to get a custom lock, or wait until the lock is available, then do your write, then release the lock. It has to be serialized; there is a hedged sketch after the caveat below.
But this will make your concurrency problem even worse. You're basically saying that all calls to this procedure must get in a line and be processed one by one. And remember that disk I/O is slow, so your database is now only as fast as your disk.
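A minimal sketch of that serialization, assuming the PKG_LOG procedures from the question; the lock name 'PKG_LOG_FILE_LOCK' is an arbitrary choice:

declare
  V_HANDLE VARCHAR2(128);
  V_RESULT INTEGER;
begin
  -- map the lock name to a handle (created on first use)
  DBMS_LOCK.ALLOCATE_UNIQUE('PKG_LOG_FILE_LOCK', V_HANDLE);
  -- wait up to 10 seconds for exclusive access
  V_RESULT := DBMS_LOCK.REQUEST(V_HANDLE, DBMS_LOCK.X_MODE, 10, FALSE);
  if V_RESULT = 0 then  -- 0 = lock granted
    PKG_LOG.SP_LOGFILE_OPEN;
    PKG_LOG.SP_LOGFILE_WRITE;
    V_RESULT := DBMS_LOCK.RELEASE(V_HANDLE);
  end if;
end;
/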
Yours is a bad idea. Instead of writing directly to a file, simply enqueue the log message into an Oracle Advanced Queue and create a job that runs very frequently (every few seconds) to dequeue from the AQ. It is the procedure invoked by the job that actually writes to the file. This way you can synchronize different SP executions trying to log concurrently to the same file, while the actual logging is done by one single procedure invoked by the job. A rough sketch of the enqueue side follows.
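This sketch assumes a queue LOG_QUEUE and an object payload type LOG_MSG_T (both hypothetical names) were already set up with DBMS_AQADM:

declare
  V_ENQ_OPTS  DBMS_AQ.ENQUEUE_OPTIONS_T;
  V_MSG_PROPS DBMS_AQ.MESSAGE_PROPERTIES_T;
  V_MSG_ID    RAW(16);
begin
  -- enqueue instead of writing the file; the frequent job dequeues,
  -- and a single procedure writes the file serially
  DBMS_AQ.ENQUEUE(
    queue_name         => 'LOG_QUEUE',
    enqueue_options    => V_ENQ_OPTS,
    message_properties => V_MSG_PROPS,
    payload            => LOG_MSG_T('whatever I want to write'),
    msgid              => V_MSG_ID);
  commit;
end;
/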
I have a task to observe a folder where files arrive via SFTP. The files are big and processing one file is relatively time consuming. I am looking for the best approach. Here are some ideas, but I am not sure which is the best way:
Run a scheduler every 5 minutes to check for new files.
For each new file, trigger an event that a new file has arrived.
Create a listener that listens for this event and uses queues. In the listener, copy the new file into a processing folder and process it. When processing of a new file starts, insert a record in the DB with status "processing". When processing is done, change the record status and copy the file to a processed folder.
In this solution I have two copy operations for each file. This is because, if a second scheduler run executes before all files are processed, some files could overlap across two processing jobs.
What is the best way to do it? Should I use another approach to avoid the two copy operations, e.g. a database check during scheduler execution to see whether the file is already in the processing state?
You should use ->withoutOverlapping(), as stated in the Task Scheduler manual here.
Using this, you make sure that only one instance of the task runs at any given time; see the sketch below.
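A minimal sketch in app/Console/Kernel.php; the artisan command name files:process is a hypothetical placeholder for your processing command:

protected function schedule(Schedule $schedule)
{
    // run every 5 minutes, but skip the run entirely if the previous
    // one is still going, so no file is picked up by two jobs at once
    $schedule->command('files:process')
             ->everyFiveMinutes()
             ->withoutOverlapping();
}

With this, the second copy operation and the DB-based overlap check should become unnecessary, since two runs can never overlap.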
I have tried to originate a call from the CLI. My call file will hit two Java applications at a time so that they start communicating with each other. Now my requirement is to originate multiple calls at once so that multiple threads run at the same time, and thus I can test the load, etc. I have tried the following for originating a single call, and it works fine:
originate loopback/1234/default &bridge({ignore_early_media=true}sofia/internal/1789#XX.XX.XX.XX)
The above can only be executed once; even if I run it in a loop, only one call is invoked. Please suggest a way to originate a larger number of calls in FreeSWITCH.
Your code is getting stuck waiting for the result -- 'api' commands are blocking.
If you execute this as 'bgapi originate ...' then it will be a background execution (bg) and be non-blocking -- it returns a Job-UUID and lets you execute more commands; an example follows after the links.
See:
http://wiki.freeswitch.org/wiki/Event_Socket_Library#bgapi
http://wiki.freeswitch.org/wiki/Event_Socket#bgapi
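For example (reusing the dial string from the question), each of these returns a Job-UUID immediately, and the actual result arrives later in a BACKGROUND_JOB event:

bgapi originate loopback/1234/default &bridge({ignore_early_media=true}sofia/internal/1789#XX.XX.XX.XX)
bgapi originate loopback/1234/default &bridge({ignore_early_media=true}sofia/internal/1789#XX.XX.XX.XX)

Because bgapi does not block, issuing it repeatedly (e.g. from a loop) starts multiple calls in parallel instead of waiting on the first one.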
bgapi returns only a Job-UUID, not the call UUID. What should I do next?