Command mqreply.c timeout - ibm-mq

My colleague and I built mqreply.sh from https://github.com/ibm-messaging/mq-rfhutil/tree/master/mqperf, but we did not expect the mqreply command to have a timeout after which its process exits.
Here is the parameter file we use when running mqreply:
[header]
qname=DEV.QUEUE.1
qmgr=QM1
msgcount=10
msgtype=2
format="MQSTR"
codepage=1208
persist=0
replyq=DEV.QUEUE.2
sleeptime=1000
maxWaitTime=5
maxtime=60
waitTime=60
replyFilename=/tmp/msqtoload.dat
I have tried setting maxWaitTime, maxtime, and waitTime, but none of them affects how long the process lives.
Can you tell me how to keep mqreply from closing, or how to increase the timeout?
Thank you

The while loop around the MQGET in the mqreply sample you link to does this:-
while ((compcode == MQCC_OK) && (0 == terminate) && ((0 == parms.totcount) || (msgsRead < parms.totcount)))
{
Also, the MQGET will only wait for 1 second. There is a comment thus:-
/* since we have a signal handler installed, we do not want to be in an MQGET for a long time */
This suggests that if you want to keep mqreply open and running for longer, you need to set msgcount to a number larger than the 10 in your parameter file.
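Reading the quoted condition, it also looks as though setting msgcount to 0 would stop the message-count check from ever ending the loop, so mqreply would keep running until it is interrupted or the MQGET fails. A minimal, untested parameter file along those lines (inferred only from the loop condition above, not from the mqperf documentation):
[header]
qname=DEV.QUEUE.1
qmgr=QM1
msgcount=0
replyq=DEV.QUEUE.2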

Related

RXCPP: Timeout on blocking function

Consider a blocking function: this_thread::sleep_for(milliseconds(3000));
I'm trying to get the following behavior:
Trigger Blocking Function
|---------------------------------------------X
I want to trigger the blocking function and if it takes too long (more than two seconds), it should timeout.
I've done the following:
my_connection = observable<>::create<int>([](subscriber<int> s) {
    auto s2 = observable<>::just(1, observe_on_new_thread()) |
        subscribe<int>([&](auto x) {
            this_thread::sleep_for(milliseconds(3000));
            s.on_next(1);
        });
}) |
timeout(seconds(2), observe_on_new_thread());
I can't get this to work. For starters, I think s can't on_next from a different thread.
So my question is, what is the correct reactive way of doing this? How can I wrap a blocking function in rxcpp and add a timeout to it?
Subsequently, I want to get an RX stream that behaves like this:
Trigger Cleanup
|------------------------X
(Delay) Trigger Cleanup
|-----------------X
Great question! The above is pretty close.
Here is an example of how to adapt blocking operations to rxcpp. It does libcurl polling to make http requests.
The following should do what you intended.
auto sharedThreads = observe_on_event_loop();

auto my_connection = observable<>::create<int>([](subscriber<int> s) {
        this_thread::sleep_for(milliseconds(3000));
        s.on_next(1);
        s.on_completed();
    }) |
    subscribe_on(observe_on_new_thread()) |
    //start_with(0) | // workaround bug in timeout
    timeout(seconds(2), sharedThreads);
    //skip(1); // workaround bug in timeout

my_connection.as_blocking().subscribe(
    [](int){},
    [](exception_ptr ep){cout << "timed out" << endl;});
subscribe_on will run the create on a dedicated thread, and thus create is allowed to block that thread.
timeout will run the timer on a different thread that can be shared with others, and transfers all the on_next/on_error/on_completed calls to that same thread.
as_blocking will make sure that subscribe does not return until it has completed. This is only used to prevent main() from exiting - most often in test or example programs.
EDIT: added workaround for bug in timeout. At the moment, it does not schedule the first timeout until the first value arrives.
EDIT-2: timeout bug has been fixed, the workaround is not needed anymore.

Recommendations for workflow when debugging Python scripts employing multiprocessing?

I use the Spyder IDE. Usually, when I am running non-parallelized scripts, I tend to debug using print statements. Depending on which statements are printed (or not), I can see where errors are occurring.
For example:
print "Started while loop..."
doWhileLoop = False
while doWhileLoop == True:
print "Doing something important!"
time.sleep(5)
print "Finished while loop..."
Above, I am missing a line that changes doWhileLoop to False at some point, so I will be stuck in the while loop forever, but my print statements let me see where in my code I am hung up.
However, when running scripts that are parallelized, I get no output to the console until after the process has finished. Normally, what I do in that case is debug with a single process (i.e., temporarily de-parallelize the program by running only one task), but currently I am dealing with an error that occurs only when I am running more than one task.
So I am having trouble figuring out what this error is using my usual methods. How should I change my usual debugging practice in order to efficiently debug scripts employing multiprocessing?
Like @roippi said, debugging parallel things is hard. Another tool is using logging instead of print. Logging gives you severity, timestamps, and most importantly which process is doing what.
Example code:
import logging, multiprocessing, Queue

def myproc(arg):
    return arg * 2

def worker(inqueue, outqueue):
    mylog = multiprocessing.get_logger()
    mylog.info('start')
    for job in iter(inqueue.get, 'STOP'):
        mylog.info('got %s', job)
        try:
            outqueue.put(myproc(job), timeout=1)
        except Queue.Full:
            mylog.error('queue full!')
    mylog.info('done')

def executive(inqueue):
    total = 0
    mylog = multiprocessing.get_logger()
    for num in iter(inqueue.get, 'STOP'):
        total += num
        mylog.info('got %s\ttotal %s', num, total)

logger = multiprocessing.log_to_stderr(
    level=logging.INFO,
)
logger.info('setup')

inqueue, outqueue = multiprocessing.Queue(), multiprocessing.Queue()
if 0:  # debug 'queue full!' issues
    outqueue = multiprocessing.Queue(maxsize=1)

# prefill with 3 jobs
for num in range(3):
    inqueue.put(num)
# signal end of jobs
inqueue.put('STOP')

worker_p = multiprocessing.Process(
    target=worker, args=(inqueue, outqueue),
    name='worker',
)
worker_p.start()
worker_p.join()
logger.info('done')
Example output:
[INFO/MainProcess] setup
[INFO/worker] child process calling self.run()
[INFO/worker] start
[INFO/worker] got 0
[INFO/worker] got 1
[INFO/worker] got 2
[INFO/worker] done
[INFO/worker] process shutting down
[INFO/worker] process exiting with exitcode 0
[INFO/MainProcess] done
[INFO/MainProcess] process shutting down

Rate Exceeding in workflow_execution polling

I am currently modifying a plugin that posts metrics to New Relic via AWS. I have successfully made the plugin post metrics from SWF to New Relic (not originally part of the plugin), but I have hit a problem when the program runs for too long.
After the program has been running for about 10 minutes, I get the following error:
Error occurred in poll cycle: Rate exceeded
I believe this comes from my polling SWF for the workflow executions:
domain.workflow_executions.each do |execution|
  starttime = execution.started_at
  endtime = execution.closed_at
  isOpen = execution.open?
  status = execution.status

  if endtime != nil
    running_workflow_runtime_total += (endtime - starttime)
    number_of_completed_executions += 1
  end

  if status.to_s == "open"
    openCount = openCount + 1
  elsif status.to_s == "completed"
    completedCount = completedCount + 1
  elsif status.to_s == "failed"
    failedCount = failedCount + 1
  elsif status.to_s == "timed_out"
    timed_outCount = timed_outCount + 1
  end
end
This is called in a polling cycle every 60 seconds
Is there a way to set the polling rate? Or another way to get the workflow executions?
Thanks, here's a link to the ruby sdk for swf => link
The issue is likely that you have a large number of workflow executions, and each iteration through the workflow_executions loop causes a separate lookup, which eventually exceeds your rate limit.
This could also be getting a bit expensive, so be careful.
It's not clear what you're really trying to do, so I can't tell you how to fix it unless you post all your code (or the parts around calls to SWF).
You can see here:
https://github.com/aws/aws-sdk-ruby/blob/05d15cd1b6037e98f2db45f8c2597014ee376a59/lib/aws/simple_workflow/workflow_execution_collection.rb
that a call is made to SWF for each workflow in the collection.
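If the per-status counts are the expensive part, SWF also exposes server-side count operations (CountOpenWorkflowExecutions and CountClosedWorkflowExecutions) that make one call per status instead of one lookup per execution. The question uses the Ruby SDK, but here is a rough sketch of the idea in Python with boto3; the domain name and time window are hypothetical, and this covers only the status counts, not the runtime totals:
import datetime
import boto3

swf = boto3.client("swf")  # region/credentials come from your environment
domain = "my-domain"       # hypothetical domain name
oldest = datetime.datetime.utcnow() - datetime.timedelta(days=1)

# One server-side count per close status, instead of one lookup per execution.
counts = {}
for status in ("COMPLETED", "FAILED", "TIMED_OUT"):
    resp = swf.count_closed_workflow_executions(
        domain=domain,
        startTimeFilter={"oldestDate": oldest},
        closeStatusFilter={"status": status},
    )
    counts[status] = resp["count"]

counts["OPEN"] = swf.count_open_workflow_executions(
    domain=domain,
    startTimeFilter={"oldestDate": oldest},
)["count"]

print(counts)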

Same command, with different parameters, on a while true loop with bash or something else

Bash always drives me crazy; I don't understand it.
I basically want to do this (I'm not using any specific syntax, it's just to explain my problem):
processes_count = 20;
for (i = 0; i < processes_count; i++)
{
    php -f file.php "{$i}-{$processes_count}" &
    proc_id[i] = $!
}
The loop above starts the processes. The next one should keep the processes alive forever:
while(true)
{
    foreach(proc_id as id)
    {
        if(!exist(proc_id[id]))
        {
            php -f file.php "{$id}-{$processes_count}" &
            proc_id[id] = $!
        }
    }
    sleep 5
}
If someone can help translate this into bash, Python, or something else, thank you :)
I don't think you can do that because bash doesn't provide a method to 'wait for any one child process to die and let me know which one it was that died'. The nearest approach is wait:
wait
wait [jobspec or pid ...]
Wait until the child process specified by each process id pid or job specification
jobspec exits and return the exit status of the last command waited for. If a
job spec is given, all processes in the job are waited for. If no arguments are
given, all currently active child processes are waited for, and the return status
is zero. If neither jobspec nor pid specifies an active child process of the shell,
the return status is 127.
This means you can wait for a specific child to die, or you can wait for all children to die, but you can't do what you want.
If you drop into Perl or Python, you can do it, using the wait system call.
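For example, here is a rough Python translation of the pseudocode from the question (file.php and the "i-count" argument format are copied from there; it polls every 5 seconds rather than blocking in a wait call):
import subprocess
import time

PROCESS_COUNT = 20

def spawn(i):
    # start one worker; the argument format comes from the question
    return subprocess.Popen(["php", "-f", "file.php", "%d-%d" % (i, PROCESS_COUNT)])

procs = [spawn(i) for i in range(PROCESS_COUNT)]

while True:
    for i, p in enumerate(procs):
        if p.poll() is not None:  # this worker has exited
            procs[i] = spawn(i)   # restart it in the same slot
    time.sleep(5)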

How do I do a non-blocking read from a pipe in Perl?

I have a program that calls another program and processes the child's output, i.e.:
my $pid = open($handle, "$commandPath $options |");
Now I've tried a couple of different ways to read from the handle without blocking, with little or no success.
I found related questions:
perl-win32-how-to-do-a-non-blocking-read-of-a-filehandle-from-another-process
why-does-my-perl-sysread-block-when-reading-from-a-socket
But they suffer from the following problems:
ioctl consistently crashes perl
sysread blocks on 0 bytes (a common occurrence)
I'm not sure how to go about solving this problem.
Pipes are not as functional on Windows as they are on Unix-y systems. You can't use the 4-argument select on them, and the default capacity is minuscule.
You are better off trying a socket or file based workaround.
$pid = fork();
if (defined($pid) && $pid == 0) {
    exit system("$commandPath $options > $someTemporaryFile");
}
open($handle, "<$someTemporaryFile");
Now you have a couple more cans of worms to deal with: running waitpid periodically to check when the background process has stopped creating output, calling seek $handle, 0, 1 to clear the eof condition after you read from $handle, and cleaning up the temporary file. But it works.
I have written the Forks::Super module to deal with issues like this (and many others). For this problem you would use it like this:
use Forks::Super;
my $pid = fork { cmd => "$commandPath $options", child_fh => "out" };
my $job = Forks::Super::Job::get($pid);
while (!$job->is_complete) {
    @someInputToProcess = $job->read_stdout();
    ... process input ...
    ... optional sleep here so you don't consume CPU waiting for input ...
}
waitpid $pid, 0;
@theLastInputToProcess = $job->read_stdout();
