Issue in taking a backup of messages in MQ using dmpmqmsg

We faced an issue in production. A queue has a CURDEPTH of 176, but the backup captured only 170 messages. There are no uncommitted messages either. Why are 6 messages not getting backed up?
Command issued:
dmpmqmsg -m <qmname> -i <queuename> -f <filename>
When I checked with the amqsbcg sample program, I could see all 176 messages: all 176 headers were present, and there were no empty messages.

A couple of thoughts:
(1) Between the time you issued the 'dis ql({qname})' command and the time you ran the dmpmqmsg program, 6 messages could have been consumed by another program.
(2) Those 6 messages could be expired messages.
(3) Even though UNCOM has a value of 'NO', you can run the dspmqtrn program to see if there are any uncommitted messages:
Internally coordinated:
dspmqtrn -i -m {QMgrName}
Externally coordinated:
dspmqtrn -e -m {QMgrName}
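To rule out thought (1), you can snapshot CURDEPTH immediately before and after the dump and compare: since -i browses the queue rather than getting from it, the depth should only change if another application touched the queue in between. A minimal sketch, assuming a hypothetical queue manager QM1 and queue MY.QUEUE:
#!/bin/sh
QM=QM1        # hypothetical queue manager name
Q=MY.QUEUE    # hypothetical queue name

# extract the CURDEPTH value from the runmqsc DISPLAY output
depth() {
    printf "DISPLAY QLOCAL(%s) CURDEPTH\n" "$Q" | runmqsc "$QM" |
        sed -n 's/.*CURDEPTH(\([0-9]*\)).*/\1/p'
}

before=$(depth)
dmpmqmsg -m "$QM" -i "$Q" -f /tmp/backup.out
after=$(depth)

if [ "$before" != "$after" ]; then
    echo "CURDEPTH changed from $before to $after during the dump;" \
         "another application is working with the queue" >&2
fi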

Related

inotifywait: wait for some time after the first file was uploaded to server [duplicate]

I want to send an e-mail notification to people in our company when a file changes in their staff folder on the server.
I have a script that works fine on sending an e-mail on every file change using inotifywait.
What I would like is, on multiple file uploads (let's say 10 JPEGs being uploaded to somebody's staff folder), to send out only one email.
This script sends an email on every file change:
inotifywait --recursive --exclude '.DS_Store' -e create -e moved_to -m /media/server/Staff/christoph |
while read path action file ; do
echo "The file '$file' appeared in directory '$path' via '$action'"
sendEmail -f server#email.com -t user#gmail.com -s smtpout.secureserver.net:80 -xu user#email.com -xp password \
-u "The file $file appeared in your directory" -m "To view your file go to $path"
done
What is the smartest way to go about this? Does it make sense to have inotifywait wait for further input for, let's say, two minutes?
BTW I'm using sendemail for this since port 25 is blocked by the ISP.
I would likely do this by writing the modification notices to a file (if I were doing it I would probably use an SQLite database), then running a cron job every few minutes to check the database and send an aggregated email.
Another option would be to use inotifywait to trigger a script that watches that specific file: it would loop, checking the size/modified time of the file, then sleep for some period of time. Once the file stopped growing, the script would append the file info to a message file. The cron job would then send the message file (if it was not empty) and truncate it. This avoids the need to read and write data from a log file. A debounce along these lines is sketched below.
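If you prefer the two-minute timeout idea from the question, bash's read -t gives a simple debounce: keep appending events to a batch file, and once no new event has arrived for two minutes, send one summary for everything collected. A minimal sketch, assuming the same inotifywait invocation as above; notify.sh is a hypothetical helper wrapping the sendEmail call:
#!/bin/bash
WATCH_DIR=/media/server/Staff/christoph
BATCH=$(mktemp)

inotifywait --recursive --exclude '.DS_Store' -e create -e moved_to -m "$WATCH_DIR" |
while true; do
    if read -r -t 120 path action file; then
        # an event arrived: queue it instead of mailing immediately
        echo "The file '$file' appeared in directory '$path' via '$action'" >> "$BATCH"
    elif [ -s "$BATCH" ]; then
        # quiet for 2 minutes and something is queued: send one aggregated mail
        ./notify.sh "$BATCH"   # hypothetical wrapper around sendEmail
        : > "$BATCH"
    fi
    # (EOF handling when inotifywait exits is omitted for brevity)
done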

Why does curl send its report on speed and time to error output?

I'm using the following command in a script:
curl -O --time-cond $_input_file_name $_location/$_input_file_name
and it produces a report with this heading:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
but it seems to be sent to error output, even though the transfer has been successful and the return code from curl is zero. Why does it do this? Is there a way to suppress this without suppressing actual error messages? Adding -s or -sS to the curl command doesn't seem to alter this behaviour.
Running the command in a terminal, the -s option does suppress the output. The problem arises only within a script. The script is being triggered in crontab via cronic.
I'm working in Debian 9.1 with curl 7.52.1 (x86_64-pc-linux-gnu).
Curl was designed, at least originally, to send its output to stdout by default, something a large number of other Unix utilities also do.
Some programs will allow you to write their output to stdout by specifying - as an output file name, but this is not the way curl went.
The progress messages therefore need to be sent to stderr so they don't corrupt the actual stream of data coming out on stdout.
If you examine the man page, you should see that the --silent --show-error options should disable the progress stuff while still showing an error.
Use "-s -S"
-S, --show-error
    When used with -s, --silent, it makes curl show an error message if it fails.
-s, --silent
    Silent or quiet mode. Don't show progress meter or error messages. Makes Curl mute. It will still output the data you ask for, potentially even to the terminal/stdout unless you redirect it.
    Use -S, --show-error in addition to this option to disable progress meter but still show error messages.
    See also -v, --verbose and --stderr.
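The stream behaviour is easy to demonstrate: the progress meter goes to stderr, and per the man page excerpt above, -sS suppresses the meter while real errors still reach stderr. A minimal sketch with a placeholder URL:
#!/bin/sh
url="https://example.com/data.csv"   # placeholder

curl -O "$url"       2>meter.txt     # meter.txt captures the progress table shown above
curl -sS -O "$url"   2>errors.txt    # errors.txt stays empty on success, keeps real errors
So in the cron/cronic case, curl -sS -O ... should produce no stderr output on a successful transfer, and cronic will only report when something genuinely fails.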

How to check whether a queue name already exists in IBM MQ for Linux?

if [[ $(dspmq | grep '(Running)' | grep "$QMgr" | wc -l | tr -d " ") != 1 ]]
The above code checks whether a queue manager is running.
Is there any command to check whether a given queue name exists on the queue manager?
Adding another suggestion in addition to what Rob and T.Rob have said.
MQ v7.1 and higher come with the dmpmqcfg command, and you can use it to check for a specific queue.
The examples below are in line with your sample that checks if a queue manager is running:
To use dmpmqcfg to check if a queue name of any type exists you could do this:
if dmpmqcfg -m ${QMgr} -t queue -x object -o 1line -n ${QName} | egrep '^DEFINE '; then
    echo "Queue ${QName} exists on Queue Manager ${QMgr}"
fi
Using the method Rob Parker provided* to check if a queue name of any type exists:
*Note I used DISPLAY Q( instead of DISPLAY QLOCAL(
if printf "DISPLAY Q(${QName})" | runmqsc ${QMgr} 2>&1 >/dev/null; then
echo "Queue ${QName} exists on Queue Manager ${QMgr}
fi
Your example check for a queue manager Running could be simplified to this:
if dspmq -m ${QMgr} | grep --quiet '(Running)'; then
    echo "Queue Manager ${QMgr} is Running"
fi
There's not a specific command but you could use:
printf "DISPLAY QLOCAL(<QUEUE NAME>)" | runmqsc <QM Name>
You will get a return code of 10 if it does not exist and 0 if it does. One thing to note: the Queue Manager must be running, and you must run the command as someone who has access to the Queue Manager in question, otherwise you'll get different return codes! (20 for Queue Manager not running or not authorized.)
Given that you haven't specified a particular queue type, I've assumed you're looking for QLOCALs, but you could search for any queue type by modifying the above command.
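Putting those return codes together, a script can distinguish the cases explicitly rather than just testing success. A minimal sketch, assuming the codes listed above (0 exists, 10 not found, 20 queue manager not running or not authorized):
printf "DISPLAY QLOCAL(${QName})" | runmqsc ${QMgr} >/dev/null 2>&1
rc=$?
case "$rc" in
    0)  echo "Queue ${QName} exists on ${QMgr}" ;;
    10) echo "Queue ${QName} does not exist on ${QMgr}" ;;
    20) echo "Queue Manager ${QMgr} not running, or not authorized" >&2 ;;
    *)  echo "Unexpected runmqsc return code: $rc" >&2 ;;
esac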
In addition to what Rob said, the way to do this programmatically is to attempt to open the queue. If the queue exists you get either RC=0 or RC=2 with a Reason Code of 2035 MQRC_NOT_AUTHORIZED. If the queue does not exist you get back RC=2 with a Reason Code of 2085 MQRC_UNKNOWN_OBJECT_NAME.
In the event someone else has that queue open for exclusive input you can't open it for input without getting an error, but at least the error tells you the queue exists. To work around that open the queue for inquiry if all you need is to know it exists. That also lets you discover other attributes about it with the API's inquiry options.
Finally, if you have access to the Command Queue, you can drop a PCF command on it that is equivalent to DIS Q(<QUEUE NAME>) that Rob mentioned. In general, business applications do not need access to the Command Queue but then again business applications do not normally need to inquire as to whether their queue exists or not. That's an administrative function and the app either finds its queue or throws a fatal error. As an MQ Admin I would question any business application that asked for rights to use runmqsc or that inquired as to whether its queue was there, its channels were up, etc. Most shops I've worked at would not let a business app into Production with that design or privileges.
On the other hand, instrumentation applications routinely need to be able to inquire on things like queue inventory so would be expected to have access to and use the Command Queue for that function, or have access to runmqsc to inquire from scripts.

How should one deal with Mercurial's "nothing changed" error ($? = 1) for scripted commits?

I'm cleaning up a client's tracking system that uses Mercurial (version 2.0.2) to capture state and automatically commits all changes every hour. I migrated it from cron to Rundeck so they will get status if/when things fail, which immediately caused the job to start filling Rundeck's failed-jobs list with "nothing changed" errors. I immediately went to Google and was surprised to find that, although this issue is raised, there are no answers.*
It seems like there should be a basic, clean sh or bash option (but their command-line environment supports Python and pip modules if necessary).
My go-to responses for this type of thing are:
issue the 'correct' command before issuing the command that might fail when things are OK, so that a failure of the second command actually indicates an error
read the docs and use the error codes to distinguish what's happening, implementing responses appropriately**
do some variant of #1 or #2 where I issue the command and grep the output***
* I will concede that I struggle to search for hg material, in part because of the wealth of information, the similarity to git, and my bad habit of thinking in git terms. That being said, I see the issue out there, including "[issue2341] hg commit returns error when nothing changed". I did not find any replies to it, and I found no related discussion on Stack Overflow.
** I see at https://www.selenic.com/mercurial/hg.1.html that hg commit "Returns 0 on success, 1 if nothing changed." Unfortunately, I know there are other ways for commit to fail. Just now, I created a dummy Mercurial (mq) patch and attempted a commit, getting "abort: cannot commit over an applied mq patch". In that instance, the return code was 255.
*** I suppose I can issue the initial commit and capture the error code, stdout, and stderr; then process the text if the return code is 1, and error out or continue as appropriate.
If you want a command that only commits when something has changed, write a command that checks if something has changed. You can do it more "elegantly" by writing it as a simple bash conditional:
if [ -n "$(hg status -q)" ]
then
    hg commit -m "Automatic commit $(date)"
fi
You can run hg status | wc -l (or grep its output) before the commit, and commit only if there are changes in the working directory.
My initial workaround (my least favorite of the options I mention) is to add a Rundeck 'error handler', a command that executes in response to an error. My command is:
hg commit -m "Automatic commit..." files 2>&1 | grep "nothing changed" && echo "Ignoring 'nothing changed' error"
This duplicates the 'nothing changed' message, but suppresses the error only if it is the 'nothing changed' error. Ugly, but tolerable if nobody has better suggestions...
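A cleaner version of option #2 from the question is to act on hg's documented return codes directly: 0 means committed, 1 means nothing changed, and anything else (such as the 255 from the mq abort) is a real failure. A minimal sketch:
hg commit -m "Automatic commit $(date)"
rc=$?
case "$rc" in
    0) echo "Committed." ;;
    1) echo "Nothing changed; treating as success." ;;
    *) echo "hg commit failed with return code $rc" >&2
       exit "$rc" ;;
esac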

Is it possible to output the last added/modified text in a file in the last hour?

I am using crontab to schedule hourly e-mails containing the latest errors logged to a file, debug.log.
So far I have managed to set crontab to send an e-mail with the last 5 logged errors (using a shell script). The thing is that I don't want the same errors to be sent twice: if an error was sent at 12 pm, I don't want it sent again at 1 pm if it is still among those 5.
Note: I used 5 as an arbitrary number, just to test whether I could do this at all. But I need help with what I mentioned above.
I don't need to know how to send the e-mail and all that. All I need is to know how to output the errors logged in the file in the last hour.
You can try using the command below (--line-buffered makes grep flush each match to error.log as it arrives, instead of block-buffering because its output is not a terminal):
tail -f debug.log | grep --line-buffered 'ERROR_INDICATOR' >> error.log
and then modify your crontab job script to delete the contents of error.log right after you send the email.
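The other half is the hourly job itself: mail the collected errors, then truncate the file so nothing is sent twice. A minimal sketch, assuming error.log is fed by the tail/grep pipeline above and that a mail command is available (swap in whatever mailer you actually use):
#!/bin/sh
# [ -s ] is true when error.log exists and is non-empty
if [ -s error.log ]; then
    mail -s "Errors in the last hour" you@example.com < error.log
    : > error.log   # truncate so the same errors are not mailed again
fi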
