Many WMQ amqrmppa processes - ibm-mq

I am running WebSphere MQ v7.1.0.1 under Linux. Is it normal to see this many amqrmppa processes for the same queue manager?
mqm 3504 1745 2 Nov01 ? 03:40:23 /opt/mqm/bin/amqrmppa -m TEST
mqm 4804 1745 0 08:56 ? 00:01:21 /opt/mqm/bin/amqrmppa -m TEST
mqm 5022 1745 27 08:56 ? 01:17:32 /opt/mqm/bin/amqrmppa -m TEST
mqm 5944 1745 27 09:30 ? 01:07:45 /opt/mqm/bin/amqrmppa -m TEST
Thanks.

This is normal. amqrmppa is the channel pooling process. When WMQ ran under inetd you would see one process per channel instance. Then it was changed to stand-alone listeners, and there were only so many child processes you could run per listener. The current model uses amqrmppa to pool channels, and it will dynamically spawn or kill processes as needed, depending on load. Do not expect it to kill them quickly if you shut down all your channels, though. It will kill unused amqrmppa instances if resources get low; otherwise it assumes 'you needed this many before, you'll probably need this many again' and lets them hang around for a while.
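As a quick sanity check, the per-queue-manager process count can be read straight off ps; this is a sketch that assumes the queue manager name is the last field on each line, as in the listing above:

```shell
# Count amqrmppa channel pooling processes per queue manager.
# The [a] in the pattern stops the awk match from seeing its own pipeline.
ps -ef | awk '/[a]mqrmppa -m/ {print $NF}' | sort | uniq -c
```

Against the listing in the question, this would report 4 processes for TEST.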

Related

Can I stop a specific queue?

Is it possible in IBM MQ through the console (command-line) to stop a queue?
I know how to stop a Queue Manager with endmqm. But can I run a console command which will stop one queue?
A queue cannot be stopped independently of the queue manager.
However, depending on your setup, you may find that inhibiting puts to and/or gets from the queue achieves your goal. This lets you prevent applications from adding new messages to the queue and/or removing messages from it.
This can be done using commands in the runmqsc interface. https://www.ibm.com/support/knowledgecenter/SSFKSJ_latest/com.ibm.mq.ref.adm.doc/q083460_.htm
Here are the commands you'll need:
runmqsc QUEUE_MANAGER_NAME
ALTER QLOCAL('QUEUE_NAME') GET(DISABLED)
ALTER QLOCAL('QUEUE_NAME') PUT(DISABLED)
EXIT
Below are the commands and output from disabling PUT and GET for my queue named Q1 on queue manager MyQM1.
mqa(mqcli)# runmqsc MyQM1
5724-H72 (C) Copyright IBM Corp. 1994, 2020.
Starting MQSC for queue manager MyQM1.
ALTER QLOCAL('Q1') GET(DISABLED)
1 : ALTER QLOCAL('Q1') GET(DISABLED)
AMQ8008I: IBM MQ Appliance queue changed.
ALTER QLOCAL('Q1') PUT(DISABLED)
2 : ALTER QLOCAL('Q1') PUT(DISABLED)
AMQ8008I: IBM MQ Appliance queue changed.
I'd recommend trying this out in a test environment first, to ensure it meets your needs and that your applications handle the resulting error messages correctly, e.g. "MQGET calls are currently inhibited for the queue. (2016)"
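To reverse the change later, the same runmqsc session accepts the mirror-image commands (queue and queue manager names as in the example above):

```
ALTER QLOCAL('Q1') GET(ENABLED)
ALTER QLOCAL('Q1') PUT(ENABLED)
```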

MySql server has gone away when run in queue

We have MariaDB 10.1, beanstalkd 1.10, and Laravel 4.2.
We have one query that runs successfully when executed outside the queue, but when it runs inside a beanstalkd job it has no effect and we get a 'MySQL server has gone away' error in the log file.
config:
wait_timeout = 120
max_allowed_packet = 1024M
Why the different behavior with and without the queue?
We had similar issues, and it was either because the code was running in a different thread and the connection was being lost, or because of strange garbage collection closing connections in long-running processes.
Anyway, what we implemented is:
- when a job is reserved and starts processing, we always reconnect the DB
- when we detect a connection that has gone away, we release the job (so it will be picked up again)
If the connection drops in the middle of the processing flow, releasing the job means the work done so far is redone when the job is picked up again, so this is only safe if the job is somehow transactional.

deny parallel ssh connection to server for specific host / IP

I have a bot machine (controlled via a mobile device) which connects to the server and fetches information from it via ssh, shell scripts, OS commands, SQL queries, etc., then feeds that information over the (private) internet.
I want to disallow these multiple connections to the server from the bot machine ONLY; there are other machines which connect to the server that must not be affected.
Suppose
Client A, from his mobile, accesses the bot machine (via a webpage), and the bot machine connects to the server (1st session). If this connection takes 5 minutes, during that period the bot machine will be creating, querying, deleting, appending, updating, etc.
If in the meantime (say 2 minutes after the 1st session started) Client B accesses the bot machine from his mobile (via the webpage), the bot machine connects to the server again (2nd session), which will conflict with the 1st session and create havoc...
Limitations
First of all, I do not want to edit any settings on the SERVER WHATSOEVER.
I do not want to edit the webpage/mobile app, etc.
I already know about the lock-file method for parallel shell scripts, and it is implemented at the script level, but what about the OS commands and other things that are not in a bash script?
My Thought
What I thought was: whenever we create a connection with the server, it creates a process (SSH) which is viewable in ps -fu OSUSER, so by applying a unique id/tag/name to our connection we can identify whether a session is active or not. This would be checked as soon as the bot connects to the server. But I do not know how to do that... Please also suggest any further information about this.
Also, is there a way to identify whether an existing process is hung, or when the process started or how long it has been running?
Maybe try using limits.conf to enforce a hard limit of 1 login for the user/group.
You might need a periodic cron job to check for and remove any stale logins.
Locks/mutexes are hard to get right and add complexity. limits.conf is a standard feature of most Unix/Linux systems and should be more reliable, emphasis on should...
A similar question was raised here:
https://unix.stackexchange.com/questions/127077/number-of-ssh-connections-on-a-single-linux-machine
Details here:
http://linux.die.net/man/5/limits.conf
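For reference, the relevant limits.conf entry is a one-liner; here botuser is a placeholder for whatever account the bot logs in with:

```
# /etc/security/limits.conf
botuser    hard    maxlogins    1
```

maxlogins caps the number of simultaneous logins for that user; pam_limits must be enabled for the login service (it usually is for sshd on mainstream distributions).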
I assume you have a single login for the ssh account and that this runs a script on login
Add something like this to the script at login
#!/bin/bash
LOCK_FILE="/tmp/sshlock"
trap 'rm -f "$LOCK_FILE"; exit' SIGHUP SIGINT SIGTERM
# Exit if the lock file exists and is younger than 30 minutes (another session
# is active). Note the -lt comparison: a bare [ $(( ... )) ] is always true.
if [ -e "$LOCK_FILE" ] && [ $(( $(date +%s) - $(stat -L --format %Y "$LOCK_FILE") )) -lt $((30*60)) ]; then
  exit 0
fi
touch "$LOCK_FILE"
When the processes that the ssh login calls end, delete the $LOCK_FILE
The trap statement is an important part of this way of locking; please do use it.
The "30*60" is a 30 minute timeout, thanks to the answer on this question How can I tell if a file is older than 30 minutes from /bin/sh?
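The 30*60 staleness arithmetic can be factored into a tiny helper you can test on its own; is_stale is a hypothetical name, and stat -L --format %Y is the GNU/Linux spelling:

```shell
#!/bin/bash
# Succeed (exit 0) if the file's mtime is at least max_age seconds in the past.
is_stale() {
    local file=$1 max_age=$2
    local now mtime
    now=$(date +%s)
    mtime=$(stat -L --format %Y "$file") || return 1
    [ $(( now - mtime )) -ge "$max_age" ]
}

touch /tmp/demo.lock
if is_stale /tmp/demo.lock $((30*60)); then
    echo "stale lock, safe to take over"
else
    echo "lock is fresh, another session is active"
fi
```

A freshly touched file takes the second branch.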

MQ java process taking 100% of CPU

Following process in our linux server is taking 100% of CPU
java -DMQJMS_LOG_DIR=/opt/hd/ca/mars/tmp/logs/log -DMQJMS_TRACE_DIR=/opt/hd/ca/mars/tmp/logs/trace -DMQJMS_INSTALL_PATH=/opt/isv/mqm/java com.ibm.mq.jms.admin.JMSAdmin -v -cfg /opt/hd/ca/mars/mqm/data/JMSAdmin.config
I forcibly killed the process and bounced MQ, and then I didn't see it any more. What might be the reason for this to happen?
The java process com.ibm.mq.jms.admin.JMSAdmin is normally executed via the IBM MQ script /opt/mqm/java/bin/JMSAdmin.
The purpose of JMSAdmin is to create JNDI resources for connecting to IBM MQ; these are normally file based and stored in a file called .bindings. The location of the .bindings file is found in the configuration file that is passed to the command. In your output above the configuration file is /opt/hd/ca/mars/mqm/data/JMSAdmin.config.
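For context, a minimal file-based JMSAdmin.config usually contains just the JNDI context settings; the values below are illustrative, with PROVIDER_URL pointing at the directory where the .bindings file lives:

```
INITIAL_CONTEXT_FACTORY=com.sun.jndi.fscontext.RefFSContextFactory
PROVIDER_URL=file:///opt/hd/ca/mars/mqm/data
SECURITY_AUTHENTICATION=none
```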
JMSAdmin is an interactive process where you run commands such as:
DEFINE QCF(QueueConnectionFactory1) +
QMANAGER(XYZ) +
...
I would be unable to tell you why it was taking 100% CPU, but the process itself does not directly interact with or connect to the queue manager, and it would have been safe to kill off the process without needing to restart the queue manager. The .bindings file that JMSAdmin generates is used by JMS applications in some configurations to find details of how to connect to MQ and the names of queues and topics to access.
In July 2011 you would have been using IBM MQ v7.0 or lower, all of which are out of support. If anyone comes across a similar issue with a recent, supported version of MQ, I would suggest you take a Java thread dump and open a case with IBM to investigate why it is taking 100% of the CPU.
*PS: I know this is a 9-year-old question, but I thought an answer might be helpful to someone who finds this when searching for a similar problem.

Killing an Oracle job. 10g specific

We're using a job scheduling system that runs on top of DBMS_JOB. It uses a master job to create one-time jobs. We deploy the same set of jobs to all our clients, but can specify which jobs should only run at certain clients.
We get occasional problems with a process run by a job hanging. The main cause is UTL_TCP not timing out when it does not get an expected response. I want to be able to kill those jobs so that they can run again.
I'm looking at creating a new job that kills any of these one-time jobs that have been running for longer than a certain time.
We're stuck with Oracle 10g for a while yet, so I'm limited to what that can do.
There's an article that seems to cover most of this at
http://it.toolbox.com/blogs/database-solutions/killing-the-oracle-dbms_job-6498
I have a feeling that this is not going to cover all eventualities, including:
We can run jobs as several different users, and a user can only break/remove jobs they created. I believe I may be able to use DBMS_IJOB to get around that, but I need the DBA to let me execute it.
We have Oracle RAC systems. I understand 10g limits ALTER SYSTEM KILL SESSION to killing sessions on the current instance. I could arrange for all jobs to run on the same instance, but I've not tried that yet.
Anything else I should consider? Stack Overflow needs a definitive answer on this.
You can get the PID from the job tables and kill the stuck process via the normal OS commands.
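The OS-side kill is ordinary signal handling. As a self-contained sketch (a dummy sleep stands in for the stuck process, whose real PID you would take from the job tables / v$process):

```shell
# Spawn a stand-in for the stuck job process; in real life the PID
# comes from the job tables (v$process.SPID on Unix).
sleep 60 &
PID=$!

kill "$PID"          # polite SIGTERM first
# kill -9 "$PID"     # last resort if the process ignores SIGTERM
```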
You can kill jobs on any instance. On 10g, you need to know on which instance the stuck job is running, and connect to that instance:
To get your instance and pid:
select inst_id, process from gv$session where ...
Connect to that specific instance and kill the session:
sqlplus admin@node3 as sysdba
alter system kill session ...
There are more ways to kill a session on Oracle, depending on your platform. On Unix, sessions (background jobs too) are represented by processes: killing the process kills the session. On Windows, sessions are represented by threads: killing the thread using orakill kills the session. The process (or thread) id is stored in gv$process.
