How do I suppress the boilerplate 'WASX7209I: Connected to process...' from the wsadmin command? - wsadmin

I have a wsadmin script that needs to produce a very specific output format that will be consumed by a third-party monitoring tool. I've written my Jython script to produce the correct output except that wsadmin always seems to spit out this boilerplate at the beginning:
WASX7209I: Connected to process "dmgr" on node [node] using SOAP connector; The type of process is: DeploymentManager
Is there a way to suppress this output, or will I need to do some post-processing to strip off this superfluous info?

I'm not aware of any way to suppress that output from being generated. I think you're going to have to strip it out post-execution if your consuming system can't handle it...
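If post-processing is the route, the banner is easy to filter out because it carries the WASX message-ID prefix. A minimal sketch, where the `sample` variable stands in for the real wsadmin invocation (the script name is hypothetical):

```shell
# Stand-in for the output of: ./wsadmin.sh -lang jython -f myscript.py
sample='WASX7209I: Connected to process "dmgr" on node myNode using SOAP connector; The type of process is: DeploymentManager
metric1=42
metric2=7'

# Strip any WASX-prefixed boilerplate before the monitoring tool sees it.
cleaned=$(printf '%s\n' "$sample" | grep -v '^WASX')
printf '%s\n' "$cleaned"
```

In the real pipeline you would pipe wsadmin's stdout through the same `grep -v '^WASX'` filter.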

Related

Returning an output from a JVM application to a bash script

I have a JVM app that is being run from a bash script. I would like the app to return an output to the script, so that the script can use it as a parameter for other commands.
One suggestion I've read is to use System.out.print on the desired output. However, my application does a significant amount of logging using log4j. It also invokes other libraries which log other info as well. If my bash script tries to read from stdout, wouldn't it read all of that log output too?
Another option I thought of is:
The script passes in a /tmp/${RANDOM}.out file-path to the application
The JVM application writes the desired output to the specified file
The script reads the value off the specified file, once the application has finished running
The above approach seems more cumbersome, and makes certain assumptions about the system's file-system and write-permissions. But it's the best option I can think of.
Is there a better way to do this?
This answer assumes some experience and knowledge about bash IO channels, Java out/err print, and log4j configuration files.
There are 2 techniques:
1. In the Java file, send the relevant output to System.err. In the bash script, capture channel 2, which is stderr. Channel 1 is stdout.
2. In the log4j config .xml, use a file appender. This will send logging data to a file. Define the Logger Root to use this appender.
There are many subtleties that can affect these techniques. Hopefully, one of these options will suffice.
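The first technique can be sketched in a few lines of bash. The `app` function below is a hypothetical stand-in for the real `java -jar app.jar` invocation: it chatters on stdout the way a log4j console appender would, and prints the one value the script needs on stderr:

```shell
# Hypothetical stand-in for: java -jar app.jar
app() {
  echo "INFO  [main] starting up"   # log4j-style logging on channel 1 (stdout)
  echo "result-value" >&2           # the payload the script wants, on channel 2 (stderr)
}

# Capture channel 2 only: point stderr at the capture pipe, discard stdout.
result=$(app 2>&1 1>/dev/null)
echo "captured: $result"
```

The redirection order matters: `2>&1` first sends stderr to wherever stdout currently points (the command substitution), and only then is stdout itself discarded.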

Is there a way in Laravel to send an output to both console and log?

I am writing a command that produces a lot of service information that I need to see while the command is running.
I output this info simply by running echo "some text", and that way I can see what happens when the command runs. When the same command is run by the scheduler, I have to log all this info instead, so I end up duplicating every message with Log::info("some text").
To avoid the duplication I could create a helper class that wraps both calls and include it in all the service classes related to this command, but that still doesn't feel like an ideal solution. Is there maybe a built-in way in Laravel to send output to both the console and the Log at the same time?
You could add ->appendOutputTo('path') to the scheduled task that executes your command, to store the output messages in your log file. I'm not sure whether this captures all console output, though (it would be good to know if you test it).
Check this.

Backend Java application testing using jmeter

I have a Java program that works on the backend. It's a batch-processing type of program: there is a text file that contains messages, and the Java program fetches messages from the text file and loads them into a DB or mainframe. Instead of sequential fetching we need to try parallel fetching. How can I do this through JMeter?
I tried converting the program to a JAR file and calling it through the class name.
I also tried pasting in the code, passing the CSV (the text file converted to .csv) in the argument place.
Both of these give a sampler client exception.
Can you please help me with how to proceed, whether there is something we are missing, or another way to do it?
The easiest way to kick off multiple instances of your program is to run it via JMeter's OS Process Sampler, which can run arbitrary commands, print their output, and measure execution time.
If you have your program as an executable jar you can kick it off with a command along the lines of java -jar yourprogram.jar.
See the How to Run External Commands and Programs Locally and Remotely from JMeter article for more information on the approach.

Run MQSC Command via SYSTEM.ADMIN.COMMAND.EVENT

I have connected remotely to a QMgr via MQ Explorer on Windows. The MQ server version is 7.5.0.1. I can put messages on SYSTEM.ADMIN.COMMAND.EVENT from MQ Explorer successfully, and when I dump SYSTEM.ADMIN.COMMAND.EVENT I can see my messages. As far as I know, I should be able to run PCF commands and MQSC commands via this channel. So I put a DISPLAY QMGR ALL message on this queue, and I can successfully see the message on the MQ server. My question is: how can I run this message remotely via this channel? Thanks.
The IBM documentation indicates that I should be able to receive the command result on SYSTEM.MQSC.REPLY.QUEUE, but I cannot browse this queue from the client MQ Explorer. The queue type for this queue is Model.
There are a couple of problems here.
First, you are using the wrong queue. The command server listens on SYSTEM.ADMIN.COMMAND.QUEUE. The queue to which you are sending messages, SYSTEM.ADMIN.COMMAND.EVENT, is the queue to which the QMgr puts event messages after executing commands, provided of course that command events are enabled.
The second problem, as Jason mentions, is that the runmqsc processor takes human-readable script and converts it into commands the QMgr can understand. Passing textual commands directly to the command server won't work.
Typically we do what you want by passing the commands to runmqsc directly such as...
echo DISPLAY QMGR ALL | runmqsc MYQMGRNAME
If you require the ability to do this as a client, then you want to either download SupportPac MO72, or head over to MQ Gem and pick up a copy of MQSCX. Either of these will accept the command above on a local queue manager, and both can also be supplied with MQ Channel params and connect to a remote QMgr.
In addition to this basic functionality, the MQSCX product also has its own internal script parsing and execution. Suppose, for instance, that you want to do something depending on the command level of the QMgr.
Using runmqsc you could issue the command above, filter the resulting 2-column output through grep, awk, or similar, then capture the final output into a variable. You might need to do this multiple times to capture multiple values, invoking a new runmqsc each time and parsing the output in your script. You must then generate the string for the actual command you wanted to run when you started all this, and pass it to another invocation of runmqsc.
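That grep/awk dance looks something like the sketch below. The `sample_output` variable stands in for `echo "DISPLAY QMGR CMDLEVEL" | runmqsc MYQMGRNAME`, since the real command needs a running queue manager (the queue manager name is hypothetical):

```shell
# Stand-in for the output of: echo "DISPLAY QMGR CMDLEVEL" | runmqsc MYQMGRNAME
sample_output='AMQ8408I: Display Queue Manager details.
   QMNAME(MYQMGRNAME)                      CMDLEVEL(750)'

# Pull the value out of runmqsc's NAME(VALUE) display format.
cmdlevel=$(printf '%s\n' "$sample_output" \
  | sed -n 's/.*CMDLEVEL(\([0-9]*\)).*/\1/p')
echo "command level: $cmdlevel"
```

Multiply this by every attribute you need, and by a fresh runmqsc invocation each time, and the appeal of a tool that exposes the values directly becomes clear.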
Alternatively, MQSCX lets you issue the DISPLAY command, then reference the resulting values directly by name. For example, you can pass MQSCX a couple of lines of script telling it to inquire on the QMgr and then take a conditional action based on the command version, all without ever having to drop back into shell, bat or Perl script.
Full disclosure, I do not work for or get a commission from MQ Gem. I just don't like to beat my head against the wall writing 100 lines of code where 2 will do. If you do any amount of MQSC scripting, the ROI on MQSCX is measured in minutes. And it happens to be 100% on-topic as an answer to this question.
The command server doesn't process textual messages; it processes PCF messages. You need to build a message in PCF format so it can be processed. See http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.adm.doc/q019980_.htm
Ideally you would use real PCF format, but there is also a PCF variant that carries MQSC commands ('escaped' PCF - see http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.ref.adm.doc/q087230_.htm?lang=en)

Formatting shell output into structured data?

Are there any means of formatting the output of shell commands to a structured data format like JSON or XML to be processed by another application?
Use case: Bunch of CentOS servers on a network. I'd like to programmatically log in to them via SSH, run commands to obtain system stats, and eventually run basic maintenance commands. Instead of parsing all the text output myself, I'm wondering if there is anything out there that will help me get the data back in a structured format? Even if only some shell commands were supported, that would be a head start.
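To make the use case concrete, here is the kind of structured output wanted, hand-rolled from standard commands with no extra tooling (the field names are my own, and /proc/loadavg is Linux-specific, with a fallback where it is absent):

```shell
# Gather a couple of stats; /proc/loadavg exists on Linux (CentOS included).
host=$(hostname)
load1=$(awk '{print $1}' /proc/loadavg 2>/dev/null || echo 0)

# Emit them as a JSON object for the consuming application.
printf '{"host":"%s","load1":%s}\n' "$host" "$load1"
```

For anything richer than a flat object, a JSON-aware tool such as jq (if installed) is a safer way to produce correctly escaped output than printf.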
Sounds like the task for SNMP.
It is possible to use Puppet fairly lightly. You can configure it to run its checks only on what you want to check for.
Your entire puppet config could consist of:
exec { "yum install foo":
  unless => "some-check for software",
}
That would run yum install foo but only if some-check for software failed.
That said, if you're managing more than a couple of servers, there are lots of benefits to getting as much of your config and build as possible into Puppet manifests (or cfengine, bcfg2, or similar).
Check out Nagios (http://www.nagios.org/) for remote system monitoring. What you are looking for may already exist out there.
