How to set Encoding in MQ headers using Beanshell in JMeter

I am developing a test script to put a message onto a queue using the IBM MQ API 8.0. I am using JMeter 3.1 and a Beanshell Sampler for this (see code below).
The problem I am having is setting the "Encoding" field in the MQ headers. I've tried different approaches as per the API documentation, but nothing has worked for me.
Has anyone faced this issue?
Thanks in advance!
Code below:
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQPutMessageOptions;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import java.io.PrintWriter;
import java.io.StringWriter;

try {
MQEnvironment.hostname = _hostname;
MQEnvironment.channel = _channel;
MQEnvironment.port = _port;
MQEnvironment.userID = "";
MQEnvironment.password = "";
log.info("Using queue manager: " + _qMgr);
MQQueueManager _queueManager = new MQQueueManager(_qMgr);
int openOptions = CMQC.MQOO_OUTPUT + CMQC.MQOO_FAIL_IF_QUIESCING + CMQC.MQOO_INQUIRE + CMQC.MQOO_BROWSE
+ CMQC.MQOO_SET_IDENTITY_CONTEXT;
log.info("Using queue: " + _queueName + ", openOptions: " + openOptions);
MQQueue queue = _queueManager.accessQueue(_queueName, openOptions);
log.info("Building message...");
MQMessage sendmsg = new MQMessage();
sendmsg.clearMessage();
// Set MQ MD Headers
sendmsg.messageType = CMQC.MQMT_DATAGRAM;
sendmsg.replyToQueueName = _queueName;
sendmsg.replyToQueueManagerName = _qMgr;
sendmsg.userId = MQuserId;
sendmsg.setStringProperty("BAH_FR", fromBIC); // from /AppHdr/Fr/FIId/FinInstnId/BICFI
sendmsg.setStringProperty("BAH_TO", toBIC); // from /AppHdr/To/FIId/FinInstnId/BICFI
sendmsg.setStringProperty("BAH_MSGDEFIDR", "pacs.008.001.05"); // from /AppHdr/MsgDefIdr
sendmsg.setStringProperty("BAH_BIZSVC", "cus.clear.01-" + bizSvc); // from /AppHdr/BizSvcr
sendmsg.setStringProperty("BAH_PRTY", "NORM"); // priority
sendmsg.setStringProperty("userId", MQuserId); // user Id
sendmsg.setStringProperty("ConnectorId", connectorId);
sendmsg.setStringProperty("Roles", roleId);
MQPutMessageOptions pmo = new MQPutMessageOptions(); // accept the defaults, same as MQPMO_DEFAULT constant
pmo.options = CMQC.MQPMO_SET_IDENTITY_CONTEXT; // set identity context from userId (note: MQPMO_, not MQOO_, constant)
// Build message
String msg = "<NS1> .... </NS1>";
// MQRFH2 Headers
sendmsg.format = CMQC.MQFMT_STRING;
//sendmsg.encoding = CMQC.MQENC_INTEGER_NORMAL | CMQC.MQENC_DECIMAL_NORMAL | CMQC.MQENC_FLOAT_IEEE_NORMAL;
sendmsg.encoding = 546; // encoding - 546 Windows/Linux
sendmsg.messageId = msgID.getBytes();
sendmsg.correlationId = CMQC.MQCI_NONE;
sendmsg.writeString(msg);
String messageIdBefore = new String(sendmsg.messageId, "UTF-8");
log.info("Before put, messageId=[" + messageIdBefore + "]");
int depthBefore = queue.getCurrentDepth();
log.info("Queue Depth=" + depthBefore);
log.info("Putting message on " + _queueName + ".... ");
queue.put(sendmsg, pmo);
int depthAfter = queue.getCurrentDepth();
log.info("Queue Depth=" + depthAfter);
log.info("**** Done");
String messageIdAfter = new String(sendmsg.messageId, "UTF-8");
log.info("After put, messageId=[" + messageIdAfter + "]");
log.info("Closing connection...");
} catch (Exception e) {
log.info("\nFAILURE - Exception\n");
StringWriter errors = new StringWriter();
e.printStackTrace(new PrintWriter(errors));
log.error(errors.toString());
}

I think you are overthinking the problem. Unless you are doing some sort of unusual manual character/data conversion, you should simply be using:
sendmsg.encoding = MQC.MQENC_NATIVE;
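In the Beanshell sampler from the question that would look something like the minimal sketch below (the connection and put logic stay as they are; the characterSet line is an optional assumption, not part of the original answer):
import com.ibm.mq.MQC;
import com.ibm.mq.MQMessage;
import com.ibm.mq.constants.CMQC;

MQMessage sendmsg = new MQMessage();
sendmsg.format = CMQC.MQFMT_STRING;
// Use the platform's native numeric encoding instead of hard-coding 546
sendmsg.encoding = MQC.MQENC_NATIVE;       // CMQC.MQENC_NATIVE is equivalent
sendmsg.characterSet = CMQC.MQCCSI_Q_MGR;  // optional: let the queue manager resolve the CCSID
sendmsg.writeString("<NS1> .... </NS1>");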

Related

How to wait for a request to finish in JMeter

I am new to JMeter and am trying to create a JMeter script to test how long a request takes to process and complete.
a) Authenticate using Token - Complete
b) Post Request - Complete - Returns 200
c) Get Request - Partially Completed
For c), I am trying to monitor this request to find out when it has either completed, failed, etc.
I have created an HTTP Request sampler with a GET request.
I am able to get a 200 response, but it doesn't wait for completion.
Running this in a console app, it waits for a certain amount of time, polling the status:
Is there a way to write code similar to the C# code below in Beanshell or Groovy to wait? I was reading about the While Controller as well...
var result = WaitForBuildToComplete(dest, requestData, token, timeout);
static string GetStatus(string path, Token token)
{
var httpWebRequest = (HttpWebRequest)WebRequest.Create(path);
httpWebRequest.ContentType = "application/json";
httpWebRequest.Method = "GET";
AddToken(token, httpWebRequest);
WebResponse response = httpWebRequest.GetResponse();
string responseFromServer = "";
using (Stream dataStream = response.GetResponseStream())
{
StreamReader reader = new StreamReader(dataStream);
responseFromServer = reader.ReadToEnd();
}
// Close the response.
response.Close();
return responseFromServer;
}
static int WaitForBuildToComplete(string dest, RequestData requestData, Token token, int
timeout)
{
if (timeout <= 0) return 0;
var path = $"{ConfigurationManager.AppSettings[dest]}/policy?id={requestData.id}";
var startTime = DateTime.Now;
do
{
var status = GetStatus(path, token);
var msg = JsonConvert.DeserializeObject<string>(status);
var requestStatus = JsonConvert.DeserializeObject<RequestStatus>(msg);
if (!string.IsNullOrEmpty(requestStatus.DllUrl))
{
Console.WriteLine($"\nResult dll at: {requestStatus.DllUrl}");
return 0;
}
if (requestStatus.Status.ToUpper() == "FAILED")
{
Console.WriteLine($"\nFAILED");
Console.WriteLine(requestStatus.Message);
return -1;
}
if (requestStatus.Status.ToUpper() == "FAILED_DATA_ERROR")
{
Console.WriteLine($"\nFAILED_DATA_ERROR");
Console.WriteLine(requestStatus.Message);
return -1;
}
if (requestStatus.Status.ToUpper() == "NOT_NEEDED")
{
Console.WriteLine($"\nNOT_NEEDED");
Console.WriteLine(requestStatus.Message);
return -1;
}
Console.Write(".");
System.Threading.Thread.Sleep(1000);
} while ((DateTime.Now - startTime).TotalSeconds < timeout);
Console.WriteLine("Time out waiting for dll.");
return -1;
}
I started by looking at the JSR223 Sampler, but wanted to see if there is a better and easier way to accomplish this.
List<String> sendRequest(String url, String method, Map<String,Object> body) {
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(2000)
.setSocketTimeout(3000)
.build();
StringEntity entity = new StringEntity(new Gson().toJson(body), "UTF-8");
HttpUriRequest request = RequestBuilder.create(method)
.setConfig(requestConfig)
.setUri(url)
.setHeader(HttpHeaders.CONTENT_TYPE, "application/json;charset=UTF-8")
.setEntity(entity)
.build();
String req = "REQUEST:" + "\n" + request.getRequestLine() + "\n" + "Headers: " +
request.getAllHeaders() + "\n" + EntityUtils.toString(entity) + "\n";
HttpClientBuilder.create().build().withCloseable {httpClient ->
httpClient.execute(request).withCloseable {response ->
String res = "RESPONSE:" + "\n" + response.getStatusLine() + "\n" + "Headers: " +
response.getAllHeaders() + "\n" +
(response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "") + "\n";
System.out.println(req + "\n" + res );
return Arrays.asList(req, res);
}
}
}
List sendGet(String url, Map<String,String> body) {
RequestConfig requestConfig = RequestConfig.custom()
.setConnectTimeout(2000)
.setSocketTimeout(3000)
.build();
RequestBuilder requestBuilder = RequestBuilder.get()
.setConfig(requestConfig)
.setUri(url)
.setHeader(HttpHeaders.CONTENT_TYPE, "application/json;charset=UTF-8");
body.forEach({key, value -> requestBuilder.addParameter(key, value)});
HttpUriRequest request = requestBuilder.build();
String req = "REQUEST:" + "\n" + request.getRequestLine() + "\n" + "Headers: " +
request.getAllHeaders() + "\n";
HttpClientBuilder.create().build().withCloseable {httpClient ->
httpClient.execute(request).withCloseable {response ->
String res = "RESPONSE:" + "\n" + response.getStatusLine() + "\n" + "Headers: " +
response.getAllHeaders() + "\n" +
(response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "") + "\n";
System.out.println(req + "\n" + res );
return Arrays.asList(req, res);
}
}
}
The approach normally used in JMeter is to place your request under a While Controller that checks the Status value, which in turn can be fetched from the response using a suitable Post-Processor. The request will then be retried until the "Status" changes to the value you expect (or the controller times out).
If you place the whole construction under a Transaction Controller you will get the total time taken for the status to change.
Example test plan outline:
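A rough sketch of such an outline (the element names, the status variable, and the __jexl3 condition are illustrative assumptions; use whatever status values your endpoint actually returns):
Transaction Controller
  While Controller, condition: ${__jexl3("${status}" != "COMPLETED" && "${status}" != "FAILED",)}
    HTTP Request (GET the status endpoint)
      JSON Extractor, stores the response's status field into the JMeter variable status
    Constant Timer, 1000 ms between polls
Give the JSON Extractor a default value so the loop does not break if the field is missing on the first iteration.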

MQHeaderList size is 0, but reading a specific MQRFH2 works

I am writing custom Java code to read messages from WebSphere MQ (version 8) and to read all the headers from the MQ message.
When I use the MQHeaderList to parse all the headers the list size is 0:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
MQHeaderList headersfoundlist = null;
headersfoundlist = new MQHeaderList (in);
System.out.println("headersfoundlist size: " + headersfoundlist.size());
However, if I read only a specific MQRFH2, it works:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
MQRFH2 rfh2 = new MQRFH2(in);
Element usrfolder = rfh2.getFolder("usr", false);
System.out.println("usr folder" + usrfolder);
How can I parse all the headers of the MQ Message?
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
What's that about? Not sure why you want to do that.
It should just be:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
MQHeaderList headersfoundlist = new MQHeaderList(message);
System.out.println("headersfoundlist size: " + headersfoundlist.size());
Update:
Regarding #anshu's comment about it not working: I've always found the MQHeaderList class to be very buggy, which is why I don't use it.
Also, 99.99% of messages in MQ will only ever have one embedded MQ header (i.e. MQRFH2). Note: a JMS message == an MQRFH2 message. The only case where you will find two embedded MQ headers is for messages on the Dead Letter Queue.
i.e.
{MQDLH}{MQRFH2}{message payload}
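For a dead-letter message like that, a rough sketch of unwrapping the headers one at a time (assuming the com.ibm.mq.headers MQDLH and MQRFH2 classes; the flow is an illustrative assumption, following the same constructor-from-MQMessage pattern as the code further below):
import com.ibm.mq.headers.MQDLH;
import com.ibm.mq.headers.MQRFH2;

if (CMQC.MQFMT_DEAD_LETTER_HEADER.equals(receiveMsg.format))
{
    receiveMsg.seek(0);
    MQDLH dlh = new MQDLH(receiveMsg);            // dead letter header comes first
    if (CMQC.MQFMT_RF_HEADER_2.equals(dlh.getFormat()))
    {
        MQRFH2 rfh2 = new MQRFH2(receiveMsg);     // then the JMS (MQRFH2) header
        // the payload follows; its format is given by rfh2.getFormat()
    }
}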
Is there a real need for your application to process multiple embedded MQ headers? Is your application putting/getting JMS messages (aka MQRFH2 messages)?
If so then you should do something like the following:
queue.get(receiveMsg, gmo);
if (CMQC.MQFMT_RF_HEADER_2.equals(receiveMsg.format))
{
receiveMsg.seek(0);
MQRFH2 rfh2 = new MQRFH2(receiveMsg);
int strucLen = rfh2.getStrucLength();
int encoding = rfh2.getEncoding();
int CCSID = rfh2.getCodedCharSetId();
String format= rfh2.getFormat();
int flags = rfh2.getFlags();
int nameValueCCSID = rfh2.getNameValueCCSID();
String[] folderStrings = rfh2.getFolderStrings();
for (String folder : folderStrings)
System.out.println.logger("Folder: "+folder);
if (CMQC.MQFMT_STRING.equals(format))
{
String msgStr = receiveMsg.readStringOfByteLength(receiveMsg.getDataLength());
System.out.println.logger("Data: "+msgStr);
}
else if (CMQC.MQFMT_NONE.equals(format))
{
byte[] b = new byte[receiveMsg.getDataLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
}
else if ( (CMQC.MQFMT_STRING.equals(receiveMsg.format)) ||
(CMQC.MQFMT_NONE.equals(receiveMsg.format)) )
{
Enumeration<String> props = receiveMsg.getPropertyNames("%");
if (props != null)
{
System.out.println.logger("Named Properties:");
while (props.hasMoreElements())
{
String propName = props.nextElement();
Object o = receiveMsg.getObjectProperty(propName);
System.out.println.logger(" Name="+propName+" : Value="+o);
}
}
if (CMQC.MQFMT_STRING.equals(receiveMsg.format))
{
String msgStr = receiveMsg.readStringOfByteLength(receiveMsg.getMessageLength());
System.out.println.logger("Data: "+msgStr);
}
else
{
byte[] b = new byte[receiveMsg.getMessageLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
}
else
{
byte[] b = new byte[receiveMsg.getMessageLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
I found the mistake in my code. I have a few more steps before reading the headers, and they were moving the cursor in the message buffer to the end.
I added message.setDataOffset(0); before reading the headers and it worked.
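Put together, the working sequence looks roughly like this (a sketch based on the snippets above; getOptions and the extra processing steps are whatever your code already does):
MQMessage message = new MQMessage();
queue.get(message, getOptions);
// ... other processing that advances the read cursor ...
message.setDataOffset(0);   // rewind to the start of the message buffer
MQHeaderList headersfoundlist = new MQHeaderList(message);
System.out.println("headersfoundlist size: " + headersfoundlist.size());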

Unable to upload files greater than 7KB via ajax call to servlet over https

I am running a JBoss 6.3 portal and have deployed a WAR file containing the following two files:
1) DocUpload.jsp
Contains the following code snippet, which makes an AJAX call to send the fields in data2 along with the file object fd.
fd.append('file', document.getElementById('file1').files[0]);
data2 = encodeURIComponent(document.getElementById("lob").value)+'#'+encodeURIComponent(document.getElementById("loantype").value)+
'#'+encodeURIComponent(document.getElementById('docType').value)+'#'+encodeURIComponent(document.getElementById('docName').value)+
'#'+encodeURIComponent(document.getElementById('entity').value)+'#'+encodeURIComponent(document.getElementById('userName').value)+
'#'+encodeURIComponent(document.getElementById('PartyName').value)+'#'+encodeURIComponent(document.getElementById('loanAccount').value)+
'#'+encodeURIComponent(document.getElementById('LoanAmount').value)+'#'+encodeURIComponent(e4)+'#'+encodeURIComponent(document.getElementById('hiddenWIName').value)+'#'+encodeURIComponent(e3);
alert("data2 "+data2);
var ret = doPostAjax("${pageContext.request.contextPath}/AddDocumentsServlet?data="+data2,fd);
2) AddDocumentsServlet.java
Contains the code for handling the request:
File path = new File(RootFolderPath + File.separator + "Portal_TmpDoc" + File.separator + todayAsString + File.separator + lMilliSecondsCurrent);
UploadPath = path.getAbsolutePath();
if (!path.isDirectory()) {
path.mkdirs();
}
if (isMultipart) {
System.out.println("Inside if isMultipart");
DiskFileItemFactory factory = new DiskFileItemFactory();
factory.setSizeThreshold(MEMORY_THRESHOLD);
factory.setRepository(new File(System.getProperty("java.io.tmpdir")));
ServletFileUpload upload = new ServletFileUpload(factory);
upload.setFileSizeMax(MAX_FILE_SIZE);
upload.setSizeMax(MAX_REQUEST_SIZE);
try {
//List multiparts = upload.parseRequest(request);
System.out.println("Before parsing request");
List<FileItem> multiparts = upload.parseRequest(request);
System.out.println("multiparts :::"+multiparts);
for (Iterator iterator = multiparts.iterator(); iterator.hasNext();) {
FileItem item = (FileItem) iterator.next();
logger.info(item);
if (!item.isFormField()) {
String fileobject = item.getFieldName();
System.out.println(request.getParameter("data"));
String[] fileArray = request.getParameter("data").split("#");
name = new File(item.getName()).getName();
name = name.substring(name.lastIndexOf(File.separatorChar) + 1);
ext = name.substring(name.lastIndexOf(".") + 1);
logger.info(name);
File directory = new File(UploadPath);
File[] afile;
int j = (afile = directory.listFiles()).length;
for (int i = 0; i < j; i++) {
File f = afile[i];
if (f.getName().startsWith(filename))
f.delete();
}
item.write(new File(UploadPath + File.separator + filename + "." + ext));
System.out.println("File Uploaded");
}
}
My problem is that when I use an HTTP connection for the above requests and get the session, the program runs just fine. But when using HTTPS and uploading a file greater than 7 KB, the page becomes unresponsive.
On further analysis I found that the program flow gets stuck on this line:
List<FileItem> multiparts = upload.parseRequest(request);
and despite this line being inside a try block, no exception is caught.

Retrieving data from Messaging queue

public string WriteMsg(string strInputMsg)
{
string strReturn = "";
try
{
MQQueue queue = null;
MQQueueManager QueueManagerName = null ;
QueueManagerName = new MQQueueManager("GRBAAQM");
queue = QueueManagerName.AccessQueue(QueueName, MQC.MQOO_OUTPUT
+ MQC.MQOO_FAIL_IF_QUIESCING);
message = strInputMsg;
queueMessage = new MQMessage();
queueMessage.WriteString(message);
queueMessage.Format = MQC.MQFMT_STRING;
queuePutMessageOptions = new MQPutMessageOptions();
queue.Put(queueMessage, queuePutMessageOptions);
strReturn = "Message sent to the queue successfully";
}
catch (MQException MQexp)
{
strReturn = "Exception: " + MQexp.Message;
}
catch (Exception exp)
{
strReturn = "Exception: " + exp.Message;
}
return strReturn;
}
public string ReadMsg()
{
String strReturn = "";
try
{
MQQueue queue = null;
MQQueueManager QueueManagerName = null;
QueueManagerName = new MQQueueManager("GRBAAQM");
queue = QueueManagerName.AccessQueue(QueueName, MQC.MQOO_INPUT_AS_Q_DEF +
MQC.MQOO_FAIL_IF_QUIESCING);
queueMessage = new MQMessage();
queueMessage.Format = MQC.MQFMT_STRING;
queueGetMessageOptions = new MQGetMessageOptions();
queue.Get(queueMessage, queueGetMessageOptions);
strReturn =
queueMessage.ReadString(queueMessage.MessageLength);
}
catch (MQException MQexp)
{
strReturn = "Exception : " + MQexp.Message;
}
catch (Exception exp)
{
strReturn = "Exception: " + exp.Message;
}
return strReturn;
}
These two methods write a message to the queue and read/display a message from the queue. How can I change the read so that it only reads a message FROM the queue once the message count in the queue has reached 10?
Why do you care how many messages are in the queue? MQ is NOT a database. If a message is in the queue then it should be processed. If you need to group messages together then have the sender use MQ's message grouping feature.
Have you read about MQ triggering? A program can be triggered (started) based on a triggering event, i.e. trigger-first, trigger-every, and trigger-depth.

Elasticsearch indexing via BulkRequestBuilder slowing down

Hi all Elasticsearch masters.
I have millions of records to be indexed via the Elasticsearch Java API.
The Elasticsearch cluster has three nodes (1 master + 2 data nodes).
My code snippet is below.
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "MyClusterName").build();
TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";
while((readLine = br.readLine()) != null){
id = somefunction(readLine);
String json = new ObjectMapper().writeValueAsString(readLine);
bulkBuilder.add(client.prepareIndex(index, type, id)
.setSource(json));
bulkBuilderLength++;
if(bulkBuilderLength % 1000== 0){
logger.info("##### " + bulkBuilderLength + " data indexed.");
BulkResponse bulkRes = bulkBuilder.execute().actionGet();
if(bulkRes.hasFailures()){
logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
}
}
}
br.close();
if(bulkBuilder.numberOfActions() > 0){
logger.info("##### " + bulkBuilderLength + " data indexed.");
BulkResponse bulkRes = bulkBuilder.execute().actionGet();
if(bulkRes.hasFailures()){
logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
}
bulkBuilder = client.prepareBulk();
}
It works fine, but the performance SLOWS DOWN RAPIDLY after a few thousand documents.
I've already tried changing the index settings "refresh_interval" to -1 and "number_of_replicas" to 0.
However, the performance keeps degrading in the same way.
If I monitor the status of my cluster using bigdesk, the GC value reaches 1 every second.
Anyone can help me?
Thanks in advance.
=================== UPDATED ===========================
Finally, I've solved this problem (see the answer).
The cause of the problem was that I had missed re-creating a new BulkRequestBuilder after each execution.
The performance degradation no longer occurs after I changed my code snippet as below.
Thank you very much.
Settings settings = ImmutableSettings.settingsBuilder()
.put("cluster.name", "MyClusterName").build();
TransportClient client = new TransportClient(settings);
String hostname = "myhost ip";
int port = 9300;
client.addTransportAddress(new InetSocketTransportAddress(hostname, port));
BulkRequestBuilder bulkBuilder = client.prepareBulk();
BufferedReader br = new BufferedReader(new InputStreamReader(new DataInputStream(new FileInputStream("my_file_path"))));
long bulkBuilderLength = 0;
String readLine = "";
String index = "my_index_name";
String type = "my_type_name";
String id = "";
while((readLine = br.readLine()) != null){
id = somefunction(readLine);
String json = new ObjectMapper().writeValueAsString(readLine);
bulkBuilder.add(client.prepareIndex(index, type, id)
.setSource(json));
bulkBuilderLength++;
if(bulkBuilderLength % 1000== 0){
logger.info("##### " + bulkBuilderLength + " data indexed.");
BulkResponse bulkRes = bulkBuilder.execute().actionGet();
if(bulkRes.hasFailures()){
logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
}
bulkBuilder = client.prepareBulk(); // This line is my mistake and the solution !!!
}
}
br.close();
if(bulkBuilder.numberOfActions() > 0){
logger.info("##### " + bulkBuilderLength + " data indexed.");
BulkResponse bulkRes = bulkBuilder.execute().actionGet();
if(bulkRes.hasFailures()){
logger.error("##### Bulk Request failure with error: " + bulkRes.buildFailureMessage());
}
bulkBuilder = client.prepareBulk();
}
The problem here is that you don't recreate a new bulk request after each bulk execution.
That means you are re-indexing the same first batch of data again and again.
BTW, look at the BulkProcessor class. It is definitely better to use.
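For reference, a rough sketch of what that could look like with the same transport client (the listener, thresholds, and concurrency settings below are illustrative assumptions):
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;

BulkProcessor bulkProcessor = BulkProcessor.builder(client, new BulkProcessor.Listener() {
    @Override
    public void beforeBulk(long executionId, BulkRequest request) { }

    @Override
    public void afterBulk(long executionId, BulkRequest request, BulkResponse response) {
        if (response.hasFailures()) {
            logger.error("##### Bulk Request failure with error: " + response.buildFailureMessage());
        }
    }

    @Override
    public void afterBulk(long executionId, BulkRequest request, Throwable failure) {
        logger.error("##### Bulk Request failed", failure);
    }
})
.setBulkActions(1000)          // flush every 1000 documents
.setConcurrentRequests(1)      // let one bulk execute while the next is being built
.build();

// in the read loop, instead of managing bulkBuilder manually:
bulkProcessor.add(client.prepareIndex(index, type, id).setSource(json).request());

// after the loop:
bulkProcessor.close();         // flushes any remaining documents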
