Spring Boot IBM Queue - Discover all Destinations

I am writing a small Spring Boot application that is supposed to monitor queues on an external IBM MQ installation.
I am able to connect via MQXAQueueConnectionFactory, but I have not found a way to discover all remote queues/destinations on that host programmatically, and I don't want to hard-code them.
How can I get a list of all existing queues in order to add listeners? I should mention that access via the REST API is not possible, because that feature has been disabled by the administrators.

You can use the IBM MQ Programmable Command Formats (PCF). If you installed the IBM MQ samples, tools/pcf/samples/PCF_DisplayActiveLocalQueues.java gives you an idea for your use case.
Here is how I use it in my unit tests to find all the queues with messages:
import java.io.IOException;
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.constants.MQConstants;
import com.ibm.mq.headers.MQDataException;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;
public class MqUtils {

    public static void queuesWithMessages(MQQueueManager qmgr) {
        try {
            PCFMessageAgent agent = new PCFMessageAgent(qmgr);
            try {
                PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q);
                // NOTE: You cannot use a queue name pattern like "FOO.*" together with
                // the "addFilterParameter" method. This is a limitation of PCF messages.
                // If you want to filter on queue names, you have to do it in the
                // for loop after sending the PCF message.
                request.addParameter(CMQC.MQCA_Q_NAME, "*");
                request.addParameter(CMQC.MQIA_Q_TYPE, MQConstants.MQQT_LOCAL);
                request.addFilterParameter(CMQC.MQIA_CURRENT_Q_DEPTH, CMQCFC.MQCFOP_GREATER, 0);
                for (PCFMessage response : agent.send(request)) {
                    String queueName = (String) response.getParameterValue(CMQC.MQCA_Q_NAME);
                    if (queueName == null
                            || queueName.startsWith("SYSTEM")
                            || queueName.startsWith("AMQ")) {
                        continue;
                    }
                    Integer queueDepth = (Integer) response.getParameterValue(CMQC.MQIA_CURRENT_Q_DEPTH);
                    // Do something with this queue that has messages
                }
            } catch (MQException | IOException e) {
                throw new RuntimeException(e);
            } finally {
                agent.disconnect();
            }
        } catch (MQDataException e) {
            throw new RuntimeException(e);
        }
    }
}
And this should give you an idea of how to configure the MQQueueManager (see also the IBM docs):
import com.ibm.mq.MQEnvironment;
import com.ibm.mq.MQException;
import com.ibm.mq.MQQueueManager;
@Configuration
static class MQConfig {

    @Bean(destroyMethod = "disconnect")
    public MQQueueManager mqQueueManager() throws MQException {
        MQEnvironment.hostname = "the.host.com";
        MQEnvironment.port = 1415;
        MQEnvironment.channel = "xxx.CL.FIX";
        return new MQQueueManager("xxx");
    }
}
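For completeness, here is a minimal sketch of how the utility and the configuration could fit together in the monitoring application. The component and its scheduling interval are my assumptions for illustration, not part of the original question:
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import com.ibm.mq.MQQueueManager;

@Component
public class QueueDiscovery {

    private final MQQueueManager qmgr;

    public QueueDiscovery(MQQueueManager qmgr) {
        this.qmgr = qmgr;
    }

    // Periodically rescan for queues with messages; requires @EnableScheduling
    // on some configuration class.
    @Scheduled(fixedDelay = 60_000)
    public void discover() {
        MqUtils.queuesWithMessages(qmgr);
    }
}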
The chapter "Using with IBM MQ classes for JMS" explains how you can use PCF messages in pure JMS.

Related

How to Understand if a Batch ended in a Batch To Record Adapter

I am developing a Spring Boot application that reads messages from a topic. Messages are managed in a transaction and read as strings in batch mode, then deserialized to an object. This operation may fail, but I don't want to discard the whole batch; rather, I want to move the failed messages to a DLQ.
As I am using spring-kafka 2.6.5, I found out that I can use a BatchToRecordAdapter to achieve this. However, I could not find out how to know when I am reading the last message of a batch.
I would like to read one message at a time, deserialize it, and store it in an ArrayList; when the listener reads the last message, I want to do some processing and finally commit the transaction.
Thanks,
Giuseppe.
UPDATE
To achieve this, I overrode BatchToRecordAdapter and added headers that let me know the position of every element within its batch.
package com.doxee.commons.lifecycle.kafka;
import java.util.List;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.kafka.listener.adapter.BatchToRecordAdapter;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.messaging.Message;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.util.Assert;
/*
 * Insert a description here.
 *
 * Bugs: none known
 *
 * @author gmiano gmiano@doxee.com
 * @createDate 25/01/21
 *
 * Copyright (C) 2021 Doxee S.p.A. C.F. - P.IVA: IT02714390362. All Rights Reserved
 */
@Slf4j
public class BatchToEnrichedRecordAdapter<K, V> implements BatchToRecordAdapter<K, V> {

    private final ConsumerRecordRecoverer recoverer;

    public BatchToEnrichedRecordAdapter(ConsumerRecordRecoverer recoverer) {
        Assert.notNull(recoverer, "'recoverer' cannot be null");
        this.recoverer = recoverer;
    }

    @Override
    public void adapt(List<Message<?>> messages, List<ConsumerRecord<K, V>> records,
            Acknowledgment ack, Consumer<?, ?> consumer, Callback<K, V> callback) {
        for (int i = 0; i < messages.size(); ++i) {
            Message<?> enrichedMessage = MessageBuilder.fromMessage(messages.get(i))
                    .setHeader(MyHeaders.BATCH_SIZE, messages.size())
                    .setHeader(MyHeaders.MESSAGE_BATCH_POSITION, i + 1)
                    .build();
            try {
                callback.invoke(records.get(i), ack, consumer, enrichedMessage);
            } catch (Exception e) {
                this.recoverer.accept(records.get(i), e);
            }
        }
    }
}
with this bean as the recoverer:
@Bean
ConsumerRecordRecoverer recoverer(KafkaOperations<?, ?> template) {
    return new DeadLetterPublishingRecoverer(template, (record, ex) -> {
        String srcTopic = record.topic();
        String srcKey = record.key().toString();
        log.error("Failed consume of message {} from topic {}", srcKey, srcTopic, ex);
        String dstTopic;
        if (ex.getCause() instanceof ClientResumableException) {
            dstTopic = srcTopic.concat(".RECOVERABLE");
        } else {
            dstTopic = srcTopic.concat(".DLT");
        }
        log.error("Cannot retry. Try to write message to topic: {}", dstTopic);
        return new TopicPartition(dstTopic, 0);
    });
}
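For reference, here is a minimal sketch of how the adapter could be wired into the container factory and how a single-threaded listener might consume the enriched headers to detect the end of a batch. The factory configuration, the topic name, and the deserialize/process helpers are my assumptions for illustration; MyHeaders is the custom constants class used in the adapter above:
import java.util.ArrayList;
import java.util.List;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ConsumerRecordRecoverer;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;
import org.springframework.stereotype.Component;

@Configuration
class BatchAdapterConfig {

    // Plug the enriched adapter into the batch listener container factory.
    @Bean
    ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
            ConsumerFactory<String, String> consumerFactory, ConsumerRecordRecoverer recoverer) {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory);
        factory.setBatchListener(true);
        factory.setBatchToRecordAdapter(new BatchToEnrichedRecordAdapter<>(recoverer));
        return factory;
    }
}

@Component
class BatchAwareListener {

    private final List<Object> buffer = new ArrayList<>();

    // Record-by-record listener; the headers added by the adapter reveal each
    // record's position, so the last message of the batch is detectable.
    @KafkaListener(topics = "my-topic")
    public void listen(@Payload String message,
            @Header(MyHeaders.MESSAGE_BATCH_POSITION) int position,
            @Header(MyHeaders.BATCH_SIZE) int batchSize) {
        buffer.add(deserialize(message));
        if (position == batchSize) { // last message of the current batch
            process(buffer);
            buffer.clear();
        }
    }

    private Object deserialize(String message) { return message; } // placeholder
    private void process(List<Object> items) { /* placeholder */ }
}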
Is this the proper solution?

Push/pull on Couchbase Server side through Couchbase Lite client side

I have tried to create a small Java program to handle a Couchbase Lite database and perform push/pull operations.
The scenario in depth is as follows:
I created a bucket named sync_gateway and connected it to Couchbase Server with the config.json below:
{
    "interface": ":4984",
    "adminInterface": ":4985",
    "databases": {
        "db": {
            "server": "http://localhost:8091",
            "bucket": "sync_gateway",
            "sync": `function(doc) {
                channel(doc.channels);
            }`
        }
    }
}
This created metadata in the sync_gateway bucket on the server.
Then I wrote sample Java code for the local Couchbase Lite database, with functions for the push/pull operations.
Code:
package com.Testing_couchbaseLite;

import java.io.IOException;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

import com.couchbase.lite.CouchbaseLiteException;
import com.couchbase.lite.Database;
import com.couchbase.lite.Document;
import com.couchbase.lite.JavaContext;
import com.couchbase.lite.Manager;
import com.couchbase.lite.replicator.Replication;

public class Test_syncGateWay {

    private URL createSyncURL(boolean isEncrypted) {
        URL syncURL = null;
        String host = "https://localhost"; // sync gateway ip
        String port = "4984";              // sync gateway port
        String dbName = "db";
        try {
            syncURL = new URL(host + ":" + port + "/" + dbName);
        } catch (MalformedURLException me) {
            me.printStackTrace();
        }
        return syncURL;
    }

    private void startReplications() throws CouchbaseLiteException {
        try {
            Map<String, Object> map = new HashMap<String, Object>();
            map.put("id", "1");
            map.put("name", "ram");
            Manager man = new Manager(new JavaContext(), Manager.DEFAULT_OPTIONS);
            Database db = man.getDatabase("sync_gateway");
            Document doc = db.createDocument();
            doc.putProperties(map);
            System.out.println("-------------done------------");
            System.out.println(man.getAllDatabaseNames());
            System.out.println(man.getDatabase("sync_gateway").getDocumentCount());
            System.out.println(db.getDocument("1").getCurrentRevisionId());
            System.out.println(db.exists());
            Replication pull = db.createPullReplication(this.createSyncURL(true));
            Replication push = db.createPushReplication(this.createSyncURL(true));
            pull.setContinuous(true);
            push.setContinuous(true);
            pull.start();
            push.start();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) throws CouchbaseLiteException, IOException {
        new Test_syncGateWay().startReplications();
    }
}
Now I am starting Sync Gateway with that config file and running the Java code to create a document in CBL and sync it with Couchbase Server via the push/pull operations.
But it is showing this error:
Jul 08, 2016 10:27:21 AM com.couchbase.lite.util.SystemLogger e
SEVERE: RemoteRequest: RemoteRequest{GET, https://localhost:4984/db/_local/2eafda901c4de2fe022af262d5cc7d1c0cb5c2d2}: executeRequest() Exception: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated. url: https://localhost:4984/db/_local/2eafda901c4de2fe022af262d5cc7d1c0cb5c2d2
So is there any misunderstanding in my concept, and how do I resolve this problem?
You have not set up your Sync Gateway for SSL. You need to add the SSLCert and SSLKey keys to your config file.
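A minimal sketch of what that could look like at the top level of the config file, assuming the certificate and key sit next to the config (the file names are placeholders):
{
    "interface": ":4984",
    "adminInterface": ":4985",
    "SSLCert": "cert.pem",
    "SSLKey": "privkey.pem",
    "databases": {
        "db": {
            "server": "http://localhost:8091",
            "bucket": "sync_gateway"
        }
    }
}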

Route lines from file to persistent JMS queue: How to improve performance?

I need some help with performance tuning of a use case. In this use case the Camel route is tailing status lines in a log file and sends each line as a message to a JMS queue. I have implemented the use case like this:
package tests;
import java.io.File;
import java.net.URI;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.sjms.SjmsComponent;
import org.apache.camel.main.Main;
public class LinesToQueue {

    public static void main(String[] args) throws Exception {
        final File file = new File("data/log.txt");
        final String uri = "tcp://127.0.0.1:61616";

        final BrokerService jmsService = BrokerFactory.createBroker(new URI("broker:" + uri));
        jmsService.start();

        final SjmsComponent jmsComponent = new SjmsComponent();
        jmsComponent.setConnectionFactory(new ActiveMQConnectionFactory(uri));

        final Main main = new Main();
        main.bind("jms", jmsComponent);
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                fromF("stream:file?fileName=%s&scanStream=true&scanStreamDelay=0", file.getAbsolutePath())
                    .routeId("LinesToQueue")
                    .to("jms:LogLines?synchronous=false");
            }
        });
        main.enableHangupSupport();
        main.run();
    }
}
When I run this use case with a file already filled with 1.000.000 lines the overall performance I get in the route is about 313 lines/second. This means that it takes about 55 minutes to process the file.
As some sort of reference I also have created another use case. In this use case the Camel route is tailing status lines in a log file and sends each line as a document to an Elasticsearch index. I have implemented the use case like this:
package tests;
import java.io.File;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;
public class LinesToIndex {

    public static void main(String[] args) throws Exception {
        final File file = new File("data/log.txt");
        final String uri = "local";

        final Main main = new Main();
        main.addRouteBuilder(new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                fromF("stream:file?fileName=%s&scanStream=true&scanStreamDelay=0", file.getAbsolutePath())
                    .routeId("LinesToIndex")
                    .bean(new LineConverter())
                    .toF("elasticsearch://%s?operation=INDEX&indexName=log&indexType=line", uri);
            }
        });
        main.enableHangupSupport();
        main.run();
    }
}
When I run this use case with a file already filled with 1.000.000 lines the overall performance I get in the route is about 8333 lines/second. This means that it takes about 2 minutes to process the file.
I understand that there is a huge difference between a JMS queue and an Elasticsearch index, but how can I make the JMS use case above perform better?
Update #1:
It seems to be the persistence in the JMS service that is the bottleneck in my first use case above. If I disable persistence in the JMS service, the performance of the route is about 11111 lines/second. Which persistence store for the JMS service will give me better performance?
A couple of things to consider...
ActiveMQ producer connections are expensive; make sure you use a pooled connection factory (as sketched below).
Consider using the VM transport for an in-process ActiveMQ instance.
Consider using an external ActiveMQ broker over TCP (so it doesn't compete for resources with your test).
Set up/tune KahaDB or LevelDB to optimize persistent storage for your use case.
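To illustrate the pooling, VM transport, and KahaDB points against the LinesToQueue setup above, here is a minimal sketch. The pool size and the disk-sync trade-off are assumptions to experiment with, not measured recommendations, and disabling journal disk syncs trades durability for throughput:
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.pool.PooledConnectionFactory;
import org.apache.activemq.store.kahadb.KahaDBPersistenceAdapter;
import org.apache.camel.component.sjms.SjmsComponent;

// Embedded broker reached over the VM transport instead of TCP.
final BrokerService jmsService = new BrokerService();
final KahaDBPersistenceAdapter kahaDB = new KahaDBPersistenceAdapter();
kahaDB.setEnableJournalDiskSyncs(false); // faster, but can lose messages on a crash
jmsService.setPersistenceAdapter(kahaDB);
jmsService.start();

// Pooled producer connections; requires the activemq-pool dependency.
final PooledConnectionFactory pooled = new PooledConnectionFactory("vm://localhost");
pooled.setMaxConnections(8);

final SjmsComponent jmsComponent = new SjmsComponent();
jmsComponent.setConnectionFactory(pooled);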

Dummy TIBCO queue for testing

I have written a piece of code to connect to a TIBCO queue and fetch customer data, but I cannot find a way to test it before moving it to production. Can we create a dummy queue somehow, so that I can ensure my program is functional before packaging it for production?
Here is my code:
package GI;

import javax.jms.DeliveryMode;
import javax.jms.JMSSecurityException;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueReceiver;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// NOTE: javax.jms.Queue is deliberately not imported: this class is itself named
// Queue, so the JMS destination type is referenced fully qualified below.
public class Queue
{
    public Queue()
    {
        System.err.println("\n------------------------------------------------------------------------");
        System.err.println("tibjmsTEQueue SAMPLE");
        System.err.println("------------------------------------------------------------------------");
    }

    public String QueueProcess(String XMLRequest, String serverUrl, String userName,
            String password, String InQueueName)
    {
        String[] args = new String[4];
        args[0] = serverUrl;
        args[1] = userName;
        args[2] = password;
        args[3] = InQueueName;
        System.out.println("Server....................... " + serverUrl);
        System.out.println("User......................... " + userName);
        System.out.println("InQueue...................... " + InQueueName);
        System.out.println("------------------------------------------------------------------------\n");
        try
        {
            tibjmsUtilities.initSSLParams(serverUrl, args);
        }
        catch (JMSSecurityException e)
        {
            System.err.println("JMSSecurityException: " + e.getMessage() + ", provider=" + e.getErrorCode());
            e.printStackTrace();
            System.exit(0);
        }
        System.err.println("Starting");
        QueueConnection connectionIn = null;
        try
        {
            InitialContext context = new InitialContext();
            javax.jms.Queue tibcoQueue = (javax.jms.Queue) context.lookup("queue/queue0");
            QueueConnectionFactory factory = new com.tibco.tibjms.TibjmsQueueConnectionFactory(serverUrl);
            connectionIn = factory.createQueueConnection(userName, password);
            QueueSession sessionIn = connectionIn.createQueueSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
            QueueSender queueSender = sessionIn.createSender(tibcoQueue);
            queueSender.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
            QueueReceiver queueReceiver = sessionIn.createReceiver(tibcoQueue);
            connectionIn.start();
            System.out.println("Connected to Queue...");
            System.out.println("XMLRequest: " + XMLRequest);
            TextMessage sendXMLRequest = sessionIn.createTextMessage(XMLRequest);
            queueSender.send(sendXMLRequest);
            System.out.println("sent: " + sendXMLRequest.getText());
            TextMessage XMLResponse = (TextMessage) queueReceiver.receive();
            System.out.println("received: " + XMLResponse.getText());
            return XMLResponse.getText();
        }
        catch (Exception e)
        {
            System.err.println("Exiting with Error");
            e.printStackTrace();
            System.exit(0);
        }
        finally
        {
            System.err.println("Exiting");
            try { connectionIn.close(); }
            catch (Exception e1) {}
        }
        return "Failure";
    }

    public static void main(String args[])
    {
        Queue t = new Queue();
    }
}
It is never a good idea to mix production and non-production traffic on the same EMS messaging instance. That is a very risky practice, since it introduces the possibility of inadvertently sending non-production traffic to production applications. I would suggest first checking with your EMS operations team, as they can likely point you to a development EMS instance that you can use.
For unit and/or integration testing where you are not looking to stress-test the environment, many developers install an instance of EMS on their local development workstation. Again, your EMS operations team may be able to provide you with a license and installation binary to install locally.
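If no development EMS instance is available at all, note that everything in the code above except TibjmsQueueConnectionFactory is plain JMS, so for a rough functional test you could point the same send/receive logic at an embedded open-source broker. A minimal sketch using embedded ActiveMQ as a stand-in (my suggestion, not a TIBCO tool; requires the activemq-broker dependency):
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerService;

public class DummyQueueTest {

    public static void main(String[] args) throws Exception {
        // In-memory broker playing the role of the "dummy" messaging server.
        BrokerService broker = new BrokerService();
        broker.setPersistent(false);
        broker.start();

        QueueConnectionFactory factory = new ActiveMQConnectionFactory("vm://localhost");
        QueueConnection connection = factory.createQueueConnection();
        QueueSession session = connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        Queue queue = session.createQueue("queue0"); // stand-in for the TIBCO queue
        connection.start();

        // ... exercise the same send/receive logic as in QueueProcess here ...

        connection.close();
        broker.stop();
    }
}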

How to register my custom MessageBodyReader in my CLIENT?

Maybe somebody can help me figure out how to solve this.
I am using jersey-apache-client 1.17.
I tried to use the Jersey client to build a standalone application (no servlet container or anything, just Java classes) which communicates with a RESTful API, and everything worked fine until I tried to handle the media type "text/csv; charset=utf-8", a CSV stream sent by the server.
The thing is that I can read this stream with the following code:
InputStreamReader reader = new InputStreamReader(itemExportBuilder
        .get(ClientResponse.class).getEntityInputStream());
Csv csv = new Csv();
Input input = csv.createInput(reader);
try {
    String[] readLine;
    while ((readLine = input.readLine()) != null) {
        LOG.debug("Reading CSV: {}", readLine);
    }
} catch (IOException e) {
    e.printStackTrace();
}
try {
    input.close();
} catch (IOException e) {
    e.printStackTrace();
}
But I'd like to encapsulate this logic in a MessageBodyReader. However, after writing the class below, I just can't make the client use it:
package client.response;

import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.ArrayList;
import java.util.List;

import javax.ws.rs.WebApplicationException;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

@Provider
public class ItemExportMessageBodyReader implements MessageBodyReader<ItemExportResponse> {

    private static final Logger LOG = LoggerFactory.getLogger(ItemExportMessageBodyReader.class);
    private static final Integer SKU = 0;
    private static final Integer BASE_SKU = 1;

    public boolean isReadable(Class<?> paramClass, Type type, Annotation[] annotations,
            MediaType mediaType) {
        LOG.info("Checking if content is readable or not");
        return paramClass == ItemExportResponse.class && !mediaType.isWildcardType()
                && !mediaType.isWildcardSubtype()
                && mediaType.isCompatible(MediaType.valueOf("text/csv; charset=utf-8"));
    }

    public ItemExportResponse readFrom(Class<ItemExportResponse> paramClass, Type paramType,
            Annotation[] paramArrayOfAnnotation, MediaType paramMediaType,
            MultivaluedMap<String, String> paramMultivaluedMap, InputStream entityStream)
            throws IOException, WebApplicationException {
        InputStreamReader reader = new InputStreamReader(entityStream);
        Csv csv = new Csv();
        Input input = csv.createInput(reader);
        List<Item> items = new ArrayList<Item>();
        try {
            String[] readLine;
            while ((readLine = input.readLine()) != null) {
                LOG.trace("Reading CSV: {}", readLine);
                Item item = new Item();
                item.setBaseSku(readLine[BASE_SKU]);
                items.add(item);
            }
        } catch (IOException e) {
            LOG.warn("Item export HTTP response handling failed", e);
        } finally {
            try {
                input.close();
            } catch (IOException e) {
                LOG.warn("Could not close the HTTP response stream", e);
            }
        }
        ItemExportResponse response = new ItemExportResponse();
        response.setItems(items);
        return response;
    }
}
The following documentation says that the preferred way of making this work in a JAX-RS client is to register the message body reader as in the code below:
Using Entity Providers with JAX-RS Client API
Client client = ClientBuilder.newBuilder().register(MyBeanMessageBodyReader.class).build();
Response response = client.target("http://example/comm/resource").request(MediaType.APPLICATION_XML).get();
System.out.println(response.getStatus());
MyBean myBean = response.readEntity(MyBean.class);
System.out.println(myBean);
Now the thing is that I can't use the ClientBuilder. I have to extend a specific class which constructs the client in another way, and I have no access to change that construction.
So when I receive the response from the server, the client fails with the following Exception:
com.sun.jersey.api.client.ClientHandlerException: A message body reader for Java class client.response.ItemExportResponse, and Java type class client.response.ItemExportResponse, and MIME media type text/csv; charset=utf-8 was not found
Any other way to register my MessageBodyReader?
OK. In case anybody bumps into my question: I solved this mystery by upgrading from Jersey 1.17 to version 2.9. The documentation I linked above covers this version, not the old one; that is where the confusion stems from.
Jersey introduced backward INCOMPATIBLE changes starting from version 2, so I have no clue how to configure this in version 1.17.
In version 2 the proposed solution worked fine.
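For anyone stuck on the 1.x line: Jersey 1.x normally registers custom providers through a ClientConfig at client construction time. A minimal sketch, only applicable if you can influence how the Client is created (which the question says was not possible here):
import com.sun.jersey.api.client.Client;
import com.sun.jersey.api.client.config.ClientConfig;
import com.sun.jersey.api.client.config.DefaultClientConfig;

import client.response.ItemExportMessageBodyReader;

// Register the provider class before the client is created.
ClientConfig config = new DefaultClientConfig();
config.getClasses().add(ItemExportMessageBodyReader.class);
Client client = Client.create(config);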
