Connection timeout in jPOS client - jpos

I am using a jPOS client (in one of the classes of a Java Spring MVC program) to connect to an ISO 8583 based server. Sometimes the server fails to respond, so my program keeps waiting for the response and hangs. What is the proper way to implement a connection timeout?
My client program looks like this:
public FieldsModal sendFundTransfer(FieldsModal field) {
    try {
        JposLogger logger = new JposLogger(ISO_LOG_LOCATION);
        org.jpos.iso.ISOPackager customPackager = new GenericPackager(ISO_PACKAGER);
        ISOChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
        logger.jposlogconfig(channel);
        channel.connect();
        log4j.info("Connection established using PostChannel");
        ISOMsg m = new ISOMsg();
        m.set(0, field.getMti());
        //m.set(2, field.getField2());
        m.set(3, field.getField3());
        m.set(4, field.getField4());
        m.set(11, field.getField11());
        m.set(12, field.getField12());
        m.set(17, field.getField17());
        m.set(24, field.getField24());
        m.set(32, field.getField32());
        m.set(34, field.getField34());
        m.set(41, field.getField41());
        m.set(43, field.getField43());
        m.set(46, field.getField46());
        m.set(49, field.getField49());
        m.set(102, field.getField102());
        m.set(103, field.getField103());
        m.set(123, field.getField123());
        m.set(125, field.getField125());
        m.set(126, field.getField126());
        m.set(127, field.getField127());
        m.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(m.pack()));
        channel.send(m);
        log4j.info("Message has been sent");
        ISOMsg r = channel.receive();
        r.setPackager(customPackager);
        System.out.println(ISOUtil.hexdump(r.pack()));
        channel.disconnect();
    } catch (Exception err) {
        System.out.println("sendFundTransfer : " + err);
    }
    return field;
}

Well, the real proper way would be to use Q2. Given that you don't need a persistent connection, you could just set a timeout for the channel.
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager); // live
channel.setTimeout(timeout); // timeout in milliseconds
This way the channel will disconnect automatically if nothing happens during the time specified by the timeout, and your call to receive will throw an exception.
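To show how that fits together, here is a minimal sketch based on the code from the question; the 30-second value and the finally block are my own choices, not anything jPOS mandates:

// Sketch only: timeout handling around send/receive, reusing the names from the question.
PostChannel channel = new PostChannel(ISO_SERVER_IP, Integer.parseInt(ISO_SERVER_PORT), customPackager);
channel.setTimeout(30000); // e.g. 30 seconds; an idle channel disconnects after this
logger.jposlogconfig(channel);
channel.connect();
try {
    channel.send(m);
    ISOMsg r = channel.receive(); // throws an exception if no response arrives in time
    r.setPackager(customPackager);
    // process the response...
} finally {
    if (channel.isConnected()) {
        channel.disconnect();
    }
}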
The alternative is using Q2 and a MUX (see QMUX, for which you need to run Q2, or ISOMUX, which is kind of deprecated).
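For reference, a rough sketch of the QMUX route, assuming a running Q2 deployment in which a mux is registered under the hypothetical name "mux.mymux"; unlike the channel timeout above, request() simply returns null when the response does not arrive in time:

// Sketch only: sending through a QMUX registered in Q2 (the name "mux.mymux" is an assumption).
// MUX is org.jpos.iso.MUX; NameRegistrar is org.jpos.util.NameRegistrar.
MUX mux = (MUX) NameRegistrar.get("mux.mymux"); // throws NotFoundException if the mux is not deployed
ISOMsg m = new ISOMsg();
m.set(0, field.getMti());
m.set(3, field.getField3());
// ... set the remaining fields exactly as in the original code ...
ISOMsg r = mux.request(m, 30000); // waits up to 30 s for the matching response
if (r == null) {
    // timed out: no response received
}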

Related

How to consume message from RabbitMQ dead letter queue one by one

The requirement is to process messages from a dead letter queue via an exposed REST service API (Spring Boot).
Once the REST service is called, one message should be consumed from the DL queue and published to the main queue again for processing.
@RabbitListener(queues = "QUEUE_NAME") consumes the message immediately, which is not what the scenario requires; the message should only be consumed when the REST service API is called.
Any suggestions or solutions?
I do not think @RabbitListener will help here.
However, you could implement this behaviour manually.
Spring Boot automatically creates a RabbitMQ connection factory, so you could use it. When the HTTP call is made, just read a single message from the queue manually; you can use basic.get to synchronously fetch just one message:
@Autowired
private ConnectionFactory factory;

void readSingleMessage() throws Exception {
    Connection connection = null;
    Channel channel = null;
    try {
        connection = factory.newConnection();
        channel = connection.createChannel();
        channel.queueDeclare(QUEUE_NAME, true, false, false, null);
        // basic.get fetches at most one message; autoAck=true removes it from the queue
        GetResponse response = channel.basicGet(QUEUE_NAME, true);
        if (response != null) {
            // Do something with the message
        }
    } finally {
        if (channel != null) {
            channel.close();
        }
        if (connection != null) {
            connection.close();
        }
    }
}
If you are using Spring, you can avoid all the boilerplate in the other answer by using RabbitTemplate.receive(...).
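As an illustration only (not part of the original answer), a sketch of how that might sit behind a REST endpoint; the controller, the mapping, and the queue names MY_DLQ and MAIN_QUEUE are placeholders:

import org.springframework.amqp.core.Message;
import org.springframework.amqp.rabbit.core.RabbitTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical controller; "MY_DLQ" and "MAIN_QUEUE" are placeholder names.
@RestController
public class DlqReplayController {

    private final RabbitTemplate template;

    public DlqReplayController(RabbitTemplate template) {
        this.template = template;
    }

    @PostMapping("/dlq/replay")
    public String replayOne() {
        // receive() returns null immediately if the queue is empty
        Message message = template.receive("MY_DLQ");
        if (message == null) {
            return "dead letter queue is empty";
        }
        // republish to the main queue (via the default exchange) for reprocessing
        template.send("MAIN_QUEUE", message);
        return "replayed one message";
    }
}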
EDIT
To manually ack/reject the message, use the execute method instead.
template.execute(channel -> {
    GetResponse got = channel.basicGet("foo", false);
    // ...
    channel.basicAck(got.getEnvelope().getDeliveryTag(), false);
    return null;
});
It's a bit lower level, but again, most of the boilerplate is taken care of for you.
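Tying it back to the dead-letter scenario from the question, a sketch (queue names are again placeholders) of using the same execute callback to republish one message and acknowledge it only after the publish succeeds:

// Sketch only: replay one DLQ message with a manual ack; "MY_DLQ" and "MAIN_QUEUE" are placeholders.
template.execute(channel -> {
    GetResponse got = channel.basicGet("MY_DLQ", false); // autoAck = false
    if (got == null) {
        return null; // queue is empty
    }
    try {
        channel.basicPublish("", "MAIN_QUEUE", got.getProps(), got.getBody());
        channel.basicAck(got.getEnvelope().getDeliveryTag(), false);
    } catch (Exception e) {
        // republishing failed: put the message back on the dead letter queue
        channel.basicNack(got.getEnvelope().getDeliveryTag(), false, true);
    }
    return null;
});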

ZeroMQ choose recipient

I'm new to ZeroMQ (and to networking in general), and have a question about using ZeroMQ in a setup where multiple clients connect to a single server. My situation is as follows:
--1 server
--multiple clients
--Clients send messages to server: I've already figured out how to do this part.
--Server sends messages to a specific client: This is the part I'm having trouble with. When certain events get handled on the server, the server will need to send a message to a specific client -- not all clients. In other words, the server will need to be able to choose which client to send a given message to.
Right now, this is my server code:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateResponseSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.Send("ack"); // There is no overload for the 'Send' method that takes an IP address as an argument!
        }
    }
}
I have a feeling that the problem is that my design is wrong, and that the ResponseSocket type isn't meant to be used in the way that I want to use it. Since I'm new to this, any advice is very much appreciated!
When using the Response socket you are always replying to the client that sent you the message, so the Request-Response socket pair gives you just simple request-response.
For more complicated scenarios you probably want to use Dealer-Router.
With a Router socket, the first frame of each message is the routing id (the identity of the client that sent you the message).
So your example with a Router will look like:
using (NetMQContext ctx = NetMQContext.Create())
{
    using (var server = ctx.CreateRouterSocket())
    {
        server.Bind(@"tcp://127.0.0.1:5555");
        while (true)
        {
            byte[] routingId = server.Receive();
            string fromClientMessage = server.ReceiveString();
            Console.WriteLine("From Client: {0}", fromClientMessage);
            server.SendMore(routingId).Send("ack");
        }
    }
}
I also suggest reading the ZeroMQ guide; it will probably answer most of your questions.

HornetQ Queue Browser

I'm trying to look at the messages in a queue using a QueueBrowser.
The code looks like this:
javax.naming.InitialContext ctx = new javax.naming.InitialContext();
javax.jms.QueueConnectionFactory qcf = (javax.jms.QueueConnectionFactory) ctx.lookup("java:/XAConnectionFactory");
javax.jms.QueueConnection connection = qcf.createQueueConnection("admin", "admin"); // qcf.createQueueConnection();
javax.jms.QueueSession session = connection.createQueueSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
connection.start();
// It is a "special" queue and it is not looked up from JNDI but constructed directly
javax.jms.Queue queue = (javax.jms.Queue) ctx.lookup("/queue/myQueue");
javax.jms.QueueBrowser browser = session.createBrowser(queue);
TreeMap<Date, javax.jms.Message> messageMap = new TreeMap<Date, javax.jms.Message>();
int counter = 0;
Enumeration<javax.jms.Message> enumeration = browser.getEnumeration();
while (enumeration.hasMoreElements()) {
    counter++;
    javax.jms.Message message = enumeration.nextElement();
    messageMap.put(new Date(message.getJMSTimestamp()), message);
}
connection.stop();
ctx.close();
session.close();
connection.close();
The problem is that I always get only 1 message in the enumeration, even though when I look with the jmx-console and invoke listMessagesAsJSON I see tons of messages.
Any ideas on what I am doing wrong?
It could be that you are hitting a bug, as Sergiu said.
As a workaround you could define consumer-window-size differently on your connection factory. Maybe have a connection factory just for this use case... or upgrade your version of HornetQ.
When setting the consumer-window-size (like I did in my app) it seems that you can hit bug https://issues.jboss.org/browse/HORNETQ-691.
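If the connection factory is built in code rather than looked up from JNDI, a rough sketch of overriding the consumer window size could look like this; the Netty connector, host, and port are assumptions for illustration:

// Sketch only: programmatic HornetQ connection factory with a custom consumer window size.
// The Netty connector, host, and port below are assumptions, not taken from the question.
import java.util.HashMap;
import java.util.Map;

import org.hornetq.api.core.TransportConfiguration;
import org.hornetq.api.jms.HornetQJMSClient;
import org.hornetq.api.jms.JMSFactoryType;
import org.hornetq.core.remoting.impl.netty.NettyConnectorFactory;
import org.hornetq.jms.client.HornetQConnectionFactory;

public class BrowsingConnectionFactory {
    public static HornetQConnectionFactory create() {
        Map<String, Object> params = new HashMap<String, Object>();
        params.put("host", "localhost");
        params.put("port", 5445);

        TransportConfiguration transport =
                new TransportConfiguration(NettyConnectorFactory.class.getName(), params);

        HornetQConnectionFactory cf = (HornetQConnectionFactory)
                HornetQJMSClient.createConnectionFactoryWithoutHA(JMSFactoryType.CF, transport);
        // 0 disables client-side message buffering, so the browser is not starved by a prefetch buffer
        cf.setConsumerWindowSize(0);
        return cf;
    }
}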

MQQueueManager: What to expect from isConnected state after creation, during use?

I inherited this lovely bit of code below.
The way I read it, the developer makes three assumptions:
An MQQueueManager instance is not necessarily created in a state where isConnected() returns true
If it is created in state isConnected() == false, the state might change "later", hence the timeout code
If you try to access a queue from a disconnected MQQueueManager, it will not throw an exception.
What I would expect is that an MQQueueManager instance is created in state isConnected() == true, that this state might change later (network failure etc), and that this state change (isConnected() == false) would cause an operation on the queue to fail with an MQException.
The documentation is delightfully silent on these points, except to note that the only way to reconnect to a queue after manually disconnecting the MQQueueManager is to create a new instance of MQQueueManager.
Who can set me straight here?
qMgr = new MQQueueManager( qManager );
// Set up the options on the queue we wish to open...
// Note. All WebSphere MQ Options are prefixed with MQC in Java.
final int openOptions = MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_OUTPUT;
// Now specify the queue that we wish to open,
// and the open options...
queue = qMgr.accessQueue( queueName, openOptions );
// Set the get message options...
final MQGetMessageOptions gmo = new MQGetMessageOptions(); // accept the defaults
gmo.options = MQC.MQGMO_WAIT;
gmo.waitInterval = 1000;

connectionStatus = CONNECTING;
int timeOutCounter = 0;
while (!qMgr.isConnected()) {
    InboundMsgTask.sleep(1000);
    timeOutCounter++;
    if (timeOutCounter > 4) {
        connectionStatus = TIME_OUT;
        return;
    }
}
connectionStatus = CONNECTED;
Instead of checking isConnected() == true, it is better to go ahead and make the actual MQ .NET method call (Get, Put, etc.). If the connection is broken, these calls will throw a connection-broken exception (MQRC 2009). Remember that isConnected() could be true before an MQ method is called but can change during the execution of that method. Your code needs to handle the connection-broken exception, call the MQQueueManager.Disconnect method, and then re-establish the connection. The Disconnect call frees up any allocated resources and gracefully closes any queue manager objects that were opened. Ignore any exception thrown by the Disconnect method.
If you are using MQ v7.1 or v7.5, then the .NET client can automatically reconnect to queue manager if it detects connection errors. You will need to enable the automatic reconnect option. Please see the MQ InfoCenter.
EDIT:
new MQQueueManager() will return an instance of the MQQueueManager class if the connection to the queue manager is successfully established. In case of errors, an MQException will be thrown. There is no need to wait for the connection to complete, as the MQQueueManager constructor is a blocking call.
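To make that concrete on the Java side, here is a rough sketch (not the code from the question) of replacing the polling loop with plain exception handling; the class name and the 2009 check are illustrative:

// Sketch only: handle connection errors via MQException instead of polling isConnected().
// Uses MQConstants; the original code's MQC constants work the same way.
import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.MQQueueManager;
import com.ibm.mq.constants.MQConstants;

public class MqReceiveSketch {
    public void receiveOne(String qManager, String queueName) {
        MQQueueManager qMgr = null;
        try {
            // blocks until connected, or throws MQException on failure
            qMgr = new MQQueueManager(qManager);
            int openOptions = MQConstants.MQOO_INPUT_AS_Q_DEF | MQConstants.MQOO_OUTPUT;
            MQQueue queue = qMgr.accessQueue(queueName, openOptions);

            MQGetMessageOptions gmo = new MQGetMessageOptions();
            gmo.options = MQConstants.MQGMO_WAIT;
            gmo.waitInterval = 1000;

            MQMessage message = new MQMessage();
            queue.get(message, gmo); // also throws MQException, e.g. 2033 if no message arrives in time
            // process the message...
        } catch (MQException e) {
            if (e.reasonCode == MQConstants.MQRC_CONNECTION_BROKEN) { // 2009
                // connection lost mid-call: fall through to disconnect, then build a new MQQueueManager
            }
        } finally {
            if (qMgr != null) {
                try {
                    qMgr.disconnect();
                } catch (MQException ignore) {
                    // as the answer above notes, ignore exceptions from Disconnect
                }
            }
        }
    }
}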

Async sends in .NET ActiveMQ

I'm looking to increase the performance of a high-throughput producer that I'm writing against ActiveMQ, and according to this useAsyncSend will:
Forces the use of Async Sends which adds a massive performance boost;
but means that the send() method will return immediately whether the
message has been sent or not which could lead to message loss.
However I can't see it making any difference to my simple test case.
Using this very basic application:
const string QueueName = "....";
const string Uri = "....";
static readonly Stopwatch TotalRuntime = new Stopwatch();

static void Main(string[] args)
{
    TotalRuntime.Start();
    SendMessage();
    Console.ReadLine();
}

static void SendMessage()
{
    var session = CreateSession();
    var destination = session.GetQueue(QueueName);
    var producer = session.CreateProducer(destination);
    Console.WriteLine("Ready to send 700 messages");
    Console.ReadLine();
    var body = new byte[600 * 1024];
    Parallel.For(0, 700, i => SendMessage(producer, i, body, session));
}

static void SendMessage(IMessageProducer producer, int i, byte[] body, ISession session)
{
    var message = session.CreateBytesMessage(body);
    var sw = new Stopwatch();
    sw.Start();
    producer.Send(message);
    sw.Stop();
    Console.WriteLine("Running for {0}ms: Sent message {1} blocked for {2}ms",
        TotalRuntime.ElapsedMilliseconds,
        i,
        sw.ElapsedMilliseconds);
}

static ISession CreateSession()
{
    var connectionFactory = new ConnectionFactory(Uri)
    {
        AsyncSend = true,
        CopyMessageOnSend = false
    };
    var connection = connectionFactory.CreateConnection();
    connection.Start();
    var session = connection.CreateSession(AcknowledgementMode.AutoAcknowledge);
    return session;
}
I get the following output:
Ready to send 700 messages
Running for 2430ms: Sent message 696 blocked for 12ms
Running for 4275ms: Sent message 348 blocked for 1858ms
Running for 5106ms: Sent message 609 blocked for 2689ms
Running for 5924ms: Sent message 1 blocked for 2535ms
Running for 6749ms: Sent message 88 blocked for 1860ms
Running for 7537ms: Sent message 610 blocked for 2429ms
Running for 8340ms: Sent message 175 blocked for 2451ms
Running for 9163ms: Sent message 89 blocked for 2413ms
.....
This shows that each message takes about 800ms to send and that the call to producer.Send() blocks for about two and a half seconds, even though the documentation says that the "send() method will return immediately".
Also, these numbers are basically the same whether I change the Parallel.For to a normal for loop or change AsyncSend = true to AlwaysSyncSend = true, so I don't believe the async switch is working at all...
Can anyone see what I'm missing here to make the send asynchronous?
After further testing:
According to the ANTS performance profiler, the vast majority of the runtime is being spent waiting for synchronization. It appears that the issue is that the various transport classes block internally through monitors. In particular I seem to get hung up on the MutexTransport's OneWay method, which only allows one thread to access it at a time.
It looks as though the call to Send will block until the previous message has completed; this explains why my output shows that the first message blocked for 12ms, while the next took 1858ms. I can get multiple transports by implementing a connection-per-message pattern, which improves matters and makes the message sends work in parallel, but it greatly increases the time to send a single message and uses up so many resources that it doesn't seem like the right solution.
I've retested all of this with 1.5.6 and haven't seen any difference.
As always, the best thing to do is update to the latest version (1.5.6 at the time of this writing). A send can block if the broker has producer flow control enabled and you've reached a queue size limit, although with async send this shouldn't happen unless you are sending with a producerWindowSize set. One good way to get help is to create a test case and submit it via a Jira issue to the NMS.ActiveMQ site so that we can look into it using your test code. There have been many fixes since 1.5.1, so I'd recommend giving the new version a try as this could already be a non-issue.
