When we compare QueueBrowser with MessageListener, QueueBrowser is very slow.
QueueBrowser takes approx. 1 minute to process 100 messages, whereas the consumer processes ~840 messages in the same time.
Is this much difference expected? Can you please suggest whether anything needs to be changed in the code below?
queueEnum = queueBrowserIn.GetEnumerator();
while (true)
{
if (queueEnum.MoveNext())
{
messageCount++;
LogWrite($"Message No - {messageCount} - Method: ProcessNewMesage" + DateTime.Now);
IBytesMessage bytesMessage = queueEnum.Current as IBytesMessage;
if (bytesMessage != null)
{
byte[] arrayMessage = new byte[bytesMessage.BodyLength];
bytesMessage.ReadBytes(arrayMessage);
string message = System.Text.Encoding.Default.GetString(arrayMessage);
}
}
}
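For comparison, this is roughly what the listener-based consumer side looks like; a minimal sketch assuming an Apache.NMS-style API (the exact registration differs between NMS, XMS and EMS clients), with the connection, session and queue objects taken as given:
using System;
using System.Text;
using Apache.NMS;  // assumption: an NMS-style messaging client; adjust for your library

class ListenerConsumer
{
    // Attach an asynchronous listener; the client library pushes messages to
    // OnMessage as they arrive instead of the browse/enumerate loop above.
    public static void Attach(IConnection connection, ISession session, IDestination queue)
    {
        IMessageConsumer consumer = session.CreateConsumer(queue);
        consumer.Listener += OnMessage;   // NMS event; XMS exposes a MessageListener property instead
        connection.Start();
    }

    private static void OnMessage(IMessage message)
    {
        IBytesMessage bytesMessage = message as IBytesMessage;
        if (bytesMessage != null)
        {
            byte[] body = new byte[bytesMessage.BodyLength];
            bytesMessage.ReadBytes(body);
            string text = Encoding.Default.GetString(body);
            Console.WriteLine($"Received {text.Length} chars at {DateTime.Now}");
        }
    }
}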
I know that Google states that protobufs don't support large messages (i.e. greater than 1 MB), but I'm trying to stream a dataset of tens of megabytes using gRPC, and it seems like some people say it's OK, or at least OK with some splitting...
However, when I try to send an array this way (repeated uint32), it takes like 20 seconds on the same local machine.
#proto
service PAS {
// analyze single file
rpc getPhotonRecords (PhotonRecordsRequest) returns (PhotonRecordsReply) {}
}
message PhotonRecordsRequest {
string fileName = 1;
}
message PhotonRecordsReply {
repeated uint32 PhotonRecords = 1;
}
where PhotonRecordsReply needs to be ~10 million uint32 in length...
Does anyone have an idea on how to speed this up? Or what technology would be more appropriate?
So I think I've implemented streaming based on comments and answers given, but it still takes the same amount of time:
#proto
service PAS {
// analyze single file
rpc getPhotonRecords (PhotonRecordsRequest) returns (stream PhotonRecordsReply) {}
}
import pas_pb2
import pas_pb2_grpc
import flb_tools  # helper module used below to read and reshape the data

class PAS_GRPC(pas_pb2_grpc.PASServicer):
def getPhotonRecords(self, request: pas_pb2.PhotonRecordsRequest, _context):
raw_data_bytes = flb_tools.read_data_bytes(request.fileName)
data = flb_tools.reshape_flb_data(raw_data_bytes)
index = 0
chunk_size = 1024
len_data = len(data)
while index < len_data:
# last chunk
if index + chunk_size > len_data:
yield pas_pb2.PhotonRecordsReply(PhotonRecords=data[index:])
# all other chunks
else:
yield pas_pb2.PhotonRecordsReply(PhotonRecords=data[index:index + chunk_size])
index += chunk_size
Min repro
Github example
If you change it over to use streams, that should help: it took less than 2 seconds to transfer for me. Note this was without SSL and on localhost. I threw this code together; I did run it and it worked. I'm not sure what might happen if the file is not a multiple of 4 bytes, for example. Also, the byte order of the reads is Java's default.
I made my 10 meg file like this.
dd if=/dev/random of=my_10mb_file bs=1024 count=10240
Here's the service definition. Only thing I added here was the stream to the response.
service PAS {
// analyze single file
rpc getPhotonRecords (PhotonRecordsRequest) returns (stream PhotonRecordsReply) {}
}
Here's the server implementation.
public class PhotonsServerImpl extends PASImplBase {
@Override
public void getPhotonRecords(PhotonRecordsRequest request, StreamObserver<PhotonRecordsReply> responseObserver) {
log.info("inside getPhotonRecords");
// open the file, I suggest using java.nio API for the fastest read times.
Path file = Paths.get(request.getFileName());
try (FileChannel fileChannel = FileChannel.open(file, StandardOpenOption.READ)) {
int blockSize = 1024 * 4;
ByteBuffer byteBuffer = ByteBuffer.allocate(blockSize);
boolean done = false;
while (!done) {
PhotonRecordsReply.Builder response = PhotonRecordsReply.newBuilder();
// read up to 1024 ints (4 KB) from the file.
byteBuffer.clear();
int read = fileChannel.read(byteBuffer);
if (read < blockSize) {
done = true;
}
// write to the response.
byteBuffer.flip();
for (int index = 0; index < read / 4; index++) {
response.addPhotonRecords(byteBuffer.getInt());
}
// send the response
responseObserver.onNext(response.build());
}
} catch (Exception e) {
log.error("", e);
responseObserver.onError(
Status.INTERNAL.withDescription(e.getMessage()).asRuntimeException());
}
responseObserver.onCompleted();
log.info("exit getPhotonRecords");
}
}
The client just logs the size of the array received.
public long getPhotonRecords(ManagedChannel channel) {
if (log.isInfoEnabled())
log.info("Enter - getPhotonRecords ");
PASGrpc.PASBlockingStub photonClient = PASGrpc.newBlockingStub(channel);
PhotonRecordsRequest request = PhotonRecordsRequest.newBuilder().setFileName("/udata/jdrummond/logs/my_10mb_file").build();
photonClient.getPhotonRecords(request).forEachRemaining(photonRecordsReply -> {
log.info("got this many photons: {}", photonRecordsReply.getPhotonRecordsCount());
});
return 0;
}
I am writing custom Java code to read messages from WebSphere MQ (version 8) and read all the headers from the MQ message.
When I use MQHeaderList to parse all the headers, the list size is 0:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
MQHeaderList headersfoundlist = null;
headersfoundlist = new MQHeaderList (in);
System.out.println("headersfoundlist size: " + headersfoundlist.size());
However, if I read only a specific MQRFH2, it works:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
MQRFH2 rfh2 = new MQRFH2(in);
Element usrfolder = rfh2.getFolder("usr", false);
System.out.println("usr folder" + usrfolder);
How can I parse all the headers of the MQ Message?
DataInput in = new DataInputStream (new ByteArrayInputStream (b));
What's that about? Not sure why you want to do that.
It should just be:
MQMessage message = new MQMessage();
queue.get(message, getOptions);
MQHeaderList headersfoundlist = new MQHeaderList(message);
System.out.println("headersfoundlist size: " + headersfoundlist.size());
Read more here.
Update:
@anshu's comment about it not working: well, I've always found the MQHeaderList class to be very buggy, which is why I don't use it.
Also, 99.99% of messages in MQ will only ever have 1 embedded MQ header (i.e. MQRFH2). Note: a JMS message == an MQRFH2 message. The only case where you will find 2 embedded MQ headers is for messages on the Dead Letter Queue.
i.e.
{MQDLH}{MQRFH2}{message payload}
Is there a real need for your application to process multiple embedded MQ headers? Is your application putting/getting JMS messages (aka MQRFH2 messages)?
If so then you should do something like the following:
queue.get(receiveMsg, gmo);
if (CMQC.MQFMT_RF_HEADER_2.equals(receiveMsg.format))
{
receiveMsg.seek(0);
MQRFH2 rfh2 = new MQRFH2(receiveMsg);
int strucLen = rfh2.getStrucLength();
int encoding = rfh2.getEncoding();
int CCSID = rfh2.getCodedCharSetId();
String format= rfh2.getFormat();
int flags = rfh2.getFlags();
int nameValueCCSID = rfh2.getNameValueCCSID();
String[] folderStrings = rfh2.getFolderStrings();
for (String folder : folderStrings)
System.out.println.logger("Folder: "+folder);
if (CMQC.MQFMT_STRING.equals(format))
{
String msgStr = receiveMsg.readStringOfByteLength(receiveMsg.getDataLength());
System.out.println.logger("Data: "+msgStr);
}
else if (CMQC.MQFMT_NONE.equals(format))
{
byte[] b = new byte[receiveMsg.getDataLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
}
else if ( (CMQC.MQFMT_STRING.equals(receiveMsg.format)) ||
(CMQC.MQFMT_NONE.equals(receiveMsg.format)) )
{
Enumeration<String> props = receiveMsg.getPropertyNames("%");
if (props != null)
{
System.out.println.logger("Named Properties:");
while (props.hasMoreElements())
{
String propName = props.nextElement();
Object o = receiveMsg.getObjectProperty(propName);
System.out.println.logger(" Name="+propName+" : Value="+o);
}
}
if (CMQC.MQFMT_STRING.equals(receiveMsg.format))
{
String msgStr = receiveMsg.readStringOfByteLength(receiveMsg.getMessageLength());
System.out.println.logger("Data: "+msgStr);
}
else
{
byte[] b = new byte[receiveMsg.getMessageLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
}
else
{
byte[] b = new byte[receiveMsg.getMessageLength()];
receiveMsg.readFully(b);
System.out.println.logger("Data: "+new String(b));
}
I found the mistake in my code. I have a few more steps before reading the headers, and they were moving the cursor in the message buffer to the end.
I added message.setDataOffset(0); before reading the headers and it worked.
I have a problem that I managed to fix... however, I'm a little concerned, as I don't really understand why the solution worked.
I am using NetMQ, and specifically a NetMQ poller which has a number of sockets, one of which is a REQ-REP pair.
I have a queue of pending requests which get dequeued and sent to the server; the server handles each request type as required and sends back an appropriate response. This had been working without issue; however, when I tried to add an additional request type, the system stopped working as expected: the request would reach the server, the server would send the response... and the client would not receive it. The message would not arrive at the client until the server was shut down (unusual behavior!).
I had been managing the REQ-REP pair with a flag that I set before sending a request and reset on receipt of a reply. I managed to fix the issue by only triggering sends within the "SendReady" event of the REQ socket - this automagically fixed all of my issues, however I can't find anything in the documentation that tells me why the socket might not have been in the "SendReady" state, or what this event actually does.
Any information that could be shed on why this is working now would be great :)
Cheers.
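For reference, a minimal sketch of the "only send inside SendReady" change described above, reusing the req socket, reqQueue and Req_ReceiveReady handler that appear in the source below (names assumed from that code):
// Sketch: let the poller tell us when the REQ socket is actually able to send,
// instead of tracking a req_sending flag by hand.
req.SendReady += (s, e) =>
{
    if (reqQueue.Count > 0)
    {
        string[] request = reqQueue.Dequeue();
        e.Socket.SendMoreFrame(request[0]).SendFrame(request[1]);
    }
    // Note: SendReady keeps firing while the socket is in a sendable state,
    // so an empty queue simply falls through here; the NetMQQueue approach
    // in the answer below avoids that busy polling.
};
req.ReceiveReady += Req_ReceiveReady;  // reply handling unchanged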
Edit: Source
Client:
"Subscribe" is run as a separate thread to the UI
private void Subscribe(string address)
{
using (var req = new RequestSocket(address + ":5555"))
using (var sub = new SubscriberSocket(address + ":5556"))
using (var poller = new NetMQPoller { req, sub })
{
// Send program code when a request for a code update is received
sub.ReceiveReady += (s, a) =>
{
var type = sub.ReceiveFrameString();
var reply = sub.ReceiveFrameString();
switch (type)
{
case "Type1":
manager.ChangeValue(reply);
break;
case "Type2":
string[] args = reply.Split(',');
eventAggregator.PublishOnUIThread(new MyEvent(args[0], (SimObjectActionEventType)Enum.Parse(typeof(MyEventType), args[1])));
break;
}
};
req.ReceiveReady += Req_ReceiveReady;
poller.RunAsync();
sub.Connect(address + ":5556");
sub.SubscribeToAnyTopic();
sub.Options.ReceiveHighWatermark = 10;
reqQueue = new Queue<string[]>();
reqQueue.Enqueue(new string[] { "InitialiseClient", "" });
req_sending = false;
while (programRunning)
{
if (reqQueue.Count > 0 && !req_sending)
{
req_sending = true;
string[] request = reqQueue.Dequeue();
Console.WriteLine("Sending " + request[0] + " " + request[1]);
req.SendMoreFrame(request[0]).SendFrame(request[1]);
}
Thread.Sleep(1);
}
}
}
private void Req_ReceiveReady(object sender, NetMQSocketEventArgs e)
{
var req = e.Socket;
var messageType = req.ReceiveFrameString();
Console.WriteLine("Received {0}", messageType);
switch (messageType)
{
case "Reply1":
// Receive action
break;
case "Reply2":
// Receive action
break;
case "Reply3":
// Receive action
break;
}
req_sending = false;
}
Server:
using (var rep = new ResponseSocket("@tcp://*:5555"))
using (var pub = new PublisherSocket("@tcp://*:5556"))
using (var beacon = new NetMQBeacon())
using (var poller = new NetMQPoller { rep, pub, beacon })
{
// Send program code when a request for a code update is received
rep.ReceiveReady += (s, a) =>
{
var messageType = rep.ReceiveFrameString();
var message = rep.ReceiveFrameString();
Console.WriteLine("Received {0} - Content: {1}", messageType, message);
switch (messageType)
{
case "InitialiseClient":
// Send
rep.SendMoreFrame("Reply1").SendFrame(repData);
break;
case "Req2":
// do something
rep.SendMoreFrame("Reply2").SendFrame("RequestOK");
break;
case "Req3":
args = message.Split(',');
if (args.Length == 2)
{
// Do Something
rep.SendMoreFrame("Reply3").SendFrame("RequestOK");
}
else
{
rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
}
break;
case "Req4":
args = message.Split(',');
if (args.Length == 2)
{
requestData = //do something
rep.SendMoreFrame("Reply4").SendFrame(requestData);
}
else
{
rep.SendMoreFrame("Ack").SendFrame("RequestError - incorrect argument format");
}
break;
default:
rep.SendMoreFrame("Ack").SendFrame("Error");
break;
}
};
// setup discovery beacon with 1 second interval
beacon.Configure(5555);
beacon.Publish("server", TimeSpan.FromSeconds(1));
// start the poller
poller.RunAsync();
// run the simulation loop
while (serverRunning)
{
// todo - make this operate more efficiently
// push updated variable values to clients
foreach (string[] message in pubQueue)
{
pub.SendMoreFrame(message[0]).SendFrame(message[1]);
}
pubQueue.Clear();
Thread.Sleep(2);
}
poller.StopAsync();
}
You are using the Request socket from multiple threads, which is not supported. You are sending on the main thread and receiving on the poller thread.
Instead of using a regular Queue, try using NetMQQueue; you can add it to the poller and enqueue from the UI thread. Then the sending happens on the poller thread, as does the receiving.
You can read the docs here:
http://netmq.readthedocs.io/en/latest/queue/
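A minimal sketch of that suggestion, reusing the address value and Req_ReceiveReady handler from the client code above (everything else is illustrative):
using NetMQ;
using NetMQ.Sockets;

// NetMQQueue is itself pollable: add it to the same poller as the REQ socket
// and its ReceiveReady handler runs on the poller thread, so only that thread
// ever touches the socket.
var reqQueue = new NetMQQueue<string[]>();
var req = new RequestSocket(address + ":5555");
var poller = new NetMQPoller { req, reqQueue };

reqQueue.ReceiveReady += (s, e) =>
{
    string[] request = reqQueue.Dequeue();
    req.SendMoreFrame(request[0]).SendFrame(request[1]);
};
req.ReceiveReady += Req_ReceiveReady;   // reply handling as in the question

poller.RunAsync();

// From the UI thread (or any other thread) just enqueue; the actual send
// happens on the poller thread:
reqQueue.Enqueue(new[] { "InitialiseClient", "" });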
The only thing I can think of is that the REP socket is ready to send only after you have actually received a message fully (all parts).
I am fairly new to the MQ message broker. In my project, I want to send an XML message. Everything is OK, but when the message gets larger than 500 bytes, my code sends a broken message to the queue. What I am doing is:
//queueManager has been initialized in the class constructor and connected to a channel.
public MQResponse WriteMsg(string QueueName, string strInputMsg)
{
MQResponse response = new MQResponse();
try
{
queue = queueManager.AccessQueue(QueueName,
MQC.MQOO_OUTPUT + MQC.MQOO_FAIL_IF_QUIESCING );
queueMessage = new MQMessage();
queueMessage.DataOffset = 0;
//queueMessage.MessageLength = 2000000;
queueMessage.ResizeBuffer(6 * strInputMsg.Length);
queueMessage.WriteString(strInputMsg);
queueMessage.Format = MQC.MQFMT_STRING;
queuePutMessageOptions = new MQPutMessageOptions();
queue.Put(queueMessage, queuePutMessageOptions);
response.Message = "Message sent to the queue successfully";
response.Status=MQResponseStatus.WriteSuccessful;
}
catch (MQException MQexp)
{
response.Message = "Exception: " + MQexp.Message;
response.Status=MQResponseStatus.WriteFail;
response.CatchedException=MQexp;
}
catch (Exception exp)
{
response.Message = "Exception: " + exp.Message;
response.Status=MQResponseStatus.WriteFail;
response.CatchedException=exp;
}
return response;
}
I guess queueMessage should be initialized correctly so that we are able to send the whole message.
First of all, how did you determine that the message is broken? Did you try to receive the sent message and compare it with the original, or did you view the message using MQ Explorer or some other means? MQ Explorer by default displays the first 1000 bytes of a message. To view more, you need to change the "Max data bytes displayed" setting in the Window/Preferences/Messages panel.
WebSphere MQ can handle messages of size as large as 100 MB.
Regarding your code snippet above: the following few lines of code are enough to build and send a message.
queueMessage = new MQMessage();
queueMessage.Format = MQC.MQFMT_STRING;
queueMessage.WriteString(strInputMsg);
queuePutMessageOptions = new MQPutMessageOptions();
queue.Put(queueMessage, queuePutMessageOptions);
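To check end to end (rather than relying on MQ Explorer's truncated display), here is a minimal read-back sketch with the same amqmdnet classes; the QueueName, queueManager and strInputMsg variables are assumed from the question's code:
// Get the message back and compare it with the original string.
MQQueue inQueue = queueManager.AccessQueue(QueueName,
    MQC.MQOO_INPUT_AS_Q_DEF + MQC.MQOO_FAIL_IF_QUIESCING);

MQMessage getMessage = new MQMessage();
MQGetMessageOptions gmo = new MQGetMessageOptions();
inQueue.Get(getMessage, gmo);

string roundTripped = getMessage.ReadString(getMessage.MessageLength);
Console.WriteLine("Same length: " + (roundTripped.Length == strInputMsg.Length));
Console.WriteLine("Same content: " + (roundTripped == strInputMsg));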
I have the following working code to get a stream from a url:
private Stream GetDownloadStream(string url)
{
Stream stream = null;
AutoResetEvent downloadCompleted = new AutoResetEvent(false);
httpRequest = (HttpWebRequest)WebRequest.Create(url);
httpRequest.AllowReadStreamBuffering = false;
httpRequest.BeginGetResponse(
result =>
{
try
{
httpResponse = (HttpWebResponse)httpRequest.EndGetResponse(result);
stream = httpResponse.GetResponseStream();
}
catch (WebException)
{
downloadCompleted.Set();
Abort();
}
finally
{
downloadCompleted.Set();
}
},
null);
bool completed = downloadCompleted.WaitOne(15 * 1000);
if (completed) {
return stream;
}
return null;
}
It doesn't matter which streams I choose to play: it always returns a stream for the first 6 requests and returns null on the seventh request.
I already tried increasing the timeout to 30 seconds, but on the seventh request it never enters the httpRequest.BeginGetResponse callback.
Any ideas why?
You're hitting the limit on the number of concurrent web requests (which is 6).
Try closing the stream when you've finished with it or staggering your requests so that you're not trying to make too many at once.
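A sketch of the first suggestion on the caller side: dispose each stream as soon as you are done with it so its connection goes back into the 6-connection pool (the playback part is a placeholder):
// Dispose the stream (and with it the underlying response) when done,
// otherwise the connection stays counted against the per-host limit.
Stream stream = GetDownloadStream(url);
if (stream != null)
{
    using (stream)
    {
        // ... play or copy the stream here ...
    }   // Dispose/Close releases the connection for the next request
}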