I am passing data as a JSONObject, serialized as a String, in a message header to the next module/processor using the code below, but what I receive in the next module is "java.io.DataInputStream@7c6a857", and I am not able to convert it back to the JSON string.
messageOut = MessageBuilder.withPayload(message.getPayload())
        .copyHeaders(message.getHeaders())
        .setHeader("ActualRecord", jsonObject.toString()) // the JSONObject sent as a String
        .build();
I assume you are using RabbitMQ as the transport.
The DefaultMessagePropertiesConverter used within the bus limits headers to 1024 bytes by default; anything larger is left as a DataInputStream in the headers. The MessagePropertiesConverter is configurable for this size, but XD does not currently expose a setting to allow this to be increased.
You need to read from the DataInputStream into a byte[] to restore the JSON.
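For example, a minimal sketch of the receiving side, assuming the org.json JSONObject used in the question and UTF-8 content:
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import org.json.JSONObject;

// Drain the DataInputStream left in the header back into a byte[],
// then rebuild the JSON. Assumes the header bytes are UTF-8 JSON.
static JSONObject restoreHeader(Object rawHeaderValue) throws IOException {
    DataInputStream in = (DataInputStream) rawHeaderValue;
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] chunk = new byte[256];
    for (int n; (n = in.read(chunk)) != -1; ) {
        out.write(chunk, 0, n);
    }
    return new JSONObject(new String(out.toByteArray(), StandardCharsets.UTF_8));
}
In the downstream processor, restoreHeader(message.getHeaders().get("ActualRecord")) should then give back the original JSON.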
I have opened a JIRA Issue to support configuration of the LongString limit.
Could anyone explain how we can reduce the total message data sent to a topic, such as unnecessary message headers? I am sending a string message to a topic and retrieving it from that topic using an MQGET call. As defined at https://www.ibm.com/docs/en/ibm-mq/9.2?topic=calls-mqget-get-message, DataLength is the total size of the message received from the topic, and the difference between DataLength and the actual string size is large: DataLength is much higher than the size of the string I am sending.
So it must be that IBM MQ is padding the message with headers and properties that are not required for simply sending a string to a topic.
Can we disable the unused headers and properties so that DataLength comes down?
EDIT:
Publisher-side code; here protomsg is a Google protobuf message.
string buffer;                 // serialized message buffer
protomsg.SerializeToString(&buffer);
long n = buffer.length();
char *char_array = &buffer[0];
MQPUT(Hcon,        /* connection handle   */
      Hobj,        /* object handle       */
      &md,         /* message descriptor  */
      &pmo,        /* put message options */
      n,           /* data length         */
      char_array,  /* message data        */
      &CompCode,   /* completion code     */
      &Reason);    /* reason code         */
You can use MQGMO properties options to control which properties are included and whether the properties are returned as message headers or in a separate message handle.
You might want to set the MQGMO_NO_PROPERTIES option, documented here: https://www.ibm.com/docs/en/ibm-mq/9.2?topic=mqgmo-options-mqlong
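The code in the question uses the C MQI, but as an illustration, suppressing properties on the getting side looks roughly like this sketch in the MQ classes for Java (the already-opened queue object is an assumption):
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.constants.CMQC;

MQGetMessageOptions gmo = new MQGetMessageOptions();
// Ask the queue manager not to return any message properties with the data.
gmo.options = CMQC.MQGMO_NO_PROPERTIES | CMQC.MQGMO_NO_WAIT;
MQMessage msg = new MQMessage();
queue.get(msg, gmo);                          // 'queue' is an already-opened MQQueue
byte[] body = new byte[msg.getDataLength()];
msg.readFully(body);                          // just the string you put, no properties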
The typical flow when returning the contents of a file from a server back to the client (see the sketch after this list) is to:
1.) Obtain an inputstream to the file
2.) Write chunks of the stream to the open socket
3.) Close the input stream
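In plain Java that flow is roughly the following sketch, where the file and socket variables are assumed to already exist:
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;

try (InputStream in = new FileInputStream(file);
     OutputStream out = socket.getOutputStream()) {
    byte[] chunk = new byte[8192];
    for (int n; (n = in.read(chunk)) != -1; ) {
        out.write(chunk, 0, n);   // write each chunk straight to the open socket
    }
}   // the try-with-resources closes the input stream (step 3)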
When using OkHttp's MockWebServer, MockResponse only accepts an Okio Buffer. This means we must read the entire input stream into the buffer before sending it, which will probably result in an OutOfMemoryError if the file is too large. Is there a way to accomplish the flow I outlined above without using a duplex response, or should I use another library? Here's how I'm currently sending the file in Kotlin:
val inputStream = FileInputStream(file)
val source = inputStream.source()
val buf = Buffer()
buf.writeAll(source.buffer())
source.close()
val response = HTTP_200
response.setHeader("Content-Type", "video/mp4")
response.setBody(buf)
return response
// Dispatch the response, etc...
This is a design limitation of MockWebServer, which guarantees that there are no IOExceptions on the serving side. If you have a response that's bigger than you can keep in memory, MockWebServer is the wrong tool for the job.
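If you do need to serve a file that large in a test, one alternative is a throwaway JDK HttpServer, which can stream the body in chunks; a minimal sketch (the path is a placeholder):
import com.sun.net.httpserver.HttpServer;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.file.Files;
import java.nio.file.Path;

HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
server.createContext("/video", exchange -> {
    Path file = Path.of("big.mp4");                    // placeholder path
    exchange.getResponseHeaders().add("Content-Type", "video/mp4");
    exchange.sendResponseHeaders(200, Files.size(file));
    try (InputStream in = Files.newInputStream(file);
         OutputStream out = exchange.getResponseBody()) {
        in.transferTo(out);                            // streamed, never fully buffered
    }
});
server.start();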
I have a protobuf message like this:
message ImgReply {
  bytes data = 1;
}
And I want to assign its contents with the set_allocated method:
string *buf = new string();
GRPC_CALL_BACK_FUNCTION() {
    .....
    reply->set_allocated_data(buf);
    return Status::OK;
}
Now each time the gRPC function is called, buf is released automatically. I would like to reuse it so that I do not need to reallocate memory each time. I tried calling reply->release_data(), but that just clears the data field, and the client receives no data at all. How can I reuse this buf variable and stop protobuf from deleting it automatically?
The gRPC C++ sync API doesn't provide any feature for custom memory allocation. The callback API has been planned with a message-allocator feature, but that hasn't been de-experimentalized yet, so it isn't ready for public use. It should be available within the next month or two.
I am using Chronicle Queue v4 for writing serialized objects to a queue, but I keep getting the exception below:
Exception in thread "CLF-1" java.lang.AssertionError: you cant put a header inside a header, check that you have not nested the documents.
at net.openhft.chronicle.wire.AbstractWire.writeHeader(AbstractWire.java:228)
at net.openhft.chronicle.queue.impl.single.StoreRecovery.writeHeader(StoreRecovery.java:28)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueStore.writeHeader(SingleChronicleQueueStore.java:298)
at net.openhft.chronicle.queue.impl.single.SingleChronicleQueueExcerpts$StoreAppender.writingDocument(SingleChronicleQueueExcerpts.java:232)
at net.openhft.chronicle.wire.MarshallableOut.writeDocument(MarshallableOut.java:68)
This is how my code looks:
SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary(queueFullPath).build();
ExcerptAppender queueWriter = queue.acquireAppender();
UserStat stat = new UserStat(); // this is my object
byte[] bytes = convertObjectToBytes(stat); // custom serialization of the object to a byte array
queueWriter.writeDocument(w -> w.getValueOut().bytes(bytes));
Nothing is written to the .cq4 file, but I see the last-modified time change every time the writeDocument() method is called.
Most likely (judging by the stack trace) the file you're writing to is damaged; you need to clean it up and retry. It also seems you were using a fairly old version. Try testing with a new version of Chronicle Queue; chances are high the problem is already solved.
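As a sanity check after upgrading, a minimal write-and-read round trip against a fresh queue directory might look like this sketch (the directory name and payload are placeholders):
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueue;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

try (SingleChronicleQueue queue = SingleChronicleQueueBuilder.binary("fresh-queue-dir").build()) {
    byte[] payload = "hello".getBytes();

    ExcerptAppender appender = queue.acquireAppender();
    appender.writeDocument(w -> w.getValueOut().bytes(payload));  // write one excerpt

    ExcerptTailer tailer = queue.createTailer();
    tailer.readDocument(w -> System.out.println(
            new String(w.getValueIn().bytes())));                 // read it back
}
If this round trip works but the original queue directory still fails, that points at damaged .cq4 files rather than the writing code.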
I have a Java desktop client application that uploads files to a REST service.
All calls to the REST service are handled using the Spring RestTemplate class.
I'm looking to implement a progress bar and cancel functionality as the files being uploaded can be quite big.
I've searched the web for a way to implement this but have had no luck.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
I've even tried overriding the CommonsClientHttpRequestFactory.createRequest() method and implementing my own RequestEntity class with a special writeRequest() method but the same issue occurs (stream is all read before actually sending the post).
Am I looking in the wrong place? Has anyone done something similar?
A lot of the material I've read on the web about implementing progress bars talks about starting the upload off and then using separate AJAX requests to poll the web server for progress, which seems like an odd way to go about it.
Any help or tips greatly appreciated.
This is an old question but it is still relevant.
I tried implementing my own ResourceHttpMessageConverter and substituting the writeInternal() method but this method seems to be called during some sort of buffered operation prior to actually posting the request (so the stream is read all in one go before sending takes place).
You were on the right track. Additionally, you needed to disable request body buffering on the RestTemplate's HttpRequestFactory, something like this:
HttpComponentsClientHttpRequestFactory clientHttpRequestFactory = new HttpComponentsClientHttpRequestFactory();
clientHttpRequestFactory.setBufferRequestBody(false);
RestTemplate restTemplate = new RestTemplate(clientHttpRequestFactory);
Here's a working example for tracking file upload progress with RestTemplate.
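The underlying idea: with buffering disabled, the request body is streamed while it is sent, so a counting wrapper around the file's InputStream can drive the progress bar. A sketch (ProgressInputStream and its callback are illustrative names, not a Spring API):
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.function.LongConsumer;

// Hypothetical wrapper: reports the running byte count on every read.
class ProgressInputStream extends FilterInputStream {
    private final LongConsumer onProgress;
    private long sent;

    ProgressInputStream(InputStream in, LongConsumer onProgress) {
        super(in);
        this.onProgress = onProgress;
    }

    @Override
    public int read() throws IOException {
        int b = super.read();
        if (b != -1) onProgress.accept(++sent);
        return b;
    }

    @Override
    public int read(byte[] buf, int off, int len) throws IOException {
        int n = super.read(buf, off, len);
        if (n > 0) onProgress.accept(sent += n);
        return n;
    }
}
Wrap the file's stream in this class (for example via an InputStreamResource), compare the running count against file.length() to compute a percentage, and close the underlying stream to implement cancellation.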
There was not much detail about what this app is or how it works, so this response is vague, but I believe you can do something like the following to track your upload progress.
If this really is a Java client app (i.e., not HTML/JavaScript but a Java program) and you really are having it upload a file as a stream, then you should be able to track your upload progress by counting the bytes written to the stream buffer and comparing that count to the total byte count of the file.
When you get the file, get its size:
long totalFile = file.length(); // size of the file in bytes (getTotalSpace() would return the disk capacity, not the file size)
Wherever you are transmitting as a stream, you are presumably adding bytes to an output buffer of some kind:
byte[] bytesFromSomeFileReader = [whatEverYouAreUsingToReadTheFile];
ByteArrayOutputStream byteStreamToServer = new ByteArrayOutputStream();
int bytesTransmitted = 0;
for (byte fileByte : bytesFromSomeFileReader) {
    byteStreamToServer.write(fileByte);
    bytesTransmitted++;
    //
    // Update your progress bar every kilobyte sent.
    //
    if ((bytesTransmitted % 1000) == 0) {
        someMethodToUpdateProgressBar();
    }
}