JMeter Kafka consumer throwing error: ClassCastException: [Ljava.lang.String; cannot be cast to java.util.List

I am trying to read Kafka messages with a Kafka consumer in a JMeter JSR223 Sampler, and I am unable to understand this error:
[Response message: javax.script.ScriptException: javax.script.ScriptException: java.lang.ClassCastException: [Ljava.lang.String; cannot be cast to java.util.List]
Please help me solve the issue so that I can subscribe and consume messages using the Kafka consumer.
import java.util.Properties;
import java.util.Arrays;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.ConsumerRecord;
Properties props = new Properties();
String groupID = "REQUEST_RESPONSE_JOB_GROUP";
String clientID = "REQUEST_RESPONSE_JOB_CLIENT";
String BSID = "kafka:9092";
String topic = "PROC_REST_EVENTS";
props.put("bootstrap.servers", BSID);
props.put("group.id", groupID);
props.put("client.id", clientID);
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("session.timeout.ms", "30000");
props.put("key.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer","org.apache.kafka.common.serialization.StringDeserializer");
props.put("partition.assignment.strategy","org.apache.kafka.clients.consumer.RangeAssignor");
KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
//The Kafka consumer subscribes to a list of topics here.
consumer.subscribe(Arrays.asList(topic));
//print the topic name
System.out.println("Subscribed to topic " + topic);
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        // print the offset, key and value for each consumer record.
        System.out.printf("offset = %d, key = %s, value = %s\n",
            record.offset(), record.key(), record.value());
    return records;
}

Most probably you're getting a List from the Kafka topic while your consumer expects a String; you need to amend the consumer configuration to match the types which come from the topic.
Try out the following Groovy code, which sends 3 messages to the test topic (if it doesn't exist you will need to create it) and then reads them back:
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.clients.producer.ProducerConfig
import org.apache.kafka.clients.producer.ProducerRecord
import org.apache.kafka.common.serialization.LongDeserializer
import org.apache.kafka.common.serialization.LongSerializer
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.kafka.common.serialization.StringSerializer
def BOOTSTRAP_SERVERS = 'localhost:9092'
def TOPIC = 'test'
Properties kafkaProps = new Properties()
kafkaProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS)
kafkaProps.put(ProducerConfig.CLIENT_ID_CONFIG, 'KafkaExampleProducer')
kafkaProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, LongSerializer.class.getName())
kafkaProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName())
kafkaProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, BOOTSTRAP_SERVERS)
kafkaProps.put(ConsumerConfig.GROUP_ID_CONFIG, 'KafkaExampleConsumer')
kafkaProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, LongDeserializer.class.getName())
kafkaProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName())
def producer = new KafkaProducer<>(kafkaProps)
def consumer = new KafkaConsumer<>(kafkaProps)
1.upto(3) {
    def record = new ProducerRecord<>(TOPIC, it as long, 'Hello from JMeter ' + it)
    producer.send(record)
    log.info('Sent record(key=' + record.key() + ', value=' + record.value() + ')')
}
consumer.subscribe(Collections.singletonList(TOPIC))
final int giveUp = 100
int noRecordsCount = 0
while (true) {
    def consumerRecords = consumer.poll(1000)
    if (consumerRecords.count() == 0) {
        noRecordsCount++
        if (noRecordsCount > giveUp) break
        else continue
    }
    consumerRecords.each { record ->
        log.info('Received Record:(' + record.key() + ', ' + record.value() + ')')
    }
    consumer.commitAsync()
}
consumer.close()
You should see the sent and received records logged to jmeter.log.
Once done you should be able to use the above code as a basis for your own Kafka message consumption testing. See the Apache Kafka - How to Load Test with JMeter article for more information on Kafka load testing with JMeter.

Related

JMeter SocketTimeout and OutofMemoryErrors

I have the following code in a JSR223 Sampler and I get the following errors when I run this JMeter command. Can anyone please advise if something is wrong with my code? I'm basically reading an image, changing it a little bit, and sending it in a multipart/form-data POST request.
jmeter -n -Jthreads=20 -Jrampup=30 -Jduration=60 -Jiterations=-1 -t script.jmx
javax.script.ScriptException: java.net.SocketTimeoutException: Read timed out
Uncaught Exception java.lang.OutOfMemoryError: Java heap space in thread Thread[Thread Group 1-10,5,main]
import org.apache.http.HttpHeaders
import org.apache.http.client.config.RequestConfig
import org.apache.http.client.methods.HttpUriRequest
import org.apache.http.client.methods.RequestBuilder
import org.apache.http.conn.ssl.NoopHostnameVerifier
import org.apache.http.conn.ssl.SSLConnectionSocketFactory
import org.apache.http.conn.ssl.TrustStrategy
import org.apache.http.entity.StringEntity
import org.apache.http.impl.client.HttpClients
import org.apache.http.ssl.SSLContextBuilder
import org.apache.http.util.EntityUtils
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.entity.mime.HttpMultipartMode;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.awt.Graphics;
import java.io.ByteArrayOutputStream;
import org.apache.http.entity.ContentType;
import java.security.cert.CertificateException
import java.security.cert.X509Certificate
import java.io.File
List<String> sendRequest(String url, String method, String body) {
    RequestConfig requestConfig = RequestConfig.custom()
        .setConnectTimeout(2000)
        .setSocketTimeout(3000)
        .build();
    BufferedImage image = ImageIO.read(new File("C:/Users/bd3249/Pictures/5.JPG"));
    Graphics graphics = image.getGraphics();
    graphics.setFont(graphics.getFont().deriveFont(16f));
    graphics.drawString("User " + ctx.getThreadNum() + '-' + Math.random() + "; iteration: " + ctx.getVariables().getIteration(), 50, 50);
    graphics.dispose();
    ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    ImageIO.write(image, "jpg", bytes);
    final MultipartEntityBuilder multipartEntity = MultipartEntityBuilder.create();
    multipartEntity.setMode(HttpMultipartMode.STRICT);
    multipartEntity.addBinaryBody("File", bytes.toByteArray(), ContentType.IMAGE_JPEG, "5.JPG");
    HttpUriRequest request = RequestBuilder.create(method)
        .setConfig(requestConfig)
        .setUri(url)
        .setHeader("x-ibm-client-id", "248a20f3-c39b-45d0-b26a-9019c26e9be8")
        .setHeader("X-Client-Id", "2861D410-773B-4DD9-AE74-882116B08856")
        .setEntity(multipartEntity.build())
        .build();
    String req = "REQUEST:" + "User " + ctx.getThreadNum() + "; iteration: " + ctx.getVariables().getIteration() + " " + request.getRequestLine() + "\n";
    def builder = new SSLContextBuilder();
    builder.loadTrustMaterial(null, new TrustStrategy() {
        @Override
        public boolean isTrusted(X509Certificate[] chain, String authType) throws CertificateException {
            return true;
        }
    });
    def trustAllFactory = new SSLConnectionSocketFactory(builder.build(), new NoopHostnameVerifier());
    HttpClients.custom().setSSLSocketFactory(trustAllFactory).build().withCloseable { httpClient ->
        httpClient.execute(request).withCloseable { response ->
            String res = "RESPONSE:" + "User " + ctx.getThreadNum() + "; iteration: " + ctx.getVariables().getIteration() + " " + response.getStatusLine() + (response.getEntity() != null ? EntityUtils.toString(response.getEntity()) : "") + "\n";
            log.info(req + "\n" + res);
            return Arrays.asList(req, res);
        }
    }
}
List test1 = sendRequest("https://test.com/upload", "POST", "");
The error you're getting clearly indicates that you don't have sufficient Java heap space: by default JMeter 5.3 comes with 1 GB of heap, and depending on your image size and response size it might not be enough for your test.
Use e.g. the Active Threads Over Time listener to see how many virtual users were online when the error occurs and increase the heap size proportionally.
More information: 9 Easy Solutions for a JMeter Load Test “Out of Memory” Failure
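For example, JMeter's startup scripts read the JVM memory settings from the HEAP environment variable, so a sketch of raising the limit to 4 GB before launching (the exact default and syntax depend on your JMeter version and operating system):
set HEAP=-Xms1g -Xmx4g (Windows, before running jmeter.bat)
export HEAP="-Xms1g -Xmx4g" (Linux/macOS, before running bin/jmeter)
jmeter -n -Jthreads=20 -Jrampup=30 -Jduration=60 -Jiterations=-1 -t script.jmx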

how to spark streaming use connection pool for impala(JDBC to kudu)

I use Impala (JDBC) twice in foreachRDD, to get the Kafka offsets and to save data,
but Impala and Kudu always shut down. Now I want to set up a connection pool, but there is little material on this for Scala.
Here is my pseudo-code:
#node-1
val newOffsets = getNewOffset() // JDBC read kafka offset in kudu
val messages = KafkaUtils.createDirectStream(*,newOffsets,)
messages.foreachRDD(rdd => {
val spark = SparkSession.builder.config(rdd.sparkContext.getConf).getOrCreate()
#node-2
Class.forName(jdbcDriver)
val con = DriverManager.getConnection("impala url")
val stmt = con.createStatement()
stmt.executeUpdate(sql)
#node-3
val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
offsetRanges.foreach { r => {
    val rt_upsert = s"UPSERT into ${execTable} values('${r.topic}',${r.partition},${r.untilOffset})"
    stmt.executeUpdate(rt_upsert)
    stmt.close()
    con.close()
}}
}
How can I code this with c3p0 or something else? I'd appreciate your help.
Below is the code for reading data from kafka and inserting the data to kudu.
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import org.apache.kudu.client.KuduClient
import org.apache.kudu.client.KuduSession
import org.apache.kudu.client.KuduTable
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.streaming.Milliseconds
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.kafka.KafkaUtils
import scala.collection.immutable.List
import scala.collection.mutable
import scala.util.control.NonFatal
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types._
import org.apache.kudu.Schema
import org.apache.kudu.Type._
import org.apache.kudu.spark.kudu.KuduContext
import scala.collection.mutable.ArrayBuffer
object KafkaKuduConnect extends Serializable {
def main(args: Array[String]): Unit = {
try {
val TopicName="TestTopic"
val kafkaConsumerProps = Map[String, String]("bootstrap.servers" -> "localhost:9092")
val KuduMaster=""
val KuduTable=""
val sparkConf = new SparkConf().setAppName("KafkaKuduConnect")
val sc = new SparkContext(sparkConf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val ssc = new StreamingContext(sc, Milliseconds(1000))
val kuduContext = new KuduContext(KuduMaster, sc)
val kuduclient: KuduClient = new KuduClient.KuduClientBuilder(KuduMaster).build()
//Opening table
val kudutable: KuduTable = kuduclient.openTable(KuduTable)
// getting table schema
val tableschema: Schema = kudutable.getSchema
// creating the schema for the data frame using the table schema
val FinalTableSchema =generateStructure(tableschema)
//To create the schema for creating the data frame from the rdd
def generateStructure(tableSchema: Schema): StructType = {
    var structFieldList: List[StructField] = List()
    for (index <- 0 until tableSchema.getColumnCount) {
        val col = tableSchema.getColumnByIndex(index)
        val coltype = col.getType.toString
        println(coltype)
        col.getType match {
            case INT32 =>
                structFieldList = structFieldList :+ StructField(col.getName, IntegerType)
            case STRING =>
                structFieldList = structFieldList :+ StructField(col.getName, StringType)
            case _ =>
                println("No Class Type Found")
        }
    }
    return StructType(structFieldList)
}
// To create the Row object with values type casted according to the schema
def getRow(schema: StructType, Data: List[String]): Row = {
    var RowData = ArrayBuffer[Any]()
    schema.zipWithIndex.foreach(each => {
        var Index = each._2
        each._1.dataType match {
            case IntegerType =>
                // treat empty/null values as null, otherwise parse the int
                if (Data(Index) == "" || Data(Index) == null)
                    RowData += null
                else
                    RowData += Data(Index).toInt
            case StringType =>
                RowData += Data(Index)
            case _ =>
                RowData += Data(Index)
        }
    })
    return Row.fromSeq(RowData.toList)
}
val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaConsumerProps, Set(TopicName))
messages.foreachRDD(
    // we are looping through each rdd
    eachrdd => {
        // we are creating the Rdd[Row] to create a dataframe with our schema
        val StructuredRdd = eachrdd.map(eachmessage => {
            val record = eachmessage._2
            getRow(FinalTableSchema, record.split(",").toList)
        })
        // DataFrame with the required structure according to the table.
        val DF = sqlContext.createDataFrame(StructuredRdd, FinalTableSchema)
        kuduContext.upsertRows(DF, KuduTable)
    }
)
}
catch {
    case NonFatal(e) =>
        print("Error in main : " + e)
}
}
}
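The example above goes through the Kudu client rather than JDBC. If you specifically want a JDBC connection pool for the Impala upserts, as the question asks, a minimal c3p0 sketch is below (assumptions: c3p0 and the Impala JDBC driver are on the executor classpath, and the driver class name and URL are placeholders you need to adapt). Keeping the pool in a lazily initialized singleton gives one pool per executor JVM, so the upsert code can borrow a connection instead of opening a new one per batch:
import com.mchange.v2.c3p0.ComboPooledDataSource;
import java.sql.Connection;
import java.sql.SQLException;

public final class ImpalaPool {
    // one pool per executor JVM, created lazily on first use
    private static volatile ComboPooledDataSource pool;

    private ImpalaPool() {}

    private static ComboPooledDataSource pool() throws SQLException {
        if (pool == null) {
            synchronized (ImpalaPool.class) {
                if (pool == null) {
                    try {
                        ComboPooledDataSource ds = new ComboPooledDataSource();
                        ds.setDriverClass("com.cloudera.impala.jdbc41.Driver"); // placeholder: your Impala driver class
                        ds.setJdbcUrl("jdbc:impala://impala-host:21050/default"); // placeholder: your Impala URL
                        ds.setMinPoolSize(1);
                        ds.setMaxPoolSize(8);
                        pool = ds;
                    } catch (java.beans.PropertyVetoException e) {
                        throw new SQLException("invalid driver class", e);
                    }
                }
            }
        }
        return pool;
    }

    // borrow a pooled connection; calling close() on it returns it to the pool
    public static Connection getConnection() throws SQLException {
        return pool().getConnection();
    }
}
Because close() only returns the connection to the pool, the per-batch connect/disconnect churn against Impala goes away; the pseudo-code in the question would call ImpalaPool.getConnection() inside foreachRDD and close the statement and connection in a finally block.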

Not able to convert the byte[] to string in scala

I'm trying to stream the data from Kafka and convert it into a data frame, following this link. But when I'm running both producer and consumer applications, this is the output on my console:
(0,[B#370ed56a) (1,[B#2edd3e63) (2,[B#3ba2944d) (3,[B#2eb669d1)
(4,[B#49dd304c) (5,[B#4f6af565) (6,[B#7714e29e)
This is literally the output of the Kafka producer; the topic is empty before pushing messages.
Here is the producer code snippet :
Properties props = new Properties();
props.put("bootstrap.servers", "##########:9092");
props.put("key.serializer",
"org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer",
"org.apache.kafka.common.serialization.ByteArraySerializer");
props.put("producer.type", "async");
Schema.Parser parser = new Schema.Parser();
Schema schema = parser.parse(EVENT_SCHEMA);
Injection<GenericRecord, byte[]> records = GenericAvroCodecs.toBinary(schema);
KafkaProducer<String, byte[]> producer = new KafkaProducer<String, byte[]>(props);
for (int i = 0; i < 100; i++) {
    GenericData.Record avroRecord = new GenericData.Record(schema);
    setEventValues(i, avroRecord);
    byte[] messages = records.apply(avroRecord);
    ProducerRecord<String, byte[]> producerRecord = new ProducerRecord<String, byte[]>(
        "topic", String.valueOf(i), messages);
    System.out.println(producerRecord);
    producer.send(producerRecord);
}
And its output is:
key=0, value=[B#680387a key=1, value=[B#32bfb588 key=2,
value=[B#2ac2e1b1 key=3, value=[B#606f4165 key=4, value=[B#282e7f59
Here is my consumer code snippet, written in Scala:
"group.id" -> "KafkaConsumer",
"zookeeper.connection.timeout.ms" -> "1000000"
val topicMaps = Map("topic" -> 1)
val messages = KafkaUtils.createStream[String, Array[Byte], StringDecoder, DefaultDecoder](ssc, kafkaConf, topicMaps, StorageLevel.MEMORY_ONLY_SER)
messages.print()
I've tried with both StringDecoder and DefaultDecoder in createStream(). I'm sure that the producer and consumer are in compliance with each other.
Any help, from anybody?
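For what it's worth, the (0,[B#370ed56a) lines are not corrupted data: [B#... is just the default toString() of a Java byte[], so the consumer is receiving the Avro bytes correctly but printing the array reference instead of decoding it. A minimal sketch of the missing decoding step, assuming the consumer can build the same bijection Injection from the same schema the producer parsed out of EVENT_SCHEMA:
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericRecord;
import com.twitter.bijection.Injection;
import com.twitter.bijection.avro.GenericAvroCodecs;

public final class AvroDecoder {
    private final Injection<GenericRecord, byte[]> injection;

    public AvroDecoder(Schema schema) {
        // the same binary codec the producer used to encode the record
        this.injection = GenericAvroCodecs.toBinary(schema);
    }

    public GenericRecord decode(byte[] message) {
        // invert() reverses the producer's apply(); get() unwraps the Try
        return injection.invert(message).get();
    }
}
With a decoder like this, mapping the stream through decode(bytes) before printing, instead of calling messages.print() directly, should show the actual record contents rather than [B#... references.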

Why did not my custom Receiver get the right output?

I have a question about the custom receiver of Spark Streaming.
I wrote a copy like the following:
import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket
import java.nio.charset.StandardCharsets
import org.apache.spark.SparkConf
import org.apache.spark.Logging
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.receiver.Receiver
/**
* Custom Receiver that receives data over a socket. Received bytes is interpreted as
* text and \n delimited lines are considered as records. They are then counted and printed.
*
* To run this on your local machine, you need to first run a Netcat server
* `$ nc -lk 9999`
* and then run the example
* `$ bin/run-example org.apache.spark.examples.streaming.CustomReceiver localhost 9999`
*/
object CustomReceiver2 {
    def main(args: Array[String]) {
        // Create the context with a 1 second batch size
        val sparkConf = new SparkConf().setAppName("CustomReceiver")
        val ssc = new StreamingContext(sparkConf, Seconds(1))
        // Create an input stream with the custom receiver on target ip:port and count the
        // words in the input stream of \n delimited text (eg. generated by 'nc')
        val lines = ssc.receiverStream(new CustomReceiver2("192.168.1.0", 11.toInt))
        lines.saveAsObjectFiles("hdfs:/tmp/customreceiver/")
        ssc.start()
        ssc.awaitTermination()
    }
}
class CustomReceiver2(host: String, port: Int)
extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) with Logging {
def onStart() {
    // Start the thread that receives data over a connection
    new Thread("Socket Receiver") {
        override def run() { receive() }
    }.start()
}
def onStop() {
}
/** Create a socket connection and receive data until receiver is stopped */
private def receive() {
    var socket: Socket = null
    var userInput: String = null
    try {
        logInfo("Connecting to " + host + ":" + port)
        socket = new Socket(host, port)
        logInfo("Connected to " + host + ":" + port)
        val reader = new BufferedReader(
            new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8))
        userInput = reader.readLine()
        while (!isStopped && userInput != null) {
            println("userInput= " + userInput)
            store(userInput)
            userInput = reader.readLine()
            println("==store data finished==")
        }
        reader.close()
        socket.close()
        logInfo("Stopped receiving")
        restart("Trying to connect again")
    } catch {
        case e: java.net.ConnectException =>
            restart("Error connecting to " + host + ":" + port, e)
        case t: Throwable =>
            restart("Error receiving data", t)
    }
}
}
It passes the compiler and runs smoothly, but I did not get the output I expected: the HDFS files were empty all the time.
Could anyone give me a hand?

Get Hawtio to show more than 255 chars in JMS messages

I'm using Hawtio to browse my ActiveMQ queues. I'd also like to be able to edit a JMS message before resending it to another queue.
I don't see how I can edit a message in Hawtio, but that's fine; I guess it's not really legal to modify a message directly in the broker.
Instead, I thought I would copy the message body and send a new message with the body modified. Now, the problem I'm facing is that I can only see the first 255 chars of the message body. How can I see the entire ActiveMQ message in Hawtio, not just the first 255 characters?
To browse the queue, Hawtio uses the JMX interface: it calls the browse() method on the queue, which returns the messages as CompositeData[].
When an ActiveMQBytesMessage is converted (check the class org.apache.activemq.broker.jmx.OpenTypeSupport.ByteMessageOpenTypeFactory), two fields are added, BodyLength and BodyPreview. The fields return the following data:
BodyLength - the length of JMS message body
BodyPreview - the first 255 bytes of the JMS message body (the length which is hardcoded, as Claus Ibsen already said in his answer ;-) )
Check the method Map<String, Object> getFields(Object o) in the class org.apache.activemq.broker.jmx.OpenTypeSupport.ByteMessageOpenTypeFactory.
Hawtio uses the field BodyPreview to display the message for non-text messages.
Check the file hawtio-web/src/main/webapp/app/activemq/js/browse.ts in Hawtio:
function createBodyText(message) {
    if (message.Text) {
        ...
    } else if (message.BodyPreview) {
        ...
        if (code === 1 || code === 2) {
            // bytes and text
            var len = message.BodyPreview.length;
            var lenTxt = "" + textArr.length;
            body = "bytes:\n" + bytesData + "\n\ntext:\n" + textData;
            message.textMode = "bytes (" + len + " bytes) and text (" + lenTxt + " chars)";
        } else {
            // bytes only
            var len = message.BodyPreview.length;
            body = bytesData;
            message.textMode = "bytes (" + len + " bytes)";
        }
        ...
    } else {
        message.textMode = "unsupported";
        ...
If you want to change it you either have to change it in ActiveMQ or in Hawtio.
A lengthy and verbose example to demonstrate the explanation.
import static java.lang.System.out;
import java.lang.management.ManagementFactory;
import java.util.Enumeration;
import java.util.concurrent.TimeUnit;
import javax.jms.BytesMessage;
import javax.jms.Connection;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.management.MBeanServer;
import javax.management.MBeanServerInvocationHandler;
import javax.management.ObjectName;
import javax.management.openmbean.CompositeData;
import javax.management.openmbean.CompositeType;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.broker.BrokerFactory;
import org.apache.activemq.broker.BrokerService;
import org.apache.activemq.broker.jmx.QueueViewMBean;
import org.apache.activemq.command.ActiveMQBytesMessage;
import org.apache.activemq.command.ActiveMQTextMessage;
public class BodyPreviewExample {
public static void main(String[] args) throws Exception {
String password = "password";
String user = "user";
String queueName = "TEST_QUEUE";
String brokerUrl = "tcp://localhost:61616";
BrokerService broker = BrokerFactory.createBroker("broker:"+brokerUrl);
broker.start();
broker.waitUntilStarted();
Connection conn = new ActiveMQConnectionFactory(brokerUrl)
.createConnection(user, password);
conn.start();
Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
Queue producerQueue = session.createQueue(queueName);
MessageProducer producer = session.createProducer(producerQueue);
// create a dummy message
StringBuilder sb = new StringBuilder(1000);
for (int i = 0; i < 100; i++) {
    sb.append(">CAFEBABE<");
}
// create and send a JMSBytesMessage
BytesMessage bytesMsg = session.createBytesMessage();
bytesMsg.writeBytes(sb.toString().getBytes());
producer.send(bytesMsg);
// create and send a JMSTextMessage
TextMessage textMsg = session.createTextMessage();
textMsg.setText(sb.toString());
producer.send(textMsg);
producer.close();
out.printf("%nmessage info via session browser%n");
String format = "%-20s = %s%n";
Queue consumerQueue = session.createQueue(queueName);
QueueBrowser browser = session.createBrowser(consumerQueue);
for (Enumeration p = browser.getEnumeration(); p.hasMoreElements();) {
    out.println();
    Object next = p.nextElement();
    if (next instanceof ActiveMQBytesMessage) {
        ActiveMQBytesMessage amq = (ActiveMQBytesMessage) next;
        out.printf(format, "JMSMessageID", amq.getJMSMessageID());
        out.printf(format, "JMSDestination", amq.getJMSDestination());
        out.printf(format, "JMSXMimeType", amq.getJMSXMimeType());
        out.printf(format, "BodyLength", amq.getBodyLength());
    } else if (next instanceof ActiveMQTextMessage) {
        ActiveMQTextMessage amq = (ActiveMQTextMessage) next;
        out.printf(format, "JMSMessageID", amq.getJMSMessageID());
        out.printf(format, "JMSDestination", amq.getJMSDestination());
        out.printf(format, "JMSXMimeType", amq.getJMSXMimeType());
        out.printf(format, "text.length", amq.getText().length());
    } else {
        out.printf("unhandled message type: %s%n", next.getClass());
    }
}
session.close();
conn.close();
// access the queue via JMX
out.printf("%nmessage info via JMX browse operation%n");
MBeanServer mbeanServer = ManagementFactory.getPlatformMBeanServer();
ObjectName name = new ObjectName("org.apache.activemq:type=Broker"
+ ",brokerName=localhost"
+ ",destinationType=Queue"
+ ",destinationName=" + queueName);
QueueViewMBean queue
= MBeanServerInvocationHandler.newProxyInstance(mbeanServer,
name, QueueViewMBean.class, true);
CompositeData[] browse = queue.browse();
for (CompositeData compositeData : browse) {
    out.println();
    CompositeType compositeType = compositeData.getCompositeType();
    out.printf(format, "CompositeType", compositeType.getTypeName());
    out.printf(format, "JMSMessageID", compositeData.get("JMSMessageID"));
    if (compositeData.containsKey("BodyLength")) {
        // body length of the ActiveMQBytesMessage
        Long bodyLength = (Long) compositeData.get("BodyLength");
        out.printf(format, "BodyLength", bodyLength);
        // the content displayed by hawtio
        Byte[] bodyPreview = (Byte[]) compositeData.get("BodyPreview");
        out.printf(format, "size of BodyPreview", bodyPreview.length);
    } else if (compositeData.containsKey("Text")) {
        String text = (String) compositeData.get("Text");
        out.printf(format, "Text.length()", text.length());
    }
}
// uncomment if you want to check with e.g. JConsole
// TimeUnit.MINUTES.sleep(5);
broker.stop();
}
}
Example output:
message info via session browser
JMSMessageID = ID:hostname-50075-1467979678722-3:1:1:1:1
JMSDestination = queue://TEST_QUEUE
JMSXMimeType = jms/bytes-message
BodyLength = 1000
JMSMessageID = ID:hostname-50075-1467979678722-3:1:1:1:2
JMSDestination = queue://TEST_QUEUE
JMSXMimeType = jms/text-message
text.length = 1000
message info via JMX browse operation
CompositeType = org.apache.activemq.command.ActiveMQBytesMessage
JMSMessageID = ID:hostname-50075-1467979678722-3:1:1:1:1
BodyLength = 1000
size of BodyPreview = 255
CompositeType = org.apache.activemq.command.ActiveMQTextMessage
JMSMessageID = ID:hostname-50075-1467979678722-3:1:1:1:2
Text.length() = 1000
I think there is a hardcoded limitation in ActiveMQ when you query and browse the queues using the JMX API, which is what Hawtio uses, but I cannot remember if it's only 255 bytes or higher.
Look in the Hawtio settings; there may be an ActiveMQ plugin setting to change the 255 chars, can't remember either ;)
