springboot-kafka java 8 time serialization - spring-boot

Currently working with Spring Boot 2.0.4 and spring-kafka 2.1.8.RELEASE.
I wanted to simplify the interchange a bit by sending objects to the Kafka template and using JSON as the format. Some of the messages that need to be deserialized, however, contain java.time.LocalDateTime. So my setup is:
Config (application.yml):
spring:
  jackson:
    serialization:
      write_dates_as_timestamps: false
  kafka:
    consumer:
      group-id: foo
      enable-auto-commit: true
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.springframework.kafka.support.serializer.JsonDeserializer
      properties:
        spring.json.trusted.packages: my.package
    producer:
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      properties:
        spring.json.trusted.packages: my.package
      retries: 3
      acks: all
As for the Jackson dependencies that are supposed to be needed for this to work, my dependency tree is:
[INFO] | | +- com.fasterxml.jackson.core:jackson-databind:jar:2.9.6:compile
[INFO] | | | +- com.fasterxml.jackson.core:jackson-annotations:jar:2.9.0:compile
[INFO] | | | \- com.fasterxml.jackson.core:jackson-core:jar:2.9.6:compile
[INFO] | | \- com.fasterxml.jackson.datatype:jackson-datatype-jsr310:jar:2.9.6:compile
[INFO] | | +- com.fasterxml.jackson.datatype:jackson-datatype-jdk8:jar:2.9.6:compile
[INFO] | | \- com.fasterxml.jackson.module:jackson-module-parameter-names:jar:2.9.6:compile
This, however, produces the following error:
org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition Foo-0 at offset 4. If needed, please seek past the record to continue consumption.
Caused by: org.apache.kafka.common.errors.SerializationException: Can't deserialize data [[123, 34, 105, 100, 34, 58, 34, 97, 50, 99, 50, 56, 99, 99, 101, 97, 49, 98, 98, 52, 51, 97, 97, 56, 53, 50, 49, 53, 99, 101, 49, 54, 57, 48, 52, 51, 51, 98, 51, 45, 50, 34, 44, 34, 97, 117, 116, 104, 111, 114, 34, 58, 34, 97, 110, 116, 111, 110, 105, 111, 34, 44, 34, 99, 114, 101, 97, 116, 101, 100, 34, 58, 123, 34, 104, 111, 117, 114, 34, 58, 49, 56, 44, 34, 109, 105, 110, 117, 116, 101, 34, 58, 52, 48, 44, 34, 115, 101, 99, 111, 110, 100, 34, 58, 53, 49, 44, 34, 110, 97, 110, 111, 34, 58, 51, 50, 53, 48, 48, 48, 48, 48, 48, 44, 34, 100, 97, 121, 79, 102, 89, 101, 97, 114, 34, 58, 50, 52, 48, 44, 34, 100, 97, 121, 79, 102, 87, 101, 101, 107, 34, 58, 34, 84, 85, 69, 83, 68, 65, 89, 34, 44, 34, 109, 111, 110, 116, 104, 34, 58, 34, 65, 85, 71, 85, 83, 84, 34, 44, 34, 100, 97, 121, 79, 102, 77, 111, 110, 116, 104, 34, 58, 50, 56, 44, 34, 121, 101, 97, 114, 34, 58, 50, 48, 49, 56, 44, 34, 109, 111, 110, 116, 104, 86, 97, 108, 117, 101, 34, 58, 56, 44, 34, 99, 104, 114, 111, 110, 111, 108, 111, 103, 121, 34, 58, 123, 34, 99, 97, 108, 101, 110, 100, 97, 114, 84, 121, 112, 101, 34, 58, 34, 105, 115, 111, 56, 54, 48, 49, 34, 44, 34, 105, 100, 34, 58, 34, 73, 83, 79, 34, 125, 125, 44, 34, 97, 103, 103, 114, 101, 103, 97, 116, 101, 73, 100, 34, 58, 34, 97, 50, 99, 50, 56, 99, 99, 101, 97, 49, 98, 98, 52, 51, 97, 97, 56, 53, 50, 49, 53, 99, 101, 49, 54, 57, 48, 52, 51, 51, 98, 51, 34, 44, 34, 118, 101, 114, 115, 105, 111, 110, 34, 58, 48, 44, 34, 112, 114, 105, 122, 101, 73, 110, 102, 111, 34, 58, 123, 34, 110, 117, 109, 98, 101, 114, 79, 102, 87, 105, 110, 110, 101, 114, 115, 34, 58, 49, 44, 34, 112, 114, 105, 122, 101, 80, 111, 111, 108, 34, 58, 49, 48, 44, 34, 112, 114, 105, 122, 101, 84, 97, 98, 108, 101, 34, 58, 91, 49, 48, 93, 125, 125]] from topic [Foo]
Caused by: com.fasterxml.jackson.databind.exc.MismatchedInputException: Expected array or string.
at [Source: (byte[])"{"id":"a2c28ccea1bb43aa85215ce1690433b3-2","author":"foo","created":{"hour":18,"minute":40,"second":51,"nano":325000000,"dayOfYear":240,"dayOfWeek":"TUESDAY","month":"AUGUST","dayOfMonth":28,"year":2018,"monthValue":8,"chronology":{"calendarType":"iso8601","id":"ISO"}},"aggregateId":"a2c28ccea1bb43aa85215ce1690433b3","version":0,"prizeInfo":{"numberOfWinners":1,"prizePool":10,"prizeTable":[10]}}"; line: 1, column: 73] (through reference chain: my.package.Foo["created"])
at com.fasterxml.jackson.databind.exc.MismatchedInputException.from(MismatchedInputException.java:63) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.DeserializationContext.reportInputMismatch(DeserializationContext.java:1342) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.DeserializationContext.handleUnexpectedToken(DeserializationContext.java:1138) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.datatype.jsr310.deser.JSR310DeserializerBase._handleUnexpectedToken(JSR310DeserializerBase.java:99) ~[jackson-datatype-jsr310-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer.deserialize(LocalDateTimeDeserializer.java:141) ~[jackson-datatype-jsr310-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer.deserialize(LocalDateTimeDeserializer.java:39) ~[jackson-datatype-jsr310-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.impl.FieldProperty.deserializeAndSet(FieldProperty.java:136) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserializeFromObject(BeanDeserializer.java:369) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:159) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.ObjectReader._bindAndClose(ObjectReader.java:1611) ~[jackson-databind-2.9.6.jar:2.9.6]
at com.fasterxml.jackson.databind.ObjectReader.readValue(ObjectReader.java:1234) ~[jackson-databind-2.9.6.jar:2.9.6]
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:228) ~[spring-kafka-2.1.8.RELEASE.jar:2.1.8.RELEASE]
at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:923) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.access$2600(Fetcher.java:93) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1100) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1200(Fetcher.java:949) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:570) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:531) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1154) ~[kafka-clients-1.0.2.jar:na]
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111) ~[kafka-clients-1.0.2.jar:na]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:699) ~[spring-kafka-2.1.8.RELEASE.jar:2.1.8.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_131]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_131]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_131]
Due to this I have tried the following, but none has worked so far:
1. Custom ObjectMapper declared as a bean
@Bean
public ObjectMapper objectMapper() {
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.registerModule(new JavaTimeModule());
    objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
    return objectMapper;
}
2. Serializer annotation on LocalDateTime fields (sketched below)
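For reference, a minimal sketch of what that annotation-based attempt looks like, using Jackson's JSR-310 (de)serializers; the field name here is just illustrative:

import com.fasterxml.jackson.databind.annotation.JsonDeserialize;
import com.fasterxml.jackson.databind.annotation.JsonSerialize;
import com.fasterxml.jackson.datatype.jsr310.deser.LocalDateTimeDeserializer;
import com.fasterxml.jackson.datatype.jsr310.ser.LocalDateTimeSerializer;

// hypothetical field on the payload class
@JsonSerialize(using = LocalDateTimeSerializer.class)
@JsonDeserialize(using = LocalDateTimeDeserializer.class)
private LocalDateTime created;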
To be sure that I have the correct object mapper settings and the necessary dependencies, I created a REST controller that simulates the response, returning an object with date-time fields as JSON. This returns correctly; sample:
[
    {
        "playerId": "foo",
        "points": 10,
        "entryDateTime": "2018-08-19T09:30:20.051"
    },
    {
        "playerId": "bar",
        "points": 3,
        "entryDateTime": "2018-08-27T09:30:20.051"
    }
]

Using the Json(De)Serializer constructor with the ObjectMapper param worked for me. I was having trouble (de)serializing a POJO that had a java.time.Instant field, so after hours of troubleshooting this same org.apache.kafka.common.errors.SerializationException***, I finally realized (with the help of answers such as those here) that the issue is not Spring, but Kafka's own serialization. Given the ObjectMapper bean I had, I resolved it by autowiring this into the JsonSerializer and JsonDeserializer parameters of my Kafka producer and consumer set-ups.
@Configuration
public class JacksonConfig {

    @Bean
    @Primary
    public ObjectMapper objectMapper(Jackson2ObjectMapperBuilder builder) {
        ObjectMapper objectMapper = builder.build();
        objectMapper.registerModule(new JavaTimeModule());
        objectMapper.disable(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS);
        return objectMapper;
    }
}
@Configuration
public class KafkaProducerConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Autowired
    private ObjectMapper objectMapper;

    @Bean
    public KafkaTemplate<String, Order> orderKafkaTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        ProducerFactory<String, Order> producerFactory = new DefaultKafkaProducerFactory<>(props,
                new StringSerializer(), new JsonSerializer<Order>(objectMapper));
        return new KafkaTemplate<>(producerFactory);
    }
}
@Configuration
public class KafkaConsumerConfig {

    @Value(value = "${kafka.bootstrapAddress}")
    private String bootstrapAddress;

    @Value(value = "${kafka.consumer.groupId}")
    private String groupId;

    @Autowired
    private ObjectMapper objectMapper;

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Order> orderKafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Order> factory = new ConcurrentKafkaListenerContainerFactory<>();
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapAddress);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, groupId);
        ConsumerFactory<String, Order> consumerFactory = new DefaultKafkaConsumerFactory<>(props,
                new StringDeserializer(), new JsonDeserializer<>(Order.class, objectMapper));
        factory.setConsumerFactory(consumerFactory);
        return factory;
    }
}
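A listener can then reference the container factory by bean name; a minimal sketch, assuming a topic named orders:

@KafkaListener(topics = "orders", containerFactory = "orderKafkaListenerContainerFactory")
public void listen(Order order) {
    // the Instant field is populated by the customized ObjectMapper
}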
(POJO shown for further clarity)
public class Order {

    private long accountId;
    private long assetId;
    private long quantity;
    private long price;
    private Instant createdOn = Instant.now();

    // no-args constructor, constructor with params for all fields except createdOn,
    // and getters/setters for all fields omitted
}
*** often the cause was: com.fasterxml.jackson.databind.exc.InvalidDefinitionException: Cannot construct instance of 'java.time.Instant' (no Creators, like default construct, exist): cannot deserialize from object value (no delegate- or property-based Creator) at [Source: (byte[])"{"accountId":1,"assetId":2,"quantity":100,"price":1000,"createdOn":{"epochSecond":1558570217,"nano":728000000}}"...

When you set the serializers/deserializers using properties, Kafka instantiates them, not Spring. Kafka knows nothing about Spring or the customized ObjectMapper.
You need to override Boot's default producer/consumer factories and use the alternate constructors (or setters) to add the serializers/deserializers.
See the documentation.
Important
Only simple configuration can be performed with properties; for more advanced configuration (such as using a custom ObjectMapper in the serializer/deserializer), you should use the producer/consumer factory constructors that accept a pre-built serializer and deserializer. For example, with Spring Boot, to override the default factories:
@Bean
public ConsumerFactory<Foo, Bar> kafkaConsumerFactory(KafkaProperties properties,
        JsonDeserializer customDeserializer) {
    return new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties(),
            customDeserializer, customDeserializer);
}

@Bean
public ProducerFactory<Foo, Bar> kafkaProducerFactory(KafkaProperties properties,
        JsonSerializer customSerializer) {
    return new DefaultKafkaProducerFactory<>(properties.buildProducerProperties(),
            customSerializer, customSerializer);
}
Setters are also provided, as an alternative to using these constructors.
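A sketch of the setter variant, assuming the same customDeserializer bean as above:

@Bean
public ConsumerFactory<Foo, Bar> kafkaConsumerFactory(KafkaProperties properties,
        JsonDeserializer customDeserializer) {
    DefaultKafkaConsumerFactory<Foo, Bar> factory =
            new DefaultKafkaConsumerFactory<>(properties.buildConsumerProperties());
    factory.setKeyDeserializer(customDeserializer);
    factory.setValueDeserializer(customDeserializer);
    return factory;
}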

You can extend Spring Kafka's JsonDeserializer (the consumer-side counterpart of JsonSerializer) and register the module in the constructor:
public class JsonDeserializerWithJTM<T> extends JsonDeserializer<T> {

    public JsonDeserializerWithJTM() {
        super();
        objectMapper.registerModule(new JavaTimeModule());
        // whatever else you want to configure here
    }
}
Use this class in Kafka's configuration instead of the original one:
spring:
  kafka:
    consumer:
      value-deserializer: com.foo.JsonDeserializerWithJTM
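The producer side can be handled the same way; a minimal sketch of the matching serializer and its configuration, assuming the same package:

public class JsonSerializerWithJTM<T> extends JsonSerializer<T> {

    public JsonSerializerWithJTM() {
        super();
        objectMapper.registerModule(new JavaTimeModule());
    }
}

spring:
  kafka:
    producer:
      value-serializer: com.foo.JsonSerializerWithJTM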

Related

Spring Cloud Data Flow urlExpression of httpclient cannot be set properly

I parse image URLs from JSON produced by Twitter with Spring Cloud Data Flow, and I'd like to download the images with httpclient.
Here is the pipeline:
twitterstream --twitter.credentials.consumerKey=*** --twitter.credentials.consumerSecret=*** --twitter.credentials.accessToken=*** --twitter.credentials.accessTokenSecret=*** | splitter --expression=#jsonPath(payload,'$.entities.media[*].media_url') | httpclient --httpclient.httpMethod=GET --httpclient.urlExpression=payload | log
If I exclude the httpclient, the following entry appears in the log, so I suppose the URL extraction is successful and the httpclient receives the URL.
2019-12-21 12:46:23.120 INFO 1 --- [container-0-C-1] log-sink : http://pbs.twimg.com/media/EMBcr-XVAAARY7x.png
I get the following exception from the httpclient (URI is not absolute):
2019-12-21 19:17:06.741 ERROR 1 --- [container-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: ConsumerRecord(topic = test.splitter, partition = 0, offset = 74, CreateTime = 1576954352460, serialized key size = -1, serialized value size = 87, headers = RecordHeaders(headers = [RecordHeader(key = sequenceNumber, value = [49]), RecordHeader(key = sequenceSize, value = [49]), RecordHeader(key = deliveryAttempt, value = [49]), RecordHeader(key = scst_nativeHeadersPresent, value = [116, 114, 117, 101]), RecordHeader(key = correlationId, value = [34, 56, 98, 57, 100, 56, 99, 57, 99, 45, 49, 100, 52, 50, 45, 55, 97, 101, 54, 45, 50, 54, 49, 55, 45, 50, 98, 102, 52, 101, 51, 100, 99, 98, 57, 55, 50, 34]), RecordHeader(key = contentType, value = [34, 97, 112, 112, 108, 105, 99, 97, 116, 105, 111, 110, 47, 106, 115, 111, 110, 34]), RecordHeader(key = spring_json_header_types, value = [123, 34, 115, 101, 113, 117, 101, 110, 99, 101, 78, 117, 109, 98, 101, 114, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 73, 110, 116, 101, 103, 101, 114, 34, 44, 34, 115, 99, 115, 116, 95, 110, 97, 116, 105, 118, 101, 72, 101, 97, 100, 101, 114, 115, 80, 114, 101, 115, 101, 110, 116, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 66, 111, 111, 108, 101, 97, 110, 34, 44, 34, 115, 101, 113, 117, 101, 110, 99, 101, 83, 105, 122, 101, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 73, 110, 116, 101, 103, 101, 114, 34, 44, 34, 100, 101, 108, 105, 118, 101, 114, 121, 65, 116, 116, 101, 109, 112, 116, 34, 58, 34, 106, 97, 118, 97, 46, 117, 116, 105, 108, 46, 99, 111, 110, 99, 117, 114, 114, 101, 110, 116, 46, 97, 116, 111, 109, 105, 99, 46, 65, 116, 111, 109, 105, 99, 73, 110, 116, 101, 103, 101, 114, 34, 44, 34, 99, 111, 114, 114, 101, 108, 97, 116, 105, 111, 110, 73, 100, 34, 58, 34, 106, 97, 118, 97, 46, 117, 116, 105, 108, 46, 85, 85, 73, 68, 34, 44, 34, 99, 111, 110, 116, 101, 110, 116, 84, 121, 112, 101, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 125])], isReadOnly = false), key = null, value = [B#73d60921)
org.springframework.integration.transformer.MessageTransformationException: Failed to transform Message; nested exception is org.springframework.messaging.MessageHandlingException: nested exception is java.lang.IllegalArgumentException: URI is not absolute, failedMessage=GenericMessage [payload=byte[87], headers={sequenceNumber=1, sequenceSize=1, deliveryAttempt=3, kafka_timestampType=CREATE_TIME, kafka_receivedMessageKey=null, kafka_receivedTopic=test.splitter, kafka_offset=74, scst_nativeHeadersPresent=true, kafka_consumer=org.apache.kafka.clients.consumer.KafkaConsumer#103e165f, correlationId=8b9d8c9c-1d42-7ae6-2617-2bf4e3dcb972, kafka_receivedPartitionId=0, contentType=application/json, kafka_receivedTimestamp=1576954352460}]
at org.springframework.integration.transformer.MessageTransformingHandler.handleRequestMessage(MessageTransformingHandler.java:115) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:123) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:169) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:453) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:401) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187) ~[spring-messaging-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166) ~[spring-messaging-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) ~[spring-messaging-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109) ~[spring-messaging-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:205) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:369) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$400(KafkaMessageDrivenChannelAdapter.java:74) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:431) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:402) ~[spring-integration-kafka-3.1.0.RELEASE.jar!/:3.1.0.RELEASE]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:120) ~[spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287) ~[spring-retry-1.2.4.RELEASE.jar!/:na]
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:211) ~[spring-retry-1.2.4.RELEASE.jar!/:na]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:114) ~[spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:40) ~[spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:1278) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:1261) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1222) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1203) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:1123) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:938) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:751) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:700) [spring-kafka-2.2.8.RELEASE.jar!/:2.2.8.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_192]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_192]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_192]
Caused by: org.springframework.messaging.MessageHandlingException: nested exception is java.lang.IllegalArgumentException: URI is not absolute
at org.springframework.integration.handler.LambdaMessageProcessor.processMessage(LambdaMessageProcessor.java:111) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.transformer.AbstractMessageProcessingTransformer.transform(AbstractMessageProcessingTransformer.java:113) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
at org.springframework.integration.transformer.MessageTransformingHandler.handleRequestMessage(MessageTransformingHandler.java:109) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
... 33 common frames omitted
Caused by: java.lang.IllegalArgumentException: URI is not absolute
at java.net.URI.toURL(URI.java:1088) ~[na:1.8.0_192]
at org.springframework.http.client.SimpleClientHttpRequestFactory.createRequest(SimpleClientHttpRequestFactory.java:145) ~[spring-web-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.http.client.support.HttpAccessor.createRequest(HttpAccessor.java:87) ~[spring-web-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:731) ~[spring-web-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.web.client.RestTemplate.exchange(RestTemplate.java:637) ~[spring-web-5.1.9.RELEASE.jar!/:5.1.9.RELEASE]
at org.springframework.cloud.stream.app.httpclient.processor.HttpclientProcessorFunctionConfiguration.lambda$httpRequest$0(HttpclientProcessorFunctionConfiguration.java:102) ~[spring-cloud-starter-stream-processor-httpclient-2.1.2.RELEASE.jar!/:2.1.2.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_192]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_192]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_192]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_192]
at org.springframework.integration.handler.LambdaMessageProcessor.processMessage(LambdaMessageProcessor.java:102) ~[spring-integration-core-5.1.7.RELEASE.jar!/:5.1.7.RELEASE]
... 35 common frames omitted
I think the problem is with the --httpclient.urlExpression=payload parameter, where I try to reference the URL via the payload keyword. What exactly is the problem?
UPDATE:
I think the problem is that the payload is a series of ASCII character codes. How can I create a string from that?
The problem is that the payload is a series of ASCII codes, and I solved the issue by setting --httpclient.urlExpression='new String(payload)'. It converts the ASCII codes to a string, but I don't think it's the best solution, so I'm waiting for a better one.

Message conversion in SCDF on Kafka and NonTrustedHeaders

I am having a hard time figuring out how to get a simple SCDF pipeline functional.
I am using a local setup:
{
  "versionInfo": {
    "implementation": {
      "name": "spring-cloud-dataflow-server-local",
      "version": "1.6.0.BUILD-SNAPSHOT"
    },
    "core": {
      "name": "Spring Cloud Data Flow Core",
      "version": "1.6.0.BUILD-SNAPSHOT"
    },
    "dashboard": {
      "name": "Spring Cloud Dataflow UI",
      "version": "1.6.0.M1"
    },
    "shell": {
      "name": "Spring Cloud Data Flow Shell",
      "version": "1.6.0.BUILD-SNAPSHOT",
      "url": "https://repo.spring.io/libs-snapshot/org/springframework/cloud/spring-cloud-dataflow-shell/1.6.0.BUILD-SNAPSHOT/spring-cloud-dataflow-shell-1.6.0.BUILD-SNAPSHOT.jar"
    }
  },
  "featureInfo": {
    "analyticsEnabled": true,
    "streamsEnabled": true,
    "tasksEnabled": true,
    "skipperEnabled": false
  },
  "securityInfo": {
    "isAuthenticationEnabled": false,
    "isAuthorizationEnabled": false,
    "isFormLogin": false,
    "isAuthenticated": false,
    "username": null,
    "roles": []
  },
  "runtimeEnvironment": {
    "appDeployer": {
      "platformSpecificInfo": {},
      "deployerImplementationVersion": "1.3.7.RELEASE",
      "deployerName": "LocalAppDeployer",
      "deployerSpiVersion": "1.3.2.RELEASE",
      "javaVersion": "1.8.0_45",
      "platformApiVersion": "Mac OS X 10.13.4",
      "platformClientVersion": "10.13.4",
      "platformHostVersion": "10.13.4",
      "platformType": "Local",
      "springBootVersion": "1.5.14.RELEASE",
      "springVersion": "4.3.18.RELEASE"
    },
    "taskLauncher": {
      "platformSpecificInfo": {},
      "deployerImplementationVersion": "1.3.7.RELEASE",
      "deployerName": "LocalTaskLauncher",
      "deployerSpiVersion": "1.3.2.RELEASE",
      "javaVersion": "1.8.0_45",
      "platformApiVersion": "Mac OS X 10.13.4",
      "platformClientVersion": "10.13.4",
      "platformHostVersion": "10.13.4",
      "platformType": "Local",
      "springBootVersion": "1.5.14.RELEASE",
      "springVersion": "4.3.18.RELEASE"
    }
  }
}
The pipeline is pretty simple:
http --port=9191 | transform --expression=payload.toUpperCase() | log
When I trigger the http endpoint with cURL like this:
curl -v -H"Referer: http://localhost:8080" -H"Content-Type: text/plain" -XPOST localhost:9191/ -d 'test'
I see the following error message in the logfile of the transform processor:
2018-07-11 09:56:59.758 ERROR 66396 --- [container-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: ConsumerRecord(topic = edded.http, partition = 0, offset = 0, CreateTime = 1531295816669, serialized key size = -1, serialized value size = 17, headers = RecordHeaders(headers = [RecordHeader(key = referer, value = [34, 104, 116, 116, 112, 58, 47, 47, 108, 111, 99, 97, 108, 104, 111, 115, 116, 58, 56, 48, 56, 48, 34]), RecordHeader(key = content-length, value = [49, 55]), RecordHeader(key = http_requestMethod, value = [34, 80, 79, 83, 84, 34]), RecordHeader(key = host, value = [34, 108, 111, 99, 97, 108, 104, 111, 115, 116, 58, 57, 49, 57, 49, 34]), RecordHeader(key = http_requestUrl, value = [34, 104, 116, 116, 112, 58, 47, 47, 108, 111, 99, 97, 108, 104, 111, 115, 116, 58, 57, 49, 57, 49, 47, 34]), RecordHeader(key = contentType, value = [123, 34, 116, 121, 112, 101, 34, 58, 34, 116, 101, 120, 116, 34, 44, 34, 115, 117, 98, 116, 121, 112, 101, 34, 58, 34, 112, 108, 97, 105, 110, 34, 44, 34, 112, 97, 114, 97, 109, 101, 116, 101, 114, 115, 34, 58, 123, 34, 99, 104, 97, 114, 115, 101, 116, 34, 58, 34, 85, 84, 70, 45, 56, 34, 125, 44, 34, 113, 117, 97, 108, 105, 116, 121, 86, 97, 108, 117, 101, 34, 58, 49, 46, 48, 44, 34, 99, 104, 97, 114, 115, 101, 116, 34, 58, 34, 85, 84, 70, 45, 56, 34, 44, 34, 119, 105, 108, 100, 99, 97, 114, 100, 84, 121, 112, 101, 34, 58, 102, 97, 108, 115, 101, 44, 34, 119, 105, 108, 100, 99, 97, 114, 100, 83, 117, 98, 116, 121, 112, 101, 34, 58, 102, 97, 108, 115, 101, 44, 34, 99, 111, 110, 99, 114, 101, 116, 101, 34, 58, 116, 114, 117, 101, 125]), RecordHeader(key = user-agent, value = [34, 77, 111, 122, 105, 108, 108, 97, 47, 53, 46, 48, 32, 40, 99, 111, 109, 112, 97, 116, 105, 98, 108, 101, 59, 32, 77, 83, 73, 69, 32, 57, 46, 48, 59, 32, 87, 105, 110, 100, 111, 119, 115, 32, 78, 84, 32, 54, 46, 49, 59, 32, 84, 114, 105, 100, 101, 110, 116, 47, 53, 46, 48, 41, 34]), RecordHeader(key = accept, value = [123, 34, 116, 121, 112, 101, 34, 58, 34, 42, 34, 44, 34, 115, 117, 98, 116, 121, 112, 101, 34, 58, 34, 42, 34, 44, 34, 112, 97, 114, 97, 109, 101, 116, 101, 114, 115, 34, 58, 123, 125, 44, 34, 113, 117, 97, 108, 105, 116, 121, 86, 97, 108, 117, 101, 34, 58, 49, 46, 48, 44, 34, 99, 104, 97, 114, 115, 101, 116, 34, 58, 110, 117, 108, 108, 44, 34, 119, 105, 108, 100, 99, 97, 114, 100, 84, 121, 112, 101, 34, 58, 116, 114, 117, 101, 44, 34, 119, 105, 108, 100, 99, 97, 114, 100, 83, 117, 98, 116, 121, 112, 101, 34, 58, 116, 114, 117, 101, 44, 34, 99, 111, 110, 99, 114, 101, 116, 101, 34, 58, 102, 97, 108, 115, 101, 125]), RecordHeader(key = spring_json_header_types, value = [123, 34, 114, 101, 102, 101, 114, 101, 114, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 44, 34, 99, 111, 110, 116, 101, 110, 116, 45, 108, 101, 110, 103, 116, 104, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 76, 111, 110, 103, 34, 44, 34, 104, 116, 116, 112, 95, 114, 101, 113, 117, 101, 115, 116, 77, 101, 116, 104, 111, 100, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 44, 34, 104, 111, 115, 116, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 44, 34, 104, 116, 116, 112, 95, 114, 101, 113, 117, 101, 115, 116, 85, 114, 108, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 44, 34, 99, 111, 110, 116, 101, 110, 116, 84, 121, 112, 101, 34, 58, 34, 111, 114, 103, 46, 115, 112, 114, 105, 110, 103, 102, 
114, 97, 109, 101, 119, 111, 114, 107, 46, 104, 116, 116, 112, 46, 77, 101, 100, 105, 97, 84, 121, 112, 101, 34, 44, 34, 117, 115, 101, 114, 45, 97, 103, 101, 110, 116, 34, 58, 34, 106, 97, 118, 97, 46, 108, 97, 110, 103, 46, 83, 116, 114, 105, 110, 103, 34, 44, 34, 97, 99, 99, 101, 112, 116, 34, 58, 34, 111, 114, 103, 46, 115, 112, 114, 105, 110, 103, 102, 114, 97, 109, 101, 119, 111, 114, 107, 46, 104, 116, 116, 112, 46, 77, 101, 100, 105, 97, 84, 121, 112, 101, 34, 125])], isReadOnly = false), key = null, value = [B#4bc28689)
org.springframework.messaging.MessageHandlingException: nested exception is org.springframework.expression.spel.SpelEvaluationException: EL1004E: Method call: Method toUpperCase() cannot be found on type byte[]
at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:107) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:93) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:109) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:158) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:132) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:105) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:73) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:445) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:394) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:181) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:160) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:108) ~[spring-messaging-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:203) ~[spring-integration-core-5.0.6.RELEASE.jar!/:5.0.6.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:70) ~[spring-integration-kafka-3.0.3.RELEASE.jar!/:3.0.3.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:387) ~[spring-integration-kafka-3.0.3.RELEASE.jar!/:3.0.3.RELEASE]
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:364) ~[spring-integration-kafka-3.0.3.RELEASE.jar!/:3.0.3.RELEASE]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:120) ~[spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter$$Lambda$659/1406308390.doWithRetry(Unknown Source) ~[na:na]
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:287) ~[spring-retry-1.2.2.RELEASE.jar!/:na]
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:211) ~[spring-retry-1.2.2.RELEASE.jar!/:na]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:114) ~[spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:40) ~[spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:1071) [spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:1051) [spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:998) [spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:866) [spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:724) [spring-kafka-2.1.7.RELEASE.jar!/:2.1.7.RELEASE]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_45]
at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_45]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Since I've provided the Content-Type header in the HTTP request, and after reading this blog post, I assumed that during message conversion the payload of the message (I understand the default wire format for Kafka is byte[]) would then be converted to a String representation. However, the type of the Message.payload that TransformProcessorConfiguration.transform receives is still byte[].
Does this behavior have something to do with the fact that the Content-Type header appears as a NonTrustedHeaderType in the MessagingMessageConverter.toMessage() call? Stepping through with the debugger shows the following for the contentType header:
headerValue = {"type":"text","subtype":"plain","parameters":{"charset":"UTF-8"},"qualityValue":1.0,"charset":"UTF-8","wildcardType":false,"wildcardSubtype":false,"concrete":true}
untrustedType = "org.springframework.http.MediaType"
This is the list of rawHeaders that the MessagingMessageConverter resolves:
"referer"->"http://localhost:8080"
"content-length"->"17"
"http_requestMethod"->"POST"
"kafka_timestampType"->"CREATE_TIME"
"kafka_receivedMessageKey"->"null"
"kafka_receivedTopic"->"edded.http"
"accept"->"NonTrustedHeaderType
"kafka_offset"->"1"
"scst_nativeHeadersPresent"->"true"
"kafka_consumer"->
"host"->"localhost:9191"
"http_requestUrl"->"http://localhost:9191/"
"kafka_receivedPartitionId"->"0"
"contentType"->"NonTrustedHeaderType
"kafka_receivedTimestamp"->"1531296520235"
"user-agent"->"Mozilla/5.0
Another potentially related issue that I found is described here. However, I have no clue how to control the mapper's trustedPackages via binder properties, if that is at all related to my problem.
I also tried setting app.*.spring.cloud.stream.bindings.input.producer.headerMode=raw in the deployment properties, but it did not have any effect.
Thanks!
Actually, the blog you pointed to should not lead to the assumption that there will be conversion based on the content-type header. Conversion is done only based on the type required by the handler, and if that type is generic (i.e., Object) or byte[], no conversion is performed. What is the signature of the TransformProcessorConfiguration.transform(..) method? Also, if you are attempting any kind of SpEL evaluation on the payload, you must assume that it is always a byte[], since conversion happens only when a handler method is about to be invoked. So if you are using some expression or condition on the payload and assuming it is a String, don't.
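In other words, the payload has to be decoded explicitly in the expression itself; a sketch of the corrected pipeline (SpEL allows new String(...) without a package prefix):

http --port=9191 | transform --expression='new String(payload).toUpperCase()' | log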

Standalone JMS to Mainframe MQ in EBCDIC

I am new to WebSphere MQ (IBM z/OS) technologies. We had a requirement to implement a standalone application that uses JMS to connect to an MQ server on IBM z/OS (maintained by a different organization, to which we have only limited access) and put a message on the queue.
Here are the relevant pieces of my code:
private void sendMessage(String queue, String msg) throws JMSException {
    JmsFactoryFactory ff = JmsFactoryFactory.getInstance(WMQConstants.WMQ_PROVIDER);
    JmsConnectionFactory cf = ff.createConnectionFactory();
    cf.setStringProperty(WMQConstants.WMQ_HOST_NAME, host);
    cf.setIntProperty(WMQConstants.WMQ_PORT, port);
    cf.setStringProperty(WMQConstants.WMQ_CHANNEL, channel);
    cf.setIntProperty(WMQConstants.WMQ_CONNECTION_MODE, WMQConstants.WMQ_CM_CLIENT);
    cf.setStringProperty(WMQConstants.WMQ_QUEUE_MANAGER, queueManagerName);
    cf.setStringProperty(WMQConstants.USERID, user);
    cf.setStringProperty(WMQConstants.PASSWORD, password);

    Connection connection = null;
    Session session = null;
    Destination destination = null;
    MessageProducer producer = null;

    connection = cf.createConnection(user, password);
    session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    destination = session.createQueue(queue);
    //((MQDestination) destination).setCCSID(37);
    producer = session.createProducer(destination);

    TextMessage message = session.createTextMessage();
    message.setIntProperty(WMQConstants.JMS_IBM_CHARACTER_SET, 37);
    //message.setIntProperty(WMQConstants.JMS_IBM_ENCODING, 785);
    message.setText(msg);

    // Start the connection
    connection.start();
    // And, send the message
    producer.send(message);
}
I was able to successfully connect to the MQ server on the other end and put messages on the remote queue in ASCII format. I was also able to consume the messages I had put on the queue from an AIX server.
But since MQ is running on z/OS and the consumer is also a mainframe application, the message I put appears in a garbage/unreadable format. After some research I figured out that messages need to be converted to EBCDIC to be put on z/OS MQ. I expected that this would be taken care of by the IBM MQ libraries.
Please help with how I can put the messages in EBCDIC format.
You are doing this wrong:
message.setIntProperty(WMQConstants.JMS_IBM_CHARACTER_SET, 37);
You need to declare the character set that you are putting on the queue. Since that looks like Java, I'm assuming it is a UTF-16 string. Declare it as 1208, not 37.
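That is, keep the property from the code above but declare the actual encoding (CCSID 1208 is UTF-8):

message.setIntProperty(WMQConstants.JMS_IBM_CHARACTER_SET, 1208);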
On the other end, if they want it in EBCDIC, they will do a GET-With-Convert, declaring that they want to receive it in IBM 37/1140 and MQ will invoke Unicode Conversion Services for z/OS and make it happen.
More importantly, if your receiver is not a Java client, you need to disable the JMS header as follows:
destination = session.createQueue("queue:///" + queue + "?targetClient=1")
or by invoking the native MQ implementation:
((MQDestination)destination).setMessageBodyStyle(WMQConstants.WMQ_MESSAGE_BODY_MQ)
See:
https://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q032120_.htm
https://www-01.ibm.com/support/knowledgecenter/SSFKSJ_8.0.0/com.ibm.mq.dev.doc/q032140_.htm?lang=en
http://www-01.ibm.com/support/knowledgecenter/SSFKSJ_7.0.1/com.ibm.mq.csqzaw.doc/jm10910_.htm
To store your message in a non-standard encoding you will have to use a BytesMessage instead of a TextMessage. This might work (untested!):
byte[] messageBytes = msg.getBytes("IBM037");
BytesMessage message = session.createBytesMessage();
message.writeBytes(messageBytes);
But it would be preferable to have the given message-encoding respected at the consuming side - that's why you put it there.
If possible, use MQ's MQGMO convert option to convert into the local machine's character set. But if you don't want to (or can't) use that mechanism, you can also implement your own character set translation to have full control. For example:
//---------------------------------------------------
//Character Translation Table for: IBM500
private static char[] EBCDIC2ASCII_IBM500 = new char[] {
0, 1, 2, 3, 156, 9, 134, 127, 151, 141, 142, 11, 12, 13, 14, 15,
16, 17, 18, 19, 157, 10, 8, 135, 24, 25, 146, 143, 28, 29, 30, 31,
128, 129, 130, 131, 132, 10, 23, 27, 136, 137, 138, 139, 140, 5, 6, 7,
144, 145, 22, 147, 148, 149, 150, 4, 152, 153, 154, 155, 20, 21, 158, 26,
32, 160, 226, 228, 224, 225, 227, 229, 231, 241, 91, 46, 60, 40, 43, 33,
38, 233, 234, 235, 232, 237, 238, 239, 236, 223, 93, 36, 42, 41, 59, 94,
45, 47, 194, 196, 192, 193, 195, 197, 199, 209, 166, 44, 37, 95, 62, 63,
248, 201, 202, 203, 200, 205, 206, 207, 204, 96, 58, 35, 64, 39, 61, 34,
216, 97, 98, 99, 100, 101, 102, 103, 104, 105, 171, 187, 240, 253, 254, 177,
176, 106, 107, 108, 109, 110, 111, 112, 113, 114, 170, 186, 230, 184, 198, 164,
181, 126, 115, 116, 117, 118, 119, 120, 121, 122, 161, 191, 208, 221, 222, 174,
162, 163, 165, 183, 169, 167, 182, 188, 189, 190, 172, 124, 175, 168, 180, 215,
123, 65, 66, 67, 68, 69, 70, 71, 72, 73, 173, 244, 246, 242, 243, 245,
125, 74, 75, 76, 77, 78, 79, 80, 81, 82, 185, 251, 252, 249, 250, 255,
92, 247, 83, 84, 85, 86, 87, 88, 89, 90, 178, 212, 214, 210, 211, 213,
48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 179, 219, 220, 217, 218, 159
};
private static char[] ASCII2EBCDIC_IBM500 = new char[] {
0, 1, 2, 3, 55, 45, 46, 47, 22, 5, 37, 11, 12, 13, 14, 15,
16, 17, 18, 19, 60, 61, 50, 38, 24, 25, 63, 39, 28, 29, 30, 31,
64, 79, 127, 123, 91, 108, 80, 125, 77, 93, 92, 78, 107, 96, 75, 97,
240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 122, 94, 76, 126, 110, 111,
124, 193, 194, 195, 196, 197, 198, 199, 200, 201, 209, 210, 211, 212, 213, 214,
215, 216, 217, 226, 227, 228, 229, 230, 231, 232, 233, 74, 224, 90, 95, 109,
121, 129, 130, 131, 132, 133, 134, 135, 136, 137, 145, 146, 147, 148, 149, 150,
151, 152, 153, 162, 163, 164, 165, 166, 167, 168, 169, 192, 187, 208, 161, 7,
32, 33, 34, 35, 36, 0, 6, 23, 40, 41, 42, 43, 44, 9, 10, 27,
48, 49, 26, 51, 52, 53, 54, 8, 56, 57, 58, 59, 4, 20, 62, 255,
65, 170, 176, 177, 159, 178, 106, 181, 189, 180, 154, 138, 186, 202, 175, 188,
144, 143, 234, 250, 190, 160, 182, 179, 157, 218, 155, 139, 183, 184, 185, 171,
100, 101, 98, 102, 99, 103, 158, 104, 116, 113, 114, 115, 120, 117, 118, 119,
172, 105, 237, 238, 235, 239, 236, 191, 128, 253, 254, 251, 252, 173, 174, 89,
68, 69, 66, 70, 67, 71, 156, 72, 84, 81, 82, 83, 88, 85, 86, 87,
140, 73, 205, 206, 203, 207, 204, 225, 112, 221, 222, 219, 220, 141, 142, 223
};
public static void main(String[] args) {
    String ebcdic = "" + (char) 0xC1 + (char) 0xC2 + (char) 0xC3;
    System.err.println("ebcdic: " + ebcdic);
    String ascii = "";
    for (char c : ebcdic.toCharArray()) {
        ascii += EBCDIC2ASCII_IBM500[c];
    }
    System.err.println("ascii: " + ascii);
    ebcdic = "";
    for (char c : ascii.toCharArray()) {
        ebcdic += ASCII2EBCDIC_IBM500[c];
    }
    System.err.println("ebcdic: " + ebcdic);
}
And here's the code to create these tables:
public static void createTranslationTable(Charset charset) {
    System.out.println();
    System.out.println("// ---------------------------------------------------");
    System.out.println("// Character Translation Tables for: " + charset.name());
    byte[] b = new byte[256];
    for (int i = 0; i < 256; i++) b[i] = (byte) i;
    String s = "";
    try {
        s = new String(b, charset.name());
    } catch (UnsupportedEncodingException e) {
        e.printStackTrace();
    }
    int[] inverse = new int[256];
    System.out.println("unsigned char EBCDIC2ASCII_" + charset.name() + "[256] = {");
    for (int i = 0; i < 256; i++) {
        int c = s.charAt(i); // %256;
        if (c > 255) c = i;
        inverse[c] = i;
        System.out.print(c + (i < 255 ? ", " : ""));
        if (i % 16 == 15) System.out.println();
    }
    System.out.println("};");
    System.out.println("unsigned char ASCII2EBCDIC_" + charset.name() + "[256] = {");
    for (int i = 0; i < 256; i++) {
        int c = inverse[i]; // %256;
        System.out.print(c + (i < 255 ? ", " : ""));
        if (i % 16 == 15) System.out.println();
    }
    System.out.println("};");
}
And you could use it like this:
createTranslationTable( Charset.forName("CP037") );
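As a sanity check, the same mapping can be exercised through the JDK's own charset support, which these generated tables mirror:

// round-trip a string through EBCDIC using the built-in IBM500 charset
byte[] ebcdicBytes = "ABC".getBytes(Charset.forName("IBM500"));
String roundTrip = new String(ebcdicBytes, Charset.forName("IBM500")); // "ABC" again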

Elasticsearch + Oracle JDBC River

Maybe this is a simple question, but I'm new to Elasticsearch. ES 1.4 is installed, Oracle 10g is up and running, and the JDBC plugin is loaded in ES without issue. Marvel is also working. I tried to create a river with this statement in Marvel/Sense:
PUT _river/mydata/_meta
{
  "type": "jdbc",
  "jdbc": {
    "driver": "oracle.jdbc.OracleDriver",
    "url": “jdbc:oracle:thin:@host:1521:SID",
    "user": “oracleusr",
    "password": “ oraclepass",
    "index": “myindex",
    "type": “mytype",
    "sql": "select * from aTable"
  }
}
And I get this error all the time:
{
"error": "MapperParsingException[failed to parse]; nested: ElasticsearchParseException[Failed to derive xcontent from (offset=0, length=323): [80, 85, 84, 32, 95, 114, 105, 118, 101, 114, 47, 109, 52, 99, 47, 95, 109, 101, 116, 97, 32, 10, 123, 10, 32, 32, 34, 116, 121, 112, 101, 34, 58, 32, 34, 106, 100, 98, 99, 34, 44, 10, 32, 32, 34, 106, 100, 98, 99, 34, 58, 32, 123, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 100, 114, 105, 118, 101, 114, 34, 58, 32, 34, 111, 114, 97, 99, 108, 101, 46, 106, 100, 98, 99, 46, 79, 114, 97, 99, 108, 101, 68, 114, 105, 118, 101, 114, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 117, 114, 108, 34, 58, 32, -30, -128, -100, 106, 100, 98, 99, 58, 111, 114, 97, 99, 108, 101, 58, 116, 104, 105, 110, 58, 64, 49, 48, 46, 49, 57, 52, 46, 49, 55, 46, 49, 55, 51, 58, 49, 53, 50, 49, 58, 109, 52, 99, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 117, 115, 101, 114, 34, 58, 32, -30, -128, -100, 109, 52, 99, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 112, 97, 115, 115, 119, 111, 114, 100, 34, 58, 32, -30, -128, -100, 32, 109, 52, 99, 49, 50, 48, 57, 48, 53, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 105, 110, 100, 101, 120, 34, 58, 32, -30, -128, -100, 109, 52, 99, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 116, 121, 112, 101, 34, 58, 32, -30, -128, -100, 112, 97, 105, 115, 34, 44, 10, 32, 32, 32, 32, 32, 32, 32, 32, 34, 115, 113, 108, 34, 58, 32, 34, 115, 101, 108, 101, 99, 116, 32, 42, 32, 102, 114, 111, 109, 32, 77, 52, 67, 80, 65, 73, 83, 34, 10, 32, 32, 32, 32, 32, 32, 125, 10, 125, 10]]; ",
"status": 400
}
On the CLI I get:
org.elasticsearch.index.mapper.MapperParsingException: failed to parse
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:562)
at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:490)
at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:413)
at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:189)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:511)
at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:419)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.elasticsearch.ElasticsearchParseException: Failed to derive xcontent from (offset=0, length=323)
I first thought it was a connection problem, but checking around it seems to be a problem with the mapping between ES & Oracle. Has anyone done an ES + Oracle river integration? Any help would be really appreciated.
I found a solution that worked out really straightforward. Instead of putting the command in Marvel/Sense, what I did is have a JSON config file like this:
{
  "type" : "jdbc",
  "jdbc" : {
    "strategy" : "oneshot",
    "driver" : "oracle.jdbc.OracleDriver",
    "url" : "jdbc:oracle:thin:@host:1521:DNS",
    "user" : "user",
    "password" : "password",
    "sql" : "select * from aTable",
    "poll" : "1h",
    "scale" : 0,
    "autocommit" : false,
    "fetchsize" : 100,
    "max_rows" : 0,
    "max_retries" : 3,
    "max_retries_wait" : "10s"
  },
  "index" : {
    "index" : "aIndex",
    "type" : "aType",
    "bulk_size" : 100
  }
}
and then call it with curl like this:
curl -XPUT 'http://127.0.0.1:9200/_river/jdbcriver/_meta' -d @config.json
I don't know if it's a parser issue or what, but this way worked like a charm. (Looking at the byte dump in the original error, the values -30, -128, -100 are the UTF-8 bytes of a curly quote “, so it appears the inline body pasted into Sense contained smart quotes and was never valid JSON; the plain-text config file avoided that.)
Thanks

Getting error mapper parsing exception while indexing

I am new to Elasticsearch. I was trying to index an attachment but am getting an error.
I executed the following steps.
Installed the mapper-attachments plugin.
Converted a text file to base64 with the openssl command:
openssl enc -base64 -in test3.txt -out t3.file
After that I created the mapping:
[root@n1 testcase]# curl -XPUT 'http://localhost:9200/indextryes/?pretty=1' -d '
{ "mappings" : { "doc" : { "properties" : {"file" : {"type" : "attachment"}}}}}'
{
"ok" : true,
"acknowledged" : true
}
When I try to index it I get the following error message:
[root@n1 testcase]# curl -X POST "localhost:9200/indextryes/text" -d @t3.file
{"error":"MapperParsingException[failed to parse]; nested: ElasticSearchParseException[Failed to derive xcontent from (offset=0, length=64): [98, 71, 86, 48, 99, 121, 66, 107, 98, 121, 66, 104, 98, 109, 57, 48, 97, 71, 86, 121, 73, 72, 82, 108, 99, 51, 81, 103, 89, 87, 53, 107, 73, 72, 90, 108, 99, 109, 108, 109, 101, 83, 66, 108, 98, 71, 70, 122, 100, 71, 108, 106, 99, 50, 86, 104, 99, 109, 78, 111, 76, 103, 61, 61]]; ","status":400}
Thanks for the help...
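Note that this is the same Failed to derive xcontent exception as in the river question above: the request body is not JSON (here it is the raw base64 file, which ES cannot parse as a document). The mapper-attachments plugin expects the base64 wrapped in a JSON document under the mapped field; a sketch, using the file field and doc type from the mapping above:

curl -X POST "localhost:9200/indextryes/doc" -d '{ "file" : "<contents of t3.file>" }'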
