Send multiple JMS messages in one transaction

I have to send a message to two different queues (queue1 and queue2). However, I want to roll back if the send fails for either queue (queue1 or queue2).
My source code looks as follows. Can anyone offer some input on this?
public void sendMessage(final Map<String, String> mapMessage) {
    jmsTemplate.send(queue1, session -> {
        MapMessage message = session.createMapMessage();
        Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> pair = it.next();
            message.setStringProperty(pair.getKey(), pair.getValue());
        }
        message.setJMSRedelivered(true);
        message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
        return message;
    });
    jmsTemplate.send(queue2, session -> {
        MapMessage message = session.createMapMessage();
        Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> pair = it.next();
            message.setStringProperty(pair.getKey(), pair.getValue());
        }
        message.setJMSRedelivered(true);
        message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
        return message;
    });
}

Start a transaction before entering the sendMessage method, e.g. with @Transactional - see the Spring Framework Reference Manual.
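A minimal sketch of that setup, assuming a plain (non-XA) ConnectionFactory on a single broker; the configuration class and bean names here are illustrative, not taken from the question:

import javax.jms.ConnectionFactory; // jakarta.jms.ConnectionFactory on newer Spring/JMS versions

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.connection.JmsTransactionManager;
import org.springframework.jms.core.JmsTemplate;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;

@Configuration
@EnableTransactionManagement
public class JmsTxConfig {

    // JmsTemplate that uses a transacted session, so its sends join the surrounding transaction
    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.setSessionTransacted(true);
        return template;
    }

    // Local JMS transaction manager; both queues must live on the same broker,
    // otherwise a JTA/XA transaction manager is needed
    @Bean
    public PlatformTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        return new JmsTransactionManager(connectionFactory);
    }
}

With that in place, annotating the existing sendMessage method with @Transactional makes both jmsTemplate.send calls run in one transacted session: if the second send throws, the first is rolled back along with it.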

Related

How to implement PayUMoney in Android, and how to create the hash key locally, because I don't know how to create it on the server

I am using this method in my code, but it is not working. I am using the PayUMoney code from the documentation at https://payumobile.gitbook.io/sdk-integration/android/payucheckoutpro. No matter what I try, the toast shows "invalid Hash".
public void PayuMonney() {
    PayUPaymentParams.Builder builder = new PayUPaymentParams.Builder();
    builder.setAmount(mAmount)
            .setIsProduction(true)
            .setProductInfo(mProductInfo)
            .setKey(mMerchantKey)
            .setPhone(mPhoneNumber)
            .setTransactionId(mTXNId)
            .setFirstName(mFirstName)
            .setEmail(mEmailId)
            .setSurl("https://www.payumoney.com/mobileapp/payumoney/success.php")
            .setFurl("https://www.payumoney.com/mobileapp/payumoney/failure.php");
    // Optional: can contain any additional PG params
    PayUPaymentParams payUPaymentParams = builder.build();
    // Here I am calling the PayU CheckoutPro checkout process (sample code)
    PayUCheckoutPro.open(
            this,
            payUPaymentParams,
            new PayUCheckoutProListener() {

                @Override
                public void onPaymentSuccess(Object response) {
                    // Cast response object to HashMap
                    HashMap<String, Object> result = (HashMap<String, Object>) response;
                    String payuResponse = (String) result.get(PayUCheckoutProConstants.CP_PAYU_RESPONSE);
                    String merchantResponse = (String) result.get(PayUCheckoutProConstants.CP_MERCHANT_RESPONSE);
                }

                @Override
                public void onPaymentFailure(Object response) {
                    // Cast response object to HashMap
                    HashMap<String, Object> result = (HashMap<String, Object>) response;
                    String payuResponse = (String) result.get(PayUCheckoutProConstants.CP_PAYU_RESPONSE);
                    String merchantResponse = (String) result.get(PayUCheckoutProConstants.CP_MERCHANT_RESPONSE);
                }

                @Override
                public void onPaymentCancel(boolean isTxnInitiated) {
                }

                @Override
                public void onError(ErrorResponse errorResponse) {
                    // An error toast is shown here in the onError function
                    String errorMessage = errorResponse.getErrorMessage();
                    Toast.makeText(FinalPlaceOrderActivity.this, errorMessage, Toast.LENGTH_SHORT).show();
                }

                @Override
                public void setWebViewProperties(@Nullable WebView webView, @Nullable Object o) {
                    // For setting WebView properties, if any. Check the Customized Integration section for more details.
                }

                @Override
                public void generateHash(HashMap<String, String> valueMap, PayUHashGenerationListener hashGenerationListener) {
                    String hashName = valueMap.get(PayUCheckoutProConstants.CP_HASH_NAME);
                    String hashData = valueMap.get(PayUCheckoutProConstants.CP_HASH_STRING);
                    if (!TextUtils.isEmpty(hashName) && !TextUtils.isEmpty(hashData)) {
                        // Do not generate the hash locally; it must be calculated on the server side only.
                        // Here I call the server endpoint that generates the hash.
                        StringRequest OrderPlace = new StringRequest(Request.Method.POST, "url link", new Response.Listener<String>() {
                            @Override
                            public void onResponse(String response) {
                                // Here I get the hash from the server
                                System.out.println(response);
                                String merchandHsh = response;
                                HashMap<String, String> dataMap = new HashMap<>();
                                dataMap.put(hashName, merchandHsh);
                                hashGenerationListener.onHashGenerated(dataMap);
                            }
                        }, new Response.ErrorListener() {
                            @Override
                            public void onErrorResponse(VolleyError error) {
                                Toast.makeText(getApplicationContext(), "We can't process this time..", Toast.LENGTH_SHORT).show();
                            }
                        }) {
                            @Nullable
                            @Override
                            protected Map<String, String> getParams() throws AuthFailureError {
                                Map<String, String> map = new HashMap<String, String>();
                                map.put("key", mMerchantKey);
                                map.put("texID", mTXNId);
                                map.put("amount", mAmount);
                                map.put("productname", mProductInfo);
                                map.put("name", mFirstName);
                                map.put("email", mEmailId);
                                return map;
                            }
                        };
                        Volley.newRequestQueue(FinalPlaceOrderActivity.this).add(OrderPlace);
                    }
                }
            }
    );
}
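For completeness, here is a minimal sketch of what the server-side part could look like. It is an assumption based on the linked CheckoutPro documentation, not PayU API code: the app would POST the SDK-supplied hash string (the CP_HASH_STRING value, hashData above) to the server, and the server would append the merchant salt and return the SHA-512 digest as lowercase hex. The class name, the hashString parameter, and the MERCHANT_SALT constant are illustrative.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class PayUHashUtil {

    // Assumption: the salt lives only on the server and is never shipped with the app
    private static final String MERCHANT_SALT = "your-merchant-salt";

    // Returns sha512(hashString + salt) as lowercase hex, which is what the app
    // hands back to the SDK via hashGenerationListener.onHashGenerated(...)
    public static String generateHash(String hashString) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-512");
        byte[] digest = md.digest((hashString + MERCHANT_SALT).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }
}

Whatever the server computes has to correspond exactly to the hashName/hashData pair the SDK passed into generateHash. Rebuilding the string server-side from key, txnid, amount and so on (as the getParams() map above suggests) only works if every field, including empty ones, matches the SDK's string character for character; a mismatch there is a common cause of the "invalid Hash" toast.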

quasar fiber returning empty results after the thread is started

I am testing my POST endpoint locally on my Spring Boot application. I have a method that spawns a fiber to run a set of instructions that calls an endpoint A, and my POST endpoint returns the results returned by A. However, when my POST request completes, the result shown in Postman is empty.
My code is as below:
@RequestMapping("/prediction")
public CustomResponse prediction(@RequestBody CustomRequest input, HttpServletRequest request) {
    return predictionClass.prediction(input);
}

public CustomResponse prediction(CustomRequest input) {
    CustomResponse customResponse = new CustomResponse();
    new Fiber<CustomResponse>(new SuspendableRunnable() {
        public void run() throws SuspendExecution, InterruptedException {
            List<CustomRequest> inputs = new ArrayList<>();
            // A for loop is here to duplicate the CustomRequest input parameter and populate the inputs list
            List<CustomResponse> customResponses = inputs.stream()
                    .map(req -> processPrediction(req)).collect(Collectors.toList());
            for (CustomResponse x : customResponses) {
                if (inputs.size() > 1) {
                    for (String outputKey : x.getOutputVars().keySet()) {
                        customResponse.getOutputVars().put(x.getModelName() + "_" + outputKey, x.getOutputVars().get(outputKey));
                    }
                } else {
                    // The else branch runs because the input list has size 1
                    customResponse.getOutputVars().putAll(x.getOutputVars());
                }
                System.out.println(customResponse.getOutputVars().size());
            }
        }
    }).start();
    return customResponse;
}
public CustomResponse processPrediction(CustomRequest input) {
    CustomResponse res = new CustomResponse();
    RestTemplate gzipRestTemplate = new RestTemplateBuilder()
            .additionalInterceptors(new GzipHttpRequestInterceptor())
            .build();
    HttpHeaders headers = new HttpHeaders();
    headers.setContentType(MediaType.APPLICATION_JSON);
    HttpEntity<CustomRequest> entity = new HttpEntity<>(input, headers);
    ResponseEntity<Map> responseEntity = gzipRestTemplate.postForEntity("an-endpoint-url", entity, Map.class);
    Map<String, Object> outputs = (Map<String, Object>) responseEntity.getBody();
    res.getOutputVars().putAll(outputs);
    return res;
}
In this test my input has size 1. When I trigger the POST request using Postman, the System.out.println(customResponse.getOutputVars().size()) line prints 16, but Postman shows my outputVars as empty.
Interestingly, I decided to do the two experiments below.
Experiment 1
public CustomResponse prediction() {
    CustomResponse customResponse = new CustomResponse();
    new Fiber<Void>(new SuspendableRunnable() {
        public void run() throws SuspendExecution, InterruptedException {
            customResponse.setModelName("name");
            Map<String, Object> test = new HashMap<>();
            test.put("pcd4Score", "hello");
            customResponse.getOutputVars().put("message", "hello");
        }
    }).start();
    return customResponse;
}
Postman returns customResponse with "message" and "hello" in it.
Experiment 2
This experiment is the same as Experiment 1 but with Thread.sleep(1000); I was thinking Thread.sleep could stand in for the processPrediction call in my original code.
public CustomResponse prediction() {
    CustomResponse customResponse = new CustomResponse();
    new Fiber<Void>(new SuspendableRunnable() {
        public void run() throws SuspendExecution, InterruptedException {
            Thread.sleep(1000); // stand-in for processPrediction
            customResponse.setModelName("name");
            Map<String, Object> test = new HashMap<>();
            test.put("pcd4Score", "hello");
            customResponse.getOutputVars().put("message", "hello");
        }
    }).start();
    return customResponse;
}
This time customResponse was empty, and in my Spring Boot application terminal the error was
[quasar] ERROR: while transforming {the-path-to-my-class-for-prediction-method}$1: Unable to instrument {the-path-to-my-class-for-prediction-method}$1#run()V because of blocking call to java/lang/Thread#sleep(J)V
It feels like Experiment 1 succeeded only because its instructions weren't as CPU intensive. I know I could write it so that the fiber is started in a separate method and only then call prediction, because it seems Postman gets back an empty CustomResponse first, and only afterwards do the instructions inside run() start executing. I just want to understand the behaviour of Fiber. I had trouble googling my situation (my keywords were "rest endpoint not returning results after a fiber thread is started"), hence I am asking this on Stack Overflow. I am also very new to the whole topic of multithreading in Java.
I solved it by adding a fiber join before customResponse is returned, like this. However, it doesn't seem very elegant to have a try/catch just for .join(); is there a more elegant way to redo this whole method?
public CustomResponse prediction(CustomRequest input) {
    CustomResponse customResponse = new CustomResponse();
    Fiber fiber = new Fiber<CustomResponse>(new SuspendableRunnable() {
        public void run() throws SuspendExecution, InterruptedException {
            List<CustomRequest> inputs = new ArrayList<>();
            // A for loop is here to duplicate the CustomRequest input parameter and populate the inputs list
            List<CustomResponse> customResponses = inputs.stream()
                    .map(req -> processPrediction(req)).collect(Collectors.toList());
            for (CustomResponse x : customResponses) {
                if (inputs.size() > 1) {
                    for (String outputKey : x.getOutputVars().keySet()) {
                        customResponse.getOutputVars().put(x.getModelName() + "_" + outputKey, x.getOutputVars().get(outputKey));
                    }
                } else {
                    // The else branch runs because the input list has size 1
                    customResponse.getOutputVars().putAll(x.getOutputVars());
                }
                System.out.println(customResponse.getOutputVars().size());
            }
        }
    }).start();
    try {
        fiber.join();
    } catch (Exception e) {
        e.printStackTrace();
    }
    return customResponse;
}
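One arguably tidier variant, sketched under the assumption that CustomResponse, CustomRequest and processPrediction behave as in the code above: let the fiber return the response itself via SuspendableCallable, and let Fiber.get() replace the ad-hoc try/catch around join().

import co.paralleluniverse.fibers.Fiber;
import co.paralleluniverse.fibers.SuspendExecution;
import co.paralleluniverse.strands.SuspendableCallable;

import java.util.concurrent.ExecutionException;

public CustomResponse prediction(CustomRequest input) throws InterruptedException, ExecutionException {
    Fiber<CustomResponse> fiber = new Fiber<>(new SuspendableCallable<CustomResponse>() {
        @Override
        public CustomResponse run() throws SuspendExecution, InterruptedException {
            // Build the response entirely inside the fiber and return it as the fiber's result
            CustomResponse response = new CustomResponse();
            response.getOutputVars().putAll(processPrediction(input).getOutputVars());
            return response;
        }
    }).start();
    // get() waits for the fiber to finish and returns its result (or rethrows its failure)
    return fiber.get();
}

Note that if the caller blocks on the result anyway, the fiber adds little over a plain thread here, and blocking calls such as RestTemplate inside run() will still trigger Quasar's instrumentation warnings; whether the trade-off is worth it depends on how much genuinely suspendable work processPrediction does.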

AggregatingReplyingKafkaTemplate releaseStrategy Question

There seems to be an issue when I use AggregatingReplyingKafkaTemplate with template.setReturnPartialOnTimeout(true): it returns a timeout exception even if partial results are available from the consumers.
In the example below, I have 3 consumers replying to the request topic and I've set the reply timeout to 10 seconds. I've explicitly delayed the response of Consumer 3 to 11 seconds, so I expect responses back from Consumers 1 and 2 and to be able to return partial results. However, I am getting a KafkaReplyTimeoutException. Appreciate your inputs. Thanks.
I based my code on the unit test below:
[ReplyingKafkaTemplateTests][1]
I've provided the actual code below:
@RestController
public class SumController {

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    public static final String D_REPLY = "dReply";
    public static final String D_REQUEST = "dRequest";

    @ResponseBody
    @PostMapping(value = "/sum")
    public String sum(@RequestParam("message") String message) throws InterruptedException, ExecutionException {
        AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
                new TopicPartitionOffset(D_REPLY, 0), 3, new AtomicInteger());
        String resultValue = "";
        String currentValue = "";
        try {
            template.setDefaultReplyTimeout(Duration.ofSeconds(10));
            template.setReturnPartialOnTimeout(true);
            ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, message);
            RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
                    template.sendAndReceive(record);
            future.getSendFuture().get(5, TimeUnit.SECONDS); // send ok
            System.out.println("Send Completed Successfully");
            ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord = future.get(10, TimeUnit.SECONDS);
            System.out.println("Consumer record size " + consumerRecord.value().size());
            Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
            while (iterator.hasNext()) {
                currentValue = iterator.next().value();
                System.out.println("response " + currentValue);
                System.out.println("Record header " + consumerRecord.headers().toString());
                resultValue = resultValue + currentValue + "\r\n";
            }
        } catch (Exception e) {
            System.out.println("Error Message is " + e.getMessage());
        }
        return resultValue;
    }

    public AggregatingReplyingKafkaTemplate<Integer, String, String> aggregatingTemplate(
            TopicPartitionOffset topic, int releaseSize, AtomicInteger releaseCount) {
        // Create container properties
        ContainerProperties containerProperties = new ContainerProperties(topic);
        containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        // Create the consumer factory with the consumer config
        DefaultKafkaConsumerFactory<Integer, Collection<ConsumerRecord<Integer, String>>> cf =
                new DefaultKafkaConsumerFactory<>(consumerConfigs());
        // Create the listener container with the consumer factory and container properties
        KafkaMessageListenerContainer<Integer, Collection<ConsumerRecord<Integer, String>>> container =
                new KafkaMessageListenerContainer<>(cf, containerProperties);
        // container.setBeanName(this.testName);
        AggregatingReplyingKafkaTemplate<Integer, String, String> template =
                new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
                        (list, timeout) -> {
                            releaseCount.incrementAndGet();
                            return list.size() == releaseSize;
                        });
        template.setSharedReplyTopic(true);
        template.start();
        return template;
    }

    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_id");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        return props;
    }

    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
        return props;
    }

    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @KafkaListener(id = "def1", topics = { D_REQUEST }, groupId = "D_REQUEST1")
    @SendTo // default REPLY_TOPIC header
    public String dListener1(String in) throws InterruptedException {
        return "First Consumer : " + in.toUpperCase();
    }

    @KafkaListener(id = "def2", topics = { D_REQUEST }, groupId = "D_REQUEST2")
    @SendTo // default REPLY_TOPIC header
    public String dListener2(String in) throws InterruptedException {
        return "Second Consumer : " + in.toLowerCase();
    }

    @KafkaListener(id = "def3", topics = { D_REQUEST }, groupId = "D_REQUEST3")
    @SendTo // default REPLY_TOPIC header
    public String dListener3(String in) throws InterruptedException {
        Thread.sleep(11000);
        return "Third Consumer : " + in;
    }
}
[1]: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/test/java/org/springframework/kafka/requestreply/ReplyingKafkaTemplateTests.java
template.setReturnPartialOnTimeout(true) simply means the template will consult the release strategy on timeout (with the timeout argument = true, to tell the strategy it's a timeout rather than a delivery call).
It must return true to release the partial result.
This is to allow you to look at (and possibly modify) the list to decide whether you want to release or discard.
Your strategy ignores the timeout parameter:
(list, timeout) -> {
    releaseCount.incrementAndGet();
    return list.size() == releaseSize;
});
You need return timeout ? true : { ... } - that is, release whatever has already arrived when the timeout flag is set.
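Concretely, a minimal sketch of the adjusted strategy for aggregatingTemplate(), keeping the releaseCount and releaseSize names from the code above:

(list, timeout) -> {
    releaseCount.incrementAndGet();
    if (timeout) {
        // timeout call from the template: release whatever partial replies have arrived
        return !list.isEmpty();
    }
    // normal delivery call: release only once all expected replies are in
    return list.size() == releaseSize;
}

Returning true unconditionally on timeout also works; checking isEmpty() simply preserves the KafkaReplyTimeoutException for the case where no replies arrived at all.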

Spring Kafka - how to fetch timestamp (event time) when message was produced

I have a requirement to fetch the timestamp (event time) at which a message was produced, in the Kafka consumer application. I am aware of the TimestampExtractor, which can be used with Kafka Streams, but my requirement is different because I am not using Streams to consume the messages.
My Kafka producer is as follows:
@Override
public void run(ApplicationArguments args) throws Exception {
    List<String> names = Arrays.asList("priya", "dyser", "Ray", "Mark", "Oman", "Larry");
    List<String> pages = Arrays.asList("blog", "facebook", "instagram", "news", "youtube", "about");
    Runnable runnable = () -> {
        String rPage = pages.get(new Random().nextInt(pages.size()));
        String rName = pages.get(new Random().nextInt(names.size()));
        PageViewEvent pageViewEvent = new PageViewEvent(rName, rPage, Math.random() > .5 ? 10 : 1000);
        Message<PageViewEvent> message = MessageBuilder
                .withPayload(pageViewEvent)
                .setHeader(KafkaHeaders.MESSAGE_KEY, pageViewEvent.getUserId().getBytes())
                .build();
        try {
            this.pageViewsOut.send(message);
            log.info("sent " + message);
        } catch (Exception e) {
            log.error(e);
        }
    };
The Kafka consumer is implemented using Spring Kafka's @KafkaListener.
@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data, @Headers MessageHeaders headers) {
    LOG.info("Message received");
    LOG.info("received data='{}'", data);
}
Container factory configuration
@Bean
public ConsumerFactory<String, PageViewEvent> priceEventConsumerFactory() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "json");
    props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    return new DefaultKafkaConsumerFactory<>(props, new StringDeserializer(), new JsonDeserializer<>(PageViewEvent.class));
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> priceEventsKafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, PageViewEvent> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(priceEventConsumerFactory());
    return factory;
}
When I print the message that the producer sends, it gives me the data below:
[payload=PageViewEvent(userId=blog, page=about, duration=10),
headers={id=8ebdad85-e2f7-958f-500e-4560ac0970e5,
kafka_messageKey=[B#71975e1a, contentType=application/json,
timestamp=1553041963803}]
This does have a produced timestamp. How can I fetch the message's produced timestamp with Spring Kafka?
RECEIVED_TIMESTAMP means it is the timestamp from the record that was received, not the time at which it was received. We avoid putting it in TIMESTAMP to avoid inadvertent propagation to an outbound message.
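Applied to the listener above, that header can be injected directly; a small sketch (the producedTimestamp parameter name is illustrative):

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

@KafkaListener(topics = "test1", groupId = "json", containerFactory = "kafkaListenerContainerFactory")
public void receive(@Payload PageViewEvent data,
                    @Header(KafkaHeaders.RECEIVED_TIMESTAMP) long producedTimestamp) {
    // With the default CreateTime topic configuration this is the timestamp the producer
    // stamped on the record, in epoch milliseconds
    LOG.info("received data='{}' producedTimestamp={}", data, producedTimestamp);
}

The same value is also available via headers.get(KafkaHeaders.RECEIVED_TIMESTAMP) on the injected MessageHeaders. If the topic is configured with LogAppendTime instead of CreateTime, the timestamp reflects the broker's append time rather than the producer's event time.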
You can use something like below:
final Producer<String, String> producer = new KafkaProducer<String, String>(properties);
long time = System.currentTimeMillis();
final CountDownLatch countDownLatch = new CountDownLatch(5);
int count = 0;
try {
    for (long index = time; index < time + 10; index++) {
        String key = null;
        count++;
        if (count <= 5)
            key = "id_" + Integer.toString(1);
        else
            key = "id_" + Integer.toString(2);
        final ProducerRecord<String, String> record =
                new ProducerRecord<>(TOPIC, key, "B2B Sample Message: " + count);
        producer.send(record, (metadata, exception) -> {
            long elapsedTime = System.currentTimeMillis() - time;
            if (metadata != null) {
                System.out.printf("sent record(key=%s value=%s) " +
                                "meta(partition=%d, offset=%d) time=%d timestamp=%d\n",
                        record.key(), record.value(), metadata.partition(),
                        metadata.offset(), elapsedTime, metadata.timestamp());
                System.out.println("Timestamp:: " + metadata.timestamp());
            } else {
                exception.printStackTrace();
            }
            countDownLatch.countDown();
        });
    }
    try {
        countDownLatch.await(25, TimeUnit.SECONDS);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
} finally {
    producer.flush();
    producer.close();
}

Can't access the data in Kafka Spark Streaming globally

I am trying to stream data from Kafka to Spark.
JavaPairInputDStream<String, String> directKafkaStream = KafkaUtils.createDirectStream(ssc,
        String.class,
        String.class,
        StringDecoder.class,
        StringDecoder.class,
        kafkaParams, topics);
Here I am iterating over the JavaPairInputDStream to process the RDDs.
directKafkaStream.foreachRDD(rdd -> {
    rdd.foreachPartition(items -> {
        while (items.hasNext()) {
            String[] State = items.next()._2.split("\\,");
            System.out.println(State[2] + "," + State[3] + "," + State[4] + "--");
        }
    });
});
I am able to fetch the data in foreachRDD, but my requirement is to access the State array globally. When I try to access the State array globally I get the exception
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
Any suggestions? Thanks.
This is more a matter of joining your lookup table with the streaming RDD to get all the items that have matching 'code' and 'violationCode' fields.
The flow should be like this:
Create an RDD from the Hive lookup table => lookupRdd
Create a DStream from the Kafka stream
For each RDD in the DStream, join lookupRdd with the stream RDD, process the joined items (calculate the sum of the amount, ...) and save the processed result.
Note: the code below is incomplete. Please complete all the TODO comments.
JavaPairDStream<String, String> streamPair = directKafkaStream.mapToPair(new PairFunction<Tuple2<String, String>, String, String>() {
    @Override
    public Tuple2<String, String> call(Tuple2<String, String> tuple2) throws Exception {
        System.out.println("Tuple2 Message is----------" + tuple2._2());
        String[] state = tuple2._2.split("\\,");
        return new Tuple2<>(state[4], tuple2._2()); // pair <ViolationCode, data>
    }
});

streamPair.foreachRDD(new Function<JavaPairRDD<String, String>, Void>() {
    JavaPairRDD<String, String> hivePairRdd = null;

    @Override
    public Void call(JavaPairRDD<String, String> stringStringJavaPairRDD) throws Exception {
        if (hivePairRdd == null) {
            hivePairRdd = initHiveRdd();
        }
        JavaPairRDD<String, Tuple2<String, String>> joinedRdd = stringStringJavaPairRDD.join(hivePairRdd);
        System.out.println(joinedRdd.take(10));
        // TODO process joinedRdd here and save the results.
        joinedRdd.count(); // to trigger an action
        return null;
    }
});
}

public static JavaPairRDD<String, String> initHiveRdd() {
    JavaRDD<String> hiveTableRDD = null; // TODO code to create RDD from hive table
    JavaPairRDD<String, String> hivePairRdd = hiveTableRDD.mapToPair(new PairFunction<String, String, String>() {
        @Override
        public Tuple2<String, String> call(String row) throws Exception {
            String code = null; // TODO process 'row' and get 'code' field
            return new Tuple2<>(code, row);
        }
    });
    return hivePairRdd;
}
