Fetch properties from SonarQube via the SonarQube WsClient

I'd like to fetch sonar.timemachine.period1 via WsClient.
Since it doesn't offer a dedicated method for this, I decided to bake one myself:
private Map<String, String> retrievePeriodProperties(final WsClient wsClient, int requestedPeriod) {
    if (requestedPeriod > 0) {
        final WsRequest propertiesWsRequestPeriod =
                new GetRequest("api/properties/sonar.timemachine.period" + requestedPeriod);
        final WsResponse propertiesWsResponsePeriod =
                wsClient.wsConnector().call(propertiesWsRequestPeriod);
        if (propertiesWsResponsePeriod.isSuccessful()) {
            String resp = propertiesWsResponsePeriod.content();
            Map<String, String> map = new HashMap<>();
            map.put(Integer.toString(requestedPeriod), resp);
            return map;
        }
    }
    return new HashMap<>();
}
but it always returns an empty Map.
Any leads on where to go from here?

You can use org.sonar.api.config.Settings to fetch properties defined in SonarQube.
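A minimal sketch, assuming the class is a plugin extension so SonarQube can inject Settings through the constructor (the class and method names here are illustrative, not from the original post):
public class PeriodPropertyReader {
    private final Settings settings;

    public PeriodPropertyReader(Settings settings) {
        this.settings = settings; // injected by SonarQube's IoC container
    }

    public String retrievePeriodProperty(int requestedPeriod) {
        // reads e.g. "sonar.timemachine.period1" directly, no web-service round trip
        return settings.getString("sonar.timemachine.period" + requestedPeriod);
    }
}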

How can I read Flux<DataBuffer> content?

I want to read multipart/form-data where one part is application/json. I can't get the parts into a Map<String, String>. Is there a way to parse a Part to a String?
private Map<String, String> getFormData(String path, MultiValueMap<String, Part> partMultiValueMap) {
    if (partMultiValueMap != null) {
        Map<String, String> formData = new HashMap<>();
        Map<String, Part> multiPartMap = partMultiValueMap.toSingleValueMap();
        for (Map.Entry<String, Part> partEntry : multiPartMap.entrySet()) {
            Part part = partEntry.getValue();
            if (part instanceof FormFieldPart) {
                formData.put(partEntry.getKey(), ((FormFieldPart) part).value());
            } else {
                String bodyString = bufferToStr(part.content());
                formData.put(partEntry.getKey(), bodyString);
            }
        }
        return formData;
    }
    return null;
}
And the helper that reads the Flux content:
private String bufferToStr(Flux<DataBuffer> content) {
    AtomicReference<String> res = new AtomicReference<>();
    content.subscribe(buffer -> {
        byte[] bytes = new byte[buffer.readableByteCount()];
        buffer.read(bytes);
        DataBufferUtils.release(buffer);
        res.set(new String(bytes, StandardCharsets.UTF_8));
    });
    return res.get();
}
Since subscribe is asynchronous, might the value returned by bufferToStr be null?
You could do it in a non-blocking way with StringDecoder.
Basically, you can rewrite your code to return a Mono<Map<String, String>>.
Note: I'm using the Pair class (from org.springframework.data.util.Pair) to return each key-value pair and later collect the pairs into a Map.
public Mono<Map<String, String>> getFormData(MultiValueMap<String, Part> partMultiValueMap) {
    Map<String, Part> multiPartMap = partMultiValueMap.toSingleValueMap();
    return Flux.fromIterable(multiPartMap.entrySet())
            .flatMap(entry -> {
                Part part = entry.getValue();
                if (part instanceof FormFieldPart) {
                    // form field: the value is already a String
                    return Mono.just(Pair.of(entry.getKey(), ((FormFieldPart) part).value()));
                } else {
                    // decode the DataBuffers to a String without blocking
                    return decodePartToString(part.content())
                            .map(decodedString -> Pair.of(entry.getKey(), decodedString));
                }
            })
            .collectMap(Pair::getFirst, Pair::getSecond); // collect the pairs into a Map<>
}
private Mono<String> decodePartToString(Flux<DataBuffer> dataBufferFlux) {
    StringDecoder stringDecoder = StringDecoder.textPlainOnly();
    return stringDecoder.decodeToMono(dataBufferFlux,
            ResolvableType.NONE,
            MimeTypeUtils.TEXT_PLAIN,
            Collections.emptyMap());
}
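For completeness, here is a hedged sketch of how the non-blocking version could be consumed from a WebFlux controller (the endpoint path and method name are hypothetical):
@PostMapping(value = "/form", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
public Mono<Map<String, String>> handleForm(
        @RequestBody Mono<MultiValueMap<String, Part>> partsMono) {
    // no blocking subscribe: the framework subscribes to the returned Mono
    return partsMono.flatMap(this::getFormData);
}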

AggregatingReplyingKafkaTemplate releaseStrategy Question

There seems to be an issue when I use AggregatingReplyingKafkaTemplate with template.setReturnPartialOnTimeout(true): it returns a timeout exception even if partial results are available from consumers.
In the example below, I have 3 consumers replying to the request topic and I've set the reply timeout to 10 seconds. I've explicitly delayed Consumer 3's response to 11 seconds, so I expect responses back from Consumers 1 and 2 and a partial result. Instead, I get a KafkaReplyTimeoutException. I'd appreciate your input. Thanks.
My code is based on the unit test below:
[ReplyingKafkaTemplateTests][1]
The actual code follows:
@RestController
public class SumController {

    @Value("${kafka.bootstrap-servers}")
    private String bootstrapServers;

    public static final String D_REPLY = "dReply";
    public static final String D_REQUEST = "dRequest";

    @ResponseBody
    @PostMapping(value = "/sum")
    public String sum(@RequestParam("message") String message) throws InterruptedException, ExecutionException {
        AggregatingReplyingKafkaTemplate<Integer, String, String> template = aggregatingTemplate(
                new TopicPartitionOffset(D_REPLY, 0), 3, new AtomicInteger());
        String resultValue = "";
        String currentValue = "";
        try {
            template.setDefaultReplyTimeout(Duration.ofSeconds(10));
            template.setReturnPartialOnTimeout(true);
            ProducerRecord<Integer, String> record = new ProducerRecord<>(D_REQUEST, null, null, null, message);
            RequestReplyFuture<Integer, String, Collection<ConsumerRecord<Integer, String>>> future =
                    template.sendAndReceive(record);
            future.getSendFuture().get(5, TimeUnit.SECONDS); // send ok
            System.out.println("Send Completed Successfully");
            ConsumerRecord<Integer, Collection<ConsumerRecord<Integer, String>>> consumerRecord = future.get(10, TimeUnit.SECONDS);
            System.out.println("Consumer record size " + consumerRecord.value().size());
            Iterator<ConsumerRecord<Integer, String>> iterator = consumerRecord.value().iterator();
            while (iterator.hasNext()) {
                currentValue = iterator.next().value();
                System.out.println("response " + currentValue);
                System.out.println("Record header " + consumerRecord.headers().toString());
                resultValue = resultValue + currentValue + "\r\n";
            }
        } catch (Exception e) {
            System.out.println("Error Message is " + e.getMessage());
        }
        return resultValue;
    }

    public AggregatingReplyingKafkaTemplate<Integer, String, String> aggregatingTemplate(
            TopicPartitionOffset topic, int releaseSize, AtomicInteger releaseCount) {
        // create container properties
        ContainerProperties containerProperties = new ContainerProperties(topic);
        containerProperties.setAckMode(ContainerProperties.AckMode.MANUAL_IMMEDIATE);
        // create a consumer factory with the consumer config
        DefaultKafkaConsumerFactory<Integer, Collection<ConsumerRecord<Integer, String>>> cf =
                new DefaultKafkaConsumerFactory<>(consumerConfigs());
        // create a listener container from the consumer factory and container properties
        KafkaMessageListenerContainer<Integer, Collection<ConsumerRecord<Integer, String>>> container =
                new KafkaMessageListenerContainer<>(cf, containerProperties);
        // container.setBeanName(this.testName);
        AggregatingReplyingKafkaTemplate<Integer, String, String> template =
                new AggregatingReplyingKafkaTemplate<>(new DefaultKafkaProducerFactory<>(producerConfigs()), container,
                        (list, timeout) -> {
                            releaseCount.incrementAndGet();
                            return list.size() == releaseSize;
                        });
        template.setSharedReplyTopic(true);
        template.start();
        return template;
    }

    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "test_id");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringDeserializer.class);
        return props;
    }

    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        // list of host:port pairs used for establishing the initial connections to the Kafka cluster
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, org.apache.kafka.common.serialization.StringSerializer.class);
        return props;
    }

    public ProducerFactory<Integer, String> producerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @KafkaListener(id = "def1", topics = { D_REQUEST }, groupId = "D_REQUEST1")
    @SendTo // default REPLY_TOPIC header
    public String dListener1(String in) throws InterruptedException {
        return "First Consumer : " + in.toUpperCase();
    }

    @KafkaListener(id = "def2", topics = { D_REQUEST }, groupId = "D_REQUEST2")
    @SendTo // default REPLY_TOPIC header
    public String dListener2(String in) throws InterruptedException {
        return "Second Consumer : " + in.toLowerCase();
    }

    @KafkaListener(id = "def3", topics = { D_REQUEST }, groupId = "D_REQUEST3")
    @SendTo // default REPLY_TOPIC header
    public String dListener3(String in) throws InterruptedException {
        Thread.sleep(11000); // deliberately exceed the 10-second reply timeout
        return "Third Consumer : " + in;
    }
}
[1]: https://github.com/spring-projects/spring-kafka/blob/master/spring-kafka/src/test/java/org/springframework/kafka/requestreply/ReplyingKafkaTemplateTests.java
template.setReturnPartialOnTimeout(true) simply means the template will consult the release strategy on timeout (with the timeout argument set to true, to tell the strategy it's a timeout rather than a delivery call).
The strategy must return true to release the partial result.
This allows you to look at (and possibly modify) the list and decide whether to release or discard it.
Your strategy ignores the timeout parameter:
(list, timeout) -> {
    releaseCount.incrementAndGet();
    return list.size() == releaseSize;
});
You need return timeout ? true : { ... }.
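Applied to your aggregatingTemplate, the strategy could look like this (keeping your counter; the partial list is released when the timeout flag is set):
(list, timeout) -> {
    releaseCount.incrementAndGet();
    // normal delivery: release once all 3 replies have arrived;
    // timeout: release whatever partial results we have
    return timeout || list.size() == releaseSize;
});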

Java8 generate Map containing another Map

How do I achieve this using Java 8?
I have a CSV in the format below, and from it I want to populate a Map<String, Map<String, String>>, where the outer map is keyed by the distinct Type values (scriptId and transactionType) and each inner map uses the value in column 2 as its key and the value in column 3 as its value:
{
  scriptId = {
    TATA = TATA Moters,
    REL = Reliance Industries Ltd,
    LNT = L&T,
    SBI = State Bank of India
  },
  transactionType = {
    P = B,
    S = S
  }
}
Content of the CSV file:
Type,ArcesiumValue,GICValue
scriptId,TATA,TATA Moters
scriptId,REL,Reliance Industries Ltd
scriptId,LNT,L&T
scriptId,SBI,State Bank of India
transactionType,P,B
transactionType,S,S
How do I generate this using Java 8? Here is my attempt:
public void loadReferenceData() throws IOException {
    List<Map<String, Map<String, String>>> cache = Files.lines(Paths.get("data/referenceDataMapping.csv")).skip(1)
            .map(mapReferenceData).collect(Collectors.toList());
    System.out.println(cache);
}
public static Function<String, Map<String, Map<String, String>>> mapReferenceData = (line) -> {
    String[] sp = line.split(",");
    // NOTE: a fresh map is created for every line, so containsKey is never true and
    // the result is a List of single-entry maps rather than one merged map
    Map<String, Map<String, String>> cache = new HashMap<String, Map<String, String>>();
    try {
        if (cache.containsKey(sp[0])) {
            cache.get(sp[0]).put(sp[1], sp[2]);
        } else {
            Map<String, String> map = new HashMap<String, String>();
            map.put(sp[1], sp[2]);
            cache.put(sp[0], map);
        }
    } catch (NumberFormatException e) {
        e.printStackTrace();
    }
    return cache;
};
Well it is much simpler to use two Collectors:
Map<String, Map<String, String>> groupCSV = Files.lines(Paths.get("..."))
        .skip(1L)
        .map(l -> l.split(","))
        .collect(Collectors.groupingBy(a -> a[0], Collectors.toMap(a -> a[1], a -> a[2])));
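With the sample CSV above, groupCSV comes out as {scriptId={TATA=TATA Moters, REL=Reliance Industries Ltd, LNT=L&T, SBI=State Bank of India}, transactionType={P=B, S=S}} (entry order may differ, since HashMap does not preserve insertion order).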

How to serialize an Object to a Map with Moshi

I want to serialize an Object to a Map with Moshi. Here is my code using Gson:
public static Map<String, String> toMap(Object obj, Gson gson) {
    if (gson == null) {
        gson = new Gson();
    }
    String json = gson.toJson(obj);
    Map<String, String> map = gson.fromJson(json, new TypeToken<Map<String, String>>() {
    }.getType());
    return map;
}
How can I write this with Moshi?
Here's one way. Check out the toJsonValue doc here.
Moshi moshi = new Moshi.Builder().build();
JsonAdapter<Object> adapter = moshi.adapter(Object.class);
Object jsonStructure = adapter.toJsonValue(obj);
Map<String, Object> jsonObject = (Map<String, Object>) jsonStructure;
If you know the type of obj, it'd be better to look up the adapter of that type, rather than of Object. (The Object JsonAdapter has to look up the runtime type on every toJson call.)
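Put together as a drop-in analogue of the Gson helper above (a sketch; the unchecked cast assumes obj serializes to a JSON object):
@SuppressWarnings("unchecked")
public static Map<String, Object> toMap(Object obj, Moshi moshi) {
    if (moshi == null) {
        moshi = new Moshi.Builder().build();
    }
    JsonAdapter<Object> adapter = moshi.adapter(Object.class);
    // toJsonValue encodes straight to Java objects (Map/List/String/Double/Boolean/null),
    // so there is no intermediate JSON string to parse
    return (Map<String, Object>) adapter.toJsonValue(obj);
}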
@NanoJava8's solution crashes, but it can be made to work with a minor change: using Map instead of HashMap.
Type type = Types.newParameterizedType(Map.class, String.class, String.class);
JsonAdapter<Map<String,String>> adapter = moshi.adapter(type);
Map<String,String> map = adapter.fromJson(json);
As Jesse states in his answer, Moshi supports Map fields but not HashMap.
In Kotlin:
val type = Types.newParameterizedType(
    MutableMap::class.java,
    String::class.java,
    String::class.java
)
val adapter: JsonAdapter<Map<String, String>> = moshi.adapter(type)
val map: Map<String, String>? = adapter.fromJson(responseJson) // fromJson returns a nullable value
For reference, this is the HashMap-based version that crashes, because Moshi has no built-in adapter for HashMap:
Type type = Types.newParameterizedType(HashMap.class, String.class, String.class);
JsonAdapter<Map<String, String>> adapter = moshi.adapter(type);
Map<String, String> map = adapter.fromJson(json);
Alternatively, you can write a custom adapter for HashMap:
class HashMapJsonAdapter<K, V>(
    private val keyAdapter: JsonAdapter<K>,
    private val valueAdapter: JsonAdapter<V>
) : JsonAdapter<HashMap<K, V>>() {

    @Throws(IOException::class)
    override fun toJson(writer: JsonWriter, map: HashMap<K, V>?) {
        writer.beginObject()
        for ((key, value) in map ?: emptyMap<K, V>()) {
            if (key == null) {
                throw JsonDataException("Map key is null at ${writer.path}")
            }
            writer.promoteValueToName() // write the next value as the object name
            keyAdapter.toJson(writer, key)
            valueAdapter.toJson(writer, value)
        }
        writer.endObject()
    }

    @Throws(IOException::class)
    override fun fromJson(reader: JsonReader): HashMap<K, V>? {
        val result = linkedMapOf<K, V>()
        reader.beginObject()
        while (reader.hasNext()) {
            reader.promoteNameToValue() // read the object name as a value
            val name = keyAdapter.fromJson(reader)
            val value = valueAdapter.fromJson(reader)
            val replaced = result.put(name!!, value!!)
            if (replaced != null) {
                throw JsonDataException("Map key '$name' has multiple values at path ${reader.path}: $replaced and $value")
            }
        }
        reader.endObject()
        return result
    }

    override fun toString(): String = "JsonAdapter($keyAdapter=$valueAdapter)"

    companion object
}
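The custom adapter can then be instantiated directly, without registering it with Moshi (a usage sketch; note that fromJson(String) throws IOException):
Moshi moshi = new Moshi.Builder().build();
JsonAdapter<String> stringAdapter = moshi.adapter(String.class);
HashMapJsonAdapter<String, String> mapAdapter =
        new HashMapJsonAdapter<>(stringAdapter, stringAdapter);
HashMap<String, String> map = mapAdapter.fromJson(json);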

send multiple jms messages in one transaction

I have to send a message to two different queues (queue1 and queue2), and I want to roll back if the send fails for either queue.
My source code looks as follows. Can anyone offer some input on this?
public void sendMessage(final Map<String, String> mapMessage) {
    jmsTemplate.send(queue1, session -> {
        MapMessage message = session.createMapMessage();
        Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> pair = it.next();
            message.setStringProperty(pair.getKey(), pair.getValue());
        }
        message.setJMSRedelivered(true); // note: JMSRedelivered is normally set by the provider, not the sender
        message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
        return message;
    });
    jmsTemplate.send(queue2, session -> {
        MapMessage message = session.createMapMessage();
        Iterator<Entry<String, String>> it = mapMessage.entrySet().iterator();
        while (it.hasNext()) {
            Map.Entry<String, String> pair = it.next();
            message.setStringProperty(pair.getKey(), pair.getValue());
        }
        message.setJMSRedelivered(true);
        message.setJMSCorrelationID(UUID.randomUUID().toString().replaceAll("-", ""));
        return message;
    });
}
Start a transaction before entering the sendMessage method, e.g. with @Transactional - see the Spring Framework Reference Manual.
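A minimal sketch of that setup (the bean wiring is illustrative; JmsTransactionManager and setSessionTransacted are standard Spring JMS APIs):
@Configuration
@EnableTransactionManagement
public class JmsConfig {

    @Bean
    public PlatformTransactionManager transactionManager(ConnectionFactory connectionFactory) {
        // local JMS transactions: both sends commit or roll back together
        return new JmsTransactionManager(connectionFactory);
    }

    @Bean
    public JmsTemplate jmsTemplate(ConnectionFactory connectionFactory) {
        JmsTemplate template = new JmsTemplate(connectionFactory);
        template.setSessionTransacted(true); // join the transaction started around sendMessage
        return template;
    }
}
With that in place, annotating sendMessage with @Transactional makes the two sends atomic: if the second send throws, the first is rolled back as well.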
