Multiple setImplicitCollections using XStreamMarshaller - spring

I am trying out XStreamMarshaller, but when I try to parse two xsd:complexType elements of an XML file I get this error:
class[1] : com.mc.batch.mapping.authorization.PIECES_JOINTES
converter-type[1] : com.thoughtworks.xstream.converters.reflection.ReflectionConverter
XML:
<DOCUMENT>
<ARTICLES>
<ARTICLE>
<NUMERO_ARTICLE>1</NUMERO_ARTICLE>
</ARTICLE>
<ARTICLE>
<NUMERO_ARTICLE>2</NUMERO_ARTICLE>
</ARTICLE>
</ARTICLES>
<PIECES_JOINTES>
<PIECES_JOINTE>
<TYPE_DOCUMENT>PDF</TYPE_DOCUMENT>
</PIECES_JOINTE>
<PIECES_JOINTE>
<TYPE_DOCUMENT>WORD</TYPE_DOCUMENT>
</PIECES_JOINTE>
<PIECES_JOINTE>
<TYPE_DOCUMENT>XLS</TYPE_DOCUMENT>
</PIECES_JOINTE>
</PIECES_JOINTES>
</DOCUMENT>
Code:
@Bean
MessageConverter messageConverter() {
Map<String, Class<?>> aliases = new HashMap<>();
XStreamMarshaller marshallerAuthorization = new XStreamMarshaller();
aliases.put("DOCUMENT", DOCUMENT.class);
marshallerAuthorization.setAliases(aliases);
Map<Class<?>, String> implicitArticle = Collections.singletonMap(ARTICLES.class, "ARTICLE");
Map<Class<?>, String> implicitPiece = Collections.singletonMap(PIECES_JOINTES.class, "PIECES_JOINTE");
marshallerAuthorization.setImplicitCollections(implicitPiece);
marshallerAuthorization.setImplicitCollections(implicitArticle);
MarshallingMessageConverter messageConverterAuthorization = new MarshallingMessageConverter(marshallerAuthorization);
messageConverterAuthorization.setTargetType(MessageType.TEXT);
return messageConverterAuthorization;
}
But how can I use setImplicitCollections for both PIECES_JOINTES.class and ARTICLES.class?
How do I resolve this conflict? Any help would be welcome. Thanks in advance.

how to use two setImplicitCollections for mapping PIECES_JOINTES.class and ARTICLES.class
You don't need to call setImplicitCollections twice; the value passed in the second call overrides the first one. The method accepts a map, so you can write something like:
Map<Class<?>, String> implicitCollections = new HashMap<>();
implicitCollections.put(ARTICLES.class, "ARTICLE");
implicitCollections.put(PIECES_JOINTES.class, "PIECES_JOINTE");
marshallerAuthorization.setImplicitCollections(implicitCollections);
Instead of:
Map implicitArticle = Collections.singletonMap(ARTICLES.class, "ARTICLE");
Map implicitPiece = Collections.singletonMap(PIECES_JOINTES.class, "PIECES_JOINTE");
marshallerAuthorization.setImplicitCollections(implicitPiece);
marshallerAuthorization.setImplicitCollections(implicitArticle);

Related

How to extract the values from the nested Maps using lambdas expression?

I need to extract the foreign-exchange conversion from a nested map using a Java 8 lambda expression.
I was able to solve it the old-school way with a Java 8 forEach, but I want to see how it works with a lambda/stream expression.
E.g. I want to filter the maps inside the map.
For cmp1, fee1, Inr-Try the value present is 31, which is the desired output.
Map<String, Map<String, Map<String, String>>> campaigns = new HashMap<>();
// camp1
Map<String, String> forexMap3_1 = new HashMap<>();
forexMap3_1.put("Eur-Try", "11");
forexMap3_1.put("Usd-Try", "21");
forexMap3_1.put("Inr-Try", "31");
Map<String, String> forexMap3_2 = new HashMap<>();
forexMap3_2.put("Eur-Try", "12");
forexMap3_2.put("Usd-Try", "22");
forexMap3_2.put("Inr-Try", "32");
Map<String, Map<String, String>> feeMap2 = new HashMap<>();
feeMap2.put("fee1", forexMap3_1);
feeMap2.put("fee2", forexMap3_2);
campaigns.put("cmp1", feeMap2);
// camp2
Map<String, String> forexMap3_3 = new HashMap<>();
forexMap3_3.put("Eur-Try", "11");
forexMap3_3.put("Usd-Try", "21");
forexMap3_3.put("Inr-Try", "31");
Map<String, String> forexMap3_4 = new HashMap<>();
forexMap3_4.put("Eur-Try", "12");
forexMap3_4.put("Usd-Try", "22");
forexMap3_4.put("Inr-Try", "32");
Map<String, Map<String, String>> feeMap3 = new HashMap<>();
feeMap3.put("fee3", forexMap3_3);
feeMap3.put("fee4", forexMap3_4);
campaigns.put("cmp2", feeMap3);
Try this:
campaigns.entrySet().stream().filter(x -> x.getKey().equals(yourKey)).flatMap(x -> x.getValue().entrySet().stream()).collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
Just iterate over the campaign children:
HashMap<String, String> finalMap = new HashMap<>();
campaigns.forEach((s, stringMapMap) -> stringMapMap.forEach((s1, map) -> finalMap.putAll(map)));
System.out.println(finalMap.get("Inr-Try")); // prints 31 or 32: later putAll calls overwrite earlier entries
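If the goal is the value for one specific campaign/fee/pair, the stream pipeline can target it directly instead of flattening everything. A minimal, self-contained sketch (the class and method names are mine, not from the original post):

```java
import java.util.*;

public class NestedMapLookup {
    // Walk campaigns -> fees -> currency pairs with a stream pipeline,
    // returning the first rate found for the given campaign, fee and pair
    public static Optional<String> rate(
            Map<String, Map<String, Map<String, String>>> campaigns,
            String campaign, String fee, String pair) {
        return campaigns.entrySet().stream()
                .filter(c -> c.getKey().equals(campaign))
                .flatMap(c -> c.getValue().entrySet().stream())
                .filter(f -> f.getKey().equals(fee))
                .map(f -> f.getValue().get(pair))
                .filter(Objects::nonNull)
                .findFirst();
    }
}
```

For the data above, rate(campaigns, "cmp1", "fee1", "Inr-Try") yields "31", the desired output, and there is no risk of a later map overwriting the value.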

Spring Batch Writer to write Map<Key,Values> to file

I am using Spring Batch to develop CSV feed files. I used a writer similar to the one below to generate my output file.
@Bean
public FlatFileItemWriter<CustomObj> writer()
{
BeanWrapperFieldExtractor<CustomObj> extractor = new BeanWrapperFieldExtractor<>();
extractor.setNames(new String[] {"name", "emailAddress", "dob"});
DelimitedLineAggregator<CustomObj> lineAggregator = new DelimitedLineAggregator<>();
lineAggregator.setDelimiter(";");
lineAggregator.setFieldExtractor(extractor);
FlatFileItemWriter<CustomObj> writer = new FlatFileItemWriter<>();
writer.setResource(outputResource);
writer.setAppendAllowed(true);
//writer.setHeaderCallback(headerCallback);
writer.setLineAggregator(lineAggregator);
return writer;
}
output
name;emailAddress;dob
abc;abc@xyz.com;10-10-20
But now we have a requirement to make this writer generic, such that we no longer pass the object; instead we pass a Map<String, String> in which the object values are stored as key-value pairs.
E.g. name -> abc, emailAddress -> abc@xyz.com, dob -> 10-10-20
We tried a writer similar to the one below.
But the problem is that no FieldExtractor is set, so the header and the values may get out of sync.
The PassThroughFieldExtractor just passes all the values in the collection (the Map) in arbitrary order; even if the Map contains more fields, it prints all of them.
Header and values are not bound together in this case.
Is there any way to implement a custom field extractor that ensures that even if we change the ordering of the header, the ordering of the values remains consistent with it?
@Bean
public FlatFileItemWriter<Map<String,String>> writer()
{
DelimitedLineAggregator<Map<String,String>> lineAggregator = new DelimitedLineAggregator<>();
lineAggregator.setDelimiter(";");
lineAggregator.setFieldExtractor(new PassThroughFieldExtractor<>());
FlatFileItemWriter<Map<String,String>> writer = new FlatFileItemWriter<>();
writer.setResource(outputResource);
writer.setAppendAllowed(true);
writer.setLineAggregator(lineAggregator);
return writer;
}
output
name;emailAddress;dob
abc@xyz.com;abc;10-10-20
expected Output
case 1:
name;emailAddress;dob
abc;abc@xyz.com;10-10-20
case 2:
emailAddress;dob
abc@xyz.com;10-10-20
You need a custom field extractor that extracts values from the map in the same order as the headers. Spring Batch does not provide such an extractor, so you need to implement it yourself. For example, you can pass the headers to the extractor at construction time and extract values from the map according to the header order.
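A minimal sketch of such an extractor (in real code the class would implement Spring Batch's FieldExtractor<Map<String, String>> and return this array from its extract method; the interface is left out here so the ordering logic stands alone):

```java
import java.util.*;

public class MapFieldExtractor {
    private final String[] names; // header order, fixed at construction time

    public MapFieldExtractor(String... names) {
        this.names = names;
    }

    // Values come out in header order; missing keys become empty strings
    // so the columns never get out of sync with the header
    public Object[] extract(Map<String, String> item) {
        Object[] fields = new Object[names.length];
        for (int i = 0; i < names.length; i++) {
            fields[i] = item.getOrDefault(names[i], "");
        }
        return fields;
    }
}
```

Wiring new MapFieldExtractor("name", "emailAddress", "dob") into the DelimitedLineAggregator instead of PassThroughFieldExtractor would cover case 1 above, and changing the constructor arguments covers case 2.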

Evaluate expressions as well as regex in single field in custom processors of Nifi

In my custom processor I have added the field below:
public static final PropertyDescriptor CACHE_VALUE = new PropertyDescriptor.Builder()
.name("Cache Value")
.description("Cache Value")
.required(true)
.addValidator(StandardValidators.NON_EMPTY_VALIDATOR)
.expressionLanguageSupported(ExpressionLanguageScope.FLOWFILE_ATTRIBUTES)
.build();
I expect to read flowfile attributes like ${fieldName},
as well as a regex like .* to read the full content, or some part of the content, like $.nodename.subnodename.
For that I have added the code below:
for (FlowFile flowFile : flowFiles) {
final String cacheKey = context.getProperty(CACHE_KEY).evaluateAttributeExpressions(flowFile).getValue();
String cacheValue = null;
cacheValue = context.getProperty(CACHE_VALUE).evaluateAttributeExpressions(flowFile).getValue();
if (".*".equalsIgnoreCase(cacheValue.trim())) {
final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
session.exportTo(flowFile, bytes);
cacheValue = bytes.toString();
}
cache.put(cacheKey, cacheValue);
session.transfer(flowFile, REL_SUCCESS);
}
How can I achieve this for some part of the content, like $.nodename.subnodename?
Do I need to parse the JSON, or is there another way?
You will either have to parse the JSON yourself, or use an EvaluateJsonPath processor before this processor to extract content values into attributes via JSON Path expressions, and then reference those attribute values in your custom code.
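If you do parse the JSON yourself, the $.nodename.subnodename lookup reduces to walking nested maps. A rough sketch, assuming the flowfile content has already been parsed into nested Maps (e.g. by Jackson or another JSON library; only the path-walking part is shown, and the class name is mine):

```java
import java.util.*;

public class JsonPathLite {
    // Resolve a simple dotted path like "$.nodename.subnodename"
    // against a JSON document already parsed into nested Maps
    public static Object resolve(Map<String, Object> root, String path) {
        Object current = root;
        for (String part : path.replaceFirst("^\\$\\.", "").split("\\.")) {
            if (!(current instanceof Map)) {
                return null; // path goes deeper than the document
            }
            current = ((Map<?, ?>) current).get(part);
        }
        return current;
    }
}
```

This only covers plain object nesting; arrays, quoted keys and filters are exactly why delegating to EvaluateJsonPath or a real JSON Path library is usually the safer choice.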

Convert ImmutableListMultimap to Map using Collectors.toMap

I would like to convert a ImmutableListMultimap<String, Character> to Map<String, List<Character>>.
I used to do it in the non-stream way as follows
void convertMultiMaptoList(ImmutableListMultimap<String, Character> reverseImmutableMultiMap) {
Map<String, List<Character>> result = new TreeMap<>();
for( Map.Entry<String, Character> entry: reverseImmutableMultiMap.entries()) {
String key = entry.getKey();
Character t = entry.getValue();
result.computeIfAbsent(key, x-> new ArrayList<>()).add(t);
}
//reverseImmutableMultiMap.entries().stream().collect(Collectors.toMap)
}
I was wondering how to write the same logic the Java 8 stream way (Collectors.toMap).
Please share your thoughts.
Well, there is already an asMap method that you can use to make this easier:
Builder<String, Character> builder = ImmutableListMultimap.builder();
builder.put("12", 'c');
builder.put("12", 'c');
ImmutableListMultimap<String, Character> map = builder.build();
Map<String, List<Character>> map2 = map.asMap()
.entrySet()
.stream()
.collect(Collectors.toMap(Entry::getKey, e -> new ArrayList<>(e.getValue())));
If, on the other hand, you are OK with the return type of asMap, then it's a simple method call:
ImmutableMap<String, Collection<Character>> asMap = map.asMap();
Map<String, List<Character>> result = reverseImmutableMultiMap.entries().stream()
.collect(groupingBy(Entry::getKey, TreeMap::new, mapping(Entry::getValue, toList())));
The important detail is mapping: it adapts the downstream collector (toList) so that it collects List<Character> instead of List<Entry<String, Character>>, applying the mapping function Entry::getValue.
groupingBy groups all entries by the String key.
toList collects all values with the same key into a list.
Also, passing TreeMap::new as an argument to groupingBy makes sure you get that specific type of Map instead of the default HashMap.
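The same pipeline works without Guava; here is a self-contained version over plain Map.Entry values, so the groupingBy/mapping/TreeMap::new behavior can be seen in isolation:

```java
import java.util.*;
import static java.util.stream.Collectors.*;

public class MultimapToMap {
    // Group (key, value) entries into a sorted Map<String, List<Character>>,
    // preserving encounter order within each group
    public static TreeMap<String, List<Character>> group(
            List<Map.Entry<String, Character>> entries) {
        return entries.stream()
                .collect(groupingBy(Map.Entry::getKey, TreeMap::new,
                        mapping(Map.Entry::getValue, toList())));
    }
}
```

Swapping the entry list for reverseImmutableMultiMap.entries() gives exactly the second answer above.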

How to get all Keys from Redis using redis template

I have been stuck on this problem for quite some time. I want to get keys from Redis using RedisTemplate.
I tried redisTemplate.keys("*");
but it doesn't fetch anything. Even with a pattern it doesn't work.
Can you please advise on the best solution to this?
I have just consolidated the answers we have seen here.
There are two ways of getting keys from Redis when using RedisTemplate.
1. Directly from RedisTemplate
Set<String> redisKeys = template.keys("samplekey*");
// Store the keys in a List
List<String> keysList = new ArrayList<>();
Iterator<String> it = redisKeys.iterator();
while (it.hasNext()) {
String data = it.next();
keysList.add(data);
}
Note: You should have configured redisTemplate with a StringRedisSerializer in your bean.
If you use Java-based bean configuration:
redisTemplate.setDefaultSerializer(new StringRedisSerializer());
If you use spring.xml-based bean configuration:
<bean id="stringRedisSerializer" class="org.springframework.data.redis.serializer.StringRedisSerializer"/>
<!-- redis template definition -->
<bean
id="redisTemplate"
class="org.springframework.data.redis.core.RedisTemplate"
p:connection-factory-ref="jedisConnectionFactory"
p:keySerializer-ref="stringRedisSerializer"
/>
2. From JedisConnectionFactory
RedisConnection redisConnection = template.getConnectionFactory().getConnection();
Set<byte[]> redisKeys = redisConnection.keys("samplekey*".getBytes());
List<String> keysList = new ArrayList<>();
Iterator<byte[]> it = redisKeys.iterator();
while (it.hasNext()) {
byte[] data = (byte[]) it.next();
keysList.add(new String(data, 0, data.length));
}
redisConnection.close();
If you don't close this connection explicitly, you will run into an exhaustion of the underlying jedis connection pool as said in https://stackoverflow.com/a/36641934/3884173.
try:
Set<byte[]> keys = RedisTemplate.getConnectionFactory().getConnection().keys("*".getBytes());
Iterator<byte[]> it = keys.iterator();
while(it.hasNext()){
byte[] data = (byte[])it.next();
System.out.println(new String(data, 0, data.length));
}
Try redisTemplate.setKeySerializer(new StringRedisSerializer());
Avoid using the KEYS command; it can ruin performance when executed against large databases.
You should use the SCAN command instead. Here is how you can do it:
RedisConnection redisConnection = null;
try {
redisConnection = redisTemplate.getConnectionFactory().getConnection();
ScanOptions options = ScanOptions.scanOptions().match("myKey*").count(100).build();
Cursor<byte[]> c = redisConnection.scan(options);
while (c.hasNext()) {
logger.info(new String(c.next()));
}
} finally {
redisConnection.close(); //Ensure closing this connection.
}
Or do it much more simply with the Redisson Redis Java client:
Iterable<String> keysIterator = redisson.getKeys().getKeysByPattern("test*", 100);
for (String key : keysIterator) {
logger.info(key);
}
Try
import org.springframework.data.redis.core.RedisTemplate;
import org.apache.commons.collections.CollectionUtils;
String key = "example*";
Set<String> keys = redisTemplate.keys(key);
if (CollectionUtils.isEmpty(keys)) return null;
List<Object> list = redisTemplate.opsForValue().multiGet(keys);
That did work, but it seems not recommended, because we can't use the KEYS command in production. I assume RedisTemplate.getConnectionFactory().getConnection().keys calls the Redis KEYS command. What are the alternatives?
I was using redisTemplate.keys(), but it was not working, so I used Jedis instead and it worked. The following is the code I used.
Jedis jedis = new Jedis("localhost", 6379);
Set<String> keys = jedis.keys("*");
for (String key : keys) {
// do something
} // for
A solution can look like this:
String pattern = "abc" + "*";
Set<String> keys = jedis.keys(pattern);
for (String key : keys) {
// process each key
}
Or you can use jedis.hscan() and ScanParams instead.