I'm marshaling objects which have fields of type Set. The implementation is unsorted, so the order of the resulting XML elements is arbitrary; moreover, I get a different order every time I marshal.
Is there a way to tell the marshaller to sort a field's contents during marshaling?
You could take advantage of a SortedSet. If you initialize the Set field on your instance, then JAXB will use that implementation instead of creating a new one:
package forum7686859;
import java.util.Set;
import java.util.TreeSet;
import javax.xml.bind.annotation.XmlRootElement;
@XmlRootElement
public class Root {
//private Set<String> children = new HashSet<String>();
private Set<String> children = new TreeSet<String>();
public Set<String> getChildren() {
return children;
}
public void setChildren(Set<String> children) {
this.children = children;
}
}
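The sort order comes entirely from the Set implementation JAXB iterates over, so you can see the effect without any JAXB machinery. A minimal stand-alone sketch (class name is mine) showing the iteration order of TreeSet, including a custom Comparator variant:

```java
import java.util.Comparator;
import java.util.Set;
import java.util.TreeSet;

public class SortedSetDemo {
    public static void main(String[] args) {
        // TreeSet iterates in natural order, so JAXB emits the elements sorted.
        Set<String> children = new TreeSet<>();
        children.add("charlie");
        children.add("alpha");
        children.add("bravo");
        System.out.println(children); // [alpha, bravo, charlie]

        // A TreeSet built with a custom Comparator changes the order accordingly.
        Set<String> reversed = new TreeSet<>(Comparator.reverseOrder());
        reversed.addAll(children);
        System.out.println(reversed); // [charlie, bravo, alpha]
    }
}
```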
I have a POJO:
class Test {
private int i;
void setI(int i) {
this.i = i;
}
}
this is what I have so far for the aspect:
public aspect t perthis(within(@Tracking *)) {
private Set<String> set = new HashSet<>();
pointcut setterMethod() : execution(public void set*(..));
after(Object o) returning() : setterMethod() && this(o) {
set.add(thisJoinPoint.getSignature().getName());
System.out.println(set);
}
public Set<String> go() {
return set;
}
}
I want a Set<String> set for any instance of ANY class that has @Tracking. I also want to add the go() method for any instance of ANY class that has @Tracking.
Can't figure out the syntax. The go() method doesn't get added. If I put Test.go() then the method gets added, but then it crashes at runtime.
Marker annotation:
package de.scrum_master.tracking;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;
@Retention(RUNTIME)
@Target(TYPE)
public @interface Track {}
Sample POJOs with/without marker annotation:
We use two sample classes for positive/negative tests.
package de.scrum_master.app;
import de.scrum_master.tracking.Track;
@Track
class TestTracked {
private int number;
private String text;
public void setNumber(int number) {
this.number = number;
}
public void setText(String text) {
this.text = text;
}
@Override
public String toString() {
return "TestTracked [number=" + number + ", text=" + text + "]";
}
}
package de.scrum_master.app;
class TestUntracked {
private int number;
private String text;
public void setNumber(int number) {
this.number = number;
}
public void setText(String text) {
this.text = text;
}
@Override
public String toString() {
return "TestUntracked [number=" + number + ", text=" + text + "]";
}
}
"Dirty tracker" interface:
We want the aspect to implement the following interface for each class annotated by @Track by means of inter-type declaration (ITD).
package de.scrum_master.tracking;
import java.util.Set;
public interface Trackable {
Set<String> getDirty();
}
Driver application:
Here, we assume that each POJO class annotated with the marker annotation automagically implements the Trackable interface and therefore knows the getDirty() method, which we call in order to verify that the aspect correctly tracks setter calls.
package de.scrum_master.app;
import de.scrum_master.tracking.Trackable;
public class Application {
public static void main(String[] args) {
TestTracked testTracked = new TestTracked();
testTracked.setNumber(11);
testTracked.setText("foo");
System.out.println(testTracked);
if (testTracked instanceof Trackable)
System.out.println("Dirty members: " + ((Trackable) testTracked).getDirty());
TestUntracked testUntracked = new TestUntracked();
testUntracked.setNumber(22);
testUntracked.setText("bar");
System.out.println(testUntracked);
if (testUntracked instanceof Trackable)
System.out.println("Dirty members: " + ((Trackable) testUntracked).getDirty());
}
}
Aspect:
This aspect makes each @Track-annotated class implement interface Trackable and provides both a private field storing tracking information and a getDirty() method implementation returning its value. Furthermore, the aspect makes sure to store the "dirty" information for each successfully executed setter.
package de.scrum_master.tracking;
import java.util.HashSet;
import java.util.Set;
public aspect TrackingAspect {
private Set<String> Trackable.dirty = new HashSet<>();
public Set<String> Trackable.getDirty() {
return dirty;
}
declare parents : @Track * implements Trackable;
pointcut setterMethod() : execution(public void set*(..));
after(Trackable trackable) returning() : setterMethod() && this(trackable) {
System.out.println(thisJoinPoint);
trackable.dirty.add(thisJoinPoint.getSignature().getName().substring(3));
}
}
Console log:
You will see this when running the driver application:
execution(void de.scrum_master.app.TestTracked.setNumber(int))
execution(void de.scrum_master.app.TestTracked.setText(String))
TestTracked [number=11, text=foo]
Dirty members: [Number, Text]
TestUntracked [number=22, text=bar]
The reason why we do not need perthis or pertarget instantiation is that we store the "dirty" information right inside the original classes.
Alternatively, we could use per* instantiation and keep all the information inside the corresponding aspect instances instead of using ITD. In that case, however, the "dirty" information would be inaccessible from outside the aspect, which might even be desirable with regard to encapsulation. But then whatever action needs to be performed when storing the "dirty" entries would also need to happen from inside the aspect.
As you did not provide an MCVE and hence your question is lacking detail, I did not consider this functionality in the aspect. I can easily imagine how this could be done with both aspect variants - per* instantiation vs. ITD - but I hate to speculate and then be wrong.
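To make the ITD mechanics more tangible, here is a plain-Java sketch (my illustration, not actual woven output) of what the aspect effectively adds to an @Track-annotated class:

```java
import java.util.HashSet;
import java.util.Set;

public class WovenSketch {
    // The interface the aspect declares as a parent via "declare parents".
    interface Trackable { Set<String> getDirty(); }

    // Roughly what TestTracked behaves like after weaving.
    static class TestTracked implements Trackable {
        private int number;
        private final Set<String> dirty = new HashSet<>(); // ITD field
        public Set<String> getDirty() { return dirty; }    // ITD method

        public void setNumber(int number) {
            this.number = number;
            dirty.add("Number"); // effect of the after-returning advice
        }
    }

    public static void main(String[] args) {
        TestTracked t = new TestTracked();
        t.setNumber(11);
        System.out.println("Dirty members: " + t.getDirty()); // Dirty members: [Number]
    }
}
```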
I have a list of products returned from a controller as a Flux. I am able to verify the count, but I don't know how to verify the individual products. The product has a lot of properties, so I don't want to do a direct equals, which may not work anyway. Here is a subset of the properties of the class. The repository layer works fine and returns the data. The problem is that I don't know how to use the StepVerifier to validate the data returned by the ProductService. I am using a mock ProductRepository (not shown), which just returns a Flux of hardcoded products like this: Flux.just(productData)
import java.time.LocalDateTime;
import java.util.List;
public class ProductData {
static class Order {
String orderId;
String orderedBy;
LocalDateTime orderDate;
List<OrderItem> orderItems;
}
static class OrderItem {
String itemCode;
String name;
int quantity;
int quantityOnHold;
ItemGroup group;
}
static class ItemGroup{
String category;
String warehouseID;
String warehoueLocation;
}
}
Here is the service class.
import lombok.RequiredArgsConstructor;
import reactor.core.publisher.Flux;
@RequiredArgsConstructor
public class ProductService {
final ProductRepository productRepository;
Flux<ProductData> findAll(){
return productRepository.findAll();
}
}
Since your example ProductData class doesn't have any fields to verify, let's assume it has one order field:
public class ProductData {
Order order;
//rest of the code
}
Then fields can be verified like this:
@Test
void whenFindAllThenReturnFluxOfProductData() {
Flux<ProductData> products = productRepository.findAll();
StepVerifier.create(products)
.assertNext(Assertions::assertNotNull)
.assertNext(product -> {
ProductData.Order order = product.order;
LocalDateTime expectedOrderDate = LocalDateTime.now();
assertNotNull(order);
assertEquals("expectedOrderId", order.orderId);
assertEquals(expectedOrderDate, order.orderDate);
//more checks
}).assertNext(product -> {
List<ProductData.OrderItem> orderItems = product.order.orderItems;
int expectedSize = 12;
assertNotNull(orderItems);
assertEquals( expectedSize, orderItems.size());
//more checks
})
.expectComplete()
.verify();
}
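Since assertNext simply takes a Consumer<T>, repeated per-product checks can be factored into reusable assertion lambdas. A dependency-free sketch with a stripped-down ProductData (class and method names here are illustrative, not from Reactor):

```java
import java.util.function.Consumer;

public class AssertionLambdas {
    static class Order { String orderId; }
    static class ProductData { Order order; }

    // Reusable check; usable as StepVerifier.create(flux).assertNext(hasOrderId("..."))
    static Consumer<ProductData> hasOrderId(String expected) {
        return product -> {
            if (product.order == null || !expected.equals(product.order.orderId)) {
                throw new AssertionError("expected orderId " + expected);
            }
        };
    }

    public static void main(String[] args) {
        ProductData p = new ProductData();
        p.order = new Order();
        p.order.orderId = "expectedOrderId";
        hasOrderId("expectedOrderId").accept(p); // passes silently
        System.out.println("checks passed");
    }
}
```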
I have something in mind:
If I use Spring Boot to build a Neo4j-based project, I need to define the properties and methods of the Entity in advance. If I later want to add new edges or attributes to the graph, even new types of nodes, how should I handle entities?
Referring to the Spring Data Neo4j docs, you can write entities in the following way.
Example entity code:
package com.example.accessingdataneo4j;
import java.util.Collections;
import java.util.HashSet;
import java.util.Optional;
import java.util.Set;
import java.util.stream.Collectors;
import org.neo4j.ogm.annotation.GeneratedValue;
import org.neo4j.ogm.annotation.Id;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Relationship;
@NodeEntity
public class Person {
@Id @GeneratedValue private Long id;
private String name;
private Person() {
// Empty constructor required as of Neo4j API 2.0.5
}
public Person(String name) {
this.name = name;
}
/**
* Neo4j doesn't REALLY have bi-directional relationships. It just means when querying
* to ignore the direction of the relationship.
* https://dzone.com/articles/modelling-data-neo4j
*/
@Relationship(type = "TEAMMATE", direction = Relationship.UNDIRECTED)
public Set<Person> teammates;
public void worksWith(Person person) {
if (teammates == null) {
teammates = new HashSet<>();
}
teammates.add(person);
}
public String toString() {
return this.name + "'s teammates => "
+ Optional.ofNullable(this.teammates).orElse(
Collections.emptySet()).stream()
.map(Person::getName)
.collect(Collectors.toList());
}
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Edges or Relationships can be done by the following way
@Relationship(type = "TEAMMATE", direction = Relationship.UNDIRECTED)
public Set<Person> teammates;
Here the Person node is connected to other nodes (team-mates) and stored.
Wherever you design, you can add new classes, write the schema, and start the server.
To perform CRUD operations you can use a Spring Data repository.
PersonRepository extends the GraphRepository interface and plugs in the type on which it operates: Person. This interface comes with many operations, including standard CRUD (create, read, update, and delete) operations.
I am using Chronicle queue 5.16.8
I am getting this warning
net.openhft.chronicle.threads.Pauser : Using Pauser.sleepy() as not enough processors, have 1, needs 8+
Is it possible to increase this processor count?
source code
My approach here is to use a Chronicle Map for storing the reader index. I guess I could get the same behavior by turning on recordHistory ...
I had to use a Jackson JSON converter ... I didn't get how to use the writeDocument method.
The while(true) loop is another thing I don't like here ... I couldn't figure out how to find the last index in the queue.
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;
import com.fasterxml.jackson.datatype.jsr310.JavaTimeModule;
import net.openhft.chronicle.map.ChronicleMap;
import net.openhft.chronicle.map.ChronicleMapBuilder;
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.ExcerptTailer;
import net.openhft.chronicle.queue.RollCycles;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;
import org.springframework.stereotype.Service;
import javax.annotation.PostConstruct;
import java.io.File;
import java.util.LinkedList;
import java.util.List;
@Service
public class QueueService {
public static final String INDEX = "index";
private ObjectMapper mapper = new ObjectMapper(); // json converter
@PostConstruct
public void init() {
mapper.registerModule(new JavaTimeModule());
mapper.configure(SerializationFeature.WRITE_DATES_AS_TIMESTAMPS, false);
}
public void write(List dtos, String path) throws Exception {
try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path).rollCycle(RollCycles.DAILY).build()) {
final ExcerptAppender appender = queue.acquireAppender();
for (int i=0; i<dtos.size(); i++) {
appender.writeText(mapper.writeValueAsString(dtos.get(i)));
}
}
}
public void write(Object dto, String path) throws Exception {
try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path).rollCycle(RollCycles.DAILY).build()) {
final ExcerptAppender appender = queue.acquireAppender();
appender.writeText(mapper.writeValueAsString(dto));
}
}
public List readList(String path, Class aClass) throws Exception {
List dtoList = new LinkedList<>();
try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path).build()) {
final ExcerptTailer tailer = queue.createTailer();
ChronicleMap<String, Long> indexMap = getReaderIndexMap(queue.fileAbsolutePath());
if (indexMap.containsKey(INDEX)) {
tailer.moveToIndex(indexMap.get(INDEX));
}
while (true) { // something smart ?
String json = tailer.readText();
if (json == null) {
break;
} else {
dtoList.add(mapper.readValue(json, aClass));
}
}
indexMap.put(INDEX, tailer.index());
indexMap.close();
}
return dtoList;
}
public ChronicleMap<String, Long> getReaderIndexMap(String queueName) throws Exception {
ChronicleMapBuilder<String, Long> indexReaderMap = ChronicleMapBuilder.of(String.class, Long.class)
.name("index-reader-map")
.averageKey(INDEX)
.entries(1);
ChronicleMap<String, Long> map = indexReaderMap.createPersistedTo(new File(queueName+"/reader.idx"));
return map;
}
}
This is based on the number of available processors Java believes you have.
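You can inspect the value the JVM reports, which is what such checks typically consult:

```java
public class CpuCheck {
    public static void main(String[] args) {
        // The number of logical processors the JVM sees; in a container or VM
        // this reflects the configured CPU limit, not the physical host.
        int cpus = Runtime.getRuntime().availableProcessors();
        System.out.println("Available processors: " + cpus);
    }
}
```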
If you have a virtual machine, you can configure your host to have more CPUs.
If you have a physical machine, you can change your processor to one with more cores.
Or you can ignore the warning.
Busy pausing with only one CPU is probably not a good idea as it will use all the CPU you have.
NOTE: We generally recommend having at least 4 cores, even for development.
I have a simple POJO that has a Map inside it.
public class Product {
public Map map;
}
then my csv looks like this:
"mapEntry1","mapEntry2","mapEntry3"
So I created a custom cell processor for parsing those:
public class MapEntryCellProcessor extends CellProcessorAdaptor {
public Object execute(Object val, CsvContext context) {
return next.execute(new AbstractMap.SimpleEntry<>("somekey", val), context);
}
}
and then I add an entry setter method in my Product:
public void setName(Entry<String, String> entry) {
if (getName() == null) {
name = new HashMap<>();
}
name.put(entry.getKey(), entry.getValue());
}
Unfortunately this means I have two setter methods: one that accepts a Map and another that accepts an Entry, which doesn't really work for me (I have no control over how the POJOs are generated). Is there any other way I can parse such a CSV and have only a setter that accepts a Map in my Product?
It's possible to write a cell processor that collects each column into a map. For example, the following processor allows you to specify the key and the map to add to.
package org.supercsv.example;
import java.util.Map;
import org.supercsv.cellprocessor.CellProcessorAdaptor;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.util.CsvContext;
public class MapCollector extends CellProcessorAdaptor {
private String key;
private Map<String, String> map;
public MapCollector(String key, Map<String, String> map){
this.key = key;
this.map = map;
}
public MapCollector(String key, Map<String, String> map,
CellProcessor next){
super(next);
this.key = key;
this.map = map;
}
public Object execute(Object value, CsvContext context) {
validateInputNotNull(value, context);
map.put(key, String.valueOf(value));
return next.execute(map, context);
}
}
Then assuming your Product bean has a field name of type Map<String,String>, you can use the processor as follows.
package org.supercsv.example;
import java.io.IOException;
import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;
import junit.framework.TestCase;
import org.supercsv.cellprocessor.ift.CellProcessor;
import org.supercsv.io.CsvBeanReader;
import org.supercsv.io.ICsvBeanReader;
import org.supercsv.prefs.CsvPreference;
public class MapCollectorTest extends TestCase {
private static final String CSV = "John,L,Smith\n" +
"Sally,P,Jones";
public void testMapCollector() throws IOException{
ICsvBeanReader reader = new CsvBeanReader(
new StringReader(CSV),
CsvPreference.STANDARD_PREFERENCE);
// only need to map the field once, so use nulls
String[] nameMapping = new String[]{"name", null, null};
// create processors for each row (otherwise every bean
// will contain the same map!)
Product product;
while ((product = reader.read(Product.class,
nameMapping, createProcessors())) != null){
System.out.println(product.getName());
}
}
private static CellProcessor[] createProcessors() {
Map<String, String> nameMap = new HashMap<String, String>();
final CellProcessor[] processors = new CellProcessor[]{
new MapCollector("name1", nameMap),
new MapCollector("name2", nameMap),
new MapCollector("name3", nameMap)};
return processors;
}
}
This outputs:
{name3=Smith, name2=L, name1=John}
{name3=Jones, name2=P, name1=Sally}
You'll notice that while the processors execute on all 3 columns, it's only mapped to the bean once (hence the nulls in the nameMapping array).
I've also created the processors each time a row is read, otherwise every bean will be using the same map...which probably isn't what you want ;)
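The shared-map pitfall is easy to reproduce without Super CSV at all; in this small sketch (mine, not from the Super CSV API) two "rows" reuse one map, so the first bean's data is silently overwritten:

```java
import java.util.HashMap;
import java.util.Map;

public class SharedMapPitfall {
    public static void main(String[] args) {
        Map<String, String> shared = new HashMap<>();

        // Row 1: the first bean keeps a reference to the shared map.
        shared.put("name1", "John");
        Map<String, String> bean1Name = shared;

        // Row 2: reusing the same map overwrites row 1's values.
        shared.put("name1", "Sally");

        System.out.println(bean1Name.get("name1")); // Sally, not John
    }
}
```

Creating a fresh map (and fresh processors) per row, as in createProcessors() above, avoids this aliasing.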