How to sum multiple fields of a list of objects using stream in java - spring-boot

I'm trying to get the sums of two fields from a model class and return them using a POJO, but I keep getting syntax errors. What I am trying to achieve is similar to the highest-voted answer in this question: Summing multiple different fields in a list of objects using the streams api? Here is my model:
public class BranchAccount {
    @NotNull(message = "Account balance is required")
    private Double accountBalance;
    @NotNull(message = "Profit is required")
    private Double profit;
    @Temporal(TemporalType.TIMESTAMP)
    private Date dateCreated;
    @Temporal(TemporalType.TIMESTAMP)
    private Date lastUpdated;
}
My Pojo:
public class ProfitBalanceDto {
    private Double accountBalance;
    private Double profit;
}
My code to get sum of accountBalance and profit from BranchAccount:
public ProfitBalanceDto getAllBranchAccount() {
    List<BranchAccount> branchAccounts = branchAccountRepository.findAll();
    branchAccounts.stream()
            .reduce(new ProfitBalanceDto(0.0, 0.0), (branchAccount1, branchAccount2) -> {
                return new ProfitBalanceDto(
                        branchAccount1.getAccountBalance() + branchAccount2.getAccountBalance(),
                        branchAccount1.getProfit() + branchAccount2.getProfit());
            });
    return null;
}
My errors:
Please, what am I doing wrong?
PS: I want to use a stream for this.

As @Holger mentioned in his comment, you can map to ProfitBalanceDto before reducing:
public ProfitBalanceDto getAllBranchAccount2() {
    List<BranchAccount> branchAccounts = branchAccountRepository.findAll();
    return branchAccounts.stream()
            .map(acc -> new ProfitBalanceDto(acc.getAccountBalance(), acc.getProfit()))
            .reduce(new ProfitBalanceDto(0.0, 0.0),
                    (prof1, prof2) -> new ProfitBalanceDto(
                            prof1.getAccountBalance() + prof2.getAccountBalance(),
                            prof1.getProfit() + prof2.getProfit()));
}
If you are using Java 12 or higher, the teeing collector might be a better option:
public ProfitBalanceDto getAllBranchAccount() {
    List<BranchAccount> branchAccounts = branchAccountRepository.findAll();
    return branchAccounts.stream()
            .collect(Collectors.teeing(
                    Collectors.summingDouble(BranchAccount::getAccountBalance),
                    Collectors.summingDouble(BranchAccount::getProfit),
                    ProfitBalanceDto::new));
}
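Note that both answers, and the asker's own reduce attempt, assume that ProfitBalanceDto has a constructor taking the balance and the profit; the POJO shown in the question does not declare one. A minimal sketch of such a constructor (the accessors are assumptions matching how the code above uses the class):
public class ProfitBalanceDto {
    private Double accountBalance;
    private Double profit;

    // two-argument constructor required by new ProfitBalanceDto(0.0, 0.0)
    // and by the teeing collector's ProfitBalanceDto::new merger
    public ProfitBalanceDto(Double accountBalance, Double profit) {
        this.accountBalance = accountBalance;
        this.profit = profit;
    }

    public Double getAccountBalance() { return accountBalance; }
    public Double getProfit() { return profit; }
}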

Related

JPA CriteriaQuery: Unable to locate appropriate constructor for sum of date difference

I know this question looks similar, but I think my case is different. This is the exception message:
Exception while fetching data (/periodicReport/userReport):
org.hibernate.hql.internal.ast.QuerySyntaxException: Unable to locate appropriate constructor on class [com.analytics.entity.projections.UserParticipantCountAndDurationImpl]. Expected arguments are: java.util.UUID, long, java.util.Date
[select new com.analytics.entity.projections.UserParticipantCountAndDurationImpl(generatedAlias0.userID, count(distinct generatedAlias0.userID), sum(function('TIMEDIFF', generatedAlias0.endTime, generatedAlias0.startTime))) from com.analytics.entity.ConferenceParticipant as generatedAlias0 group by generatedAlias0.userID order by :param0 desc]
I'm expecting the sum to be a Long and I have specified that in the criteria query; I have no idea where the Date comes from.
Here is the snippet for the builder:
Expression<Long> sum = criteriaBuilder.sum(
        criteriaBuilder.function(
                "TIMEDIFF",
                Long.class,
                participantRoot.<Date>get("endTime"),
                participantRoot.<Date>get("startTime")
        )
);
Expression<Long> count = criteriaBuilder.countDistinct(participantRoot.get("userID"));
reportQuery.select(criteriaBuilder.construct(
        UserParticipantCountAndDurationImpl.class,
        participantRoot.get("userID").as(UUID.class).alias(USER_ID),
        count.as(Long.class).alias(PARTICIPANTS),
        sum.as(Long.class).alias(PARTICIPANT_DURATION)
));
The class in question:
@Data
@NoArgsConstructor
public class UserParticipantCountAndDurationImpl implements UserParticipantCountAndDuration, Serializable {
    private UUID userID;
    private long participants;
    public long participantDuration;

    public UserParticipantCountAndDurationImpl(
            UUID userID,
            long participants,
            long participantDuration
    ) {
        this.userID = userID;
        this.participants = participants;
        this.participantDuration = participantDuration;
    }
}
And yes, I have tried changing the signature of the constructor; in that case the query runs, but then it throws a ClassCastException since the value is a java.util.Date.
You need an Expression carrying the name of the unit you want the TIMEDIFF function to work with:
public static class TimeUnitExpression extends BasicFunctionExpression<String> implements Serializable {

    public TimeUnitExpression(CriteriaBuilderImpl criteriaBuilder, Class<String> javaType,
                              String functionName) {
        super(criteriaBuilder, javaType, functionName);
    }

    @Override
    public String render(RenderingContext renderingContext) {
        return getFunctionName();
    }
}
So your sum expression becomes:
Expression<Long> sum = criteriaBuilder.sum(
        criteriaBuilder.function(
                "TIMEDIFF",
                Long.class,
                new TimeUnitExpression(null, String.class, "MILLISECOND"),
                participantRoot.<Date>get("endTime"),
                participantRoot.<Date>get("startTime")
        )
);
TIMEDIFF returns a date-time as its result, so it cannot be cast to Long or Double; the SQL object in the raw result is a DateTime, refer this.
I could have solved this with a function that extracts minutes from the date-time, but instead I solved it by summing the difference of UNIX_TIMESTAMP for the two date-times. Since UNIX_TIMESTAMP is always a long, I can cast the result to Long or BigInteger.
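A rough sketch of that UNIX_TIMESTAMP workaround, reusing the participantRoot from above (the function name is MySQL-specific and the exact return type depends on your dialect, so treat this as an assumption to adapt):
// sum of (UNIX_TIMESTAMP(endTime) - UNIX_TIMESTAMP(startTime)), which stays numeric
Expression<Long> endSeconds = criteriaBuilder.function(
        "UNIX_TIMESTAMP", Long.class, participantRoot.<Date>get("endTime"));
Expression<Long> startSeconds = criteriaBuilder.function(
        "UNIX_TIMESTAMP", Long.class, participantRoot.<Date>get("startTime"));
Expression<Long> sum = criteriaBuilder.sum(criteriaBuilder.diff(endSeconds, startSeconds));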

How does Chronicle-wire support schema evolution?

I am new to Chronicle-wire. The documentation claims support for "setting of fields to the default, if not available" in the schema evolution section.
Is there an example of how this works?
I have an example of adding an array field to a simple Marshallable object. When reading journals that contain the old version of the object, how can we set a default value (e.g. new String[0]) for the field instead of null?
There are a few ways to achieve that; one example is below:
public class TestMarshallable implements Marshallable {
    private long a;
    private int b;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        a = wire.read("a").int64();
        b = wire.read("b").int32();
        if (wire.bytes().readRemaining() > 0)
            newField = wire.read("newField").text();
    }
}
In this example it is assumed that your new field is always written last, so you can simply check whether there is more to read and, if so, read it. The default value is whatever you assign to the field.
A more complicated, but much more flexible, way:
public class TestMarshallable implements Marshallable {
    private long a = 0;
    private int b = 1;
    private String newField = "defaultValue";

    @Override
    public void readMarshallable(@NotNull WireIn wire) throws IORuntimeException {
        @NotNull StringBuilder name = new StringBuilder();
        while (!wire.isEmpty()) {
            @NotNull ValueIn in = wire.read(name);
            if (StringUtils.isEqual(name, "a"))
                a = in.int64();
            else if (StringUtils.isEqual(name, "b"))
                b = in.int32();
            else if (StringUtils.isEqual(name, "newField"))
                newField = in.text();
            else
                unexpectedField(name, in);
            wire.consumePadding();
        }
    }
}
In the last example, readMarshallable simply overwrites the fields it can find in the stream, leaving the others at their default values. (NB: this can also be used to save a certain number of writes; if you often write default values, you can skip them altogether in writeMarshallable, as sketched below.)
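For instance, a writeMarshallable along these lines could skip the new field while it still holds its default (a sketch only; it assumes the WireOut write API mirrors the reads used above):
@Override
public void writeMarshallable(@NotNull WireOut wire) {
    wire.write("a").int64(a);
    wire.write("b").int32(b);
    // skip the write entirely while the field still holds its default value
    if (!"defaultValue".equals(newField))
        wire.write("newField").text(newField);
}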

Spring Boot Neo4J has no property with propertyKey="__type__"

I am using Spring Boot 1.2.3.RELEASE and spring-data-neo4j 3.2.2.RELEASE, which uses Neo4j 2.1.5.
I am trying to build a graph of stations which are connected to other stations via a "CONNECTED_TO" relationship. The relationship has "distance" as a property. Later on we are planning to run Dijkstra's algorithm on the graph. But anyway... this is what we have:
@NodeEntity
@TypeAlias("Station")
public class Station {
    @GraphId
    private Long id;
    @Indexed
    public String name;
    @Indexed(unique = true)
    public String tlc;
    public double gps_x;
    public double gps_y;
    @RelatedTo(type = "CONNECTED_TO", direction = OUTGOING)
    public Set<Station> connectedTo = new HashSet<>();
    @Fetch
    @RelatedToVia(type = "CONNECTED_TO", direction = OUTGOING)
    public Set<ConnectedTo> connections = new HashSet<>();
    // getter + setters
}
@RelationshipEntity(type = "CONNECTED_TO")
public class ConnectedTo {
    @GraphId
    private Long id;
    @Fetch
    @StartNode
    private Station fromStation;
    @Fetch
    @EndNode
    private Station toStation;
    private double distance;
    // getter + setters
}
And I have a stations.csv with 2K-plus stations... here is a sample:
name,tlc,gps_y,gps_x
Alexandra Palace,AAP,-0.120235219,51.59792231
Achanalt,AAT,-4.913851155,57.60959445
Aberdare,ABA,-3.443083402,51.71505622
Altnabreac,ABC,-3.706280166,58.38814697
And then for the relationships (5K plus) we have station_connections.csv... here is a sample:
name,from_tlc,to_tlc,distance
Alexandra Palace,AAP,BOP,0.7
Alexandra Palace,AAP,HRN,0.9
Alexandra Palace,AAP,NSG,1.5
Achanalt,AAT,ACN,6.5
Achanalt,AAT,LCC,4.2
Aberdare,ABA,CMH,0.8
Altnabreac,ABC,FRS,8.1
Altnabreac,ABC,SCT,9.1
Then I have an import service to import the CSVs.
First, I import the stations from stations.csv. This works fine. This is the code:
@Transactional
public void importStations(CsvReader stationsFile) throws IOException {
    // id,terminalName,name,installed,locked,temporary,lat,lng
    while (stationsFile.readRecord()) {
        Station station = new Station()
                .setName(stationsFile.get("name").toUpperCase())
                .setTlc(stationsFile.get("tlc").toUpperCase())
                .setGps_y(asDouble(stationsFile.get("gps_y")))
                .setGps_x(asDouble(stationsFile.get("gps_x")));
        stationRepository.save(station);
    }
}
Secondly, I want to import the station connections from station_connections.csv using the following code:
@Transactional
public void importConnections(CsvReader stationsFile) throws IOException {
    // name,from_tlc,to_tlc,distance
    while (stationsFile.readRecord()) {
        String from_tlc = stationsFile.get("from_tlc").toUpperCase();
        String to_tlc = stationsFile.get("to_tlc").toUpperCase();
        String distance = stationsFile.get("distance");
        Station fromStation = stationRepository.findByTlc(from_tlc);
        Station toStation = stationRepository.findByTlc(to_tlc);
        if (fromStation != null && toStation != null) {
            // need to do this to get the connected stations...!!!
            template.fetch(fromStation.getConnectedTo());
            template.fetch(toStation.getConnectedTo());
            fromStation.addStation(toStation);
            template.save(fromStation);
            System.out.println(from_tlc + " connected to: " + to_tlc);
        }
    }
}
So when it tries to import the connections I get the following error: RELATIONSHIP[4434] has no property with propertyKey="__type__".
Exception in thread "main" org.neo4j.graphdb.NotFoundException: RELATIONSHIP[4434] has no property with propertyKey="__type__".
at org.neo4j.kernel.impl.core.RelationshipProxy.getProperty(RelationshipProxy.java:195)
at org.springframework.data.neo4j.support.typerepresentation.AbstractIndexBasedTypeRepresentationStrategy.readAliasFrom(AbstractIndexBasedTypeRepresentationStrategy.java:126)
at org.springframework.data.neo4j.support.mapping.TRSTypeAliasAccessor.readAliasFrom(TRSTypeAliasAccessor.java:36)
at
So basically I am baffled by this error and would appreciate some help.
If there is a better way of doing this, please do let me know.
GM
Can you check the relationship with id 4434? Does it really lack that property, and what kind of relationship is it?
It means that SDN couldn't load a relationship mapped to a Java type because somehow the type-information was not stored on it.
You can do that after the fact with template.postEntityCreation(rel, ConnectedTo.class), as sketched below.
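A rough sketch of how that after-the-fact repair might look for the relationship id from the error message (the injected GraphDatabaseService, the transaction handling, and the hard-coded id are assumptions for illustration; adapt to however you iterate your relationships):
// hypothetical repair: re-attach the type information to the offending relationship
try (Transaction tx = graphDatabaseService.beginTx()) {
    Relationship rel = graphDatabaseService.getRelationshipById(4434L);
    template.postEntityCreation(rel, ConnectedTo.class);
    tx.success();
}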

Spring Data MongoDB: Accessing and updating sub documents

First experiments with Spring Data and MongoDB were great. Now I've got the following structure (simplified):
public class Letter {
    @Id
    private String id;
    private List<Section> sections;
}

public class Section {
    private String id;
    private String content;
}
Loading and saving entire Letter objects/documents works like a charm. (I use ObjectId to generate unique IDs for the Section.id field.)
Letter letter1 = mongoTemplate.findById(id, Letter.class);
mongoTemplate.insert(letter2);
mongoTemplate.save(letter3);
As documents are big (200K) and sometimes only sub-parts are needed by the application: Is there a possibility to query for a sub-document (section), modify and save it?
I'd like to implement a method like
Section s = findLetterSection(letterId, sectionId);
s.setText("blubb");
replaceLetterSection(letterId, sectionId, s);
And of course methods like:
addLetterSection(letterId, s); // add after last section
insertLetterSection(letterId, sectionId, s); // insert before given section
deleteLetterSection(letterId, sectionId); // delete given section
I see that the last three methods are somewhat "strange", i.e. loading the entire document, modifying the collection and saving it again may be the better approach from an object-oriented point of view; but the first use case ("navigating" to a sub-document/sub-object and working in the scope of this object) seems natural.
I think MongoDB can update sub-documents, but can Spring Data be used for the object mapping? Thanks for any pointers.
I figured out the following approach for slicing and loading only one subobject. Does it seem ok? I am aware of problems with concurrent modifications.
Query query1 = Query.query(Criteria.where("_id").is(instance));
query1.fields().include("sections._id");
LetterInstance letter1 = mongoTemplate.findOne(query1, LetterInstance.class);
LetterSection emptySection = letter1.findSectionById(sectionId);
int index = letter1.getSections().indexOf(emptySection);
Query query2 = Query.query(Criteria.where("_id").is(instance));
query2.fields().include("sections").slice("sections", index, 1);
LetterInstance letter2 = mongoTemplate.findOne(query2, LetterInstance.class);
LetterSection section = letter2.getSections().get(0);
This is an alternative solution loading all sections, but omitting the other (large) fields.
Query query = Query.query(Criteria.where("_id").is(instance));
query.fields().include("sections");
LetterInstance letter = mongoTemplate.findOne(query, LetterInstance.class);
LetterSection section = letter.findSectionById(sectionId);
This is the code I use for storing only a single collection element:
MongoConverter converter = mongoTemplate.getConverter();
DBObject newSectionRec = (DBObject)converter.convertToMongoType(newSection);
Query query = Query.query(Criteria.where("_id").is(instance).and("sections._id").is(new ObjectId(newSection.getSectionId())));
Update update = new Update().set("sections.$", newSectionRec);
mongoTemplate.updateFirst(query, update, LetterInstance.class);
It is nice to see how Spring Data can be used with "partial results" from MongoDB.
Any comments highly appreciated!
I think Matthias Wuttke's answer is great; for anyone looking for a generic version of it, see the code below:
@Service
public class MongoUtils {

    @Autowired
    private MongoTemplate mongo;

    public <D, N extends Domain> N findNestedDocument(Class<D> docClass, String collectionName, UUID outerId, UUID innerId,
            Function<D, List<N>> collectionGetter) {
        // get index of subdocument in array
        Query query = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query.fields().include(collectionName + "._id");
        D obj = mongo.findOne(query, docClass);
        if (obj == null) {
            return null;
        }
        List<UUID> itemIds = collectionGetter.apply(obj).stream().map(N::getId).collect(Collectors.toList());
        int index = itemIds.indexOf(innerId);
        if (index == -1) {
            return null;
        }

        // retrieve subdocument at index using slice operator
        Query query2 = new Query(Criteria.where("_id").is(outerId).and(collectionName + "._id").is(innerId));
        query2.fields().include(collectionName).slice(collectionName, index, 1);
        D obj2 = mongo.findOne(query2, docClass);
        if (obj2 == null) {
            return null;
        }
        return collectionGetter.apply(obj2).get(0);
    }

    public void removeNestedDocument(UUID outerId, UUID innerId, String collectionName, Class<?> outerClass) {
        Update update = new Update();
        update.pull(collectionName, new Query(Criteria.where("_id").is(innerId)));
        mongo.updateFirst(new Query(Criteria.where("_id").is(outerId)), update, outerClass);
    }
}
This could for example be called using
mongoUtils.findNestedDocument(Shop.class, "items", shopId, itemId, Shop::getItems);
mongoUtils.removeNestedDocument(shopId, itemId, "items", Shop.class);
The Domain interface looks like this:
public interface Domain {
UUID getId();
}
Notice: if the nested document's constructor contains parameters with primitive datatypes, it is important for the nested document to also have a default (empty) constructor, which may be protected, so that the class can be instantiated with null/default arguments, as in the sketch below.
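A hypothetical nested document illustrating that notice (the Item class and its fields are made up for the example):
public class Item implements Domain {
    private UUID id;
    private int quantity; // primitive field, so null cannot be injected here

    protected Item() {
        // default constructor so the mapping layer can instantiate the class
        // before populating its fields
    }

    public Item(UUID id, int quantity) {
        this.id = id;
        this.quantity = quantity;
    }

    @Override
    public UUID getId() {
        return id;
    }
}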
Solution
That's my solution for this problem.
The object to be updated:
@Getter
@Setter
@Document(collection = "projectchild")
public class ProjectChild {
    @Id
    private String _id;
    private String name;
    private String code;
    @Field("desc")
    private String description;
    private String startDate;
    private String endDate;
    @Field("cost")
    private long estimatedCost;
    private List<String> countryList;
    private List<Task> tasks;
    @Version
    private Long version;
}
Coding the Solution
public Mono<ProjectChild> UpdateCritTemplChild(
        String id, String idch, String ownername) {
    Query query = new Query();
    query.addCriteria(Criteria.where("_id")
            .is(id)); // find the parent
    query.addCriteria(Criteria.where("tasks._id")
            .is(idch)); // find the child which will be changed

    Update update = new Update();
    update.set("tasks.$.ownername", ownername); // change the field inside the child that must be updated

    // findAndModify:
    // Find/modify/get the "new object" from a single operation.
    return template.findAndModify(
            query, update,
            new FindAndModifyOptions().returnNew(true), ProjectChild.class
    );
}
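A possible caller (the service reference and the ids are placeholders; block() is used only for illustration, in a real reactive flow the Mono would be returned up the chain):
ProjectChild updated = projectChildService
        .UpdateCritTemplChild(projectId, taskId, "newOwner")
        .block();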

Hibernate tuple criteria queries

I am trying to create a query using Hibernate, following the example given in section 9.2 of chapter 9.
The difference with my attempt is that I am using Spring MVC 3.0. Here is my Address class along with the method I created.
@RooJavaBean
@RooToString
@RooEntity
@RooJson
public class Address {
    @NotNull
    @Size(min = 1)
    private String street1;
    @Size(max = 100)
    private String street2;
    private String postalcode;
    private String zipcode;
    @NotNull
    @ManyToOne
    private City city;
    @NotNull
    @ManyToOne
    private Country country;
    @ManyToOne
    private AddressType addressType;

    @Transient
    public static List<Tuple> jqgridAddresses(Long pID) {
        CriteriaBuilder builder = Address.entityManager().getCriteriaBuilder();
        CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
        Root<Address> addressRoot = criteria.from(Address.class);
        criteria.multiselect(addressRoot.get("id"), addressRoot.get("street1"), addressRoot.get("street2"));
        criteria.where(builder.equal(addressRoot.<Set<Long>>get("id"), pID));
        return Address.entityManager().createQuery(criteria).getResultList();
    }
}
The method called jqgridAddresses above is the focus. I opted not to use a typed Path because when I write something like Path idPath = addressRoot.get(Address_.id); as in section 9.2 of the documentation, the Address_.id metamodel reference produces a compilation error.
The method above returns an empty list of type Tuple (its size is zero) even when it should contain something, which suggests that the query failed. Can someone please advise me?
OK, so I made some minor adjustments to my logic which are specific to my project; however, the following approach worked perfectly. Hope it helps someone in need!
@Transient
public static List<Tuple> jqgridPersons(Boolean isStudent, String column, String orderType, int limitStart, int limitAmount) {
    CriteriaBuilder builder = Person.entityManager().getCriteriaBuilder();
    CriteriaQuery<Tuple> criteria = builder.createTupleQuery();
    Root<Person> personRoot = criteria.from(Person.class);
    criteria.select(builder.tuple(personRoot.get("id"), personRoot.get("firstName"), personRoot.get("lastName"),
            personRoot.get("dateOfBirth"), personRoot.get("gender"), personRoot.get("maritalStatus")));
    criteria.where(builder.equal(personRoot.get("isStudent"), true));
    if (orderType.equals("desc")) {
        criteria.orderBy(builder.desc(personRoot.get(column)));
    } else {
        criteria.orderBy(builder.asc(personRoot.get(column)));
    }
    return Address.entityManager().createQuery(criteria).setFirstResult(limitStart).setMaxResults(limitAmount).getResultList();
}
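For anyone reading the results back, javax.persistence.Tuple values can be retrieved by position, following the order of the tuple/multiselect (a hypothetical caller with made-up paging values):
List<Tuple> rows = Person.jqgridPersons(true, "lastName", "asc", 0, 20);
for (Tuple row : rows) {
    Long id = (Long) row.get(0);
    String firstName = (String) row.get(1);
    String lastName = (String) row.get(2);
    // ...
}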
