I have a set of data that I return from a method, and I want to sort it by createDate. I can do it using a stream, but the method returns a List, so I don't want to use a stream unless the stream can give the data back as a List. If that is possible, can you show how?
public List<Mark> getMarksByUser() {
    List<Mark> marks = markService.getMarksByUser(AuthenticationController.selfUserName());
    return marks;
}
The sorting should happen in this method.
My Mark entity with the create date:
public class Mark extends BaseEntity<Integer> {
private String mark;
private String text;
private String sku;
private Date dateCreated;
private Date dateUpdated;
private Boolean deleted;
private String username;
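A minimal sketch of how the sorting could look, assuming Mark exposes a getDateCreated() getter for the dateCreated field shown above; the stream sorts by creation date and collects straight back into a List:

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public List<Mark> getMarksByUser() {
    List<Mark> marks = markService.getMarksByUser(AuthenticationController.selfUserName());
    return marks.stream()
            .sorted(Comparator.comparing(Mark::getDateCreated))
            .collect(Collectors.toList());
}

If the list returned by the service is mutable, marks.sort(Comparator.comparing(Mark::getDateCreated)) avoids the stream entirely; add .reversed() to the comparator for newest-first order.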
Folks..!!
I have a requirement to read specific column data using Spring Batch. I am creating a Spring Batch application that needs to read only one specific column.
In my CSV file I have a column "msisdn"; that field is mapped to a POJO. I want to read the values of "msisdn", which is of Long data type.
I am taking the link below as a reference.
read only selective columns from csv file using spring batch
Customer POJO
public class Customer {
private String id_type;
private String id_number;
private String customer_name;
private String email_address;
private LocalDate birthday;
private String citizenship;
private String address;
private Long msisdn;
private LocalDateTime kyc_date;
private String kyc_level;
private String goalscore;
private String mobile_network;
}
I am using a CustomMapper class to implement this feature. As you can see, CustomMapper implements FieldSetMapper. The FieldSet hands me String[] data, and msisdn is of Long type. I am not able to understand how to get the values of the msisdn column when the FieldSet only gives me String data.
CustomMapper
============
public class CustomMapper implements FieldSetMapper<Customer> {
    @Override
    public Customer mapFieldSet(FieldSet fieldSet) throws BindException {
        String[] custArray = null; // unused
        Customer customer = new Customer();
        customer.setMsisdn(fieldSet.get); // stuck here: how do I read msisdn as a Long?
        return null;
    }
}
Please help me with this.
You can use fieldSet.readLong(int index) or fieldSet.readLong(String name) to read a field from the field set by index or by name. Obviously, this field should have been selected when parsing the file in your item reader.
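A minimal sketch of the mapper with readLong, assuming the item reader's tokenizer registers the column under the name "msisdn" (the other setters are omitted for brevity):

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.FieldSet;
import org.springframework.validation.BindException;

public class CustomMapper implements FieldSetMapper<Customer> {
    @Override
    public Customer mapFieldSet(FieldSet fieldSet) throws BindException {
        Customer customer = new Customer();
        // readLong parses the underlying String token and returns it as a long
        customer.setMsisdn(fieldSet.readLong("msisdn"));
        return customer;
    }
}

For the name-based lookup to work, "msisdn" must be among the names configured on the line tokenizer (e.g. via setNames); otherwise fieldSet.readLong(index) with the column's position works as well.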
I have a nested object like this:
public class SQSMessage implements Serializable {
private String type;
private boolean isEntity;
private String eventType;
private SystemInfo systemInfo;
private DomainAttributes domainAttributes;
@Data
public static class SystemInfo implements Serializable {
private String identifier;
private String ownedBy;
private Payload payload;
private EntityTags entityTags;
private long createdOn;
private String createdBy;
private long version;
private long lastUpdatedOn;
private String lastUpdatedBy;
private String attrEncKeyName;
@Data
public static class Payload implements Serializable {
private String bucketName;
private String objName;
private String encKeyName;
private byte[] payloadBytes;
private byte[] decryptedBytes;
private byte[] sanitizedBytes;
}
@Data
public static class EntityTags implements Serializable {
private List<Tag> tags;
@Data
public static class Tag implements Serializable {
private String tagName;
private String tagValue;
}
}
}
@Data
public static class DomainAttributes implements Serializable {
private String updatedByAuthId;
private String saveType;
private String docName;
private String ceDataType;
private String year;
private String appId;
private String formSetId;
private String appSku;
private String deviceId;
private String deviceName;
}
}
I would like to query a collection of SQSMessage objects by applying a filter like:
ResultSet<SQSMessage> results = parser.retrieve(indexedSQSMessage, "SELECT * FROM indexedSQSMessage WHERE type='income' and DomainAttributes.saveType in ('endSession', 'cancelled') or (DomainAttributes.countryCode is null or DomainAttributes.countryCode='US')");
Is that possible using CQEngine? If yes, please point me to examples.
The reason I want to express it as SQL is that the WHERE clause is dynamic for various use cases.
Your example is more complicated than it needs to be for the question, so I am just skimming it. (Read about SSCCE)
However, generally this kind of thing should be possible. See this related question/answer for how to construct nested queries: Can CQEngine query an object inside another object
If you set up attributes like that, you should be able to use them in SQL queries as well as programmatically.
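A rough sketch of that approach, assuming Lombok-style getters on SQSMessage and using CQEngine's SimpleAttribute plus its SQL parser; the attribute names registered with the parser are illustrative and must match whatever names your dynamic WHERE clauses use:

import com.googlecode.cqengine.IndexedCollection;
import com.googlecode.cqengine.attribute.Attribute;
import com.googlecode.cqengine.attribute.SimpleAttribute;
import com.googlecode.cqengine.query.option.QueryOptions;
import com.googlecode.cqengine.query.parser.sql.SQLParser;
import com.googlecode.cqengine.resultset.ResultSet;
import java.util.HashMap;
import java.util.Map;

public class SqsMessageQueries {

    // top-level field
    static final Attribute<SQSMessage, String> TYPE = new SimpleAttribute<SQSMessage, String>("type") {
        @Override
        public String getValue(SQSMessage msg, QueryOptions queryOptions) {
            return msg.getType();
        }
    };

    // nested field: reach into DomainAttributes
    static final Attribute<SQSMessage, String> SAVE_TYPE = new SimpleAttribute<SQSMessage, String>("saveType") {
        @Override
        public String getValue(SQSMessage msg, QueryOptions queryOptions) {
            return msg.getDomainAttributes().getSaveType();
        }
    };

    public static ResultSet<SQSMessage> search(IndexedCollection<SQSMessage> messages, String sql) {
        // register the attributes under the names used in the SQL WHERE clause
        Map<String, Attribute<SQSMessage, ?>> attributes = new HashMap<>();
        attributes.put("type", TYPE);
        attributes.put("saveType", SAVE_TYPE);
        SQLParser<SQSMessage> parser = SQLParser.forPojoWithAttributes(SQSMessage.class, attributes);
        return parser.retrieve(messages, sql);
    }
}

With that in place, something like search(messages, "SELECT * FROM messages WHERE type = 'income' AND saveType IN ('endSession', 'cancelled')") should be expressible, and the WHERE clause can stay dynamic per use case.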
I want to do a join between Timesheet:
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(collection = TIMESHEET_COLLECTION)
public class Timesheet {
@Id
private ObjectId id;
private ObjectId employeeId;
private LocalDate date;
private String occupationTitle;
private BigDecimal salary;
private List<TimesheetEntry> entries;
}
and Employee (as an embedded document):
@Data
@AllArgsConstructor
@NoArgsConstructor
@Document(collection = Employee.EMPLOYEE_COL)
public class Employee {
@Id
private ObjectId id;
private String registry;
private String cpf;
private String firstName;
private String lastName;
private String nickname;
private String phone;
private LocalDate dateOfBirth;
private LocalDate admissionDate;
private EmployeeOccupation occupation;
private EmployeePaymentPreferences paymentPreferences;
private Map<String, String> equipmentPreferences;
private Boolean active;
}
So I have this aggregation query, with match, lookup, unwind and projection operations.
Aggregation aggregation = Aggregation.newAggregation(matchTimesheetFilter(timesheetFilter), lookupEmployee(), unwindEmployee(), projectEmployee());
Here are the lookup and unwind implementations. I'm unwinding because employee should be a single object, not an array.
private LookupOperation lookupEmployee(){
return LookupOperation.newLookup()
.from("employee")
.localField("employeeId")
.foreignField("_id")
.as("employee");
}
private UnwindOperation unwindEmployee(){
return Aggregation.unwind("employee");
}
It successfully returns a Timesheet document with an embedded Employee document. The point is: I don't want all the data from employee, only a few fields.
So I tried to exclude the unwanted fields from employee, using my projection operation:
private ProjectionOperation projectEmployee() {
return Aggregation.project().andExclude("employee.nickname", "employee.firstName", "employee.fullName");
}
It didn't work: my embedded employee is still returned with all its fields. However, I can successfully exclude fields from Timesheet if I do something like this:
private ProjectionOperation projectEmployee() {
return Aggregation.project().andExclude("startDate", "endDate");
}
How can I project custom fields from a document embedded through a lookup operation?
I think you need to exclude "employee.nickname", "employee.firstName", "employee.fullName" instead of "nickname", "firstName", "fullName".
Try this:
private ProjectionOperation projectEmployee() {
return Aggregation.project().andExclude("employee.nickname", "employee.firstName", "employee.fullName");
}
I did it this way (not sure if it's right, but it works):
private LookupOperation lookupEmployee(){
return LookupOperation.newLookup()
.from("employee")
.localField("employeeId")
.foreignField("_id")
.as("employeeLookup");
}
No unwind is used; the projection maps the looked-up field directly:
Aggregation.project().and("employeeLookup.firstName").as("employee.firstName")
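Putting that together, a sketch of the whole pipeline with the lookup aliased as "employeeLookup" and only the wanted fields mapped back under "employee" (the projected Timesheet fields and the mongoTemplate call are illustrative):

import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.aggregation.Aggregation;
import org.springframework.data.mongodb.core.aggregation.AggregationResults;
import org.springframework.data.mongodb.core.aggregation.LookupOperation;
import org.springframework.data.mongodb.core.aggregation.ProjectionOperation;

private LookupOperation lookupEmployee() {
    return LookupOperation.newLookup()
            .from("employee")
            .localField("employeeId")
            .foreignField("_id")
            .as("employeeLookup");
}

private ProjectionOperation projectEmployee() {
    // keep the Timesheet fields you need, and map only selected employee fields
    return Aggregation.project("date", "occupationTitle", "salary", "entries")
            .and("employeeLookup.firstName").as("employee.firstName")
            .and("employeeLookup.lastName").as("employee.lastName");
}

Aggregation aggregation = Aggregation.newAggregation(
        matchTimesheetFilter(timesheetFilter),
        lookupEmployee(),
        projectEmployee());
AggregationResults<Timesheet> results =
        mongoTemplate.aggregate(aggregation, TIMESHEET_COLLECTION, Timesheet.class);

Note that without an unwind the looked-up employeeLookup is an array, so if the mapped fields come back as arrays in your version, keeping the unwind stage (on "employeeLookup") before the projection is the safer variant.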
I am trying to query the last 5 digits of a number field in an Oracle DB from a Spring Boot/JPA repository.
In Oracle, the following query works:
Select * FROM DP1_Attachments Where trim(substr(dp1_submit_date_dp1_number, -5, 5)) = [a-five-digit-number]
I have tried implementing the following:
ENTITY:
@IdClass(CompositeKey.class)
public class DP1Attachments {
@Id
private Integer attachmentsFolder;
@Id
private Integer attachmentNumber;
private Integer dp1SubmitDateDp1Number;
private String attachmentName;
private Integer attachmentSize;
private String attachmentType;
@JsonFormat(pattern = "MM/dd/yyyy HH:mm")
private LocalDateTime attachmentDate;
private String attachmentBy;
private Integer attachmentByUsersId;
private String attachmentActive;
private String modifiedBy;
private Integer modifiedByUsersId;
@JsonFormat(pattern = "MM/dd/yyyy HH:mm")
private LocalDateTime modifiedDate;
REPOSITORY
#Query("SELECT a FROM DP1Attachments a WHERE trim(substring(a.dp1SubmitDateDp1Number, -5, 5)) = :dp1Number")
List<DP1Attachments> queryByDp1SubmitDateDp1Number(#Param("dp1Number") String dp1Number);
}
SERVICE
public List<DP1Attachments> getAttachmentsList (String dp1Number){
return attachmentsRepository.queryByDp1SubmitDateDp1Number(dp1Number);
}
CONTROLLER
public ResponseEntity<?> getAttachmentsInFolder(@PathVariable String attachmentFolder){
List<DP1Attachments>files = attachmentsService.getAttachmentsList(attachmentFolder);
return new ResponseEntity<>(files, HttpStatus.OK);
}
The query runs but returns an empty array.
I also tried changing the substring to a LIKE statement:
#Query("SELECT a FROM DP1Attachments a WHERE TO_CHAR(a.dp1SubmitDateDp1Number) like :dp1Number")
List<DP1Attachments> queryByDp1SubmitDateDp1Number(#Param("dp1Number") String dp1Number);
And the service changed to:
public List<DP1Attachments> getAttachmentsList (String dp1Number){
return attachmentsRepository.queryByDp1SubmitDateDp1Number("%" + dp1Number);
}
This also runs without errors but returns an empty array.
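One direction that might be worth ruling out (a sketch, not a confirmed fix): JPQL's SUBSTRING is defined with 1-based positions, so a negative start index is not guaranteed to translate into Oracle's SUBSTR semantics. A native query reuses the SQL that already works in Oracle verbatim; the method name here is just illustrative:

@Query(value = "SELECT * FROM DP1_Attachments WHERE TRIM(SUBSTR(dp1_submit_date_dp1_number, -5, 5)) = :dp1Number", nativeQuery = true)
List<DP1Attachments> findByDp1NumberSuffix(@Param("dp1Number") String dp1Number);

It may also help to log the value actually bound to dp1Number, to confirm the controller is passing the expected five-digit string.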
I have two Java objects like these:
public class PSubject
{
@Column
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
@org.apache.solr.client.solrj.beans.Field("name")
private String name;
@Column
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
@org.apache.solr.client.solrj.beans.Field("type")
private String type;
@Column
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
@org.apache.solr.client.solrj.beans.Field("uri")
private String uri;
@OneToMany(fetch=FetchType.EAGER,cascade=CascadeType.ALL)
@IndexedEmbedded
@org.apache.solr.client.solrj.beans.Field("attributes")
private Set<PAttribute> attributes = new HashSet<PAttribute>();
.....
}
@Entity
@Indexed
@Table(name="PAttribute")
@Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
public class PAttribute extends PEntity
{
private static final long serialVersionUID = 1L;
@Column
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.YES)
@org.apache.solr.client.solrj.beans.Field("attr_name")
private String name;
@Column
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.YES)
@org.apache.solr.client.solrj.beans.Field("attr_value")
private String value;
.....
}
And my Spring Data Solr query interface:
public interface DerivedSubjectRepository extends SolrCrudRepository<PSubject, String> {
Page<PSubject> findByName(String name, Pageable page);
List<PSubject> findByNameStartingWith(String name);
Page<PSubject> findBy(Pageable page);
#Query("name:*?0* or description:*?0* or type:*?0* or mac_address:*?0* or uri:*?0* or attributes:*?0*")
Page<PSubject> find(String keyword,Pageable page);
#Query("name:*?0* or description:*?0* or type:*?0* or mac_address:*?0* or uri:*?0* or attributes:*?0*")
List<PSubject> find(String keyword);
}
I can search by name, description, type and mac_address, but I can't get any results by attribute.
Update:
For example, when a user searches for "ipod", it probably means the type or the name of the subject, or the name or value of an attribute. I want to get all the matching subjects in one request. I know I can search the attribute object in a separate query, but that makes the backend code complex.
So, how can I search this nested object?
Update:
I flattened my data:
@Transient
@Field(index=Index.YES, analyze=Analyze.YES, store=Store.NO)
@org.apache.solr.client.solrj.beans.Field("attrs")
private String attrs;
public String getAttrs() {
return attrs;
}
public void setAttrs(Set<PAttribute> attributes) {
StringBuffer attrs = new StringBuffer();
if (attributes == null) {
attributes = this.getAttributes();
}
for (PAttribute attr : attributes) {
attrs.append(attr.getName()).append(" ").append(attr.getValue()).append(" ");
}
this.attrs = attrs.toString();
}
The issue is resolved.
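With the attributes flattened into a single attrs field, the existing wildcard query can presumably include it alongside the other fields; a sketch in the same style as the queries above:

@Query("name:*?0* or description:*?0* or type:*?0* or mac_address:*?0* or uri:*?0* or attrs:*?0*")
Page<PSubject> find(String keyword, Pageable page);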
IIRC it is not possible to store nested data structures in Solr; it depends on how you flatten your data to fit into, e.g., a multivalued field, which is a little hard to say without knowing your schema.
see: http://lucene.472066.n3.nabble.com/Possible-to-have-Solr-documents-with-deeply-nested-data-structures-i-e-hashes-within-hashes-td4004285.html
What does the data look like in your index, and did you have a look at the HTTP request sent by spring-data-solr?