I am using EmitMapper to copy values from one object to another.
When I am mapping the objects, I need to ignore certain fields from being mapped/copied over. The fields to be ignored keep changing based on the scenario.
How can this be done in EmitMapper? The .Map method itself does not take any additional parameters to ignore certain properties. I can specify fields to be ignored using DefaultMapConfig, but that is static and cannot be changed during mapping.
Please help.
You have to configure the mapper. Because the configuration is passed to GetMapper, you can build a separate mapper per scenario with whichever fields that scenario needs to ignore:
string[] fieldsToIgnore = { "NameOfThePropertyToIgnore" };

var mapper = ObjectMapperManager.DefaultInstance
    .GetMapper<SourceClass, DestClass>(
        new DefaultMapConfig()
            .IgnoreMembers<SourceClass, DestClass>(fieldsToIgnore)
    );
I am using Spring Boot + OpenCSV to parse a CSV with 120 columns (sample 1). I upload the file and process each row, and in case of errors I return a similar CSV (say errorCSV). This errorCSV will have only the rows that errored out, with the 120 original columns and 3 additional columns for details on what went wrong (sample error file 2).
I have used annotation-based processing and the beans are populating fine. But I also need to get the header names in the order they appear in the CSV, which is the challenging part, and to capture the exception and the original data during parsing. The two together can later be used to write the error CSV.
// Read the header row as a map so the column names are available.
CSVReaderHeaderAware headerReader = new CSVReaderHeaderAware(reader);
try {
    header = headerReader.readMap().keySet();
} catch (CsvValidationException e) {
    e.printStackTrace();
}
However, the header order is jumbled and there is no way to get the header index, because CSVReaderHeaderAware internally uses a HashMap. To solve this I built my own class: it is a replica of CSVReaderHeaderAware, except that I used a LinkedHashMap.
public class CSVReaderHeaderOrderAware extends CSVReader {
    // LinkedHashMap preserves the order in which the header columns appear.
    private final Map<String, Integer> headerIndex = new LinkedHashMap<>();

    // ... rest copied from CSVReaderHeaderAware; inside readMap():

    // This code cannot be done with a stream and Collectors.toMap()
    // because Map.merge() does not play well with null values. Some
    // implementations throw a NullPointerException, others simply remove
    // the key from the map.
    Map<String, String> resultMap = new LinkedHashMap<>(headerIndex.size() * 2);
}
It does the job; however, I wanted to check whether this is the best way out, or whether you can think of a better way to get the header names and failed values back and write them into a CSV.
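One alternative I am considering, to avoid copying the whole reader class: read the header row once with a plain CSVReader (a String[] keeps the column order), and let CsvToBean collect the parsing failures instead of throwing them, so the failed line and the reason are both available when writing the error CSV. A rough sketch, where MyCsvBean and input.csv are placeholders for my annotated bean and the uploaded file:

import com.opencsv.CSVReader;
import com.opencsv.bean.CsvToBean;
import com.opencsv.bean.CsvToBeanBuilder;
import com.opencsv.exceptions.CsvException;
import java.io.FileReader;
import java.io.Reader;
import java.util.List;

// First pass: grab the header row in its original order.
String[] orderedHeader;
try (CSVReader headerReader = new CSVReader(new FileReader("input.csv"))) {
    orderedHeader = headerReader.readNext();
}

// Second pass: annotation-based bean parsing, collecting errors instead of failing fast.
try (Reader csvReader = new FileReader("input.csv")) {
    CsvToBean<MyCsvBean> csvToBean = new CsvToBeanBuilder<MyCsvBean>(csvReader)
            .withType(MyCsvBean.class)
            .withThrowExceptions(false)
            .build();

    List<MyCsvBean> rows = csvToBean.parse();
    // Each CsvException carries the line number and the raw fields of the failed row,
    // which can be written back out together with orderedHeader plus the 3 error columns.
    List<CsvException> errors = csvToBean.getCapturedExceptions();
}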
I referred to following links but couldn't get much help
How to read from particular header in opencsv?
Can anyone help with a MongoTemplate question?
I have got a record structure which has nested arrays, and I want to update a specific entry in a second-level array. I can find the appropriate entry easily enough, but the Set path needs the indexes of both array entries, and the '$' only refers to the leaf item. For example, if I had an array of teams which contained an array of players, I would need to generate an update path like:
val query = Query(Criteria.where("teams.players.playerId").`is`(playerId))
val update = Update()
with(update) {
    set("teams.$.players.$.name", player.name)
}
This fails, as the '$' can only be used once and refers to the index in the players array; I need a way to generate the equivalent of '$' for the index in the teams array.
I am thinking that I need to use a separate aggregation query, using something like this, but I can't get it to work.
project().and(ArrayOperators.arrayOf("markets").indexOf("")).`as`("index")
Any ideas for this Mongo newbie?
For others who are facing a similar issue: one option is to use arrayFilters in UpdateOptions. But it looks like Spring's MongoTemplate does not yet support the use of UpdateOptions directly. Hence, what can be done is the following.
This is a sample for a document which contains an object with an array of arrayObj (which in turn contains another array of arrayObj).
Bson filter = eq("arrayObj.arrayObj.id", "12345");

UpdateResult result = mongoTemplate.getDb().getCollection(collectionName)
    .updateOne(filter,
        new Document("$set",
            new Document("arrayObj.$[].arrayObj.$[x].someField", "someValueToUpdate")),
        new UpdateOptions().arrayFilters(
            Arrays.asList(Filters.eq("x.id", "12345"))));
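Applied back to the teams/players structure from the question, a sketch of the same idea might look like the following. The collection name, the player variable and the exact filter fields are assumptions; the point is that two filtered positional operators ($[t] and $[p]) replace the two '$' placeholders, with arrayFilters resolving each index.

import static com.mongodb.client.model.Filters.eq;

import com.mongodb.client.model.Filters;
import com.mongodb.client.model.UpdateOptions;
import com.mongodb.client.result.UpdateResult;
import org.bson.Document;
import java.util.Arrays;

// $[t] picks the team whose players array contains the player, $[p] picks that player.
UpdateResult result = mongoTemplate.getDb().getCollection("teams")
    .updateOne(eq("teams.players.playerId", playerId),
        new Document("$set",
            new Document("teams.$[t].players.$[p].name", player.getName())),
        new UpdateOptions().arrayFilters(Arrays.asList(
            Filters.eq("t.players.playerId", playerId),
            Filters.eq("p.playerId", playerId))));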
I have an entity as below
class Person {
    String id;
    String name;
    String numberOfHands;
}
With Spring Data REST (Gosling release train), I'm able to specify
localhost/Person?sort=name,asc
for sorting by name ascending. Now, in a case where I need to sort by numberOfHands descending and name ascending, I'm able to specify
localhost/Person?sort=numberOfHands,name,asc
But, I'm not able to specify
localhost/Person?sort=numberOfHands,desc,name,asc
Is there a way to specify multiple sort orders?
Thanks!
Solution (tl;dr)
When you want to sort on multiple fields, you simply put the sort parameter in the URI multiple times. For example your/uri?sort=name,asc&sort=numberOfHands,desc. Spring Data is then capable of constructing a Pageable object with multiple sorts.
Explanation
There is not really a defined standard on how to submit multiple values for a parameter in a URI. See Correct way to pass multiple values for same parameter name in GET request.
However there is some information in the Java Servlet Spec which hints on how Java servlet containers parse request parameters.
The getParameterValues method returns an array of String objects containing all the parameter values associated with a parameter name. ... - Java Servlet Spec, section 3.1
The sample further in that section states (although it mixes request and body data)
For example, if a request is made with a query string of a=hello and a post body of a=goodbye&a=world, the resulting parameter set would be ordered a=hello, goodbye, world.
This sample shows that when a parameter (a in the example) is presented multiple times the results will be aggregated into a String[].
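On the Spring side, that aggregation can be seen with a minimal controller sketch like the one below; the /persons mapping and the repository call are only an illustration and not part of the question.

// Spring resolves ?sort=name,asc&sort=numberOfHands,desc into a single Pageable carrying both orders.
@GetMapping("/persons")
public Page<Person> listPersons(Pageable pageable) {
    return personRepository.findAll(pageable);
}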
Here is how to construct the multi-property Sort object manually/programmatically.
Sort sort = Sort.by(
Sort.Order.asc("name"),
Sort.Order.desc("numberOfHands"));
return personRepository.findAll(sort);
Note: This solution does not directly solve the original question, but it may help visitors who landed on this question while searching for a way to sort on multiple properties from a backend perspective, in a somewhat "hardcoded" way (this solution does not require/take any URI parameters).
When the sort fields are dynamic, you simply match on the field name and add the corresponding order to a sorting list, like this:
List<Sort.Order> sorts = new ArrayList<>();
if (sort.equals("name") && sortingOrder.equalsIgnoreCase("DESC")) {
    sorts.add(new Sort.Order(Sort.Direction.DESC, "name"));
} else if (sort.equals("numberOfHands") && sortingOrder.equalsIgnoreCase("DESC")) {
    sorts.add(new Sort.Order(Sort.Direction.DESC, "numberOfHands"));
}
return personRepository.findAll(Sort.by(sorts));
If you are using pagination, then add the sorts directly to the PageRequest:
return personRepository.findPersons(PageRequest.of(pageNo, pageSize, Sort.by(sorts)));
In my Spring MVC project I am using Hibernate. Using the Criteria API I am applying GROUP BY and ORDER BY clauses. The query executes successfully on the DB and returns results, but each result row is an array of Object.
Here is the Criteria API code:
Criteria criteria = session.createCriteria(DashboardSubindicatorSubmission.class, "DashboardSubindicatorSubmission")
    .setProjection(Projections.projectionList()
        .add(Projections.sum("InputValue").as("InputValue"))
        .add(Projections.groupProperty("fkAccademicYearId"))
        .add(Projections.groupProperty("fkAssessmentPlanID"))
        .add(Projections.groupProperty("fkSubindicatorID"))
        .add(Projections.groupProperty("InputTitle")))
    .addOrder(Order.asc("fkAccademicYearId"))
    .addOrder(Order.asc("fkAssessmentPlanID"))
    .addOrder(Order.asc("InputTitle"));

List<DashboardSubindicatorSubmission> dashboardSubindicatorSubmissionList =
        (List<DashboardSubindicatorSubmission>) criteria.list();
session.flush();
transaction.commit();
return dashboardSubindicatorSubmissionList;
I am casting criteria.list() to List<DashboardSubindicatorSubmission>, but when I try to do dashboardSubindicatorSubmissionList.get(i) in the controller it gives me the exception java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to mkcl.accreditation.model.DashboardSubindicatorSubmission.
I have come to understand that, even though I cast it to List<DashboardSubindicatorSubmission>, it is still a list of Object[]; that's why dashboardSubindicatorSubmissionList.get(i) fails, because it does not actually return an object of DashboardSubindicatorSubmission. (Correct me if I am wrong.)
So how can I convert my result into a list of the DashboardSubindicatorSubmission class?
Does setResultTransformer() help me in this case?
You have two options. When you use projections, Hibernate doesn't know how to populate each field, because it uses the name (alias) of each projected column to build the objects, and it doesn't know those names yet.
Thus, your first option is to give the grouped fields aliases that match the names of the object's properties. This is necessary even if the string you use in the projection is already the name of the object's field. Something like:
.add(Projections.groupProperty("fkAccademicYearId"), "fkAccademicYearId") // alias equal to the property name
.add(Projections.groupProperty("fkAssessmentPlanID"), "other")            // alias different from the property name
The second option is to do what you yourself suggested: create your own implementation of ResultTransformer. I reckon this is an interesting option if you want to extract a different object from this query, for example when you build a report.
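For the first option, here is a minimal sketch of how it could be wired up, assuming the bean has setters for these properties and using Hibernate's built-in alias-to-bean transformer (Transformers.aliasToBean), which is a ready-made ResultTransformer:

// Alias every projected column with the matching property name, then let the
// built-in transformer copy the aliased values into DashboardSubindicatorSubmission.
Criteria criteria = session.createCriteria(DashboardSubindicatorSubmission.class)
    .setProjection(Projections.projectionList()
        .add(Projections.sum("InputValue"), "InputValue")
        .add(Projections.groupProperty("fkAccademicYearId"), "fkAccademicYearId")
        .add(Projections.groupProperty("fkAssessmentPlanID"), "fkAssessmentPlanID")
        .add(Projections.groupProperty("fkSubindicatorID"), "fkSubindicatorID")
        .add(Projections.groupProperty("InputTitle"), "InputTitle"))
    .addOrder(Order.asc("fkAccademicYearId"))
    .setResultTransformer(Transformers.aliasToBean(DashboardSubindicatorSubmission.class));

@SuppressWarnings("unchecked")
List<DashboardSubindicatorSubmission> results = criteria.list();

Transformers.aliasToBean needs matching setters on DashboardSubindicatorSubmission; for anything more involved, a custom ResultTransformer (the second option) gives full control.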
In our HBase table, each row has a column called crawl identifier. Using a MapReduce job, we only want to process rows from a given crawl at any one time. In order to run the job more efficiently, we gave our scan object a filter that (we hoped) would remove all rows except those with the given crawl identifier. However, we quickly discovered that our jobs were not processing the correct number of rows.
I wrote a test mapper to simply count the number of rows with the correct crawl identifier, without any filters. It iterated over all the rows in the table and counted the correct, expected number of rows (~15000). When we took that same job and added a filter to the scan object, the count dropped to ~3000. There was no manipulation of the table itself during or in between these two jobs.
Since adding the scan filter caused the visible rows to change so dramatically, we suspect that we simply built the filter incorrectly.
Our MapReduce job features a single mapper:
public static class RowCountMapper extends TableMapper<ImmutableBytesWritable, Put> {

    public String crawlIdentifier;

    // counters
    private static enum CountRows {
        ROWS_WITH_MATCHED_CRAWL_IDENTIFIER
    }

    @Override
    public void setup(Context context) {
        Configuration configuration = context.getConfiguration();
        crawlIdentifier = configuration.get(ConfigPropertyLib.CRAWL_IDENTIFIER_PROPERTY);
    }

    @Override
    public void map(ImmutableBytesWritable legacykey, Result row, Context context) {
        String rowIdentifier = HBaseSchema.getValueFromRow(row, HBaseSchema.CRAWL_IDENTIFIER_COLUMN);
        if (StringUtils.equals(crawlIdentifier, rowIdentifier)) {
            context.getCounter(CountRows.ROWS_WITH_MATCHED_CRAWL_IDENTIFIER).increment(1L);
        }
    }
}
The filter setup is like this:
String crawlIdentifier=configuration.get(ConfigPropertyLib.CRAWL_IDENTIFIER_PROPERTY);
if (StringUtils.isBlank(crawlIdentifier)){
throw new IllegalArgumentException("Crawl Identifier not set.");
}
// build an HBase scanner
Scan scan=new Scan();
SingleColumnValueFilter filter=new SingleColumnValueFilter(HBaseSchema.CRAWL_IDENTIFIER_COLUMN.getFamily(),
HBaseSchema.CRAWL_IDENTIFIER_COLUMN.getQualifier(),
CompareOp.EQUAL,
Bytes.toBytes(crawlIdentifier));
filter.setFilterIfMissing(true);
scan.setFilter(filter);
Are we using the wrong filter, or have we configured it wrong?
EDIT: we're looking at manually adding all the column families as per https://issues.apache.org/jira/browse/HBASE-2198 but I'm pretty sure the Scan includes all the families by default.
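For what it's worth, explicitly adding the family would look something like this, using the same HBaseSchema constants as above (it shouldn't change behaviour if the Scan already includes all families by default):

// Explicitly include the family that holds the crawl identifier.
scan.addFamily(HBaseSchema.CRAWL_IDENTIFIER_COLUMN.getFamily());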
The filter looks correct, but one scenario that could cause this relates to character encodings. Your filter uses Bytes.toBytes(String), which always uses UTF-8 [1], whereas you might be using the platform's native character encoding in HBaseSchema, or when you write the record, if you use String.getBytes() [2]. Check that the crawlIdentifier was originally written to HBase using the following, to ensure the filter is comparing like for like in the filtered scan:
Bytes.toBytes(crawlIdentifier)
[1] http://hbase.apache.org/apidocs/org/apache/hadoop/hbase/util/Bytes.html#toBytes(java.lang.String)
[2] http://docs.oracle.com/javase/1.4.2/docs/api/java/lang/String.html#getBytes()
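To make the failure mode concrete, here is a small self-contained sketch (not from the original posts, and the identifier value is made up) showing how a value written with the platform default charset can differ at the byte level from what Bytes.toBytes produces, which would make the SingleColumnValueFilter miss the row:

import org.apache.hadoop.hbase.util.Bytes;

public class EncodingMismatchSketch {
    public static void main(String[] args) {
        // Hypothetical crawl identifier containing a non-ASCII character.
        String crawlIdentifier = "crawl-2012-ü";

        byte[] written  = crawlIdentifier.getBytes();      // platform default charset (e.g. ISO-8859-1)
        byte[] filtered = Bytes.toBytes(crawlIdentifier);  // always UTF-8

        // If these byte arrays differ, the stored cell value will never equal the filter's comparison value.
        System.out.println(Bytes.equals(written, filtered));
    }
}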