I have the following requirement and I want to use Java 8 to produce the desired result:
Input ->
ArrayList<String> places = new ArrayList<String>(
Arrays.asList("Buenos Aires", "Córdoba", "La Plata", "Paris"));
Output -> HashMap<String, List<String>> detailsMap = new HashMap<>();
{"Buenos Aires"=[ "Córdoba", "La Plata", "Paris"], "Córdoba"=["Buenos Aires", "La Plata", "Paris"],
"La Plata"=["Buenos Aires", "Córdoba", "Paris"], "Paris"=["Buenos Aires", "Córdoba", "La Plata"]}
How can I achieve this using Java 8?
Here is the way to achieve that using streams.
List<String> places = Arrays.asList("Buenos Aires", "Córdoba", "La Plata", "Paris");
Map<String, List<String>> map = places.stream()
        .collect(Collectors.toMap(Function.identity(),
                e -> places.stream().filter(k -> !e.equals(k))
                        .collect(Collectors.toList())));
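For completeness, here is the same approach as a self-contained sketch. The distinct() call is my own addition, on the assumption that the input could contain duplicate names; Collectors.toMap throws an IllegalStateException on duplicate keys.
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PlacesDemo {
    public static void main(String[] args) {
        List<String> places = Arrays.asList("Buenos Aires", "Córdoba", "La Plata", "Paris");

        // Map each place to every other place in the list.
        Map<String, List<String>> map = places.stream()
                .distinct() // guard: Collectors.toMap fails on duplicate keys
                .collect(Collectors.toMap(
                        Function.identity(),
                        place -> places.stream()
                                       .filter(other -> !other.equals(place))
                                       .collect(Collectors.toList())));

        map.forEach((place, others) -> System.out.println(place + " = " + others));
    }
}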
If I have a collection of books:
{author: "tolstoy", title:"war & peace", price:100, pages:800}
{author: "tolstoy", title:"Ivan Ilyich", price:50, pages:100}
and if I want a result like this after grouping them by author:
{ author: "tolstoy",
books: [
{author: "tolstoy", title:"war & peace", price:100, pages:800}
{author: "tolstoy", title:"Ivan Ilyich", price:50, pages:100}
]
}
Using raw Mongo queries I can do something like this:
{$group: {
_id: "$author",
books:{$push: {author:"$author", title:"$title", price:"$price", pages:"$pages"}},
}}
But how do I do this using Spring? I tried something like this:
private GroupOperation getGroupOperation() {
return group("author").push("title").as("title").push("price").as("price").push("pages").as("pages");
}
but this does not seem to work. Any help would be appreciated.
UPDATE:
I used the solution in the link suggested by @Veeram and it works great, but I ran into another issue when I project the result. My projection class looks like this:
public class BookSummary{
private String author;
private List<Book> bookList;
//all getters and setters below
}
The group method looks like this:
private GroupOperation getGroupOperation() {
return group("author").push(new BasicDBObject("id","$_id").append("title","$title").append("pages","$pages").append("price","$price")).as("bookList");
}
The projection method looks like this:
private ProjectionOperation getProjectOperation() {
return project("author").and("bookList").as("bookList");
}
and the final aggregation operation:
mongoTemplate.aggregate(Aggregation.newAggregation(groupOperation, projectionOperation), Book.class, BookSummary.class).getMappedResults();
However, this gives the result:
[
{
"author": null,
"bookList": [
{
"id": null,
"title": "title1",
"pages": "100",
"price":"some price"
},
{
"id": null,
"title": "title2",
"pages": "200",
"price":"some price"
}
]
}
]
Why are the author and id null here? Any help would be appreciated.
You should project on _id instead in the projection phase: after group("author"), the author value becomes the group's _id, so there is no top-level author field left to project.
private ProjectionOperation getProjectOperation() {
return project("_id").and("bookList").as("bookList");
}
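If the author property still comes back null even with that projection, one pattern I have seen work is to alias the group _id back onto the author field explicitly. This is only a sketch, assuming the BookSummary class shown above and the usual static import of Aggregation.project:
// Assumes: import static org.springframework.data.mongodb.core.aggregation.Aggregation.project;
private ProjectionOperation getProjectOperation() {
    // group("author") moved the author value into _id, so alias it back to "author"
    // explicitly and carry the pushed bookList through unchanged.
    return project("bookList").and("_id").as("author");
}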
I am developing a .NET Core 1.1 app in which I am trying to use Accord.NET. According to the examples on this page (Naive Bayes), I need to convert data retrieved from the DB to a DataTable.
The problem is that when I use DataTable I get this error:
The type 'DataTable' exists in both 'Shim, ...' and
'System.Data.Common, ...'
Even if I use this:
DataTable learningDataNotCodifiedAsDataTable = new DataTable();
or this:
System.Data.DataTable learningDataNotCodifiedAsDataTable = new System.Data.DataTable();
TG.
While DataTable is not available in .NET Core 1.1, it is available in .NET Core 2.0. If you can upgrade your project to .NET Core 2.0, you will be able to use it in your code.
However, if you cannot switch to .NET Core 2.0 right now, note that you are not required to use DataTables with any of the methods in the Accord.NET framework. They are shown only because they add some convenience, but they are not actually required, as the example below shows:
string[] columnNames = { "Outlook", "Temperature", "Humidity", "Wind", "PlayTennis" };
string[][] data =
{
new string[] { "Sunny", "Hot", "High", "Weak", "No" },
new string[] { "Sunny", "Hot", "High", "Strong", "No" },
new string[] { "Overcast", "Hot", "High", "Weak", "Yes" },
new string[] { "Rain", "Mild", "High", "Weak", "Yes" },
new string[] { "Rain", "Cool", "Normal", "Weak", "Yes" },
new string[] { "Rain", "Cool", "Normal", "Strong", "No" },
new string[] { "Overcast", "Cool", "Normal", "Strong", "Yes" },
new string[] { "Sunny", "Mild", "High", "Weak", "No" },
new string[] { "Sunny", "Cool", "Normal", "Weak", "Yes" },
new string[] { "Rain", "Mild", "Normal", "Weak", "Yes" },
new string[] { "Sunny", "Mild", "Normal", "Strong", "Yes" },
new string[] { "Overcast", "Mild", "High", "Strong", "Yes" },
new string[] { "Overcast", "Hot", "Normal", "Weak", "Yes" },
new string[] { "Rain", "Mild", "High", "Strong", "No" },
};
// Create a new codification codebook to
// convert strings into discrete symbols
Codification codebook = new Codification(columnNames, data);
// Extract input and output pairs to train
int[][] symbols = codebook.Transform(data);
int[][] inputs = symbols.Get(null, 0, -1); // Gets all rows, and columns from 0 up to (but not including) the last
int[] outputs = symbols.GetColumn(-1); // Gets only the last column
// Create a new Naive Bayes learning algorithm
var learner = new NaiveBayesLearning();
NaiveBayes nb = learner.Learn(inputs, outputs);
// Consider we would like to know whether one should play tennis at a
// sunny, cool, humid and windy day. Let us first encode this instance
int[] instance = codebook.Translate("Sunny", "Cool", "High", "Strong");
// Let us obtain the numeric output that represents the answer
int c = nb.Decide(instance); // answer will be 0
// Now let us convert the numeric output to an actual "Yes" or "No" answer
string result = codebook.Translate("PlayTennis", c); // answer will be "No"
// We can also extract the probabilities for each possible answer
double[] probs = nb.Probabilities(instance); // { 0.795, 0.205 }
If you have the System.Data assembly under Assemblies and don't want to (or can't) remove it, you can bypass the conflict by using an extern alias. But when I bypassed the error that way, I got a 'DataTable' does not contain a constructor that takes 0/1 arguments error, and according to this discussion the reason is:
System.Data.DataTable is present in .Net core(1.0,1.1) as an empty class to
complete the interfaces implementation. This issue is to track the
work needed to bring in an API to provide DataTable like API in .Net
Core.
This changed only in .NET Core 2.0; see this SO post. I tried your code in a .NET Core 2.0 project (in VS 2017 15.3), and only then did it work fine.
UPDATE:
I meant these assemblies.
But since you say you have only NuGet packages, you can also define aliases for NuGet packages in your csproj file, as shown below (I used System.Data.Common; you can replace it with your Shim package if needed):
<Target Name="DataAlias" BeforeTargets="FindReferenceAssembliesForReferences;ResolveReferences">
<ItemGroup>
<ReferencePath Condition="'%(FileName)' == 'System.Data.Common'">
<Aliases>MyData</Aliases>
</ReferencePath>
</ItemGroup>
</Target>
and then reference it in C# like this:
extern alias MyData; //1st line in .cs file
...
using MyData::System.Data;
...
DataTable datatable = new DataTable();
But you still won't be able to use it, because you will get the constructor error I mentioned above. You have two options to solve this:
Switch to .NET Core 2.0
Try the workaround from this post using DbDataReader, if it suits you
I can create the following string, saved in a Java String object called updates:
{ "update":{ "_index":"myindex", "_type":"order", "_id":"1"} }
{ "doc":{"field1" : "aaa", "field2" : "value2" }}
{ "update":{ "_index":"myindex", "_type":"order", "_id":"2"} }
{ "doc":{"field1" : "bbb", "field2" : "value2" }}
{ "update":{ "_index":"myindex", "_type":"order", "_id":"3"} }
{ "doc":{"field1" : "ccc", "field2" : "value2" }}
Now I want to do a bulk update within a Java program:
Client client = getClient(); //TransportClient
BulkRequestBuilder bulkRequest = client.prepareBulk();
//?? how to attach updates variable to bulkRequest?
BulkResponse bulkResponse = bulkRequest.execute().actionGet();
I am unable to find a way to attach the above updates variable to bulkRequest before executing it.
I notice that I am able to add an UpdateRequest object to bulkRequest, but it seems to add only one document at a time. As indicated above, I have multiple documents to update in one string.
Can someone enlighten me on this? I have a gut feeling that I may be doing things the wrong way.
Thanks and regards.
The following code should work for you.
For each document update, you need to create a separate UpdateRequest as below and keep adding them to the same bulk request.
Once the bulk request is ready, execute it.
// Build the first partial-document update.
JSONObject obj = new JSONObject();
obj.put("field1", "value1");
obj.put("field2", "value2");
UpdateRequest updateRequest = new UpdateRequest(index, indexType, id1).doc(obj.toString());

// Create the bulk request once and add every update to it.
BulkRequestBuilder bulkRequest = client.prepareBulk();
bulkRequest.add(updateRequest);

// Build the second partial-document update and add it to the same bulk request.
obj = new JSONObject();
obj.put("fieldX", "value1");
obj.put("fieldY", "value2");
updateRequest = new UpdateRequest(index, indexType, id2).doc(obj.toString());
bulkRequest.add(updateRequest);

// Execute all updates in one round trip.
bulkRequest.execute().actionGet();
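If there are many documents to update, the same pattern scales with a loop. Below is a rough sketch only: the index and type names and the id-to-JSON map are illustrative placeholders, and doc(String) is used because that is what the client version above already accepts.
// Illustrative input: document id -> partial JSON document to merge into it.
Map<String, String> docsById = new LinkedHashMap<>();
docsById.put("1", "{\"field1\":\"aaa\",\"field2\":\"value2\"}");
docsById.put("2", "{\"field1\":\"bbb\",\"field2\":\"value2\"}");
docsById.put("3", "{\"field1\":\"ccc\",\"field2\":\"value2\"}");

// One UpdateRequest per document, all collected into a single bulk request.
BulkRequestBuilder bulk = client.prepareBulk();
for (Map.Entry<String, String> entry : docsById.entrySet()) {
    bulk.add(new UpdateRequest("myindex", "order", entry.getKey()).doc(entry.getValue()));
}

// Execute everything in one round trip and check for per-item failures.
BulkResponse bulkResponse = bulk.execute().actionGet();
if (bulkResponse.hasFailures()) {
    System.err.println(bulkResponse.buildFailureMessage());
}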
I ran into the same problem, where only one document got updated in my program. Then I found the following approach, which worked perfectly. It uses the Spring Data Elasticsearch client; I have also listed the imports I used in the code.
import java.util.Date;

import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.index.query.QueryBuilder;
import org.springframework.data.elasticsearch.core.query.UpdateQuery;
import org.springframework.data.elasticsearch.core.query.UpdateQueryBuilder;
private UpdateQuery updateExistingDocument(String id) {
    // Add UpdatedDateTime, CreatedDateTime, CreatedBy, UpdatedBy fields to an existing document
    UpdateRequest updateRequest = new UpdateRequest().doc(
            "UpdatedDateTime", new Date(),
            "CreatedDateTime", new Date(),
            "CreatedBy", "admin",
            "UpdatedBy", "admin");
    // Create the update query for the given document id
    UpdateQuery updateQuery = new UpdateQueryBuilder()
            .withId(id)
            .withClass(ElasticSearchDocument.class)
            .build();
    updateQuery.setUpdateRequest(updateRequest);
    // Execute the update
    elasticsearchTemplate.update(updateQuery);
    return updateQuery;
}
I have my indices created, and the mapping type for my 'suggest' field is set to completion. I can't figure out how to configure the query for completion suggestions in Elasticsearch (Java API).
I'm trying to base my implementation on this query:
"song-suggest" : {
"text" : "n",
"completion" : {
"field" : "suggest"
}
}
Here's what I have so far:
CompletionSuggestionBuilder compBuilder = new CompletionSuggestionBuilder("complete");
compBuilder.text("n");
compBuilder.field("suggest");
SearchResponse searchResponse = localClient.prepareSearch(INDEX_NAME)
.setTypes("completion")
.setQuery(QueryBuilders.matchAllQuery())
.addSuggestion(compBuilder)
.execute().actionGet();
CompletionSuggestion compSuggestion = searchResponse.getSuggest().getSuggestion("complete");
Am I missing something or doing something wrong? Thanks!
Not sure if this is the best approach, but it works for me. Hope it helps.
@Override
public List<SuggestionResponse> findSuggestionsFor(String suggestRequest) {
CompletionSuggestionBuilder suggestionsBuilder = new CompletionSuggestionBuilder("completeMe");
suggestionsBuilder.text(suggestRequest);
suggestionsBuilder.field("suggest");
SuggestRequestBuilder suggestRequestBuilder =
client.prepareSuggest(MUSIC_INDEX).addSuggestion(suggestionsBuilder);
logger.debug(suggestRequestBuilder.toString());
SuggestResponse suggestResponse = suggestRequestBuilder.execute().actionGet();
Iterator<? extends Suggest.Suggestion.Entry.Option> iterator =
suggestResponse.getSuggest().getSuggestion("completeMe").iterator().next().getOptions().iterator();
List<SuggestionResponse> items = new ArrayList<>();
while (iterator.hasNext()) {
Suggest.Suggestion.Entry.Option next = iterator.next();
items.add(new SuggestionResponse(next.getText().string()));
}
return items;
}
paramsMap = req.getParameterMap();
String prefix = getParam("prefix");
if (prefix == null) {
EndpointUtil.badRequest("Autocomplete EndPoint: prefix parameter is missing", resp);
return;
}
SearchRequest searchRequest;
SearchSourceBuilder searchSourceBuilder;
searchRequest = new SearchRequest("section");
searchSourceBuilder = new SearchSourceBuilder();
searchSourceBuilder.timeout(new TimeValue(60, TimeUnit.SECONDS));
searchSourceBuilder.from(0);
searchSourceBuilder.size(MAX_HITS);
CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder("text.completion")
.prefix(prefix, Fuzziness.AUTO).size(MAX_HITS);
SuggestBuilder suggestBuilder = new SuggestBuilder();
suggestBuilder.addSuggestion(SUGGEST_NAME, suggestionBuilder);
searchSourceBuilder.suggest(suggestBuilder);
searchRequest.source(searchSourceBuilder);
SearchResponse searchResponse = getElasticClient().search(searchRequest);
Suggest suggest = searchResponse.getSuggest();
List<Document> results = new ArrayList<Document>();
Suggest.Suggestion<Suggest.Suggestion.Entry<Suggest.Suggestion.Entry.Option>> suggestion
= suggest.getSuggestion(SUGGEST_NAME);
List<Suggest.Suggestion.Entry<Suggest.Suggestion.Entry.Option>> list = suggestion.getEntries();
for(Suggest.Suggestion.Entry<Suggest.Suggestion.Entry.Option> entry :list) {
List<Suggest.Suggestion.Entry.Option> options = entry.getOptions();
for(Suggest.Suggestion.Entry.Option option : options) {
Document doc = new Document();
doc.append("text",option.getText().toString());
results.add(doc);
}
}
sendJsonResult(results, resp);
But I'm running into the error "field "suggest" doesn't have type 'completion'". My mapping looks like this:
.field("suggest")
    .startObject()
        .field("type", "completion")
        .field("index_analyzer", "simple")
        .field("search_analyzer", "simple")
    .endObject()
It sounds like your mapping was not applied correctly. Did you verify it?
Based on the mapping you provided, I think you are missing the properties object around your field mapping. Try the following mapping:
XContentFactory.jsonBuilder().startObject()
.startObject("properties")
.startObject("suggest")
.field("type", "completion")
.endObject()
.endObject()
.endObject()
By the way, SimpleAnalyzer is the default analyzer for suggestions, so you do not need to define it explicitly.
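If the mapping was never actually applied to the index, here is a rough sketch of pushing it with the same-era TransportClient admin API. The index and type names are placeholders of mine, not values from the question, and the exact builder methods may differ slightly between client versions:
// Build the mapping with the "properties" wrapper around the suggest field.
XContentBuilder mapping = XContentFactory.jsonBuilder()
        .startObject()
            .startObject("properties")
                .startObject("suggest")
                    .field("type", "completion")
                .endObject()
            .endObject()
        .endObject();

// Apply it to the existing index and type (placeholder names).
client.admin().indices()
        .preparePutMapping("my_index")
        .setType("my_type")
        .setSource(mapping)
        .execute().actionGet();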
To anyone who still needs this: the code snippet below works with ES 6.3:
CompletionSuggestionBuilder suggestionBuilder = new CompletionSuggestionBuilder("<field_name>").prefix("<search_term>");
SearchRequestBuilder requestBuilder =
oaEsClient.client().prepareSearch("<index_name>").setTypes("<type_name>")
.suggest(new SuggestBuilder().addSuggestion("<suggestion_name>",suggestionBuilder))
.setSize(20)
.setFetchSource(true)
.setExplain(false)
;
SearchResponse response = requestBuilder.get();
Suggest suggest = response.getSuggest();
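For completeness, a short sketch of reading the completions back out of that response, mirroring the iteration shown in the earlier answer. The suggestion name is the same placeholder used above, and CompletionSuggestion comes from org.elasticsearch.search.suggest.completion:
// Pull the named suggestion and print the text of every completion option.
CompletionSuggestion completion = suggest.getSuggestion("<suggestion_name>");
for (CompletionSuggestion.Entry entry : completion.getEntries()) {
    for (CompletionSuggestion.Entry.Option option : entry.getOptions()) {
        System.out.println(option.getText().string());
    }
}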
I'm not sure if this is possible.
The class I have a list of looks like this:
class Person
{
    public string Firstname { get; set; }
    public string Lastname { get; set; }
    public DateTime Timestamp { get; set; }
}
Now I would like to create groups by Firstname and Lastname.
John Deer, 3:12
John Deer, 6:34
John Deer, 11:12
Tom Kin, 1:12
Tom Kin, 3:49
Tom Kin, 4:22
Markus Fert, 11:23
Furthermore, I would like to sort these groups by their Timestamp, latest first, while keeping the groups intact so I can display them in a ListView.
Markus Fert (Group Header)
11:23 (Content Element)
John Deer
11:12
6:34
Tom Kin
4:22
3:49
John Deer
3:12
Tom Kin
1:12
I hope a LINQ genius can help me solve this problem :)
Thanks!!
Many thanks to Sergey, this worked like a charm!
Furthermore, I would like to create a custom class for my group key so I can display additional things in my ListView headers (not just a string spliced together).
I would like to assign my query to an IEnumerable like this:
IEnumerable<IGrouping<Header, Person>> PersonGroups
where the Header contains some other properties from each Person (e.g. there is also a Country, Age, ... for each Person). Maybe you can help me there too?
Thanks again, Sergey. I solved my problem by implementing a Header class which implements the IComparable interface.
IEnumerable<IGrouping<Header, Person>> PersonGroups
public class Header : IComparable<Header>
{
    public Header(string firstname, string lastname)
    {
        Firstname = firstname;
        Lastname = lastname;
    }

    public string Firstname { get; set; }
    public string Lastname { get; set; }

    public int CompareTo(Header that)
    {
        if (this.Firstname == that.Firstname && this.Lastname == that.Lastname)
            return 0;
        else
            return -1;
    }
}
My query now looks like this:
PersonGroups = persons.OrderByDescending(p => p.Timestamp)
                      .GroupConsecutive(p => new Header(p.Firstname, p.Lastname));
Actually, you need to order the results by timestamp first, and only then group the ordered sequence by consecutive people:
var query =
people.OrderByDescending(p => p.Timestamp.TimeOfDay)
.GroupConsecutive(p => String.Format("{0} {1}", p.Firstname, p.Lastname))
.Select(g => new {
Header = g.Key,
Content = String.Join("\n", g.Select(p => p.Timestamp.TimeOfDay))
});
You will need a GroupConsecutive implementation, which creates groups of consecutive items that share the same value of the provided selector (the full name in your case).
For your sample input, the result is:
[
{
"Header": "Markus Fert",
"Content": "11:23:00"
},
{
"Header": "John Deer",
"Content": "11:12:00\n06:34:00"
},
{
"Header": "Tom Kin",
"Content": "04:22:00\n03:49:00"
},
{
"Header": "John Deer",
"Content": "03:12:00"
},
{
"Header": "Tom Kin",
"Content": "01:12:00"
}
]
Here's an approach using the built-in LINQ operator Aggregate.
First I order the list by descending timestamp, and then I create a name formatter function.
var op = people
.OrderByDescending(p => p.Timestamp)
.ToArray();
Func<Person, string> toName = p =>
String.Format("{0} {1}", p.Firstname, p.Lastname);
Now I can build the query:
var query =
op
.Skip(1)
.Aggregate(new []
{
new
{
Name = toName(op.First()),
Timestamps = new List<string>()
{
op.First().Timestamp.ToShortTimeString(),
},
},
}.ToList(), (a, p) =>
{
var name = toName(p);
if (name == a.Last().Name)
{
a.Last().Timestamps.Add(p.Timestamp.ToShortTimeString());
}
else
{
a.Add(new
{
Name = name,
Timestamps = new List<string>()
{
p.Timestamp.ToShortTimeString(),
},
});
}
return a;
});
I got this result: