C# ElasticClient v6.0.2 LowLevel.IndexAsync Creating Empty Documents - elasticsearch

Below is a portion of my code, taken from here. I had to make a few changes to get it working with the new v6.x of Elasticsearch.
It runs without any errors and creates new documents, BUT with empty field values. If I take the same JSON payload and PUT it into Elasticsearch using Postman, the document gets indexed fine, with all fields populated. Please let me know whether I am using the right Elasticsearch API methods, and whether I am using them correctly.
string strJsonMessage = #"
{
message : {
content: 'Test Message Content'
}
}";
ConnectionSettings connectionSettings = new ConnectionSettings(new Uri("xxx")).BasicAuthentication("xx", "xx");
ElasticClient client = new ElasticClient(connectionSettings);
JObject msg = JObject.Parse(strJsonMessage);
var result = await client.LowLevel.IndexAsync<BytesResponse>("events-2018.03.27", "event", PostData.Serializable(msg));
if (result.Success)
{
log.Info("Data successfully sent.");
log.Verbose(result.Body);
}
else
{
log.Error("Failed to send data.");
}
OUTPUT TRACE:
2018-03-27T18:37:18.961 [Info] Data successfully sent.
2018-03-27T18:37:18.961 [Verbose] {"_index":"events-2018.03.27","_type":"event","_id":"u9HPaGIBBm3ZG7GB5jM_","_version":1,"result":"created","_shards":{"total":2,"successful":1,"failed":0},"_seq_no":29,"_primary_term":1}
Querying Elasticsearch for this document by its ID gives me this:
{
    "_index": "events-2018.03.27",
    "_type": "event",
    "_id": "u9HPaGIBBm3ZG7GB5jM_",
    "_version": 1,
    "found": true,
    "_source": {
        "message": {
            "content": []
        }
    }
}

In case someone comes across the same issue, I got the solution here.
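The linked solution isn't quoted here, but a common cause of this symptom is that NEST 6.x's internal serializer does not know how to handle an external Newtonsoft.Json JObject and serializes its properties as empty arrays. A minimal sketch of one possible workaround, assuming the same client setup as above: pass the payload as a raw JSON string instead of a serializable object.

// Workaround sketch (an assumption, not the linked solution verbatim):
// send the body as a raw JSON string so the internal serializer never
// touches the JObject. If you already have a JObject, msg.ToString()
// yields the same raw JSON.
string strJsonMessage = @"
{
    ""message"": {
        ""content"": ""Test Message Content""
    }
}";

var result = await client.LowLevel.IndexAsync<BytesResponse>(
    "events-2018.03.27", "event", PostData.String(strJsonMessage));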

Related

Reducing output of GraphQL

I have set up a GraphQL-mongoose-express-apollo combo as per this guide.
When I run a query to get multiple results, is there a way to reduce the resulting array before I actually get to processing the response from the query?
Query:
query GetSomeUsers {
    userMany(limit: 3) {
        _id
    }
}
Actual output:
{
    "data": {
        "userMany": [
            {
                "_id": "5e950543cb48dbaafc60722d"
            },
            {
                "_id": "5e950543cb48dbaafc60722e"
            },
            {
                "_id": "5e950547cb48dbaafc60722f"
            }
        ]
    }
}
Desired output:
{
    "data": {
        "userMany": [
            "5e950543cb48dbaafc60722d",
            "5e950543cb48dbaafc60722e",
            "5e950547cb48dbaafc60722f"
        ]
    }
}
So far I have only found something that seems relevant in an article on GraphQL Leveler, but I don't see how it would work with graphql-compose-mongoose, since the GraphQL schema is generated automatically and there does not seem to be any place in the code to put that LevelerObjectType in place of a GraphQLObjectType.

How to create a join relation using the Elasticsearch Python client

I am looking for any examples that implement the parent-child relationship using the Python interface.
I can define a mapping such as:
es.indices.create(
    index="docpage",
    body={
        "mappings": {
            "properties": {
                "my_join_field": {
                    "type": "join",
                    "relations": {
                        "my_document": "my_page"
                    }
                }
            }
        }
    }
)
I am then indexing a document using
    res = es.index(index="docpage", doc_type="_doc", id=1, body=jsonDict)
where jsonDict is a dict holding the document's text, with jsonDict['my_join_field'] = 'my_document' and other relevant info.
Reference example.
I tried adding a pageDict, where page is a string containing the text of a page in the document:
pageDict['content']=page
pageDict['my_join_field']={}
pageDict['my_join_field']['parent']="1"
pageDict['my_join_field']['name']="page"
res = es.index(index="docpage",doc_type="_doc",body=pageDict)
but I get a parser error:
RequestError(400, 'mapper_parsing_exception', 'failed to parse')
Any ideas?
This worked for me:
res = es.index(index="docpage", doc_type="_doc", body={
    "content": page,
    "my_join_field": {
        "name": "my_page",
        "parent": "1"
    }
})
The initial syntax can also work if the parent is passed via the routing parameter of the index call:
res = es.index(index="docpage", doc_type="_doc", body=pageDict, routing=1)

Spring MongoDB distinct - can't get full document

I have a collection in the following format.
{
    id: ____,
    name: "Carlos",
    city: "Mumbai"
},
{
    id: ____,
    name: "Pravin",
    city: "Mumbai"
},
{
    id: _____,
    name: "Gaurav",
    city: "Ahmedabad"
}
I want the whole document, distinct by city. I tried db.collection.distinct("city"), but it returns only the distinct cities.
Current Output:
    ["Mumbai", "Ahmedabad"]
Expected Output:
    {
        id: ____,
        name: "Carlos",
        city: "Mumbai"
    },
    {
        id: _____,
        name: "Gaurav",
        city: "Ahmedabad"
    }
Above you can see there is only one record for "Mumbai"; that is the kind of output I need.
Does anyone know how to get the whole document with distinct in Spring MongoDB?
You could try running an aggregation pipeline operation where you include the other fields inside the $group pipeline stage using the $first operator. Two examples that show this approach follow:
Mongo Shell:
pipeline = [
    {
        "$group": {
            "_id": "$city",
            "id": { "$first": "$_id" },
            "name": { "$first": "$name" }
        }
    }
]
db.collection.aggregate(pipeline);
Spring Data MongoDB:
import static org.springframework.data.mongodb.core.aggregation.Aggregation.*;

MongoTemplate mongoTemplate = repository.getMongoTemplate();
Aggregation agg = newAggregation(
    group("city")
        .first("_id").as("id")
        .first("name").as("name")
);
AggregationResults<OutputType> result = mongoTemplate.aggregate(agg,
    "collection", OutputType.class);
List<OutputType> mappedResult = result.getMappedResults();

Spring Data MongoDB remove an array of documents in another document

Example:
{
    "_id": "851504284360",
    "createUserId": "User73E68059D1A44EF3BBF88650EA3B83B0",
    "userIds": [],
    "userList": [
        {
            "_id": "User73E68059D1A44EF3BBF88650EA3B83B0",
            "phone": "kodD9QDWdTCl0KTTXjIRHw==",
            "token": "343cdbc09545e7e9812fea4998a83dd8"
        }
    ]
}
I'm trying to delete an array of documents in MongoDB, but can't get it to work. update.pull("userIds", value) succeeds, but for userList I've tried things like:
    update.pull("userList", new String[]{ value })
    update.pull("userList", new ArrayList().add(Object))
    update.pull("userList", new Object(key, value)) // etc.
but none of them succeed.
How can I remove the userList documents in Spring MongoDB? I am using mongoTemplate.
To remove userList, use the $unset operator, which deletes a particular field.
In the Mongo shell you would do something like this:
db.collection.update(
    { "_id": "851504284360" },
    { "$unset": { "userList": "" } }
)
In mongoTemplate, you could do something like:
WriteResult wr = mongoTemplate.updateMulti(
    new Query(where("_id").is("851504284360")),
    new Update().unset("userList"),
    Entity.class
);

ElasticSearch NEST adds $type in the serialized request in MVC app

I'm trying to use NEST from an MVC app; however, the request is being serialized incorrectly:
iisexpress.exe Error: 0 : NEST POST http://localhost:9200/_search (00:00:00.8188240):
StatusCode: 400,
Method: POST,
Url: http://localhost:9200/_search,
Request: {
    "$type": "Nest.SearchDescriptor`1[[System.Object, mscorlib]], Nest",
    "aggs": {
        "Period": {
            "$type": "Nest.AggregationDescriptor`1[[System.Object, mscorlib]], Nest",
            "date_histogram": {
                "$type": "Nest.DateHistogramAggregationDescriptor`1[[System.Object, mscorlib]], Nest",
                "field": "Timestamp",
                "interval": "day",
                "format": "yyyy-MM-dd"
            }
        }
    }
}
The query is very simple:
var cs2 = new ConnectionSettings(new Uri("http://localhost:9200")).EnableTrace();
var client = new ElasticClient(cs2);
var res3 = client.Search<object>(q => q.Aggregations(agg =>
    agg.DateHistogram("DayAgg", t => t.Field("Timestamp").Interval("day"))));
The exact same code works fine in a console application, so I'm thinking this could be related to serialization, since in the failing case the "$type" property is added.
I found a related issue: Serialization error with Elasticsearch NEST/C#
The root cause is the following setting:
config.Formatters.JsonFormatter.SerializerSettings.TypeNameHandling = TypeNameHandling.Objects;
There is now a new API for adjusting the serializer settings: SetJsonSerializerSettingsModifier
var cs2 = new ConnectionSettings(new Uri("http://localhost:9200"))
    .SetJsonSerializerSettingsModifier(settings => settings.TypeNameHandling = TypeNameHandling.None)
    .EnableTrace();
