Elasticsearch update fails after including null values

We recently upgraded to Elasticsearch v5 and NEST v5.6.
We are trying to set a field to null, but the default serialization settings ignore null values.
var pool = new SingleNodeConnectionPool(new Uri("http://local:9200"));
var connectionSettings =
    new ConnectionSettings(pool)
        .DisableDirectStreaming();
var elasticClient = new ElasticClient(connectionSettings);
var indexName = "myIndexName";
var typeName = "myTypeName";
var documentId = 2;
var pendingDescriptor = new BulkDescriptor();
pendingDescriptor.Index(indexName).Type(typeName);
var pendingUpdate = new Dictionary<string, object>
{
    { "DocumentType_TAG_PENDING_Id", null }
};
var updateRequest = new UpdateRequest<dynamic, dynamic>(indexName, typeName, new Id(documentId));
updateRequest.Doc = pendingUpdate;
elasticClient.Update<dynamic>(updateRequest);
This results in the request:
{"update":{"_id":2,"_retry_on_conflict":3}}
{"doc":{}}
The field value isn't set to null.
I tried modifying the serializer to include null values after reading https://www.elastic.co/guide/en/elasticsearch/client/net-api/5.x/modifying-default-serializer.html
var connection = new HttpConnection(); // the connection instance this ConnectionSettings overload expects
var connectionSettings =
    new ConnectionSettings(pool, connection, new SerializerFactory((settings, values) =>
    {
        settings.NullValueHandling = NullValueHandling.Include;
    }));
Now my request becomes:
{"update":{"_index":null,"_type":null,"_id":2,"_version":null,"_version_type":null,"_routing":null,"_parent":null,"_timestamp":null,"_ttl":null,"_retry_on_conflict":3}}
{"doc":{"DocumentType_TAG_PENDING_Id":null},"upsert":null,"doc_as_upsert":null,"script":null,"scripted_upsert":null}
And I get the following error:
{"error":{"root_cause":[{"type":"json_parse_exception","reason":"Current token (VALUE_NULL) not of boolean type\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#181f5854; line: 1, column: 82]"}],"type":"json_parse_exception","reason":"Current token (VALUE_NULL) not of boolean type\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#181f5854; line: 1, column: 82]"},"status":500}
Please help

So far, we have two options:
Option 1: Upgrade to NEST v6, which separates the document and request serializers, so we can customize how our documents are serialized without affecting how NEST serializes its own request/response types. For more info, see https://www.elastic.co/guide/en/elasticsearch/client/net-api/master/custom-serialization.html
Option 2: Use the Elasticsearch low-level client with a raw POST body to bypass the NEST serializer: https://www.elastic.co/guide/en/elasticsearch/client/net-api/5.x/elasticsearch-net.html
Our preferred approach is to upgrade if everything works, and to fall back to the low-level client if we run into issues; a sketch of both options follows.
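To make the two options concrete, here is a minimal, untested sketch. The source serializer hook is the NEST 6 mechanism from the linked docs (Nest.JsonNetSerializer package); the field name, index, type and id are simply the values from the question.
// Option 1 (NEST 6+): a source serializer that keeps nulls. Only documents
// pass through it, so NEST's own request/response types are unaffected.
var pool = new SingleNodeConnectionPool(new Uri("http://local:9200"));
var settings = new ConnectionSettings(pool,
    sourceSerializer: (builtin, values) => new JsonNetSerializer(builtin, values,
        () => new JsonSerializerSettings { NullValueHandling = NullValueHandling.Include }));
var client = new ElasticClient(settings);
// Option 2 (NEST 5.x): send the partial update as raw JSON through the
// low-level client, so the NEST serializer never sees (and never drops) the null.
var body = "{\"doc\":{\"DocumentType_TAG_PENDING_Id\":null}}";
var response = elasticClient.LowLevel.Update<string>("myIndexName", "myTypeName", "2", body);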

Related

DeleteRequest example with elastic java api client 8.2.0

I need an example of DeleteRequest for the ES 8.2.0 Java API client, where we no longer have types; we have only an index and documents. I am looking for a code reference for deleting one particular document by passing the index name and doc id.
You can use the code below to delete a document from an index. You need to provide the index name and doc id.
// Build the low-level REST client and wrap it in the Java API client transport.
RestClient restClient = RestClient.builder(new HttpHost("localhost", 9200)).build();
ElasticsearchTransport transport = new RestClientTransport(restClient, new JacksonJsonpMapper());
ElasticsearchClient esClient = new ElasticsearchClient(transport);
// Delete a single document by index name and document id.
DeleteRequest request = DeleteRequest.of(d -> d.index("index_name").id("doc_id"));
DeleteResponse response = esClient.delete(request);
You can also try the older High Level REST Client (deprecated since 7.15):
Syntax:
DeleteRequest request = new DeleteRequest("your-index-name","doc-id");
Example:
DeleteRequest deleteRequest = new DeleteRequest("employeeindex","002");
DeleteResponse deleteResponse = client.delete(deleteRequest, RequestOptions.DEFAULT);
System.out.println("response id: "+deleteResponse.getId());
For more information:
https://www.elastic.co/guide/en/elasticsearch/client/java-rest/current/java-rest-high-document-delete.html

Elasticsearch Nest 6 - Get index metadata

Currently, I can retrieve the index mapping metadata with the following command in Kibana:
GET /[indexName]/_mapping/[documentType]
Is there a way to do that with the Elasticsearch NEST client? If not, what other options would I have?
You can retrieve it with
var defaultIndex = "default-index";
var pool = new SingleNodeConnectionPool(new Uri("http://localhost:9200"));
var settings = new ConnectionSettings(pool)
    .DefaultIndex(defaultIndex);
var client = new ElasticClient(settings);
var mappingResponse = client.GetMapping<MyDocument>();
which will send a request to
GET http://localhost:9200/default-index/_mapping/mydocument
In this case:
- index will be "default-index", the default index configured on ConnectionSettings
- type will be "mydocument", inferred from the POCO type MyDocument
You can specify index and/or type explicitly if you want to:
var mappingResponse = client.GetMapping<MyDocument>(m => m
    .Index("foo")
    .Type("bar")
);
which sends the following request
GET http://localhost:9200/foo/_mapping/bar
You can also target all indices and/or all types:
var mappingResponse = client.GetMapping<MyDocument>(m => m
    .AllIndices()
    .AllTypes()
);
which sends the following request
GET http://localhost:9200/_mapping

Bulk Update on ElasticSearch using NEST

I am trying to replace documents on ES using NEST. I see that the following options are available.
Option #1:
var documents = new List<dynamic>();
var blkOperations = documents.Select(doc => new BulkIndexOperation<T>(doc)).Cast<IBulkOperation>().ToList();
var blkRequest = new BulkRequest()
{
    Refresh = true,
    Index = indexName,
    Type = typeName,
    Consistency = Consistency.One,
    Operations = blkOperations
};
var response1 = _client.Raw.BulkAsync<T>(blkRequest);
Option #2:
var descriptor = new BulkDescriptor();
foreach (var eachDoc in documents)
{
    var doc = eachDoc;
    descriptor.Index<T>(i => i
        .Index(indexName)
        .Type(typeName)
        .Document(doc));
}
var response = await _client.Raw.BulkAsync<T>(descriptor);
So can anyone tell me which one is better, or whether there is any other option for doing bulk updates or deletes using NEST?
You are passing the bulk request to the low-level ElasticsearchClient, i.e. ElasticClient.Raw, when you should be passing it to ElasticClient.BulkAsync() or ElasticClient.Bulk(), which accept a bulk request type.
BulkRequest and BulkDescriptor are two different approaches that NEST offers for writing queries; the former uses object initializer syntax to build up a request object, while the latter is used within the fluent API to build a request with lambda expressions.
In your example, BulkDescriptor is used outside the context of the fluent API, but both BulkRequest and BulkDescriptor implement IBulkRequest, so either can be passed to ElasticClient.Bulk(IBulkRequest).
As for which to use: in this case it doesn't matter, so pick whichever you prefer. A corrected sketch follows.
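A minimal sketch of the corrected calls, reusing the documents, indexName and typeName variables from the question and assuming NEST 5.x-era APIs:
// Object initializer syntax: build the request, then pass it to BulkAsync
// on ElasticClient itself (not on .Raw, which is the low-level client).
var bulkResponse = await _client.BulkAsync(blkRequest);
// Fluent API equivalent: build the descriptor inline via a lambda.
var fluentResponse = await _client.BulkAsync(b => b
    .Index(indexName)
    .Type(typeName)
    .IndexMany(documents));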

Unable to insert dynamic object to Elastic Search using NEST

I am creating a dynamic object and assigning values to it via an IDictionary, adding the dictionary's collections to the object. Then I add the dynamic object to Elasticsearch using NEST. It throws a StackOverflowException: "An unhandled exception of type 'System.StackOverflowException' occurred in mscorlib.dll".
Here is what I have tried.
var node = new Uri("http://localhost:9200");
var settings = new ConnectionSettings(node, defaultIndex: "test-index");
var client = new ElasticClient(settings);
try
{
    dynamic x = new ExpandoObject();
    Dictionary<string, object> dic = new Dictionary<string, object>();
    dic.Add("NewProp", "test1");
    dic.Add("NewProp3", "test1");
    x = dic;
    var index3 = client.Index(x);
}
catch (Exception ex)
{
    string j = ex.StackTrace;
}
I need to create an index in Elasticsearch using a dynamic object because I will have an Excel workbook consisting of over 300 worksheets; each sheet name will be used as the type, and the contents of the worksheet will be the _source.
In the example above, the dynamic object 'x' represents the worksheet, and the values added to the dictionary are the rows and columns of the sheet.
Where am I going wrong?
I believe you can skip the ExpandoObject and just index the Dictionary<string, object> directly:
var dictionary = new Dictionary<string, object>();
dictionary.Add("NewProp", "test1");
dictionary.Add("NewProp3", "test1");
client.Index(dictionary);
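For the worksheet-per-type scenario, a small sketch, assuming the NEST 1.x-style API the question uses; worksheetName and rowId are hypothetical placeholders for values read from the workbook:
// Hypothetical placeholders: the real values would come from the workbook.
var worksheetName = "sheet1";
var rowId = 1;
// Index one row's dictionary, using the worksheet name as the type.
var response = client.Index(dictionary, i => i
    .Index("test-index")
    .Type(worksheetName)
    .Id(rowId));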

Does the Dynamics CRM 2013 SDK cache QueryExpression results by default?

I wrote this simple query:
var connectionString = String.Format("Url={0}; Username={1}; Password={2}; Domain={3}", url, username, password, domain);
var myConnection = CrmConnection.Parse(connectionString);
CrmOrganizationServiceContext _service = new CrmOrganizationServiceContext(myConnection);
var whoAmI = _service.Execute(new WhoAmIRequest());
var query = new QueryExpression
{
    EntityName = "phonecall",
    ColumnSet = new ColumnSet(true)
};
query.PageInfo = new PagingInfo
{
    Count = 20,
    PageNumber = 1,
    PagingCookie = null
};
query.Orders.Add(new OrderExpression
{
    AttributeName = "actualstart",
    OrderType = OrderType.Descending
});
query.Criteria = new FilterExpression() { FilterOperator = LogicalOperator.And };
query.Criteria.AddCondition("call_caller", ConditionOperator.In, lines);
var entities = _service.RetrieveMultiple(query).Entities;
I have a program which runs this query every minute. On the first execution the correct results are displayed but for subsequent queries the results never change as I update records in CRM.
If I restart my program the results refresh correctly again on the first load.
Why are the results not updating as records are modified in CRM?
It is the CrmOrganizationServiceContext that is doing the caching. I found the following worked a treat, and the results of my RetrieveMultiple are no longer cached:
Context = new CrmOrganizationServiceContext(CrmConnection.Parse(connectionString));
Context.TryAccessCache(cache => cache.Mode = OrganizationServiceCacheMode.Disabled);
RetrieveMultiple always brings back fresh results, so there must be some other aspect of your program that is causing stale data to be displayed.
