Lock individual documents in Elasticsearch using NEST

I am looking for a good example of how to lock a document in Elasticsearch. The link below explains how to do this in Elasticsearch:
http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/concurrency-solutions.html
How can I achieve this using NEST? Can someone suggest an approach to start with?

To start with, read the NEST Quick Start article. In particular, note the following:
NEST is a high level elasticsearch client that still maps very closely to the original elasticsearch API. Requests and Responses have been mapped to CLR objects and NEST also comes with a powerful strongly typed query dsl.
Consider also that you may not require the added level of abstraction that NEST provides, and may prefer to use Elasticsearch.NET directly. Read its own Quick Start article as well.
With that in mind, implementing the steps in the Solving concurrency issues article using NEST / Elasticsearch.NET is simply a matter of identifying the .NET methods that correspond to the necessary Elasticsearch API methods. The first one for example:
PUT /fs/lock/global/_create
{}
In NEST would look something like:
var createGlobalLockResponse = esClient.Index<object>(new object(), f => f
.Index("fs")
.Type("lock")
.Id("global")
.OpType(global::Elasticsearch.Net.OpType.Create));
Regarding the check of whether "this create request fails with a conflict exception", check the properties on the createGlobalLockResponse object: in particular Created, and, if you intend to handle the failure more specifically, ServerError.

Related

Umbraco 8 - Get Children Of Node Using ContentAtXPath() Method

I've been refactoring an existing Umbraco project to use more performant querying when getting back document data as everything was previously being returned using LINQ. I have been using a combination of Umbraco's querying via XPaths and Examine.
I am currently stumped trying to get child documents using the Umbraco.ContentAtXPath() method. What I would like to do is get child documents based on a path I pass to the method. This is what I have currently:
IEnumerable<IPublishedContent> umbracoPages = Umbraco.ContentAtXPath("//* [#isDoc]/descendant::/About [#isDoc]");
Running this returns an "Object reference not set to an instance of an object." error, and I'm unable to see exactly where I'm going wrong (I'm new to this form of querying in Umbraco).
Ideally, I'd like to enhance the querying to also carry out sorting using the non-LINQ approach, as demonstrated here.
Up until Umbraco 8, content was cached in an XML file, which made XPath perfect for querying content efficiently. In v8, however, the so-called "NuCache" is neither file based nor XML based, so the XPath query support is only there for... well... old times' sake, I guess? Either way, it's probably not going to be super efficient and (I'd advise) not something to "aim for". That said, I of course don't know what you are changing from (LINQ can be a lot of things) :-/
It certainly depends on how big your dataset is.
As Umbraco has moved away from the XML-backed cache, you should look into LINQ queries against your content models. Make sure you use ModelsBuilder to generate the models.
On a small dataset LINQ will be much quicker than Examine. On a large dataset Examine/Lucene will be much more consistent in performance.
Querying NuCache is pretty fast in Umbraco 8, only beaten by an Examine search.
Assuming you're using Models Builder, and your About page is a child of Home page, you could use:
var homePage = (HomePage) Model.Root();
var aboutPage = homePage?.Children<AboutPage>().FirstOrDefault();
var umbracoPages = aboutPage.Children();
Where HomePage is your home page Document Type Alias and AboutPage is your About page Document Type alias.

Pagination feature in Pivotal GemFire

Is there any built-in API which provides a pagination feature in Pivotal GemFire, such as in the QueryService API or using Functions? We are currently using Pivotal GemFire 9.3.0 running in PCC.
TIA.
Not yet. It is a planned feature for the SD Lovelace/Moore release trains. See SGF-524 - Add support for PagingAndSortingRepositories. NOTE: sorting is already supported; paging is a WIP.
There is not. However, there is a common pattern for this. Instead of directly querying the matching objects, write your query to return the keys of the matching objects. The client can then implement paging (e.g. page 1 is keys 0-99, page 2 is keys 100-199, etc.). Use the "getAll" method with a list of keys to pull back one page at a time.
BTW, you can query the keys like this: "select key from /person.entries where value.ssn='222-22-2222'"
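To make the pattern concrete, here is a minimal sketch using the GemFire Java client API. It is only an illustration under assumptions not in the original answer: a /person region keyed by String, a locator on localhost:10334, and a page size of 100; adjust those to your setup.
import java.util.List;
import java.util.Map;
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.query.Query;
import org.apache.geode.cache.query.QueryService;
import org.apache.geode.cache.query.SelectResults;

public class PagedQueryExample {
    private static final int PAGE_SIZE = 100; // hypothetical page size

    public static void main(String[] args) throws Exception {
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334) // assumed locator address
                .create();
        Region<String, Object> person = cache.getRegion("person");

        // Step 1: query only the keys of the matching entries.
        QueryService queryService = cache.getQueryService();
        Query keyQuery = queryService.newQuery(
                "select key from /person.entries where value.ssn = '222-22-2222'");
        SelectResults<String> keyResults = (SelectResults<String>) keyQuery.execute();
        List<String> keys = keyResults.asList();

        // Step 2: pull back one page of values at a time with getAll.
        for (int from = 0; from < keys.size(); from += PAGE_SIZE) {
            int to = Math.min(from + PAGE_SIZE, keys.size());
            Map<String, Object> page = person.getAll(keys.subList(from, to));
            // ... process the current page ...
        }

        cache.close();
    }
}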

All indices are returned while trying to get only those indices which are associated with a specific alias

I have recently upgraded an application from Elasticsearch 5.3.1 to 6.0.
My requirement was to get all the indices which were associated with a specific alias.
I used the snippet below to fetch all indices associated with a specific alias. This snippet works fine in 5.3.1, giving only those indices which are associated with that specific alias.
GetAliasesResponse r = client.admin().indices()
        .getAliases(new GetAliasesRequest("givenalias")).actionGet();
But after upgrading to ES 6.0, the same snippet gives back all the indices that exist in the system.
Ideally it should only return those indices which are associated with the given alias, not others. This was working in Elasticsearch 5.3.1.
TL;DR: It is an intended breaking change in the Java API of Elasticsearch (though it is not explicitly mentioned in the "Breaking changes in 6.0 » Java API changes" page).
The following is the story of discovering this fact. (Note: the original answer was heavily edited, hence comments might be out of date.)
Breaking changes in the REST API in 6.0
First I noticed that this part of the REST API changed in Elasticsearch 6.0. There are two reported breaking changes concerning aliases:
GET /_aliases,_mappings syntax was removed in favor of GET /_aliases and GET /_mappings
Indices aliases api resolves indices expressions only against indices
However, nothing is mentioned there concerning the OP's case.
From what I have seen doing queries, this query works in Elasticsearch 5:
GET /alias1/_aliases
And does not work in Elasticsearch 6, giving the following error:
{
  "error": "Incorrect HTTP method for uri [/alias1/_aliases] and method [GET], allowed: [PUT]",
  "status": 405
}
Interestingly, GET /alias1/_alias works in both versions and returns the same result.
Moreover, I didn't manage to find an example of GET /alias1/_aliases in the documentation of either 5.6 or 6.0.
Reproducing the bug
After having realised that OP is actually using the Java API, I managed to reproduce the exact same behavior.
The following code:
GetAliasesResponse alias1 = client.admin().indices()
.getAliases(new GetAliasesRequest("alias1")).actionGet();
In ES 5 produces this in the IntelliJ debugger:
And for ES 6 I've got the following:
As you can see, there are extra keys in the second output, which have empty values.
Diving into the source code
A quick search over the elasticsearch codebase gave me the final explanation. In ES 5 there was a test, testIndicesGetAliases, which checked that the list of indices returned for a test alias has exactly one element (IndexAliasesIT.java#L554):
logger.info("--> getting alias1");
GetAliasesResponse getResponse = admin().indices().prepareGetAliases("alias1").get();
assertThat(getResponse, notNullValue());
assertThat(getResponse.getAliases().size(), equalTo(1));
And in 6.0 it checks that the size is 5! (IndexAliasesIT.java#L573)
logger.info("--> getting alias1");
GetAliasesResponse getResponse = admin().indices().prepareGetAliases("alias1").get();
assertThat(getResponse, notNullValue());
assertThat(getResponse.getAliases().size(), equalTo(5));
This change was introduced in this commit, which is related to these issues:
Remove comma-separated feature parsing for GetIndicesAction #24723
_alias API no longer accepts index wildcards #25090
This is actually interesting, because one of the reported REST API breaking changes that we have seen above also broke compatibility of some Java API calls.
What you can do
In the short term, you just need to filter out those keys with empty values.
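For example, with the TransportClient something along these lines should work. This is only a sketch, assuming getAliases() still returns an ImmutableOpenMap of index name to alias list as in 5.x; the class and method names are placeholders.
import java.util.ArrayList;
import java.util.List;
import com.carrotsearch.hppc.cursors.ObjectObjectCursor;
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesRequest;
import org.elasticsearch.action.admin.indices.alias.get.GetAliasesResponse;
import org.elasticsearch.client.Client;
import org.elasticsearch.cluster.metadata.AliasMetaData;

public class AliasLookup {

    /** Returns only the indices that actually carry the given alias. */
    static List<String> indicesForAlias(Client client, String alias) {
        GetAliasesResponse response = client.admin().indices()
                .getAliases(new GetAliasesRequest(alias)).actionGet();

        List<String> indices = new ArrayList<>();
        // In 6.x the response also contains entries for indices that do not
        // have the alias, with an empty alias list; skip those.
        for (ObjectObjectCursor<String, List<AliasMetaData>> entry : response.getAliases()) {
            if (entry.value != null && !entry.value.isEmpty()) {
                indices.add(entry.key);
            }
        }
        return indices;
    }
}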
In the longer term, I think it makes sense to migrate to the Java High Level REST Client, since Elastic plans to deprecate the TransportClient in version 7.0:
We plan on deprecating the TransportClient in Elasticsearch 7.0 and removing it completely in 8.0. Instead, you should be using the Java High Level REST Client, which executes HTTP requests rather than serialized Java requests.
In general, Elasticsearch breaks compatibility quite often, so it's better to stay away from its dark corners, like the Java API.
Thanks for reading.
Hope that helps!

Django haystack narrow with OR operator between fields

I do a search. I narrow by field A. I narrow by field B. I get results that include burlap AND sack. What I want is to get results that include burlap OR sack.
sqs = sqs.narrow(fieldA='burlap')
sqs = sqs.narrow(fieldB='sack')
You can do some level of OR narrowing with the following:
sqs = sqs.narrow(fieldA=('burlap' or 'tweed' or 'plastic'))
sqs = sqs.narrow(fieldB='sack')
But you still end up with results with burlap AND sack. An alternative to this method is the following, but it is not ideal since it seems to be slow on large data sets:
sqs = sqs.filter_or(fieldA='burlap')
sqs = sqs.filter_or(fieldB='sack')
Where is Daniel Lindsay when you need him?
YMMV -- the docs (http://django-haystack.readthedocs.org/en/latest/searchqueryset_api.html#narrow) point out that this method is not portable between backends and that the syntax depends on the backend. The example in that section even has a lucene looking "SearchQuerySet().narrow('title:smoothie')" example.
In the source it looks like Haystack pretty trustingly passes whatever you have as your narrow argument to the backend. You didn't say what backend you are using, but maybe something like this would get you the fq you want in Solr:
sqs = sqs.narrow('fieldA:burlap OR fieldB:sack')
Filter_or is a different animal than narrow, at least with Solr. Filter_or will add that clause to the main query, resulting in a different set of results, different scoring, etc. Narrow will create a filter query, which is instead used to filter your original results (shocking, right?) and can be cached, which can help performance if you're going to be using that filter a lot.
D'oh, I typed all that stuff and still don't know where Daniel Lindsay is.

Fulltext index in Neo4j 2.0

Is there a way to
create a fulltext index with a given Lucene analyzer on a certain node type (and certain fields only)
get this index updated automatically when a node of the given type is created / deleted
query this index over the Cypher or the REST API?
I am using the Cypher/REST interface of the server (and of course the shell, etc.), not the embedded version.
If this is not available (which I guess): Is something like this on the roadmap?
Thank you in advance!
Short answer: no
Little bit longer answer:
You can write a KernelExtension adding a TransactionEventHandler that adds the fields to be fulltext indexed to a manual index (aka legacy index).
The code should be wrapped into an unmanaged extension and deployed to the server.
There is something similar implemented within https://github.com/sarmbruster/neo4j-uuid.
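A rough, heavily simplified sketch of such a handler (not the neo4j-uuid code) could look like the following. It assumes a hypothetical Article label whose text property should be indexed and a manual fulltext index named myindex that was created beforehand with a fulltext configuration; deleted and changed nodes would have to be handled via data.deletedNodes() / data.assignedNodeProperties() in the same way.
import org.neo4j.graphdb.DynamicLabel;
import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.event.TransactionData;
import org.neo4j.graphdb.event.TransactionEventHandler;
import org.neo4j.graphdb.index.Index;

// Keeps a manual (legacy) index in sync for nodes labelled "Article".
// Register it from your kernel/unmanaged extension with
// graphDb.registerTransactionEventHandler(new ArticleFulltextIndexHandler(graphDb)).
public class ArticleFulltextIndexHandler extends TransactionEventHandler.Adapter<Void> {

    private static final Label ARTICLE = DynamicLabel.label("Article"); // hypothetical label

    private final GraphDatabaseService graphDb;

    public ArticleFulltextIndexHandler(GraphDatabaseService graphDb) {
        this.graphDb = graphDb;
    }

    @Override
    public Void beforeCommit(TransactionData data) {
        // "myindex" must have been created beforehand with a fulltext
        // configuration (provider=lucene, type=fulltext).
        Index<Node> index = graphDb.index().forNodes("myindex");

        for (Node node : data.createdNodes()) {
            if (node.hasLabel(ARTICLE) && node.hasProperty("text")) {
                index.add(node, "text", node.getProperty("text"));
            }
        }
        return null;
    }
}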
The contents of the legacy index can then be accessed using start n=node:myindex('lucene query string') in Cypher.
