ElasticSearch automatic nested mapping - elasticsearch

I want to be able to insert documents and preferably map all inner objects to nested ones automatically. Is this possible?
My specific use case is that I am collecting documents of the same type that may or may not have the same fields as those currently in the store. So I would prefer it if it could just do the nested mapping automatically, without me having to tell it to do so.
Barring that, could I potentially update the index before I insert an object with new fields? And would it be OK if I just set the type of the nested property to nested without specifying the fields of the property?
Code:
client.IndicesPutMapping("captures", "capture", new
{
    capture = new
    {
        properties = new
        {
            CustomerInformations = new
            {
                type = "nested"
                // ...do not specify inner fields?
            }
        }
    }
});
Are partial mappings allowed when overriding mappings? In other words, if I have the mapping above, will the other properties of the capture objects still be mapped in the default way?

For those still struggling with the issue:
https://github.com/elastic/elasticsearch/issues/20886
The problem has been resolved in v5.
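Independently of that fix, a dynamic template can get close to this behaviour by mapping any new inner object as nested without listing its fields up front. A minimal sketch using the index and type names from the question (the template name objects_as_nested is arbitrary):
PUT /captures
{
  "mappings": {
    "capture": {
      "dynamic_templates": [
        {
          "objects_as_nested": {
            "match_mapping_type": "object",
            "mapping": {
              "type": "nested"
            }
          }
        }
      ]
    }
  }
}
Any previously unseen object field (CustomerInformations included) is then dynamically mapped as nested, and its inner fields are still picked up by the normal dynamic mapping rules.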

Related

Documents with new field added before mapping update not queryable via new field

I have an index to which, for one reason or another, we've added fields that don't exist in our mapping. For example:
{
  "name": "Bob", // exists in mapping
  "age": 12      // doesn't exist in mapping
}
After updating the mapping to add the age field, any document we add the age field to is queryable, but none of the documents that had age added before we updated the mapping are queryable.
Is there a way to tell Elastic to make those older documents queryable too, not just the ones created or updated after the mapping update?
This implies that you must have dynamic: false in your mapping, i.e. whenever you send a new field, you prevent ES from creating it automatically.
Once you have updated your mapping, you can then simply call _update_by_query on your index in order to update it and have it reindex the data it contains with the new mappings.
Your queries will then work also on the "older" data.
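A minimal sketch of that call, assuming the index is named my_index (an empty-bodied _update_by_query reindexes every document in place, so each one picks up the new mapping):
POST /my_index/_update_by_query?conflicts=proceed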

hotchocolate throws error when using UseFiltering() on a field

I have a pretty simple setup where I'm putting GraphQL over an Entity Framework data context (SQL Server).
I'm trying to get filtering to work. I've tried adding .UseFiltering() to a field descriptor like so...
descriptor.Field(t => t.AccountName).Type<NonNullType<StringType>>().UseFiltering();
But it causes this error on startup...
HotChocolate.SchemaException: 'Unable to infer or resolve a schema
type from the type reference Input: System.Char.'
I assume I'm doing something wrong somewhere...
"UseFiltering" is supposed to be used to filter data which represents a collection of items in some way (IQueryable, IEnumerable, etc).
For instance, if you have users collection and each user has AccountName property you could filter that collection by AccountName:
[ExtendObjectType(Name = "Query")]
public class UserQuery
{
    [UseFiltering]
    public IQueryable<User> GetUsers([Service] IUsersRepository usersRepo)
    {
        // IUsersRepository stands in for whatever injected service exposes the users;
        // filtering is applied on top of the returned IQueryable.
        return usersRepo.GetUsersQueryable();
    }
}
In that example, the HotChocolate filtering implementation generates a number of filters from the user fields, which you can use in the following way:
users(where: {AND: [{accountName_starts_with: "Tech"}, {accountName_not_ends_with: "Test"}]})
In your example, the system thinks that AccountName is a collection, so it tries to build filters over the chars that the string consists of, hence the System.Char error.
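Given that, the fix is to move UseFiltering onto a field that resolves a collection rather than the scalar AccountName. A sketch, assuming a Users collection property and a UserType object type (both names are placeholders):
descriptor.Field(t => t.Users)
    .Type<NonNullType<ListType<NonNullType<UserType>>>>()
    .UseFiltering();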

Add typed additional attributes to an existing document - elasticsearch

I added a field to the document:
POST /erection/shop/1/_update
{
  "doc": {
    "my_field": ""
  }
}
The new field is assigned the type "string". How can I create a new field with the type "boolean"/"integer"?
And a second question: is it possible to add one field to all documents using one query (without updating each document individually)?
1) Explicitly define a mapping prior to the first update you do.
2) No, you can't. You can do it in your application using "scan" and then "bulk update".
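A minimal sketch of point 1, using the index and type from the question (the field names here are placeholders):
PUT /erection/_mapping/shop
{
  "properties": {
    "is_active":  { "type": "boolean" },
    "item_count": { "type": "integer" }
  }
}
Once the mapping exists, the first document that carries those fields is indexed with the declared types instead of the dynamically guessed ones.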

Using NODE_DELETE without refetching data

Using a NODE_DELETE requires the parent, and requires actually returning the parent of the connection, otherwise you get:
Relay Error when deleting: RelayMutationQuery: Invalid field name on fat query
Unfortunately, using this refetches ALL my nested items, which is simply unacceptable for my use case.
fragment on deleteItemNested @relay(pattern: true) {
  id
  ok
  item {
    nested {
      edges {
        node { id }
      }
    }
  }
  clientMutationId
}
Is there a way to delete an item from a connection/list without refetching all the data? Trying not to fetch the edges in nested results in nested being just an empty object.
All the nested items are refetched because @relay(pattern: true) was used in the query. This makes the query match against the tracked query, which already includes the nested fields. See the excellent answer by steveluscher to the question Purpose of @relay(pattern:true).
The NODE_DELETE code example in the mutation documentation is worth taking a look at.
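A sketch of what the NODE_DELETE mutation config could look like here (Relay Classic API; payload field names such as deletedId are assumptions):
getConfigs() {
  return [{
    type: 'NODE_DELETE',
    parentName: 'item',              // payload field holding the parent node
    parentID: this.props.item.id,    // ID of the parent
    connectionName: 'nested',        // connection on the parent to remove from
    deletedIDFieldName: 'deletedId', // payload field with the deleted node's ID
  }];
}
With a config like this, the fat query only needs the parent and the deleted ID, so the nested edges are not refetched.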

how to modify the type mapping in elasticsearch to another type

The thing is that I already defined a field "myvalue" as INTEGER. Now I think that was a mistake and I want to store a string in the same field, so I want to change it to STRING without losing data. Is there any way of doing this, or do I need to re-create the index and re-index the whole data set?
I already tried running:
{
  "mappings": {
    "myvalue": {
      "type": "string"
    }
  }
}
But if I get the mapping again from the server, it still appears as integer.
There is no way to change the mapping of a core field type for existing data. You will need to re-create the index with the myvalue field defined as a string and re-index your data.
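A minimal sketch of that cycle (index and type names are placeholders; the _reindex API is available from ES 2.3 on, while older versions need scan/scroll plus bulk indexing instead):
PUT /myindex_v2
{
  "mappings": {
    "mytype": {
      "properties": {
        "myvalue": { "type": "string" }
      }
    }
  }
}

POST /_reindex
{
  "source": { "index": "myindex" },
  "dest":   { "index": "myindex_v2" }
}
After the reindex completes, point your application (or an index alias) at myindex_v2.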
