I'm trying to create an enum field from the Admin Console, but I can't get it right.
I went through Elasticsearch's documentation, but I don't really understand everything.
"ville": {
"type": "enum",
"typeOptions": {
"values": ["Montpelier", "Paris", "Lmoges", "Grenoble", "Bordeaux", "Rodez"],
"mandatory": "true"
}
}
}
Can someone guide me, please?
Elasticsearch does not handle enum type.
As far as I understand, you are trying to use the Data Validation module. You cannot update a collection's specifications (aka "validations") with the Admin Console; you will need to use the collection:updateSpecifications API action instead.
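As a sketch (assuming a Kuzzle-style HTTP API; the index and collection names below are placeholders), the call would look something like this. Note that, as far as I can tell, mandatory is a boolean field option rather than a typeOptions entry, and it should be an actual boolean, not the string "true":

```json
PUT /myindex/mycollection/_specifications
{
  "strict": false,
  "fields": {
    "ville": {
      "type": "enum",
      "mandatory": true,
      "typeOptions": {
        "values": ["Montpellier", "Paris", "Limoges", "Grenoble", "Bordeaux", "Rodez"]
      }
    }
  }
}
```

Check the collection:updateSpecifications page of your Kuzzle version's API reference for the exact route and payload shape.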
I am trying to query an array of ids with GraphQL. The query works with a single id as a variable, but it doesn't work when I pass an array of ids.
Here is my gql query with variables:
query GetAuthorContent($id: [ID]!, $idType: AuthorIdType) {
expert(id: $id, idType: $idType) {
excerpt
featuredImage {
node {
description
author {
node {
description
}
}
}
}
slug
}
}
{"id": ["author-1", "author-2", "author-3"], "idType": "SLUG" }
You can look at the definition of the GraphQL endpoint using a client and see whether the query argument accepts an array.
If it does, check the query's signature and pass the variable accordingly. In this case I think the service does not support querying with an array.
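If it helps, you can inspect the argument types with a standard introspection query; the fields below are GraphQL introspection built-ins, so this works against any compliant endpoint. An argument whose type has kind LIST accepts an array:

```graphql
{
  __schema {
    queryType {
      fields {
        name
        args {
          name
          type {
            kind
            name
            ofType { kind name }
          }
        }
      }
    }
  }
}
```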
Hi everyone and thank you for your help.
You were right: my schema doesn't accept an array of ids on the singular author field, but it does work with the plural one. That's just the way my schema works.
Hope it can help someone in the same situation.
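For anyone hitting the same thing, the working plural form looked roughly like this (a sketch only; the plural field name and its argument are assumptions, so check your own schema for the real names):

```graphql
# hypothetical plural field; verify the name and argument in your schema
query GetAuthorsContent($ids: [ID!]) {
  experts(ids: $ids) {
    slug
    excerpt
  }
}
```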
I am trying to set all mapped fields to string, i.e. if a JSON message comes in as follows:
{
"logDate": "2012-04-23T18:25:43.511Z",
"logId": 123131,
"message": {
"username": "pera",
"password": "pera123"
}
}
I need every value to be logged as a string, i.e. logId should be indexed as "logId": "131231".
Is there a way to tell Fluent Bit which index mapping to use, or maybe there is another setting that changes the dynamic type to string?
You could try adding an index template.
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-templates.html
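For example, a dynamic template that maps every dynamically-detected field to keyword might look like this (a sketch; the template name and index pattern are placeholders for your setup):

```json
PUT _index_template/strings-only
{
  "index_patterns": ["fluentbit-*"],
  "template": {
    "mappings": {
      "dynamic_templates": [
        {
          "everything_as_keyword": {
            "match_mapping_type": "*",
            "mapping": { "type": "keyword" }
          }
        }
      ]
    }
  }
}
```

The "match_mapping_type": "*" clause applies the mapping regardless of the JSON type Elasticsearch detects, so numbers like logId end up indexed as strings.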
I am saving logs to Elasticsearch for analysis, but I found there are mixed types for a particular field, which causes errors when indexing documents.
For example, I may save the log below to the index, where uuid is an object:
POST /index-000001/_doc
{
"uuid": {"S": "001"}
}
but from another event, the log would be:
POST /index-000001/_doc
{
"uuid": "001"
}
The second POST fails because uuid was not mapped as an object, so I get this error: object mapping for [uuid] tried to parse field [uuid] as object, but found a concrete value.
I wonder what the best solution for this is. I can't change the logs because they come from different applications: the first log contains data from DynamoDB, while the second one contains data from the application. How can I save both types of logs into ES?
If I disable dynamic mapping, I will have to specify all fields in the index mapping, and I won't be able to search any new fields, so I do need dynamic mapping.
There will be many cases like this, so I am looking for a solution that covers all conflicting fields.
It's perfectly possible using ingest pipelines, which run before the indexing process.
The following would be a solution for your particular use case, albeit a somewhat onerous one:
create a pipeline
PUT _ingest/pipeline/uuid_normalize
{
"description" : "Makes sure uuid is a hash map",
"processors" : [
{
"script": {
"source": """
if (ctx.uuid != null && !(ctx.uuid instanceof java.util.HashMap)) {
ctx.uuid = ['S': ctx.uuid]; // wrap the plain value so it matches the object form
}
"""
}
}
]
}
run the pipeline when ingesting a new doc
POST /index-000001/_doc
{
"uuid": {"S": "001"}
}
POST /index-000001/_doc?pipeline=uuid_normalize <------
{
"uuid": "001"
}
You could now extend this to be as generic as you like, but it assumes you know what to expect in each and every doc. In other words, unlike dynamic templates, you need to know what you want to safeguard against.
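For instance, a more generic variant could loop over a known list of problem fields (a sketch; the field list here is hypothetical and should be replaced with the conflicting fields you actually expect):

```json
PUT _ingest/pipeline/normalize_conflicting_fields
{
  "description": "Wraps known conflicting fields into their object form",
  "processors": [
    {
      "script": {
        "source": """
        // hypothetical list of fields that sometimes arrive as plain values
        def fields = ['uuid', 'requestId'];
        for (f in fields) {
          if (ctx[f] != null && !(ctx[f] instanceof java.util.HashMap)) {
            ctx[f] = ['S': ctx[f]];
          }
        }
        """
      }
    }
  ]
}
```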
You can read more about Painless script operators in the Painless documentation.
You just cannot.
You should either normalize all your fields one way or another, or use two separate fields.
I can suggest using a field like this:
"uuid": {"key": "S", "value": "001"}
and skipping the key when it's not necessary.
But you will have to preprocess your values before ingestion.
I am using https://github.com/babenkoivan/scout-elasticsearch-driver to implement Elasticsearch with Laravel Scout. Ivan mentions this on GitHub:
Indices created in Elasticsearch 6.0.0 or later may only contain a single mapping type. Indices created in 5.x with multiple mapping types will continue to function as before in Elasticsearch 6.x. Mapping types will be completely removed in Elasticsearch 7.0.0.
If I understood https://www.elastic.co/guide/en/elasticsearch/reference/master/removal-of-types.html right, I either need to use:
1)
PUT index?include_type_name=true
or, better:
2)
PUT index/_doc/1
{
"foo": "baz"
}
I am stuck since I have no idea how to use either 1) or 2)
How can I add the parameter include_type_name=true?
How can I create the right mapping without using the include_type_name parameter?
class TestIndexConfigurator extends IndexConfigurator
{
use Migratable;
/**
* @var array
*/
protected $settings = [
];
protected $name = 'test';
}
Earlier versions of Elasticsearch (<= 5) supported multiple types per index, meaning you could have a different data mapping for each type. With Elasticsearch 6, this was removed and you can only have a single mapping type.
Therefore, for Elasticsearch 7 (the latest release), you can add an index, set up mappings, and add documents like this:
Create an index
PUT user
Add mapping
PUT user/_mapping
{
"properties": {
"name": {
"type": "keyword"
},
"loginCount": {
"type": "long"
}
}
}
Add document(s)
PUT user/_doc/1
{
"name": "John",
"loginCount": 4
}
Check data in the index
GET user/_search
Now, regarding the scout-elasticsearch-driver you use: after reading the documentation you mentioned, it is simply saying that you need to create a separate index configurator for each searchable model, as multiple models cannot be stored in the same index.
So to create the index, run
php artisan make:index-configurator MyIndexConfigurator
and then
php artisan elastic:create-index App\\MyIndexConfigurator
which will create the index in Elasticsearch for you.
To learn more about Elasticsearch, I suggest you install both Elasticsearch and Kibana on your development machine and then play around with them in Kibana; the interface is quite nice and supports autocomplete to ease the learning curve.
When I tried GET product/default/_mapping in the Kibana console,
I kept getting this error:
"Types cannot be provided in get mapping requests, unless
include_type_name is set to true"
This is happening in Elasticsearch 7.3.0.
It looks like the above command is no longer supported in the latest versions of Elasticsearch.
It worked for me when I removed default from the command:
GET product/_mapping
I was getting the same error, "Types cannot be provided in put mapping requests, unless the include_type_name parameter is set to true".
You have to add include_type_name: true to the request object. The code below fixes the problem:
return await esClient.indices.putMapping({
    index: indexName,
    type: mappingType,
    body: mapping,
    include_type_name: true
});
PUT busloggw4/_doc/_mapping?include_type_name=true
{
"properties": {
"log_flag": {
"type":"long"
}
}
}
I'm trying to implement a HATEOAS Rest Client using Spring Boot.
Right now, I'm stuck at a point where I need to convert a HATEOAS link into an actual API URI.
If I post a new object of type Customer like:
{
"name": "Frank",
"address": "http://localhost:8080/address/23"
}
And then when I retrieve it with a request to http://localhost:8080/api/customer/1, HATEOAS gives me something like:
{
"name": Frank,
"_links": {
"address": {
"href": "http://localhost:8080/api/customer/1/address"
}
}
}
Is it possible to convert a link of the form http://localhost:8080/api/customer/1/address into an API call like http://localhost:8080/api/address/23?
If you look at what HATEOAS returns after you do
GET: http://localhost:8080/api/customer/1
you get:
{
"name": Frank,
"_links": {
"address": {
"href": "http://localhost:8080/api/customer/1/address"
}
}
}
According to Understanding HATEOAS,
It's possible to build more complex relationships. With HATEOAS, the output makes it
easy to glean how to interact with the service without looking up a specification or
other external document
which means:
after you have received the resource details with
http://localhost:8080/api/customer/1
the other operations possible on that resource are shown, giving easier, click-through access to your service/application.
Here, HATEOAS exposes the link http://localhost:8080/api/customer/1/address, which becomes accessible once you have customer/1; from there, without going anywhere else, customer/1's address can be found via /customer/1/address.
Similarly, if /customer/1 had occupation details, there would be another link below the address link, http://localhost:8080/api/customer/1/occupation.
So if an address is dependent on a customer, i.e. there can be no address without a customer, then your API endpoint has to be /api/customer/1/address and not /api/address/23 directly.
However, if after understanding the standards and logic behind such HATEOAS responses you still want to expose your own links, even though they may not align with that logic, you can use
the Link object provided by Spring HATEOAS's LinkBuilder interface.
Example:
With an object of type Customer:
Customer customer = new Customer( /* parameters */ );
// requires a static import of linkTo (e.g. WebMvcLinkBuilder.linkTo);
// here 23 is addressId, so the resulting link is http://localhost:8080/api/address/23
Link link = linkTo(AnyController.class).slash("address").slash(addressId);
customer.add(link);
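Applied to the snippet above, the serialized customer response would then include a link like this (a sketch; the exact rel name depends on the relation you attach to the link):

```json
{
  "name": "Frank",
  "_links": {
    "address": {
      "href": "http://localhost:8080/api/address/23"
    }
  }
}
```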
You can also create a list of Links, add as many such links to it as you need, and then add that list to your object.
Hope this helps!