Unable to update mapping in Elasticsearch - bash

I have a shell script that creates a mapping for one of my document types in Elasticsearch.
My index is bits and my document type is nts. I am trying to assign the type long to three JSON keys in documents of type nts,
namely NT, XT and YT.
#!/bin/bash
curl -XPUT 'http://localhost:9200/bits/nts/_mapping' -d '
{
  "events" : {
    "dynamic" : "strict",
    "properties" : {
      "NT" : {
        type : "long"
      },
      "XT" : {
        type : "long"
      },
      "YT" : {
        type : "long"
      }
    }
  },
}'
If I run the above bash script, I get the following error.
{"error":"ElasticsearchParseException[Failed to parse content to map]; nested: JsonParseException[Unexpected character ('}' (code 125)): was expecting either valid name character (for unquoted name) or double-quote (for quoted) to start field name\n at [Source: org.elasticsearch.common.compress.lzf.LZFCompressedStreamInput#6d7702cc; line: 17, column: 6]]; ","status":400}

Remove the last comma so the code looks like this:
curl -XPUT 'http://localhost:9200/bits/nts/_mapping' -d '
{
  "events" : {
    "dynamic" : "strict",
    "properties" : {
      "NT" : {
        type : "long"
      },
      "XT" : {
        type : "long"
      },
      "YT" : {
        type : "long"
      }
    }
  }
}'
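To double-check that the mapping went through, you can read it back (a quick sanity check; the exact response shape varies across Elasticsearch versions):
curl -XGET 'http://localhost:9200/bits/_mapping?pretty'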

Related

Is it possible to combine _msearch and a suggester in Elasticsearch?

I'm trying to combine the multi search API with a term suggester.
When I use a suggester on the search endpoint, it works:
POST my-index-000001/_search
{
  "query" : {
    "match": {
      "message": "tring out Elasticsearch"
    }
  },
  "suggest" : {
    "my-suggestion" : {
      "text" : "tring out Elasticsearch",
      "term" : {
        "field" : "message"
      }
    }
  }
}
Here's my attempt to do the same with multi search:
GET my-index-000001/_msearch
{ }
{
  "query" : {
    "match": {
      "message": "tring out Elasticsearch"
    }
  },
  "suggest" : {
    "my-suggestion" : {
      "text" : "tring out Elasticsearch",
      "term" : {
        "field" : "message"
      }
    }
  }
}
{"index": "my-index-000002"}
{"query" : {"match_all" : {}}}
As a result, I get an error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "json_e_o_f_exception",
        "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 1])\n at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 2]"
      }
    ],
    "type" : "json_e_o_f_exception",
    "reason" : "Unexpected end-of-input: expected close marker for Object (start marker at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 1])\n at [Source: (org.elasticsearch.common.bytes.AbstractBytesReference$MarkSupportingStreamInputWrapper); line: 1, column: 2]"
  },
  "status" : 400
}
So my question is, is it even possible?
I found a similarly named question (Using Nest Phrase Suggester on MultiSearch query), but the question itself appears to be about multi-match queries, not multi search.
The version I'm using is 7.13.
Your queries are syntactically correct, but the Multi Search API (_msearch) only accepts newline-delimited JSON (NDJSON) payloads.
As such, line breaks (\n) may only appear as separators between a payload header and its body:
header\n
body\n
header\n
body\n
...
So, adjust your line breaks to conform to the required structure:
GET my-index-000001/_msearch
{}
{"query":{"match":{"message":"tring out Elasticsearch"}},"suggest":{"my-suggestion":{"text":"tring out Elasticsearch","term":{"field":"message"}}}}
{"index":"my-index-000002"}
{"query":{"match_all":{}}}
Tip: if you're using Kibana (and it looks like you are), you can toggle between the expanded and the _msearch-valid mode with ⌘+i / ctrl+i.
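Outside Kibana the same rule applies; you just have to ship the NDJSON yourself. A minimal curl sketch, assuming a local node on port 9200 and a hypothetical requests.ndjson file containing exactly the four lines above (the Content-Type header matters, and the file must end with a newline):
curl -XGET 'http://localhost:9200/my-index-000001/_msearch' -H 'Content-Type: application/x-ndjson' --data-binary @requests.ndjson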

Getting unknown setting [index._id] error while adding data to Elasticsearch

I have created a mapping eventlog in Elasticsearch 5.1.1 and it was added successfully. However, while adding data under it, I get an illegal_argument_exception with the reason unknown setting [index._id]. Getting the indices returns yellow open eventlog sX9BYIcOQLSKoJQcbn1uxg 5 1 0 0 795b 795b
My mapping is:
{
  "mappings" : {
    "_default_" : {
      "properties" : {
        "datetime" : { "type": "date" },
        "ip" : { "type": "ip" },
        "country" : { "type" : "keyword" },
        "state" : { "type" : "keyword" },
        "city" : { "type" : "keyword" }
      }
    }
  }
}
and I am adding the data using
curl -u elastic:changeme -XPUT 'http://localhost:8200/eventlog' -d '{"index":{"_id":1}}
{"datetime":"2016-03-31T12:10:11Z","ip":"100.40.135.29","country":"US","state":"NY","city":"Highland"}';
If I don't include the {"index":{"_id":1}} line, I get Illegal_argument_exception with reason unknown setting [index.apiKey].
The problem arose from sending the data from the command line as a string. Keeping the data in a JSON file and sending it as binary solved it. The correct command is:
curl -u elastic:changeme -XPUT 'http://localhost:8200/eventlog/_bulk?pretty' --data-binary @eventlogs.json
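For reference, eventlogs.json here would contain newline-delimited action/source pairs built from the data in the question (the bulk API also requires the file to end with a newline):
{"index":{"_id":1}}
{"datetime":"2016-03-31T12:10:11Z","ip":"100.40.135.29","country":"US","state":"NY","city":"Highland"}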

Elasticsearch field mapping takes effect across different types in the same index

The official guide told me that "Every type has its own mapping, or schema definition".
But in fact, a mapping can affect other types within the same index. Here is the situation:
Mapping definition:
[root@localhost agent]# curl localhost:9200/agent*/_mapping?pretty
{
  "agent_data" : {
    "mappings" : {
      "host" : {
        "_all" : {
          "enabled" : false
        },
        "properties" : {
          "ip" : {
            "type" : "ip"
          },
          "node" : {
            "type" : "string",
            "index" : "not_analyzed"
          }
        }
      },
      "vul" : {
        "_all" : {
          "enabled" : false
        }
      }
    }
  }
}
and then I index a record:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1.1.1.1"}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "mapper_parsing_exception",
      "reason" : "failed to parse [ip]"
    } ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse [ip]",
    "caused_by" : {
      "type" : "number_format_exception",
      "reason" : "For input string: \"1.1.1.1\""
    }
  },
  "status" : 400
}
It seems that it tries to parse the ip as a number, so I put a number in this field:
[root@localhost agent]# curl -XPOST 'http://localhost:9200/agent_data/vul?pretty' -d '{"ip": "1123"}'
{
  "error" : {
    "root_cause" : [ {
      "type" : "remote_transport_exception",
      "reason" : "[Argus][127.0.0.1:9300][indices:data/write/index[p]]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "mapper [ip] cannot be changed from type [ip] to [long]"
  },
  "status" : 400
}
This problem goes away if I explicitly define the ip field of the vul type with the ip field type.
I don't quite understand the behavior above. Am I missing something?
Thanks in advance.
The statement
Every type has its own mapping, or schema definition
is true, but it is not the whole story: fields shared between types in the same index can conflict. From the Elasticsearch documentation:
Mapping - field conflicts
Mapping types are used to group fields, but the fields in each mapping type are not independent of each other. Fields with:
the same name
in the same index
in different mapping types
map to the same field internally, and must have the same mapping. If a title field exists in both the user and blogpost mapping types, the title fields must have exactly the same mapping in each type. The only exceptions to this rule are the copy_to, dynamic, enabled, ignore_above, include_in_all, and properties parameters, which may have different settings per field.
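Given that rule, the fix the asker describes makes sense: declare ip with the identical field type in the vul type too, so the shared internal field stays consistent. A sketch against the 2.x mapping API:
curl -XPUT 'http://localhost:9200/agent_data/_mapping/vul' -d '{
  "vul" : {
    "properties" : {
      "ip" : { "type" : "ip" }
    }
  }
}'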

Elasticsearch JDBC importer not importing entry correctly

Having the following mapping:
curl -XPUT 'localhost:9200/borrador' -d '{
  "mappings": {
    "item": {
      "dynamic": "strict",
      "properties" : {
        "body" : { "type": "string" },
        "source_id" : { "type": "integer" }
      }
    }
  }
}'
I'm trying to import my DB to Elasticsearch using the Elasticsearch-JDBC importer.
This is the script I'm using:
#!/bin/sh
bin=/usr/share/elasticsearch/elasticsearch-jdbc-2.1.1.2/bin
lib=/usr/share/elasticsearch/elasticsearch-jdbc-2.1.1.2/lib
echo "Indexing database..."
echo '{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mydbip/mydbname",
    "user" : "username",
    "password" : "pw",
    "sql" : "select source_id, body, id as _id from table_name",
    "index" : "borrador",
    "type" : "item"
  }
}' | java \
  -cp "${lib}/*" \
  -Dlog4j.configurationFile=${bin}/log4j2.xml \
  org.xbib.tools.Runner \
  org.xbib.tools.JDBCImporter
Most of the rows of the table are indexed correctly, but one particular row produces an error and is not indexed.
This is the error that shows up:
[ERROR][org.xbib.elasticsearch.helper.client.BulkTransportClient][elasticsearch[importer][listener][T#1]]
bulk [957] failed with 1 failed items, failure message = failure in bulk execution:
[3499]: index [borrador], type [item], id [14327140], message [MapperParsingException[failed to parse [body]]; nested: IllegalArgumentException[unknown property [records]];]
As you can see, this specific row holds a JSON-formatted string ({"format":"MS Excel","price":"750","records":"577","recordType":"records"}<!-- com -->) instead of the plain string the correctly indexed entries have.
What is happening? I would like to store it as a normal string. Is it a problem with the mapping, so that the value is read as JSON or something? Even if I remove "dynamic": "strict", or the entire mapping, I still get the error. Thanks in advance.
By default, the JDBC importer tries to detect JSON strings in your data and parses them. You need to modify your importer configuration and set detect_json to false:
{
  "type" : "jdbc",
  "jdbc" : {
    "url" : "jdbc:mydbip/mydbname",
    "user" : "username",
    "password" : "pw",
    "sql" : "select source_id, body, id as _id from table_name",
    "index" : "borrador",
    "type" : "item",
    "detect_json": false <--- add this
  }
}
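After re-running the importer you can spot-check the row that previously failed by fetching it by id (14327140, taken from the error message above); body should now come back as one plain string:
curl -XGET 'http://localhost:9200/borrador/item/14327140?pretty'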

How to tell ElasticSearch to create nested fields

I'm putting data into ES and checking the mapping that gets created.
I'm executing this:
curl -XPOST 'http://localhost:9200/testnested2/type1/0' -d '{
  "p1": ["1","2","3","4"],
  "users" : {
    "first" : "John",
    "last" : "Sm11ith"
  }
}'
and this is the schema it created:
{
  "testnested2" : {
    "mappings" : {
      "type1" : {
        "properties" : {
          "p1" : { "type" : "string" },
          "users" : {
            "properties" : {
              "first" : { "type" : "string" },
              "last" : { "type" : "string" }
            }
          }
        }
      }
    }
  }
}
I was wondering whether it's possible to tell ES that users is nested, or whether I have to create the mapping myself.
I would like ES to create a schema like this:
curl -XPOST http://180.5.5.93:9200/testnested3 -d '{
  "settings" : {
    "number_of_shards" : 1
  },
  "mappings" : {
    "type1" : {
      "properties" : {
        "propiedad1" : { "type" : "string" },
        "users" : {
          "type" : "nested",
          "include_in_parent": true,
          "properties": {
            "first" : { "type": "string" },
            "last" : { "type": "string" }
          }
        }
      }
    }
  }
}'
By default, the dynamic mapping feature of Elasticsearch maps users as an object rather than nested.
If you want to change this behavior, you have to explicitly define a users attribute as nested, either in:
the type1 mapping, or
the default mapping of your index. This way, for any type created, the users attribute will automatically be set to nested (see the Elasticsearch documentation on default mappings; a sketch follows below).
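As a sketch of the second option (assuming the 1.x/2.x versions the question appears to target, given the string types and include_in_parent, where _default_ mappings are supported), a _default_ mapping set at index-creation time makes users nested in every type created afterwards:
curl -XPOST 'http://localhost:9200/testnested3' -d '{
  "mappings" : {
    "_default_" : {
      "properties" : {
        "users" : {
          "type" : "nested",
          "include_in_parent" : true
        }
      }
    }
  }
}'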
