Changing documents' timestamp value in Elasticsearch for old documents - elasticsearch

We have an Elasticsearch cluster that lost some data.
We need to import the data again via Filebeat, but the problem is that we should ship the documents to Elasticsearch with their original dates. For example, last week's logs should be shipped to ES with last week's date, not today's.
Because of that, we plan to send the logs to ES with Filebeat and then change the timestamp to resolve this issue.
I'm trying to use the Elasticsearch API to change the timestamp field of all documents in an index:
```
DATE=`date -d '24 hours ago' "+%Y-%m-%dT%H:00:00.000Z"`
curl -XPOST -H 'Content-Type: application/json' "http://192.168.112.27:9200/core_1/_update_by_query?conflicts=proceed&pretty" -d @- <<EOS
{
  "query": {"match_all": {}},
  "script": {"inline": "ctx._source["@timestamp"] = ["$DATE"];"}
}
EOS
```
and I'm getting this error:
```
{
  "error" : {
    "root_cause" : [
      {
        "type" : "json_parse_exception",
        "reason" : "Unexpected character ('@' (code 64)): was expecting comma to separate Object entries\n at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 67]"
      }
    ],
    "type" : "json_parse_exception",
    "reason" : "Unexpected character ('@' (code 64)): was expecting comma to separate Object entries\n at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 67]"
  },
  "status" : 400
}
```
Do you have any idea?
I also tested this API via the Kibana Dev Tools console, without curl, but got the same error.
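The parse error comes from the unescaped double quotes around @timestamp: they terminate the JSON string early, so the parser hits the @ character. A sketch of a corrected body, assuming GNU date, using single quotes inside the Painless script (note that, as in the original script, this sets every matching document to the same timestamp):

```shell
# Build the request body with single quotes inside the Painless script so the
# surrounding JSON string stays intact; the stray space in "conflicts= proceed"
# is also removed. GNU date assumed.
DATE=$(date -u -d '24 hours ago' "+%Y-%m-%dT%H:00:00.000Z")
BODY=$(cat <<EOS
{
  "query": {"match_all": {}},
  "script": {"source": "ctx._source['@timestamp'] = '$DATE';"}
}
EOS
)
echo "$BODY"
# curl -XPOST -H 'Content-Type: application/json' \
#   "http://192.168.112.27:9200/core_1/_update_by_query?conflicts=proceed&pretty" \
#   -d "$BODY"
```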

Related

A mapper_parsing_exception occurred when using the bulk API of Elasticsearch

Elasticsearch version: 8.3.3
Indexing was performed using the following Elasticsearch API.
curl -X POST "localhost:9200/bulk_meta/_doc/_bulk?pretty" -H 'Content-Type: application/json' -d'
{"index": { "_id": "1"}}
{"mydoc": "index action, id 1 "}
{"index": {}}
{"mydoc": "index action, id 2"}
'
In this case, the following error occurred.
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "failed to parse"
}
],
"type" : "mapper_parsing_exception",
"reason" : "failed to parse",
"caused_by" : {
"type" : "illegal_argument_exception",
"reason" : "Malformed content, found extra data after parsing: START_OBJECT"
}
},
"status" : 400
}
I've seen posts asking to add \n, but that didn't help.
You need to remove _doc from the request.
curl -X POST "localhost:9200/bulk_meta/_bulk?pretty" -H 'Content-Type: application/json' -d'
{"index":{"_id":"1"}}
{"mydoc":"index action, id 1 "}
{"index":{}}
{"mydoc":"index action, id 2"}
'
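For reference, a way to build and sanity-check the bulk body locally, without a cluster (file name hypothetical; the curl call is commented out): each action/source pair sits on its own line, and the file must end with a newline.

```shell
# One action line followed by one source line per document; printf '%s\n'
# terminates every line, including the last, with a newline.
printf '%s\n' \
  '{"index":{"_id":"1"}}' \
  '{"mydoc":"index action, id 1 "}' \
  '{"index":{}}' \
  '{"mydoc":"index action, id 2"}' > bulk.ndjson
wc -l bulk.ndjson
# curl -X POST "localhost:9200/bulk_meta/_bulk?pretty" \
#   -H 'Content-Type: application/x-ndjson' --data-binary @bulk.ndjson
```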

Elastic Search perform calculation on one document

I need to perform a calculation on a field of a specific document. As an example, I need to sum 50 to a price. I have tried the following options:
curl -X POST "localhost:9200/ex1/ex2/WPatZHgBEd7rI-6ZwNFC/_update?pretty" -H 'Content-Type: application/json' -d'{"doc": {"price": +50}}'
In this case it sets the price to 50. And if I try this:
curl -X POST "localhost:9200/ex1/ex2/WPatZHgBEd7rI-6ZwNFC/_update?pretty" -H 'Content-Type: application/json' -d'{"doc": {"price": "price"+50}}'
it gives the following error:
{
"error" : {
"root_cause" : [
{
"type" : "json_parse_exception",
"reason" : "Unexpected character ('-' (code 45)): was expecting comma to separate Object entries\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@4c7cecda; line: 1, column: 29]"
}
],
"type" : "json_parse_exception",
"reason" : "Unexpected character ('-' (code 45)): was expecting comma to separate Object entries\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput@4c7cecda; line: 1, column: 29]"
},
"status" : 500
}
Use a script to increment a doc's attribute:
POST localhost:9200/ex1/ex2/WPatZHgBEd7rI-6ZwNFC/_update?pretty
{
"script": {
"source": "ctx._source.price += params.increment_by",
"params": {
"increment_by": 50
}
}
}
With cURL:
curl -XPOST "http://localhost:9200/ex1/ex2/WPatZHgBEd7rI-6ZwNFC/_update?pretty" -H 'Content-Type: application/json' -d'{ "script": { "source": "ctx._source.price += params.increment_by", "params": { "increment_by": 50 } }}'
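Passing the value via params rather than concatenating it into the script string also lets Elasticsearch cache the compiled script. A quick local check that the body is valid JSON before sending it, assuming python3 is available:

```shell
# Same update body as above; piping it through a JSON pretty-printer is a cheap
# way to catch quoting mistakes before the request hits the cluster.
BODY='{
  "script": {
    "source": "ctx._source.price += params.increment_by",
    "params": { "increment_by": 50 }
  }
}'
echo "$BODY" | python3 -m json.tool
# curl -XPOST "http://localhost:9200/ex1/ex2/WPatZHgBEd7rI-6ZwNFC/_update?pretty" \
#   -H 'Content-Type: application/json' -d "$BODY"
```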

illegal bulk import with ElasticSearch

I'm trying to bulk import JSON file into my ES index using cURL.
I run
curl -u username -k -H 'Content-Type: application/x-ndjson' -XPOST 'https://elasticsearch.website.me/search_data/company_services/_bulk?pretty' --data-binary @services.json
and it returns
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [START_ARRAY]"
}
],
"type" : "illegal_argument_exception",
"reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [START_ARRAY]"
},
"status" : 400
}
The structure of my json is
{ "services": [
{ "id":1},
{"id":2},
...]
}
Not sure why this error is being thrown.
Have a closer read of the Bulk API documentation for what the contents of your data file should be. It's worth noting that each line of the file should be a standalone JSON object; the file shouldn't have a typical .json structure, which is what you have at present.
Your file might look something like the following:
{"index": {"_id":"1"}}
{"field1":"foo1", "field2":"bar1",...}
{"index": {"_id": "2"}}
{"field1":"foo2", "field2":"bar2",...}
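One way to get from the original array-style file to that format is a small conversion step; a sketch assuming Python 3 is available (file names illustrative, curl call commented out):

```shell
# A tiny sample in the array-style layout from the question.
cat > services.json <<'EOF'
{ "services": [ {"id": 1}, {"id": 2} ] }
EOF
# Rewrite it as bulk NDJSON: one action line plus one source line per document.
python3 - <<'PY'
import json
with open("services.json") as f:
    docs = json.load(f)["services"]
with open("services.ndjson", "w") as out:
    for d in docs:
        out.write(json.dumps({"index": {"_id": str(d["id"])}}) + "\n")
        out.write(json.dumps(d) + "\n")
PY
cat services.ndjson
# curl -u username -k -H 'Content-Type: application/x-ndjson' \
#   -XPOST 'https://elasticsearch.website.me/search_data/company_services/_bulk?pretty' \
#   --data-binary @services.ndjson
```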

Using Curl to put data into ES and got Unexpected character ('n' (code 110))

I'm using Curl to put data into ES. I have already created a customer index.
The following command is from ES document.
curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d'
{
"name": "John Doe"
}
'
When I do this, I get an error.
{
"error" : {
"root_cause" : [
{
"type" : "mapper_parsing_exception",
"reason" : "failed to parse"
}
],
"type" : "mapper_parsing_exception",
"reason" : "failed to parse",
"caused_by" : {
"type" : "json_parse_exception",
"reason" : "Unexpected character ('n' (code 110)): was expecting double-quote to start field name\n at [Source: org.elasticsearch.common.bytes.BytesReference$MarkSupportingStreamInputWrapper@1ec5236e; line: 3, column: 4]"
}
},
"status" : 400
}
I think the line below is the main reason for my error.
reason" : "Unexpected character ('n' (code 110)): was expecting double-quote to start field name
I have a feeling that I need to use a backslash to escape. However, my attempt with \' is not working. Any advice?
I made it work as shown below.
curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d '
{
\"name\": \"John Doe\" <==== I used "backslash" in front of all the "
}
'
Answer without my comment:
curl -X PUT "localhost:9200/customer/_doc/1?pretty" -H 'Content-Type: application/json' -d '
{
\"name\": \"John Doe\"
}
'
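A note on why the backslashes help: in a POSIX shell (bash, zsh) the original single-quoted form from the ES docs works unchanged, so needing `\"` suggests the command was run in a shell such as Windows cmd, where single quotes don't protect the inner double quotes. A quick local validity check, assuming python3 is available:

```shell
# In a POSIX shell the single quotes preserve the double quotes inside the
# body, so no backslash escaping is needed.
BODY='
{
  "name": "John Doe"
}
'
echo "$BODY" | python3 -m json.tool
# curl -X PUT "localhost:9200/customer/_doc/1?pretty" \
#   -H 'Content-Type: application/json' -d "$BODY"
```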

Graph-Aided Search Result filtering example

I've duplicated the Movie database of Neo4j in Elasticsearch, indexed under the index nodes. It has two types, Movie and Person. I am trying to do a simple Result Filtering with Graph-Aided Search using this curl command:
curl -X GET localhost:9200/nodes/_search?pretty -d '{
"query": {
"match_all" : {}
},
"gas-filter": {
"name": "SearchResultCypherfilter",
"query": "MATCH (p:Person)-[ACTED_IN]->(m:Movie) WHERE p.name= 'Parker Posey' RETURN m.uuid as id",
"ShouldExclude": true,
"protocol": "bolt"
}
}'
But as results I get all 171 nodes of both types, Movie and Person, in my index nodes. However, as my query says, I want to return only nodes of type Movie. So basically it doesn't look at the gas-filter part.
Also, when I put false as the value of shouldExclude, I get the same results.
[UPDATE]
I tried the suggestion of @Tezra: I now return only the uuid as id, and I put shouldExclude instead of exclude, but I am still getting the same results.
I am working with:
Elasticsearch 2.3.2
graph-aided-search-2.3.2.0
Neo4j-community 2.3.2.10
graphaware-uuid-2.3.2.37.7
graphaware-server-community-all-2.3.2.37
graphaware-neo4j-to-elasticsearch-2.3.2.37.1
The result that should be returned: the uuid of the movie titled You've Got Mail.
I tried to follow this tutorial for the configuration, and I found out that index.gas.enable had the value false, so I changed it and finished the configuration just like in the tutorial:
mac$ curl -XPUT http://localhost:9200/nodes/_settings?index.gas.neo4j.hostname=http://localhost:7474
{"acknowledged":true}
mac$ curl -XPUT http://localhost:9200/nodes/_settings?index.gas.enable=true
{"acknowledged":true}
mac$ curl -XPUT http://localhost:9200/indexname/_settings?index.gas.neo4j.user=neo4j
{"acknowledged":true}
mac$ curl -XPUT http://localhost:9200/indexname/_settings?index.gas.neo4j.password=mypassword
{"acknowledged":true}
After that I tried to add the boltHostname and bolt.secure settings, but it didn't work and I got this error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Can't update non dynamic settings[[index.gas.neo4j.boltHostname]] for open indices [[nodes]]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "Can't update non dynamic settings[[index.gas.neo4j.boltHostname]] for open indices [[nodes]]"
  },
  "status" : 400
}
So I closed my index to configure it and then opened it again:
mac$ curl -XPOST http://localhost:9200/nodes/_close
{"acknowledged":true}
mac$ curl -XPUT http://localhost:9200/nodes/_settings?index.gas.neo4j.boltHostname=bolt://localhost:7687
{"acknowledged":true}
mac$ curl -XPUT http://localhost:9200/nodes/_settings?index.gas.neo4j.bolt.secure=false
{"acknowledged":true}
mac$ curl -XPOST http://localhost:9200/nodes/_open
{"acknowledged":true}
After finishing the configuration I tried the same gas-filter query again in Postman, and now I am getting this error:
{
"error": {
"root_cause": [
{
"type": "runtime_exception",
"reason": "Failed to parse a search response."
}
],
"type": "runtime_exception",
"reason": "Failed to parse a search response.",
"caused_by": {
"type": "client_handler_exception",
"reason": "java.net.ConnectException: Connection refused (Connection refused)",
"caused_by": {
"type": "connect_exception",
"reason": "Connection refused (Connection refused)"
}
}
},
"status": 500
}
I don't know which connection the error is talking about. I am sure I passed the correct Neo4j password in the configuration. I've even stopped and restarted the Elasticsearch and Neo4j servers, but I still get the same errors.
The settings of my index nodes look like this:
{
"nodes" : {
"settings" : {
"index" : {
"gas" : {
"enable" : "true",
"neo4j" : {
"hostname" : "http://localhost:7474",
"password" : "neo4j.",
"bolt" : {
"secure" : "false"
},
"boltHostname" : "bolt://localhost:7687",
"user" : "neo4j"
}
},
"creation_date" : "1495531307760",
"number_of_shards" : "5",
"number_of_replicas" : "1",
"uuid" : "SdrmQKhXQmyGKHmOh_xhhA",
"version" : {
"created" : "2030299"
}
}
}
}
}
Any ideas?
I figured out that the Connection refused exception I was getting was caused by the Wi-Fi, so I had to disconnect from the internet to make it work. I know it is not a perfect solution, so if anyone finds a better way, please share it here.
