I'm trying to bulk import a JSON file into my ES index using cURL.
I run
curl -u username -k -H 'Content-Type: application/x-ndjson' -XPOST 'https://elasticsearch.website.me/search_data/company_services/_bulk?pretty' --data-binary @services.json
and it returns
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [START_ARRAY]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "Malformed action/metadata line [1], expected START_OBJECT or END_OBJECT but found [START_ARRAY]"
  },
  "status" : 400
}
The structure of my JSON is
{
  "services": [
    { "id": 1 },
    { "id": 2 },
    ...
  ]
}
Not sure why this error is being thrown.
Have a closer read of the Bulk API documentation for what the contents of your data file should be: each line must be a single JSON object, alternating action/metadata lines with document source lines. The file shouldn't have a typical .json structure, which is what you have at present.
Your file might look something like the following:
{"index": {"_id":"1"}}
{"field1":"foo1", "field2":"bar1",...}
{"index": {"_id": "2"}}
{"field1":"foo2", "field2":"bar2",...}
Elasticsearch version: 8.3.3
Indexing was performed with the following Elasticsearch API call.
curl -X POST "localhost:9200/bulk_meta/_doc/_bulk?pretty" -H 'Content-Type: application/json' -d'
{"index": { "_id": "1"}}
{"mydoc": "index action, id 1 "}
{"index": {}}
{"mydoc": "index action, id 2"}
'
In this case, the following error occurred.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "failed to parse"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "failed to parse",
    "caused_by" : {
      "type" : "illegal_argument_exception",
      "reason" : "Malformed content, found extra data after parsing: START_OBJECT"
    }
  },
  "status" : 400
}
I've seen posts suggesting adding a trailing \n, but that didn't help.
You need to remove _doc from the request. With _doc still in the path, Elasticsearch interprets the call as a request to index a single document with the id _bulk, which is why it complains about extra data once it finds a second JSON object in the body.
curl -X POST "localhost:9200/bulk_meta/_bulk?pretty" -H 'Content-Type: application/json' -d'
{"index":{"_id":"1"}}
{"mydoc":"index action, id 1 "}
{"index":{}}
{"mydoc":"index action, id 2"}
'
We have an Elasticsearch cluster that lost some data.
We need to import it again via Filebeat, but the logs must be shipped to Elasticsearch with their original dates; for example, last week's logs should be indexed with last week's date, not today's.
Because of that, we plan to send the logs to ES with Filebeat and then change the timestamps to resolve the issue.
I'm trying to use the Elasticsearch API to change the timestamp of all documents in an index:
```
DATE=`date -d '24 hours ago' "+%Y-%m-%dT%H:00:00.000Z"`
curl -XPOST -H 'Content-Type: application/json' "http://192.168.112.27:9200/core_1/_update_by_query?conflicts=proceed&pretty" -d @- <<EOS
{
  "query": {"match_all": {}},
  "script": {"inline": "ctx._source["@timestamp"] = ["$DATE"];"}
}
EOS
```
and getting this error:
```
{
  "error" : {
    "root_cause" : [
      {
        "type" : "json_parse_exception",
        "reason" : "Unexpected character ('@' (code 64)): was expecting comma to separate Object entries\n at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 67]"
      }
    ],
    "type" : "json_parse_exception",
    "reason" : "Unexpected character ('@' (code 64)): was expecting comma to separate Object entries\n at [Source: (org.elasticsearch.common.io.stream.ByteBufferStreamInput); line: 1, column: 67]"
  },
  "status" : 400
}
```
Do you have any idea?
I also tested this API via the Kibana dev tools, without curl, but got the same error again.
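For what it's worth, the body above stops being valid JSON at the unescaped double quotes around the script, which is exactly where the parser gives up. A minimal corrected sketch, assuming a recent Elasticsearch where the script body is passed as source (older versions used inline), with single quotes inside the Painless script:
```
# Build the target timestamp, then pipe a heredoc body to curl via @-
DATE=$(date -d '24 hours ago' "+%Y-%m-%dT%H:00:00.000Z")
curl -XPOST -H 'Content-Type: application/json' "http://192.168.112.27:9200/core_1/_update_by_query?conflicts=proceed&pretty" -d @- <<EOS
{
  "query": { "match_all": {} },
  "script": { "source": "ctx._source['@timestamp'] = '$DATE'" }
}
EOS
```
Because the EOS delimiter is unquoted, the shell expands $DATE inside the heredoc before curl sends the body.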
Hi,
I am trying to modify the date format in an Elasticsearch index (operate-operation-0.26.0_), but I get the following error.
{
  "took" : 148,
  "errors" : true,
  "items" : [
    {
      "index" : {
        "_index" : "operate-operation-0.26.0_",
        "_type" : "_doc",
        "_id" : "WBGhSXcB_hD8-yfn-Rh5",
        "status" : 400,
        "error" : {
          "type" : "strict_dynamic_mapping_exception",
          "reason" : "mapping set to strict, dynamic introduction of [dynamic] within [_doc] is not allowed"
        }
      }
    }
  ]
}
The JSON file I am using is bulk6.json:
{"index":{}}
{"dynamic":"strict","properties":{"date":{"type":"date","format":"yyyy-MM-dd'T'HH:mm:ss.SSSZZ"}}}
The command I am running is
curl -H "Content-Type: application/x-ndjson" -XPOST 'localhost:9200/operate-operation-0.26.0_/_bulk?pretty&refresh' --data-binary @"bulk6.json"
The _bulk API endpoint is not meant for changing mappings. You need to use the _mapping API endpoint like this:
The JSON file mapping.json should contain:
{
  "dynamic": "strict",
  "properties": {
    "date": {
      "type": "date",
      "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
    }
  }
}
And then the call can be made like this:
curl -H "Content-Type: application/json" -XPUT 'localhost:9200/operate-operation-0.26.0_/_mapping?pretty' --data-binary @"mapping.json"
(Note that refresh is not a valid parameter for the _mapping endpoint, so it has been dropped here.)
However, this is still not going to work as you're not allowed to change the date format after the index has been created. You're going to get the following error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "illegal_argument_exception",
        "reason" : "Mapper for [date] conflicts with existing mapper:\n\tCannot update parameter [format] from [strict_date_optional_time||epoch_millis] to [yyyy-MM-dd'T'HH:mm:ss.SSSZZ]"
      }
    ],
    "type" : "illegal_argument_exception",
    "reason" : "Mapper for [date] conflicts with existing mapper:\n\tCannot update parameter [format] from [strict_date_optional_time||epoch_millis] to [yyyy-MM-dd'T'HH:mm:ss.SSSZZ]"
  },
  "status" : 400
}
You need to create a new index with the correct mapping and reindex your data.
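A sketch of that flow, where operate-operation-0.26.1_ is just a placeholder name for the new index. The JSON file new-index.json would contain the full mapping:
{
  "mappings": {
    "dynamic": "strict",
    "properties": {
      "date": {
        "type": "date",
        "format": "yyyy-MM-dd'T'HH:mm:ss.SSSZZ"
      }
    }
  }
}
Then create the index and copy the documents over with the _reindex API:
curl -H "Content-Type: application/json" -XPUT 'localhost:9200/operate-operation-0.26.1_?pretty' --data-binary @new-index.json
curl -H "Content-Type: application/json" -XPOST 'localhost:9200/_reindex?pretty' -d'
{
  "source": { "index": "operate-operation-0.26.0_" },
  "dest": { "index": "operate-operation-0.26.1_" }
}'
Once the reindex finishes, point your application (or an index alias) at the new index.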
While loading datasets into Elasticsearch with this curl command:
curl -H "Content-Type: application/x-ndjson" -XPOST "localhost:9200/shakespeare/doc/_bulk?pretty" --data-binary @$shakespeare_6.0
the following warning is encountered:
Warning: Couldn't read data from file "$shakespeare_6.0", this makes an empty
Warning: POST.
{
  "error" : {
    "root_cause" : [
      {
        "type" : "parse_exception",
        "reason" : "request body is required"
      }
    ],
    "type" : "parse_exception",
    "reason" : "request body is required"
  },
  "status" : 400
}
My data is:
{"index":{"_index":"shakespeare","_id":0}}
{"type":"act","line_id":1,"play_name":"Henry IV", "speech_number":"","line_number":"","speaker":"","text_entry":"ACT I"}
What is the root cause of this warning? I am using 64-bit Windows 10.
Also, please let me know the different ways to send data into Elasticsearch. I am a noob.
You provided the wrong file name. The name of that file is shakespeare_6.0.json, not $shakespeare_6.0. This is the correct command:
curl -H "Content-Type: application/x-ndjson" -XPOST "localhost:9200/shakespeare/doc/_bulk?pretty" --data-binary @shakespeare_6.0.json
This assumes that the file is in the current directory.
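Once the import succeeds, a quick sanity check is to count the documents in the index:
curl -XGET "localhost:9200/shakespeare/_count?pretty"
As for other ways to send data: besides curl, you can paste the same Bulk API request into Kibana's Dev Tools console, or use one of the official Elasticsearch client libraries.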
I'm following the example online to import a json.gz Wikipedia dump into Elasticsearch: https://www.elastic.co/blog/loading-wikipedia.
After executing the following
curl -s 'https://'$site'/w/api.php?action=cirrus-mapping-dump&format=json&formatversion=2' |
jq .content |
sed 's/"index_analyzer"/"analyzer"/' |
sed 's/"position_offset_gap"/"position_increment_gap"/' |
curl -XPUT $es/$index/_mapping/page?pretty -d @-
I get an error:
{
  "error" : {
    "root_cause" : [
      {
        "type" : "mapper_parsing_exception",
        "reason" : "Unknown Similarity type [arrays] for field [category]"
      }
    ],
    "type" : "mapper_parsing_exception",
    "reason" : "Unknown Similarity type [arrays] for field [category]"
  },
  "status" : 400
}
Anybody got any ideas? I'm not able to ingest the Wikipedia content using the method described. I wish the company would at least update their tutorial page.
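For anyone hitting the same wall: this error usually means the index was created without the custom similarities (such as arrays) that the CirrusSearch mapping refers to; the blog creates the index from a cirrus-settings-dump before applying the mapping. A rough sketch of that step, with the caveat that the exact jq paths below are an assumption and depend on the dump's current structure:
curl -s 'https://'$site'/w/api.php?action=cirrus-settings-dump&format=json&formatversion=2' |
jq '{settings: {index: {analysis: .content.page.index.analysis, similarity: .content.page.index.similarity}}}' |
curl -XPUT $es/$index -d @-
If the index already exists without those similarity definitions, delete and recreate it before re-running the mapping command.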