Every time I follow the instructions for Create Index, Mapping and Add Data in Elasticsearch, I get an error.
I'm using Postman.
First of all, I create the index:
POST http://localhost:9200/schools
(actually, I have to use PUT to create it successfully)
Next, I create the mapping and add the data:
POST http://localhost:9200/schools/_bulk
Request Body
{
"index":{
"_index":"schools", "_type":"school", "_id":"1"
}
}
{
"name":"Central School", "description":"CBSE Affiliation", "street":"Nagan",
"city":"paprola", "state":"HP", "zip":"176115", "location":[31.8955385, 76.8380405],
"fees":2000, "tags":["Senior Secondary", "beautiful campus"], "rating":"3.5"
}
{
"index":{
"_index":"schools", "_type":"school", "_id":"2"
}
}
{
"name":"Saint Paul School", "description":"ICSE
Afiliation", "street":"Dawarka", "city":"Delhi", "state":"Delhi", "zip":"110075",
"location":[28.5733056, 77.0122136], "fees":5000,
"tags":["Good Faculty", "Great Sports"], "rating":"4.5"
}
{
"index":{"_index":"schools", "_type":"school", "_id":"3"}
}
{
"name":"Crescent School", "description":"State Board Affiliation", "street":"Tonk Road",
"city":"Jaipur", "state":"RJ", "zip":"176114","location":[26.8535922, 75.7923988],
"fees":2500, "tags":["Well equipped labs"], "rating":"4.5"
}
But all I receive is:
{
"error": {
"root_cause": [
{
"type": "json_e_o_f_exception",
"reason": "Unexpected end-of-input: expected close marker for Object (start marker at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#681c6189; line: 1, column: 1])\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#681c6189; line: 2, column: 3]"
}
],
"type": "json_e_o_f_exception",
"reason": "Unexpected end-of-input: expected close marker for Object (start marker at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#681c6189; line: 1, column: 1])\n at [Source: org.elasticsearch.transport.netty4.ByteBufStreamInput#681c6189; line: 2, column: 3]"
},
"status": 500
}
This is because your request body JSON is malformed. I'd advise checking with just one entry until you can get it into Elasticsearch, then add the others.
The following JSON is valid, though I'm not sure if it provides the structure you want:
{
"index":{
"_index":"schools", "_type":"school", "_id":"1"
},
"name":"Central School", "description":"CBSE Affiliation", "street":"Nagan",
"city":"paprola", "state":"HP", "zip":"176115", "location":[31.8955385, 76.8380405],
"fees":2000, "tags":["Senior Secondary", "beautiful campus"], "rating":"3.5"
}
You can use a JSON formatting and validation tool to make sure it is valid. Below are a couple of examples.
http://jsonformatter.org/
https://jsonformatter.curiousconcept.com/
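If you prefer a command-line check, a tool like jq can do the same job locally (assuming jq is installed; body.json is just a placeholder file name). Since the _bulk body is newline-delimited JSON rather than a single document, jq parses each value in turn and stops at the first malformed one:
# Validate every JSON value in the file; jq reports the line/column of the first parse error
jq -c . body.json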
I found something similar to my problem, and it solved it!
Elasticsearch Bulk API - Unexpected end-of-input: expected close marker for ARRAY
To load data into Elasticsearch in bulk, use the REST API endpoint '/_bulk', which expects the following newline-delimited JSON (NDJSON) structure:
action_and_meta_data\n
optional_source\n
....
action_and_meta_data\n
optional_source\n
An example curl request:
curl -H 'Content-Type: application/x-ndjson' -XPOST 'elasticsearchhost:port/index-name-sample/_bulk?pretty' --data-binary @sample.json
In your case, the request will be as follows:
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/schools/_bulk?pretty' --data-binary @schools-sample.json
The schools-sample.json content:
{"index":{"_index":"schools", "_type":"school", "_id":"1"}}
{"name":"Central School", "description":"CBSE Affiliation", "street":"Nagan","city":"paprola", "state":"HP", "zip":"176115", "location":[31.8955385, 76.8380405],"fees":2000, "tags":["Senior Secondary", "beautiful campus"], "rating":"3.5"}
{"index":{"_index":"schools", "_type":"school", "_id":"2"}}
{"name":"Saint Paul School", "description":"ICSE Afiliation", "street":"Dawarka", "city":"Delhi", "state":"Delhi", "zip":"110075","location":[28.5733056, 77.0122136], "fees":5000,"tags":["Good Faculty", "Great Sports"], "rating":"4.5"}
(trailing newline here)
Important: the final line of data must end with a newline character \n. Each newline character may be preceded by a carriage return \r. Otherwise, you will get an error:
{
"error" : {
"root_cause" : [
{
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
}
],
"type" : "illegal_argument_exception",
"reason" : "The bulk request must be terminated by a newline [\n]"
},
"status" : 400
}
{ "index":{"_index":"schools", "_type":"school", "_id":"1" }}
{ "name":"Central School", "description":"CBSE Affiliation", "street":"Nagan", "city":"paprola", "state":"HP", "zip":"176115", "location":[31.8955385, 76.8380405], "fees":2000, "tags":["Senior Secondary", "beautiful campus"], "rating":"3.5\n"}
{ "index":{ "_index":"schools", "_type":"school", "_id":"2" }}
{ "name":"Saint Paul School", "description":"ICSE Afiliation", "street":"Dawarka", "city":"Delhi", "state":"Delhi", "zip":"110075","location":[28.5733056, 77.0122136], "fees":5000,"tags":["Good Faculty", "Great Sports"], "rating":"4.5\n" }
{ "index":{"_index":"schools", "_type":"school", "_id":"3"}}
{ "name":"Crescent School", "description":"State Board Affiliation", "street":"Tonk Road", "city":"Jaipur", "state":"RJ", "zip":"176114","location":[26.8535922, 75.7923988],"fees":2500, "tags":["Well equipped labs"], "rating":"4.5\n"}
For example, does getting the token name look like this?
args := fmt.Sprintf("{\"tokenOwner\":\"%s\"}", "bob.near")
argsBase64 := base64.StdEncoding.EncodeToString([]byte(args))
param := map[string]string{
"request_type": "call_function",
"finality": "final",
"account_id": "ref-finance.near",
"method_name": "name",
"args_base64": argsBase64,
}
This is part of the metadata of each token. You can read the metadata standard at nomicon.io.
In particular, you can query the metadata of an NEP-141 Fungible Token using the ft_metadata function, as follows:
❯ export NEAR_ENV=mainnet
❯ near view 76a6baa20598b6d203d3eae6cc87e326bcb60e43.factory.bridge.near ft_metadata "{}"
View call: 76a6baa20598b6d203d3eae6cc87e326bcb60e43.factory.bridge.near.ft_metadata({})
{
spec: 'ft-1.0.0',
name: 'Law Diamond Token',
symbol: 'nLDT',
icon: 'https://near.org/wp-content/themes/near-19/assets/img/brand-icon.png',
reference: '',
reference_hash: '',
decimals: 18
}
Update: Make this call directly from the RPC.
You can query the RPC directly as follows:
curl --location --request POST 'https://archival-rpc.mainnet.near.org/' \
--header 'Content-Type: application/json' \
--data-raw '{
"jsonrpc": "2.0",
"id": "dontcare",
"method": "query",
"params": {
"request_type": "call_function",
"finality": "final",
"account_id": "76a6baa20598b6d203d3eae6cc87e326bcb60e43.factory.bridge.near",
"method_name": "ft_metadata",
"args_base64": "e30="
}
}'
The args_base64 field contains the arguments serialized as base64. In this case it is an empty JSON object:
base64("{}") = "e30="
The result is returned as a sequence of bytes. In the case of ft_metadata, it should first be decoded as a string and then parsed as JSON.
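As a rough sketch in plain shell (not from the original answer, and assuming the payload is plain ASCII so that byte values and code points coincide), the encoding and decoding steps can be reproduced like this:
# Encode the arguments: an empty JSON object becomes "e30="
printf '%s' '{}' | base64
# Call the RPC and turn result.result (an array of byte values) back into JSON with jq
curl -s --location --request POST 'https://archival-rpc.mainnet.near.org/' \
--header 'Content-Type: application/json' \
--data-raw '{"jsonrpc":"2.0","id":"dontcare","method":"query","params":{"request_type":"call_function","finality":"final","account_id":"76a6baa20598b6d203d3eae6cc87e326bcb60e43.factory.bridge.near","method_name":"ft_metadata","args_base64":"e30="}}' \
| jq '.result.result | implode | fromjson'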
I have an existing index with mappings and data in Elasticsearch which I need to duplicate for testing new development. Is there any way to create a temporary/duplicate index from the existing one?
Coming from an SQL background, I am looking at something equivalent to
SELECT *
INTO TestIndex
FROM OriginalIndex
WHERE 1 = 0
I have tried the Clone API but can't get it to work.
I'm trying to clone using:
POST /originalindex/_clone/testindex
{
}
But this results in the following exception:
{
"error": {
"root_cause": [
{
"type": "invalid_type_name_exception",
"reason": "Document mapping type name can't start with '_', found: [_clone]"
}
],
"type": "invalid_type_name_exception",
"reason": "Document mapping type name can't start with '_', found: [_clone]"
},
"status": 400
}
I'm sure someone can guide me quickly. Thanks in advance, all you wonderful folks.
First you have to set the source index to be read-only
PUT /originalindex/_settings
{
"settings": {
"index.blocks.write": true
}
}
Then you can clone
POST /originalindex/_clone/testindex
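If you are not using Kibana/Sense, the same two steps as curl might look like this (a sketch; adjust the host and index names to your setup):
curl -X PUT 'localhost:9200/originalindex/_settings' -H 'Content-Type: application/json' -d '{"settings": {"index.blocks.write": true}}'
curl -X POST 'localhost:9200/originalindex/_clone/testindex'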
If you need to copy documents to a new index, you can use the reindex api
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type:
application/json' -d'
{
"source": {
"index": "someindex"
},
"dest": {
"index": "someindex_copy"
}
}
'
(See: https://wrossmann.medium.com/clone-an-elasticsearch-index-b3e9b295d3e9)
Shortly after posting the question, I figured out a way.
First, get the properties of the original index:
GET originalindex
Copy the properties and PUT them to a new index:
PUT /testindex
{
"aliases": {...from the above GET request},
"mappings": {...from the above GET request},
"settings": {...from the above GET request}
}
Now I have a new index for testing.
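For illustration only, if the original index had nothing but a single keyword field called name (a made-up mapping; your GET output will differ, and on versions before 7.x the mappings also need a type name level), the filled-in request might look like the sketch below. Note that read-only settings returned by the GET (such as index.uuid, index.creation_date, index.version.created and index.provided_name) cannot be set on a new index and should be left out:
PUT /testindex
{
  "aliases": {},
  "mappings": {
    "properties": {
      "name": { "type": "keyword" }
    }
  },
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "1"
    }
  }
}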
I want to add the following file to Elasticsearch using the bulk API:
{"_id":{"date":"01-2007","profile":"Da","dgo":"DGO_E_AIEG","consumerType":"residential"},"value":{"min":120.42509,"minKwh":0.20071,"nbItems":6.0}}
using the command
curl -XPOST -H 'Content-Type: application/json' localhost:9200/_bulk --data-binary Downloads/bob/test.json
but I got the following error:
{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"The bulk request must be terminated by a newline [\n]"}],"type":"illegal_argument_exception","reason":"The bulk request must be terminated by a newline [\n]"},"status":400}
NB: the file clearly has an empty line at the end.
In the docs it says:
NOTE: the final line of data must end with a newline character \n.
There is an example of what the file is expected to look like just above that note in the docs: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html. Perhaps adding \n at the end of each line would fix the issue.
UPDATE:
There might be something wrong with the way you have placed your data into your JSON file. For example, the following data is in example.json:
{ "index" : { "_index" : "example", "_type" : "doc", "_id" : "1" } }
{ "field1" : "value1" }
(empty line here)
When running the following curl command, it works:
curl -X POST -H "Content-Type: application/x-ndjson" localhost:9200/_bulk --data-binary "#example.json"
It could be that you're not including something important in your JSON file, or that you're missing the "@your_file.json" file reference, or, as the other poster mentioned, that you don't have the Content-Type set to application/x-ndjson.
The answer is very simple
{ "index":{ "_index":"schools_gov", "_type":"school", "_id":"1" } }
{ "name":"Model School", "city":"Hyderabad"}
{ "index":{ "_index":"schools_gov", "_type":"school", "_id":"2" } }
{ "name":"Government School", "city":"Pune"}
is not going to work, but the JSON below will:
{ "index":{ "_index":"schools_gov", "_type":"school", "_id":"1" } }
{ "name":"Model School", "city":"Hyderabad"}
{ "index":{ "_index":"schools_gov", "_type":"school", "_id":"2" } }
{ "name":"Government School", "city":"Pune"}
// Give a new line here. Not the characters '\n', but an actual new line.
The HTTP command would be POST http://localhost:9200/schools_gov/_bulk
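In curl form, assuming the lines above are saved to a file called schools_gov.json (a made-up name), that request might look like:
curl -H 'Content-Type: application/x-ndjson' -XPOST 'localhost:9200/schools_gov/_bulk?pretty' --data-binary @schools_gov.json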
As the error states, you simply need to add a new line to the end of the file.
If you are on a *nix system, you can do this:
echo "\n" >> Downloads/bob/test.json
Also, as explained in the documentation https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-bulk.html, the Content-Type should be application/x-ndjson
When sending requests to this endpoint the Content-Type header should
be set to application/x-ndjson
So the command should be:
curl -XPOST -H 'Content-Type: application/x-ndjson' localhost:9200/_bulk --data-binary @Downloads/bob/test.json
The error message is very confusing. I typed -data-binary (with a single dash) and got the same message; it sent me in a completely wrong direction.
I have some birth_dates that I want to store as a string. I don't plan on doing any querying or analysis on the data, I just want to store it.
The input data I have been given is in lots of different random formats, and some values even include strings like "(approximate)". Elasticsearch has determined that this should be a date field with a date format, which means that when it receives a value like "1981 (approx)" it freaks out and says the input is in an invalid format.
Instead of changing the input dates, I want to change the field type to string.
I have looked at the documentation and have been trying to update the mapping with the PUT mapping API, but Elasticsearch keeps returning a parsing error.
Based on the documentation here:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
I have tried:
PUT /sanctions_lists/eu_financial_sanctions/_mapping
{
"mappings":{
"eu_financial_sanctions":{
"properties": {
"birth_date": {
"type": "string", "index":"not_analyzed"
}
}
}
}
}
but returns:
{
"error": {
"root_cause": [
{
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [mappings : {eu_financial_sanctions={properties={birth_date={type=string, index=not_analyzed}}}}]"
}
],
"type": "mapper_parsing_exception",
"reason": "Root mapping definition has unsupported parameters: [mappings : {eu_financial_sanctions={properties={birth_date={type=string, index=not_analyzed}}}}]"
},
"status": 400
}
Question Summary
Is it possible to override Elasticsearch's automatically determined date field, forcing string as the field type?
NOTE
I'm using the Google Chrome Sense plugin to send the requests.
The Elasticsearch version is 2.3.
Just remove the type reference and _mapping from the URL; you already have them inside the request body. More examples are in the documentation. The corrected request:
PUT /sanctions_lists
{
"mappings":{
"eu_financial_sanctions":{
"properties": {
"birth_date": {
"type": "string", "index":"not_analyzed"
}
}
}
}
}
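As a quick sanity check afterwards (not part of the original answer), you can read the mapping back and confirm that birth_date is now a string:
GET /sanctions_lists/_mapping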
I am using Elasticsearch via the RESTClient add-on in Firefox,
and I have the following problem when updating a document:
{
"error": "JsonParseException[Unexpected character (':' (code 58)): was expecting comma to separate OBJECT entries
at [Source: [B#142d626; line: 3, column: 12]]",
"status": 500
}
and I do this:
method: POST
url: http://localhost:9200/test2/t2/2/_update?pretty
with this body:
{ "doc" :
"name":"oooooo"
}
Any help?
Thanks
Try with the following JSON in your body:
{
"doc": {
"name": "oooooo"
}
}
In order to do a partial update, the JSON in the body must have a single doc field which contains the fields to update, in this case "name": "oooooo". In your case, you were simply missing the curly braces around the name field.
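For reference, the same request as a curl command, reusing the URL from the question (in RESTClient you keep the same method and URL and just paste the corrected body):
curl -X POST 'http://localhost:9200/test2/t2/2/_update?pretty' -H 'Content-Type: application/json' -d '{ "doc": { "name": "oooooo" } }'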