I want to delete index entries directly from MOBZ's ElasticSearch head (web UI).
I tried a DELETE query in the "Any Request" section with the following:
{"query":{"term":{"supplier":"ABC"}}}
However, all I get in return is:
{
  "ok": true,
  "acknowledged": true
}
and the entries do not get deleted.
What am I doing wrong?
You should have removed the "query" wrapper from your post data.
You only need it for _search, and you should be using the _query endpoint for delete.
In that case it is obvious the post body is a query, making it redundant (and, as you saw, ineffective) to explicitly state it's a query.
That is:
curl -XPOST 'localhost:9200/myindex/mydoc/_search' -d \
'{"query":{"term":{"supplier":"ABC"}}}'
will work fine for search.
But to delete by query, if you try:
curl -XDELETE 'localhost:9200/myindex/mydoc/_query' -d \
'{"query":{"term":{"supplier":"ABC"}}}'
it won't work (note the change of endpoint to _query, as well as the switch of the curl method to DELETE).
You need to call:
curl -XDELETE 'localhost:9200/myindex/mydoc/_query' -d \
'{"term":{"supplier":"ABC"}}'
Let me know if this helps.
If you want to do it in HEAD:
put /stock/one/_query in the Any Request text box next to the "GET/PUT/POST/DELETE" drop-down
choose DELETE in the drop-down menu
the request body should be {"term":{"vendor":"Socks"}}
Your problem was that you used a request body of: {"query":{"term":{"vendor":"Socks"}}}
That is fine for search, but not for delete.
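For reference, the equivalent call from the command line (reusing the same example index, type, and term as above) would be:
curl -XDELETE 'localhost:9200/stock/one/_query' -d \
'{"term":{"vendor":"Socks"}}'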
A simple way to delete from the head plugin by doc id:
Go to the Any Request tab in the head plugin
Put http://localhost:9200/myindex/indextype/id in the text box above the DELETE drop-down
Select DELETE from the drop-down
Execute the request by clicking the Request button
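The equivalent curl call for deleting a single document by id (myindex/indextype/id stands in for your actual index, type, and document id) is:
curl -XDELETE 'http://localhost:9200/myindex/indextype/id'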
I'd issue a search request first, to verify that the documents you want deleted are actually being returned by your query.
It's impossible to give clear help, since there are many things that could be going wrong, but here are some possible problems:
You don't have the correct index/type specified in the ES Head query
You need to specify the index and type in the second input box, not the first. The first line is meant for the host address and automatically adds a trailing slash.
You need to use the Delete command from the dropdown
The analyzer of your fields is altering the field text in a way that it isn't being found by the Term query.
In all likelihood, it is the last option. If you haven't specified an analyzer, the default one that ES picks is the Standard analyzer, which includes a lowercase filter. The term "ABC" is therefore never indexed; instead, "abc" is indexed.
A term query is not analyzed at all, so case sensitivity matters.
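A quick way to test this is to re-run the term query with a lowercase value; if the standard analyzer lowercased the field at index time, this version should match:
curl -XPOST 'localhost:9200/myindex/mydoc/_search' -d \
'{"query":{"term":{"supplier":"abc"}}}'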
If those tips don't help, post up your mapping and some sample data, and we can probably help better.
Related
I am using Elasticsearch, with Kibana to view the data in the indices. I am using Kibana's DevTools to send commands for adding/deleting/updating indices etc.
I want to add a keyword sub-field to a certain text property so that I can both run full-text searches and aggregate on this property.
1) Does a change like that mean I need to update Kibana's index pattern as well?
2) I have read Elasticsearch's docs on PUT Mapping and know how to use it to update the indices themselves, but I don't know how to update the index patterns. I read that the same API should be used to update them, but I don't know how to see an index pattern's original mapping in order to update it.
Yes, if you change the index mapping in ES, then you need to go in Kibana and refresh the related index patterns.
Right now, you need to go inside Kibana (Management > Index patterns), select the index pattern, and press the "Refresh" button at the top right of the window in order to pick up the mapping changes.
Also note that if you updated some text fields in order to have a keyword sub-field, you'll also need to call the _update_by_query API on your index in order to reindex the changed field in all your documents.
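As an illustration (a sketch only; my-index and title are hypothetical names standing in for your index and text property, and on older versions the mapping URL may need to include the type name), the two steps in DevTools could look like this:
PUT my-index/_mapping
{
  "properties": {
    "title": {
      "type": "text",
      "fields": {
        "keyword": { "type": "keyword" }
      }
    }
  }
}
POST my-index/_update_by_query?conflicts=proceed
The first call adds the keyword sub-field to the existing text field (adding sub-fields is one of the few mapping changes allowed in place), and the body-less _update_by_query re-indexes every document so the new sub-field gets populated.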
I have hundreds of indexes and I want to fetch only a given field from every record under these indexes. I can do the following:
curl -XGET 'http://localhost:9200/cms-2016-03-30/job/_search?pretty=true&field=CMSDataset'
This unfortunately returns a lot of things I don't want and also doesn't give me all the records (~10^6).
Also, there are many cms-* style indexes, and I want to parse through all of them and get only this field. How do I do this?
You need to use source filtering instead of fields (which is deprecated)
curl -XGET 'http://localhost:9200/cms-2016-03-30/job/_search?pretty=true&_source=CMSDataset'
(note that field has been changed to _source)
From the official documentation:
The fields parameter is about fields that are explicitly marked as stored in the mapping, which is off by default and generally not recommended. Use source filtering instead to select subsets of the original source document to be returned.
UPDATE
You can use the size parameter (e.g. 100) in order to return more records (by default it's 10) and then simply use * as the index name:
curl -XGET 'http://localhost:9200/*/_search?pretty=true&size=100&_source=CMSDataset'
I'm using kibana-4. Following the documentation here, I should be able to create an index by putting this in my elasticsearch.yml file:
PUT .kibana
{
  "index.mapper.dynamic": true
}
I'm not sure I understand how to do this, because a yaml file should not take values formatted like the above block, right?
I noticed that .kibana was a default index, so after inputting it into the kibana console, I was asked to input a time field for the default index. However, the input HTML element is a dropdown that contained no options. Without selecting a time-field option I am not allowed to create a default index. What am I supposed to do? Has anyone else run into a similar problem?
I understand the problem you are facing. I faced the same one while using Kibana 4 for the first time.
Here are two possible solutions to your problem:
1. Insert data into Elasticsearch that contains a timestamp field. Upon inserting such data, that field will be recognized by Kibana and shown to you in the dropdown menu (where you currently see nothing).
The dropdown is empty because Kibana couldn't recognize a timestamp field in the data you inserted into Elasticsearch.
2. Untick the option "Index contains time-based events", which will allow you to just enter your index name and access Kibana.
Note: while using option 2 and specifying the index name as .kibana, you will notice that it doesn't contain any fields or data, because .kibana doesn't store any data.
I would suggest creating an index using a curl command and inserting data into it, with or without a timestamp field. If you inserted data without a timestamp field, use option 2; otherwise use option 1.
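For example (a minimal sketch; the index name logs-test and both field names are made up):
curl -XPUT 'http://localhost:9200/logs-test'
curl -XPOST 'http://localhost:9200/logs-test/event' -d \
'{"message":"hello","@timestamp":"2015-06-01T12:00:00Z"}'
With a document like this indexed, the dynamically mapped @timestamp date field should show up in Kibana's time-field dropdown for a logs-test index pattern.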
I have just updated a website; the update adds new fields to Elasticsearch.
In my dev environment it all works fine, but on the live site the new fields are not being found.
E.g. I have added a new field with the value 1.
However, when adding a filtered query of
{"field":1}
it does not find any matching results.
When I look in the documents, I can see docs with the field set to 1
Would the reason for this be that the new field was added after the mappings was set? I am not all that familiar with elasticsearch, So I am not really sure where to start looking to fix it.
Any help would be appreciated.
Update:
Querying from the URL shows nothing either:
_search/?pretty=true&size=50&q=field1:*
However, there is another field that was added at the same time which I can search on.
I can see field1 in the result set, but it just won't allow me to search on it.
The only difference I see in the mapping is that the field that is working is set to type:long, whereas the one not working is set to type:string.
Is it a length issue with the ngram? What is your min_gram setting?
When you check on your index settings like this:
GET <host>/<index_name>/_settings
Does it work when you filter for a two-digit value? Are all the field values one digit?
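For context, this is the kind of block to look for in the _settings output (a hypothetical example; the tokenizer name and values are made up). With min_gram set to 2, a one-character value like 1 never produces any ngram token, so it can never be matched:
"analysis": {
  "tokenizer": {
    "my_ngram_tokenizer": {
      "type": "nGram",
      "min_gram": 2,
      "max_gram": 3
    }
  }
}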
It's OK to add a field after the mapping was set. Elasticsearch will guess the mapping for you (in fact, it's one of its selling features: no need to define the mapping, just throw the data at it).
There are a few things that can go wrong:
Verify that data is actually in the index. To do that, just navigate to the _search URL with no parameters (see the example after this list); you should see the field if it is indexed.
Look at your mapping. Could it be that the field is explicitly set not to be indexed?
Another possibility is that your query is wrong (but that is unlikely, since you're saying it works in the development environment)
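For the first check, a request like this (with myindex/mydoc as placeholder index and type names) returns the first few documents along with their _source, so you can confirm the field is present:
curl -XGET 'http://localhost:9200/myindex/mydoc/_search?pretty=true'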
I created an index "myindex" with a specified document type "mytype". I am able to delete the index, but it appears that "mytype" still exists without being tied to the index.
How do I get rid of "mytype"?
If you really deleted the index, the mapping in this index should not exist anymore.
Do you have any other index in your cluster with a similar type name?
To answer the question "How to delete document types in Elasticsearch?", use the Delete Mapping API:
curl -XDELETE http://localhost:9200/index/type
EDIT: From Elasticsearch 2.0 onwards, this won't be possible anymore. See Mapping changes. You will have to install the Delete By Query plugin and run a query which will remove your documents, but the mapping will still exist. So it will most likely be better to reindex your documents into another index without the old type.
But as @mguillemin and @javanna said, when you delete an index, every mapping attached to this index is deleted as well:
curl -XDELETE http://localhost:9200/index
You can use the _delete_by_query API to delete the documents of a type.
POST index-name/type-name/_delete_by_query
{
  "query": {
    "match": {
      "message": "some message"
    }
  }
}
For further reading, see the docs.
In the latest versions of Elasticsearch, deleting document types is no longer supported. It's mentioned in the documentation:
It is no longer possible to delete the mapping for a type. Instead you
should delete the index and recreate it with the new mappings.
Note that _delete_by_query does require a query in the request body; to remove every document of the type, a match_all query works:
curl -XPOST 'http://<host>:9200/<index>/<type>/_delete_by_query' -d \
'{"query":{"match_all":{}}}'