How to log all requests to ElasticSearch? - elasticsearch

I have a problem with debugging my application,
so I would like to log all the requests that are sent to Elasticsearch.
I learned that I can do this via the slowlog by setting the thresholds to 0s.
I tried this in both ES 2.4.2 and ES 5.6.4, but no requests were logged.
In ES 2.4.2 I set in logging.yml:
index.search.slowlog: INFO, index_search_slow_log_file
index.indexing.slowlog: INFO, index_indexing_slow_log_file
In ES 5.6.4 I also changed the level to INFO (in log4j2.properties):
logger.index_search_slowlog_rolling.level = info
logger.index_indexing_slowlog.level = info
Then I started ES and issued:
curl -XPUT 'http://localhost:9200/com.example.app.model.journal/_settings' -d '
{
"index.search.slowlog.threshold.query.info" : "0s",
"index.search.slowlog.threshold.fetch.info": "0s",
"index.indexing.slowlog.threshold.index.info": "0s"
}
'
(I would prefer to set these settings for all indexes in the configuration
file - is that possible?)
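For ES 2.x it is: the slowlog thresholds are ordinary index settings and can go in elasticsearch.yml, where they apply to every index on the node. A sketch (levels and values are illustrative):

```yaml
# elasticsearch.yml - ES 2.x only. ES 5.x refuses index-level settings
# in the node config, so on 5.x apply them per index via the _settings
# API, or put them in an index template to cover new indexes.
index.search.slowlog.threshold.query.info: 0s
index.search.slowlog.threshold.fetch.info: 0s
index.indexing.slowlog.threshold.index.info: 0s
```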
Then I searched for some data (and got results):
curl -XGET 'localhost:9200/com.example.app.model.journal/_search?pretty' -d '
{
  "query" : {
    "match" : { "rank" : "2" }
  }
}'
This request was not logged. In ES 2.4.2 the slowlog files are created
but empty; in ES 5.6.4 the files are not created at all. What am I doing wrong?
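When a request seems to vanish from the logs it is also worth ruling out a malformed body; a minimal local sanity check, assuming python3 is on the PATH:

```shell
# Validate the search body locally before sending it to Elasticsearch.
# python3 -m json.tool pretty-prints valid JSON and exits non-zero on
# malformed input, so a stray brace is caught before the request is made.
BODY='{
  "query": {
    "match": { "rank": "2" }
  }
}'
echo "$BODY" | python3 -m json.tool
```

If the validator rejects the body, fix the JSON before suspecting the slowlog configuration.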

I couldn't find a solution to this, so as a workaround I used mitmdump. Run the proxy:
mitmdump -v -dddd -R http://localhost:9200
(newer mitmproxy releases replace the -R flag with --mode reverse:http://localhost:9200).
Then replace the ES address with the proxy address:
curl -XGET 'localhost:8080/com.example.app.model.journal/_search?pretty' -d '
{
...

Related

How to write search queries in kibana using Query DSL for Elasticsearch aggregation

I am working on the ELK stack to process Apache access logs. I spent a lot of time understanding the Query DSL format so that more complex queries can be written. Currently I am facing issues with running the queries in the Kibana interface, but the same queries work just fine when posted using curl from the command line.
Kibana version: 4.1.0
Elasticsearch version: 1.6.0
Java: 1.8.0_45
Using curl(working):
curl -XGET 'http://localhost:9200/cars/transactions/_search?search_type=count' -d '{
  "aggs" : {
    "colors" : {
      "terms" : {
        "field" : "color"
      }
    }
  }
}'
Used data from here.
Using kibana(not working):
{ "aggs" : { "colors" : { "terms" : { "field" : "color" } } } }
Error:
org.elasticsearch.index.query.QueryParsingException: [.kibana] No query registered for [aggs]
Below are some of the queries I managed to run successfully in kibana using Query DSL on apache access log data:
{"filtered":{"filter":{"bool":{"must":{"terms":{"verb":["get"]}}}}}}
{"filtered":{"filter":{"bool":{"must_not":{"terms":{"agent":["crawler","spider","nagios"]}}}}}}
I have already searched google about it for hours but without luck.
I am not sure you can do this, as the Discover section already uses the timestamp aggregation.
Can you explain what you are trying to do? There are ways to add custom aggregations in the visualizations. If you open up the advanced section on the aggregation in the visualization, you can see the ability to enter JSON that includes additional aggregations or other parameters.
If you give me an example of what you are trying to do I can try and help - the example you gave can be easily done with the Kibana UI.
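For reference, the advanced "JSON Input" box on a Kibana aggregation takes a JSON fragment that is merged into the generated aggregation. A hypothetical example that changes the bucket count and ordering of a terms aggregation:

```json
{
  "size": 20,
  "order": { "_count": "asc" }
}
```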

ElasticsearchIllegalArgumentException No feature for name

I have an Elasticsearch node setup. When I query the index via curl command I get the expected output.
curl -XPOST 'http://localhost:9200/one/employee/_search?pretty=true' -d '{
"query": {
"term": {
"emp_id":"4318W01149"
}
}
}'
but when I run a similar query via the browser I get this error:
http://localhost:9200/one/employee/?q=emp_id:4318W01149
{"error":"ElasticsearchIllegalArgumentException[No feature for name [employee]]","status":400}
I'm on ES version 1.5.2
Thanks
You forgot _search in http://localhost:9200/one/employee/?q=emp_id:4318W01149
It should be
http://localhost:9200/one/employee/_search?q=emp_id:4318W01149
Without _search, Elasticsearch interprets "employee" as an index feature name (like _settings or _mappings), hence the error.

Delete all documents from index/type without deleting type

I know one can delete all documents from a certain type via deleteByQuery.
Example:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"term" : { "user" : "kimchy" }
}
}'
But I have no term and simply want to delete all documents from that type, no matter the term. What is the best practice to achieve this? An empty term does not work.
Link to deleteByQuery
I believe if you combine the delete by query with a match all it should do what you are looking for, something like this (using your example):
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"match_all" : {}
}
}'
Or you could just delete the type:
curl -XDELETE http://localhost:9200/twitter/tweet
Note: the _query delete endpoint used above was removed in later versions of Elasticsearch.
The Delete-By-Query plugin has been removed in favor of a new Delete By Query API implementation in core. Read here
curl -XPOST 'localhost:9200/twitter/tweet/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
From ElasticSearch 5.x, delete_by_query API is there by default
POST: http://localhost:9200/index/type/_delete_by_query
{
"query": {
"match_all": {}
}
}
You can delete documents from type with following query:
POST /index/type/_delete_by_query
{
"query" : {
"match_all" : {}
}
}
I tested this query in Kibana and Elastic 5.5.2
Torsten Engelbrecht's comment on John Petrone's answer, expanded:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d
'{
"query":
{
"match_all": {}
}
}'
(I did not want to edit John's reply, since it got upvotes and is set as answer, and I might have introduced an error)
Starting from Elasticsearch 2.x, delete-by-query is no longer allowed in core, since deleted documents remain in the index and can cause index corruption.
Since Elasticsearch 5.x, the delete-by-query plugin was removed in favor of the new Delete By Query API.
The curl option:
curl -X POST "localhost:9200/my-index/_delete_by_query" -H 'Content-Type: application/json' -d' { "query": { "match_all":{} } } '
Or in Kibana
POST /my-index/_delete_by_query
{
"query": {
"match_all":{}
}
}
The above answers no longer work with ES 6.2.2 because of Strict Content-Type Checking for Elasticsearch REST Requests. The curl command which I ended up using is this:
curl -H'Content-Type: application/json' -XPOST 'localhost:9200/yourindex/_doc/_delete_by_query?conflicts=proceed' -d' { "query": { "match_all": {} }}'
In Kibana Console:
POST calls-xin-test-2/_delete_by_query
{
"query": {
"match_all": {}
}
}
(Reputation not high enough to comment)
The second part of John Petrone's answer works - no query needed. It will delete the type and all documents contained in that type, but that can just be re-created whenever you index a new document to that type.
Just to clarify:
$ curl -XDELETE 'http://localhost:9200/twitter/tweet'
Note: this does delete the mapping! But as mentioned before, it can be easily re-mapped by creating a new document.
Note for ES2+
Starting with ES 1.5.3 the delete-by-query API is deprecated, and it has been completely removed since ES 2.0.
Delete By Query is now a plugin instead of a core API.
In order to use the Delete By Query plugin you must install the plugin on all nodes of the cluster:
sudo bin/plugin install delete-by-query
All of the nodes must be restarted after the installation.
The usage of the plugin is the same as the old API. You don't need to change anything in your queries - this plugin will just make them work.
*For complete information regarding WHY the API was removed you can read more here.
You have these alternatives:
1) Delete a whole index:
curl -XDELETE 'http://localhost:9200/indexName'
example:
curl -XDELETE 'http://localhost:9200/mentorz'
For more details see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html
2) Delete by Query to those that match:
curl -XDELETE 'http://localhost:9200/mentorz/users/_query' -d
'{
"query":
{
"match_all": {}
}
}'
*Here mentorz is an index name and users is a type
I'm using elasticsearch 7.5 and when I use
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -d'
{
"query": {
"match_all": {}
}
}'
it throws the error below:
{
"error" : "Content-Type header [application/x-www-form-urlencoded] is not supported",
"status" : 406
}
I also needed to add an extra -H 'Content-Type: application/json' header to the request to make it work:
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
{
"took" : 465,
"timed_out" : false,
"total" : 2275,
"deleted" : 2275,
"batches" : 3,
"version_conflicts" : 0,
"noops" : 0,
"retries" : {
"bulk" : 0,
"search" : 0
},
"throttled_millis" : 0,
"requests_per_second" : -1.0,
"throttled_until_millis" : 0,
"failures" : [ ]
}
Just to add a couple of cents to this:
the delete_by_query mentioned at the top is still available as a plugin in Elasticsearch 2.x,
although in the upcoming version 5.x it will be replaced by the delete-by-query API.
In Elasticsearch 2.3, the option
action.destructive_requires_name: true
in elasticsearch.yml does the trick:
curl -XDELETE http://localhost:9200/twitter/tweet
For future readers:
in Elasticsearch 7.x there's effectively one type per index - types are hidden
you can delete by query, but if you want to remove everything you'll be much better off removing and re-creating the index. That's because deletes are only soft deletes under the hood until they trigger Lucene segment merges*, which can be expensive if the index is large. Meanwhile, removing an index is almost instant: it just removes some files on disk and a reference in the cluster state.
* The video/slides are about Solr, but things work exactly the same in Elasticsearch, as this is Lucene-level functionality.
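A minimal sketch of that drop-and-recreate approach in the Kibana console (the index name and mapping are placeholders, using ES 7.x typeless syntax):

```
DELETE /my-index

PUT /my-index
{
  "mappings": {
    "properties": {
      "user": { "type": "keyword" }
    }
  }
}
```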
If you want to delete documents according to a date, you can use the Kibana console (v6.1.2):
POST index_name/_delete_by_query
{
"query" : {
"range" : {
"sendDate" : {
"lte" : "2018-03-06"
}
}
}
}

Elasticsearch ActiveMQ River Configuration

I started to configure an ActiveMQ river. I've already installed the ActiveMQ plugin, but I'm confused about how to make it work - the documentation is brief. I followed exactly the steps for creating a new river, but I don't know what steps to follow next.
Note:
I have an ActiveMQ server up and running, and I tested it using a
simple JMS app to push a message into a queue.
I created a new river using:
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '{
"type" : "activemq",
"activemq" : {
"user" : "guest",
"pass" : "guest",
"brokerUrl" : "failover://tcp://localhost:61616",
"sourceType" : "queue",
"sourceName" : "elasticsearch",
"consumerName" : "activemq_elasticsearch_river_myindex_river",
"durable" : false,
"filter" : ""
},
"index" : {
"bulk_size" : 100,
"bulk_timeout" : "10ms"
}
}'
After creating the previous river, I can get its status using
curl -XGET 'localhost:9200/my_index/_status', but it gives me the index
status, not the created river.
Please, any help to get me on the right road with ActiveMQ river configuration in Elasticsearch?
I told you on the mailing list: define the index.index value, or (easier) set the name of your river to be your index name:
curl -XPUT 'localhost:9200/_river/my_index/_meta' -d '
{
"type":"activemq",
"activemq":{
"user":"guest",
"pass":"guest",
"brokerUrl":"failover://tcp://localhost:61616",
"sourceType":"queue",
"sourceName":"elasticsearch",
"consumerName":"activemq_elasticsearch_river_myindex_river",
"durable":false,
"filter":""
},
"index":{
"bulk_size":100,
"bulk_timeout":"10ms"
}
}'
or
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '
{
"type":"activemq",
"activemq":{
"user":"guest",
"pass":"guest",
"brokerUrl":"failover://tcp://localhost:61616",
"sourceType":"queue",
"sourceName":"elasticsearch",
"consumerName":"activemq_elasticsearch_river_myindex_river",
"durable":false,
"filter":""
},
"index":{
"index":"my_index",
"bulk_size":100,
"bulk_timeout":"10ms"
}
}'
It should help.
If not, update your question with what you can see in logs.

Update elasticsearch settings via Tire

Is it possible to use Tire to update elasticsearch settings? I have this curl command I'd like to run automatically.
`curl -XPUT localhost:9200/tweets/_settings -d '{
"index" : {
"refresh_interval" : "-1"
}
}'`
The value is available via tire but I'm not sure how to apply it.
Tweet.tire.settings[:refresh_interval]
Possible, but ugly :)
Tire::Configuration.client.put([Tire::Configuration.url, Tweet.index.name].join('/'),
index: { refresh_interval: '-1' })
Will get nicer in future versions...
