I have upgraded my system to Elasticsearch 2.0.0 and installed the elasticsearch-head plugin, but the plugin cannot connect, so it does not display the indices residing on my ES server.
I'm able to index documents and retrieve them via curl.
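For example, both of these work fine (index and type names here are placeholders):
curl -XPUT "http://localhost:9200/myindex/mytype/1" -d '{"title": "test"}'
curl -XGET "http://localhost:9200/myindex/mytype/1"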
I have tried editing the elasticsearch.yml file as follows:
http.cors.enabled: true
But this doesn't seem to work either.
Any idea why this is happening?
You need to set http.cors.allow-origin explicitly, since as of ES 2.0 it no longer has a default value. Previously, that setting defaulted to *, but that was considered bad practice from a security point of view.
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
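For example, a minimal sketch of the relevant elasticsearch.yml settings might look like this (the regex allows any localhost port; adjust the origin to wherever elasticsearch-head is served from):
http.cors.enabled: true
http.cors.allow-origin: /https?:\/\/localhost(:[0-9]+)?/
Restart the node after editing the file, since these settings are not picked up dynamically.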
I have been using Elasticsearch, where you can simply set the slow-query-log threshold to 0 so that all queries get logged, so I tried the same in Solr.
I am using the techproducts example here and just added the following config to the file
/home/ygrover/software/solr-8.3.1/server/solr/configsets/sample_techproducts_configs/conf/solrconfig.xml
<slowQueryThresholdMillis>0</slowQueryThresholdMillis>
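For reference, my understanding is that this element belongs inside the <query> section of solrconfig.xml, roughly like this:
<query>
  ...
  <slowQueryThresholdMillis>0</slowQueryThresholdMillis>
</query>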
I also changed the logging level in Solr to ALL via http://localhost:8983/solr/#/~logging/level.
The log folder is at the location /home/ygrover/software/solr-8.3.1/server/logs
but no entries are being written to solr_slow_requests.log.
Am I missing something here?
Note: I am doing this for testing on a local environment only. If there is an alternative way, please suggest it, but I'd like to know what the missing piece is here, as this process works seamlessly in Elasticsearch.
Edit 1:
I am facing this problem in cloud mode only, when launching the techproducts example. I followed this tutorial: https://lucene.apache.org/solr/guide/8_4/solr-tutorial.html
I have edited the _default config as well and set the slow query threshold to 0 there too. This config works when I don't run in cloud mode; I can then see all queries logged in solr_slow_requests.log.
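For reference, this is the kind of query I am testing with (collection name as created by the tutorial):
curl "http://localhost:8983/solr/techproducts/select?q=*:*"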
I have managed to process log files using the ELK stack and I can now see my logs in Kibana.
I have scoured the internet and can't find a way to remove the old logs from months ago that are viewable in Kibana (well, not an explanation that I understand). I just want to clear Kibana and start afresh, loading new logs and having them be the only ones displayed. Does anyone know how I would do that?
Note: Even if I remove all the Index Patterns (in the Management section), the processed logs are still there.
Context: I have been looking at using ELK to analyse testing logs in my work. For that reason, I am using Elasticsearch, Kibana and Logstash v5.4, and I am unable to download a newer version due to company restrictions.
Any help would be much appreciated!
[Screenshot: Kibana displaying logs]
Update:
I typed GET /_cat/indices/*?v&s=index into Dev Tools > Console and got a list of indices.
I initially used the DELETE endpoint and it didn't appear to work. However, after restarting everything, it worked the second time, and I was able to remove all the existing indices, which subsequently removed all the logs displayed in Kibana.
SUCCESS!
Kibana is just the visualization part of the Elastic Stack; your data is stored in Elasticsearch, and to get rid of it you need to delete your indices.
Version 5.4 is very old and already past its EOL date. It does not have any UI for deleting indices, so you will need to use the Elasticsearch REST API.
You can do it from Kibana: click on Dev Tools, and first list your indices using the cat indices endpoint.
GET /_cat/indices?v&s=index&pretty
After that, use the delete index API endpoint to delete your index.
DELETE /name-of-your-index
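For example, assuming your indices follow the default logstash-YYYY.MM.DD naming (a guess; list them first and double-check before running anything destructive), you could delete them all at once with a wildcard:
DELETE /logstash-*
Wildcard deletes are allowed by default in 5.x, but they are irreversible, so be careful.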
On newer versions you can do it using the Index Management UI; you should try to talk with your company about upgrading.
I'm new to Elasticsearch and am still trying to set it up. I have installed Elasticsearch 5.5.1 using the default values, and I have also installed Kibana 5.5.1 with the defaults. I've also installed the ingest-attachment plugin along with the latest X-Pack plugin. I have Elasticsearch running as a service and Kibana open in my browser.

On the Kibana dashboard I have an error stating that it is unable to fetch mappings. I guess this is because I haven't set up any indices or pipelines yet. This is where I need a steer; all the documentation I've found online so far isn't particularly clear.

I have a directory with a mixture of document types, such as PDF and DOC files. My ultimate goal is to be able to search these documents with values that a user will enter via an app. I'm guessing I next need to use the Dev Tools/Console window in Kibana with the PUT command to create a pipeline, but I'm unsure how to do this so that it points to my directory of documents. Can anybody provide an example of this for this version, please?
If I understand you correctly, let's first establish some basics about Elasticsearch:
Elasticsearch, simply defined, is a search engine: you store some data, and Elasticsearch helps you search it using your search criteria and retrieves the relevant data back.
You need a "container" to save your data in, and like any database engine, Elasticsearch has one, but the terminology is somewhat different: for example, what a SQL-like system calls a "database" is called an "index", and what you know as a "table" is called a "type" in Elasticsearch.
From my understanding, you will need to create your index (with or without mappings) as a starting point. I recommend starting without mappings just to get things working, but later on it's highly recommended to work with mappings where applicable, because Elasticsearch is smart, but it cannot know more about your data than you do.
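For example, creating an index without any mappings from the Kibana console is a one-liner (index name is a placeholder); Elasticsearch will then infer the field types dynamically from the first documents you index:
PUT my_index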
Kibana has failed to find a proper index to start with, so it complains and asks you to provide either an index name pattern or a specific index name; it can then infer the mappings and give you the nice querying and charting features for your data. Once you create your index, provide it on Kibana's starting page and you will be ready to go.
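As a concrete starting point for your attachment use case, a minimal sketch from the Kibana Dev Tools console might look like the following. Index, type, and field names here are placeholders, and the attachment processor expects the file content to be base64-encoded in the indexed document (the sample value below is the RTF snippet from the plugin's documentation):
PUT _ingest/pipeline/attachment
{
  "description": "Extract text and metadata from base64-encoded documents",
  "processors": [
    { "attachment": { "field": "data" } }
  ]
}

PUT my_docs/doc/1?pipeline=attachment
{
  "data": "e1xydGYxXGFuc2kNCkxvcmVtIGlwc3VtIGRvbG9yIHNpdCBhbWV0DQpccGFyIH0="
}
Note that Elasticsearch will not read files from your directory by itself; something outside Elasticsearch (a script, your app, or a tool such as FSCrawler) has to read each file, base64-encode its contents, and send it to the index through the pipeline.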
Let me know if you need something more specific to your needs :)
I have my Logstash configured with the following output:
output {
  elasticsearch {
    hosts => ["http://myhost/elasticsearch"]
  }
}
This is a valid URL, as I can issue curl commands to Elasticsearch with it; for example,
curl "http://myhost/elasticsearch/_cat/indices?v"
which returns my created indices.
However, when Logstash attempts to create a template, it uses the following URL:
http://myhost/_template/logstash
when I would expect it to use
http://myhost/elasticsearch/_template/logstash
It appears that the /elasticsearch portion of my URL is being chopped off. What's going on here? Is "elasticsearch" a reserved word in the URL that gets removed? As far as I can tell, when I issue http://myhost/elasticsearch/elasticsearch, it attempts to find an index named "elasticsearch", which leads me to believe it isn't reserved.
Upon changing the endpoint URL to
http://myhost/myes
Logstash is still attempting to access
http://myhost/_template/logstash
What might be the problem?
EDIT
Both Logstash and Elasticsearch are v5.0.0
You have not specified which version of Logstash you are using. If you are using one of the 2.x versions, you need to use the path => '/myes/' parameter to specify that your ES instance is behind a proxy. In 2.x, the hosts parameter was just a list of hosts, not URIs.
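For example, a minimal sketch of the output section under that assumption (2.x-style configuration, reusing the host from the question) would be:
output {
  elasticsearch {
    hosts => ["myhost"]
    path => "/myes/"
  }
}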
I have a parent/child relationship set up in Elasticsearch. When I try to update the parent, it sometimes works and sometimes doesn't. I believe I've narrowed it down to a missing document error, because I can't figure out how to specify routing using Logstash (parent/child documents must be routed to the same shard). I thought Elasticsearch would do this automatically, given that I have set up the routing path in the mappings, but it only seems to work when I specify the routing parameter in the REST API URL. Logstash, however, doesn't seem to have a way to add that when updating. I'm using the Logstash elasticsearch output plugin with the HTTP protocol.
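For context, updates do work when I specify routing explicitly in the REST URL, e.g. (index, type, and IDs here are placeholders):
curl -XPOST "http://localhost:9200/myindex/child_type/1/_update?routing=parent-1" -d '{"doc": {"status": "updated"}}'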
To add to my confusion, it seems Elasticsearch 1.5 is deprecating the "path" property in mappings.
Is there any way to ensure proper routing with parent/child updates using logstash?