I have a Kubernetes (K8s) cluster on GCP running Elasticsearch, and now I need to create a backup.
I've installed the GCS plugin on the pods in my StatefulSet and tried setting it up with the following documentation:
https://github.com/elastic/elasticsearch/blob/master/docs/plugins/repository-gcs.asciidoc
When I try to configure a repository to use credentials stored in the keystore, I get the following response back:
{
  "error": {
    "root_cause": [
      {
        "type": "repository_exception",
        "reason": "[my_backup] repository type [gcs] does not exist"
      }
    ],
    "type": "repository_exception",
    "reason": "[my_backup] repository type [gcs] does not exist"
  },
  "status": 500
}
Any lead would be helpful, thanks!
I think the problem was that I couldn't install the plugin on the nodes themselves, so I installed it on the pods instead, and that installation is not persistent across pod restarts. To make the installation persist on K8s I would have needed to build a custom image that installs the plugin. A bit tricky, and the plugin seems to be intended for GCE anyway, so I decided to move from K8s to a managed instance group on GCE instead.
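For anyone who does want to stay on K8s: baking the plugin into a custom image is the usual fix for plugins disappearing on restart. A minimal sketch (the base image tag is an assumption, pin it to your cluster's Elasticsearch version, and the same image must be used for every node):

```dockerfile
# Hypothetical custom image that bakes the GCS repository plugin in,
# so it survives pod restarts. Pin the tag to your cluster's version.
FROM docker.elastic.co/elasticsearch/elasticsearch:6.8.23
RUN bin/elasticsearch-plugin install --batch repository-gcs
```

Point the StatefulSet at this image instead of the stock one; after a rolling restart every node should report the plugin, and the "repository type [gcs] does not exist" error goes away.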
Currently I am getting this alert:
Upgrade Required: Your version of Elasticsearch is too old. Kibana requires Elasticsearch 0.90.9 or above.
Can someone tell me how I can find the exact installed version of Elasticsearch?
From a REST client (e.g. the Chrome REST client) make a GET request, or run
curl -XGET 'http://localhost:9200'
in a console. REST client URL: http://localhost:9200
{
  "name": "node",
  "cluster_name": "elasticsearch-cluster",
  "version": {
    "number": "2.3.4",
    "build_hash": "dcxbgvzdfbbhfxbhx",
    "build_timestamp": "2016-06-30T11:24:31Z",
    "build_snapshot": false,
    "lucene_version": "5.5.0"
  },
  "tagline": "You Know, for Search"
}
The number field denotes the Elasticsearch version; here the Elasticsearch version is 2.3.4.
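If you need the version inside a script, you can extract just the number field with standard shell tools. A sketch, parsing a sample response; against a live node, replace the echo with `curl -s 'http://localhost:9200'`:

```shell
# Parse the "number" field out of a root-endpoint response.
# The echo below stands in for: curl -s 'http://localhost:9200'
response='{"name":"node","version":{"number":"2.3.4"}}'
version=$(echo "$response" \
  | grep -o '"number"[^,}]*' \
  | sed 's/.*"\([0-9][^"]*\)"$/\1/')
echo "$version"
```

If jq is available, `curl -s localhost:9200 | jq -r '.version.number'` is a tidier equivalent.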
I would like to add something that isn't mentioned in the answers above.
From your Kibana Dev Tools console, run the following command:
GET /
This is equivalent to accessing localhost:9200 from a browser.
Hope this helps someone.
You can check the version of Elasticsearch with the following command. It returns some other information as well:
curl -XGET 'localhost:9200'
{
  "name" : "Forgotten One",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
    "build_timestamp" : "2016-06-30T11:24:31Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}
Here you can see the version number: 2.3.4
Typically Kibana is installed in /opt/kibana/bin/kibana, so you can get the Kibana version as follows:
/opt/kibana/bin/kibana --version
Navigate to the folder where you installed Kibana. If you used yum to install Kibana, it will be placed in the following location by default:
/usr/share/kibana
Then use the following command:
bin/kibana --version
To check the version of your running Kibana, try this:
Step 1. Start your Kibana service.
Step 2. Open a browser and go to:
localhost:5601
Step 3. Go to Settings -> About.
There you can see the version of your running Kibana.
Another way to do it, on Ubuntu 18.04:
sudo /usr/share/kibana/bin/kibana --version
You can use the Dev Tools console in Kibana to obtain version information about Elasticsearch.
Click "Dev Tools" to open the console, then run the query below:
GET /
You will see the version number, along with other details:
{
  "version" : {
    "number" : "6.5.1",
    ...
  }
}
You can try this: after starting the Elasticsearch service, type the following in your browser:
localhost:9200
It will give output something like this:
{
  "status" : 200,
  "name" : "Hypnotia",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.7.1",
    "build_hash" : "b88f43fc40b0bcd7f173a1f9ee2e97816de80b19",
    "build_timestamp" : "2015-07-29T09:54:16Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
If you have installed X-Pack to secure Elasticsearch, the request should contain valid credentials:
curl -XGET -u "elastic:passwordForElasticUser" 'localhost:9200'
In fact, if security is enabled, all subsequent requests should follow the same pattern (inline credentials must be provided).
If you are logged into Kibana, you can click on the Management tab, which shows your Kibana version. Alternatively, you can click on the small tube-like icon, which also shows the version number.
From the Kibana host, a request to http://localhost:9200/ will not be answered unless Elasticsearch is also running on the same node; Kibana listens on port 5601, not 9200.
In most cases, except for dev environments, Elasticsearch will not be on the same node as Kibana, for a number of reasons.
Therefore, to get information about your Elasticsearch cluster from Kibana, select the "Dev Tools" tab on the left and issue this command in the console: GET /
I have tried to set up Kibana 3 with Elasticsearch and Logstash.
When I go to 127.0.0.1/kibana I get the following error:
Error: Could not contact Elasticsearch at http://127.0.0.1:9200. Please ensure that Elasticsearch is reachable from your system.
And when I check the console log I see the following:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://127.0.0.1:9200/_nodes. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
When I go to the URL http://127.0.0.1:9200 I get the following JSON:
{
  "name" : "Meteor Man",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.1.1",
    "build_hash" : "40e2c53a6b6c2972b3d13846e450e66f4375bd71",
    "build_timestamp" : "2015-12-15T13:05:55Z",
    "build_snapshot" : false,
    "lucene_version" : "5.3.1"
  },
  "tagline" : "You Know, for Search"
}
and at http://127.0.0.1:9200/_nodes I get the following (abridged):
{"cluster_name":"elasticsearch","nodes":{"BKXqqrymQw6lShg5P7_-eA":{"name":"Meteor Man","transport_address":"127.0.0.1:9300","host":"127.0.0.1","ip":"127.0.0.1","version":"2.1.1","build":"40e2c53","http_address":"127.0.0.1:9200","settings":{"client":{"type":"node"},"name":"Meteor Man","pidfile":"/var/run/elasticsearch/elasticsearch.pid","path":{"data":"/var/lib/elasticsearch","home":"/usr/share/elasticsearch","conf":"/etc/elasticsearch","logs":"/var/log/elasticsearch"},"config":{"ignore_system_properties":"true"},"cluster":{"name":"elasticsearch"},"foreground":"false"},"os":{"refresh_interval_in_millis":1000,"name":"Linux","arch":"amd64","version":"3.19.0-25-generic","available_processors":4,"allocated_processors":4},"process":{"refresh_interval_in_millis":1000,"id":10545,"mlockall":false},"jvm":{"pid":10545,"version":"1.7.0_91","vm_name":"OpenJDK 64-Bit Server VM","vm_version":"24.91-b01","vm_vendor":"Oracle Corporation","start_time_in_millis":1453983811248,"mem":{"heap_init_in_bytes":268435456,"heap_max_in_bytes":1038876672,"non_heap_init_in_bytes":24313856,"non_heap_max_in_bytes":224395264,"direct_max_in_bytes":1038876672},"gc_collectors":["ParNew","ConcurrentMarkSweep"],"memory_pools":["Code Cache","Par Eden Space","Par Survivor Space","CMS Old Gen","CMS Perm 
Gen"]},"thread_pool":{"generic":{"type":"cached","keep_alive":"30s","queue_size":-1},"index":{"type":"fixed","min":4,"max":4,"queue_size":200},"fetch_shard_store":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"get":{"type":"fixed","min":4,"max":4,"queue_size":1000},"snapshot":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"force_merge":{"type":"fixed","min":1,"max":1,"queue_size":-1},"suggest":{"type":"fixed","min":4,"max":4,"queue_size":1000},"bulk":{"type":"fixed","min":4,"max":4,"queue_size":50},"warmer":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"flush":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"search":{"type":"fixed","min":7,"max":7,"queue_size":1000},"fetch_shard_started":{"type":"scaling","min":1,"max":8,"keep_alive":"5m","queue_size":-1},"listener":{"type":"fixed","min":2,"max":2,"queue_size":-1},"percolate":{"type":"fixed","min":4,"max":4,"queue_size":1000},"refresh":{"type":"scaling","min":1,"max":2,"keep_alive":"5m","queue_size":-1},"management":{"type":"scaling","min":1,"max":5,"keep_alive":"5m","queue_size":-1}},"transport":{"bound_address":["127.0.0.1:9300","[::1]:9300"],"publish_address":"127.0.0.1:9300","profiles":{}},"http":{"bound_address":["127.0.0.1:9200","[::1]:9200"],"publish_address":"127.0.0.1:9200","max_content_length_in_bytes":104857600},"plugins":[]}}}
You simply need to enable CORS in your elasticsearch.yml configuration file and restart ES; that setting is disabled by default:
http.cors.enabled: true
However, I'm not certain that Kibana 3 will work with ES 2.1.1. You might need to upgrade Kibana for this to work. Try changing the above setting and see if it helps; if not, upgrade Kibana to the latest release.
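For what it's worth, on ES 2.x enabling CORS alone may not be enough: as far as I know the allowed origins must also be whitelisted explicitly, since 2.x no longer defaults to allowing all origins. A sketch for elasticsearch.yml (tighten the origin pattern for anything beyond local testing):

```yaml
# elasticsearch.yml -- CORS settings; ES 2.x requires an explicit
# allow-origin in addition to enabling CORS
http.cors.enabled: true
http.cors.allow-origin: "*"
```

Restart the node after changing these settings.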
I have successfully installed both the license plugin and the Shield plugin on my client nodes. The logs show it starting correctly, and I am able to authenticate using the credentials I supplied. However, when I connect I am getting a 503 error. I went back through the docs to see if I missed something, but I don't see anything about configuring the data nodes after enabling Shield. What am I missing?
{
  "status" : 503,
  "name" : "Vertigo",
  "cluster_name" : "cluster01",
  "version" : {
    "number" : "1.7.2",
    "build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
    "build_timestamp" : "2015-09-14T09:49:53Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
From the client logs:
2015-10-28 03:14:52,235][INFO ][io.fabric8.elasticsearch.discovery.k8s.K8sDiscovery] [Vertigo] failed to send join request to master [[Abominatrix][T6zFRQO7RG-thZmOWVk2Xw][es-master-e6mj9][inet[/10.244.85.2:9300]]{data=false, master=true}], reason [RemoteTransportException[[Abominatrix][inet[/10.244.85.2:9300]][internal:discovery/zen/join]]; nested: RemoteTransportException[Failed to deserialize exception response from stream]; nested: TransportSerializationException[Failed to deserialize exception response from stream]; nested: InvalidClassException[failed to read class descriptor]; nested: ClassNotFoundException[org.elasticsearch.shield.authc.AuthenticationException]; ]
Andrei,
I figured it out. Since I am running containers that separate the master, data, and client nodes, I had only installed the plugin on the client nodes. Once I installed the plugin on the master and data nodes, uploaded the image to Docker Hub, and rebuilt the cluster, it all started working.
Thanks
-winn
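For later readers, a minimal sketch of what such a shared image might look like. The base image tag and plugin coordinates are assumptions for an ES 1.7-era cluster; adjust them to your setup:

```dockerfile
# Hypothetical image used for master, data, AND client nodes alike,
# so every node has the same license and shield plugins installed.
FROM elasticsearch:1.7.2
RUN bin/plugin -i elasticsearch/license/latest \
 && bin/plugin -i elasticsearch/shield/latest
```

The point is that Shield must be present on every node in the cluster, not just the ones clients talk to; otherwise nodes fail to deserialize each other's authentication exceptions, as in the join failure above.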
I'm intending to fix bugs on the Elasticsearch open-source project. I forked it and cloned the forked copy, then imported it as a Maven project in Eclipse and did a Maven build. So far so good.
I opened the ElasticsearchF.java file and tried to run it as a Java application (as per the directions at http://www.lindstromhenrik.com/debugging-elasticsearch-in-eclipse/).
But I get an IllegalStateException saying path.home is not set for Elasticsearch.
My questions are:
Why is this error coming up in the first place?
As I said, I want to fix bugs in the Elasticsearch project. Is this the right way to set up an environment for that goal? Or should I have a client send requests to the Elasticsearch server and then set up debug points in the Elasticsearch source code? How do I achieve this?
Thanks for your patience.
Update:
I added the VM argument mentioned by one of the answerers.
It now throws different errors, and I'm clueless about why:
java.io.IOException: Resource not found: "org/joda/time/tz/data/ZoneInfoMap" ClassLoader: sun.misc.Launcher$AppClassLoader@29578426
at org.joda.time.tz.ZoneInfoProvider.openResource(ZoneInfoProvider.java:210)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:127)
at org.joda.time.tz.ZoneInfoProvider.<init>(ZoneInfoProvider.java:86)
at org.joda.time.DateTimeZone.getDefaultProvider(DateTimeZone.java:514)
at org.joda.time.DateTimeZone.getProvider(DateTimeZone.java:413)
at org.joda.time.DateTimeZone.forID(DateTimeZone.java:216)
at org.joda.time.DateTimeZone.getDefault(DateTimeZone.java:151)
at org.joda.time.chrono.ISOChronology.getInstance(ISOChronology.java:79)
at org.joda.time.DateTimeUtils.getChronology(DateTimeUtils.java:266)
at org.joda.time.format.DateTimeFormatter.selectChronology(DateTimeFormatter.java:968)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:672)
at org.joda.time.format.DateTimeFormatter.printTo(DateTimeFormatter.java:560)
at org.joda.time.format.DateTimeFormatter.print(DateTimeFormatter.java:644)
at org.elasticsearch.Build.<clinit>(Build.java:53)
at org.elasticsearch.node.Node.<init>(Node.java:138)
at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:157)
at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:177)
at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:278)
at org.elasticsearch.bootstrap.ElasticsearchF.main(ElasticsearchF.java:30)
[2015-06-16 18:51:36,892][INFO ][node ] [Kismet Deadly] version[2.0.0-SNAPSHOT], pid[2516], build[9b833fd/2015-06-15T03:38:40Z]
[2015-06-16 18:51:36,892][INFO ][node ] [Kismet Deadly] initializing ...
[2015-06-16 18:51:36,899][INFO ][plugins ] [Kismet Deadly] loaded [], sites []
{2.0.0-SNAPSHOT}: Initialization Failed ...
- ExceptionInInitializerError
IllegalArgumentException[An SPI class of type org.apache.lucene.codecs.PostingsFormat with name 'Lucene50' does not exist. You need to add the corresponding JAR file supporting this SPI to your classpath. The current classpath supports the following names: [es090, completion090, XBloomFilter]]
I got help from the developer community at https://github.com/elastic/elasticsearch/issues/12737 and was able to debug it.
The procedure, in short:
1) Search for the file Elasticsearch.java/ElasticsearchF.java inside the package org.elasticsearch.bootstrap.
2) Right-click -> Run Configurations...
3) In the window that pops up, click the "Arguments" tab. Under the "Program arguments:" section give the value start, and under the "VM arguments:" section give the value:
-Des.path.home={path to your elasticsearch code base root folder}/core -Des.security.manager.enabled=false
4) Click "Apply" and then click "Run".
It runs now.
To check, go to localhost:9200 and you will get a message something like:
{
  "name" : "Raza",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.0.0-beta1",
    "build_hash" : "${buildNumber}",
    "build_timestamp" : "NA",
    "build_snapshot" : true,
    "lucene_version" : "5.2.1"
  },
  "tagline" : "You Know, for Search"
}
For more info on the arguments, see: https://github.com/elastic/elasticsearch/commit/2b9ef26006c0e4608110164480b8127dffb9d6ad
Edit your debug/run configuration and put this in the VM arguments:
-Des.path.home=C:\github\elasticsearch\
Change C:\github\elasticsearch\ to your Elasticsearch root path.
The reason is that some arguments set in elasticsearch.bat are missing when you debug/run it in Eclipse.