I'm trying to run Elasticsearch and Kibana via Docker, and I'm getting errors with Kibana.
I'm using Elasticsearch and Kibana version 7.6.2 on Ubuntu 18.04.6 LTS.
I run Elasticsearch with the following command:
docker run -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
And it seems that Elasticsearch is up (I can bulk documents and get information about the index from Python code).
I'm running Kibana with the following commands:
docker network create elastic
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://127.0.0.1:9200" docker.elastic.co/kibana/kibana:7.6.2
I see the following message in the web browser: Kibana server is not ready yet
And I see the following logs in the console:
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["info","savedobjects-service"],"pid":7,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nHEAD http://127.0.0.1:9200/.apm-agent-configuration => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","data"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_xpack => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["error","elasticsearch","admin"],"pid":7,"message":"Request error, retrying\nGET http://127.0.0.1:9200/_nodes?filter_path=nodes.*.version%2Cnodes.*.http.publish_address%2Cnodes.*.ip => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
Could not create APM Agent configuration: No Living connections
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"Unable to revive connection: http://127.0.0.1:9200/"}
{"type":"log","#timestamp":"2022-05-22T06:45:20Z","tags":["warning","elasticsearch","data"],"pid":7,"message":"No living connections"}
How can I run Kibana via Docker?
Did you try enrolling Kibana to your Elasticsearch cluster?
The enrollment token is valid for 30 minutes. If you need to generate a new enrollment token, run the elasticsearch-create-enrollment-token tool on your existing node. This tool is available in the Elasticsearch bin directory of the Docker container.
For example, run the following command on the existing es01 node to generate an enrollment token for new nodes to be added:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s node
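If you need a token for enrolling Kibana rather than a new node, a variant of the same command (assuming the container is named es01 and your Elasticsearch version ships the enrollment-token tool) would be:
docker exec -it es01 /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana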
When you start Kibana, a unique link is output to your terminal.
To access Kibana, click the generated link in your terminal.
Then in your browser, paste the enrollment token that you copied when starting Elasticsearch and click the button to connect your Kibana instance with Elasticsearch.
Log in to Kibana as the elastic user with the password that was generated when you started Elasticsearch.
More details here
You've created a docker network for the Kibana container, but the Elastic container is not joined to it. Since you can access Elastic from your localhost:9200, there is no need to use the elastic network for the Kibana container.
Update the Kibana docker run command to docker run -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://host.docker.internal:9200" docker.elastic.co/kibana/kibana:7.6.2
This removes the join to the elastic network, and updates the ELASTICSEARCH_HOSTS environment variable so that it points at the host machine's localhost instead of the Kibana container's own localhost.
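Alternatively, if you want to keep the elastic network, a sketch would be to attach the Elasticsearch container to it as well and point Kibana at the container by name (the name es01 below is an assumption, not something from your original commands):
docker network create elastic
docker run --net elastic --name es01 -p 127.0.0.1:9200:9200 -p 127.0.0.1:9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:7.6.2
docker run --net elastic -p 127.0.0.1:5601:5601 -e "ELASTICSEARCH_HOSTS=http://es01:9200" docker.elastic.co/kibana/kibana:7.6.2
Containers on the same user-defined network can resolve each other by container name, so Kibana reaches Elasticsearch without going through the host.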
I have installed the below software on Windows 10:
1. Cassandra version 2.2.15
2. Docker version 19.03.2
3. Minikube version v1.4.0
4. Helm client and server v2.15.0-rc.1
5. kubectl client v1.14.6 and server v1.16.0
I am running the Cassandra DB on localhost port 9042. I want to connect that local DB to my Minikube Helm deployment using the IP address or hostname of Cassandra, but I am not able to.
I am able to connect the local Cassandra DB to the Minikube cluster; you need to change the rpc_address in cassandra.yaml:
rpc_address: <LOCAL_WINDOWS_IPV4_ADDR>
broadcast_rpc_address: 1.2.3.4
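After restarting Cassandra with that change, a quick way to verify the connection from inside the cluster (assuming <LOCAL_WINDOWS_IPV4_ADDR> is your Windows host's IPv4 address and the default CQL port 9042) is a throwaway cqlsh pod:
kubectl run cqlsh-test --rm -it --image=cassandra:2.2 --restart=Never -- cqlsh <LOCAL_WINDOWS_IPV4_ADDR> 9042
If cqlsh connects, your Helm chart's Cassandra contact point can be set to the same address and port.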
I have installed a Kubernetes Elasticsearch (v. 7.0.1) environment with a Deployment and a Service of type NodePort running on minikube. When I run kubectl get services, I get the relevant line:
elasticsearch NodePort 10.101.5.85 <none> 9200:31066/TCP 27m
If I do
$ curl http://$(minikube ip):31066
I get the usual Elasticsearch page. If, however, I do
root@webapp-5489d8d6fd-2ml2w:/# curl http://localhost:9200
as root of a webapp pod on the same cluster, I get an error:
curl: (7) Failed to connect to localhost port 9200: Connection refused
Can anyone hint at the reason for my problem?
First of all, your elasticsearch Service is of type NodePort with ports 9200:31066/TCP.
That means the Service exposes Elasticsearch on port 9200 inside the cluster, and on NodePort 31066 on the node.
1) curl http://$(minikube ip):31066
The minikube IP is your node IP. You can verify this with $ kubectl describe node. So if you use port 31066, it connects correctly.
2) curl http://localhost:9200
You did not provide any information about other Deployments or pods, so I assume you have an Elasticsearch Deployment with a pod.
If you execute $ curl http://localhost:9200 inside the Elasticsearch container, it will work, because Elasticsearch is running locally inside that container.
If you want to curl from another (non-Elasticsearch) pod, you have to use the Service you created, together with the Elasticsearch port:
$ curl elasticsearch:9200 or $ curl 10.101.5.85:9200
From other containers you can also curl using the node IP with the NodePort:
$ curl $(minikube ip):31066, the same as in point 1.
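For completeness, a minimal check from the webapp pod mentioned in the question (assuming the elasticsearch Service lives in the default namespace) could use the cluster DNS name:
kubectl exec -it webapp-5489d8d6fd-2ml2w -- curl http://elasticsearch.default.svc.cluster.local:9200
The short name elasticsearch resolves only from pods in the same namespace; the fully qualified name works from anywhere in the cluster.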
Useful links:
https://gardener.cloud/050-tutorials/content/howto/service-access/
Hope it helps!
From my Spring Boot application deployed as a Docker container, I need to be able to access the parent node (within a Swarm) by some kind of Docker-exposed DNS name so that I can add it to a configuration file. I was wondering if there is a DNS name exposed automatically by Docker for this purpose.
Another container scheduled on the same host, running with "network_mode: host", is running Consul and advertising on port 8500 (the host's real IP is 192.168.1.233).
If I run netstat on the Swarm node (ip 192.168.1.233) I can see it's listening on port 8500:
# netstat -anp | grep 8500
tcp6 0 0 :::8500 :::* LISTEN 25010/docker-proxy
I want to be able to define a connection string as configuration in the Spring Boot app on the Swarm node, pointing at the local Consul instance scheduled on the same physical host as the Spring Boot app.
If I refer to "localhost" within the Spring Boot container, then there's nothing listening on port 8500, proven by running this after shelling into the Spring Boot container (using a sample Consul API call):
# wget http://localhost:8500/v1/status/leader
Connecting to localhost:8500 (127.0.0.1:8500)
wget: can't connect to remote host (127.0.0.1): Connection refused
I've also tried using "host.docker.internal" but that doesn't work:
# ping host.docker.internal
ping: bad address 'host.docker.internal'
The hosts are all Centos 7 hosts, and Docker version 18.09.1, build 4c52b90.
The firewalld service is disabled on all hosts.
I'm trying SSH tunneling for the first time, so I'm hoping for some guidance (with explanation) on setting up an SSH tunnel so that I can connect from my Windows client machine to things like Elasticsearch and MongoDB residing on an AWS EC2 Windows Server.
Here is how you can make a tunnel to the server for MongoDB:
ssh -L 9999:127.0.0.1:27017 user@serverip -NnT
Now you are able to access your remote MongoDB through the tunnel on port 9999, so you can connect to the MongoDB server locally like this:
mongo --host 127.0.0.1 --port 9999
In the same way, you can also create a tunnel for Elasticsearch by specifying the Elasticsearch port, like below:
ssh -L 9200:127.0.0.1:9200 user@serverip -NnT
I don't have more knowledge of accessing Elasticsearch through a tunnel, but this might help.
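As a quick sanity check (assuming the Elasticsearch tunnel above is running and Elasticsearch on the server listens on 127.0.0.1:9200), you should be able to query it from your local machine:
curl http://127.0.0.1:9200
A JSON response with the cluster name and version confirms the tunnel is working.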