I am new to Consul. In my case I have three servers, all in a running state.
When I check the leader information using the URL http://localhost:8500/v1/status/leader, I get the correct information:
"192.168.10.7:8300"
My Consul\data\raft directory has the following information.
I found some answers on Stack Overflow, but they didn't help me.
I also tried the following option:
-bootstrap-expect=3
but it shows the error given below.
Error Log
Consul request failed with status [500]: No cluster leader
I'm totally stuck. How can I fix this issue?
Use docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node1 -h node1 progrium/consul -server -bootstrap-expect 3
Since we set -bootstrap-expect 3, Consul waits for three server peers to connect first, and only then bootstraps the cluster and elects a leader.
1. docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node1 -h node1 progrium/consul -server -bootstrap-expect 3
docker inspect -f '{{.NetworkSettings.IPAddress}}' node1
Use the inspected IP as the join address in the next three commands.
2. docker run -d --name node2 -h node2 progrium/consul -server -join 172.17.0.2
3. docker run -d --name node3 -h node3 progrium/consul -server -join 172.17.0.2
4. docker run -d --name node4 -h node4 progrium/consul -server -join 172.17.0.2
Now you can start your service; it will connect to Consul.
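Once all three servers have joined, you can confirm that a leader was elected by checking membership and the leader endpoint (the same endpoint used in the question):
docker exec -t node1 consul members
curl http://localhost:8500/v1/status/leader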
Explanation:
As the docs say, before a Consul cluster can begin to service requests, a server node must be elected leader. That is the reason for the exception on startup of your Spring Boot service: the leader had not been elected yet.
Why had the leader not been elected? The servers involved in the cluster must first be bootstrapped, and the servers can be bootstrapped using the
-bootstrap-expect configuration option, which is the recommended approach.
Note: just for testing/learning purposes you can go ahead and create a single server, but a single-server deployment is highly discouraged in production, as data loss is inevitable in a failure scenario.
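For testing only, a minimal single-node sketch (assuming the same progrium/consul image; -bootstrap makes the lone server elect itself leader immediately):
docker run -d -p 8400:8400 -p 8500:8500 -p 8600:53/udp --name node1 -h node1 progrium/consul -server -bootstrap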
I have set up and installed IBM Cloud Private CE with two Ubuntu images in VirtualBox. I can SSH into both images and from each one SSH into the other. The ICP dashboard shows only one active node; I was expecting two.
I explicitly ran this command (as the root user on the master node):
docker run -e LICENSE=accept --net=host \
-v "$(pwd)":/installer/cluster \
ibmcom/cfc-installer install -l \
192.168.27.101
The result of this command seemed to be a successful addition of the worker node:
PLAY RECAP *********************************************************************
192.168.27.101 : ok=45 changed=11 unreachable=0 failed=0
But still the worker node isn't showing in the dashboard.
What should I be checking to ensure the worker node will work for the master node?
If you're using Vagrant to configure IBM Cloud Private, I'd highly recommend trying https://github.com/IBM/deploy-ibm-cloud-private
The project will use a Vagrantfile to configure a master/proxy and then provision 2 workers within the image using LXD. You'll get better density and performance on your laptop with this configuration over running two full Virtual Box images (1 for master/proxy, 1 for the worker).
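Getting started is roughly the following (a sketch, assuming Vagrant and VirtualBox are already installed; see the project README for specifics):
git clone https://github.com/IBM/deploy-ibm-cloud-private.git
cd deploy-ibm-cloud-private
vagrant up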
You can check on your worker node with the following steps:
1. Check cluster node status:
kubectl get nodes
This shows the status of the newly added worker node.
2. If it's NotReady, check the kubelet log for an error message about why kubelet is not running properly:
ICP 2.1:
systemctl status kubelet
ICP 1.2:
docker ps -a | grep kubelet   (to get the kubelet container ID)
docker logs kubelet_containerid
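A NotReady condition can also be inspected from the master with kubectl describe (the node name is a placeholder; use whatever kubectl get nodes reported):
kubectl describe node <worker-node-name>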
Run this to get kubectl working:
ln -sf /opt/kubernetes/hyperkube /usr/local/bin/kubectl
Then run the commands below on the master node to identify any failed pods in the setup.
Get details of the pods running in the environment:
kubectl -n kube-system get pods -o wide
To restart any failed ICP pods (this greps for pods whose READY column shows 0/N and deletes them so their controllers recreate them):
txt="0/";ns="kube-system";type="pods"; kubectl -n $ns get $type | grep "$txt" | awk '{ print $1 }' | xargs kubectl -n $ns delete $type
Now run:
kubectl cluster-info
kubectl get nodes
Then check the cluster info reported by kubectl.
Check whether kubectl version points you at https://localhost:8080 or https://masternodeip:8001, and run:
kubectl cluster-info
Do you get output?
If not, log in to https://masternodeip:8443 using the admin login, copy the client CLI settings by clicking on admin in the panel ("Configure client"), and paste them into your master node (a sketch of what that snippet looks like follows below). Then run:
kubectl cluster-info
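For reference, the copied "Configure client" snippet typically looks something like the following; the cluster name cfc, the address masternodeip, and the token are placeholders, not values from this setup:
# placeholders: substitute the exact values shown in your ICP dashboard
kubectl config set-cluster cfc --server=https://masternodeip:8001 --insecure-skip-tls-verify=true
kubectl config set-context cfc --cluster=cfc
kubectl config set-credentials admin --token=<token-from-dashboard>
kubectl config set-context cfc --user=admin --namespace=default
kubectl config use-context cfc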
Suppose I have 3 containers running on a single host and we are building a Hadoop cluster:
1 is the master and the other 2 are slaves (NameNode and DataNodes).
And we need to map ports:
docker run -itd -p 50070:50070 --name master centos:bigdata
docker run -itd -p 50075:50075 -p 50010:50010 --name slave1 centos:bigdata
Now ports 50075, 50010, and 50070 are busy on the host, so we cannot map them for slave2.
And if we do some random mapping like
docker run -p 123:50075 -p 234:50010 --name slave2 centos:bigdata
then the containers won't be able to communicate and it won't work.
So, can flannel solve this problem?
I have a problem similar to How to access externally to consul UI but I can't get the combinations of network options to work right.
I'm on OSX using Docker for Mac, not the old docker-machine stuff, and the official consul docker image, not the progrium/docker image.
I can start up a 3-node server cluster fine using
docker run -d --name node1 -h node1 consul agent -server -bootstrap-expect 3
JOIN_IP="$(docker inspect -f '{{.NetworkSettings.IPAddress}}' node1)"
docker run -d --name node2 -h node2 consul agent -server -join $JOIN_IP
docker run -d --name node3 -h node3 consul agent -server -join $JOIN_IP
So far so good, they're connected to each other and working fine. Now I want to start an agent, and view the UI via it.
I tried a bunch of combinations of -client and -bind, which seem to be the key to all of this. Using
docker run -d -p 8500:8500 --name node4 -h node4 consul agent -join $JOIN_IP -ui -client=0.0.0.0 -bind=127.0.0.1
I can get the UI via http://localhost:8500/ui/, and consul members shows all the nodes:
docker exec -t node4 consul members
Node Address Status Type Build Protocol DC
node1 172.17.0.2:8301 alive server 0.7.1 2 dc1
node2 172.17.0.3:8301 alive server 0.7.1 2 dc1
node3 172.17.0.4:8301 alive server 0.7.1 2 dc1
node4 127.0.0.1:8301 alive client 0.7.1 2 dc1
But all is not well; in the UI it tells me node4 is "Agent not live or unreachable" and in its logs there's a whole bunch of
2016/12/19 18:18:13 [ERR] memberlist: Failed to send ping: write udp 127.0.0.1:8301->172.17.0.4:8301: sendto: invalid argument
I've tried a bunch of other combinations; --net=host just borks things up on OSX.
If I try -bind with my box's external IP, it won't start:
Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Failed to start TCP listener. Err: listen tcp 192.168.1.5:8301: bind: cannot assign requested address
I also tried mapping all the other ports including the udp ports (-p 8500:8500 -p 8600:8600 -p 8400:8400 -p 8300-8302:8300-8302 -p 8600:8600/udp -p 8301-8302:8301-8302/udp) but that didn't change anything.
How can I join a node up to this cluster and view the UI?
Try using the 0.7.2 release of Consul and start the agent using the following (beta as of 0.7.2, final by 0.8.0) syntax:
$ docker run -d -p 8500:8500 --name node4 -h node4 consul agent -join $JOIN_IP -ui -client=0.0.0.0 -bind='{{ GetPrivateIP }}'
The change is the argument to -bind, which Consul will now render out to a private IP address on the node. The other template parameters are documented in hashicorp/go-sockaddr.
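You can then confirm that node4 advertises its container IP instead of 127.0.0.1 using the same command as before:
docker exec -t node4 consul members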
I have a Spring Boot application which communicates with Elasticsearch 5.0.0 alpha 2.
My application successfully communicates with Elasticsearch and performs several queries.
When I try to dockerize my application, it fails to communicate with Elasticsearch, and I get the following error:
None of the configured nodes are available: [{#transport#-1}{127.0.0.1}{127.0.0.1:9300}]
I have spent a lot of time searching the internet, but I only found problems where Elasticsearch itself is dockerized; in my case the client is dockerized, and it works fine without Docker.
The command I used to create the docker image is: docker build -t my-service .
The Dockerfile is:
FROM java:8
VOLUME /tmp
ADD ./build/libs/myjarfile-2.0.0.jar app.jar
EXPOSE 8090
RUN sh -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
To run the image I use: docker run --name myname -d -p 8090:8090 -t my-service
Can someone share his/her experience with this issue?
Thanks
Guy Hudara
The problem is that your Elasticsearch is not reachable from your dockerized application. When you put something in a Docker container, it is also isolated at the network layer, and localhost is the localhost of the Docker container, not of the host itself. Therefore, if Elasticsearch is also in a Docker container, use container linking with environment-variable injection, or point your app at the address of your host machine's main network interface (not loopback).
Option 1
Assuming that Elasticsearch exposes 9200, try running the following:
$ docker run -d --name=elasticsearch elasticsearch
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -p 8090:8090 my-app
Then you can define the Elasticsearch address in your app using the env variable ${ELASTICSEARCH_PORT_9200_TCP_ADDR}.
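For example, if the app reads its Elasticsearch address from Spring configuration, one sketch (assuming the app uses Spring Data Elasticsearch's spring.data.elasticsearch.cluster-nodes property and the transport port 9300) is to pass the linked container's alias at run time:
# assumption: SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES maps to spring.data.elasticsearch.cluster-nodes via Spring Boot's relaxed binding
$ docker run -d --name=my-app --link elasticsearch:elasticsearch -e SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=elasticsearch:9300 -p 8090:8090 my-app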
Option 2
Assuming your host machine's address is 192.168.1.10, you can also do the following:
$ docker run -d -p 9200:9200 elasticsearch
$ docker run -d -p 8090:8090 my-app
Note that naming the Elasticsearch container is optional here, but publishing the Elasticsearch port is mandatory. In this case you'll have to configure the Elasticsearch host in your app as 192.168.1.10.
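A sketch of Option 2 with the address injected the same way (same assumed Spring property as in the sketch above):
$ docker run -d -p 8090:8090 -e SPRING_DATA_ELASTICSEARCH_CLUSTER_NODES=192.168.1.10:9300 my-app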
Can't access resource manager web ui - Spark docker container - Mac PC
These are the steps that I did:
docker pull sequenceiq/spark:1.6.0
docker run -it -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0 bash
I tested using this (runs fine):
run the spark shell
spark-shell \
--master yarn-client \
--driver-memory 1g \
--executor-memory 1g \
--executor-cores 1
execute the following command, which should return 1000:
scala> sc.parallelize(1 to 1000).count()
But I can't access the web UI.
I tried:
a. :8088
b. http://sandbox:8088/proxy/application_1458858022274_0002/A
c. localhost:8088
Nothing works. Any help?
Thanks in advance!!
You need to expose the ports before publishing them. Either EXPOSE 8088 8042 4040 in the Dockerfile or --expose 8088 --expose 8042 --expose 4040 in your run command (-e sets an environment variable, not an exposed port). Expose functionality is separate from publish/host-mapping functionality because there are cases where you want to expose a port to other containers without mapping it to the host.
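For instance, a run command combining expose and publish for the image from the question might look like this (a sketch; on newer Docker versions -p alone typically suffices, as publishing a port does not require it to be EXPOSEd):
docker run -it --expose 8088 --expose 8042 --expose 4040 -p 8088:8088 -p 8042:8042 -p 4040:4040 -h sandbox sequenceiq/spark:1.6.0 bash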