How to set up a single-node Consul server/client?

What configuration is required to achieve this?
It's possible using the "development mode" described at https://learn.hashicorp.com/consul/getting-started/agent (but that mode is not recommended for production).
I've tried setting this up, but I'm not sure how to set the client config. What I've tried is a config of:
{
  "data_dir": "/tmp2/consul-client",
  "log_level": "INFO",
  "server": false,
  "node_name": "master",
  "addresses": {
    "https": "127.0.0.1"
  },
  "bind_addr": "127.0.0.1"
}
Running the agent with this config fails:
consul agent -config-file=client.json
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Could not set up network transport: failed to obtain an address: Failed to start TCP listener on "127.0.0.1" port 8301: listen tcp 127.0.0.1:8301: bind: address already in use

No "client" agent is required to run for an operational Consul cluster.
I had to set this server / master with the bootstrap_expect set to 1(number of nodes for boostrap process):
{
  "retry_join": ["127.0.0.1"],
  "data_dir": "/tmp2/consul",
  "log_level": "INFO",
  "server": true,
  "node_name": "master",
  "addresses": {
    "https": "127.0.0.1"
  },
  "bind_addr": "127.0.0.1",
  "ui": true,
  "bootstrap_expect": 1
}
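If you nevertheless want a client agent running alongside the server on the same address, the port clash can be avoided by remapping the client's listening ports. A minimal, untested sketch; the node name and port numbers below are arbitrary examples, not values from the question:
{
  "data_dir": "/tmp2/consul-client",
  "log_level": "INFO",
  "server": false,
  "node_name": "client-1",
  "bind_addr": "127.0.0.1",
  "retry_join": ["127.0.0.1"],
  "ports": {
    "serf_lan": 8311,
    "http": 8510,
    "dns": 8610
  }
}
Each agent needs a unique node_name, and every port both agents would otherwise share (Serf LAN, HTTP, DNS) must differ on one of them; retry_join still targets the server's default Serf LAN port 8301. Afterwards, consul members run against either agent should list both nodes.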

Consul not showing services registered in different node

I have a cluster of 3 Consul server nodes. I registered one service (FooService) with one of the servers (Server1). When I check the registered services over HTTP (/v1/agent/services) on Server1, it shows up correctly. But when I try the same on either of the other servers (Server2/Server3), the registered service is not listed. This does not happen for the KV store. Can someone suggest a fix?
consul version : 1.2.1
I have pasted my configuration below
{
  "bootstrap_expect": 3,
  "client_addr": "0.0.0.0",
  "datacenter": "DC1",
  "data_dir": "/var/consul",
  "domain": "consul",
  "enable_script_checks": true,
  "dns_config": {
    "enable_truncate": true,
    "only_passing": true
  },
  "enable_syslog": true,
  "encrypt": "3scwcXQpgNVo1CZuqlSouA==",
  "leave_on_terminate": true,
  "log_level": "INFO",
  "rejoin_after_leave": true,
  "server": true,
  "start_join": [
    "10.0.0.242",
    "10.0.0.243",
    "10.0.0.244"
  ],
  "ui": true
}
What I understood is that /v1/agent/services is agent-local: it only returns services registered with that particular agent, whereas the KV store is replicated across all servers. A Spring Boot app should always register with and query its local Consul agent; for a cluster-wide view of services, use the catalog API instead. Then this issue will not occur.
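To illustrate the difference against a running agent (assuming the default HTTP port 8500):
# Agent-local view: only services registered with this particular agent
curl http://127.0.0.1:8500/v1/agent/services

# Cluster-wide view: the replicated catalog, consistent across servers
curl http://127.0.0.1:8500/v1/catalog/services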

Marathon: How to specify environment variables in args

I am trying to run a Consul container on each of my Mesos slave nodes.
With Marathon, I have the following JSON app definition:
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "consul",
      "privileged": true,
      "network": "HOST"
    }
  },
  "args": ["agent", "-bind", "$MESOS_SLAVE_IP", "-retry-join", "$MESOS_MASTER_IP"]
}
However, it seems that Marathon treats the args as plain text.
That's why I always get errors:
==> Starting Consul agent...
==> Error starting agent: Failed to start Consul client: Failed to start lan serf: Failed to create memberlist: Failed to parse advertise address!
So I wonder if there is any workaround so that I can start a Consul container on each of my Mesos slave nodes.
Update:
Thanks @janisz for the link.
After taking a look at the following discussions:
#3416: args in marathon file does not resolve env variables
#2679: Ability to specify the value of the hostname an app task is running on
#1328: Specify environment variables in the config to be used on each host through REST API
#1828: Support for more variables and variable expansion in app definition
as well as the Marathon documentation on Task Environment Variables.
My understanding is that:
Currently it is not possible to reference environment variables in args.
Some posts indicate that one can reference environment variables in "cmd", but those are the Task Environment Variables provided by Marathon, not the environment variables of the host machine.
Please correct me if I am wrong.
You can try one of the following.
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "consul",
      "privileged": true,
      "network": "HOST",
      "parameters": [
        {
          "key": "env",
          "value": "YOUR_ENV_VAR=VALUE"
        }
      ]
    }
  }
}
Or
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "consul",
      "privileged": true,
      "network": "HOST"
    }
  },
  "env": {
    "ENV_NAME": "VALUE"
  }
}
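A third, untested variant builds on the observation above that Marathon's Task Environment Variables are available in "cmd", which is run through a shell, so a Marathon-provided variable such as $HOST can be interpolated there. This sketch assumes $HOST expands to an address Consul accepts for -bind, and the -retry-join address is a hypothetical placeholder:
{
  "id": "consul-agent",
  "instances": 10,
  "constraints": [["hostname", "UNIQUE"]],
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "consul",
      "privileged": true,
      "network": "HOST"
    }
  },
  "cmd": "consul agent -bind $HOST -retry-join 10.0.0.1"
}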

Gossip encryption not working as expected

I created a gossip encryption key using the command below:
$ consul keygen
G74SM8N9NUc4meaHfA7CFg==
Then, I bootstrapped the server with the following config.json:
{
  "server": true,
  "datacenter": "consul",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "disable_update_check": true,
  "client_addr": "0.0.0.0",
  "bootstrap": true,
  "leave_on_terminate": true,
  "encrypt": "G74SM8N9NUc4meaHfA7CFg=="
}
The output of the bootstrap server is as follows:
Node name: 'abcd'
Datacenter: 'consul'
Server: true (bootstrap: true)
Client Addr: 0.0.0.0 (HTTP: 8500, HTTPS: -1, DNS: 8600, RPC: 8400)
Cluster Addr: x.x.x.x (LAN: 8301, WAN: 8302)
Gossip encrypt: true, RPC-TLS: false, TLS-Incoming: false
Atlas: <disabled>
Then, I added a new server as a regular consul server which has the following config.json:
{
  "server": true,
  "datacenter": "consul",
  "data_dir": "/var/consul",
  "log_level": "INFO",
  "enable_syslog": true,
  "disable_update_check": true,
  "client_addr": "0.0.0.0",
  "bootstrap": false,
  "leave_on_terminate": true,
  "ui_dir": "/usr/local/bin/consul_ui",
  "check_update_interval": "0s",
  "ports": {
    "dns": 8600,
    "http": 8500,
    "https": 8700,
    "rpc": 8400,
    "serf_lan": 8301,
    "serf_wan": 8302,
    "server": 8300
  },
  "dns_config": {
    "allow_stale": true,
    "enable_truncate": true,
    "only_passing": true,
    "max_stale": "02s",
    "node_ttl": "30s",
    "service_ttl": {
      "*": "10s"
    }
  },
  "advertise_addr": "y.y.y.y",
  "encrypt": "G74SM8N9NUc4meaHfA7CFg==",
  "retry_join": [
    "x.x.x.x",
    "y.y.y.y"
  ]
}
Note: here, x.x.x.x is the IP address of the bootstrap server and y.y.y.y is the IP address of the regular server.
For testing purposes, I changed the encrypt key on one of the servers. Yet when I run consul members, I can still see all the IPs, which means the servers are still able to communicate even with different encrypt keys. It seems that gossip encryption is not working.
A Consul agent caches the initial key and reuses it; it is stored under the data directory in the serf folder, in the file local.keyring.
This is counter-intuitive, but it is documented at least in one place together with the encrypt option.
You'll need to delete this file and restart Consul in order to get the expected behaviour.
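A sketch of the fix, assuming the data_dir of /var/consul from the configs above and a systemd-managed agent (adjust the restart command to your setup); repeat on each server that should pick up the new key:
# The cached gossip key lives under the agent's data directory
cat /var/consul/serf/local.keyring

# Delete it and restart so the encrypt value from config.json takes effect
rm /var/consul/serf/local.keyring
systemctl restart consul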

Ansible: how to start a Consul cluster

I have an Ansible playbook to configure a Consul cluster with 3 servers (1 bootstrap) and 3 clients.
First, I want to run the bootstrap agent; this is the console command:
vagrant@172.16.8.191$ consul agent -config-dir /etc/consul.d/bootstrap
Then, while the bootstrap agent is running, I want to start Consul on the other servers of the cluster. I have the following in Ansible:
- name: start consul
  service: name=consul state=restarted enabled=yes
My problem is how to stop the bootstrap agent, started with the command below, using Ansible:
consul agent -config-dir /etc/consul.d/bootstrap
If there is another way to start a Consul cluster with Ansible, I'd be thrilled to hear it.
Thanks,
Solution:
I changed my Consul config on clients and servers to auto-form the cluster, so when the node machines start, Consul starts automatically and the cluster assembles itself.
To do this, I use the following configuration:
Client:
{
  "bind_addr": "172.16.8.194",
  "client_addr": "0.0.0.0",
  "server": false,
  "datacenter": "ikerlan-Consul",
  "data_dir": "/var/consul",
  "ui_dir": "/home/ikerlan/dist",
  "log_level": "WARN",
  "encrypt": "XXXXXX",
  "enable_syslog": true,
  "retry_join": ["172.16.8.191", "172.16.8.192", "172.16.8.193"]
}
Server:
{
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap": false,
  "server": true,
  "datacenter": "ikerlan-Consul",
  "data_dir": "/var/consul",
  "ui_dir": "/home/ikerlan/dist",
  "log_level": "WARN",
  "encrypt": "XXXXXX",
  "enable_syslog": true,
  "retry_join": ["172.16.8.191", "172.16.8.192", "172.16.8.193"],
  "bootstrap_expect": 3
}
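With bootstrap_expect set to 3 and retry_join in every server's config, no dedicated bootstrap agent or start ordering is needed: the servers hold off leader election until all three have joined, then elect a leader on their own. A sketch of the single Ansible task this leaves (assuming the consul service unit from the original playbook):
- name: start consul
  service:
    name: consul
    state: started
    enabled: yes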

Trouble hitting a container's exposed port from a separate container & host

I have two Vagrant hosts running locally on my machine, and I am trying to hit a container within one host from a second container on the other host.
When I curl from within the container:
curl search.mydomain.localhost:9090/ping
I receive: curl: (7) Failed to connect to search.mydomain.localhost port 9090: Connection refused
However, when I curl without specifying the port:
curl search.mydomain.localhost/ping
OK
I'm certain the port is properly exposed, because when I try the same from the host instead of from within the container, I get:
curl search.mydomain.localhost:9090/ping
OK
This shows that the service at port 9090 is exposed, yet there is some networking issue preventing the container from reaching it.
A fellow dev running the same versions of VirtualBox/Vagrant/Docker/docker-compose and an identical commit of the repos has no trouble hitting the service from within the container. I'm really stumped as to what to try from here...
I'm using the default bridge network:
sudo brctl show
bridge name    bridge id          STP enabled    interfaces
docker0        8000.02427c9cea3c  no             veth5dc6655
                                                 vethd1867df
docker network inspect bridge
[
  {
    "Name": "bridge",
    "Id": "e4b8df614d4b8c451cd4a26c5dda09d22d77de934a4be457e1e93d82e5321a8b",
    "Scope": "local",
    "Driver": "bridge",
    "IPAM": {
      "Driver": "default",
      "Config": [
        {
          "Subnet": "172.17.0.1/16",
          "Gateway": "172.17.0.1"
        }
      ]
    },
    "Containers": {
      "1d67a1567ff698694b5f10ece8a62a7c2cdcfcc7fac6bc58599d5992def8df5a": {
        "EndpointID": "4ac99ce582bfad1200d59977e6127998d940a688f4aaf4f3f1c6683d61e94f48",
        "MacAddress": "02:42:ac:11:00:03",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": ""
      },
      "3e8b6cbd89507814d66a026fd9fad26d649ecf211f1ebd72ed4689b21e781e2c": {
        "EndpointID": "2776560da3848e661d919fcc24ad3ab80e00a0bf96673e9c1e0f2c1711d6c609",
        "MacAddress": "02:42:ac:11:00:02",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": ""
      }
    },
    "Options": {
      "com.docker.network.bridge.default_bridge": "true",
      "com.docker.network.bridge.enable_icc": "true",
      "com.docker.network.bridge.enable_ip_masquerade": "true",
      "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
      "com.docker.network.bridge.name": "docker0",
      "com.docker.network.driver.mtu": "1500"
    }
  }
]
I'm on Docker version 1.9.0 (build 76d6bc9) and docker-compose version 1.5.0.
Any help would be appreciated.
I resolved my issue, which seems like it might be a bug. Essentially, the container was inheriting /etc/hosts from my local MacBook, bypassing the /etc/hosts on the actual Vagrant host running the container; my entry "127.0.0.1 search.mydomain.localhost" therefore made all connection attempts within the container redirect to the container itself.
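A quick diagnostic for this failure mode (a sketch; run from inside the affected container):
# Show what the name actually resolves to inside the container
cat /etc/hosts
getent hosts search.mydomain.localhost

# An answer of 127.0.0.1 means curl was connecting back to the container
# itself, where nothing listens on port 9090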
