I want to fetch a value from Consul K/V and have Nomad inject it into my job as an environment variable when the job is submitted. For example, I have a value in Consul K/V:

```
testData = "HELLO"
```
In my job, I want the value from Consul K/V injected into the env stanza as a value:

```hcl
env {
  CONSUL_test = "<value of consul k/v testData>"
}
```
Is this possible? When I inspect the Docker environment, I should see:

```
CONSUL_test = HELLO
```
Nomad uses Consul Template, so you can reference Consul K/V values in your Nomad job spec with `{{ key "myKey" }}`. See https://www.nomadproject.io/docs/job-specification/template.html
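For example, a `template` stanza with `env = true` renders the K/V lookup into a file whose contents Nomad loads into the task's environment. A sketch, not your exact job: the job, group, and task names below are made up:

```hcl
job "example" {
  group "app" {
    task "web" {
      driver = "docker"

      # Look up the K/V key "testData" and expose it as CONSUL_test
      template {
        data        = <<EOF
CONSUL_test={{ key "testData" }}
EOF
        destination = "local/consul.env"
        env         = true
      }
    }
  }
}
```

With `testData = "HELLO"` in Consul K/V, inspecting the resulting container's environment should show `CONSUL_test=HELLO`.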
Related
I'm planning to run Vault in HA mode with a Consul backend and TLS, deploying both Consul and Vault with Helm.
I have already deployed Consul with its Helm chart in my EKS cluster. This deploys the Consul clients as a DaemonSet and the Consul servers as pods.
When Vault is deployed with Helm, my Vault server has to talk to a Consul client rather than a Consul server.
The challenge I face is that I can't set the Consul storage address in the Vault configuration below to 127.0.0.1:8501, because Vault and the Consul client run in separate pods with different IPs.
```hcl
storage "consul" {
  address       = "<WHAT_SHOULD_I_PROVIDE?>:8501"
  path          = "vault/"
  scheme        = "https"
  tls_ca_file   = ""
  tls_cert_file = ""
  tls_key_file  = ""
  token         = "<CONSUL_TOKEN>"
}
```
I have also tried using HOST_IP:8501, but it throws the error below:

```
[WARN] storage migration check error: error="Get "https://10.15.0.7:8501/v1/kv/vault/core/migration": x509: certificate signed by unknown authority"
```
This is because the TLS certificate should include a Subject Alternative Name (SAN) for the IP address and, of course, be signed by a trusted CA that you include via the ca_file parameter in Consul. But in my Consul Helm chart configuration I'm using enableAutoEncrypt: true, so I'm not able to use custom certs.
This would be resolved if I deployed Vault and Consul in the same pod, but in the Vault Helm chart configs I couldn't find a consulAgent option to deploy Vault together with a Consul agent. Please help me figure out how to resolve this.
Have you tried consul.service.consul?
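If the Consul DNS name resolves from the Vault pod, the storage stanza can point at the Consul service instead of a fixed IP, so certificate validation happens against a DNS name rather than a pod IP. A sketch, assuming the Consul CA certificate is mounted into the Vault pod (the mount path below is hypothetical and depends on your chart values):

```hcl
storage "consul" {
  address     = "consul.service.consul:8501"
  path        = "vault/"
  scheme      = "https"
  # hypothetical mount path for the Consul CA certificate
  tls_ca_file = "/vault/userconfig/consul-ca/tls.crt"
  token       = "<CONSUL_TOKEN>"
}
```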
I installed my Kubernetes cluster with Kubespray. I defined the cluster name as cluster.devops in group_vars/k8s-cluster/k8s-cluster.yml. After the cluster was installed, the current-context in the kubeconfig file is kubernetes-admin@cluster.devops. I would like the current-context to be cluster.devops, i.e. the same as the cluster name. How do I do that?
You can rename a context using:

```shell
kubectl config rename-context old-name new-name
```

For example, in your case:

```shell
kubectl config rename-context kubernetes-admin@cluster.devops cluster.devops
```
Suppose I've got services A and B. Both are deployed to a test server and registered in Consul.
When I start service A on my local machine, it reads its configuration from Consul and interacts with service B deployed on the test server.
How can I make service A interact with service B on my local machine when it's also running there?
I thought about running a local Consul instance and proxying missing requests (configuration and service discovery) to the test server's Consul, but I couldn't find any information about that.
How can / should I configure my local environment with Consul?
Steps for configuring Consul in a local environment:

1. Install Consul locally: https://www.consul.io/downloads.html
2. Run Consul locally in development mode:

```shell
consul agent -dev
```

You can use the git2consul tool to read config from a local git repository:

```shell
git2consul --config <path to git2consul file>
```

https://github.com/breser/git2consul
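A minimal git2consul configuration file might look like this (the repo name, path, and branch below are hypothetical):

```json
{
  "version": "1.0",
  "repos": [
    {
      "name": "app-config",
      "url": "file:///home/me/config-repo",
      "branches": ["master"],
      "hooks": [
        { "type": "polling", "interval": "1" }
      ]
    }
  ]
}
```

git2consul then mirrors the files in that repository into the Consul K/V store under the repo name.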
If you want to avoid using the service B that is running in the test environment, you should register your local service B in the Consul server under a different name, say C, and change your local service A to consume it.
This way you would have two instances of service A, one instance of service B, and one instance of service C registered in Consul.
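For example, the local copy of service B could be registered with the local agent under the name C via a service definition like this (the service name and port are hypothetical):

```json
{
  "service": {
    "name": "service-C",
    "port": 8081,
    "tags": ["local"]
  }
}
```

Your local service A would then look up service-C instead of service B in Consul.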
If there is only one Consul node, I can configure Consul like this:

```yaml
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
```
But if there is a Consul cluster, for example three Consul nodes, how should I configure it in that case? Do I need DNS to resolve the hostname to multiple IP addresses?
You should run a Consul agent in client mode on your node and join it to the Consul cluster, then run your Spring Boot instance on that node and connect it to the local Consul agent.
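Sketched below with placeholder server IPs: start a local agent in client mode that joins the three servers, and leave the Spring configuration pointing at localhost:

```shell
# client mode is the default; -retry-join can be repeated once per server
consul agent -data-dir=/tmp/consul \
  -retry-join=10.0.0.1 -retry-join=10.0.0.2 -retry-join=10.0.0.3
```

The Spring Boot app keeps `host: localhost` and `port: 8500`; the local agent handles cluster membership and failover.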
I am having a problem in that consul-template seems to be substituting the service's "ServiceAddress" rather than its "Address" in my template, and I wonder if anyone can tell me why.
From a bash session inside my nginx container, where consul-template is also running, I can fetch the service definition from Consul with:

```shell
curl http://consul-server.service.consul:8500/v1/catalog/service/service1
```

```json
[{"Node":"ip-172-31-24-202","Address":"172.31.24.202","ServiceID":"ip-172-31-24-202:service1:23141","ServiceName":"service1","ServiceTags":null,"ServiceAddress":"172.17.0.3","ServicePort":32809}]
```
My consul-template template file looks like:

```
{{range service "service1"}}server {{.Address}}:{{.Port}};{{end}}
```
I would expect this to output the Address, not the ServiceAddress, for the service. However, the following happens:

```shell
consul-template -consul consul-server.service.consul:8500 -template "/var/templates/service1.conf.tmpl" -dry -once
```

```
server 172.17.0.3:32809
```
I've figured this out: in a template, .Address is the service's address, while the Address field in the catalog response belongs to the node metadata, not the service metadata. This confused me because my Consul client is running on the same host.
I changed the -ip argument of the Registrator service I was running to the internal IP address of the Docker host (rather than the default, which is the IP of the Docker container), and everything worked.
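For anyone who does want the node's address in a template: reasonably recent consul-template versions also expose it on the health service object, so a sketch like this should render the host address instead of the service address:

```
{{range service "service1"}}server {{.NodeAddress}}:{{.Port}};{{end}}
```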