Deploy Vault and Consul agent in the same pod with TLS using Helm

I'm planning to run Vault in HA mode with a Consul storage backend over TLS, deploying both Consul and Vault with Helm.
I have already deployed Consul with its Helm chart in my EKS cluster. This deploys the Consul clients as a DaemonSet and the Consul servers as pods.
When Vault is deployed with Helm, the Vault server has to talk to a Consul client instead of the Consul server.
The challenge I face is that I can't set the Consul storage address in the Vault configuration below to 127.0.0.1:8501, because Vault and the Consul client run as separate pods with different IPs.
storage "consul" {
address = "<WHAT_SHOULD_I_PROVIDE?>:8501"
path = "vault/"
scheme = "https"
tls_ca_file = ""
tls_cert_file = ""
tls_key_file = ""
token = "<CONSUL_TOKEN>""
}
I have also tried using HOST_IP:8501, but it throws the error below:
[WARN] storage migration check error: error="Get "https://10.15.0.7:8501/v1/kv/vault/core/migration": x509: certificate signed by unknown authority"
This is because the TLS certificate should include a Subject Alternative Name (SAN) for the IP address and, of course, be signed by a trusted CA that you reference through Consul's ca_file parameter. But my Consul Helm chart configuration uses enableAutoEncrypt: true, so I'm not able to use custom certificates.
This would be resolved if Vault and the Consul agent were deployed in the same pod, but I couldn't find a consulAgent option in the Vault Helm chart configs to deploy them together. Please help me understand how to resolve this.

Have you tried consul.service.consul?
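For reference, a minimal sketch of that storage stanza, assuming your cluster DNS forwards .consul lookups to Consul (the Helm chart can configure this) and that the chart's CA certificate secret is mounted into the Vault pod; the mount path below is hypothetical:

storage "consul" {
  address     = "consul.service.consul:8501"            # Consul DNS name instead of a pod IP
  path        = "vault/"
  scheme      = "https"
  tls_ca_file = "/vault/userconfig/consul-ca/tls.crt"   # hypothetical mount path for the chart's CA secret
  token       = "<CONSUL_TOKEN>"
}

Because the certificate is then validated against a DNS name rather than a pod IP, this sidesteps the SAN problem you hit with HOST_IP.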

Related


how to add a trusted CA for local Consul docker container?
I have Consul running as a Docker container, and after exposing the ports I can access it in Chrome at localhost:8500, as expected. But due to company policy there is a security CA that Chrome trusts, since it is added to the macOS keychain, yet Consul does not seem to trust it when I connect using the Go client library:
x509: "Menlo Security Intermediate CA" certificate is not trusted
I can export the CA from the keychain to a RootCA.cer file, but how do I configure the Consul image to trust this CA file?
I see articles like this: https://iotech.force.com/edgexpert/s/article/secure-consul-tls
ca_file is used to check the authenticity of client and server connections
cert_file is provided to clients and servers to verify the agent's authenticity
key_file is used with the certificate to verify the agent's authenticity
But in my case, should the exported .cer file be used as cert_file? And how should I do this in Docker Compose?
consul:
  image: dockerproxy.comp.com/consul:latest
  ports:
    - "9500:9500"

Spring application unable to access Kafka running in Kubernetes Minikube

I used bitnami/kafka to deploy Kafka on Minikube. A describe of the pod kafka-0 says that the server address is:
KAFKA_CFG_ADVERTISED_LISTENERS:INTERNAL://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9093,CLIENT://$(MY_POD_NAME).kafka-headless.default.svc.cluster.local:9092
My Kafka address is set like this in the Spring config properties:
spring.kafka.bootstrap-servers=["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
But when I try to send a message I get the following error:
[Failed to construct kafka producer] with root cause:
org.apache.kafka.common.config.ConfigException:
Invalid url in bootstrap.servers: ["kafka-0.kafka-headless.default.svc.cluster.local:9092"]
Note that this works when I run Kafka locally and set the bootstrap-servers address to localhost:9092.
How do I fix this error? What is the correct Kafka URL to use, and where do I find it? Thanks.
The Minikube network is different from the host network; you need a bridge.
The advertised listener is in the Minikube realm and is not resolvable from the host.
You could set up a Service and an Ingress in Minikube pointing to your Kafka, then map the advertised hostname to the Ingress IP address in your hosts file.
spring.kafka.bootstrap-servers needs valid server hostnames along with port numbers, comma-separated:
hostname-1:port,hostname-2:port
["kafka-0.kafka-headless.default.svc.cluster.local:9092"] does not look like one!

Cannot access Cassandra database in Kubernetes

I cannot access my Cassandra database, deployed in the same namespace in Kubernetes.
My service has no cluster IP, only an internal endpoint cassandra.hosting:9042, but whenever I try to connect from an internal Spring application using
spring.data.cassandra.contact-points=cassandra.hosting
it fails with the error All host(s) tried for query failed
How did you configure your endpoint? Generally, all services and pods in a Kubernetes cluster are discoverable through standard DNS notation, which looks like this:
<service-name>.<namespace>.svc.cluster.local # or
<pod-name>.<namespace>.svc.cluster.local # or
<pod-name>.<subdomain>.<namespace>.svc.cluster.local
If you are within the same namespace this would work too:
<service-name>
<pod-name>
<pod-name>.<subdomain>
I would also check that either CoreDNS or kube-dns is running and ready:
kubectl -n kube-system get pods | grep dns
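For example, if the Service is named cassandra in the hosting namespace (an assumption based on the endpoint above), the Spring properties could look like this; note that contact-points takes hostnames only, with the port in its own property:

spring.data.cassandra.contact-points=cassandra.hosting.svc.cluster.local
spring.data.cassandra.port=9042
# newer Spring Boot versions also want the local datacenter name
# (datacenter1 is the Cassandra default; adjust to your cluster)
spring.data.cassandra.local-datacenter=datacenter1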

Kubernetes networking: how to pass a variable to a container

I have a Kubernetes cluster, currently running on a single node (master + kubelet, 172.16.100.81). I have a config-server image that I will run in a pod. It talks to another pod, eureka-server. Both images are Spring Boot applications, and the eureka-server's HTTP address and port are defined by me. I need to pass the eureka-server's HTTP address and port to the config-server pod so that it can talk to the eureka-server.
I start the eureka-server (pseudocode):
kubectl run eureka-server --image=eureka-server-image --port=8761
kubectl expose deployment eureka-server --type NodePort:31000
Then I pull the config-server image with docker pull and run it as below:
kubectl run config-server --image=config-server-image --port=8888
kubectl expose deployment config-server --type NodePort:31001
With these steps, I did not find a way to pass the eureka-server address (master IP 172.16.100.81:31000) to the config-server. Is there a method to pass the variable eureka-server=172.16.100.81:31000 to the config-server pod? I know I should use an Ingress for Kubernetes networking, but currently I use NodePort.
Generally, you don't need a NodePort when you want two pods to communicate with each other; a plain ClusterIP service is enough.
Whenever you expose a deployment with a service, it becomes internally discoverable through DNS. Both of your exposed services can be reached at:
http://config-server.default:8888 and http://eureka-server.default:8761, where default is the namespace and the port is the service port, not the node port.
The NodePort (172.16.100.81:31000) is what makes the service accessible from outside the cluster.
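If you do want to pass the address explicitly, kubectl run accepts an --env flag; the variable name EUREKA_SERVER_URL here is just an example that your Spring configuration would need to read:

kubectl run config-server --image=config-server-image --port=8888 \
  --env="EUREKA_SERVER_URL=http://eureka-server.default:8761"

Spring Cloud can then consume it, e.g. eureka.client.serviceUrl.defaultZone=${EUREKA_SERVER_URL}/eureka/.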

How to register a service using Spring Cloud Consul

I have registered a service using Spring Cloud Consul, but for this I had to run a local Consul agent that establishes communication with the Consul server node (running in bootstrap mode).
For example:
# Server
consul agent -server -bootstrap -bind=<server_ip_address> -data-dir=data -ui-dir=web_ui
# Desktop
consul agent -data-dir=consul -ui-dir=consul/dist -join=<server_ip_address>
Is there any way to avoid having this local agent on my desktop, i.e. have Spring Cloud Consul register the service directly with the server node?
An example of this is what the Netflix Eureka client does with the Netflix Eureka server: no external agents running on the machines to bind service names.
You have to configure this in a properties or YAML file; here is a sample YAML setup.
application.yml
# Configure this Discovery Server
spring:
  cloud:
    consul:
      host: localhost
      port: 8500
bootstrap.yml
spring:
  application:
    name: service-name
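As for skipping the local agent entirely: Spring Cloud Consul simply talks HTTP to whatever address is configured, so pointing host at the server node instead of a localhost agent should register the service directly. A sketch, with a hypothetical server address:

application.yml
spring:
  cloud:
    consul:
      host: consul-server.example.internal  # hypothetical; replace with your Consul server node
      port: 8500

Keep in mind that running a local agent on every machine is still the topology Consul itself recommends, mainly for health checking.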
