How to enable mTLS between Apache APISIX and etcd? - apache-apisix

APISIX uses etcd as its configuration center. I have configured and enabled mTLS in etcd for secure data transfer; how do I configure APISIX to make use of it?

First of all, you need to prepare a client certificate and its private key, then configure them in APISIX. You can specify them in config.yaml. The relevant fields are:
etcd.tls.cert: the client certificate
etcd.tls.key: the client private key
apisix.ssl.ssl_trusted_certificate: the CA certificate used to verify the etcd server certificate
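A minimal sketch of the corresponding config.yaml section (file paths are placeholders, and the exact nesting may differ slightly between APISIX versions):

etcd:
  tls:
    cert: /path/to/etcd-client.crt      # client certificate presented to etcd
    key: /path/to/etcd-client.key       # matching client private key
apisix:
  ssl:
    ssl_trusted_certificate: /path/to/ca.crt   # CA used to verify the etcd server certificate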

Related

Elasticsearch encryption settings

Due to auditing requirements, we need to encrypt all connections between the application and the Elasticsearch cluster.
After a little googling, I understand that securing an Elasticsearch cluster looks like this:
enable X-Pack security (xpack.security.enabled: true) in elasticsearch.yml
generate a CA
copy the CA to each node in the cluster
generate a certificate for each node (signed with the CA generated in step 2)
In this case we will have a secure connection ONLY between the nodes in the cluster (inter-node TLS on port 9300).
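For reference, a rough sketch of the elasticsearch.yml settings these steps end up with (keystore paths are placeholders, assuming PKCS#12 files produced by elasticsearch-certutil; the http.ssl lines are what the client-server part would use):

xpack.security.enabled: true
# inter-node TLS on the transport layer (port 9300)
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/node-1.p12
xpack.security.transport.ssl.truststore.path: certs/node-1.p12
# client-to-cluster TLS on the HTTP layer (port 9200)
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/node-1-http.p12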
I have purchased an SSL certificate (from Sectigo), and the certificate files look like this:
STAR_rem-masters_com.ca-bundle star_rem_masters_com_certificate.crt
Now the question is: how can these certificates be used for inter-node SSL and for the client-server connection (application > Elasticsearch)? All the examples use self-signed certificates generated with elasticsearch-certutil.

How to connect to the nomad/consul UI with tls enabled?

I'm currently researching the HashiStack and trying to deploy a pet microservice-based project on it. I deployed Nomad and Consul clusters with Ansible roles on bare-metal nodes:
https://github.com/ansible-community/ansible-consul.git (v2.5.4)
https://github.com/ansible-community/ansible-nomad.git (v1.9.6)
The Nomad and Consul servers are placed on the same nodes.
I do not use Vault. I created a separate private CA, generated TLS certificates and private keys for these services, and configured the Nomad and Consul servers and clients to use them.
My goal is to set up a production-ready HashiStack cluster, so I want to set up full TLS for both services.
I successfully connected to both UIs via HTTP, but when I try HTTPS, I get the SSL_ERROR_BAD_CERT_ALERT error.
I'd appreciate any suggestions on best practices for operating the HashiStack in production, and on what steps are required for it.
Thank you!
I'm a bit late to respond, but came across the same error. Figured I'd leave my solution in case future readers find it helpful...
For me, the issue came down to the verify_https_client flag in my Nomad tls config block. Since Nomad is configured for mutual TLS, all clients (including web browsers) need to provide a client certificate signed by the same CA used by Nomad in order to connect. You'll need to generate/sign that certificate, and look up how to configure your browser to automatically provide it when needed.
For production use, that's the safest route. For a dev environment, you can just set that verify_https_client config to false in your Nomad config.
Here's a link to the Nomad docs for this flag: https://www.nomadproject.io/docs/configuration/tls#verify_https_client
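For illustration, a sketch of the relevant tls block in the Nomad agent configuration (file paths are placeholders):

tls {
  http = true
  rpc  = true

  ca_file   = "/etc/nomad.d/nomad-ca.pem"
  cert_file = "/etc/nomad.d/server.pem"
  key_file  = "/etc/nomad.d/server-key.pem"

  verify_server_hostname = true

  # Require HTTPS clients (including browsers hitting the UI) to present a
  # certificate signed by the same CA; set to false only in dev environments.
  verify_https_client = true
}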
First, you need to generate a client certificate for your web browser, signed by the same CA that Nomad uses.
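One way to do that with plain openssl (a sketch only; the official HashiCorp TLS tutorial uses cfssl instead, and nomad-ca.pem / nomad-ca-key.pem are assumed to be your existing CA certificate and key):

openssl req -new -newkey rsa:2048 -nodes -keyout nomad-cli.key -subj "/CN=nomad-cli" -out nomad-cli.csr
openssl x509 -req -in nomad-cli.csr -CA nomad-ca.pem -CAkey nomad-ca-key.pem -CAcreateserial -days 365 -out nomad-cli.pem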
Then convert it to PKCS#12 format:
openssl pkcs12 -export -inkey ./nomad-cli.key -in ./nomad-cli.pem -out ./nomad-cli.p12
Let's say you are using Chrome. Go to chrome://settings/certificates?search=certificate and import the converted certificate nomad-cli.p12.
I've found an answer for the same case.
When a Nomad cluster is deployed with mTLS, you need to deploy the CLI keys to each server node, or at least to the node through which you are configuring the connection.
The CLI keys are generated following https://learn.hashicorp.com/tutorials/nomad/security-enable-tls#nomad-ca-key-pem
and nginx is configured following https://learn.hashicorp.com/tutorials/nomad/reverse-proxy-ui?in=nomad/manage-clusters,
however that guide does not describe how to configure mTLS.
You need to add the following parameters in location /:
location / {
    ....
    proxy_pass https://127.0.0.1:4646;
    # Client certificate and key that nginx presents to the Nomad HTTPS API (mTLS)
    proxy_ssl_certificate     /etc/nomad.d/cli.pem;
    proxy_ssl_certificate_key /etc/nomad.d/cli-key.pem;
    # Do not verify Nomad's server certificate (use proxy_ssl_trusted_certificate
    # together with "proxy_ssl_verify on" if you want nginx to verify it)
    proxy_ssl_verify off;
    ....
}
With this configuration, nginx can establish a TLS-encrypted connection to the Nomad HTTP port.
Also, don't forget to enable HTTP basic auth at the very least.
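A minimal sketch of the basic auth part, inside the same location / block (the htpasswd file path is a placeholder):

    auth_basic           "Nomad UI";
    auth_basic_user_file /etc/nginx/.htpasswd;

The htpasswd file can be created with the htpasswd utility (from the httpd-tools / apache2-utils package).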

Service Fabric Kestrel 3.1 HTTPS certificate through load balancer

Using Service Fabric with 2 stateless services on Kestrel 3.1.
I have a problem exposing an HTTPS endpoint. A primary certificate is defined on the cluster (Security section). This certificate (the primary) is automatically made accessible to the nodes by Service Fabric (i.e. via X509Store find operations on the thumbprint or subject). When configuring Kestrel for a particular endpoint, the certificate is passed to the UseHttps method, listening on any IPv6 address (i.e. IPv6Any). In the Application Manifest, access to the certificate's private key is granted with an endpoint policy (see article). Here is example code on gist. The cluster's load balancer exposes the 443 HTTPS endpoint via port 8443 (similar to the setup in this tutorial).
Despite the above configuration, when navigating to the application the response is that the web page is either down or has been moved, plus an ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY error.
According to the logging sent to Insights, the service starts fine using the primary certificate:
Hosting environment: Production
...
Now listening on: https://[::]:443
Has anybody else got a similar setup working?
Turns out I had set the protocol to HTTP2 rather than HTTP1.
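For illustration, a sketch of the relevant Kestrel listen options (not the original gist; cert is assumed to be the cluster certificate already loaded from the X509Store as described above):

// requires: using System.Net; using Microsoft.AspNetCore.Server.Kestrel.Core;
webBuilder.ConfigureKestrel(options =>
{
    options.Listen(IPAddress.IPv6Any, 443, listenOptions =>
    {
        // HTTP/2 over TLS has extra requirements (ALPN, cipher suites); in this
        // setup it produced ERR_HTTP2_INADEQUATE_TRANSPORT_SECURITY, so the
        // endpoint is restricted to HTTP/1.x.
        listenOptions.Protocols = HttpProtocols.Http1;
        listenOptions.UseHttps(cert);
    });
});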

Generate certificate for HTTP service from Istio

Is it possible to generate certificates via Citadel for HTTPS services? In my case, I would like to use the Elastic ECK operator to spawn a new Elasticsearch cluster + Kibana, but I don't want to use its self-signed CA (since I'd have to push that CA certificate file to each and every service that wants to connect to the ES API); rather, I'd like to use another self-signed certificate authority: the same one that Istio uses.
My hope is that if we get around to adding Vault and cert-manager to the cluster, I can easily create new certificates with that for all HTTPS usage INSIDE the cluster.
How can I (if I can at all) generate TLS certificates with Istio? I have SDS installed in the cluster.
This question is not about:
How to generate public certificates
cert-manager
How to turn off TLS in Elasticsearch's HTTP endpoint

Consul TLS with Spring-based REST service

We are trying to enable TLS for Consul so that our REST service (which uses a self-signed certificate) will be able to register with Consul in HTTPS mode. To enable TLS, I am following the Consul documentation as well as the links below:
https://www.digitalocean.com/community/tutorials/how-to-secure-consul-with-tls-encryption-on-ubuntu-14-04
http://russellsimpkins.blogspot.in/2015/10/consul-adding-tls-using-self-signed.html
Note: I am using CentOS 7.2.
Now my service tries to register with Consul, but in the Consul dashboard it is shown as down, and on the console I am getting the error below:
x509: certificate signed by unknown authority
We found the solution: we have to add the CA cert to the OS TLS trust store instead of the JVM trust store; for CentOS it is /etc/pki/tls/certs/ca-bundle.crt.
Simply appending the CA certificate to this file solved our issue.
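For example (consul-ca.crt is a placeholder name for the CA certificate file):

cat consul-ca.crt | sudo tee -a /etc/pki/tls/certs/ca-bundle.crt

On CentOS, a more durable alternative is to copy the CA into /etc/pki/ca-trust/source/anchors/ and run update-ca-trust, since ca-bundle.crt can be rewritten when the ca-certificates package is updated.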
