Configure TLS for SonarQube via Helm chart

I'm deploying SonarQube via the official Helm chart and using the following ingress configuration:
ingress:
  enabled: true
  # Used to create an Ingress record.
  hosts:
    - name: sonar.<company>.com
      # Different clouds or configurations might need /* as the default path
      path: /
      # For additional control over serviceName and servicePort
      # serviceName: someService
      # servicePort: somePort
      # the pathType can be one of the following values: Exact|Prefix|ImplementationSpecific(default)
      # pathType: ImplementationSpecific
  annotations:
    # kubernetes.io/tls-acme: "true"
    # nginx.ingress.kubernetes.io/proxy-body-size: "64m"
  # Set the ingressClassName on the ingress record
  # ingressClassName: nginx
  # Additional labels for Ingress manifest file
  # labels:
  #   traffic-type: external
  #   traffic-type: internal
  tls:
    # Secrets must be manually created in the namespace. To generate a self-signed certificate (and private key) and then create the secret in the cluster, please refer to the official documentation available at https://kubernetes.github.io/ingress-nginx/user-guide/tls/#tls-secrets
    - secretName: sonar-server-tls
      hosts:
        - sonar.<company>.com
Sonar is reachable via http://sonar.<company>.com:443, but without the certificate; https://sonar.<company>.com doesn't work. I cannot find much related to this specific topic. Some questions:
Do I have to use nginx here? If so, is it recommended to set nginx.enabled: true to make things work smoothly? The secret name is valid, the secret exists, and it is found during deployment.
Thanks for any advice.

Serving SonarQube over plain HTTP is not recommended, as it does not provide the same level of security as HTTPS. To enable HTTPS you need an ingress controller, typically nginx, acting as a reverse proxy for the sonar.<company>.com domain and configured to use the secret containing the certificate. If you do not already run an ingress controller in the cluster, it is generally recommended to set the chart's nginx.enabled: true option so that the setup works out of the box; the nginx configuration is then generated from your ingress settings and uses the secret name you provided. Once this is done, you should be able to access Sonar securely on the HTTPS address you specified.
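As a minimal sketch of the relevant values (reusing the host and secret names from the question; the exact keys may vary by chart version, and an ingress-nginx controller is assumed to be running):
ingress:
  enabled: true
  hosts:
    - name: sonar.<company>.com
      path: /
  ingressClassName: nginx
  tls:
    - secretName: sonar-server-tls
      hosts:
        - sonar.<company>.com
The referenced secret has to exist in the SonarQube namespace and be of type kubernetes.io/tls, roughly:
apiVersion: v1
kind: Secret
metadata:
  name: sonar-server-tls
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>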

Related

Best practices for storing passwords when using Spring Boot

We are working on a Java Spring Boot application that needs access to a database; the password is stored in the application.properties file.
Our main issue is that the passwords might be viewable when uploaded to GitLab/GitHub.
I found that we can use Jasypt to encrypt the data, but from what I read, I need to supply the decryption key at execution time, and that key is also stored in Git in order to deploy using Kubernetes.
Is there some way to secure our passwords in such a case? We are using AWS if that makes any difference, and we are trying to use the EKS service, but until now we have had a VM with K8s installed.
Do not store passwords in application.properties: as you mention, it is insecure, and you may also have different versions of your application (dev, staging, prod) that use different databases and different passwords.
What you can do in this case is leave the password empty in the source files and externalize this configuration, i.e. use an environment variable in your K8s deployment file (or on the VM the application runs on); Spring Boot will load it as a property value if it has the right format. From the Spring documentation:
Spring Boot lets you externalize your configuration so that you can work with the same application code in different environments. You can use a variety of external configuration sources, including Java properties files, YAML files, environment variables, and command-line arguments.
You should use environment variables in your application.properties file for this:
spring.datasource.username=${SPRING_DATASOURCE_USERNAME}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD}
Or with a default value (for development):
spring.datasource.username=${SPRING_DATASOURCE_USERNAME:admin}
spring.datasource.password=${SPRING_DATASOURCE_PASSWORD:admin}
Then you can add a Kubernetes Secret to your namespace:
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: mynamespace
data:
  SPRING_DATASOURCE_PASSWORD: YWRtaW4=
  SPRING_DATASOURCE_USERNAME: YWRtaW4=
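If you prefer not to base64-encode the values yourself, the same Secret can be written with the stringData field instead (a sketch using the same placeholder credentials):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
  namespace: mynamespace
stringData:
  SPRING_DATASOURCE_USERNAME: admin   # plain text; the API server stores it base64-encoded
  SPRING_DATASOURCE_PASSWORD: admin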
And assign it to your Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  # omitted...
      containers:
        - name: mycontainer
          envFrom:
            - secretRef:
                name: mysecret
            - configMapRef:
                name: myconfigmap
  # omitted...
Another alternative would be to store the entire application.properties file in your Secret or ConfigMap and mount it into your container as a file.
Both scenarios are explained in further detail here:
https://developers.redhat.com/blog/2017/10/03/configuring-spring-boot-kubernetes-configmap
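As a rough sketch of that second approach (the Secret name app-config-secret and the /config mount path are made-up placeholders, and it assumes Spring Boot 2.x where spring.config.additional-location is available):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydeployment
spec:
  # omitted...
      containers:
        - name: mycontainer
          env:
            # point Spring Boot at the mounted file in addition to its default config locations
            - name: SPRING_CONFIG_ADDITIONAL_LOCATION
              value: /config/application.properties
          volumeMounts:
            - name: app-config
              mountPath: /config
              readOnly: true
      volumes:
        - name: app-config
          secret:
            # e.g. created with: kubectl create secret generic app-config-secret --from-file=application.properties
            secretName: app-config-secret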

passing application configuration using K8s configmaps

How do I pass the application.properties file to a Spring Boot application using ConfigMaps? Since the application.yml file contains sensitive information, this requires passing in both Secrets and ConfigMaps. In this case, what options do we have to pass both the sensitive and the non-sensitive configuration data to the Spring Boot pod?
I am currently using Spring Cloud Config Server, which can encrypt the sensitive data using encrypt.key and decrypt it again.
ConfigMaps, as described by @paltaa, would do the trick for non-sensitive information. For sensitive information I would use a SealedSecret.
Sealed Secrets is composed of two parts:
A cluster-side controller / operator
A client-side utility: kubeseal
The kubeseal utility uses asymmetric crypto to encrypt secrets that only the controller can decrypt.
These encrypted secrets are encoded in a SealedSecret resource, which you can see as a recipe for creating a secret.
Once installed, you create your Secret as normal and can then seal it:
kubeseal --format=yaml < secret.yaml > sealed-secret.yaml
You can safely push your SealedSecret to GitHub etc.
The normal Kubernetes Secret will appear in the cluster after a few seconds, and you can use it as you would use any Secret that you created directly (e.g. reference it from a Pod).
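For illustration, the generated resource looks roughly like this (the metadata names are placeholders and the encrypted blobs stand in for the ciphertext produced by kubeseal; only the controller in your cluster can decrypt them):
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: mysecret
  namespace: mynamespace
spec:
  encryptedData:
    SPRING_DATASOURCE_USERNAME: AgB4Fx...   # long base64 ciphertext
    SPRING_DATASOURCE_PASSWORD: AgC9Qw...   # long base64 ciphertext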
You can mount Secrets as volumes, the same as ConfigMaps. For example:
Create the secret.
kubectl create secret generic ssh-key-secret --from-file=application.properties
Then mount it as volume:
apiVersion: v1
kind: Pod
metadata:
  name: secret-test-pod
  labels:
    name: secret-test
spec:
  volumes:
    - name: secret-volume
      secret:
        secretName: ssh-key-secret
  containers:
    - name: ssh-test-container
      image: mySshImage
      volumeMounts:
        - name: secret-volume
          readOnly: true
          mountPath: "/etc/secret-volume"
More information in https://kubernetes.io/docs/concepts/configuration/secret/

AWS - Network Load Balancer created via kubectl is missing SSL Certificate

I am using kubectl to create a Network Load Balancer. The Load Balancer is created, but without the SSL certificate I selected, which is weird because I supplied the correct certificate ARN as I found it in Certificate Manager. This is how the metadata in my kubectl YAML file looks:
apiVersion: v1
kind: Service
metadata:
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: {CERTIFICATE ARN}
    service.beta.kubernetes.io/aws-load-balancer-ssl-negotiation-policy: "ELBSecurityPolicy-TLS-1-1-2017-01"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
  labels:
    app: ingress-nginx
  name: ingress-nginx
  namespace: ingress-nginx
Does anyone have an idea why the Network Load Balancer is created without the certificate? I am able to add the certificate by editing the NLB afterwards, and then everything works as expected, but the deployment through kubectl doesn't work.
Thanks a million.
Issue solved: I am using Kubernetes version 1.14 and this feature is only supported from 1.15 onwards. Explained here:
https://github.com/kubernetes/kubernetes/issues/73297

Openshift/Kubernetes ssh Secret doesn't work with Camel SFTP component

Long story short:
When I pass an SSH key, retrieved from a Secret in OpenShift, to the Apache Camel SFTP component, it is not able to connect to the server, whereas if I directly pass the path of the actual SSH key file (without creating a Secret) to the same component, it works just fine. The exception is "invalid key". I also tried to read the key file in Java and pass it as a byte array via the privateKey parameter, but no luck. Passing the key as bytes does not seem to work by any means I have tried.
SFTP component properties:
sftp:
  host: my.sftp.server
  port: 22
  fileDirectory: /to
  fileName: /app/home/file.txt
  username: sftp-user
  privateKeyFilePath: /var/run/secret/secret-volume/ssh-privatekey  # (also tried the privateKey param with a byte array)
  knownHostsFile: resource:classpath:keys/known_hosts
  binary: true
Application Detail:
I am using Openshift 3.11.
Developing Camel-SpringBoot Micro-Integration services configured with fabric8 and spring-cloud-kubernetes plugins for deployment.
I am creating the secret as:
oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa
I have tried to reference the secret from deployment.yml and bootstrap.yml.
Using it as an env variable with secretKeyRef:
deployment.yml:
- name: SSH_SECRET
  valueFrom:
    secretKeyRef:
      name: sshsecret
      key: ssh-privatekey
bootstrap.yml:
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        enableApi: true
        name: sshsecret
Using it as a mounted volume:
deployment.yml:
volumeMounts:
  - mountPath: /var/run/secret/secret-volume
    name: secret-volume
volumes:
  - name: secret-volume
    secret:
      secretName: sshsecret
bootstrap.yml:
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        paths: /var/run/secret/secret-volume
Note: once the service is deployed, I can see that the mounted volume is attached to the container; I can even bash into the pod, go to that directory, and locate the private key, which is completely intact.
Any help will be appreciated. Ask me whatever you need to know to solve this.
It was a very bad mistake on my side. I was using privateKeyUri in the Camel SFTP component instead of privateKeyFile. I hadn't noticed this because I was always changing those SFTP parameters directly in the ConfigMap.
By the way, for those trying to implement a similar use case: use the second option, i.e. mount the secret into a volume and then reference the volume path inside Camel, as sketched below. Don't use the secret as an env variable; then you don't need to enable the secrets API in bootstrap.yml.
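For reference, a rough sketch of the volume-based configuration (the property names mirror the question above and are assumed to be bound to the SFTP endpoint's privateKeyFile option in the route setup):
sftp:
  host: my.sftp.server
  port: 22
  username: sftp-user
  # key file provided by the mounted secret volume; wire this into the Camel
  # endpoint's privateKeyFile option rather than privateKeyUri
  privateKeyFilePath: /var/run/secret/secret-volume/ssh-privatekey
  knownHostsFile: resource:classpath:keys/known_hosts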
Thanks anyway, cheers!
Rito

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project <project> --zone <zone> --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (which are disabled for most services by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
After struggling for some hours, I was also able to connect to Datastore. Here are my results, most of them from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give the service account owner access to the project
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around I figured out that the scratch Docker image is missing certain tools / variables / libraries (notably the CA certificates) which the Google Cloud library requires. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to figure out where it is running (maybe from context.Background()?) and automatically uses the default service account which Google creates for you when you create your GKE cluster.
