How to make SSL/HTTPS certificates on GitLab Auto DevOps with GKE and kube-lego

The instructions say to install it, but don't suggest which method will work, in what order, or with which load balancer. I keep getting useless test certificates installed on my deployments.

It sounds like you are getting the default chart from helm, which is slightly different from the instructions you will find on the kube-lego GitHub. The helm chart uses Let's Encrypt's staging server by default. Also, if you are not familiar with helm it can be quite tricky to undo the remnants that the kube-lego helm chart leaves behind, especially as tiller likes to keep the old installation history.
Here's a brief overview of setting up GitLab Auto DevOps with Google Kubernetes Engine. I should mention that you can skip up to step 13 if you use the GitLab GKE wizard introduced in 10.1, but it won't really tell you what's happening or how to debug when things go wrong, and the wizard is a one-way process.
In GitLab, under Integrations, you'll find a Kubernetes integration button
You can also find this under CI/CD -> Cluster
The API URL is the cluster endpoint IP prefixed by https://
The "CA Certificate" it asks for is the service account CA, not the same as the cluster CA
To connect to GitLab you'll need to create a k8s service account and get its CA and token. Do so by using gcloud to authenticate kubectl. GKE makes this easy by doing it for you through the "connect" button in Kubernetes Engine
https://kubernetes.io/docs/admin/authentication/#service-account-tokens
All commands must be run in a custom namespace; they will not work in default
kubectl create namespace (NS)
kubectl create serviceaccount (NAME) --namespace=(NS)
This will create two tokens in your configs/secrets: a default one and your service account one
kubectl get -o json serviceaccounts (NAME) --namespace=(NS)
kubectl get -o json secret (secrets-name-on-prev-result) --namespace=(NS)
To decode the base64 values, pipe them to base64 -d, for example
echo mybase64stringasdfasdf= | base64 -d
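The two lookups above can be chained into one step. A sketch, assuming your kubectl supports jsonpath output; the names gitlab (service account) and gitlab-ns (namespace) are hypothetical placeholders:

```shell
# Look up the secret backing the service account, then decode its token.
# "gitlab" and "gitlab-ns" are placeholder names; substitute your own.
SECRET=$(kubectl get serviceaccount gitlab --namespace=gitlab-ns \
  -o jsonpath='{.secrets[0].name}')
kubectl get secret "$SECRET" --namespace=gitlab-ns \
  -o jsonpath='{.data.token}' | base64 -d
```

The decoded token is what goes into the GitLab Kubernetes integration form.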
To install helm, use the installation script method
https://docs.helm.sh/using_helm/#from-script
Init helm and update its repo
helm init
helm repo update
Install nginx-ingress, an ingress controller backed by NGINX. You could use the Google load balancer as well, but it's not as portable.
helm install stable/nginx-ingress
Make a wildcard subdomain with an A record pointing to the IP address set up by ingress
In GitLab, make Auto DevOps use your newly set up wildcard subdomain: if it's "*.x" on "me.com", you should enter "x.me.com" in the Auto DevOps settings.
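To find the IP for that A record, list the services and look for the LoadBalancer the nginx-ingress chart created (a sketch; the exact service name depends on your release name):

```shell
# The EXTERNAL-IP of the nginx-ingress controller's LoadBalancer service
# is the address the wildcard A record should point at.
kubectl get svc --all-namespaces | grep LoadBalancer
```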
Now install kube-lego
helm install --name lego-main \
--set config.LEGO_EMAIL=CHANGEMENOW@example.com \
--set config.LEGO_URL=https://acme-v01.api.letsencrypt.org/directory \
stable/kube-lego
Helm packages are called charts; an installed chart is called a release
If you wish to delete a release you must purge it, otherwise tiller will keep its history, with:
helm delete --purge my-release-name
You can find the release names and their associated charts with
helm list
Troubleshooting
Order doesn't seem to matter too much. Attaching to a pod can be a useful way of debugging problems, such as a bad email address. The ideal order, however, is probably nginx-ingress, then kube-lego, then GitLab. I did make it work with GitLab first, then nginx-ingress, then kube-lego.
I heard from Sid that they are working to make this easier... let's hope so.

Related

Knative: update image of a service in a CI autodeploy pipeline

I recently converted my Kubernetes deployment service to a Knative serverless application. I am looking for a way to update the image of a container in the Knative app from a CI/CD pipeline without using a YAML file (the CI pipeline doesn't have access to the YAML config used to deploy the app). Previously, I was using the kubectl set image command to update the image from CI to the latest version for a deployment, but it does not appear to work for a Knative service, e.g. the command I tried is:
kubectl set image ksvc/hello-world hello-world=some-new-image --record
Is there a way to update the image of a Knative app using a kubectl command without having access to the original YAML config?
You can use the kn CLI:
https://github.com/knative/client/blob/master/docs/cmd/kn_service_update.md
kn service update hello-world --image some-new-image
This would create a new revision for the Knative service though.
You can clean up old revisions with kn.
Get kn here: https://knative.dev/docs/install/install-kn/
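If installing kn in the pipeline is not an option, a plain kubectl patch should also work, since a Knative Service is a custom resource. This is an untested sketch; note that a JSON merge patch replaces the whole containers list, so include any other container fields you rely on:

```shell
# Create a new revision by patching the service template's image.
# "hello-world" and "some-new-image" are the placeholders from the question.
kubectl patch ksvc hello-world --type merge -p \
  '{"spec":{"template":{"spec":{"containers":[{"image":"some-new-image"}]}}}}'
```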

Helm Install fails with Error: could not find tiller from within Azure Cloud Shell

Helm install called from the cloud shell worked last week but now regardless of using bash or powershell it returns
Error: could not find tiller
Last week I was able to create an ingress controller by following the Microsoft article Create an HTTPS ingress controller on Azure Kubernetes Service (AKS).
Now when I get to the helm install step I get the error indicated in the title. To recreate this issue:
Do clouddrive unmount from within the powershell cloud shell.
Using the Azure portal delete your cloudshell file share in Azure Storage.
create a 2 node 'B2s' Kubernetes Service using Advanced networking using the provided defaults.
Open the Cloud Shell using either bash or Powershell.
Do az aks get-credentials and provide the name of your AKS cluster.
Do kubectl create namespace ingress-basic
Do helm repo add stable https://kubernetes-charts.storage.googleapis.com/
The command above will warn that you need to perform a helm init.
Do az aks list to get the servicePrincipalProfile:clientId of your AKS cluster
Do helm init --service-account using the clientId from the command above as the service-account parameter
Do the helm install using the parameters from the Microsoft Docs Create an HTTPS ingress controller on Azure Kubernetes Service (AKS)
At this point you should get the error mentioned in the title.
Any suggestions on what I might be missing?
Right, I think you need to actually create a service account in Kubernetes in order for this to work, sample code:
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
# Users in China: You will need to specify a specific tiller-image in order to initialize tiller.
# The list of tiller image tags are available here: https://dev.aliyun.com/detail.html?spm=5176.1972343.2.18.ErFNgC&repoId=62085.
# When initializing tiller, you'll need to pass in --tiller-image
helm init --service-account tiller \
--tiller-image registry.cn-hangzhou.aliyuncs.com/google_containers/tiller:<tag>
https://rancher.com/docs/rancher/v2.x/en/installation/options/helm2/helm-init/
whereas you are trying to use an Azure Service Principal instead of a Kubernetes Service Account. They are not the same thing.
OK, two items.
Tiller was not installed. On 2/14/2020 I was able to install an ingress controller using helm, but on 2/18 I was not able to do a helm install --dry-run on a chart because of the above-mentioned error. Anyway, after following the documentation Install applications with Helm in Azure Kubernetes Service (AKS) I got Tiller installed properly.
After getting Tiller properly installed, the helm install command for the ingress controller from Create an HTTPS ingress controller on Azure Kubernetes Service (AKS) was failing with a chart not found error.
I modified the command from
helm install nginx stable/nginx-ingress \
to
helm install stable/nginx-ingress \
and the chart is now deploying properly.
I am now back to where I was on 2/14. Thanks everyone for the help.
OK, I got the following letter from Microsoft Tech Support letting me know that there was a problem with Azure Cloud Shell. I did a helm version this morning and now see
version.BuildInfo{Version:"v3.1.1",
GitCommit:"afe70585407b420d0097d07b21c47dc511525ac8",
GitTreeState:"clean", GoVersion:"go1.13.8"}
which I did not see on 2/14. So it looks like I was NOT crazy and that Microsoft has fixed this issue. Based on her letter the issue with the chart name should also be resolved.
Hi Brian, Thanks for your update. We are so sorry for the
inconvenience. There was an error in the script which builds the cloud
shell image. Helm released 2.16.3 more recently than 3.1 and the build
script picked that up as the 'latest' release, causing the inadvertent
downgrade. Helm v3 does not require a tiller pod, so tiller
cannot be found when using helm v2. The helm version will be
re-upgraded in the next release. As confirmed by you, the queries
related to this issue have been solved and hence I will go ahead and
archive your case at this time. Please remember that support for this
case does not end here. Should you need further assistance with your
issue, you can reach out to me and I will continue working with you.
Here is a summary of the key points of the case for your records.
Issue Definition: Helm install Error: could not find tiller.
Resolution/Suggestion Summary: You could refer to the below document to configure helm v2:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2.
Alternatively, you may also use the command below to upgrade the helm
to version 3 as a workaround: curl https://aka.ms/cloud-shell-helm3 |
bash. The command ‘helm version’ can be used to verify the version of
Helm you have installed.
Besides, the syntax for helm 3 is different from helm 2. The syntax
for helm v3 is ‘helm install [NAME] [CHART] [flags]’ while for helm v2
is ‘helm install [CHART] [flags]’. So you need to remove the word
nginx after the word install if you are using helm v2. You could refer
to the following document for more information:
https://learn.microsoft.com/en-us/azure/aks/kubernetes-helm#install-an-application-with-helm-v2
It was my pleasure to work with you on this service request. Please do
not hesitate to contact me if I can be of further assistance. Thank
you very much for your support of Microsoft Azure. Have a nice day😊!

How to use kubernetes helm from a CI/CD pipeline hosted outside the k8s cluster

I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server?
Or is this not the right way to use Helm from a CI/CD server?
Helm works by using the installed kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it to the cluster details in the kubeconfig.
You can find more details in Helm's docs, check the quick start guide for more information.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that on Jenkins X and it works great - we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent you have identified for use by your CI/CD server (or install kubectl on the fly in your automation), and then make sure you have, or are able to generate, a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in a default or custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG env variable set and pointing to a kubeconfig file, that works; when setting new entries you can also pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
We created helmsman which provides you with declarative syntax to manage helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a docker container.

How to push container to Google Container Registry (unable to create repository)

EDIT: I'm just going to blame this on platform inconsistencies. I have given up on pushing to the Google Cloud Container Registry for now, and have created an Ubuntu VM where I'm doing it instead. I have voted to close this question as well, for the reasons stated previously, and also as this should probably have been asked on Server Fault in the first place. Thanks for everyone's help!
running $ gcloud docker push gcr.io/kubernetes-test-1367/myapp results in:
The push refers to a repository [gcr.io/kubernetes-test-1367/myapp]
595e622f9b8f: Preparing
219bf89d98c1: Preparing
53cad0e0f952: Preparing
765e7b2efe23: Preparing
5f2f91b41de9: Preparing
ec0200a19d76: Preparing
338cb8e0e9ed: Preparing
d1c800db26c7: Preparing
42755cf4ee95: Preparing
ec0200a19d76: Waiting
338cb8e0e9ed: Waiting
d1c800db26c7: Waiting
42755cf4ee95: Waiting
denied: Unable to create the repository, please check that you have access to do so.
$ gcloud init results in:
Welcome! This command will take you through the configuration of gcloud.
Settings from your current configuration [default] are:
[core]
account = <my_email>@gmail.com
disable_usage_reporting = True
project = kubernetes-test-1367
Your active configuration is: [default]
Note: this is a duplicate of Kubernetes: Unable to create repository, but I tried that solution and it did not help me. I've tried appending :v1, /v1, and using us.gcr.io
Edit: Additional Info
$ gcloud --version
Google Cloud SDK 116.0.0
bq 2.0.24
bq-win 2.0.18
core 2016.06.24
core-win 2016.02.05
gcloud
gsutil 4.19
gsutil-win 4.16
kubectl
kubectl-windows-x86_64 1.2.4
windows-ssh-tools 2016.05.13
$ gcloud components update
All components are up to date.
$ docker -v
Docker version 1.12.0-rc3, build 91e29e8, experimental
The first image push requires admin rights for the project. I had the same problem trying to push a new container to GCR for a team project, which I could resolve by updating my permissions.
You might also want to have a look at docker-credential-gcr. Hope that helps.
What version of gcloud and Docker are you using?
Looking at your requests, it seems as though the Docker client is not attaching credentials, which would explain the access denial.
I would recommend running gcloud components update and seeing if the issue reproduces. If it still does, feel free to reach out to us on gcr-contact at google.com so we can help you debug the issue and get your issue resolved.
I am still not able to push a docker image from my local machine, but authorizing a compute instance with my account and pushing an image from there works. If you run into this issue, I recommend creating a Compute Engine instance (for yourself), authorizing an account with gcloud auth that can push containers, and pushing from there. I have my source code in a Git repository that I can just pull from to get the code.
Thanks for adding your Docker version info. Does downgrading Docker to a more stable release (e.g. 1.11.2) help at all? Have you run 'docker-machine upgrade'?
It seems like you're trying to run gcloud docker push from a Google Compute Engine instance without a proper security scope of read/write access to Google Cloud Storage (which is where Google Container Registry stores your container images behind the scenes).
Try to create another instance, but this time with proper access scopes, i.e.:
gcloud compute --project "kubernetes-test-1367" instances create "test" --zone "us-east1-b" --machine-type "n1-standard-1" --network "default" --scopes default="https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring.write","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management","https://www.googleapis.com/auth/devstorage.full_control" --image "/debian-cloud/debian-8-jessie-v20160629" --boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "test-1"
Once you create the new instance, ssh into it and then try to re-run the gcloud docker push gcr.io/kubernetes-test-1367/myapp command
I checked
gcloud auth list
to see whether my application was the active account, not my personal Google account. After setting
gcloud config set account example@gmail.com
I was able to push
gcloud docker -- push eu.gcr.io/$PROJECT_ID/my-docker:v1
so I could continue with http://kubernetes.io/docs/hellonode/
I had a similar issue and it turned out that I had to enable billing for the project. When you have a new Google Cloud account you can enable only so many projects with billing. Once I did that it worked.
Also, this could be the cause of the problem (it was in my case):
Important: make sure the Compute Engine API is enabled for your project on the
Source: https://pinrojas.com/2016/09/12/your-personal-kubernetes-image-repo-in-a-few-steps-gcr-io/
If anyone is still having this problem while trying to push a docker image to gcr, even though they've authenticated an account that should have the permission to do so, try running gcloud auth configure-docker and pushing again.

Source code changes in kubernetes, SaltStack & Docker in production and locally

This is an abstract question and I hope that I am able to describe it clearly.
Basically: what is the workflow for distributing source code to Kubernetes running in production? As you don't run Docker with -v in production, how do you update running pods?
In production:
Do you use SaltStack to update each container in each pod?
Or
Do you rebuild Docker images and restart every pod?
Locally:
With Vagrant you can share a local folder for source code. With Docker you can use -v, but if you have Kubernetes running locally, how would you mirror production as closely as possible?
If you use Vagrant with boot2docker, how can you combine this with Docker -v?
The short answer is that you shouldn't "distribute source code"; you should rather "build and deploy". In terms of Docker and Kubernetes, you would build by uploading the container image to the registry and then perform a rolling update with Kubernetes.
It would probably help to take a look at the specific example script, but the gist is in the usage summary in current Kubernetes CLI:
kubecfg [OPTIONS] [-u <time>] [-image <image>] rollingupdate <controller>
If you intend to try things out in development and are looking for instant code updates, I'm not sure Kubernetes helps much there. It's been designed for production systems, and shadow deploys are not the kind of thing one does sanely.
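For reference, that kubecfg rollingupdate syntax predates the modern CLI; with today's kubectl, the build-then-roll step looks roughly like this (deployment and image names are made up):

```shell
# Point the deployment at the freshly pushed image, then wait for the
# rolling update to finish.
kubectl set image deployment/my-app my-app=registry.example.com/my-app:v2
kubectl rollout status deployment/my-app
```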
