I started a pod in a Kubernetes cluster which can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for external events in this pod (e.g. GitHub webhooks), fetch YAML configuration files from a repository, and apply them to this cluster.
Is it possible to call kubectl apply -f <config-file> via the Kubernetes API (or, better, via the Go SDK)?
As YAML directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your YAML file is convert it to JSON, and then POST that content to the API. So the spirit of the answer to your question is "yes."
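For example, with client-go the whole flow looks roughly like the sketch below. This is a minimal, hedged sketch: it assumes in-cluster config and recent client-go signatures, it does a plain Create rather than the three-way merge that kubectl apply performs, and the function name and structure are mine.

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/client-go/discovery"
	"k8s.io/client-go/discovery/cached/memory"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/restmapper"
	"sigs.k8s.io/yaml"
)

// applyManifest decodes one YAML document and POSTs it to the API server,
// which is essentially what kubectl does after its YAML-to-JSON conversion.
func applyManifest(manifest []byte) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}

	// Decode the YAML into an unstructured object (sigs.k8s.io/yaml converts
	// YAML to JSON under the hood, mirroring kubectl's behaviour).
	obj := &unstructured.Unstructured{}
	if err := yaml.Unmarshal(manifest, obj); err != nil {
		return err
	}

	// Map the object's GroupVersionKind to a REST resource, e.g. Deployment -> deployments.
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		return err
	}
	mapper := restmapper.NewDeferredDiscoveryRESTMapper(memory.NewMemCacheClient(dc))
	gvk := obj.GroupVersionKind()
	mapping, err := mapper.RESTMapping(gvk.GroupKind(), gvk.Version)
	if err != nil {
		return err
	}

	// POST the object via the dynamic client (skip .Namespace() for
	// cluster-scoped resources).
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = dyn.Resource(mapping.Resource).
		Namespace(obj.GetNamespace()).
		Create(context.TODO(), obj, metav1.CreateOptions{})
	return err
}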
This box/kube-applier project may interest you. While it does not appear to be webhook-aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty Prometheus metrics integration.
I recently had to create an automation tool in Go to handle some Kubernetes tasks.
One of them is to combine several Deployment definitions from YAML files and merge all of these separate containers into a single Pod deployment.
These legacy YAML files defining the Deployments are maintained by the developer team and should not be modified directly for various reasons, so my program needs to read them as templates, do all the manipulation at runtime, and then deploy them using the Kubernetes APIs (client-go, probably).
I did some research and managed to parse the YAML using these APIs:
k8s.io/client-go/kubernetes/scheme => Codecs.UniversalDeserializer().Decode
k8s.io/apimachinery/pkg/util/yaml => NewYAMLOrJSONDecoder
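For example (a simplified sketch of where I've got to; error handling, the real templates, and the actual container merging are omitted, and the field I tweak is just a placeholder):

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/rest"
)

// deployFromTemplate decodes a legacy Deployment YAML, manipulates it in
// memory, and then creates it with the typed clientset.
func deployFromTemplate(yamlBytes []byte) error {
	// Decode into a typed object via the universal deserializer.
	obj, gvk, err := scheme.Codecs.UniversalDeserializer().Decode(yamlBytes, nil, nil)
	if err != nil {
		return err
	}
	dep, ok := obj.(*appsv1.Deployment)
	if !ok {
		return fmt.Errorf("expected a Deployment, got %s", gvk.Kind)
	}

	// Runtime manipulation would happen here, e.g. collecting the containers
	// from several decoded Deployments into one pod template.
	dep.Spec.Template.Spec.Containers[0].Name = "renamed-at-runtime"

	// Deploy via the Kubernetes API.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	_, err = clientset.AppsV1().Deployments(dep.Namespace).
		Create(context.TODO(), dep, metav1.CreateOptions{})
	return err
}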
But I'm kind of stuck from here.
Has anybody done something like this before? What kind of API should I look at?
We have built a few microservices (MS) which have been deployed to our company's Kubernetes clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Creating a ConfigMap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions, and I hope to get an answer here before absorbing the entire doc (along with Go and Sprig) only to find out it won't fit our needs.
Our Spring MS has 5 separate application.properties files, one for each of our 5 environments. These properties files are in a simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Putting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without needing to create a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc.). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally, you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and have the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered a good practice nowadays (it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave of changing a command-line parameter to pick the config file probably works well. At the same time, keep the 12-factor app approach in mind in case you find out you do need it in the future.
I'm working on composing a YAML file for installing Elasticsearch via the ECK operator (CRD).
It's documented here: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elasticsearch-specification.html
As you go through the sub-links, you will discover there are various pieces through which the configuration of Elasticsearch can be defined.
The description of the CRD is contained here, but it is not easy to read:
https://download.elastic.co/downloads/eck/1.0.0-beta1/all-in-one.yaml
I'm curious: how do Kubernetes developers understand the various constructs of the YAML file for a given Kubernetes resource?
Docs
In general, if it's a standard resource, the best way is to consult the official documentation for all the fields of the resource.
You can do this with kubectl explain. For example:
kubectl explain deploy
kubectl explain deploy.spec
kubectl explain deploy.spec.template
You can find the same information in the web-based API reference.
Boilerplate
For some resources, you can use kubectl create to generate the YAML for a basic version of the resource, which you can then use as a starting point for your own customisations.
For example:
kubectl create deployment --image=nginx mydep -o yaml --dry-run >mydep.yaml
This generates the YAML for a Deployment resource and saves it in a local file. You can then customise the resource from there.
The --dry-run option causes the generated resource definition to not be submitted to the API server (so it won't be created) and the -o yaml option outputs the generated definition of the resource in YAML format.
Other common resources for which this is possible include:
(Cluster)Role
(Cluster)RoleBinding
ConfigMap
Deployment
Job
PodDisruptionBudget
Secret
Service
See kubectl create -h.
Another approach to writing YAML files for Kubernetes resources is to use a Kubernetes plugin, which should be available for most IDEs.
For example, for IntelliJ, the following is the link for the Kubernetes plugin. This link also explains which features the plugin provides:
https://www.jetbrains.com/help/idea/kubernetes.html
I'm writing a Helm chart for a custom application that we'll need to bring up in different environments within my organization. This application has some pieces in Kubernetes (which is why I'm writing the Helm chart) and other pieces outside of K8S, more specifically various resources in AWS which I have codified with Terraform.
One of those resources is a Lambda function, which I have fronted with API Gateway. This means that when I run the Terraform in a new environment, it creates the Lambda function and attaches an API Gateway endpoint to it, with a brand new URL which AWS generates for that endpoint. I'm having Terraform record that URL as an output variable, and moreover I have a non-local backend configured so that Terraform is saving its state remotely.
What I want to do is tie them both together, directly from Helm. I want a way to run the Terraform so that it brings up my Lambda, and by doing so saves the generated API Gateway URL to its remote state file. Then when I install my Helm chart, I'd like it if Helm were smart enough to automatically pull from the Terraform remote state file to get the URL it needs of the API Gateway endpoint, to use as a variable within my chart.
Currently, I either have to copy and paste, or use Bash. I can get away with doing it with a bash script much like this one:
#!/bin/bash
terraform init
terraform plan -out=tfplan.out
terraform apply tfplan.out
export WEBHOOK_URL=$(terraform output webhook_url)
helm install ./mychart --set webhook.url="${WEBHOOK_URL}"
But using a Bash script to accomplish this is not ideal. It requires that I run it in the same directory as the Terraform files (because the output command must be called from that directory), and it doesn't account for different methods of authentication we might use. Moreover, other developers on the team might want to run Terraform and Helm directly and not have to rely on a custom bash script to do it for them. Since this bash script is effectively acting as an "operator," and since Helm already is kind of an operator itself, I'm wondering if there's some way I can do it entirely within Helm?
The Terraform remote state files are ultimately just JSON files. I happen to be using the Consul backend, but I could just as easily use the S3 backend or any other; at the end of the day Terraform will manifest its state as a JSON file somewhere, where (presumably) Helm could read it and pick out the specific output value. Except I'm not sure if Helm is powerful enough to do this. Looking over their documentation, I didn't really see anything outside of writing your normal values.yaml templates to specify defaults. Does Helm have any functions built into it around making REST requests for external JSON? Is this something that could be done?
Helm does not have any functionality to search in files/templates.
It needs you to tell it exactly what to inject.
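So the glue has to live outside Helm, whether that's your bash script or something else. If you want to avoid the shell wrapper, one option is a tiny helper in Go along the lines below. This is just a sketch: it assumes the state JSON has already been fetched from your backend (Consul, S3, etc.) and that it uses the standard top-level "outputs" map of a Terraform state file; the chart path and output name mirror your script.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
	"os/exec"
)

// Minimal shape of a Terraform state file: we only care about the outputs map.
type tfState struct {
	Outputs map[string]struct {
		Value interface{} `json:"value"`
	} `json:"outputs"`
}

func main() {
	// State fetched beforehand from the remote backend and written to disk.
	raw, err := os.ReadFile("terraform.tfstate")
	if err != nil {
		log.Fatal(err)
	}

	var state tfState
	if err := json.Unmarshal(raw, &state); err != nil {
		log.Fatal(err)
	}
	url := fmt.Sprintf("%v", state.Outputs["webhook_url"].Value)

	// Equivalent of: helm install ./mychart --set webhook.url=$WEBHOOK_URL
	// (add a release name or --generate-name if you're on Helm 3).
	cmd := exec.Command("helm", "install", "./mychart", "--set", "webhook.url="+url)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}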
I have tried all the basics of Kubernetes, and if you want to update your application, you can use kubectl rolling-update to update the pods one by one without downtime. Now I have read the Kubernetes documentation again and found a new feature called Deployment in version v1beta1. I am confused, since there is a line in the Deployment docs:
Next time we want to update pods, we can just update the deployment again.
Isn't this the role of rolling-update? Any input would be very useful.
Deployment is an object that lets you define a declarative deployment.
It encapsulates:
a DeploymentStatus object, which is in charge of managing the number of replicas and their state;
a DeploymentSpec object, which holds the number of replicas, the template spec, selectors, and some other data that deal with deployment behaviour.
You can get a glimpse of actual code here:
https://github.com/kubernetes/kubernetes/blob/5516b8684f69bbe9f4688b892194864c6b6d7c08/pkg/apis/extensions/v1beta1/types.go#L223-L253
You will mostly use Deployments to deploy services/applications, in a declarative manner.
If you want to modify your deployment, update the YAML/JSON you used, without changing the metadata.
In contrast, kubectl rolling-update isn't declarative (no YAML/JSON involved) and needs an existing replication controller.
I have been testing rolling updates of a service using both a replication controller and declarative Deployment objects. I found that with an RC there appears to be no downtime from a client perspective, but when the Deployment is doing a rolling update, the client gets some errors for a while until the update stabilizes.
This is with Kubernetes 1.2.1.
The main difference is that kubectl rolling-update is a client-driven rolling update, whereas the Deployment object gives you a server-side rolling update.
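To make that concrete, here is a rough sketch of a server-side (declarative) update through client-go. It uses today's apps/v1 client rather than the extensions/v1beta1 API of the 1.2 era, and the namespace, deployment name, and image are placeholders:

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (use rest.InClusterConfig() inside a pod).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	deployments := clientset.AppsV1().Deployments("default")

	// Fetch the current desired state and change only the field we care about...
	dep, err := deployments.Get(context.TODO(), "my-app", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	dep.Spec.Template.Spec.Containers[0].Image = "my-app:v2"

	// ...then write it back. The Deployment controller on the server rolls the
	// pods over according to spec.strategy; no client-side update loop needed.
	if _, err := deployments.Update(context.TODO(), dep, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}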