Using templating/overlay libraries in operators - go

While building operators using the Operator SDK's Go framework, we end up creating Kubernetes resources such as Deployments, Services, etc. programmatically by leveraging structs from the k8s modules/packages. Compared to creating these manifests in YAML/JSON, this is cumbersome and requires quite a bit of coding, and any change to a manifest requires a code change and a new version of the operator to be rolled out.
I am wondering whether existing templating/overlay tools such as Helm or Kustomize can be used for building these k8s resources within the operator code. This would also enable you to externalise the manifest/template files from the operator code. I couldn't find any good examples of how these tools can be used as modules/libraries within a Go program. Please provide any pointers, suggestions or alternate approaches.
Related question: Kubernetes operator create Deployment using yaml template
This talks about how you can read a yaml file and unmarshal it into a Deployment object. Here, I would still need to code templating/overlay logic within the operator.
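For reference, a minimal sketch of that approach, assuming the manifest bytes have already been read from disk or embedded in the binary (the helper name is mine, not from the linked question):

import (
    "fmt"

    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/client-go/kubernetes/scheme"
)

// decodeDeployment decodes a YAML/JSON manifest into a typed Deployment
// using client-go's universal deserializer.
func decodeDeployment(manifest []byte) (*appsv1.Deployment, error) {
    obj, _, err := scheme.Codecs.UniversalDeserializer().Decode(manifest, nil, nil)
    if err != nil {
        return nil, err
    }
    deploy, ok := obj.(*appsv1.Deployment)
    if !ok {
        return nil, fmt.Errorf("manifest is not a Deployment, got %T", obj)
    }
    return deploy, nil
}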

You can use the Helm engine programmatically by calling engine.Render:
func Render(chrt *chart.Chart, values chartutil.Values) (map[string]string, error)
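A rough sketch of how this could look inside an operator, assuming you vendor helm.sh/helm/v3 (the chart path, values, and release name below are illustrative):

import (
    "fmt"

    "helm.sh/helm/v3/pkg/chart/loader"
    "helm.sh/helm/v3/pkg/chartutil"
    "helm.sh/helm/v3/pkg/engine"
)

func renderChart() error {
    // Load the chart from disk; it could also be embedded in the operator image.
    chrt, err := loader.Load("./charts/mychart")
    if err != nil {
        return err
    }
    // Build the render context (.Values, .Release, .Chart, .Capabilities).
    vals, err := chartutil.ToRenderValues(chrt,
        map[string]interface{}{"replicaCount": 2}, // overrides you would normally put in values.yaml
        chartutil.ReleaseOptions{Name: "my-release", Namespace: "default"},
        chartutil.DefaultCapabilities)
    if err != nil {
        return err
    }
    // Render returns a map of template file name -> rendered manifest, which you can
    // then decode into typed objects and create/update via the controller's client.
    manifests, err := engine.Render(chrt, vals)
    if err != nil {
        return err
    }
    for name, m := range manifests {
        fmt.Printf("--- %s ---\n%s\n", name, m)
    }
    return nil
}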

Related

Converting .proto to Json or yaml

New to protobuf. I have a bunch of .proto files which define numerous endpoints. I would like to programmatically extract the endpoint definitions along with other method data defined in the endpoint specs. Is there an easy way to do this?
Interesting question.
I'm unaware of any tools like jq for JSON or yq for YAML for processing/querying protos, but it would be a useful tool to have.
I think there are probably tools out there that can help with the documentation aspect of protos but I've not used any. The folks at Buf are doing some interesting work. Their schema registry may be of interest? I've not used it.
Otherwise, you could use one of the SDKs that support reflection (e.g. Go's v2 SDK supports this) and build a solution.
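For example, here is a minimal sketch of that approach in Go, assuming you first compile the .proto files into a descriptor set (e.g. protoc --descriptor_set_out=descriptors.pb --include_imports *.proto; the file name is illustrative):

package main

import (
    "fmt"
    "os"

    "google.golang.org/protobuf/proto"
    "google.golang.org/protobuf/types/descriptorpb"
)

func main() {
    data, err := os.ReadFile("descriptors.pb")
    if err != nil {
        panic(err)
    }
    var fds descriptorpb.FileDescriptorSet
    if err := proto.Unmarshal(data, &fds); err != nil {
        panic(err)
    }
    // Walk every service and print its RPC methods with input/output types.
    for _, fd := range fds.GetFile() {
        for _, svc := range fd.GetService() {
            for _, m := range svc.GetMethod() {
                fmt.Printf("%s.%s(%s) returns (%s)\n",
                    svc.GetName(), m.GetName(), m.GetInputType(), m.GetOutputType())
            }
        }
    }
}

From there you could marshal whatever subset you need to JSON or YAML yourself.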

Why dataweave over template engines like Velocity/Freemarker/Thymeleaf

I see broad adoption of DataWeave, which I feel is more of a transformation library, just like Freemarker or Velocity.
In the case of DW, a change in transformation logic requires a change in code, which defeats the very purpose template engines got popular for in the first place: separating logic from code so that we can change transformation logic without needing to rebuild/repackage our code (more deployment hassle).
Can anyone point out a few reasons why one would prefer DW?
TLDR: If you're looking for a template engine for things like static websites, DataWeave definitely isn't the right choice. Use the right tool for the job. Also, while you can use DataWeave outside of Mule, I don't think I've seen anyone adopt DataWeave that hasn't adopted MuleSoft..
A few things to consider (and most of these I'm stating in the context of developing Mule applications):
These template engines are, typically, for outputting static text. If you're using it to output structured data rather than something like an HTML page.. you're probably doing it wrong. They aren't going to return structured data - they are going to return text. If you're at the very end of your flow and you're going to output that back out of the API or to a file, you're fine I suppose.. but if you want to actually be able to work with that output, you're going to have to convert the plain text to an actual object... introducing a lot of extra steps in this process when you could have just used DataWeave in the first place. Dataweave is especially beneficial when you want to do things like streaming because you're processing large payloads. Dataweave can understand JSON, XML, and CSV (the three most common data types I see) in a streamed format without any additional work, making it very easy to create efficient applications. The big difference between a template engine and a data transformation language is that one is for outputting text using structured data as input, and the other is for working with structured data on the input and outputting structured data that you can continue to work with. There is a reason that almost all of the template engine docs talk about building websites and not things like integrations.
The DataWeave engine is, as Aled indicated, built into the Mule runtime. Deeply so. You can use DataWeave in any field in any connector by default, even fields that don't have the f(x) button - because it's built into the runtime. This makes DataWeave what you could consider a first-class citizen within Mule, unlike something you will only be able to utilize either via connectors or by invoking java bridges/libraries.. which you do via DataWeave or a long series of connector operations.
The benefits you listed are also not things you can't do with DataWeave. You can VERY easily templatize and externalize DataWeave - for example, I have several DataWeave libraries in my maven repo I can include as dependencies. I've built several transformation services that use databases with DataWeave in order to do transformation, allowing me to change those transformations without modifying the app. You can also use dynamic DataWeave, where you use a template system to load specific parts of the script before running it. I've even taken it a step further and written a generic DataWeave script that I can use to do basic mappings without writing DataWeave - this allowed me to wrap a web UI around things pretty easily.
I wouldn't use DataWeave outside of MuleSoft unless you're a MuleSoft shop. If you are a MuleSoft shop, using the CLI to run your scripts, the same way you do with most interpreted languages, works fairly nicely - especially since you likely already have in-house expertise in DataWeave. The language is still niche enough that unless you've already adopted it for use in Mule applications I don't see any advantage in using it.
Docs / basic examples:
https://github.com/mulesoft-labs/data-weave-native
https://docs.mulesoft.com/mule-runtime/4.3/parse-template-reference
https://docs.mulesoft.com/mule-runtime/4.3/dataweave-create-module
https://github.com/mikeacjones/transform-system-api
Because it is the expression and transformation language embedded in Mule runtime. If you are using Mule it is also integrated with the IDE Anypoint Studio.
Outside Mule applications I don't think you can use DataWeave easily. You might want to go with the alternatives.

Mix implementation languages in Operator SDK - Helm, Go, Ansible

I need to deploy several containers to a Kubernetes cluster. The objective is to automate the deployment of Kafka, Kafka Connect, PostgreSQL, and others. Some of them already provide a Helm operator that we could use. So my question is, can we somehow use those helm operators inside our operator? If so, what would be the best approach?
The only method I can think of so far is calling the helm setup console commands from within a deployment app.
Another approach, without using those helm files, would be implementing the functionality of each operator in my own operator, which doesn't seem to make much sense since what I need was already developed and is public.
I'm very new to operator development so please excuse me if this is a silly question.
Edit:
The main purpose of the operator is to deploy X databases. Along with that we would like to have a single operator/bundle that deploys the whole system right away. Does it even make sense to use an operator for bundling, even if we have additional tasks for some of the containers? With this, the user would specify in the yaml file:
databases:
  - type: "postgres"
    name: "users"
  - type: "postgres"
    name: "purchases"
and 2 PostgreSQL databases would be created. Those databases could then be referenced in other yaml files or further down in the same yaml file. The case at hand: the information from the databases will be pulled by Debezium (another container), so Debezium needs to know their addresses. So the operator should create a service and associate the service address with the database name.
This is part of an ETL system. The idea is that the operator would allow an easy deployment of the whole system by taking care of most of the configuration.
With this in mind, we were wondering whether it would be possible to take existing Helm operators (or another kind of operator) and deploy them with small modifications to the configuration, such as different ports for different databases.
But after reading F1ko's reply I gained new perspectives. Perhaps this is not possible with an operator as initially expected?
Edit2: Clarification of edit1.
Just for clarification purposes:
Helm is a package manager with which you can install an application onto the cluster in a bundled matter: it basically provides you with all the necessary YAMLs, such as ConfigMaps, Services, Deployments, and whatever else is needed to get the desired application up and running in a proper way.
An Operator is essentially a controller. In Kubernetes, there are lots of different controllers that define the "logic" whenever you do something (e.g. the replication-controller adds more replicas of a Pod if you decide to increment the replicas field). There are simply too many controllers to list them all and run them individually; that's why they are compiled into a single binary known as the kube-controller-manager.
Custom-built controllers are called operators for easier distinction. These operators simply watch over the state of certain "things" and are going to perform an action if needed. Most of the time these "things" are going to be CustomResources (CRs) which are essentially new Kubernetes objects that were introduced to the cluster by applying CustomResourceDefinitions (CRDs).
With that being said, it is not uncommon to use helm to deploy operators, however, try to avoid the term "helm operator" as it is actually referring to a very specific operator and may lead to confusion in the future: https://github.com/fluxcd/helm-operator
So my question is, can we somehow use those helm operators inside our operator?
Although you may build your own operator with the operator-sdk, which then lets you deploy or trigger certain events from other operators (e.g. by editing their CRDs), there is no reason to do so.
The only method I can think of so far is calling the helm setup console commands from within a deployment app.
Most likely what you are looking for is a proper CI/CD workflow.
Simply commit the helm chart and values.yaml files that you are using during helm install inside a Git repository and have a CI/CD tool (such as GitLab) deploy them to your cluster every time you make a new commit.
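As a rough illustration, a minimal .gitlab-ci.yml for that kind of workflow might look like this (image tag, release name, chart path, and namespace are all placeholders, and the runner is assumed to already have credentials for the target cluster):

deploy:
  stage: deploy
  image: alpine/helm:3.12.0
  script:
    - helm upgrade --install my-release ./chart -f values.yaml --namespace my-namespace --create-namespace
  only:
    - main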
Update: As the OP edited their question and left a comment, I decided to update this post:
The main purpose of the operator is to deploy X databases. Along with that we would like to have a single operator/bundle that deploys the whole system right away.
Do you think it makes sense to bundle operators together in another operator, as one would do with Helm?
No, it does not make sense at all. That's exactly what helm is there for. With helm you can bundle stuff; you can even bundle multiple helm charts together, which may be what you are actually looking for. You can have one helm chart that passes the needed values down to the actual operator helm charts and therefore use something like the service-name in multiple locations.
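For illustration, the Chart.yaml of such an umbrella chart could look roughly like this (chart names, versions, and repositories are placeholders, not recommendations of specific charts):

apiVersion: v2
name: etl-stack
version: 0.1.0
dependencies:
  - name: postgresql
    version: "1.0.0"
    repository: "https://example.com/charts"
  - name: kafka
    version: "1.0.0"
    repository: "https://example.com/charts"

The umbrella chart's own values.yaml can then set or forward values for each subchart under a key matching the dependency name, which is how you would reuse something like a service name in multiple locations.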
In the case of operators inside operators, is it still necessary to configure every sub-operator individually when configuring the operator?
As mentioned above, it does not make any sense to do it like that, it is just an over-engineered approach. However, if you truly want to go with the operator approach there are basically two approaches you could take:
Write an operator that configures the other operators by changing their CRs, ConfigMaps, etc.; with this approach you will have a somewhat lightweight operator, however you will have to ensure it is compatible at all times with all the different operators you want it to interfere with (when they change to a new apiVersion with breaking changes, introduce new CRs, or anything of that kind, you will have to adapt again).
Extract the entire logic from the existing operators into your own operator (i.e. rebuild something that already exists); with this approach you will have a big monolithic application that will be a huge pain to maintain, as you will continuously have to update your code whenever there is an update in the upstream operator.
Hopefully it is clear by now that building your own operator for "operating" other operators comes with a lot of painful dependencies and should not be the way to go.
Is it possible to deploy different configurations of images? Such as databases configured with different ports?
Good operators and helm charts let you do that out of the box, either via a respective CR / ConfigMap or a values.yaml file; however, that depends on which solutions you are going to use. So, in general, the answer is: yes, it is possible if supported.

Supporting multiple versions of Kubernetes APIs in a Go program

Kubernetes has a rapidly evolving API and I am trying to find best practices, recommendations, or really any kind of guidance about how to write Go software that gracefully handles supporting its evolving API and supports multiple versions simultaneously. I am sure I am not the first person to attempt this, but so far I have not found any guidance about Kubernetes specifically, and what I have read about polymorphism in Go has not inspired a great solution yet.
Kubernetes is written in Go and provides Go packages like k8s.io/api/extensions/v1beta1 and k8s.io/api/networking/v1beta1. Kubernetes resources, for example Ingress, are first released in one API group (extensions) and as they become more mature, get moved to another API group (networking) and can also change versions (e.g. go from v1beta1 to plain v1). Kubernetes also provides k8s.io/client-go for interacting with a Kubernetes cluster.
I am an experienced object-oriented (and other types of) programmer, but fairly new to Go and completely new to the Kubernetes packages. What I want to accomplish is a program architecture that allows me to write code once and have it work on any version of the Kubernetes resource, at least as long as the resource contains all the features I care about. In a typical object-oriented environment, I would create a base Ingress class and have all these various versions derive from it, and package up operations so that I could just work on Ingress everywhere. My sense is that Go intends for people to take a different approach, and in any case there are complications because of the client/server aspect.
Client/server and APIs
My Go program is a client of the Kubernetes server. Various versions of the server will support various versions of the Kubernetes API, and therefore various versions of the Ingress resource. So my first problem is that I have to do something like this to get a list of all the Ingresses:
ingressesExt, err := il.kubeClient.ExtensionsV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
ingressesNet, err := il.kubeClient.NetworkingV1beta1().Ingresses(namespace).List(metav1.ListOptions{})
I have to gracefully handle errors about the API not being supported. Because the return types are different, AFAIK there is no unified interface where I can just make one call and get the results in a single list. It seems like this is the sort of thing someone should have solved and provided a solution for, but so far I have not found anything.
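One workaround would be to ask the discovery API up front which group/version actually serves ingresses, rather than interpreting errors from List; a rough sketch (the helper name is just for illustration):

import (
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/client-go/discovery"
)

// hasIngress reports whether the given group/version (e.g. "networking.k8s.io/v1beta1")
// serves the ingresses resource on this cluster.
func hasIngress(disc discovery.DiscoveryInterface, groupVersion string) (bool, error) {
    resources, err := disc.ServerResourcesForGroupVersion(groupVersion)
    if err != nil {
        if apierrors.IsNotFound(err) {
            return false, nil // this group/version is not served by the cluster
        }
        return false, err
    }
    for _, r := range resources.APIResources {
        if r.Name == "ingresses" {
            return true, nil
        }
    }
    return false, nil
}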
Type conversion
I also have to find some way to merge ingressesExt and ingressesNet into a single usable list, with an eye toward maintainability/extensibility now that Ingress has graduated to NetworkingV1.
Kubernetes utilities
I see that Kubernetes provides a lot of auto-generated code and utilities, but I have not found a lot of documentation about how to use them. For example, Ingress has functions like
DeepCopy
Marshal
XXX_DiscardUnknown
XXX_Merge
XXX_Unmarshal
Maybe I can use these to do the type conversion? Combine marshal, unmarshal, discard, and merge somehow to take the data from one version and import it into another?
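Something along these lines might work, since the JSON field names of the two v1beta1 Ingress types largely overlap; a rough, illustrative sketch (fields that differ between versions would be silently dropped):

import (
    "encoding/json"

    extv1beta1 "k8s.io/api/extensions/v1beta1"
    netv1beta1 "k8s.io/api/networking/v1beta1"
)

// toNetworkingIngress round-trips an extensions/v1beta1 Ingress through JSON
// into a networking.k8s.io/v1beta1 Ingress, relying on the matching field names.
func toNetworkingIngress(in *extv1beta1.Ingress) (*netv1beta1.Ingress, error) {
    raw, err := json.Marshal(in)
    if err != nil {
        return nil, err
    }
    out := &netv1beta1.Ingress{}
    if err := json.Unmarshal(raw, out); err != nil {
        return nil, err
    }
    // Fix up the type metadata after the round-trip.
    out.APIVersion = "networking.k8s.io/v1beta1"
    out.Kind = "Ingress"
    return out, nil
}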
Questions
Hopefully you see the issue and understand what I am trying to achieve.
Are there packages from Kubernetes or other open source authors that make some progress in unifying the APIs like I need?
Are any of the Kubernetes auto-generated functions meant for general use (as opposed to internal use) and helpful to my challenge? I have not found documentation for any but DeepCopy.
What is the "Go way" of abstracting out the differences between the various versions of the Ingress object such that I can write the rest of the code to work on any version? Keep in mind that I may need to make another API call for further processing, in which case I would need to know the concrete type of the object and select the right API call. It is not obvious to me that client-go provides any support for such auto-selection of API calls.

How do you deploy a golang google cloud function with a custom go build constraint/tag?

I am using go build constraints to conditionally compile constants into my test/staging/production cloud functions. How can I pass -tags ENV to the builder used by gcloud beta functions deploy?
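For context, a file like the one below is only compiled when -tags staging is passed to go build (the package and constant names are just an example of what I mean):

//go:build staging
// +build staging

package config

// Compiled only with: go build -tags staging
const APIBaseURL = "https://staging.example.com"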
As @Guilherme mentioned in the comments, it indeed seems that it's not possible to pass Go constraints/tags to the builder used by Cloud Functions.
I searched around, and while this option doesn't exist, I think having the option to send constraints to the builder used by Cloud Functions would indeed be useful. Considering that, I would recommend you raise a Feature Request for this to be checked by Google.
One option that you might want to take a look at is deploying your application using Cloud Run. As stated in its official documentation:
Use the programming language of your choice, any language or operating system libraries, or even bring your own binaries.
Cloud Run pairs great with the container ecosystem: Cloud Build, Container Registry, Docker.
So, this might work for you as a workaround. The tutorial below walks through the steps to build and deploy a quick Go application on Cloud Run.
Quickstart: Build and Deploy
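If you go that route, the -tags flag simply moves into a container build you control; a rough Dockerfile sketch (Go version, tag name, and paths are placeholders):

FROM golang:1.21 AS build
WORKDIR /app
COPY . .
# You control the build command here, so the constraint tag can be passed.
RUN CGO_ENABLED=0 go build -tags production -o /server .

FROM gcr.io/distroless/static
COPY --from=build /server /server
CMD ["/server"]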
Let me know if the information helped you!
