Does someone have any idea where I can edit the ingress controller algorithm? - cluster-computing

I installed the ingress controller on my Kubernetes machine with Helm, but I have no idea where the ingress controller puts the configuration file to be edited. We can find the Helm release through helm list, but what I mean is the code of the program itself.
I want to edit some algorithms in the ingress controller for a project.
I am currently using Bitnami. I want to find the code for the algorithm, but I am still confused about what to do and what to use. Do I have to use Docker? Do I need to edit with any specific apps? I am confused about where the NGINX ingress algorithm source code lives.

There is a good guide on how to build the NGINX Ingress Controller image on the official NGINX website.
Note: this project is different from the NGINX Ingress controller in kubernetes/ingress-nginx repo.
So, to tweak its algorithm, change the Ingress Controller source code and follow these steps:
Before you can build the image, make sure that the following software is installed on your machine:
Docker v18.09+
GNU Make
git
OpenSSL, optionally, if you would like to generate a self-signed certificate and a key for the default server.
Clone your Ingress Controller repo.
Build the image using the make tool like this:
$ make debian-image PREFIX=myregistry.example.com/nginx-ingress TARGET=download
Check the Makefile here. (Note: TARGET=download fetches a pre-built binary; if you've changed the source, you'll likely need the default build target instead.)
Push the image to your Docker registry like this:
$ make push PREFIX=myregistry.example.com/nginx-ingress TAG=your-tag
After that, create your own Helm chart for your custom Ingress Controller.
Read this guide on how to create your first Helm chart. As a good production example, you can take this NGINX Ingress Controller Helm chart; you will need to change the referenced image.
Also, check this guide on how to install NGINX Ingress Controller using Helm.
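For example, a minimal sketch of installing the chart while pointing it at your custom image (the controller.image.* parameter names follow the chart's values.yaml; double-check them against your chart version):

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm install my-ingress nginx-stable/nginx-ingress \
    --set controller.image.repository=myregistry.example.com/nginx-ingress \
    --set controller.image.tag=your-tag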
I hope this gives a good idea of how to build a custom Ingress Controller from the source code.
EDIT:
As for load balancing algorithms, there are several built-in load-balancing methods: least_conn, ip_hash, random, random two, and random two least_conn.
You can choose the load-balancing method using the nginx.org/lb-method annotation. See more info here.
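For example, a minimal sketch (host and service names are hypothetical) of selecting one of those methods via the annotation on an Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
  annotations:
    nginx.org/lb-method: "least_conn"   # one of the built-in methods listed above
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80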
But if you still want to change the Load balancer algorithm, you will have to modify the source code and build a custom ingress controller or use some of the other existing ingress controllers.

If you are using an open-source ingress controller, you might be able to get the code.
For example :
Nginx ingress controller : https://github.com/kubernetes/ingress-nginx
Kong ingress controller : https://github.com/Kong/kubernetes-ingress-controller
Note that there is a chance the Pro or Plus features won't be there.

Related

Using Helm For Deploying Spring Boot Microservice to K8s

We have built a few microservices (MS) which have been deployed to our company's K8s clusters.
Currently, each of our MSs is built as a Docker image and deployed manually using the following steps, and it works fine:
Create Configmap
Installing a Service.yaml
Installing a Deployment.yaml
Installing an Ingress.yaml
I'm now looking at Helm v3 to simplify and encapsulate these deployments. I've read a lot of the Helm v3 documentation, but I still haven't found the answer to some simple questions and I hope to get an answer here before absorbing the entire doc along with Go and SPRIG and then finding out it won't fit our needs.
Our Spring MS has 5 separate application.properties files that are specific to each of our 5 environments. These properties files are simple multi-line key=value format with some comments preceded by #.
# environment based values
key1=value1
key2=value2
Using helm create, I created a chart called ./deploy in the root directory, which auto-created ./templates and a values.yaml.
The problem is that I need to access the application.properties files outside of the Chart's ./deploy directory.
From helm, I'd like to reference these 2 files from within my configmap.yaml's Data: section.
./src/main/resource/dev/application.properties
./src/main/resources/logback.xml
And I want to keep these files in their current format, not rewrite them to JSON/YAML format.
Does Helm v3 allow this?
Posting this as an answer as there's not enough space in the comments!
Check the 12-factor app link I shared above, in particular the section on configuration. The explanation there is not great, but the idea behind it is to build one container and deploy that container in any environment without having to modify it, plus to have the ability to change the configuration without the need to create a new release (the latter cannot be done if the config is baked into the container). This allows, for example, changing a DB connection pool size (or any other config parameter) without a release. It's also good from a security point of view, as you might not want the container running in your lower environments (dev/test/whatnot) to have production configuration (passwords, API keys, etc). This approach is similar to the Continuous Delivery principle of build once, deploy anywhere.
I assume that when you run the app locally you only need access to one set of configuration, so you can keep that in a separate file (e.g. application.dev.properties) and have the parameters that change between environments in Helm environment variables. I know you mentioned you don't want to do this, but it is considered good practice nowadays (though it might be considered otherwise in the future...).
I also think it's important to be pragmatic: if in your case you don't feel the need to have the configuration outside of the container, then don't do it, and the suggestion I gave to change a command-line parameter to pick the config file probably works well. At the same time, keep the 12-factor-app approach in mind in case you find out you do need it in the future.
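If you do want Helm to pull in the existing properties files unchanged, one option worth mentioning (a hedged sketch, not part of the answer above; the value names are illustrative) is Helm v3's --set-file flag, since .Files.Get cannot read files outside the chart directory:

$ helm install my-ms ./deploy \
    --set-file appProperties=./src/main/resources/dev/application.properties \
    --set-file logbackXml=./src/main/resources/logback.xml

The file contents then arrive as plain strings, which a ConfigMap template can embed verbatim:

# templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-app-config
data:
  application.properties: |-
{{ .Values.appProperties | indent 4 }}
  logback.xml: |-
{{ .Values.logbackXml | indent 4 }}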

Kubernetes deployment - specify multiple options for image pull as a fallback?

We have had image pull issues at one time or another with all of our possible docker registries including Artifactory, AWS ECR, and GitLab. Even DockerHub occasionally has issues.
Is there a way in a Kubernetes deployment to specify that a pod can get an image from multiple different repositories so it can fall back if one is down?
If not, what other solutions are there to maintain stability? I've seen things like Harbor and Trow, but it seems like a heavy handed solution to a simple problem.
Is there a way in a Kubernetes deployment to specify that a pod can get an image from multiple different repositories so it can fall back if one is down?
Not really, not natively 😔. You could probably trick a K8s node into pulling images from different image registries (one at a time) by placing them behind something like a TCP load balancer that directs traffic to multiple registries. But this might take a lot of testing and work.
If not, what other solutions are there to maintain stability? I've seen things like Harbor and Trow, but it seems like a heavy handed solution to a simple problem.
I'd say Harbor, Quay, or Trow is the way to go if you want something more redundant.
Kubernetes has the ability to set an ImagePullPolicy, and you can set it to Never, for example, if you'd like to pre-pull all your critical images on all the K8s nodes (see the sketch below). You can tie this to some automation to pre-pull your images across your clusters and nodes.
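A minimal sketch (the image name is hypothetical): with imagePullPolicy: Never, the kubelet only uses images already present on the node, so pre-pulled critical images keep working even when every registry is down:

apiVersion: v1
kind: Pod
metadata:
  name: critical-app
spec:
  containers:
  - name: app
    image: myregistry.example.com/critical-app:v1.2.3
    imagePullPolicy: Never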
I've actually opened a K8s feature request to see 👀 if this idea gains traction.
Update:
If you're using containerd or cri-o (and even Docker has registry mirrors), you have the ability to configure mirror registries:
containerd.toml example
...
[plugins.cri.registry]
  [plugins.cri.registry.mirrors]
    [plugins.cri.registry.mirrors."docker.io"]
      endpoint = ["https://registry-1.docker.io"]
    [plugins.cri.registry.mirrors."local.insecure-registry.io"]
      endpoint = ["http://localhost:32000"]
    [plugins.cri.registry.mirrors."gcr.io"]
      endpoint = ["https://gcr.io"]
  [plugins.cri.registry.configs]
    [plugins.cri.registry.configs.auths]
      [plugins.cri.registry.configs.auths."https://gcr.io"]
        auth = "xxxxx...."
...
cri-o.conf example
...
# registries is used to specify a comma-separated list of registries to be used
# when pulling an unqualified image (e.g. fedora:rawhide).
registries = [
  "registry.example.xyz",
  "registry.fedoraproject.org"
]
...
✌️

How to set up multiple Visual Studio solutions working together using Docker Compose (with debugging)

Lots of questions seem to have been asked about getting multiple projects inside a single solution working with docker compose but none that address multiple solutions.
To set the scene, we have multiple .NET Core APIs each as a separate VS 2019 solution. They all need to be able to use (as a minimum) the same RabbitMQ container running locally as this deals with all of the communication between the services.
I have been able to get this setup working for a single solution by:
Adding 'Container orchestration support' for an API project.
This created a new docker-compose project in the solution I did it for.
Updating the docker-compose.yml to include both a RabbitMQ and a MongoDB image (see image below; sorry, I couldn't get it to paste correctly as text/code):
Now when I launch it, new RabbitMQ and MongoDB containers are created.
I then did exactly the same thing with another solution and, unsurprisingly, it wasn't able to start because the RabbitMQ ports were already in use (i.e. it tried to create another new RabbitMQ container).
I kind of expected this, but I don't know the best/right way to properly configure this, and any help or advice would be greatly appreciated.
I have been able to compose multiple services from multiple solutions by setting the value of context to the appropriate relative path. Using your docker-compose example and adding my-other-api-project, you end up with something like:
services:
  my-api-project:
    <as you have it currently>
  my-other-api-project:
    image: ${DOCKER_REGISTRY-}my-other-api-project
    build:
      context: ../my-other-api-project/   # or whatever the relative path is to your other project
      dockerfile: my-other-api-project/Dockerfile
    ports:
      - <your port mappings>
    depends_on:
      - some-mongo
      - some-rabbit
  some-rabbit:
    <as you have it currently>
  some-mongo:
    <as you have it currently>
So I thought I would answer my own question, as I think I eventually found a good (not perfect) solution. I did the following steps:
1. Created a custom docker network (see the sketch after this list).
2. Created a single docker-compose.yml for my RabbitMQ, SQL Server and MongoDB containers (using my custom network).
3. Set up docker-compose container orchestration support for each service (right-click on the API project and choose add container orchestration).
4. The above step creates the docker-compose project in the solution with docker-compose.yml and docker-compose.override.yml.
5. Edited the docker-compose.yml so that the containers use my custom docker network and also explicitly specify the port numbers (so they're consistently the same).
6. Edited the docker-compose.override.yml environment variables so that my connection strings point to the relevant container names on my docker network (i.e. RabbitMQ, SQL Server and MongoDB); no more need to worry about IPs, and when I start the solution using the docker-compose project in debug mode my debug containers can access those services.
7. Now I can close the VS solution, go to the command line, navigate to the solution folder and run 'docker-compose up' to start the containers.
8. I set up each VS solution as per steps 3-7 and can start any/all services locally without needing to open VS any more (provided I don't need to debug).
9. When I need to debug/change a service, I stop the specific container ('docker container stop containerId'), then open the solution in VS and start it in debug mode/make changes etc.
10. If I pull down changes by anyone else, I rebuild the relevant container on the command line by going to the solution folder and running 'docker-compose build'.
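For reference, a minimal sketch of the custom-network setup from steps 1-2 (the network name is hypothetical): create the network once, then mark it as external in each solution's docker-compose.yml so every compose project attaches to the same network instead of creating its own:

$ docker network create my-shared-network

# docker-compose.yml (in each solution)
networks:
  default:
    external:
      name: my-shared-network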
As a Brucie bonus, I wrote a PowerShell script to start all of my containers using each docker-compose file, as well as one to build them all, so when I turn on my laptop I simply run that and my full dev environment of 10 services is up and running.
For the most part this works great but with some caveats:
I use HTTPS and dev certs, and sometimes things don't play well and I have to clean and re-trust the certs because Kestrel throws errors and expects the certificate to have a certain name and to be trusted. I'm working on improving this, but you could always not use HTTPS locally in dev.
If you're using your own NuGet server like me, you'll need a NuGet.config file and to copy it in as part of your Dockerfiles.

What's the best way to compose a yaml file for a Kubernetes resource?

I'm working on composing a yaml file for installing Elasticsearch via the ECK operator (CRD).
It's documented here - https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-elasticsearch-specification.html
As you go through the sub-links, you will discover there are various pieces through which we can define the configuration of Elasticsearch.
The description of the CRD is contained here, but it is not easy to read -
https://download.elastic.co/downloads/eck/1.0.0-beta1/all-in-one.yaml
I'm curious how Kubernetes developers understand the various constructs of the yaml file for a given Kubernetes resource.
Docs
In general, if it's a standard resource, the best way is to consult the official documentation for all the fields of the resource.
You can do this with kubectl explain. For example:
kubectl explain deploy
kubectl explain deploy.spec
kubectl explain deploy.spec.template
You can find the same information in the web-based API reference.
Boilerplate
For some resources, you can use kubectl create to generate the YAML for a basic version of the resource, which you can then use as a starting point for your own customisations.
For example:
kubectl create deployment --image=nginx mydep -o yaml --dry-run >mydep.yaml
This generates the YAML for a Deployment resource and saves it in a local file. You can then customise the resource from there.
The --dry-run option causes the generated resource definition not to be submitted to the API server (so it won't be created), and the -o yaml option outputs the generated definition of the resource in YAML format. (Newer kubectl versions require --dry-run=client.)
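For illustration, the generated mydep.yaml looks roughly like this (exact output varies slightly by kubectl version):

apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: mydep
  name: mydep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydep
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: mydep
    spec:
      containers:
      - image: nginx
        name: nginx
        resources: {}
status: {}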
Other common resources for which this is possible include:
(Cluster)Role
(Cluster)RoleBinding
ConfigMap
Deployment
Job
PodDisruptionBudget
Secret
Service
See kubectl create -h.
Another approach to writing YAML files for Kubernetes resources is to use a Kubernetes plugin, which is available for most IDEs.
For example, for IntelliJ, the following is the link for the Kubernetes plugin. The link also explains which features the plugin provides:
https://www.jetbrains.com/help/idea/kubernetes.html

Apply configuration yaml file via API or SDK

I started a pod in a Kubernetes cluster which can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for some external events in this pod (e.g. GitHub webhooks), fetch yaml configuration files from a repository and apply them to this cluster.
Is it possible to call kubectl apply -f <config-file> via the Kubernetes API (or, better, via the golang SDK)?
As yaml directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your yaml file is convert it to json, and then POST that content to the API. So the spirit of the answer to your question is "yes."
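To illustrate that in Go, here's a minimal hedged sketch using client-go's dynamic client. The manifest and GroupVersionResource are hard-coded for brevity (real code would derive the resource from the decoded object's kind via a RESTMapper), and it does a plain Create rather than kubectl apply's merge semantics:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	utilyaml "k8s.io/apimachinery/pkg/util/yaml"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config, as in the client-go example linked in the question.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A manifest fetched from your repository would go here;
	// this one is illustrative only.
	manifest := []byte(`
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
  namespace: default
data:
  key: value
`)

	// Convert YAML to JSON and decode into an unstructured object,
	// mirroring what kubectl does under the hood.
	jsonBytes, err := utilyaml.ToJSON(manifest)
	if err != nil {
		panic(err)
	}
	obj := &unstructured.Unstructured{}
	if err := obj.UnmarshalJSON(jsonBytes); err != nil {
		panic(err)
	}

	// Hard-coded resource; in real code, map obj.GroupVersionKind()
	// to a resource with a RESTMapper instead.
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
	created, err := client.Resource(gvr).Namespace(obj.GetNamespace()).
		Create(context.TODO(), obj, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s/%s\n", created.GetNamespace(), created.GetName())
}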
This box/kube-applier project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.
