Conditional deployment strategy in an OpenShift YAML file

I need to set the deployment strategy for my service depending on the environment. My current file, with the same strategy across all environments, looks like:
strategy:
  type: Recreate
I need something like this:
strategy:
  if ${env} == "qa" {
    type: Recreate
  } else {
    type: Rolling
    rollingParams:
      intervalSeconds: 1
      maxSurge: 25%
      maxUnavailable: 25%
      timeoutSeconds: 600
      updatePeriodSeconds: 1
  }
I know the syntax is not correct; I just want something similar to that. Any help is appreciated.
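Plain OpenShift/Kubernetes YAML has no if/else, so the usual way to get this is to render the manifest per environment with a templating tool. A rough sketch of the strategy block as a Helm template fragment, assuming an env value is supplied at render time (for example with --set env=qa):

# Helm template fragment; .Values.env is an assumed value set per environment
strategy:
{{- if eq .Values.env "qa" }}
  type: Recreate
{{- else }}
  type: Rolling
  rollingParams:
    intervalSeconds: 1
    maxSurge: 25%
    maxUnavailable: 25%
    timeoutSeconds: 600
    updatePeriodSeconds: 1
{{- end }}

Alternatively, you can stay in plain YAML and keep a base manifest plus a per-environment Kustomize patch (or separate OpenShift template files processed with oc process) that overrides the strategy block.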

Related

Elasticsearch cluster managed by Terraform with ECK operator: version upgrade fails

Our current Production Elasticsearch cluster for logs collection is manually managed and runs on AWS.
I'm creating the same cluster using ECK deployed with Helm under Terraform.
I was able to get all the features replicated (S3 repo for snapshots, ingest pipelines, index templates, etc.) and deployed, so the first deployment works perfectly.
But when I tried to update the cluster (changing the ES version from 8.3.2 to 8.5.2) I get this error:
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to kubernetes_manifest.elasticsearch_deploy, provider "provider[\"registry.terraform.io/hashicorp/kubernetes\"]" produced an unexpected new
│ value: .object: wrong final value type: attribute "spec": attribute "nodeSets": tuple required.
│
│ This is a bug in the provider, which should be reported in the provider's own issue tracker.
I stripped down my elasticsearch and kibana manifests to try to isolate the problem.
Again, I had previously deployed the ECK operator with its Helm chart; it works, because the first deployment of the cluster is flawless.
I have in my main.tf:
resource "kubernetes_manifest" "elasticsearch_deploy" {
field_manager {
force_conflicts = true
}
computed_fields = \["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"\]
manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
version = var.elastic_stack_version
nodes = var.logging_elasticsearch_nodes_count
cluster_name = local.cluster_name
}))
}
resource "kubernetes_manifest" "kibana_deploy" {
field_manager {
force_conflicts = true
}
depends_on = \[kubernetes_manifest.elasticsearch_deploy\]
computed_fields = \["metadata.labels", "metadata.annotations", "spec.finalizers", "spec.nodeSets", "status"\]
manifest = yamldecode(templatefile("config/kibana.yaml", {
version = var.elastic_stack_version
cluster_name = local.cluster_name
namespace = local.stack_namespace
}))
}
and my manifests are:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  annotations:
    eck.k8s.elastic.co/downward-node-labels: "topology.kubernetes.io/zone"
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  volumeClaimDeletePolicy: DeleteOnScaledownAndClusterDeletion
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: ${cluster_name}
    logs:
      elasticsearchRefs:
        - name: ${cluster_name}
  nodeSets:
    - name: logging-nodes
      count: ${nodes}
      config:
        node.store.allow_mmap: false
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: ${cluster_name}
  namespace: ${namespace}
spec:
  version: ${version}
  count: 1
  elasticsearchRef:
    name: ${cluster_name}
  monitoring:
    metrics:
      elasticsearchRefs:
        - name: ${cluster_name}
    logs:
      elasticsearchRefs:
        - name: ${cluster_name}
  podTemplate:
    metadata:
      labels:
        stack_name: ${stack_name}
        stack_repository: ${stack_repository}
    spec:
      serviceAccountName: ${service_account}
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: "1"
When I change the version, testing a cluster upgrade (e.g. going from 8.3.2 to 8.5.2), I get the error mentioned at the beginning of this post.
Is this an ECK operator bug, or am I doing something wrong?
Do I need to add some other entry to 'computed_fields' and remove 'force_conflicts'?
In the end, a colleague of mine found that you do indeed have to add the whole "spec" to the computed_fields, like this:
resource "kubernetes_manifest" "elasticsearch_deploy" {
field_manager {
force_conflicts = true
}
computed_fields = ["metadata.labels", "metadata.annotations", "spec", "status"]
manifest = yamldecode(templatefile("config/elasticsearch.yaml", {
version = var.elastic_stack_version
nodes = var.logging_elasticsearch_nodes_count
cluster_name = local.cluster_name
}))
}
This way I got a proper cluster upgrade, without a full cluster restart.
Underlying reason: the ECK operator makes changes to the spec section. Even if you just run terraform apply without any changes (and "spec" is not added to the computed_fields), Terraform will find that something has changed and will perform an update.
It's nice that you already have a working solution. Just out of curiosity, why do you use kubernetes_manifest instead of Terraform's helm_release resource to upgrade your ES cluster? We upgraded from 8.3.2 to 8.5.2 using helm_release and everything went smoothly.

Is it possible to set an EventBridge ScheduleExpression value from SSM in Serverless?

I want to schedule one Lambda via AWS EventBridge. The issue is that I want to read the number value used in the ScheduleExpression from the SSM parameter GCHeartbeatInterval.
The code I used is below:
heartbeat-check:
  handler: groupconsultation/heartbeatcheck.handler
  description: ${self:custom.gitVersion}
  timeout: 15
  memorySize: 1536
  package:
    include:
      - groupconsultation/heartbeatcheck.js
      - shared/*
      - newrelic-lambda-wrapper.js
  events:
    - eventBridge:
        enabled: true
        schedule: rate(2 minutes)

resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: 1
        Description: value in minute. need to convert it to seconds/milliseconds
Is this possible to achieve in serverless.yml?
The reason for reading it from SSM is that it's a heartbeat service and the same value will be used by the frontend to send a heartbeat at a set interval. The backend Lambda needs to be triggered after 2x the heartbeat interval.
It turns out it's not possible. The only solution was to pass the variable as a command-line argument, something like below:
custom:
  mySchedule: ${opt:mySchedule, 1} # Allow overrides from CLI
...
    schedule: ${self:custom.mySchedule}
...
resources:
  Resources:
    GCHeartbeatInterval:
      Type: AWS::SSM::Parameter
      Properties:
        Name: /${file(vars.js):values.environmentName}/lambda/HeartbeatInterval
        Type: String
        Value: ${self:custom.mySchedule}
Even if we got the other approach to work, we would still have to redeploy the application, just as we have to redeploy in this case.
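Note that the Serverless Framework does have an ${ssm:...} variable source that resolves at packaging time, but it can only read parameters that already exist before the deployment; since GCHeartbeatInterval is created by this same stack, it is not available when the schedule is resolved. As a sketch of what it would look like for a pre-existing parameter (the path below is only illustrative):

events:
  - eventBridge:
      enabled: true
      # assumes /shared/lambda/HeartbeatIntervalMinutes already exists in SSM before this deploy
      schedule: rate(${ssm:/shared/lambda/HeartbeatIntervalMinutes} minutes)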

Creating a private GKE cluster

Creating a private GKE cluster with YAML.
I am currently looking into creating a private GKE cluster. I tried adding the private settings in the YAML file but I am getting an error:
resources:
  - name: myclus
    type: gcp-types/container-v1:projects.locations.clusters
    properties:
      parent: projects/[PROJECT_ID]/locations/[REGION]
      cluster:
        name: my-clus
        zone: [ZONE]
        network: [NETWORK]
        subnetwork: [SUBNETWORK] ### leave this field blank if using the default network ###
        initialClusterVersion: "1.13"
        nodePools:
          - name: my-clus-pool1
            initialNodeCount: 1
            autoscaling:
              enabled: true
              minNodeCount: 1
              maxNodeCount: 12
            management:
              autoUpgrade: true
              autoRepair: true
            config:
              machineType: n1-standard-1
              diskSizeGb: 15
              imageType: cos
              diskType: pd-ssd
              oauthScopes: ### Change scope to match needs ###
                - https://www.googleapis.com/auth/cloud-platform
              preemptible: false
I'm looking for it to create a private cluster with no external IPs.
Did you ever have the chance to go over this documentation?
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters
https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#public_master
I also found this other official Google document that can help you achieve what you want:
https://cloud.google.com/solutions/creating-kubernetes-engine-private-clusters-with-net-proxies
On the "Creating the Docker Image" section there's a Dockerfile example.
Best of Luck!
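For reference, here is a rough sketch of the fields those documents describe, added under the cluster block above (assuming a VPC-native cluster; the CIDR ranges are only examples):

cluster:
  # ...existing fields...
  ipAllocationPolicy:
    useIpAliases: true                 # private clusters must be VPC-native
  privateClusterConfig:
    enablePrivateNodes: true           # nodes get internal IPs only
    enablePrivateEndpoint: false       # set to true to also make the master endpoint private
    masterIpv4CidrBlock: 172.16.0.0/28 # example range for the control plane
  masterAuthorizedNetworksConfig:
    enabled: true
    cidrBlocks:
      - displayName: allowed-range     # example; restricts who can reach the master
        cidrBlock: 10.0.0.0/8

With enablePrivateNodes: true the nodes are created without external IPs, which is what you are after.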

Updating a YAML file with new fields using ruamel

I am trying to update a YAML file using the ruamel.yaml Python library.
import subprocess
import ruamel.yaml.util
# dump the live pod definition as YAML and load it, preserving quotes and indentation
proc = subprocess.Popen(['kubectl', 'get', 'pod', 'web3', '-o', 'yaml', '--export'], stdout=subprocess.PIPE)
rein = proc.stdout.read()
result, indent, block_seq_indent = ruamel.yaml.util.load_yaml_guess_indent(rein, preserve_quotes=True)
So far I have tried:
result['spec'].append('nodeSelector')
which gives the error:
result['spec'].append('nodeSelector')
AttributeError: 'CommentedMap' object has no attribute 'append'
I also tried this:
result['spec']['nodeSelector']['kubernetes.io/hostname'] = 'kubew1'
which gives:
result['spec']['nodeSelector']['kubernetes.io/hostname'] = 'kubew1'
  File "/usr/local/lib/python3.6/dist-packages/ruamel/yaml/comments.py", line 752, in __getitem__
    return ordereddict.__getitem__(self, key)
KeyError: 'nodeSelector'
My initial YAML file is:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    app: demo
    name: web
  name: web3
  selfLink: /api/v1/namespaces/default/pods/web3
spec:
  containers:
    - image: aexlab/flask-sample-one
      imagePullPolicy: Always
      name: web
      ports:
        - containerPort: 5000
          name: http
          protocol: TCP
      resources: {}
      terminationMessagePath: /dev/termination-log
      terminationMessagePolicy: File
      volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-7bcc9
          readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
And the expected fields I want to add inside 'spec' are:
nodeSelector:
  kubernetes.io/hostname: kubew1
Any ideas how to achieve this with the ruamel.yaml library?
In your YAML file the root-level collection is a mapping, and the value for the key spec in that mapping is itself a mapping. Both of those mappings get loaded by ruamel.yaml as dict-like objects named CommentedMap.
As with normal dicts you can add key-value pairs, delete keys (and their values), and update the value for a key, but there is no .append() method, as there is with a list (i.e. appending an extra item to a list).
Your output is a bit terse, but of course you cannot just add nodeSelector to anything (list/sequence or dict/mapping) and expect that to add kubernetes.io/hostname: kubew1 (a mapping in its own right) automatically.
Your try of:
result['spec']['nodeSelector']['kubernetes.io/hostname'] = 'kubew1'
cannot work because there is no dict result['spec']['nodeSelector'] where you can add the key kubernetes.io/hostname.
You would either first have to create a key with an empty dict as value:
result['spec']['nodeSelector'] = {}
result['spec']['nodeSelector']['kubernetes.io/hostname'] = 'kubew1'
or do
result['spec']['nodeSelector'] = {'kubernetes.io/hostname': 'kubew1'}
Please note that the above has not much to do with ruamel.yaml; it is just basic Python data structure manipulation. Also note that there are over 100 libraries in the ruamel namespace, of which ruamel.yaml is just one of several published as open source, so "using ruamel" is not a very clear statement, although of course the context often provides enough information about which library you actually use.

Serverless warmup plugin: concurrent execution of warmup functions

I got serverless-plugin-warmup 4.2.0-rc.1 working fine with Serverless version 1.36.2,
but it only executes a single warmup call instead of the configured five.
Is there any problem in my serverless.yml config?
It is also strange that I have to add 'warmup: true' to the function section to get the function warmed up. According to the docs at https://github.com/FidelLimited/serverless-plugin-warmup, the config in the custom section should be enough.
plugins:
  - serverless-prune-plugin
  - serverless-plugin-warmup
custom:
  warmup:
    enabled: true
    concurrency: 5
    prewarm: true
    schedule: rate(2 minutes)
    source: { "type": "keepLambdaWarm" }
    timeout: 60
functions:
  myFunction:
    name: ${self:service}-${opt:stage}-${opt:version}
    handler: myHandler
    environment:
      FUNCTION_NAME: myFunction
    warmup: true
In AWS CloudWatch I only see one execution every 2 minutes. I would expect to see 5 executions every 2 minutes, or do I misunderstand something here?
EDIT:
Now, using the master branch, concurrency works, but the context that is delivered to the function to be warmed is broken. Using Spring Cloud Functions, I get "Error parsing Client Context as JSON".
Looking at the JS of the generated warmup function, the delivered source does not look right:
const functions = [{"name":"myFunction","config":{"enabled":true,"source":"\"\\\"{\\\\\\\"source\\\\\\\":\\\\\\\"serverless-plugin-warmup\\\\\\\"}\\\"\"","concurrency":3}}];
Config is:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    timeout: 60
Adding the property sourceRaw: true to the warmup config generates a clean source in the function JS:
const functions = [{"name":"myFunctionName","config":{"enabled":true,"source":"{\"type\":\"keepLambdaWarm\"}","concurrency":3}}];
Config:
custom:
  warmup:
    enabled: true
    concurrency: 3
    prewarm: true
    schedule: rate(5 minutes)
    source: { "type": "keepLambdaWarm" }
    sourceRaw: true
    timeout: 60
