OpenShift: Waiting for image stream to be created - bash

I am creating an installation script that will create resources from YAML files†. This script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created as follows:
imagestream for nginx-114-rhel7 - is-nginx.yaml
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
  labels:
    build: build-repo
  name: nginx-114-rhel7
  namespace: ns
spec:
  tags:
  - annotations: null
    from:
      kind: DockerImage
      name: registry.access.redhat.com/rhscl/nginx-114-rhel7
    name: latest
    referencePolicy:
      type: Source
imagestream for repo - is-repo.yaml
apiVersion: v1
kind: ImageStream
metadata:
  labels:
    application: is-rp
  name: is-rp
  namespace: ns
buildconfig for repo (output will be imagestream for repo) - bc-repo.yaml
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    build: rp
  name: bc-rp
  namespace: ns
spec:
  output:
    to:
      kind: ImageStreamTag
      name: 'is-rp:latest'
  postCommit: {}
  resources: {}
  runPolicy: Serial
  source:
    git:
      ref: dev_1.0
      uri: 'http://github.com/username/repo.git'
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: 'nginx-114-rhel7:latest'
        namespace: flo
    type: Source
  successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml; oc create -f is-repo.yaml; oc create -f bc-repo.yaml; oc start-build bc/bc-rp --wait
I get this error message,
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But, when I run the commands with a sleep before start-build, the build is triggered correctly.
oc create -f is-nginx.yaml; oc create -f is-repo.yaml; oc create -f bc-repo.yaml; sleep 5; oc start-build bc/bc-rp
How do I trigger start-build without adding a sleep command? oc wait seems to work only with --for=condition and --for=delete, and I do not know what value to use for --for=condition.
† - I do not see a clear guideline on creating installation scripts - with YAML or equivalent oc commands only - for deploying applications onto OpenShift.

Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your build config, you can add an image change trigger to start a build. To trigger on the image stream referenced by the build strategy:
type: "imageChange"
imageChange: {}
Or to trigger on a specific ImageStreamTag:
type: "imageChange"
imageChange:
  from:
    kind: "ImageStreamTag"
    name: "custom-image:latest"

oc wait --for=condition=available only works when the status object includes conditions, which is not the case for image streams, whose status looks like this:
status:
  dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
  tags:
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
      generation: 2
      image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
    tag: builder
  - items:
    - created: "2019-11-05T11:23:45Z"
      dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
      generation: 2
      image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
    tag: runtime
Until the OpenShift CLI implements a built-in wait command for image streams, what I used to do is request the imagestream object, parse the status object for the expected tag, and sleep for a second if it is not ready yet. Something like this:
until (oc get is nginx-114-rhel7 -o json || echo '{}') | jq --exit-status '[.status.tags[]? | select(.tag == "latest")] | length == 1'; do
  sleep 1
done
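If the image import never completes, that loop spins forever. A bounded variant of the same idea (the 60-iteration limit is an arbitrary choice for this sketch):
# Poll for the "latest" tag, but give up after roughly 60 seconds.
for i in $(seq 1 60); do
  (oc get is nginx-114-rhel7 -o json || echo '{}') \
    | jq --exit-status '[.status.tags[]? | select(.tag == "latest")] | length == 1' > /dev/null && break
  sleep 1
done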

Related

Env var not available in script/command in a kind Job (Kubernetes)

I run this Job:
kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: sample
spec:
  template:
    spec:
      containers:
      - command:
        - /bin/bash
        - -c
        - |
          env
          echo "MY_VAR : ${MY_VAR}"
          sleep 800000
        env:
        - name: MY_VAR
          value: MY_VALUE
        image: mcr.microsoft.com/azure-cli:2.0.80
        imagePullPolicy: IfNotPresent
        name: sample
      restartPolicy: Never
  backoffLimit: 4
EOF
But when I look at the log, the value of MY_VAR is empty even though env prints it:
$ kubectl logs -f sample-7p6bp
...
MY_VAR=MY_VALUE
...
MY_VAR :
Why does this line print an empty value for ${MY_VAR}?
echo "MY_VAR : ${MY_VAR}"
UPDATE: Tried the same with a simple pod:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  containers:
  - name: sample
    imagePullPolicy: Always
    command: ["/bin/sh", "-c", "echo BEGIN ${MY_VAR} END"]
    image: radial/busyboxplus:curl
    env:
    - name: MY_VAR
      value: MY_VALUE
EOF
Same/empty result:
$ kubectl logs -f sample
BEGIN END
The reason this happens is that your shell expands the variable ${MY_VAR} before the manifest is ever sent to Kubernetes. You can disable parameter expansion inside a heredoc by quoting the terminator:
kubectl apply -f - <<'EOF'
Adding these quotes should resolve your issue.
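Alternatively, if you still want other variables in the heredoc expanded by your local shell, you can escape just the dollar sign that should reach the container. A sketch based on the simple pod from the update:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sample
spec:
  containers:
  - name: sample
    image: radial/busyboxplus:curl
    # The backslash keeps \${MY_VAR} out of local expansion; the container shell expands it.
    command: ["/bin/sh", "-c", "echo BEGIN \${MY_VAR} END"]
    env:
    - name: MY_VAR
      value: MY_VALUE
EOF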

Update istio-ingressgateway with yaml instead of kubectl edit

I am testing a TCP-based service from a book...
To complete this task, I need to expose port 31400...
I found that I can do this using this command: KUBE_EDITOR="nano" kubectl edit svc istio-ingressgateway -n istio-system
and manually entering this:
- name: tcp
  nodePort: 30851
  port: 31400
  protocol: TCP
  targetPort: 31400
It works as expected, but how do I do the same task using YAML and kubectl apply?
Thanks for your help,
WCDR
1 - Get the current configuration:
$ kubectl get -n istio-system service istio-ingressgateway -o yaml
The output looks like:
apiVersion: v1
kind: Service
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {...,"kind":"Service",..."app":"istio-ingressgateway"...
  ...
  labels:
    app: istio-ingressgateway
  ...
spec:
  ...
  ports:
  ...
  >>>> insert block here <<<<
  selector:
    ...
  ...
2 - Patch it with yq or manually...
https://github.com/mikefarah/yq
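For example, with mikefarah yq v4 you could append the new port entry to the exported manifest. The file name istio-ingressgateway.yaml is just a placeholder for this sketch:
# Save the current Service, then append the tcp port entry in place.
kubectl get -n istio-system service istio-ingressgateway -o yaml > istio-ingressgateway.yaml
yq -i '.spec.ports += [{"name": "tcp", "nodePort": 30851, "port": 31400, "protocol": "TCP", "targetPort": 31400}]' istio-ingressgateway.yaml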
3 - Apply the change:
$ kubectl apply -n istio-system -f - <<EOF
apiVersion: v1
kind: Service
...
EOF
The output should be:
service/istio-ingressgateway configured
Enjoy...

Remove certain fields from a YAML map object using yq

I have something tricky that I need to do to a YAML file. This is how it looks before:
Before
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    creationTimestamp: '2022-02-097T19:511:11Z'
    finalizers:
    - cuv.ssf.com
    generation: 1
    name: bar
    namespace: foo
    resourceVersion: '12236'
    uid: 0117657e8
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
    domain: vst
    type: bar
  status:
    conditions:
    - lastTransitionTime: '2022-02-09T19:50:12Z'
      message: test
      observedGeneration: 1
      reason: Ready
      status: 'True'
      type: Ready
    lastOperation:
      description: test
      state: Succeeded
      type: Reconcile
https://codebeautify.org/yaml-validator/y22fe4943
I need to remove all the fields under the metadata section, and the tricky part is to keep only name and namespace.
In addition, the status section should be removed entirely. It should look like the following:
After
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    name: bar
    namespace: foo
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
    domain: vst
    type: bar
After link
https://codebeautify.org/yaml-validator/y220531ef
Using yq (https://github.com/mikefarah/yq/) version 4.19.1
Using mikefarah/yq (aka Go yq), you could do something like the below, using del to delete an entry and with_entries to select the known fields:
yq '.items[] |= (del(.status) | .metadata |= with_entries(select(.key == "name" or .key == "namespace")))' yaml
Starting with v4.18.1, eval is the default action, so the e flag can be omitted.
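If you would rather edit the file in place than print to stdout, the same expression works with the -i flag (file.yaml is a placeholder name for this sketch):
yq -i '.items[] |= (del(.status) | .metadata |= with_entries(select(.key == "name" or .key == "namespace")))' file.yaml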
This should work using the yq implementation from https://github.com/kislyuk/yq (not https://github.com/mikefarah/yq), with the -y (or -Y) flag:
yq -y '.items[] |= (del(.status) | .metadata |= {name, namespace})'
apiVersion: v1
items:
- apiVersion: core.k8s.com/v1alpha1
  kind: Test
  metadata:
    name: bar
    namespace: foo
  spec:
    certificateIssuer:
      acme:
        email: myemail
        provider:
          credentials: dst
          type: foo
    domain: vst
    type: bar

Filename of configMap shows up as env in the Pod

I have a file named config.txt, which I used to create the configmap myconfig inside a minikube cluster.
However, when I use myconfig in a Pod, the name of the file config.txt also shows up as part of the ENV.
How can I correct it?
> cat config.txt
var3=val3
var4=val4
> kubectl create cm myconfig --from-file=config.txt
configmap/myconfig created
> kubectl describe cm myconfig
Name: myconfig
Namespace: default
Labels: <none>
Annotations: <none>
Data
====
config.txt:
----
var3=val3
var4=val4
Events: <none>
Pod definition
> cat nginx.yml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: nginx
  name: nginx
spec:
  containers:
  - image: nginx
    name: nginx
    envFrom:
    - configMapRef:
        name: myconfig
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}
> kubectl create -f nginx.yml
pod/nginx created
Pod ENV inspection; notice the line config.txt=var3=val3 where I expected it to be just var3=val3:
> kubectl exec -it nginx -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx
TERM=xterm
config.txt=var3=val3
var4=val4
KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
KUBERNETES_SERVICE_HOST=10.96.0.1
KUBERNETES_SERVICE_PORT=443
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT=tcp://10.96.0.1:443
NGINX_VERSION=1.19.4
NJS_VERSION=0.4.4
PKG_RELEASE=1~buster
HOME=/root
Creating the configmap like this will do the job:
kubectl create cm myconfig --from-env-file=config.txt
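A quick way to verify the difference, reusing the config.txt and nginx.yml from the question:
# Recreate the configmap from an env file and inspect the pod environment again.
kubectl delete cm myconfig
kubectl create cm myconfig --from-env-file=config.txt
kubectl delete pod nginx
kubectl create -f nginx.yml
kubectl wait --for=condition=Ready pod/nginx
kubectl exec -it nginx -- env | grep 'var[34]'
# var3=val3
# var4=val4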

Kubernetes and spring boot variable parsing conflict

I have a conflict between Kubernetes and Spring Boot environment variables. The details are as follows:
When creating my zipkin server pod, I need to set the env variables RABBITMQ_HOST=http://172.16.100.83 and RABBITMQ_PORT=5672.
Initially I defined zipkin_pod.yaml as follows:
apiVersion: v1
kind: Pod
metadata:
  name: gearbox-rack-zipkin-server
  labels:
    app: gearbox-rack-zipkin-server
    purpose: platform-demo
spec:
  containers:
  - name: gearbox-rack-zipkin-server
    image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
    ports:
    - containerPort: 9411
    env:
    - name: EUREKA_SERVER
      value: http://172.16.100.83:31501
    - name: RABBITMQ_HOST
      value: http://172.16.100.83
    - name: RABBITMQ_PORT
      value: 31503
With this configuration, when I do command
kubectl apply -f zipkin_pod.yaml
The console throws this error:
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
Error from server (BadRequest): error when creating "zipkin_pod.yaml": Pod in version "v1" cannot be handled as a Pod: v1.Pod: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, parsing 1018 ...,"value":3... at {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"gearbox-rack-zipkin-server\",\"purpose\":\"platform-demo\"},\"name\":\"gearbox-rack-zipkin-server\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"EUREKA_SERVER\",\"value\":\"http://172.16.100.83:31501\"},{\"name\":\"RABBITMQ_HOST\",\"value\":\"http://172.16.100.83\"},{\"name\":\"RABBITMQ_PORT\",\"value\":31503}],\"image\":\"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server\",\"name\":\"gearbox-rack-zipkin-server\",\"ports\":[{\"containerPort\":9411}]}]}}\n"},"labels":{"app":"gearbox-rack-zipkin-server","purpose":"platform-demo"},"name":"gearbox-rack-zipkin-server","namespace":"default"},"spec":{"containers":[{"env":[{"name":"EUREKA_SERVER","value":"http://172.16.100.83:31501"},{"name":"RABBITMQ_HOST","value":"http://172.16.100.83"},{"name":"RABBITMQ_PORT","value":31503}],"image":"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server","name":"gearbox-rack-zipkin-server","ports":[{"containerPort":9411}]}]}}
So I modified the last line of the zipkin_pod.yaml file as follows, using brute force to make the port number an int:
apiVersion: v1
kind: Pod
metadata:
  name: gearbox-rack-zipkin-server
  labels:
    app: gearbox-rack-zipkin-server
    purpose: platform-demo
spec:
  containers:
  - name: gearbox-rack-zipkin-server
    image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
    ports:
    - containerPort: 9411
    env:
    - name: EUREKA_SERVER
      value: http://172.16.100.83:31501
    - name: RABBITMQ_HOST
      value: http://172.16.100.83
    - name: RABBITMQ_PORT
      value: !!31503
Then the pod is successfully created, but Spring's getProperties throws an exception.
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
pod "gearbox-rack-zipkin-server" created
When I check the logs:
[root@master3 sup]# kubectl logs gearbox-rack-zipkin-server
2018-05-28 07:56:26.792 INFO [zipkin-server,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@4ac68d3e: startup date [Mon May 28 07:56:26 UTC 2018]; root of context hierarchy
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.autoconfigure.amqp.RabbitProperties#324c64cd failed:
Property: spring.rabbitmq.port
Value:
Reason: Failed to convert property value of type 'java.lang.String' to required type 'int' for property 'port'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [int]
Action:
Update your application's configuration
My question is: how do I get Kubernetes to treat the port number as an int without breaking Spring Boot's string-to-int conversion? Spring Boot could not convert !!31503 to the int 31503.
As @Bal Chua and @Pär Nilsson mentioned, you can only use strings for environment variables, because Linux environment variables can only be strings.
So if you use YAML, you need to place the value in quotes to force Kubernetes to treat it as a string.
For example:
- name: RABBITMQ_PORT
  value: '31503'
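A quick client-side check that the quoted value now survives parsing as a string (assumes a kubectl recent enough to support --dry-run=client):
kubectl create --dry-run=client -o yaml -f zipkin_pod.yaml | grep -A 1 'name: RABBITMQ_PORT'
# Expected:
#   - name: RABBITMQ_PORT
#     value: "31503"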
