I have a conflict between Kubernetes and Spring Boot environment variables. The details are as follows:
When creating my Zipkin server pod, I need to set the environment variables RABBITMQ_HOST=http://172.16.100.83 and RABBITMQ_PORT=5672.
Initially I defined zipkin_pod.yaml as follows:
apiVersion: v1
kind: Pod
metadata:
name: gearbox-rack-zipkin-server
labels:
app: gearbox-rack-zipkin-server
purpose: platform-demo
spec:
containers:
- name: gearbox-rack-zipkin-server
image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
ports:
- containerPort: 9411
env:
- name: EUREKA_SERVER
value: http://172.16.100.83:31501
- name: RABBITMQ_HOST
value: http://172.16.100.83
- name: RABBITMQ_PORT
value: 31503
With this configuration, when I run the command
kubectl apply -f zipkin_pod.yaml
the console throws an error:
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
Error from server (BadRequest): error when creating "zipkin_pod.yaml": Pod in version "v1" cannot be handled as a Pod: v1.Pod: Spec: v1.PodSpec: Containers: []v1.Container: v1.Container: Env: []v1.EnvVar: v1.EnvVar: Value: ReadString: expects " or n, parsing 1018 ...,"value":3... at {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"gearbox-rack-zipkin-server\",\"purpose\":\"platform-demo\"},\"name\":\"gearbox-rack-zipkin-server\",\"namespace\":\"default\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"EUREKA_SERVER\",\"value\":\"http://172.16.100.83:31501\"},{\"name\":\"RABBITMQ_HOST\",\"value\":\"http://172.16.100.83\"},{\"name\":\"RABBITMQ_PORT\",\"value\":31503}],\"image\":\"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server\",\"name\":\"gearbox-rack-zipkin-server\",\"ports\":[{\"containerPort\":9411}]}]}}\n"},"labels":{"app":"gearbox-rack-zipkin-server","purpose":"platform-demo"},"name":"gearbox-rack-zipkin-server","namespace":"default"},"spec":{"containers":[{"env":[{"name":"EUREKA_SERVER","value":"http://172.16.100.83:31501"},{"name":"RABBITMQ_HOST","value":"http://172.16.100.83"},{"name":"RABBITMQ_PORT","value":31503}],"image":"192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server","name":"gearbox-rack-zipkin-server","ports":[{"containerPort":9411}]}]}}
So I modified the last line of the zipkin_pod.yaml file as follows, using brute force to make the port number an int:
apiVersion: v1
kind: Pod
metadata:
name: gearbox-rack-zipkin-server
labels:
app: gearbox-rack-zipkin-server
purpose: platform-demo
spec:
containers:
- name: gearbox-rack-zipkin-server
image: 192.168.1.229:5000/gearboxrack/gearbox-rack-zipkin-server
ports:
- containerPort: 9411
env:
- name: EUREKA_SERVER
value: http://172.16.100.83:31501
- name: RABBITMQ_HOST
value: http://172.16.100.83
- name: RABBITMQ_PORT
value: !!31503
Then the pod is created successfully, but Spring throws an exception when binding the properties.
[root@master3 sup]# kubectl apply -f zipkin_pod.yaml
pod "gearbox-rack-zipkin-server" created
When I check the logs:
[root@master3 sup]# kubectl logs gearbox-rack-zipkin-server
2018-05-28 07:56:26.792 INFO [zipkin-server,,,] 1 --- [ main] s.c.a.AnnotationConfigApplicationContext : Refreshing org.springframework.context.annotation.AnnotationConfigApplicationContext@4ac68d3e: startup date [Mon May 28 07:56:26 UTC 2018]; root of context hierarchy
...
***************************
APPLICATION FAILED TO START
***************************
Description:
Binding to target org.springframework.boot.autoconfigure.amqp.RabbitProperties@324c64cd failed:
Property: spring.rabbitmq.port
Value:
Reason: Failed to convert property value of type 'java.lang.String' to required type 'int' for property 'port'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type [java.lang.String] to type [int]
Action:
Update your application's configuration
My question is: how do I get Kubernetes to accept the port number without breaking Spring Boot's string-to-int conversion? Spring Boot cannot convert !!31503 to the int 31503.
As @Bal Chua and @Pär Nilsson mentioned, for environment variables you can only use strings, because Linux environment variables can only be strings.
So, if you use YAML, you need to place the value in quotes to force Kubernetes to treat it as a string.
For example:
- name: RABBITMQ_PORT
value: '31503'
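For reference, here is a sketch of the full env block from the question with the port quoted; Spring Boot can still bind the numeric string "31503" to the int property spring.rabbitmq.port:
env:
  - name: EUREKA_SERVER
    value: http://172.16.100.83:31501
  - name: RABBITMQ_HOST
    value: http://172.16.100.83
  - name: RABBITMQ_PORT
    value: '31503'   # quoted, so the YAML scalar is a string, which is what Kubernetes requires for env values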
Related
I have a question, if possible:
(I work with GKE.)
This is my Kibana:
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: quickstart
spec:
version: 7.6.2
count: 1
elasticsearchRef:
name: cdbridgerpayelasticsearch
http:
service:
spec:
type: LoadBalancer
(It ran well with the LoadBalancer: ... in the browser.)
I wanted to publish it on https://some.thing.net,
so I made an Ingress:
apiVersion: v1
kind: Secret
metadata:
name: bcd-kibana-testsecret-tls
namespace: default
type: kubernetes.io/tls
data:
tls.crt: |
LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUREVENDQWZXZ0F3SUJBZ0lVVkk1ellBakR0
RWFyd0Zqd2xuTnlLeU0xMnNrd0RRWUpLb1pJaHZjTkFRRUwKQlFBd0ZqRVVNQklHQTFVRUF3d0xa
bTl2TG1KaGNpNWpiMjB3SGhjTk1qSXdOakUxTVRJeE16RTVXaGNOTWpNdwpOakUxTVRJeE16RTVX
akFXTVJRd0VnWURWUVFEREF0bWIyOHVZbUZ5TG1OdmJUQ0NBU0l3RFFZSktvWklodmNOCkFRRUJC
UUFEZ2dFUEFEQ0NBUW9DZ2dFQkFLMVRuTDJpZlluTUFtWEkrVmpXQU9OSjUvMGlpN0xWZlFJMjQ5
aVoKUXl5Zkw5Y2FQVVpuODZtY1NSdXZnV1JZa3dLaHUwQXpqWW9aQWR5N21TQ3ROMmkzZUZIRGdq
NzdZNEhQcFA1TQpKVTRzQmU5ZmswcnlVeDA3aU11UVA5T3pibGVzNHcvejJIaXcyYVA2cUl5ZFI3
bFhGSnQ0NXNSeDJ3THRqUHZCClErMG5UQlJrSndsQVZQYTdIYTN3RjBTSDJXL1dybTgrNlRkVGpG
MmFUanpkMFFXdy9hKzNoUU9HSnh4b1JxY1MKYmNNYmMrYitGeCtqZ3F2N0xuQ3R1Njd5L2tsb3ZL
Z2djdW41ZVNqY3krT0ZpdTNhY2hLVDlUeWJlSUxPY2FSYQp3KzJSeitNOUdTMWR2aUI0Q0dRNnlw
RDhkazc4bktja1FBam9QMXV5ZXJnR2hla0NBd0VBQWFOVE1GRXdIUVlEClZSME9CQllFRkpLUWps
KzI0TVJrNVpqTlB4ZVRnVU1pbE5xWk1COEdBMVVkSXdRWU1CYUFGSktRamwrMjRNUmsKNVpqTlB4
ZVRnVU1pbE5xWk1BOEdBMVVkRXdFQi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dF
QgpBSUp2Y1ZNclpEUEZ6TEhvZ3IyZklDN0E0TTB5WFREZXhONWNEZFFiOUNzVk0zUjN6bkZFU1Jt
b21RVVlCeFB3CmFjUVpWQ25qM0xGamRmeExBNkxrR0hhbjBVRjhDWnJ4ODRRWUtzQTU2dFpJWFVm
ZXRIZk1zOTZsSE5ROW5samsKT3RoazU3ZkNRZVRFMjRCU0RIVDJVL1hhNjVuMnBjcFpDU2FYWStF
SjJaWTBhZjlCcVBrTFZud3RTQ05lY0JLVQp3N0RCM0o4U2h1Z0FES21xU2VJM1R2N015SThvNHJr
RStBdmZGTUw0YlpFbS9IWW4wNkxQdVF3TUE1cndBcFN3CnlDaUJhcjRtK0psSzNudDRqU0hGeU4x
N1g1M1FXcnozQTNPYmZSWXI0WmJObE8zY29ObzFiQnd3eWJVVmgycXoKRGIrcnVWTUN5WjBTdXlE
OGZURFRHY1E9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
tls.key: |
LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZB
QVNDQktjd2dnU2pBZ0VBQW9JQkFRQ3RVNXk5b24ySnpBSmwKeVBsWTFnRGpTZWY5SW91eTFYMENO
dVBZbVVNc255L1hHajFHWi9PcG5Fa2JyNEZrV0pNQ29idEFNNDJLR1FIYwp1NWtnclRkb3QzaFJ3
NEkrKzJPQno2VCtUQ1ZPTEFYdlg1Tks4bE1kTzRqTGtEL1RzMjVYck9NUDg5aDRzTm1qCitxaU1u
VWU1VnhTYmVPYkVjZHNDN1l6N3dVUHRKMHdVWkNjSlFGVDJ1eDJ0OEJkRWg5bHYxcTV2UHVrM1U0
eGQKbWs0ODNkRUZzUDJ2dDRVRGhpY2NhRWFuRW0zREczUG0vaGNmbzRLcit5NXdyYnV1OHY1SmFM
eW9JSExwK1hrbwozTXZqaFlydDJuSVNrL1U4bTNpQ3puR2tXc1B0a2MvalBSa3RYYjRnZUFoa09z
cVEvSFpPL0p5bkpFQUk2RDliCnNucTRCb1hwQWdNQkFBRUNnZ0VCQUpyb3FLUy83aTFTM1MyMVVr
MTRickMxSkJjVVlnRENWNGk4SUNVOHpWRzcKTUZteVJPT0JFc0FiUXlmd1V0ZXBaaktxODUwc3Rp
cWZzUTlqeHpieU9SeHBKYXNGN29sMXluaUJhYmd4dkFIQwp6TWNsQjVLclEyZFVCeTNRVFl0YXla
cW9sUU56NzV2bWk0M0lBQTQwbjU3aFdqU2QrTG5IL0hNQWRzbW04SnVwCjA5d3VHMGltdEhQYm5B
TERnY2N3V04zY2xxU0FtR3pxbW9kOEE1YjBWZTQwZHhjTXh3TldaN0JqOTBnamNWYnQKVU1aaFhp
T2E2bnNSSHZDQjF0b0lmZVBuOEdzRk9nOUVqUHZDTWM4QWZKVW84Qk5TK2N0RmF3RjRWaUUyUHlB
VgpxZzR1MHhBQ2Qya28zUUtpZFpsbjAyZkc2ZWg1UmxGYzdsL1RiWWxQY2xFQ2dZRUEyU0NvK2pJ
MUZ1WkZjSFhtCm1Ra2Z3Q0Vvc28xd3VTV3hRWDBCMnJmUmJVbS9xdXV4Rm92MUdVc0hwNEhmSHJu
RWRuSklqVUw4bWxpSjFFUTkKVUpxZC9SVkg1OTdZZ2Zib1dCbHh0cVhIRFRObjNIU3JzQmJlQTh6
NXFMZjE2QzZaa3U3YmR3L2pxazJVaFgrcwp6T3piYTVqYVVYU0pXYXpzRmZoZjdSMlZ3UTBDZ1lF
QXpGdDZBTDNYendBSi9EcXU5QlVuaTIxemZXSUp5QXFwCnBwRnNnQUczVkVYZWFRMjVGcjV6cURn
dlBWRkF5QUFYMU9TL3pHZVcxaDQ0SERzQjRrVmdxVlhHSTdoQUV1RjYKRlgra1M5Uk5QdmFsYXdQ
cXp4VTdPQmcvUis5NVB1NW1oNnFrRWVUekM2T21ZUGRGbmVQcWxKZk03YU43OEhjOApGVU1xRTBa
NkNVMENnWUJJUUVYNmU1cU85REZIS3ZTQkdEZ29odUEwQ2p6b1gxS01xRHhsdTZWRTZMV08rcjhD
CjhhK3Rxdm54RTVaYmN4V2RGSXB2OTBwM1VkOExjMm16MkwrWjUrcjFqWUllUFRzemxjUHhNMWo1
VzVIRUdrN0gKV2RTbkR4NUV0bkp0d0pQNkFPR216UExGU091VFFOa1BtQUdyM0VGSnVhMjYyWC8y
RDZCY0Z1d3VRUUtCZ0V0aApHcm1YVFRsdnZEOHJya2tlWEgzVG01d09RNmxrTlh2WmZIb2pKK3FQ
OHlBeERhclVDWGx0Y0E5Z0gxTW1wYVBECjFQT2k2a0tFMXhHaXVta3FTaU5zSGpBaTBJK21XQkFD
Q3lwbFh6RHdiY2Z4by9WSzBaTTVibTRzYVQ3TFZVcUoKcVFkb3VqWDY0VzQzQjVqYjd6VnNZUXp2
RnRKMlNOVlc5dmd4TU9hcEFvR0FDaHphN3BMWWhueFA5QWVUL1pxZwpvUWM0ckh1SDZhTDMveCtq
dlpRVXdiSitKOWZGY05pcTFZRlJBL2RJSEJWcGZTMWJWR3N3OW9MT2tsTWxDckxRCnJKSjkzWlRu
dFdSQ1FCOTNMbXhoZmJuRk9CemZtbHZYTjFJeE1FNStVbVZRQmRLaS9YMktMZnFSUW5HVTExL2UK
NytFcXFFbllrWTBraE1HL0xqRlRTU0E9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: tls-bcd-ingress
spec:
tls:
- hosts:
- some.thing.net
secretName: bcd-kibana-testsecret-tls
rules:
- host: some.thing.net
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: quickstart-kb-http
port:
number: 5601
(P.S. To get the tls.crt and the tls.key I ran this command in the GCP CLI:
kubectl create secret tls bcdsecret --key="tls.key" --cert="tls.crt"
and then:
cat tls.crt | base64 and cat tls.key | base64)
But the Ingress failed.
When I ran kubectl get ingress I got:
$ kubectl get ingress
W0615 21:28:17.715766 604 gcp.go:120] WARNING: the gcp auth plugin is deprecated in v1.22+, unavailable in v1.25+; use gcloud instead.
To learn more, consult https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
NAME CLASS HOSTS ADDRESS PORTS AGE
...
tls-bcd-ingressbcd some.thing.net 34.120.185.85 80, 443 35m
But when I opened this hostname in the browser, I got nothing.
What am I supposed to do?
Thank you!
Or, in other words:
how can I publish Kibana on the hostname?
Thanks 🙂
Frida
I've written a node exporter in golang named "my-node-exporter" with some collectors to show metrics. From my cluster, I can view my metrics just fine with the following:
kubectl port-forward my-node-exporter-999b5fd99-bvc2c 9090:8080 -n kube-system
localhost:9090/metrics
However, when I try to view my metrics in the Prometheus dashboard
kubectl port-forward prometheus-prometheus-operator-158978-prometheus-0 9090
localhost:9090/graph
my metrics are nowhere to be found and I can only see the default metrics. Am I missing a step for getting my metrics onto the graph?
Here are the pods in my default namespace, which has my Prometheus stack in it.
pod/alertmanager-prometheus-operator-158978-alertmanager-0 2/2 Running 0 85d
pod/grafana-1589787858-fd7b847f9-sxxpr 1/1 Running 0 85d
pod/prometheus-operator-158978-operator-75f4d57f5b-btwk9 2/2 Running 0 85d
pod/prometheus-operator-1589787700-grafana-5fb7fd9d8d-2kptx 2/2 Running 0 85d
pod/prometheus-operator-1589787700-kube-state-metrics-765d4b7bvtdhj 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-bwljh 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-nb4fv 1/1 Running 0 85d
pod/prometheus-operator-1589787700-prometheus-node-exporter-rmw2f 1/1 Running 0 85d
pod/prometheus-prometheus-operator-158978-prometheus-0 3/3 Running 1 85d
I used Helm to install the Prometheus Operator.
EDIT: adding my YAML files
# Configuration to deploy
#
# example usage: kubectl create -f <this_file>
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: my-node-exporter-sa
namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-binding
subjects:
- kind: ServiceAccount
name: my-node-exporter-sa
namespace: kube-system
roleRef:
kind: ClusterRole
name: my-node-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: my-node-exporter-role
rules:
- apiGroups: [""]
resources: ["secrets"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
---
#####################################################
############ Service ############
#####################################################
kind: Service
apiVersion: v1
metadata:
name: my-node-exporter-svc
namespace: kube-system
labels:
app: my-node-exporter
spec:
ports:
- name: my-node-exporter
port: 8080
targetPort: metrics
protocol: TCP
selector:
app: my-node-exporter
---
#########################################################
############ Deployment ############
#########################################################
kind: Deployment
apiVersion: apps/v1
metadata:
name: my-node-exporter
namespace: kube-system
spec:
selector:
matchLabels:
app: my-node-exporter
replicas: 1
template:
metadata:
labels:
app: my-node-exporter
spec:
serviceAccount: my-node-exporter-sa
containers:
- name: my-node-exporter
image: locationofmyimagehere
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
imagePullPolicy: Always
ports:
- containerPort: 8080
volumeMounts:
- name: log-dir
mountPath: /var/log
volumes:
- name: log-dir
hostPath:
path: /var/log
ServiceMonitor YAML:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: my-node-exporter-service-monitor
labels:
app: my-node-exporter-service-monitor
spec:
selector:
matchLabels:
app: my-node-exporter
matchExpressions:
- {key: app, operator: Exists}
endpoints:
- port: my-node-exporter
namespaceSelector:
matchNames:
- default
- kube-system
Prometheus YAML:
# Prometheus will use selected ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: my-node-exporter
labels:
team: frontend
spec:
serviceMonitorSelector:
matchLabels:
app: my-node-exporter
matchExpressions:
- key: app
operator: Exists
You need to explicitly tell Prometheus which metrics to collect, and where from, by first creating a Service that points to your my-node-exporter pods (if you haven't already), and then a ServiceMonitor, as described in the Prometheus Operator docs; search for the phrase "This Service object is discovered by a ServiceMonitor".
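For orientation only, a minimal ServiceMonitor that lines up with the resources in the question might look like the sketch below (the name is illustrative). The key points are that its own metadata.labels must be matched by the serviceMonitorSelector of the Prometheus resource, and its spec.selector must match the labels on the Service rather than on the pods:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-node-exporter              # illustrative name
  labels:
    app: my-node-exporter             # matched by serviceMonitorSelector (app: my-node-exporter) in the Prometheus resource
spec:
  selector:
    matchLabels:
      app: my-node-exporter           # matches the labels on my-node-exporter-svc
  endpoints:
    - port: my-node-exporter          # the *name* of the Service port, not the port number
  namespaceSelector:
    matchNames:
      - kube-system                   # the namespace where my-node-exporter-svc lives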
Getting a Deployment/Service/ServiceMonitor/PrometheusRule working with the Prometheus Operator requires great care.
So I created a Helm chart repo, kehao95/helm-prometheus-exporter, to install any Prometheus exporter, including your custom exporter; you can try it out.
It will create not only the exporter Deployment but also the Service/ServiceMonitor/PrometheusRule for you.
Install the chart:
helm repo add kehao95 https://kehao95.github.io/helm-prometheus-exporter/
Create a values file my-exporter.yaml for kehao95/prometheus-exporter:
exporter:
image: your-exporter
tag: latest
port: 8080
args:
- "--telemetry.addr=8080"
- "--telemetry.path=/metrics"
Install it with Helm:
helm install --namespace yourns my-exporter kehao95/prometheus-exporter -f my-exporter.yaml
Then you should see your metrics in Prometheus.
I am trying to run KEDA with RabbitMQ and Spring Boot, but it is not working: KEDA is not generating the Kubernetes HPA object.
I tried the sample code provided by KEDA (written in Go) and it works fine.
My producer/consumer code is written in Spring Boot. When I apply KEDA, it does not scale the RabbitMQ consumers (it does not even create the HPA object).
https://github.com/sky29/rabbitmq-k8s-broker-publisher-consumer
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot/tree/master/app/myclients
https://github.com/sky29/rabbitmq-k8s-keda-spring-boot/blob/master/app/04_scaled-object-new.yaml
I was able to achieve this. Sharing in case anyone is facing the same issue.
I am using KEDA version 2.8.2.
apiVersion: v1
kind: Secret
metadata:
name: keda-rabbitmq-secret
namespace: default
data:
host: <HTTP API endpoint> # base64 encoded value of format http://guest:password@localhost:15672/vhost
---
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
name: trigger-auth-rabbitmq-conn
namespace: default
spec:
secretTargetRef:
- parameter: host
name: keda-rabbitmq-secret
key: host
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
name: test-analysis
namespace: default
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: test-celery # Mandatory. Must be in the same namespace as the ScaledObject
pollingInterval: 10 # Optional. Default: 10 seconds
cooldownPeriod: 3600 # Optional. Default: 300 seconds
minReplicaCount: 1 # Optional. Default: 0
maxReplicaCount: 2 # Optional. Default: 100
fallback: # Optional. Section to specify fallback options
failureThreshold: 3 # Mandatory if fallback section is included
replicas: 1 # Mandatory if fallback section is included
advanced: # Optional. Section to specify advanced options
restoreToOriginalReplicaCount: true # Optional. Default: false
horizontalPodAutoscalerConfig: # Optional. Section to specify HPA related options
name: keda-hpa-auto-analysis # Optional. Default: keda-hpa-{scaled-object-name}
behavior: # Optional. Use to modify HPA's scaling behavior
scaleDown:
stabilizationWindowSeconds: 600
policies:
- type: Percent
value: 100
periodSeconds: 15
triggers:
- type: rabbitmq
metadata:
protocol: amqp
queueName: test_queue
queueLength: "1"
activationValue: "0"
authenticationRef:
name: trigger-auth-rabbitmq-conn
Reference: https://keda.sh/docs/2.9/scalers/rabbitmq-queue/
I am creating an installation script that will create resources from YAML files†. The script will do the equivalent of this command:
oc new-app registry.access.redhat.com/rhscl/nginx-114-rhel7~http://github.com/username/repo.git
Three YAML files were created, as follows:
ImageStream for nginx-114-rhel7 (is-nginx.yaml):
apiVersion: image.openshift.io/v1
kind: ImageStream
metadata:
labels:
build: build-repo
name: nginx-114-rhel7
namespace: ns
spec:
tags:
- annotations: null
from:
kind: DockerImage
name: registry.access.redhat.com/rhscl/nginx-114-rhel7
name: latest
referencePolicy:
type: Source
ImageStream for the repo (is-repo.yaml):
apiVersion: v1
kind: ImageStream
metadata:
labels:
application: is-rp
name: is-rp
namespace: ns
BuildConfig for the repo, whose output is the repo ImageStream (bc-repo.yaml):
apiVersion: v1
kind: BuildConfig
metadata:
labels:
build: rp
name: bc-rp
namespace: ns
spec:
output:
to:
kind: ImageStreamTag
name: 'is-rp:latest'
postCommit: {}
resources: {}
runPolicy: Serial
source:
git:
ref: dev_1.0
uri: 'http://github.com/username/repo.git'
type: Git
strategy:
sourceStrategy:
from:
kind: ImageStreamTag
name: 'nginx-114-rhel7:latest'
namespace: flo
type: Source
successfulBuildsHistoryLimit: 5
When these commands are run one after another,
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;oc start-build bc/bc-rep --wait
I get this error message:
The ImageStreamTag "nginx-114-rhel7:latest" is invalid: from: Error resolving ImageStreamTag nginx-114-rhel7:latest in namespace ns: unable to find latest tagged image
But, when I run the commands with a sleep before start-build, the build is triggered correctly.
oc create -f is-nginx.yaml;oc create -f is-repo.yaml;oc create -f bc-repo.yaml;sleep 5;oc start-build bc/bc-rep
How do I trigger start-build without adding a sleep? oc wait seems to work only with --for=condition and --for=delete, and I do not know what value to use for --for=condition.
† I have not found a clear guideline on creating installation scripts, with YAML or equivalent oc commands only, for deploying applications onto OpenShift.
Instead of running oc start-build, you should look into Image Change Triggers and Configuration Change Triggers.
In your BuildConfig, you can point to an ImageStreamTag to start a build:
type: "imageChange"
imageChange: {}
type: "imageChange"
imageChange:
from:
kind: "ImageStreamTag"
name: "custom-image:latest"
oc wait --for=condition=available only works when the status object includes conditions, which is not the case for ImageStreams.
status:
dockerImageRepository: image-registry.openshift-image-registry.svc:5000/test/s2i-openresty-centos7
tags:
- items:
- created: "2019-11-05T11:23:45Z"
dockerImageReference: quay.io/openresty/openresty-centos7@sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
generation: 2
image: sha256:db224d642ad4001ab971b934a6444da16bb87ddfcc9c048bbf68febafcac52db
tag: builder
- items:
- created: "2019-11-05T11:23:45Z"
dockerImageReference: quay.io/openresty/openresty-centos7@sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
generation: 2
image: sha256:082ee75ed83f161375d0d281f235b7271176b1d129f5ed9972c7d31923e08660
tag: runtime
Until the OpenShift CLI implements a built-in wait command for ImageStreams, what I do is request the ImageStream object, parse its status for the expected tag, and sleep a few seconds if it is not there yet. Something like this:
until (oc get is nginx-114-rhel7 -o json || echo '{}') | jq --exit-status '[.status.tags[] | select(.tag == "latest")] | length == 1'; do
  sleep 1
done
I'm running mvn fabric8:resource and I'm getting this output:
...
[INFO] F8: validating /home/jcabre/projects/tdevhub/application-src/t-devhub/tdev-wsec-service/target/classes/META-INF/fabric8/openshift/tdev-wsec-service-deploymentconfig.yml resource
[WARNING] F8: Invalid Resource : /home/jcabre/projects/tdevhub/application-src/t-devhub/tdev-wsec-service/target/classes/META-INF/fabric8/openshift/tdev-wsec-service-deploymentconfig.yml
[message=.spec.template.spec.containers[0].name: does not match the regex pattern ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$, violation type=pattern]
[INFO] F8: validating /home/jcabre/projects/tdevhub/application-src/t-devhub/tdev-wsec-service/target/classes/META-INF/fabric8/kubernetes/tdev-wsec-service-deployment.yml resource
[WARNING] F8: Invalid Resource : /home/jcabre/projects/tdevhub/application-src/t-devhub/tdev-wsec-service/target/classes/META-INF/fabric8/kubernetes/tdev-wsec-service-deployment.yml
...
I can't quite figure out what's wrong. The content of tdev-wsec-service-deploymentconfig.yml is:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
fabric8.io/git-commit: 4bb3b53369213a4b4d9940d49aa47c9df4a2f611
fabric8.io/iconUrl: img/icons/spring-boot.svg
fabric8.io/git-branch: master
fabric8.io/metrics-path: dashboard/file/kubernetes-pods.json/?var-project=tdev-wsec-service&var-version=0.0.1-SNAPSHOT
fabric8.io/scm-tag: HEAD
fabric8.io/scm-url: https://github.com/spring-projects/spring-boot/spring-boot-starter-parent/t-devhub/tdev-wsec-service
labels:
app: tdev-wsec-service
provider: fabric8
version: 0.0.1-SNAPSHOT
group: com.raw.io
name: tdev-wsec-service
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
app: tdev-wsec-service
provider: fabric8
group: com.raw.io
template:
metadata:
annotations:
fabric8.io/git-commit: 4bb3b53369213a4b4d9940d49aa47c9df4a2f611
fabric8.io/metrics-path: dashboard/file/kubernetes-pods.json/?var-project=tdev-wsec-service&var-version=0.0.1-SNAPSHOT
fabric8.io/scm-url: https://github.com/spring-projects/spring-boot/spring-boot-starter-parent/t-devhub/tdev-wsec-service
fabric8.io/iconUrl: img/icons/spring-boot.svg
fabric8.io/git-branch: master
fabric8.io/scm-tag: HEAD
labels:
app: tdev-wsec-service
provider: fabric8
version: 0.0.1-SNAPSHOT
group: com.raw.io
spec:
containers:
- env:
- name: KUBERNETES_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: wsec:0.0.1-SNAPSHOT
imagePullPolicy: IfNotPresent
name: com.raw.io-tdev-wsec-service
securityContext:
privileged: false
.spec.template.spec.containers[0].name is com.raw.io-tdev-wsec-service, which does not match ^[a-z0-9]([-a-z0-9]*[a-z0-9])?$: characters such as '.' and '_' are not allowed in a container name.
This is not a fabric8 limitation; I tried running a similar pod definition with kubectl and got the same message.
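For illustration, here is a sketch of the relevant container spec with a name that satisfies the pattern (lowercase alphanumerics and '-' only); how the generated name is actually configured depends on your fabric8 setup, so treat the name below as an assumption:
spec:
  containers:
    - name: tdev-wsec-service           # valid: no '.' or '_' in a container name
      image: wsec:0.0.1-SNAPSHOT
      imagePullPolicy: IfNotPresent
      env:
        - name: KUBERNETES_NAMESPACE    # env var names may contain '_'; only the container name is restricted
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace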