Run pipeline on Openshift Online - continuous-integration

I'm trying to run a simple Pipeline on Openshift Online. Here are my steps:
oc new-project ess
Content of bc.yaml:
kind: "BuildConfig"
apiVersion: "v1"
metadata:
name: "yngwuoso-pipeline"
spec:
source:
git:
uri: "https://github.com/yngwuoso/spring-boot-rest-example.git"
strategy:
type: JenkinsPipeline
oc create -f bc.yaml
The result is:
Error from server (Forbidden): error when creating "bc.yaml": buildconfigs.build.openshift.io "yngwuoso-pipeline" is forbidden: unrecognized build strategy: build.BuildStrategy{DockerStrategy:(*build.DockerBuildStrategy)(nil), SourceStrategy:(*build.SourceBuildStrategy)(nil), CustomStrategy:(*build.CustomBuildStrategy)(nil), JenkinsPipelineStrategy:(*build.JenkinsPipelineBuildStrategy)(nil)}
Can anyone tell me what's missing?

If you want to run a pipeline build based on Git source code, first create a BuildConfig with a source strategy for the Git repo, then create a BuildConfig with a pipeline strategy to control the whole build process.
For instance, here is a sample guide for your understanding; it might not work on your environment as-is, but you can customize the configuration below for your environment.
The BuildConfig for the source strategy (GitHub) is as follows:
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    app: yngwuoso-pipeline
  name: yngwuoso-git-build
spec:
  failedBuildsHistoryLimit: 5
  output:
    to:
      kind: ImageStreamTag
      name: yngwuoso-pipeline-image:latest
  runPolicy: Serial
  source:
    git:
      uri: https://github.com/yngwuoso/spring-boot-rest-example.git
    type: Git
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: redhat-openjdk18-openshift:1.3
        namespace: openshift
    type: Source
  triggers:
  - type: ConfigChange
  - type: ImageChange
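Note that the output ImageStreamTag above assumes an image stream named yngwuoso-pipeline-image already exists in the project; if it does not, you can create it up front (a minimal sketch using the name from the config above):
# create the image stream the source build will push to
oc create imagestream yngwuoso-pipeline-image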
The BuildConfig of the pipeline that triggers the above BuildConfig based on the Git repo:
apiVersion: v1
kind: BuildConfig
metadata:
  labels:
    name: yngwuoso-pipeline
  name: yngwuoso-pipeline
spec:
  runPolicy: Serial
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node(''){
          stage 'Build by S2I'
          openshiftBuild(namespace: 'PROJECT NAME', bldCfg: 'yngwuoso-git-build', showBuildLogs: 'true')
        }
    type: JenkinsPipeline
  triggers:
  - github:
      secret: gitsecret
    type: GitHub
  - generic:
      secret: genericsecret
    type: Generic
You should configure the GitHub webhook with the authentication secret defined in the pipeline BuildConfig; refer to GitHub Webhooks for more information.
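For example, once the pipeline BuildConfig exists you can read the generated webhook URL (which embeds the gitsecret value) from its description; a sketch, the exact host depends on your cluster:
oc describe bc yngwuoso-pipeline
# look for the "Webhook GitHub" URL, which follows this pattern:
# https://<api-host>/apis/build.openshift.io/v1/namespaces/<project>/buildconfigs/yngwuoso-pipeline/webhooks/<secret>/github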

Related

Image pulling issue on Kubernetes from private repository

I created registry credentials, and when I apply them to a Pod like this:
apiVersion: v1
kind: Pod
metadata:
  name: private-reg
spec:
  containers:
  - name: private-reg-container
    image: registry.io.io/simple-node
  imagePullSecrets:
  - name: regcred
it successfully pulls the image.
But if I try to do this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node123
  namespace: node123
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2
      maxUnavailable: 0
  selector:
    matchLabels:
      name: node123
  template:
    metadata:
      labels:
        name: node123
    spec:
      containers:
      - name: node123
        image: registry.io.io/simple-node
        ports:
        - containerPort: 3000
      imagePullSecrets:
      - name: regcred
the Pod gets an ImagePullBackOff error.
When I describe it, I get:
Failed to pull image "registry.io.io/simple-node": rpc error: code =
Unknown desc = Error response from daemon: Get
https://registry.io.io/v2/simple-node/manifests/latest: no basic auth
credentials
Anyone know how to solve this issue?
We always run images from a private registry, and this checklist might help you:
Put your parameters in environment variables in your terminal to have a single source of truth:
export DOCKER_HOST=registry.io.io
export DOCKER_USER=<your-user>
export DOCKER_PASS=<your-pass>
Make sure that you can authenticate and that the image really exists:
echo $DOCKER_PASS | docker login -u$DOCKER_USER --password-stdin $DOCKER_HOST
docker pull ${DOCKER_HOST}/simple-node
Make sure that you created the Docker config secret in the same namespace as the Pod/Deployment:
namespace=mynamespace # default
kubectl -n ${namespace} create secret docker-registry regcred \
  --docker-server=${DOCKER_HOST} \
  --docker-username=${DOCKER_USER} \
  --docker-password=${DOCKER_PASS} \
  --docker-email=anything@will.work.com
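Optionally, you can decode the secret to confirm that the server, user, and password landed in it as expected (a quick sanity check, not required):
kubectl -n ${namespace} get secret regcred \
  -o jsonpath='{.data.\.dockerconfigjson}' | base64 --decode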
Patch the service account used by the Pod with the secret:
namespace=mynamespace
kubectl -n ${namespace} patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
# if the pod uses another service account,
# replace "default" with the relevant service account
or
Add imagePullSecrets in the Pod spec:
imagePullSecrets:
- name: regcred
containers:
- ....
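If the image still fails to pull, the pod's events usually say whether the secret was used at all; for instance (the pod name here is just a placeholder):
kubectl -n ${namespace} describe pod <pod-name>          # check the Events section
kubectl -n ${namespace} get pod <pod-name> -o jsonpath='{.spec.imagePullSecrets}'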

Spring Boot - read container environment variables in properties file

I use:
Spring Boot
Microservices (containerized)
Docker
Kubernetes
My case is as follows:
I have to generate a link:
https://dev-myapp.com OR https://qa-myapp.com
depending on the environment in which my service is running (DEV, QA)
I have one Spring profile, but under this profile my app can run in Kubernetes in two types of environment: DEV or QA. I want to generate the proper link by reading it from my properties file:
@Value("${email.body}")
private String emailBody;
application.yaml:
email:
  body: Click on the following URL: ${ENVIRONMENT_URL:}/edge/invitation?code={0}&email={1}
DevOps (Kubernetes):
Manifest in the workloads folder (DEV branch; the same for the QA branch but this time with https://qa-myapp.com):
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
...
...
containers:
  env:
  - name: ENVIRONMENT_URL
    value: https://dev-myapp.com
So is it possible to read that value from the Kubernetes container in my Spring properties file? I want to get the email.body property depending on the container my service is running in.
Yes, this is possible. I have corrected the syntax of the YAML:
apiVersion: v1
kind: Service
...
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        command: ["/bin/sh", "-c", "env | grep ENVIRONMENT_URL"]
        env:
        - name: ENVIRONMENT_URL
          value: https://myapp.com   # indentation changed
        ports:
        - containerPort: 80
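To confirm that the variable actually reaches the container (and therefore Spring's ${ENVIRONMENT_URL:} placeholder), you can check the pod output; a minimal sketch assuming the deployment above and a reasonably recent kubectl that accepts the deploy/ prefix:
# the container command above prints the variable, so it appears in the pod logs
kubectl logs deploy/nginx-deployment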

Can't get pod and service running with generated deployment and service descriptors

Following Ryan Baxter's Spring On Kubernetes workshop, I run into a problem I can't resolve. On the "Deploying To Kubernetes" step, after generating the deployment.yaml and service.yaml files, I run
kubectl apply -f ./k8s
and I get validation errors:
error validating "k8s/deployment.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
error validating "k8s/service.yaml": error validating data: the server could not find the requested resource; if you choose to ignore these errors, turn validation off with --validate=false
After running
kubectl apply -f ./k8s --validate=false
I get
error: unable to recognize "k8s/deployment.yaml": no matches for extensions/, Kind=Deployment
service"my-app" created
And here is the yaml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: my-app
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: my-app
    spec:
      containers:
      - image: docker.io/my-id/my-app
        name: my-app
        resources: {}
status: {}
Based on Harsh's suggestion, I change the apiVersion to apps/v1 and run the kubectl apply command again.
deployment "my-app" created
service "my-app" configured
Based on what is shown in the watch, I run
kubectl port-forward svc/my-app 8080:80
where svc/my-app is shown in the watch. And it yields
error: invalid resource name svc/my-app: [may not contain '/']
To clean up, I run
kubectl delete -f ./k8s
And it yields
service "my-app" deleted
Error from server (NotFound): error when stopping "k8s/deployment.yaml": the server could not find the requested resource
I don't know whether these problems are caused by my operational errors or by some bugs.
Save this file and deploy it with kubectl apply -f filename.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-demo-app
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: k8s-demo-app
    spec:
      containers:
      - image: harbor.workshop.demo.ryanjbaxter.com/user1/k8s-demo-app
        name: k8s-demo-app
        resources: {}
status: {}
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: k8s-demo-app
  name: k8s-demo-app
spec:
  ports:
  - name: 80-8080
    port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: k8s-demo-app
  type: ClusterIP
status:
  loadBalancer: {}
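Once applied, port-forwarding by service name should work, provided kubectl is recent enough to accept the svc/ prefix; a sketch using the names from the manifest above:
kubectl apply -f filename.yaml
kubectl port-forward svc/k8s-demo-app 8080:80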
With help from Harsh and Chanseok, I upgrade the gcloud components, of which kubectl is one.
kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.2", GitCommit:"59603c6e503c87169aea6106f57b9f242f64df89", GitTreeState:"clean", BuildDate:"2020-01-18T23:30:10Z", GoVersion:"go1.13.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:07:13Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
I rerun those commands to deploy the server to the local cluster. It works!
I can't expose the service in the following step, though. An EXTERNAL-IP never shows up after the service.yaml modification. It is another problem.
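As for the EXTERNAL-IP: a Service of type ClusterIP never gets one; to expose it outside the cluster you would typically switch the type to LoadBalancer (a sketch, assuming the service above and a cluster that can provision load balancers):
kubectl patch svc k8s-demo-app -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc k8s-demo-app --watch   # wait for an EXTERNAL-IP to appear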

Unable to set proxy in SonarQube running in OpenShift(OKD)

I'm running the sonarqube-openshift-docker build of sonarqube. I need to set the proxy Sonar uses so it can get to the Marketplace and pull down a Java profile.
I've tried setting a deployment config env name/value pair:
JAVA_TOOLS_OPTIONS = "-Dhttp.proxyHost= -Dhttp.proxyPort= -Dhttps.proxyHost= -Dhttps.proxyPort="
I've also tried setting HTTP_PROXY_HOST, HTTP_PROXY_PORT, HTTPS_PROXY_HOST, HTTPS_PROXY_PORT name/value pairs.
All of these make it through to the environment on the container side, but Sonar isn't using those.
Changing the sonar.properties file in the container doesn't work since it's not persistent and gets stomped on with a restart.
I also tried adding it here, but that didn't work.
template:
  metadata:
    annotations:
      openshift.io/container.sonarqube.image.entrypoint: '["./bin/run.sh -Dhttp.proxyHost=<myProxy:port>"]'
I am guessing I need to pass it in somewhere in the YAML file, but I can't figure out where.
AFAIK you have to provide host and port in separate properties:
http.proxyHost=
http.proxyPort=
Take a look at the sonar.properties file here.
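To check what the running container actually has, you can inspect the file in place; a sketch, assuming a DeploymentConfig named sonarqube and the standard path inside the SonarQube image:
oc rsh dc/sonarqube grep -i proxy /opt/sonarqube/conf/sonar.properties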
Running SonarQube on OpenShift, I use a template that installs a ConfigMap holding the HTTP proxy configuration.
apiVersion: v1
kind: Template
metadata:
  name: sonarqube-template
objects:
[...]
- apiVersion: v1
  kind: ConfigMap
  metadata:
    name: ${APPLICATION_NAME}-conf
  data:
    sonar.properties: |-
      http.nonProxyHosts=${PROXY_EXCLUDE}
      http.proxyHost=${PROXY_HOST}
      http.proxyPort=${PROXY_PORT}
      https.proxyHost=${PROXY_HOST}
      https.proxyPort=${PROXY_PORT}
    wrapper.conf: |-
      wrapper.java.command=java
      wrapper.java.additional.1=-Dsonar.wrapped=true
      wrapper.java.additional.2=-Djava.awt.headless=true
[...]
- apiVersion: v1
  kind: DeploymentConfig
  [...]
          volumeMounts:
          - mountPath: /opt/sonarqube/conf
            name: ${APPLICATION_NAME}-conf
  [....]
        volumes:
        - configMap:
            defaultMode: 420
            name: ${APPLICATION_NAME}-conf
[...]
parameters:
- name: APPLICATION_NAME
  value: sonarqube
- name: PROXY_HOST
  value: proxy.example.com
- name: PROXY_PORT
  value: "3128"
- name: PROXY_EXCLUDE
  value: "*.internal.domain.example.com"

Kubernetes : error validating data: found invalid field env for v1.PodSpec;

I am using the YAML file below to create the Pod; the kubectl command gives the error below.
How do I correct this error?
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
  env:
  - name: MESSAGE
    value: "hello world"
  command: ["/bin/echo"]
  args: ["$(MESSAGE)"]
kubectl create -f commands.yaml
error: error validating "commands.yaml": error validating data: found invalid field env for v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
I followed the example from this page:
https://kubernetes.io/docs/tasks/configure-pod-container/define-command-argument-container/
Thanks,
-SR
Your YAML is syntactically correct, but it results in an incorrect data structure for Kubernetes. In YAML, indentation affects the structure of the data. See this.
I think this should be correct:
apiVersion: v1
kind: Pod
metadata:
  name: command-demo
  labels:
    purpose: demonstrate-command
spec:
  containers:
  - name: command-demo-container
    image: debian
    command: ["printenv"]
    args: ["HOSTNAME", "KUBERNETES_PORT"]
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
