Run Oracle 12c on Kubernetes

I am trying to run Oracle 12c on Kubernetes.
When I start it as a docker container for the FIRST TIME, I have to wait until it finishes configuring itself.
The configuration completes successfully and runs the scripts in the configDBora.sh that I added.
BUT I cannot log in as the user I created until I restart the container at LEAST ONCE.
How can I do that in Kubernetes? There is no stop command, and changing the replicas of my deployment to 0 just deletes the existing pod.
What can I add to configDBora.sh, or do otherwise, to restart a container with Oracle on Kubernetes?
The end of the script in configDBora.sh is:
# A lot of script here
...
echo "$DB_PDB = \
(DESCRIPTION = \
(ADDRESS = (PROTOCOL = TCP)(HOST = 0.0.0.0)(PORT = 1521)) \
(CONNECT_DATA = \
(SERVER = DEDICATED) \
(SERVICE_NAME = $DB_PDB.$DB_DOMAIN) \
) \
) \
" >> $TNS_ORA
# start listener
lsnrctl start
# clean
unset DB_PASSWD
history -w
history -c
echo ""
echo "DONE!"
# end
Deployment oracle:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.26.0 (40646f47)
creationTimestamp: null
labels:
io.kompose.service: oracle
name: oracle
spec:
replicas: 1
selector:
matchLabels:
io.kompose.service: oracle
strategy: {}
template:
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.26.0 (40646f47)
creationTimestamp: null
labels:
io.kompose.service: oracle
spec:
containers:
- env:
- name: ORACLE_MEM
value: "2000"
- name: ORACLE_PDB
value: ORCLPDB
- name: ORACLE_PWD
value: Admin123
- name: ORACLE_SID
value: ORCLCDB
image: maxprimeaery/oracle-12c-base:5.0
name: oracle
ports:
- containerPort: 1521
# resources:
# limits:
# memory: "1073741824"
restartPolicy: Always
status: {}
Service:
apiVersion: v1
kind: Service
metadata:
annotations:
kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.26.0 (40646f47)
creationTimestamp: null
labels:
io.kompose.service: oracle
name: oracle
spec:
type: LoadBalancer
ports:
- name: "1521"
port: 1521
targetPort: 1521
selector:
io.kompose.service: oracle
status:
loadBalancer: {}

In this case, you can try to restart your deployment:
kubectl rollout restart deployment <deployment_name>
It performs a graceful shutdown and restart of the containers without interrupting the service.
Please check What is Kubectl Rollout Restart? for more information about this method.
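If you want to follow the rollout while the pods are being replaced, you can, for example, run:
kubectl rollout status deployment <deployment_name>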
Edit: This is the way Kubernetes is supposed to work: it automatically replaces the pod with a new one as soon as the container is restarted. Modifying a pod from the inside is not common practice, but you can try to force a reboot of the container inside a particular pod with:
kubectl exec -it <pod_name> -c <container_name> -- /sbin/reboot
Alternatively, you can force-terminate the container's main process (depending on your container settings):
kubectl exec -it <pod_name> -c <container_name> -- /bin/sh -c "kill 1"
Once the container is terminated, it will be restarted after a few seconds in the same pod.
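To confirm that the container really was restarted in place (rather than the whole pod being replaced), one way is to check its restart count, for example:
kubectl get pod <pod_name> -o jsonpath='{.status.containerStatuses[0].restartCount}'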

The only problem was the 12c version of Oracle itself.
I moved my DB to Oracle 19c Enterprise Edition and the problem was solved. With Oracle 19c I don't need to restart the pod, but it needs more RAM and disk space.
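For reference, raising the memory in the commented-out resources block of the deployment above would look something like this; the values are only assumptions and should be sized for your workload:
resources:
  requests:
    memory: "4Gi"
  limits:
    memory: "8Gi"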

Related

Image pulling issue on Kubernetes from private repository

I created registry credentials, and when I use them in a pod like this:
apiVersion: v1
kind: Pod
metadata:
name: private-reg
spec:
containers:
- name: private-reg-container
image: registry.io.io/simple-node
imagePullSecrets:
- name: regcred
it pulls the image successfully.
But if I try to do this:
apiVersion: apps/v1
kind: Deployment
metadata:
name: node123
namespace: node123
spec:
replicas: 5
strategy:
type: RollingUpdate
rollingUpdate:
maxSurge: 2
maxUnavailable: 0
selector:
matchLabels:
name: node123
template:
metadata:
labels:
name: node123
spec:
containers:
- name: node123
image: registry.io.io/simple-node
ports:
- containerPort: 3000
imagePullSecrets:
- name: regcred
The pod gets an ImagePullBackOff error, and when I describe it I see:
Failed to pull image "registry.io.io/simple-node": rpc error: code =
Unknown desc = Error response from daemon: Get
https://registry.io.io/v2/simple-node/manifests/latest: no basic auth
credentials
Anyone know how to solve this issue?
We always run images from a private registry, and this checklist might help you:
Put your params in environment variables in your terminal to have a single source of truth:
export DOCKER_HOST=registry.io.io
export DOCKER_USER=<your-user>
export DOCKER_PASS=<your-pass>
Make sure that you can authenticate and that the image really exists:
echo $DOCKER_PASS | docker login -u$DOCKER_USER --password-stdin $DOCKER_HOST
docker pull ${DOCKER_HOST}/simple-node
Make sure that you created the docker-registry secret in the same namespace as the pod/deployment:
namespace=mynamespace # default
kubectl -n ${namespace} create secret docker-registry regcred \
--docker-server=${DOCKER_HOST} \
--docker-username=${DOCKER_USER} \
--docker-password=${DOCKER_PASS} \
--docker-email=anything@will.work.com
Patch the service account used by the pod with the secret:
namespace=mynamespace
kubectl -n ${namespace} patch serviceaccount default \
-p '{"imagePullSecrets": [{"name": "regcred"}]}'
# if the pod use another service account,
# replace "default" by the relevant service account
Or add imagePullSecrets directly in the pod spec:
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - ....
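To double-check that the secret exists in the right namespace and that your pods actually reference it, something like this can help (using the regcred name and namespace variable from above):
kubectl -n ${namespace} get secret regcred
kubectl -n ${namespace} get pod <pod_name> -o jsonpath='{.spec.imagePullSecrets}'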

Volume mounts not working with Kubernetes, WSL 2 and Docker

I am unable to properly mount volumes using hostPath in Kubernetes running on Docker with WSL 2. This seems to be a WSL 2 issue when mounting volumes in Kubernetes running in Docker. Does anyone know how to fix this?
Here are the steps:
Deploy debug build to Kubernetes for my app.
Attach Visual Studio Code using the Kubernetes extension
Navigate to the project folder for my application that was attached using the volume mount <= Problem Right Here
When you go and look at the volume mount nothing is there.
C:\Windows\System32>wsl -l -v
NAME STATE VERSION
Ubuntu Running 2
docker-desktop-data Running 2
docker-desktop Running 2
Docker Desktop v2.3.0.3
Kubernetes v1.16.5
Visual Studio Code v1.46.1
====================================================================
Dockerfile
====================================================================
#
# Base image for deploying and running based on Ubuntu
#
# Support ASP.NET and does not include .NET SDK or NodeJs
#
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-bionic AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
#
# Base image for building .NET based on Ubuntu
#
# 1. Uses .NET SDK image as the starting point
# 2. Restore NuGet packages
# 3. Build the ASP.NET Core application
#
# Destination is /app/build which is copied to /app later on
#
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS build
WORKDIR /src
COPY ["myapp.csproj", "./"]
RUN dotnet restore "./myapp.csproj"
COPY . .
WORKDIR "/src/."
RUN dotnet build "myapp.csproj" -c Release -o /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-bionic AS debug
RUN curl --silent --location https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get install --yes nodejs
ENTRYPOINT [ "sleep", "infinity" ]
#
# Base image for building React based on Node/Ubuntu
#
# Destination is /app/ClientApp/build which is copied to /clientapp later
#
# NOTE: npm run build puts the output in the build directory
#
FROM node:12.18-buster-slim AS clientbuild
WORKDIR /src
COPY ./ClientApp /app/ClientApp
WORKDIR "/app/ClientApp"
RUN npm install
RUN npm run build
#
# Copy clientbuild:/app/ClientApp to /app/ClientApp
#
# Copy build:/app to /app
#
FROM base as final
WORKDIR /app/ClientApp
COPY --from=clientbuild /app/ClientApp .
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
====================================================================
Kubernetes Manifest
====================================================================
apiVersion: apps/v1
kind: Deployment
metadata:
name: myapp
spec:
selector:
matchLabels:
app: myapp
replicas: 1
template:
metadata:
labels:
app: myapp
spec:
containers:
- name: myapp
image: localhost:6000/myapp
ports:
- containerPort: 5001
securityContext:
privileged: true
volumeMounts:
- mountPath: /local
name: local
resources: {}
volumes:
- name: local
hostPath:
path: /C/dev/myapp
type: DirectoryOrCreate
hostname: myapp
restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
name: myapp
spec:
type: LoadBalancer
ports:
- name: http
protocol: TCP
port: 5001
targetPort: 5001
selector:
app: myapp
According to the following thread, hostPath volumes are not officially supported for WSL 2 yet. It does suggest a workaround, though I had trouble getting it to work. I have found that prepending /run/desktop/mnt/host/c seems to work for me.
# C:\someDir\volumeDir
hostPath:
path: /run/desktop/mnt/host/c/someDir/volumeDir
type: DirectoryOrCreate
Thread Source: https://github.com/docker/for-win/issues/5325
Suggested workaround from thread: https://github.com/docker/for-win/issues/5325#issuecomment-567594291
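Applied to the manifest in the question, that would mean pointing the volume at the translated path; a sketch of just the volumes section:
volumes:
  - name: local
    hostPath:
      path: /run/desktop/mnt/host/c/dev/myapp
      type: DirectoryOrCreate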
Using @RyanDarnell's excellent answer above, here is what worked for me.
Objective: run message-db with a password supplied via docker build --secret, on local docker-desktop Kubernetes, using a StatefulSet and a StorageClass, built and deployed with Skaffold.
$kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
docker-desktop Ready control-plane,master 21h v1.21.1 <internalip> <none> Docker Desktop 5.4.72-microsoft-standard-WSL2 docker://20.10.7
#./k8s/storage-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
labels:
app: postgres-database
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
#./k8s/persistent-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: local-pv1
labels:
app: postgres-database
spec:
capacity:
storage: 128Mi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
storageClassName: local-storage
local:
path: /run/desktop/mnt/host/c/kubernetes-mount-path # created this folder at C:\kubernetes-mount-path
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- docker-desktop
#./k8s/stateful-set.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres-database
spec:
selector:
matchLabels:
app: postgres-database
serviceName: postgres-service
replicas: 1
template:
metadata:
labels:
app: postgres-database
spec:
containers:
- name: message-db-container
image: message-db-test
volumeMounts:
- name: postgres-disk
mountPath: /var/lib/postgresql/data
env:
- name: PGDATA
value: /var/lib/postgresql/data/pgdata
- name: POSTGRES_PASSWORD
value: <postgres-password>
volumeClaimTemplates:
- metadata:
name: postgres-disk
labels:
app: postgres-database
spec:
selector:
matchLabels:
app: postgres-database
storageClassName: local-storage
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 128Mi
#./k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
name: postgres-loadbalancer
spec:
selector:
app: postgres-database
type: LoadBalancer
ports:
- port: 5432
targetPort: 5432
#./Dockerfile
# syntax = docker/dockerfile:1.3
FROM postgres:13.3-alpine3.14
RUN apk add --no-cache curl tar
RUN mkdir -p /usr/src/eventide \
&& curl -L https://github.com/message-db/message-db/tarball/v1.2.6 -o /usr/src/eventide/message-db.tgz
RUN tar -xf /usr/src/eventide/message-db.tgz --directory /usr/src/eventide
# change message_store login password
RUN --mount=type=secret,id=message-db-pass sed -i "s/WITH LOGIN/WITH LOGIN ENCRYPTED PASSWORD '$(cat /run/secrets/message-db-pass)'/g" /usr/src/eventide/message-db-message-db-759a4f3/database/roles/message-store.sql
RUN echo -e "#!/bin/sh\ncd /usr/src/eventide/message-db-message-db-759a4f3/database\n./install.sh" > /docker-entrypoint-initdb.d/rundbscripts.sh
RUN docker-entrypoint.sh postgres --version
ENTRYPOINT docker-entrypoint.sh postgres
#./secret.txt
<postgres-password>
#./skaffold.yaml
kind: Config
apiVersion: skaffold/v2beta20
build:
artifacts:
- image: message-db-test
context: .
docker:
secret:
id: message-db-pass
src: secret.txt
local:
useBuildkit: true
deploy:
kubectl:
manifests:
- k8s/*.yaml
Run skaffold debug -v debug to start everything up.

Kubernetes Minikube: NodePort service not accessible from outside

I am trying to deploy a simple Spring Boot REST service on minikube (Windows 10). Below is my configuration.
Dockerfile
FROM openjdk:8-jdk-alpine
ENTRYPOINT ["/usr/bin/java", "-jar", "/usr/share/myservice/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar"]
ADD target/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar /usr/share/myservice/lib
ARG JAR_FILE
ADD target/${JAR_FILE} /usr/share/myservice/minikube-spring-boot-demo-0.0.1-SNAPSHOT.jar
EXPOSE 8080
The docker image runs fine and I am able to run the app:
docker run -p 8080:8080 minikube-spring-boot-demo:0.0.1-SNAPSHOT
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
name: minikube-spring-boot-demo
spec:
selector:
matchLabels:
app: minikube-spring-boot-demo
tier: backend
replicas: 3
template:
metadata:
labels:
app: minikube-spring-boot-demo
tier: backend
spec:
containers:
- name: demo-backend
image: nirajsonawane/minikube-spring-boot-demo:0.0.1-SNAPSHOT
imagePullPolicy: Always
ports:
- containerPort: 8080
Service
apiVersion: v1
kind: Service
metadata:
name: minikube-spring-boot-demo-service
spec:
selector:
app: minikube-spring-boot-demo
tier: backend
ports:
- port: 8080
targetPort: 8080
nodePort: 30008
type: NodePort
(Output of kubectl get all, kubectl cluster-info, minikube logs, and the service details not included here.)
I am not able to access the REST endpoint using <service-ip>:<NodePort>/<uri>:
http://127.0.0.1:30008/hello
http://172.17.0.2:30008/hello
Is there anything I am missing here? Any inputs would be useful.
(Output of netstat -a not included here.)
minikube runs in a virtual machine, so services can't be accessed through localhost or 127.0.0.1 from outside the machine.
Try running minikube service minikube-spring-boot-demo-service. It will show the service details and open the service in the browser.
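For example (the printed URL is illustrative; minikube assigns its own IP):
minikube service minikube-spring-boot-demo-service --url
# e.g. http://192.168.99.100:30008
curl http://192.168.99.100:30008/hello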
You can get your node IP using the command below:
kubectl get nodes -o wide
Then run the following to get the NodePort of your service:
kubectl get svc -o wide -n <namespace>
Your application will then be reachable at http://<node-ip>:<NodePort>.
In your case it might be running on
http://127.0.0.1:30008/hello

Debugging uWSGI in kubernetes

I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine; however, after deploying to Kubernetes there seems to be no communication between the two. The end result is that I get a 502 Gateway Error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather what tools can I use to debug this scenario. Is there a test-client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
# Trick to avoid nginx aborting at startup (set server in variable)
set $upstream_server ${APP_SERVER};
include uwsgi_params;
uwsgi_pass $upstream_server;
uwsgi_read_timeout 300;
uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: nginx
name: nginx
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
service: nginx
strategy:
type: Recreate
template:
metadata:
labels:
service: nginx
spec:
imagePullSecrets:
- name: docker-reg
containers:
- name: nginx
image: <custom image url>
imagePullPolicy: Always
env:
- name: APP_SERVER
valueFrom:
secretKeyRef:
name: my-environment-config
key: APP_SERVER
- name: FK_SERVER_NAME
valueFrom:
secretKeyRef:
name: my-environment-config
key: SERVER_NAME
ports:
- containerPort: 80
- containerPort: 10443
- containerPort: 10090
resources:
requests:
cpu: 1m
memory: 200Mi
volumeMounts:
- mountPath: /etc/letsencrypt
name: my-storage
subPath: nginx
- mountPath: /dev/shm
name: dshm
restartPolicy: Always
volumes:
- name: my-storage
persistentVolumeClaim:
claimName: my-storage-claim-nginx
- name: dshm
emptyDir:
medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
labels:
service: nginx
name: nginx
spec:
type: LoadBalancer
ports:
- name: "nginx-port-80"
port: 80
targetPort: 80
protocol: TCP
- name: "nginx-port-443"
port: 443
targetPort: 10443
protocol: TCP
- name: "nginx-port-10090"
port: 10090
targetPort: 10090
protocol: TCP
selector:
service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
service: my-app
name: my-app
spec:
replicas: 1
revisionHistoryLimit: 2
selector:
matchLabels:
service: my-app
strategy:
type: Recreate
template:
metadata:
labels:
service: my-app
spec:
imagePullSecrets:
- name: docker-reg
containers:
- name: my-app
image: <custom image url>
imagePullPolicy: Always
ports:
- containerPort: 5000
resources:
requests:
cpu: 1m
memory: 100Mi
volumeMounts:
- name: merchbot-storage
mountPath: /app/data
subPath: my-app
- name: dshm
mountPath: /dev/shm
- name: local-config
mountPath: /app/secrets/local_config.json
subPath: merchbot-local-config-test.json
restartPolicy: Always
volumes:
- name: merchbot-storage
persistentVolumeClaim:
claimName: my-storage-claim-app
- name: dshm
emptyDir:
medium: Memory
- name: local-config
secret:
secretName: my-app-local-config
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
labels:
service: my-app
name: my-app
spec:
ports:
- name: "my-app-port-5000"
port: 5000
targetPort: 5000
selector:
service: my-app
Debugging in Kubernetes is not very different from debugging outside it; there are just some concepts that need to be overlaid for the Kubernetes world.
A Pod in Kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod sees the other containers' services on localhost. From there, a Pod talking to anything else has a network connection involved (even if the endpoint is node-local). So start testing with services on localhost and work your way out through pod IP, service IP, and service name.
Some complexity comes from having the debug tools available in the containers. Containers are generally built slim and don't have everything available, so you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment (a quick example follows below). You can always fall back to testing from the cluster nodes, which also have access.
Where you have python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
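As a quick example of the on-demand "debug" container mentioned above (busybox is just an assumption; any image that carries nc or curl will do):
kubectl run debug --rm -it --image=busybox --restart=Never -- sh
# then, from the debug pod's shell (assuming nc is available):
nc -v my-app 5000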
Pod to localhost
First step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next include the kubernetes networking. Start with IP's and finish with names.
You are less likely to have python here, or even nc, but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
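A minimal sketch of such a customised debug pod, reusing the same secret-backed APP_SERVER variable as the nginx deployment above (the image and pod name are assumptions):
apiVersion: v1
kind: Pod
metadata:
  name: debug
spec:
  containers:
    - name: debug
      image: busybox
      command: ["sleep", "3600"]
      env:
        - name: APP_SERVER
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: APP_SERVER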
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.svc.<namespace>.cluster.local 5000

Access Elasticsearch from minikube/kubernetes

I have a Spring Boot application deployed in Kubernetes on my local Windows machine using minikube. I also have Elasticsearch running on my local machine (http://localhost:9200).
I want to call Elasticsearch REST endpoints from this Spring Boot app.
I tried solving this by creating a service without a selector, but I'm not sure what I am missing.
When accessing the Spring Boot app using http://#minikube_ip#:#Node_Port#, I get a "No route to host" error.
I tried doing minikube ssh and executing a curl command; from there I also get the same error. Clearly I am missing something here.
application.yaml
elasticsearch:
hosts:
- http://my-es:80
connectTimeout: 10000
connectionRequestTimeout: 10000
socketTimeout: 10000
maxRetryTimeoutMillis: 60000
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-es-app
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
run: kube-es-app
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
run: kube-es-app
spec:
containers:
- image: elastic-search-app:latest
imagePullPolicy: Never
name: kube-es-app
ports:
- containerPort: 8080
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
name: my-es
spec:
ports:
- protocol: TCP
port: 80
targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
name: my-es
subsets:
- addresses:
- ip: <MY_LOCAL_MACHINE_IP>
ports:
- port: 9200
Commands I executed
docker build -t elastic-search-app .
kubectl create -f deployment.yaml
kubectl expose deployment/kube-es-app --type="NodePort" --port 8080
Can anyone help, please? I am stuck.
If I've got the description right, the Windows machine should have a vbox network adapter connected to the host-only network that the minikube VM is connected to.
Minikube can access the host machine directly because both are in the same network.
Minikube is in charge of NAT-ing packets from the Pods to the outside. What you need is to make Elasticsearch listen on the vbox interface (or on all interfaces) and to open its port in the Windows firewall. Then Elasticsearch should be reachable via the Windows machine's IP address in the host-only network.
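On the Windows host, binding Elasticsearch to all interfaces could look like this in elasticsearch.yml (a sketch; note that a non-loopback bind also triggers Elasticsearch's production bootstrap checks, so adjust to your setup):
# elasticsearch.yml
network.host: 0.0.0.0
http.port: 9200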
Apart from that, you might create a service (if you need to go by name instead of IP) as discussed here:
Connect to local database from inside minikube cluster,
Minikube:Exposing mysql as a service on localhost.
