FileSystemException: elasticsearch.keystore: Device or resource busy - Elasticsearch

I want to build Elasticsearch (7.3.0) and run it in Kubernetes, but I get an error.
Error:
Exception in thread "main" java.nio.file.FileSystemException:
/usr/share/elasticsearch/config/elasticsearch.keystore.tmp
/usr/share/elasticsearch/config/elasticsearch.keystore: Device or
resource busy
My steps:
Create the secret:
kubectl create secret generic elasticsearch-keystore --from-file=./elasticsearch.keystore
Set the secretMounts:
secretMounts:
  - name: elastic-certificates
    secretName: elastic-certificates
    path: /usr/share/elasticsearch/config/certs
  - secretName: elasticsearch-keystore
    path: /usr/share/elasticsearch/config/elasticsearch.keystore
    subPath: elasticsearch.keystore
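For context, as I understand it, a secretMounts entry like this renders into a per-file subPath volumeMount on the Elasticsearch container, roughly like the sketch below (an assumption about the chart's output, not copied from it):

volumeMounts:
  - name: elasticsearch-keystore
    mountPath: /usr/share/elasticsearch/config/elasticsearch.keystore
    subPath: elasticsearch.keystore
volumes:
  - name: elasticsearch-keystore
    secret:
      secretName: elasticsearch-keystore

A subPath mount binds a single file into the container, and a bind-mounted file cannot be atomically replaced by rename, which is consistent with the "Device or resource busy" error above.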
I tried changing the elasticsearch.keystore mode to g+s, but that doesn't work.
Is there something I am missing? Thanks.

Related

How to mount a host volume in Kubernetes running on Docker Desktop (Windows 10) backed by WSL2?

I've figured out the syntax to mount a volume (Kubernetes YAML):
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  containers:
    - name: php
      volumeMounts:
        - mountPath: /app/db_backups
          name: db-backups
          readOnly: true
  volumes:
    - hostPath:
        path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups
        type: DirectoryOrCreate
      name: db-backups
And the volume does show when I drop into a shell:
kubectl --context docker-desktop exec --stdin --tty deploy/app-deployment-development -cphp -nmyns -- /bin/bash
But the db_backups directory is empty, so I guess the volume is backed by nothing -- it's not finding the volume on my Windows host machine.
I've tried setting the host path like C:\Users\Mark\PhpstormProjects\proj\db_backups but if I do that then my Deployment fails with a CreateContainerError:
Error: Error response from daemon: invalid volume specification: 'C:\Users\Mark\PhpstormProjects\proj\db_backups:/app/db_backups:ro'
So I guess it doesn't like the Windows-style filepath.
So what then? If neither style of path works, how do I get it to mount?
From here it is clear that, for WSL2, we need to add a specific prefix before the actual path we want on the host machine.
In your file you give path: /mnt/c/Users/Mark/PhpstormProjects/proj/db_backups, but you need to write the path like this: path: /run/desktop/mnt/host/path_of_directory_in_local_machine. The key is that we need to put /run/desktop/mnt/host/ before the actual path to the directory.
You gave type: DirectoryOrCreate in the above file, so it creates an empty directory at the path you mentioned, because it does not actually resolve to your desired path.
So try with this
apiVersion: v1
kind: Pod
metadata:
  ...
spec:
  containers:
    - name: php
      volumeMounts:
        - mountPath: /app/db_backups
          name: db-backups
          readOnly: true
  volumes:
    - hostPath:
        path: /run/desktop/mnt/host/c/Users/Mark/PhpstormProjects/proj/db_backups
        # In my case tested with path: /run/desktop/mnt/host/d/K8-files/voldir
        type: DirectoryOrCreate
      name: db-backups
It worked in our case: we created a directory on the 'd' drive, so we used path: /run/desktop/mnt/host/d/K8-files/voldir. So try putting /run/desktop/mnt/host/ before the actual path.
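To make the path translation explicit (a sketch of the mapping described above):

C:\Users\Mark\PhpstormProjects\proj\db_backups                         (Windows host)
/run/desktop/mnt/host/c/Users/Mark/PhpstormProjects/proj/db_backups   (hostPath in the Pod spec)

Note the drive letter becomes lowercase and the backslashes become forward slashes.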
For more information, refer to this link.

Connect to dynamically created new cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the ca_certificate, client_key and client_secret returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be used later by helm).
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with above config I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly, if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC, but I'm not sure what. Could you provide me with some info here?
Also, referring to this question, I've tried to rely on a username/password combination first, using that to apply a new clusterrolebinding in the cluster. But I'm unable to use just the username/password approach. I get the following error:
error: You must be logged in to the server (Unauthorized)
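(For reference, the kind of binding I'm trying to apply looks roughly like this; the user name "client" is taken from the Forbidden error above, and cluster-admin is just an example role:)

kubectl create clusterrolebinding client-admin \
  --clusterrole=cluster-admin \
  --user=client

This would have to be applied with credentials that already have RBAC permissions, e.g. the working gcloud-based config.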

HTTPS encryption is not active for my domain. My certificate Order is not completed

I am working with cert-manager in my Kubernetes cluster, in order to get certificates signed by the Let's Encrypt CA for my service application inside my cluster.
I am performing the following steps in the order presented. I've tried to provide as many details of my process as possible, in order to understand the behavior presented.
Install the CustomResourceDefinition resources separately
⟩ kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.7/deploy/manifests/00-crds.yaml
customresourcedefinition.apiextensions.k8s.io/certificates.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/challenges.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/issuers.certmanager.k8s.io created
customresourcedefinition.apiextensions.k8s.io/orders.certmanager.k8s.io created
Label the cert-manager namespace to disable resource validation
⟩ kubectl label namespace kube-system certmanager.k8s.io/disable-validation=true
namespace/kube-system labeled
Install the cert-manager Helm chart
⟩ helm install \
    --name cert-manager \
    --namespace kube-system \
    --version v0.7.0 \
    jetstack/cert-manager
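To verify the installation before continuing (a quick sanity check; the app=cert-manager label is assumed from the chart defaults):

⟩ kubectl get pods -n kube-system -l app=cert-manager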
I've confirmed the steps in this guide, in order to avoid possible problems, and all steps are ok ...
Creating my ingress
I am using kong-ingress-controller to manage the ingress process.
⟩ kubectl get pod,svc,deploy,replicaset -n kong | grep kong-ingress-controller
pod/kong-ingress-controller-667b4748d4-ccj8z 2/2 Running 14 95m
service/kong-ingress-controller NodePort 10.0.48.131 <none> 8001:32257/TCP 3d19h
deployment.extensions/kong-ingress-controller 1 1 1 1 3d19h
replicaset.extensions/kong-ingress-controller-667b4748d4 1 1 1 3d19h
This means that my external IP address is given by kong-proxy and is 52.166.60.158:
⟩ kubectl get svc -n kong | grep kong-proxy
kong-proxy LoadBalancer 10.0.153.8 52.166.60.158 80:31577/TCP,443:32323/TCP 3d21h
I've created the ingress for the first time this way:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-zcrm365
  # namespace: default
  annotations:
    # kubernetes.io/ingress.class: "nginx" # Don't include it in order to use kong-ingress-controller
    # add an annotation indicating the issuer to use.
spec:
  rules:
  - host: test1kongletsencrypt.possibilit.nl
    http:
      paths:
      - path: "/"
        backend:
          serviceName: zcrm365dev
          servicePort: 80
      #- backend:
      #    serviceName: zcrm365dev
      #    servicePort: 80
      #  path: /
  tls: # < placing a host in the TLS config will indicate a cert should be created
  - hosts:
    - test1kongletsencrypt.possibilit.nl
    secretName: cert-manager-webhook-webhook-tls
    # for the moment I've included this secret which was created by cert-manager installation
Apply it.
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 created
And my ingress is picked up by the kong-ingress-controller:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name:             kong-ingress-zcrm365
Namespace:        default
Address:          52.166.60.158
Default backend:  default-http-backend:80 (<none>)
TLS:
  cert-manager-webhook-webhook-tls terminates test1kongletsencrypt.possibilit.nl
Rules:
  Host                                 Path  Backends
  ----                                 ----  --------
  test1kongletsencrypt.possibilit.nl
                                       /     zcrm365dev:80 (<none>)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"cert-manager-webhook-webhook-tls"}]}}
Events:
  Type    Reason  Age    From                     Message
  ----    ------  ----   ----                     -------
  Normal  CREATE  3m30s  kong-ingress-controller  Ingress default/kong-ingress-zcrm365
  Normal  UPDATE  3m28s  kong-ingress-controller  Ingress default/kong-ingress-zcrm365
Creating a ClusterIssuer
I am going to create a ClusterIssuer. I could create an Issuer instead, but I've started with a ClusterIssuer. Which is the best alternative? That depends on our deployment and future requirements, mostly around namespaces.
apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: b.garcia#possibilit.nl
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    http01: {}
Apply it
⟩ kubectl apply -f 01-lets-encrypt-issuer-staging.yaml
clusterissuer.certmanager.k8s.io/letsencrypt-staging created
This ClusterIssuer was registered with the ACME Let's Encrypt server:
⟩ kubectl describe clusterissuers letsencrypt-staging
Name:         letsencrypt-staging
Namespace:
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-staging"},"spec":{"acm...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         ClusterIssuer
Metadata:
  Creation Timestamp:  2019-03-15T11:38:03Z
  Generation:          1
  Resource Version:    623999
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/clusterissuers/letsencrypt-staging
  UID:                 cb48b391-4716-11e9-a113-e27267a7d354
Spec:
  Acme:
    Email:  b.garcia#possibilit.nl
    Http 01:
    Private Key Secret Ref:
      Name:  letsencrypt-staging
    Server:  https://acme-staging-v02.api.letsencrypt.org/directory
Status:
  Acme:
    Uri:  https://acme-staging-v02.api.letsencrypt.org/acme/acct/8579841
  Conditions:
    Last Transition Time:  2019-03-15T11:38:05Z
    Message:               The ACME account was registered with the ACME server
    Reason:                ACMEAccountRegistered
    Status:                True
    Type:                  Ready
Events:  <none>
Modifying my ingress resource created previously
Now that I've created our Let's Encrypt staging ClusterIssuer, I am ready to modify the Ingress resource created above and enable TLS encryption for the test1kongletsencrypt.possibilit.nl paths by adding the following:
I am going to add the certmanager.k8s.io/cluster-issuer: letsencrypt-staging annotation and use the secret named letsencrypt-staging created with the letsencrypt-staging ClusterIssuer.
Our ingress now looks like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kong-ingress-zcrm365
  # namespace: default
  annotations:
    # kubernetes.io/ingress.class: "nginx" # new
    # certmanager.k8s.io/acme-challenge-type: http01
    # add an annotation indicating the issuer to use.
    certmanager.k8s.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
  - host: test1kongletsencrypt.possibilit.nl
    http:
      paths:
      - path: "/"
        backend:
          serviceName: zcrm365dev
          servicePort: 80
  tls:
  - hosts:
    - test1kongletsencrypt.possibilit.nl
    secretName: letsencrypt-staging # I've added this secret of the letsencrypt ClusterIssuer
Apply it
⟩ kubectl apply -f 03-zcrm365-ingress.yaml
ingress.extensions/kong-ingress-zcrm365 configured
This update to the ingress creates another ingress named cm-acme-http-solver-jr4fg:
⟩ kubectl get ingress
NAME                        HOSTS                                 ADDRESS        PORTS    AGE
cm-acme-http-solver-jr4fg   test1kongletsencrypt.possibilit.nl                   80       33s
kong-ingress-zcrm365        test1kongletsencrypt.possibilit.nl    52.166.60.158  80, 443  56m
The detail of the cm-acme-http-solver-jr4fg ingress is:
⟩ kubectl get ingress cm-acme-http-solver-jr4fg -o yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/whitelist-source-range: 0.0.0.0/0
  creationTimestamp: "2019-03-15T12:10:57Z"
  generateName: cm-acme-http-solver-
  generation: 1
  labels:
    certmanager.k8s.io/acme-http-domain: "4095675862"
    certmanager.k8s.io/acme-http-token: "657526223"
  name: cm-acme-http-solver-jr4fg
  namespace: default
  ownerReferences:
  - apiVersion: certmanager.k8s.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: Challenge
    name: letsencrypt-staging-2613163196-0
    uid: 638f1701-471b-11e9-a113-e27267a7d354
  resourceVersion: "628284"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/cm-acme-http-solver-jr4fg
  uid: 640ef483-471b-11e9-a113-e27267a7d354
spec:
  rules:
  - host: test1kongletsencrypt.possibilit.nl
    http:
      paths:
      - backend:
          serviceName: cm-acme-http-solver-svmvw
          servicePort: 8089
        path: /.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE
status:
  loadBalancer:
    ingress:
    - ip: 52.166.60.158
And the detail of our kong-ingress-zcrm365 ingress resource is:
⟩ kubectl describe ingress kong-ingress-zcrm365
Name:             kong-ingress-zcrm365
Namespace:        default
Address:          52.166.60.158
Default backend:  default-http-backend:80 (<none>)
TLS:
  letsencrypt-staging terminates test1kongletsencrypt.possibilit.nl
Rules:
  Host                                 Path  Backends
  ----                                 ----  --------
  test1kongletsencrypt.possibilit.nl
                                       /     zcrm365dev:80 (<none>)
Annotations:
  certmanager.k8s.io/cluster-issuer: letsencrypt-staging
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"certmanager.k8s.io/cluster-issuer":"letsencrypt-staging"},"name":"kong-ingress-zcrm365","namespace":"default"},"spec":{"rules":[{"host":"test1kongletsencrypt.possibilit.nl","http":{"paths":[{"backend":{"serviceName":"zcrm365dev","servicePort":80},"path":"/"}]}}],"tls":[{"hosts":["test1kongletsencrypt.possibilit.nl"],"secretName":"letsencrypt-staging"}]}}
Events:
  Type    Reason             Age                  From                     Message
  ----    ------             ----                 ----                     -------
  Normal  CREATE             60m                  kong-ingress-controller  Ingress default/kong-ingress-zcrm365
  Normal  UPDATE             4m25s (x2 over 60m)  kong-ingress-controller  Ingress default/kong-ingress-zcrm365
  Normal  CreateCertificate  4m25s                cert-manager             Successfully created Certificate "letsencrypt-staging"
We can see that, even though our ingress uses the kong-ingress-controller, the letsencrypt-staging certificate has been created in the default namespace:
⟩ kubectl get certificates
NAME
letsencrypt-staging
The letsencrypt-staging certificate has the following details:
⟩ kubectl describe certificate letsencrypt-staging
Name:         letsencrypt-staging
Namespace:    default
Labels:       <none>
Annotations:  <none>
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-03-15T12:10:55Z
  Generation:          1
  Owner References:
    API Version:           extensions/v1beta1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Ingress
    Name:                  kong-ingress-zcrm365
    UID:                   8643558f-4713-11e9-a113-e27267a7d354
  Resource Version:  628164
  Self Link:         /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/letsencrypt-staging
  UID:               62b3a31e-471b-11e9-a113-e27267a7d354
Spec:
  Acme:
    Config:
      Domains:
        test1kongletsencrypt.possibilit.nl
      Http 01:
  Dns Names:
    test1kongletsencrypt.possibilit.nl
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt-staging
  Secret Name:  letsencrypt-staging
Status:
  Conditions:
    Last Transition Time:  2019-03-15T12:10:55Z
    Message:               Certificate issuance in progress. Temporary certificate issued.
    Reason:                TemporaryCertificate
    Status:                False
    Type:                  Ready
Events:
  Type    Reason              Age    From          Message
  ----    ------              ----   ----          -------
  Normal  Generated           7m24s  cert-manager  Generated new private key
  Normal  GenerateSelfSigned  7m24s  cert-manager  Generated temporary self signed certificate
  Normal  OrderCreated        7m23s  cert-manager  Created Order resource "letsencrypt-staging-2613163196"
I can see that my order is not completed; it only appears in the OrderCreated event. It has already been 7 minutes since I created this certificate, the order has not completed, and for that reason the certificate has not been issued successfully.
Another thing that happens to me is that the letsencrypt-staging secret, created by the letsencrypt-staging ClusterIssuer for its respective certificate, only has a tls.key:
⟩ kubectl describe secrets letsencrypt-staging -n kube-system
Name:         letsencrypt-staging
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>
Type:         Opaque
Data
====
tls.key:  1675 bytes
As I understand it, if the Let's Encrypt order completed and the certificate were issued, the letsencrypt-staging secret would also have a tls.crt key, and perhaps my letsencrypt-staging secret would be of the tls type rather than Opaque?
When I look at the logs of my cert-manager pod I get the following output; I think the HTTP challenge is not executed:
I0315 12:10:57.833858 1 logger.go:103] Calling Discover
I0315 12:10:57.856136 1 pod.go:64] No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.923080 1 service.go:51] No existing HTTP01 challenge solver service found for Certificate "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:57.989596 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:57.989682 1 ingress.go:98] No existing HTTP01 challenge solver ingress found for Challenge "default/letsencrypt-staging-2613163196-0". One will be created.
I0315 12:10:58.014803 1 controller.go:178] ingress-shim controller: syncing item 'default/cm-acme-http-solver-jr4fg'
I0315 12:10:58.014842 1 sync.go:64] Not syncing ingress default/cm-acme-http-solver-jr4fg as it does not contain necessary annotations
I0315 12:10:58.014846 1 controller.go:184] ingress-shim controller: Finished processing work item "default/cm-acme-http-solver-jr4fg"
I0315 12:10:58.015447 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 12:10:58.079504 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 12:10:58.079616 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I0315 12:10:58.079569 1 controller.go:184] orders controller: syncing item 'default/letsencrypt-staging-2613163196'
I get this message: No existing HTTP01 challenge solver pod found for Certificate "default/letsencrypt-staging-2613163196-0".
According to this, I decided to add the certmanager.k8s.io/acme-challenge-type: http01 annotation to my kong-ingress-zcrm365 ingress, but nothing happened ... my ingress is updated, but nothing more.
All of this confirms that the TLS certificate was not successfully issued and HTTPS encryption is not active for my configured domain test1kongletsencrypt.possibilit.nl.
This leaves my letsencrypt-staging certificate with Status: False, and the OrderCreated event does not advance to completion so the certificate can be issued:
Conditions:
  Last Transition Time:  2019-03-15T12:10:55Z
  Message:               Certificate issuance in progress. Temporary certificate issued.
  Reason:                TemporaryCertificate
  Status:                False
  Type:                  Ready
Events:
  Type    Reason              Age                From          Message
  ----    ------              ----               ----          -------
  Normal  Generated           51m                cert-manager  Generated new private key
  Normal  GenerateSelfSigned  51m                cert-manager  Generated temporary self signed certificate
  Normal  Cleanup             5m42s              cert-manager  Deleting old Order resource "letsencrypt-staging-2613163196"
  Normal  OrderCreated        5m42s              cert-manager  Created Order resource "letsencrypt-staging-2965106631"
  Normal  OrderCreated        39s (x2 over 51m)  cert-manager  Created Order resource "letsencrypt-staging-2613163196"
  Normal  Cleanup             39s                cert-manager  Deleting old Order resource "letsencrypt-staging-2965106631"
How can I get my certificate signed and successfully issued by the Let's Encrypt CA, with HTTPS encryption active?
What is happening with these log messages?
kubectl logs -n kube-system cert-manager-6f68b58796-q7txg
I0315 13:06:11.027204 1 logger.go:103] Calling Discover
I0315 13:06:11.032299 1 ingress.go:49] Looking up Ingresses for selector certmanager.k8s.io/acme-http-domain=4095675862,certmanager.k8s.io/acme-http-token=657526223
I0315 13:06:11.046081 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I0315 13:06:11.046109 1 controller.go:212] challenges controller: Finished processing work item "default/letsencrypt-staging-2613163196-0"
I0315 13:06:21.046242 1 controller.go:206] challenges controller: syncing item 'default/letsencrypt-staging-2613163196-0'
I've heard that the letsencrypt staging environment only has test certificates, which are a kind of 'fake certificate', and that maybe some clients such as my Chrome/Firefox browser don't trust the certificate issuer ...
Is this a reason why I cannot enable HTTPS encryption on my domain?
If so, should I change from the staging environment to the production environment?
In this question some people talk about that, but they emphasize:
that the staging environment should be used just to test that your client is working fine and can generate the challenges, certificates
In my case the HTTP challenge is still not generated, even in the staging environment. :(
Here are the annotations I usually use for this:
"ingress.kubernetes.io/ssl-redirect": "true",
"certmanager.k8s.io/cluster-issuer": "letsencrypt-production",
# I'd suggest adding these 2 below
"kubernetes.io/tls-acme": "true",
"kubernetes.io/ingress.class": "nginx"
Also, you didn't spot this error:
I0315 12:10:58.033431 1 sync.go:173] propagation check failed: wrong status code '404', expected '200'
I'm not sure what exactly is wrong here. Your domain name should resolve to your ingress, and you should be able to access yourdomain.name/.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE (this is the Let's Encrypt validation response URL, according to your logs).
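A quick way to run that propagation check by hand (using the host and token from the solver ingress above):

curl -i http://test1kongletsencrypt.possibilit.nl/.well-known/acme-challenge/W7-9-KuPao_jg6EF5E2FXitFs8shOEsY5PlT9EEvNxE

If this returns 404 instead of 200, the solver ingress is not actually being served through Kong, which matches the 'propagation check failed' log line.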

minikube hostpath mount permissions

I'm trying to mount a local directory to be used by a container in kubernetes, but getting this error:
$ kubectl logs mysql-pd
chown: changing ownership of '/var/lib/mysql/': Input/output error
minikube version: v0.33.1
docker for mac version: 2.0.0.2 (30215)
Engine: 18.09.1
Kubernetes: v1.10.11
I'm starting up minikube with a mounted directory:
minikube start --mount-string /Users/foo/mysql_data:/mysql_data --mount
deployment.yml
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pd
spec:
  containers:
  - image: mysql:5.7
    name: mysql-container
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: ""
    - name: MYSQL_ALLOW_EMPTY_PASSWORD
      value: "yes"
    ports:
    - containerPort: 3306
    volumeMounts:
    - mountPath: "/var/lib/mysql"
      name: host-mount
  volumes:
  - name: host-mount
    hostPath:
      path: "/mysql_data"
As @Matthew L Daniel mentioned in the comments, the main purpose of hostPath is to mount a local folder from the machine hosting minikube into the nested Pod, so it's not necessary to separately mount a local directory into minikube. Also, take a look at this article, which explains some restrictions on host folder mounting for the particular VM driver in minikube.
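As a quick sanity check (assuming the --mount-string used above), you can confirm the directory is visible inside the minikube VM before the Pod mounts it via hostPath:

minikube ssh "ls -la /mysql_data"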

Spring Boot + Google Kubernetes + Google Cloud SQL not working

I am trying to push a Spring Boot application to Google Kubernetes (Google Container Engine).
I have performed all the steps given in the link below:
https://codelabs.developers.google.com/codelabs/cloud-springboot-kubernetes/index.html?index=..%2F..%2Findex#0
When I try to perform step 9, http://:8080 is not reachable in the browser.
Yes, I got an external IP address, and I am able to ping that IP address.
Let me know if any other information is required.
In Logging, it shows that it is not able to connect to the database.
Error:
com.mysql.jdbc.exceptions.jdbc4.MySQLNonTransientConnectionException: Could not create connection to database server.
I hope you have created a cluster in Google Container Engine.
Follow the first 5 steps given in this link:
https://cloud.google.com/sql/docs/mysql/connect-container-engine
Change the database configuration in your application:
hostname: 127.0.0.1
port: 3306 (or your MySQL port)
username: proxyuser
These should be the same as in step 3 of the link.
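For a Spring Boot application this would look roughly like the following in application.properties (a sketch; the database name and password are placeholders, not values from the link):

spring.datasource.url=jdbc:mysql://127.0.0.1:3306/<your-database>
spring.datasource.username=proxyuser
spring.datasource.password=<proxyuser-password>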
mvn package -Dmaven.test.skip=true
Create a file named "Dockerfile" with the content below:
FROM openjdk:8
COPY target/SpringBootWithDB-0.0.1-SNAPSHOT.jar /app.jar
EXPOSE 8080/tcp
ENTRYPOINT ["java", "-jar", "/app.jar"]
docker build -t gcr.io//springbootdb-java:v1 .
docker run -ti --rm -p 8080:8080 gcr.io//springbootdb-java:v1
gcloud docker -- push gcr.io//springbootdb-java:v1
Follow the 6th step given in the link and create the YAML file:
kubectl create -f cloudsql_deployment.yaml
Run kubectl get deployment and copy the name of the deployment.
kubectl expose deployment --type=LoadBalancer
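With the deployment name from the YAML below, that expose command would be roughly (port 8080 is taken from the container spec):

kubectl expose deployment conversationally --type=LoadBalancer --port=8080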
My Yaml File
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: conversationally
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: conversationally
    spec:
      containers:
      - image: gcr.io/<project ID>/springbootdb-java:v1
        name: web
        env:
        - name: DB_HOST
          # Connect to the SQL proxy over the local network on a fixed port.
          # Change the [PORT] to the port number used by your database
          # (e.g. 3306).
          value: 127.0.0.1:3306
        # These secrets are required to start the pod.
        # [START cloudsql_secrets]
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: password
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: cloudsql-db-credentials
              key: username
        # [END cloudsql_secrets]
        ports:
        - containerPort: 8080
          name: conv-cluster
      # Change [INSTANCE_CONNECTION_NAME] here to include your GCP
      # project, the region of your Cloud SQL instance and the name
      # of your Cloud SQL instance. The format is
      # $PROJECT:$REGION:$INSTANCE
      # Insert the port number used by your database.
      # [START proxy_container]
      - image: gcr.io/cloudsql-docker/gce-proxy:1.09
        name: cloudsql-proxy
        command: ["/cloud_sql_proxy", "--dir=/cloudsql",
                  "-instances=<instance name>=tcp:3306",
                  "-credential_file=/secrets/cloudsql/credentials.json"]
        volumeMounts:
        - name: cloudsql-instance-credentials
          mountPath: /secrets/cloudsql
          readOnly: true
        - name: ssl-certs
          mountPath: /etc/ssl/certs
        - name: cloudsql
          mountPath: /cloudsql
      # [END proxy_container]
      # [START volumes]
      volumes:
      - name: cloudsql-instance-credentials
        secret:
          secretName: cloudsql-instance-credentials
      - name: ssl-certs
        hostPath:
          path: /etc/ssl/certs
      - name: cloudsql
        emptyDir:
      # [END volumes]
