How do I access this Kubernetes service via kubectl proxy?

I want to access my Grafana Kubernetes service via the kubectl proxy server, but for some reason it won't work even though I can make it work for other services. Given the below service definition, why is it not available on http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana?
grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  namespace: monitoring
  name: grafana
  labels:
    app: grafana
spec:
  type: NodePort
  ports:
  - name: web
    port: 3000
    protocol: TCP
    nodePort: 30902
  selector:
    app: grafana
grafana-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  namespace: monitoring
  name: grafana
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
      - name: grafana
        image: grafana/grafana:4.1.1
        env:
        - name: GF_AUTH_BASIC_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_SECURITY_ADMIN_USER
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: user
        - name: GF_SECURITY_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: password
        volumeMounts:
        - name: grafana-storage
          mountPath: /var/grafana-storage
        ports:
        - name: web
          containerPort: 3000
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
          limits:
            memory: 200Mi
            cpu: 200m
      - name: grafana-watcher
        image: quay.io/coreos/grafana-watcher:v0.0.5
        args:
        - '--watch-dir=/var/grafana-dashboards'
        - '--grafana-url=http://localhost:3000'
        env:
        - name: GRAFANA_USER
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: user
        - name: GRAFANA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: grafana-credentials
              key: password
        resources:
          requests:
            memory: "16Mi"
            cpu: "50m"
          limits:
            memory: "32Mi"
            cpu: "100m"
        volumeMounts:
        - name: grafana-dashboards
          mountPath: /var/grafana-dashboards
      volumes:
      - name: grafana-storage
        emptyDir: {}
      - name: grafana-dashboards
        configMap:
          name: grafana-dashboards
The error I'm seeing when accessing the above URL is "no endpoints available for service "grafana"", error code 503.

With Kubernetes 1.10 the proxy URL should be slightly different, like this:
http://localhost:8080/api/v1/namespaces/default/services/SERVICE-NAME:PORT-NAME/proxy/
Ref: https://kubernetes.io/docs/tasks/access-application-cluster/access-cluster/#manually-constructing-apiserver-proxy-urls
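For the Grafana service above (namespace monitoring, port named web), the newer-style URL would look roughly like this, assuming kubectl proxy is running on its default port 8001:
kubectl proxy &
curl http://localhost:8001/api/v1/namespaces/monitoring/services/grafana:web/proxy/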

As Michael says, quite possibly your labels or namespaces are mismatching. However in addition to that, keep in mind that even when you fix the endpoint, the url you're after (http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana) might not work correctly.
Depending on your root_url and/or static_root_path grafana configuration settings, when trying to login you might get grafana trying to POST to http://localhost:8001/login and get a 404.
Try using kubectl port-forward instead:
kubectl -n monitoring port-forward [grafana-pod-name] 3000
then access grafana via http://localhost:3000/
https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
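A minimal sketch of that flow, assuming the pod carries the app=grafana label used by the service selector:
# find the Grafana pod name, then forward local port 3000 to it
kubectl -n monitoring get pods -l app=grafana
kubectl -n monitoring port-forward <grafana-pod-name> 3000:3000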

The issue is that Grafana's port is named web, and as a result one needs to append :web to the kubectl proxy URL: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana:web.
An alternative is to leave the Grafana port unnamed, because then you don't have to append :web to the kubectl proxy URL for the service: http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana. I went with this option in the end since it's easier.
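To confirm which form your cluster serves, a quick check (assuming kubectl proxy is running on its default port 8001) is to compare the status codes of the two URLs:
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana:web/
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8001/api/v1/proxy/namespaces/monitoring/services/grafana/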

There are a few factors that might be causing this issue.
The service expects to find one or more supporting endpoints, which it discovers through matching rules on the labels. If the labels don't align, then the service won't find endpoints, and the network gateway function performed by the service will result in 503.
The port declared by the Pod (and the port the process inside the container actually listens on) is misaligned with the --target-port expected by the service.
Either one of these might generate the error. Let's take a closer look.
First, kubectl describe the service:
$ kubectl describe svc grafana01-grafana-3000
Name:              grafana01-grafana-3000
Namespace:         default
Labels:            app=grafana01-grafana
                   chart=grafana-0.3.7
                   component=grafana
                   heritage=Tiller
                   release=grafana01
Annotations:       <none>
Selector:          app=grafana01-grafana,component=grafana,release=grafana01
Type:              NodePort
IP:                10.0.0.197
Port:              <unset>  3000/TCP
NodePort:          <unset>  30905/TCP
Endpoints:         10.1.45.69:3000
Session Affinity:  None
Events:            <none>
Notice that my grafana service has 1 endpoint listed (there could be multiple). The error above in your example indicates that you won't have endpoints listed here.
Endpoints: 10.1.45.69:3000
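A quick way to check that in your setup (service name and namespace assumed from the question's YAML):
kubectl -n monitoring get endpoints grafana
kubectl -n monitoring describe svc grafana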
Let's take a look next at the selectors. In the example above, you can see I have 3 selector labels on my service:
Selector: app=grafana01-grafana,component=grafana,release=grafana01
I'll kubectl describe my pods next:
$ kubectl describe pod grafana
Name:         grafana01-grafana-1843344063-vp30d
Namespace:    default
Node:         10.10.25.220/10.10.25.220
Start Time:   Fri, 14 Jul 2017 03:25:11 +0000
Labels:       app=grafana01-grafana
              component=grafana
              pod-template-hash=1843344063
              release=grafana01
...
Notice that the labels on the pod align correctly, hence my service finds pods which provide endpoints which are load balanced against by the service. Verify that this part of the chain isn't broken in your environment.
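One way to do that, with the names from the question assumed, is to compare the service selector against the labels actually present on the pods:
kubectl -n monitoring get svc grafana -o jsonpath='{.spec.selector}{"\n"}'
kubectl -n monitoring get pods -l app=grafana --show-labels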
If you do find that the labels are correct, you may still have a disconnect in that the grafana process running within the container within the pod is running on a different port than you expect.
$ kubectl describe pod grafana
Name:         grafana01-grafana-1843344063-vp30d
...
Containers:
  grafana:
    Container ID:  docker://69f11b7828c01c5c3b395c008d88e8640c5606f4d865107bf4b433628cc36c76
    Image:         grafana/grafana:latest
    Image ID:      docker-pullable://grafana/grafana@sha256:11690015c430f2b08955e28c0e8ce7ce1c5883edfc521b68f3fb288e85578d26
    Port:          3000/TCP
    State:         Running
      Started:     Fri, 14 Jul 2017 03:25:26 +0000
If, for some reason, the port listed under the container showed a different value, then the service would effectively be load balancing against an invalid endpoint.
For example, if it listed port 80:
Port: 80/TCP
Or was an empty value
Port:
Then even if your label selectors were correct, the service would never find a valid response from the pod and would remove the endpoint from the rotation.
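If you want to compare both sides quickly (again, names assumed from the question):
kubectl -n monitoring get svc grafana -o jsonpath='{.spec.ports[*].targetPort}{"\n"}'
kubectl -n monitoring get pods -l app=grafana -o jsonpath='{.items[*].spec.containers[*].ports[*].containerPort}{"\n"}'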
I suspect your issue is the first problem above (mismatched label selectors).
If both the label selectors and ports align, then you might have a problem with the MTU setting between nodes. In some cases, if the MTU used by your networking layer (like calico) is larger than the MTU of the supporting network, then you'll never get a valid response from the endpoint. Typically, this last potential issue will manifest itself as a timeout rather than a 503 though.
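If you need to rule that out, one hedged place to look (assuming a Calico install that ships the usual calico-config ConfigMap; the name can vary by setup) is:
kubectl -n kube-system get configmap calico-config -o yaml | grep -i mtu
ip link show   # on a node, compare the veth/tunnel MTU with the physical interface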

Your Deployment may not have a label app: grafana, or be in another namespace. Could you also post the Deployment definition?

Related

Cipher mismatch error while trying to access an app deployed in GKE as HTTPS Ingress

I am trying to deploy a Spring Boot application running on port 8080. My target is to have HTTPS on a custom subdomain with Google managed certificates.
Here are my yamls.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
  namespace: my-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-deployment
      namespace: my-namespace
  template:
    metadata:
      labels:
        app: my-deployment
        namespace: my-namespace
    spec:
      containers:
      - name: app
        image: gcr.io/PROJECT_ID/IMAGE:TAG
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            ephemeral-storage: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            ephemeral-storage: "512Mi"
            cpu: "250m"
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
  annotations:
    cloud.google.com/backend-config: '{"default": "my-http-health-check"}'
spec:
  selector:
    app: my-deployment
    namespace: my-namespace
  type: NodePort
  ports:
  - port: 80
    name: http
    targetPort: http
    protocol: TCP
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  namespace: my-name-space
  annotations:
    kubernetes.io/ingress.global-static-ip-name: my-ip
    networking.gke.io/managed-certificates: my-cert
    kubernetes.io/ingress.class: "gce"
  labels:
    app: my-ingress
spec:
  rules:
  - host: my-domain.com
    http:
      paths:
      - pathType: ImplementationSpecific
        backend:
          service:
            name: my-service
            port:
              name: http
I followed various pieces of documentation; most of them helped make HTTP work, but I couldn't make HTTPS work and I end up with the error ERR_SSL_VERSION_OR_CIPHER_MISMATCH. It looks like there is an issue with the "Global forwarding rule": Ports shows 443-443. What is the correct way to terminate the HTTPS traffic at the load balancer and route it to the backend app over HTTP?
From the information provided, I can see that the "ManagedCertificate" object is missing, you need to create a yaml file with the following structure:
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: my-cert
spec:
  domains:
  - <your-domain-name1>
  - <your-domain-name2>
And then apply it with the command: kubectl apply -f file-name.yaml
Provisioning of the Google-managed certificate can take up to 60 minutes; you can check the status of the certificate using the following command: kubectl describe managedcertificate my-cert, wait for the status to be as "Active".
A few prerequisites you need to be aware of, though:
- You must own the domain name. The domain name must be no longer than 63 characters. You can use Google Domains or another registrar.
- The cluster must have the HttpLoadBalancing add-on enabled.
- Your "kubernetes.io/ingress.class" must be "gce".
- You must apply Ingress and ManagedCertificate resources in the same project and namespace.
- Create a reserved (static) external IP address. Reserving a static IP address guarantees that it remains yours, even if you delete the Ingress. If you do not reserve an IP address, it might change, requiring you to reconfigure your domain's DNS records.
Finally, you can take a look at Google's complete guide on Creating an Ingress with a Google-managed certificate.
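If provisioning seems stuck, a few hedged checks (resource names assumed from the YAML above) are:
kubectl -n my-namespace describe managedcertificate my-cert
kubectl -n my-namespace describe ingress my-ingress
gcloud compute ssl-certificates list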

Debugging uWSGI in kubernetes

I have a pair of Kubernetes pods, one for nginx and one for Python Flask + uWSGI. I have tested my setup locally in docker-compose and it worked fine; however, after deploying to Kubernetes, somehow there seems to be no communication between the two. The end result is that I get 502 Gateway Error when trying to reach my location.
So my question is not really about what is wrong with my setup, but rather what tools can I use to debug this scenario. Is there a test-client for uwsgi? Can I use ncat? I don't seem to get any useful log output from nginx, and I don't know if uwsgi even has a log.
How can I debug this?
For reference, here is my nginx location:
location / {
    # Trick to avoid nginx aborting at startup (set server in variable)
    set $upstream_server ${APP_SERVER};
    include uwsgi_params;
    uwsgi_pass $upstream_server;
    uwsgi_read_timeout 300;
    uwsgi_intercept_errors on;
}
Here is my wsgi.ini:
[uwsgi]
module = my_app.app
callable = app
master = true
processes = 5
socket = 0.0.0.0:5000
die-on-term = true
uid = www-data
gid = www-data
Here is the kubernetes deployment.yaml for nginx:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: nginx
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: nginx
        image: <custom image url>
        imagePullPolicy: Always
        env:
        - name: APP_SERVER
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: APP_SERVER
        - name: FK_SERVER_NAME
          valueFrom:
            secretKeyRef:
              name: my-environment-config
              key: SERVER_NAME
        ports:
        - containerPort: 80
        - containerPort: 10443
        - containerPort: 10090
        resources:
          requests:
            cpu: 1m
            memory: 200Mi
        volumeMounts:
        - mountPath: /etc/letsencrypt
          name: my-storage
          subPath: nginx
        - mountPath: /dev/shm
          name: dshm
      restartPolicy: Always
      volumes:
      - name: my-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-nginx
      - name: dshm
        emptyDir:
          medium: Memory
Here is the kubernetes service.yaml for nginx:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: nginx
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - name: "nginx-port-80"
    port: 80
    targetPort: 80
    protocol: TCP
  - name: "nginx-port-443"
    port: 443
    targetPort: 10443
    protocol: TCP
  - name: "nginx-port-10090"
    port: 10090
    targetPort: 10090
    protocol: TCP
  selector:
    service: nginx
Here is the kubernetes deployment.yaml for python flask:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      service: my-app
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        service: my-app
    spec:
      imagePullSecrets:
      - name: docker-reg
      containers:
      - name: my-app
        image: <custom image url>
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        resources:
          requests:
            cpu: 1m
            memory: 100Mi
        volumeMounts:
        - name: merchbot-storage
          mountPath: /app/data
          subPath: my-app
        - name: dshm
          mountPath: /dev/shm
        - name: local-config
          mountPath: /app/secrets/local_config.json
          subPath: merchbot-local-config-test.json
      restartPolicy: Always
      volumes:
      - name: merchbot-storage
        persistentVolumeClaim:
          claimName: my-storage-claim-app
      - name: dshm
        emptyDir:
          medium: Memory
      - name: local-config
        secret:
          secretName: my-app-local-config
Here is the kubernetes service.yaml for python flask:
apiVersion: v1
kind: Service
metadata:
  labels:
    service: my-app
  name: my-app
spec:
  ports:
  - name: "my-app-port-5000"
    port: 5000
    targetPort: 5000
  selector:
    service: my-app
Debugging in kubernetes is not very different from debugging outside, there's just some concepts that need to be overlaid for the kubernetes world.
A Pod in kubernetes is what you would conceptually see as a host in the VM world. Every container running in a Pod will see each others services on localhost. From there, a Pod to anything else will have a network connection involved (even if the endpoint is node local). So start testing with services on localhost and work your way out through pod IP, service IP, service name.
Some complexity comes from having the debug tools available in the containers. Generally containers are built slim and don't have everything available. So you either need to install tools while a container is running (if you can) or build a special "debug" container you can deploy on demand in the same environment. You can always fall back to testing from the cluster nodes which also have access.
Where you have python available you can test with uwsgi_curl:
pip install uwsgi-tools
uwsgi_curl hostname:port /path
Otherwise nc/curl will suffice, to a point.
Pod to localhost
First step is to make sure the container itself is responding. In this case you are likely to have python/pip available to use uwsgi_curl
kubectl exec -ti my-app-XXXX-XXXX sh
nc -v localhost 5000
uwsgi_curl localhost:5000 /path
Pod to Pod/Service
Next include the kubernetes networking. Start with IP's and finish with names.
Less likely to have python here, or even nc but I think testing the environment variables is important here:
kubectl exec -ti nginx-XXXX-XXXX sh
nc -v my-app-pod-IP 5000
nc -v my-app-service-IP 5000
nc -v my-app-service-name 5000
echo $APP_SERVER
echo $FK_SERVER_NAME
nc -v $APP_SERVER 5000
# or
uwsgi_curl $APP_SERVER:5000 /path
Debug Pod to Pod/Service
If you do need to use a debug pod, try and mimic the pod you are testing as much as possible. It's great to have a generic debug pod/deployment to quickly test anything, but if that doesn't reveal the issue you may need to customise the deployment to mimic the pod you are testing more closely.
In this case the environment variables play a part in the connection setup, so that should be emulated for a debug pod.
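A rough sketch of such a debug pod, reusing the APP_SERVER value so the environment matches (the image and the my-app:5000 value are assumptions, not taken from your manifests):
kubectl run uwsgi-debug --rm -ti --image=python:3-alpine --env="APP_SERVER=my-app:5000" -- sh
# then, inside the debug pod:
pip install uwsgi-tools
uwsgi_curl $APP_SERVER /path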
Node to Pod/Service
Pods/Services will be available from the cluster nodes (if you are not using restrictive network policies) so usually the quick test is to check Pods/Services are working from there:
nc -v <pod_ip> <container_port>
nc -v <service_ip> <service_port>
nc -v <service_dns> <service_port>
In this case:
nc -v <my_app_pod_ip> 5000
nc -v <my_app_service_ip> 5000
nc -v my-app.<namespace>.svc.cluster.local 5000

Access Elasticsearch from minikube/kubernetes

I have a Spring Boot application which is deployed in Kubernetes on a local Windows machine using minikube. I also have Elasticsearch running on my local machine (http://localhost:9200).
I want to call Elasticsearch REST endpoints from this spring boot app.
I tried solving this by creating a service without a selector, but I am not sure what I am missing.
When accessing the spring boot app using http://#minikube_ip#:#Node_Port#, I get an error "No route to host".
I tried doing minikube ssh and executing the curl command; from there I also get the same error. Clearly I am missing something here.
application.yaml
elasticsearch:
  hosts:
  - http://my-es:80
  connectTimeout: 10000
  connectionRequestTimeout: 10000
  socketTimeout: 10000
  maxRetryTimeoutMillis: 60000
deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-es-app
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: kube-es-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: kube-es-app
    spec:
      containers:
      - image: elastic-search-app:latest
        imagePullPolicy: Never
        name: kube-es-app
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
kind: Service
apiVersion: v1
metadata:
  name: my-es
spec:
  ports:
  - protocol: TCP
    port: 80
    targetPort: 9200
---
kind: Endpoints
apiVersion: v1
metadata:
  name: my-es
subsets:
- addresses:
  - ip: <MY_LOCAL_MACHINE_IP>
  ports:
  - port: 9200
Commands I executed
docker build -t elastic-search-app .
kubectl create -f deployment.yaml
kubectl expose deployment/kube-es-app --type="NodePort" --port 8080
Can anyone help please? I am stuck
If I've got the description right, the Windows machine should have a vbox network adapter connected to the host-only network that the Minikube VM is connected to.
Minikube can access the host machine directly because both are in the same network.
Minikube is in charge of NAT-ing packets from Pods to the outside. What you need is to allow Elasticsearch to listen on the vbox interface (or all interfaces), and open its port in the Windows firewall. Then Elasticsearch should be reachable via the IP address of the Windows host on the host-only network.
Apart from that, you might create a service (if you need to go by name instead of IP) as discussed here:
Connect to local database from inside minikube cluster,
Minikube: Exposing mysql as a service on localhost.
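Once Elasticsearch listens on that interface, a quick way to verify the selector-less service and its manual Endpoints object from inside the cluster (names taken from the YAML above; the busybox test pod is just an assumption for illustration):
kubectl get svc,endpoints my-es
kubectl run es-test --rm -i --restart=Never --image=busybox:1.36 -- wget -qO- http://my-es:80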

Not able to access Spring Boot backend service from another service in Kubernetes

I am new to Kubernetes and I am trying to create a simple front-end/back-end application where the front-end and back-end each have their own service. For some reason, I am not able to access the back-end service by its name from the front-end service.
Just for simplicity, the front-end service can be created like this:
kubectl run curl --image=radial/busyboxplus:curl -i --tty
When I do a nslookup I get the following:
[ root@curl-66bdcf564-rbx2k:/ ]$ nslookup msgnc-travel
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      msgnc-travel
Address 1: 10.100.171.209 msgnc-travel.default.svc.cluster.local
The service is available by its name msgnc-travel, but when I try to curl it:
curl msgnc-travel
it just keeps on waiting and no response is received. I have also tried
curl 10.100.171.209 and curl msgnc-travel.default.svc.cluster.local but I get the same behaviour.
Any ideas why this issue is occurring?
I have successfully managed to do a "workaround" by using Ingress, but I am curious why I can't access my Spring Boot backend service directly just by providing its name?
deployment.yml looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: msgnc-travel-deployment
  labels:
    name: msgnc-travel-deployment
    app: msgnc-travel-app
spec:
  template:
    metadata:
      name: msgnc-travel-pod
      labels:
        name: msgnc-travel-pod
        app: msgnc-travel-app
    spec:
      containers:
      - name: msgnc-travel
        image: bdjordjevic/msgnc-travel
        ports:
        - containerPort: 8080
  replicas: 1
  selector:
    matchExpressions:
    - {key: name, operator: In, values: [msgnc-travel-pod]}
    - {key: app, operator: In, values: [msgnc-travel-app]}
service.yml looks like this:
apiVersion: v1
kind: Service
metadata:
  name: msgnc-travel
  labels:
    name: msgnc-travel-service
    app: msgnc-travel-app
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    name: msgnc-travel-pod
    app: msgnc-travel-app
You are defining the service to listen at port 8080. So you are supposed to execute curl msgnc-travel:8080.
I tried running wget and this is the output I got:
wget msgnc-travel:8080
Connecting to msgnc-travel:8080 (10.98.81.45:8080)
wget: server returned error: HTTP/1.1 404

Eureka and Kubernetes

I am putting together a proof of concept to help identify gotchas when using Spring Boot/Netflix OSS and Kubernetes together. This is also to prove out related technologies such as Prometheus and Grafana.
I have a Eureka service setup which is starting with no trouble within my Kubernetes cluster. This is named discovery and has been given the name "discovery-1551420162-iyz2c" when added to K8 using
For my config server, I am trying to use Eureka based on a logical URL so in my bootstrap.yml I have
server:
  port: 8889
eureka:
  instance:
    hostname: configserver
  client:
    registerWithEureka: true
    fetchRegistry: true
    serviceUrl:
      defaultZone: http://discovery:8761/eureka/
spring:
  cloud:
    config:
      server:
        git:
          uri: https://github.com/xyz/microservice-config
and I am starting this using
kubectl run configserver --image=xyz/config-microservice --replicas=1 --port=8889
This service ends up running as configserver-3481062421-tmv4d. I then see exceptions in the config server logs as it tries to locate the Eureka instance and cannot.
I have the same setup for this using docker-compose locally with links and it starts the various containers with no trouble.
discovery:
  image: xyz/discovery-microservice
  ports:
  - "8761:8761"
configserver:
  image: xyz/config-microservice
  ports:
  - "8888:8888"
  links:
  - discovery
How can I setup something like eureka.client.serviceUri so my microservices can locate their peers without knowing fixed IP addresses within the K8 cluster?
How can I setup something like eureka.client.serviceUri?
You have to have a Kubernetes service on top of the eureka pods/deployments which then will provide you a referable IP address and port number. And then use that referable address to look up the Eureka service, instead of "8761".
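For example, a minimal sketch, assuming kubectl run created a Deployment named discovery (as in the question):
kubectl expose deployment discovery --name=discovery --port=8761 --target-port=8761
# bootstrap.yml can then keep the logical URL:
#   defaultZone: http://discovery:8761/eureka/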
To address further question about HA configuration of Eureka
You shouldn't have more than one pod/replica of Eureka per k8s service (remember, pods are ephemeral, you need a referable IP address/domain name for eureka service registry). To achieve high availability (HA), spin up more k8s services with one pod in each.
Eureka service 1 --> a single pod
Eureka Service 2 --> another single pod
..
..
Eureka Service n --> another single pod
So, now you have referable IP/Domain name (IP of the k8s service) for each of your Eureka.. now it can register each other.
Feeling like it's overkill?
If all your services are in the same Kubernetes namespace you can achieve everything (well, almost everything, except client-side load balancing) that Eureka offers through a k8s service + the KubeDNS add-on. Read this article by Christian Posta.
Edit
Instead of Services with one pod each, you can make use of StatefulSets as Stefan Ocke pointed out.
Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. Unlike a Deployment, a StatefulSet maintains a sticky identity for each of their Pods. These pods are created from the same spec, but are not interchangeable: each has a persistent identifier that it maintains across any rescheduling.
Regarding HA configuration of Eureka in Kubernetes:
You can (meanwhile) use a StatefulSet for this instead of creating a service for each instance. The StatefulSet guarantees stable network identity for each instance you create.
For example, the deployment could look like the following yaml (StatefulSet + headless Service).
There are two Eureka instances here; according to the DNS naming rules for StatefulSets (assuming the namespace is "default"), they are eureka-0.eureka.default.svc.cluster.local and eureka-1.eureka.default.svc.cluster.local.
As long as your pods are in the same namespace, they can reach Eureka also as:
eureka-0.eureka
eureka-1.eureka
Note: The docker image used in the example is from https://github.com/stefanocke/eureka. You might want to chose or build your own one.
---
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  ports:
  - port: 8761
    name: eureka
  clusterIP: None
  selector:
    app: eureka
---
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: stoc/eureka
        ports:
        - containerPort: 8761
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        # Due to camelcase issues with "defaultZone" and "preferIpAddress", _JAVA_OPTIONS is used here
        - name: _JAVA_OPTIONS
          value: -Deureka.instance.preferIpAddress=false -Deureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
        - name: EUREKA_CLIENT_REGISTERWITHEUREKA
          value: "true"
        - name: EUREKA_CLIENT_FETCHREGISTRY
          value: "true"
        # The hostnames must match the eureka serviceUrls, otherwise the replicas are reported as unavailable in the eureka dashboard
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
  # No need to start the pods in order. We just need the stable network identity
  podManagementPolicy: "Parallel"
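Once this is applied, a quick sanity check of the stable identities (namespace default assumed) could be:
kubectl get pods -l app=eureka
kubectl run dns-test --rm -i --restart=Never --image=busybox:1.36 -- nslookup eureka-0.eureka.default.svc.cluster.local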
@Stefan Ocke I'm trying the same setup, but with my own image of the Eureka server, and I keep getting this error:
Request execution failed with message: java.net.ConnectException: Connection refused (Connection refused)
2019-09-27 06:27:03.363 ERROR 1 --- [ main] c.n.d.s.t.d.RedirectingEurekaHttpClient : Request execution error. endpoint=DefaultEndpoint{ serviceUrl='http://eureka-1.eureka:8761/eureka/}
Here are configurations:
Eureka Spring Properties:
server.port=${EUREKA_PORT}
spring.security.user.name=${EUREKA_USERNAME}
spring.security.user.password=${EUREKA_PASSWORD}
eureka.client.register-with-eureka=true
eureka.client.fetch-registry=true
eureka.instance.prefer-ip-address=false
eureka.server.wait-time-in-ms-when-sync-empty=0
eureka.server.eviction-interval-timer-in-ms=15000
eureka.instance.leaseRenewalIntervalInSeconds=30
eureka.instance.leaseExpirationDurationInSeconds=30
eureka.instance.hostname=${EUREKA_INSTANCE_HOSTNAME}
eureka.client.serviceUrl.defaultZone=http://eureka-0.eureka:8761/eureka/,http://eureka-1.eureka:8761/eureka/
StatefulSet Config:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: eureka
spec:
  serviceName: "eureka"
  podManagementPolicy: "Parallel"
  replicas: 2
  selector:
    matchLabels:
      app: eureka
  template:
    metadata:
      labels:
        app: eureka
    spec:
      containers:
      - name: eureka
        image: "my-image"
        command: ["java", "-Djava.security.egd=file:/dev/./urandom", "-jar", "/app/eureka-service.jar"]
        ports:
        - containerPort: 8761
        env:
        - name: EUREKA_PORT
          value: "8761"
        - name: EUREKA_USERNAME
          value: "theusername"
        - name: EUREKA_PASSWORD
          value: "thepassword"
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: EUREKA_INSTANCE_HOSTNAME
          value: ${MY_POD_NAME}.eureka
Service Config:
apiVersion: v1
kind: Service
metadata:
  name: eureka
  labels:
    app: eureka
spec:
  clusterIP: None
  selector:
    app: eureka
  ports:
  - port: 8761
    targetPort: 8761
Ingress Controller:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: eureka
          servicePort: 8761
You have to install the Kubernetes kube-dns server to resolve names to their IPs, and then expose your Eureka pods as a service. See the Kubernetes docs for more info on how to create DNS and services.
@random_dude, what would be the case if I create 2 or 3 replicas of Eureka? It turned out that when I mount a micro-service 'X', it gets registered in all Eureka replicas, but when it goes down, only one replica gets the update; the others still consider the micro-service instance as running.
I got exactly this problem and it got resolved by adding an environment variable for the pod. This has the answer. A sample env variable for my pod is shown below,
I'm wondering where you put that config: is it on the service registry side (on Eureka), or on the client side (where we want to connect to Eureka)?
My current case is that all configuration is placed in a repo, the Eureka config as well, which makes the config static.
