I have a Kubernetes cluster on Google Cloud Platform, and on it I have a Jaeger deployment created from the development setup of the jaeger-kubernetes templates.
Since my goal is to use Elasticsearch as the backend storage, I followed the jaeger-kubernetes GitHub documentation and performed the following actions.
I created the services using the production setup options.
This ConfigMap defines the URLs, username, password, and ports used to access the Elasticsearch server:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/configmap.yml
And this manifest sets up the Elasticsearch Docker images and their volume mounts:
kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/production-elasticsearch/elasticsearch.yml
At this point there is an Elasticsearch service running on ports 9200 and 9300:
kubectl get service elasticsearch [a89fbe2]
NAME            TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)             AGE
elasticsearch   ClusterIP   None         <none>        9200/TCP,9300/TCP   1h
I then created the Jaeger components using the jaeger-kubernetes production template, like this:
λ bgarcial [~] → kubectl create -f https://raw.githubusercontent.com/jaegertracing/jaeger-kubernetes/master/jaeger-production-template.yml
deployment.extensions/jaeger-collector created
service/jaeger-collector created
service/zipkin created
deployment.extensions/jaeger-query created
service/jaeger-query created
daemonset.extensions/jaeger-agent created
According to the Jaeger architecture, the jaeger-collector and jaeger-query services require access to backend storage.
These are the services now running on my Kubernetes cluster:
λ bgarcial [~/workspace/jaeger-elastic] at master ?
→ kubectl get services [baefdf9]
NAME               TYPE           CLUSTER-IP      EXTERNAL-IP      PORT(S)                        AGE
elasticsearch      ClusterIP      None            <none>           9200/TCP,9300/TCP              3h
jaeger-collector   ClusterIP      10.55.253.240   <none>           14267/TCP,14268/TCP,9411/TCP   3h
jaeger-query       LoadBalancer   10.55.248.243   35.228.179.167   80:30398/TCP                   3h
kubernetes         ClusterIP      10.55.240.1     <none>           443/TCP                        3h
zipkin             ClusterIP      10.55.240.60    <none>           9411/TCP                       3h
Next, I run the kubectl edit configmap jaeger-configuration command in order to edit the Elasticsearch URL endpoints in the ConfigMap (maybe? At this moment I'm assuming that this is the next step...).
I execute it:
λ bgarcial [~] → kubectl edit configmap jaeger-configuration
And I get the following content to edit:
apiVersion: v1
data:
  agent: |
    collector:
      host-port: "jaeger-collector:14267"
  collector: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
    collector:
      zipkin:
        http-port: 9411
  query: |
    es:
      server-urls: http://elasticsearch:9200
      username: elastic
      password: changeme
  span-storage-type: elasticsearch
kind: ConfigMap
metadata:
  creationTimestamp: "2018-12-27T13:24:11Z"
  labels:
    app: jaeger
    jaeger-infra: configuration
  name: jaeger-configuration
  namespace: default
  resourceVersion: "1387"
  selfLink: /api/v1/namespaces/default/configmaps/jaeger-configuration
  uid: b28eb5f4-09da-11e9-9f1e-42010aa60002
Here... do I need to set up my own URLs for the collector and query services so that they connect to the Elasticsearch backend service?
How can I set the Elasticsearch IP address or URLs here?
Among the Jaeger components, the query and collector need access to storage, but I don't know what the Elasticsearch endpoint is...
Is server-urls: http://elasticsearch:9200 a correct endpoint?
I am just getting started in the Kubernetes and DevOps world, and I would appreciate it if someone could help me with the concepts and point me in the right direction for setting up Jaeger with Elasticsearch as backend storage.
When you access a service from a pod in the same namespace, you can use just the service name.
Example:
http://elasticsearch:9200
If you access the service from a pod in a different namespace, you should also specify the namespace.
Example:
http://elasticsearch.mynamespace:9200
http://elasticsearch.mynamespace.svc.cluster.local:9200
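Applied to this question: since elasticsearch and the Jaeger components are both in the default namespace, the existing value http://elasticsearch:9200 is already a correct endpoint. If Elasticsearch lived in another namespace, say a hypothetical logging namespace, the collector section of the jaeger-configuration ConfigMap would look roughly like this sketch (the namespace name is assumed):

collector: |
  es:
    # full in-cluster DNS name of the Elasticsearch service in the assumed "logging" namespace
    server-urls: http://elasticsearch.logging.svc.cluster.local:9200
    username: elastic
    password: changeme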
To check in what namespace the service is located, use the following command:
kubectl get svc --all-namespaces -o wide
Note: changing a ConfigMap does not apply it to a Deployment instantly. Usually you need to restart all pods in the Deployment to pick up the new ConfigMap values. There is no rolling-restart functionality at the moment, but you can use the following command as a workaround:
(replace deployment name and pod name with the real ones)
kubectl patch deployment mydeployment -p '{"spec":{"template":{"spec":{"containers":[{"name":"my-pod-name","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'
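For instance, applied to this setup, restarting the jaeger-collector Deployment with that workaround might look like the following (the container name jaeger-collector is an assumption based on the production template; verify it with kubectl get deployment jaeger-collector -o yaml):

# patching an env var with the current timestamp forces a new rollout of the pods
kubectl patch deployment jaeger-collector -p '{"spec":{"template":{"spec":{"containers":[{"name":"jaeger-collector","env":[{"name":"START_TIME","value":"'$(date +%s)'"}]}]}}}}'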
Related
I am trying to connect my Spring Boot app (running inside Minikube) to Kafka on my localhost (i.e., my laptop).
I have tried many things, including headless services, services without selectors, and updating Minikube's /etc/hosts, but nothing works yet.
I get an error from Spring Boot saying No resolvable bootstrap urls given in bootstrap.servers
Can someone please point out what I am doing wrong?
My Headless Service
apiVersion: v1
kind: Service
metadata:
  name: es-local-kafka
  namespace: demo
spec:
  clusterIP: None
---
apiVersion: v1
kind: Endpoints
metadata:
  name: es-local-kafka
subsets:
  - addresses:
      - ip: "10.0.2.2"
    ports:
      - name: "kafkabroker1"
        port: 9191
      - name: "kafkabroker2"
        port: 9192
      - name: "kafkabroker3"
        port: 9193
My application properties for kafka:
kafka.bootstrap-servers=${LOCALHOST}:9191,${LOCALHOST}:9192,${LOCALHOST}:9193
My Config Map:
apiVersion: v1
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: rr-config
  namespace: demo
data:
  LOCALHOST: es-local-kafka.demo.svc
I'm not sure whether you are trying to connect to a service running on Minikube or on the local system while leveraging Kafka on Minikube.
If your application is running on the local system and Kafka on Minikube,
you can connect the application to the Kafka cluster using the Minikube IP.
Here is a good example: https://github.com/d1egoaz/minikube-kafka-cluster
git clone https://github.com/d1egoaz/minikube-kafka-cluster
cd minikube-kafka-cluster
kubectl apply -f 00-namespace/
kubectl apply -f 01-zookeeper/
kubectl apply -f 02-kafka/
kubectl apply -f 03-yahoo-kafka-manager/
kubectl get svc -n kafka-ca1 (Note the port of kafka 31445)
List the IP of Minikube:
minikube ip
Now from your local system you can connect to the Minikube Kafka at http://<minikube-ip>:<port>, and you will see the Kafka manager UI in the browser.
If you are running the Spring Boot application on Minikube:
If both services are running in the same namespace, you only have to use the service name to connect.
Just the service name is enough in Spring Boot; if a port is required, you can also pass it:
es-local-kafka
You can also try passing the full service DNS name:
<servicename>.<namespace>.svc.cluster.local
A headless service is for different purposes, and a service without a selector is odd; in that case your service won't be able to connect to Pods.
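For example, a minimal sketch of the rr-config ConfigMap pointing at the full service DNS name (assuming the service and the application both live in the demo namespace, as in the question):

apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  # fully qualified in-cluster DNS name of the es-local-kafka service
  LOCALHOST: es-local-kafka.demo.svc.cluster.local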
I eventually got a fix, and it doesn't need all the crazy stuff I was referring to in my question:
You need to make sure your Kafka broker is bound to 0.0.0.0 instead of 127.0.0.1 (localhost). By default, that is what the single-node Kafka broker setup uses. I went with this due to both time constraints and the fact that this was just for a POC on my local machine (prod will have a specific DNS-resolvable Kafka URL anyway, so no such localhost shenanigans are needed).
In the Kafka URL in your application properties file, instead of localhost you need to give the Minikube IP. This is the same IP you get from the command minikube ip :)
Read more about how this works here: https://minikube.sigs.k8s.io/docs/handbook/host-access/
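As a concrete sketch of that fix: if minikube ip returned 192.168.49.2 (an assumed value; yours will differ), the rr-config ConfigMap from the question would become:

apiVersion: v1
kind: ConfigMap
metadata:
  name: rr-config
  namespace: demo
data:
  # Minikube IP (assumed value) instead of localhost, so the app inside Minikube can reach the broker
  LOCALHOST: "192.168.49.2"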
I'm doing a performance test for a web server which is deployed on an EKS cluster. I'm invoking the server using JMeter under different conditions (varying thread count, payload size, etc.).
I want to record the Kubernetes performance data with timestamps so that I can analyze it alongside my JMeter output (JTL).
I have been digging through the internet to find a way to record Kubernetes performance data, but I was unable to find a proper way to do it.
Can experts please suggest a standard way to do this?
Note: I also have a multi-container pod.
In line with #Jonas' comment:
This is the quickest way of installing Prometheus in your K8s cluster. I've added the details in this answer since it was impossible to put the commands in a readable format in a comment.
Add the Bitnami Helm repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Install the Helm chart for Prometheus:
helm install my-release bitnami/kube-prometheus
The installation output will look like this:
C:\Users\ameena\Desktop\shine\Article\K8\promethus>helm install my-release bitnami/kube-prometheus
NAME: my-release
LAST DEPLOYED: Mon Apr 12 12:44:13 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **
Watch the Prometheus Operator Deployment status using the command:
kubectl get deploy -w --namespace default -l app.kubernetes.io/name=kube-prometheus-operator,app.kubernetes.io/instance=my-release
Watch the Prometheus StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-prometheus,app.kubernetes.io/instance=my-release
Prometheus can be accessed via port "9090" on the following DNS name from within your cluster:
my-release-kube-prometheus-prometheus.default.svc.cluster.local
To access Prometheus from outside the cluster execute the following commands:
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Watch the Alertmanager StatefulSet status using the command:
kubectl get sts -w --namespace default -l app.kubernetes.io/name=kube-prometheus-alertmanager,app.kubernetes.io/instance=my-release
Alertmanager can be accessed via port "9093" on the following DNS name from within your cluster:
my-release-kube-prometheus-alertmanager.default.svc.cluster.local
To access Alertmanager from outside the cluster execute the following commands:
echo "Alertmanager URL: http://127.0.0.1:9093/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-alertmanager 9093:9093
Follow the commands to forward the UI to localhost.
echo "Prometheus URL: http://127.0.0.1:9090/"
kubectl port-forward --namespace default svc/my-release-kube-prometheus-prometheus 9090:9090
Open the UI in browser: http://127.0.0.1:9090/classic/graph
Annotate the pods so that their metrics are scraped:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 4 # Update the replicas from 2 to 4
  template:
    metadata:
      labels:
        app: nginx
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/port: '9102'
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
In the UI, apply appropriate filters and start observing the crucial parameters such as memory, CPU, etc. The UI supports autocomplete, so it will not be that difficult to figure things out.
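For example, two starting-point queries for per-pod CPU and memory (these assume the standard cAdvisor metric names exposed by the kube-prometheus stack; adjust the namespace filter to wherever your web server runs):

rate(container_cpu_usage_seconds_total{namespace="default"}[5m])
container_memory_working_set_bytes{namespace="default"}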
I followed "How can I access to kubernetes dashboard using NodePort in a remote cluster for testing?"
My Kubernetes cluster runs on Amazon EC2 instances, and the cluster services look like this:
$ kubectl get services --all-namespaces
NAMESPACE     NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default       kubernetes             ClusterIP   100.64.0.1      <none>        443/TCP         5h54m
kube-system   kube-dns               ClusterIP   100.64.0.10     <none>        53/UDP,53/TCP   5h53m
kube-system   kubernetes-dashboard   NodePort    100.68.178.51   <none>        443:31872/TCP   5h47m
$ kubectl cluster-info
Kubernetes master is running at https://api.selumalai.k8s.com
KubeDNS is running at https://api.selumalai.k8s.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
I have exposed the NodePort at 31872. If I access the dashboard in the browser using:
$ kubectl proxy -p 8001 &
$ curl https://api.selumalai.k8s.com:31872
it loads forever. What am I doing wrong?
You need to add a security group to your instance to allow traffic for port 31872.
It's explained how to do so here.
If you use kubectl proxy, it can be accessed locally only, which is stated in the docs:
...
kubectl proxy
...
The UI can only be accessed from the machine where the command is executed. See kubectl proxy --help for more options.
Also here is a guide from AWS on how to Deploy the Kubernetes Dashboard (web UI).
You can also try running an L7 ELB to expose the Dashboard, which is explained here.
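For example, with the proxy running on the same machine, the dashboard in the kube-system namespace is typically reachable at a URL of this form (a sketch, assuming the service is named kubernetes-dashboard as shown in your output):

kubectl proxy -p 8001 &
# then, from the same machine:
curl http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/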
I followed the commands mentioned on this page...
https://www.elastic.co/guide/en/cloud-on-k8s/current/index.html
The elastic service is started successfully, but I do not see an external IP:
# /usr/local/bin/kubectl --kubeconfig="wzone2.yaml" get service
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes                ClusterIP   10.245.0.1      <none>        443/TCP    10m
quickstart-es             ClusterIP   10.245.97.209   <none>        9200/TCP   3m11s
quickstart-es-discovery   ClusterIP   None            <none>        9300/TCP   3m11s
I tried the port-forward command, but that did not help:
kubectl port-forward service/quickstart-es 9200
How do I connect to this Elasticsearch server?
ClusterIP services are only available from inside the cluster. To make it visible from the outside you would need to change it to LoadBalancer type, and have an implementation of that available (read: be running on a cloud provider or use MetalLB).
Outside of using a LoadBalancer like #coderanger suggested, you can also use a service of type NodePort. This will allow you to connect to your service using the node IP address and without depending on cloud providers.
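A minimal sketch of such a NodePort service for the quickstart cluster (the selector label is an assumption about how the operator labels the Elasticsearch pods; verify it with kubectl get pods --show-labels):

apiVersion: v1
kind: Service
metadata:
  name: quickstart-es-nodeport
spec:
  type: NodePort
  selector:
    elasticsearch.k8s.elastic.co/cluster-name: quickstart  # assumed ECK pod label
  ports:
    - name: https
      port: 9200
      targetPort: 9200
      nodePort: 30920  # reachable on every node at this port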
Objective: create a k8s LoadBalancer service on AWS whose IP is static
I have no problem accomplishing this on GKE by pre-allocating a static IP and passing it in via loadBalancerIP attribute:
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: dave
spec:
  loadBalancerIP: 17.18.19.20
...etc...
But doing the same in AWS results in the externalIP stuck as <pending> and an error in the Events history.
Removing the loadBalancerIP value allows k8s to spin up a Classic LB:
$ kubectl describe svc dave
Type: LoadBalancer
IP: 100.66.51.123
LoadBalancer Ingress: ade4d764eb6d511e7b27a06dfab75bc7-1387147973.us-west-2.elb.amazonaws.com
...etc...
but AWS explicitly warns me that the IPs are ephemeral (there are sometimes 2), and Classic ELBs don't seem to support attaching static IPs.
Thanks for your time
As noted by #Quentin, AWS Network Load Balancer now supports Kubernetes:
https://aws.amazon.com/blogs/opensource/network-load-balancer-support-in-kubernetes-1-9/
Network Load Balancing in Kubernetes
Included in the release of Kubernetes 1.9, I added support for using the new Network Load Balancer with Kubernetes services. This is an alpha-level feature, and as of today is not ready for production clusters or workloads, so make sure you also read the documentation on NLB before trying it out. The only requirement to expose a service via NLB is to add the annotation service.beta.kubernetes.io/aws-load-balancer-type with the value of nlb.
A full example looks like this:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
  labels:
    app: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer