I need some help monitoring microservices that are running on an AKS cluster, using an existing Prometheus instance that runs on a different node.
How can we detect and set up alerts for a number of pod-level scenarios, such as out-of-memory issues and pods getting restarted?
I have checked a few articles on the internet, but most of them address the case where both the microservices and Prometheus run on the same AKS cluster. In my case, Prometheus is already configured on a different node, and we are using node_exporter and blackbox_exporter to monitor other servers. I do not want to disturb the existing Prometheus setup.
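For reference, the kind of alert rules I'm hoping to end up with would look something like the sketch below (assuming the external Prometheus can scrape kube-state-metrics on the AKS cluster; the group name, alert names, and thresholds are all placeholders):

```yaml
# Sketch only: pod-level alerts, assuming kube-state-metrics is reachable
# from the external Prometheus. Names and thresholds are placeholders.
groups:
  - name: aks-pod-alerts
    rules:
      - alert: PodOOMKilled
        # kube-state-metrics exposes the last termination reason per container
        expr: kube_pod_container_status_last_terminated_reason{reason="OOMKilled"} == 1
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Container {{ $labels.container }} in pod {{ $labels.pod }} was OOMKilled"
      - alert: PodRestartingFrequently
        # fires when a container restarts more than 3 times within 15 minutes
        expr: increase(kube_pod_container_status_restarts_total[15m]) > 3
        labels:
          severity: warning
        annotations:
          summary: "Pod {{ $labels.pod }} is restarting frequently"
```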
Any suggestion would be greatly appreciated!
Thanks,
Sharmila
Related
How should I go about deploying Jaeger on AWS ECS with Elasticsearch as the backend? Is it a good idea to use the Jaeger all-in-one image, or should I use separate images?
While I didn't find an official Jaeger reference on this, I think the Jaeger all-in-one image is not intended for production use. It makes one container a single point of failure, so it is better to run a separate container for each Jaeger component (if one goes down for some reason, the others can continue to operate).
I have recently written a blog post about hosting Jaeger on AWS with the AWS Elasticsearch (OpenSearch) service. While it uses the all-in-one image, it is still useful for getting a general idea of how to go about this.
Just to generally outline the process (described in detail in the post):
1. Create an AWS Elasticsearch cluster
2. Create an ECS cluster (running on EC2)
3. Create an ECS task definition configured with the Jaeger all-in-one image and the Elasticsearch URL from step 1 (a configuration sketch follows this list)
4. Create an ECS service that runs the created task definition
5. Make sure the security groups on your EC2 instances allow access to the Jaeger ports, as described here
6. Send spans to your Jaeger endpoint via the OpenTelemetry SDK
7. View your spans in the hosted Jaeger UI (your-ec2-url:16686)
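As a rough illustration of step 3, here is what the container configuration boils down to, written as a docker-compose sketch for local testing (the image tag, the Elasticsearch URL, and the port list are assumptions to adapt):

```yaml
# Sketch: Jaeger all-in-one pointed at an external Elasticsearch,
# mirroring what the ECS task definition would configure.
version: "3"
services:
  jaeger:
    image: jaegertracing/all-in-one:1.45      # placeholder tag
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      # URL of the AWS Elasticsearch/OpenSearch domain from step 1 (placeholder)
      - ES_SERVER_URLS=https://my-es-domain.us-east-1.es.amazonaws.com:443
      - COLLECTOR_OTLP_ENABLED=true           # accept spans from OpenTelemetry SDKs
    ports:
      - "16686:16686"   # Jaeger UI
      - "4317:4317"     # OTLP gRPC
      - "4318:4318"     # OTLP HTTP
```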
The all-in-one image is a useful tool in development for testing your work locally.
For deployment it is very limiting. Ideally, to handle a potentially large volume of traffic, you will want to scale parts of your infrastructure independently.
I would recommend deploying multiple jaeger-collectors configured to write to the ES cluster. Then you can run a jaeger-agent as a sidecar next to each app or service that emits telemetry. These agents can be configured to forward to any one of a list of collectors, which adds some extra resilience.
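To make the collector/agent split concrete, here is a compose-style sketch (on ECS these would map to separate task definitions; the image tags, ports, and ES URL are placeholders):

```yaml
# Sketch: separate collector and agent containers instead of all-in-one.
version: "3"
services:
  jaeger-collector:
    image: jaegertracing/jaeger-collector:1.45
    environment:
      - SPAN_STORAGE_TYPE=elasticsearch
      - ES_SERVER_URLS=https://my-es-domain.us-east-1.es.amazonaws.com:443  # placeholder
    ports:
      - "14250:14250"   # gRPC endpoint that agents forward spans to
  jaeger-agent:
    image: jaegertracing/jaeger-agent:1.45
    # a comma-separated list of collectors gives the agent failover options
    command: ["--reporter.grpc.host-port=jaeger-collector:14250"]
    ports:
      - "6831:6831/udp" # apps send spans to the local agent here
```

In practice you would run several collector replicas behind a load balancer and one agent alongside each app task.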
I have around 20 microservices that I run and check in my local development environment. The microservices are Spring Boot services built with Maven. When I have to run them on my AWS server, can I run all of these containers individually? They may have a shared database, so will that be an issue I might face? Or is it possible to run all these services together in one single Docker image?
Also, I have to configure this with Kubernetes. I have set up Minikube in my local dev environment, so it would be helpful to know if there are any considerations to keep in mind when running around 20 services on Minikube, or even on a full Kubernetes environment.
PS: I know this is a basic question, but I don't have much experience with DevOps.
Ideally you should have a separate Docker image for each microservice and create a Kubernetes Deployment for each of them. This decouples the scaling of the individual microservices from each other. Communication between microservices should also go through Kubernetes Services; this keeps communication stable, because Service IPs and FQDNs don't change even as pods are created, deleted, and scaled up and down.
Just be cautious about how much memory and CPU the microservices will need, and whether the machine running Minikube has that many resources available. If the available memory and CPU of a Kubernetes node are not enough to schedule a pod, the pod will be stuck in the Pending state.
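As an illustration, each microservice would get something along these lines (the name, image, and resource numbers are placeholders; size the requests to what a Spring Boot JVM actually needs):

```yaml
# Sketch: one Deployment + Service per microservice (placeholder values).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders-service
  template:
    metadata:
      labels:
        app: orders-service
    spec:
      containers:
        - name: orders-service
          image: myregistry/orders-service:1.0   # one image per microservice
          ports:
            - containerPort: 8080
          resources:
            requests:            # if no node can satisfy these, the pod stays Pending
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
---
apiVersion: v1
kind: Service
metadata:
  name: orders-service   # other services reach these pods via this stable DNS name
spec:
  selector:
    app: orders-service
  ports:
    - port: 80
      targetPort: 8080
```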
Since you have that many microservices, I suggest you create a Kubernetes cluster on AWS out of 3-4 VMs (more info here) and then try to deploy all your microservices on it. For that you need to build a container image for each service individually and create a Kubernetes Deployment for each service.
can I run all these containers individually? They may have a shared database, so will that be an issue I might face?
As you have a shared database, I suggest you run your database server on a separate host and connect to it remotely from your services. This way you will be able to share the database between your microservices.
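One way to wire this up from inside the cluster is an ExternalName Service, so every microservice can use the same stable in-cluster DNS name for the shared database (the hostname below is a placeholder):

```yaml
# Sketch: an in-cluster alias for a database running outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: shared-db
spec:
  type: ExternalName
  externalName: db.internal.example.com   # placeholder for your database host
```

Your services can then connect to shared-db (plus the database port) regardless of where the database actually runs.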
We have an elasticsearch cluster deployed to the Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> the datadog_metrics output
You can deploy the Datadog agent in a container or on an instance that you manage, and then configure it according to these instructions to gather metrics from the remote Elasticsearch cluster hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You can find a sample configuration file here.
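For illustration, a minimal elastic.d/conf.yaml could look like this (the URL and credentials are placeholders; see the sample file linked above for the full set of options):

```yaml
# conf.d/elastic.d/conf.yaml (sketch; all values are placeholders)
init_config:

instances:
  - url: https://my-cluster.es.us-east-1.aws.found.io:9243
    username: datadog-readonly
    password: "<password>"
    cluster_stats: true    # collect cluster-wide stats, not just node-local ones
    pshard_stats: true     # collect primary-shard metrics
```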
As George Tseres mentioned above, the way I got this working was to set up collection on a separate instance (through Docker) and then configure it to read the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.
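For the local docker-compose test, the relevant bits look roughly like this (the API key, site, and mount path are placeholders):

```yaml
# Sketch: run the Datadog agent locally with the elastic check mounted in.
version: "3"
services:
  datadog-agent:
    image: gcr.io/datadoghq/agent:7
    environment:
      - DD_API_KEY=<your-api-key>   # placeholder
      - DD_SITE=datadoghq.com
    volumes:
      # mount the elastic.d/conf.yaml from above into the agent's conf.d
      - ./conf.d:/etc/datadog-agent/conf.d:ro
```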
I want to set up the Elastic Stack (Elasticsearch, Logstash, Beats, and Kibana) to monitor my Kubernetes cluster, which runs on on-prem bare metal. I need some recommendations on the following two approaches, i.e. which one would be more robust, fault-tolerant, and production-grade. Let's say I have a K8s cluster named K8-abc.
Approach 1 - Would it be good to set up the Elastic Stack outside the Kubernetes cluster?
In this approach, all the logs from pods running in the kube-system namespace and in user-defined namespaces would be fetched by Beats (running on K8-abc) and put into the ES cluster configured on Linux bare metal, via Logstash (which also runs on VMs). For fetching the Kubernetes node logs, the Beats running on the respective VMs (the ones that form K8-abc) would fetch the logs and put them into the same ES cluster. The thing to note here is that the VMs forming the ES cluster are not part of K8-abc.
Approach 2 - Would it be good to set up the Elastic Stack on the Kubernetes cluster K8-abc itself?
In this approach, all the logs from pods running in the kube-system namespace and in user-defined namespaces would be sent to an Elasticsearch cluster configured on K8-abc, via Logstash and Beats (both running on K8-abc). For fetching the K8-abc node logs, the Beats running on the VMs (the ones that form K8-abc) would put the logs into the ES cluster running on K8-abc, via the Logstash instance that also runs on K8-abc.
Can someone help me evaluate the pros and cons of the two approaches mentioned above? It would be helpful even if just relevant links to blogs and case studies are provided.
I would be more inclined towards the second solution. It has many advantages over the first one, although it may seem more complex when it comes to the initial setup. You could actually ask a similar question about migrating any other type of workload to Kubernetes; it has many advantages over VMs. To name just a few:
a self-healing cluster,
service discovery and integrated load balancing,
much easier scaling (HPA) in comparison with VMs,
storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and many more, including a Dynamic Volume Provisioning mechanism.
All the above points apply to pretty much any workload and can be seen as advantages of Kubernetes in general, so let's look at why to use it specifically for the Elastic Stack:
Elastic is actively promoting the use of Kubernetes on their website. See also this article.
They also provide an official Elasticsearch Helm chart, so it is already quite well supported by Elastic (a values sketch follows below).
There are probably many other reasons in favour of the Kubernetes solution that I didn't mention here. Here you can find a hands-on article about setting up a highly available and scalable Elasticsearch cluster on Kubernetes.
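To give a flavour of what the Helm-chart route involves, deploying Elasticsearch largely comes down to a small values override, something like the sketch below (the replica count, heap size, resources, and storage class are placeholders to adapt to your bare-metal nodes):

```yaml
# values.yaml sketch for the official elastic/elasticsearch Helm chart
# (all numbers and the storage class are placeholders).
replicas: 3
esJavaOpts: "-Xms2g -Xmx2g"
resources:
  requests:
    cpu: "1"
    memory: "4Gi"
volumeClaimTemplate:
  storageClassName: local-storage   # e.g. local persistent volumes on bare metal
  resources:
    requests:
      storage: 50Gi
```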
I'm new to Kubernetes and trying to understand which resources reside on which node.
Can anyone please confirm what kinds of Kubernetes resources, other than pods, a worker node usually hosts? On which nodes do other resources like Ingress, PersistentVolumeClaim, Service, ConfigMap, etc. reside?
Hi Pramode Kumar Pandit,
On the workers you run your workloads, such as your deployed microservices; the ingress controller runs on your master node. Where the persistent volumes reside will depend on how you configure your ICP cluster. Also note that objects such as Services, ConfigMaps, and PersistentVolumeClaims are API objects stored in etcd on the master; they are not hosted on any particular worker node.
If you look at this post you will get an idea of the architecture used by ICP:
https://www.ibm.com/support/knowledgecenter/en/SSBS6K_3.1.2/getting_started/architecture.html