Cannot obtain cAdvisor container metrics on Windows Kubernetes nodes

I have configured a mixed-node Kubernetes cluster. Two worker nodes are Ubuntu Server 18.04.4 and two worker nodes are Windows Server 2019 Standard. I have deployed several Docker containers as deployments/pods to each set of worker nodes (.NET Core apps on Ubuntu and legacy WCF apps on Windows). Everything seems to work as advertised.
I am now at the point where I want to monitor the resources of the pods/containers. I have deployed Prometheus, kube-state-metrics, and metrics-server, and I have Prometheus scraping the nodes. For container metrics, the kubelet/cAdvisor returns everything I need from the Ubuntu nodes, such as container_cpu_usage_seconds_total, container_cpu_cfs_throttled_seconds_total, etc. But the kubelet/cAdvisor on the Windows nodes only gives me some basic information:
http://localhost:8001/api/v1/nodes/[WINDOWS_NODE]/proxy/metrics/cadvisor
# HELP cadvisor_version_info A metric with a constant '1' value labeled by kernel version, OS version, docker version, cadvisor version & cadvisor revision.
# TYPE cadvisor_version_info gauge
cadvisor_version_info{cadvisorRevision="",cadvisorVersion="",dockerVersion="",kernelVersion="10.0.17763.1012",osVersion="Windows Server 2019 Standard"} 1
# HELP container_scrape_error 1 if there was an error while getting container metrics, 0 otherwise
# TYPE container_scrape_error gauge
container_scrape_error 0
# HELP machine_cpu_cores Number of CPU cores on the machine.
# TYPE machine_cpu_cores gauge
machine_cpu_cores 2
# HELP machine_memory_bytes Amount of memory installed on the machine.
# TYPE machine_memory_bytes gauge
machine_memory_bytes 1.7179398144e+10
So while the cAdvisor on the Ubuntu nodes gives me everything I ever wanted about containers and more, the cAdvisor on the Windows nodes only gives me the above.
I have examined the PowerShell scripts that install/configure the kubelet on the Windows nodes, but I don't see a switch or config file setting I could use, in case there is some magical setting I am missing that would enable container metrics to be published when the kubelet/cAdvisor is scraped. Any suggestions?

There is a metrics/resource/v1alpha1 endpoint, but it provides only four basic metrics.
Documentation
I think that cAdvisor doesn't properly support Windows nodes; what you see is just an emulated interface with a limited set of metrics.
GitHub issue
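As a stopgap, Prometheus can scrape that resource endpoint on the Windows nodes through the API server proxy, the same way the cadvisor path is usually scraped. A minimal sketch, assuming Prometheus runs in-cluster with the default service account token mounted; the job name is made up and the v1alpha1 path may differ on newer Kubernetes versions:
- job_name: 'windows-kubelet-resource'
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # Keep node labels so Windows nodes can be told apart from Linux ones.
    - action: labelmap
      regex: __meta_kubernetes_node_label_(.+)
    # Route the scrape through the API server proxy instead of the kubelet directly.
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/resource/v1alpha1
This only yields the handful of CPU and memory metrics that endpoint exposes, not the full set of container_* series the Ubuntu nodes return.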

Related

How to view application specific logs while running services using docker-compose

How to view application-specific logs while running services using docker-compose, without getting into each of the containers? We have microservices running in Rails, Python, and Java in a single docker-compose environment. What would be a cost-effective open-source solution that our Operations team can use for monitoring and searching logs? We would like to avoid Elasticsearch for this as we don't have a big budget; I appreciate your inputs.
Elasticsearch provides a free tier as well (see the ELK subscriptions page); you can use the BASIC tier, which is free and open.
You can easily set up logging infrastructure using:
ELK - Elasticsearch, Logstash, Kibana
Filebeat - log shipper for Docker containers
Metricbeat - metrics collection for Docker containers
The infrastructure would scale irrespective of how many containers you have.
You can check out some basic monitoring and logging examples here - link
As well as the free Basic license mentioned in the other answer, most Elastic tools are available in Apache-licensed OSS versions.
Beats agents mostly support autodiscovery in Docker and docker-compose, making them really easy to use on an ongoing basis, even with short-lived containers.
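As a rough illustration of that autodiscovery, a Filebeat configuration along these lines picks up logs from every container started by docker-compose; the Elasticsearch host is a placeholder for whatever you run:
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true          # honour co.elastic.logs/* labels set per container
output.elasticsearch:
  hosts: ["elasticsearch:9200"]    # placeholder; point at your own cluster
Containers can then opt in or out, or override the parser, purely through labels in the compose file, which keeps the ongoing effort for the Operations team low.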
It would help if you specify whether the budget constraints are around a) licensing costs, b) time and effort for your Operations team, or c) something else.

Running Winlogbeat or any Windows Event Log collector as a DaemonSet on Windows nodes in a Kubernetes cluster

Is it possible to run Winlogbeat or any Windows Event Log collector as a DaemonSet on the Windows nodes inside a Kubernetes cluster, without having to install it in every Windows pod? All the examples are for Linux, and there is zero information on how to configure something similar on Windows nodes. Is it even possible to achieve this in theory?
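In principle the same DaemonSet pattern used on Linux should apply, pinned to the Windows nodes with a nodeSelector; whether a suitable Windows container image exists for your collector is the open question. A sketch under that assumption, with the image name and ConfigMap purely hypothetical:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: winlogbeat
  namespace: logging
spec:
  selector:
    matchLabels:
      app: winlogbeat
  template:
    metadata:
      labels:
        app: winlogbeat
    spec:
      nodeSelector:
        kubernetes.io/os: windows              # schedule only onto Windows nodes
      containers:
        - name: winlogbeat
          image: example.registry/winlogbeat:windows   # hypothetical image
          volumeMounts:
            - name: config
              mountPath: "C:\\config"          # collector reads winlogbeat.yml from here
      volumes:
        - name: config
          configMap:
            name: winlogbeat-config            # hypothetical ConfigMap holding winlogbeat.yml
One pod per Windows node then runs the collector, so nothing has to be installed inside the application pods.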

Concerning health of Cloudera Hadoop hosts (network interface speed)

I am new to this area and have some problems with my Hadoop cluster.
I have fixed many health issues, but the health of my hosts is still "concerning" (still yellow and not "green", unfortunately). Could this be because my hosts are connected through an old switch with a speed of 100 Mbps? The network cards of almost all the servers support 1000 Mbps.
The recommendation for resolving this error advised checking the duplex settings. The hosts would show "concerning health" if all of them were working in half-duplex mode, but I've checked and they all run in full-duplex mode.
Screenshot of the network interface speed issue:
The cluster installation options I chose (in case it matters):
1) Use Packages;
2) CDH 5;
3) CDH 5.13.0.
P.S. How much does the "concerning health" of the hosts affect their work? Can I run complex tasks on them? I have only run the WordCount test and a Pi calculation so far, and both were successful.
+1 to #cricket_007's comments. If you consider the Reference Architecture for Deploying CDH 5.x on Red Hat OSP 11 as a typical deployment, you'll see that the ideal network configuration is 10 Gigabit Ethernet (10GbE).
Most of Cloudera Manager's warnings are meant for production-level clusters.

Support of multi node type in IBM Cloud Private Cluster

IBM Cloud Private supports multiple node types (x, p, z) in the same cluster. What should we define in the Helm deployment to make sure a deployment goes to a particular node type?
IBM Cloud Private supports mixed architectures on the worker nodes.
For example, if you deploy a z application, it will only try to run on z nodes.
All the master nodes should be of one architecture, either x or p.
From the ICP app center, you can create different charts for different platforms, as follows:
app center
We found an issue with enabling 'nodeSelector' to select different platforms for deployment. This issue was tracked in both the ICP and Kubernetes communities, as below.
ICP issue: https://github.ibm.com/IBMPrivateCloud/roadmap/issues/1737
Kubernetes Chart issue: https://github.com/kubernetes/charts/issues/1899
You can get the node information from Infrastructure-->Node.
Different platforms have different architecture-specific images, so we have to use nodeSelector to get the pods scheduled onto the right nodes. We are now trying to enable multiarch Docker images here: https://github.com/docker-library/official-images ; once this is finished, we will no longer need nodeSelector.
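For illustration, a chart's Deployment template could pin pods to one architecture with a nodeSelector on the architecture label; the app name and image here are made up, and on clusters of that era the label key was beta.kubernetes.io/arch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-z-app                      # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sample-z-app
  template:
    metadata:
      labels:
        app: sample-z-app
    spec:
      nodeSelector:
        beta.kubernetes.io/arch: s390x    # use ppc64le for p nodes, amd64 for x nodes
      containers:
        - name: app
          image: registry.example.com/sample-z-app:1.0   # hypothetical image
In a Helm chart the label value would normally come from a values.yaml entry so the same chart can target x, p, or z.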
For the usage metrics of the different nodes in the cluster, you can get resource usage from the dashboard by default. If you install an add-on monitoring framework such as Prometheus on ICP, you will get more metrics.

How to deploy a Cassandra cluster on two ec2 machines?

It's a known fact that it is not possible to create a cluster in a single machine by changing ports. The workaround is to add virtual Ethernet devices to our machine and use these to configure the cluster.
I want to deploy a cluster of, let's say, 6 nodes on two EC2 instances. That means 3 nodes on each machine. Is it possible? What should the seed node addresses be, if it is possible?
Is it a good idea for production?
You can use the DataStax AMI on AWS. DataStax Enterprise is a suitable solution for production.
I am not sure about your cluster layout, because by default each node needs its own config files. I have no idea how to change that.
There are simple instructions here. When you configure the instance settings, you have to enter the advanced settings for the cluster, like --clustername yourCluster --totalnodes 6 --version community, etc. You can also install Cassandra manually by installing the latest version of Java and then Cassandra itself.
You can build the cluster by modifying /etc/cassandra/cassandra.yaml (Ubuntu 12.04) fields such as cluster_name, seeds, listen_address, rpc_address, and the token settings. cluster_name has to be the same for the whole cluster. A seed node is a contact point whose IP you should add to every node's configuration. I am still confused about tokens.
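To make those fields concrete, here is a minimal per-node sketch of cassandra.yaml with placeholder private IPs; repeat it on each node with its own listen_address, keeping cluster_name and the seed list identical everywhere:
cluster_name: 'MyCluster'
num_tokens: 256                          # vnodes; avoids assigning tokens by hand
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.0.0.11,10.0.0.12"     # e.g. one seed per EC2 instance
listen_address: 10.0.0.11                # this node's own private IP
rpc_address: 0.0.0.0
broadcast_rpc_address: 10.0.0.11         # required when rpc_address is 0.0.0.0
With num_tokens set, the tokens mentioned above are assigned automatically, which is why most newer setups never touch initial_token.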
