CoreOS VM crash: swap trace printed - linux-kernel

I'm using CoreOS 773.1.0 with Kubernetes. Recently it crashed and printed this trace log:
The VM is still running, but I cannot SSH into it, and the Kubernetes master node declares it NotReady. I had to turn it off (not a clean shutdown) and start it again.
I'm using Hyper-V as the hypervisor; the VM is assigned 12 GB RAM, 4 GB swap, and 4 CPU cores. Notably, I started getting this error after I moved the disk (.vhd file) to a new partition.

This is a known issue with CoreOS 717.3.0 and swap: https://github.com/coreos/bugs/issues/429

Based on the stack trace, it looks like the kernel was trying to free up memory. So, probably the node was under severe memory pressure. Kernel bugs tend to crop up under memory pressure.
It also looks like swap was turned on. Kubernetes developers don't recommend turning on swap.
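If you want to follow that advice, here is a minimal sketch for turning swap off on the node (assuming a systemd-based host; on CoreOS the swap is often defined as a systemd swap unit rather than in /etc/fstab, so the last step may differ on your system):
$ sudo swapoff -a         # turn off all active swap immediately
$ swapon --show           # confirm nothing is still swapping
$ systemctl --type=swap   # if a systemd swap unit created it, disable/mask that unit so swap stays off after reboot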

It looks like the kubelet process is stuck. Do you have the kubelet log, and do you know at which operation the kubelet got stuck?
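On a systemd node such as CoreOS, the kubelet log can usually be pulled with journalctl (a sketch; the unit name depends on how the kubelet was installed):
$ journalctl -u kubelet --no-pager -n 200   # last 200 lines of the kubelet unit
$ journalctl -u kubelet -f                  # follow the log live while reproducing the hang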

Related

System too slow after minikube (4 GB RAM + Core i3)

I have a 7th-generation Core i3 processor and 4 GB of RAM in my system. I installed VirtualBox and then ran minikube start in my shell.
As minikube starts, the system is heavily slowed down. It hangs at the drop of a hat. I am learning Kubernetes and want to use YAML files to deploy and learn, which I can't do in the playground.
And as I delete minikube, the system comes back to life.
So, I have two questions. Is the issue with the RAM or with the Core i3? The prerequisite for minikube is 2 CPUs. Does that mean minikube alone will have two CPUs for itself and the host will not have any?
What's causing the issue?
Second, is there any other way I can learn Kubernetes without minikube? The playground doesn't provide a way to add YAML files.
This is probably not enough memory, so the machine starts paging/swapping. Not enough CPU is a problem too, of course.
If you are using Windows, try stopping all other programs and Docker Desktop when using minikube (minikube uses 2 GB of RAM by default).
Try upgrading to at least 8 GB of RAM when working with virtual machines (minikube and Docker Desktop both create VMs for the Linux environment).
If you are using a Linux machine, use the docker driver for minikube.
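If the machine only has 4 GB of RAM, it can also help to start minikube with an explicitly smaller resource allocation (a sketch; adjust the values to what your host can spare):
$ minikube start --driver=docker --memory=2048 --cpus=2   # give the minikube node 2 GB RAM and 2 CPUs
$ minikube stop                                           # free those resources when you are done learning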

Why is docker consuming so much VIRT memory?

I am running Docker on my Mac OS X machine: 2.5 GHz Intel Core i7, 16 GB 1600 MHz DDR3.
The point is that it seems to be consuming TOO much VIRT memory, if I am reading the htop output correctly.
Is this normal? Or is there any problem behind it? My laptop is very slow.
This is illustrated by moby/moby issue 31594.
That issue actually asks to run contrib/check-config.sh as a way to know more about the docker configuration being used.
The same issue has been reported since 2015 in #15020.
It appears that Docker somehow does not respect MALLOC_ARENA_MAX and will allow the amount of virtual memory to grow regardless, to a number correlating with the number of CPUs allocated to it.
(host is running macOS 10.13.2)
As commented:
docker itself does nothing with that environment variable (or memory management of the processes inside the container); it sets up namespaces and cgroups for the process, which is all part of the kernel.
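If you want to bound what a container can actually use, regardless of how glibc sizes its arenas, you can pass the variable through and put an explicit memory limit on the container (a sketch; the image name and limit values are placeholders):
# the env var is passed to the process; the hard cap is enforced by the kernel's cgroup, not by Docker itself
$ docker run -e MALLOC_ARENA_MAX=2 --memory=512m --memory-swap=512m my-image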

Is a VM's speed affected by programs running outside of it?

I'm testing the speed of different sorting methods for a CS class, and although our professor said we didn't need to be terribly precise, he still wants us to be careful not to run apps in the background while testing, not to use a different machine, or do anything else that would throw off the speed of the sorts.
If I ran the tests in a VM, would the environment outside of the VM affect the speed? Would that help make the tests accurate without having to worry about changes in the apps I have open alongside the VM?
In short, yes.
In most scenarios, hosts share their resources with the VM. If you bog down/freeze/crash the host then the VM will be affected.
On servers with more robust resources, processes running on the host won't affect the VM as much, because you can assign more RAM and more virtual processors to the VM so it runs smoothly.
For instance, let's say our host has 64 GB of RAM and a processor with 4 cores and 8 threads (such as an Intel® Xeon® E3-1240 Processor).
We can tell VirtualBox, VMware or Hyper-V to assign 32 GB of RAM and 4 virtual processors to the VM, essentially cutting the host's power in half.
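With VirtualBox, for example, that split could be expressed roughly like this (a sketch; "TestVM" is a hypothetical VM name, and the VM must be powered off when you change these settings):
$ VBoxManage modifyvm "TestVM" --memory 32768 --cpus 4   # 32 GB of RAM (given in MB) and 4 virtual CPUs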
With this in mind, any processes you run on the host will usually be separate from the VM but if the processes freeze, crash or cause a hard reboot on the host then the VM will be affected regardless of RAM or virtual processors assigned.
In enterprise environments, a Hyper-V server should only be used for that purpose, and installing/running a lot of other roles on the host is usually frowned upon (such as DHCP, DNS, a web server (IIS), etc.).
So your professor is right to advise against running processes on the host while testing your VM.

Hadoop installation using Cloudera VMware

Can anyone please let me know the minimum RAM required (on the host machine) for running Cloudera's Hadoop on VMware Workstation?
I have 6 GB of RAM. The documentation says the RAM required by the VM is 4 GB.
Still, when I run it, CentOS loads and then the VM crashes. I have no other active applications running at the time.
Are there any other options apart from installing hadoop manually?
You may be running into your localhost running out of memory, or some other issue preventing the machine from booting completely. There are a couple of other options if you don't want to deal with a manual install:
If you have access to a Docker environment, try the Docker image they provide (see the sketch after this list).
Run it in the cloud with AWS, GCE, or Azure; they usually have a small allotment of personal/student credits available.
For AWS, EMR also makes it easy for you to run something repeatedly.
For really short durations, you could try the demo from Bitnami (https://bitnami.com/stack/hadoop) and just run whatever you need to there.
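For the Docker route, the command Cloudera used to document for its QuickStart image looked roughly like this (a sketch from memory; the image has since been deprecated, so check what is currently published before relying on it):
$ docker pull cloudera/quickstart:latest
$ docker run --hostname=quickstart.cloudera --privileged=true -t -i -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart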

Very slow network performance of Docker containers with host's network

I'm having a problem with sluggish network performance between Docker containers and the host's network. I asked this question on the Docker forum but have received no answers so far.
Problem
Set-up: two Macs on the same local network; the first runs an MQTT broker (mosquitto); the second runs Docker for Mac. Two C++ programs run on the second Mac and exchange data multiple times through the MQTT broker (on the first Mac), using the Paho MQTT C library.
Native run: when I ran the two C++ programs natively, the network performance was excellent as expected. The programs were built with XCode 7.3.
Docker runs: when I ran either of the C++ programs, or both of them, in Docker, the network performance dropped dramatically, roughly 30 times slower than the native run. The Docker image is based on ubuntu:latest, and the programs were built by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609.
I tried to use the host network (--network="host" in Docker run) but it didn't help. I also tried to run the MQTT broker on the second Mac (so that the broker and the containers ran on the same host); the problem persisted. The problem existed on both my work LAN and my home network.
In theory, it could have been that the C++ programs were generally slow in Docker containers. But I doubt this was the case because in my experience, the general performance of C++ code in Docker is about as fast as in the native environment.
Question
What could be the cause of this problem? Are there any settings in Docker that can solve this issue?
Your problem sounds very similar to this open issue on the Docker for Mac repo. Unfortunately, there doesn't seem to be a known solution, but the discussion there may be useful. My personal guess at the moment is that the bug lives somewhere in the HyperKit virtualization layer used by Docker for Mac specifically.
In my case, I was oddly able to bypass this issue by using a different physical router, but I have no idea why it worked. Sadly that's not really a 'solution' though.
I hate that this isn't a great answer, but I wanted to at least share the discussion in the open issue. Good luck and keep us posted.
I suspect the default allocation of memory and CPU for the containers might not be optimal for the kind of network performance you are trying to achieve.
Investigate the utilization of resources within the containers using standard tools like top, htop, strace, etc. Or you can use the docker stats command while these instances are under peak load:
$ docker stats node1 node2
CONTAINER   CPU %   MEM USAGE/LIMIT    MEM %   NET I/O
node1       0.07%   796 KB/64 MB       1.21%   788 B/648 B
node2       0.07%   2.746 MB/64 MB     4.29%   1.266 KB/648 B
Then you might want to modify various resource allocation parameters available with docker run.
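For example, something along these lines (a sketch; the container name, image, and limits are placeholders):
$ docker run --name node1 --memory=1g --cpus=2 my-broker-image   # raise the memory limit seen above and allow up to 2 CPUs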
EDIT: Another thing to check would be the MTU of the actual system interface versus the setting on the Docker interfaces. Use
--mtu=BYTES (a Docker daemon option) to set the MTU of the Docker interfaces to match your system interface's MTU value.
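On a Linux host, a quick way to compare and align the two (a sketch; eth0 and the 1400-byte value are placeholders for your actual interface and MTU):
$ ip link show eth0 | grep mtu      # MTU of the host's physical interface
$ ip link show docker0 | grep mtu   # MTU of the default Docker bridge; the two should match
# then either start the daemon with dockerd --mtu=1400, or set "mtu": 1400 in /etc/docker/daemon.json and restart the Docker service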
