System too slow after starting minikube (4 GB RAM + Core i3) - performance

I have a 7th-generation Core i3 processor and 4 GB of RAM in my system. I installed VirtualBox and then ran minikube start in my shell.
As soon as minikube starts, the system slows down heavily. It hangs at the drop of a hat. I am learning Kubernetes and want to make use of YAML files to deploy and learn, which I can't do in the playground.
And as I delete minikube, the system comes back to life.
So, I have two questions. First, is the issue with the RAM or with the Core i3? The prerequisite for minikube is 2 CPUs. Does that mean minikube alone will have two CPUs to itself and the host will not have any?
What's causing the issue?
Second, is there any other way I can learn Kubernetes without minikube? The playground doesn't provide a way to add YAML files.

This is probably not enough memory, so the machine starts paging/swapping. Not enough CPU is a problem too, of course.
If you are using Windows, try stopping all other programs and Docker Desktop while using minikube (minikube uses 2 GB of RAM by default).
Try upgrading to at least 8 GB of RAM when working with virtual machines (minikube and Docker Desktop both create VMs for the Linux environment).
If you are using a Linux machine, use the docker driver for minikube.
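
If you do keep minikube on this machine, its resource flags at least let you cap and inspect what it takes; a rough sketch (the values below are only illustrative, not a recommendation for a 4 GB host):

# start minikube with an explicit CPU and memory cap, using the docker driver
minikube start --driver=docker --cpus=2 --memory=2048
# check what the node actually received
kubectl describe node minikube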

Related

Performance issues on WSL 2

For the last two months I've tried to embrace WSL2 as my main development environment. It works fine with most small projects, but when it comes to complex ones, things start to slow down, making working on WSL2 impossible. By complex I mean a monorepo with React, Node, different libraries, etc. This same monorepo, on the same machine, works just fine when running it from Windows itself.
Please note that, when working on WSL2, all my files are in the Linux environment; I'm not trying to access Windows files from WSL2.
I have the latest Docker Desktop installed, with WSL2 integration and Kubernetes enabled. But the issue persists even with Docker completely stopped.
I've also tried to limit the memory consumption for WSL2, but that doesn't seem to fix the problem.
My machine is an Aero 15X with 16 GB of RAM. A colleague suggested upgrading to 32 GB of RAM. But before trying this, or "switching back" to Windows for now, I'd like to see if someone has any suggestions I could test out.
Thanks.
The recent kernel version (Linux MSI-wsl 5.10.16.3) starts slower than previous versions overall. But the root cause can be outside WSL: if you have a new NVIDIA GeForce card installed, Windows lets it claim as much memory as it can, i.e. 6-16 GB, without actually using it. I had to limit WSL memory to 8 GB to start the WSL service without an OOM error. Try playing with this parameter in .wslconfig in your home directory, and look at the WSL_Console_Log in the same place. If the timestamps in that file are in milliseconds, my kernel starts in 55 ms and then hangs on networking(!!!).
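For reference, that memory cap lives in .wslconfig under your Windows user profile; a minimal sketch (the 8GB value is just what worked for me, adjust to your machine):

# %UserProfile%\.wslconfig
[wsl2]
memory=8GB       # cap the RAM the WSL 2 VM may claim
processors=4     # optionally cap CPUs as well

Restart WSL afterwards with wsl --shutdown so the new limits take effect.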
I'm afraid that the WSL kernel network driver, shown in the lshw -c network output below, is not as fast as it is expected to be:
lshw -c network
*-network
description: Ethernet interface
physical id: 1
logical name: eth0
serial: 00:15:5d:52:c5:c0
size: 10Gbit/s
capabilities: ethernet physical
configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.10.16.3-microsoft-standard-WS duplex=full firmware=N/A ip=172.20.186.108 link=yes multicast=yes speed=10Gbit/s

How to have the source code and IDE on a local laptop and run it on another machine inside a Docker container?

I'm using an "old" MacBook Pro 2015 with "just" 16GB of RAM for developing some complex websites based on Drupal. As a tool stack, I'm using Docksal (a Docker-based env) and PHPStorm as an IDE.
Because the Drupal site is complex it eats up almost all my memory and the NFS sharing is also very slow. I have another machine (an old Mac Mini) with 16GB of RAM. My idea is to have the code and IDE on my laptop and run the heavy Docker on a second machine.
I found the Docker Context tool and I've configured it to connect to the Docker host on the second machine, so now I can run docker commands on my laptop and they will be executed on the second machine.
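For context, setting that up typically looks like this (user and host names below are placeholders):

# on the laptop: point a named Docker context at the second machine's daemon over SSH
docker context create mac-mini --docker "host=ssh://user@mac-mini.local"
docker context use mac-mini
docker ps    # now talks to the daemon on the second machine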
But I'm still missing the last part of the puzzle: how can I configure Docker and Docksal to sync my code from the laptop to the Docker image on the other machine? Is it possible at all?
Also, I found Mutagen.io, but it is not supported in Docksal, and it's not clear how to combine everything together.

Any way to run Docker For Mac in only a couple GB of RAM?

Docker For Mac is demanding 4GB of available RAM.
That is a much larger overhead than I have seen before for VMs.
Is there any way to run Docker on Mac without so much RAM?
Have you tried these settings - https://docs.docker.com/docker-for-mac/#advanced
I have been running Docker Desktop on Mac with these settings for a long time without any issues, unless you run some heavy workloads on it.

Too little RAM in Kaa Server

I want to run a test with Kaa, so I was trying to install the sandbox on my laptop, but it only has 4 GB of RAM, so when I try to set up the virtual machine the system won't let me allocate more than 1.6 GB and the VM won't start.
So I tried to install it on another old laptop: I installed Ubuntu 16.04 and followed the step-by-step instructions on the Kaa project's website. I could do that, but when I try to start the server, it fails. I checked the error log and it tells me that the problem is in the Java virtual machine: it can't start because it only has 2 GB of RAM. I just need to test a little application, so is it possible to change this requirement in the Java machine and start the system?
PS: I can't buy more RAM.
I recommend you use Amazon AWS. The basic installation where you can run Kaa is free for one year, and it runs quite well there.
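On the JVM part of the question: heap limits are normally controlled with the standard -Xms/-Xmx flags. Exactly where Kaa picks them up is an assumption here (check the node service's start script or its environment file), but the idea looks roughly like this:

# hypothetical: hand the Kaa node a smaller heap than the default
export JAVA_OPTIONS="-Xms256m -Xmx1024m"
sudo service kaa-node restart    # service name assumed from a standard Kaa install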

Hadoop installation using Cloudera VMware

Can anyone please let me know the minimum RAM required (of the host machine) for running Cloudera's hadoop on VMware workstation?
I have 6GB of RAM. The documentation says that the RAM required by the VM is 4 GB.
Still, when I run it, CentOS loads and then the VM crashes. I have no other active applications running at the time.
Are there any other options apart from installing hadoop manually?
You may be running into your localhost running out of memory, or some other issue preventing the machine from booting completely. There are a couple of other options if you don't want to deal with a manual install:
If you have access to a Docker environment, try the Docker image they provide (see the sketch after this list).
Run it in the cloud with AWS, GCE, or Azure; they usually have a small allotment of personal/student credits available.
For AWS, EMR also makes it easy for you to run something repeatedly.
For really short durations, you could try the demo from Bitnami (https://bitnami.com/stack/hadoop) and just run whatever you need to there.
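
The Docker route mentioned above used to look roughly like this; the cloudera/quickstart image name and its entrypoint are from memory, so verify them against Cloudera's current documentation:

# pull and run the Cloudera QuickStart container interactively, exposing Hue on 8888
docker run --hostname=quickstart.cloudera --privileged=true -t -i \
  -p 8888:8888 cloudera/quickstart /usr/bin/docker-quickstart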
