Performance issues on WSL 2 - Windows

For the last two months I've (tried to) embrace WSL 2 as my main development environment. It works fine with most small projects, but when it comes to complex ones, things start to slow down, making working on WSL 2 impossible. By complex ones I mean a monorepo with React, Node, different libraries, etc. This same monorepo, on the same machine, works just fine when run from Windows itself.
Please note that, when working on WSL 2, all my files live in the Linux file system; I'm not trying to access Windows files from WSL 2.
I have the latest Docker Desktop installed, with WSL 2 integration and Kubernetes enabled, but the issue persists even with Docker completely stopped.
I've also tried limiting WSL 2's memory consumption, but that doesn't seem to fix the problem.
My machine is an Aero 15X with 16 GB of RAM. A colleague suggested upgrading to 32 GB, but before trying that, or "switching back" to Windows for now, I'd like to see if anyone has suggestions I could test out.
Thanks.

The recent kernel version (Linux MSI-wsl 5.10.16.3) starts more slowly than previous ones overall. But the root cause can be outside WSL: if you have a new NVIDIA GeForce card installed, Windows lets it reserve as much memory as it can, i.e. 6-16 GB, even without using it. I had to limit WSL memory to 8 GB to start the WSL service without an OOM error. Try playing with this parameter in the .wslconfig file in your home directory (a sketch is below), and look at the WSL_Console_Log in the same place. If the timestamps in that file are in milliseconds, my kernel starts in 55 ms and then hangs on networking(!).
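For reference, a minimal .wslconfig that caps the VM's memory might look like the sketch below; the values are examples, not recommendations, so tune them for your machine:

[wsl2]
# limit the WSL 2 VM to 8 GB of RAM
memory=8GB
# optionally cap virtual CPUs and swap as well
processors=4
swap=2GB

After saving the file, run wsl --shutdown from Windows so the VM picks up the new limits on its next start.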
I'm afraid the WSL kernel network driver is not as fast as it's expected to be. Here is what lshw -c network reports:
*-network
description: Ethernet interface
physical id: 1
logical name: eth0
serial: 00:15:5d:52:c5:c0
size: 10Gbit/s
capabilities: ethernet physical
configuration: autonegotiation=off broadcast=yes driver=hv_netvsc driverversion=5.10.16.3-microsoft-standard-WS duplex=full firmware=N/A ip=172.20.186.108 link=yes multicast=yes speed=10Gbit/s
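One way to check whether the virtual NIC is really the bottleneck is to measure actual throughput between WSL 2 and the Windows host; this is only a sketch and assumes iperf3 is installed on both sides (replace the placeholder IP with your host's address):

# install the client inside WSL 2
sudo apt install iperf3
# start a server on the Windows host: iperf3.exe -s
# then run the client from WSL 2 against the host's IP
iperf3 -c <windows-host-ip>

If the measured rate is far below the 10 Gbit/s the link advertises, the hv_netvsc path is likely part of the problem.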

Related

How to have source code and an IDE on a local laptop and run it on another machine inside a Docker container?

I'm using an "old" MacBook Pro 2015 with "just" 16GB of RAM for developing some complex websites based on Drupal. As a tool stack, I'm using Docksal (a Docker-based env) and PHPStorm as an IDE.
Because the Drupal site is complex, it eats up almost all my memory, and the NFS sharing is also very slow. I have another machine (an old Mac Mini) with 16GB of RAM. My idea is to have the code and IDE on my laptop and run the heavy Docker on a second machine.
I found the Docker Context tool and I've configured it to connect to the Docker host on the second machine, so now I can run docker commands on my laptop and they will be executed on the second machine.
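For reference, a context pointed at a remote Docker host over SSH is created roughly like this (the context name and hostname are placeholders):

docker context create mac-mini --docker "host=ssh://user@mac-mini.local"
docker context use mac-mini
# from now on, plain docker commands run against the remote daemon
docker ps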
But I'm still missing the last part of the puzzle - how can I configure Docker and Docksal to sync my code from the laptop to the Docker image on another machine? Is it possible at all?
Also, I found Mutagen.io, but it is not supported in Docksal and it's not clear how to combine everything together.
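For what it's worth, Mutagen can sync a local directory into a container running on a remote Docker endpoint, independently of Docksal; the sketch below assumes the remote setup from above and uses placeholder container and path names, so treat it as a starting point rather than a supported recipe:

# point Mutagen at the remote daemon the same way the Docker CLI is
export DOCKER_HOST=ssh://user@mac-mini.local
# two-way sync between the local code and a path inside the running container
mutagen sync create --name=drupal-code ./project docker://<container-name>/var/www
# check sync status
mutagen sync list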

Very slow network performance of Docker containers with host's network

I'm having a problem with sluggish network performance between Docker containers and the host's network. I asked this question on the Docker forum but have received no answers so far.
Problem
Set-up: two Macs on the same local network; the first runs an MQTT broker (mosquitto); the second runs Docker for Mac. Two C++ programs run on the second Mac and exchange data multiple times through the MQTT broker (on the first Mac), using the Paho MQTT C library.
Native run: when I ran the two C++ programs natively, the network performance was excellent as expected. The programs were built with XCode 7.3.
Docker runs: when I ran either of the C++ programs, or both of them, in Docker, the network performance dropped dramatically, roughly 30 times slower than the native run. The Docker image is based on ubuntu:latest, and the programs were built by gcc (Ubuntu 5.4.0-6ubuntu1~16.04.1) 5.4.0 20160609.
I tried to use the host network (--network="host" in Docker run) but it didn't help. I also tried to run the MQTT broker on the second Mac (so that the broker and the containers ran on the same host); the problem persisted. The problem existed on both my work LAN and my home network.
In theory, it could have been that the C++ programs were generally slow in Docker containers. But I doubt this was the case because in my experience, the general performance of C++ code in Docker is about as fast as in the native environment.
Question
What could be the cause of this problem? Are there any settings in Docker that can solve this issue?
Your problem sounds very similar to this open issue on the Docker for Mac repo. Unfortunately, there doesn't seem to be a known solution, but the discussion in there may be useful. My personal guess at the moment is that the bug lives near the hyperkit virtualization being used on Docker for Mac specifically.
In my case, I was oddly able to bypass this issue by using a different physical router, but I have no idea why it worked. Sadly that's not really a 'solution' though.
I hate that this isn't a great answer, but I wanted to at least share the discussion in the open issue. Good luck and keep us posted.
I suspect the default allocation of memory and CPU for the containers might not be optimal for the kind of network performance you are trying to achieve.
Investigate the utilization of resources within the containers using standard tools like top, htop, strace, etc. Or you can use the docker stats command while these instances are under peak load:
$ docker stats node1 node2
CONTAINER   CPU %   MEM USAGE/LIMIT   MEM %   NET I/O
node1       0.07%   796 KB/64 MB      1.21%   788 B/648 B
node2       0.07%   2.746 MB/64 MB    4.29%   1.266 KB/648 B
Then you might want to modify various resource allocation parameters available with docker run.
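For example (the values here are purely illustrative and my-mqtt-client is a placeholder image name), the limits can be raised explicitly at run time:

docker run --memory=1g --memory-swap=2g --cpus=2 my-mqtt-client

--memory and --memory-swap control how much RAM the container may use, and --cpus caps the share of CPU time it gets.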
EDIT: Another thing to check would be the MTU of the actual system interface and the setting on the Docker interfaces. Use --mtu=BYTES to set the MTU on the Docker side so it matches your system interface's MTU value.
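A rough way to compare the two values (interface names will differ on your machine) is sketched below:

# on the Mac host, check the physical interface's MTU
ifconfig en0 | grep mtu
# inside a throwaway container, check the container side
docker run --rm alpine cat /sys/class/net/eth0/mtu

If they differ, the MTU can be set at the daemon level; on Docker for Mac that means the daemon configuration (e.g. "mtu": 1400 in daemon.json) rather than a dockerd command-line flag.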

Better performance from Windows VirtualBox VMs on Ubuntu or from Ubuntu VirtualBox VMs on Windows?

I am planning to develop an automated test solution with multiple Windows machines and multiple Ubuntu machines that perform related/interdependent tasks. To start the project, I'd like to have one or two Windows machines (virtual) and a few Ubuntu machines (virtual) running on a single desktop. It seems likely that I will be pushing a single desktop to the limit here, so I am trying to guess whether I will have better luck if my host OS is Ubuntu or Windows 7. I would be able to use the host OS as one of the machines in my environment. The desktop is some sort of above-average Dell, but nothing really impressive.
Does anyone have any insight here? I've worked mostly with VMWare in the past and my host was windows along with my VMs.
Note: VirtualBox is a type-2 hypervisor (it runs on the host OS, not on the hardware like a type-1 hypervisor) and tends to offer weaker performance than, for example, Hyper-V, ESX or XEN (type-1 hypervisors).
Therefore, if performance is a considerable concern, you may squeeze more juice out of a Win8 or Windows Server 2012 box running, for example, Hyper-V. Further reading on this here and here (YMMV).
How your environment will run when hosted by a Windows vs. a Linux box is, frankly, impossible to tell. I suggest you build your VMs, try dual-booting your machine into Windows and Linux, and measure your scenario. Be sure to have enough RAM in the host to allocate enough working RAM to each VM, and enough IO throughput that your host doesn't end up dragging the performance of all VMs down if one VM saturates the machine's IO.
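If you go the VirtualBox route, the resource split is easy to script, which helps when you're re-measuring different configurations; this is a sketch with a hypothetical VM name and example values:

# give the VM 4 GB of RAM and 2 virtual CPUs
VBoxManage modifyvm "test-node-1" --memory 4096 --cpus 2
# make sure hardware virtualization is enabled for the VM
VBoxManage modifyvm "test-node-1" --hwvirtex on
# boot it without a GUI window
VBoxManage startvm "test-node-1" --type headless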
One last note of caution though: don't completely trust fine-grained perf results measured on VMs - even the best hypervisors cannot truly replicate the performance characteristics of code running on bare metal. Treat your measurements as a guideline only.
Measure, then measure again. Measure again just to be sure ... and THEN tweak your config and re-measure, measure, measure ;)
My $0.02:
If it's VirtualBox you are using, I would go with Ubuntu for certain. I have an AMD Phenom 945 with 16 GB of RAM running 12.04 LTS 64-bit. I can usually have two VMs running Windows and/or Ubuntu guests and never consume more than 7 GB of RAM. If you're running them in a testing solution you could expect to see maybe 12-13 GB of RAM used, but the CPU might be your problem. My AMD Phenom runs great, but would be maxed out for sure. I use VMWare at work and on my laptop and would recommend it if you were running a Windows host. I also have VMWare on my Ubuntu host, but it just does not run as well as it does on Windows, at least for me.

Recover Windows 7

I've just started with Ubuntu and have run into my first serious problem. I'm looking for help.
I have an HP Pavilion dv6 (i7). I had Windows 7 installed and decided to also install Ubuntu from a USB drive.
My first attempt was to install Ubuntu 11.10 following the instructions on the official Ubuntu website. When booting from the pendrive, my PC got stuck at the main Ubuntu menu; after searching, I found it could be due to a problem with my AMD Radeon graphics card (or not), so I decided to try something else.
Then I used Ubuntu 10.04. This time I could get from the start menu into the Ubuntu live session. There I decided to install it, because I liked it and I need to develop for Google TV (which is not possible on Windows).
And I failed at the partitioning step. I tried to follow the instructions on this page:
http://hadesbego.blogspot.com/2010/08/instalando-linux-en-hp-pavilion-dv6.html
but some things were a bit different, so I improvised. I shrank the Windows partition from about 700 GB to 600 GB, leaving 100 GB free to install Linux there. The mistake was setting it to ext3 (it was NTFS): I thought only the new 100 GB partition would be formatted as ext3 and the Windows partition would stay NTFS, but that's not what happened.
In short, I can no longer boot Windows, and on top of that I can't install Ubuntu on the free 100 GB.
Can anyone help? Is there any easy way to convert the Windows partition back to NTFS without losing data?
Thank you very much.
You should be able to hit F11 when the machine is booting up and go to the HP recovery application. This should let you reset to factory default.
You should definitely be able to install Ubuntu on the new 100GB partition as well. Just make sure you choose the right partition to install it on.
You will need to recover using recovery CDs/DVDs. You must have been using the GParted utility in the Linux installer to "re-partition" your drive, and you scrubbed some boot files.
If you successfully recover using the recovery media, you can use Disk Management in Windows 7 to shrink or extend your volume. In your case you would shrink it by 100 GB, and then, when installing Linux, GParted will see that available 100 GB and install there while Windows will still run.
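If you prefer the command line over the Disk Management GUI, the same shrink can be done with diskpart; the volume number below is an example, so pick whichever one list volume shows as your Windows partition:

diskpart
DISKPART> list volume
DISKPART> select volume 1
DISKPART> shrink desired=102400

shrink desired=102400 frees roughly 100 GB at the end of the selected volume, which the Ubuntu installer can then use.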
Also, you should probably be using the ext4 filesystem, not ext3; you would only want ext3 for compatibility reasons.

Windows Virtual PC Development Setup?

After having a dev PC's hard drive get corrupted, I'm considering making my development environment fully Virtual PC based.
The core items would be:
- XP Pro 32
- IIS
- VS2003
- VS2008
- SQL Server 2005
- Office 2003
Primary source would reside on a server in SVN, with only a local copy on the VPC.
This would be for Windows based web and desktop development.
Assuming that the host machine has decent performance and provides hardware virtualization, are there any known gotchas with such a setup, i.e. the main pros and cons? Any performance issues or other issues that make this a good or bad idea?
I'd like to go this route so I can keep a full backup VPC that can be put on a new PC if one fails and is replaced, or copied to a laptop as needed for offsite work, etc. With the new Virtual PC features of Windows 7, this seems like it may be even better going forward too.
Would like to get some feedback on this before we go down that road...
I wouldn't recommend Virtual PC because the performance is pretty disappointing compared to VMWare.
I've used a virtual development machine inside VMWare Workstation and VMWare Fusion on Mac for quite a while, and it works very well. It feels as if you're running on a dedicated machine.
My recommendations are:
Use a 64-bit OS as your host OS (Vista x64, Windows 7 64-bit, Mac OS X Leopard)
Have at least 6GB of RAM on your physical machine
Allocate 3GB of RAM to your VM for 32-bit, or more for a 64-bit guest OS (see the sketch after this list)
Pre-allocate the diskspace for your guest OS (no auto-grow)
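For the RAM setting above, the guest's memory can also be set directly in the VM's .vmx file; a minimal excerpt follows (the filename and values are just examples):

# my-dev-vm.vmx (excerpt)
# RAM allocated to the guest, in MB
memsize = "3072"
# number of virtual CPUs
numvcpus = "2"

Disk pre-allocation, by contrast, is chosen when the virtual disk is created, not in the .vmx file.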
Another advantage is that you can take your VM from a Windows-based VMWare Workstation to a Mac-based VMWare Fusion (and the other way around) without any problems.
I have been running multiple virtual development environments in MS Virtual PC and VirtualBox for two years now. I am doing mostly ASP.NET applications; some of the solutions are relatively large and use large databases, which I also run inside the VM.
My observations based on this:
It is a good idea for exactly the reasons you mention and it works fine. Go for it!
768 MB of RAM for the VM is enough, but more is better.
Have a Multi-core CPU.
Install the virtual machine additions for the guest OS. (This is basically like installing the proper drivers for your "virtual" hardware, and seems to be more important for performance than having hardware virtualisation support).
If possible, have the VM disk image on a separate physical disk from the host OS.
Use VirtualBox. It's free, and being developed rapidly. It might already be the best.
If you can satisfy the above, performance is no issue. Multiple Visual Studio instances, IIS, SQL Server, and Office all work just fine.
Running multiple copies of the same guest OS when it is a member of a domain/AD is tricky. If you need to do this you should read up on the sysprep.exe tool. Basically you can't just make a copy of the virtual disk, you need to take some special precautions.
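As a rough sketch of those precautions (the path and switches below are the Vista/Windows 7 style; older guests like XP use a separately installed sysprep):

REM run inside the guest OS before copying the virtual disk
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown

After that, each copy of the disk boots into out-of-box setup with a fresh machine identity and can join the domain as a distinct computer.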
Virtual PC is very convenient and it was what I used for starters, but I have to say that VirtualBox seems to have overtaken it now. It was a bit rough in the beginning, but the last few versions have really gotten there.
VirtualBox is fully free, and it has better features than VPC 2007 - the main one that made me switch was the support for high resolutions. VirtualBox runs fullscreen on my 1920x1080 display no problem.
It can also run Virtual PC images, so switching was just a matter of installing VirtualBox and adding my existing Virtual PC disks to it.
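That attach step can be done from the GUI or scripted; a sketch of the command-line version, with hypothetical VM and disk names:

# register an empty VM and hook the old Virtual PC disk (.vhd) to it
VBoxManage createvm --name "dev-xp" --ostype WindowsXP --register
VBoxManage storagectl "dev-xp" --name "IDE" --add ide
VBoxManage storageattach "dev-xp" --storagectl "IDE" --port 0 --device 0 --type hdd --medium old-vpc-disk.vhd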
An added benefit is that I can run the virtual images just as easily on my new Mac as on the old PC.
The commercial options are not (anymore) worth what they cost, IMHO.
One thing you might have to consider is the lack of support for multiple monitors within the VM. I really like using multiple monitors, one for my source, the rest for all the rest. As far as I know, this is not possible in Virtual PC. Aside from that I can't think of anything that should hold you back, it's something I have been considering as well.
Regards,
Sebastiaan
VirtualBox from Sun is also a good choice. I am writing this from a Vista laptop with a virtualised Ubuntu dev environment.
One thing that VirtualBox is great for is its seamless mode, in which the guest OS application windows are presented as just windows on the host system, with a single common background (you get two status bars - one for Windows and one for Linux).
The Z-orders don't interleave (i.e. all guest windows appear on the same Z plane in the host window system, with their own Z-order within that plane), which can make it a bit odd, but you get used to it.
It is particularly useful if you need to build across many environments. VirtualBox is getting better and I now have an OpenSolaris environment and a FreeBSD one as well.
It is free as in beer which can be handy.
I actually run three development environments (and many test environments) as Windows guest virtual machines under an Ubuntu host - it's very good for keeping things separated and for being able to restore test environments to a known point. It's also handy since the backup is a simple directory copy on the host, and you don't have to worry about recovering settings or re-installing applications, etc.
I prefer VMWare over Virtual PC for both performance and usability (keep in mind that's my opinion). You don't need the VMWare Workstation product to create a VM - check out EasyVMX here for a way to create easy VMs.
The one thing you'll miss, though, is VMWare Tools, which only comes with the Workstation product, not the Player. But VMWare has this for download here - I'm unsure of the legality of this; even though it's an official download from VMWare, you may only be able to use it if you have the paid product.
I actually have a license for Workstation, it's just an earlier version and I prefer the latest Player.
