vm.max_map_count problem for Docker on Windows

I am trying to run the ELK Docker image on my Windows 10 machine, as below:
C:\WINDOWS\system32> docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -p 9600:9600 -p 9700:9700 -it --memory="3g" --name elk sebp/elk
I get the error below. Can I set vm.max_map_count on the docker run command line?
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Any suggestions or hints are more than welcome!

This can be done via WSL's support for a .wslconfig file (stored in your Windows %UserProfile% folder), which can apply and persist such settings across restarts. For example:
[wsl2]
kernelCommandLine = sysctl.vm.max_map_count=262144
(Note that there is NOT a space after sysctl but a period, which is necessary for it to work, from my testing.)
After saving the file, restart WSL with wsl --shutdown. Before reopening WSL, make sure the VM is actually shut down by checking wsl -l -v, as this can sometimes take several seconds.
For more on this file, its many available settings, and even the need to wait for the shutdown, see the docs.
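For instance, a quick way to apply and verify the change (a sketch; docker-desktop is Docker Desktop's distro name and may differ on your setup):
wsl --shutdown
wsl -l -v                                      # wait until the state shows Stopped
wsl -d docker-desktop sysctl vm.max_map_count  # expect: vm.max_map_count = 262144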

I've had a similar experience running Elasticsearch, so this might help.
When you're running it in WSL2, you might want to log in to your WSL VM:
wsl -d docker-desktop (where docker-desktop is the name of the VM; you can list the available ones with wsl --list)
Once in your docker-desktop VM, do the following:
echo "vm.max_map_count = 262144" > /etc/sysctl.d/999-docker-desktop.conf
(the .conf suffix matters: sysctl only picks up files in /etc/sysctl.d/ that end in .conf), followed by:
sysctl -w vm.max_map_count=262144
You can then exit the docker host by typing exit.
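The same change can also be made non-interactively from PowerShell, a minimal sketch (again assuming the VM is named docker-desktop):
wsl -d docker-desktop sh -c "echo 'vm.max_map_count = 262144' > /etc/sysctl.d/999-docker-desktop.conf && sysctl -w vm.max_map_count=262144"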

Persistent setting via Windows PowerShell:
wsl su -
[sudo] password for root: <type your root password>
sysctl vm.max_map_count
vi /etc/sysctl.conf
(append the line: vm.max_map_count = 262144)
sysctl -p
sysctl vm.max_map_count
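If you prefer a non-interactive form, the same two steps can be run directly from PowerShell (a sketch, assuming your containers run in the default WSL distro):
wsl -u root sysctl -w vm.max_map_count=262144
wsl -u root sh -c 'echo vm.max_map_count=262144 >> /etc/sysctl.conf'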

Related

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] - Not working even after changing sysctl.conf file

I changed the /etc/sysctl.conf file and added the statement vm.max_map_count=262144, restarted my laptop, and still the same error occurs whenever I execute docker-compose -f docker-amundsen.yml up in the terminal.
Can anyone please suggest a solution?
EDIT: Solved now.
To make it persistent, edit the file:
$ sudo nano /etc/sysctl.conf
add this line:
vm.max_map_count=262144
and reload the configuration:
$ sudo sysctl -p
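To confirm the new value actually took effect, query the key directly:
$ sysctl vm.max_map_count
vm.max_map_count = 262144
Note that if Docker runs inside a VM (Docker Desktop, Docker Toolbox), the setting must be applied inside that VM rather than on the host, which may be why a host reboot alone did not help.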

How to set the JVM heap size at run time when running JMeter distributed testing in Docker

I have the below test infrastructure:
3 instances (master + 2 slaves), dockerized
Run command from the JMeter master (the default 512m is used on all 3 machines): sudo docker exec -i master /bin/bash -c "/jmeter/apache-jmeter-3.1/bin/jmeter -n -t /home/librarian_journey_Req.jmx -Djava.rmi.server.hostname=yy.yy.yy.yy -Dclient.rmi.localport=60000 -R1xx.xx.xx.xx -j jmeter.log -l result.csv"
The above command works fine and I'm getting results too. However, I want to increase the heap size to 3 GB at run time.
I tried the below command:
sudo docker exec -i master /bin/bash -c "JVM_ARGS="-Xms1024m -Xmx1024m" /jmeter/apache-jmeter-3.1/bin/jmeter -n -t /home/librarian_journey_Req.jmx -Djava.rmi.server.hostname=10.135.104.138 -Dclient.rmi.localport=60000 -R10.135.104.135,10.135.104.139 -j jmeter.log -l result.csv"
After running the above command nothing happens. Please guide me on how it can be increased.
You can override environment variables when running containers. Also, you usually don't need sudo to execute docker. So try this:
docker exec -i -e JVM_ARGS="-Xms1024m -Xmx1024m" master /bin/bash ...
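As an aside, the command in the question likely failed because the inner double quotes around JVM_ARGS terminate the outer -c string. If you want to keep the inline form, single-quote the inner assignment instead; a sketch reusing the exact flags from the question:
sudo docker exec -i master /bin/bash -c 'JVM_ARGS="-Xms1024m -Xmx1024m" /jmeter/apache-jmeter-3.1/bin/jmeter -n -t /home/librarian_journey_Req.jmx -Djava.rmi.server.hostname=10.135.104.138 -Dclient.rmi.localport=60000 -R10.135.104.135,10.135.104.139 -j jmeter.log -l result.csv'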
Thanks for all the help and guidance. I was able to set the heap size on the master and slave machines by setting an ENV variable in the JMeter Docker base image, as below. Thanks to @vins.
ENV JVM_ARGS -Xms3G -Xmx3G
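If you prefer not to bake the value into the image, the same variable can be supplied when the container is created; a sketch (the image name here is hypothetical):
docker run -d --name master -e JVM_ARGS="-Xms3G -Xmx3G" my-jmeter-image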

How to increase vm.max_map_count?

I'm trying to run Elasticsearch on an Ubuntu EC2 machine (t2.medium),
but I'm getting the message:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
How can I increase the vm.max_map_count value?
To make it persistent, you can add this line:
vm.max_map_count=262144
in your /etc/sysctl.conf and run
$ sudo sysctl -p
to reload the configuration with the new value.
I use
# sysctl -w vm.max_map_count=262144
And for the persistent configuration:
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
Note that
From version 207 and 21x, systemd only applies settings from
/etc/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf. If you had
customized /etc/sysctl.conf, you need to rename it as
/etc/sysctl.d/99-sysctl.conf. If you had e.g. /etc/sysctl.d/foo, you
need to rename it to /etc/sysctl.d/foo.conf.
See https://wiki.archlinux.org/index.php/sysctl#Configuration
So add vm.max_map_count=262144 in /etc/sysctl.d/99-sysctl.conf and then run
sudo sysctl --system
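Concretely, that amounts to (a sketch; tee is used so the file write itself runs with root privileges):
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.d/99-sysctl.conf
sudo sysctl --system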
sysctl -w vm.max_map_count=262144
If that fails with:
permission denied on key 'vm.max_map_count'
run it with sudo:
sudo sysctl -w vm.max_map_count=262144
If you are using an Ubuntu VM, navigate to the /etc folder.
Run sudo vim sysctl.conf
Add vm.max_map_count=262144 to the end of the file and save.
Finally, run sudo sysctl -w vm.max_map_count=262144 to apply the value immediately;
it will print vm.max_map_count = 262144 to confirm.
The following command worked fine on Fedora 28 (Linux 4.19 kernel); tee is used because a plain sudo echo ... >> would perform the redirection without root privileges:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.d/elasticsearchSpecifications.conf && sudo sysctl --system
I found that when adding the settings to /etc/sysctl.conf, the system actually saved the changes to /etc/sysctl.d/99-sysctl.conf, and vice versa. On many distributions (Debian and Ubuntu, for example) /etc/sysctl.d/99-sysctl.conf is a symlink to /etc/sysctl.conf, so they are the same file.
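You can check whether that is the case on your system:
ls -l /etc/sysctl.d/99-sysctl.conf
# on Debian/Ubuntu this typically prints: 99-sysctl.conf -> ../sysctl.conf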

Volume binding using docker compose on Windows

I recently upgraded my Docker Toolbox on Windows 10, and now my volume mounts no longer work. I've tried everything. Here is the current mount path:
volumes:
- C:\Users\Joey\Desktop\backend:/var/www/html
I receive an invalid bind mount error.
Use:
volumes:
- "C:/Users/Joey/Desktop/backend:/var/www/html"
Putting the whole thing in double quotes and using forward slashes worked for me.
I was on Windows 10, using Linux containers through WSL2. This answer is from Spenhouet, given here.
1. Share the NFS path using the Docker settings.
2. Execute the following command:
docker run --rm -v c:/Users:/data alpine ls /data
3. Set the path in the docker-compose file as shown below.
(The original answer illustrates steps 1 and 3 with screenshots: the Docker sharing settings and the file copied to Windows.)
I think you have to set COMPOSE_CONVERT_WINDOWS_PATHS=1, see here; a sketch of setting it follows below.
Docker Machine should do it automatically: https://github.com/docker/machine/pull/3830
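A minimal sketch of setting it before invoking compose (cmd.exe syntax; in PowerShell use $env:COMPOSE_CONVERT_WINDOWS_PATHS=1):
set COMPOSE_CONVERT_WINDOWS_PATHS=1
docker-compose up -d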
I faced the same issue (I'm using Docker Desktop).
My steps were:
1) Place your folder under drive C
2) Open "Settings" in Docker Desktop -> "Shared Drives" -> "Reset Credentials" -> select drive "C" -> "Apply"
3) Open a terminal and run (as proposed by Docker Desktop):
docker run --rm -v c:/Users:/data alpine ls /data
4) Open your docker-compose.yml and update the path under volumes:
volumes:
- /data/YOUR_USERNAME/projects/my_project/jssecacerts:/usr/lib/jvm/java-1.8-openjdk/jre/lib/security/jssecacerts/
5) Restart the Docker container
This solution worked for me, in docker-compose.yml:
volumes:
- c/Users/Cyril/django:/mydjango
(Windows 10 with WSL2 and Docker Desktop)
It seems you are using an absolute path located inside the C:\Users dir; that didn't work for me either. If you are using Docker Toolbox, see below.
Overview
Forwarding the ./ relative path in the volumes section will automatically get resolved by docker-compose to the directory containing the docker-compose.yml file (for example, if your project is in %UserProfile%/my-project, then ./:/var/www/html becomes /c/Users/my-name/my-project:/var/www/html).
The problem is that currently (as of DockerToolbox-19.03.1) only the /c/Users directory gets shared with the virtual machine (Toolbox runs Docker itself inside the VM, which means it has no access to your file system except for mounted shared directories).
Conclusion
So, basically, placing your project there (under C:\Users\YOUR_USER_NAME) should make ./ work.
But not even that worked for me, and we ended up with the _prepare.sh script below:
#!/bin/bash
VBoxManage='/c/Program Files/Oracle/VirtualBox/VBoxManage'

# Defines variables for later use.
ROOT=$(dirname "$0")
ROOT=$(cd "$ROOT"; pwd)
MACHINE=default
PROJECT_KEY=shared-${ROOT##*/}

# Prepares the machine (without calling the "docker-machine stop" command).
if [ "$(docker-machine status $MACHINE 2> /dev/null)" = 'Running' ]; then
    echo Unmounting volume: $ROOT
    eval $(docker-machine env $MACHINE)
    docker-compose down
    docker-machine ssh $MACHINE <<< '
        sudo umount "'$ROOT'";
    '
    "$VBoxManage" sharedfolder remove $MACHINE --name "$PROJECT_KEY" -transient > /dev/null 2>&1
else
    docker-machine start $MACHINE
    eval $(docker-machine env $MACHINE)
fi

set -euxo pipefail
"$VBoxManage" sharedfolder add $MACHINE --name "$PROJECT_KEY" --hostpath "$ROOT" -automount -transient
docker-machine ssh $MACHINE <<< '
    echo Mounting volume: '$ROOT';
    sudo mkdir -p "'$ROOT'";
    sudo mount -t vboxsf -o uid=1000,gid=50 "'$PROJECT_KEY'" "'$ROOT'";
'
docker-compose up -d
docker-machine ssh $MACHINE
bash
Usage:
Place a copy of it beside each project's docker-compose.yml file.
Run it each time the system is turned on (simply double-click it or its shortcut).
Done! Relative paths should now work even if your project is in another drive (far away and outside of the C:\Users dir).
Note:
With a little editing, it should work without requiring docker-compose.
Consider running docker system prune to free disk space (or simply add docker system prune --force to the above script, on a new line right after the mount command).
On Windows 10, I solved the problem by adding a trailing / at the end of both the host and the mount path, like this:
volumes:
- '/c/work/vcs/app/docker/i18n/:/usr/app/target/i18n/'
Without the trailing /, the mounted path contained some Docker system folders and symlinks.
If you're using the new Docker WSL2 backend, some drives may not be mounted in any WSL distro (and so Docker won't be able to see them either), for example D:, E:, or USB drives. See:
https://github.com/docker/for-win/issues/2151
https://superuser.com/questions/1114341/windows-10-ubuntu-bash-shell-how-do-i-mount-other-windows-drives
To rule out this problem, try running docker-compose from a WSL command line, as sketched below.
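A minimal sketch of that check (the project path is hypothetical; adjust it to wherever your compose file lives):
wsl                    # open your default WSL distro from PowerShell
cd /mnt/d/my-project   # a drive like D: appears under /mnt/d only when WSL has mounted it
docker-compose up
If cd fails because /mnt/d does not exist, the drive is not mounted in WSL, which would explain why Docker cannot see it.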
I solved it by replacing the colon and the backslashes in the Windows path with forward slashes and prefixing it with /, like this:
volumes:
- /c/Users/Joey/Desktop/backend:/var/www/html
Please note: the drive letter c must be lowercase.

Inconsistent runtime kernel parameters in Docker container and on host

My host runs Ubuntu 14.04.2 LTS, and I'm using the latest CentOS base image to create a Docker image of IBM InfoSphere BigInsights, in order to push it to the Bluemix Container Cloud.
I've solved nearly everything, but I'm stuck setting runtime kernel parameters using sysctl, because they have the wrong value and the installer complains:
sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000
Of course it is not possible to set them inside the Docker container; I get the following error:
sysctl -w net.ipv4.ip_local_port_range="1024 64000"
sysctl: setting key "net.ipv4.ip_local_port_range": Read-only file system
So I've set the parameters on the host system:
sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"
net.ipv4.ip_local_port_range = 1024 64000
sudo sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 1024 64000
I've even rebuilt the whole image and re-created the container, but inside the Docker container I still get:
sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000
Any ideas?
You need to reload sysctl. Give one of the following commands a try (which one depends on your OS):
sudo /etc/rc.d/sysctl reload
or
sudo sysctl -p /etc/sysctl.conf
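Depending on your Docker version, there may also be a cleaner option: since Docker 1.12, docker run accepts a --sysctl flag for namespaced kernel parameters, and net.ipv4.ip_local_port_range is namespaced per container. A minimal sketch:
docker run --rm --sysctl net.ipv4.ip_local_port_range="1024 64000" centos sysctl net.ipv4.ip_local_port_range
# should print: net.ipv4.ip_local_port_range = 1024 64000
This sets the value for that container only, without touching the host or requiring a privileged container.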
