How to increase vm.max_map_count? - elasticsearch

I'm trying to run Elasticsearch on an Ubuntu EC2 machine (t2.medium),
but I'm getting the message:
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
How can I increase the vm.max_map_count value?

To make it persistent, you can add this line:
vm.max_map_count=262144
in your /etc/sysctl.conf and run
$ sudo sysctl -p
to reload the configuration with the new value.
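To check the value currently in effect (no root needed), a quick sketch:

```shell
# Read the live value (sysctl vm.max_map_count shows the same thing)
current=$(cat /proc/sys/vm/max_map_count)
echo "vm.max_map_count is ${current}"
# Elasticsearch needs at least 262144
[ "${current}" -ge 262144 ] && echo "OK" || echo "too low, raise to 262144"
```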

I use
# sysctl -w vm.max_map_count=262144
and for the persistent configuration (note the root prompt; from a non-root shell, use echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf instead, since the >> redirection would run without root privileges):
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf

Note that
from systemd version 207 onward, systemd only applies settings from
/etc/sysctl.d/*.conf and /usr/lib/sysctl.d/*.conf. If you had
customized /etc/sysctl.conf, you need to rename it to
/etc/sysctl.d/99-sysctl.conf. If you had e.g. /etc/sysctl.d/foo, you
need to rename it to /etc/sysctl.d/foo.conf.
See https://wiki.archlinux.org/index.php/sysctl#Configuration
So add vm.max_map_count=262144 to /etc/sysctl.d/99-sysctl.conf and then run
sudo sysctl --system
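A sketch of the drop-in approach (the file name 99-elasticsearch.conf is an arbitrary example; here it is written to a scratch directory so the commands can run unprivileged, whereas on a real system you would write to /etc/sysctl.d and prefix the commands with sudo):

```shell
# Stand-in for /etc/sysctl.d so this can run without root; on a real
# system write to /etc/sysctl.d and prefix the commands with sudo.
confdir=$(mktemp -d)
echo 'vm.max_map_count=262144' | tee "${confdir}/99-elasticsearch.conf"
# On the real system, load every drop-in with: sudo sysctl --system
grep 'vm.max_map_count' "${confdir}/99-elasticsearch.conf"
```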

sysctl -w vm.max_map_count=262144

If this fails with:
permission denied on key 'vm.max_map_count'
run it with sudo:
sudo sysctl -w vm.max_map_count=262144

If you are using an Ubuntu VM, edit /etc/sysctl.conf (e.g. sudo vim /etc/sysctl.conf), add vm.max_map_count=262144 at the end of the file, and save.
Then run sudo sysctl -w vm.max_map_count=262144 to apply it immediately; the command prints
vm.max_map_count = 262144

The following command worked fine on Fedora 28 (Linux 4.19 kernel). Note that a plain sudo echo "..." >> /etc/sysctl.d/... fails: the shell performs the >> redirection as the unprivileged user before sudo runs. Piping through sudo tee -a avoids that:
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.d/elasticsearchSpecifications.conf && sudo sysctl --system

I found that when I added the setting to /etc/sysctl.conf, the change also appeared in /etc/sysctl.d/99-sysctl.conf, and vice versa. On Debian/Ubuntu this is expected: /etc/sysctl.d/99-sysctl.conf ships as a symlink to /etc/sysctl.conf, so they are the same file.
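You can confirm whether the two paths are one file on your system with a small check (assuming a Debian-style layout):

```shell
# If 99-sysctl.conf is a symlink, print its target; otherwise say so
if [ -L /etc/sysctl.d/99-sysctl.conf ]; then
  readlink /etc/sysctl.d/99-sysctl.conf
else
  echo "not a symlink on this system"
fi
```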

Related

[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144] - Not working even after changing sysctl.conf file

I changed the /etc/sysctl.conf file, added the line vm.max_map_count=262144, and restarted the laptop, but the same error still occurs whenever I execute docker-compose -f docker-amundsen.yml up in the terminal.
Can anyone please suggest a solution?
EDIT: Solved now.
To make it persistent, you can add this line:
$ sudo nano /etc/sysctl.conf
vm.max_map_count=262144
$ sudo sysctl -p

vm.max_map_count problem for docker at windows

I am trying to run the ELK Docker image on my Windows 10 machine as below.
C:\WINDOWS\system32> docker run -p 5601:5601 -p 9200:9200 -p 9300:9300 -p 5044:5044 -p 9600:9600 -p 9700:9700 -it --memory="3g" --name elk sebp/elk
I got the error below. Could I set vm.max_map_count on the docker run command line?
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Any suggestion or hints are more than welcome!
This can be done via WSL's support of a .wslconfig file (stored in your Windows %userprofile% folder), which can apply and persist such setting across restarts, for example:
[wsl2]
kernelCommandLine = sysctl.vm.max_map_count=262144
(Note that it is NOT a space after sysctl but a period; from my testing, the period is necessary for it to work.)
After saving the file, restart WSL with wsl --shutdown. Before reopening WSL, make sure the VM has shut down by checking wsl -l -v; this can sometimes take several seconds.
For more on this file, its many available settings, and even that need to wait for the shutdown, see the docs.
I've had similar experience with running elastic/elastic, so this might help.
When you're running it in WSL2, you might want to log in to your WSL VM:
wsl -d docker-desktop (where docker-desktop is the name of the VM; you can list them with wsl --list)
Once in your docker-desktop, do the following:
echo "vm.max_map_count = 262144" > /etc/sysctl.d/999-docker-desktop.conf
(Note the file name must end in .conf, otherwise sysctl --system ignores it.)
followed by:
sysctl -w vm.max_map_count=262144
You can then exit the docker-host by typing exit.
Persistent setting via Windows PowerShell:
wsl su -
[sudo] password for root: <type your root password>
sysctl vm.max_map_count
vi /etc/sysctl.conf
vm.max_map_count = 262144
sysctl -p
sysctl vm.max_map_count

I want to disable IPv6 to install Hadoop 2.7.1, but it doesn't work

I am installing Hadoop 2.7.1.
The guide says:
"Disable IPv6 with the command
sudo nano /etc/sysctl.conf
and copy the following lines at the end of the file:
#disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Use the command cat /proc/sys/net/ipv6/conf/all/disable_ipv6 to check to make sure IPv6 is off:
it should say 1. If it says 0, you missed something."
After running the cat /proc/sys/net/ipv6/conf/all/disable_ipv6 command I am getting "0", which means IPv6 did not get disabled.
What am I doing wrong?
sudo apt-get install gksu
gksu gedit /etc/sysctl.conf
If you’re set on using the terminal then this’ll do it:
echo "net.ipv6.conf.all.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
echo "net.ipv6.conf.lo.disable_ipv6 = 1" | sudo tee -a /etc/sysctl.conf
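After appending those lines, the settings are not active until reloaded; a sketch to apply and verify (the reload needs root, the read-back does not):

```shell
# Reload /etc/sysctl.conf so the changes take effect without a reboot:
#   sudo sysctl -p
# Then verify each setting (readable without root; 1 means disabled):
for f in /proc/sys/net/ipv6/conf/all/disable_ipv6 \
         /proc/sys/net/ipv6/conf/default/disable_ipv6 \
         /proc/sys/net/ipv6/conf/lo/disable_ipv6; do
  if [ -r "$f" ]; then echo "$f = $(cat "$f")"; fi
done
```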
at this point you can try to run
sudo service networking restart
However, Atheros and Ubuntu seem to have a strange sort of ‘not working’ thing going on and so that command doesn’t work with my wireless driver. If the restart fails, just restart the computer and you should be good.
(if you’re terminal only : sudo shutdown -r now )
Has it worked up to here?
If you’re stout of heart, attempt the following:
su - hduser
ssh localhost
If that worked, you’ll be greeted with a message along the lines of ‘Are you sure you want to continue connecting?’ The answer you’re looking for at this point is ‘yes’.
If it hasn’t worked at this point run the following command:
cat /proc/sys/net/ipv6/conf/all/disable_ipv6
If the value returned is 0 then you’ve still not got ipv6 disabled – have a re-read of that section and see if you’ve missed anything.

Inconsistent runtime kernel parameters in DOCKER container and on host

My host runs Ubuntu 14.04.2 LTS, and I'm using the latest CentOS base image to create a Docker image of IBM InfoSphere BigInsights, in order to push it to the Bluemix Container Cloud.
I've solved nearly everything, but I'm stuck setting runtime kernel parameters with sysctl: they have the wrong value and the installer complains.
sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000
Of course it is not possible to set them inside the DOCKER container, I get the following error:
sysctl -w net.ipv4.ip_local_port_range="1024 64000"
sysctl: setting key "net.ipv4.ip_local_port_range": Read-only file system
So I've set the parameters on the host system:
sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"
net.ipv4.ip_local_port_range = 1024 64000
sudo sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 1024 64000
I've even rebuilt the whole image and re-created the container but still inside the DOCKER container I get:
sysctl -a |grep net.ipv4.ip_local_port_range
net.ipv4.ip_local_port_range = 32768 61000
Any ideas?
You need to reload sysctl. Give one of the following commands a try (which one works depends on your OS):
sudo /etc/rc.d/sysctl reload
or
sudo sysctl -p /etc/sysctl.conf
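If the container still shows the old range, note that net.ipv4.ip_local_port_range is namespaced per network namespace, so a container does not simply inherit a host change. As a side note (not part of the original answer), Docker 1.12+ lets you set namespaced sysctls per container with the --sysctl flag; a sketch, assuming a Docker version with that flag:

```shell
# Set a namespaced sysctl for a single container at run time,
# then print the value that container actually sees:
docker run --rm --sysctl net.ipv4.ip_local_port_range="1024 64000" \
  centos cat /proc/sys/net/ipv4/ip_local_port_range
```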

boot2docker startup script to mount local shared folder with host

I'm running boot2docker 1.3 on Win7.
I want to connect a shared folder.
In the VirtualBox Manager under the image properties->shared folders I've added the folder I've want and named it "c/shared". The "auto-mount" and "make permanent" boxes are checked.
When boot2docker boots, it isn't mounted though. I have to do an additional:
sudo mount -t vboxsf c/shared /c/shared
for it to show up.
Since I need that for every time I'll ever use docker, I'd like that to just run on boot, or just already be there. So I thought if there were some startup script I could add, but I can't seem to find where that would be.
Thanks
EDIT: It's yelling at me about this being a duplicate of Boot2Docker on Mac - Accessing Local Files which is a different question. I wanted to mount a folder that wasn't one of the defaults such as /User on OSX or /c/Users on windows. And I'm specifically asking for startup scripts.
/var/lib/boot2docker/bootlocal.sh probably fits your need; it is run by the init script /opt/bootscripts.sh.
bootscripts.sh also redirects its output to /var/log/bootlocal.log; see the segment below (boot2docker 1.3.1):
# Allow local HD customisation
if [ -e /var/lib/boot2docker/bootlocal.sh ]; then
/var/lib/boot2docker/bootlocal.sh > /var/log/bootlocal.log 2>&1 &
fi
One use case of mine:
I usually put the shared directory at /c/Users/larry/shared, then I add this script:
#!/bin/bash
ln -s /c/Users/larry/shared /home/docker/shared
So each time, I can access ~/shared in boot2docker the same as on the host
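Putting it together, a sketch that generates such a bootlocal.sh (written to a scratch directory here so it can run unprivileged; on the VM the target path is /var/lib/boot2docker/bootlocal.sh, and the share name and paths are examples):

```shell
# Generate a bootlocal.sh that mounts the share and links it into $HOME
target="$(mktemp -d)/bootlocal.sh"
cat > "$target" <<'EOF'
#!/bin/sh
mkdir -p /c/shared
mount -t vboxsf c/shared /c/shared
ln -sf /c/shared /home/docker/shared
EOF
chmod +x "$target"
echo "wrote $target"
```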
see FAQ.md (provided by #KCD)
If using boot2docker (Windows) you should do following:
First create shared folder for boot2docker VM:
"C:/Program Files/Oracle/VirtualBox/VBoxManage" sharedfolder add default -name some_shared_folder -hostpath /c/some/path/on/your/windows/box
#Then make this folder automount
docker-machine ssh
vi /var/lib/boot2docker/profile
Add following at the end of profile file:
sudo mkdir /windows_share
sudo mount -t vboxsf some_shared_folder /windows_share
Restart docker-machine
docker-machine restart
Verify that folder content is visible in boot2docker:
docker-machine ssh
ls -al /windows_share
Now you can mount the folder either using docker run or docker-compose.
Eg:
docker run -it --rm --volume /windows_share:/windows_share ubuntu /bin/bash
ls -al /windows_share
If changes in the profile file are lost after a VM or Windows restart, do the following:
1) Edit the file C:\Program Files\Docker Toolbox\start.sh and comment out the following line (line 44, or somewhere around that):
yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
so that it becomes:
# yes | "${DOCKER_MACHINE}" regenerate-certs "${VM}"
Thanks for your help with this. A few additional flags I needed to add in order for the new mount to be accessible by the boot2docker "docker" user:
sudo mount -t vboxsf -o umask=0022,gid=50,uid=1000 Ext-HD /Volumes/Ext-HD
With docker 1.3 you do not need to manually mount anymore. Volumes should work properly as long as the source on the host vm is in your user directory.
https://blog.docker.com/2014/10/docker-1-3-signed-images-process-injection-security-options-mac-shared-directories/
I couldn't make it work following Larry Cai's instructions. I figured I could make changes to "c:\Program Files\Boot2Docker for Windows\start.sh", adding the below:
eval "$(./boot2docker.exe shellinit 2>/dev/null | sed 's,\\,\\\\,g')"
your mount command
eval "$(./boot2docker ssh 'sudo mount -t vboxsf c/shared /c/shared')"
I also add the command to start my container here.
eval "$(docker start KDP)"
