How much memory could vm use - linux-kernel

I read the document Understanding Virtual Memory and it said one method for changing tunable parameters in the Linux VM was the command:
sysctl -w vm.max_map_count=65535
I want to know what the number 65535 means and how much memory the VM could use with this setting.

From the Linux kernel documentation:
max_map_count:
This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.
While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.
The default value is 65536.
Bottom line: this setting limits the number of discrete mapped memory areas - on its own it imposes no limit on the size of those areas or on the memory that is usable by a process.
And yes, this:
sysctl -w vm.max_map_count=65535
is just a nicer way of writing this:
echo 65535 > /proc/sys/vm/max_map_count
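To see how close a process is to this limit, you can count its current map areas (each line in /proc/&lt;pid&gt;/maps is one mapping) and compare that with the configured value; for example, for the current shell:
wc -l < /proc/$$/maps
cat /proc/sys/vm/max_map_count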

echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
This does not work if you are not root, because the shell performs the >> redirection as your unprivileged user and cannot write to /etc/sysctl.conf directly.
Run the below command.
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf
But first check whether vm.max_map_count already exists in the file. You can do that using
grep vm.max_map_count /etc/sysctl.conf
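For example, a small sketch that only appends the line when it is not already present (assuming you have sudo rights and the default sysctl.conf location):
if grep -q 'vm.max_map_count' /etc/sysctl.conf; then
  echo "vm.max_map_count is already set; edit the existing line instead"
else
  echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
fi
sudo sysctl -p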

Related

Instance of Google Compute Engine freezes trying to upload files on Google Cloud Storage

I have written this shell script that downloads archives from a URL list, decompresses them, and finally moves them into a Cloud Storage bucket.
#!/bin/bash
# Download each archive in the list, decompress it, and upload it to the bucket,
# skipping files that are already present in gs://rdfa.
for iurl in $(cat ./html-rdfa.list); do
    filename=$(basename "$iurl")
    file="${filename%.*}"
    if gsutil ls "gs://rdfa/$file"; then
        echo "yes"
    else
        wget "$iurl"
        gunzip "$filename"
        gsutil cp -n "$file" gs://rdfa
        rm "$file"
        sleep 2
    fi
done
html-rdfa.list contains the URL list. The instance is created using the Debian 7 image provided by Google.
The script runs correctly for the first 5 or 6 files, but then the instance freezes and I have to delete it. Neither the RAM nor the disk of the instance is full when it freezes.
I think the problem is caused by the gsutil cp command, but it is strange that the CPU load is practically 0 and the RAM is free, yet it is impossible to use the instance without restarting it.
Are you writing the temporary files to the default 10GB root disk? If so, you may be running into the Persistent Disk throughput caps. To see if this is the case, create a new Persistent Disk, then mount it as a data disk and use that disk for the temporary files. Consider starting with ~200GB disk and see if that is enough throughput for your workload. Also, see the docs on Persistent Disk performance.
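A rough sketch of that suggestion using the gcloud CLI (the disk name, instance name, zone and device path below are placeholder assumptions; the actual device path on your instance may differ):
gcloud compute disks create scratch-disk --size=200GB --zone=us-central1-a
gcloud compute instances attach-disk my-instance --disk=scratch-disk --zone=us-central1-a
# on the instance: format and mount the new disk (device path is an assumption)
sudo mkfs.ext4 -F /dev/disk/by-id/google-scratch-disk
sudo mkdir -p /mnt/scratch
sudo mount /dev/disk/by-id/google-scratch-disk /mnt/scratch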

Setting up Redis on Webfaction

What are the steps required to set up Redis database on Webfaction shared hosting account?
Introduction
Because of the special environment restrictions of Webfaction servers, the installation instructions are not as straightforward as they otherwise would be. Nevertheless, at the end you will have a fully functioning Redis server that stays up even after a reboot. I personally installed Redis by the following procedure about half a year ago and it has been running flawlessly since. A little word of warning though: half a year is not a long time, especially because the server has not been under heavy use.
The instructions consist of five parts: Installation, Testing, Starting the Server, Managing the Server and Keeping the Server Running.
Installation
Login to your Webfaction shell
ssh foouser@foouser.webfactional.com
Download latest Redis from Redis download site.
> mkdir -p ~/src/
> cd ~/src/
> wget http://download.redis.io/releases/redis-2.6.16.tar.gz
> tar -xzf redis-2.6.16.tar.gz
> cd redis-2.6.16/
Before running make, check whether your server is 32- or 64-bit Linux. The installation script does not handle 32-bit environments well, at least on Webfaction's CentOS 5 machines. The command to check this is uname -m: if Linux is 32-bit the result will be i686, if 64-bit then x86_64. See this answer for details.
> uname -m
i686
If your server is 64-bit (x86_64), then simply run make.
> make
But if your server is 32-bit (i686), then you must do a little extra work. There is a command make 32bit, but it produces an error. Edit a line in the Makefile to make make 32bit work.
> nano ~/src/redis-2.6.16/src/Makefile
Change the line 214 from this
$(MAKE) CFLAGS="-m32" LDFLAGS="-m32"
to this
$(MAKE) CFLAGS="-m32 -march=i686" LDFLAGS="-m32 -march=i686"
and save. Then run make with the 32bit target.
> cd ~/src/redis-2.6.16/ ## Note the dir, no trailing src/
> make 32bit
The executables are created in the directory ~/src/redis-2.6.16/src/. They include redis-cli, redis-server, redis-benchmark and redis-sentinel.
Testing (optional)
As the output of the installation suggests, it would be nice to ensure that everything works as expected by running tests.
Hint: To run 'make test' is a good idea ;)
Unfortunately the tests require Tcl 8.6.0 to be installed, which is not the default at least on the machine web223. So you must install it first, from source. See the Tcl/Tk installation notes and compiling notes.
> cd ~/src/
> wget http://prdownloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz
> tar -xzf tcl8.6.0-src.tar.gz
> cd tcl8.6.0-src/unix/
> ./configure --prefix=$HOME
> make
> make test # Optional, see notes below
> make install
Testing Tcl with make test will take time and will also fail due to WebFaction's environment restrictions. I suggest you skip this.
Now that we have Tcl installed we can run the Redis tests. The tests take a long time and also temporarily use quite a large amount of memory.
> cd ~/src/redis-2.6.16/
> make test
After the tests you are ready to continue.
Starting the Server
First, create a custom application via Webfaction Control Panel (Custom app (listening on port)). Name it for example fooredis. Note that you do not have to create a domain or website for the app if Redis is used only locally i.e. from the same host.
Second, make a note of the port number that was assigned to the app. In this example it is 23015.
Copy the previously compiled executables to the app's directory. You may choose to copy all or only the ones you need.
> cd ~/webapps/fooredis/
> cp ~/src/redis-2.6.16/src/redis-server .
> cp ~/src/redis-2.6.16/src/redis-cli .
Copy also the sample configuration file. You will soon modify that.
> cp ~/src/redis-2.6.16/redis.conf .
Now Redis is already runnable. There are a couple of problems, though. First, the default Redis port 6379 might already be in use. Second, even if the port were free, you could start the server, but it would stop running the moment you exit the shell. For the first problem redis.conf must be edited, and for the second you need Redis to run as a daemon, which is also solved by editing redis.conf.
Redis is able to run itself in daemon mode. For that you need to set up a place where the daemon stores its process id, the pidfile. Usually pidfiles are stored in /var/run/, but because of the environment restrictions you must select a place for it in your home directory. For a reason explained later in the part Managing the Server, a good choice is to put the pidfile under the same directory as the executables. You do not have to create the file yourself; Redis creates it for you automatically.
Now open the redis.conf for editing.
> cd ~/webapps/fooredis/
> nano redis.conf
Change the configurations in the following manner.
daemonize no -> daemonize yes
pidfile /var/run/redis.pid -> pidfile /home/foouser/webapps/fooredis/redis.pid
port 6379 -> port 23015
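If you prefer to script these edits, a minimal sed sketch (assuming the default lines match the left-hand text above; keep a backup of redis.conf just in case):
> cd ~/webapps/fooredis/
> sed -i 's/^daemonize no/daemonize yes/' redis.conf
> sed -i 's|^pidfile /var/run/redis.pid|pidfile /home/foouser/webapps/fooredis/redis.pid|' redis.conf
> sed -i 's/^port 6379/port 23015/' redis.conf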
Now, finally, start the Redis server. Specify the conf file so Redis listens on the right port and runs as a daemon.
> cd ~/webapps/fooredis/
> ./redis-server redis.conf
>
See it running.
> cd ~/webapps/fooredis/
> ./redis-cli -p 23015
redis 127.0.0.1:23015> SET myfeeling Phew.
OK
redis 127.0.0.1:23015> GET myfeeling
"Phew."
redis 127.0.0.1:23015> (ctrl-d)
>
Stop the server if you want to.
> ps -u $USER -o pid,command | grep redis
718 grep redis
10735 ./redis-server redis.conf
> kill 10735
or
> cat redis.pid | xargs kill
Managing the Server
For ease of use, and as preparatory work for the next part, make a script that helps you open the client and start, restart and stop the server. An easy solution is to write a Makefile. When writing a Makefile, remember to use tabs instead of spaces.
> cd ~/webapps/fooredis/
> nano Makefile
# Redis Makefile
client cli:
	./redis-cli -p 23015
start restart:
	./redis-server redis.conf
stop:
	cat redis.pid | xargs kill
The rules are quite self-explanatory. The special thing about the second rule is that, while in daemon mode, calling ./redis-server does not create a new process if one is already running.
The third rule has some quiet wisdom in it. If redis.pid were not stored under the fooredis directory but, for example, in /var/run/redis.pid, then it would not be so easy to stop the server. This is especially true if you run multiple Redis instances concurrently.
To execute a rule:
> make start
Keeping the Server Running
You now have an instance of Redis running in daemon mode which allows you to quit the shell without stopping it. This is still not enough. What if the process crashes? What if the server machine is rebooted? To cover these you have to create two cronjobs.
> export EDITOR=nano
> crontab -e
Add the following two lines and save.
*/5 * * * * make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start
@reboot make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start
The first one ensures every five minutes that fooredis is running. As said above, this does not start a new process if one is already running. The second one ensures that fooredis is started immediately after the server machine reboots, long before the first rule kicks in.
Some more delicate methods could be used for this, for example forever. See also this Webfaction Community thread for more about the topic.
Conclusion
Now you have it. Lots of things done, but maybe more will come. Things you may like to do in the future, which were not covered here, include the following.
Setting a password, preventing other users from flushing your databases. (See redis.conf)
Limiting the memory usage (See redis.conf)
Logging the usage and errors (See redis.conf)
Backing up the data once in a while.
Any ideas, comments or corrections?
To summarize Akseli's excellent answer:
assume your user is named "magic_r_user"
cd ~
wget "http://download.redis.io/releases/redis-3.0.0.tar.gz"
tar -xzf redis-3.0.0.tar.gz
mv redis-3.0.0 redis
cd redis
make
make test
create a custom app "listening on port" through the Webfaction management website
assume we named it magic_r_app
assume it was assigned port 18932
cp ~/redis/redis.conf ~/webapps/magic_r_app/
vi ~/webapps/magic_r_app/redis.conf
daemonize yes
pidfile ~/webapps/magic_r_app/redis.pid
port 18932
test it
~/redis/src/redis-server ~/webapps/magic_r_app/redis.conf
~/redis/src/redis-cli -p 18932
ctrl-d
cat ~/webapps/magic_r_app/redis.pid | xargs kill
crontab -e
*/1 * * * * /home/magic_r_user/redis/src/redis-server /home/magic_r_user/webapps/magic_r_app/redis.conf &>> /home/magic_r_user/logs/user/cron.log
don't forget to set a password!
FYI, if you are installing redis 2.8.8+ you may get an error, undefined reference to __sync_add_and_fetch_4 when compiling. See http://www.eschrade.com/page/undefined-reference-to-__sync_add_and_fetch_4/ for information.
I've pasted the relevant portion from that page below in case the page ever goes offline. Essentially you need to export the CFLAGS variable and restart the build process.
[root@devvm1 redis-2.6.7]# export CFLAGS=-march=i686
[root@devvm1 redis-2.6.7]# make distclean
[root@devvm1 redis-2.6.7]# make

How to increase ulimit on Amazon EC2 instance?

After SSH'ing into an EC2 instance running the Amazon Linux AMI, I tried:
ulimit -n 20000
...and got the following error:
-bash: ulimit: open files: cannot modify limit: Operation not permitted
However, the shell allows me to decrease this number, for the current session only.
Is there any way to increase the ulimit on an EC2 instance (permanently)?
In fact, changing values through the ulimit command only applies to the current shell session. If you want to permanently set a new limit, you must edit the /etc/security/limits.conf file and set your hard and soft limits. Here's an example:
# <domain> <type> <item> <value>
* soft nofile 20000
* hard nofile 20000
Save the file, log out, log in again, and test the configuration with the ulimit -n command. Hope it helps.
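A quick way to confirm the new soft and hard limits after logging back in:
ulimit -Sn   # soft limit for open files
ulimit -Hn   # hard limit for open files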
P.S. 1: Keep the following in mind:
Soft limit: value that the kernel enforces for the corresponding resource.
Hard limit: works as a ceiling for the soft limit.
P.S. 2: Additional files in /etc/security/limits.d/ might affect what is configured in limits.conf.
Thank you for the answer. For me, just updating /etc/security/limits.conf wasn't enough. Only the 'open files' limit (ulimit -n) was getting updated; nproc was not. After updating /etc/security/limits.d/whateverfile, nproc (ulimit -u) also got updated.
Steps:
sudo vi /etc/security/limits.d/whateverfile
Update the limits set for nproc/nofile (a sketch of such a file is shown after these steps)
sudo vi /etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Reboot the machine: sudo reboot
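For reference, a minimal sketch of what such a drop-in file might contain (the file name 90-custom.conf is an arbitrary example; the values mirror the ones above):
# /etc/security/limits.d/90-custom.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535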
P.S. I was not able to add it as a comment, so had to post as an answer.
I don't have enough rep points to comment...sorry for the fresh reply, but maybe this will keep someone from wasting an hour.
Viccari's answer finally solved this headache for me. Every other source tells you to edit the limits.conf file, and if that doesn't work, to add
session required pam_limits.so
to the /etc/pam.d/common-session file
DO NOT DO THIS!
I'm running an Ubuntu 18.04.5 EC2 instance, and this locked me out of SSH entirely. I could log in, but as soon as it was about to drop me into a prompt, it dropped my connection (I even saw all the welcome messages and stuff). Verbose showed this as the last error:
fd 1 is not O_NONBLOCK
and I couldn't find an answer to what that meant. So, after shutting down the instance, waiting about an hour to snapshot the volume, and then mounting it to another running instance, I removed the edit to the common-session file and bam, SSH login worked again.
The fix that worked for me was looking for files in the /etc/security/limits.d/ folder, and editing those.
(and no, I did not need to reboot to get the new limits, just log out and back in)
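To locate which file currently sets these limits, a one-liner like this can help:
grep -rn 'nofile\|nproc' /etc/security/limits.conf /etc/security/limits.d/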

EC2 Can't resize volume after increasing size

I have followed the steps for resizing an EC2 volume
Stopped the instance
Took a snapshot of the current volume
Created a new volume out of the previous snapshot with a bigger size in the same region
Detached the old volume from the instance
Attached the new volume to the instance at the same mount point
The old volume was 5GB and the one I created is 100GB.
Now, when I restart the instance and run df -h, I still see this:
Filesystem Size Used Avail Use% Mounted on
/dev/xvde1 4.7G 3.5G 1021M 78% /
tmpfs 296M 0 296M 0% /dev/shm
This is what I get when running
sudo resize2fs /dev/xvde1
The filesystem is already 1247037 blocks long. Nothing to do!
If I run cat /proc/partitions I see
202 64 104857600 xvde
202 65 4988151 xvde1
202 66 249007 xvde2
From what I understand, if I have followed the right steps, xvde should have the same data as xvde1, but I don't know how to use it.
How can I use the new volume, or unmount xvde1 and mount xvde instead?
I cannot understand what I am doing wrong.
I also tried sudo xfs_growfs /dev/xvde1
xfs_growfs: /dev/xvde1 is not a mounted XFS filesystem
By the way, this is a Linux box with CentOS 6.2 x86_64.
There's no need to stop the instance and detach the EBS volume to resize it anymore!
13-Feb-2017 Amazon announced: "Amazon EBS Update – New Elastic Volumes Change Everything"
The process works even if the volume to extend is the root volume of a running instance!
Say we want to increase the boot drive of Ubuntu from 8G up to 16G "on-the-fly".
step-1) log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
step-2) ssh into the instance and resize the partition:
let's list block devices attached to our box:
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 8G 0 part /
As you can see, /dev/xvda1 is still an 8 GiB partition on a 16 GiB device and there are no other partitions on the volume.
Let's use "growpart" to resize 8G partition up to 16G:
# install "cloud-guest-utils" if it is not installed already
apt install cloud-guest-utils
# resize partition
growpart /dev/xvda 1
Let's check the result (you can see /dev/xvda1 is now 16G):
lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 16G 0 disk
└─xvda1 202:1 0 16G 0 part /
Lots of SO answers suggest using fdisk to delete and recreate partitions, which is a nasty, risky, error-prone process, especially when changing the boot drive.
step-3) resize the file system to fully use the new partition space
# Check before resizing ("Avail" shows 1.1G):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 6.3G 1.1G 86% /
# resize filesystem
resize2fs /dev/xvda1
# Check after resizing ("Avail" now shows 8.7G!-):
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 16G 6.3G 8.7G 42% /
So we have zero downtime and lots of new space to use.
Enjoy!
Update: use sudo xfs_growfs /dev/xvda1 instead of resize2fs when the filesystem is XFS.
Thank you Wilman, your commands worked correctly; a small improvement needs to be considered if we are increasing EBS volumes to larger sizes:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (i.e. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access the instance via SSH and run fdisk /dev/xvde
WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
switch off the mode (command 'c') and change display units to
sectors (command 'u')
Hit p to show current partitions
Hit d to delete current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance OR use partprobe (from the parted package) to tell the kernel about the new partition table
Log in via SSH and run resize2fs /dev/xvde1
Finally, check the new space by running df -h
Perfect comment by jperelli above.
I faced the same issue today. The AWS documentation does not clearly mention growpart. I figured it out the hard way, and indeed the two commands worked perfectly on M4.large & M4.xlarge with Ubuntu:
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
[SOLVED]
This is what had to be done:
Stop the instance
Create a snapshot from the volume
Create a new volume based on the snapshot increasing the size
Check and remember the current volume's mount point (i.e. /dev/sda1)
Detach current volume
Attach the recently created volume to the instance, setting the exact mount point
Restart the instance
Access the instance via SSH and run fdisk /dev/xvde
Hit p to show current partitions
Hit d to delete current partitions (if there is more than one, you have to delete them one at a time). NOTE: Don't worry, data is not lost
Hit n to create a new partition
Hit p to set it as primary
Hit 1 to set the first cylinder
Set the desired new space (if empty the whole space is reserved)
Hit a to make it bootable
Hit 1 and w to write changes
Reboot instance
Log in via SSH and run resize2fs /dev/xvde1
Finally, check the new space by running df -h
This is it
Good luck!
This will work for the XFS file system; just run this command:
xfs_growfs /
Log in to the AWS web console -> EBS -> right-click the volume you wish to resize -> "Modify Volume" -> change the "Size" field and click the [Modify] button
growpart /dev/xvda 1
resize2fs /dev/xvda1
This is a cut-to-the-chase version of Dmitry Shevkoplyas' answer. The AWS documentation does not show the growpart command. This works fine for the Ubuntu AMI.
sudo growpart /dev/xvda 1
sudo resize2fs /dev/xvda1
The above two commands saved me time on AWS Ubuntu EC2 instances.
Once you modify the size of your EBS volume:
List the block devices
sudo lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 10G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 10G 0 part /
Expand the partition
Suppose you want to extend the second partition mounted on /,
sudo growpart /dev/nvme0n1 2
If all the space is used up in the root volume and you are basically not able to access /tmp (i.e. you get an error message like Unable to growpart because no space left):
temporarily mount a /tmp volume: sudo mount -o size=10M,rw,nodev,nosuid -t tmpfs tmpfs /tmp
unmount after the complete resize is done: sudo umount -l /tmp
Verify the new size
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme0n1 259:2 0 20G 0 disk
|-nvme0n1p1 259:3 0 1M 0 part
`-nvme0n1p2 259:4 0 20G 0 part /
Resize the file-system
For XFS (use the mount point as argument)
sudo xfs_growfs /
For EXT4 (use the partition name as argument)
sudo resize2fs /dev/nvme0n1p2
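If you are unsure which filesystem the partition uses, here is a small sketch that picks the right tool (using the partition from the example above):
FSTYPE=$(lsblk -no FSTYPE /dev/nvme0n1p2)
if [ "$FSTYPE" = "xfs" ]; then
  sudo xfs_growfs /                # XFS is grown via the mount point
else
  sudo resize2fs /dev/nvme0n1p2    # ext2/3/4 is grown via the partition
fi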
Just in case anyone is here for GCP (Google Cloud Platform), try this:
sudo growpart /dev/sdb 1
sudo resize2fs /dev/sdb1
In case anyone ran into this issue with 100% usage and no space even to run the growpart command (because it creates a file in /tmp):
Here is a command that I found which works even while the EBS volume is in use, and even if you have no space left on your EC2 instance and you are at 100%:
/sbin/parted ---pretend-input-tty /dev/xvda resizepart 1 yes 100%
see this site here:
https://www.elastic.co/blog/autoresize-ebs-root-volume-on-aws-amis
Did you make a partition on this volume? If you did, you will need to grow the partition first.
Thanks, @Dimitry, it worked like a charm with a small change to match my file system.
source: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html#recognize-expanded-volume-linux
Then use the following command, substituting the mount point of the filesystem (XFS file systems must be mounted to resize them):
[ec2-user ~]$ sudo xfs_growfs -d /mnt
meta-data=/dev/xvdf isize=256 agcount=4, agsize=65536 blks
= sectsz=512 attr=2
data = bsize=4096 blocks=262144, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 262144 to 26214400
Note
If you receive an xfsctl failed: Cannot allocate memory error, you may need to update the Linux kernel on your instance. For more information, refer to your specific operating system documentation.
If you receive a The filesystem is already nnnnnnn blocks long. Nothing to do! error, see Expanding a Linux Partition.
The bootable flag (a) didn't work in my case (EC2, CentOS 6.5), so I had to re-create the volume from a snapshot.
After repeating all the steps EXCEPT the bootable flag, everything worked flawlessly and I was able to resize2fs afterwards.
Thank you!
I don't have enough rep to comment above, but also note, per the comments above, that you can corrupt your instance if you start at 1; if you hit 'u' after starting fdisk, before you list your partitions with 'p', it will in fact give you the correct start number so you don't corrupt your volumes. For the CentOS 6.5 AMI, as mentioned above, 2048 was correct for me.
Put a space between the device name and the partition number, e.g.:
sudo growpart /dev/xvda 1
Note that there is a space between the device name and the partition number.
To extend the partition on each volume, use the following growpart
commands. Note that there is a space between the device name and the
partition number.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/recognize-expanded-volume-linux.html
I faced a similar issue with an Ubuntu system on EC2.
First, I checked the filesystem:
lsblk
Then, after increasing the volume size from the console, I ran the commands below.
sudo growpart /dev/nvme0n1 1
This will show the change in the lsblk output.
Then I could extend the filesystem with:
sudo resize2fs /dev/nvme0n1p1
Finally, verify it with the df -h command.

setting launchctl limit maxfiles 512 unlimited results in error: Neither the hard nor soft limit for "maxfiles" can be unlimited

Screen shot of my Terminal http://d.pr/1pE5
I'm following this tutorial:
http://blog.ghostinthemachines.com/2010/01/19/mac-os-x-fork-resource-temporarily-unavailable/
And where it tells me to follow the process I follow in my screenshot:
[laptop:~ user]$ launchctl limit maxproc 512 1024
[laptop:~ user]$ launchctl limit maxfiles 512 unlimited
[laptop:~ user]$ launchctl limit
I'm trying to perform the following setup:
launchctl limit maxfiles 512 unlimited
My system (Lion) tells me what I'm doing is wrong and silly, but it's already set to unlimited... so I don't know what's going on, or why it's behaving this way.
Should I just go ahead and give it a specific value?
This comment says that unlimited is not valid for maxfiles on 10.6.
Since that does not apply to 10.7 (Lion) I don't actually know what the answer is.
edit: answer only works for 10.6. Removed bad suggestion to set both soft and hard limits to 512.
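If you do decide to give explicit numbers instead of unlimited, the syntax from the tutorial would look like the sketch below; the values are arbitrary examples, this is untested on Lion, and it will likely need sudo:
sudo launchctl limit maxfiles 65536 200000
launchctl limit maxfiles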
