ulimit in test kitchen - ruby

The operating system is CentOS 7. I am using Test Kitchen 1.13.2 and the default centos-7.2 Vagrant box.
I need nproc to be above a certain limit for one user. For this I modified /etc/security/limits.d/20-nproc.conf (which overrides /etc/security/limits.conf) and added:
myuser soft nproc 99999
However, after rebooting the VM created by Kitchen, logging in via kitchen login, and running ulimit -a, I see this:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1878
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1878
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I am afraid there is something in the way Kitchen connects to the VMs it generates that does not load the limits.conf configuration. Any idea how I can test this locally with Kitchen?

When you log in using kitchen login, the default user is vagrant. If you want to check the limits for user myuser, run:
sudo su -
su -l myuser
ulimit -a
It works for me ;-)
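If you want the same check without an interactive su session, something like the sketch below should also work (myuser comes from the question; the instance name default and the availability of kitchen exec in your Test Kitchen version are assumptions):
# From the host, run the check inside the instance (replace default with your instance name):
kitchen exec default -c "sudo su -l myuser -c 'ulimit -u'"
# Or, inside the VM after kitchen login:
sudo su -l myuser -c 'ulimit -u'   # should print 99999 if the limits.d entry is applied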

Related

Hadoop no space error

When I submit a Hadoop jar on HDFS, I'm facing:
Exception in thread "main" java.io.IOException : No space left on
device
Can anyone help me with this?
This is usually an issue with the limit on open files or processes on the Linux box. Try ulimit -a to view the limits. You are interested in:
open files (-n) 1024
max user processes (-u) 102400
To set the open files limit:
ulimit -n xxxx
To set the max processes limit:
ulimit -u xxxx
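Those ulimit calls only affect the current shell session, and a non-root user can only raise the soft limit up to the hard limit. A rough sketch of making the change stick across logins (the user name hadoop and the value 64000 are only illustrative) is to add limits.conf entries and log in again:
# one-off, current shell only:
ulimit -n 64000
ulimit -u 64000
# persistent: append lines like these to /etc/security/limits.conf for the job's user
#   hadoop soft nofile 64000
#   hadoop hard nofile 64000
#   hadoop soft nproc  64000
#   hadoop hard nproc  64000
# then verify from a fresh login:
su -l hadoop -c 'ulimit -n; ulimit -u'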

OS X Sierra: Increase maxfilesperproc

I need to increase the number of allowed open files per process on OS X Sierra.
Currently, when I run
ulimit -n
The response is 2048.
None of the following techniques suggested on StackOverflow and other sites are working for me:
Creating Launch Daemons
Running sudo sysctl -w kern.maxfilesperproc=10240
Adding the following lines to /etc/sysctl.conf
kern.maxfiles=20480
kern.maxfilesperproc=10240
Running ulimit -n 10240. Note that when I run ulimit -n 512 I am able to successfully decrease the allowed limit but I can't seem to increase it.
My Launch Daemon for maxfiles is below. The one for maxproc is similar.
localhost:LaunchDaemons jay$ ls -latr limit*
-rw-r--r-- 1 root wheel 540 Nov 8 11:10 limit.maxfiles.plist
-rw-r--r-- 1 root wheel 531 Nov 8 11:19 limit.maxproc.plist
localhost:LaunchDaemons jay$ cat limit.maxfiles.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Label</key>
<string>limit.maxfiles</string>
<key>ProgramArguments</key>
<array>
<string>launchctl</string>
<string>limit</string>
<string>maxfiles</string>
<string>64000</string>
<string>524288</string>
</array>
<key>RunAtLoad</key>
<true/>
<key>ServiceIPC</key>
<false/>
</dict>
</plist>
Has anyone successfully increased the number of allowed open files per process on Sierra?
I think you want this:
# Check current limit
ulimit -n
256
# Raise limit
ulimit -S -n 4096
# Check again
ulimit -n
4096
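A side note on that answer: ulimit -S only changes the soft limit, and an unprivileged shell can raise it only up to the hard limit, which you can inspect separately. A small sketch:
ulimit -H -n       # hard limit: the ceiling the soft limit can be raised to
ulimit -S -n       # current soft limit
ulimit -S -n 4096  # succeeds only if 4096 does not exceed the hard limit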
(not enough rep to comment) Yes, I was able to increase the open files limit after a lot of headaches, using the 'creating Launch Daemons' solution linked above. It finally stuck after setting the permissions correctly (they should be root:wheel) and rebooting.
This is the response from ulimit -a on that machine:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
Note that I also used the same link to increase the max user processes. Increasing the open files limit stuck after setting the permissions on limit.maxfiles.plist and loading the file with sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist. But the changes to max user processes (limit.maxproc.plist) did not take effect until after a reboot.
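For reference, the permission fix and the load step look roughly like this (a sketch, assuming the plists live in /Library/LaunchDaemons as in the question):
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist
sudo chmod 644 /Library/LaunchDaemons/limit.maxfiles.plist
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist
launchctl limit maxfiles    # should report the new soft and hard values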
It's worth mentioning that while I was finally able to increase these limits, increasing them has not solved my issue.
UPDATE
I tried to set the open files limit on another machine running Sierra and encountered the same problems that the OP is having, specifically:
Running ulimit -S -n 10241 produces the error ulimit: open files: cannot modify limit: Invalid argument in Terminal
I was able to lower the open files limit and raise it again, but not break the 10240 barrier without encountering the 'Invalid argument' error
I followed the instructions in this answer Open files limit does not work as before in OSX Yosemite because it provides a sample of what should be in the limit.maxfiles.plist file. The soft limit is set at 64000 and the hard limit at 524288.
1. sudo touch limit.maxfiles.plist to create the file with the correct permissions (root:wheel).
2. With a text editor, I copied in the example provided (in the answer above).
3. launchctl limit maxfiles then reported maxfiles 64000 524288, so everything is good! NO! ulimit -n still came back as 10240.
4. Then sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist, launchctl limit maxfiles, and ulimit -n; the result was the same as in step 3.
5. After restarting the computer, launchctl limit maxfiles and ulimit -n both return the new soft limit of 64000.
The open files limit can be lowered and raised, but not raised above the new soft limit of 64000. Raising the open files limit above 64000 requires editing the .plist file and restarting again.
ulimit and launchctl need to agree, it seems, and that only happens after restarting. There is a little more context about the two in Difference between ulimit, launchctl, sysctl?, basically:
ulimit is for the shell
sysctl is for the OS
launchctl is for macOS, and the newer macOS (née OS X) gets, the less it respects sysctl or treats its settings as temporary
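A quick way to compare the three views on one machine (a sketch; kern.maxfilesperproc is the sysctl key from the question):
ulimit -n                      # the shell's soft limit for the current session
launchctl limit maxfiles       # launchd's soft and hard limits
sysctl kern.maxfilesperproc    # the kernel's per-process cap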

What is the maximum value I can set for the "MAX_OPEN_FILES" configuration in Elasticsearch?

The current Elasticsearch configuration on my 64 GB RAM, 8-core Linux machine has, at /etc/sysconfig/elasticsearch:
MAX_OPEN_FILES=65535
When I do ulimit:
[root@machine ~]# ulimit -n
1024
What is the maximum number I can set for this configuration?
You can run ulimit -n as the user running Elasticsearch to find out the current limit.
You can update the max open files on your Linux server in /etc/security/limits.conf.
If your user is called elasticsearch, you could add something like this to the file:
elasticsearch - nofile 65535
Or run ulimit -n 65535 before starting Elasticsearch.
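A minimal sketch of that limits.conf approach, assuming the service account is literally named elasticsearch (forcing a bash shell in case the account uses nologin):
# add a combined soft+hard nofile limit for the elasticsearch user
echo 'elasticsearch - nofile 65535' | sudo tee -a /etc/security/limits.conf
# verify from a fresh login for that user
sudo su -s /bin/bash -l elasticsearch -c 'ulimit -n'    # should print 65535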

tsung cluster error:too few file descriptors available

I'm using tsung's cluster mode.
I have modified the limits on file descriptors.
The node which I use as the slave reports this error:
ts_launcher:(2:<0.49.0>) WARNING !!! too few file descriptors
available (1024), you should decrease maxusers (currently 60000)
Can anyone give some help?
Try the following steps:
First, check the current system limit for file descriptors:
cat /proc/sys/fs/file-max
If the setting is lower than 64000, edit the /etc/sysctl.conf file, and reset the fs.file-max parameter:
fs.file-max = 64000
Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file. Add the following entry:
* - nofile 8192
Edit /etc/pam.d/system-auth and add this entry:
session required /lib/security/$ISA/pam_limits.so
Reboot the machine to apply the changes.
reboot
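If you'd rather not wait for a reboot for the fs.file-max part, the sysctl change can be applied immediately; the limits.conf/PAM part still needs a fresh login. A sketch (the user name tsung is just an example):
sudo sysctl -p                   # re-read /etc/sysctl.conf, applies fs.file-max now
cat /proc/sys/fs/file-max        # should report 64000 or more
su -l tsung -c 'ulimit -n'       # per-user nofile limit, checked from a fresh login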
On CentOS/RHEL, change the file descriptor limit in /etc/security/limits.conf
sudo vi /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
Reboot the machine:
sudo reboot
Check the limit again:
ulimit -n
64000
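Since the answer above sets both a soft and a hard nofile limit, you can confirm that both picked up the new value after the reboot (a small sketch):
ulimit -Sn    # soft limit, should print 64000
ulimit -Hn    # hard limit, should print 64000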

How much memory could vm use

I read the document Understanding Virtual Memory and it said one method for changing tunable parameters in the Linux VM was the command:
sysctl -w vm.max_map_count=65535
I want to know what the number 65535 means and how much memory the VM could use with this setting.
From the Linux kernel documentation:
max_map_count:
This file contains the maximum number of memory map areas a process
may have. Memory map areas are used as a side-effect of calling
malloc, directly by mmap and mprotect, and also when loading shared
libraries.
While most applications need less than a thousand maps, certain
programs, particularly malloc debuggers, may consume lots of them,
e.g., up to one or two maps per allocation.
The default value is 65536.
Bottom line: this setting limits the number of discrete mapped memory areas - on its own it imposes no limit on the size of those areas or on the memory that is usable by a process.
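If you want a feel for how close a process is to that cap, each line of /proc/<pid>/maps is one mapped area, so a rough check (a sketch using the current shell's pid) is:
cat /proc/sys/vm/max_map_count    # the current cap
wc -l < /proc/$$/maps             # number of map areas the current shell uses right now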
And yes, this:
sysctl -w vm.max_map_count=65535
is just a nicer way of writing this:
echo 65535 > /proc/sys/vm/max_map_count
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
This does not work if you are not root, since the >> redirection is performed by your own shell, which cannot write to the configuration file directly.
Run the command below instead:
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf
But first check whether vm.max_map_count already exists in the file. You can do that using:
grep vm.max_map_count /etc/sysctl.conf
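Putting the pieces together, a hedged end-to-end sketch (262144 is the value used above; sysctl -p appears in the previous answer):
grep vm.max_map_count /etc/sysctl.conf                       # check for an existing entry first
echo vm.max_map_count=262144 | sudo tee -a /etc/sysctl.conf  # persist it if absent
sudo sysctl -p                                               # apply the file without rebooting
sysctl vm.max_map_count                                      # confirm the running value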
