Number of open files limit not taking effect in the same session - bash

In a Linux script, the new number of open files is not taking effect in the same session, even though limits.conf has been modified successfully. If I open a new session, it does show the new value for the number of open files (command: ulimit -Sn).
The code written in test.sh is as follows:
>loading.log
ulimit -Sn >> loading.log
sudo sed -i 's/soft nofile 2048/soft nofile 1024/g' /etc/security/limits.conf
ulimit -Sn >> loading.log

The changes in limits.conf only take effect after logging in again. See "Do changes in /etc/security/limits.conf require a reboot?".
It may be enough to also set the limit manually for the current shell session:
# for next login
sudo sed -i 's/soft nofile 2048/soft nofile 1024/g' /etc/security/limits.conf
# for this shell session
ulimit -Sn 1024
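So a possible revision of test.sh that combines both steps (reusing the values from the question) would be:
>loading.log
ulimit -Sn >> loading.log   # record the old soft limit
# change the default for future logins
sudo sed -i 's/soft nofile 2048/soft nofile 1024/g' /etc/security/limits.conf
# lower the soft limit for this shell right away
ulimit -Sn 1024
ulimit -Sn >> loading.log   # now logs the new value, 1024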

Related

Hadoop no space error

When I submit a Hadoop jar on HDFS, I'm facing:
Exception in thread "main" java.io.IOException: No space left on device
Can anyone help me with this?
This is usually an issue with the limit of open files or processes on the Linux box. Try ulimit -a to view the limits. You are interested in:
open files (-n) 1024
max user processes (-u) 102400
To set the open files limit:
ulimit -n xxxx
To set the max processes limit:
ulimit -u xxxx
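For example, to check and then raise both limits in the current shell (the numbers below are only illustrative, and a non-root user can only raise them up to the configured hard limits):
ulimit -a | grep -E 'open files|max user processes'   # show the two relevant limits
ulimit -n 8192     # raise the open files limit for this shell session
ulimit -u 102400   # raise the max user processes limit for this shell session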

OS X Sierra: Increase maxfilesperproc

I need to increase the number of allowed open files per process on OS X Sierra.
Currently, when I run
ulimit -n
The response is 2048.
None of the following techniques suggested on StackOverflow and other sites are working for me:
Creating Launch Daemons
Running sudo sysctl -w kern.maxfilesperproc=10240
Adding the following lines to /etc/sysctl.conf
kern.maxfiles=20480
kern.maxfilesperproc=10240
Running ulimit -n 10240. Note that when I run ulimit -n 512 I am able to successfully decrease the allowed limit but I can't seem to increase it.
My Launch Daemon for maxfiles is below. The one for maxproc is similar.
localhost:LaunchDaemons jay$ ls -latr limit*
-rw-r--r-- 1 root wheel 540 Nov 8 11:10 limit.maxfiles.plist
-rw-r--r-- 1 root wheel 531 Nov 8 11:19 limit.maxproc.plist
localhost:LaunchDaemons jay$ cat limit.maxfiles.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <dict>
    <key>Label</key>
    <string>limit.maxfiles</string>
    <key>ProgramArguments</key>
    <array>
      <string>launchctl</string>
      <string>limit</string>
      <string>maxfiles</string>
      <string>64000</string>
      <string>524288</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>ServiceIPC</key>
    <false/>
  </dict>
</plist>
Has anyone successfully increased the number of allowed open files per process on Sierra?
I think you want this:
# Check current limit
ulimit -n
256
# Raise limit
ulimit -S -n 4096
# Check again
ulimit -n
4096
(Not enough rep to comment.) Yes, I was able to increase the open files limit, after a lot of headaches, via the "creating daemons" solution link. It finally stuck after setting the permissions correctly (they should be root:wheel) and rebooting.
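For reference, the commands involved were roughly the following (assuming the plist lives in /Library/LaunchDaemons, as in the question):
sudo chown root:wheel /Library/LaunchDaemons/limit.maxfiles.plist   # launchd requires root:wheel ownership
sudo chmod 644 /Library/LaunchDaemons/limit.maxfiles.plist          # readable, writable only by root
sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist  # load it now (or reboot)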
This is the output of ulimit -a on that machine:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 64000
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
Note that I also used the same link to increase the max user processes limit. The open files limit stuck after setting the permissions on limit.maxfiles.plist and loading the file with sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist. The change to max user processes (limit.maxproc.plist), however, did not take effect until after a reboot.
It's worth mentioning that while I was finally able to increase these limits, increasing them has not solved my issue.
UPDATE
I tried to set the open files limit on another machine running Sierra and ran into the same problems the OP is having, specifically:
Running ulimit -S -n 10241 fails with the error ulimit: open files: cannot modify limit: Invalid argument in Terminal.
I was able to lower the open files limit and raise it again, but could not break the 10240 barrier without hitting the 'Invalid argument' error.
I followed the instructions in this answer, Open files limit does not work as before in OSX Yosemite, because it provides a sample of what should be in the limit.maxfiles.plist file. The soft limit is set to 64000 and the hard limit to 524288.
1. sudo touch limit.maxfiles.plist to create the file with the correct ownership and permissions (root:wheel).
2. With a text editor, I copied in the example provided (in the answer above).
3. Then launchctl limit maxfiles reported maxfiles 64000 524288, so everything is good! No: ulimit -n still came back as 10240.
4. Then sudo launchctl load -w /Library/LaunchDaemons/limit.maxfiles.plist, followed by launchctl limit maxfiles and ulimit -n; the result was the same as in step 3.
5. After restarting the computer, launchctl limit maxfiles and ulimit -n both return the new soft limit of 64000.
The open files limit can be lowered and raised, but not raised above the new soft limit of 64000. Raising the open files limit above 64000 requires editing the .plist file and restarting again.
ulimit and launchctl need to agree, it seems, and that only happens after restarting. There is a little more context about the two in Difference between ulimit, launchctl, sysctl?, basically:
ulimit is for the shell
sysctl is for the OS
launchctl is for macOS, and the older macOS (née OS X) gets, the less it respects sysctl or treats its settings as temporary.
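A quick, read-only way to compare the three views on one machine:
ulimit -n                                  # soft open-files limit for this shell
launchctl limit maxfiles                   # launchd's soft and hard maxfiles limits
sysctl kern.maxfiles kern.maxfilesperproc  # kernel-wide settings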

Monit requires manual restart in order to receive max open files value for running a process, bug?

I've been trying to figure this out for quite some time and I don't seem to be able to find any information on this issue.
Diving into the issue:
I am running an application on Ubuntu 14.04 using Monit V5.6.
The deployment of the application and Monit is done with Chef scripts via AWS OpsWorks, which works excellently.
The problem is that, once deployed, Monit starts the application using the following syntax:
start program = "/bin/sh -c 'ulimit -n 150000; <some more commands here which are not interesting>'" as uid <user> and gid <user_group>
This does start the application as the correct user, but the problem is that the max open files value for the process shows 4096 instead of the number set in limits.conf.
Just to be clear, I have set the following in /etc/security/limits.conf
root hard nofile 150000
root soft nofile 150000
* hard nofile 150000
* soft nofile 150000
Furthermore, if I stop the application, then do a service monit restart, and then start the application again, the max open files value is picked up correctly and I see 150000.
If I then redeploy the application without rebooting the instance, this happens again and I have to restart Monit manually once more.
Also, if I run the application using the following syntax in order to mimic Monit:
sudo -H -u <user> /bin/sh -c 'ulimit -n 150000; <more commands here>'
then everything works and the process receives the correct max open files value.
If I try to script this manual service monit restart, together with stopping and starting the application, via Chef scripts, it also fails and I get 4096 as the max open files value, so my only option is to do this manually each time I deploy, which is not very convenient.
Any help on this or thoughts would be greatly appreciated.
Thanks!
P.S. I also reviewed the following articles:
https://serverfault.com/questions/797650/centos-monit-ulimit-not-working
https://lists.nongnu.org/archive/html/monit-general/2010-04/msg00018.html
but since manually restarting Monit makes this work, I am looking for a solution that does not involve changing the init scripts.
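As a side note, the limit a running process actually received (however it was started) can be read from /proc; the process name below is only a placeholder:
pid=$(pgrep -o myapp)                        # "myapp" is a hypothetical process name
grep 'Max open files' "/proc/${pid}/limits"  # the value the kernel enforces for that process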

tsung cluster error: too few file descriptors available

I'm using a tsung cluster.
I have modified the limits on file descriptors.
The node which I use as the slave reports this error:
ts_launcher:(2:<0.49.0>) WARNING !!! too few file descriptors
available (1024), you should decrease maxusers (currently 60000)
Can anyone give some help?
Try the following steps
First, check the current system limit for file descriptors:
cat /proc/sys/fs/file-max
If the setting is lower than 64000, edit the /etc/sysctl.conf file, and reset the fs.file-max parameter:
fs.file-max = 64000
Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file. Add the following entry:
* - nofile 8192
Edit /etc/pam.d/system-auth and add this entry:
session required /lib/security/$ISA/pam_limits.so
Reboot the machine to apply the changes.
reboot
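If a full reboot is inconvenient, the sysctl part can usually be applied and verified right away (the limits.conf/PAM changes still require a fresh login):
sudo sysctl -p             # reload settings from /etc/sysctl.conf
cat /proc/sys/fs/file-max  # confirm the new fs.file-max value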
On CentOS/RHEL, change the file descriptor limit in /etc/security/limits.conf
sudo vi /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
Reboot the machine:
sudo reboot
Check the limit again:
ulimit -n
64000

How to increase ulimit on Amazon EC2 instance?

After SSH'ing into an EC2 instance running the Amazon Linux AMI, I tried:
ulimit -n 20000
...and got the following error:
-bash: ulimit: open files: cannot modify limit: Operation not permitted
However, the shell allows me to decrease this number, for the current session only.
Is there any way to increase the ulimit on an EC2 instance (permanently)?
In fact, changing values through the ulimit command only applies to the current shell session. If you want to permanently set a new limit, you must edit the /etc/security/limits.conf file and set your hard and soft limits. Here's an example:
# <domain> <type> <item> <value>
* soft nofile 20000
* hard nofile 20000
Save the file, log out, log in again, and test the configuration with the ulimit -n command. Hope it helps.
P.S. 1: Keep the following in mind:
Soft limit: value that the kernel enforces for the corresponding resource.
Hard limit: works as a ceiling for the soft limit.
P.S. 2: Additional files in /etc/security/limits.d/ might affect what is configured in limits.conf.
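For instance, a quick read-only check for such overrides (nothing is changed):
ls /etc/security/limits.d/                                      # list any drop-in files
grep -H -E 'nofile|nproc' /etc/security/limits.d/* 2>/dev/null  # show which files set these limits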
Thank you for the answer. For me, just updating /etc/security/limits.conf wasn't enough: only the 'open files' limit (ulimit -n) was being updated, and nproc was not. After updating /etc/security/limits.d/whateverfile, nproc (ulimit -u) was updated as well.
Steps:
sudo vi /etc/security/limits.d/whateverfile
Update the limits set for nproc/nofile
sudo vi /etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Reboot the machine: sudo reboot
P.S. I was not able to add it as a comment, so had to post as an answer.
I don't have enough rep points to comment...sorry for the fresh reply, but maybe this will keep someone from wasting an hour.
Viccari's answer finally solved this headache for me. Every other source tells you to edit the limits.conf file, and if that doesn't work, to add
session required pam_limits.so
to the /etc/pam.d/common-session file
DO NOT DO THIS!
I'm running an Ubuntu 18.04.5 EC2 instance, and this locked me out of SSH entirely. I could log in, but as soon as it was about to drop me into a prompt, it dropped my connection (I even saw all the welcome messages and such). The verbose output showed this as the last error:
fd 1 is not O_NONBLOCK
and I couldn't find an answer to what that meant. So, after shutting down the instance, waiting about an hour to snapshot the volume, and then mounting it to another running instance, I removed the edit to the common-session file and bam, SSH login worked again.
The fix that worked for me was looking for files in the /etc/security/limits.d/ folder, and editing those.
(and no, I did not need to reboot to get the new limits, just log out and back in)
