tsung cluster error: too few file descriptors available

I'm using tsung in cluster mode.
I have already raised the file descriptor limits.
The node I use as the slave still reports this error:
ts_launcher:(2:<0.49.0>) WARNING !!! too few file descriptors
available (1024), you should decrease maxusers (currently 60000)
Can anyone give some help?

Try the following steps:
First, check the current system limit for file descriptors:
cat /proc/sys/fs/file-max
If the setting is lower than 64000, edit the /etc/sysctl.conf file, and reset the fs.file-max parameter:
fs.file-max = 64000
Then increase the maximum number of open files on the system by editing the /etc/security/limits.conf configuration file. Add the following entry:
* - nofile 8192
Edit /etc/pam.d/system-auth and add this entry:
session required /lib/security/$ISA/pam_limits.so
Reboot the machine to apply the changes.
reboot
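If you would rather not wait for a reboot, the kernel-wide setting can usually be applied on the spot; a minimal sketch, assuming a standard sysctl setup (the per-user nofile limit from limits.conf still needs a fresh login to take effect):
# Apply the new kernel-wide limit immediately
sudo sysctl -w fs.file-max=64000
# Or reload everything declared in /etc/sysctl.conf
sudo sysctl -p
# Verify
cat /proc/sys/fs/file-max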

On CentOS/RHEL, change the file descriptor limit in /etc/security/limits.conf:
sudo vi /etc/security/limits.conf
* soft nofile 64000
* hard nofile 64000
Reboot the machine:
sudo reboot
Check the limit again:
ulimit -n
64000
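Whichever route you take, remember that tsung normally starts its remote nodes over ssh, so the limit has to be raised on the slave host itself, and the value seen by a remote command is the one that matters. A quick check (slave-node is a placeholder for your slave's hostname):
# Limit a plain remote command gets on the slave
ssh slave-node 'ulimit -n'
# Limit a full login shell gets on the slave
ssh slave-node 'bash -l -c "ulimit -n"'
If raising the limit on the slave is not possible, decreasing maxusers for that client in the tsung configuration, as the warning itself suggests, is the other way out.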

Related

Number of opened files is not taking effect in session

In a Linux script, the new limit on the number of open files does not take effect in the same session, even though limits.conf has been modified successfully. If I open a new session, however, it shows the new value
(command: ulimit -Sn).
The code in test.sh is as follows:
>loading.log                       # truncate the log file
ulimit -Sn >> loading.log          # record the current soft limit
sudo sed -i 's/soft nofile 2048/soft nofile 1024/g' /etc/security/limits.conf
ulimit -Sn >> loading.log          # still shows the old value in this session
The changes in limits.conf only take effect after logging in again. See "Do changes in /etc/security/limits.conf require a reboot?".
It may be sufficient to also set the limit manually for the current shell session:
# for next login
sudo sed -i 's/soft nofile 2048/soft nofile 1024/g' /etc/security/limits.conf
# for this shell session
ulimit -Sn 1024
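Note that a non-root user can only raise the soft limit up to the current hard limit; a quick way to see both in the current shell:
# Current soft limit (what processes get by default)
ulimit -Sn
# Current hard limit (the ceiling for raising the soft limit without root)
ulimit -Hn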

Elasticsearch not getting enough file descriptors

Elasticsearch is failing because of the following bootstrap error.
max file descriptors [4096] for elasticsearch process is too low, increase to at least [65536]
I have set the value as root using the sysctl and ulimit commands and verified it with cat /proc/sys/fs/file-max. Yet the Elasticsearch process does not get more than 4096 file descriptors. I even tried starting a new session from the root session I created, but no luck. What else could be the problem?
For the .zip and .tar.gz packages, set ulimit -n 65536 as root before starting Elasticsearch, or set nofile to 65536 in /etc/security/limits.conf.
The full reference is in the Elasticsearch documentation on file descriptors.
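If Elasticsearch was installed from the RPM or DEB package and runs under systemd, the limits in /etc/security/limits.conf are typically not applied to the service at all; in that case the limit is set through a systemd unit override instead. A sketch, assuming the unit is named elasticsearch.service:
# Create a drop-in override raising the open-files limit for the service
sudo mkdir -p /etc/systemd/system/elasticsearch.service.d
printf '[Service]\nLimitNOFILE=65536\n' | sudo tee /etc/systemd/system/elasticsearch.service.d/override.conf
# Reload systemd and restart the service so the new limit is picked up
sudo systemctl daemon-reload
sudo systemctl restart elasticsearch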

Hadoop no space error

When I submit a Hadoop jar on HDFS, I get:
Exception in thread "main" java.io.IOException: No space left on
device
Can anyone help me with this?
This is usually an issue with the limit on open files or processes on the Linux box. Run ulimit -a to view the limits; you are interested in:
open files (-n) 1024
max user processes (-u) 102400
To set the open files limit:
ulimit -n xxxx
To set the max processes limit:
ulimit -u xxxx
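ulimit changes made this way only last for the current shell. To carry them over to future sessions of the user who submits the jobs, the usual approach is an entry in /etc/security/limits.conf followed by a fresh login; a sketch, where the hadoop user name and the 65536 values are assumptions for illustration (raising a soft limit above the hard limit still needs root):
# For the current shell, before resubmitting the job
ulimit -n 65536
ulimit -u 65536
# To persist, add lines like these to /etc/security/limits.conf and log in again:
#   hadoop soft nofile 65536
#   hadoop hard nofile 65536
#   hadoop soft nproc  65536
#   hadoop hard nproc  65536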

What is the maximum value I can set for the "MAX_OPEN_FILES" configuration in Elasticsearch?

The current Elasticsearch configuration on my 64 GB RAM, 8-core Linux machine has the following in /etc/sysconfig/elasticsearch:
MAX_OPEN_FILES=65535
When I run ulimit:
[root@machine ~]# ulimit -n
1024
What is the maximum number I can set for this configuration?
You can run ulimit -n as the user running Elasticsearch to see the current limit.
You can update the max open files on your Linux server in /etc/security/limits.conf.
If your user is called elasticsearch, you could add something like this to the file:
elasticsearch - nofile 65535
Or run ulimit -n 65535 before starting Elasticsearch.
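Whatever value you configure, it is worth checking what the running process actually received, since limits are inherited from whoever started it. A quick check; the pgrep pattern is an assumption about how the Java process appears on your box:
# Find the Elasticsearch PID and show the open-files limit it is actually running with
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch | head -n 1)
grep 'open files' /proc/$ES_PID/limits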

How to increase ulimit on Amazon EC2 instance?

After SSH'ing into an EC2 instance running the Amazon Linux AMI, I tried:
ulimit -n 20000
...and got the following error:
-bash: ulimit: open files: cannot modify limit: Operation not permitted
However, the shell allows me to decrease this number, for the current session only.
Is there any way to increase the ulimit on an EC2 instance (permanently)?
In fact, changing values through the ulimit command only applies to the current shell session. If you want to permanently set a new limit, you must edit the /etc/security/limits.conf file and set your hard and soft limits. Here's an example:
# <domain> <type> <item> <value>
* soft nofile 20000
* hard nofile 20000
Save the file, log out, log in again, and test the configuration with the ulimit -n command. Hope it helps.
P.S. 1: Keep the following in mind:
Soft limit: value that the kernel enforces for the corresponding resource.
Hard limit: works as a ceiling for the soft limit.
P.S. 2: Additional files in /etc/security/limits.d/ might affect what is configured in limits.conf.
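To see which values are actually in effect after logging back in, and whether a file under limits.d is overriding limits.conf, something along these lines helps:
# Effective soft and hard limits for the current session
ulimit -Sn
ulimit -Hn
# Any nofile/nproc entries that might override limits.conf
grep -r -E 'nofile|nproc' /etc/security/limits.conf /etc/security/limits.d/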
Thank you for the answer. For me, just updating /etc/security/limits.conf wasn't enough: only the 'open files' limit (ulimit -n) was being updated, while nproc was not. After updating /etc/security/limits.d/whateverfile, nproc (ulimit -u) was also updated.
Steps:
sudo vi /etc/security/limits.d/whateverfile
Update the limits set for nproc/nofile
sudo vi /etc/security/limits.conf
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
Reboot the machine:
sudo reboot
P.S. I was not able to add it as a comment, so had to post as an answer.
I don't have enough rep points to comment...sorry for the fresh reply, but maybe this will keep someone from wasting an hour.
Viccari's answer finally solved this headache for me. Every other source tells you to edit the limits.conf file, and if that doesn't work, to add
session required pam_limits.so
to the /etc/pam.d/common-session file
DO NOT DO THIS!
I'm running an Ubuntu 18.04.5 EC2 instance, and this locked me out of SSH entirely. I could log in, but as soon as it was about to drop me into a prompt, it dropped my connection (I even saw all the welcome messages). Verbose output showed this as the last error:
fd 1 is not O_NONBLOCK
and I couldn't find an answer to what that meant. So, after shutting down the instance, waiting about an hour to snapshot the volume, and then mounting it to another running instance, I removed the edit to the common-session file and bam, SSH login worked again.
The fix that worked for me was looking for files in the /etc/security/limits.d/ folder, and editing those.
(and no, I did not need to reboot to get the new limits, just log out and back in)
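Before adding pam_limits.so anywhere by hand, it is worth confirming whether it is already enabled, which it usually is on stock Ubuntu; a quick check:
# Show where pam_limits is already referenced
grep -r pam_limits /etc/pam.d/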
