I've got a problem using Terminal on macOS 10.12.3 on a Mac mini.
When I try to run any command I get the following message:
can't fork process: Resource temporarily unavailable.
I have had this problem before. Last time I was able to fix it by increasing the maximum number of processes, after which my system looked like this:
sysctl -a | grep maxproc
kern.maxproc: 2048
kern.maxprocperuid: 2048
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
I thought the problem was solved, but now the issue has come back.
This time I was able to work around it with a reboot, but having to reboot my Mac every time is strongly undesirable.
Do you have any advice on how to fix this problem once and for all?
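For reference, a rough way to see how close the system is to those limits (run from an already-open shell, since new forks may fail; nothing here is specific to the root cause) is:
ps -ax | wc -l                          # total number of processes
ps -axo user | grep -c "$(whoami)"      # processes owned by the current user
sysctl kern.maxproc kern.maxprocperuid  # the corresponding limits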
No idea why your question was downvoted…
sudo sysctl kern.tty.ptmx_max=255 (or 511, or whatever) should fix it.
My default (in El Capitan) was 127. (As a tmux user, I need more than that.)
To learn more:
sysctl -a | grep max
ulimit -a
launchctl limit
cat /private/etc/launchd.conf
cat /private/etc/sysctl.conf
man 8 sysctl
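To check the current value and make the change stick, something like the following should work; whether /etc/sysctl.conf is still read at boot varies by macOS release, so treat the last line as an assumption to verify:
sysctl kern.tty.ptmx_max                                      # current pseudo-terminal limit
sudo sysctl kern.tty.ptmx_max=511                             # raise it for the running system
echo 'kern.tty.ptmx_max=511' | sudo tee -a /etc/sysctl.conf   # persist it, if the file is honored at boot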
Rebooting solves the issue. In my case, I opened Activity Monitor and found far too many "Outlook WebContent" processes. I quit MS Outlook and voilà, Terminal reopened and the error disappeared.
A short-term solution (besides rebooting) is to quit unused apps or close unused documents or projects.
I had a similar experience and it turned out I had a cronjob spawning every minute and not finishing. If you can free up enough resources to run the below command, you can see what is repeatedly spawning.
ps -e | awk '{print $4" "$5" "$6}' | sort | uniq -c | sort -n
For me the last few lines were this:
15 /Applications/Google Chrome.app/Contents/Frameworks/Google Chrome
36 (find)
1184 (bash)
1220 (cron)
1221 /usr/sbin/cron
My fix was easy: crontab -e and remove the offending line. A reboot then cleared all the zombie cron processes.
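Roughly, the cleanup was (the exact offending entry depends on your crontab):
crontab -l    # list the current user's entries and spot the one spawning every minute
crontab -e    # delete or fix that line
# a reboot (as above) then cleared the leftover zombie cron children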
Related
I'm trying to run the VariantsToBinaryPed tool from GATK3, but it seems that my system's 'open file handle limit' is too small for it to successfully run.
I've tried increasing the limit using ulimit, as shown below, but the command still fails.
The GATK command:
java -jar GenomeAnalysisTK.jar \
-T VariantsToBinaryPed \
-R Homo_sapiens_assembly38.fasta \
-V ~/vcf/snp.indel.recal.splitMA_norm.vcf.bgz \
-m ~/03_IdentityCheck/KING/targeted_seq_ped_clean.fam \
-bed output.bed \
-bim output.bim \
-fam output.fam \
--minGenotypeQuality 0
Returns this error:
ERROR MESSAGE: An error occurred because there were too many files
open concurrently; your system's open file handle limit is probably too small.
See the unix ulimit command to adjust this limit or
ask your system administrator for help.
Following the advice given here, I ran:
echo kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf
echo kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf
sudo sysctl -w kern.maxfiles=65536
sudo sysctl -w kern.maxfilesperproc=65536
sudo ulimit -n 65536 65536
and added this line to my .bash_profile and sourced it:
ulimit -n 65536 65536
So that now, when I run ulimit -n, I get:
65536
However, I still get the same error from GATK:
ERROR MESSAGE: An error occurred because there were too many files
open concurrently; your system's open file handle limit is probably too small.
See the unix ulimit command to adjust this limit or
ask your system administrator for help.
Is there anything else I can do to avoid this error?
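One thing worth double-checking (a guess, not a confirmed diagnosis): ulimit is a shell builtin, so a sudo ulimit -n ... invocation only changes the limit for that single invocation, not for later shells, and on macOS launchd's own maxfiles limit can cap what ulimit is allowed to set. The soft limit has to be in effect in the very shell that launches java, e.g.:
ulimit -n                      # soft limit in the current shell
launchctl limit maxfiles       # launchd's soft/hard maxfiles limits (macOS)
ulimit -n 65536 && java -jar GenomeAnalysisTK.jar -T VariantsToBinaryPed ...   # raise and launch in the same shell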
I am not able to start the NameNode after installation.
I am getting the error below:
!NameNode Start
stderr: /var/lib/ambari-agent/data/errors-1775.txt
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.out
Namenode log file:
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128331
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
IP: inet addr: 10.47.84.5
I need help resolving this issue. Let me know if more details need to be shared. Thanks in advance.
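If it helps narrow things down: the .out file referenced in the error usually only captures the ulimit banner shown above, while the actual stack trace normally lands in the matching .log file. The path below is an assumption based on the log directory in the error message:
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.log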
In the example below, saturn is the master node and pluto is the slave node.
hadoop@saturn:~$ start-dfs.sh
16/02/22 15:51:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [saturn]
hadoop@saturn's password:
saturn: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-saturn.out
hadoop@saturn's password:
pluto: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-pluto.out
It hangs at the last instruction.
I am puzzled as to why this is happening.
Update: Earlier I had both saturn and pluto in the /usr/local/hadoop/slaves file, but when I changed it to pluto only, it ran. However, the DataNode is still not being started on the slave node (pluto).
As requested by @running:
Log of /usr/local/hadoop/logs/hadoop-hadoop-datanode-pluto.out
ulimit -a for user hadoop
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15838
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15838
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Log of /usr/local/hadoop/logs/hadoop-hadoop-namenode-saturn.out
ulimit -a for user hadoop
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1031371
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1031371
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
(I am sorry for the formatting)
It was happening because a couple of files did not have the required write permissions.
So I ran chown and chmod on /usr/local/ and it worked.
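A sketch of what that amounts to; the exact ownership and modes are assumptions, so adjust them to your layout:
sudo chown -R hadoop:hadoop /usr/local/hadoop    # let the hadoop user own the install, logs, and data dirs
sudo chmod -R u+rwX /usr/local/hadoop            # and make sure it can write to them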
I wanted to limit luakit to a maximum of 150 MB of virtual memory. Here is my shell script:
#!/bin/bash
# limit virtual memory to 150 MB (ulimit -v takes kbytes: 153600 KB)
ulimit -H -v 153600
while true
do
startx /usr/bin/luakit -U -- -s 0 dpms
done
But when memory usage goes above 150 MB (the VIRT column in htop), nothing happens.
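One way to check whether the limit actually reaches the luakit process (the PID lookup below is illustrative and assumes a single running instance) is to read the limits the kernel has applied to it:
pgrep -x luakit                                            # find the luakit PID
grep 'address space' "/proc/$(pgrep -xn luakit)/limits"    # "Max address space" is what ulimit -v controls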
I'm trying to disable core dumps for my application, so I set ulimit -c 0.
But whenever I attach to the process with gdb using gdb --pid=<pid> and then run gcore, I still get a core dump for that application. I'm using bash:
-bash-3.2$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65600
max locked memory (kbytes, -l) 50000000
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
-bash-3.2$ ps -ef | grep top
oracle 8951 8879 0 Mar05 ? 00:01:44 /u01/R122_EBS/fs1/FMW_Home/jrockit32 jre/bin/java -classpath /u01/R122_EBS/fs1/FMW_Home/webtier/opmn/lib/wlfullclient.jar:/u01/R122_EBS/fs1/FMW_Home/Oracle_EBS-app1/shared-libs/ebs-appsborg/WEB-INF/lib/ebsAppsborgManifest.jar:/u01/R122_EBS/fs1/EBSapps/comn/java/classes -mx256m oracle.apps.ad.tools.configuration.RegisterWLSListeners -appsuser APPS -appshost rws3510293 -appsjdbcconnectdesc jdbc:oracle:thin:#(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=rws3510293.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rahulshr))) -adtop /u01/R122_EBS/fs1/EBSapps/appl/ad/12.0.0 -wlshost rws3510293 -wlsuser weblogic -wlsport 7001 -dbsid rahulshr -dbhost rws3510293 -dbdomain us.oracle.com -dbport 1521 -outdir /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appltmp/Tue_Mar_5_00_42_52_2013 -log /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/logs/appl/rgf/Tue_Mar_5_00_42_52_2013/adRegisterWLSListeners.log -promptmsg hide -contextfile /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appl/admin/rahulshr_rws3510293.xml
oracle 23694 22895 0 Mar05 pts/0 00:00:00 top
oracle 26235 22895 0 01:51 pts/0 00:00:00 grep top
-bash-3.2$ gcore
usage: gcore [-o filename] pid
-bash-3.2$ gcore 23694
0x000000355cacbfe8 in tcsetattr () from /lib64/libc.so.6
Saved corefile core.23694
[2]+ Stopped top
-bash-3.2$ ls -l
total 2384
-rw-r--r-- 1 oracle dba 2425288 Mar 6 01:52 core.23694
drwxr----- 3 oracle dba 4096 Mar 5 03:32 oradiag_oracle
-rwxr-xr-x 1 oracle dba 20 Mar 5 04:06 test.sh
-bash-3.2$
The gcore command in gdb does not use the Linux kernel's core-dumping code. It walks the process's memory itself and writes out a binary file in the same format as a process core file. This is apparent because the process is still alive after gcore finishes; if the kernel had dumped the core, the process would have been terminated.
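A minimal sketch of that distinction, using a throwaway sleep process (nothing application-specific is assumed):
ulimit -c 0          # kernel-generated core dumps are disabled for children of this shell
sleep 300 &          # throwaway test process
pid=$!
gcore "$pid"         # gdb's gcore still writes core.<pid>: it reads the memory via ptrace
kill -SEGV "$pid"    # but the kernel writes no core file here, because RLIMIT_CORE is 0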