How to disable core dumps in Linux - bash

I'm trying to disable core dumps for my application. I set ulimit -c 0, but whenever I attach to the process with gdb using gdb --pid=<pid> and then run gcore, I still get a core dump for that application. I'm using bash:
-bash-3.2$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65600
max locked memory (kbytes, -l) 50000000
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
-bash-3.2$ ps -ef | grep top
oracle 8951 8879 0 Mar05 ? 00:01:44 /u01/R122_EBS/fs1/FMW_Home/jrockit32 jre/bin/java -classpath /u01/R122_EBS/fs1/FMW_Home/webtier/opmn/lib/wlfullclient.jar:/u01/R122_EBS/fs1/FMW_Home/Oracle_EBS-app1/shared-libs/ebs-appsborg/WEB-INF/lib/ebsAppsborgManifest.jar:/u01/R122_EBS/fs1/EBSapps/comn/java/classes -mx256m oracle.apps.ad.tools.configuration.RegisterWLSListeners -appsuser APPS -appshost rws3510293 -appsjdbcconnectdesc jdbc:oracle:thin:#(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=rws3510293.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rahulshr))) -adtop /u01/R122_EBS/fs1/EBSapps/appl/ad/12.0.0 -wlshost rws3510293 -wlsuser weblogic -wlsport 7001 -dbsid rahulshr -dbhost rws3510293 -dbdomain us.oracle.com -dbport 1521 -outdir /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appltmp/Tue_Mar_5_00_42_52_2013 -log /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/logs/appl/rgf/Tue_Mar_5_00_42_52_2013/adRegisterWLSListeners.log -promptmsg hide -contextfile /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appl/admin/rahulshr_rws3510293.xml
oracle 23694 22895 0 Mar05 pts/0 00:00:00 top
oracle 26235 22895 0 01:51 pts/0 00:00:00 grep top
-bash-3.2$ gcore
usage: gcore [-o filename] pid
-bash-3.2$ gcore 23694
0x000000355cacbfe8 in tcsetattr () from /lib64/libc.so.6
Saved corefile core.23694
[2]+ Stopped top
-bash-3.2$ ls -l
total 2384
-rw-r--r-- 1 oracle dba 2425288 Mar 6 01:52 core.23694
drwxr----- 3 oracle dba 4096 Mar 5 03:32 oradiag_oracle
-rwxr-xr-x 1 oracle dba 20 Mar 5 04:06 test.sh
-bash-3.2$

The gcore command in gdb does not use the kernel's core-dumping code at all. It walks the process's memory itself and writes out a binary file in the same format as a process core file, so the ulimit -c setting never comes into play. This is apparent because the process is still alive after gcore finishes; if the kernel had dumped the core, the process would have been terminated.
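As a rough illustration of that difference, here is a minimal sketch (the sleep process is just a throwaway target; it assumes gdb's gcore wrapper is installed):
ulimit -c 0                # kernel will not write a core file when a process crashes
sleep 300 &                # throwaway test process
pid=$!
gcore "$pid"               # still produces core.<pid>: gdb reads /proc/<pid>/mem and writes the file itself
kill -SEGV "$pid"          # the process dies, but the kernel produces no core file
ls -l core.* 2>/dev/null   # only the gcore-generated file is present
So if the goal is to stop gcore as well, lowering the core file size limit will not help; gdb would likely have to be prevented from attaching in the first place, since RLIMIT_CORE is never consulted on that path.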

Related

bash (nano receives SIGHUP or SIGTERM and sudo's password disappears)

I'm trying to develop a script that shows whether the site ip.com is up (with the background output going to ttttttttt.out):
#!/bin/bash
exec >"ttttttttt.out" 2>>"ttttttttt.out" </dev/null
while true; do
    A=$(timeout 1 bash -c 'exec 3<> /dev/tcp/ip.com/80; echo $?' 2>/dev/null)
    if [[ "$A" == "0" ]]; then echo "ip.com connected"; else echo "ip.com unconnected"; fi
    sleep 1
done
If it's executed in the background:
$ ./ip.com &
and I start nano on edkewkn.ewdnjewn (a random name):
$ pico edkewkn.ewdnjewn
then after 2-3 seconds I get the error "Received SIGHUP or SIGTERM" and pico quits.
I've also noticed that if I try to run sudo kill -9 PID it's impossible, because the prompt disappears like this:
username#computer:~/dir$ sudo kill -9 5999
[sudo] password for username:
username#computer:~/dir$
It doesn't give me time to enter the password; I have to exit the shell and open another SSH session to get a working shell again.
After some testing I found that the problem is bash -c.
I need to use the script this way (saving the result in a variable) and I can only use bash (not netcat or other tools, because they aren't available on some of the machines I have).
Why does this happen, and how can I modify this line to make it work?
A=$(timeout 1 bash -c 'exec 3<> /dev/tcp/ip.com/80; echo $?' 2>/dev/null);
Also, if I remove the stdin/stdout/stderr redirections and the variable (as below), the problem persists:
#!/bin/bash
while true; do
    #timeout 1 sleep 2           # this way it works fine
    #timeout 1 bash 'sleep 2'    # this way it fails
    timeout 1 bash -c 'sleep 2'  # this way it fails
    sleep 1
done
So the error comes directly from bash -c.
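To narrow it down further, here is a small diagnostic sketch (sig.log is an arbitrary file name) that traps the signals in the looping script itself and records which one arrives and when, instead of relying on nano's error message:
#!/bin/bash
# Log any SIGHUP/SIGTERM delivered to this script while the reduced loop runs.
trap 'echo "$(date +%T) got SIGHUP"  >> sig.log' HUP
trap 'echo "$(date +%T) got SIGTERM" >> sig.log' TERM
while true; do
    timeout 1 bash -c 'sleep 2'   # the failing case from above
    sleep 1
done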
PPS: I'm connected over SSH:
root 1162 0.0 0.5 10008 5604 ? Ss Jan03 0:00 /usr/sbin/sshd -D
root 11288 0.0 0.6 10640 6232 ? Ss 15:08 0:00 \_ sshd: username [priv]
username 11486 0.0 0.3 10776 3188 ? S 15:08 0:01 | \_ sshd: username#pts/3
username 11505 0.1 0.5 7504 5184 pts/3 Ss 15:08 0:03 | \_ -bash
username 12500 0.0 0.2 5456 2696 pts/3 S 16:10 0:00 | \_ /bin/bash ./script.sh
username 13776 0.0 0.0 4460 684 pts/3 S 16:10 0:00 | | \_ timeout 1 sleep 2
username 13777 0.0 0.0 4280 576 pts/3 S 16:10 0:00 | | \_ sleep 2
username 13779 0.0 0.3 7912 3184 pts/3 R+ 16:10 0:00 | \_ ps auxf --sort pid
root 12293 1.0 0.6 10640 6304 ? Ss 16:10 0:00 \_ sshd: username [priv]
username 12399 0.0 0.3 10772 3196 ? S 16:10 0:00 | \_ sshd: username#notty
username 12405 0.0 0.1 2576 1792 ? Ss 16:10 0:00 | \_ /usr/lib/openssh/sftp-server
root 14039 0.0 0.6 10640 6304 ? Ss 15:09 0:00 \_ sshd: username [priv]
username 14176 0.0 0.3 10784 3192 ? S 15:09 0:00 \_ sshd: username#notty
username 14215 0.0 0.1 2576 1868 ? Ss 15:09 0:00 \_ /usr/lib/openssh/sftp-server
@John Bollinger: if you can see the light with this, you might be a psychic :)
PID PPID PGID SID CMD
19237 19225 19237 19237 -bash
22848 19237 22848 19237 /bin/bash ./script.sh
23512 22848 22848 19237 sleep 2
23563 19237 23563 19237 ps -o pid,ppid,pgid,sid,cmd
And before someone asks: I'm on an old Ubuntu Linux version running on an old computer.
$ uname -a
Linux XXXXXXXX 4.4.0-142-generic #168-Ubuntu SMP Wed Jan 16 21:01:15 UTC 2019 i686 i686 i686 GNU/Linux
~
$ pico
GNU nano 2.5.3
~
Bash Version: 4.3
Patch Level: 48
Release Status: release
~
$ ulimit -Sa
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7611
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 7611
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
~
$ ulimit -Ha
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 7611
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 7611
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
~
# I think that's not the ulimit (output from top):
$ top
Tasks: 184 total, 1 running, 183 sleeping, 0 stopped, 0 zombie
~
$ sudo cat /etc/ssh/sshd_config
[sudo] password :
# Package generated configuration file
# See the sshd_config(5) manpage for details
# What ports, IPs and protocols we listen for
Port 22
# Use these options to restrict which interfaces/protocols sshd will bind to
#ListenAddress ::
#ListenAddress 0.0.0.0
Protocol 2
# HostKeys for protocol version 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
#Privilege Separation is turned on for security
UsePrivilegeSeparation yes
# Lifetime and size of ephemeral version 1 server key
KeyRegenerationInterval 3600
ServerKeyBits 1024
# Logging
SyslogFacility AUTH
LogLevel INFO
# Authentication:
LoginGraceTime 120
#PermitRootLogin prohibit-password
PermitRootLogin yes
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
#AuthorizedKeysFile %h/.ssh/authorized_keys
# Don't read the user's ~/.rhosts and ~/.shosts files
IgnoreRhosts yes
# For this to work you will also need host keys in /etc/ssh_known_hosts
RhostsRSAAuthentication no
# similar for protocol version 2
HostbasedAuthentication no
# Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
#IgnoreUserKnownHosts yes
# To enable empty passwords, change to yes (NOT RECOMMENDED)
PermitEmptyPasswords no
# Change to yes to enable challenge-response passwords (beware issues with
# some PAM modules and threads)
ChallengeResponseAuthentication no
# Change to no to disable tunnelled clear text passwords
#PasswordAuthentication yes
# Kerberos options
#KerberosAuthentication no
#KerberosGetAFSToken no
#KerberosOrLocalPasswd yes
#KerberosTicketCleanup yes
# GSSAPI options
#GSSAPIAuthentication no
#GSSAPICleanupCredentials yes
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog no
TCPKeepAlive yes
#UseLogin no
#MaxStartups 10:30:60
#Banner /etc/issue.net
# Allow client to pass locale environment variables
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
# Set this to 'yes' to enable PAM authentication, account processing,
# and session processing. If this is enabled, PAM authentication will
# be allowed through the ChallengeResponseAuthentication and
# PasswordAuthentication. Depending on your PAM configuration,
# PAM authentication via ChallengeResponseAuthentication may bypass
# the setting of "PermitRootLogin without-password".
# If you just want the PAM account and session checks to run without
# PAM authentication, then enable this but set PasswordAuthentication
# and ChallengeResponseAuthentication to 'no'.
UsePAM yes
UseDNS no
GSSAPIAuthentication no

Can't fork process: Resource temporarily unavailable issue

I've got a problem using the terminal on macOS 10.12.3 on a Mac mini.
When I try to run any command I get the following message:
can't fork process: Resource temporarily unavailable.
I have had this problem before. Last time I was able to fix it by increasing the number of processes, and afterwards my system looked like this:
sysctl -a | grep maxproc
kern.maxproc: 2048
kern.maxprocperuid: 2048
ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 65536
pipe size (512 bytes, -p) 1
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 2048
virtual memory (kbytes, -v) unlimited
I thought the problem was solved, but now the issue has occurred again.
This time I was able to solve it with a reboot, but it's highly undesirable to reboot my Mac every time.
Do you have any advice on how to fix this problem once and for all?
No idea why your question was downvoted…
sudo sysctl kern.tty.ptmx_max=255 (or 511, or whatever) should fix it.
My default (in El Capitan) was 127. (As a tmux user, I need more than that.)
To learn more:
sysctl -a | grep max
ulimit -a
launchctl limit
cat /private/etc/launchd.conf
cat /private/etc/sysctl.conf
man 8 sysctl
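If the larger value helps, a possible way to make it persist across reboots (assuming this macOS release still reads /private/etc/sysctl.conf at boot, as hinted by the list above) is:
sudo sysctl kern.tty.ptmx_max=511                                     # apply immediately
echo 'kern.tty.ptmx_max=511' | sudo tee -a /private/etc/sysctl.conf   # re-apply at boot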
Rebooting solves the issue. In my case, I opened Activity Monitor and found way too many processes for "Outlook WebContent". I quit MS Outlook and voila, the terminal reopened and that error disappeared.
A short-term solution (besides rebooting) is to quit unused apps or close unused documents or projects.
I had a similar experience, and it turned out I had a cron job spawning every minute and never finishing. If you can free up enough resources to run the command below, you can see what is repeatedly spawning:
ps -e | awk '{print $4" "$5" "$6}' | sort | uniq -c | sort -n
For me the last few lines were this:
15 /Applications/Google Chrome.app/Contents/Frameworks/Google Chrome
36 (find)
1184 (bash)
1220 (cron)
1221 /usr/sbin/cron
My fix was easy: crontab -e and remove the offending line. Then a reboot cleared all the zombie cron processes.

NameNode startup issue

I am not able to start the NameNode after installation. I'm getting the error below:
NameNode Start
stderr: /var/lib/ambari-agent/data/errors-1775.txt
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.out
NameNode log file:
ulimit -a for user hdfs
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128331
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 128000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
IP: inet addr:10.47.84.5
I need help resolving this issue. Let me know if more details need to be shared. Thanks in advance.
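One more place worth looking (a suggestion, not a confirmed fix): the .out file only carries the ulimit banner shown above, while the actual startup exception normally lands in the matching .log file in the same directory. The exact file name below is inferred from the error message and may differ on your host:
tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.log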

Hadoop multi-node configuration: master node not able to start the DataNode on the slave

In the example below, saturn is the master node and pluto is the slave node.
hadoop#saturn:~$ start-dfs.sh
16/02/22 15:51:34 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [saturn]
hadoop#saturn's password:
saturn: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-saturn.out
hadoop#saturn's password: pluto: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-pluto.out
It hangs at the last instruction.
I am puzzled as to why this is happening.
Update: Earlier I had both saturn and pluto in /usr/local/hadoop/slaves, but when I changed it to pluto only, it ran. However, the DataNode still does not get started on the slave (pluto) node.
As requested by @running:
Log of /usr/local/hadoop/logs/hadoop-hadoop-datanode-pluto.out:
ulimit -a for user hadoop
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 15838
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 15838
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
Log of /usr/local/hadoop/logs/hadoop-hadoop-namenode-saturn.out
ulimit -a for user hadoop
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 1031371
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 1031371
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
(I am sorry for the formatting)
It was happening because a couple of files did not have the required write permissions.
So I ran chown and chmod on /usr/local/ and it worked.
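For reference, a minimal sketch of what that fix can look like (the hadoop user/group and the exact directory are assumptions; the answer only says /usr/local/):
sudo chown -R hadoop:hadoop /usr/local/hadoop   # give the hadoop user ownership of the install tree
sudo chmod -R u+rwX /usr/local/hadoop/logs      # make sure the log directory is writable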

How to use ulimit right

I wanted to limit luakit to a maximum of 150 MB of virtual memory. Here is my shell script:
#!/bin/bash
# limit virtual memory to 150 MB (ulimit -v takes KiB, so 150 * 1024 = 153600)
ulimit -H -v 153600
while true
do
    startx /usr/bin/luakit -U -- -s 0 dpms
done
But when memory usage goes above 150 MB (the VIRT column in htop), nothing happens.
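For context, the kernel enforces the soft -v limit on the address space, and hitting it does not kill the process: allocations inside it (malloc/mmap) simply start failing, which is why nothing visible happens in htop. A minimal sketch that applies the soft limit to luakit alone, in a subshell, so the rest of the session keeps its defaults:
# Value is in KiB (153600 KiB = 150 MB), same as in the script above.
( ulimit -S -v 153600; exec /usr/bin/luakit -U )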
