Java ProcessBuilder: GraphicsMagick/ImageMagick convert hangs - ProcessBuilder

The command below hangs when run via Java ProcessBuilder, whereas the same command produces the expected result on the command line.
gm convert -debug all -log %u %m:%l %e -limit Disk 5GB -limit memory 1GB -limit map 1GB -limit pixels 50MB -limit Threads 1 inputfile -quality 82 -size 700x500 -scale 700x500> jpg:out
When I check top for the task, it gives the following information:
Tasks: 1 total, 0 running, 1 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3%us, 0.3%sy, 0.0%ni, 99.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 30827392k total, 7131008k used, 23696384k free, 126288k buffers
Swap: 0k total, 0k used, 0k free, 4447716k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
18260 yarn 20 0 165m 9148 5436 S 0.0 0.0 0:00.02 gm
From the above information, the task went to sleep after using only 0.02 seconds of CPU time.
Any help would be greatly appreciated.
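A common cause of this symptom is that gm's stdout/stderr pipe fills up because the Java side never reads it (all the more likely with -debug all logging enabled), so gm blocks on a write and sleeps forever. A minimal diagnostic sketch, run on the box while the process hangs (18260 is the PID from the top output above; this is not part of the original post):
cat /proc/18260/wchan; echo    # a pipe-related wait channel suggests a blocked write to a full pipe
ls -l /proc/18260/fd           # fds 1 and 2 are normally pipes back to the JVM under ProcessBuilder
sudo strace -p 18260 -e trace=write    # a write(2) to fd 1 or 2 that never returns confirms it
If that is what is happening, the usual remedy on the Java side is to keep reading the process's output and error streams (or redirect them, e.g. with redirectErrorStream/redirectOutput) so the pipes never fill.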

Related

Bash: Multiple npm install in the background give error, 'No space left on device'

I am setting up Docker on a Google Cloud compute machine with 1 vCPU and 3.75 GB RAM.
If I simply run docker-compose up --build, it works, but the process is sequential and slow. So I am using this bash script to build the images in the background and skip the usual sequential process.
#!/bin/bash
command=$1
shift
jobsList=""
taskList[0]=""
i=0
#Replaces all the fluff with nothing, and we get our job ID
function getJobId(){
echo "$*" | sed 's/^[^0-9]*//' | sed 's/[^0-9].*$//'
}
for task in "$@"    # "$@" expands to the remaining arguments ("$#" would only give their count)
do
echo "Command is $command $task"
docker-compose $command $task &> ${task}.text &
lastJob=$(getJobId "$(jobs %%)")
jobsList="$jobsList $lastJob"
echo "jobsList is $jobsList"
taskList[$i]="$command $task"
i=$(($i + 1))
done
i=0
for job in $jobsList
do
wait %$job
echo "${taskList[$i]} completed with status $?"
i=$(($i + 1))
done
and I use it in the following manner:
availableServices=$(docker-compose config --services)
while IFS='' read -r line || [[ -n "$line" ]]
do
services+="$line "    # quote directly; command substitution would strip the trailing space
done <<<"$availableServices"
./runInParallel.sh build $services
I string together the available services from docker-compose.yml and pass them to my script.
But eventually all the processes fail with the following error:
npm WARN tar ENOSPC: no space left on device, write
Unhandled rejection Error: ENOSPC: no space left on device, write
I checked inodes, and on /dev/sda1 only 44% were used.
Here's my output for the command df -h:
Filesystem Size Used Avail Use% Mounted on
udev 1.8G 0 1.8G 0% /dev
tmpfs 370M 892K 369M 1% /run
/dev/sda1 9.6G 9.1G 455M 96% /
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
/dev/loop0 55M 55M 0 100% /snap/google-cloud-sdk/64
/dev/loop2 55M 55M 0 100% /snap/google-cloud-sdk/62
/dev/loop1 55M 55M 0 100% /snap/google-cloud-sdk/63
/dev/loop3 79M 79M 0 100% /snap/go/3095
/dev/loop5 89M 89M 0 100% /snap/core/5897
/dev/loop4 90M 90M 0 100% /snap/core/6130
/dev/loop6 90M 90M 0 100% /snap/core/6034
/dev/sda15 105M 3.6M 101M 4% /boot/efi
tmpfs 370M 0 370M 0% /run/user/1001
and here's the output for df -i
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 469499 385 469114 1% /dev
tmpfs 472727 592 472135 1% /run
/dev/sda1 1290240 636907 653333 50% /
tmpfs 472727 1 472726 1% /dev/shm
tmpfs 472727 8 472719 1% /run/lock
tmpfs 472727 18 472709 1% /sys/fs/cgroup
/dev/loop0 20782 20782 0 100% /snap/google-cloud-sdk/64
/dev/loop2 20680 20680 0 100% /snap/google-cloud-sdk/62
/dev/loop1 20738 20738 0 100% /snap/google-cloud-sdk/63
/dev/loop3 9417 9417 0 100% /snap/go/3095
/dev/loop5 12808 12808 0 100% /snap/core/5897
/dev/loop4 12810 12810 0 100% /snap/core/6130
/dev/loop6 12810 12810 0 100% /snap/core/6034
/dev/sda15 0 0 0 - /boot/efi
tmpfs 472727 10 472717 1% /run/user/1001
From your df -h output, the root filesystem (/dev/sda1) has only 455MB of free space.
Whenever you run docker build, the docker client (CLI) sends the entire contents of the build context (the directory containing the Dockerfile) to the docker daemon, which builds the image.
So, for example, if you have three services whose directories are 300MB each, you can build them sequentially with 455MB of free space, but to build them all at the same time you need roughly 300MB * 3 of free space for the docker daemon to cache the contexts and build the images.
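If sequential builds are not an option, it may also help to see where the disk is going and reclaim space before the parallel run; a short sketch using standard Docker CLI commands (adjust to your setup):
docker system df                 # how much space images, containers and caches are using
docker system prune -f           # remove stopped containers, dangling images and unused networks
du -sh ./*/ | sort -h | tail     # the biggest directories that would be sent as build context
Adding a .dockerignore next to each Dockerfile (excluding node_modules, .git, build artifacts and the like) also shrinks the context the CLI sends to the daemon, which directly reduces the temporary space the daemon needs.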

How to execute Linux commands on a remote machine with a shell script

I have a test server from which I need to log in to two different prod servers and execute
top, free -h, and ps -ef.
Could someone help me with a shell script?
The ssh command alone can do this:
$ for ip in 192.168.138.22{1,2,3}; do ssh ${ip} -o StrictHostKeyChecking=no "free -h"; done
total used free shared buff/cache available
Mem: 7.8G 782M 5.2G 152M 1.8G 6.4G
Swap: 7.9G 0B 7.9G
total used free shared buff/cache available
Mem: 7.8G 1.3G 4.1G 169M 2.5G 5.8G
Swap: 7.9G 0B 7.9G
total used free shared buff/cache available
Mem: 7.8G 563M 5.9G 118M 1.4G 6.6G
Swap: 7.9G 0B 7.9G
However, this requires SSH key authentication between the machines. Alternatively, we can use sshpass to supply the password:
$ for ip in 192.168.138.22{1,2,3}; do sshpass -p PASSWORD ssh ${ip} -o StrictHostKeyChecking=no "free -h"; done
total used free shared buff/cache available
Mem: 7.8G 782M 5.2G 152M 1.8G 6.4G
Swap: 7.9G 0B 7.9G
total used free shared buff/cache available
Mem: 7.8G 1.3G 4.1G 169M 2.5G 5.8G
Swap: 7.9G 0B 7.9G
total used free shared buff/cache available
Mem: 7.8G 563M 5.9G 118M 1.4G 6.6G
Swap: 7.9G 0B 7.9G
If you have SSH access to your server, you can execute commands remotely this way:
ssh -i <PATH_TO_YOUR_PRIVATE_KEY> remote_username@remote_host "<COMMAND>"
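Putting it together for the question above, a sketch with placeholder hostnames and user (assumes key-based login is already set up):
for host in prod1.example.com prod2.example.com; do echo "===== $host ====="; ssh -o StrictHostKeyChecking=no user@$host 'free -h; ps -ef | head -20; top -b -n 1 | head -20'; done
Note that top has to run in batch mode (-b -n 1) over ssh, since there is no terminal for its interactive display.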

Memory error in Rails console on AWS

I logged into our production instance on AWS, and tried to go into Rails console:
bundle exec rails c production
But I'm getting the following error
There was an error while trying to load the gem 'mini_magick' (Bundler::GemRequireError)
Gem Load Error is: Cannot allocate memory - animate
When I run free I see there's no swap:
free
total used free shared buffers cached
Mem: 7659512 7515728 143784 408 1724 45604
-/+ buffers/cache: 7468400 191112
Swap: 0 0 0
df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 3824796 12 3824784 1% /dev
tmpfs 765952 376 765576 1% /run
/dev/xvda1 15341728 11289944 3323732 78% /
none 4 0 4 0% /sys/fs/cgroup
none 5120 0 5120 0% /run/lock
none 3829756 0 3829756 0% /run/shm
none 102400 0 102400 0% /run/user
/dev/xvdf 10190136 6750744 2898720 70% /mnt
Not sure what's causing this or how to resolve it. Any help is appreciated.
Thanks!
You can increase the EC2 instance's memory or add swap space to the instance.
# Check current memory and swap
free
grep Mem /proc/meminfo
grep Swap /proc/meminfo
# Create a 512MB swap file at /swapfile1
sudo dd if=/dev/zero of=/swapfile1 bs=1M count=512
sudo chmod 600 /swapfile1
sudo mkswap /swapfile1
# Enable it and verify
sudo swapon /swapfile1
swapon -s
free
grep Swap /proc/meminfo
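To keep the swap file active after a reboot, add it to /etc/fstab; a standard entry (the path matches the /swapfile1 used above):
echo '/swapfile1 none swap sw 0 0' | sudo tee -a /etc/fstab
sudo swapon -a    # re-reads fstab and enables any swap listed there
free              # the Swap line should now show the new space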

How to know the number of active threads in Puma

I am trying to see the number of active puma threads on my server.
I cannot see it through ps:
$ ps aux | grep puma
healthd 2623 0.0 1.8 683168 37700 ? Ssl May02 5:38 puma 2.11.1 (tcp://127.0.0.1:22221) [healthd]
root 8029 0.0 0.1 110460 2184 pts/0 S+ 06:34 0:00 grep --color=auto puma
root 18084 0.0 0.1 56836 2664 ? Ss May05 0:00 su -s /bin/bash -c puma -C /opt/elasticbeanstalk/support/conf/pumaconf.rb webapp
webapp 18113 0.0 0.8 83280 17324 ? Ssl May05 0:04 puma 2.16.0 (unix:///var/run/puma/my_app.sock) [/]
webapp 18116 3.5 6.2 784992 128924 ? Sl May05 182:35 puma: cluster worker 0: 18113 [/]
As in the configuration I have:
threads 8, 32
I was expecting to see at least 8 puma threads.
To quickly answer the question: the number of threads used by a process with a given PID can be obtained using the following:
% ps -h -o nlwp <pid>
This just returns the total number of threads used by the process. The option -h removes the headers and the option -o nlwp formats the output of ps so that it only prints the Number of Light Weight Processes (NLWP), i.e. threads. For example, when only a single puma process is running and its PID is obtained with pgrep, you get:
% ps -h -o nlwp $(pgrep puma)
4
What is the difference between process, thread and light-weight process?
This question has been answered already in various places
[See here, here and the excellent geekstuff
article]. The quick, short and ugly version is :
a process is essentially any running instance of a program.
a thread is a flow of execution of the process. A process containing multiple execution flows is known as a multi-threaded process and shares its resources (memory, open files, I/O, ...) amongst its threads. The Linux kernel has no knowledge of what threads are and only knows processes. In the past, multi-threading was handled on a user level and not on a kernel level, which made it hard for the kernel to do proper process management.
Enter lightweight processes (LWP). This is essentially the answer to the issue with threads. Each thread is considered to be an LWP on the kernel level. The main difference between a process and an LWP is that the LWP shares resources. In other words, a Light Weight Process is kernel-speak for what users call a thread.
Can ps show information about threads or LWP's?
The ps (process status) command provides information about the currently running processes, including their corresponding LWPs or threads. To do this, it makes use of the /proc directory, a virtual filesystem regarded as the control and information centre of the kernel. [See here and here].
By default ps will not give you any information about the LWPs; however, adding the options -L and -m to the command generally does the trick.
man ps :: THREAD DISPLAY
H Show threads as if they were processes.
-L Show threads, possibly with LWP and NLWP columns.
m Show threads after processes.
-m Show threads after processes.
-T Show threads, possibly with SPID column.
For a single process puma with pid given by pgrep puma
% ps -fL $(pgrep puma)
UID PID PPID LWP C NLWP STIME TTY STAT TIME CMD
kvantour 2160 2876 2160 0 4 15:22 pts/39 Sl+ 0:00 ./puma
kvantour 2160 2876 2161 99 4 15:22 pts/39 Rl+ 0:14 ./puma
kvantour 2160 2876 2162 99 4 15:22 pts/39 Rl+ 0:14 ./puma
kvantour 2160 2876 2163 99 4 15:22 pts/39 Rl+ 0:14 ./puma
However, adding the -m option clearly gives a nicer overview. This is especially handy when multiple processes are running with the same name.
% ps -fmL $(pgrep puma)
UID PID PPID LWP C NLWP STIME TTY STAT TIME CMD
kvantour 2160 2876 - 0 4 15:22 pts/39 - 0:44 ./puma
kvantour - - 2160 0 - 15:22 - Sl+ 0:00 -
kvantour - - 2161 99 - 15:22 - Rl+ 0:14 -
kvantour - - 2162 99 - 15:22 - Rl+ 0:14 -
kvantour - - 2163 99 - 15:22 - Rl+ 0:14 -
In this example, you see that the process puma with PID 2160 runs with 4 threads (NLWP) having the IDs 2160-2163. Under STAT you see two different values, Sl+ and Rl+. Here the l is an indicator for multi-threaded; S and R stand for interruptible sleep (waiting for an event to complete) and running, respectively. So we see that 3 of the 4 threads are running at 99% CPU and one thread is sleeping.
You also see the total accumulated CPU time (44s), while a single thread has only run for 14s.
Another way to obtain information is by directly using the format specifiers with -o or -O.
man ps :: STANDARD FORMAT SPECIFIERS
lwp lightweight process (thread) ID of the dispatchable
entity (alias spid, tid). See tid for additional
information. Show threads as if they were processes.
nlwp number of lwps (threads) in the process. (alias thcount).
So you can use any of lwp,spid or tid and nlwp or thcount.
If you only want to get the number of threads of a process called puma, you can use:
% ps -o nlwp $(pgrep puma)
NLWP
4
or if you don't like the header
% ps -h -o nlwp $(pgrep puma)
4
You can get a bit more information with:
% ps -O nlwp $(pgrep puma)
PID NLWP S TTY TIME COMMAND
19304 4 T pts/39 00:00:00 ./puma
Finally, you can combine the flags with ps aux to list the threads.
% ps aux -L
USER PID LWP %CPU NLWP %MEM VSZ RSS TTY STAT START TIME COMMAND
...
kvantour 1618 1618 0.0 4 0.0 33260 1436 pts/39 Sl+ 15:17 0:00 ./puma
kvantour 1618 1619 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
kvantour 1618 1620 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
kvantour 1618 1621 99.8 4 0.0 33260 1436 pts/39 Rl+ 15:17 0:14 ./puma
...
Can top show information about threads or LWP's?
top has the option to show threads by hitting H in the interactive mode or by launching top with top -H. The problem is that it lists the threads as processes (similar to ps -fH).
% top
top - 09:42:10 up 17 days, 3 min, 1 user, load average: 3.35, 3.33, 2.75
Tasks: 353 total, 3 running, 347 sleeping, 3 stopped, 0 zombie
%Cpu(s): 75.5 us, 0.6 sy, 0.5 ni, 22.6 id, 0.0 wa, 0.0 hi, 0.8 si, 0.0 st
KiB Mem : 16310772 total, 8082152 free, 3662436 used, 4566184 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11363832 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
868 kvantour 20 0 33268 1436 1308 S 299.7 0.0 46:16.22 puma
1163 root 20 0 920488 282524 258436 S 2.0 1.7 124:48.32 Xorg
...
Here you see that puma runs at about 300% CPU for an accumulated time of 46:16.22. There is, however, no indicator that this is a threaded process. The only hint is the CPU usage, but this could be below 100% if 3 of the threads were sleeping. Furthermore, the status flag states S, which indicates that the first thread is asleep. Hitting H then gives you:
% top -H
top - 09:48:30 up 17 days, 10 min, 1 user, load average: 3.18, 3.44, 3.02
Threads: 918 total, 5 running, 910 sleeping, 3 stopped, 0 zombie
%Cpu(s): 75.6 us, 0.2 sy, 0.1 ni, 23.9 id, 0.0 wa, 0.0 hi, 0.2 si, 0.0 st
KiB Mem : 16310772 total, 8062296 free, 3696164 used, 4552312 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11345440 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
870 kvantour 20 0 33268 1436 1308 R 99.9 0.0 21:45.35 puma
869 kvantour 20 0 33268 1436 1308 R 99.7 0.0 21:45.43 puma
872 kvantour 20 0 33268 1436 1308 R 99.7 0.0 21:45.31 puma
1163 root 20 0 920552 282288 258200 R 2.0 1.7 124:52.05 Xorg
...
Now we see only 3 of the threads. As one of the threads is sleeping, it is way down at the bottom, since top sorts by CPU usage.
In order to see all threads, it is best to ask top to display a specific pid (for a single process):
% top -H -p $(pgrep puma)
top - 09:52:48 up 17 days, 14 min, 1 user, load average: 3.31, 3.38, 3.10
Threads: 4 total, 3 running, 1 sleeping, 0 stopped, 0 zombie
%Cpu(s): 75.5 us, 0.1 sy, 0.2 ni, 23.6 id, 0.0 wa, 0.0 hi, 0.7 si, 0.0 st
KiB Mem : 16310772 total, 8041048 free, 3706460 used, 4563264 buff/cache
KiB Swap: 4194300 total, 4194300 free, 0 used. 11325008 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
869 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.37 puma
870 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.30 puma
872 kvantour 20 0 33268 1436 1308 R 99.9 0.0 26:03.22 puma
868 kvantour 20 0 33268 1436 1308 S 0.0 0.0 0:00.00 puma
When you have multiple processes running, you might be interested in hitting f and toggling PGRP on. This shows the process group ID of the process (it corresponds to the PID in ps, whereas the PID column in top corresponds to the LWP column in ps).
How do I get the thread count without using ps or top?
The file /proc/$PID/status contains a line stating how many threads the process with PID $PID is using.
% grep Threads /proc/19304/status
Threads: 4
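If you want just the number and want to look the PID up by name in one step, something like this also works (a small variation on the same idea, using pidof -s as shown further down):
awk '/^Threads:/{print $2}' /proc/$(pidof -s puma)/status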
General comments
It is possible that you do not find the process of another user and therefore cannot get the number of threads that process is using. This could be due to the mount options of /proc/ (hidepid=2).
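A quick way to check how /proc is mounted (and whether hidepid is set) on your system:
mount | grep -w /proc
findmnt -o TARGET,OPTIONS /proc    # alternative on systems with util-linux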
Used example program:
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    char c = 0;
    /* the opening brace of the parallel region must go on its own line
       after the #pragma, otherwise it is swallowed by the directive */
    #pragma omp parallel shared(c)
    {
        int i = 0;
        if (omp_get_thread_num() == 0) {
            printf("Read character from input : ");
            c = getchar();
        } else {
            while (c == 0) i++;
            printf("Total sum is on thread %d : %d\n", omp_get_thread_num(), i);
        }
    }
    return 0;
}
compiled with gcc -fopenmp -o puma puma.c (assuming the source is saved as puma.c; gcc's OpenMP flag is -fopenmp)
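For example, a quick way to check that the counting commands above agree: run ./puma in one terminal and, from another one (the count you get depends on the number of cores / OMP_NUM_THREADS):
ps -h -o nlwp $(pgrep -n puma)
grep Threads /proc/$(pgrep -n puma)/status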
If you are just looking for the number of threads spawned by a process, you can count the task directories created under /proc/[pid-of-process]/task, because each thread gets its own directory under this path. So counting the directories is sufficient.
In fact, the ps utility itself reads its information from this path; for example, the command line it displays comes from the file /proc/[PID]/cmdline, presented in a more readable way.
From Linux Filesystem Hierarchy
/proc is very special in that it is also a virtual file-system. It's sometimes referred to as a process information pseudo-file system. It doesn't contain 'real' files but run-time system information (e.g. system memory, devices mounted, hardware configuration, etc). For this reason it can be regarded as a control and information center for the kernel. In fact, quite a lot of system utilities are simply calls to files in this directory.
All you need is the PID of the puma process; use ps or any utility of your choice:
ps aux | awk '/[p]uma/{print $2}'    # the second column of ps aux is the PID
or, more directly, use pidof(8), which returns the PID given the process name:
pidof -s puma
Now that you have the PID, count the task/ directories your process has created using find (-mindepth 1 keeps the task/ directory itself out of the count):
find /proc/<PID>/task -mindepth 1 -maxdepth 1 -type d -print | wc -l
ps aux | grep puma will only give you the list of puma processes. You need to find out how many threads are running inside a particular process. Maybe this will help you:
ps -T -p 2623
You need to provide the process ID for which you want the thread count. Make sure you are providing the correct process ID.
Using ps and wc to count puma threads:
ps --no-headers -T -C puma | wc -l
The string "puma" can be replaced as desired. For example, to count bash threads:
ps --no-headers -T -C bash | wc -l
On my system that outputs:
9
The code in the question, ps aux | grep puma, has a few grep-related problems:
It returns grep --color=auto puma, which isn't a puma thread at all.
Similarly, any util or command whose name contains the string "puma", e.g. one called notpuma, would be matched by grep.
I found "htop" to be an excellent solution. Just toggle "tree view" and you can view each puma-worker and the threads under that worker.
Number of puma threads for each worker:
ps aux | awk '/[p]uma/{print $2}' | xargs ps -h -o nlwp
Sample output:
7
59
59
61
59
60
59
59
59

How to disable core dumps in Linux

I'm trying to disable core dumps for my application, so I set ulimit -c 0.
But whenever I attach to the process with gdb using gdb --pid=<pid> and then run gcore, I still get a core dump for that application. I'm using bash:
-bash-3.2$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 65600
max locked memory (kbytes, -l) 50000000
max memory size (kbytes, -m) unlimited
open files (-n) 131072
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 10240
cpu time (seconds, -t) unlimited
max user processes (-u) 131072
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
-bash-3.2$ ps -ef | grep top
oracle 8951 8879 0 Mar05 ? 00:01:44 /u01/R122_EBS/fs1/FMW_Home/jrockit32 jre/bin/java -classpath /u01/R122_EBS/fs1/FMW_Home/webtier/opmn/lib/wlfullclient.jar:/u01/R122_EBS/fs1/FMW_Home/Oracle_EBS-app1/shared-libs/ebs-appsborg/WEB-INF/lib/ebsAppsborgManifest.jar:/u01/R122_EBS/fs1/EBSapps/comn/java/classes -mx256m oracle.apps.ad.tools.configuration.RegisterWLSListeners -appsuser APPS -appshost rws3510293 -appsjdbcconnectdesc jdbc:oracle:thin:#(DESCRIPTION=(ADDRESS_LIST=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS=(PROTOCOL=tcp)(HOST=rws3510293.us.oracle.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rahulshr))) -adtop /u01/R122_EBS/fs1/EBSapps/appl/ad/12.0.0 -wlshost rws3510293 -wlsuser weblogic -wlsport 7001 -dbsid rahulshr -dbhost rws3510293 -dbdomain us.oracle.com -dbport 1521 -outdir /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appltmp/Tue_Mar_5_00_42_52_2013 -log /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/logs/appl/rgf/Tue_Mar_5_00_42_52_2013/adRegisterWLSListeners.log -promptmsg hide -contextfile /u01/R122_EBS/fs1/inst/apps/rahulshr_rws3510293/appl/admin/rahulshr_rws3510293.xml
oracle 23694 22895 0 Mar05 pts/0 00:00:00 top
oracle 26235 22895 0 01:51 pts/0 00:00:00 grep top
-bash-3.2$ gcore
usage: gcore [-o filename] pid
-bash-3.2$ gcore 23694
0x000000355cacbfe8 in tcsetattr () from /lib64/libc.so.6
Saved corefile core.23694
[2]+ Stopped top
-bash-3.2$ ls -l
total 2384
-rw-r--r-- 1 oracle dba 2425288 Mar 6 01:52 core.23694
drwxr----- 3 oracle dba 4096 Mar 5 03:32 oradiag_oracle
-rwxr-xr-x 1 oracle dba 20 Mar 5 04:06 test.sh
-bash-3.2$
The gcore command in gdb does not use the Linux kernel's core file dumping code. It walks the process's memory itself and writes out a binary file in the same format as a process core file. This is apparent because the process is still active after issuing gcore, whereas if the kernel were dumping the core file, the process would have been terminated. Because the file is written by gdb rather than the kernel, the ulimit -c 0 setting (which only limits kernel-generated core dumps) has no effect on gcore.
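Since gcore only needs permission to ptrace-attach to the process, the practical way to stop it is to restrict ptrace rather than the core file size. A sketch; the yama knob exists only on kernels built with the Yama LSM (e.g. Ubuntu):
cat /proc/sys/kernel/yama/ptrace_scope               # 0 = any process of the same user may attach
echo 1 | sudo tee /proc/sys/kernel/yama/ptrace_scope # 1 = only ancestors of a process (or root) may attach
ulimit -c 0    # still useful: disables kernel-generated dumps (e.g. on SIGSEGV), just not gcore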
