GitLab and Redmine high memory usage - Ruby

I have a VPS with 1 GB of memory running Debian 7 stable, with GitLab and Redmine installed and nothing else (apart from the usual system processes).
This configuration consumes more than 900 MB of memory. I already reduced the Unicorn workers to 1, but it made no significant difference. Redmine is 2.5.1.stable and GitLab is 6-9-stable.
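For reference, the relevant Unicorn setting and the Passenger pool cap I'm considering (the unicorn.rb path is the standard GitLab source install one; the Passenger values are guesses I have not applied yet):
# /home/git/gitlab/config/unicorn.rb (already set)
worker_processes 1

# Apache vhost serving Redmine via Passenger (values are guesses, not current config)
PassengerMaxPoolSize 2
PassengerPoolIdleTime 60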
I wonder if there is a way to reduce the memory consumption and CPU load. I might use nginx instead of apache2, or postgres instead of mysql. What else?
Any suggestion is really appreciated.
Here is the list of running processes:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 30176 2160 ? Ss 10:38 0:00 init
root 2 0.0 0.0 0 0 ? S 10:38 0:00 [kthreadd/1723]
root 3 0.0 0.0 0 0 ? S 10:38 0:00 [khelper/1723]
root 227 0.0 0.0 16988 884 ? S 10:38 0:00 upstart-udev-bridge --daemon
root 235 0.0 0.1 21300 1352 ? Ss 10:38 0:00 /sbin/udevd --daemon
root 283 0.0 0.0 21296 1024 ? S 10:38 0:00 /sbin/udevd --daemon
root 284 0.0 0.0 21296 1028 ? S 10:38 0:00 /sbin/udevd --daemon
root 428 0.0 0.0 14936 640 ? S 10:38 0:00 upstart-socket-bridge --daemon
root 1874 0.0 0.1 58740 1652 ? Sl 10:38 0:00 /usr/sbin/rsyslogd -c5
root 1920 0.0 0.0 57568 988 ? Ss 10:38 0:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 1922 0.0 0.0 57568 632 ? S 10:38 0:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 1988 0.0 0.2 72552 2648 ? Ss 10:38 0:00 sendmail: MTA: accepting connections
root 2029 0.0 0.1 49888 1244 ? Ss 10:38 0:00 /usr/sbin/sshd
root 2061 0.0 0.0 19520 964 ? Ss 10:38 0:00 /usr/sbin/xinetd -pidfile /var/run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6
root 2113 0.0 2.1 301048 22244 ? Ss 10:38 0:00 /usr/sbin/apache2 -k start
root 2156 0.0 0.0 20364 1044 ? Ss 10:38 0:00 /usr/sbin/cron
root 2206 0.0 0.0 4136 712 ? S 10:38 0:00 /bin/sh /usr/bin/mysqld_safe
root 2322 0.0 0.1 23368 1968 ? Ssl 10:38 0:00 PassengerWatchdog
root 2337 0.5 0.2 100600 2652 ? Sl 10:38 0:19 PassengerHelperAgent
root 2348 0.0 0.9 46372 10412 ? Sl 10:38 0:00 Passenger spawn server
nobody 2353 0.0 0.3 81832 4168 ? Sl 10:38 0:00 PassengerLoggingAgent
mysql 2551 0.0 5.2 464312 55360 ? Sl 10:38 0:01 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysq
root 2552 0.0 0.0 4044 668 ? S 10:38 0:00 logger -t mysqld -p daemon.error
root 2684 0.0 0.7 58116 8300 ? S 10:38 0:00 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf
redis 2708 0.0 0.1 39964 1676 ? Ssl 10:38 0:00 /usr/bin/redis-server /etc/redis/redis.conf
git 2811 0.5 13.0 377628 136484 ? Sl 10:38 0:19 unicorn_rails master -D -c /home/git/gitlab/config/unicorn.rb -E production
git 2846 0.0 12.3 377628 129148 ? Sl 10:38 0:00 unicorn_rails worker[0] -D -c /home/git/gitlab/config/unicorn.rb -E production
git 2873 0.6 13.6 428528 143532 ? Sl 10:38 0:23 sidekiq 2.17.0 gitlab [0 of 25 busy]
root 2892 0.0 0.2 32712 2248 ? Ss 10:38 0:00 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 106:111
root 2913 0.0 0.0 14532 876 tty1 Ss+ 10:38 0:00 /sbin/getty 38400 console
root 2915 0.0 0.0 14532 880 tty2 Ss+ 10:38 0:00 /sbin/getty 38400 tty2
admin 2976 0.1 8.6 265444 90444 ? Sl 10:39 0:03 Passenger ApplicationSpawner: /var/www/redmine
admin 2984 0.0 9.8 282032 103736 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 2992 0.0 8.1 265444 85744 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 2998 0.0 8.1 265444 85764 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3004 0.0 8.1 265444 85760 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3010 0.0 8.1 265444 85760 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3016 0.0 9.9 282532 104400 ? Sl 10:39 0:01 Rails: /var/www/redmine
www-data 3026 0.0 1.6 301492 17416 ? S 10:39 0:00 /usr/sbin/apache2 -k start
git 3042 0.0 12.8 313152 134320 ? Sl 10:39 0:00 Rack: /home/git/gitlab
root 3794 0.0 0.3 71248 3628 ? Ss 11:23 0:00 sshd: admin [priv]
admin 3797 0.0 0.1 71248 1824 ? R 11:23 0:00 sshd: admin@pts/0
admin 3798 0.0 0.2 19428 2228 pts/0 Ss 11:23 0:00 -bash
www-data 3922 0.0 1.6 301520 17448 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3926 0.0 1.6 301472 17328 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3929 0.0 1.6 301472 17288 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3930 0.0 1.5 301256 16220 ? S 11:32 0:00 /usr/sbin/apache2 -k start
root 4012 0.0 0.2 72552 2876 ? S 11:38 0:00 sendmail: MTA: ./s59ECXBN022245 example.com.: user open
and this is the result of "free -m":
total used free shared buffers cached
Mem: 1024 962 61 0 0 196
-/+ buffers/cache: 766 257
Swap: 1024 0 1024

Related

SLURM sbatch multiple parent jobs in parallel, each with multiple child jobs

I want to run a Fortran code called orbits_01 on SLURM. I want to run multiple jobs simultaneously (i.e. parallelize over multiple cores). Each orbits_01 program then calls another executable called optimizer, and the optimizer repeatedly calls another Python script called relax.py. When I submitted the jobs to SLURM with sbatch python main1.py, the jobs failed to even call the optimizer. However, the whole scheme works fine when I run it locally. The local process status is shown below:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
shuha 39395 0.0 0.0 161540 3064 ? S Oct22 0:19 sshd: shuha@pts/72
shuha 39396 0.0 0.0 118252 5020 pts/72 Ss Oct22 0:11 \_ -bash
shuha 32351 0.3 0.0 318648 27840 pts/72 S 02:08 0:00 \_ python3 main1.py
shuha 32968 0.0 0.0 149404 1920 pts/72 R+ 02:10 0:00 \_ ps uxf
shuha 32446 0.0 0.0 10636 1392 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32951 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32954 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32955 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32956 99.6 0.0 749900 101944 pts/72 R 02:10 0:02 \_ python relax.py
shuha 32410 0.0 0.0 10636 1388 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32963 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32964 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32965 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32966 149 0.0 760316 111992 pts/72 R 02:10 0:01 \_ python relax.py
shuha 32372 0.0 0.0 10636 1388 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32949 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32950 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32952 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32953 100 0.0 749892 101936 pts/72 R 02:10 0:03 \_ python relax.py
I have a main Python script called main1.py, which uses a for loop to launch multiple orbits_01 jobs at the same time and then waits for all of them to finish. Here 3 parent orbits_01 jobs are running in parallel, and each parent job has multiple child jobs. The heavy computation is done by the Python code relax.py, so each job should be able to run on a single core. What is the best way to submit and parallelize multiple parent jobs, each with multiple child jobs, over all cores of one node on SLURM?
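For illustration, the kind of submission script I have in mind — a rough, untested sketch that assumes one core per parent job and placeholder run directories run1..run3:
#!/bin/bash
#SBATCH --job-name=orbits
#SBATCH --nodes=1
#SBATCH --ntasks=3          # one task per parent orbits_01 job
#SBATCH --cpus-per-task=1

# each parent job runs in its own directory as its own one-core job step
for dir in run1 run2 run3; do
    (cd "$dir" && srun --exclusive -n1 -c1 ../orbits_01.x) &
done
wait   # return only after every parent job (and its child processes) finishes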

.bashrc somehow looping and sourcing itself (fork bomb)

I'm using a web host running Apache, with terminal access, to host a NodeJS application. For the most part everything runs smoothly; however, when I open the terminal I often get bash: fork: retry: no child processes and bash: fork: retry: resource temporarily unavailable.
I've narrowed down the cause of the problem to my .bashrc file, as when using top I could see that the many excess processes being created were bash instances:
top - 13:41:13 up 71 days, 20:57, 0 users, load average: 1.82, 1.81, 1.72
Tasks: 14 total, 1 running, 2 sleeping, 11 stopped, 0 zombie
%Cpu(s): 11.7 us, 2.7 sy, 0.1 ni, 85.5 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 41034544 total, 2903992 free, 6525792 used, 31604760 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 28583704 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1001511 xxxxxxxx 20 0 11880 3692 1384 S 0.0 0.0 0:00.02 bash
1001578 xxxxxxxx 20 0 11880 2840 524 T 0.0 0.0 0:00.00 bash
1001598 xxxxxxxx 20 0 11880 2672 348 T 0.0 0.0 0:00.00 bash
1001599 xxxxxxxx 20 0 11880 2896 524 T 0.0 0.0 0:00.00 bash
1001600 xxxxxxxx 20 0 11880 2720 396 T 0.0 0.0 0:00.00 bash
1001607 xxxxxxxx 20 0 11880 2928 532 T 0.0 0.0 0:00.00 bash
1001613 xxxxxxxx 20 0 11880 2964 532 T 0.0 0.0 0:00.00 bash
1001618 xxxxxxxx 20 0 11880 2780 348 T 0.0 0.0 0:00.00 bash
1001619 xxxxxxxx 20 0 12012 3024 544 T 0.0 0.0 0:00.00 bash
1001620 xxxxxxxx 20 0 11880 2804 372 T 0.0 0.0 0:00.00 bash
1001651 xxxxxxxx 20 0 12012 2836 352 T 0.0 0.0 0:00.00 bash
1001653 xxxxxxxx 20 0 12016 3392 896 T 0.0 0.0 0:00.00 bash
1004463 xxxxxxxx 20 0 9904 1840 1444 S 0.0 0.0 0:00.00 bash
1005200 xxxxxxxx 20 0 56364 1928 1412 R 0.0 0.0 0:00.00 top
~/.bashrc consists of only:
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# Uncomment the following line if you don't like systemctl's auto-paging feature:
# export SYSTEMD_PAGER=
# User specific aliases and functions
export NVM_DIR="$HOME/.nvm"
[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
If I comment out the last 3 lines like so:
#export NVM_DIR="$HOME/.nvm"
#[ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh" # This loads nvm
#[ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion" # This loads nvm bash_completion
Then the terminal functions as expected and no excess processes are created. However, I obviously can't use nvm/npm commands while those lines are disabled, since nvm is never loaded.
I'm relatively inexperienced with bash and can't seem to figure out why this is happening. It seems that bash is somehow calling itself every time it opens, which creates the loop/fork bomb once the terminal is opened.
How can I prevent this while still being able to use nvm/npm?
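One guard I am considering — a minimal sketch that assumes the loop comes from .bashrc being re-sourced by nested or non-interactive shells (BASHRC_SOURCED is just a sentinel name I made up, not something nvm defines):
# ~/.bashrc (sketch)
# Bail out early for non-interactive shells so scripts never reach the nvm block.
case $- in
    *i*) ;;
    *) return ;;
esac

# Only load nvm once; BASHRC_SOURCED is a made-up sentinel and only protects
# against this file being sourced twice within the same shell.
if [ -z "$BASHRC_SOURCED" ]; then
    BASHRC_SOURCED=1
    export NVM_DIR="$HOME/.nvm"
    [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"                 # loads nvm
    [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"
fi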

Optimizing vncscreenshot scripts

Good Day,
I'm using vncsnapshot (http://vncsnapshot.sourceforge.net/) in a Debian 7 environment to capture screenshots of workstations and monitor staff desktop activity. It captures screenshots via nmap and saves them to a location that is accessed via an internal web page.
I have scripts like this, where x.x.x.x is the IP range of the network, so that all open workstations are captured:
#!/bin/bash
nmap -v -p5900 --script=vnc-screenshot-it --script-args vnc-screenshot.quality=30 x.x.x.x
They are set up in crontab to run every 5 minutes.
The server ends up with too many running processes because of this. Here is a sample of the ps output:
root 32696 0.0 0.0 4368 0 ? S Feb23 0:00 /bin/bash /var/www/vncsnapshot/.scripts/.account.sh
root 32708 0.0 0.0 14580 4 ? S Feb23 0:00 nmap -v -p5900,5901,5902 --script=vnc-screenshot-mb
root 32717 0.0 0.0 1952 60 ? S Apr10 0:00 sh -c vncsnapshot -cursor -quality 30 x.x.x.x
root 32719 0.0 0.1 11480 4892 ? S Apr10 0:00 vncsnapshot -cursor -quality 30 30 x.x.x.x /var/w
root 32720 0.0 0.0 1952 60 ? S Apr25 0:00 sh -c vncsnapshot -cursor -quality 30 30 x.x.x.x
root 32722 0.0 0.0 1952 4 ? Ss Feb09 0:00 /bin/sh -c /var/www/vncsnapshot/.scripts/.account.sh
root 32723 0.0 0.0 3796 140 ? S Apr25 0:00 vncsnapshot -cursor -quality 30 30 x.x.x.x /var/w
root 32730 0.0 0.0 1952 4 ? Ss Feb08 0:00 /bin/sh -c /var/www/vncsnapshot/.scripts/.account
root 32734 0.0 0.0 4364 0 ? S Feb08 0:00 /bin/bash /var/www/vncsnapshot/.scripts/.account.
root 32741 0.0 0.0 13700 4 ? S Feb08 0:00 nmap -v -p5900 --script=vnc-screenshot-account --
root 32755 0.0 0.0 1952 4 ? Ss Feb08 0:00 /bin/sh -c /var/www/vncsnapshot/.scripts/.account.sh
root 32757 0.0 0.0 1952 4 ? S Feb07 0:00 sh -c vncsnapshot -cursor -quality 30 30 x.x.x.x
root 32760 0.0 0.0 3796 0 ? S Feb07 0:00 vncsnapshot -cursor -quality 30 30 x.x.x.x /var/w
root 32762 0.0 0.0 4368 0 ? S Feb09 0:00 /bin/bash /var/www/vncsnapshot/.scripts/.account.sh
root 32764 0.0 0.0 4368 0 ? S Feb08 0:00 /bin/bash /var/www/vncsnapshot/.scripts/.account.sh
How can I optimize this setup so that unnecessary processes are not left running?
Thanks
I split the process into two parts: an nmap script that regularly scans the network, and a vncsnapshot script that grabs screenshots of the previously scanned hosts.
In my opinion, this keeps things cleaner.
I haven't tested this code:
#!/bin/bash
## capture the list of hosts with the VNC port open
list=/dev/shm/list
port=5900
network="192.168.1.*"       # quoted so the shell never glob-expands it
nmap -n -p${port} --open "${network}" -oG - | grep 'open/tcp' | awk '{print $2}' > "${list}"
The other script checks via a lock file whether a grab is still running for each host and, if not, launches the grab command:
#!/bin/bash
list=/dev/shm/list
run=/run/vncscreenshot/
mkdir -p "${run}" &>/dev/null
while read -r host
do
    lock="${run}/${host}.lock"
    # skip this host if the previous grab is still running
    test -e "${lock}" && ps -p "$(<"${lock}")" &>/dev/null && continue
    # output path is an assumption; point it wherever your web page reads from
    vncsnapshot -cursor -quality 30 "${host}" "${run}/${host}.jpg" &
    echo $! > "${lock}"
done < "${list}"
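Both scripts can then be driven from crontab, something like this (script names and intervals are placeholders; the scan can run less often than the grab):
# rescan the network every 15 minutes, grab screenshots every 5 minutes
*/15 * * * * /usr/local/bin/vnc-scan.sh
*/5  * * * * /usr/local/bin/vnc-grab.sh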

Apache running slow without using all RAM

I have a CentOS server running Apache with 8 GB of RAM. It loads even simple PHP pages very slowly. I have set the following in my config file. I cannot see any httpd process using more than 100m. It usually slows down about 5 minutes after a restart.
<IfModule prefork.c>
StartServers 12
MinSpareServers 12
MaxSpareServers 12
ServerLimit 150
MaxClients 150
MaxRequestsPerChild 1000
</IfModule>
$ ps -ylC httpd | awk '{x += $8;y += 1} END {print "Apache Memory Usage (MB): "x/1024; print "Average Proccess Size (MB): "x/((y-1)*1024)}'
Apache Memory Usage (MB): 1896.09
Average Proccess Size (MB): 36.4633
What else can I do to make the pages load faster?
$ free -m
total used free shared buffers cached
Mem: 7872 847 7024 0 29 328
-/+ buffers/cache: 489 7382
Swap: 7999 934 7065
top - 15:42:17 up 545 days, 16:46, 2 users, load average: 0.05, 0.06, 0.
Tasks: 251 total, 1 running, 250 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 2.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.
Mem: 8060928k total, 909112k used, 7151816k free, 30216k buffers
Swap: 8191992k total, 956880k used, 7235112k free, 336612k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
16544 apache 20 0 734m 47m 10m S 0.0 0.6 0:00.21 httpd
16334 apache 20 0 731m 45m 10m S 0.0 0.6 0:00.41 httpd
16212 apache 20 0 723m 37m 10m S 0.0 0.5 0:00.72 httpd
16555 apache 20 0 724m 37m 10m S 0.0 0.5 0:00.25 httpd
16347 apache 20 0 724m 36m 10m S 0.0 0.5 0:00.42 httpd
16608 apache 20 0 721m 34m 10m S 0.0 0.4 0:00.16 httpd
16088 apache 20 0 717m 31m 10m S 0.0 0.4 0:00.35 httpd
16012 apache 20 0 717m 30m 10m S 0.0 0.4 0:00.78 httpd
16338 apache 20 0 716m 30m 10m S 0.0 0.4 0:00.36 httpd
16336 apache 20 0 715m 29m 10m S 0.0 0.4 0:00.42 httpd
16560 apache 20 0 716m 29m 9.9m S 0.0 0.4 0:00.06 httpd
16346 apache 20 0 715m 28m 10m S 0.0 0.4 0:00.28 httpd
16016 apache 20 0 714m 28m 10m S 0.0 0.4 0:00.74 httpd
16497 apache 20 0 715m 28m 10m S 0.0 0.4 0:00.18 httpd
16607 apache 20 0 714m 27m 9m S 0.0 0.4 0:00.17 httpd
16007 root 20 0 597m 27m 15m S 0.0 0.3 0:00.13 httpd
16694 apache 20 0 713m 26m 10m S 0.0 0.3 0:00.10 httpd
16695 apache 20 0 712m 25m 9.9m S 0.0 0.3 0:00.04 httpd
16554 apache 20 0 712m 25m 10m S 0.0 0.3 0:00.15 httpd
16691 apache 20 0 598m 14m 2752 S 0.0 0.2 0:00.00 httpd
22613 root 20 0 884m 12m 6664 S 0.0 0.2 132:10.11 agtrep
16700 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
16750 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
16751 apache 20 0 597m 12m 712 S 0.0 0.2 0:00.00 httpd
2374 root 20 0 2616m 8032 1024 S 0.0 0.1 171:31.74 python
9699 root 0 -20 50304 6488 1168 S 0.0 0.1 1467:01 scopeux
9535 root 20 0 644m 6304 2700 S 0.0 0.1 21:01.24 coda
14976 root 20 0 246m 5800 2452 S 0.0 0.1 42:44.70 sssd_be
22563 root 20 0 825m 4704 2636 S 0.0 0.1 44:07.68 opcmona
22496 root 20 0 880m 4540 3304 S 0.0 0.1 13:54.78 opcmsga
22469 root 20 0 856m 4428 2804 S 0.0 0.1 1:18.45 ovconfd
22433 root 20 0 654m 4144 2752 S 0.0 0.1 10:45.71 ovbbccb
22552 root 20 0 253m 2936 1168 S 0.0 0.0 50:35.27 opcle
22521 root 20 0 152m 1820 1044 S 0.0 0.0 0:53.57 opcmsgi
14977 root 20 0 215m 1736 1020 S 0.0 0.0 15:53.13 sssd_nss
16255 root 20 0 254m 1704 1152 S 0.0 0.0 92:07.63 vmtoolsd
24180 root -51 -20 14788 1668 1080 S 0.0 0.0 9:48.57 midaemon
I do not have root access.
I have updated the config to the following, which seems better, but I occasionally see a 7 GB httpd process:
<IfModule prefork.c>
StartServers 12
MinSpareServers 12
MaxSpareServers 12
ServerLimit 150
MaxClients 150
MaxRequestsPerChild 0
</IfModule>
top - 09:13:42 up 546 days, 10:18, 2 users, load average: 1.86, 1.51, 0.78
Tasks: 246 total, 2 running, 244 sleeping, 0 stopped, 0 zombie
Cpu(s): 28.6%us, 9.5%sy, 0.0%ni, 45.2%id, 16.7%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8060928k total, 7903004k used, 157924k free, 2540k buffers
Swap: 8191992k total, 8023596k used, 168396k free, 31348k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2466 apache 20 0 14.4g 7.1g 240 R 100.0 92.1 4:58.95 httpd
2285 apache 20 0 730m 31m 7644 S 0.0 0.4 0:02.37 httpd
2524 apache 20 0 723m 23m 7488 S 0.0 0.3 0:01.75 httpd
3770 apache 20 0 716m 21m 10m S 0.0 0.3 0:00.29 httpd
3435 apache 20 0 716m 20m 9496 S 0.0 0.3 0:00.60 httpd
3715 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.35 httpd
3780 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.22 httpd
3778 apache 20 0 713m 19m 10m S 0.0 0.2 0:00.28 httpd
3720 apache 20 0 712m 18m 10m S 0.0 0.2 0:00.21 httpd
3767 apache 20 0 712m 18m 10m S 0.0 0.2 0:00.21 httpd
3925 apache 20 0 712m 17m 10m S 0.0 0.2 0:00.11 httpd
2727 apache 20 0 716m 17m 7576 S 0.0 0.2 0:01.66 httpd
2374 root 20 0 2680m 14m 2344 S 0.0 0.2 173:44.40 python
9699 root 0 -20 50140 5556 624 S 0.0 0.1 1475:46 scopeux
3924 apache 20 0 598m 5016 2872 S 0.0 0.1 0:00.00 httpd
3926 apache 20 0 598m 5000 2872 S 0.0 0.1 0:00.00 httpd
14976 root 20 0 246m 2400 1280 S 0.0 0.0 42:51.54 sssd_be
9535 root 20 0 644m 2392 752 S 0.0 0.0 21:07.36 coda
22563 root 20 0 825m 2000 952 S 0.0 0.0 44:16.37 opcmona
22552 root 20 0 254m 1820 868 S 0.0 0.0 50:48.12 opcle
16255 root 20 0 254m 1688 1144 S 0.0 0.0 92:53.74 vmtoolsd
22536 root 20 0 282m 1268 892 S 0.0 0.0 24:21.73 opcacta
16784 root 20 0 597m 1236 180 S 0.0 0.0 0:02.16 httpd
14977 root 20 0 215m 1092 864 S 0.0 0.0 15:57.32 sssd_nss
22496 root 20 0 880m 1076 864 S 0.0 0.0 13:57.86 opcmsga
22425 root 20 0 1834m 944 460 S 0.0 0.0 74:12.96 ovcd
22433 root 20 0 654m 896 524 S 0.0 0.0 10:48.00 ovbbccb
2634 oiadmin 20 0 15172 876 516 R 9.1 0.0 0:14.78 top
2888 root 20 0 103m 808 776 S 0.0 0.0 0:00.19 sshd
16397 root 20 0 207m 748 420 S 0.0 0.0 32:52.23 ManagementAgent
2898 oiadmin 20 0 103m 696 556 S 0.0 0.0 0:00.08 sshd
22613 root 20 0 884m 580 300 S 0.0 0.0 132:34.94 agtrep
20886 root 20 0 245m 552 332 S 0.0 0.0 79:09.05 rsyslogd
2899 oiadmin 20 0 105m 496 496 S 0.0 0.0 0:00.03 bash
24180 root -51 -20 14788 456 408 S 0.0 0.0 9:50.43 midaemon
14978 root 20 0 203m 440 308 S 0.0 0.0 9:28.87 sssd_pam
14975 root 20 0 203m 432 288 S 0.0 0.0 21:45.01 sssd
8215 root 20 0 88840 420 256 S 0.0 0.0 3:28.13 sendmail
18909 oiadmin 20 0 103m 408 256 S 0.0 0.0 0:02.83 sshd
1896 root 20 0 9140 332 232 S 0.0 0.0 50:39.87 irqbalance
2990 oiadmin 20 0 98.6m 320 276 S 0.0 0.0 0:00.04 tail
4427 root 20 0 114m 288 196 S 0.0 0.0 8:58.77 crond
25628 root 20 0 4516 280 176 S 0.0 0.0 11:15.24 ndtask
4382 ntp 20 0 28456 276 176 S 0.0 0.0 0:28.61 ntpd
8227 smmsp 20 0 78220 232 232 S 0.0 0.0 0:05.09 sendmail
25634 root 20 0 6564 200 68 S 0.0 0.0 4:50.30 mgsusageag
4926 root 20 0 110m 188 124 S 0.0 0.0 3:23.79 abrt-dump-oops
9744 root 20 0 197m 180 136 S 0.0 0.0 1:46.59 perfalarm
22469 root 20 0 856m 128 128 S 0.0 0.0 1:18.65 ovconfd
4506 rpc 20 0 19036 84 40 S 0.0 0.0 1:44.05 rpcbind
32193 root 20 0 66216 68 60 S 0.0 0.0 4:54.51 sshd
18910 oiadmin 20 0 105m 52 52 S 0.0 0.0 0:00.11 bash
22521 root 20 0 152m 44 44 S 0.0 0.0 0:53.71 opcmsgi
18903 root 20 0 103m 12 12 S 0.0 0.0 0:00.22 sshd
1 root 20 0 19356 4 4 S 0.0 0.0 3:57.84 init
1731 root 20 0 105m 4 4 S 0.0 0.0 0:01.91 rhsmcertd
1983 dbus 20 0 97304 4 4 S 0.0 0.0 0:16.92 dbus-daemon
2225 root 20 0 4056 4 4 S 0.0 0.0 0:00.01 mingetty
Your server is slow because you are condemning it to a non-threaded, ever-reclaiming-children scenario.
That is, you use more than 12 processes but your MaxSpareServers is 12, so httpd is constantly spawning and reaping children, and that is precisely the biggest weakness of a non-threaded MPM. Such a low MaxRequestsPerChild won't help either if you get a decent number of requests per second; it is understandable since you are using mod_php, but that value increases the constant re-spawning.
In any OS, spawning processes costs much more CPU than spawning threads inside a process.
So either set MaxSpareServers to a much higher number, so your server keeps plenty of idle children ready to serve requests, or stop using mod_php + prefork (and probably .htaccess, which, contrary to popular belief, is not required for Apache httpd to work) and move to a more reliable setup: httpd with mpm_event + mod_proxy_fcgi + php-fpm. There you can configure hundreds of threads that Apache spawns and reuses in a blink, and all of the PHP load lives under PHP's own daemon, php-fpm.
So it's not Apache; it's your ever-respawning-process setup on a non-threaded MPM that is giving you trouble.
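A minimal sketch of that kind of setup, assuming php-fpm listens on a Unix socket (the socket path and thread counts are placeholders to adapt, not values from this server):
# event MPM instead of prefork
<IfModule mpm_event_module>
    StartServers            2
    MinSpareThreads        25
    MaxSpareThreads        75
    ThreadsPerChild        25
    MaxRequestWorkers     150
</IfModule>

# hand PHP off to php-fpm through mod_proxy_fcgi (needs Apache 2.4.10+)
<FilesMatch "\.php$">
    SetHandler "proxy:unix:/run/php-fpm/www.sock|fcgi://localhost"
</FilesMatch>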

Append Output results

I'm running validation software, and I want all of the output sent to a text file, with the results from multiple files appended to the same file. I thought my code was working, but I just discovered that only the results from one file end up in the text file.
java -jar /Applications/epubcheck-3.0.1/epubcheck-3.0.1.jar ~/Desktop/Validator/*.epub 2>&1 | tee -a ~/Desktop/Validator/EPUBCHECK3_results.txt
open ~/Desktop/Validator/EPUBCHECK3_results.txt
EDIT
When I run the same .jar file from the Windows command line it processes a batch of files and appends the results appropriately. I would just do that, but it would mean switching workstations and transferring files to validate them. I would like to get this running through the Unix shell on my Mac system so that I don't have to do unnecessary work. The command line that IS working is below:
FOR /f %%1 in ('dir /b "C:\Users\scrawfo\Desktop\epubcheck\drop epubs here\*.epub"') do (
echo %%1 >> epubcheck.txt
java -jar "C:\Users\scrawfo\Desktop\epubcheck\epubcheck-3.0.jar" "C:\Users\scrawfo\Desktop\epubcheck\drop epubs here\%%1" 2>> epubcheck.txt
echo. >> epubcheck.txt)
notepad epubcheck.txt
del epubcheck.txt
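For reference, my attempt at a bash equivalent of that batch loop (an untested sketch using the same paths as above):
#!/bin/bash
# run epubcheck on each .epub separately and append everything to one results file
out=~/Desktop/Validator/EPUBCHECK3_results.txt
for f in ~/Desktop/Validator/*.epub; do
    echo "$f" >> "$out"
    java -jar /Applications/epubcheck-3.0.1/epubcheck-3.0.1.jar "$f" >> "$out" 2>&1
    echo >> "$out"
done
open "$out"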
The syntax you provided is correct; there might be some problem with the Java output or something similar. Try executing it without redirection. To verify the syntax, I started with a test file:
cat test
Output:-
This is Test File ...............
Next, I executed a command with the same syntax:
ps l 2>&1 | tee -a test
Output:-
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 3287 1 20 0 4060 572 n_tty_ Ss+ tty2 0:00 /sbin/mingetty /dev/tty2
4 0 3289 1 20 0 4060 572 n_tty_ Ss+ tty3 0:00 /sbin/mingetty /dev/tty3
4 0 3291 1 20 0 4060 576 n_tty_ Ss+ tty4 0:00 /sbin/mingetty /dev/tty4
4 0 3295 1 20 0 4060 576 n_tty_ Ss+ tty5 0:00 /sbin/mingetty /dev/tty5
4 0 3297 1 20 0 4060 572 n_tty_ Ss+ tty6 0:00 /sbin/mingetty /dev/tty6
4 0 19086 1 20 0 4060 572 n_tty_ Ss+ tty1 0:00 /sbin/mingetty /dev/tty1
4 0 20837 20833 20 0 108432 2148 wait Ss pts/0 0:00 -bash
4 0 21471 20837 20 0 108124 1036 - R+ pts/0 0:00 ps l
0 0 21472 20837 20 0 100908 664 pipe_w S+ pts/0 0:00 tee -a test
Checked the file:
cat test
Output:-(Appended properly)
This is Test File ...............
F UID PID PPID PRI NI VSZ RSS WCHAN STAT TTY TIME COMMAND
4 0 3287 1 20 0 4060 572 n_tty_ Ss+ tty2 0:00 /sbin/mingetty /dev/tty2
4 0 3289 1 20 0 4060 572 n_tty_ Ss+ tty3 0:00 /sbin/mingetty /dev/tty3
4 0 3291 1 20 0 4060 576 n_tty_ Ss+ tty4 0:00 /sbin/mingetty /dev/tty4
4 0 3295 1 20 0 4060 576 n_tty_ Ss+ tty5 0:00 /sbin/mingetty /dev/tty5
4 0 3297 1 20 0 4060 572 n_tty_ Ss+ tty6 0:00 /sbin/mingetty /dev/tty6
4 0 19086 1 20 0 4060 572 n_tty_ Ss+ tty1 0:00 /sbin/mingetty /dev/tty1
4 0 20837 20833 20 0 108432 2148 wait Ss pts/0 0:00 -bash
4 0 21471 20837 20 0 108124 1036 - R+ pts/0 0:00 ps l
0 0 21472 20837 20 0 100908 664 pipe_w S+ pts/0 0:00 tee -a test
