Append Output results - shell

I'm running validation software and I want all of the output sent to a text file, with the results from multiple files appended to the same file. I thought my code was working, but I just discovered I'm only getting the results from one file in the text file.
java -jar /Applications/epubcheck-3.0.1/epubcheck-3.0.1.jar ~/Desktop/Validator/*.epub 2>&1 | tee -a ~/Desktop/Validator/EPUBCHECK3_results.txt
open ~/Desktop/Validator/EPUBCHECK3_results.txt
EDIT
When I run the same .jar file from the Windows command line, it processes a batch of files and appends the results appropriately. I could just do this, but it would mean switching workstations and transferring files to validate them. I would like to get this running through the Unix shell on my Mac so that I don't have to do unnecessary work. The command line that IS working is below:
FOR /f %%1 in ('dir /b "C:\Users\scrawfo\Desktop\epubcheck\drop epubs here\*.epub"') do (
echo %%1 >> epubcheck.txt
java -jar "C:\Users\scrawfo\Desktop\epubcheck\epubcheck-3.0.jar" "C:\Users\scrawfo\Desktop\epubcheck\drop epubs here\%%1" 2>> epubcheck.txt
echo. >> epubcheck.txt)
notepad epubcheck.txt
del epubcheck.txt
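For the record, a shell for loop mirrors the Windows batch above (a sketch using the same paths as in the question; epubcheck 3.x expects a single file argument per invocation, which would explain why the glob only produced one result):

```shell
#!/bin/bash
# Loop over the .epub files one at a time, like the Windows FOR /f loop:
# epubcheck runs once per file and everything is appended to one results file.
out=~/Desktop/Validator/EPUBCHECK3_results.txt
for f in ~/Desktop/Validator/*.epub; do
    echo "$f" >> "$out"
    java -jar /Applications/epubcheck-3.0.1/epubcheck-3.0.1.jar "$f" >> "$out" 2>&1
    echo >> "$out"
done
open "$out"
```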

The syntax you provided is correct; there might be some problem with the Java output. Try executing it without redirection first:
cat test
Output:-
This is Test File ...............
Next, I executed a command with the same syntax:
ps l 2>&1 | tee -a test
Output:-
F UID   PID  PPID PRI NI    VSZ  RSS WCHAN  STAT TTY    TIME COMMAND
4   0  3287     1  20  0   4060  572 n_tty_ Ss+  tty2   0:00 /sbin/mingetty /dev/tty2
4   0  3289     1  20  0   4060  572 n_tty_ Ss+  tty3   0:00 /sbin/mingetty /dev/tty3
4   0  3291     1  20  0   4060  576 n_tty_ Ss+  tty4   0:00 /sbin/mingetty /dev/tty4
4   0  3295     1  20  0   4060  576 n_tty_ Ss+  tty5   0:00 /sbin/mingetty /dev/tty5
4   0  3297     1  20  0   4060  572 n_tty_ Ss+  tty6   0:00 /sbin/mingetty /dev/tty6
4   0 19086     1  20  0   4060  572 n_tty_ Ss+  tty1   0:00 /sbin/mingetty /dev/tty1
4   0 20837 20833  20  0 108432 2148 wait   Ss   pts/0  0:00 -bash
4   0 21471 20837  20  0 108124 1036 -      R+   pts/0  0:00 ps l
0   0 21472 20837  20  0 100908  664 pipe_w S+   pts/0  0:00 tee -a test
Then I checked the file:
cat test
Output (appended properly):
This is Test File ...............
F UID   PID  PPID PRI NI    VSZ  RSS WCHAN  STAT TTY    TIME COMMAND
4   0  3287     1  20  0   4060  572 n_tty_ Ss+  tty2   0:00 /sbin/mingetty /dev/tty2
4   0  3289     1  20  0   4060  572 n_tty_ Ss+  tty3   0:00 /sbin/mingetty /dev/tty3
4   0  3291     1  20  0   4060  576 n_tty_ Ss+  tty4   0:00 /sbin/mingetty /dev/tty4
4   0  3295     1  20  0   4060  576 n_tty_ Ss+  tty5   0:00 /sbin/mingetty /dev/tty5
4   0  3297     1  20  0   4060  572 n_tty_ Ss+  tty6   0:00 /sbin/mingetty /dev/tty6
4   0 19086     1  20  0   4060  572 n_tty_ Ss+  tty1   0:00 /sbin/mingetty /dev/tty1
4   0 20837 20833  20  0 108432 2148 wait   Ss   pts/0  0:00 -bash
4   0 21471 20837  20  0 108124 1036 -      R+   pts/0  0:00 ps l
0   0 21472 20837  20  0 100908  664 pipe_w S+   pts/0  0:00 tee -a test

Related

SLURM sbatch multiple parent jobs in parallel, each with multiple child jobs

I want to run a Fortran code called orbits_01 on SLURM, with multiple jobs running simultaneously (i.e. parallelized over multiple cores). Each orbits_01 program will call another executable called optimizer, and the optimizer will repeatedly call another Python script called relax.py. When I submitted the jobs to SLURM with sbatch python main1.py, the jobs failed to even call the optimizer. However, the whole scheme works fine when I run it locally. The local process status is shown below:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
shuha 39395 0.0 0.0 161540 3064 ? S Oct22 0:19 sshd: shuha@pts/72
shuha 39396 0.0 0.0 118252 5020 pts/72 Ss Oct22 0:11 \_ -bash
shuha 32351 0.3 0.0 318648 27840 pts/72 S 02:08 0:00 \_ python3 main1.py
shuha 32968 0.0 0.0 149404 1920 pts/72 R+ 02:10 0:00 \_ ps uxf
shuha 32446 0.0 0.0 10636 1392 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32951 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32954 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32955 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32956 99.6 0.0 749900 101944 pts/72 R 02:10 0:02 \_ python relax.py
shuha 32410 0.0 0.0 10636 1388 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32963 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32964 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32965 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32966 149 0.0 760316 111992 pts/72 R 02:10 0:01 \_ python relax.py
shuha 32372 0.0 0.0 10636 1388 pts/72 S 02:08 0:00 ../orbits_01.x
shuha 32949 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c ./optimizer >& log
shuha 32950 0.0 0.0 1716076 1376 pts/72 S 02:10 0:00 \_ ./optimizer
shuha 32952 0.0 0.0 113472 1472 pts/72 S 02:10 0:00 \_ sh -c python relax.py > relax.out
shuha 32953 100 0.0 749892 101936 pts/72 R 02:10 0:03 \_ python relax.py
I have a main Python script called main1.py, which does a for loop to run multiple orbits_01 jobs at the same time. Then the main script will wait for all jobs to finish. Here 3 parent orbits_01 jobs are running in parallel, and each parent job has multiple child jobs. The heavy computations are done by the python code relax.py, so each job should be able to run only using one core. I want to know what is the best way to submit and parallelize multiple parent jobs with multiple child jobs over all cores in one node on SLURM?
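One common pattern for this layout (a sketch only, under the assumption that main1.py can be replaced by launching each parent run directly; `--ntasks`, `--cpus-per-task` and `--exclusive` are standard sbatch/srun options, and the `../orbits_01.x` path is taken from the process listing above) is to request one task per parent job and start each with srun:

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=3          # one task per parent orbits_01 run
#SBATCH --cpus-per-task=1   # each orbits_01 -> optimizer -> relax.py chain is single-core

# Launch each parent job as its own SLURM task and wait for all of them;
# srun --exclusive keeps the three tasks on distinct cores of the node.
for i in 1 2 3; do
    srun --ntasks=1 --exclusive ../orbits_01.x &
done
wait
```

The background-and-`wait` shape is what lets the three chains run concurrently inside a single allocation instead of serializing.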

Not able to delete the blank or empty space from a command text output file on UNIX

The command '(sleep 4 ; echo q) | topas -P | tee /tmp/top' on AIX produces this output in a command text file, and I am not able to remove the blank lines from it. I have tried sed, perl and awk commands to print only the lines containing characters, but nothing has helped. Do I need to convert this file to an ASCII text format to use sed, perl, grep or awk to remove the empty lines?
$ file /tmp/top
/tmp/top: commands text
$ head -n33 /tmp/top
DATA TEXT PAGE PGFAULTS
USER PID PPID PRI NI RES RES SPACE TIME CPU% I/O OTH COMMAND
root 9044256 20447712 60 20 2.43M 304K 2.43M 0:00 2.1 0 253 topas
root 14942646 8913178 60 20 72.0M 37.8M 72.0M 0:42 0.2 0 1 TaniumCl
root 20447712 21889434 60 20 148K 312K 508K 0:00 0.2 0 0 ksh
root 21955056 20447712 60 20 216K 36.0K 216K 0:00 0.1 0 3 sed
root 24838602 20447712 60 20 120K 8.00K 120K 0:00 0.1 0 1 tee
root 9830690 10355194 60 20 120K 4.00K 120K 0:00 0.1 0 0 sleep
root 12255642 13893896 60 41 57.5M 39.8M 57.5M 33:42 0.1 0 0 mmfsd
root 10355194 20447712 60 20 148K 312K 508K 0:00 0.1 0 0 ksh
root 9109790 4063622 39 20 12.9M 3.68M 12.9M 5:19 0.1 0 0 rmcd
root 13697394 4063622 60 20 8.27M 55.9M 8.27M 17:18 0.1 0 0 backup_a
root 20906328 1 60 20 1.81M 0 1.81M 3:15 0.0 0 0 nfsd
root 4260244 1 60 20 620K 88.0K 620K 41:23 0.0 0 0 getty
root 1573172 0 37 41 960K 0 960K 15:17 0.0 0 0 gil
nagios 9240876 4063622 60 20 23.7M 736K 23.7M 9:43 0.0 0 0 ncpa_pas
root 4391332 1 60 20 12.5M 252K 12.5M 4:43 0.0 0 0 secldapc
a_RTHOMA 8323456 12059082 60 20 636K 3.06M 1016K 0:00 0.0 0 0 sshd
root 8388902 4063622 60 20 1.76M 1.05M 1.76M 7:03 0.0 0 0 clcomd
root 3539312 1 60 20 448K 0 448K 5:07 0.0 0 0 lock_rcv
root 3670388 1 60 20 448K 0 448K 4:18 0.0 0 0 sec_rcv
root 5767652 4063622 48 8 392K 324K 392K 2:49 0.0 0 0 xntpd
root 6816242 1 60 20 1.19M 0 1.19M 1:05 0.0 0 0 rpc.lock
root 459026 0 16 41 640K 0 640K 2:19 0.0 0 0 reaffin
root 23921008 1 60 20 1.00M 0 1.00M 4:36 0.0 0 0 n4cb
lpar2rrd 23200112 25625020 64 22 868K 120K 868K 0:00 0.0 0 0 vmstat
root 7143896 1 40 41 448K 0 448K 0:48 0.0 0 0 nfsWatch
root 6160840 1 60 20 448K 0 448K 0:09 0.0 0 0 j2gt
It looks like your output file has some non-printable characters in it, so it's better to print only those lines which start with a letter, like:
awk '/^[a-zA-Z]/' Input_file
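If the file also contains terminal control sequences from topas (a plausible cause, given that plain sed/awk on "empty" lines had no effect), stripping non-printable characters first and then dropping blank lines may work. A sketch using only POSIX tr and sed:

```shell
# Remove non-printable (control/escape) characters, then delete lines
# that are empty or contain only whitespace.
tr -cd '[:print:]\n' < /tmp/top | sed '/^[[:space:]]*$/d' > /tmp/top.clean
```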

Jump to the top parent shell from any arbitrary depth of subshell

I created multiple subshells
$ ps -f
UID PID PPID C STIME TTY TIME CMD
501 2659 2657 0 8:22AM ttys000 0:00.15 -bash
501 2776 2659 0 8:23AM ttys000 0:00.02 bash
501 2778 2776 0 8:23AM ttys000 0:00.09 bash
501 3314 2778 0 9:13AM ttys000 0:00.26 bash
501 8884 3314 0 4:41PM ttys000 0:00.03 /bin/bash
501 8891 8884 0 4:41PM ttys000 0:00.01 /bin/bash
501 8899 8891 0 4:41PM ttys000 0:00.02 /bin/bash
501 423 408 0 7:16AM ttys001 0:00.22 -bash
501 8095 423 0 3:52PM ttys001 0:00.15 ssh root@www.****.com
501 8307 8303 0 4:05PM ttys002 0:00.17 -bash
I'd like to jump back to the topmost one, but I have to try exit one at a time:
$ ps -f
UID PID PPID C STIME TTY TIME CMD
501 2659 2657 0 8:22AM ttys000 0:00.17 -bash
501 423 408 0 7:16AM ttys001 0:00.22 -bash
501 8095 423 0 3:52PM ttys001 0:00.15 ssh root@***.com
501 8307 8303 0 4:05PM ttys002 0:00.17 -bash
I checked and there are 3 bashes left, so I continued:
$ exit
logout
Saving session...completed.
[Process completed]
Sadly, this is what happens in most cases. How can I jump to the top from an arbitrary depth of subshells?
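There is no built-in "jump to top", but you can at least see how deep you are before exiting or killing anything. A sketch (assuming only the POSIX `ps -o`/`-p` options) that prints the ancestor PID chain of the current shell:

```shell
#!/bin/sh
# Walk the PPID chain from the current shell up to PID 1 and print it;
# the entries between your shell and the login shell are the subshells
# you would need to exit (or kill) to get back to the top.
pid=$$
while [ -n "$pid" ] && [ "$pid" -gt 1 ]; do
    printf '%s\n' "$pid"
    pid=$(ps -o ppid= -p "$pid" | tr -d '[:space:]')
done
```

From another terminal you can then kill the listed subshell PIDs instead of typing exit repeatedly.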

Gitlab and redmine high memory usage

I have a VPS with 1GB memory, Debian 7 stable, Gitlab and Redmine installed without anything else (except normal processes).
This configuration consumes more than 900MB of memory. I already set unicorn workers to 1 but no significant changes. Version of Redmine is 2.5.1.stable, version of Gitlab is 6-9-stable.
I wonder if there is a way to reduce the memory consumption and CPU load. I might use nginx instead of apache2, or postgres instead of mysql. What else?
Any suggestion is really appreciated.
Here is the list of running processes:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 30176 2160 ? Ss 10:38 0:00 init
root 2 0.0 0.0 0 0 ? S 10:38 0:00 [kthreadd/1723]
root 3 0.0 0.0 0 0 ? S 10:38 0:00 [khelper/1723]
root 227 0.0 0.0 16988 884 ? S 10:38 0:00 upstart-udev-bridge --daemon
root 235 0.0 0.1 21300 1352 ? Ss 10:38 0:00 /sbin/udevd --daemon
root 283 0.0 0.0 21296 1024 ? S 10:38 0:00 /sbin/udevd --daemon
root 284 0.0 0.0 21296 1028 ? S 10:38 0:00 /sbin/udevd --daemon
root 428 0.0 0.0 14936 640 ? S 10:38 0:00 upstart-socket-bridge --daemon
root 1874 0.0 0.1 58740 1652 ? Sl 10:38 0:00 /usr/sbin/rsyslogd -c5
root 1920 0.0 0.0 57568 988 ? Ss 10:38 0:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 1922 0.0 0.0 57568 632 ? S 10:38 0:00 /usr/sbin/saslauthd -a pam -c -m /var/run/saslauthd -n 2
root 1988 0.0 0.2 72552 2648 ? Ss 10:38 0:00 sendmail: MTA: accepting connections
root 2029 0.0 0.1 49888 1244 ? Ss 10:38 0:00 /usr/sbin/sshd
root 2061 0.0 0.0 19520 964 ? Ss 10:38 0:00 /usr/sbin/xinetd -pidfile /var/run/xinetd.pid -stayalive -inetd_compat -inetd_ipv6
root 2113 0.0 2.1 301048 22244 ? Ss 10:38 0:00 /usr/sbin/apache2 -k start
root 2156 0.0 0.0 20364 1044 ? Ss 10:38 0:00 /usr/sbin/cron
root 2206 0.0 0.0 4136 712 ? S 10:38 0:00 /bin/sh /usr/bin/mysqld_safe
root 2322 0.0 0.1 23368 1968 ? Ssl 10:38 0:00 PassengerWatchdog
root 2337 0.5 0.2 100600 2652 ? Sl 10:38 0:19 PassengerHelperAgent
root 2348 0.0 0.9 46372 10412 ? Sl 10:38 0:00 Passenger spawn server
nobody 2353 0.0 0.3 81832 4168 ? Sl 10:38 0:00 PassengerLoggingAgent
mysql 2551 0.0 5.2 464312 55360 ? Sl 10:38 0:01 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib/mysql/plugin --user=mysq
root 2552 0.0 0.0 4044 668 ? S 10:38 0:00 logger -t mysqld -p daemon.error
root 2684 0.0 0.7 58116 8300 ? S 10:38 0:00 python /usr/sbin/denyhosts --daemon --purge --config=/etc/denyhosts.conf
redis 2708 0.0 0.1 39964 1676 ? Ssl 10:38 0:00 /usr/bin/redis-server /etc/redis/redis.conf
git 2811 0.5 13.0 377628 136484 ? Sl 10:38 0:19 unicorn_rails master -D -c /home/git/gitlab/config/unicorn.rb -E production
git 2846 0.0 12.3 377628 129148 ? Sl 10:38 0:00 unicorn_rails worker[0] -D -c /home/git/gitlab/config/unicorn.rb -E production
git 2873 0.6 13.6 428528 143532 ? Sl 10:38 0:23 sidekiq 2.17.0 gitlab [0 of 25 busy]
root 2892 0.0 0.2 32712 2248 ? Ss 10:38 0:00 /usr/sbin/ntpd -p /var/run/ntpd.pid -g -u 106:111
root 2913 0.0 0.0 14532 876 tty1 Ss+ 10:38 0:00 /sbin/getty 38400 console
root 2915 0.0 0.0 14532 880 tty2 Ss+ 10:38 0:00 /sbin/getty 38400 tty2
admin 2976 0.1 8.6 265444 90444 ? Sl 10:39 0:03 Passenger ApplicationSpawner: /var/www/redmine
admin 2984 0.0 9.8 282032 103736 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 2992 0.0 8.1 265444 85744 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 2998 0.0 8.1 265444 85764 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3004 0.0 8.1 265444 85760 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3010 0.0 8.1 265444 85760 ? Sl 10:39 0:00 Rails: /var/www/redmine
admin 3016 0.0 9.9 282532 104400 ? Sl 10:39 0:01 Rails: /var/www/redmine
www-data 3026 0.0 1.6 301492 17416 ? S 10:39 0:00 /usr/sbin/apache2 -k start
git 3042 0.0 12.8 313152 134320 ? Sl 10:39 0:00 Rack: /home/git/gitlab
root 3794 0.0 0.3 71248 3628 ? Ss 11:23 0:00 sshd: admin [priv]
admin 3797 0.0 0.1 71248 1824 ? R 11:23 0:00 sshd: admin@pts/0
admin 3798 0.0 0.2 19428 2228 pts/0 Ss 11:23 0:00 -bash
www-data 3922 0.0 1.6 301520 17448 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3926 0.0 1.6 301472 17328 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3929 0.0 1.6 301472 17288 ? S 11:32 0:00 /usr/sbin/apache2 -k start
www-data 3930 0.0 1.5 301256 16220 ? S 11:32 0:00 /usr/sbin/apache2 -k start
root 4012 0.0 0.2 72552 2876 ? S 11:38 0:00 sendmail: MTA: ./s59ECXBN022245 example.com.: user open
and this is the result of "free -m":
total used free shared buffers cached
Mem: 1024 962 61 0 0 196
-/+ buffers/cache: 766 257
Swap: 1024 0 1024
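To see where the memory actually goes, sorting processes by resident set size is a quick first check (a sketch; `--sort` is a GNU procps option, available on Debian 7):

```shell
# Ten largest processes by resident memory, converted to MB
# (rss=,comm= suppresses the header; only the first word of the
# command name is printed by awk).
ps axo rss=,comm= --sort=-rss | head | awk '{printf "%7.1f MB  %s\n", $1/1024, $2}'
```

On the listing above, the unicorn_rails/sidekiq processes and the six Rails (redmine) workers dominate, so shrinking Passenger's application pool is likely where most of the savings are.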

How do I put an already running CHILD process under nohup

My question is very similar to that posted in: How do I put an already-running process under nohup?
Say I execute foo.sh from my command line, and it in turn executes another shell script, and so on. For example:
foo.sh
\_ bar.sh
\_ baz.sh
Now I press Ctrl+Z to suspend "foo.sh". It is listed in my "jobs -l".
How do I disown baz.sh so that it is no longer a grandchild of foo.sh? If I type "disown" then only foo.sh is disowned from its parent, which isn't exactly what I want. I'd like to kill off the foo.sh and bar.sh processes and only be left with baz.sh.
My current workaround is to "kill -18" (resume) baz.sh and go on with my work, but I would prefer to kill the aforementioned processes. Thanks.
Use ps to get the PID of bar.sh, and kill it.
imac:barmar $ ps -l -t p0 -ww
UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD
501 3041 3037 4006 0 31 0 2435548 760 - Ss 8c6da80 ttyp0 0:00.74 /bin/bash --noediting -i
501 68228 3041 4006 0 31 0 2435544 664 - S 7cbc2a0 ttyp0 0:00.00 /bin/bash ./foo.sh
501 68231 68228 4006 0 31 0 2435544 660 - S c135a80 ttyp0 0:00.00 /bin/bash ./bar.sh
501 68232 68231 4006 0 31 0 2435544 660 - S a64b7e0 ttyp0 0:00.00 /bin/bash ./baz.sh
501 68233 68232 4006 0 31 0 2426644 312 - S f9a1540 ttyp0 0:00.00 sleep 100
0 68243 3041 4106 0 31 0 2434868 480 - R+ a20ad20 ttyp0 0:00.00 ps -l -t p0 -ww
imac:barmar $ kill 68231
./foo.sh: line 3: 68231 Terminated ./bar.sh
[1]+ Exit 143 ./foo.sh
imac:barmar $ ps -l -t p0 -ww
UID PID PPID F CPU PRI NI SZ RSS WCHAN S ADDR TTY TIME CMD
501 3041 3037 4006 0 31 0 2435548 760 - Ss 8c6da80 ttyp0 0:00.74 /bin/bash --noediting -i
501 68232 1 4006 0 31 0 2435544 660 - S a64b7e0 ttyp0 0:00.00 /bin/bash ./baz.sh
501 68233 68232 4006 0 31 0 2426644 312 - S f9a1540 ttyp0 0:00.00 sleep 100
0 68248 3041 4106 0 31 0 2434868 480 - R+ 82782a0 ttyp0 0:00.00 ps -l -t p0 -ww
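The same steps can be scripted (a sketch; `bar.sh`/`baz.sh` are the names from the question, and `pgrep -f` matches against the full command line, so adjust the patterns to your actual paths):

```shell
# Kill the intermediate shell; its child is reparented to init
# (PPID 1, as the second ps listing above shows), then resume the
# child in case Ctrl+Z had left it stopped.
barpid=$(pgrep -f 'bar\.sh' | head -n 1)
bazpid=$(pgrep -f 'baz\.sh' | head -n 1)
[ -n "$barpid" ] && kill "$barpid"
[ -n "$bazpid" ] && kill -CONT "$bazpid"
```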
