I am having a scenario where I need to create a script file that runs a couple of commands.
node index.js which runs continuously
./gradlew run which also run continuously
some commands which will run and finish automatically
I want to write one script file that does all the jobs like running the node server, Gradle run, and also other commands.
I'm thinking of one of the approaches to creating a new terminal tab and executing the commands in it. But I'm unable to find the exact code to create a new tab irrespective of the operating system.
You can use one terminal for this. Just run the command in the background by adding & at the end of the line. The command's stdout will still be written to the terminal session.
# Sorry for the Polish date
while true; do echo $(date); sleep 2; done &
[1] 8873
czw, 28 paź 2021, 09:41:52 CEST
user@user-pc:~$ czw, 28 paź 2021, 09:41:54 CEST
czw, 28 paź 2021, 09:41:56 CEST
czw, 28 paź 2021, 09:41:58 CEST
If you want to bring the job back to the foreground, run fg.
user@user-pc:~$
user@user-pc:~$ fg
while true; do
echo $(date); sleep 2;
done
czw, 28 paź 2021, 09:48:07 CEST
czw, 28 paź 2021, 09:48:09 CEST
^C
So the script finally could look like this:
#!/bin/bash
./task1 &
./task2 &
./task3
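Applied to the scenario in the question, a self-contained sketch could look like the following. The two sleep calls stand in for node index.js and ./gradlew run (assumptions about your project) so the example runs anywhere:

```shell
#!/bin/bash
# Long-running servers go to the background; "sleep" stands in here for
# "node index.js" and "./gradlew run" so the sketch is self-contained.
sleep 1 &
pid_node=$!
sleep 1 &
pid_gradle=$!

# One-shot commands still run in the foreground as usual.
echo "one-shot commands done"

# Keep the script alive until both servers exit
# (otherwise the script ends while they are still running).
wait "$pid_node"; status_node=$?
wait "$pid_gradle"; status_gradle=$?
echo "all background jobs finished"
```

Capturing each job's PID with $! also lets you kill them later for a clean shutdown.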
I'm trying to get a list of running domains from an application server. It takes a few seconds for the query to respond; so, it would be nice to run it in the background. However, it hangs, apparently waiting on something even though the command takes no input. When I bring it to the foreground, it immediately displays the results and quits. I also tried disconnecting stdin with 0<&-.
java -jar appserver-cli.jar list-domains &
How can I diagnose the issue? Or better yet, what's the problem?
I can see some open pipes and sockets.
ps --forest
PID TTY TIME CMD
16876 pts/1 00:00:00 bash
2478 pts/1 00:00:00 \_ java
2499 pts/1 00:00:00 | \_ stty
ls -l /proc/2478/fd
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 0 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 1 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 10 -> socket:[148228]
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 2 -> /dev/pts/1
lrwx------. 1 vagrant vagrant 64 Mar 23 09:08 24 -> socket:[148389]
lr-x------. 1 vagrant vagrant 64 Mar 23 09:08 73 -> pipe:[18170535]
lr-x------. 1 vagrant vagrant 64 Mar 23 09:08 75 -> pipe:[18170536]
I also see the following in the strace output, which does not show up when I run the process in the foreground.
futex(0x7fda7e0309d0, FUTEX_WAIT, 9670, NULL) = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGTTOU {si_signo=SIGTTOU, si_code=SI_KERNEL} ---
--- stopped by SIGTTOU ---
The following resources were helpful in figuring this out.
sigttin-sigttou-deep-dive-linux
cannot-rewrite-trap-command-for-sigtstp-sigttin-and-sigttou
As the strace output above shows, the process is receiving SIGTTOU; the child stty process suggests it is trying to configure terminal settings. I noticed that when I run it from a script instead of an interactive shell, there is no empty handler defined for SIGTTOU. The following resolves the issue.
bash -c "java -jar appserver-cli.jar list-domains &"
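A small sketch of why this works: in a script, job control is off, so a job started with & keeps the script's process group and still counts as foreground for the terminal; an interactive shell gives the job its own process group, which is what makes terminal-settings writes raise SIGTTOU. The comparison below demonstrates the shared process group (run it as a script, not interactively):

```shell
# In a non-interactive shell, a "&" job shares the parent's process
# group; compare the script's process group with the background job's.
tmpfile=$(mktemp)
script_pgid=$(ps -o pgid= -p $$)
sh -c 'ps -o pgid= -p $$' > "$tmpfile" &
wait
bg_pgid=$(cat "$tmpfile")
rm -f "$tmpfile"
echo "script pgid:$(echo $script_pgid) background pgid:$(echo $bg_pgid)"
```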
I have written a systemd service unit to run a bash script that I had been running, without any problems, from /etc/rc.local for many years.
The systemd service works perfectly except for the following problem:
My bash script runs, every 5 minutes, the following lines:
nice --adjustment=$CommonNicePriority su root --command="HOME=$COMMON_DIR $0 $CyclesOption" # System common tasks.
for User in $Users # Run logged in users' tasks.
do
nice --adjustment=$UsersNicePriority su $User --command="$0 $CyclesOption"
done
As you can see, it spawns one process running as root (first line) and then one process for every logged-in user (the loop), running as that user.
Each of the above processes can spawn some other (task) processes. The processes running as 'root' work all right but those running for normal users are killed by 'systemd'. These are lines from '/var/log/syslog':
Feb 2 10:35:00 Linux-1 systemd[1]: Started User Manager for UID 0.
Feb 2 10:35:00 Linux-1 systemd[1]: Started Session c4 of user manolo.
Feb 2 10:35:00 Linux-1 systemd[1]: session-c4.scope: Killing process 31163 (CommonCron) with signal SIGTERM.
Feb 2 10:35:00 Linux-1 systemd[1]: session-c4.scope: Killing process 31164 (CommonCron) with signal SIGTERM.
Feb 2 10:35:00 Linux-1 systemd[1]: session-c4.scope: Killing process 31165 (CommonCron) with signal SIGTERM.
Feb 2 10:35:00 Linux-1 systemd[1]: Stopping Session c4 of user manolo.
Feb 2 10:35:00 Linux-1 systemd[1]: Stopped Session c4 of user manolo.
Feb 2 10:35:13 Linux-1 systemd[1]: Stopping User Manager for UID 0...
Here is my 'systemd's service:
[Unit]
Description=CommonStartUpShutdown
Requires=local-fs.target
Wants=network.target
[Service]
Type=forking
ExecStart=/etc/after_boot.local
RemainAfterExit=yes
TimeoutSec=infinity
KillMode=none
ExecStop=/etc/before_halt.local
[Install]
WantedBy=local-fs.target
# How I think it works:
# Service starts when target 'local-fs.target' is reached, preferably, when target 'network.target'
# is also reached. This last target is reached even if the router is powered off (tested).
# Service start sequence runs script: 'ExecStart=/etc/after_boot.local' which is expected
# to spawn child processes and exit: 'Type=forking'. Child processes: 'CommonSystemStartUp's
# children: 'CommonDaemon', 'CommonCron'... This script must exit with 0, otherwise systemd
# will kill all child processes and won't run the 'ExecStop' script. Service start can run as long
# as it needs: 'TimeoutSec=infinity'.
# Service is kept alive, after running 'ExecStart=...', by 'RemainAfterExit=yes'.
# When the service is stopped, at system shutdown, script 'ExecStop=/etc/before_halt.local'
# will run for as long as necessary, 'TimeoutSec=infinity', before target
# 'local-fs.target' is lost: 'Requires=local-fs.target'.
# 'ExecStart=/etc/after_boot.local's children processes ('CommonDaemon', 'CommonCron'...) won't be
# killed when 'ExecStop=/etc/before_halt.local' runs: 'KillMode=none'.
I have tried moving 'KillMode=none' to the [Unit] block: 'systemd' complains.
Also tried 'WantedBy=multi-user.target.wants' in the [Install] block: doesn't make any difference.
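Note that the kills in the syslog excerpt happen when logind tears down the su sessions (Stopping Session c4 of user manolo); per logind.conf(5), whether leftover session processes are killed when the session ends is governed by KillUserProcesses. A hedged sketch, whether it fits depends on your distribution's defaults:

```ini
# /etc/systemd/logind.conf -- a sketch; see logind.conf(5)
[Login]
KillUserProcesses=no
```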
I am using inotifywait to track users' file changes, and was able to effectively trace whether a file was created/edited/deleted by logging the events to a log file.
However, when actions are performed by tools such as rsync, all of their changes are written to the log file as well.
Here is an example of performing rsync :
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 .sidebar.php.KNYJir:DELETED
Mon Nov 23 15:42:56 sidebar.php
Attached below is the command which I am using :
/usr/bin/inotifywait -e create,delete,modify,move -mrq --format %w%f
I then pipe it to an endless while loop that tests whether the changed file exists, to determine whether the event was a create, modify, or delete action.
Is there any way to exclude logging for actions performed by root?
I do not think that is possible, certainly not with inotifywait. The inotify API itself simply does not provide that information. As the manpage states:
The inotify API provides no information about the user or process that triggered the inotify event. In particular, there is no easy way for a process that is monitoring events via inotify to distinguish events that it triggers itself from those that are triggered by other processes.
What you can do is filter based on the filename. Or, if you know the process that makes the additional changes, compare against its own log file.
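Filtering on the filename works here because rsync stages each transfer in a hidden temp file such as .sidebar.php.KNYJir. A sketch of such a filter; the dot-prefix/six-character-suffix pattern is an assumption based on the log excerpt in the question:

```shell
# Drop events for rsync-style temp files: a leading dot on the basename
# plus a six-character random suffix, as seen in the question's log.
filter_rsync_tmp() {
    grep -Ev '(^|/)\.[^/]*\.[A-Za-z0-9]{6}$'
}

kept=$(printf '%s\n' '.sidebar.php.KNYJir' 'sidebar.php' | filter_rsync_tmp)
echo "$kept"
```

inotifywait itself also accepts an --exclude regular expression, so the same pattern could be applied before the events ever reach the log.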
With Cygwin, I tried to use "timeout" to make my script sleep for a few seconds.
But even when I followed its syntax, it always tells me to try --help, meaning I gave it the wrong form.
Here are the things I've tried
timeout 5
timeout 5s
timeout 5.0s
timeout 5.
None of which worked.
Any ideas?!
I don't think timeout does what you think it does. From the man page:
timeout [OPTION] NUMBER[SUFFIX] COMMAND [ARG]...
Start COMMAND, and kill it if still running after NUMBER seconds. SUFFIX may be 's' for seconds (the default), 'm' for minutes, 'h' for hours or 'd' for days.
You need to give it that command. Here's a simple example:
$ date; timeout 5 sleep 10; date
Thu, Nov 01, 2012 3:19:28 PM
Thu, Nov 01, 2012 3:19:33 PM
As you can see, only 5 seconds elapsed even though I ran sleep 10. That's because it timed out after 5 seconds and the timeout command killed it.
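If the original goal was just to make the script pause, plain sleep is the command for that; timeout only makes sense with a command to kill, and it reports that it did the killing via exit status 124:

```shell
# If the goal is simply to pause the script, plain sleep does that:
sleep 2
# timeout, by contrast, runs a COMMAND and kills it after the limit;
# when it does kill the command, it exits with status 124:
timeout 1 sleep 10
timeout_status=$?
echo "timeout exit status: $timeout_status"
```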
I have a little script to run a search daemon, like:
run.sh:
cd ~/apache-solr
xterm -e java -jar start.jar
sleep 5
cd ~/anotherFolder
#make something else
The problem: after the xterm -e ... command, the script waits for that command to complete before running the remaining commands.
The question:
Can we run the next command without waiting for the xterm -e ... command to finish?
P.S.
Sorry for my English, and thanks for any help.
Or, even better, you could use nohup.
Like:
nohup xterm -e java -jar start.jar &
With nohup, your command will not receive a kill signal (SIGHUP) even if you close your PuTTY session, for example.
Yes, you can put an & after the line you want to run in the background; that will allow your script to continue while the command is running:
xterm -e java -jar start.jar &
An example;
date
sleep 5
date
> Thu Mar 22 11:57:17 CET 2012
> Thu Mar 22 11:57:22 CET 2012
date
sleep 5 &
date
> Thu Mar 22 11:57:25 CET 2012
> Thu Mar 22 11:57:25 CET 2012
Yes, put & at the end of the command; that will run it as a separate background process.
An ampersand at the end of a command will start the command in the background and let the script continue with the next line.
xterm -e java -jar start.jar &
How about
xterm -e java -jar start.jar &
Note the ending & that tells the shell to run the process in the background.
How to know when that command has finished, if you need its results in your script, is another question.
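One common sketch for that follow-up: remember the job's PID with $! and call wait when the result is needed. The (sleep 1; exit 3) subshell below is a stand-in for xterm -e java -jar start.jar so the example is self-contained:

```shell
# Start the long-running job in the background and remember its PID.
( sleep 1; exit 3 ) &
bgpid=$!
echo "script continues immediately"
# ... other work here ...
# Later, when the result is needed, wait returns the job's exit status:
wait "$bgpid"
bgstatus=$?
echo "background job exited with status $bgstatus"
```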