How to change the terminal title to currently running process? - bash

I know how to change the Terminal window title. What I am trying to find out is how to make bash (not zsh) write out the currently running process, so that if, say, I do
$ ls -lF
I would get something like this for the title
/home/me/currentFolder (ls -lF)
Getting the last executed command would be too late, since by then the command has already finished, so the title can't be set to the command while it is executing.

In addition to @markp-fuso's answer, here's how I made it work with Starship.
function set_win_title() {
    local cmd=" ($*)"
    if [[ "$cmd" == " (starship_precmd)" || "$cmd" == " ()" ]]
    then
        cmd=""
    fi
    if [[ $PWD == $HOME ]]
    then
        if [[ $SSH_TTY ]]
        then
            echo -ne "\033]0; 🏛️ @ $HOSTNAME ~$cmd\a" < /dev/null
        else
            echo -ne "\033]0; 🏠 ~$cmd\a" < /dev/null
        fi
    else
        BASEPWD=$(basename "$PWD")
        if [[ $SSH_TTY ]]
        then
            echo -ne "\033]0; 🌩️ $BASEPWD @ $HOSTNAME $cmd\a" < /dev/null
        else
            echo -ne "\033]0; 📁 $BASEPWD $cmd\a" < /dev/null
        fi
    fi
}
starship_precmd_user_func="set_win_title"
eval "$(starship init bash)"
trap "$(trap -p DEBUG | awk -F"'" '{print $2}');set_win_title \${BASH_COMMAND}" DEBUG
Note this differs from the Custom pre-prompt and pre-execution Commands in Bash instructions in that the trap is set after starship init, which I have noted in a bug report.
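As an aside, the awk in that trap line is just extracting the body of whatever DEBUG trap is already installed, so the new command can be appended without clobbering it. A standalone sketch, using a literal example of `trap -p` output:

```shell
# trap -p prints something like: trap -- 'starship_precmd' DEBUG
# Splitting on single quotes with awk yields the existing trap body,
# so the new command can be appended without clobbering it.
existing_trap_output="trap -- 'starship_precmd' DEBUG"
body=$(printf '%s\n' "$existing_trap_output" | awk -F"'" '{print $2}')
echo "$body"   # starship_precmd
new_trap="${body};set_win_title \${BASH_COMMAND}"
echo "$new_trap"
```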

UPDATE: my previous answer (below) displays the previous command in the title bar.
Ignoring everything from my previous answer and starting from scratch:
trap 'echo -ne "\033]0;${PWD}: (${BASH_COMMAND})\007"' DEBUG
Running the following at the command prompt:
$ sleep 10
The window title bar changes to /my/current/directory: (sleep 10) while the sleep 10 is running.
Running either of these:
$ sleep 1; sleep 2; sleep 3
$ { sleep 1; sleep 2; sleep 3; }
The title bar changes as each sleep command is invoked.
Running this:
$ ( sleep 1; sleep 2; sleep 3 )
The title bar does not change (the trap does not fire inside a subshell).
One last one:
$ echo $(sleep 3; echo abc)
The title bar displays (echo $(sleep 3; echo abc)).
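Putting the pieces together, a minimal sketch (assuming an xterm-compatible terminal) that shows the running command in the title and then resets the title once the prompt returns:

```shell
# A sketch, assuming an xterm-compatible terminal:
# - the DEBUG trap fires just before each command, so the title can show it
# - PROMPT_COMMAND fires just before the prompt, resetting the title
trap 'printf "\033]0;%s: (%s)\007" "$PWD" "$BASH_COMMAND"' DEBUG
PROMPT_COMMAND='printf "\033]0;%s\007" "$PWD"'
```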
previous answer
Adding to this answer:
store_command() {
    declare -g last_command current_command
    last_command=$current_command
    current_command=$BASH_COMMAND
    return 0
}
trap store_command DEBUG
PROMPT_COMMAND='echo -ne "\033]0;${PWD}: (${last_command})\007"'
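A self-contained sketch showing the one-command lag this produces: the DEBUG trap records each command just before it runs, so last_command always holds the one that just finished.

```shell
store_command() {
  declare -g last_command current_command
  last_command=$current_command
  current_command=$BASH_COMMAND
  return 0
}
trap store_command DEBUG
a=1
b=2
observed=$last_command   # the trap has already fired for this line too
trap - DEBUG
echo "previous command was: $observed"   # previous command was: b=2
```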
Additional reading materials re: trap / DEBUG:
bash guide on traps
SO Q&A

You can combine setting the window title with setting the prompt.
Here's an example using bash's PROMPT_COMMAND:
tputps () {
    echo -n '\['
    tput "$@"
    echo -n '\]'
}
prompt_builder () {
    # Window title - operating system command (OSC) ESC + ]
    echo -ne '\033]0;'"${USER}@${HOSTNAME}:$(dirs)"'\a' >&2
    # username, green
    tputps setaf 2
    echo -n '\u'
    # directory, orange
    tputps setaf 208
    echo -n ' \w'
    tputps sgr0 0
}
prompt_cmd () {
    PS1="$(prompt_builder) \$ "
}
export PROMPT_COMMAND=prompt_cmd

On Linux, add the following function to your ~/.bashrc file.
Open the file:
vi ~/.bashrc
Write the function in the file:
function set-title() {
    if [[ -z "$ORIG" ]]; then
        ORIG=$PS1
    fi
    TITLE="\[\e]2;$*\a\]"
    PS1=${ORIG}${TITLE}
}
Save the file and reload it:
source ~/.bashrc
Call the function:
set-title "tab1"
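A self-contained sketch of what the function does to the prompt: ORIG remembers the original PS1 once, so repeated calls replace the title escape instead of stacking it.

```shell
PS1='\u:\w\$ '
ORIG=""
set-title() {
  if [[ -z "$ORIG" ]]; then
    ORIG=$PS1            # remember the plain prompt only once
  fi
  TITLE="\[\e]2;$*\a\]"  # non-printing title escape, wrapped in \[ \]
  PS1=${ORIG}${TITLE}
}
set-title "tab1"
set-title "tab2"         # replaces the title, does not stack escapes
echo "$PS1"
```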

The easiest way to change the title of the terminal from a shell script is to echo the title escape sequence (-e makes echo interpret the escapes):
echo -ne "\033]0;Your title\007"
And to open a new tab with a new title:
meta-terminal --tab -t "Your title"

Related

How do I establish 2 way communication to my script that I'm sending through ssh?

I'm trying to establish two-way communication between a Windows box (with a bash CLI) and a QNX box (with a ksh CLI, though currently I'm using a Linux VM with ksh98 to test) over ssh, using scripts.
I can't seem to get the redirects quite right though. Here is my simple setup:
#!/usr/bin/bash
cleanup() {
    exec >&$SSH_STDIN- ; rm ssh_stdin
    exec <&$SSH_STDOUT-; rm ssh_stdout
    echo "Cleaned up."
}
trap 'cleanup' EXIT
mkfifo ssh_stdin ; exec {SSH_STDIN}<>./ssh_stdin
mkfifo ssh_stdout; exec {SSH_STDOUT}<>./ssh_stdout
echo "SSH_STDIN: $SSH_STDIN"
echo "SSH_STDOUT: $SSH_STDOUT"
repeat() { echo "$2"; echo "$2" >&$1; }
fn() {
    sleep 5
    echo AWAKE! >&2
    set +o xtrace
    while read -u $SSH_STDOUT a b; do
        case "$a" in
            Hi!) sleep 1; repeat $SSH_STDOUT "Ho!" ;;
            and-away-we-go!) sleep 1; repeat $SSH_STDOUT "quit";;
            *) echo "UNRECOGNIZED: $a $b";;
        esac
    done
}
fn&
ssh user@127.0.0.1 -p 2022 '
. /etc/profile # Used to setup path
repeat() { echo "$2"; echo "$2" >&$1; }
repeat 2 "Hi!"
while read a b; do
case "$a" in
Ho! ) sleep 1; repeat "and-away-we-go!";;
quit ) exit 0;;
* ) echo "UNRECOGNIZED: $a $b";;
esac
done
' >&$SSH_STDIN <&$SSH_STDOUT
I feel that I'm close. What am I doing wrong? Maybe this can be done without using named FIFOs?
EDIT
This is the output:
$ ./test-writer.sh
SSH_STDIN: 11
SSH_STDOUT: 12
user@127.0.0.1's password:
Hi!
AWAKE!
./test-writer.sh: line 21: read: read error: 12: Communication error on send
It then hangs and after I press Ctrl-C:
./test-writer.sh: line 6: echo: write error: Communication error on send
Here it is with bash -x
$ bash -x ./test-writer.sh
+ trap cleanup EXIT
+ mkfifo ssh_stdin
+ exec
+ mkfifo ssh_stdout
+ exec
+ echo 'SSH_STDIN: 11'
SSH_STDIN: 11
+ echo 'SSH_STDOUT: 12'
SSH_STDOUT: 12
+ fn
+ ssh user@127.0.0.1 -p 2022 '
. /etc/profile # Used to setup path
repeat() { echo "$2"; echo "$2" >&$1; }
repeat 2 "Hi!"
while read a b; do
case "$a" in
Ho! ) sleep 1; repeat "and-away-we-go!";;
quit ) exit 0;;
* ) echo "UNRECOGNIZED: $a $b";;
esac
done
'
+ sleep 5
user@127.0.0.1's password:
Hi!
+ echo 'AWAKE!'
AWAKE!
+ set +o xtrace
./test-writer.sh: line 21: read: read error: 12: Communication error on send
and after it hangs, I press Ctrl-C and get:
+ cleanup
+ exec
+ rm ssh_stdin
+ exec
+ rm ssh_stdout
+ echo 'Cleaned up.'
./test-writer.sh: line 6: echo: write error: Communication error on send
Although I have a solution using coproc, I would like to know why this solution isn't working.
Thanks to a hint by @KamilCuk about using coproc, I was able to come up with an answer which is much less unwieldy.
#!/usr/bin/bash
cleanup() {
    # Do I even need this?
    exec <&${ssh_fd[0]}- >&${ssh_fd[1]}-
    echo "Cleaned up."
}
trap 'cleanup' EXIT
coproc ssh_fd {
    ssh user@127.0.0.1 -p 2022 '
. /etc/profile # Used to setup path
repeat() { echo "$2"; echo "$2" >&$1; }
repeat 2 "Hi!"
while read a b; do
case "$a" in
Ho! ) sleep 1; repeat 2 "and-away-we-go!";;
quit ) exit 0;;
* ) echo "UNRECOGNIZED: .$a. .$b." >&2;;
esac
done
'
}
repeat() { echo "$2"; echo "$2" >&$1; }
fn() {
    set +o xtrace
    while read -u ${ssh_fd[0]} a b; do
        case "$a" in
            Hi!) sleep 1; repeat ${ssh_fd[1]} "Ho!" ;;
            and-away-we-go!) sleep 1; repeat ${ssh_fd[1]} "quit";;
            *) echo "UNRECOGNIZED: $a $b";;
        esac
    done
}
fn
Output:
$ ./test-writer.sh
user@127.0.0.1's password:
Hi!
Ho!
and-away-we-go!
quit
I didn't actually change anything in the code, just deleted and moved stuff around. I was right, I was almost there! Thanks @KamilCuk!
Still, although I have this solution, I would like to know why the original one didn't work. I think it should, and I'll accept those answers if they figure it out.
Actually, still looking for a solution. I'm not sure why the devs fell short of redirecting all of the handles, INCLUDING STDERR! sigh
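For anyone unfamiliar with coproc, here is a minimal sketch of the mechanism the answer relies on: a two-way pipe to a child process, with bash storing the read and write file descriptors in an array.

```shell
# The child reads lines and answers them; bash puts the pipe fds in the
# CHILD array: CHILD[0] to read from the child, CHILD[1] to write to it.
coproc CHILD { while read line; do echo "got:$line"; done; }
echo hello >&"${CHILD[1]}"     # write to the child's stdin
read -u "${CHILD[0]}" reply    # read the child's stdout
echo "$reply"                  # got:hello
eval "exec ${CHILD[1]}>&-"     # close the write end so the child sees EOF
wait "$CHILD_PID"
```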

Bash sub script redirects input to /dev/null mistakenly

I'm working on a script to automate the creation of a .gitconfig file.
This is my main script, which calls a function that in turn executes another file.
dotfile.sh
COMMAND_NAME=$1
shift
ARG_NAME=$@
set +a
fail() {
    echo "";
    printf "\r[${RED}FAIL${RESET}] $1\n";
    echo "";
    exit 1;
}
set -a
sub_setup() {
    info "This may overwrite existing files in your computer. Are you sure? (y/n)";
    read -p "" -n 1;
    echo "";
    if [[ $REPLY =~ ^[Yy]$ ]]; then
        for ARG in $ARG_NAME; do
            local SCRIPT="~/dotfiles/setup/${ARG}.sh";
            [ -f "$SCRIPT" ] && echo "Applying '$ARG'" && . "$SCRIPT" || fail "Unable to find script '$ARG'";
        done;
    fi;
}
case $COMMAND_NAME in
    "" | "-h" | "--help")
        sub_help;
        ;;
    *)
        CMD=${COMMAND_NAME/*-/}
        sub_${CMD} $ARG_NAME 2> /dev/null;
        if [ $? = 127 ]; then
            fail "'$CMD' is not a known command or has errors.";
        fi;
        ;;
esac;
git.sh
git_config() {
    if [ ! -f "~/dotfiles/git/gitconfig_template" ]; then
        fail "No gitconfig_template file found in ~/dotfiles/git/";
    elif [ -f "~/dotfiles/.gitconfig" ]; then
        fail ".gitconfig already exists. Delete the file and retry.";
    else
        echo "Setting up .gitconfig";
        GIT_CREDENTIAL="cache"
        [ "$(uname -s)" == "Darwin" ] && GIT_CREDENTIAL="osxkeychain";
        user " - What is your GitHub author name?";
        read -e GIT_AUTHORNAME;
        user " - What is your GitHub author email?";
        read -e GIT_AUTHOREMAIL;
        user " - What is your GitHub username?";
        read -e GIT_USERNAME;
        if sed -e "s/AUTHORNAME/$GIT_AUTHORNAME/g" \
               -e "s/AUTHOREMAIL/$GIT_AUTHOREMAIL/g" \
               -e "s/USERNAME/$GIT_USERNAME/g" \
               -e "s/GIT_CREDENTIAL_HELPER/$GIT_CREDENTIAL/g" \
               "~/dotfiles/git/gitconfig_template" > "~/dotfiles/.gitconfig"; then
            success ".gitconfig has been setup";
        else
            fail ".gitconfig has not been setup";
        fi;
    fi;
}
git_config
In the console
$ ./dotfile.sh --setup git
[ ?? ] This may overwrite existing files in your computer. Are you sure? (y/n)
y
Applying 'git'
Setting up .gitconfig
[ .. ] - What is your GitHub author name?
Then I cannot see what I'm typing...
At the bottom of dotfile.sh, I redirect any error that occurs during my function call to /dev/null. But I should normally still see what I'm typing. If I remove 2> /dev/null from the line sub_${CMD} $ARG_NAME 2> /dev/null;, it works! But I don't understand why.
I need this redirection to prevent my script from echoing an error in case the command doesn't exist. I only want my own message.
e.g.
$ ./dotfile --blahblah
./dotfiles: line 153: sub_blahblah: command not found
[FAIL] 'blahblah' is not a known command or has errors
I really don't understand why the input in my sub script is redirected to /dev/null, as I only redirected stderr there.
Thanks
Do you need the -e option in your read statements?
I did a quick test in an interactive shell. The following command does not echo characters:
read -e TEST 2>/dev/null
The following does echo the characters
read TEST 2>/dev/null
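The underlying cause, as far as I can tell: -e makes read use the readline library, and readline writes its echo of your keystrokes to stderr, so 2>/dev/null swallows it. A sketch (non-interactive here, so readline is bypassed, but the redirection is the same one that hides typing interactively):

```shell
# With -e, read hands line editing to readline, and readline echoes
# your keystrokes to stderr; 2>/dev/null therefore hides the typing.
# (From a pipe, readline is bypassed, but the capture works the same.)
captured=$(printf 'typed text\n' | {
  read -e REPLY 2>/dev/null
  echo "$REPLY"
})
echo "captured: $captured"    # captured: typed text
```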

How to modify call stack in Bash?

Suppose I want to write a smart logging function log that would read the line immediately after the log invocation, then store it and its output in the log file. The function can find, read and execute the line of code in question. The problem is that when the function returns, bash executes the line again.
Everything works fine except that the assignment to BASH_LINENO[0] is silently discarded. Reading http://wiki.bash-hackers.org/syntax/shellvars#bash_lineno I learned that the variable is not read-only.
function log()
{
    BASH_LINENO[0]=$((${BASH_LINENO[0]}+1))
    file=${BASH_SOURCE[1]##*/}
    linenr=$((${BASH_LINENO[0]} + 1 ))
    line=`sed "1,$((${linenr}-1)) d;${linenr} s/^ *//; q" $file`
    if [ -f /tmp/tmp.txt ]; then
        rm /tmp/tmp.txt
    fi
    exec 3>&1 4>&2 >>/tmp/tmp.txt 2>&1
    set -x
    eval $line
    exitstatus=$?
    set +x
    exec 1>&3 2>&4 4>&- 3>&-
    # Here goes the code that parses /tmp/tmp.txt and stores it in the log
    if [ "$exitstatus" -ne "0" ]; then
        exit $exitstatus
    fi
}
#Test case:
log
echo "Unfortunately this line gets appended twice" | tee -a bla.txt;
After consulting the wisdom of users on the bug-bash@gnu.org mailing list, it appears that modifying the call stack is not possible after all. Here is an answer I got from Chet Ramey:
BASH_LINENO is a call stack; assignments to it should be (and are)
ignored. That's been the case since at least bash-3.2 (that's where I
quit looking).
There is an indirect way to force bash to not execute the next
command: set the extdebug option and have the DEBUG trap return a
non-zero status.
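A minimal runnable sketch of that technique: with extdebug enabled, a DEBUG trap returning non-zero makes bash skip the command it was about to execute.

```shell
# extdebug + a DEBUG trap returning non-zero => the pending command is skipped.
shopt -s extdebug
ran=0
skip_assignment() {
  if [[ $BASH_COMMAND == "ran=1" ]]; then
    return 1   # veto this command
  fi
  return 0
}
trap skip_assignment DEBUG
ran=1            # skipped by the trap
trap - DEBUG
echo "ran=$ran"  # ran=0
```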
The above technique works very well for my purposes. I am finally able to do a production version of the log function.
#!/bin/bash
shopt -s extdebug
repetition_count=0
_ERR_HDR_FMT="%.8s %s@%s:%s:%s"
_ERR_MSG_FMT="[${_ERR_HDR_FMT}]%s \$ "
msg() {
    printf "$_ERR_MSG_FMT" $(date +%T) $USER $HOSTNAME $PWD/${BASH_SOURCE[2]##*/} ${BASH_LINENO[1]}
    echo ${@}
}
function rlog()
{
    case $- in *x*) USE_X="-x";; *) USE_X=;; esac
    set +x
    if [ "${BASH_LINENO[0]}" -ne "$myline" ]; then
        repetition_count=0
        return 0;
    fi
    if [ "$repetition_count" -gt "0" ]; then
        return -1;
    fi
    if [ -z "$log" ]; then
        return 0
    fi
    file=${BASH_SOURCE[1]##*/}
    line=`sed "1,$((${myline}-1)) d;${myline} s/^ *//; q" $file`
    if [ -f /tmp/tmp.txt ]; then
        rm /tmp/tmp.txt
    fi
    echo "$line" > /tmp/tmp2.txt
    mymsg=`msg`
    exec 3>&1 4>&2 >>/tmp/tmp.txt 2>&1
    set -x
    source /tmp/tmp2.txt
    exitstatus=$?
    set +x
    exec 1>&3 2>&4 4>&- 3>&-
    repetition_count=1 # This flag prevents multiple executions of the current line of code. It is checked at the beginning of the function.
    frstline=`sed '1q' /tmp/tmp.txt`
    [[ "$frstline" =~ ^(\++)[^+].*$ ]]
    # echo "BASH_REMATCH[1]=${BASH_REMATCH[1]}"
    eval 'tmp="${BASH_REMATCH[1]}"'
    pluscnt=$(( (${#tmp} + 1) *2 ))
    pluses="\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+\+"
    pluses=${pluses:0:$pluscnt}
    commandlines="`awk \" gsub(/^${pluses}\\s/,\\\"\\\")\" /tmp/tmp.txt`"
    n=0
    # There may be more than one command in the debugged line. The next loop appends each command to the log.
    while read -r line; do
        if [ "$n" -ne "0" ]; then
            echo "+ $line" >>$log
        else
            echo "${mymsg}$line" >>$log
            n=1
        fi
    done <<< "$commandlines"
    # The next line extracts all lines that are prefixed by a sufficient number of "+" (usually 3) and that come immediately after the last line prefixed with $pluses, i.e. after the last command line.
    awk "BEGIN {flag=0} /${pluses}/ { flag=1 } /^[^+]/ { if (flag==1) print \$0; }" /tmp/tmp.txt | tee -a $log
    if [ "$exitstatus" -ne "0" ]; then
        echo "## Exit status: $exitstatus" >>$log
    fi
    echo >>$log
    if [ "$exitstatus" -ne "0" ]; then
        exit $exitstatus
    fi
    if [ -n "$USE_X" ]; then
        set -x
    fi
    return -1
}
log_next_line='eval if [ -n "$log" ]; then myline=$(($LINENO+1)); trap "rlog" DEBUG; fi;'
logoff='trap - DEBUG'
The usage of the file is intended as follows:
#!/bin/bash
log=mylog.log
if [ -f mylog.log ]; then
rm mylog.log
fi
. ./log.sh
a=example
x=a
$log_next_line
echo "KUKU!"
$log_next_line
echo $x
$log_next_line
echo ${!x}
$log_next_line
echo ${!x} > /dev/null
$log_next_line
echo "Proba">/tmp/mtmp.txt
$log_next_line
touch ${!x}.txt
$log_next_line
if [ $(( ${#a} + 6 )) -gt 10 ]; then echo "Too long string"; fi
$log_next_line
echo "\$a and \$x">/dev/null
$log_next_line
echo $x
$log_next_line
ls -l
$log_next_line
mkdir /ddad/adad/dad #Generates an error
The output (mylog.log):
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:14] $ echo 'KUKU!'
KUKU!
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:16] $ echo a
a
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:18] $ echo example
example
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:20] $ echo example
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:22] $ echo 1,2,3
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:24] $ touch example.txt
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:26] $ '[' 13 -gt 10 ']'
+ echo 'Too long string'
Too long string
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:28] $ echo '$a and $x'
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:30] $ echo a
a
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:32] $ ls -l
total 12
-rw-rw-r-- 1 adam adam 0 gru 4 13:39 example.txt
lrwxrwxrwx 1 adam adam 66 gru 4 13:29 log.sh -> /home/Adama-docs/Adam/Adam/MyDocs/praca/Puppet/bootstrap/common.sh
-rwxrwxr-x 1 adam adam 520 gru 4 13:29 log-test-case.sh
-rw-rw-r-- 1 adam adam 995 gru 4 13:39 mylog.log
[13:39:51 adam@adam-N56VZ:/home/Adama-docs/Adam/Adam/linux/tmp/log/log-test-case.sh:34] $ mkdir /ddad/adad/dad
mkdir: cannot create directory ‘/ddad/adad/dad’: No such file or directory
## Exit status: 1
The standard output is unchanged.
Limitations
Limitations are serious, unfortunately.
Exit code of logged command gets discarded
First of all, the exit code of the logged command is discarded, so the user cannot test for it in the next statement. The current code exits the script if there was an error (which I believe is the best behavior), though the script could be modified to expose the exit status instead.
Limited support for bash tracing
The function honors bash tracing with -x. If it finds that the user traces output, it temporarily disables the output (as it would interfere with the trace anyway), and restores it back at the end. Unfortunately, it also appends a few extra lines to the trace.
Unless the user turns off logging (with $logoff), there is a considerable speed penalty for all commands after the first $log_next_line, even if no logging takes place.
In an ideal world the function would disable debug trapping (trap - DEBUG) after each invocation. Unfortunately I don't know how to do that, so beginning with the first $log_next_line macro, the interpretation of each line invokes a custom function.
I use this function before every key command in my complex bootstrapping scripts. With it I can see exactly what was executed and when, and what the output was, without needing to really understand the logic of the lengthy and sometimes messy scripts.

SHELL general function for action state

How can I turn the code below into a general function, to be used throughout a bash script:
if [[ $? = 0 ]]; then
    echo "success " >> $log
else
    echo "failed" >> $log
fi
You might write a wrapper for command execution:
function exec_cmd {
    "$@"
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
And then execute commands in your script using the function:
exec_cmd command1 arg1 arg2 ...
exec_cmd command2 arg1 arg2 ...
...
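A self-contained run of the wrapper, writing to a temporary log for illustration; quoting "$@" keeps commands whose arguments contain spaces intact:

```shell
log=$(mktemp)
exec_cmd() {
  if "$@"; then      # run the wrapped command with its arguments intact
    echo "success" >> "$log"
  else
    echo "failed" >> "$log"
  fi
}
exec_cmd true
exec_cmd false
cat "$log"
```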
If you don't want to wrap the original calls you could use an explicit call, like the following
function check_success {
    if [[ $? = 0 ]]; then
        echo "success " >> $log
    else
        echo "failed" >> $log
    fi
}
ls && check_success
ls non-existant
check_success
There's no really clean way to do that. This is clean and might be good enough?
PS4='($?)[$LINENO]'
exec 2>>"$log"
That will show every command run in the log, and each entry will start with the exit code of the previous command...
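Note this relies on xtrace being enabled; a runnable sketch, contained in a subshell with a temporary log file:

```shell
log=$(mktemp)
(
  PS4='($?)[$LINENO] '   # each trace line starts with the previous exit code
  exec 2>>"$log"
  set -x                 # the trick needs xtrace on: trace lines go to stderr
  true
  false
  true                   # this one is traced with a (1) prefix, from false
)
cat "$log"
```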
You could put this in .bashrc and call it whenever
function log_status { [ $? == 0 ] && echo success >> /tmp/status || echo fail >> /tmp/status; }
If you want it after every command you could make the prompt write to the log (note the original PS1 value is appended).
export PS1="\$([ \$? == 0 ] && echo success>>/tmp/status || echo fail>>/tmp/status)$PS1"
(I'm not experienced with this, perhaps PROMPT_COMMAND is a more appropriate place to put it)
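A PROMPT_COMMAND variant of the same idea (a sketch; the two eval lines simulate bash running PROMPT_COMMAND before each prompt):

```shell
status_log=$(mktemp)
PROMPT_COMMAND="[ \$? -eq 0 ] && echo success >> $status_log || echo fail >> $status_log"
true;  eval "$PROMPT_COMMAND"   # bash would run this before drawing each prompt
false; eval "$PROMPT_COMMAND"
cat "$status_log"               # success, then fail
```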
Or even get more fancy and see the result with colours.
I guess you could also play with getting the last executed command:
How do I get "previous executed command" in a bash script?
Get name of last run program in Bash
BASH: echoing the last command run

OS X Bash, 'watch' command

I'm looking for the best way to duplicate the Linux 'watch' command on Mac OS X. I'd like to run a command every few seconds to pattern match on the contents of an output file using 'tail' and 'sed'.
What's my best option on a Mac, and can it be done without downloading software?
With Homebrew installed:
brew install watch
You can emulate the basic functionality with the shell loop:
while :; do clear; your_command; sleep 2; done
That will loop forever, clear the screen, run your command, and wait two seconds - the basic watch your_command implementation.
You can take this a step further and create a watch.sh script that can accept your_command and sleep_duration as parameters:
#!/bin/bash
# usage: watch.sh <your_command> <sleep_duration>
while :; do
    clear
    date
    $1
    sleep $2
done
Use MacPorts:
$ sudo port install watch
The shell loops above will do the trick, and you could even convert them to an alias (you may need to wrap it in a function to handle parameters):
alias myWatch='_() { while :; do clear; $2; sleep $1; done }; _'
Examples:
myWatch 1 ls ## Self-explanatory
myWatch 5 "ls -lF $HOME" ## Every 5 seconds, list out home directory; double-quotes around command to keep its arguments together
Alternately, Homebrew can install the watch from http://procps.sourceforge.net/:
brew install watch
It may be that "watch" is not what you want. You probably want to ask for help in solving your problem, not in implementing your solution! :)
If your real goal is to trigger actions based on what's seen from the tail command, then you can do that as part of the tail itself. Instead of running "periodically", which is what watch does, you can run your code on demand.
#!/bin/sh
tail -F /var/log/somelogfile | while read line; do
if echo "$line" | grep -q '[Ss]ome.regex'; then
# do your stuff
fi
done
Note that tail -F will continue to follow a log file even if it gets rotated by newsyslog or logrotate. You want to use this instead of the lower-case tail -f. Check man tail for details.
That said, if you really do want to run a command periodically, the other answers provided can be turned into a short shell script:
#!/bin/sh
if [ -z "$2" ]; then
    echo "Usage: $0 SECONDS COMMAND" >&2
    exit 1
fi
PERIOD=$1   # not named SECONDS: under bash, SECONDS is a special auto-incrementing variable
shift 1
while sleep $PERIOD; do
    clear
    "$@"
done
I am going with the answer from here:
bash -c 'while [ 0 ]; do <your command>; sleep 5; done'
But you're really better off installing watch as this isn't very clean...
If watch doesn't want to install via
brew install watch
There is another similar/copy version that installed and worked perfectly for me
brew install visionmedia-watch
https://github.com/tj/watch
Or, in your ~/.bashrc file:
function watch {
    while :; do clear; date; echo; "$@"; sleep 2; done
}
To prevent flickering when your main command takes perceivable time to complete, you can capture the output and only clear the screen once it's done.
function watch { while :; do a=$("$@"); clear; echo -e "$(date)\n\n$a"; sleep 1; done; }
Then use it by:
watch istats
Try this:
#!/bin/bash
# usage: watch [-n integer] COMMAND
case $# in
    0)
        echo "Usage $0 [-n int] COMMAND"
        ;;
    *)
        sleep=2;
        ;;
esac
if [ "$1" == "-n" ]; then
    sleep=$2
    shift; shift
fi
while :; do
    clear;
    echo "$(date) every ${sleep}s $*"; echo
    "$@";
    sleep $sleep;
done
Here's a slightly changed version of this answer that:
checks for valid args
shows a date and duration title at the top
moves the "duration" argument to be the 1st argument, so complex commands can be easily passed as the remaining arguments.
To use it:
Save this to ~/bin/watch
execute chmod 700 ~/bin/watch in a terminal to make it executable.
try it by running watch 1 echo "hi there"
~/bin/watch
#!/bin/bash
function show_help()
{
    echo ""
    echo "usage: watch [sleep duration in seconds] [command]"
    echo ""
    echo "e.g. To cat a file every second, run the following"
    echo ""
    echo " watch 1 cat /tmp/it.txt"
    exit;
}
function show_help_if_required()
{
    if [ "$1" == "help" ]; then
        show_help
    fi
    if [ -z "$1" ]; then
        show_help
    fi
}
function require_numeric_value()
{
    REG_EX='^[0-9]+$'
    if ! [[ $1 =~ $REG_EX ]] ; then
        show_help
    fi
}
show_help_if_required $1
require_numeric_value $1
DURATION=$1
shift
while :; do
    clear
    echo "Updating every $DURATION seconds. Last updated $(date)"
    bash -c "$*"
    sleep $DURATION
done
Use the Nix package manager!
Install Nix, then run nix-env -iA nixpkgs.watch; it should be available for use after completing the install instructions (including sourcing . "$HOME/.nix-profile/etc/profile.d/nix.sh" in your shell).
The watch command that's available on Linux does not exist on macOS. If you don't want to use brew you can add this bash function to your shell profile.
# execute commands at a specified interval of seconds
function watch.command {
    # USAGE: watch.commands [seconds] [commands...]
    # EXAMPLE: watch.command 5 date
    # EXAMPLE: watch.command 5 date echo 'ls -l' echo 'ps | grep "kubectl\\\|node\\\|npm\\\|puma"'
    # EXAMPLE: watch.command 5 'date; echo; ls -l; echo; ps | grep "kubectl\\\|node\\\|npm\\\|puma"' echo date 'echo; ls -1'
    local cmds=()
    for arg in "${@:2}"; do
        # process substitution (not a pipe) so cmds+= runs in this shell
        while read cmd; do
            cmds+=("$cmd")
        done < <(echo "$arg" | sed 's/; /;/g' | tr \; \\n)
    done
    while true; do
        clear
        for cmd in "${cmds[@]}"; do
            eval "$cmd"
        done
        sleep "$1"
    done
}
https://gist.github.com/Gerst20051/99c1cf570a2d0d59f09339a806732fd3
