invoke xterm and run command with variable - tcsh

I would like to invoke an xterm with two commands, where the first command echoes some header message, followed by some other command (for this sample code I use the sleep command for simplicity). The exec command with "echo $msg1" doesn't print out any message. Please help me fix it.
#!/bin/csh -f
set msg1 = ""
set msg1 = "$msg1#[INFO] xx"
set msg1 = "$msg1#[INFO] yy"
# not okay
exec /usr/bin/xterm -e sh -c 'echo "$msg1" | tr "#" "\n" ;sleep 5'
# okay
exec /usr/bin/xterm -e sh -c 'echo hello;sleep 5'
exec /usr/bin/xterm -e sh -c 'echo hello#world | tr "#" "\n" ;sleep 5'

Variables don't work inside single quotes ('), only double quotes ("):
% set x = 'asdf'
% echo '$x'
$x
% echo "$x"
asdf
Right now, the sh process inside the xterm will see echo "$msg1", but it doesn't know about the $msg1 variable since that's local to the script, which is a different process.
You can adjust that command to something like:
exec /usr/bin/xterm -e sh -c "echo '$msg1' | tr '#' '\n' ; sleep 5"
But this won't work well if msg1 can contain a single quote or ends with a \. Quoting is complex, especially since you're dealing with two different shells (your script and the sh inside xterm), each with its own quoting rules, so it's probably better to use an environment variable:
setenv msg1 "$msg1"
And then you can use the same command as you had above, since the environment variables are inherited by the child process.
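Putting that together with the original script, a minimal sketch might look like this (untested; it keeps the /usr/bin/xterm path and the # separator from the question):
#!/bin/csh -f
set msg1 = ""
set msg1 = "$msg1#[INFO] xx"
set msg1 = "$msg1#[INFO] yy"
# export msg1 into the environment so the sh spawned by xterm inherits it
setenv msg1 "$msg1"
# the single-quoted $msg1 is now expanded by the sh inside xterm, not by csh
exec /usr/bin/xterm -e sh -c 'echo "$msg1" | tr "#" "\n" ; sleep 5'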

OSX Command line: echo command before running it? [duplicate]

In a shell script, how do I echo all shell commands called and expand any variable names?
For example, given the following line:
ls $DIRNAME
I would like the script to run the command and display the following
ls /full/path/to/some/dir
The purpose is to save a log of all shell commands called and their arguments. Is there perhaps a better way of generating such a log?
set -x or set -o xtrace expands variables and prints a little + sign before the line.
set -v or set -o verbose does not expand the variables before printing.
Use set +x and set +v to turn off the above settings.
On the first line of the script, one can put #!/bin/sh -x (or -v) to have the same effect as set -x (or -v) later in the script.
The above also works with /bin/sh.
See the bash-hackers' wiki on set attributes, and on debugging.
$ cat shl
#!/bin/bash
DIR=/tmp/so
ls $DIR
$ bash -x shl
+ DIR=/tmp/so
+ ls /tmp/so
$
set -x will give you what you want.
Here is an example shell script to demonstrate:
#!/bin/bash
set -x #echo on
ls $PWD
This expands all variables and prints the full commands before output of the command.
Output:
+ ls /home/user/
file1.txt file2.txt
I use a function to echo and run the command:
#!/bin/bash
# Function to display commands
exe() { echo "\$ $@" ; "$@" ; }
exe echo hello world
Which outputs
$ echo hello world
hello world
For more complicated commands pipes, etc., you can use eval:
#!/bin/bash
# Function to display commands
exe() { echo "\$ ${@/eval/}" ; "$@" ; }
exe eval "echo 'Hello, World!' | cut -d ' ' -f1"
Which outputs
$ echo 'Hello, World!' | cut -d ' ' -f1
Hello
You can also toggle this for select lines in your script by wrapping them in set -x and set +x, for example,
#!/bin/bash
...
if [[ ! -e $OUT_FILE ]];
then
echo "grabbing $URL"
set -x
curl --fail --noproxy $SERV -s -S $URL -o $OUT_FILE
set +x
fi
shuckc's answer for echoing select lines has a few downsides: you end up with the following set +x command being echoed as well, and you lose the ability to test the exit code with $? since it gets overwritten by the set +x.
Another option is to run the command in a subshell:
echo "getting URL..."
( set -x ; curl -s --fail $URL -o $OUTFILE )
if [ $? -ne 0 ] ; then
echo "curl failed"
exit 1
fi
which will give you output like:
getting URL...
+ curl -s --fail http://example.com/missing -o /tmp/example
curl failed
This does incur the overhead of creating a new subshell for the command, though.
According to TLDP's Bash Guide for Beginners: Chapter 2. Writing and debugging scripts:
2.3.1. Debugging on the entire script
$ bash -x script1.sh
...
There is now a full-fledged debugger for Bash, available at SourceForge. These debugging features are available in most modern versions of Bash, starting from 3.x.
2.3.2. Debugging on part(s) of the script
set -x # Activate debugging from here
w
set +x # Stop debugging from here
...
Table 2-1. Overview of set debugging options
Short | Long notation | Result
-------+---------------+--------------------------------------------------------------
set -f | set -o noglob | Disable file name generation using metacharacters (globbing).
set -v | set -o verbose| Prints shell input lines as they are read.
set -x | set -o xtrace | Print command traces before executing command.
...
Alternatively, these modes can be specified in the script itself, by
adding the desired options to the first line shell declaration.
Options can be combined, as is usually the case with UNIX commands:
#!/bin/bash -xv
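To see the difference between the -v and -x rows of that table side by side, here is a small sketch (the script name trace_demo.sh is made up for this illustration):
#!/bin/bash
# trace_demo.sh
DIR=/tmp
ls $DIR
Running bash -v trace_demo.sh prints each line as it is read (for example ls $DIR), while bash -x trace_demo.sh prints the expanded trace (for example + ls /tmp).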
Another option is to put "-x" at the top of your script instead of on the command line:
$ cat ./server
#!/bin/bash -x
ssh user@server
$ ./server
+ ssh user@server
user@server's password: ^C
$
You can execute a Bash script in debug mode with the -x option.
This will echo all the commands.
bash -x example_script.sh
# Console output
+ cd /home/user
+ mv text.txt mytext.txt
You can also save the -x option in the script. Just specify the -x option in the shebang.
######## example_script.sh ###################
#!/bin/bash -x
cd /home/user
mv text.txt mytext.txt
##############################################
./example_script.sh
# Console output
+ cd /home/user
+ mv text.txt mytext.txt
Type "bash -x" on the command line before the name of the Bash script. For instance, to execute foo.sh, type:
bash -x foo.sh
Combining all the answers, I found this to be the best and simplest:
#!/bin/bash
# https://stackoverflow.com/a/64644990/8608146
exe(){
set -x
"$#"
{ set +x; } 2>/dev/null
}
# example
exe go generate ./...
The { set +x; } 2>/dev/null trick is from https://stackoverflow.com/a/19226038/8608146
If the exit status of the command is needed, as mentioned here, use
{ STATUS=$?; set +x; } 2>/dev/null
and then use $STATUS later, for example exit $STATUS at the end.
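A hedged sketch of how those pieces might fit together (the STATUS name comes from the snippet above; everything else is illustrative, not taken verbatim from the linked answers):
#!/bin/bash
exe() {
    set -x
    "$@"
    # grab $? before anything else runs, then turn tracing off quietly
    { STATUS=$?; set +x; } 2>/dev/null
    return $STATUS
}
exe false
echo "exit status: $?"    # prints: exit status: 1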
A slightly more useful version:
#!/bin/bash
# https://stackoverflow.com/a/64644990/8608146
_exe(){
[ $1 == on ] && { set -x; return; } 2>/dev/null
[ $1 == off ] && { set +x; return; } 2>/dev/null
echo + "$#"
"$#"
}
exe(){
{ _exe "$#"; } 2>/dev/null
}
# examples
exe on # turn on same as set -x
echo This command prints with +
echo This too prints with +
exe off # same as set +x
echo This does not
# can also be used for individual commands
exe echo what up!
For zsh, to echo each command:
setopt VERBOSE
And for debug tracing:
setopt XTRACE
To allow compound commands to be echoed, I use eval plus Soth's exe function to echo and run the command. This is useful for piped commands that would otherwise show nothing, or only the initial part of the piped command.
Without eval:
exe() { echo "\$ $@" ; "$@" ; }
exe ls -F | grep *.txt
Outputs:
$
file.txt
With eval:
exe() { echo "\$ $@" ; "$@" ; }
exe eval 'ls -F | grep *.txt'
Which outputs
$ exe eval 'ls -F | grep *.txt'
file.txt
For csh and tcsh, you can set verbose or set echo (or you can even set both, but it may result in some duplication most of the time).
The verbose option prints pretty much the exact shell expression that you type.
The echo option is more indicative of what will be executed through spawning.
http://www.tcsh.org/tcsh.html/Special_shell_variables.html#verbose
http://www.tcsh.org/tcsh.html/Special_shell_variables.html#echo
Special shell variables
verbose
If set, causes the words of each command to be printed, after history substitution (if any). Set by the -v command line option.
echo
If set, each command with its arguments is echoed just before it is executed. For non-builtin commands all expansions occur before echoing. Builtin commands are echoed before command and filename substitution, because these substitutions are then done selectively. Set by the -x command line option.
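A minimal tcsh sketch of the echo option (the file contents are illustrative):
#!/bin/tcsh -f
set echo        # from here on, each command is echoed before it runs
set dir = /tmp
ls $dir         # echoed (after expansion) as: ls /tmp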
$ cat exampleScript.sh
#!/bin/bash
name="karthik";
echo $name;
bash -x exampleScript.sh
Output is as follows:
+ name=karthik
+ echo karthik
karthik

Bash: Execute command WITH ARGUMENTS in new terminal [duplicate]

So I want to open a new terminal in bash and execute a command with arguments.
As long as I only use something like ls as the command it works fine, but when I use something like route -n, i.e. a command with arguments, it doesn't work.
The code:
gnome-terminal --window-with-profile=Bash -e whoami #WORKS
gnome-terminal --window-with-profile=Bash -e route -n #DOESNT WORK
I already tried putting "" around the command and all that, but it still doesn't work.
You can start a new terminal with a command using the following:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>"
To continue the terminal with the normal bash profile, add exec bash:
gnome-terminal --window-with-profile=Bash -- \
bash -c "<command>; exec bash"
Here's how to create a Here document and pass it as the command:
cmd="$(printf '%s\n' 'wc -w <<-EOF
First line of Here document.
Second line.
The output of this command will be '15'.
EOF' 'exec bash')"
xterm -e bash -c "${cmd}"
To open a new terminal and run an initial command with a script, add the following in a script:
nohup xterm -e bash -c "$(printf '%s\nexec bash' "$*")" &>/dev/null &
When $* is quoted, it expands the arguments to a single word, with each separated by the first character of IFS. nohup and &>/dev/null & are used only to allow the terminal to run in the background.
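For example, if that one-liner were saved as a script (the name newterm.sh is made up here), it could be invoked with the command and its arguments:
# newterm.sh contains the nohup xterm line above
./newterm.sh route -n    # opens an xterm, runs `route -n`, then drops into bash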
Try this:
gnome-terminal --window-with-profile=Bash -e 'bash -c "route -n; read"'
The final read prevents the window from closing after execution of the previous commands. It will close when you press a key.
If you want to experience headaches, you can try with more quote nesting:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "route -n; read -p '"'Press a key...'"'"'
(In the following examples there is no final read. Let’s suppose we fixed that in the profile.)
If you want to print an empty line and enjoy multi-level escaping too:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c "printf \\\\n; route -n"'
The same, with another quoting style:
gnome-terminal --window-with-profile=Bash \
-e 'bash -c '\''printf "\n"; route -n'\'
Variables are expanded in double quotes, not single quotes, so if you want them expanded you need to ensure that the outermost quotes are double:
command='printf "\n"; route -n'
gnome-terminal --window-with-profile=Bash \
-e "bash -c '$command'"
Quoting can become really complex. When you need something more advanced than a couple of simple commands, it is advisable to write an independent shell script with all the readable, parametrized code you need, save it somewhere, say /home/user/bin/mycommand, and then invoke it simply as
gnome-terminal --window-with-profile=Bash -e /home/user/bin/mycommand
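A sketch of what such a helper script might contain, reusing the commands from the examples above (the contents are purely illustrative):
#!/bin/bash
# saved as /home/user/bin/mycommand and made executable with chmod +x
printf '\n'
route -n
read -p 'Press a key...'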

How to execute arbitrary command under `bash -c`

What is a procedure to decorate an arbitrary bash command to execute it in a subshell? I cannot change the command, I have to decorate it on the outside.
the best I can think of is
>bash -c '<command>'
works on these:
>bash -c 'echo'
>bash -c 'echo foobar'
>bash -c 'echo \"'
but what about commands such as
echo \'
and especially
echo \'\"
The decoration has to be always the same for all commands. It has to always work.
You say "subshell" - you can get one of those by just putting parentheses around the command:
x=outer
(x=inner; echo "x=$x"; exit)
echo "x=$x"
produces this:
x=inner
x=outer
You could (ab)use heredocs:
bash -c "$(cat <<-EOF
echo \'\"
EOF
)"
This is one way without using -c option:
bash <<EOF
echo \'\"
EOF
What you want to do is exactly the same as escapeshellcmd() in PHP (http://php.net/manual/fr/function.escapeshellcmd.php)
You just need to escape #&;`|*?~<>^()[]{}$\, \x0A and \xFF. ' and " are escaped only if they are not paired.
But beware of security issues...
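One hedged alternative on the bash side (not the PHP function itself) is to let bash's printf %q do the escaping, so that each word survives a second round of shell parsing:
# printf %q quotes every argument for safe reuse as shell input
decorated="$(printf '%q ' echo \' \")"
bash -c "$decorated"    # the inner bash runs: echo \' \"  and prints: ' "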
Let bash take care of it this way:
1) prepare the command as an array:
astrCmd=(echo \'\");
2) export the array as a simple string:
export EXPORTEDastrCmd="`declare -p astrCmd| sed -r "s,[^=]*='(.*)',\1,"`";
3) restore the array and run it as a full command:
bash -c "declare -a astrCmd='$EXPORTEDastrCmd';\${astrCmd[#]}"
Create a function to make these steps more easy like:
FUNCbash(){
astrCmd=("$#");
export EXPORTEDastrCmd="`declare -p astrCmd| sed -r "s,[^=]*='(.*)',\1,"`";
bash -c "declare -a astrCmd='$EXPORTEDastrCmd';\${astrCmd[#]}";
}
FUNCbash echo \'\"

Shell script with sudo sh

I want to make a script to install a program (ROS) and I need to write this line:
sudo sh -c 'echo "TEXT VARIABLE TEXT" > systemFile' # to write in systemaFile I need sudo sh
if echo is just fixed text, it works.
If echo is text + variable it doesn't work.
I've tried with:
read f1 < <(lsb_release -a | grep Code* | cut -f2) #codename is writted in variable $f1
echo $f1 # returns "quantal" as I expected
sudo sh -c 'echo "TEXT $f1 TEXT" > systemFile' #f1 is empty, WHY?
So then I have to assign the variable inside the same sudo sh instruction, for example:
sudo sh -c ' read f1 < <(lsb_release -a | grep Code* | cut -f2) ; echo "TEXT $f1 TEXT" > systemFile'
sh: 1: Syntax error: redirection unexpected
Try a script line like this:
sudo sh -c 'echo TEXT '$f1' TEXT > systemFile'
sudo bash -c 'echo TEXT '$f1' TEXT > systemFile'
I have used this script line in a .sh file and it works fine.
This can work too:
sudo sh -c "echo 'TEXT $VARIABLE TEXT' > systemFile"
However, it is generally not recommended to unnecessarily run a command as sudo. You seem to want only the redirection to be "sudoed". So try these options:
echo "TEXT $VARIABLE TEXT" | sudo tee systemFile >/dev/null
echo "TEXT $VARIABLE TEXT" | sudo dd of=systemFile
The echo can be a plain echo or any other command you want. Note that this command is not being run under sudo.
use -E option to run sudo:
sudo -E sh -c 'echo "TEXT VARIABLE TEXT" > systemFile'
from man sudo:
-E
The -E (preserve environment) option indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the -E option is specified and the user does not have permission to preserve the environment.
Shell variables are only expanded in "double quotes", not in 'single quotes'.
$ v=value
$ echo $v
value
$ echo "$v"
value
$ echo '$v'
$v
You're starting a new instance of sh which then runs the command echo "TEXT $f1 TEXT" > systemFile.
Since $f1 has not been assigned within the new process, it's empty.
To fix this, you can expand $f1 and pass it in the command line:
sudo sh -c 'echo "TEXT '$f1' TEXT" > systemFile'
Or export it so it's available to the child process (using -E to preserve the environment, thanks anishsane):
export f1
sudo -E sh -c 'echo "TEXT $f1 TEXT" > systemFile'

bash: capturing the output of set -v

How many times have you seen someone trying to "log the command I run and the output of the command"? It happens often, and for seeing the command you're running, set -v is nice (set -x is nice too, but it can be harder to read). But what happens when you want to capture the command being run... but not all commands being run?
Running interactively I don't see a way to capture the set -v output at all.
set -v
echo a 1>/dev/null # 'echo a 1>/dev/null' is printed to the screen
echo a 2>/dev/null # 'echo a 2>/dev/null\na' is printed to the screen
I can put this in a script and things get better:
echo 'set -v'$'\n''echo a' > setvtest.sh
bash setvtest.sh 1>/dev/null # 'echo a' is printed to the screen
bash setvtest.sh 2>/dev/null # 'a' is printed to the screen
Aha, so from a script it goes to stderr. What about inline?
set +v
{ set -v ; echo a ; } 1>/dev/null # no output
set +v
( set -v ; echo a ; ) 1>/dev/null # no output
Hmm, no luck there.
Interestingly, and as a side note, this produces no output:
echo 'set -v ; echo a' > setvtest.sh
bash setvtest.sh 1>/dev/null
I'm not sure why, but perhaps that's also why the subshell version returns nothing.
What about shell functions?
setvtest2 () {
set -v
echo a
}
setvtest2 # 'a'
set +v
setvtest2 1>/dev/null # nothing
set +v
setvtest2 2>/dev/null # nothing
Now the question: Is there a nice way to capture the output of set -v?
Here's my not-nice hack, so I'm looking for something less insane:
#!/usr/bin/env bash
script=/tmp/$$.script
output=/tmp/$$.out
echo 'set -v'$'\n'"$1" >"$script"
bash "$script" 1>"$output"
cat "$output"
rm -f "$script" "$output"
Now I can execute simple scripts
bash gen.sh 'echo a' 1>/dev/null # prints 'echo a'
bash gen.sh 'echo a' 2>/dev/null # prints 'a'
But surely there are better ways.
You can run bash with the -v option instead of turning it on and off via set:
bash -v -c "echo a" 1>/dev/null # prints 'echo a'
bash -v -c "echo a" 2>/dev/null # prints 'a'
The dark side of this solution is that each such line requires creating a new bash process, but you will not have to remember to switch the -v option back off, since it is switched on only in the child process.
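If the point is to capture that output rather than just see it, a small sketch of splitting the two streams (the file names are made up):
# the -v trace goes to stderr, the command's own output to stdout
bash -v -c "echo a" 1>cmd.out 2>cmd.trace
cat cmd.trace    # prints: echo a
cat cmd.out      # prints: a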
How about:
#!/bin/bash
set -o xtrace
Stuff.....

Resources