The question is simple: I want to evaluate the current value of PS1 in my bash script.
Everything I find on Google points to tutorials on pimping it up, but I want to evaluate it to see how it would be rendered by my current terminal, or at least by some terminal.
Is there any software or function that would help me achieve that? Of course I'd like to have all the escape sequences evaluated, so echo $PS1 is not that useful in my case.
Bash 4.4+ solution using parameter transformation for a prompt string: echo "${PS1@P}"
[adamhotep@tabasco ~]$ echo "the prompt is '${PS1@P}'"
the prompt is '[adamhotep@tabasco ~]$'
[adamhotep@tabasco ~]$ TEST_STRING='\u is dining at \t using \s \V'
[adamhotep@tabasco ~]$ echo "${TEST_STRING}"
\u is dining at \t using \s \V
[adamhotep@tabasco ~]$ echo "${TEST_STRING@P}"
adamhotep is dining at 21:45:10 using bash 5.0.3
[adamhotep@tabasco ~]$
From the Bash Reference Manual page on Shell Parameter Expansion:
${parameter@operator}
Parameter transformation. The expansion is either a transformation of the value of parameter or information about parameter itself, depending on the value of operator.
Each operator is a single letter:
Q The expansion is a string that is the value of parameter quoted in a
format that can be reused as input.
E The expansion is a string that is the value of parameter with backslash
escape sequences expanded as with the $'…' quoting mechanism.
P The expansion is a string that is the result of expanding the value of
parameter as if it were a prompt string (see PROMPTING below).
A The expansion is a string in the form of an assignment statement or
declare command that, if evaluated, will recreate parameter with its
attributes and value.
a The expansion is a string consisting of flag values representing
parameter's attributes.
If parameter is @ or *, the operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the operation is applied to each member of the array in turn, and the expansion is the resultant list.
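As a quick illustration of the other operators (my own example, not from the question; the output shown is roughly what a bash 5.x session prints, so details may vary):
$ msg=$'two\nlines'
$ echo "${msg@Q}"        # quote the value for reuse as shell input
$'two\nlines'
$ declare -i num=42
$ echo "${num@A}"        # an assignment statement that recreates the variable
declare -i num='42'
$ echo "${num@a}"        # just the attribute flags
i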
(See also this answer from duplicate question Echo expanded PS1.)
Z Shell (zsh) can do this with ${(%%)PS1} or with its print builtin's -P flag:
[adamhotep@tabasco ~]% echo "the prompt is '${(%%)PS1}'"
the prompt is '[adamhotep@tabasco ~]%'
[adamhotep@tabasco ~]% print -P "the prompt is '$PS1'"
the prompt is '[adamhotep@tabasco ~]%'
[adamhotep@tabasco ~]% TEST_STRING="%n is dining at %* using %N $ZSH_VERSION"
[adamhotep@tabasco ~]% echo "$TEST_STRING"
%n is dining at %* using %N 5.7.1
[adamhotep@tabasco ~]% echo "${(%%)TEST_STRING}"
adamhotep is dining at 11:49:01 using zsh 5.7.1
[adamhotep@tabasco ~]% print -P "$TEST_STRING"
adamhotep is dining at 11:49:07 using zsh 5.7.1
[adamhotep@tabasco ~]%
The Zsh Expansion and Substitution manual tells us:
Parameter Expansion Flags. If the opening brace is directly followed by an opening parenthesis, the string up to the matching closing parenthesis will be taken as a list of flags. In cases where repeating a flag is meaningful, the repetitions need not be consecutive; for example, (q%q%q) means the same thing as the more readable (%%qqq). The following flags are supported:…
% Expand all % escapes in the resulting words in the same way as in
prompts (see Prompt Expansion). If this flag is given twice, full
prompt expansion is done on the resulting words, depending on the
setting of the PROMPT_PERCENT, PROMPT_SUBST and PROMPT_BANG options.
From the Zsh Builtins documentation for print:
-P Perform prompt expansion (see Prompt Expansion). In combination with -f, prompt escape sequences are parsed only within interpolated arguments, not within the format string.
I would get it like this:
echo $PS1
Then edit it in an editor. After that, set it for a test (this only lasts while the session is active):
PS1='\[\033[1m\]\[\033[34m\]\u\[\033[90m\]@\[\033[01;35m\]\h:\[\033[01;32m\]\W\[\033[0m\]$ '
(\u is the user, \h is the host, \w is the full path and \W is the short path)
And if I like it, I will make it permanent by changing the value of PS1 in ~/.bashrc.
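For example, it could end up as something like this (a minimal sketch; adjust the colours to taste):
# at the end of ~/.bashrc
PS1='\[\033[1m\]\[\033[34m\]\u\[\033[90m\]@\[\033[01;35m\]\h:\[\033[01;32m\]\W\[\033[0m\]$ '
Then reload it with source ~/.bashrc or open a new terminal.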
P.S.:
If you want to see all environment variables:
printenv
or just one of them:
printenv <name_of_var_to_see>
One more possibility, using the script utility (part of the bsdutils package on Ubuntu):
$ TEST_PS1="\e[31;1m\u@\h:\n\e[0;1m\$ \e[0m"
$ RANDOM_STRING=some_random_string_here_that_is_not_part_of_PS1
$ script /dev/null <<-EOF | awk 'NR==2' RS=$RANDOM_STRING
PS1="$TEST_PS1"; HISTFILE=/dev/null
echo -n $RANDOM_STRING
echo -n $RANDOM_STRING
exit
EOF
<prints the formatted prompt properly here>
The script command writes a transcript to the specified file, and the output is also shown on stdout. If the filename is omitted, it creates a file called typescript.
Since we are not interested in the log file in this case, the filename is given as /dev/null. Instead, the stdout of the script command is passed to awk for further processing.
The entire thing can also be encapsulated in a function, and the rendered prompt can be assigned to a variable, as in the sketch below.
This approach also supports parsing of PROMPT_COMMAND...
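For example, wrapped up it could look roughly like this (a sketch only, assuming the older script behaviour used above; the function name render_ps1 and the marker value are made up here):
render_ps1() {    # render the given prompt string in a throwaway interactive shell
    local marker="RANDOM_MARKER_NOT_IN_PS1"
    script /dev/null <<EOF | awk 'NR==2' RS="$marker"
PS1="$1"; HISTFILE=/dev/null
echo -n $marker
echo -n $marker
exit
EOF
}
rendered=$(render_ps1 "$TEST_PS1")    # the rendered prompt, assigned to a variable
printf '%s\n' "$rendered"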
EDIT:
It appears that newer versions of script echo the piped stdin into the typescript. To handle that, the above mechanism can be changed to:
$ TEST_PS1="\e[31;1m\u@\h:\n\e[0;1m\$ \e[0m"
$ RANDOM_STRING=some_random_string_here_that_is_not_part_of_PS1
$ script /dev/null <<-EOF | awk '{old=current; current=$0;} END{print old}' RS=$RANDOM_STRING
PS1="$TEST_PS1"; HISTFILE=/dev/null
alias $RANDOM_STRING=true
$RANDOM_STRING
$RANDOM_STRING
EOF
<prints the formatted prompt properly here>
Explanation:
Try entering these commands manually in a terminal: copy the commands inside the heredoc as they are and paste them with a mouse middle click. The script command's stdout will contain something very similar.
E.g., with the above case, the output of the script command gives this:
PS1="\e[31;1m\u@\h:\n\e[0;1m$ \e[0m"; HISTFILE=/dev/null
alias some_random_string_here_that_is_not_part_of_PS1=true
some_random_string_here_that_is_not_part_of_PS1
some_random_string_here_that_is_not_part_of_PS1
\e[0m"; HISTFILE=/dev/nullhsane-dev : ~/Desktop $ PS1="\e[31;1m\u#\h:\n\e[0;1m$
anishsane#anishsane-dev:
$ alias some_random_string_here_that_is_not_part_of_PS1=true
anishsane#anishsane-dev:
$ some_random_string_here_that_is_not_part_of_PS1
anishsane#anishsane-dev:
$ some_random_string_here_that_is_not_part_of_PS1
anishsane#anishsane-dev:
$ exit
Split that stdout with "some_random_string_here_that_is_not_part_of_PS1" as the delimiter (awk's record separator) and print the last-but-one record.
EDIT2:
Another mechanism (using bash source code and gdb):
$ gdb -batch -p $$ -ex 'call bind_variable("expanded_PS1", decode_prompt_string (get_string_value ("PS1")), 0)'
$ echo "$expanded_PS1"
<prints the formatted prompt properly here>
There is a tiny issue here, though. The \[ and \] markers in PS1 will get printed as the \1/\2 control characters. You can remove those with tr -d '\1\2' <<< "$expanded_PS1".
If you get an error like gdb failed to attach to the process (seems to happen on Ubuntu), run gdb with sudo.
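If you use this often, the same thing can be wrapped up roughly like so (a sketch only; the function name is made up, and you may still need sudo or a relaxed ptrace setting for gdb to attach):
expand_ps1_gdb() {
    # ask the running bash itself to expand PS1, via gdb
    gdb -batch -p $$ -ex 'call bind_variable("expanded_PS1", decode_prompt_string (get_string_value ("PS1")), 0)' >/dev/null 2>&1
    # strip the \1/\2 markers left behind by \[ and \]
    tr -d '\1\2' <<< "$expanded_PS1"
}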
Another way to do this would be to eval an echo of your prompt to handle the expansion (I'm not sure why the brackets remain). This is most likely less robust than @anishsane's method, but may be a little quicker:
show-prompt() {
eval 'echo -en "'$PS1'"' | sed -e 's#\\\[##g' -e 's#\\\]##g'
}
# To show it in a function registered with `complete -F` on
# a single tab, and keep the user's input:
show-prompt
echo -n "${COMP_WORDS[#]}"
I had tinkered with that for this GitHub issue.
Try the command below:
echo $PS1 |
sed -e s/'\\d'/"$(date +'%a %b %_d')"/g |
sed -e s/'\\t'/"$(date +'%T')"/g |
sed -e s/'\\@'/"$(date +'%r')"/g |
sed -e s/'\\T'/"$(date +'%r'| awk {'print $1'})"/g |
sed -e s/'\\e'//g |
sed -e s/'\\h'/"$HOSTNAME"/g |
sed -e s/'\\H'/"$HOSTNAME"/g |
sed -e s/'\\u'/"$USER"/g |
sed -e s#'\\W'#"$(basename "$(pwd)")"#g |
sed -e s/'\\w'/"$(pwd | sed -e s#$HOME#'~'#g )"/g |
sed -e s/"\\\\"//g |
sed -e s/"\\["//g |
sed -e s/"\\]"/*/g |
cut -d'*' -f2 |
cut -d';' -f2 |
sed s/\ //g |
sed -e s/[a-z]$/"$([ "$USER" != "root" ] && echo \$ || echo \#)"/g
Edit the /etc/bashrc file.
You can use this as an example and check the output:
# If the id command returns 0, you have root access.
if [ $(id -u) -eq 0 ];
then # you are root, set red colour prompt
PS1="\\[$(tput setaf 1)\\]\\u@\\h:\\w #\\[$(tput sgr0)\\]"
else # normal
PS1="[\\u@\\h:\\w] $"
fi
I'm fetching an event description from an API using curl and assigning the results to a variable in bash like this:
Event=$( curl -s -X GET https://api.vendor.com/v1/events/ev_$API_ID\
-H 'Accept: application/json' \
-u 'mykey:' )
EVTITLE=$(echo $Event | jq -r '.name')
DESC=$(echo $Event | jq -r '.description')
This is working well so far. But sometimes the EVTITLE or DESC strings contain shell special characters such as &, !, and sometimes quotes.
So, later, when I go to pass the variable to a sed command like this:
(to replace values in a template file)
ti_sed="s/EVTITLE/"$EVTITLE"/"
sed -i -e "$ti_sed" filename
Where the value in $EVTITLE is something like
Amy does Q&A for you and "other things" !
I'd like to avoid having bash interpret those strings before sed goes to work.
Is there a way to groom the strings so the final sed output looks like the input?
For example can I get the string value of $EVTITLE between single quotes?
Is there a way to groom the strings so the final sed output looks like the input?
Here's a bash demo script which reads strings from a temporary JSON
file into an indexed array and has GNU sed write its own conversion
script to edit a template.
Note that \n, \r, \t, \u etc. in the JSON source will be converted
by jq -r before bash and sed see them. The bash script reads
newline-delimited lines and does not work for JSON strings containing \n.
More comments below.
#!/bin/bash
jsonfile="$(mktemp)" templatefile="$(mktemp)"
# shellcheck disable=SC2064
trap "rm -f -- '${jsonfile}' '${templatefile}'" INT EXIT
cat << 'HERE' > "${jsonfile}"
{
"Name":"A1",
"Desc":"*A* \\1 /does/ 'Q&A' for you\tand \"other things\" \\# $HOME !"
}
HERE
printf '%s\n' '---EVTITLE---' > "${templatefile}"
mapfile -t vars < <(
jq -r '.Name, .Desc' < "${jsonfile}"
)
wait "$!" || exit ## abort if jq failed
# shellcheck disable=SC2034
name="${vars[0]}" desc="${vars[1]}"
printf '%s\n' "${desc}" |
tee /dev/stderr |
sed -e 's/[\\/&\n]/\\&/g' -e 's/.*/s\/EVTITLE\/&\//' |
tee /dev/stderr |
sed -f /dev/stdin "${templatefile}"
These are the 3 lines output by the script (with tabs expanding to
different lengths) showing the contents of:
the shell variable desc
the generated sed script
the edited template file
*A* \1 /does/ 'Q&A' for you and "other things" \# $HOME !
s/EVTITLE/*A* \\1 \/does\/ 'Q\&A' for you and "other things" \\# $HOME !/
---*A* \1 /does/ 'Q&A' for you and "other things" \# $HOME !---
bash stores the string it reads and passes it on without modification, via printf, to sed, which in turn adds the escapes needed for a replacement string to be inserted between s/EVTITLE/ and /, i.e. the sed script required to edit the template file.
In the replacement section of a sed substitute command, the following have a special meaning according to POSIX:
\ (backslash), the escape character itself
the s command delimiter (default is /, but it may be anything other than backslash and newline)
& (ampersand), referencing the entire matched portion
\N (where N is one of the digits 1 through 9), referencing a matched group
a literal newline
but several seds recognize other escapes as replacements. For example,
GNU sed will replace \f, \n, \t, \v etc. as in C, and (unless
--posix option) its extensions \L, \l, \U, \u, and \E act
on the replacement.
(More on this by info sed -n 'The "s" Command', info sed -n Escapes,
info sed --index-search POSIXLY_CORRECT.)
What this means is that all backslash, command delimiter, ampersand,
and newline characters in the input must be escaped, i.e. prefixed with
a backslash, if they are to represent themselves when used in a
replacement section. This is done by asking sed to s/[\\/&\n]/\\&/g.
Recall that most of the meta characters used in regular expressions
(and the shell, for that matter), such as ^$.*[]{}(), have no special
meaning when appearing in the replacement section of sed's s
command and so should not be escaped there. Contrariwise, & is not
a regex meta character.
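If your titles are always single-line, the same escaping can also be wrapped in a tiny helper (a sketch only; the function name sed_escape_replacement and the file name templatefile are made up here, and / is assumed as the s delimiter):
sed_escape_replacement() {
    # escape backslash, the / delimiter and & so the string is literal in a replacement
    printf '%s' "$1" | sed 's/[\/&\\]/\\&/g'
}
EVTITLE='Amy does Q&A for you and "other things" !'
safe=$(sed_escape_replacement "$EVTITLE")
sed -e "s/EVTITLE/${safe}/" templatefile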
I am trying to run a complex grep command from a shell (currently zsh on macOS, but bash would be OK).
I want to pass variables, i.e. $1 and $2, to the command: grep -e 'something $1' -e 'somethingelse $2' file
For instance my script:
#!/bin/zsh
echo ------
echo grep -e "'"something $1"'" -e "'"somethingelse $2"'" file
echo ------
grep -e "'"something $1"'" -e "'"somethingelse $2"'" file
This doesn't work with:
% ~/scripts/test cat mouse
------
grep -e 'something cat' -e 'somethingelse mouse' file
------
grep: cat': No such file or directory
grep: mouse': No such file or directory
Any idea?
Don't try to add single-quotes when you run the command; just put double-quotes around the pattern (including the parameter):
#!/bin/zsh
echo ------
echo grep -e "'something $1'" -e "'somethingelse $2'" file
echo ------
grep -e "something $1" -e "somethingelse $2" file
Note that when echoing it, I used single-quotes inside the double-quotes. They'll be printed, so it'll look ok, but the shell won't treat them as syntactically significant. When actually running grep, you don't want single-quotes at all.
Well, unless the something contains escapes or dollar signs; in that case, you can either escape them:
grep -e "\$ometh\\ng $1" -e "\$ometh\\nge\\se $2" file
Or mix single- and double-quoting, with single-quotes around the fixed pattern part, and double-quotes just around the parameter part:
grep -e '$ometh\ng '"$1" -e '$ometh\nge\se '"$2" file
I don't know why you want grep to see your quotes. Assuming your literal string something does not contain spaces or other characters which are significant to the shell (most notably filename expansion wildcards) and you are using zsh,
grep something$1 FILE
would work. Of course if you have spaces in or around your something, you need to quote it:
grep 'something '$1 FILE # Significant space between something and $1
or
grep "something $1" FILE
Since you also mentioned bash: In bash, only the last form (using double quotes) makes sense, because if $1 contained spaces, bash would do word splitting.
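A rough illustration of that word splitting in bash (the positional parameter is set by hand here just for the demo):
set -- 'white cat'               # pretend $1 was passed as "white cat"
grep something$1 FILE            # bash runs: grep somethingwhite cat FILE
grep "something $1" FILE         # bash runs: grep 'something white cat' FILE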
I am trying to find and replace specific text content using the sed command, run via a shell script.
Below is the sample script that I am using:
fp=/asd/filename.txt
fd="sed -i -E 's ($2).* $2:$3 g' ${fp}"
eval $fd
and executing the same by passing the arguments:
./test.sh update asd asdfgh
But if the argument string contains $, it breaks the command and replaces the content with the wrong values, like
./test.sh update asd $apr1$HnIF6bOt$9m3NzAwr.aG1Yp.t.bpIS1.
How can I make sure that the values inside the variables are not expanded because of the $?
Updated
The test.sh script:
set -xv
fp="/asd/filename.txt"
sed -iE "s/(${2//'$'/'\$'}).*/${2//'$'/'\$'}:${3//'$'/'\$'}/g" "$fp"
The text file filename.txt:
hello:world
Outputs
1)
./test.sh update hello WORLD
sed -iE "s/(${2//'$'/'\$'}).*/${2//'$'/'\$'}:${3//'$'/'\$'}/g" "$fp"
++ sed -iE 's/(hello).*/hello:WORLD/g' /asd/filename.txt
2)
./test.sh update hello '$apr1$hosgaxyv$D0KXp5dCyZ2BUYCS9BmHu1'
sed -iE "s/(${2//'$'/'\$'}).*/${2//'$'/'\$'}:${3//'$'/'\$'}/g" "$fp"
++ sed -iE 's/(hello).*/hello:'\''$'\''apr1'\''$'\''hosgaxyv'\''$'\''D0KXp5dCyZ2BUYCS9BmHu1/g' /asd/filename.txt
In both cases, it's not replacing the content.
You don't need eval here at all:
fp=/asd/filename.txt
sed -i -E "s/(${2//'$'/'\$'}).*/\1:${3//'$'/'\$'}/g" "$fp"
The whole sed command is in double quotes so variables can expand.
I've replaced the blank used as the s separator with / (doesn't really matter in this example).
I've used \1 to reference the first capture group instead of repeating the variable in the substitution.
Most importantly, I've used ${2//'$'/'\$'} instead of $2 (and similar for $3). This escapes every $ sign as \$; this is required because of the double quoting, or the $ get eaten by the shell before sed gets to see them.
When you call your script, you must escape any $ in the input, or the shell tries to expand them as variable names:
./test.sh update asd '$apr1$HnIF6bOt$9m3NzAwr.aG1Yp.t.bpIS1.'
Put the command-line arguments in single quotes:
./test.sh update 'asd' '$apr1$HnIF6bOt$9m3NzAwr.aG1Yp.t.bpIS1'
You must protect all of the script arguments with quotes if they contain spaces or special shell characters, and escape a dollar sign $ if you want it passed literally. Also use -Ei instead of -iE; even better, drop -i first for testing and add it later once you are really sure.
I admit I don't understand your regex, so let's just get to the gist of the solution; there is no need for eval:
fp=/asd/filename.txt
sed -Ei "s/($2).*/$2:$3/g" $fp
./test.sh update asd '\$apr1\$HnIF6bOt\$9m3NzAwr.aG1Yp.t.bpIS1.'
The following bash and Perl scripts mysteriously give different results. Why?
#!/bin/bash
hash=`echo -n 'abcd' | /usr/bin/shasum -a 256`;
echo $hash;
#!/usr/bin/perl
$hash = `echo -n 'abcd' | /usr/bin/shasum -a 256`;
print "$hash";
The bash script:
$ ./tst.sh
88d4266fd4e6338d13b845fcf289579d209c897823b9217da3e161936f031589 -
The Perl script:
$ ./tst.pl
61799467ee1ab1f607764ab36c061f09cfac2f9c554e13f4c7442e66cbab9403 -
the heck?
Summary: In your Perl script, -n is being treated as an argument to include in the output of echo, not as a flag to suppress the newline. (Try
$hash = `echo -n 'abcd'`;
to confirm.) Use printf instead.
Perl uses /bin/sh to execute code in backticks. Even if /bin/sh is a link to bash, it behaves differently when invoked via that link. In POSIX mode,
echo -n 'abcd'
will output
-n abcd
that is, the -n option is not recognized as a flag to suppress a newline, but is treated as a regular argument to print. Replace echo -n with printf in each script, and you should get the same SHA hash from each script.
(UPDATE: bash 3.2, when invoked as sh, displays this behavior. Newer versions of bash seem to continue treating -n as a flag when invoked as sh.)
Even better, don't shell out to do things you can do in Perl.
use Digest::SHA qw(sha256_hex);
$hash = sha256_hex('abcd');   # hex digest, to match shasum's output
For the curious, here's what the POSIX spec has to say about echo. I'm not sure what to make of XSI conformance; bash echo requires the -e option to treat the escape characters specially, but nearly every shell—except old versions of bash, and then only under special circumstances—treats -n as a flag, not a string. Oh well.
The following operands shall be supported:
string
A string to be written to standard output. If the first operand is -n, or
if any of the operands contain a <backslash> character, the results are
implementation-defined.
On XSI-conformant systems, if the first operand is -n, it shall be treated
as a string, not an option. The following character sequences shall be
recognized on XSI-conformant systems within any of the arguments:
\a
Write an <alert>.
\b
Write a <backspace>.
\c
Suppress the <newline> that otherwise follows the final argument in the output. All characters following the '\c' in the arguments shall be ignored.
\f
Write a <form-feed>.
\n
Write a <newline>.
\r
Write a <carriage-return>.
\t
Write a <tab>.
\v
Write a <vertical-tab>.
\\
Write a <backslash> character.
\0num
Write an 8-bit value that is the zero, one, two, or three-digit octal number num.
If you do:
printf "%s" 'abcd' | /usr/bin/shasum -a 256
you get the 88d...589 hash. If you do:
printf "%s\n" '-n abcd' | /usr/bin/shasum -a 256
you get the 617...403 hash.
Therefore, I deduce that Perl is somehow running a different echo command, perhaps /bin/echo or /usr/bin/echo instead of the bash built-in echo, or maybe the echo built into /bin/sh (which might perhaps be dash rather than bash), and this other echo does not recognize the -n option as an option and outputs different data.
I'm not sure which other echo it is finding; on my machine, which is running an Ubuntu 14.04 LTS derivative, bash, dash, sh (linked to bash), ksh, csh and tcsh all treat echo -n abcd the same way. But somewhere along the line, I think something along these lines is happening; the matching checksums strongly point to it. (Maybe you have a 3.2 bash linked to sh; see the notes in the comments.)
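One rough way to check what the shell behind Perl's backticks actually does with echo -n (purely a diagnostic, assuming od is available):
perl -e 'print `echo -n abcd | od -c`'
If the dump starts with - n followed by a b c d, that shell's echo treated -n as an ordinary argument; if it starts with a b c d, -n was honoured as a flag.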
I'm trying to escape ('\') a semicolon (';') in a string on unix shell (bash) with sed. It works when I do it directly without assigning the value to a variable. That is,
$ echo "hello;" | sed 's/\([^\\]\);/\1\\;/g'
hello\;
$
However, it doesn't appear to work when the above command is assigned to a variable:
$ result=`echo "hello;" | sed 's/\([^\\]\);/\1\\;/g'`
$
$ echo $result
hello;
$
Any idea why?
I tried by using the value enclosed with and without quotes but that didn't help. Any clue greatly appreciated.
btw, I first thought the semicolon at the end of the string was somehow acting as a terminator and hence the shell didn't continue executing the sed (if that made any sense). However, that doesn't appear to be an issue. I tried by using the semicolon not at the end of the string (somewhere in between). I still see the same result as before. That is,
$ echo "hel;lo" | sed 's/\([^\\]\);/\1\\;/g'
hel\;lo
$
$ result=`echo "hel;lo" | sed 's/\([^\\]\);/\1\\;/g'`
$
$ echo $result
hel;lo
$
You don't need sed (or any other regex engine) for this at all:
s='hello;'
echo "${s//;/\;}"
This is a parameter expansion which replaces ; with \;.
That said -- why are you trying to do this? In most cases, you don't want escape characters (which are syntax) to be inside of scalar variables (which are data); they only matter if you're parsing your data as syntax (such as using eval), which is a bad idea for other reasons and best avoided (or done programmatically, as via printf %q).
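For example, printf %q gives you a form of the data that is safe to reuse as shell syntax (the exact escaping style can vary between bash versions):
$ s='hello; world'
$ printf '%q\n' "$s"
hello\;\ world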
I find it interesting that the use of back-ticks gives one result (your result) and the use of $(...) gives another result (the wanted result):
$ echo "hello;" | sed 's/\([^\\]\);/\1\\;/g'
hello\;
$ z1=$(echo "hello;" | sed 's/\([^\\]\);/\1\\;/g')
$ z2=`echo "hello;" | sed 's/\([^\\]\);/\1\\;/g'`
$ printf "%s\n" "$z1" "$z2"
hello\;
hello;
$
If ever you needed an argument for using the modern x=$(...) notation in preference to the older x=`...` notation, this is probably it. The shell does an extra round of backslash interpretation with the back-ticks. I can demonstrate this with a little program I use when debugging shell scripts called al (for 'argument list'); you can simulate it with printf "%s\n":
$ z2=`echo "hello;" | al sed 's/\([^\\]\);/\1\\;/g'`
$ echo "$z2"
sed
s/\([^\]\);/\1\;/g
$ z1=$(echo "hello;" | al sed 's/\([^\\]\);/\1\\;/g')
$ echo "$z1"
sed
s/\([^\\]\);/\1\\;/g
$ z1=$(echo "hello;" | printf "%s\n" sed 's/\([^\\]\);/\1\\;/g')
$ echo "$z1"
sed
s/\([^\\]\);/\1\\;/g
$
As you can see, the script executed by sed differs depending on whether you use x=$(...) notation or x=`...` notation.
s/\([^\]\);/\1\;/g # ``
s/\([^\\]\);/\1\\;/g # $()
Summary
Use $(...); it is easier to understand.
You need to use four backslashes (three also work). I guess it's because the string is interpreted twice: once by the shell when it processes the backticks, and once more by sed:
result=`echo "hello;" | sed 's/\([^\\]\);/\1\\\\;/g'`
And
echo "$result"
yields:
hello\;