How do I edit the output of a bash script before executing it? - bash

For example look at the following line of bash-code
eval `echo "ls *.jpg"`
It lists all jpgs in the current directory. Now I want it to just print the line to the prompt so I can edit it before executing. (Like key-up does for example)
How do I do that?
The reason for this question comes from a much more useful alias:
alias ac="history 2 | sed -n '1 s/[ 0-9]*//p' >> ~/.commands; sort -fu ~/.commands > ~/.commandsTmp; mv ~/.commandsTmp ~/.commands"
alias sc='oldIFS=$IFS; IFS=$'\n'; text=(); while read line ; do text=( ${text[@]-} "${line}") ; done < ~/.commands; PS3="Choose command by number: " ; eval `select selection in ${text[@]}; do echo "$selection"; break; done`; IFS=$oldIFS'
alias rc='awk '"'"'{print NR,$0}'"'"' ~/.commands; read -p "Remove number: " number; sed "${number} d" ~/.commands > ~/.commandsTmp; mv ~/.commandsTmp ~/.commands'
Where ac adds or remembers the last typed command, sc shows the available commands and executes the chosen one, and rc deletes or forgets a command. (You need to touch ~/.commands before it works.)
It would be even more useful if I could edit the output of sc before executing it.

history -s whatever you want
will append "whatever you want" to your bash history. Then a simple up arrow (or !! followed by Enter, if you have shopt histreedit enabled --- I think that's the option I'm thinking of, not 100% sure) will give you "whatever you want" on the command line, ready to be edited.
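For example, the sc helper from the question could push the chosen command onto the history instead of eval'ing it right away, so a single up arrow recalls it for editing. A minimal sketch (assuming the function is defined in, or sourced into, your interactive shell so that history -s affects that shell's history):
sc() {
    local line selection text=()
    while read -r line; do text+=("$line"); done < ~/.commands
    PS3="Choose command by number: "
    select selection in "${text[@]}"; do
        history -s "$selection"    # append the choice to the history instead of executing it
        break
    done
    echo "Press the up arrow to edit and run: $selection"
}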

Some comments on your aliases:
Simplified quoting:
alias rc='awk "{print NR,\$0}" ~/.commands ...'
No need for tail and you can combine calls to sed:
alias ac="history 2 | sed -n '1 s/[ 0-9]*//p'..."
Simplified eval and no need for $IFS:
alias sc='text=(); while read line ; do text+=("${line}") ; done < ~/.commands; PS3="Choose command by number: " ; select selection in "${text[@]}"; do eval "$selection"; break; done'

@OP, you should really put those commands into functions and, when you want to use them, source the file (taken from Dennis's answer):
rc() {
    awk "{print NR,\$0}" ~/.commands ...
}
ac() {
    history 2 | sed -n '1 s/[ 0-9]*//p'...
}
sc() {
    text=()
    while read line
    do
        text+=("${line}")
    done < ~/.commands
    PS3="Choose command by number: "
    select selection in "${text[@]}"
    do
        eval "$selection"
        break
    done
}
then save it as "library.sh" or something and when you want to use it
$ source /path/to/library.sh
Or
$ . /path/to/library.sh
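Once sourced, the functions are available in the current shell like ordinary commands, e.g.:
$ ac
$ sc
Choose command by number: 1
(the command you pick is then run by the eval inside sc)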

Maybe you could use preexec.bash?
http://www.twistedmatrix.com/users/glyph/preexec.bash.txt
(On a related note, you can edit the current command line by using ctrl-x-e as well!)
cheers,
tavod

Related

Making a script in Debian which would create a new file from the names file with a different order of the names

The existing names in the "names" file are in the form lastname1,firstname1 ; lastname2,firstname2.
In the new file they should look like the example below.
Create a script that outputs a list of existing users (from the "names" file) in the form:
firstname1.lastname1
firstname2.lastname2
etc.
And save it to a file called "cat list".
This kind of command line should be a solution for you:
awk -F',' '{print $2"."$1}' source_file >> "cat list"
awk reverses the order of the two fields and puts the '.' character between them;
">>" then redirects the full output to a file called "cat list", as requested. (This assumes one lastname,firstname pair per line; if everything sits on one line separated by " ; ", split the records first, as the next answer does.)
I don't think I have the most efficient solution here but it works and outputs the different stages of translation to help illustrate the process:
#!/bin/sh
echo "lastname1,firstname1 ; lastname2,firstname2" >testfile
echo "original file:"
cat testfile
echo "\n"
# first replace semi-colon with newline
tr ';' '\n' <testfile >testfile_n
echo "after first translation:"
cat testfile_n
echo "\n"
# also remove extra spaces
tr -d '[:blank:]' <testfile_n >testfile_n_s
echo "after second translation:"
cat testfile_n_s
echo "\n"
# now swap name order using sed and use periods instead of commas
sed -E 's/([a-zA-Z0-9]*),([a-zA-Z0-9]*)/\2\.\1/g' testfile_n_s >"cat list"
echo "after third iteration:"
cat "cat list"
echo "\n"
The script above will save a file called 'cat list' and output something similar to:
original file:
lastname1,firstname1 ; lastname2,firstname2
after first translation:
lastname1,firstname1
lastname2,firstname2
after second translation:
lastname1,firstname1
lastname2,firstname2
after third iteration:
firstname1.lastname1
firstname2.lastname2

Extending terminal colors to the end of line

I have a bash script which generates a motd. The problem is that, depending on some terminal settings I am not sure about, the color sometimes extends to the end of the line. Other times it doesn't:
[screenshot comparison of the two terminals omitted]
IIRC one is just the normal gnome-terminal and the other is my tmux term. So my question is: how can I get this to extend to 80 characters (or really to the terminal width)? Of course I can pad to 80 chars, but that doesn't really solve the problem.
Here is a snip of my code which generates the motd:
TC_RESET="^[[0m"
TC_SKY="^[[0;37;44m"
TC_GRD="^[[0;30;42m"
TC_TEXT="^[[38;5;203m"
echo -n "${TC_SKY}
... lots of printing..."
echo -e "\n Welcome to Mokon's Linux! \n"
echo -n "${TC_GRD}"
nodeinfo # Just prints the info seen below...
echo ${TC_RESET}
How can I programmatically from bash change the terminal settings or something change the color to the end of the line?
Maybe use the escape sequence to clear to end of line (EOL).
For some reason (on my macOS terminal!) I only needed to specify this sequence once and then it worked for all the lines, but for completeness I list it for all of them:
TC_RESET=$'\x1B[0m'
TC_SKY=$'\x1B[0;37;44m'
TC_GRD=$'\x1B[0;30;42m'
TC_TEXT=$'\x1B[38;5;203m'
CLREOL=$'\x1B[K'
echo -n "${TC_SKY}${CLREOL}"
echo -e "\n ABC${CLREOL}\n"
echo -e "\n DEFG${CLREOL}\n"
echo -n "${TC_GRD}"
echo -e "\n ABC${CLREOL}\n"
echo -e "\n DEFG${CLREOL}\n"
echo ${TC_RESET}
Padding filter
Unfortunately, you have to pad each line with the exact number of spaces to change the color of the whole line's background.
As you're speaking about bash, my solution will use bashisms (it won't work under other shells or older versions of bash).
The syntax printf -v VAR FORMAT ARGS assigns to the variable VAR the result of sprintf FORMAT ARGS. That's a bashism; under other kinds of shell you have to replace this line with TC_SPC=$(printf "%${COLUMNS}s" '').
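A small illustration of that builtin (COLUMNS queried via tput so the example is self-contained):
COLUMNS=$(tput cols)
printf -v TC_SPC "%${COLUMNS}s" ''   # bash only: TC_SPC now holds $COLUMNS spaces
echo "${#TC_SPC}"                    # prints the terminal width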
You may try this:
... lots of printing..."
echo -e "\n Welcome to Mokon's Linux! \n"
echo -n "${TC_GRD}"
printf -v TC_SPC "%${COLUMNS}s" ''
nodeinfo |
sed "s/$/$TC_SPC/;s/^\\(.\\{${COLUMNS}\\}\\) */\\1/" # Just prints the info seen below...
echo ${TC_RESET}
Maybe you have to ensure that $COLUMNS is set correctly:
COLUMNS=$(tput cols)
As you can see, only the output of the command filtered by sed is fully colored.
You may
use the same filter several times:
cmd1 | sed '...'
cmd2 | sed '...'
or group your commands to use only one filter:
( cmd1 ; cmd 2 ) | sed '...'
But there is an issue if you try to filter output that contains formatting escapes:
(
echo $'\e[33;44;1mYellow text on blue background';
seq 1 6;
echo $'\e[0m'
) | sed "
s/$/$TC_SPC/;
s/^\\(.\\{${COLUMNS}\\}\\) */\\1/"
If the lines you have to pad contain escapes, you have to isolate them:
(
echo $'\e[33;44;1mYellow text on blue background';
seq 1 6;
echo $'\e[0m'
) | sed "
s/\$/$TC_SPC/;
s/^\\(\\(\\o33\\[[0-9;]*[a-zA-Z]\\)*\\)\\([^\o033]\\{${COLUMNS}\\}\\) */\\1\\3/
"
And finally, to be able to fill and terminate very long lines:
(
echo $'\e[33;44;1mYellow text on blue background';
seq 1 6;
echo "This is a very very long long looooooooooong line that contain\
more characters than the line could hold...";
echo $'\e[0m';
) | sed "
s/\$/$TC_SPC/;
s/^\\(\\(\\o33\\[[0-9;]*[a-zA-Z]\\)*\\)\\(\\([^\o033]\\{${COLUMNS}\\}\\)*\\) */\\1\\3/"
Note: this only works if the formatting escapes are located at the beginning of the line.
Try with this:
echo -e '\E[33;44m'"yellow text on blue background"; tput sgr0
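A rough equivalent using tput instead of raw escape codes (a sketch; el emits the clear-to-end-of-line sequence, and the full-width background relies on the terminal honoring back-color-erase):
bg=$(tput setab 4)     # blue background
fg=$(tput setaf 3)     # yellow foreground
el=$(tput el)          # clear to end of line, painting the current background
reset=$(tput sgr0)
printf '%s%s%s%s\n' "${bg}${fg}" "yellow text on blue background" "${el}" "${reset}"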

Modify existing code for creating menu names in an interactive shell script

When I give the command awk 'NR==7' ABC.mod, it gives me the title ('Add') I need, which is present on the 7th line of the file. Currently I am just able to read the title, but I am not aware of how to append it to the output. Can somebody help me organize this so I can modify the code to get the expected menu output with minimal disruption (I hope) to the existing script?
Assuming you can pull out the "Add", "Delete" ... and other "titles" from the 7th line of each *.mod file, then you need to modify your script where it looks at the file a1.out somewhere before the line which seems to create the menu, namely: tr ' ' '\n' < ~/a1.out > ~/b.dat.
I say "assuming" because, even though you mention awk NR==7, I don't see where you are using it in the script. In any case, if you can get the "title" from the 7th line of a given *.mod file, then you can get the menu "name" from the file name (which seems to be the way you are constructing your menu) like this:
awk '{ln=length(ARGV[1]); if(NR==7) print substr(ARGV[1],1,ln-4)"..."$0}' ABC.mod
outputs:
ABC...Add
There may be a shorter, easier way to do this using sed, but you mentioned awk.
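For the record, a sed version of the same idea might look like this (a sketch; basename strips the .mod suffix, and sed prints only line 7 with that name prefixed):
sed -n "7s/^/$(basename ABC.mod .mod).../p" ABC.mod
which also outputs ABC...Add.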
For me at least, there's not really enough information to go on to help you much further. If you update your question someone may be able to give more concrete advice.
EDIT:
I'll post my work here in the hope that you find it useful. It is not a complete solution. I have to say, this is a strangely written application - with shell code and variables hardwired to temporary data locations strewn about the file system. It's a bit hairy to try and set up a local version to try it out. Hopefully by experimenting and making modifications to the code you will learn more about how your application works and about shell programming in general. Extra advice: record your changes; sketch out how/where your application reads and writes its data; use comments in the source code to help you and others remember how the code works; make backups; use source control.
My assumptions:
pradee.sh looks like this (why does the file have a .sh extension? it seems more like it defines some constants for your script):
% cat pradee.sh
HBKTM
ABC
HBKTM
CBC
HBKTM
DBC
HBKTM
IBC
HBKTM
MBCE
HBKTM
UBC
HBKTM
VBCM
Here's how I created my "test environment":
% for file in `grep -v HBKTM pradee.sh`; do touch $file.mod ; done
% ls
ABC.mod CBC.mod DBC.mod IBC.mod MBCE.mod UBC.mod VBCM.mod pradee.sh
% for file in *.mod; do echo -e "_ctrl.jsp \n\n\n\n\n" > "$file"; done # mod files have required text + 6 lines
% echo -e "_ctrl.jsp \n\n\n\n\n" > HBKTM.mod # this file seems special ?
% sed -i'' -e "7i\\[Ctrl-V Ctrl-J]
Add" ABC.mod
OR since the files now have 6 lines ... echo the menu title onto the last line:
% echo "Delete" >> DBC.mod
% echo "Insert" >> IBC.mod
... [continue inserting titles like "Add" "Delete" etc to the other *.mod files]
After that, I think I have data files that mimic your set up. You tell me. Now, if I make a few small changes to your script (so the file locations don't overwrite my own files) and add the awk command I mentioned previously, here is what I end up with:
# menu_create.sh
# See http://stackoverflow.com/questions/17297671
rm -f *.dat
clear
cont="y"
while [ "$cont" = "y" ] # the "$" is needed for POSIX
do
echo -e "\n\nPlease Enter ONS Name : "
read ons
currpath=.
up=$(echo $ons|tr '[:lower:]' '[:upper:]')
#echo "\n ONS menu name \n"
#echo $up
if [ -f $up.mod ]; then
#in=$(grep -ri $up pradee.sh) # changed to following
# - how could this have worked ?
in=$(grep -v $up pradee.sh)
if [ -n "$in" ]; then
onsname=$(grep -ri "_ctrl.jsp" $up.mod)
#echo "onsname : $onsname"
if [ -n "$onsname" ]; then
echo -e "\n ONS menu name : $up "
echo $in > a1.dat
#echo "written to a1.dat\n"
#cat ~/a1.dat
#tr ' ' '\n' < ~/a1.dat > ~/a.dat
#cat ~/a.dat
sed "s/$up//g" a1.dat >a1.out
for i in `cat a1.dat`;
do
awk '{ln=length(ARGV[1]);if(NR==7) print substr(ARGV[1],1,ln-4)"..."$0}' $i.mod >> menu.dat ;
done
echo -e "\n FINUX Names \n"
#tr ' ' '\n' < a1.out > b.dat
tr ' ' '\n' < menu.dat > b.dat
cat b.dat
else
echo -e "ONS Name Not Valid !"
fi
else
echo -e "FINUX menu Name not found in our Repository"
fi
else
echo -e "\n Please Enter valid ONS name !!"
fi
echo -e "\n\n Press "y" to continue, Any other key to exit"
read cont
done
It gives me this output:
Please Enter ONS Name :
hbktm
ONS menu name : HBKTM
FINUX Names
ABC...Add
CBC...Cancel
DBC...Delete
IBC...Insert
MBCE...Modify
UBC...Undelete
VBCM...Verify
Press y to continue, Any other key to exit
q
I hope my response to your question helps you learn more about how to modify your application.

Searching in bash shell

I have a text file.
Info in text file is
Book1:Author1:10.50:50:5
Book2:Author2:4.50:30:10
First one is book name, second is author name, third is the price, fourth is the quantity and fifth is the quantity sold.
Currently I have this set of codes
function search_book
{
read -p $'Title: ' Title
read -p $'Author: ' Author
if grep -Fq "${Title}:${Author}" BookDB.txt
then
record=grep -c "${Title}:${Author}" BookDB.txt
echo "Found '" $record "' record(s)"
else
echo "Book not found"
fi
}
For $record, I am trying to count the number of lines found. Did I do the right thing? Because when I run this code, it just shows an error about command -c.
When I did this
echo "Found"
grep -c "${Title}" BookDB.txt
echo "record(s)"
It worked, but the output is
Found
1
record(s)
I would like them to be together
Can I also add -i to grep -Fq in order to make the search case-insensitive?
Let's say I want to search for Book1 and Author1: if I enter 'ok' for the title and 'uth' for the author, is there any wildcard (like % in SQL) I can add so it matches in the middle of the title and author?
The expected output is also expected to be..
Found 1 record(s)
Book1,Author1,$10.50,50,5.
Is there any way I can change the : delimiter to ,?
And also add a $ to the 3rd column, which is the price?
Please help..
Changing record=grep -c "${Title}:${Author}" BookDB.txt to record=$(grep -c "${Title}:${Author}" BookDB.txt) will fix the error. record=$(cmd) means assigning the output of command cmd to the variable record. Without that, the shell interprets record=grep -c ... as the command -c preceded by an environment variable assignment (record=grep).
BTW, since your DB format is column-oriented text data, awk should be a better tool. Sample code:
function search_book
{
read -p $'Title: ' Title
read -p $'Author: ' Author
awk -F: '{if ($1 == "'"$Title"'" && $2 ~ "'"$Author"'") {count+=1; output=output "\n" $0} }
END {
if (count > 0) {print "found", count, "record(s)\n", output}
else {print "Book not found";}}' BookDB.txt
}
As you can see, using awk makes it easier to change the delimiter (e.g. awk -F, for a comma delimiter), and also makes the program more robust (e.g. it restricts the matching string to the first two fields). If you only need a fuzzy match instead of an exact match, you could change == to ~ in the condition.
The "command not found" error for -c can be avoided by enclosing the right part of the assignment in backticks or "$()", e.g.:
record=`grep -ic "${Title}:${Author}" BookDB.txt`
record=$(grep -ic "${Title}:${Author}" BookDB.txt)
Also, this snippet shows that -i is perfectly fine. However, please note that both grep commands should use the same list of flags (-F is missing in the 2nd one) - except for -q, of course.
Anyway, performing grep twice is probably not the best way to go. What about...
record=`grep -ic "${Title}:${Author}" BookDB.txt 2>/dev/null`
if [ ! -z "$record" ]; then ...
... or something like that?
By the way: If you omit -F you allow the user to operate with regular expressions. This would not only provide wildcards but also the possibility for more complex patterns. You could also apply an option to your script that decides whether to use -F or not..
Last but not least: to modify the lines, in order to change the delimiter or manipulate the columns at all, you could look into the manual pages of awk(1) or cut(1), at least. Although I believe that a more sophisticated language is more suitable here, e.g. perl(1) or python(1), especially when the script is to be extended with more features.
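To sketch those last two ideas together (a hypothetical variant of the question's function; the -r flag switches from fixed-string to regex matching, and awk reformats the matching lines by changing the : delimiter to , and prefixing the price with $):
search_book()
{
    local grep_opts="-iF"                # default: case-insensitive, fixed strings
    [ "$1" = "-r" ] && grep_opts="-i"    # -r: treat the search terms as a regex
    read -p $'Title: ' Title
    read -p $'Author: ' Author
    local matches
    matches=$(grep $grep_opts "${Title}:${Author}" BookDB.txt)
    if [ -n "$matches" ]; then
        echo "Found $(printf '%s\n' "$matches" | wc -l) record(s)"
        printf '%s\n' "$matches" | awk -F: '{OFS=","; $3="$"$3; print}'
    else
        echo "Book not found"
    fi
}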
To add to the answer(s) above (this started as a comment, but it grew...):
the $() form is preferred:
- it allows nesting,
- and it greatly simplifies the use of " and ' (each "level" of nesting sees them at its own level, so to speak). This is tough to do with backticks, as using nested quotes and single quotes becomes a nightmare of ` and \\... depending on the "level of subshell" they are to be interpreted in...
ex: (trying to only grep once)
export results="$(grep -i "${Title}:${Author}" BookDB.txt)" ;
export nbresults=$(echo "${results}" | wc -l) ;
printf "Found %8s record(s)\n" "${nbresults}" ;
echo "${results}" ;
or, if too many results to fit in variable:
export tmpresults="/tmp/results.$$"
grep -i "${Title}:${Author}" BookDB.txt > "${tmpresults}"
export nbresults=$(wc -l < "${tmpresults}") ;
printf "Found %8s record(s)\n" "${nbresults}" ;
cat "${tmpresults}" ;
rm -f "${tmpresults}" ;
Note: I use " a lot (except on the wc -l line) to illustrate that it can be needed sometimes (not in all the cases above!) to keep spaces, newlines, etc. (And I purposely drop it for nbresults so that it only contains the number of lines, not the preceding spaces.)

How to concatenate stdin and a string?

How do I concatenate stdin and a string, like this?
echo "input" | COMMAND "string"
and get
inputstring
A bit hacky, but this might be the shortest way to do what you asked in the question (use a pipe to accept stdout from echo "input" as stdin to another process / command):
echo "input" | awk '{print $1"string"}'
Output:
inputstring
What task are you exactly trying to accomplish? More context can get you more direction on a better solution.
Update - responding to comment:
@NoamRoss
The more idiomatic way of doing what you want is then:
echo 'http://dx.doi.org/'"$(pbpaste)"
The $(...) syntax is called command substitution. In short, it executes the commands enclosed in a new subshell, and substitutes its stdout output where the $(...) was invoked in the parent shell. So you would get, in effect:
echo 'http://dx.doi.org/'"rsif.2012.0125"
Use cat - to read from stdin, and put it in $() to throw away the trailing newline:
echo input | COMMAND "$(cat -)string"
However, why not drop the pipe and grab the output of the left side with a command substitution:
COMMAND "$(echo input)string"
I'm often using pipes, so this tends to be an easy way to prefix and suffix stdin:
echo -n "my standard in" | cat <(echo -n "prefix... ") - <(echo " ...suffix")
prefix... my standard in ...suffix
There are several ways to accomplish this; I personally think the best is:
echo input | while read line; do echo $line string; done
Another is to substitute "$" (the end-of-line character) with " string" in a sed command:
echo input | sed "s/$/ string/g"
Why do I prefer the former? Because it concatenates the string to stdin as it arrives; for example, with the following command:
(echo input_one ;sleep 5; echo input_two ) | while read line; do echo $line string; done
you immediately get the first output:
input_one string
and then after 5 seconds you get the other echo:
input_two string
On the other hand, when using "sed", everything inside the parentheses is executed first and only then handed to "sed", so the command
(echo input_one ;sleep 5; echo input_two ) | sed "s/$/ string/g"
will output both the lines
input_one string
input_two string
after 5 seconds.
This can be very useful when you are calling functions which take a long time to complete and want to be continuously updated about their output.
You can do it with sed:
seq 5 | sed '$a\6'
seq 5 | sed '$ s/.*/\0 6/'
In your example:
echo input | sed 's/.*/\0string/'
I know this is a few years late, but you can accomplish this with the xargs -J option:
echo "input" | xargs -J "%" echo "%" "string"
And since it is xargs, you can do this on multiple lines of a file at once. If the file 'names' has three lines, like:
Adam
Bob
Charlie
You could do:
cat names | xargs -n 1 -J "%" echo "I like" "%" "because he is nice"
Also works:
seq -w 0 100 | xargs -I {} echo "string "{}
Will generate strings like:
string 000
string 001
string 002
string 003
string 004
...
The command you posted would take the string "input" and use it as COMMAND's stdin stream, which would not produce the results you are looking for unless COMMAND first printed out the contents of its stdin and then printed out its command line arguments.
It seems like what you want to do is more close to command substitution.
http://www.gnu.org/software/bash/manual/html_node/Command-Substitution.html#Command-Substitution
With command substitution you can have a commandline like this:
echo input `COMMAND "string"`
This will first evaluate COMMAND with "string" as its argument, and then expand the results of that command's execution onto the line, replacing what's between the '`' characters.
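A concrete illustration, with date standing in for COMMAND (the backtick form and the $( ) form are equivalent here; the latter is generally preferred):
echo input `date +%Y`        # prints something like: input 2024
echo "input$(date +%Y)"      # same idea without the space: input2024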
cat will be my choice: ls | cat - <(echo new line)
With perl
echo "input" | perl -ne 'print "prefix $_"'
Output:
prefix input
A solution using sd (basically a modern sed; much easier to use IMO):
# replace '$' (end of string marker) with 'Ipsum'
# the `e` flag disables multi-line matching (treats all lines as one)
$ echo "Lorem" | sd --flags e '$' 'Ipsum'
Lorem
Ipsum#no new line here
You might observe that Ipsum appears on a new line, and the output is missing a \n. The reason is echo's output ends in a \n, and you didn't tell sd to add a new \n. sd is technically correct because it's doing exactly what you are asking it to do and nothing else.
However this may not be what you want, so instead you can do this:
# replace '\n$' (new line, immediately followed by end of string) by 'Ipsum\n'
# don't forget to re-add the `\n` that you removed (if you want it)
$ echo "Lorem" | sd --flags e '\n$' 'Ipsum\n'
LoremIpsum
If you have a multi-line string, but you want to append to the end of each individual line:
$ ls
foo bar baz
$ ls | sd '\n' '/file\n'
bar/file
baz/file
foo/file
I want to prepend my sql script with a "set" statement before running it.
So I echo the "set" instruction, then pipe it to cat. The cat command takes two parameters: stdin, marked as "-", and my sql file; cat joins both of them into one output. Next I pass the result to the mysql command to run it as a script.
echo "set @ZERO_PRODUCTS_DISPLAY='$ZERO_PRODUCTS_DISPLAY';" | cat - sql/test_parameter.sql | mysql
p.s. the mysql login and password are stored in the .my.cnf file
