I have a command that I use to transform sentences to title case. It is inefficient to have to copy this command out of a text file and paste it into the terminal, and then also paste in the sentence I want converted. The command is:
echo "my text" | sed 's/.*/\L&/; s/[a-z]*/\u&/g'
How can I convert this to a script so I can just call something like the following from the terminal:
TitleCaseConverter "my text"
Is it possible to create such a script? Is it possible to make it work from any folder location?
Since bash's parameter expansion includes case modification, there's no need for sed. Just a short function:
tc() { set ${*,,} ; echo ${*^} ; }
Test (the arguments are deliberately left unquoted; since a title is typically no longer than a sentence, the word splitting shouldn't matter):
tc FOO bar
Output:
Foo Bar
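For reference, the case-modifying parameter expansions used here (added in bash 4.0) behave like this on an ordinary variable; the set in tc only re-splits the text into words so that ${*^} can capitalize each one:
s="sOME sAMPLE tEXT"
echo "${s,,}"   # some sample text   (everything lower case)
echo "${s^^}"   # SOME SAMPLE TEXT   (everything upper case)
echo "${s^}"    # SOME sAMPLE tEXT   (only the first character upper-cased)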
Fancy version that avoids capitalizing some conjunctions, articles and such:
ftc() {
    set ${*,,}      # lowercase every word
    set ${*^}       # then capitalize the first letter of each word
    echo -n "$1 "   # the first word always stays capitalized
    shift 1
    for f in ${*} ; do
        case $f in
            A|The|Is|Of|And|Or|But|About|To|In|By) echo -n "${f,,} " ;;   # small words go back to lower case
            *) echo -n "$f " ;;
        esac
    done
    echo
}
Test:
ftc the last of the mohicans
Output:
The Last of the Mohicans
How about just wrapping it into a function in .bashrc or .bash_profile and sourcing it from the current shell?
TitleCaseConverter() {
sed 's/.*/\L&/; s/[a-z]*/\u&/g' <<<"$1"
}
Or, if you want it pitch-perfect and want to avoid any trailing newlines from the input argument, do
printf "%s" "$1" | sed 's/.*/\L&/; s/[a-z]*/\u&/g'
Now source the file once from the command line to make the function available:
source ~/.bash_profile
Now you can use it in the command line directly as
str="my text"
newstr="$(TitleCaseConverter "$str")"
printf "%s\n" "$newstr"
My Text
Also to your question,
How can I convert this to a script so I can just call something like the following from the terminal
Adding the function to one of the start-up files takes care of that, though .bash_profile is the recommended place for it.
TitleCaseConverter "this is stackoverflow"
This Is Stackoverflow
Update:
OP was trying to create a directory with the name returned from the function call, something like below
mkdir "$(TitleCaseConverter "this is stackoverflow")"
The key again here is to double-quote the command substitution so the result does not undergo word splitting by the shell.
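To see why the quotes matter, compare the two calls below (a hypothetical run, assuming the function is already sourced):
mkdir $(TitleCaseConverter "this is stackoverflow")     # word splitting: creates three directories, This, Is, Stackoverflow
mkdir "$(TitleCaseConverter "this is stackoverflow")"   # creates a single directory: This Is Stackoverflow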
I don't have comment privileges, but here is a slight improvement to ManUnitedBloke's answer: this will handle contractions like "don't" and "who's". (The sed script is wrapped in double quotes so the apostrophe inside the character class doesn't terminate the quoting.)
echo "my text" | sed "s/.*/\L&/; s/[a-z']*/\u&/g"
Related
Say I want to copy a file to a destination: cp filename /home/example/temp.txt.
The problem is that the filename will be changed by some programs, and its new name will be written to the file /home/example/.env.
What I want is to add an alias, something like alias cpf='cp ${filename} /home/nope/temp.txt', to .bashrc, so that all I need to run is cpf whenever I want to copy the file with the latest filename to /home/example/temp.txt.
What I have tried:
eval $(grep -v "^#" "/home/example/.env") cp ${filename} /home/nope/temp.txt
and failed to get ${filename}.
Are there some changes that would make what I tried work?
Example .env:
key1='do not put me in the environment'
key2=1231
filename=thisvaluechanges
key4="I hate being evaluated"
You only want to evaluate the line with filename. First test how you can select that line, something like
grep "^filename=" /home/example/.env
# or
sed -n 's/^\s*filename\s*=\s*/filename=/p' /home/example/.env
Next you can source the selected line.
source <(grep "^filename=" /home/example/.env)
When the filename is a fixed string (without $() that needs to be evaluated), you can do without source:
cp $(grep "^filename=" /home/example/.env) /home/nope/temp.txt
Before putting this in an alias, remember that a function can do everything an alias can, and more. You "should" stop using aliases.
When you have three or four files like filename1, 2, 3, 4, you can use a function with an argument:
cpf() {
if (( $# == 0 )); then
echo "Usage: cpf filenumber"
else
cp $(grep "^filename${1}=" /home/example/.env) /home/nope/temp.txt
fi
}
You can call the function with cpf 2 for filename2.
When you want to put the filename in the environment, you can change the function
source <(grep "^filename${1}=" /home/example/.env)
My guess is that assuming /home/example/.env contains:
#!/bin/bash
# bash sourcable file
filename=$(echo 123)
then you want:
#!/bin/bash
cpf() {
(
. /home/example/.env
cp "$filename" /home/nope/temp.txt
)
}
Notes:
eval is evil. Your use of eval $(grep...) is very dangerous.
Always remember to quote your expansions.
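To illustrate the quoting note with a hypothetical value:
filename="name with spaces.txt"
cp $filename /home/nope/temp.txt     # word splitting: cp sees three source arguments
cp "$filename" /home/nope/temp.txt   # cp sees exactly one source argument, as intended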
I have this (test) script:
#!/bin/bash
my_cmd_bad_ ( ) {
cmd="$#"
$cmd
}
my_cmd_good_ ( ) {
"$#"
}
my_cmd_bad_ ls -l "file with space"
my_cmd_good_ ls -l "file with space"
The output is (the file does not exist, which is not the point of this question):
» ~/test.sh
ls: cannot access file: No such file or directory
ls: cannot access with: No such file or directory
ls: cannot access space: No such file or directory
ls: cannot access file with space: No such file or directory
I am surprised that the first version does not work as expected: the parameter is not quoted, and instead of processing one file, it processes three. Why?
How can I save the command that I want to execute, properly quoted? I need to execute it later, where I do not have "$@" anymore.
A simple rework of this test script would be appreciated.
See similar question: How to pass command line parameters with quotes stored in single variable?
Use these utility functions to save a command to a string for later execution:
bash_escape() {
# backtick indirection strictly necessary here: we use it to strip the
# trailing newline from sed's output, which Solaris/BSD sed *always* output
# (unlike GNU sed, which outputs "test": printf %s test | sed -e s/dummy//)
out=`echo "$1" | sed -e s/\\'/\\''\\\\'\\'\\'/g`
printf \'%s\' "$out"
}
append_bash_escape() {
printf "%s " "$1"
bash_escape "$2"
}
your_cmd_fixed_ ( ) {
    cmd=""                                   # build up a safely quoted command string
    while [ $# -gt 0 ] ; do
        cmd=`append_bash_escape "$cmd" "$1"` ; shift
    done
    eval "$cmd"                              # the string could also be saved and evaluated later
}
You can quote any single parameter and evaluate it later:
my_cmd_bad_ ( ) {
j=0
for i in "$#"; do
cmd["$j"]=\"$"$i"\"
j=$(( $j + 1 ))
done;
eval ${cmd[*]}
}
You are combining three space-delimited strings "ls", "-l", and "file with space" into a single space-delimited string cmd. There's no way to know which spaces were originally quoted (in "file with space") and which spaces were introduced during the assignment to cmd.
Typically, it is not a good idea to try to build up command lines into a single string. Use functions, or isolate the actual command and leave the arguments in "$@".
Rewrite the command like this:
my_cmd_bad_ () {
cmd=$1; shift
$cmd "$#"
}
See http://mywiki.wooledge.org/BashFAQ/050
Note that your second version is greatly preferred most of the time. The only exceptions are if you need to do something special. For example, you can't bundle an assignment or redirect or compound command into a parameter list.
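If the goal is simply to store a command for later execution, the usual bash alternative (also covered in that FAQ) is an array; a minimal sketch:
cmd=(ls -l "file with space")   # every argument stays a separate array element
"${cmd[@]}"                     # run it later; the filename remains one word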
The correct way to handle the quoting issue requires non-standard features. Semi-realistic example involving a template:
function myWrapper {
typeset x IFS=$' \t\n'
{ eval "$(</dev/fd/0)"; } <<-EOF
for x in $(printf '%q ' "$@"); do
echo "\$x"
done
EOF
}
myWrapper 'foo bar' $'baz\nbork'
Make sure you understand exactly what's going on here and that you really have a good reason for doing this. It requires ensuring side-effects can't affect the arguments. This specific example doesn't demonstrate a very good use case because everything is hard-coded so you're able to correctly escape things in advance and expand the arguments quoted if you wanted.
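If all you need is to save the arguments as a reusable string, printf %q on its own (a bash extension) is often enough; a sketch:
saved=$(printf '%q ' ls -l "file with space")
echo "$saved"    # ls -l file\ with\ space
eval "$saved"    # the filename is passed as a single argument again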
My bash script looks like the following:
echo "Description:"
while [ $finishInput -eq 0 ]; do
read tmp
desc="$desc"$'\n'"$tmp"
if [ -z "$tmp" ]; then
finishInput="1"
fi
done
echo -n "Maintainer:"
read maintainer
It reads into the desc variable until an empty line is entered. After that, I want to read in other values.
When executing my current script it looks like this:
Description:
Line 1
Line 2
Maintainer:
I would like to overwrite the last empty line with the "Maintainer:".
I searched for a solution but only found suggestions like
echo -n "Old line"
echo -e "\r new line"
which stay on the same line and overwrite it. That is not possible in my case, because the cursor has already moved to the next line by the time the empty line has been read.
In your example you overwrite the text on the same line. When you want to return to the previous line, use \e[1A, and to clear that line, use \e[K:
echo 'Old line'
echo -e '\e[1A\e[Knew line'
When you want to go N lines up, use \e[<N>A
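Applied to the script in the question, it could look something like this (a sketch, keeping the read loop unchanged):
# ... the while/read loop from the question ...
echo -en "\e[1A\e[K"    # move up onto the blank line and clear it
echo -n "Maintainer:"
read maintainer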
I found a great guide on escape sequences and wanted to expand on some of the discussion here.
When you write out to a terminal, you move an invisible cursor around, much like you do when you write in any text editor. When using echo, it will automatically end the output with a new line character which moves the cursor to the next line.
$ echo "Hello" && echo " World"
Hello
World
You can use -n to prevent the newline, and if you echo again after this, it will be appended to the end of that line:
$ echo -n "Hello" && echo " World"
Hello World
The cursor remains where it was, so on its own -n can't be used to overwrite the previous line; we need to move the cursor to the left. To do that, we give echo an escape sequence (which we enable with -e): a carriage return, \r, which puts the cursor at the beginning of the line.
$ echo -n "Hello" && echo -e "\rWorld"
World
That may look like it worked, but see what happens with
$ echo -n "A longer sentance" && echo -e "\rShort sentance"
Short sentancence
See the extra characters? Simply writing over the line only changes the characters where we wrote them.
To fix this, the accepted answer above uses the escape sequence \e[0K to erase everything after the cursor, once the cursor has moved left: \r moves to the beginning, \e[0K erases to the end.
$ echo -n "A longer sentance" && echo -e "\r\e[0KShort sentance"
Short sentance
Important: using \e to begin escape sequences works in zsh but not in sh, and not necessarily in bash; however, \033 works in all of them. If you want your script to work anywhere, you should prefer \033
$ echo -n "A longer sentance" && echo -e "\r\033[0KShort sentance"
Short sentance
But escape characters can provide even more utility. For example \033[1A moves the cursor to the previous line so we don't need the -n on the previous echo:
$ echo "A longer sentance" && echo -e "\r\033[1A\033[0KShort sentance"
Short sentance
\r moves to the beginning, \033[1A moves up one line, \033[0K erases to the end.
Finally, this is all a bit messy in my book, so you can turn this into a function:
overwrite() { echo -e "\r\033[1A\033[0K$@"; }
Using $@ just puts all the parameters of the function into the string
$ echo Longer sentence && overwrite Short sentence
Short sentence
I built a function from Dennis Williamson's comment:
function clearLastLine() {
tput cuu 1 && tput el
}
Thanks to Dennis Williamson
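Used with the script from the question, it might look like this (sketch):
# ... the while/read loop from the question ...
clearLastLine           # removes the blank line that ended the loop
echo -n "Maintainer:"
read maintainer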
If you echo without the newline character (echo -n "Something"), you can use \r with your next echo to move the cursor to the beginning of the line: echo -e "\\rOverwrite something".
#!/bin/bash
CHECK_MARK="\033[0;32m\xE2\x9C\x94\033[0m"
echo -e "\n\e[4mDoing Things\e[0m"
echo -n "doing thing 1..."
sleep 1
echo -e "\\r${CHECK_MARK} thing 1 done"
Just be aware that if your new string is shorter than your old string, the tail of your old string will still be visible. In the example above you would see thing 1 done.. with stray dots left over from the longer doing thing 1... line.
If you want to run a script in a loop and not blow up your scrollback, you can use the following pattern:
while sleep 10s; do
echo -n $(script)
echo -n -e "\e[0K\r"
done
Just replace the script command with your own.
#!/bin/bash
echo "Description:"
while test -z "$finishInput"; do
read -s tmp
desc="$desc"$'\n'"$tmp"
if [ -z "$tmp" ]; then
finishInput=1
else
echo $tmp
fi
#echo "fi="$finishInput;
done
echo -n "Maintainer:"
read maintainer
This solution avoids the empty line, but input is not echoed before the lines are complete.
Hint: My version of bash did not accept "[ $finishInput -eq 0 ]".
This question has 3 parts, and each alone is easy, but combined together is not trivial (at least for me) :)
I need to write a script that should take as its arguments:
one name of another command
several arguments for the command
list of files
Examples:
./my_script head -100 a.txt b.txt ./xxx/*.txt
./my_script sed -n 's/xxx/aaa/' *.txt
and so on.
Inside my script, for some reason, I need to distinguish:
what is the command
what are the arguments for the command
what are the files
so probably the most standard way to write the above examples is:
./my_script head -100 -- a.txt b.txt ./xxx/*.txt
./my_script sed -n 's/xxx/aaa/' -- *.txt
Question 1: Is there any better solution?
Processing in ./my_script (first attempt):
command="$1";shift
args=`echo $* | sed 's/--.*//'`
filenames=`echo $* | sed 's/.*--//'`
#... some additional processing ...
"$command" "$args" $filenames #execute the command with args and files
This solution will fail when the filenames contain spaces and/or '--', e.g.
/some--path/to/more/idiotic file name.txt
Question 2: How do I properly get $command, its $args, and the $filenames for later execution?
Question 3: How do I achieve the following style of execution?
echo $filenames | $command $args #but want one filename = one line (like ls -1)
Is there a nice shell solution, or do I need to use, for example, Perl?
First of all, it sounds like you're trying to write a script that takes a command and a list of filenames and runs the command on each filename in turn. This can be done in one line in bash:
$ for file in a.txt b.txt ./xxx/*.txt;do head -100 "$file";done
$ for file in *.txt; do sed -n 's/xxx/aaa/' "$file";done
However, maybe I've misinterpreted your intent so let me answer your questions individually.
Instead of using "--" (which already has a different meaning), the following syntax feels more natural to me:
./my_script -c "head -100" a.txt b.txt ./xxx/*.txt
./my_script -c "sed -n 's/xxx/aaa/'" *.txt
To extract the arguments in bash, use getopts:
SCRIPT=$0
while getopts "c:" opt; do
case $opt in
c)
command=$OPTARG
;;
esac
done
shift $((OPTIND-1))
if [ -z "$command" ] || [ -z "$*" ]; then
echo "Usage: $SCRIPT -c <command> file [file..]"
exit
fi
If you want to run a command for each of the remaining arguments, it would look like this:
for target in "$#";do
eval $command \"$target\"
done
If you want to read the filenames from STDIN, it would look more like this:
while read target; do
eval $command \"$target\"
done
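An invocation of that STDIN variant might then look like this (hypothetical, assuming the script reads filenames from standard input):
find ./xxx -name '*.txt' | ./my_script -c "head -100"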
The $@ variable, when quoted, will keep the parameters grouped as they should be:
for parameter in "$#"
do
echo "The parameter is '$parameter'"
done
If given:
head -100 test this "File name" out
Will print
The parameter is 'head'
The parameter is '-100'
The parameter is 'test'
The parameter is 'this'
The parameter is 'File name'
The parameter is 'out'
Now, all you have to do is sort the parameters out inside the loop. You can use some very simple rules:
The first parameter is always the command name
The parameters that follow it that start with a dash are parameters
After the "--", or once one doesn't start with a "-", the rest are all file names.
You can check to see if the first character in the parameter is a dash by using this:
if [[ "x${parameter}" == "x${parameter#-}" ]]
If you haven't seen this syntax before, it strips a pattern from the left of the value. The # divides the two parts: the first is the name of the variable, and the second is the glob pattern (not a regular expression) to cut off, in this case a single dash. As long as this statement isn't true, you know you have a parameter. BTW, the x may or may not be needed in this case: when you run a test and the string starts with a dash, the test might mistake it for one of its own options and not a value.
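A quick illustration of that expansion:
parameter="-100"
echo "${parameter#-}"    # 100    (dash stripped, so the comparison above is false)
parameter="a.txt"
echo "${parameter#-}"    # a.txt  (nothing to strip, so the comparison above is true)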
Putting it together, it would look something like this:
parameterFlag=""
for parameter in "$#" #Quotes are important!
do
if [[ "x${parameter}" == "x${parameter#-}" ]]
then
parameterFlag="Tripped!"
fi
if [[ "x${parameter}" == "x--" ]]
then
print "Parameter \"$parameter\" ends the parameter list"
parameterFlag="TRIPPED!"
fi
if [ -n "$parameterFlag" ]
then
print "\"$parameter\" is a file"
else
echo "The parameter \"$parameter\" is a parameter"
fi
done
Question 1
I don't think so, at least not if you need to do this for arbitrary commands.
Question 3
command=$1
shift
while [ "$1" != '--' ]; do
args="$args $1"
shift
done
shift
while [ -n "$1" ]; do
echo "$1"
shift
done | $command $args
Question 2
How does that differ from question 3?
I'm having a problem with a shell script (POSIX shell under HP-UX, FWIW). I have a function called print_arg into which I'm passing the name of a parameter as $1. Given the name of the parameter, I then want to print the name and the value of that parameter. However, I keep getting an error. Here's an example of what I'm trying to do:
#!/usr/bin/sh
function print_arg
{
# $1 holds the name of the argument to be shown
arg=$1
# The following line errors off with
# ./test_print.sh[9]: argval=${"$arg"}: The specified substitution is not valid for this command.
argval=${"$arg"}
if [[ $argval != '' ]] ; then
printf "ftp_func: $arg='$argval'\n"
fi
}
COMMAND="XYZ"
print_arg "COMMAND"
I've tried re-writing the offending line every way I can think of. I've consulted the local oracles. I've checked the online "BASH Scripting Guide". And I sharpened up the ol' wavy-bladed knife and scrubbed the altar until it gleamed, but then I discovered that our local supply of virgins has been cut down to, like, nothin'. Drat!
Any advice regarding how to get the value of a parameter whose name is passed into a function as a parameter will be received appreciatively.
You could use eval, though using direct indirection as suggested by SiegeX is probably nicer if you can use bash.
#!/bin/sh
foo=bar
print_arg () {
arg=$1
eval argval=\"\$$arg\"
echo "$argval"
}
print_arg foo
In bash (but not in other sh implementations), indirection is done by: ${!arg}
Input
foo=bar
bar=baz
echo $foo
echo ${!foo}
Output
bar
baz
This worked surprisingly well:
#!/bin/sh
foo=bar
print_arg () {
local line name value
set | \
while read line; do
name=${line%=*} value=${line#*=\'}
if [ "$name" = "$1" ]; then
echo ${value%\'}
fi
done
}
print_arg foo
It has all the POSIX clunkiness; in Bash it would be much shorter, but then again, you won't need it there because you have ${!}. This, in case it proves solid, would have the advantage of using only builtins and no eval. If I were to construct this function with an external command, it would have to be sed, which would obviate the need for the read loop and the substitutions. Mind that asking for indirection in POSIX without eval has to be paid for with clunkiness, so don't beat me!
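For what it's worth, a rough sketch of that sed variant (assuming the value is a single-line scalar, which set prints as name=value or name='value'):
print_arg_sed () {
    # print the value of the variable named in $1, stripping the name= prefix
    # and any single quotes that `set` wraps around the value
    set | sed -n "s/^$1=//p" | sed "s/^'\(.*\)'\$/\1/"
}
print_arg_sed foo    # prints: bar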
Even though the answer's already accepted, here's another method for those who need to preserve newlines and special characters like Escape ( \033 ): Storing the variable in base64.
You need: bc, wc, echo, tail, tr, uuencode, uudecode
Example
#!/bin/sh
#====== Definition =======#
varA="a
b
c"
# uuencode the variable
varB="`echo "$varA" | uuencode -m -`"
# Skip the first line of the uuencode output.
varB="`NUM=\`(echo "$varB"|wc -l|tr -d "\n"; echo -1)|bc \`; echo "$varB" | tail -n $NUM)`"
#====== Access =======#
namevar1=varB
namevar2=varA
echo simple eval:
eval "echo \$$namevar2"
echo simple echo:
echo $varB
echo precise echo:
echo "$varB"
echo echo of base64
eval "echo \$$namevar1"
echo echo of base64 - with updated newlines
eval "echo \$$namevar1 | tr ' ' '\n'"
echo echo of un-based, using sh instead of eval (but could be made with eval, too)
export $namevar1
sh -c "(echo 'begin-base64 644 -'; echo \$$namevar1 | tr ' ' '\n' )|uudecode"
Result
simple eval:
a b c
simple echo:
YQpiCmMK ====
precise echo:
YQpiCmMK
====
echo of base64
YQpiCmMK ====
echo of base64 - with updated newlines
YQpiCmMK
====
echo of un-based, using sh instead of eval (but could be made with eval, too)
a
b
c
Alternative
You could also use the set command and parse its output; with that, you don't need to treat the variable in a special way before it is accessed.
A safer solution with eval:
v=1
valid_var_name='[[:alpha:]_][[:alnum:]_]*$'
print_arg() {
local arg=$1
if ! expr "$arg" : "$valid_var_name" >/dev/null; then
echo "$0: invalid variable name ($arg)" >&2
exit 1
fi
local argval
eval argval=\$$arg
echo "$argval"
}
print_arg v
print_arg 'v; echo test'
Inspired by the following answer.
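For reference, with the definitions above, the two test calls behave like this:
print_arg v               # prints 1
print_arg 'v; echo test'  # writes "invalid variable name (v; echo test)" to stderr and exits,
                          # so the injected "echo test" never reaches eval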