bash shell tr -d -c options combination usage [closed]

echo $PATH | tr -d -c :
this output is:
::::
The value of $PATH is:
/import/adams/2/z1/bin-pc.i86.linux:/import/adams/2/z1/bin:/usr/local/bin:/usr/bin:/bin
Why do I get such an output (;;;;)? I don't understand -d -c :. The -c option needs two sets, but -d needs only one set. Which option is applied first? How is this result generated?
Thanks.

$ p=/import/adams/2/z1/bin-pc.i86.linux:/import/adams/2/z1/bin:/usr/local/bin:/usr/bin:/bin
$ echo "$p" | tr -d -c :
::::
The -d option tells tr to delete characters.
The -c option says to use the complement of the character set that follows.
Because the character set that follows is :, everything except : is deleted. That is why the output consists only of the colons from $PATH.
More examples
In the following, the character set consists of not just : but also /. Consequently, everything except : and / is deleted:
$ echo "$p" | tr -d -c :/
/////://///:///://:/
In the following, we omit -c and specify a character set of :. Consequently, all colons are deleted:
$ echo "$p" | tr -d :
/import/adams/2/z1/bin-pc.i86.linux/import/adams/2/z1/bin/usr/local/bin/usr/bin/bin
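As an aside (my own example, not part of the answer above), this delete-the-complement trick combines nicely with wc for counting: the colons that survive tell you how many separators $PATH contains, so the number of entries is one more.
$ echo "$p" | tr -d -c : | wc -c
4
Here four colons mean five directories in the sample $PATH.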

I cannot reproduce getting ; rather than : in your output, but you are asking tr to -delete everything in the -complement of :, i.e. all non-: characters.
As man 1 tr says:
-c, -C, --complement
use the complement of SET1
-d, --delete
delete characters in SET1, do not translate
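For contrast, here is a small illustration (mine, not from the answers) of -c used with two sets: tr then translates the complement of the first set instead of deleting it. Keeping \n in the set preserves the trailing newline.
$ echo abc:def | tr -c ':\n' x
xxx:xxx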

Related

How to create dynamic substring with awk [closed]

Let's say I have a file like the one below.
ABC_DEF_G-1_P-249_8.CSV
I want to cut it down to this:
ABC_DEF_G-1_P-249_
I use this awk command to do that:
ls -lrt | grep -i .CSV | tail -1 | awk -F ' ' '{print $8}' | cut -c 1-18
The question is: if the number 1 grows, how do I make the substring dynamic?
For example:
ABC_DEF_G-1_P-249_
....
ABC_DEF_G-10_P-249_
ABC_DEF_G-11_P-249_
...
ABC_DEF_G-1000_P-249_
To display the file names of all .CSV files without everything after the last underscore, you can do this:
for fname in *.CSV; do echo "${fname%_*}_"; done
This removes the last underscore and everything that follows it (${fname%_*}), and then appends an underscore again. You can assign that, for example, to another variable.
For an example file list of
ABC_DEF_G-1_P-249_9.CSV
ABC_DEF_G-10_P-249_8.CSV
ABC_DEF_G-1000_P-249_4.CSV
ABC_DEF_G-11_P-249_7.CSV
ABC_DEF_G-11_P-249_7.txt
this results in
$ for fname in *.CSV; do echo "${fname%_*}_"; done
ABC_DEF_G-1_P-249_
ABC_DEF_G-10_P-249_
ABC_DEF_G-1000_P-249_
ABC_DEF_G-11_P-249_
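To capture the prefix instead of printing it, the same expansion can be assigned inside the loop (base is just an illustrative name):
for fname in *.CSV; do
  base="${fname%_*}_"
  # ... use "$base" here, e.g. to build a new file name
done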
You can do this with just ls and grep
ls -1rt | grep -oP ".*(?=_\d{1,}\.CSV)"
If you are concerned about the output of ls -1, as mentioned in the comments you can use find as well
find -type f -printf "%f\n" | grep -oP ".*(?=_\d{1,}\.CSV)"
Outputs:
ABC_DEF_G-1_P-249
ABC_DEF_G-1000_P-249
This assumes you want everything except the _number.CSV; if it needs to be case-insensitive, you can add the -i flag to grep. The \d{1,} allows the number between _ and .CSV to grow from one to many digits. Doing it this way, you also don't have to worry about the number 1 in your example increasing:
ABC_DEF_G-1_P-249
You should not be parsing ls. Perhaps you are looking for something like this:
base=$(printf "%s\n" * | grep -i .CSV | tail -1 | awk -F ' ' '{print $0}' | cut -c 1-18)
However, that's a useless use of grep you want to get rid of right there: awk does everything grep does, and everything tail does, and actually everything cut does as well. The grep can also be avoided by using a better wildcard, though:
base=$(printf "%s\n" *.[Cc][Ss][Vv] | awk 'END { print substr($0, 1, 18) }')
In the shell itself, you can do much the same thing with no external processes at all. Proposing a suitable workaround would perhaps require a better understanding of what you are trying to accomplish, though.
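As a rough sketch of that pure-shell idea, assuming the file you want happens to be the last match in glob (alphabetical) order rather than the most recently modified one:
files=( *.[Cc][Ss][Vv] )          # all CSV files, matched case-insensitively
last=${files[${#files[@]}-1]}     # last element of the array
base=${last%_*}_                  # drop the trailing _number.CSV, keep the underscore
echo "$base"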

How to crop a word [closed]

I have this list of inputs:
imalex
thislara
heiscarl
How do I get:
alex
lara
carl
grep
Use grep to take the last four chars:
grep -o '.\{4\}$' file
The -o option makes sure only matched parts are printed.
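Applied to the sample input above, this prints:
$ grep -o '.\{4\}$' file
alex
lara
carl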
sed
Using sed we can achieve a similar result:
sed 's/.*\(.\{4\}\)$/\1/' file
Here we capture the last four characters and replace each line with them. They are captured in a group \( \) and re-inserted with \1.
read & tail
We can also grab the last five chars (including the newline) of each line using tail and a -c option. We do that for each line using read.
while IFS= read -r line; do
tail -c 5 <<< "$line"
done < file
Two answers using substring arithmetic
bash:
while read word; do
echo "${word:${#word}-4}"
done <<<"$list"
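A shorter spelling of the same expansion is ${word: -4}; the space before the minus sign keeps bash from parsing it as the ${word:-default} operator:
while read word; do
echo "${word: -4}"
done <<<"$list"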
awk
echo "$list" | awk '{print substr($NF, length($NF)-4+1)}'

Why does this sed command do nothing? [closed]

I am trying to cat a file to create a copy of itself, but at the same time replace some values.
my command is:
cat ${FILE} | sed "s|${key}|${value}|g" > ${TEMP_FILE}
However, when I open the temp file, none of the keys has been replaced; the output is just a straight copy of the original. I have echoed the values of key and value and they are correct; they come from an array element.
Yet if I use a plain string instead of a variable, it works fine for one particular key, i.e.:
cat ${FILE} | sed "s|example_key|${value}|g" > ${TEMP_FILE}
The example_key instances within the file are replaced, which is what I want.
However, when I try to use my array-derived $key variable, it does nothing. No idea why :-(
Command usage:
declare -a props
...
....
for x in "${props[@]}"
do
key=`echo "${x}" | cut -d '=' -f 1`
value=`echo "${x}" | cut -d '=' -f 2`
# global replace on the $FILE
cat ${FILE} | sed "s|${key}|${value}|g" > ${TEMP_FILE}
#cat ${FILE} | sed "s|example_key|${value}|g" > ${TEMP_FILE}
done
Array elements are stored in the following format: $key=$value
key='echo "${x}" | cut -d '=' -f 1
value='echo "${x}" | cut -d '=' -f 2
Use back-ticks, not single-quotes, if you want to do command substitution.
key=`echo "${x}" | cut -d '=' -f 1`
value=`echo "${x}" | cut -d '=' -f 2`
Also note that as you loop over the series of key=value pairs, you're overwriting your temp file each time with the result of a single substitution applied to the original file. So after the loop is finished, the best you can hope for is that only the last substitution will be applied.
I'd also suggest not doing this in multiple passes -- do it by passing multiple expressions to sed:
subst=()
for x in "${props[@]}" ; do
subst+=(-e "s=$x=g")
done
sed "${subst[@]}" "${FILE}" > "${TEMP_FILE}"
I'm using a trick here: by using = as the delimiter for the sed substitution expression, we don't have to separate the key from the value. The command effectively becomes:
sed -e 's=foo=1=g' -e 's=bar=2=g' "${FILE}" > "${TEMP_FILE}"
Thanks to @BillKarwin for spotting the crux of the problem: each iteration of the loop wipes out the previous iterations' replacements, because the result of a single key-value pair replacement replaces the entire output file every time.
Try the following:
declare -a props
# ...
cp "$FILE" "$TEMP_FILE"
for x in "${props[@]}"; do
IFS='=' read -r key value <<<"$x"
sed -i '' "s|${key}|${value}|g" "${TEMP_FILE}"
done
Copies the input file to the output file first, then replaces the output file in place (using sed's -i option; the empty string after -i is the BSD/macOS form, with GNU sed use -i on its own) in every iteration of the loop.
I also streamlined the code to parse each line into key and value, using read.
Also note that I consistently double-quoted all variable references.
@anubhava makes a good general point: depending on the variable values, a different regex delimiter may be needed (in your case: if the keys or values contained '|', you couldn't use '|' to delimit the regexes).
Update: @BillKarwin makes a good point: performing the replacements one by one, in a loop, is inefficient.
Here's a one-liner that avoids loops altogether:
sed -f <(printf '%s\n' "${props[@]}" |
  awk -F'=' 'NF { print "s/" $1 "/" substr($0, length($1)+2) "/g" }') "$FILE" > "$TEMP_FILE"
Uses awk to build up the entire set of substitution commands for sed (one per line).
Then feeds the result via process substitution as a command file to sed with -f.
Handles values with embedded = characters correctly.
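For illustration, with a hypothetical props=(foo=1 bar=2), the awk stage alone emits the sed script that -f then consumes:
$ props=(foo=1 bar=2)
$ printf '%s\n' "${props[@]}" | awk -F'=' 'NF { print "s/" $1 "/" substr($0, length($1)+2) "/g" }'
s/foo/1/g
s/bar/2/g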

grep containing "-c" [closed]

I want to search for -c using grep
For example:
$>ls -al | grep '-c'
But grep thinks it is an option.
$>Usage: grep -hblcnsviw pattern file . . .
How can I search for -c as a string?
Tell grep where your options end:
grep -- -c
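For example, with some made-up input (not from the original post):
$ printf '%s\n' 'gcc -c main.c' 'make all' | grep -- -c
gcc -c main.c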
You could either:
Escape the hyphen:
grep '\-c'
Use the -e (--regexp=) flag:
grep -e -c
grep --regexp=-c # Not in POSIX, but supported at least in Linux and OS X.
You use the -e option:
$ ls -a1 | grep -e -c
This is of course mentioned in the documentation, thusly:
-e PATTERN, --regexp=PATTERN
Use PATTERN as the pattern. This can be used to specify multiple search patterns, or to protect a pattern beginning with a hyphen (-). (-e is specified by POSIX.)
To stay on topic, the answer to your original question is to use either -- to signal the end of options to process, or -e to explicitly mark the pattern.
However, parsing the output of ls will produce an incorrect result if any of the file names contains a newline. Use find instead:
find . -maxdepth 1 -name '*-c*'

Make output of command appear on single line [closed]

Is it possible to get the output of a command - for example tar - to write each line of output to one line only?
Example usage:
tar -options -f dest source | [insert trickery here]
and the output would show every file being processed without making the screen move: each output overwrites the last one. Can it be done?
Edit: we seem to have a working answer, but let's take it further:
How about doing the same, but over 5 lines? You would see a scrolling output that doesn't affect the rest of the terminal. I think I've got an answer, but I'd like to see what you guys think.
Replace the newlines with carriage returns.
tar -options -f dest source | cut -b1-$(tput cols) | sed -u 'i\\o033[2K' | stdbuf -o0 tr '\n' '\r'; echo
Explanation:
cut -b1-$(tput cols): Truncates the output of tar if it is longer than the terminal is wide. Depending on how little you want your terminal to move, it isn't strictly necessary.
sed -u 'i\\o033[2K': Inserts the 'erase line' escape sequence (ESC[2K) on a line of its own before each line of output. The -u option to sed puts it in unbuffered mode. stdbuf -oL sed 'i\\033[2K' would work equally as well.
stdbuf -o0 tr '\n' '\r': Uses tr to exchange newlines with carriage returns. Stdbuf makes sure that the output is unbuffered; without the \n's, on a line buffered terminal, we'd see no output.
echo: Outputs a final newline, so that the terminal prompt doesn't eat up the final line.
For the problem your edit proposes:
x=0;
echo -e '\e[s';
tar -options -f dest source | while read line; do
echo -en "\e[u"
if [ "$x" -gt 0 ]; then echo -en "\e[${x}B"; fi;
echo -en "\e[2K"
echo -n "$line" | cut -b1-$(tput cols);
let "x = ($x+1)%5";
done; echo;
Feel free to smush all that onto one line. This actually yields an alternative solution for the original problem:
echo -e '\e[s'; tar -options -f dest source | while read line; do echo -en "\e[u\e[2K"; echo -n "$line" | cut -b1-$(tput cols); done; echo
which neatly relies on nothing except VT100 codes.
Thanks to Dave/tripleee for the core mechanic (replacing newlines with carriage returns), here's a version that actually works:
tar [opts] [args] | perl -e '$| = 1; while (<>) { s/\n/\r/; print; } print "\n"'
Setting $| causes perl to automatically flush after every print, instead of waiting for newlines, and the trailing newline keeps your last line of output from being (partially) overwritten when the command finishes and bash prints a prompt. (That's really ugly if it's partial, with the prompt and cursor followed by the rest of the line of output.)
It'd be nice to accomplish this with tr, but I'm not aware of how to force tr (or anything similarly standard) to flush stdout.
Edit: The previous version is actually ugly, since it doesn't clear the rest of the line after what's been output. That means that shorter lines following longer lines have leftover trailing text. This (admittedly ugly) version fixes that:
tar [opts] [args] | perl -e '$| = 1; $f = "%-" . `tput cols` . "s\r"; $f =~ s/\n//; while (<>) {s/\n//; printf $f, $_;} print "\n"'
(You can also get the terminal width in more perl-y ways, as described here; I didn't want to depend on CPAN modules, though.)
tar -options -f dest source | cut -b1-$(tput cols) | perl -ne 's/^/\e[2K/; s/\n/\r/; print' ;echo
Explanations:
| cut -b1-$(tput cols) This is in order to make sure that the columns aren't too wide.
(In perl -ne) s/^/\e[2K/ This code clears the current line, erasing 'old' lines. This should be at the start of the line, in order to ensure that the final line of output is preserved and also to ensure that we don't delete a line until the next line is available.
(In perl -ne) s/\n/\r/ The tr command could be used here of course, but once I started using perl, I stuck with it.
PS To clarify: There are two distinct 'line-width' problems. Both must be solved. (1) We need to clear the lines, so that a short line isn't mixed up with older, longer lines. (2) If a line is very long, and is wider than the current terminal width, then we need to trim it.
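For reference, here is the same core mechanic reduced to plain bash and ANSI escape codes (a sketch in the spirit of the answers above, not a drop-in replacement; cols and line are just illustrative names): print a carriage return plus an 'erase line' sequence before each line, and finish with a newline so the prompt does not land on top of the last line.
cols=$(tput cols)
tar -options -f dest source | while IFS= read -r line; do
  printf '\r\033[2K%.*s' "$cols" "$line"   # clear the current line, print at most $cols characters
done
printf '\n'                                # keep the final line visible before the prompt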
