Adding a tab character before external script output - bash

So, I've got a shell script to automate some SVN commands. I output to both a logfile and stdout during the script, and direct the SVN output to /dev/null. Now I'd like to include the SVN output in my logging, but to separate it from my own output I'd like to prepend a \t to each line of the SVN output. Can this be done with shell scripting?
Edit
Is this something I could use AWK for? I'll investigate!
Edit
So, using AWK seems to do the trick. Sadly, I can't get it to work with the svn commands, though.
svn add * | awk '{ print "\t"$0 }'
outputs without the prepended tab character. But if I run, for example, ls
ls -l | awk '{ print "\t"$0 }'
The directory is listed with a tab character in front of each line.
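A quick way to check whether the missing tab is a stderr issue (a sketch; the braces just group two echo commands):
{ echo to-stdout; echo to-stderr >&2; } | awk '{ print "\t" $0 }'
Only to-stdout comes out with a tab; to-stderr bypasses the pipe entirely, which would explain SVN output appearing unprefixed.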
Edit
Thanks @daniel! I ended up with this
svn add * 2>&1 | sed 's/^/\t/'
(The 2>&1 is the key: SVN was writing these messages to stderr, which a plain pipe doesn't capture, so they never reached awk in the first attempt.)
Might as well note that awk works well for this too, when used correctly:
svn add * 2>&1 | awk '{ print "\t" $0 }'

You can use Sed. Instead of redirecting the output of your SVN command to /dev/null, you can pipe it to Sed.
svn ls https://svn.example.com 2>&1 | sed 's/^/\t/'
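To keep the dual logging described in the question, the tab-prefixed SVN output can be appended to the logfile and still echoed to stdout with tee; a sketch, where script.log is a stand-in logfile name:
svn add * 2>&1 | sed 's/^/\t/' | tee -a script.log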

Related

How to remove the username/hostname line from an output on Korn Shell?

I run the command
df -gP /data1 /data2 | grep -v File | awk '{print $1}' |
awk -F/dev/ '$0=$2' | tr '\n' ' '
on the AIX shell (ksh) and it prints the output below:
lv_data01 lv_data02 root@testhost:/
However, I would like the output to be printed this way. Could someone help?
lv_data01 lv_data02
Using grep … | awk … | awk … is not necessary; a single awk could do the whole job. So could sed, and it might even be easier. I'd be tempted to deal with the spacing by using:
x=$(df … | sed …); echo $x
The tr command, once corrected, replaces newlines with spaces, so the prompt follows without a newline before it. The ; echo suggestion adds the missing newline; the echo $x suggestion (note no double quotes) does too.
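A minimal demo of that behaviour:
x=$(printf 'a\nb\n')
echo $x      # prints: a b
echo "$x"    # prints a and b on separate lines
Command substitution strips trailing newlines, and the unquoted expansion lets the shell collapse the remaining internal newline into a single space.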
As for the sed command:
sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'
Don't print anything by default
If the line doesn't match File (doing the work of grep -v):
remove the first space (blank or tab) and everything after it (doing the work of awk '{print $1}')
replace everything up to /dev/ with nothing and print (doing the work of awk -F/dev/ '$0=$2')
The command substitution and capture, followed by echo, deals with spaces and newlines.
So, my suggested solution is:
x=$(df -gP /data1 /data2 | sed -n '/File/!{ s/[[:space:]].*//; s%^.*/dev/%%p; }'); echo $x
You could add unset x after the echo if you are going to be using this directly in the shell and not in a shell script. If it'll be encapsulated in a shell script, you don't have to worry about it.
I'm blithely assuming the output from df -gP won't contain a path such as this, with two occurrences of /dev:
/who/knows/dev/lv_data01/dev/bin
If that's a real problem, you can fix the sed script, but I don't think it will be. It's one thing the second awk script in the question handles differently.
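As an aside, the single-awk version alluded to above might look like this (a sketch, not part of the original answer; it folds the grep, both awk commands, and the spacing fix into one script):
df -gP /data1 /data2 | awk '!/File/ { sub(/[[:space:]].*/, ""); sub(/.*\/dev\//, ""); printf "%s ", $0 } END { print "" }'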

Extract specific string from line with standard grep, egrep or awk

I'm trying to extract a specific string from a grep output.
uci show minidlna
produces a large list
.
.
.
minidlna.config.enabled='1'
minidlna.config.db_dir='/mnt/sda1/usb/db'
minidlna.config.enable_tivo='1'
minidlna.config.wide_links='1'
.
.
.
so I tried to narrow down what I wanted by running
uci show minidlna | grep -oE '\bdb_dir=\S+'
this narrows the output to
db_dir='/mnt/sda1/usb/db'
What I want is to output only
/mnt/sda1/usb/db
without the quotes and without the leading db_dir=, so I can run rm /mnt/sda1/usb/db/file.db.
I've used the answers found here:
How to extract string following a pattern with grep, regex or perl
and that's as close as I got.
EDIT: after using Ed Morton's awk command, I needed to pass the output to the rm command. I used:
... | ( read DB; rm "$DB/files.db" )
read DB reads the output into the variable DB.
( ... ) groups the commands.
rm "$DB/files.db" deletes the file files.db.
Is this what you're trying to do?
$ awk -F"'" '/db_dir/{print $2}' file
/mnt/sda1/usb/db
That will work in any awk in any shell on every UNIX box.
If that's not what you want then edit your question to clarify your requirements and post more truly representative sample input/output.
Using sed with some effort to avoid single quotes:
sed -n 's/^minidlna.config.db_dir=\s*\S\(\S*\)\S\s*$/\1/p' input
Well, so you end up having a string like db_dir='/mnt/sda1/usb/db'.
I would first remove the quotes by piping this to
.... | tr -d "'"
Now you end up with a string like db_dir=/mnt/sda1/usb/db.
Say you have this string stored in a variable named confstr, then
${confstr##*=}
gives you just /mnt/sda1/usb/db: the pattern *= matches everything from the start of the string through the (last) equal sign, and ## removes the longest such matching prefix.
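Putting the two steps together with the grep from the question (a sketch):
confstr=$(uci show minidlna | grep -oE '\bdb_dir=\S+' | tr -d "'")
echo "${confstr##*=}"    # /mnt/sda1/usb/db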
I would do this:
Once you have either extracted your line above into file.txt or piped it into this command, split the fields using the quote character. Use printf to generate the rm command and pass it to bash to execute.
$ awk -F"'" '{printf "rm %s/file.db\n", $2}' file.txt | bash
rm: /mnt/sda1/usb/db/file.db: No such file or directory
With your original command:
$ uci show minidlna | grep -oE '\bdb_dir=\S+' | \
awk -F"'" '{printf "rm %s/file.db\n", $2}' | bash
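A variant that avoids generating shell code and piping it into bash, reusing the awk field split from above in a command substitution (a sketch):
db_dir=$(uci show minidlna | awk -F"'" '/db_dir/{print $2}')
rm "$db_dir/file.db"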

cygwin awk print adds strange character to filename

I'm using Cygwin on a Windows machine to grab some information from a remote Linux machine and write the result to a file. Here is my command:
ssh user@remotemachine ps -aef | grep vnc | grep -v grep | awk '{print "<size>"$11"<\/size>""\n""<colorDepth>"$13"<\/colorDepth>"}' > myfile.txt
However, when I then run
ls -l
on the directory where myfile.txt was written, it shows that the name of the file is actually myfile.txt? (with an added question mark). Where did that extra character come from, and how can I get the script to name the file correctly, as simply myfile.txt?
I would just run another command such as
mv myfile.txt? myfile.txt
or
mv myfile.txt^M myfile.txt
but in my bash script neither seems to find the file to rename. Interestingly, from the terminal (not in the script) I can start typing
mv myf
and then press Tab to complete the filename, finish the line with a new name, and that successfully renames the file.
Most likely your script uses Windows-style line endings. The end of the line looks like
... myfile.txt
but it's really:
... myfile.txt\r\n
where \r\n is the Windows CR-LF line ending. That is how lines in Windows text files are supposed to end, but the shell doesn't recognize Windows-style line endings: it sees a valid line of text with the CR character as part of it, so it treats "myfile.txt\r" as the file name.
How did you create the bash script file? If you used a Windows native editor, that explains the line endings.
Many editors (vim included) will automatically adapt to the line endings of a file, so you may not be able to delete the extra \r from your editor.
And ls displays non-printable characters like CR as ?.
Running file on the script will probably tell you about the line endings.
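For example, on a script saved with Windows line endings (myscript.sh is a stand-in name, and the exact wording varies between versions of file):
$ file myscript.sh
myscript.sh: Bourne-Again shell script, ASCII text executable, with CRLF line terminators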
Filter the script through the dos2unix command. (Read the man page first; unlike most text filters, dos2unix updates its input file rather than writing to stdout.)
This should also work:
mv foo.sh foo.sh.bad
tr -d '\r' < foo.sh.bad > foo.sh
chmod +x foo.sh
(I created a backup copy first just in case something goes wrong, so you don't clobber your script.)
This:
ssh user@remotemachine ps -aef | grep vnc | grep -v grep | awk '{print "<size>"$11"<\/size>""\n""<colorDepth>"$13"<\/colorDepth>"}' > myfile.txt
can be rewritten to:
ssh user@remotemachine ps -aef | awk '/[v]nc/ {print "<size>"$11"<\/size>""\n""<colorDepth>"$13"<\/colorDepth>"}' > myfile.txt
To prevent grep from finding its own process in the ps listing, put the first letter of the pattern in a bracket expression: the regex [v]nc still matches the literal text vnc, but not the string [v]nc that appears in grep's own command line. So
grep vnc | grep -v grep
becomes
grep [v]nc
and since awk can do the pattern matching itself, the greps can be dropped entirely:
awk '/[v]nc/ {some code}'

tail -f, awk and output to file >

I am attempting to filter a log file and am running into issues. What I have so far is the following, which does not work:
tail -f /var/log/squid/accesscustom.log | awk '/username/;/user-name/ {print $1; fflush("")}' | awk '!x[$0]++' > /var/log/squid/accesscustom-filtered.log
The goal is to take a file that contains
ipaddress1 username
ipaddress7
ipaddress2 user-name
ipaddress1 username
ipaddress5
ipaddress3 username
ipaddress4 user-name
and save to accesscustom-filtered.log
ipaddress1
ipaddress2
ipaddress3
ipaddress4
It works without the redirection to accesscustom-filtered.log, but something about the > isn't working right and the file ends up empty.
Edit: Changed the original example to be correct
Use tee:
tail -f /var/log/squid/accesscustom.log | awk '/username|user-name/ {print $1}' | tee /var/log/squid/accesscustom-filtered.log
See also: Writing “tail -f” output to another file and Turn off buffering in pipe
Note: awk doesn't buffer like grep in the superuser example, so you shouldn't need to do anything special with your awk command. (more info)
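If buffering does become an issue with a particular awk build, the fflush() calls from the question's original command can be kept, along with the dedup step; a sketch:
tail -f /var/log/squid/accesscustom.log | awk '/username|user-name/ { print $1; fflush() }' | awk '!seen[$0]++ { print; fflush() }' | tee /var/log/squid/accesscustom-filtered.log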

Extracting all lines from a file that are not commented out in a shell script

I'm trying to extract lines from certain files that do not begin with # (i.e. are not commented out). How would I run through a file, ignore everything with a # in front of it, but copy each line that does not start with a # into a different file?
Thanks
Simpler: grep -v '^[[:space:]]*#' input.txt > output.txt
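A quick demo of that one-liner, with hypothetical file contents:
printf '# a comment\nkeep this\n   # indented comment\nkeep this too\n' > input.txt
grep -v '^[[:space:]]*#' input.txt
which prints only the two keep lines.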
This assumes that you're using Unix/Linux shell and the available Unix toolkit of commands AND that you want to keep a copy of the original file.
cp file file.orig
mv file file.fix
sed '/^[ ]*#/d' file.fix > file
rm file.fix
Or, if you've got a nice shiny new GNU sed, that can all be summarized as
cp file file.orig
sed -i '/^[ ]*#/d' file
In both cases, the bracket expression in the sed regexp is meant to contain a literal space character and a literal tab character.
So you're saying: delete any line that begins with an optional run of space or tab characters followed by #, and print everything else.
I hope this helps.
grep -v ^\# file > newfile
grep -v ^\# file | grep -v ^$ > newfile
Not fancy regex, but I provide this method to Jr. Admins as it helps with understanding of pipes and redirection.
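For what it's worth, the two greps can also be folded into one (a sketch; -E enables the alternation):
grep -Ev '^[[:space:]]*#|^$' file > newfile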
