Normally in my bash scripts I'm used to doing some_command >> log.log. This works fine; however, how can I append more data, like the time and the command name?
My goal is to have a log like this
2012-01-01 00:00:01 [some_command] => some command output...
2012-01-01 00:01:01 [other_command] => other command output...
The processes should be running and writing to the file concurrently.
The final solution, pointed out by William Pursell, in my case would be:
some_command 2>&1 | perl -ne '$|=1; print localtime . ": [some_command] $_"' >> /log.log &
I also added 2>&1 to redirect both STDOUT and STDERR to the file, and an & at the end to keep the program running in the background.
Thank you!
Given your comments, it seems that you want multiple processes to be writing to the file concurrently, and have a timestamp on each individual line. Something like this might suffice:
some_cmd | perl -ne '$|=1; print localtime . ": [some_cmd] $_"' >> logfile
If you want to massage the format of the date, use POSIX::strftime
some_cmd | perl -MPOSIX -ne 'BEGIN{ $|=1 }
print strftime( "%Y-%m-%d %H:%M:%S", localtime ) . " [some_cmd] $_"' >> logfile
An alternative solution using sed would be:
some_command 2>&1 | sed "s/^/`date '+%Y-%m-%d %H:%M:%S'`: [some_command] /" >> log.log &
It works by replacing the beginning-of-line "character" (^). It might come in handy if you don't want to depend on Perl.
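One thing worth noting: the backticks above are expanded by the shell before sed ever starts, so every line gets the timestamp of the moment the pipeline was launched. If per-line timestamps matter for long-running commands, a minimal sketch using a plain while-read loop instead (the [some_command] label is only illustrative) would be:
some_command 2>&1 | while IFS= read -r line; do
    # call date for every line so each entry gets its own timestamp
    printf '%s: [some_command] %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done >> log.log &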
On Ubuntu:
sudo apt-get install moreutils
echo "cat" | ts
Mar 26 09:43:00 cat
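If I remember correctly, ts also accepts an optional strftime-style format string, and literal text in that format passes through unchanged, so something along these lines should fit the log format from the question (treat it as a sketch):
some_command 2>&1 | ts '%Y-%m-%d %H:%M:%S [some_command]' >> log.log &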
Something like this:
(echo -n $(date); echo -n " ls => "; ls) >> /tmp/log
However, your command output spans multiple lines, so it will not have the format you are showing above. In that case you may want to replace the newlines in the output with some other character, using a command like tr or sed, as sketched below.
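For example, a rough sketch that collapses the newlines with tr before appending, using ls as the example command and assuming GNU date for the format string:
( echo -n "$(date '+%Y-%m-%d %H:%M:%S') [ls] => "; ls | tr '\n' ' '; echo ) >> /tmp/log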
One approach is to use logger(1).
Another might be something like this:
stamp () {
    ( echo -n "`date +%T` "
      "$@" ) >> logfile
}
stamp echo how now brown cow
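With the function above, that call should append a line like the following to logfile (the time is just illustrative):
09:45:12 how now brown cow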
A better alternative using GNU sed would be:
some_command 2>&1 | sed 'h; s/.*/date "+%Y-%m-%d %H:%M:%S"/e; G; s/\n/ [some_command]: /'
Breaking down how this sed program works:
# store the current line (a.k.a. "pattern space") in the "hold space"
h;
# replace the current line with the date command and execute it
# note the e command is available in GNU sed, but not in some other versions
s/.*/date "+%Y-%m-%d %H:%M:%S"/e;
# Append a newline and the contents of the "hold space" to the "pattern space"
G;
# Replace that newline inserted by the G command with whatever text you want
s/\n/ [some_command]: /
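For instance, a quick illustrative run (the timestamp shown is made up):
$ echo "hello world" | sed 'h; s/.*/date "+%Y-%m-%d %H:%M:%S"/e; G; s/\n/ [some_command]: /'
2012-01-01 00:00:01 [some_command]: hello world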
I have a command like the one below:
md5sum test1.txt | cut -f 1 -d " " >> test.txt
I want the output of the above command prefixed with File_CheckSum:
Expected output: File_CheckSum: <checksumvalue>
I tried the following:
echo 'File_Checksum:' >> test.txt | md5sum test.txt | cut -f 1 -d " " >> test.txt
but I am getting the result as
File_Checksum:
adbch345wjlfjsafhals
I want the entire output on one line:
File_Checksum: adbch345wjlfjsafhals
echo writes a newline after it finishes writing its arguments. Some versions of echo allow a -n option to suppress this, but it's better to use printf instead.
You can use a command group to concatenate the standard output of your two commands:
{ printf 'File_Checksum: '; md5sum test.txt | cut -f 1 -d " "; } >> test.txt
Note that there is a race condition here: you can theoretically write to test.txt before md5sum is done reading from it, causing you to checksum more data than you intended. (Your original command mentions test1.txt and test.txt as separate files, so it's not clear if you are really reading from and writing to the same file.)
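One way around that race, as a sketch, is to compute the checksum into a variable before opening the file for appending:
sum=$(md5sum test.txt | cut -d ' ' -f 1)   # the hash is finished before we write
printf 'File_Checksum: %s\n' "$sum" >> test.txt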
You can use command grouping to have a list of commands executed as a unit and redirect the output of the group at once:
{ printf 'File_Checksum: '; md5sum test1.txt | cut -f 1 -d " "; } >> test.txt
printf "%s: %s\n" "File_Checksum:" "$(md5sum < test1.txt | cut ...)" > test.txt
Note that if you are trying to compute the hash of test.txt (the same file you are trying to write to), this changes things significantly.
Another option is:
{
printf "File_Checksum: "
md5sum ...
} > test.txt
Or:
exec > test.txt
printf "File_Checksum: "
md5sum ...
but be aware that all subsequent commands will also write their output to test.txt. The typical way to restore stdout is:
exec 3>&1
exec > test.txt # Redirect all subsequent commands to `test.txt`
printf "File_Checksum: "
md5sum ...
exec >&3 # Restore original stdout
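If you go this route, you may also want to close the spare descriptor once you no longer need it (a small optional addition):
exec 3>&-   # Close the saved copy of the original stdout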
You can also chain commands with the && operator, which runs the second command only if the first one succeeds, e.g. mkdir example && cd example.
How do I add a string after each line in a file using bash? Can it be done using the sed command, and if so, how?
If your sed allows in place editing via the -i parameter:
sed -e 's/$/string after each line/' -i filename
If not, you have to make a temporary file:
typeset TMP_FILE=$( mktemp )
touch "${TMP_FILE}"
cp -p filename "${TMP_FILE}"
sed -e 's/$/string after each line/' "${TMP_FILE}" > filename
I prefer echo. Using pure bash:
while IFS= read -r line; do echo "${line}${string}"; done < file
I prefer using awk.
If there is only one column, use $0; otherwise, replace it with the last column.
One way,
awk '{print $0, "string to append after each line"}' file > new_file
or this,
awk '$0=$0"string to append after each line"' file > new_file
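Note the difference between the two: with the comma, print inserts the output field separator (a space by default) between $0 and the string, while the second form concatenates with no separator. A quick illustration:
$ printf 'one\ntwo\n' | awk '{print $0, "X"}'
one X
two X
$ printf 'one\ntwo\n' | awk '$0=$0"X"'
oneX
twoX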
If you have it, the lam (laminate) utility can do it, for example:
$ lam filename -s "string after each line"
Pure POSIX shell and sponge:
suffix=foobar
while IFS= read -r l ; do printf '%s%s\n' "$l" "${suffix}" ; done < file |
sponge file
xargs and printf:
suffix=foobar
xargs -L 1 printf "%s${suffix}\n" < file | sponge file
Using join:
suffix=foobar
join file file -e "${suffix}" -o 1.1,2.99999 | sponge file
Shell tools using paste, yes, head & wc:
suffix=foobar
paste file <(yes "${suffix}" | head -$(wc -l < file) ) | sponge file
Note that paste inserts a Tab char before $suffix.
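If the Tab is unwanted, paste's -d option helps; '\0' in the delimiter list means the empty string, so this variant (a sketch) glues the suffix on with no separator:
paste -d '\0' file <(yes "${suffix}" | head -$(wc -l < file) ) | sponge file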
Of course sponge can be replaced with a temp file, afterwards mv'd over the original filename, as with some other answers...
This is just to add on, using the echo command to add a string at the end of each line in a file:
while IFS= read -r line; do echo "${line}string to add" >> output-file; done < input-file
Adding >> directs the changes we've made to the output file.
Sed is a little ugly; you could do it more elegantly like so:
hendry@i7 tmp$ cat foo
bar
candy
car
hendry@i7 tmp$ for i in `cat foo`; do echo ${i}bar; done
barbar
candybar
carbar
There's a file in which some lines contain some text and either a date or a time stamp:
...
string1-20141001
string2-1414368000000
string3-1414454400000
...
I want to quickly convert time stamps to dates, like this:
$ date -d @1414368000 +"%Y%m%d"
20141027
and I want to do this dynamically with sed or some similar command line tool. For testing I unsuccessfully use this:
$ echo "something-1414454400000" | sed "s/-\(..........\)...$/-$(date -d #\\1 +'%Y%m%d')/"
date: invalid date '#\\1'
something-
but echoing seems to be working:
$ echo "something-1414454400000" | sed "s/-\(..........\)...$/-$(echo \\1)/"
something-1414454400
so what could be done?
It's interesting what's happening here. Some pointers:
Always single-quote your regex for sed, if possible, when using BASH (etc.), especially if it contains special characters like $. This is why date is being run (with -d @\\1) before sed even gets involved.
Your "working" echo example isn't working, actually (I believe): echo \\1 produces \1 (and, as above, will do so before sed even gets invoked). This then happens to be valid sed replacement syntax, so it will substitute your group from the LHS, which is why the output looks about right.
Note that by using -r, you can use easier / more advanced regex syntax.
Hard to say exactly what to do without a bit more context, but to fix the immediate problems, try something like:
echo "something-1414454400000" | sed -re 's/-([0-9]{10,}).+/-$(date -d #\1 +"%Y%m%d")/'
which produces: $(date -d #1414454400) (which you can then pipe to sh)
Or for a more complete solution, you can change the regex to produce a shell command directly, and pipe it:
echo "something-1414454400000" | sed -re 's/(.*-)([0-9]{10,10}).+/echo \1$(date -d #\2 \"+%Y%M%d\")/' | sh
...producing something-20140028
You can do this in BASH:
while read -r p; do
if [[ "$p" =~ ^(.+-)([0-9]{10}).{3}$ ]]; then
echo -n "${BASH_REMATCH[1]}"
date -d "#${BASH_REMATCH[2]}" +"%Y%m%d"
else
echo "$p"
fi
done < file
OUTPUT:
string1-20141001
string2-20141026
string3-20141027
awk -F- 'BEGIN { OFS=FS }
$2 ~ /^[0-9]{13}$/ {
"date -d@" $2/1000 " +%Y%m%d " | getline t; $2=t }1' file
Just try this command. I have checked it, and it works on your inputs.
cat file | sed -E "s,(.*)-(.*),\1-`date -d @1414368000 +'%Y%m%d'`,g"
I'm processing some data from a text file using a bash script (Ubuntu 12.10).
The basic idea is that I select a certain line from a file using grep. Next, I process the line to get the number with sed. Both the grep and sed command are working. I can echo the number.
But the concatenation of the result with a string goes wrong.
I get different results when combining strings, depending on whether I grep from a variable or from a file. The concatenation goes wrong when I grep a file; it works as expected when I grep a variable containing the same text as the file.
What am I doing wrong with the grep from a file?
Contents of test.pdb
REMARK overall = 324.88
REMARK bon = 24.1918
REMARK coup = 0
My script
#!/bin/bash
#Correct function
echo "Working code"
TEXT="REMARK overall = 324.88\nREMARK bon = 24.1918\nREMARK coup = 0\n"
DATA=$(echo -e $TEXT | grep 'overall' | sed -n -e "s/^.*= //p" )
echo "Data: $DATA"
DATA="$DATA;0"
echo $DATA
#Not working
echo ""
echo "Not working code"
DATA=$(grep 'overall' test.pdb | sed -n -e "s/^.*= //p")
echo "Data: $DATA"
DATA="$DATA;0"
echo $DATA
Output
Working code
Data: 324.88
324.88;0
Not working code
Data: 324.88
;04.88
I went crazy with the same issue.
The real problem is that your "test.pdb" probably has the wrong EOL (end of line) characters.
Linux EOL: LF (aka \n)
Windows EOL: CR LF (aka \r \n)
This means that echo and grep will have problems with this extra character (\r); luckily, tr, sed and awk handle it correctly.
So you can also try:
DATA=$(grep 'overall' test.pdb | sed -n -e "s/^.*= //p" | sed -e 's/\r$//')
or
DATA=$(grep 'overall' test.pdb | sed -n -e "s/^.*= //p" | tr -d '\r')
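A quick way to confirm the diagnosis, assuming GNU coreutils, is to make the carriage returns visible; cat -A prints them as ^M just before the end-of-line marker $:
cat -A test.pdb | head -3
# REMARK overall = 324.88^M$   <- the ^M means a Windows (CRLF) line ending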
With awk, it will be more reliable and cleaner, I guess:
$ awk '$2=="overall"{print "Working code\nData: " $4 "\n" $4 ";0"}' file.txt
Working code
Data: 324.88
324.88;0
Try this:
SUFFIX=";0"
DATA="${DATA}${SUFFIX}"
I have a function in bash that outputs a bunch of lines to stdout. I want to combine them into a single line with some delimiter between them.
Before:
one
two
three
After:
one:two:three
What is an easy way to do this?
Use paste
$ echo -e 'one\ntwo\nthree' | paste -s -d':'
one:two:three
And another way:
cat file | tr -s "\n" ":"
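Keep in mind that tr also translates the final newline, so this leaves a trailing colon; a quick sketch of the behaviour and one way to trim it:
$ printf 'one\ntwo\nthree\n' | tr -s "\n" ":"
one:two:three:
$ printf 'one\ntwo\nthree\n' | tr -s "\n" ":" | sed 's/:$//'
one:two:three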
This might work for you:
paste -sd':' file
For fun, here's a bash-only way:
echo $'one\n2 and 3\nfour' | { mapfile -t lines; IFS=:; echo "${lines[*]}"; }
outputs
one:2 and 3:four
The {} grouping is to ensure all the commands that refer to the array variable are executed in the same subshell. The variable will not exist once the pipeline ends.
http://www.gnu.org/software/bash/manual/bashref.html#index-mapfile-140
Taking @glennJackman's corrections verbatim:
awk '{printf("%s%s", sep, $0); sep=":"} END {print ""}' file
Or, as you specified bash:
while IFS= read -r line ; do printf '%s:' "$line" ; done < file | sed 's/:$//'
I hope this helps
Input.txt
one
two
three
Perl Solution: dummy.pl
@a = `cat /home/Input.txt`;
foreach my $x (@a)
{
    chomp($x);
    push(@array, "$x");
}
chomp(@array);
print "@array";
Run the script as:
$> perl dummy.pl | sed 's/ /:/g' > Output.txt
Output.txt
one:two:three