I wrote a massive one-liner at work as a tool to check some logs. I wanted to break it up and comment it so I could understand it at a later date. When I was done breaking it all up, I bumped into this error:
/home/kaffe/.aliases:13: parse error near `|'
11 mlog () {
12 cat /home/kaffe/progs/muse/nxaa* \ # Look in all muse logs
13 | grep "$(date +'%Y-%m-%d')\|$(date --date '-1 days' +'%Y-%m-%d')" \ # Dynamic search for date - today and yesterday
14 | sed -e 's/ com.*(): / /; \ # Start sed, remove irrelevant information
15 s/;/ /;s/;/ /; \ # Remove first two instances of semi-colon in every line
16 s/, severity../ /; \ # Globally remove mention of severity level
17 s/.*New alarm:/ New: &/g; \ # If "New alarm:" exists, add "New:" to beginning of line
18 s/ New alarm: / /g1; \ # Globally remove "New alarm:" from line
19 s/.*Alarm cleared:/Cleared: &/g; \ # If "Alarm cleared:" exists, add "Cleared:" to beginning
20 s/ Alarm cleared: / /g1; \ # Globally remove "Alarm cleared:" from line
21 s/.*Alarm changed:/Changed: &/g; \ # If "Alarm changed:" exists, add "Changed:" to beginning
22 s/ Alarm changed: / /g1' \ # Globally remove "Alarm changed:" from line
23 -e ''/ New:/s//$(printf "\033[31mNew:\033[0m")/g'' \ # Color "New:" red
24 -e ''/Cleared:/s//$(printf "\033[32mCleared:\033[0m")/g'' \ # Color "Cleared:" green
25 -e ''/Changed:/s//$(printf "\033[33mChanged:\033[0m")/g'' \ # Color "Changed:" yellow
26 | sort -k1.24 \ # Sort from 14th character (date)
27 | egrep -i $1 # Insert custom search pattern, allow regexp, case insensitive
28 }
The function seems to work as intended, though. I just wish to understand why there is an error and my abysmal zsh-fu restricts me from figuring it out. Knowing what causes this would probably help me in future zsh endeavors.
Thanks in advance for any contribution.
OS and zsh versions:
$ uname -a
Linux kaffe-noc 3.2.0-4-amd64 #1 SMP Debian 3.2.68-1+deb7u3 x86_64 GNU/Linux
$ zsh --version
zsh 4.3.17 (x86_64-unknown-linux-gnu)
Do you have those comments in the real code?
You cannot have anything after \ but a newline for a line continuation, so the space and # comment following each backslash break the continuation, and the | that starts the next line then causes the parse error.
man bash
A non-quoted backslash (\) is the escape character. It preserves
the literal value of the next character that follows, with the
exception of <newline>. If a \<newline> pair appears, and the
backslash is not itself quoted, the \<newline> is treated as a line
continuation (that is, it is removed from the input stream and
effectively ignored).
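One pattern that keeps both the line breaks and the comments is to end each line with the pipe itself: zsh (like bash) keeps reading for the next command after a trailing |, and a # comment between the pipe and the newline is simply ignored, so no backslash is needed. A trimmed sketch of the function in that style (only a few of the stages are shown, it is not a drop-in replacement):
mlog () {
    cat /home/kaffe/progs/muse/nxaa* |        # Look in all muse logs
        grep "$(date +'%Y-%m-%d')" |          # Today's entries (the two-day pattern is omitted here)
        sort -k1.24 |                         # Sort on the date field
        egrep -i "$1"                         # Custom search pattern, case insensitive
}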
Let's suppose that you need to generate a NUL-delimited stream of timestamped filenames.
On Linux & Solaris I can do it with:
stat --printf '%.9Y %n\0' -- *
On BSD, I can get the same info, but delimited by newlines, with:
stat -f '%.9Fm %N' -- *
The man talks about a few escape sequences but the NUL byte doesn't seem supported:
If the % is immediately followed by one of n, t, %, or @, then a newline character, a tab character, a percent character, or the current file number is printed.
Is there a way to work around that? edit: (accurately and efficiently?)
Update:
Sorry, the glob * is misleading. The arguments can contain any path.
I have a working solution that forks a stat call for each path. I want to improve it because of the massive number of files to process.
You may try this workaround when running the stat command on files:
stat -nf "%.9Fm %N/" * | tr / '\0'
Here:
-n: To suppress newlines in stat output
Added / as terminator for each entry from stat output
tr / '\0': To convert / into NUL byte
Another work-around is to use a control character in stat and use tr to replace it with \0 like this:
stat -nf "%.9Fm %N"$'\1' * | tr '\1' '\0'
This will work with directories also.
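To check that the NUL bytes really end up in the stream, you can pipe the result through od; for example (assuming, as above, that \1 never occurs in your file names):
stat -nf "%.9Fm %N"$'\1' -- * | tr '\1' '\0' | od -c | head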
Unfortunately, stat out of the box does not offer this option, and so what you ask is not directly achievable.
However, you can easily implement the required functionality in a scripting language like Perl or Python.
#!/usr/bin/env python3
from pathlib import Path
from sys import argv
for arg in argv[1:]:
    print(
        Path(arg).stat().st_mtime,
        arg, end="\0")
Demo: https://ideone.com/vXiSPY
The demo exhibits a small discrepancy in the mtime which does not seem to be a rounding error, but the result could be different on MacOS (the demo platform is Debian Linux, apparently). If you want to force the result to a particular number of decimal places, Python has formatting facilities similar to those of stat and printf.
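For instance, a variant of the script above that sidesteps float rounding entirely by using st_mtime_ns (available since Python 3.3) and always prints nine decimal places could look roughly like this; it is only a sketch, not the script from the demo:
python3 -c '
from pathlib import Path
from sys import argv
for arg in argv[1:]:
    ns = Path(arg).stat().st_mtime_ns            # integer nanoseconds, no float rounding
    print(f"{ns // 10**9}.{ns % 10**9:09d}", arg, end="\0")
' *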
With any command that can't produce NUL-terminated (or any other character/string terminated) output, you can just wrap it in a function to call the command and then printf its output with a terminating NUL instead of a newline, for example:
nulstat() {
    local fmt=$1 file
    shift
    for file in "$@"; do
        printf '%s\0' "$(stat -f "$fmt" "$file")"
    done
}
nulstat '%.9Fm %N' *
For example:
$ > foo
$ > $'foo\nbar'
$ nulstat '%.9Fm %N' foo* | od -c
0000000 1 6 6 3 1 6 2 5 3 6 . 4 7 7 9 8
0000020 0 1 4 0 f o o \0 1 6 6 3 1 6 2
0000040 5 3 9 . 3 8 8 0 6 9 9 3 0 f o
0000060 o \n b a r \0
0000066
1. What you can do (accurate but slow):
Fork a stat command for each input path:
for p in "$@"
do
    stat -nf '%.9Fm' -- "$p" &&
        printf '\t%s\0' "$p"
done
2. What you can do (accurate but twisted):
In the input paths, replace each occurrence of (possibly overlapping) /././ with a single /./, make stat output /././\n at the end of each record, and use awk to substitute each /././\n by a NUL byte:
#!/bin/bash
shopt -s extglob
stat -nf '%.9Fm%t%N/././%n' -- "${@//\/.\/+(.\/)//./}" |
    awk -F '/\\./\\./' '{
        if ( NF == 2 ) {
            printf "%s%c", record $1, 0
            record = ""
        } else
            record = record $1 "\n"
    }'
N.B. If you wonder why I chose /././\n as record separator then take a look at Is it "safe" to replace each occurrence of (possibly overlapped) /./ with / in a path?
3. What you should do (accurate & fast):
You can use the following perl one‑liner on almost every UNIX/Linux:
LANG=C perl -MTime::HiRes=stat -e '
    foreach (@ARGV) {
        my @st = stat($_);
        if ( @st > 0 ) {
            printf "%.9f\t%s\0", $st[9], $_;
        } else {
            printf STDERR "stat: %s: %s\n", $_, $!;
        }
    }
' -- "$@"
note: for perl < 5.8.9, remove the -MTime::HiRes=stat from the command line.
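If the paths come from a directory walk rather than "$@", one way to keep the number of forks low is to save that loop as a small script (say /usr/local/bin/mtime0 — the name is just an example) and hand it batches of files via find:
find /some/dir -type f -exec /usr/local/bin/mtime0 {} +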
ASIDE: There's a bug in BSD's stat:
When %N is at the end of the format string and the filename ends with a newline character, then its trailing newline might get stripped:
For example:
stat -f '%N' -- $'file1\n' file2
file1
file2
For getting the output that one would expect from stat -f '%N' you can use the -n switch and add an explicit %n at the end of the format string:
stat -nf '%N%n' -- $'file1\n' file2
file1

file2
Is there a way to work around that?
If all you need is to just replace all newlines with NULs, then the following tr should suffice:
stat -f '%.9Fm %N' * | tr '\n' '\000'
Explanation: \000 is the NUL byte expressed as an octal value.
I'm encountering a problem with creating a Bash completion function for a command whose completions are expected to contain colons. When you type a command and press tab, Bash puts the contents of the command line into an array, but the words are split at colons. So the command:
dummy foo:apple
Will become:
('dummy' 'foo' ':' 'apple')
I'm aware that one solution is to change COMP_WORDBREAKS, but this isn't an option as it's a team environment, where I could potentially break other code by messing with COMP_WORDBREAKS.
Then this answer suggests using the _get_comp_words_by_ref and __ltrim_colon_completions functions, but it is not remotely clear to me from the answer how to use these.
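As far as I can tell, the intended usage of those helpers is roughly the following (this is only my sketch, assuming the bash-completion package is installed and sourced), but I haven't been able to confirm it:
_comp() {
    local cur
    _get_comp_words_by_ref -n : cur    # rebuild the current word, treating ':' as an ordinary character
    COMPREPLY=( $(compgen -W "foo:apple foo:banana foo:mango pineapple" -- "$cur") )
    __ltrim_colon_completions "$cur"   # drop the already-typed 'foo:' prefix from COMPREPLY
}
complete -F _comp dummy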
So I've tried a different solution below. Basically, read the command line as a string, and figure out which word the user's cursor is currently selecting by calculating an "offset". If there is a colon in the command line with text to the left or right of it, it will add 1 each to the offset, and then subtract this from the COMP_CWORD variable.
#!/bin/bash
_comp() {
    #set -xv
    local a=("${COMP_WORDS[@]}")
    local index=`expr $COMP_CWORD`
    local c_line="$COMP_LINE"

    # Work out the offset to change the cursor by
    # This is needed to compensate for colon completions
    # Because COMP_WORDS splits words containing colons
    # E.g. 'foo:bar' becomes 'foo' ':' 'bar'.

    # First delete anything in the command to the right of the cursor
    # We only need from the cursor backwards to work out the offset.
    for ((i = ${#a[@]}-1 ; i > $index ; i--));
    do
        regex="*([[:space:]])"${a[$i]}"*([[:space:]])"
        c_line="${c_line%$regex}"
    done

    # Count instances of text adjacent to colons, add that to offset.
    # E.g. for foo:bar:baz, offset is 4 (bar is counted twice.)
    # Explanation: foo:bar:baz foo
    #              0  12  34  5    <-- Standard Bash behaviour
    #              0            1  <-- Desired Bash behaviour
    # To create the desired behaviour we subtract offset from cursor index.
    left=$( echo $c_line | grep -o "[[:alnum:]]:" | wc -l )
    right=$( echo $c_line | grep -o ":[[:alnum:]]" | wc -l )
    offset=`expr $left + $right`
    index=`expr $COMP_CWORD - $offset`

    # We use COMP_LINE (not COMP_WORDS) to get an array of space-separated
    # words in the command because it will treat foo:bar as one string.
    local comp_words=($COMP_LINE)

    # If current word is a space, add an empty element to array
    if [ "${COMP_WORDS[$COMP_CWORD]}" == "" ]; then
        comp_words=("${comp_words[@]:0:$index}" "" "${comp_words[@]:$index}" )
    fi

    local cur=${comp_words[$index]}

    local arr=(foo:apple foo:banana foo:mango pineapple)
    COMPREPLY=()
    COMPREPLY=($(compgen -W "${arr[*]}" -- $cur))
    #set +xv
}

complete -F _comp dummy
Problem is, this still doesn't work correctly. If I type:
dummy pine<TAB>
Then it will correctly complete dummy pineapple. If I type:
dummy fo<TAB>
Then it will show the three available options, foo:apple foo:banana foo:mango. So far so good. But if I type:
dummy foo:<TAB>
Then the output I get is dummy foo:foo:, and further tabs don't work, because it now treats foo:foo: as cur, which doesn't match any completion.
When I test the compgen command on its own, like so:
compgen -W 'foo:apple foo:banana foo:mango pineapple' -- foo:
Then it will return the three matching results:
foo:apple
foo:banana
foo:mango
So what I assume is happening is that the Bash completion sees that it has an empty string and three available candidates for completion, so it appends the common prefix foo: to the command line, even though foo: is already the word being completed.
What I don't understand is how to fix this. When colons aren't involved, this works fine - "pine" will always complete to pineapple. If I go and change the array to add a few more options:
local arr=(foo:apple foo:banana foo:mango pineapple pinecone pinetree)
COMPREPLY=()
COMPREPLY=($(compgen -W "${arr[*]}" -- $cur))
Then when I type dummy pine<TAB> it still happily shows me pineapple pinecone pinetree, and doesn't try to add a superfluous pine on the end.
Is there any fix for this behaviour?
One approach that's worked for me in the past is to wrap the output of compgen in single quotes, e.g.:
__foo_completions() {
COMPREPLY=($(compgen -W "$(echo -e 'pine:cone\npine:apple\npine:tree')" -- "${COMP_WORDS[1]}" \
| awk '{ print "'\''"$0"'\''" }'))
}
foo() {
echo "arg is $1"
}
complete -F __foo_completions foo
Then:
$ foo <TAB>
$ foo 'pine:<TAB>
'pine:apple' 'pine:cone' 'pine:tree'
$ foo 'pine:a<TAB>
$ foo 'pine:apple'<RET>
arg is pine:apple
$ foo pi<TAB>
$ foo 'pine:
I am new to bash. I have experience in Java and Python but no experience in bash, so I'm struggling with the simplest of tasks.
What I want to achieve is to look through a string and find certain substrings, numbers to be exact. But not all numbers, just numbers that are followed by " xyz". For example:
string="Blah blah boom boom 14 xyz foo bar 12 foo boom 55 XyZ hue hue 15 xyzlkj 45hh."
And I want to find numbers:
14, 55 and 15
How would I go about that?
You can use grep with lookahead
echo "$string" | grep -i -P -o '[0-9]+(?= xyz)'
Explanation:
-i – ignore case
-P – interpret pattern as a Perl regular expression
-o – print only matching
[0-9]+(?= xyz) – match one or more digits that are followed by " xyz" (the lookahead itself is not included in the match)
For more information see:
https://linux.die.net/man/1/grep
http://www.regular-expressions.info/lookaround.html
https://github.com/tldr-pages/tldr/blob/master/pages/common/grep.md
grep + cut approach (without PCRE):
echo "$string" | grep -io '[0-9]* xyz' | cut -d ' ' -f1
The output:
14
55
15
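A plain-awk variant of the same idea also works where grep lacks PCRE support; this is just a sketch that scans the whitespace-separated fields:
echo "$string" | awk '{ for (i = 1; i < NF; i++) if ($i ~ /^[0-9]+$/ && tolower($(i+1)) ~ /^xyz/) print $i }'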
I want to execute a command on the body of every incoming Postfix mail.
sed ':a;N;$!ba;s/=\n//g' /path-to/message-file | sed 's/</\n\</g' | sed -r '/'"$(sed -r 's/\\/\\\\/g;s/\//\\\//g;s/\^/\\^/g;s/\[/\\[/g;s/'\''/'\'"\\\\"\'\''/g;s/\]/\\]/g;s/\*/\\*/g;s/\$/\\$/g;s/\./\\./g' whitelist | paste -s -d '|')"'/! s/http/httx/g'
I think it could be possible with Postfix After-Queue Content Filter, but I don't know how to do it...
EDIT:
afterqueue.sh
#!/bin/sh
# Simple shell-based filter. It is meant to be invoked as follows:
# /path/to/script -f sender recipients...
# Localize these. The -G option does nothing before Postfix 2.3.
INSPECT_DIR=/var/spool/filter
SENDMAIL="/usr/sbin/sendmail -G -i" # NEVER NEVER NEVER use "-t" here.
# Exit codes from <sysexits.h>
EX_TEMPFAIL=75
EX_UNAVAILABLE=69
# Clean up when done or when aborting.
trap "rm -f in.$$" 0 1 2 3 15
# Start processing.
cd $INSPECT_DIR || {
    echo $INSPECT_DIR does not exist; exit $EX_TEMPFAIL; }
cat >in.$$ || {
    echo Cannot save mail to file; exit $EX_TEMPFAIL; }
# Specify your content filter here.
sh /path/to/remove_links.sh <in.$$
$SENDMAIL "$@" <in.$$
exit $?
remove_links.sh
#!/bin/bash
sed ':a;N;$!ba;s/=\n//g' $1 | sed 's/</\n\</g' | sed -r '/'"$(sed -r 's/\\/\\\\/g;s/\//\\\//g;s/\^/\\^/g;s/\[/\\[/g;s/'\''/'\'"\\\\"\'\''/g;s/\]/\\]/g;s/\*/\\*/g;s/\$/\\$/g;s/\./\\./g' /path/to/whitelist | paste -s -d '|')"'/! s/http/httx/g'
It is working, if I call it by hand, but if I add it to the /etc/postfix/master.cf like this:
# =============================================================
# service type private unpriv chroot wakeup maxproc command
# (yes) (yes) (yes) (never) (100)
# =============================================================
filter    unix  -       n       n       -       10      pipe
    flags=Rq user=filter null_sender=
    argv=/path/to/afterqueue.sh -f ${sender} -- ${recipient}
there are no changes in the mail.
I get the following syslog:
Apr 13 15:14:08 rs211184 postfix/qmgr[7492]: 3FFDF23CB5F: from=<test@gmail.com>, size=4358, nrcpt=1 (queue active)
Apr 13 15:14:08 rs211184 postfix/pipe[7504]: 116E523CA8C: to=<example@example.de>, relay=filter, delay=0.2, delays=0.16/0/0/0.04, dsn=2.0.0, status=sent (delivered via filter service)
Apr 13 15:14:08 rs211184 postfix/qmgr[7492]: 116E523CA8C: removed
Apr 13 15:14:08 rs211184 postfix-local[7522]: postfix-local: from=test@gmail.com, to=example@example.de, dirname=/var/qmail/mailnames
Apr 13 15:14:08 rs211184 postfix/pipe[7521]: 3FFDF23CB5F: to=<dsehlhoff@lcdev1.de>, relay=plesk_virtual, delay=0.02, delays=0.01/0/0/0.01, dsn=2.0.0, status=sent (delivered via plesk_virtual service)
Apr 13 15:14:08 rs211184 postfix/qmgr[7492]: 3FFDF23CB5F: removed
You seem to expect the message in a file, and oddly a static file name, but that's not how it works. The message arrives on standard input. Minimally, just remove /path/to/message-file -- but really, piping sed to sed is very often a mistake; you should refactor this to a single sed script (or Awk, or Python, or what have you).
sed -e ':a;N;$!ba;s/=\n//g' -e 's/</\n\</g' |
# This is too convoluted, really!
sed -r '/'"$(sed -r 's/\\/\\\\/g;s/\//\\\//g;s/\^/\\^/g;s/\[/\\[/g;s/'\''/'\'"\\\\"\'\''/g;s/\]/\\]/g;s/\*/\\*/g;s/\$/\\$/g;s/\./\\./g' whitelist |
paste -s -d '|')"'/! s/http/httx/g'
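On the afterqueue.sh side, the rough idea is then to capture the filter's output and re-inject that instead of the untouched copy, along these lines (out.$$ is just a name chosen here, and should be added to the trap/cleanup as well):
sh /path/to/remove_links.sh <in.$$ >out.$$ || { echo Content filter failed; exit $EX_TEMPFAIL; }
$SENDMAIL "$@" <out.$$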
I am trying to write a little bash script, where you can specify a number of minutes and it will show the lines of a log file from those last X minutes.
To get the lines, I am using sed
sed -n '/time/,/time/p' LOGFILE
On the CLI this works perfectly; in my script, however, it does not.
# Get date
now=$(date "+%Y-%m-%d %T")
# Get date minus X number of minutes -- $1 first argument, minutes
then=$(date -d "-$1 minutes" +"%Y-%m-%d %T")
# Filter logs -- $2 second argument, filename
sed -n '/'$then'/,/'$now'/p' $2
I have tried different approaches and none of them seem to work:
result=$(sed -n '/"$then"/,/"$now"/p' $2)
sed -n "/'$then'/,/'$now'/p" "$2"
sed -n "/$then/,/$now/p" $2
sed -n "/$then/,/$now/p" "$2
Any suggestions?
I am on Debian 5, echo $SHELL says /bin/sh
EDIT: The script produces no output, so there is no error showing up.
In the logfile every entry starts with a date like this 2013-05-15 14:21:42,794
I assume that the main problem is that you try to perform an arithmetic comparison by string matching. sed -n '/23/,/27/p' gives you the lines between the first line that contains 23 and the next line that contains 27 (and then again from the next line that contains 23 to the next line that contains 27, and so on). It does not give you all lines that contain a number between 23 and 27. If the input looks like
19
22
24
26
27
30
it does not output anything (since there is no 23). An awk solution that uses string matching has the same problem. So, unless your then date string occurs verbatim in the log file, your method will fail. You have to convert your date strings into numbers (drop the -, <space>, and :) and then check whether the resulting number is in the right range, using an arithmetical comparison rather than a string match. This goes beyond the capabilities of sed; awk and perl can do it rather easily. Here is a perl solution:
#!/bin/bash
NOW=$(date "+%Y%m%d%H%M%S")
THEN=$(date -d "-$1 minutes" "+%Y%m%d%H%M%S")
perl -wne '
    if (m/^(....)-(..)-(..) (..):(..):(..)/) {
        $date = "$1$2$3$4$5$6";
        if ($date >= '"$THEN"' && $date <= '"$NOW"') {
            print;
        }
    }' "$2"
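The same arithmetic comparison is just as easy in awk; here is a sketch under the same assumption that every log line starts with a YYYY-MM-DD HH:MM:SS timestamp:
#!/bin/bash
NOW=$(date "+%Y%m%d%H%M%S")
THEN=$(date -d "-$1 minutes" "+%Y%m%d%H%M%S")
awk -v from="$THEN" -v till="$NOW" '
    {
        ts = substr($0, 1, 19)      # "2013-05-15 14:21:42"
        gsub(/[-: ]/, "", ts)       # -> 20130515142142
        if (ts + 0 >= from + 0 && ts + 0 <= till + 0) print
    }' "$2"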
Don't give yourself a headache with nested quotes. Use the -v option with awk to pass the value of a shell variable into the script:
#!/bin/bash
# Get date
now=$(date "+%Y-%m-%d %T")
# Get date minus X number of minutes -- $1 first argument, minutes
delta=$(date -d "-$1 minutes" +"%Y-%m-%d %T")
# Filter logs -- $2 second argument, filename
awk -v n="$now" -v d="$delta" '$0~d,$0~n' "$2"
Also, don't use the names of shell keywords (e.g. then) as variable names.