I am trying to prepend a message to the output of rsstail; this is what I have right now:
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml | awk -v time=$( date +\[%H:%M:%S_%d/%m/%Y\] ) '{print time,$0}' | tee someFile.txt
which should give me the following:
[23:46:49_23/10/2014] Title: someTitle
After the command I have a | while read line; do ... done loop, which never gets executed because the command above does not output a single thing. What am I doing wrong?
PS: I am using the python version of rsstail, since the other one kept on crashing (https://github.com/gvalkov/rsstail.py)
EDIT:
As requested in the comments the command:
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml
Will give back a message like the following when a new article is found
Title: Sweden calls off search for sub
It seems that my rsstail is different from yours, but mine supports the option
-Z x add heading 'x'
so that
rsstail -Z"$( date +\[%H:%M:%S_%d/%m/%Y\] ) " ...
does the job without awk. On the other hand, you do seem to have some problem with buffering; is it possible to ask rsstail to stop after a given number of titles?
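If the buffering really is the culprit, one thing worth trying is forcing line-buffered output on the stage that feeds the pipe. This is only a sketch, assuming GNU coreutils stdbuf is installed (rsstail itself may also buffer when its stdout is a pipe):
rsstail -o -i 15 --initial 0 http://feeds.bbci.co.uk/news/world/europe/rss.xml \
  | stdbuf -oL awk -v time="$(date '+[%H:%M:%S_%d/%m/%Y]')" '{print time, $0}' \
  | tee someFile.txt \
  | while read -r line; do
      : # per-line processing goes here
    done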
I am a complete beginner at shell scripting, and I am trying to iterate through a set of JSON files and extract a certain field from each one. Every JSON file has a "country":"xxx" field. Within a single file the same field appears about 10k times with the same country name, so I only need the first occurrence, which I can get using "-m 1".
I tried to use grep for this but could not figure out how to extract the whole field, including the country name, from the first occurrence in each file.
for FILE in *.json; do
    grep -o -a -m 1 -h '"country":"' "$FILE"
done
I tried adding another pipe with the pattern below, but it did not work:
| egrep -o '^[^"]+'
Actual Output:
"country":"
"country":"
"country":"
Desired Output:
"country:"romania"
"country:"united kingdom"
"country:"tajikistan"
but I need the whole thing. Any help would be great. Thanks
There is one general answer to the question "I only want the first occurrence", and that answer is:
... | head -n 1
This means: whatever you do, take the head (the first lines); the -n switch lets you say how many lines you want (one in this case).
The same can be done for the last occurrence(s), but then you use tail instead of head (it also takes the -n switch).
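Applied to your loop, it could look like the sketch below (assuming the field is literally spelled "country":"value" with no escaped quotes inside the value):
for FILE in *.json; do
    # keep only the first "country":"..." field found in each file
    grep -o '"country":"[^"]*"' "$FILE" | head -n 1
done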
After trying many things, I found the pattern I was looking for.
grep -Po '"country":.*?[^\\]",' "$FILE" | head -n 1
I am working on a shell script with exiftool to automatically change some exif tags on pictures contained in a certain folder and I would like to use the output to get a notification on my NAS (a QNAP) when the job is completed.
Everything works already, but - as the notification system truncates the message - I would like to receive just the information I need, i.e. the last line of the shell output, which is for example the following:
Warning: [minor] Entries in IFD0 were out of sequence. Fixed. - 2015-07-12 15.41.06.jpg
4512 files failed condition
177 image files updated
The problem is that currently I only receive the following notification:
Exiftool cronjob completed on Camera: 4512 files failed condition
What I would like to get instead is:
Exiftool cronjob completed on Camera: 177 image files updated
The script is the following:
#!/bin/sh
# exiftool script for 2002 problem
dir="/share/Multimedia/Camera"
cd "$dir"
FOLDER="$(printf '%s\n' "${PWD##*/}")"
OUTPUT="$(exiftool -overwrite_original -r '-CreateDate<DateTimeOriginal' -if '$CreateDate eq "2002:12:08 12:00:00"' -if '$DateTimeOriginal ne $CreateDate' *.[Jj][Pp][Gg])"
/sbin/notice_log_tool -a "Exiftool cronjob completed on ${FOLDER}: ${OUTPUT}" --severity=5
exit 0
To do that I played with the $OUTPUT variable using | tail -1, but I am probably making some basic error, because I receive something like:
Exiftool cronjob completed on Camera: 4512 files failed condition | tail -1
How can I do this the right way? Thanks
Put the tail inside the capturing parens.
OUTPUT=$(exif ... | tail -1)
You don't need the double quotes here. I'm guessing that you tried
OUTPUT="$(exif ...) | tail -1"
This is probably an old post to be answering now, but try using the -n flag (see tail --help) and wrap the command output in backticks.
OUTPUT=`exif ... | tail -n 1`
(user464502's answer did not work for me as the tail command does not recognize the parameter "-1")
While trying to run the following in Terminal in Mac OS Lion, rather than getting the first line as output, I simply get the output from xpath.
curl -s http://wordsmith.org/awad/rss1.xml | xpath //item/description | sed q
Outputs:
Found 1 nodes:
-- NODE --
<description>...</description>
Instead of:
Found 1 nodes:
Why is sed not able to process the output from xpath? What am I missing?
I don't have Mac OS but I can guess your problem. If I do the equivalent under Linux I get the following output:
$ curl -s http://wordsmith.org/awad/rss1.xml | xpath -e "//item/description" | sed q
Found 1 nodes in stdin:
-- NODE --
<description>Ending life for humane reasons, such as to avoid pain from an incurable condition.</description>
That's because part of the output is going to stdout and part is going to stderr. So if I redirect everything to stdout, I get this,
$ curl -s http://wordsmith.org/awad/rss1.xml | xpath -e "//item/description" 2>&1 | sed q
Found 1 nodes in stdin:
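Conversely, if what you actually want from the pipeline is only the node itself, without the diagnostic lines, you can discard stderr instead of merging it (same -e syntax as above, which may differ on OS X):
curl -s http://wordsmith.org/awad/rss1.xml | xpath -e "//item/description" 2>/dev/null | sed q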
I do not have the exact answer, but I have come up against this exact problem, although I was using awk rather than sed. The solution was setting the -q flag. Also, you forgot the -e flag to identify the expression; this might have something to do with me being on Ubuntu and you being on OS X, but my output was the same.
so what you want is
curl -s http://wordsmith.org/awad/rss1.xml | xpath -q -e //item/description | sed q
SYNOPSIS
xpath [-s suffix] [-p prefix] [-q] -e query [-e query] ... [file] ...
-q
Be quiet. Output only errors (and no separator) on stderr.
On OSX 10.7.4
I am not exactly sure what you wanted as output. I wanted to get rid of the STDERR messages ("Found X nodes ...") and only print the actual item (the actual title and description). Hopefully this helps.
> cat wordsmith.sh
#!/bin/bash
/usr/bin/curl -s http://wordsmith.org/awad/rss1.xml > file.xml
title=`xpath file.xml //item/title 2> /dev/null | sed 's/<[^>]*>//g'`
description=`xpath file.xml //item/description 2> /dev/null | sed 's/<[^>]*>//g'`
echo $title : $description
/bin/rm file.xml
> ./wordsmith.sh
versal : Universal; whole.
I have additional info that may provide more clarity to the question and even solve some problems, as it did mine. The errors encountered are partially related to the version of xpath that you are running.
The -q quiet flag is available on the version that I installed on my Ubuntu system via apt-get, but not on the versions installed on either OSX or RHEL. There are also slight syntax differences between the versions; for example, the order of the query and the input file is reversed.
But the most helpful part is that you can copy an Ubuntu-installed version to the other systems and it works fine with the rest of the already installed xpath library. You need to have xpath installed and can then just migrate the core xpath script (usually at /usr/bin/xpath). Then you can take advantage of the extremely helpful -q parameter and skip the sed/regex post processing.
If you don't have the -q flag on OS X, you could comment out those lines that print "-- NODE --" and "Found x nodes". Something like this:
murphy:~ pdurbin$ diff -u /usr/bin/xpath5.12.orig /usr/bin/xpath5.12
--- /usr/bin/xpath5.12.orig 2012-12-06 06:29:14.000000000 -0500
+++ /usr/bin/xpath5.12 2014-05-15 14:32:14.000000000 -0400
@@ -48,17 +48,18 @@
}
if ($nodes->size) {
- print STDERR "Found ", $nodes->size, " nodes:\n";
+ #print STDERR "Found ", $nodes->size, " nodes:\n";
foreach my $node ($nodes->get_nodelist) {
- print STDERR "-- NODE --\n";
+ #print STDERR "-- NODE --\n";
print $node->toString;
+ print "\n";
}
}
else {
print STDERR "No nodes found";
}
-print STDERR "\n";
+#print STDERR "\n";
exit;
murphy:~ pdurbin$
A lot late, but I recently had a similar problem in a bash script, in which I was trying to suppress the "Found # nodes:" line, followed by "-- NODE --" for every item returned, that appeared above the values the command was inserting into an array.
For example:
Found 9 nodes:
-- NODE --
-- NODE --
-- NODE --
-- NODE --
-- NODE --
-- NODE --
-- NODE --
-- NODE --
-- NODE --
Please select your option from the menu:
1) Option 1
2) Option 2
etc.
I fixed it by redirecting STDERR with 2>/dev/null on my xpath command. This eliminated the "Found # nodes" output and returned only the values I loaded into the array for the select menu. Hope this helps whoever stumbles across this in the future.
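Roughly, the pattern looked like the sketch below (the file name feed.xml and the query are placeholders; mapfile needs bash 4):
# the "Found # nodes:" / "-- NODE --" noise goes to stderr, so discard it
mapfile -t items < <(xpath -e '//item/title' feed.xml 2>/dev/null)
select opt in "${items[@]}"; do
    echo "You picked: $opt"
    break
done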
Let's say that during your workday you repeatedly encounter the following form of columnized output from some command in bash (in my case from executing svn st in my Rails working directory):
? changes.patch
M app/models/superman.rb
A app/models/superwoman.rb
In order to work with the output of your command - in this case the filenames - some sort of parsing is required so that the second column can be used as input for the next command.
What I've been doing is using awk to get at the second column, e.g. when I want to remove all files (not that that's a typical use case :), I would do:
svn st | awk '{print $2}' | xargs rm
Since I type this a lot, a natural question is: is there a shorter (thus cooler) way of accomplishing this in bash?
NOTE:
What I am asking is essentially a shell command question even though my concrete example is on my svn workflow. If you feel that workflow is silly and suggest an alternative approach, I probably won't vote you down, but others might, since the question here is really how to get the n-th column command output in bash, in the shortest manner possible. Thanks :)
You can use cut to access the second field:
cut -f2
Edit:
Sorry, didn't realise that SVN doesn't use tabs in its output, so that's a bit useless. You can tailor cut to the output but it's a bit fragile - something like cut -c 10- would work, but the exact value will depend on your setup.
Another option is something like: sed 's/.\s\+//'
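For example, the sed variant would slot into the pipeline like this (a sketch; \s requires GNU sed, and the exact column layout still depends on your svn version):
svn st | sed 's/.\s\+//' | xargs rm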
To accomplish the same thing as:
svn st | awk '{print $2}' | xargs rm
using only bash you can use:
svn st | while read a b; do rm "$b"; done
Granted, it's not shorter, but it's a bit more efficient and it handles whitespace in your filenames correctly.
I found myself in the same situation and ended up adding these aliases to my .profile file:
alias c1="awk '{print \$1}'"
alias c2="awk '{print \$2}'"
alias c3="awk '{print \$3}'"
alias c4="awk '{print \$4}'"
alias c5="awk '{print \$5}'"
alias c6="awk '{print \$6}'"
alias c7="awk '{print \$7}'"
alias c8="awk '{print \$8}'"
alias c9="awk '{print \$9}'"
Which allows me to write things like this:
svn st | c2 | xargs rm
Try the zsh. It supports global aliases, so you can define X in your .zshrc to be
alias -g X="| cut -d' ' -f2"
then you can do:
cat file X
You can take it one step further and define it for the nth column:
alias -g X2="| cut -d' ' -f2"
alias -g X1="| cut -d' ' -f1"
alias -g X3="| cut -d' ' -f3"
With these, cat file X2 will output the 2nd column of the file "file", and so on. You can do this for grep output or less output, too. This is very handy and a killer feature of the zsh.
You can go one step further and define D to be:
alias -g D="|xargs rm"
Now you can type:
cat file X1 D
to delete all files mentioned in the first column of file "file".
If you know bash, the zsh is not much of a change except for some new features.
HTH Chris
Because you seem to be unfamiliar with scripts, here is an example.
#!/bin/sh
# usage: svn st | x 2 | xargs rm
col=$1
shift
awk -v col="$col" '{print $col}' "${#--}"
If you save this in ~/bin/x and make sure ~/bin is in your PATH (now that is something you can and should put in your .bashrc), you have the shortest possible command for generally extracting column n: x n.
The script should do proper error checking and bail if invoked with a non-numeric argument or the incorrect number of arguments, etc; but expanding on this bare-bones essential version will be in unit 102.
Maybe you will want to extend the script to allow a different column delimiter. Awk by default parses input into fields on whitespace; to use a different delimiter, use -F ':' where : is the new delimiter. Implementing this as an option to the script makes it slightly longer, so I'm leaving that as an exercise for the reader.
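For reference, the flag on its own looks like this (a throwaway example, unrelated to svn):
# print the first field of each colon-separated line
awk -F ':' '{print $1}' /etc/passwd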
Usage
Given a file file:
1 2 3
4 5 6
You can either pass it via stdin (using a useless cat merely as a placeholder for something more useful):
$ cat file | sh script.sh 2
2
5
Or provide it as an argument to the script:
$ sh script.sh 2 file
2
5
Here, sh script.sh is assuming that the script is saved as script.sh in the current directory; if you save it with a more useful name somewhere in your PATH and mark it executable, as in the instructions above, obviously use the useful name instead (and no sh).
It looks like you already have a solution. To make things easier, why not just put your command in a bash script (with a short name) and just run that instead of typing out that 'long' command every time?
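For example, a minimal wrapper script (the name svnrm and its location are just hypothetical):
#!/bin/sh
# ~/bin/svnrm: remove every file listed in the second column of svn st
svn st | awk '{print $2}' | xargs rm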
If you are ok with manually selecting the column, you could be very fast using pick:
svn st | pick | xargs rm
Just go to any cell of the 2nd column, press c and then hit enter
Note that the file path does not have to be in the second column of svn st output. For example, if you modify a file and also modify its property, the path will be in the third column.
See possible output examples in:
svn help st
Example output:
M wc/bar.c
A + wc/qax.c
I suggest cutting the first 8 characters:
svn st | cut -c8- | while read FILE; do echo whatever with "$FILE"; done
If you want to be 100% sure, and deal with fancy filenames (with whitespace at the end, for example), you need to parse the XML output:
svn st --xml | grep -o 'path=".*"' | sed 's/^path="//; s/"$//'
Of course you may want to use some real XML parser instead of grep/sed.
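For instance, a sketch with xmlstarlet (assuming it is installed and accepts - for standard input; the XPath targets the path attribute of each entry element in svn's XML status output):
svn st --xml | xmlstarlet sel -t -m '//entry' -v '@path' -n -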
I am trying to automate the copying of changed files into a Perforce changelist, but need help getting the generated changelist number. I assume this is probably a straightforward thing for bash scripting - but I'm just not getting it yet!...
Basically I execute the command
p4 change -o | sed 's/<enter description here>/This is my description./' | p4 change -i
As a result of this, I get output on the screen something like the line below (obviously the number changes):
Change 44152 created.
What I want is to be able to capture the generated number into a variable that I can then use in the rest of my script (to add future files to the same changelist, etc.)...
Can anyone please advise?
Thanks
Like this:
change=`p4 change -o | sed 's/<enter description here>/This is my description./' | p4 change -i | cut -d ' ' -f2`
echo $change
EDIT: Per @Enigma's last comment:
If you want to use a shell variable in the sed command, use double quotes "" instead of single quotes '' around the sed expression, like below:
sed "s/<enter description here>/ updating $change form/"
Results in "updating 44152 form" ($change holds value 44152)
You can capture the output of a command with the ` character.
myVariable=`myCommand`
You can use awk to get the 2nd column of data, the number part.
myVariable=`originalCommand|awk '{print $2}'`
Now myVariable will be your number, 44152.
You could use cut. Here is another related Stack Overflow entry:
use space as a delimiter with cut command
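Applied to the sample output, that would be something like (a sketch; the delimiter is a single space):
echo "Change 44152 created." | cut -d ' ' -f2    # prints 44152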
I would use something like this
echo "Change 44152 created." | tr -d 'a-zA-Z .'
If you want to get the last generated changelist, you can also type
variable=`p4 counter change`
but this will only work if no one else made a changelist after you made yours.