I am trying to eliminate everything before and after the JSON contained in a specific part of a webpage so I can send that to a PHP script. I've tried a number of ways to get rid of the container content but all of them so far have failed, including one method that has worked in the exact same syntax for related purposes:
I need to remove the characters between the pairs of asterisks (**) at the beginning and end:
**var songs = [**{"timestamp":1555176393000,"title":"Enter Sandman","trackId":"ba_5cbb546d-5c1c-490e-9908-761b89dd5166","artist":"Metallica","artistId":"52_65f4f0c5-ef9e-490c-aee3-909e7ae6b2ab","album":"Metallica","albumId":"d0_6e729716-c0eb-3f50-a740-96ac173be50d","npe_id":"3cc5fe24d0ffcbb9152d861f27ae801660"},{"timestamp":1555176702000,"title":"Start Me Up","trackId":"76_d0b86399-11e5-4d11-b4fe-ce4b3f9a4736","artist":"The Rolling Stones","artistId":"1b_b071f9fa-14b0-4217-8e97-eb41da73f598","album":"Tattoo You","albumId":"d1_778b345b-e8a1-4054-b5ba-c611d3fda421","npe_id":"f0dc0ab12ef99a6e0087cad12886509b7b"},{"timestamp":1555176909000,"title":"Fame","trackId":"4e_cdef4b88-7314-431a-9cdd-d457296a65b7","artist":"David Bowie","artistId":"ab_5441c29d-3602-4898-b1a1-b77fa23b8e50","album":"Best of Bowie","albumId":"21_3709ee5a-d087-370f-afb4-f730092c7a94","npe_id":"2b8b3a170baa77125891d72a0474d3343a"},{"timestamp":1555177158000,"title":"Rocket","trackId":"34_aa5b9053-849e-4788-972f-7941303175b6","artist":"Def Leppard","artistId":"c1_7249b899-8db8-43e7-9e6e-22f1e736024e","album":"Hysteria","albumId":"06_de5cf055-d875-41f8-9261-89b11b7ff145","npe_id":"0d87b580f140a85feaebc7d77f75db2a3d"},{"timestamp":1555177826000,"title":"Mama, I'm Coming Home","trackId":"cb_e5b09171-9527-4d24-8ab6-1e922fdd66d3","artist":"Ozzy Osbourne","artistId":"4b_8aa5b65a-5b3c-4029-92bf-47a544356934","album":"No More Tears","albumId":"66_8f3d5a65-036c-3260-b9bb-36f1d0d80c11","npe_id":"6b766464fe945f275bf478192dcd33cfdc"},{"timestamp":1555178076000,"title":"Gold Dust Woman","trackId":"a4_ef8c1eca-f344-4bfb-82ea-763aa8aeaad9","artist":"Fleetwood Mac","artistId":"66_bd13909f-1c29-4c27-a874-d4aaf27c5b1a","album":"2010-01-08: The Rock Boat X, Lido Deck, Carnival Inspiration","albumId":"80_4f229af0-2afc-431d-87ff-f7f6af66268e","npe_id":"f6417d98fd1fefcca227d82a8ac9b84197"},{"timestamp":1555178363000,"title":"With or Without You","trackId":"79_6b9a509f-6907-4a6e-9345-2f12da09ba4b","artist":"U2","artistId":"26_a3cb23fc-acd3-4ce0-8f36-1e5aa6a18432","album":"The Joshua Tree","albumId":"0c_d287c703-5c25-3181-85d4-4d8c1a7d8ecd","npe_id":"23b19420196b28e2156ecda87c11b882e0"},{"timestamp":1555178654000,"title":"Who Are You","trackId":"7d_431b9746-c6ec-489d-9199-c83676171ae8","artist":"The Who","artistId":"22_f2fa2f0c-b6d7-4d09-be35-910c110bb342","album":"Who Are You","albumId":"40_b255da2c-6583-35f9-95e3-ef5f9c14e868","npe_id":"e01896f74f24968bb7727eaafbf6250b8f"},{"timestamp":1555179031000,"title":"Authority Song","trackId":"31_f5ff19f7-95f3-4a22-8996-3788c264e0b8","artist":"John Mellencamp","artistId":"4d_0aad6b52-fd93-4ea4-9c5d-1f66e1bc9f0a","album":"Words & Music: John Mellencamp's Greatest Hits","albumId":"9e_1240c510-7015-4484-baac-ce17f5277ea1","npe_id":"244785e3b1d75effb9fdecbb6df76b009f"},{"timestamp":1555179256000,"title":"Touch Me","trackId":"9d_1dd1f86c-2120-45f3-ac9f-3c87257fe414","artist":"The Doors","artistId":"13_9efff43b-3b29-4082-824e-bc82f646f93d","album":"The Soft Parade","albumId":"db_c29d7552-b5df-42b8-aae7-03d1e250cb3a","npe_id":"1b5d155eb2eeee6fc1fdb50a94b100669c"}]**; <ol class="songs tracks"></ol>**
Here is the shell script which produces the above at present:
#!/bin/sh
curl -v --silent http://player.listenlive.co/41851/en/songhistory >/var/tmp/wklh$1.a.txt
pta=`cat /var/tmp/wklh$1.a.txt | grep songs > /var/tmp/wklh$1.b.txt`
ptb=`cat /var/tmp/wklh$1.b.txt | sed -n -e '/var songs = /,/; <span title/ p' > /var/tmp/wklh$1.c.txt`
ptc=`cat /var/tmp/wklh$1.c.txt | grep songs > /var/tmp/wklh$1.d.txt`
#ptd=`cat /var/tmp/wklh$1.d.txt | sed -i 's/var songs = [//g' /var/tmp/wklh$1.d.txt`
#ptd=`cat /var/tmp/wklh$1.d.txt | sed -i 's/}]; <ol class="songs tracks"></ol>//g' /var/tmp/wklh$1.d.txt`
json=`cat /var/tmp/wklh$1.d.txt`
echo $json
metadata=`php /etc/asterisk/scripts/music/wklh.php $json`
echo $metadata
The commented-out lines are what I was trying to use to remove the extraneous content, since it is predictable every time. However, when they are uncommented, I get the following errors:
sed: -e expression #1, char 18: unterminated `s' command
sed: -e expression #1, char 38: unknown option to `s'
I've examined my sed statement, but I can't find any discrepancies between how I use it here and in other working shell scripts.
Is there actually a syntax error here (or disallowed characters)? Or is there a better way to do this?
Your shell script has serious issues.
The syntax
variable=`commands`
takes the output of commands and assigns it to variable. But in every case, you are redirecting all output to a file; so the variable will always be empty.
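For example (a minimal sketch; the file name is only for illustration):
foo=`echo hello >/var/tmp/demo.txt`
echo "foo is '$foo'"    # prints: foo is ''  -- the output went into the file, not the variable
cat /var/tmp/demo.txt   # prints: hello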
Unless you need the temporary files for reasons which are not revealed in your question (such as maybe being able to check how many bytes of output you got in each temporary file for a monitoring report, or something like that), a pipeline would be much superior.
#!/bin/sh
curl -v --silent http://player.listenlive.co/41851/en/songhistory |
grep songs |
sed -n -e '/var songs = /,/; <span title/ p' |
grep songs |
php /etc/asterisk/scripts/music/wklh.php
This also does away with the useless uses of cat and echo, and so coincidentally removes the quoting errors as well. The pattern grep x | sed -n 's/y/z/p' is itself a useless use of grep, which can easily be refactored to sed -n '/x/s/y/z/p'.
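Applied to your script, the whole thing could collapse further to something like this (a sketch, not tested against the live page, and assuming the JSON sits on one line exactly as shown in the question):
curl --silent http://player.listenlive.co/41851/en/songhistory |
sed -n '/var songs = /s/.*var songs = \(\[.*\]\);.*/\1/p' |
php /etc/asterisk/scripts/music/wklh.php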
Square brackets are special to sed. Simply escape them.
s/var songs = \[//g
If you use slash / as the regex delimiter, it becomes special. Either escape it or use a different delimiter.
s/}]; <ol class="songs tracks"><\/ol>//g
s|}]; <ol class="songs tracks"></ol>||g
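Applied to the two commented-out lines from the question, the fixes could look like this (a sketch; since sed -i edits the file in place, the leading cat is dropped):
sed -i 's/var songs = \[//g' /var/tmp/wklh$1.d.txt
sed -i 's|}]; <ol class="songs tracks"></ol>||g' /var/tmp/wklh$1.d.txt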
If your data is in the 'd' file, try GNU sed:
sed -Ez 's/^\*\*[^\*]+\*\*(.+)]\*\*[^\*]+\*\*\s*$/\1/' d
Remove the last ] too, to correctly balance the JSON.
There is a website that I am a part of and I wanted to get the information out of the site on a daily basis. The page looks like this:
User1 added User2.
User40 added user3.
User13 added user71
User47 added user461
so on..
There's no JSON end point to get the information and parse it. So I have to wget the page and clean up the HTML:
User1 added user2
Is it possible to clean this up even though the username always changes?
I would divide that problem into two:
How to clean up your HTML
Yes it is possible to use grep directly, but I would recommend using a standard tool to convert HTML to text before using grep. I can think of two (html2text is a conversion utility, and w3m is actually a text browser), but there are more:
wget -O - http://www.stackoverflow.com/ | html2text | grep "How.*\?"
w3m http://www.stackoverflow.com/ | grep "How.*\?"
These examples will get the homepage of StackOverflow and display all questions found on that page starting with How and ending with ? (it displays about 20 such lines for me, but YMMV depending on your settings).
How to extract only the desired strings
Concerning your username, you can just tune your expression to match different users (-E is necessary due to the extended regular expression syntax, -o will make grep print only the matching part(s) of each line):
[...] | grep -o -E ".ser[0-9]+ added .ser[0-9]+"
This however assumes that users always have a name matching .ser[0-9]+. You may want to use a more general pattern like this one:
[...] | grep -o -E "[[:graph:]]+[[:space:]]+added[[:space:]]+[[:graph:]]+"
This pattern will match added surrounded by any two other words, delimited by an arbitrary number of whitespace characters. Or simpler (assuming that a word may contain everything but blank, and the words are delimited by exactly one blank):
[...] | grep -o -E "[^ ]+ added [^ ]+"
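Putting both halves together, the whole thing could be something like this (a sketch; the URL is a placeholder, since the question does not give the real page address):
wget -O - http://www.example.com/members | html2text | grep -o -E "[^ ]+ added [^ ]+"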
Do you intend to just strip away the HTML tags?
Then try this:
sed 's/<[^>]*>//g' infile >outfile
For some reason I cannot get this to output just the version number from this line. I suspect it has something to do with how grep interprets the dash.
This command:
admin@DEV:~/TEMP$ sendemail
Yields the following:
sendemail-1.56 by Brandon Zehm
More output below omitted
The first line is of interest. I'm trying to store the version in a variable.
TESTVAR=$(sendemail | grep '\s1.56\s')
Does anyone see what I am doing wrong? Thanks
TESTVAR is just empty. Even without TESTVAR, the output is empty.
I just tried the following too, thinking this might work.
sendemail | grep '\<1.56\>'
I just tried it again while editing, and I think I have another issue. Perhaps I'm not handling the output correctly. It's outputting the entire line, but I can see that grep is finding 1.56 because it highlights it in the line.
$ TESTVAR=$(echo 'sendemail-1.56 by Brandon Zehm' | grep -Eo '1.56')
$ echo $TESTVAR
1.56
The point is grep -Eo '1.56'
from grep man page:
-E, --extended-regexp
Interpret PATTERN as an extended regular expression (ERE, see below). (-E is specified by POSIX.)
-o, --only-matching
Print only the matched (non-empty) parts of a matching line, with each such part on a separate output
line.
Your regular expression doesn't match the form of the version. You have specified that the version is surrounded by spaces, yet in front of it you have a dash.
Replace the first \s with the capitalized form \S, or with an explicit set of characters, and it should work.
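For example (a minimal sketch of that one-character change; note that without -o it still captures the entire matching line, so combine it with -o or with the sed approach below):
TESTVAR=$(sendemail | grep '\S1.56\s')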
I'm wondering: In your example you seem to know the version (since you grep for it), so you could just assign the version string to the variable. I assume that you want to obtain any (unknown) version string there. The regular expression for this in sed could be (using POSIX character classes):
sendemail |sed -n -r '1 s/sendemail-([[:digit:]]+\.[[:digit:]]+).*/\1/ p'
The -n suppresses the normal default output of every line; -r enables extended regular expressions; the leading 1 tells sed to only work on line 1 (I assume the version appears in the first line). I anchored the version number to the telltale string sendemail- so that potential other numbers elsewhere in that line are not matched. If the program name changes or the hyphen goes away in future versions, this wouldn't match any longer though.
Both the grep solution above and this one have the disadvantage to read the whole output which (as emails go these days) may be long. In addition, grep would find all other lines in the program's output which contain the pattern (if it's indeed emails, somebody might discuss this problem in them, with examples!). If it's indeed the first line, piping through head -1 first would be efficient and prudent.
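The head -1 variant might look like this (a sketch, reusing the variable name from your question):
TESTVAR=$(sendemail | head -1 | sed -n -r 's/sendemail-([[:digit:]]+\.[[:digit:]]+).*/\1/p')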
jayadevan@jayadevan-Vostro-2520:~$ echo $sendmail
sendemail-1.56 by Brandon Zehm
jayadevan@jayadevan-Vostro-2520:~$ echo $sendmail | cut -f2 -d "-" | cut -f1 -d" "
1.56
I've never used sed apart from the few hours I've spent trying to solve this. I have a config file with parameters like:
test.us.param=value
test.eu.param=value
prod.us.param=value
prod.eu.param=value
I need to parse these and output this if REGIONID is US:
test.param=value
prod.param=value
Any help on how to do this (with sed or otherwise) would be great.
This works for me:
sed -n 's/\.us\././p'
i.e. if the ".us." can be replaced by a dot, print the result.
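For example, assuming the parameters are in a file called params.conf (the file name is just for illustration):
$ sed -n 's/\.us\././p' params.conf
test.param=value
prod.param=value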
If there are hundreds and hundreds of lines, it might be more efficient to first search for lines containing .us. and then do the string replacement. AWK is another good choice, or you can pipe grep into sed:
cat INPUT_FILE | grep "\.us\." | sed 's/\.us\./\./g'
Of course, if '.us.' can appear in the value, this isn't sufficient.
You could also do this with the address syntax (technically you can embed the second sed into the first statement as well; I just can't remember the syntax):
sed -n '/\(prod\|test\).us.[^=]*=/p' FILE | sed 's/\.us\./\./g'
We should probably do something cleaner. If the format is always environment.region.param we could look at forcing this only to occur on the text PRIOR to the equal sign.
sed -n 's/^\([^,]*\)\.us\.\([^=]*\)=/\1.\2=/p'
This will only work on lines starting with any number of characters, followed by '.', then 'us', then '.', and then any number of characters prior to the '=' sign. This way we won't accidentally modify a '.us.' found within a value.
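A quick check of that command against the sample data (a sketch):
$ printf 'test.us.param=value\nprod.eu.param=value\n' | sed -n 's/^\([^,]*\)\.us\.\([^=]*\)=/\1.\2=/p'
test.param=value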
I need to extract .co.uk URLs from a file with lots of entries, some .com, .us, etc. I need only the .co.uk ones. Is there any way to do that?
P.S.: I'm learning bash.
edit:
code sample:
32
<tr><td id="Table_td" align="center">23<a name="23"></a></td><td id="Table_td"><input type="text" value="http://www.ultraguia.co.uk/motets.php?pg=2" size="57" readonly="true" style="border: none"></td>
note some repeat
important: i need all links, broken or 404 too
I found this code somewhere on the net:
cat file.html | tr " " "\n" | grep .co.uk
output:
href="http://www.domain1.co.uk/"
value="http://www.domain1.co.uk/"
href="http://www.domain2.co.uk/"
value="http://www.domain2.co.uk/"
I think I'm close.
Thanks!
The following approach uses a real HTML engine to parse your HTML, and will thus be more reliable faced with CDATA sections or other syntax which is hard to parse:
links -dump http://www.google.co.uk/ -html-numbered-links 1 -anonymous \
| tac \
| sed -e '/^Links:/,$ d' \
-e 's/[0-9]\+.[[:space:]]//' \
| grep '^http://[^/]\+[.]co[.]uk'
It works as follows:
links (a text-based web browser) actually retrieves the site.
Using -dump causes the rendered page to be emitted to stdout.
Using -html-numbered-links requests a numbered table of links.
Using -anonymous tweaks defaults for added security.
tac reverses the output from links line by line
sed -e '/^Links:/,$ d' deletes everything after (pre-reversal, before) the table of links, ensuring that actual page content can't be misparsed
sed -e 's/[0-9]\+.[[:space:]]//' removes the numbered headings from the individual links.
grep '^http://[^/]\+[.]co[.]uk' finds only those links with their host parts ending in .co.uk.
One way using awk:
awk -F "[ \"]" '{ for (i = 1; i<=NF; i++) if ($i ~ /\.co\.uk/) print $i }' file.html
output:
http://www.mysite.co.uk/
http://www.ultraguia.co.uk/motets.php?pg=2
http://www.ultraguia.co.uk/motets.php?pg=2
If you are only interested in unique urls, pipe the output into sort -u
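For example:
awk -F "[ \"]" '{ for (i = 1; i<=NF; i++) if ($i ~ /\.co\.uk/) print $i }' file.html | sort -u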
HTH
Since there is no answer yet, I can provide you with an ugly but robust solution. You can exploit the wget command to grab the URLs in your file. Normally, wget is used to download from those URLs, but by denying wget the time for its DNS lookup, it will not resolve anything and will just print the URLs. You can then grep for those URLs that have .co.uk in them. The whole story becomes:
wget --force-html --input-file=yourFile.html --dns-timeout=0.001 --bind-address=127.0.0.1 2>&1 | grep -e "^\-\-.*\\.co\\.uk/.*"
If you want to get rid of the remaining timestamp information on each line, you can pipe the output through sed, as in | sed 's/.*-- //'.
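So the complete pipeline, with the timestamps stripped, would be something like:
wget --force-html --input-file=yourFile.html --dns-timeout=0.001 --bind-address=127.0.0.1 2>&1 | grep -e "^\-\-.*\\.co\\.uk/.*" | sed 's/.*-- //'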
If you do not have wget, then you can get it here