I have something like this (it replaces a special sequence in the request and then sends it with curl):
SPECIAL_SEQUENCE=My_value
sed -i -e "s|SPECIAL_SEQUENCE|$SPECIAL_SEQUENCE|g" file.txt
curl http://127.0.0.1:1478/ -X POST -d @file.txt
It works OK. However, the problem for me is that it leaves file.txt changed. I could undo the sed at the end of the script, but I'd rather not do that (because I often interrupt this script with Ctrl+C).
Can you give me some other ideas to deal with this? In other words, the final form of the request is only known while the script is executing.
Instead of modifying the file, modify it in the memory and pipe it to curl:
SPECIAL_SEQUENCE=My_value
sed -e "s|SPECIAL_SEQUENCE|$SPECIAL_SEQUENCE|g" file.txt | curl http://127.0.0.1:1478/ -X POST
I would like to run a find and replace on an HTML file through the command line.
My command looks something like this:
sed -e s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html > index.html
When I run this and look at the file afterward, it is empty. It deleted the contents of my file.
When I run this after restoring the file again:
sed -e s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html
The stdout is the contents of the file, and the find and replace has been executed.
Why is this happening?
When the shell sees > index.html in the command line it opens the file index.html for writing, wiping off all its previous contents.
To fix this you need to pass the -i option to sed so that it makes the changes in place; with a suffix, it also creates a backup of the original file before making the changes:
sed -i.bak s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html
Without the .bak, the command will fail on some platforms, such as macOS.
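If you don't want the backup lying around once the edit has succeeded, one option (a small sketch, not part of the original answer) is to delete it afterwards:

sed -i.bak 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' index.html && rm index.html.bak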
An alternative, useful pattern is:
sed -e 'script script' index.html > index.html.tmp && mv index.html.tmp index.html
That has much the same effect, without using the -i option, and additionally means that, if the sed script fails for some reason, the input file isn't clobbered. Further, if the edit is successful, there's no backup file left lying around. This sort of idiom can be useful in Makefiles.
Quite a lot of sed implementations have the -i option, but not all of them; POSIX sed is one that doesn't. If you're aiming for portability, it's therefore best avoided.
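If portability matters, the temporary-file idiom above can be wrapped in a small helper; this is only a sketch, and the function name sed_inplace is made up for illustration:

# Portable "in-place" edit without relying on sed -i.
sed_inplace() {
    script=$1
    file=$2
    sed -e "$script" "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

sed_inplace 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' index.html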
sed -i 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' index.html
This does a global in-place substitution on the file index.html. Quoting the string prevents problems with whitespace in the query and replacement.
Use sed's -i option, e.g.
sed -i.bak -e 's/STRING_TO_REPLACE/REPLACE_WITH/g' index.html
To change multiple files (saving a backup of each as *.bak):
perl -p -i.bak -e "s/\|/x/g" *
This will take all files in the current directory and replace every | with x.
This is called a “Perl pie” (easy as pie).
You should try using the option -i for in-place editing.
Warning: this is a dangerous method! It abuses the i/o buffers in linux and with specific options of buffering it manages to work on small files. It is an interesting curiosity. But don't use it for a real situation!
Besides the -i option of sed
you can use the tee utility.
From man:
tee - read from standard input and write to standard output and files
So, the solution would be:
sed s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html | tee | tee index.html
-- here the tee is repeated to make sure that the pipeline is buffered. Then all commands in the pipeline are blocked until they get some input to work on. Each command in the pipeline starts when the upstream commands have written 1 buffer of bytes (the size is defined somewhere) to the input of the command. So the last command tee index.html, which opens the file for writing and therefore empties it, runs after the upstream pipeline has finished and the output is in the buffer within the pipeline.
Most likely the following won't work:
sed s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html | tee index.html
-- it will run both commands of the pipeline at the same time without any blocking. (Without blocking the pipeline should pass the bytes line by line instead of buffer by buffer. Same as when you run cat | sed s/bar/GGG/. Without blocking it's more interactive and usually pipelines of just 2 commands run without buffering and blocking. Longer pipelines are buffered.) The tee index.html will open the file for writing and it will be emptied. However, if you turn the buffering always on, the second version will work too.
sed -i.bak "s#https.*\.com#$pub_url#g" MyHTMLFile.html
If you have a link to be added, try this. Search for the URL as above (starting with https and ending with .com here) and replace it with a URL string; I have used the variable $pub_url here. Here s means substitute and g means global replacement.
It works!
The problem with the command
sed 'code' file > file
is that file is truncated by the shell before sed actually gets to process it. As a result, you get an empty file.
The sed way to do this is to use -i to edit in place, as other answers suggested. However, this is not always what you want. -i will create a temporary file that will then be used to replace the original file. This is problematic if your original file was a link (the link will be replaced by a regular file). If you need to preserve links, you can use a temporary variable to store the output of sed before writing it back to the file, like this:
tmp=$(sed 'code' file); echo -n "$tmp" > file
Better yet, use printf instead of echo since echo is likely to process \\ as \ in some shells (e.g. dash):
tmp=$(sed 'code' file); printf "%s" "$tmp" > file
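One caveat worth noting (not part of the original answer): command substitution strips trailing newlines, so the file written back may lose its final newline. A minimal sketch that restores a single one, using a hypothetical notes.txt for illustration:

# hypothetical example: replace foo with bar in notes.txt, keeping a trailing newline
tmp=$(sed 's/foo/bar/g' notes.txt)
printf '%s\n' "$tmp" > notes.txt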
And the ed answer:
printf "%s\n" '1,$s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' w q | ed index.html
To reiterate what codaddict answered, the shell handles the redirection first, wiping out the index.html file, and then the shell invokes the sed command, passing it a now-empty file.
I was searching for the option where I can define a line range and found the answer. For example, I want to change host1 to host2 from lines 36 to 57.
sed '36,57 s/host1/host2/g' myfile.txt > myfile1.txt
You can use the gi option as well to ignore character case.
sed '30,40 s/version/story/gi' myfile.txt > myfile1.txt
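The same range syntax also works together with in-place editing; a sketch (the .bak suffix keeps a backup, and is required on BSD/macOS sed):

sed -i.bak '36,57 s/host1/host2/g' myfile.txt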
With all due respect to the above correct answers, it's always a good idea to "dry run" scripts like that, so that you don't corrupt your file and have to start again from scratch.
Just get your script to spill the output to the command line instead of writing it to the file, for example, like this:
sed -e s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g index.html
OR
less index.html | sed -e s/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g
This way you can see and check the output of the command without getting your file truncated.
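Another way to preview the effect is to diff the proposed output against the original; this sketch assumes bash process substitution is available:

diff index.html <(sed -e 's/STRING_TO_REPLACE/STRING_TO_REPLACE_IT/g' index.html)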
I'm trying to read a file, each line of which is a CVE ID. For each CVE, I want to make a curl to get its severity and then store that result in a new CSV file with the format cve-id,cve-severity.
Below is the script I'm using, which reads the IDs correctly, but doesn't make the curl call correctly. When I run this, it just outputs empty values for each curl call.
I've tried using backticks instead of the $(), but got the same result. What am I doing wrong here?
#!/bin/bash
filename="cves.csv"
while read line
do
echo "$line"
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
echo "$cve_result"
done < $filename
Also tried these variations, all with same (empty) result:
cve_result=$(curl -s "https://cve.circl.lu/api/cve/${line}")
cve_result=`curl -s "https://cve.circl.lu/api/cve/${line}"`
cve_result=$(curl -s "https://cve.circl.lu/api/cve/$line")
cve_result=`curl -s "https://cve.circl.lu/api/cve/$line"`
cve_result=$(curl -s https://cve.circl.lu/api/cve/$line)
cve_result=`curl -s https://cve.circl.lu/api/cve/$line`
Here is a sample of the CSV file:
CVE-2014-0114
CVE-2014-9970
CVE-2015-1832
CVE-2015-2080
CVE-2015-7521
Your code works for me (i.e., each curl call pulls down a bunch of data).
If I convert my (linux) file to contain windows/dos line endings (\r\n) then the curl calls don't generate anything.
At this point I'm guessing your input file has windows/dos line endings (you can verify by running head -2 cves.csv | od -c and you should see the sequence \r \n at the end of each line).
Assuming this is your issue then you need to remove the \r characters; a couple options:
dos2unix cves.csv - only have to run once as this will update the file
curl ... ${line//$'\r'/} - use parameter substitution to strip out the \r
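Putting it together, here is a minimal sketch of the loop with the carriage returns stripped and the severity extracted with jq; the .cvss field name is an assumption about the API response and may need adjusting:

#!/bin/bash
filename="cves.csv"
outfile="cve_severity.csv"

while read -r line; do
    cve=${line//$'\r'/}                      # strip any DOS carriage return
    severity=$(curl -s "https://cve.circl.lu/api/cve/${cve}" | jq -r '.cvss')   # .cvss is assumed
    echo "${cve},${severity}" >> "$outfile"
done < "$filename"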
I am currently attempting to make a script such that when I enter the name of a vulnerability, it will return the CVSS3 scores from Tenable.
So far my plan is:
Curl the page
Grep the content I want
Output the grepped CVSS3 score
When running my script, however, grep throws the following error:
~/Documents/Tools/Scripts ❯ ./CVSS3-Grabber.sh
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 30964 0 30964 0 0 28355 0 --:--:-- 0:00:01 --:--:-- 28355
grep: unrecognized option '-->Nessus<!--'
Usage: grep [OPTION]... PATTERNS [FILE]...
Try 'grep --help' for more information.
This has me very confused, as when I run this on the command line I curl the content to sample.txt and then use the exact same grep syntax:
grep $pagetext -e CVSS:3.0/E:./RL:./RC:.
it returns the content I need; however, when I run it via my script below...
#! /bin/bash
pagetext=$(curl https://www.tenable.com/plugins/nessus/64784)
cvss3_temporal=$(grep $pagetext -e CVSS:3.0/E:./RL:./RC:.)
echo $cvss3_temporal
I receive the errors above!
I believe this is because the '--' causes grep to treat part of the text as an option it doesn't recognize, hence the error. I have tried copying the output of the curl to a text file and then grepping that rather than grepping straight from the curl, but still no joy. Does anyone know of a method to get grep to ignore '--' or any flags when reading text? Or alternatively, can I configure curl so that it only brings back text and no symbols?
Thanks in advance!
You don't need to store the curl response in a variable (the unquoted $pagetext expands into many words, and grep tries to parse the ones starting with -- as options); just pipe grep after curl like this:
cvss3_temporal=$(curl -s https://www.tenable.com/plugins/nessus/64784 |
grep -F 'CVSS:3.0/E:./RL:./RC:.')
Note the use of -s in curl to suppress the progress meter and -F in grep to make sure you are searching for a fixed string.
Grep filters a given file, or standard input if none was given. In bash, you can use the <<< here-string syntax to send the variable content to grep's input:
grep -e 'CVSS:3.0/E:./RL:./RC:.' <<< "$pagetext"
Or, if you don't need the page anywhere else, you can pipe the output from curl directly to grep:
curl https://www.tenable.com/plugins/nessus/64784 | grep -e 'CVSS:3.0/E:./RL:./RC:.'
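If you only want the matched score rather than the whole line, grep -o prints just the matching part; a sketch using the pattern from the question:

curl -s https://www.tenable.com/plugins/nessus/64784 | grep -oE 'CVSS:3\.0/E:./RL:./RC:.'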
I'm trying to excerpt a bit of content from 2 text files and to send it as the body of an e-mail using the mailx program. I am trying to do this as a bash script, since I do have at least a limited amount of experience with creating simple bash scripts and so have a rudimentary knowledge in this area. That said, I am not opposed to entertaining other scripting options such as perl/python/whatever.
I've gotten partway to where I'd like to be using sed: sed -e '1,/excerpt delimiter 1/d' -e '/excerpt delimiter 2/,$d' file1.txt && sed -e '1,/excerpt delimiter one/d' -e '/excerpt delimiter two/,$d' file2.txt outputs to stdout the content I'm aiming to get into the e-mail body. But piping said content to mailx is not working, for reasons that are not entirely clear to me. That is to say that sed -e '1,/excerpt delimiter 1/d' -e '/excerpt delimiter 2/,$d' file1.txt && sed -e '1,/excerpt delimiter one/d' -e '/excerpt delimiter two/,$d' file2.txt | mail -s excerpts me@mymail.me does not send the output of both sed commands in the body of the e-mail: it only sends the output of the final sed command. I'm trying to understand why this is and to remedy matters by getting the output of both sed commands into the e-mail body.
Further background. The two text files contain many lines of text and are actually web page dumps I'm getting using lynx browser. I need just a block of a few lines from each of those files, so I'm using sed to delimit the blocks I need and to allow me to excise out those few lines from each file. My task might be easier and/or simpler if I were trying to excise from just one file rather than from two. But since the web pages with the content I'm after require entry of login credentials, and because I am trying to automate this process, I am using lynx's cmd_script option to first log in, then save (print-to-file, actually) the pages I need. lynx does not offer any way, so far as I can tell, to concatenate files, so I seem stuck with working with two separate files.
There must certainly be alternate ways of accomplishing my aim and I am not constrained, either by preference or by necessity, to use any particular utility. The only real constraint is, since I'm trying to automate this, that it be done as a script I can invoke as a cron job. I am using Linux and have at my disposal all the standard text manipulating tools. As may be clear, my scripting knowledge/abilities are quite limited, so I've been trying to accomplish what I'm aiming at using a one-liner. mailx is properly configured and working on this system.
The pipe only applies to the last command in the && list. You need to combine the two into a single compound command whose output is piped to mailx.
{ sed -e '1,/excerpt delimiter 1/d' \
-e '/excerpt delimiter 2/,$d' file1.txt &&
sed -e '1,/excerpt delimiter one/d' \
-e '/excerpt delimiter two/,$d' file2.txt ; } |
mail -s excerpts me@mymail.me
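An equivalent sketch uses a subshell instead of a brace group; it costs an extra process, but the syntax is a little more forgiving about semicolons and spaces:

( sed -e '1,/excerpt delimiter 1/d' -e '/excerpt delimiter 2/,$d' file1.txt
  sed -e '1,/excerpt delimiter one/d' -e '/excerpt delimiter two/,$d' file2.txt ) |
mail -s excerpts me@mymail.me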
I have the following shell script below, which can download the website into a variable. This is as far as I have got. What I would like to do is add input to this website (which accepts an IP address and outputs one's location) from the console when I execute the shell script with an argument (an IP address), so that it outputs the geographical location of that IP address. Please can anyone help?
#! /bin/bash
read input
content=$(wget http://freegeoip.net -q )
echo $content
Save the following as a file called getgeo:
#!/bin/bash
location=$(curl -s http://freegeoip.net/csv/$1)
echo $location
Then use it like this:
chmod +x getgeo
./getgeo 141.20.1.33
"141.20.1.33","DE","Germany","16","Berlin","Berlin","","52.5167","13.4000","",""
Or, if you just want the 5th and 3rd fields and no quotes, do this:
./getgeo 141.20.1.33 | tr -d '"' | awk -F, '{print $5,$3}'
Berlin Germany
Or, you can do the trimming inside the script itself:
#!/bin/bash
location=$(curl -s http://freegeoip.net/csv/$1)
echo $location | tr -d '"' | awk -F, '{print $5,$3}'
If you prefer parsing XML or JSON, you can change the /csv/ to /xml/ or /json/ and you will get the following:
<?xml version="1.0" encoding="UTF-8"?>
<Response>
  <Ip>92.238.99.46</Ip>
  <CountryCode>GB</CountryCode>
  <CountryName>United Kingdom</CountryName>
  <RegionCode>E6</RegionCode>
  <RegionName>Gloucestershire</RegionName>
  <City>Gloucester</City>
  <ZipCode>GL3</ZipCode>
  <Latitude>51.8456</Latitude>
  <Longitude>-2.1575</Longitude>
  <MetroCode></MetroCode>
  <AreaCode></AreaCode>
</Response>
or JSON
{"ip":"141.20.1.33","country_code":"DE","country_name":"Germany","region_code":"16","region_name":"Berlin","city":"Berlin","zipcode":"","latitude":52.5167,"longitude":13.4,"metro_code":"","areacode":""}
Notes:
The command tr -d '"' removes all double quotes from whatever it receives as input.
The -F, switch to awk says to use the comma as the field separator.
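If you prefer the JSON output, here is a sketch with jq (assuming jq is installed; the field names match the sample JSON shown above):

#!/bin/bash
# Print "City Country" from the JSON endpoint for the IP given as the first argument.
curl -s "http://freegeoip.net/json/$1" | jq -r '"\(.city) \(.country_name)"'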
You may want to use curl; go ahead and refer to the documentation on the following site:
http://curl.haxx.se/docs/httpscripting.html#GET
The simplest and most common request/operation made using HTTP is to get a URL. The URL could itself refer to a web page, an image or a file. The client issues a GET request to the server and receives the document it asked for. If you issue the command line
curl http://curl.haxx.se
you get a web page returned in your terminal window. The entire HTML document that that URL holds.
Therefore, you can achieve what you want by redirecting the output of curl freegeoip.net/{format}/{ip_or_hostname} to a file, and then grep the info you want from it.
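For example, a minimal sketch of that approach (the temporary file path and the grep pattern are just for illustration):

#!/bin/bash
# Save the JSON response to a file, then grep out the field you want.
curl -s "http://freegeoip.net/json/$1" -o /tmp/geo.json
grep -o '"country_name":"[^"]*"' /tmp/geo.json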