sha1sum into the URL of curl using GET method - bash

I'm writing a document for a curl request.
The user should enter an email and password, and I need to send the user's email and the sha1sum of the password via the GET method.
If I have it right, the command would look like:
curl 'http://example.com/auth?email={EMAIL}&pwd={SHA1SUM(PASSWORD)}'
I know about printf variable | sha1sum, but how do I concatenate its output into curl's quoted URL string?
Please note that it should stay a ONE line command.

You probably want a command substitution using $(). You should use double quotes on the URL in this case (assuming a $PASSWORD variable):
$ curl "http://example.com/auth?email={EMAIL}&pwd=$(echo "$PASSWORD" | sha1sum | sed -e 's|\w*-$||')"
(Added sed to remove the trailing - from the sha1sum output.)
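For completeness, a sketch of the same idea using curl's -G and --data-urlencode options so both parameters get URL-encoded automatically; it assumes EMAIL and PASSWORD are already set:
curl -G "http://example.com/auth" --data-urlencode "email=$EMAIL" --data-urlencode "pwd=$(printf '%s' "$PASSWORD" | sha1sum | awk '{print $1}')"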

Related

How do I get a string from a curl response for a variable in bash?

The bash script sends a curl request. An example curl response looks like this:
{"code":"2aaea70fdccd7ad11e4ee8e82ec26162","nonce":1541355854942}
I need to get the code "2aaea70fdccd7ad11e4ee8e82ec26162" (without quotes) and use it in the bash script.
Use jq to extract the value from the JSON, and command substitution to capture the output of the command:
code=$(curl ... | jq -r '.code')
The -r (--raw-output) option prints the string as-is instead of as a JSON-quoted string.
You can also achieve this with sed if you don't want to install jq:
json=$(curl ...)
code=$(echo "$json" | sed -nE 's/.*"code":"([^\"]*)",".*/\1/p')
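A minimal usage sketch of the jq approach (the endpoint URLs here are made up):
code=$(curl -s 'http://example.com/api' | jq -r '.code')
curl -s "http://example.com/confirm?code=$code"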

parse json using a bash script without external libraries

I have a fresh Ubuntu installation and I'm using a command that returns a JSON string. I would like to send this JSON string to an external API using curl. How do I parse something like {"foo":"bar"} to a URL like xxx.com?foo=bar using just the standard Ubuntu tools?
Try this
curl -s 'http://twitter.com/users/username.json' | sed -e 's/[{}]/''/g' | awk -v RS=',"' -F: '/^text/ {print $2}'
You could use tr -d '{}' instead of the sed command, but skipping that step entirely seems to have the desired effect as well.
If you want to strip off the outer quotes, pipe the result of the above through sed -e 's/^"//' -e 's/"$//'.
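If the goal is really to turn a small, flat JSON object into a query string, a rough sketch using only sed (it assumes string values and no nested objects, embedded commas, or escaped quotes):
json='{"foo":"bar","baz":"qux"}'
query=$(printf '%s' "$json" | sed -e 's/[{}"]//g' -e 's/:/=/g' -e 's/,/\&/g')
curl "http://xxx.com?$query"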

bash script grep using variable fails to find result that actually does exist

I have a bash script that iterates over a list of links, curls down an HTML page per link, greps for a particular string format (the syntax is CVE-####-####), removes the surrounding HTML tags (this is a consistent format, no special-case handling necessary), searches a changelog file for the resulting string ID, and finally does stuff based on whether the string ID was found or not.
The found string ID is set as a variable. The issue is that when grepping for the variable there are no results, even though I positively know there should be for some of the IDs. Here is the relevant portion of the script:
for link in $(cat links.txt); do
curl -s "$link" | grep 'CVE-' | sed 's/<[^>]*>//g' | while read cve; do
echo "$cve"
grep "$cve" ./changelog.txt
done
done
If I hardcode a known ID in the grep command, the script finds the ID and returns things as expected. I've tried many variations of grepping on this variable (e.g. exporting it and using command substitution, cat'ing the changelog and piping to grep, setting the variable directly via command substitution of the curl chain, single and double quotes surrounding variables, half a dozen other things).
Am I missing something nuanced with the output variable from the curl | grep | sed chain? When it is echoed to stdout or appended (>>) to a file, things look fine (a single ID with no odd characters or carriage returns etc.).
Any hints or alternate solutions would be much appreciated. Thanks!
FYI:
OSX: $ bash --version
GNU bash, version 3.2.57(1)-release (x86_64-apple-darwin14)
Edit:
The HTML file that I was curling was chock-full of carriage returns. Running the script with set -x was helpful because it revealed the true string being grepped: $'CVE-2011-2716\r'.
+ read -r link
+ curl -s http://localhost:8080/link1.html
+ sed -n '/CVE-/s/<[^>]*>//gp'
+ read -r cve
+ grep -q -F $'CVE-2011-2716\r' ./kernelChangelog.txt
Also, investigating from another angle, opening the curled file in vim showed ^M, and running printf %s "$cve" | xxd also showed the carriage-return hex code 0d appended to the grepped variable. Relying on echoed stdout was the wrong way of diagnosing things. Writing a simple HTML page with a valid CVE-####-####, but then adding a carriage return (in vim insert mode just type ctrl-v ctrl-m to insert the carriage return), will create a sample file that fails with the original script snippet above.
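For example, a quick way to reproduce and spot the problem (the file name and CVE value here are just illustrations):
printf '<td>CVE-2011-2716</td>\r\n' > sample.html
sed -n '/CVE-/s/<[^>]*>//gp' sample.html | xxd
The xxd output ends in 0d 0a: the carriage return survives the tag stripping and ends up in the grepped variable.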
This is pretty standard string sanitization stuff that I should have figured out. The solution is to remove carriage returns, piping to tr -d '\r' is one method of doing that. I'm not sure there is a specific duplicate on SO for this series of steps, but in any case here is my now working script:
while read -r link; do
curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | tr -d '\r' | while read -r cve; do
if grep -q -F "$cve" ./changelog.txt; then
echo "FOUND: $cve";
else
echo "NOT FOUND: $cve";
fi;
done
done < links.txt
HTML files can contain carriage returns at the ends of lines; you need to filter those out.
curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | tr -d '\r' | while read cve; do
Notice that there's no need to use grep; you can use a regular-expression filter in the sed command. (sed could also strip the carriage returns itself, but doing that for \r is cumbersome, so I piped to tr instead.)
It should look like this:
# First: Care about quoting your variables!
# Use read to read the file line by line
while read -r link ; do
# No grep required. sed can do that.
curl -s "$link" | sed -n '/CVE-/s/<[^>]*>//gp' | while read -r cve; do
echo "$cve"
# grep -F searches for fixed strings instead of patterns
grep -F "$cve" ./changelog.txt
done
done < links.txt
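As a side note on the quoting advice: a quick illustration of why the while read -r loop is safer than for link in $(cat ...) (the file name and contents here are made up):
printf 'http://example.com/a b.html\n' > demo.txt
for link in $(cat demo.txt); do echo "[$link]"; done    # word splitting breaks the line into two pieces
while read -r link; do echo "[$link]"; done < demo.txt  # keeps the whole line intact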

Extract value via OSX Terminal from .html for "curl" submission within a single script

How do I extract the variable value of the following line of an HTML page via Terminal, to submit it afterwards via "curl -d" in the same script?
<input type="hidden" name="au_pxytimetag" value="1234567890">
Edit: how do I transfer the extracted value to the "curl -d" command within a single script? Might be a silly question, but I'm a total noob. =0)
EDITED:
I cannot tell from your question what you are actually trying to do. I originally thought you were trying to extract a variable from a file, but it seems you actually want to, firstly, get that file; secondly, extract a variable; and thirdly, use that variable for something else... so let's address each of those steps:
Firstly you want to grab a page using curl, so you will do
curl www.some.where.com
and the page will be output on your terminal. But actually you want to search for something on that page, so you need to do
curl www.some.where.com | awk something
or
curl www.some.where.com | grep something
But you want to put that into a variable, so you need to do
var=$(curl www.some.where.com | awk something)
or
var=$(curl www.some.where.com | grep something)
The actual command I think you want is
var=$(curl www.some.where.com | awk -F\" '/au_pxytimetag/{print $(NF-1)}')
Then you want to use the variable var for another curl operation, so you will need to do
curl -d "param1=$var" http://some.url.com/somewhere
Original answer
I'd use awk like this:
var=$(awk -F\" '/au_pxytimetag/{print $(NF-1)}' yourfile)
to take the second-to-last field on the line containing au_pxytimetag, using " as the field separator.
Then you can use it like this
curl -d "param1=$var&param2=SomethingElse" http://some.url.com/somewhere
You can use xmllint:
value=$(xmllint --html --xpath "string(//input[@name='au_pxytimetag']/@value)" index.html)
You can do it with my Xidel:
xidel http://webpage -e "//input[@name='au_pxytimetag']/@value"
But you do not need to.
With
xidel http://webpage -f "(//form)[1]" -e "//what-you-need-from-the-next-page"
you can send all values from the first form on the webpage to the form action, and then you can query something from the next page.
You can try:
grep au_pxytimetag input.html | sed "s/.* value=\"\(.*\)\".*/\1/"
EDIT:
If you need this in a script:
#!/bin/bash
DATA=$(grep au_pxytimetag input.html | sed "s/.* value=\"\(.*\)\".*/\1/")
curl http://example.com -d "$DATA"
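If the extracted value may contain characters that need URL encoding, curl's --data-urlencode is a safer variant (the parameter name here is assumed from the HTML snippet above):
curl http://example.com --data-urlencode "au_pxytimetag=$DATA"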

How to check UID exists using ldapsearch

I would like to have a shell script use 'ldapsearch' to compare UIDs listed in a text file with those on a remote LDAP directory.
I'm no shell script expert, and would appreciate any assistance. The following loops through a text file given as an argument, but what I need is to echo when a UID in my text file does not exist in the LDAP.
#!/bin/sh
for i in `cat $1`;
do ldapsearch -x -H ldaps://ldap-66.example.com -b ou=People,dc=crm,dc=example,dc=com uid=$i | grep uid: | awk '{print $2}';
echo $i
done
Try:
#!/bin/sh
url="ldaps://ldap-66.example.com"
basedn="ou=People,dc=crm,dc=example,dc=com"
for i in `cat $1`; do
if ldapsearch -x -H "$url" -b "$basedn" uid=$i uid > /dev/null
then
# Do nothing
true
else
echo $i
fi
done
Anttix, this is the final version that worked. Thanks again. Does this mark the question as accepted?
#!/bin/sh
url="ldaps://ldap-66.example.com"
basedn="ou=People,dc=crm,dc=example,dc=com"
for i in `cat $1`; do
if ldapsearch -x -H "$url" -b "$basedn" uid=$i | grep uid: | awk '{print $2}' > /dev/null
then
# Do nothing
true
else
echo $i
fi
done
#!/bin/sh
uids="${1:?missing expected file argument}"
URL='ldaps://ldap-66.example.com'
BASE='ou=People,dc=crm,dc=example,dc=com'
ldapsearch \
-x -LLL -o ldif-wrap=no -c \
-H "$URL" \
-b "$BASE" \
-f "$uids" \
'(uid=%s)' uid |
sed \
-r -n \
-e 's/^uid: //p' \
-e 's/^uid:: (.+)/echo "\1"|base64 -d/ep' |
grep \
-F -x -v \
-f - \
"$uids"
Explanation
for the ldapsearch part
You can invoke ldapsearch -f just once to perform multiple searches and read them from a file $uids.
The (uid=%s) is used as a filter template, where the %s is replaced with the contents of each line.
The -c is needed to continue even on errors, e.g. when a UID is not found.
The -x enforces simple authentication, which should be considered insecure; better to use SASL.
The -LLL will remove unneeded cruft from the ldapsearch output.
The -o ldif-wrap=no will prevent lines longer than 79 characters from being wrapped - otherwise grep may only pick up the first part of your user names.
The uid tells ldapsearch to only return that attribute and skip all the other attributes we're not interested in; saves some network bandwidth and processing time.
for the sed part
The -r enables extended regular expressions, turning + and (…) into operators; otherwise they would have to be prefixed with a backslash \.
The -n tells sed not to print each line by default, but only when told to do so by an explicit print command p.
The s/uid: // will strip that prefix from lines containing it.
The s/uid:: …/…/ep part is needed for handling non-ASCII characters like umlauts, as ldapsearch encodes them using base64 (see the small example after this list).
The suffix …p will print only those lines which were substituted.
for the grep part
The -F tells grep to not use regular-expression pattern matching, but plain text; this becomes important if your names contain ., *, parentheses, ….
The -x enforces whole line matching so searching for foo will not also match foobar.
The -f - tells grep to read the patterns from STDIN, that is the existing UIDs found by ldapsearch.
The -v inverts the search and will filter out those existing users.
The $uids file is read again as the list of all requested users. After removing the existing users, the list of missing users remains.
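As a quick illustration of the base64 handling (GNU sed's e flag executes the substituted pattern space as a shell command; the sample LDIF line and name are made up):
printf 'uid:: bcO8bGxlcg==\n' | sed -r -n -e 's/^uid:: (.+)/echo "\1"|base64 -d/ep'
This prints müller, the decoded UID.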
Issues with previous solutions
The previous solutions all had certain kinds of issues:
missing grep
missing quoting which breaks as soon as blanks are involved
useless use of cat
useless use of grep when awk is also used
inefficient because for each UID many processes get fork()ed
inefficient because for each UID a separate search is performed
did not work for long user names
did not work for user names containing regular expression meta characters
did not work for user names containing Umlauts
