I found this script for authenticating with CAS to obtain a protected URL from the command line, using curl and bash (see code below). However, I could not get the script to work. I have verified that I am able to extract the LoginTicket and the JSESSIONID, and that I provide the right username and password. Yet the CAS server does not react, even though I have verified I am giving it all the right info. It just returns the login page again and again, without any error messages.
Is this script still a viable way of doing this? Or do I need to use the CAS REST API if I want to get a valid CAS ticket from the command line nowadays?
# Taken from https://gist.github.com/dodok1/4134605
# Usage: cas-get.sh {url} {username} {password} # If you have any errors try removing the redirects to get more information
# The service to be called, and a url-encoded version (the url encoding isn't perfect, if you're encoding complex stuff you may wish to replace with a different method)
DEST="$1"
ENCODED_DEST=`echo "$DEST" | perl -p -e 's/([^A-Za-z0-9])/sprintf("%%%02X", ord($1))/seg' | sed 's/%2E/./g' | sed 's/%0A//g'`
#IP Addresses or hostnames are fine here
CAS_HOSTNAME=team.eea.sk
#Authentication details. This script only supports username/password login, but curl can handle certificate login if required
USERNAME=$2
PASSWORD=$3
#Temporary files used by curl to store cookies and http headers
COOKIE_JAR=.cookieJar
HEADER_DUMP_DEST=.headers
rm -f $COOKIE_JAR
rm -f $HEADER_DUMP_DEST
#The script itself is below
#Visit CAS and get a login form. This includes a unique ID for the form, which we will store in CAS_ID and attach to our form submission. jsessionid cookie will be set here
CAS_ID=`curl -s -k -c $COOKIE_JAR https://$CAS_HOSTNAME/cas/login?service=$ENCODED_DEST | grep name=.lt | sed 's/.*value..//' | sed 's/\".*//'`
#Submit the login form, using the cookies saved in the cookie jar and the form submission ID just extracted. We keep the headers from this request as the return value should be a 302 including a "ticket" param which we'll need in the next request
curl -s -k --data "username=$USERNAME&password=$PASSWORD&lt=$CAS_ID&_eventId=submit" -i -b $COOKIE_JAR -c $COOKIE_JAR https://$CAS_HOSTNAME/cas/login?service=$ENCODED_DEST -D $HEADER_DUMP_DEST -o /dev/null
#Linux may not need this line, but on OSX the response from the previous call came back with Windows-style linebreaks
#dos2unix $HEADER_DUMP_DEST > /dev/null
#Visit the URL with the ticket param to finally set the casprivacy and, more importantly, MOD_AUTH_CAS cookie. Now we've got a MOD_AUTH_CAS cookie, anything we do in this session will pass straight through CAS
CURL_DEST=`grep Location $HEADER_DUMP_DEST | sed 's/Location: //'`
curl -s -k -b $COOKIE_JAR -c $COOKIE_JAR $CURL_DEST
#If our destination is not a GET we'll need to do a GET to, say, the user dashboard here
#Visit the place we actually wanted to go to
curl -s -k -b $COOKIE_JAR "$DEST"
You might try extracting the "execution" value like you do the "lt" value and including it in the second curl call.
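For example, a minimal sketch of that change, assuming the login form carries a hidden "execution" input alongside "lt" (this mirrors the existing CAS_ID extraction; the exact markup may differ between CAS versions):
#Fetch the login page once and pull out both hidden form fields
LOGIN_PAGE=`curl -s -k -c $COOKIE_JAR "https://$CAS_HOSTNAME/cas/login?service=$ENCODED_DEST"`
CAS_ID=`echo "$LOGIN_PAGE" | grep name=.lt | sed 's/.*value..//' | sed 's/\".*//'`
EXECUTION=`echo "$LOGIN_PAGE" | grep name=.execution | sed 's/.*value..//' | sed 's/\".*//'`
#Submit the form with execution included alongside lt
curl -s -k --data "username=$USERNAME&password=$PASSWORD&lt=$CAS_ID&execution=$EXECUTION&_eventId=submit" -i -b $COOKIE_JAR -c $COOKIE_JAR https://$CAS_HOSTNAME/cas/login?service=$ENCODED_DEST -D $HEADER_DUMP_DEST -o /dev/null
As for the REST API question: if the REST module is enabled on your server, that is also a viable command-line route. A rough sketch, assuming the standard /cas/v1/tickets endpoint:
#POST credentials; the TGT URL comes back in the Location header
TGT_URL=`curl -s -k -d "username=$USERNAME&password=$PASSWORD" -D - -o /dev/null https://$CAS_HOSTNAME/cas/v1/tickets | grep Location | sed 's/Location: //' | tr -d '\r'`
#POST the service to the TGT URL; the response body is the service ticket
ST=`curl -s -k -d "service=$ENCODED_DEST" "$TGT_URL"`
curl -s -k "$DEST?ticket=$ST"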
I made a script that downloads several files located in my professional OneDrive. The script works perfectly from a French computer and a US computer, but it does not work from a Japanese computer.
To help you understand the problem, I will detail the program:
1- I establish the token system (inspired by Jay Lee's detailed answer) and retrieve the token in the access_token variable.
2- To download the file, in my case I cannot use
curl -w %{time_total} https://graph.microsoft.com/v1.0/me/drive/items/01M...WU/content -H "Authorization: Bearer $access_token"
Thus, this is how I proceed:
#I get the item properties
itemProperties=$(curl ${ODf1Mb} -H "Authorization: Bearer $access_token")
#In these properties I select the downloadUrl that will permit me to download the file
downloadUrl=$(echo -e "$itemProperties" | grep "#microsoft.graph.downloadUrl" | awk -F'[",]' '{ print $9 }')
#Finally I execute this URL storing the download time in a variable (I do all this stuff for this)
dload=$(curl -w %{time_total} ${downloadUrl} -H "Authorization: Bearer $access_token")
As I said at the beginning, it works on the French and US computers but not on the Japanese machine. I do get the itemProperties and the downloadUrl, but when I call the downloadUrl with curl it seems that it cannot reach the server, because I get this:
As you can see, we do not even get the total size to be downloaded. As an element of comparison, this is the result on a French machine:
I know there is a warning about command substitution, but I haven't tried to fix it yet because it does its job.
Note -> the downloadUrl has this format:
https://lpl-my.sharepoint.com/personal/{user}_{company infra domain}_com/_layouts/15/download.aspx?
I just cannot figure out what the problem is. I can access https://lpl-my.sharepoint.com through the browser, so I don't think the server's IP is banned.
Check your ping / traceroute to see if lpl-my.sharepoint.com resolves to the same network location.
Also, I have seen other folks run curl with -v to get verbose traces and see what the difference is.
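For instance, a quick diagnostic you could run on both a working machine and the Japanese one, then diff the results (a sketch; downloadUrl is the variable from the script above):
#Compare DNS resolution and routing from each machine
ping -c 3 lpl-my.sharepoint.com
traceroute lpl-my.sharepoint.com
#Capture curl's verbose trace (DNS, TLS handshake, redirects, headers) for comparison
curl -v -o /dev/null "$downloadUrl" 2> curl-trace.txt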
I've run into a strange issue. I'm trying to script my router to collect usage stats and other stuff. I make one curl call to the auth URL to get a valid session id, then another using that session id to fetch the page I need.
Here is my script:
SESSION_ID=$(curl --silent -D - -X POST http://10.0.0.1/login.cgi -d'admin_username=admin&admin_password=admin' | grep 'SESSION' | sed 's/Set-Cookie: SESSION=//' | sed 's/; path=\///')
echo $SESSION_ID # 1234567890
curl -v -H "Cookie: SESSION=$SESSION_ID" http://10.0.0.1/modemstatus_dslstatus.html
If I manually take the session id and insert it in place of $SESSION_ID, everything is dandy. curl shows the headers (via -v) and they are correct; running the command with the session id manually inserted produces identical headers.
I'm sure it's something small. Please teach me something :)
Check for carriage returns \r in your variables which wouldn't appear with a simple echo in some cases.
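For example, a minimal sketch of the fix: HTTP header lines end in \r\n, so the captured cookie value carries an invisible trailing \r that corrupts the next request. Stripping it with tr:
SESSION_ID=$(curl --silent -D - -X POST http://10.0.0.1/login.cgi -d'admin_username=admin&admin_password=admin' | grep 'SESSION' | sed 's/Set-Cookie: SESSION=//' | sed 's/; path=\///' | tr -d '\r')
curl -v -H "Cookie: SESSION=$SESSION_ID" http://10.0.0.1/modemstatus_dslstatus.html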
For example, say I want to issue a POST request to a server, but the website requires logging in with a username and password first. How should I do these two operations?
If it requires some programmatic username and password built into the web page, you'd need to submit what it expects for a user logging in, then capture the cookies you get, and then send those cookies back with your POST. This can get involved if the login process spans multiple pages with redirects. curl can do this, but be prepared to spend some time on it.
To get the cookie being returned by the server, use curl -i to include headers. You can also add -L to automatically follow redirects (which you otherwise would have to do manually by retrieving the URI in the Location: field of an HTTP 301 or 302 response). Example:
curl -i -L stackoverflow.com > /tmp/so.html
grep -i 'Set-Cookie:' /tmp/so.html
Yields:
Set-Cookie: prov=31c24327-c0bf-474d-b504-fc97dc69ab61; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
(Until you get the logic right and know how you need to submit the requests, you'll need to inspect the rest of the headers to be able to accommodate redirects, see if there are multiple cookies, etc.)
To submit a cookie, use curl -b:
curl -b "prov=31c24327-c0bf-474d-b504-fc97dc69ab61" [rest of curl command]
Be patient and good luck, and be sure to check the curl man page.
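Putting it together, a rough sketch of the whole login-then-POST flow using a cookie jar (the login URL and form field names here are made-up placeholders; inspect the site's actual login form for the real ones):
# Step 1: submit the login form and save whatever cookies the server sets
curl -c cookies.txt --data "user=myname&pass=mypassword" http://yourwebpage.com/login
# Step 2: send the real POST, replaying the saved cookies
curl -b cookies.txt --data "name1=value1&name2=value2" http://yourwebpage.com/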
Alternatively, if the site accepts HTTP Basic authentication rather than a form-based login, a single command handles both steps:
curl -u username:password -X POST --data "name1=value1&name2=value2" http://yourwebpage.com/
I'm attempting to use the new incremental authorization for an installed app in order to add scopes to an existing authorization while keeping the existing scopes. This is done using the new include_granted_scopes=true parameter. However, no matter what I've tried, the re-authorization always overwrites the scopes completely. Here's a minimal Bash PoC script I've written to demo my issue:
client_id='716905662885.apps.googleusercontent.com' # throw away client_id (non-prod)
client_secret='CMVqIy_iQqBEMlzjYffdYM8A' # not really a secret
redirect_uri='urn:ietf:wg:oauth:2.0:oob'
while :
do
echo "Please enter a list of scopes (space separated) or CTRL+C to quit:"
read scope
# Form the request URL
# http://goo.gl/U0uKEb
auth_url="https://accounts.google.com/o/oauth2/auth?scope=$scope&redirect_uri=$redirect_uri&response_type=code&client_id=$client_id&approval_prompt=force&include_granted_scopes=true"
echo "Please go to:"
echo
echo "$auth_url"
echo
echo "after accepting, enter the code you are given:"
read auth_code
# swap authorization code for access token
# http://goo.gl/Mu9E5J
auth_result=$(curl -s https://accounts.google.com/o/oauth2/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d code=$auth_code \
-d client_id=$client_id \
-d client_secret=$client_secret \
-d redirect_uri=$redirect_uri \
-d grant_type=authorization_code)
access_token=$(echo -e "$auth_result" | \
grep -Po '"access_token" *: *.*?[^\\]",' | \
awk -F'"' '{ print $4 }')
echo
echo "Got an access token of:"
echo $access_token
echo
# Show information about our access token
info_result=$(curl -s --get https://www.googleapis.com/oauth2/v2/tokeninfo \
-H "Content-Type: application/json" \
-d access_token=$access_token)
current_scopes=$(echo -e "$info_result" | \
grep -Po '"scope" *: *.*?[^\\]",' | \
awk -F'"' '{ print $4 }')
echo "Our access token now allows the following scopes:"
echo $current_scopes | tr " " "\n"
echo
echo "Let's add some more!"
echo
done
The script simply performs OAuth authorization and then prints out the scopes the token is currently authorized to use. In theory it should continue to add scopes each time through, but in practice the list of scopes gets overwritten each time. The idea would be: on the first run you use a minimal scope such as email, and on the next run you tack on something more, like read-only calendar (https://www.googleapis.com/auth/calendar.readonly). Each time, the user should only be prompted to authorize the currently requested scopes, but the resulting token should be good for all scopes, including those authorized on previous runs.
I've tried with a fresh client_id/secret and the results are the same. I know I could just include the already authorized scopes again but that prompts the user for all of the scopes, even those already granted and we all know the longer the list of scopes, the less likely the user is to accept.
UPDATE: during further testing, I noticed that the permissions for my app do show the combined scopes of each incremental authorization. I tried waiting 30 seconds or so after the incremental auth, then grabbing a new access token with the refresh token but that access token is still limited to the scopes of the last authorization, not the combined scope list.
UPDATE 2: I've also toyed around with keeping the original refresh token. The refresh token is only getting new access tokens that allow the original scopes, the incrementally added scopes are not included. So it seems effectively that include_granted_scopes=true is having no effect on the tokens, the old and new refresh tokens continue to work but only for their specified scopes. I cannot get a "combined scope" refresh or access token.
Google's OAuth 2.0 service does not support incremental auth for installed/native apps; it only works for the web server case. Their documentation is broken.
Try adding a complete list of scopes to the second request, where you exchange the authorization code for an access token. Strangely enough, the scope parameter doesn't seem to be documented, but it is present in requests generated by google-api-java-client. For example:
code=foo&grant_type=authorization_code
&redirect_uri=http%3A%2F%2Flocalhost%3A8080%2Fmyapp%2FoauthCallback
&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.profile+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.me+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fplus.stream.write
In the web server scenario, a complete list of granted scopes is returned together with authorization code when include_granted_scopes is set to true. This is another bit of information that seems to be missing from linked documentation.
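In terms of the script above, that would mean one extra parameter in the token-exchange curl call; a sketch (using --data-urlencode so the space-separated scope list is encoded properly):
auth_result=$(curl -s https://accounts.google.com/o/oauth2/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d code=$auth_code \
-d client_id=$client_id \
-d client_secret=$client_secret \
-d redirect_uri=$redirect_uri \
--data-urlencode "scope=$scope" \
-d grant_type=authorization_code)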
Edit 1: Including a complete list of scopes in the code-exchange request works for us in our Java app, but I have just tried your original script with no modification (except for the client id/secret) and it works just fine (I have only edited the ids and tokens below):
$ bash tokens.sh
Please enter a list of scopes (space separated) or CTRL+C to quit:
https://www.googleapis.com/auth/userinfo.profile
Please go to:
https://accounts.google.com/o/oauth2/auth?scope=https://www.googleapis.com/auth/userinfo.profile&redirect_uri=urn:ietf:wg:oauth:2.0:oob&response_type=code&client_id=189044568151-4bs2mcotfi2i3k6qp7vq8c6kbmkp2rf8.apps.googleusercontent.com&approval_prompt=force&include_granted_scopes=true
after accepting, enter the code you are given:
4/4qXGQ6Pt5QNYqdEuOudzY5G0ogru.kv_pt5Hlwq8UYKs_1NgQtlUFsAJ_iQI
Got an access token of:
ya29.1.AADtN_XIt8uUZ_zGZEZk7l9KuNQl9omr2FRXYAqf67QF92KqfvXliYQ54ffg_3E
Our access token now allows the following scopes:
https://www.googleapis.com/auth/userinfo.profile
https://www.googleapis.com/auth/userinfo.email
https://www.googleapis.com/auth/plus.me
https://www.googleapis.com/auth/plus.circles.read
You can see that the previously granted scopes are included...
I'm writing a Bash script to download image files from Snapito's web page snapshot API. The API can return a variety of responses indicated by different HTTP response codes and/or some custom headers. My script is intended to be run as an automated cron job that pulls URLs from a MySQL database and saves the screenshots to local disk.
I am using curl. I'd like to do these three things using a single curl command:
Extract the HTTP response code
Extract the headers
Save the file locally (if the request was successful)
I could do this using multiple curl requests, but I want to minimize the number of times I hit Snapito's servers. Any curl experts out there?
Or if someone has a Bash script that can respond to the full documented set of Snapito API responses, that'd be awesome. Here's their API documentation.
Thanks!
Use the dump-header option:
curl -D /tmp/headers.txt http://server.com
Use curl -i (include HTTP header) - which will yield the headers, followed by a blank line, followed by the content.
You can then split out the headers / content (or use -D to save directly to file, as suggested above).
There are three options: -i, -I, and -D.
> curl --help | egrep '^ +\-[iID]'
-D, --dump-header FILE Write the headers to FILE
-I, --head Show document info only
-i, --include Include protocol headers in the output (H/F)
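Combining these with curl's -w option, here is one sketch that captures all three in a single request ($API_URL stands in for the Snapito endpoint and is a placeholder):
# One request: body saved to a file, headers dumped to a file, status code captured
http_code=$(curl -s -w '%{http_code}' -D headers.txt -o screenshot.png "$API_URL")
if [ "$http_code" = "200" ]; then
    echo "OK: image in screenshot.png, headers in headers.txt"
else
    echo "Request failed with HTTP $http_code; inspect headers.txt"
fi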