Is it possible to download a file from a server that uses HTTPS + SSO (Single Sign-On) from the command line (on Linux)?
The Single Sign-On system runs on Shibboleth.
SOLVED!!
wget --save-cookies sso.cookie --keep-session-cookies --header="Referer: https://serverCheckPoint/" 'https://serverCheckPoint/Shibboleth.sso/Login?target=https://serverCheckPoint/path_of_the_file_to_read'
curl -b sso.cookie -c 2sso.cookie -L -k -f -s -S https://IDP_SERVER/path_of_login_page --data "USER=yourUser&password=YOURPASSWORD" -o localfile.html
wget -v --load-cookies 2sso.cookie --save-cookies auth2.cookie --keep-session-cookies https://serverCheckPoint/path_of_data/DATA_to_DOWNLOAD
The files sso.cookie, 2sso.cookie and auth2.cookie are used to store the session and the SAML token.
If there are problems with certificates, you may need to disable TLS certificate verification (--no-check-certificate for wget, -k for curl).
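For reference, a minimal sketch tying the three steps together in one script; the hostnames, paths and form field names (USER, password) are placeholders carried over from the commands above and will differ for your IdP:

#!/usr/bin/env bash
# Shibboleth SSO download flow; adapt all placeholders to your environment.
SP="https://serverCheckPoint"
IDP="https://IDP_SERVER/path_of_login_page"
TARGET="path_of_data/DATA_to_DOWNLOAD"

# 1) Hit the SP's Shibboleth login endpoint to obtain the initial session cookies.
wget --save-cookies sso.cookie --keep-session-cookies \
     --header="Referer: $SP/" \
     "$SP/Shibboleth.sso/Login?target=$SP/$TARGET"

# 2) Authenticate against the IdP; the session/SAML cookies land in 2sso.cookie.
curl -b sso.cookie -c 2sso.cookie -L -k -f -s -S \
     --data "USER=yourUser&password=YOURPASSWORD" \
     -o localfile.html "$IDP"

# 3) Fetch the protected file with the authenticated session.
wget -v --load-cookies 2sso.cookie --save-cookies auth2.cookie \
     --keep-session-cookies "$SP/$TARGET"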
I ran into a problem when trying to use curl with the --netrc-file option in my bash script. When I just put
curl -d "username=MYUSR&password=MYPSWD" https://st-machinexxx/api -c cookies.txt
then it works fine. But
curl --netrc-file configfile.txt https://st-machinexxx/api -c cookies.txt
causes an HTTP 401 error. What could be the reason? I tried setting the authentication method by adding --digest, --negotiate and --ntlm, as well as setting some headers, but that didn't help. I am using curl 7.29.0, and configfile.txt contains just three lines:
machine st-machinexxx
login MYUSR
password MYPSWD
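One likely explanation, offered as a guess: the -d variant posts the credentials as form data, which this API evidently accepts, while --netrc-file supplies them as an HTTP Basic Authorization header, which the server may reject with 401. A quick way to check, using only the URL from the question:

# -v prints the outgoing headers; with --netrc-file you should see an
# "Authorization: Basic ..." header that the form-data variant does not send:
curl -v --netrc-file configfile.txt https://st-machinexxx/api -c cookies.txt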
I am trying to run a command to download 3000 files in parallel. I am using Cygwin + Windows.
Downloading a single file via wget in the terminal:
wget --no-check-certificate --content-disposition --load-cookies cookies.txt -p https://username:password@website.webpage.com/folder/document/download/1?type=file
allows me to download the file with ID 1 individually, in the correct format (as long as --content-disposition is in the command).
I iterate over this REST API call to download the entire folder (3000 files). This works OK, but is quite slow.
FOR /L %i in (0,1,3000) do wget --no-check-certificate --content-disposition --load-cookies cookies.txt -p https://username:password@website.webpage.com/folder/document/download/%i?type=file
Now I am trying to run the program in Cygwin, in parallel.
seq 3000 | parallel -j 200 wget --no-check-certificate --content-disposition --load-cookies cookies.txt -p 'https://username:password@website.webpage.com/folder/document/download/{}?type=file'
It runs, but the file name and format are lost (instead of "index.html", for example, we may get "4#type=file" as the file name).
Is there a way for me to fix this?
It is unclear what you would like them named. Let us assume you want them named: index.[1-3000].html
seq 3000 | parallel -j 200 wget -O index.{}.html --no-check-certificate --content-disposition --load-cookies cookies.txt -p 'https://username:password@website.webpage.com/folder/document/download/{}?type=file'
My guess is that it is caused by --content-disposition being experimental, and the wget used by Cygwin may be older than the wget used by the FOR loop. To check that, run:
wget --version
in Cygwin and outside Cygwin (i.e. where you would run the FOR loop).
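If the versions do differ, one workaround sketch (switching tool from wget to curl, with the same placeholder credentials and cookie file) is curl's -J/--remote-header-name flag, which names the saved file after the server's Content-Disposition header:

# Files are saved in the current directory under their server-provided names;
# -k skips certificate checks like --no-check-certificate, and -b reads the
# same Netscape-format cookie file:
seq 3000 | parallel -j 200 curl -k -s -b cookies.txt -O -J 'https://username:password@website.webpage.com/folder/document/download/{}?type=file'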
I am trying to download a file from a remote server using curl:
curl -u username:password -O https://remoteserver/filename.txt
In my case a file filename.txt is getting created, but its content just says "virtual user logged in"; it is not the actual file. I am not sure why this is happening. Any help on why the download is not working would be appreciated.
Try this in the terminal:
curl -u username:password -o filedownload.txt https://remoteserver/filename.txt
The -o option saves the downloaded content to filedownload.txt in the current working directory.
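If the output is still a login banner rather than the file, inspecting the exchange usually shows why; a minimal check, assuming only the URL and credentials from the question:

# -v prints request and response headers, revealing e.g. an auth challenge
# or a redirect to a login page instead of the file itself:
curl -v -u username:password -o filedownload.txt https://remoteserver/filename.txt
# If the server answers with a 301/302 redirect, -L makes curl follow it:
curl -L -u username:password -o filedownload.txt https://remoteserver/filename.txt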
How do you print received cookie info to stdout with curl?
According to the man page, if you use '-' as the file name for the -c/--cookie-jar option, it should print the cookies to stdout. The problem is I get an error:
curl: option -: is unknown
An example of the command I am running:
curl -c --cookie-jar - 'http://google.com'
You get that error because you are using the option the wrong way. When you see an option in a man page written like:
-c, --cookie-jar <file name>
it means that if you want to use that option, you must use either -c or --cookie-jar, never both. The two forms are equivalent; in fact, -c is the short form of --cookie-jar. Many, many options in man pages are designed the same way.
In your case:
curl -c - 'http://google.com'
In your original command, --cookie-jar is taken as the argument of the -c option, so it is interpreted as a file name, not as an option (as you may have thought), and - is left standing alone, which leads to the error because curl, indeed, has no such option.
Remove the "-c"
curl --cookie-jar - 'http://google.com'
You can also try verbose mode to see the cookie headers:
curl -v 'http://google.com'
You can save the cookies received and send them back to the server using the following commands:
1) To get/save the cookies to file "/tmp/cookies.txt":
curl -c /tmp/cookies.txt http://the.site.with.cookies/
2) To send the cookies back to the server (again using file "/tmp/cookies.txt"):
curl -b /tmp/cookies.txt http://the.site.with.cookies/
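The two options can also be combined in a single call; a small sketch with the same placeholder URL, reading and updating one cookie file:

# -b sends the stored cookies, -c writes back any cookies the server sets,
# so /tmp/cookies.txt stays current across calls:
curl -b /tmp/cookies.txt -c /tmp/cookies.txt http://the.site.with.cookies/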
You need to use two options to get only the cookie text on stdout:
--cookie-jar <file name> from the man page:
If you set the file name to a single dash, '-', the cookies will be written to stdout.
--output <file> from the man page:
Write output to <file> instead of stdout.
Set it to /dev/null to throw it away.
--silent is also helpful.
Putting it all together:
curl --silent --output /dev/null --cookie-jar - 'http://www.google.com/'
Output:
# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_.google.com TRUE / FALSE 1512524163 NID 105=DownH33BKZnCsWJeGvsIC5cKRi7CPT3K3QjfUB-4js5xGw6P_6svMqU1yKlKOEu4XwL_TdddZlcMITefFGOtCCyzJNhO_7E9UMNpbQHja40IAerYP5Bwj-FhY1m35mZdvkVSmrg1pZPvH96IkVVVVVVVV
My use case: Test that your website uses the HttpOnly cookie setting, per the OWASP recommendation:
curl --silent --output /dev/null --cookie-jar - 'http://www.google.com/' | grep HttpOnly
When I use a bash script to upload files to Dropbox it works fine, but when I run the command manually on the command line it does not work.
I'm thinking it might be the & in the URL, but I'm not sure.
Bash code:
CURL_BIN="/usr/bin/curl"
#Note: This option explicitly allows curl to perform "insecure" SSL connections and transfers.
#CURL_ACCEPT_CERTIFICATES="-k"
CURL_PARAMETERS="--progress-bar"
APPKEY="zrwv8z3bycfk3m8"
OAUTH_ACCESS_TOKEN="aaaaaaaa"
APPSECRET="aaaaaaaaaa"
OAUTH_ACCESS_TOKEN_SECRET="aaaaaaaaa"
ACCESS_LEVEL="dropbox"
API_UPLOAD_URL="https://api-content.dropbox.com/1/files_put"
RESPONSE_FILE="temp2.txt"
FILE_SRC="temp.txt"
$CURL_BIN $CURL_ACCEPT_CERTIFICATES $CURL_PARAMETERS -v -i -o "$RESPONSE_FILE" --upload-file "$FILE_SRC" "$API_UPLOAD_URL/$ACCESS_LEVEL/$FILE_DST?oauth_consumer_key=$APPKEY&oauth_token=$OAUTH_ACCESS_TOKEN&oauth_signature_method=PLAINTEXT&oauth_signature=$APPSECRET%26$OAUTH_ACCESS_TOKEN_SECRET"
Manual code:
curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa
The solution is to wrap the URL in quotation marks so that the shell does not interpret the & characters:
curl --insecure --progress-bar -v -i -o temp2.txt --upload-file temp.txt "https://api-content.dropbox.com/1/files_put/dropbox/attachments/temp.txt?oauth_consumer_key=aaaaaaaaaa&oauth_token=aaaaaaaaa&oauth_signature_method=PLAINTEXT&oauth_signature=aaaaaaaaa%26aaaaaaaaaa"
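For completeness, this is why the unquoted version misbehaves; echo stands in for curl here just to make the shell's parsing visible (example.com is a placeholder):

# An unquoted & terminates the command and runs it in the background, so the
# URL is cut off at the first & and the remaining oauth_*=... pieces are
# parsed as separate shell commands / variable assignments:
echo https://example.com/upload?a=1&b=2
# the shell sees:
#   echo https://example.com/upload?a=1 &    (backgrounded)
#   b=2                                      (variable assignment)
# Quoting keeps the URL intact:
echo 'https://example.com/upload?a=1&b=2'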