How do you print received cookie info to stdout with curl?
According to the man page, if you use '-' as the file name for the -c, --cookie-jar option, it should print the cookies to stdout. The problem is that I get an error:
curl: option -: is unknown
An example of the command I am running:
curl -c --cookie-jar - 'http://google.com'
You get that error because you are using that option the wrong way. When you see an option in a man page written like:
-c, --cookie-jar <file name>
it means that if you want to use the option, you must use either -c OR --cookie-jar, never both. The two forms are equivalent; -c is simply the short form of --cookie-jar. Many, many options in man pages are documented this way.
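For example, both spellings below do exactly the same thing (cookies.txt and example.com are just placeholders):
# both commands save the received cookies to cookies.txt
curl -c cookies.txt 'http://example.com/'
curl --cookie-jar cookies.txt 'http://example.com/'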
In your case:
curl -c - 'http://google.com'
In your command, --cookie-jar is taken as the argument to the -c option, so it is interpreted as a file name rather than as an option (as you may have thought), and the - is left on its own, which leads to the error because curl indeed has no option named -.
Alternatively, remove the -c and use the long form:
curl --cookie-jar - 'http://google.com'
You can also try verbose mode to see the cookie headers:
curl -v 'http://google.com'
You can save the cookies received and send them back to the server using the following commands:
1) To get/save the cookies to file "/tmp/cookies.txt":
curl -c /tmp/cookies.txt http://the.site.with.cookies/
2) To send the cookies back to the server (again using file "/tmp/cookies.txt"):
curl -b /tmp/cookies.txt http://the.site.with.cookies/
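As a rough sketch of the two steps combined, for a site that sets a session cookie at login (the /login and /account paths and the form fields are purely hypothetical):
# 1) log in and save whatever cookies the server sets (hypothetical login form)
curl -c /tmp/cookies.txt -d 'user=me&pass=secret' http://the.site.with.cookies/login
# 2) send those cookies back on a later request to the same site
curl -b /tmp/cookies.txt http://the.site.with.cookies/account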
I hope it was useful.
You need to use two options to get only the cookie text on stdout:
--cookie-jar <file name> from the man page:
If you set the file name to a single dash, '-', the cookies will be written to stdout.
--output <file> from the man page:
Write output to <file> instead of stdout.
Set it to /dev/null to throw it away.
--silent is also helpful.
Putting it all together:
curl --silent --output /dev/null --cookie-jar - 'http://www.google.com/'
Output:
# Netscape HTTP Cookie File
# https://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
#HttpOnly_.google.com TRUE / FALSE 1512524163 NID 105=DownH33BKZnCsWJeGvsIC5cKRi7CPT3K3QjfUB-4js5xGw6P_6svMqU1yKlKOEu4XwL_TdddZlcMITefFGOtCCyzJNhO_7E9UMNpbQHja40IAerYP5Bwj-FhY1m35mZdvkVSmrg1pZPvH96IkVVVVVVVV
My use case: Test that your website uses the HttpOnly cookie setting, per the OWASP recommendation:
curl --silent --output /dev/null --cookie-jar - 'http://www.google.com/' | grep HttpOnly
I know a similar question was posted, but I can't get it to work on my machine.
I tried the 1st answer from the mentioned question, i.e. response=$(curl --write-out %{http_code} --silent --output /dev/null servername) and when I echo $response I got 000 [Not sure if that is the desired output].
However, when trying to do so with my cURL command, I get no output.
This is my command:
curl -k --silent --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt
and I use it with
x=$(curl -k --silent --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt)
but when I try to echo $x all I get is a newline...
I know the cURL is failing, because when I run the same command, without --silent, I get curl: (7) Couldn't connect to server
This question is tagged with both sh and bash because I've tried it in both, with the same results.
I found this option which kind of helps (but I still don't know how to assign it to a variable, which should be easier than this...):
--stderr <file>
Redirect all writes to stderr to the specified file instead. If the file name is a plain '-', it is instead written to stdout.
If this option is used several times, the last one will be used.
When I use it like this:
curl -k --silent -S --stderr my_err_file --ftp-pasv --ftp-ssl --user C:is_for_cookies --cert localcert_cert.pem --key certs/localcert_pkey.pem ftps://10.10.10.10:21/my_file.txt
I can see the errors (i.e. curl: (7) Couldn't connect to server) inside that file.
I used --silent to suppress all output, -S to un-suppress the errors, and --stderr <file> to redirect them to a file.
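Building on that, one way to get the error text into a shell variable (just a sketch; the long FTPS options are replaced by a placeholder URL here) is to point --stderr at stdout with - and capture it via command substitution:
# -S re-enables error messages that --silent hides,
# --stderr - sends them to stdout, so $( ) can capture them;
# on success the variable holds the downloaded content instead
x=$(curl -k --silent -S --stderr - ftps://10.10.10.10:21/my_file.txt)
echo "curl said: $x"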
In PHP or Python, for example, you can do a curl request and receive an object with all its properties organized.
In bash I have to invoke curl application.
I need to get the status code and the content with the same command (which makes sense, because the status code is tied to whether getting the content succeeded or not). But how can I do this?
I am trying this:
# get url in function
urlcurl=$(echo $#)
# create new file descriptor and redirect to STDOUT
exec 3>&1
# get curl status code and store in curlstatuscode
curlstatuscode=$(curl -L -k -w "%{http_code}" -o >(cat >&3) --silent "${urlcurl}")
My problem is with the content: when I execute this in a terminal I receive the content on STDOUT (which is what the command is supposed to do). But when I try to store that STDOUT using regular redirection expressions, it does not work.
One example:
exec 3>&1
HTTP_STATUS=$(curl -k --silent -L -w "%{http_code}" -o >(cat >&3) 'http://example.com')
echo $HTTP_STATUS
If it is not clear what I am trying to do, a rough (and invalid) version would be something like this:
(I know that is invalid, I only want to clarify)
HTTP_CONTENT=$(echo `HTTP_STATUS=$(curl -k --silent -L -w "%{http_code}" -o >(cat >&3) 'http://example.com'`)
HTTP_CONTENT would get the content and HTTP_STATUS would get the curl status code.
Please do not tell me to use another language. I need to solve this in bash. It is very simple to do in other (mainly object-oriented) languages, but I really want to do it in bash.
Thank you!
Since the HTTP status code is the last line of the curl output here (thanks to the \n%{http_code} in -w), you can split it off like this in bash:
out=$(curl -k --silent -L -w "\n%{http_code}" 'minecraft.net/haspaid.jsp?user=apterixbr')
http_status="${out##*$'\n'}"
http_content="${out%$'\n'*}"
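A quick usage sketch of the above (example.com is only a placeholder URL; the variable names follow the snippet):
out=$(curl -k --silent -L -w "\n%{http_code}" 'http://example.com/')
http_status="${out##*$'\n'}"   # everything after the last newline
http_content="${out%$'\n'*}"   # everything before the last newline
if [ "$http_status" = "200" ]; then
    echo "OK, got ${#http_content} characters of content"
else
    echo "Request failed with status $http_status" >&2
fi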
In the janus project, they use curl to download and pipe a bootstrap script into bash.
https://github.com/carlhuda/janus
It looks like this:
$ curl -Lo- https://bit.ly/janus-bootstrap | bash
Why would one want to use the args -Lo-?
-o is supposed to be for output, but wouldn't that happen anyway (i.e. to stdout)?
It's all in the man pages:
-L: in case the page has moved (3xx response), curl will redo the request at the new address
-o: write the output to a file instead of stdout (usually the screen). In your case the argument given to -o is -, which means stdout anyway, so the flag is redundant: the output is piped to bash (for execution) either way, not written to a file.
The -o is redundant, they produce the exact same output:
$ curl --silent example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
$ curl --silent --output - example.com | sha256sum
3587cb776ce0e4e8237f215800b7dffba0f25865cb84550e87ea8bbac838c423 *-
They have used that syntax since that line was first introduced in 2011.
You might ask Wael Nasreddine (#kalbasit on GitHub) why he did it. He
is still active on that repo.
I'm on Mac OS X and can't figure out how to download a file from a URL via the command line. It's from a static page, so I thought copying the download link and then using curl would do the trick, but it isn't working.
I referenced this StackOverflow question but that didn't work. I also referenced this article which also didn't work.
What I've tried:
curl -o https://github.com/jdfwarrior/Workflows.git
curl: no URL specified!
curl: try 'curl --help' or 'curl --manual' for more information
wget -r -np -l 1 -A zip https://github.com/jdfwarrior/Workflows.git
zsh: command not found: wget
How can a file be downloaded through the command line?
The -o (--output) option means curl writes its output to the file you specify instead of stdout. Your mistake was putting the URL right after -o, so curl treated the URL as the file to write to and concluded that no URL was specified. You need a file name after the -o, then the URL:
curl -o ./filename https://github.com/jdfwarrior/Workflows.git
And wget is not available by default on OS X.
curl -OL https://github.com/jdfwarrior/Workflows.git
-O: This option writes the output to a local file named like the remote file we get. In this case that file would be Workflows.git.
-L: If the server reports that the requested page has moved to a different location (indicated with a Location: header and a 3XX response code), this option makes curl redo the request at the new location.
Ref: curl man page
The easiest solution for your question is to keep the original filename. In that case, you just need to use a capital O ("-O") as the option (not a zero!). So it looks like:
curl -O https://github.com/jdfwarrior/Workflows.git
There are several options to make curl output to a file
# saves it to myfile.txt
curl http://www.example.com/data.txt -o myfile.txt -L
# with URL globbing, #1 gets substituted with the current glob value, so each filename varies
curl "http://www.example.com/data[1-3].txt" -o "file_#1.txt" -L
# saves to data.txt, the filename extracted from the URL
curl http://www.example.com/data.txt -O -L
# saves to filename determined by the Content-Disposition header sent by the server.
curl http://www.example.com/data.txt -O -J -L
# -O Write output to a local file named like the remote file we get
# -o <file> Write output to <file> instead of stdout (variable replacement performed on <file>)
# -J Use the Content-Disposition filename instead of extracting filename from URL
# -L Follow redirects
I have a list of URLS that I need to check, to see if they still work or not. I would like to write a bash script that does that for me.
I only need the returned HTTP status code, i.e. 200, 404, 500 and so forth. Nothing more.
EDIT Note that there is an issue if the page says "404 not found" but returns a 200 OK message. It's a misconfigured web server, but you may have to consider this case.
For more on this, see Check if a URL goes to a page containing the text "404"
Curl has a specific option, --write-out, for this:
$ curl -o /dev/null --silent --head --write-out '%{http_code}\n' <url>
200
-o /dev/null throws away the usual output
--silent throws away the progress meter
--head makes a HEAD HTTP request, instead of GET
--write-out '%{http_code}\n' prints the required status code
To wrap this up in a complete Bash script:
#!/bin/bash
while read LINE; do
curl -o /dev/null --silent --head --write-out "%{http_code} $LINE\n" "$LINE"
done < url-list.txt
(Eagle-eyed readers will notice that this runs one curl process per URL, which imposes fork and TCP connection penalties. It would be faster if multiple URLs were combined in a single curl invocation, but there isn't space to write out the monstrous repetition of options that curl requires to do this.)
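For the curious, a rough sketch of what that looks like with just two URLs (example.com and example.org are placeholders), repeating the per-URL options by hand:
# one curl process, two transfers; -o has to be repeated once per URL,
# and --write-out prints a line after each transfer
curl --silent --head \
     -o /dev/null -o /dev/null \
     --write-out '%{url_effective} %{http_code}\n' \
     'http://example.com/' 'http://example.org/'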
wget --spider -S "http://url/to/be/checked" 2>&1 | grep "HTTP/" | awk '{print $2}'
prints only the status code for you
Extending the answer already provided by Phil: adding parallelism is a no-brainer in bash if you use xargs for the call.
Here the code:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' < url.lst
-n1: use just one value (from the list) as argument to the curl call
-P10: Keep 10 curl processes alive at any time (i.e. 10 parallel connections)
Check the --write-out option in the curl manual for more data you can extract with it (timings, etc.).
In case it helps someone this is the call I'm currently using:
xargs -n1 -P 10 curl -o /dev/null --silent --head --write-out '%{url_effective};%{http_code};%{time_total};%{time_namelookup};%{time_connect};%{size_download};%{speed_download}\n' < url.lst | tee results.csv
It just outputs a bunch of data into a csv file that can be imported into any office tool.
This relies on widely available wget, present almost everywhere, even on Alpine Linux.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
The explanations are as follow :
--quiet
Turn off Wget's output.
Source - wget man pages
--spider
[ ... ] it will not download the pages, just check that they are there. [ ... ]
Source - wget man pages
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
Source - wget man pages
What they don't say about --server-response is that those headers are printed to standard error (stderr), hence the need to redirect them to standard output with 2>&1.
With the headers on standard output, we can pipe them to awk to extract the HTTP status code. That code is:
the second ($2) non-blank group of characters: {$2}
on the very first line of the header: NR==1
And because we want to print it... {print $2}.
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
Use curl to fetch the HTTP-header only (not the whole file) and parse it:
$ curl -I --stderr /dev/null http://www.google.co.uk/index.html | head -1 | cut -d' ' -f2
200
wget -S -i *file* will get you the headers from each URL listed in a file.
Filter through grep for the status code specifically, as in the sketch below.
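A minimal sketch of that, assuming the URLs are in url-list.txt (note that -S writes the headers to stderr, so they have to be redirected before grep can see them):
# --spider avoids downloading the bodies; 2>&1 moves the headers onto stdout
wget -S --spider -i url-list.txt 2>&1 | grep "HTTP/"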
I found a tool called "webchk", written in Python, that returns a status code for a list of URLs.
https://pypi.org/project/webchk/
Output looks like this:
▶ webchk -i ./dxieu.txt | grep '200'
http://salesforce-case-status.dxi.eu/login ... 200 OK (0.108)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.389)
https://support.dxi.eu/hc/en-gb ... 200 OK (0.401)
Hope that helps!
Keeping in mind that curl is not always available (particularly in containers), there are issues with this solution:
wget --server-response --spider --quiet "${url}" 2>&1 | awk 'NR==1{print $2}'
which will return an exit status of 0 even if the URL doesn't exist.
Alternatively, here is a reasonable container health-check for using wget:
wget -S --spider -q -t 1 "${url}" 2>&1 | grep "200 OK" > /dev/null
While it may not give you the exact status code, it will at least give you a valid exit-code-based health response (even with redirects on the endpoint).
Due to https://mywiki.wooledge.org/BashPitfalls#Non-atomic_writes_with_xargs_-P (output from parallel jobs in xargs risks being mixed), I would use GNU Parallel instead of xargs to parallelize:
cat url.lst |
parallel -P0 -q curl -o /dev/null --silent --head --write-out '%{url_effective}: %{http_code}\n' > outfile
In this particular case it may be safe to use xargs because the output is so short; the problem is rather that if someone later changes the code to do something bigger, it will no longer be safe. Or if someone reads this and thinks they can replace curl with something else, that may also not be safe.