I have a one-line bash command which fetches an HTML page over an SSL-encrypted HTTPS connection:
echo "GET / HTTP/1.1\nHost: www.example.com\n\n" | openssl s_client -connect www.example.com:443 -quiet 2> /dev/null
The page loads, but the HTTP headers are included in the output:
HTTP/1.1 200 OK
Date: Fri, 01 Feb 2013 13:15:59 GMT
Server: Apache/2.2.20 (Ubuntu)
and more like this. With 2> /dev/null I can hide the warnings about invalid SSL certificates and other diagnostic output.
I do not want to switch to another tool, because curl does not do what I want to do here.
That is not possible, due to the nature of openssl s_client: it gives you the direct, plain output of whatever service is running behind the port you connect to (443 in my example from the question, where I want to GET / from a web server over SSL).
telnet would likewise give you the plain output of the HTTP protocol, and curl would show me the HTML page over HTTPS without the headers, but it does not allow hand-written commands to be sent to the web server.
You may use the expect command to script the interaction with openssl. I'm not sure it will work, but it's worth a try.
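If the goal is just to drop the response headers from that output, a small pipe may already be enough. A minimal sketch, not from the answers above, assuming bash's echo -e and GNU sed; the sed expression deletes everything up to and including the first blank line, which is where the HTTP headers end:
# send a hand-written request, discard TLS diagnostics, strip the header block
echo -e "GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n" \
  | openssl s_client -connect www.example.com:443 -quiet 2> /dev/null \
  | sed '1,/^\r*$/d'
The Connection: close header just asks the server to close the connection when it is done, so s_client exits on its own.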
I'm trying to log into an FTPS site. I've tried giving the login credentials on the command line, putting set parameters in ~/.lftprc, and opening an lftp session and typing those parameters as lftp commands. Regardless, I keep hitting the same roadblock:
421 Sorry, cleartext sessions are not accepted on this server.
Please reconnect using SSL/TLS security mechanisms.
I got furthest with the following parameters, but keep getting the error above.
How do I get lftp to use SSL/TLS security mechanism from the command line?
The objective is to script access to this FTPS site using bash (without resorting to expect).
lftp
lftp :~> set ssl-allow false
lftp :~> set passive-mode yes
lftp :~> open ftp.abc.com
lftp ftp.abc.com:~> login theuser
Password:
lftp theuser@ftp.abc.com:~> cd
`cd' at 0 [Delaying before reconnect: 26]
CTRL-C
lftp theuser@ftp.abc.com:~> debug
lftp theuser@ftp.abc.com:~> cd
---- Connecting to ftp.abc.com (XX.XXX.XX.XX) port 21
<--- 220-Welcome to the Yahoo! Web Hosting FTP server
<--- 220-Need help? Get all details at:
<--- 220-http://help.yahoo.com/help/us/webhosting/gftp/
<--- 220-
<--- 220-No anonymous logins accepted.
<--- 220-Yahoo!
<--- 220-Local time is now 15:30. Server port: 21.
<--- 220-This is a private system - No anonymous login
<--- 220 You will be disconnected after 5 minutes of inactivity.
---> FEAT
<--- 211-Extensions supported:
<--- EPRT
<--- IDLE
<--- MDTM
<--- SIZE
<--- MFMT
<--- REST STREAM
<--- MLST type*;size*;sizd*;modify*;UNIX.mode*;UNIX.uid*;UNIX.gid*;unique*;
<--- MLSD
<--- XDBG
<--- AUTH TLS
<--- PBSZ
<--- PROT
<--- TVFS
<--- ESTA
<--- PASV
<--- EPSV
<--- SPSV
<--- ESTP
<--- 211 End.
---> OPTS MLST type;size;modify;UNIX.mode;UNIX.uid;UNIX.gid;
<--- 200 MLST OPTS type;size;sizd;modify;UNIX.mode;UNIX.uid;UNIX.gid;unique;
---> USER theuser
<--- 421 Sorry, cleartext sessions are not accepted on this server.
Please reconnect using SSL/TLS security mechanisms.
It seems like lftp is not configured correctly on many systems, which makes it unable to verify server certificates (producing Fatal error: Certificate verification: Not trusted).
The web (and answers in this post) is full of suggestions to fix this by disabling certificate verification or encryption altogether. This is insecure, as it allows man-in-the-middle attacks to pass unnoticed.
The better solution is to configure certificate verification correctly, which is easy, fortunately. To do so, add the following line to /etc/lftp.conf (or alternatively ~/.lftp/rc, or ~/.config/lftp/rc):
set ssl:ca-file "/etc/ssl/certs/ca-certificates.crt"
ca-certificates.crt is a file that contains all CA certificates of the system. The location used above is the one from Ubuntu and may vary on different systems. To generate or update the file, run update-ca-certificates:
sudo update-ca-certificates
If your system does not have this command, you can create one manually like this:
cat /etc/ssl/certs/*.pem | sudo tee /etc/ssl/certs/ca-certificates.crt > /dev/null
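Putting the two steps together, something like the following should be all that's needed on a Debian/Ubuntu-style system (the lftp.conf path is the system-wide one mentioned above):
# refresh the bundled CA file, then point lftp at it system-wide
sudo update-ca-certificates
echo 'set ssl:ca-file "/etc/ssl/certs/ca-certificates.crt"' | sudo tee -a /etc/lftp.conf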
lftp :~> set ssl-allow false
You've explicitly set ssl-allow to false, but it must be true for lftp to attempt SSL at all.
You might also need to
set ssl:verify-certificate no
My answer sets this up for a single user on your system, rather than with a system-wide certificate.
lftp uses Transport Layer Security (TLS). So it’s essential to first grab the certificate from the FTP server.
openssl s_client -connect <ftp-hostname>:21 -starttls ftp
I include the entire certificate chain in a new file called cert.crt in my local ~/.lftp folder. At the very least, you're looking to include all the text of the certificate itself: -----BEGIN CERTIFICATE----- <...> -----END CERTIFICATE-----.
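For reference, a hedged sketch of how that file can be produced non-interactively (ftp.example.com is a placeholder). Note that openssl x509 keeps only the first certificate it sees; for the entire chain you would add -showcerts to s_client and keep every PEM block from its output:
mkdir -p ~/.lftp
# grab the certificate offered during STARTTLS and store it where lftp will look for it
openssl s_client -connect ftp.example.com:21 -starttls ftp < /dev/null \
  | openssl x509 > ~/.lftp/cert.crt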
I create a file called rc in the local ~/.lftp folder and add the lines
set ssl:ca-file "cert.crt"
set ssl:check-hostname no (this prevents Fatal error: Certificate verification: certificate common name doesn't match requested host name '<ftp-hostname>' when running a command like ls remotely)
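If you prefer to create that rc file in one shot, a small heredoc does it (same two settings as above):
cat > ~/.lftp/rc <<'EOF'
set ssl:ca-file "cert.crt"
set ssl:check-hostname no
EOF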
Setting ftp:ssl-allow true didn't work for me.
By typing set:
lftp :~> set
I noticed this:
set ftp:ssl-allow true
set ftp:ssl-allow/XXX.XXX.XXX.XXX no
with XXX.XXX.XXX.XXX being the server I was logging into.
So the final set of commands I needed was:
lftp :~> set ftp:ssl-allow true
lftp :~> set ftp:ssl-allow/XXX.XXX.XXX.XXX true
lftp :~> set ssl:verify-certificate no
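As a hedged, non-interactive equivalent (the IP, user, and host are placeholders), the same settings can be passed with -e so the whole thing is scriptable:
# will still prompt for the password; add ,password after the user name to avoid that
lftp -u theuser \
  -e "set ftp:ssl-allow true; set ftp:ssl-allow/XXX.XXX.XXX.XXX true; set ssl:verify-certificate no; ls; quit" \
  ftp.abc.com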
Note: lftp must be version >= 4.6.3 (I'm a Debian user).
What worked for me step by step with lftp:
get the certificate of the host with openssl s_client -connect <ftp_hostname>:21 -starttls ftp; near the beginning of the output I got something like -----BEGIN CERTIFICATE-----
MIIEQzCCAyu.....XjMO
-----END CERTIFICATE-----
copy that whole block, from -----BEGIN CERTIFICATE----- through -----END CERTIFICATE-----, into /etc/ssl/certs/ca-certificates.crt
In the lftp configuration, reference this certificate file; for a system-wide setup, add to /etc/lftp.conf: set ssl:ca-file "/etc/ssl/certs/ca-certificates.crt"
and then do your sync (or whatever else) with lftp; in my case it is lftp -u "${FTP_USER},${FTP_PWD}" ${FTP_HOST} -e "set net:timeout 10; mirror ${EXCLUDES} -R ${LOCAL_SOURCE_PATH} ${REMOTE_DEST_PATH}; quit"
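For completeness, a hedged wrapper around that same invocation; every value assigned below is a placeholder of my own, not something from the original answer:
#!/bin/bash
FTP_USER="theuser"                 # placeholder
FTP_PWD="secret"                   # placeholder
FTP_HOST="ftp.example.com"         # placeholder
EXCLUDES="--exclude-glob .git*"    # placeholder mirror options
LOCAL_SOURCE_PATH="./public"       # placeholder
REMOTE_DEST_PATH="/htdocs"         # placeholder

# upload (-R) the local tree to the remote destination over verified TLS
lftp -u "${FTP_USER},${FTP_PWD}" "${FTP_HOST}" \
  -e "set net:timeout 10; mirror ${EXCLUDES} -R ${LOCAL_SOURCE_PATH} ${REMOTE_DEST_PATH}; quit"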
This worked for me for an FTPS server connection (on port 990, though it's not necessary to specify it) using lftp.
code:
lftp ftps://USER:PASSWORD@server.com -c "set ssl:verify-certificate false;"
then:
do stuff
more info at:
how-to-avoid-lftp-certificate-verification-error
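Note that with -c lftp runs the given commands and then exits, so in practice the "do stuff" goes into the same -c string. A hedged example (credentials, host, and the ls are placeholders):
lftp ftps://USER:PASSWORD@server.com -c "set ssl:verify-certificate false; ls; quit"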
My system:
Ubuntu 16.04
Mitmproxy version 3.0.4
Python version 3.5.2
I've successfully installed mitmproxy from docs.mitmproxy.org on my server. But now I'm confused about how to save the mitmproxy log to a file. I tried mitmdump --mode transparent --showhost -p 9001 -w output_file
When I open output_file, it's not human readable. I read the docs and tried scripts from mitmproxy's GitHub, but I have no clue.
Does anyone know how to save the mitmproxy log to a file in a human-readable format?
Thank you!
As you have probably noticed, mitmproxy generates streams in a binary format. If you want to save the streams in a human readable format, you can pass a script when you run mitmproxy to do so.
save.py
from mitmproxy.net.http.http1.assemble import assemble_request, assemble_response

# keep the log file open for the lifetime of the addon
f = open('/tmp/test/output.txt', 'w')

def response(flow):
    # write the assembled request (request line, headers, body) as readable text
    f.write(assemble_request(flow.request).decode('utf-8'))
And now run mitmproxy -s save.py and the output will be written to output.txt in a human readable format.
Do pay attention to the responses, because they might contain a lot of binary data. But if you do want to write the responses in a human-readable format as well, you can add f.write(assemble_response(flow.response).decode('utf-8', 'replace')) to the script.
Example output from the script:
❯❯ tail -f output.txt
GET / HTTP/1.1
Host: example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:77.0) Gecko/20100101 Firefox/77.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Language: en-US,en;q=0.5
Accept-Encoding: gzip, deflate
Connection: keep-alive
Upgrade-Insecure-Requests: 1
If-Modified-Since: Thu, 17 Oct 2019 07:18:26 GMT
If-None-Match: "3147526947"
Cache-Control: max-age=0
I initially had trouble getting the requests and responses written to the file using what @securisec proposed, but what worked for me was to replace mitmproxy -s save.py with mitmdump -s save.py instead.
Once I killed the mitmdump process, the output file was written to the file system.
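That buffering behaviour is worth keeping in mind: the addon writes into Python's buffered file object, so the text may not hit disk until the process exits, unless you add an f.flush() after the write in save.py. A hedged usage sketch reusing the flags from the question:
# run mitmdump with the logging addon, then follow the plain-text log live
mitmdump --mode transparent --showhost -p 9001 -s save.py &
tail -f /tmp/test/output.txt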
I have created a curl command to send a POST to my server, where I am listening on that port for input to trigger an additional action. The command is the following (I just masked the URL):
curl -v -H "Content-Type: application/json" -X POST -d "{\"Location\":\"Some Name\",\"Value\":\"40%\"}" http://example.com:8885/
I get the following output from curl:
About to connect() to example.com port 8885 (#0)
Trying 5.147.XXX.XXX...
Connected to example.com (5.147.XXX.XXX) port 8885 (#0)
POST / HTTP/1.1
User-Agent: curl/7.29.0
Host: example.com:8885
Accept: */*
Content-Type: application/json
Content-Length: 40
upload completely sent off: 40 out of 40 bytes
However, after that curl does not close the connection. Am I doing something wrong? Also, the POST only shows up on the server once I hit Ctrl+C.
curl sits there waiting for a proper HTTP response, and after that has been received it will exit cleanly.
A minimal HTTP/1.1 response could look something like:
HTTP/1.1 200 OK
Content-Length: 0
... and it needs an extra CRLF after the last header to signal the end of headers.
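To see that in isolation, you can fake such a minimal response with netcat and point the same curl command at it; a hedged sketch, assuming an OpenBSD-style nc that takes just -l and a port:
# terminal 1: a throwaway "server" that answers with a minimal valid response
printf 'HTTP/1.1 200 OK\r\nContent-Length: 0\r\nConnection: close\r\n\r\n' | nc -l 8885

# terminal 2: the same curl command now exits cleanly
curl -v -H "Content-Type: application/json" -X POST \
  -d '{"Location":"Some Name","Value":"40%"}' http://localhost:8885/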
I'm a bit rusty on this, but according to section 6.1 of RFC7230, you might need to add a Connection: close header as well. Quoting part of the paragraph:
The "close" connection option is defined for a sender to signal
that this connection will be closed after completion of the
response. For example,
Connection: close
in either the request or the response header fields indicates that
the sender is going to close the connection after the current
request/response is complete (Section 6.6).
Let me know if it solves your issue :-)
Is there a question mark in the link?
I found that my link had a question mark in it, like http... .com/something/something?properties=1. I tried the Connection: close header but the connection was still active, so I then tried removing ?properties etc., and it worked.
I've just been tinkering with Sinatra and trying to get a bit of a RESTful web service going.
The error I'm getting at the moment is very specific though.
Take this example post method
post '/postMan/:someParam' do
#Edited here. This code can be anything. 411 is still the response
puts params[:someParam]
end
Seems simple enough. Take a param, make an object out of it, then go store it in whatever way the object's save method defines.
Here's what I use to post the data using curl:
$ curl -I -X POST http://127.0.0.1/postman/123456
The only problem is, I'm getting 411 back and have no idea why.
To the best of my knowledge, 411 is length required. Here is the trace
HTTP/1.1 411 Length Required
Content-Type: text/html; charset=ISO-8859-1
Server: WEBrick/1.3.1 (Ruby/1.9.2/2011-07-09)
Date: Fri, 02 Mar 2012 22:27:09 GMT
Content-Length: 303
Connection: close
I 'cannot' change the curl message in any way. So does anyone have a way to make Sinatra ignore the missing content length, or some other fix which doesn't involve changing the curl request?
For the record, it doesn't matter whether I use the parameters in the POST route or not. I could have some crazy code inside it; it will still throw the same error.
As others said above, WEBrick wrongly requires POST requests to have a Content-Length header. I just pass an empty body, because it's less typing than passing in the header:
curl -X POST -d '' http://webrickwhyyounotakeemptyposts.com/
Are you sure you're on port 80 for your app?
When I run:
ruby -r sinatra -e "post('/postMan/:someParam'){puts params[:someParam]}"
and curl it:
curl -I -X POST http://127.0.0.1:4567/postMan/123456
HTTP/1.1 200 OK
X-Frame-Options: sameorigin
X-XSS-Protection: 1; mode=block
Content-Type: text/html;charset=utf-8
Content-Length: 0
Connection: keep-alive
Server: thin 1.3.1 codename Triple Espresso
it's OK. I had to change the URL to postMan though; your example threw a 404 because you had postman.
The output was also as expected:
== Sinatra/1.3.2 has taken the stage on 4567 for development with backup from Thin
>> Thin web server (v1.3.1 codename Triple Espresso)
>> Maximum connections set to 1024
>> Listening on 0.0.0.0:4567, CTRL+C to stop
123456
Ah. Try it without -I. It's probably sending a HEAD request and as such, not sending what you expect. Use -v if you want to show the headers.
curl -v -X POST http://127.0.0.1/postman/123456
curl -v -X POST -d "key=val" http://127.0.0.1/postman/123456
WEBrick erroneously requires POST requests to include the Content-Length header.
curl -H 'Content-Length: 0' -X POST http://example.com
According to the standard, however, POST requests don't require a body and therefore don't require the Content-Length header.
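Since the trace in the earlier answer shows Sinatra happily picking up Thin when it is installed (Server: thin ...), another hedged workaround is simply to install Thin so WEBrick never gets used; the one-liner and curl call below are the same ones from that answer:
gem install thin
# re-run the app; Sinatra prefers Thin over WEBrick when it is available
ruby -r sinatra -e "post('/postMan/:someParam'){puts params[:someParam]}"
curl -I -X POST http://127.0.0.1:4567/postMan/123456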
I wrote a bash script that gets output from a website using curl and does a bunch of string manipulation on the HTML output. The problem is when I run it against a site that returns its output gzipped. Going to the site in a browser works fine.
When I run curl by hand, I get gzipped output:
$ curl "http://example.com"
Here's the header from that particular site:
HTTP/1.1 200 OK
Server: nginx
Content-Type: text/html; charset=utf-8
X-Powered-By: PHP/5.2.17
Last-Modified: Sat, 03 Dec 2011 00:07:57 GMT
ETag: "6c38e1154f32dbd9ba211db8ad189b27"
Expires: Sun, 19 Nov 1978 05:00:00 GMT
Cache-Control: must-revalidate
Content-Encoding: gzip
Content-Length: 7796
Date: Sat, 03 Dec 2011 00:46:22 GMT
X-Varnish: 1509870407 1509810501
Age: 504
Via: 1.1 varnish
Connection: keep-alive
X-Cache-Svr: p2137050.pubip.peer1.net
X-Cache: HIT
X-Cache-Hits: 425
I know the returned data is gzipped, because this returns html, as expected:
$ curl "http://example.com" | gunzip
I don't want to pipe the output through gunzip, because the script works as-is on other sites, and piping through gunzip would break that functionality.
What I've tried
changing the user agent (I tried the same string my browser sends, "Mozilla/4.0", etc.)
man curl
Google search
searching Stack Overflow
Everything came up empty
Any ideas?
curl will automatically decompress the response if you set the --compressed flag:
curl --compressed "http://example.com"
--compressed
(HTTP) Request a compressed response using one of the algorithms libcurl supports, and save the uncompressed document. If this option is used and the server sends an unsupported encoding, curl will report an error.
gzip is most likely supported, but you can check this by running curl -V and looking for libz somewhere in the "Features" line:
$ curl -V
...
Protocols: ...
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
Note that it's really the website in question that is at fault here. If curl did not pass an Accept-Encoding: gzip request header, the server should not have sent a compressed response.
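You can check whether the server really ignores the request header by dumping just the response headers with and without an explicit Accept-Encoding (the body is discarded to /dev/null):
# headers only, default request (plain curl sends no Accept-Encoding)
curl -s -o /dev/null -D - "http://example.com"
# headers only, explicitly advertising gzip support
curl -s -o /dev/null -D - -H "Accept-Encoding: gzip" "http://example.com"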
In the relevant bug report, "Raw compressed output when not using --compressed but server returns gzip data" (#2836), the developers say:
The server shouldn't send content-encoding: gzip without the client having signaled that it is acceptable.
Besides, when you don't use --compressed with curl, you tell the command line tool you rather store the exact stream (compressed or not). I don't see a curl bug here...
So if the server could be sending gzipped content, use --compressed to let curl decompress it automatically.