In my terminal:
url='http://58.30.207.171/youku/69764FC8BC2447992487A2488/030002010051001478A6FA0109ACBF22B0F614-2746-1AE5-C9EF-2266A1CC83DB.flv'
curl $url -o test1.flv    # downloads successfully
wget -c $url -O test2.flv # fails to download
1. Why can't wget download it?
2. How can I make wget download it?
The site may be blocking wget specifically. You can override this by setting the user-agent to an empty string:
I found this at http://www.gnu.org/software/wget/manual/wget.html#Option-Syntax
‘-U agent-string’
‘--user-agent=agent-string’
Identify as agent-string to the http server.
The http protocol allows the clients to identify themselves using a User-Agent header field. This enables distinguishing the www software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as ‘Wget/version’, version being the current version number of Wget.
However, some sites have been known to impose the policy of tailoring the output according to the User-Agent-supplied information. While this is not such a bad idea in theory, it has been abused by servers denying information to clients other than (historically) Netscape or, more frequently, Microsoft Internet Explorer. This option allows you to change the User-Agent line issued by Wget. Use of this option is discouraged, unless you really know what you are doing.
Specifying empty user agent with ‘--user-agent=""’ instructs Wget not to send the User-Agent header in http requests.
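Applied to the commands above, that would look something like the following (reusing the $url variable from the question; whether the download then succeeds depends on what the server is actually filtering on):
wget --user-agent="" -c "$url" -O test2.flv
If an empty User-Agent doesn't help, impersonating a browser (e.g. --user-agent="Mozilla/5.0") or sending a Referer header with --referer is another common thing to try.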
I have 2 Linux Servers (with LAMP):
Web Server with SSL (https://www.example.com)
Admin Server (needs to connect to Web Server, via https)
When I connect from the Admin Server (to the Web Server) via the curl command, the connection is refused. But when I use curl with the --cacert option, it goes through, like this:
# curl --cacert CAchain.crt -I https://www.example.com
HTTP/1.1 200 OK
..
I'm getting 200 OK only because of --cacert CAchain.crt.
Obviously I need the plain/basic curl command, without specifying --cacert, to work, like:
# curl -I https://www.example.com
HTTP/1.1 200 OK
..
That way my admin application will reliably be able to connect to it (via https).
But right now, when I connect to https://www.example.com from the Admin Server (via its application), the connection bounces back: the host cannot be reached over SSL.
How do I install the client's CA cert system-wide on my Linux (RHEL) machine, so that I can avoid specifying the cert file and any communication with "https://www.example.com" via curl or a web browser (from the Admin Server) just goes through? (Is it something like the "SSH without keys" setup? But how, please?)
You need to add the CA cert somewhere curl can use it - it looks like you're just keeping it in your local directory, which isn't where curl looks for it (on RHEL it typically uses a system bundle such as /etc/pki/tls/certs/ca-bundle.crt). There's a handful of ways to do this. I don't have much experience doing it on RHEL (or CentOS), but I have done it for Debian.
This ServerFault Post might help.
Likewise, This Post might help you install/import the CA cert properly.
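On RHEL/CentOS, a minimal sketch of the system-wide approach looks like this (assuming CAchain.crt is the CA file from the question; on older RHEL 6 you may need to run update-ca-trust enable once first):
cp CAchain.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
After that, curl -I https://www.example.com should validate the certificate without needing --cacert.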
I would like to try and use Tornado's proxying capabilities. For this, the documentation tells me, I need to have libcurl compiled with an asynchronous DNS resolver.
I have a version of libcurl installed via yum (7.29), but I can't figure out how to tell whether it was built with asynchronous DNS resolution or not.
If it wasn't, is there a way to enable it, or do I have to build libcurl from scratch? The latter seems to be the only option I could find so far; I'm hoping I missed something.
Thanks!
Alternatively, if you have the command-line version of curl installed, you can run curl --version. (To install curl on Alpine, run apk add curl.)
Example output:
root@ae5870274e10:/mnt/src# curl --version
curl 7.38.0 (x86_64-pc-linux-gnu) libcurl/7.38.0 OpenSSL/1.0.1t zlib/1.2.8 libidn/1.29 libssh2/1.4.3 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: AsynchDNS IDN IPv6 Largefile GSS-API SPNEGO NTLM NTLM_WB SSL libz TLS-SRP
and look for the AsynchDNS keyword.
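If you just want a quick yes/no check, e.g. in a script, something like this should work:
curl --version | grep -q AsynchDNS && echo "async DNS: yes" || echo "async DNS: no"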
Call curl_version_info() and check the returned struct and its 'features' field:
int features; /* bitmask, see below */
If that field has the CURL_VERSION_ASYNCHDNS bit set, you know this libcurl build resolves names asynchronously, using either threads or c-ares.
If that bit is not set, it was built to use synchronous name resolves.
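As a minimal sketch, a check along those lines could look like this (compile with -lcurl; error handling omitted):
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    /* query the capabilities of the libcurl this program is linked against */
    curl_version_info_data *info = curl_version_info(CURLVERSION_NOW);

    if (info->features & CURL_VERSION_ASYNCHDNS)
        printf("libcurl %s resolves names asynchronously (threads or c-ares)\n", info->version);
    else
        printf("libcurl %s uses synchronous name resolves\n", info->version);
    return 0;
}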
I have a web application that I need to debug because I suspect that the request sent is altered on its way to the server.
I want to dump the HTTPS traffic received on localhost:443 and decrypt it so I can check the packets.
Obviously I do have the private key from the server.
Is there a way to do this from the command line?
You can use ssldump (it works on top of libpcap).
ssldump -r <File_Name>.pcap -k <Key_File>.key -d host <IP_Address>
You specify the following options with the ssldump utility:
-r: Read data from the <File_Name>.pcap file instead of from the network.
-k: Use <Key_File>.key file as the location for the SSL keyfile.
-d: Display the application data traffic.
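For the localhost:443 case in the question, a rough sketch would be to capture on the loopback interface first and then decrypt the capture (interface and file names here are assumptions; note that ssldump can only decrypt sessions using RSA key exchange, not DHE/ECDHE cipher suites):
tcpdump -i lo -w https.pcap port 443
ssldump -r https.pcap -k server.key -d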
You may refer to the complete example here.
You can import the SSL key into Wireshark to decrypt HTTPS, if Wireshark was compiled with SSL decryption support:
http://www.etherlook.com/howto/use-wireshark-to-decrypt-https/
http://wiki.wireshark.org/SSL
I set up my own git server on a CentOS distribution.
I can contact the server via the git protocol from home. But when I try to access it via https from the office I get:
Cloning into /Users/vito/Documents/...
error: error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112) while
accessing https://gitolite@myserverxyz.com/vitorepo.git/info/refs
fatal: HTTP request failed
Where is the problem? On my server or on my office-mac?
I got the exact same response from curl when trying to connect to an Ubuntu instance running OpenSSL 1.0.0e. I successfully resolved the problem by adding the --sslv3 (-3) flag to the curl command.
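For reference, a quick way to test whether forcing SSLv3 makes a difference, using the URL from the error message (this only works if both your curl/OpenSSL build and the server still allow SSLv3):
curl -3 -v https://gitolite@myserverxyz.com/vitorepo.git/info/refs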
It seems that it's a compatibility problem between older version of OpenSSL (0.9.8) acting as a client and recent OpenSSL version (1.0.0) acting as a server with some specific options used by Curl on client side and Apache on server side.
It's probably due to some recent security fix in OpenSSL (probably the one against protocol downgrade attacks).
Try upgrading the OpenSSL library version on the client side to 1.0.0.
See:
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=3395520&group_id=976
In case anyone has this issue with XMLRPC:
Daniel's answer (forcing SSL version 3) solved the issue for me. Just specify XMLRPC_SSLVERSION_SSLv3 in the clientXmlTransport_curl options (C++).
The problem began when we upgraded our server to OpenSSL version 1.0.1-4ubuntu5.5 and the clients were still running 0.9.8o-5ubuntu1.7.
I believe this is a host-name matching issue on the server. Error 1112 is SSL_R_TLSV1_UNRECOGNIZED_NAME, and comes from an SNI name mismatch (info on SNI). I was having the same issue in curl.
For me, the workaround was to make sure the name I used on the client matched one of the ServerName or ServerAlias configurations on the server. Of course, these are Apache directives; I don't know what you need to do for a git server. But I suspect the server names you're using from home and work are different, and the home name is the canonical name the git server is using (and therefore SNI is working).
The 'real' fix will probably take a client change in git to allow a way to ignore the name-mismatch warning (the way your browser already does).
Not sure if I had exactly the same problem, but the error message was the same. For some reason it only seemed to happen on the Ubuntu box I set up a git server on; the CentOS box with a git server on it was fine.
I only just solved it after 3 or 4 days. It turns out to be because git's underlying curl library has a broken keep-alive implementation (I ended up dumping the HTTP traffic and verifying the behaviour by hand).
In a nutshell, curl (at least the version used inside every git implementation I could find, including command-line git and Eclipse's EGit) doesn't seem to correctly interpret the Connection response header, or more precisely doesn't seem to correctly interpret the absence of it.
To fix the problem you need to configure the SSL virtual host inside the Apache that is serving your git repository with an extra directive specifically for git. Add the following line just before </VirtualHost>:
BrowserMatch "git" nokeepalive ssl-unclean-shutdown
You unfortunately can't tell Apache to just downgrade to HTTP/1.0 (which would be cleaner) because curl can't handle that, but you can tell it to force Connection: close on every request, which curl does know how to handle.
In a misleading coincidence, if you test curl directly without this change it will seem to work, because it makes a single request and then aborts. Only by getting curl to execute two requests on the same keep-alive connection over SSL does the problem become apparent.
I had the same error. The root cause seems to be an incompatibility between the client and server OpenSSL versions.
I upgraded my server with apt-get upgrade openssl and upgraded my Windows git installation.
The combination of the Windows git client
git version 1.9.4.msysgit.0, which bundles OpenSSL version:
OpenSSL 0.9.8e 23 Feb 2007
and a server with OpenSSL version:
OpenSSL 1.0.1c 10 May 2012
seems to work fine together.
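If you want to confirm what each side is running, the server side is easy to check, and the client-side git version is printed by git itself (the msysGit build bundles its own OpenSSL, so the system OpenSSL on Windows is not the relevant one here):
openssl version   # on the server
git --version     # on the Windows client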
I'm trying to write a batch script (CMD on Windows XP Pro) that will automatically download and unzip packages with the help of 7-Zip and PuTTY/psftp.
If I have a URL to a package to download, e.g. http://somesite.org/packages/package.zip, how do I download it on the command line using PuTTY?
Also if you have a better way to do this that would be helpful too.
wget is of course an obvious solution, but I also suggest having a look at cURL. From their website:
curl is a command line tool for transferring files with URL syntax, supporting FTP, FTPS, HTTP, HTTPS, SCP, SFTP, TFTP, TELNET, DICT, LDAP, LDAPS and FILE. curl supports SSL certificates, HTTP POST, HTTP PUT, FTP uploading, HTTP form based upload, proxies, cookies, user+password authentication (Basic, Digest, NTLM, Negotiate, kerberos...), file transfer resume, proxy tunneling and a busload of other useful tricks.
It's of course free and open source, and despite its huge list of supported protocols it's as simple to use as wget. To use your example:
curl -O http://somesite.org/packages/package.zip
downloads package.zip to a local file with the same name
curl -o myname.zip http://somesite.org/packages/package.zip
downloads package.zip as myname.zip
curl http://somesite.org/packages/package.zip > package.zip
redirects curl's stdout to package.zip
EDIT - example corrected, with thanks to @PrabhakarKasi
win32 version of wget:
http://pages.interlog.com/~tcharron/wgetwin.html
PuTTY isn't really a download tool, unless you want to download something via SCP/SFTP. So yes, wget is more helpful here.
I don't know PuTTY, but wget can certainly do it. If you are on Windows, you can get it via Cygwin or just google for a win32 version.
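For the example URL from the question, that is simply:
wget http://somesite.org/packages/package.zip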
pscp.exe -pw yourpassword you@somesite.org:/packages/package.zip .\
The path /packages/package.zip should be whatever the path to the public web files is on the server. So, for example, on some old Apache server, it might be:
pscp.exe -pw yourpassword you@somesite.org:/users/httpd/vhosts/default/packages/package.zip .\
Use pscp, which comes with PuTTY:
pscp user#host:/path/to/file.7z .
7z e file.7z
If you set this up with SSH keys, pscp won't have to ask you for a password.
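A sketch of the key-based variant (assuming a PuTTY-format private key called mykey.ppk whose public half is in the server's authorized_keys):
pscp -i mykey.ppk user@host:/path/to/file.7z .
7z e file.7z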