Can curl default to using https? - bash

I have a script which acts as a wrapper around curl: it accepts all of curl's arguments but also adds some of its own (like -H 'Content-Type: application/json'), and then it does some parsing of the output.
The problem is that curl treats curl google.com as curl http://google.com. I want to force an HTTPS connection, but I don't want to parse curl's command line to find and edit the hostname. (The user might have typed curlwrapper -H "foo: bar" -XPOST google.com -d '{"hello":"world"}')
Is there any way to tell curl "use an HTTPS connection when you're not given a URL scheme"?

It does not appear to be possible due to how libcurl determines the protocol to use when no scheme is given. An excerpt from the code:
/*
 * Since there was no protocol part specified, we guess what protocol it
 * is based on the first letters of the server name.
 */
/* Note: if you add a new protocol, please update the list in
 * lib/version.c too! */
if(checkprefix("FTP.", conn->host.name))
  protop = "ftp";
else if(checkprefix("DICT.", conn->host.name))
  protop = "DICT";
else if(checkprefix("LDAP.", conn->host.name))
  protop = "LDAP";
else if(checkprefix("IMAP.", conn->host.name))
  protop = "IMAP";
else if(checkprefix("SMTP.", conn->host.name))
  protop = "smtp";
else if(checkprefix("POP3.", conn->host.name))
  protop = "pop3";
else {
  protop = "http";
}
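You can see the guessing from the shell: with no scheme on the command line, the host-name prefix decides the protocol (a quick check, assuming a curl build with these protocols enabled):
$ curl -v ftp.example.com     # guessed as ftp://ftp.example.com
$ curl -v pop3.example.com    # guessed as pop3://pop3.example.com
$ curl -v www.example.com     # anything else falls through to http://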

The default protocol for a URL with a missing scheme part (and thus also a way to bypass the protocol guessing described in the now-obsolete answer by @FatalError) can be set to HTTPS with the option
--proto-default https
available since version 7.45.0 (October 2015). See also https://github.com/curl/curl/pull/351.
It can be put into ~/.curlrc.
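For example, a ~/.curlrc containing just this line (curlrc entries use the long option names without the leading dashes):
proto-default = https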
Example:
$ curl -v example.org
* Trying XXXXIPv6redacted:80...
* Connected to example.org (XXXXIPv6redacted) port 80 (#0)
> GET / HTTP/1.1
...
$ curl --proto-default https -v example.org
* Trying XXXXIPv6redacted:443...
* Connected to example.org (XXXXIPv6redacted) port 443 (#0)
* ALPN: offers h2
...
(Note that this is not a magic option that guarantees security; for example, according to the manual it does not affect an HTTP proxy, if one is set.)
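Applied to the wrapper from the question, it is enough to put the option before the user-supplied arguments; a minimal sketch (the output parsing mentioned in the question is omitted, and curl >= 7.45.0 is assumed):
#!/bin/bash
# Scheme-less host names now default to https://; URLs that already carry an
# explicit scheme are left alone by --proto-default.
exec curl --proto-default https -H 'Content-Type: application/json' "$@"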

Related

cURL got different page source than what Chrome browser did

In short: I'm trying to get the page source of https://www.etoro.com/app/sv-iframe using curl in Bash.
I understand this task seems quite simple. I have read through 10+ similar questions here; unfortunately, none of them could solve my problem.
When you open the URL above in the Chrome browser, it's blank. You can either right click -> View Page Source, or sniff the network using Chrome Developer Tools. Both will give you the correct page source. The page contains JavaScript, in which there is a long hex string - that's what I ultimately need. I tried disabling JavaScript and reloading the page, and I still got the right page source, so JavaScript doesn't play a trick here. It sounds like getting this page source via curl should be straightforward, right?
When I right click the request in Chrome Developer Tools -> Copy as cURL, and execute it in a terminal, things turn nasty - I get a CloudFlare security check page. I reopened the page several times in Chrome Incognito mode and I swear I never saw a CloudFlare security check in the browser. I double-checked the cURL command; it has the user-agent set as well.
Here is what I tried so far:
Manually compose the curl command and fill in headers from Chrome Developer Tools
Sniff packets on an Android device, and use the headers set by the mobile browser
Post the request online from Postman Web
All gave me the same CloudFlare security check page.
The CloudFlare page says "Please enable cookies". I suspect the server determined this way that I was not calling from a browser. Following some threads, I tried setting the -b/-c/-j flags with curl. Also no luck.
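(For reference, the cookie-jar form of the -c/-b flags looks roughly like this; cookies.txt is just an illustrative scratch file:)
$ curl -c cookies.txt 'https://www.etoro.com/app/sv-iframe'   # store any Set-Cookie values
$ curl -b cookies.txt 'https://www.etoro.com/app/sv-iframe'   # send them back on the next request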
Here are the more detailed steps of what I've done:
Open Chrome Incognito mode
Open Developer Tool
Use Command+Shift+P (Mac) to open command menu
Type "disable javascript" and hit enter
Switch to Network tab
Open https://www.etoro.com/app/sv-iframe
Observe the request list - there should be only 1 request (request screenshot 1 / request screenshot 2 / response body / response cookie)
Right click on the request -> Copy as cURL
Here's my curl command:
curl 'https://www.etoro.com/app/sv-iframe' \
-H 'authority: www.etoro.com' \
-H 'pragma: no-cache' \
-H 'cache-control: no-cache' \
-H 'sec-ch-ua: "Google Chrome";v="89", "Chromium";v="89", ";Not A Brand";v="99"' \
-H 'sec-ch-ua-mobile: ?0' \
-H 'upgrade-insecure-requests: 1' \
-H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36' \
-H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9' \
-H 'sec-fetch-site: none' \
-H 'sec-fetch-mode: navigate' \
-H 'sec-fetch-user: ?1' \
-H 'sec-fetch-dest: document' \
-H 'accept-language: en-US,en;q=0.9' \
--compressed
I don't think the request itself requires a cookie, as the page could be opened in Incognito mode. I tried setting the response cookies together with the request anyway; it doesn't help either.
-H 'cookie: __cfduid=d2edf...; TS01047baf=01d53...; __cf_bm=a3803...; __cflb=02Di3...'
I've already spent a whole evening on it but couldn't get it resolved. I'd appreciate any suggestions or help to get me through it. I have a feeling that the actual fix is fairly simple: the request has no cookie, so the only thing to adjust is the headers. Maybe I didn't specify the correct headers? Or would some extra curl flag help?
There is some obfuscated JavaScript eval code on that page that is basically setting cookies or sending logs. Digging a bit deeper, this is what I ended up with:
(function() {
    var s = '9a7xxx......';
    function setCookie(cname, cvalue, domain, exdays) {
        var d = new Date();
        d.setTime(d.getTime() + (exdays * 1000 * 60 * 60 * 24));
        var expires = "expires=" + d.toUTCString();
        var cookie = cname + "=" + cvalue;
        if (domain) {
            cookie += ";" + "domain=" + domain;
        }
        cookie += ";" + expires + ";path=/";
        document.cookie = cookie;
    }
    function deleteCookie(cname, domain) {
        setCookie(cname, "", domain, 0);
    }
    var ta = ["window.callPhantom", "window.__nightmare", "window._phantom", "window.__webdriver_script_fn", "navigator.webdriver", "document.$cdc_asdjflasutopfhvcZLmcfl_"];
    var re;
    try {
        re = [!!window.callPhantom, !!window.__nightmare, !!window._phantom, !!window.__webdriver_script_fn, !!navigator.webdriver, !!document.$cdc_asdjflasutopfhvcZLmcfl_];
    } catch (err) {}
    if (re && re.indexOf(true) == -1) {
        setCookie("TMIS2", s, ".etoro.com", 14);
    } else {
        var resultsObj = {};
        for (var i = 0; i < ta.length; i++) {
            resultsObj[ta[i]] = re[i];
        }
        var img = new Image();
        img.src = 'https://etorologsapi.etoro.com/api/v2/monitoring?applicationIdentifier=JSCClient&LogEvents=' + encodeURIComponent(JSON.stringify([{
            ApplicationIdentifier: 'JSCClient',
            ApplicationVersion: '0.0.11',
            Level: "error",
            Message: "ClientSel",
            Results: resultsObj,
            Type: 'log'
        }]));
    }
})();
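So the script only sets a TMIS2 cookie when none of the headless-browser markers are present. In principle that value can be replayed from the shell; a rough sketch (the cookie value is the truncated s variable from the script above, and Cloudflare's other checks such as __cf_bm may still block the request):
$ curl 'https://www.etoro.com/app/sv-iframe' \
    -H 'user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 11_2_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.82 Safari/537.36' \
    -H 'cookie: TMIS2=9a7xxx......' \
    --compressed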

Lua - HTTPS / tlsv1 alert internal error

I am trying to interact with the Cleverbot API with Lua. I've got a key and a username, so I tested with Postman and it worked perfectly. Then I tried to do the same thing with Lua but I'm having a weird error.
This is the code:
local https = require("ssl.https")
local string = require("string")
local ltn12 = require ("ltn12")
local funcs = (loadfile "./libs/functions.lua")()
local function cleverbot(msg)
  local params = {
    ['user'] = 'SyR2nvN1cAxxxxxx',
    ['key'] = 'ckym8oDRNvpYO95GmTD14O9PuGxxxxxx',
    ['nick'] = 'cleverbot',
    ['text'] = tostring(msg),
  }
  local body = funcs.encode_table(params)
  local response = {}
  ok, code, headers, status = https.request({
    method = "POST",
    url = "https://cleverbot.io/1.0/ask/",
    headers = {
      ['Accept'] = '*/*',
      ['content-type'] = 'application/x-www-form-urlencoded',
      ['accept-encoding'] = 'gzip',
      ['content-length'] = tostring(#body),
    },
    source = ltn12.source.string(body),
    sink = ltn12.sink.table(response)
  })
  -- the prints must run after https.request, not inside the request table
  print(tostring(ok))
  print(tostring(code))
  print(tostring(headers))
  print(tostring(status))
  response = table.concat(response)
  if code ~= 200 then
    return
  end
  if #response > 0 then -- response is a plain string after table.concat
    return response
  end
end
However, when I call this, this is what those 4 prints show:
nil
tlsv1 alert internal error
nil
nil
I tried to connect using HTTP instead, but this is what happens:
1
301
table: 0xe5f7d60
HTTP/1.1 301 Moved Permanently
response is always empty. Please, what am I doing wrong?
Thanks!
My strong suspicion is that the target host (cleverbot.io) insists on getting a hostname through SNI (Server Name Indication), which the SSL library you use does not send. Usually servers fall back to a default site in that case, but of course they are free to let the handshake fail instead. That seems to be what Cloudflare (where cleverbot.io is hosted or proxied through) does.
Unfortunately there is no easy way to fix this, unless the underlying SSL libraries are changed to use SNI with the hostname cleverbot.io for the TLS handshake.
You can see the difference with openssl. This fails:
openssl s_client -connect cleverbot.io:443 -tls1_1
Succeeds:
openssl s_client -servername cleverbot.io -connect cleverbot.io:443 -tls1_1
This means that not only do the underlying SSL libraries have to support SNI, they also have to be told which server name to use by the Lua binding layer in between. LuaSec, for example, does not make use of SNI currently, AFAIK.
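As a quick cross-check from the shell: curl sends SNI for HTTPS by default, so the same endpoint can be reached that way (placeholders for the credentials; whether the API still answers like this is not something I can verify):
$ curl -v 'https://cleverbot.io/1.0/ask/' \
    --data 'user=...&key=...&nick=cleverbot&text=hello'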

perl Socket6 binding to only one wildcard address

I have the following program in Perl which is supposed to listen on an IPv6 address and, in theory, should serve both IPv4 clients (through IPv4-mapped IPv6 addresses) and IPv6 clients on a dual-stack box.
use Socket;
use Socket6;
@res = getaddrinfo('', 8086, AF_UNSPEC, SOCK_STREAM, 0, AI_PASSIVE);
my @ipv6Result;
while(scalar(@res) >= 5){
    my @currentResult = @res;
    ($family, $socktype, $proto, $saddr, $canonname, @res) = @res;
    if($family == AF_INET6){
        @ipv6Result = @currentResult;
    }
}
if(@ipv6Result){
    ($family, $socktype, $proto, $saddr, $canonname) = @ipv6Result;
}
socket(Socket_Handle, $family, $socktype, $proto) || next;
bind(Socket_Handle, $saddr) || die "bind: $!";
listen(Socket_Handle, 1) || die "listen: $!";
$paddr = accept(Client, Socket_Handle) || die "accept: $!";
After running this, netstat gave the following output:
c:\Perl\bin>netstat -nao | findstr 8086
TCP [::]:8086 [::]:0 LISTENING 2892
It seems it is listening only on the IPv6 wildcard address (::) and not on the IPv4 wildcard address (0.0.0.0). I was not able to connect to this server process from an IPv4 client, but was able to connect from an IPv6 client.
I tried a similar server program in Java as follows (on the same setup):
import java.net.ServerSocket;
public class CodeTCPServer {
public static void main(String[] args) throws Exception{
new ServerSocket(8086).accept();
}
}
The netstat output for this was as follows:
C:\Users\Administrator>netstat -nao | findstr 8086
TCP 0.0.0.0:8086 0.0.0.0:0 LISTENING 3820
TCP [::]:8086 [::]:0 LISTENING 3820
It seems to listen on both IPv6 and IPv4, and I am also able to connect to it from both IPv4 and IPv6 clients.
If I run the same Perl program on a Linux box it works fine, and I am able to connect to it from both IPv4 and IPv6 clients.
I wonder if something on Windows is stopping the Perl program from listening on both IPv4 and IPv6 (but then it should have stopped the Java program as well, for the same reason). If there were some problem with the program logic, it shouldn't have worked on Linux either.
(I am using Socket6 for now, as I somehow couldn't use Perl's built-in support for IPv6 on Windows; I am in communication with the authors to get it working on my setup.)
UPDATE:
I just tried following:
setsockopt (Socket_Handle, IPPROTO_IPV6, IPV6_V6ONLY, 0 ) or print("\nFailed to set IPV6_V6ONLY $! ");
in anticipation that the socket option defaults to 1 on this platform and I have to manually override it, but alas, I got the following error:
Your vendor has not defined Socket macro IPV6_V6ONLY, used at c:\socket6\Socket6Server.pl line 66
Now I wonder what 'vendor' means here: is it the Socket6 module / Perl vendor, or the OS vendor?
UPDATE2
I think the answer is given in http://metacpan.org/pod/IO::Socket::IP (for the V6Only argument), in the following lines:
If your platform does not support disabling this option but you still want to listen for both AF_INET and AF_INET6 connections you will have to create two listening sockets, one bound to each protocol.
And this worked for me! But then I need to check whether the platform supports disabling V6Only (protocol-aware code in my program :( ). Compared to that, Java does it automatically for me (checking and creating two sockets).
This requires the IPV6_V6ONLY socket option to be turned off. See the IO::Socket::IP source for details on how.
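For what it's worth, the platform difference comes down to the default value of that option: Windows enables IPV6_V6ONLY on new sockets by default, while on Linux the default is controlled by the net.ipv6.bindv6only sysctl, which is 0 on typical distributions. A quick check on the Linux box:
$ sysctl net.ipv6.bindv6only
net.ipv6.bindv6only = 0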
Also, in response to your comment
I am using Socket6 for now, as I couldn't use perl's inbuilt support for IPv6 somehow on windows, I am in communication with the authors to get it worked on my setup)
That's not strictly true, if memory serves. You were having trouble with IO::Socket::IP, but the plain Socket stuff should all be working fine. You don't need to be using Socket6, because Socket 2.006 already has everything that Socket6 does. You can replace your code with:
use Socket qw( :addrinfo SOCK_STREAM AF_INET6 );
my ($err, @res) = getaddrinfo('', 8086,
    { socktype => SOCK_STREAM, flags => AI_PASSIVE });
my $ipv6Result;
my $current;
while(@res){
    $current = shift @res;
    if($current->{family} == AF_INET6) {
        $ipv6Result = $current;
    }
}
if($ipv6Result) {
    $current = $ipv6Result;
}
# declare the socket once and reuse it; getaddrinfo() results use the 'protocol' key
socket(my $sock, $current->{family}, $current->{socktype}, $current->{protocol}) or die "socket: $!";
bind($sock, $current->{addr}) or die "bind: $!";
listen($sock, 1) or die "listen: $!";
my $paddr = accept(my $client, $sock) or die "accept: $!";

How to make an HTTP head request with headers in ruby?

I've been trying to use several libraries to make an HTTP HEAD request, but nothing seems to be working.
I've seen some examples, but nothing quite what I want.
Here's the Curl request, now I have to do it in ruby:
curl -XHEAD -H "x-auth-user: myusername" -H "x-auth-key: mykey" "url"
Also, this is an HTTPS url, if that makes a difference.
Try this:
require 'net/http'

host = '...'         # just the host name, not the full URL
myusername = '...'
mykey = '...'

http = Net::HTTP.new(host, 443)
http.use_ssl = true  # the URL in the question is HTTPS
response = http.request_head('/', 'x-auth-user' => myusername, 'x-auth-key' => mykey)

Is there a way to attach Ruby Net::HTTP request to a specific IP address / network interface?

I'm looking for a way to use different IP addresses for each GET request with the standard Net::HTTP library. The server has 5 IP addresses, and the assumption is that some APIs block access when the request limit per IP is reached, so the only way around it is to use another address. I can't find anything about it in the Ruby docs.
For example, curl allows you to bind to a specific IP address (here in PHP):
$req = curl_init($url);
curl_setopt($req, CURLOPT_INTERFACE, 'ip.address.goes.here');
$result = curl_exec($req);
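(For reference, the command-line equivalent is curl's --interface option, which accepts an interface name or a local address:)
$ curl --interface 1.1.1.1 'http://www.ip-adress.com/'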
Is there any way to do it with the Net::HTTP library? As an alternative there is Curb (the Ruby curl binding), but that will be the last thing I'll try.
Suggestions / Ideas?
P.S. Here is the solution with Curb (with quick-and-dirty tests; the IPs have been replaced):
require 'rubygems'
require 'curb'
ip_addresses = [
  '1.1.1.1',
  '2.2.2.2',
  '3.3.3.3',
  '4.4.4.4',
  '5.5.5.5'
]

ip_addresses.each do |address|
  url = 'http://www.ip-adress.com/'
  c = Curl::Easy.new(url)
  c.interface = address
  c.perform
  ip = c.body_str.scan(/<h2>My IP address is: ([\d\.]{1,})<\/h2>/).first
  puts "for #{address} got response: #{ip}"
end
I know this is old, but hopefully someone else finds this useful, as I needed this today. You can do the following:
http = Net::HTTP.new(uri.host, uri.port)
http.local_host = ip
response = http.request(request)
Note that I don't believe you can use Net::HTTP.start, as it doesn't accept local_host as an option.
There is in fact a way to do this if you monkey patch TCPSocket:
https://gist.github.com/800214
Curb is awesome but won't work with JRuby, so I've been looking into alternatives...
Doesn't look like you can do it with Net::HTTP. Here's the source:
http://github.com/ruby/ruby/blob/trunk/lib/net/http.rb
Line 644 is where the connection is opened:
s = timeout(@open_timeout) { TCPSocket.open(conn_address(), conn_port()) }
The third and fourth arguments to TCPSocket.open are local_address and local_port, and since they're not specified, it's not possible. Looks like you'll have to go with Curb.
Of course you can. I did it as below:
# remote_host can be IP or hostname
uri = URI.parse( "http://" + remote_host )
http = Net::HTTP.new( uri.host, uri.port )
request = Net::HTTP::Get.new(uri.request_uri)
request.initialize_http_header( { "Host" => domain })
response = http.request( request )
