webpage download using cURL utility - proxy cycle issue - windows

I am trying to access google.com from my work using cURL for Windows 32-bit (with SSH version). I am connecting via my company's proxy server, but I am getting a "400 proxy cycle detected" error. Could someone please let me know why I am getting this error? The command and error message are as follows (proxy IP changed to XXXX):
Command:
%curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -v --proxy-ntlm XXX.XXX.XXX.XXX:8080 -U name:password -I http://www.google.com
Output:
Enter proxy password for user 'name':
* Rebuilt URL to: XXX.XXX.XXX.XXX:8080/
* About to connect() to XXX.XXX.XXX.XXX port 8080 (#0)
* Trying XXX.XXX.XXX.XXX...
* Adding handle: conn: 0xcb0520
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0xcb0520) send_pipe: 1, recv_pipe: 0
* Connected to XXX.XXX.XXX.XXX (XXX.XXX.XXX.XXX) port 8080 (#0)
> HEAD / HTTP/1.1
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre
> Host: XXX.XXX.XXX.XXX:8080
> Accept: */*
>
< HTTP/1.1 400 Cycle Detected
HTTP/1.1 400 Cycle Detected
< Date: Mon, 25 Nov 2013 11:56:06 GMT
Date: Mon, 25 Nov 2013 11:56:06 GMT
< Via: 1.1 localhost.localdomain
Via: 1.1 localhost.localdomain
< Cache-Control: no-store
Cache-Control: no-store
< Content-Type: text/html
Content-Type: text/html
< Content-Language: en
Content-Language: en
< Content-Length: 288
Content-Length: 288
<
* Connection #0 to host XXX.XXX.XXX.XXX left intact
* Rebuilt URL to: http://www.google.com/
* Adding handle: conn: 0xcb12f8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 1 (0xcb12f8) send_pipe: 1, recv_pipe: 0
* About to connect() to www.google.com port 80 (#1)
* Trying 173.194.115.50...
* Connection refused
* Trying 173.194.115.51...
* Connection refused
* Trying 173.194.115.49...
* Connection refused
* Trying 173.194.115.48...
* Connection refused
* Trying 173.194.115.52...
* Connection refused
* Failed connect to www.google.com:80; Connection refused
* Closing connection 1
curl: (7) Failed connect to www.google.com:80; Connection refused
For what it's worth, I am able to connect to google.com via a browser using the said proxy address. And I am sure that I am entering the password (for the proxy) correctly.

You have to set the proxy via the --proxy (or -x) parameter, not via --proxy-ntlm, which is only a flag telling curl to use NTLM authentication against the proxy. Try this, please:
curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -L --proxy http://xxx.xxx.xxx.xxx:8080 --proxy-ntlm -U name:password http://www.google.com
If you end up in a new redirect cycle, you can try without the -L parameter or set the --max-redirs parameter.
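For example, a minimal sketch that caps the number of redirects followed (the limit of 5 is an arbitrary choice here):
curl -L --max-redirs 5 --proxy http://xxx.xxx.xxx.xxx:8080 --proxy-ntlm -U name:password http://www.google.com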
cURL manpage

I believe you are being knocked back due to authentication. Your work proxy likely requires authentication before it will allow you to access websites through it.
If your work uses Active Directory SSO (Single Sign On), try the following with your domain username and password:
curl --ntlm --user username:password http://www.google.com
If not, try the following for basic auth:
curl --user username:password http://www.google.com
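If it is the proxy itself (rather than the target site) that demands NTLM, the same idea applies with the proxy-flavoured options; a hedged sketch, where the proxy address and the Windows domain MYDOMAIN are placeholders:
curl --proxy http://xxx.xxx.xxx.xxx:8080 --proxy-ntlm --proxy-user "MYDOMAIN\name:password" -I http://www.google.com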

Related

socat localhost proxy with 'Connection refused'

I want to play with socat to set up a localhost proxy, which could redirect my requests from the local proxy I set up to a remote server.
Below are the 2 commands I tried (found via Google).
Command 1: works correctly on localhost
Start the socat server:
socat -d TCP6-LISTEN:8080,fork,reuseaddr TCP4:drsol.com:80
Client connects to the socat server:
curl -I -6 http://localhost:8080/
HTTP/1.1 200 OK
Date: Sun, 25 Dec 2022 13:06:11 GMT
Server: Apache/2.2.15 (CentOS)
Last-Modified: Tue, 10 Jun 2014 17:35:11 GMT
ETag: "2482ce4-2-4fb7ebff1050b"
Accept-Ranges: bytes
Content-Length: 2
Connection: close
Content-Type: text/html; charset=UTF-8
Command 2: 'Connection refused' error on the localhost server terminal
Set up the socat server:
socat TCP6-LISTEN:8000 PROXY:localhost:drsol.com:80,proxyport=8080
Connect to the socat server:
curl -v http://localhost:8000/
* STATE: INIT => CONNECT handle 0x55fce853a968; line 1789 (connection #-5000)
* Uses proxy env variable no_proxy == 'localhost,127.0.0.0/8,::1'
* Added connection 0. The cache now contains 1 members
* family0 == v4, family1 == v6
* Trying 127.0.0.1:8000...
* STATE: CONNECT => CONNECTING handle 0x55fce853a968; line 1850 (connection #0)
* Connected to localhost (127.0.0.1) port 8000 (#0)
* STATE: CONNECTING => PROTOCONNECT handle 0x55fce853a968; line 1982 (connection #0)
* STATE: PROTOCONNECT => DO handle 0x55fce853a968; line 2003 (connection #0)
> GET / HTTP/1.1
> Host: localhost:8000
> User-Agent: curl/7.80.0
> Accept: */*
>
* STATE: DO => DID handle 0x55fce853a968; line 2099 (connection #0)
* STATE: DID => PERFORMING handle 0x55fce853a968; line 2218 (connection #0)
* STATE: PERFORMING => DONE handle 0x55fce853a968; line 2417 (connection #0)
* multi_done
* Empty reply from server
* The cache now contains 0 members
* Closing connection 0
* Expire cleared (transfer 0x55fce853a968)
curl: (52) Empty reply from server
On the socat server terminal, "Connection refused":
2022/12/25 21:10:42 socat[46697] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
My question is: what is the difference between command 1 and command 2, and why did the connection refused happen on command 2?
(I am new to socat and still learning it.)
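(For illustration only, a minimal sketch of the shape the PROXY address expects: it wants an actual HTTP CONNECT proxy listening at the given proxyport, not a plain TCP forwarder; 127.0.0.1:3128 below is a placeholder for such a proxy.)
socat TCP6-LISTEN:8000,fork,reuseaddr PROXY:127.0.0.1:drsol.com:80,proxyport=3128
curl -v http://localhost:8000/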

curl post request is not working with option --http2, but it works fine when I use --http2-prior-knowledge

I have created a Spring Boot application with Tomcat 9.0.16, Spring Boot 2.1.3.RELEASE, and JDK 1.8.
When I make a curl POST request with --http2, it says "curl: (56) Recv failure: Connection reset by peer",
but when I use --http2-prior-knowledge it works fine.
My application.properties file:
server.port=8080
server.http2.enabled=true
and config file:
@Bean
public WebServerFactoryCustomizer tomcatCustomizer() {
    return (container) -> {
        if (container instanceof TomcatServletWebServerFactory) {
            ((TomcatServletWebServerFactory) container)
                .addConnectorCustomizers((connector) -> {
                    // register the h2c upgrade protocol on the Tomcat connector
                    connector.addUpgradeProtocol(new Http2Protocol());
                });
        }
    };
}
For
curl -vvv --http2 -H 'Content-Type: application/json' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
the curl logs are:
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc78a808a00)
* Expire in 200 ms for 4 (transfer 0x7fc78a808a00)
* Connected to localhost (::1) port 8080 (#0)
> POST /save HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* upload completely sent off: 702 out of 702 bytes
< HTTP/1.1 101
< Connection: Upgrade
< Upgrade: h2c
< Date: Wed, 27 Mar 2019 12:29:18 GMT
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
* Recv failure: Connection reset by peer
* Failed receiving HTTP2 data
* Send failure: Broken pipe
* Failed sending HTTP2 data
* Connection #0 to host localhost left intact
curl: (56) Recv failure: Connection reset by peer
curl -vvv --http2-prior-knowledge -H 'Content-Type: application/json' -H 'Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
* Expire in 0 ms for 6 (transfer 0x7fc5c0808a00)
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc5c0808a00)
* Expire in 200 ms for 4 (transfer 0x7fc5c0808a00)
* Connected to localhost (::1) port 8080 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fc5c0808a00)
> POST /save HTTP/2
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
< HTTP/2 200
< content-type: application/json;charset=UTF-8
< date: Wed, 27 Mar 2019 12:32:26 GMT
<
* Connection #0 to host localhost left intact
true%
You cannot use a POST method to perform an HTTP/1.1 upgrade, so Tomcat is probably choking on your first request (curl --http2 ...) for that reason.
I am the HTTP/2 implementer in Jetty, and Jetty also does not upgrade to HTTP/2 in that case, although it responds with HTTP/1.1 200 to the request, rather than choking.
Converting the first request to a GET without content, the upgrade succeeds in Jetty with an HTTP/1.1 101 response, as expected.
The second request is not an HTTP/1.1 upgrade, but a prior knowledge HTTP/2 request; there is no upgrade and therefore no limitation as to what HTTP method you can use, so the request succeeds in both Jetty and Tomcat.
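A hedged way to see the difference from the command line against the same endpoint (the /save handler may well reject a GET at the application level; the interesting part is only whether the protocol upgrade itself succeeds):
curl -v --http2 http://localhost:8080/save
curl -v --http2-prior-knowledge -H 'Content-Type: application/json' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'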

use polipo to convert shadowsocks into an HTTP proxy

My ssserver is started by docker image oddrationale/docker-shadowsocks:
docker run -d -p 1984:1984 oddrationale/docker-shadowsocks -s 0.0.0.0 -p 1984 -k paaassswwword -m aes-256-cfb
Then I use the sslocal command to get a local proxy.
sslocal -c /etc/shadowsocks.json -d start --pid-file /data/tmp/sslocal.pid --log-file /data/tmp/sslocal.log
/etc/shadowsocks.json is like this:
{
  "server": "127.0.0.1",
  "server_port": 1984,
  "local_address": "127.0.0.1",
  "local_port": 1080,
  "password": "paaassswwword",
  "timeout": 600,
  "method": "aes-256-cfb"
}
I use polipo to convert shadowsocks into an HTTP proxy; my /etc/polipo/config is:
proxyAddress = 0.0.0.0
socksProxyType = socks5
socksParentProxy = 127.0.0.1:1080
daemonise = true
pidFile = /data/tmp/polipo.pid
logFile = /data/tmp/polipo.log
I edited the iptables rules so that port 8123 can be accessed. I can access http://host:8123 in a browser, and the proxy looks like it works:
http_proxy=http://host:8123 curl -v google.com
the output is like this:
* Rebuilt URL to: google.com/
* Trying host...
* Connected to host (host) port 8123 (#0)
> GET HTTP://google.com/ HTTP/1.1
> Host: google.com
> User-Agent: curl/7.43.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
< HTTP/1.1 302 Found
< Content-Length: 262
< Date: Thu, 13 Apr 2017 09:52:34 GMT
< Cache-Control: private
< Content-Type: text/html; charset=UTF-8
< Referrer-Policy: no-referrer
< Location: http://www.google.com.sg/?gfe_rd=cr&ei=YkrvWPnOM-XLugTRgZDQBA
< Connection: keep-alive
<
<HTML><HEAD><meta http-equiv="content-type" content="text/html;charset=utf-8">
<TITLE>302 Moved</TITLE></HEAD><BODY>
<H1>302 Moved</H1>
The document has moved
here.
</BODY></HTML>
* Connection #0 to host host left intact
The command does not always run successfully, and sometimes I get the following error:
* Rebuilt URL to: google.com/
* Trying host...
* Connected to host (host) port 8123 (#0)
> GET HTTP://google.com/ HTTP/1.1
> Host: google.com
> User-Agent: curl/7.43.0
> Accept: */*
> Proxy-Connection: Keep-Alive
>
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer
The output of netstat -tlnp is:
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:1080 0.0.0.0:* LISTEN 5067/python
tcp 0 0 0.0.0.0:8123 0.0.0.0:* LISTEN 9704/polipo
tcp6 0 0 :::8388 :::* LISTEN 4238/docker-proxy
I really can't find the reason, thank you for your help.
Google uses https, not http. Try:
https_proxy=http://host:8123 curl -v https://www.google.com
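Equivalently, curl's -x/--proxy option applies to both http and https URLs, so a quick sketch with the same placeholder host would be:
curl -v -x http://host:8123 https://www.google.com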

Gitlab public repo clone fails with "401 Unauthorized"

I'm hoping someone can help me diagnose this issue. I'm running Gitlab 5.2 on a default Ubuntu 12.04 install with the latest ruby and git. It's mostly vanilla with the exception of some LDAP mapping modifications (username, display name).
I'm running into an error with Gitlab that I'm having trouble diagnosing. Whenever I attempt to clone a 'public' repo, instead of the expected (and working on CentOS with the same LDAP mapping modifications):
Started GET "/dd/lol.git/info/refs?service=git-upload-pack" for 127.0.0.1 at 2013-06-17 10:21:55 -0400
Started POST "/dd/lol.git/git-upload-pack" for 127.0.0.1 at 2013-06-17 10:21:55 -0400
I get (on Ubuntu):
Started GET "/dd/lol.git/info/refs?service=git-upload-pack" for 127.0.0.1 at 2013-06-17 10:26:13 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:13 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/objects/8c/4e72acdc72843492f55d5918f53dd12e5f1e43" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/objects/info/packs" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
On the client side I get consistent "401 Unauthorized" messages, then I'm prompted for a password. It doesn't seem to be related to Apache or Nginx proxying.
Client-side log:
git clone http://127.0.0.1:9292/dd/lol.git
Cloning into 'lol'...
* Couldn't find host 127.0.0.1 in the .netrc file; using defaults
* About to connect() to 127.0.0.1 port 9292 (#0)
* Trying 127.0.0.1...
* Adding handle: conn: 0x7fc610803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc610803000) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
> GET /dd/lol.git/info/refs?service=git-upload-pack HTTP/1.1
User-Agent: git/1.7.12.4 (Apple Git-37)
Host: 127.0.0.1:9292
Accept: */*
Accept-Encoding: gzip
Pragma: no-cache
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Last-Modified: Mon, 17 Jun 2013 14:33:31 GMT
< Expires: Fri, 01 Jan 1980 00:00:00 GMT
< Pragma: no-cache
< Cache-Control: no-cache, max-age=0, must-revalidate
< X-UA-Compatible: IE=Edge,chrome=1
< X-Request-Id: 0a9ec65cffb7888fb6fbc136171fa80a
< X-Runtime: 0.079635
< Date: Mon, 17 Jun 2013 14:33:31 GMT
< X-Content-Digest: 198141e92e2cf9bb83d1aa1022fdea885993f02e
< Age: 0
< X-Rack-Cache: stale, invalid, store
< Content-Length: 59
<
* Connection #0 to host 127.0.0.1 left intact
* Couldn't find host 127.0.0.1 in the .netrc file; using defaults
* Found bundle for host 127.0.0.1: 0x7fc6104155f0
* Re-using existing connection! (#0) with host 127.0.0.1
* Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
* Adding handle: conn: 0x7fc610803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc610803000) send_pipe: 1, recv_pipe: 0
> GET /dd/lol.git/HEAD HTTP/1.1
User-Agent: git/1.7.12.4 (Apple Git-37)
Host: 127.0.0.1:9292
Accept: */*
Accept-Encoding: gzip
Pragma: no-cache
* The requested URL returned error: 401 Unauthorized
* Closing connection 0
Any suggestions at all are very welcome; I'm not familiar with Gitlab and I'm currently a bit stumped.
Dmitry
Cloning with LDAP activated seems to be a recurring problem, especially over https:
issue 4288
issue 3890
issue 4129
A workaround is proposed here, and is related to the file lib/gitlab/backend/grack_auth.rb, but a final fix is still in progress.
Update: as of 5.3+ and 6.x, this should have been fixed.
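As a diagnostic, it may help to hit the smart-HTTP endpoint directly with curl (the URL is copied from the log above; the expected behaviour described here is an assumption about a correctly working public repo):
curl -v "http://127.0.0.1:9292/dd/lol.git/info/refs?service=git-upload-pack"
For a public repo this should come back as 200 with Content-Type: application/x-git-upload-pack-advertisement; the text/plain response in the trace above is likely what pushes git into the dumb-protocol requests (HEAD, objects/...) that then hit the 401.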

Changing HTTP status message using Sinatra

I'm writing a simple Sinatra app, and given that a user posts a request with specific data, I want to return an error '453' (custom error code) with a message CLIENT_ERROR, or something similar.
The problem is: looking into the Sinatra documentation and doing some testing, I couldn't find a way to set the response error message, only the response status.
So, if I set the Sinatra response
get '/' do
  response.status = 453
end
I get the error code right:
curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.3.1 codename Triple Espresso
<
* Connection #0 to host localhost left intact
* Closing connection #0
But what I want to have is:
< HTTP/1.1 453 CLIENT_ERROR
The same way I have
< HTTP/1.1 200 OK
When everything goes according to the plan.
Is there any way to do this using Sinatra/Rack?
The status message is generated by the server you are using, e.g. in Thin the messages are in Thin::HTTP_STATUS_CODES and the response line is generated in Thin::Response, and in WEBrick they are in WEBrick::HTTPStatus::StatusMessage and the response is generated in WEBrick::HTTPResponse.
If you know what server you are using, you could add your error to the appropriate hash.
With Thin:
require 'thin'
Thin::HTTP_STATUS_CODES[453] = "Client Error"
and the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.4.1 codename Chromeo
<
* Connection #0 to host localhost left intact
* Closing connection #0
and with WEBrick:
require 'webrick'
WEBrick::HTTPStatus::StatusMessage[453] = "Client Error"
which gives the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
localhost - - [13/Aug/2012:01:41:48 BST] "GET / HTTP/1.1" 453 0
- -> /
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-Xss-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Server: WEBrick/1.3.1 (Ruby/1.9.3/2012-04-20)
< Date: Mon, 13 Aug 2012 00:41:48 GMT
< Connection: Keep-Alive
<
* Connection #0 to host localhost left intact
* Closing connection #0
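Either way, a quick hedged check of just the status line (Sinatra get routes also answer HEAD requests, so -I works here):
curl -sI http://localhost:4567/ | head -n 1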
I would recommend not to use custom HTTP status codes. If you think you have something of general use, consider writing an Internet Draft and going through the IETF specification process.
