GitLab public repo clone fails with "401 Unauthorized"

I'm hoping someone can help me diagnose this issue. I'm running GitLab 5.2 on a default Ubuntu 12.04 install with the latest Ruby and Git. It's mostly vanilla, apart from some LDAP mapping modifications (username, display name).
Whenever I attempt to clone a 'public' repo, instead of the expected output (which I do get on CentOS with the same LDAP mapping modifications):
Started GET "/dd/lol.git/info/refs?service=git-upload-pack" for 127.0.0.1 at 2013-06-17 10:21:55 -0400
Started POST "/dd/lol.git/git-upload-pack" for 127.0.0.1 at 2013-06-17 10:21:55 -0400
I get (on Ubuntu):
Started GET "/dd/lol.git/info/refs?service=git-upload-pack" for 127.0.0.1 at 2013-06-17 10:26:13 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:13 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/HEAD" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/objects/8c/4e72acdc72843492f55d5918f53dd12e5f1e43" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
Started GET "/dd/lol.git/objects/info/packs" for 127.0.0.1 at 2013-06-17 10:26:15 -0400
On the client side I get consistent "401 Unauthorized" messages, then I'm prompted for a password. It doesn't seem to be related to Apache or Nginx proxying.
Client-side log:
git clone http://127.0.0.1:9292/dd/lol.git
Cloning into 'lol'...
* Couldn't find host 127.0.0.1 in the .netrc file; using defaults
* About to connect() to 127.0.0.1 port 9292 (#0)
* Trying 127.0.0.1...
* Adding handle: conn: 0x7fc610803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc610803000) send_pipe: 1, recv_pipe: 0
* Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
> GET /dd/lol.git/info/refs?service=git-upload-pack HTTP/1.1
User-Agent: git/1.7.12.4 (Apple Git-37)
Host: 127.0.0.1:9292
Accept: */*
Accept-Encoding: gzip
Pragma: no-cache
< HTTP/1.1 200 OK
< Content-Type: text/plain; charset=utf-8
< Last-Modified: Mon, 17 Jun 2013 14:33:31 GMT
< Expires: Fri, 01 Jan 1980 00:00:00 GMT
< Pragma: no-cache
< Cache-Control: no-cache, max-age=0, must-revalidate
< X-UA-Compatible: IE=Edge,chrome=1
< X-Request-Id: 0a9ec65cffb7888fb6fbc136171fa80a
< X-Runtime: 0.079635
< Date: Mon, 17 Jun 2013 14:33:31 GMT
< X-Content-Digest: 198141e92e2cf9bb83d1aa1022fdea885993f02e
< Age: 0
< X-Rack-Cache: stale, invalid, store
< Content-Length: 59
<
* Connection #0 to host 127.0.0.1 left intact
* Couldn't find host 127.0.0.1 in the .netrc file; using defaults
* Found bundle for host 127.0.0.1: 0x7fc6104155f0
* Re-using existing connection! (#0) with host 127.0.0.1
* Connected to 127.0.0.1 (127.0.0.1) port 9292 (#0)
* Adding handle: conn: 0x7fc610803000
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc610803000) send_pipe: 1, recv_pipe: 0
> GET /dd/lol.git/HEAD HTTP/1.1
User-Agent: git/1.7.12.4 (Apple Git-37)
Host: 127.0.0.1:9292
Accept: */*
Accept-Encoding: gzip
Pragma: no-cache
* The requested URL returned error: 401 Unauthorized
* Closing connection 0
Any suggestions at all are very welcome; I'm not familiar with GitLab and I'm currently a bit stumped.
Dmitry

Cloning with LDAP activated seems to be a recurring problem, especially over https:
issue 4288
issue 3890
issue 4129
A workaround is proposed here, related to the file lib/gitlab/backend/grack_auth.rb, but a final fix is still in progress.
Update: as of 5.3+ and 6.x, this should be fixed.
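As a quick check after applying the workaround or upgrading, you can probe the same URLs git requests first when cloning over HTTP (a minimal Ruby sketch using stdlib net/http; the host and repo path are the ones from the logs above, so adjust them to your install):

```ruby
require 'net/http'

# URLs from the transcript above: git asks for info/refs first, then HEAD.
def clone_probe_uris(base, repo)
  ["#{base}/#{repo}/info/refs?service=git-upload-pack",
   "#{base}/#{repo}/HEAD"].map { |u| URI(u) }
end

# On a healthy install, a public repo answers both without credentials;
# with the bug above, the HEAD request comes back 401.
def anonymous_clone_ok?(base = 'http://127.0.0.1:9292', repo = 'dd/lol.git')
  clone_probe_uris(base, repo).none? do |uri|
    Net::HTTP.get_response(uri).is_a?(Net::HTTPUnauthorized)
  end
end
```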

Related

curl POST request is not working with option --http2, but works fine with --http2-prior-knowledge

I have created a Spring Boot application with Tomcat 9.0.16, Spring Boot 2.1.3.RELEASE, and JDK 1.8.
When I make a curl POST request with --http2, it fails with "curl: (56) Recv failure: Connection reset by peer",
but when I use --http2-prior-knowledge it works fine.
My application.properties file:
server.port=8080
server.http2.enabled=true
and config file:
@Bean
public WebServerFactoryCustomizer tomcatCustomizer() {
    return (container) -> {
        if (container instanceof TomcatServletWebServerFactory) {
            ((TomcatServletWebServerFactory) container)
                    .addConnectorCustomizers((connector) -> {
                        connector.addUpgradeProtocol(new Http2Protocol());
                    });
        }
    };
}
For curl -vvv --http2 -H 'Content-Type: application/json' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
the curl logs are:
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc78a808a00)
* Expire in 200 ms for 4 (transfer 0x7fc78a808a00)
* Connected to localhost (::1) port 8080 (#0)
> POST /save HTTP/1.1
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Connection: Upgrade, HTTP2-Settings
> Upgrade: h2c
> HTTP2-Settings: AAMAAABkAARAAAAAAAIAAAAA
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* upload completely sent off: 702 out of 702 bytes
< HTTP/1.1 101
< Connection: Upgrade
< Upgrade: h2c
< Date: Wed, 27 Mar 2019 12:29:18 GMT
* Received 101
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
* Recv failure: Connection reset by peer
* Failed receiving HTTP2 data
* Send failure: Broken pipe
* Failed sending HTTP2 data
* Connection #0 to host localhost left intact
curl: (56) Recv failure: Connection reset by peer
curl -vvv --http2-prior-knowledge -H 'Content-Type: application/json' -H 'Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd' -H 'cache-control: no-cache' -XPOST http://localhost:8080/save -d '{"xyz":"xyz"}'
* Expire in 0 ms for 6 (transfer 0x7fc5c0808a00)
* Trying ::1...
* TCP_NODELAY set
* Expire in 150000 ms for 3 (transfer 0x7fc5c0808a00)
* Expire in 200 ms for 4 (transfer 0x7fc5c0808a00)
* Connected to localhost (::1) port 8080 (#0)
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fc5c0808a00)
> POST /save HTTP/2
> Host: localhost:8080
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Type: application/json
> Postman-Token: 52e0708b-ce97-4baa-a567-2dabc675f3dd
> cache-control: no-cache
> Content-Length: 702
>
* We are completely uploaded and fine
* Connection state changed (MAX_CONCURRENT_STREAMS == 200)!
< HTTP/2 200
< content-type: application/json;charset=UTF-8
< date: Wed, 27 Mar 2019 12:32:26 GMT
<
* Connection #0 to host localhost left intact
true%
You cannot use a POST method to perform an HTTP/1.1 upgrade, so Tomcat is probably choking on your first request (curl --http2 ...) for that reason.
I am the HTTP/2 implementer in Jetty, and Jetty also does not upgrade to HTTP/2 in that case, although it responds with HTTP/1.1 200 to the request, rather than choking.
Converting the first request to a GET without content, the upgrade succeeds in Jetty with an HTTP/1.1 101 response, as expected.
The second request is not an HTTP/1.1 upgrade, but a prior knowledge HTTP/2 request; there is no upgrade and therefore no limitation as to what HTTP method you can use, so the request succeeds in both Jetty and Tomcat.

Elasticsearch query to return the version of Kibana

I am running Kibana 5.4.0.
I want to know if there is an Elasticsearch GET API query that I can run against the .kibana index that returns the version of Kibana I am running.
Why query Elasticsearch? Kibana itself offers that capability:
cinhtau#omega:~> curl -v localhost:5601/status
* About to connect() to localhost port 5601 (#0)
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 5601 (#0)
> GET /status HTTP/1.1
> User-Agent: curl/7.29.0
> Host: localhost:5601
> Accept: */*
>
< HTTP/1.1 302 Found
< location: https://localhost:5601/status
< kbn-name: kibana
< kbn-version: 5.6.1
< kbn-xpack-sig: 83f97b6a01fc027688f430e60e935b27
< cache-control: no-cache
< content-length: 0
< Date: Fri, 22 Sep 2017 08:51:43 GMT
< Connection: keep-alive
<
* Connection #0 to host localhost left intact
From the kbn-version response header you can see it is 5.6.1. There is a way to query it from Elasticsearch, but IMHO it doesn't make sense to ask Elasticsearch for information about Kibana.
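If you'd rather read it programmatically than with curl, the version can be pulled from the kbn-version header (a minimal Ruby sketch using stdlib net/http; host and port are the defaults shown above):

```ruby
require 'net/http'

# The version travels in the kbn-version header of any /status response,
# even a 302 redirect like the one in the transcript above.
def kibana_version_from(response)
  response['kbn-version']
end

# Live usage (requires a running Kibana):
#   res = Net::HTTP.start('localhost', 5601) { |http| http.head('/status') }
#   puts kibana_version_from(res)
```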

Ruby http, net/http, httpclient: can't parse www.victoriassecret.com

I am using the httpclient gem. It works fine on Windows, but after moving to AWS EC2 and trying it on https://victoriassecret.com I get this response:
= Response
HTTP/1.1 920 Unknown
Content-Type: text/html
Date: Wed, 21 Oct 2015 21:42:51 GMT
Connection: Keep-Alive
Content-Length: 23
<h1>File not found</h1>#<HTTP::Message:0x000000023f5168
#http_body=
#<HTTP::Message::Body:0x000000023f50a0
#body="<h1>File not found</h1>",
#chunk_size=nil,
#positions=nil,
#size=0>,
#http_header=
#<HTTP::Message::Headers:0x000000023f5140
#body_charset=nil,
#body_date=nil,
#body_encoding=#<Encoding:ASCII-8BIT>,
#body_size=0,
#body_type=nil,
#chunked=false,
#dumped=false,
#header_item=
[["Content-Type", "text/html"],
["Date", "Wed, 21 Oct 2015 21:42:51 GMT"],
["Connection", "Keep-Alive"],
["Content-Length", "23"]],
#http_version="1.1",
#is_request=false,
#reason_phrase="Unknown",
#request_absolute_uri=nil,
#request_method="GET",
#request_query=nil,
#request_uri=
#<URI::HTTPS:0x000000023f58c0 URL:https://www.victoriassecret.com/pink/new-and-now>,
#status_code=920>,
#peer_cert=
#<OpenSSL::X509::Certificate: subject=#<OpenSSL::X509::Name:0x000000024ebe00>, issuer=#<OpenSSL::X509::Name:0x000000024ebec8>, serial=#<OpenSSL::BN:0x000000024de110>, not_before=2015-05-27 00:00:00 UTC, not_after=2017-05-26 23:59:59 UTC>,
#previous=nil>
It does not work only with this website; httpclient get https://google.com, for example, works fine. On Windows I get a normal response from httpclient get https://www.victoriassecret.com, but when using the standard Net::HTTP library I get the same 920 response on Windows.
This isn't EC2-related. It's most likely due to the User-Agent header sent by the various HTTP library implementations.
For example, they clearly don't like 'wget':
curl -A "Wget/1.13.4 (linux-gnu)" -v https://www.victoriassecret.com
* Rebuilt URL to: https://www.victoriassecret.com/
* Trying 98.158.54.100...
* Connected to www.victoriassecret.com (98.158.54.100) port 443 (#0)
* TLS 1.2 # truncated
> GET / HTTP/1.1
> Host: www.victoriassecret.com
> User-Agent: Wget/1.13.4 (linux-gnu)
> Accept: */*
>
< HTTP/1.1 910 Unknown
< Content-Type: text/html
< Date: Thu, 22 Oct 2015 01:16:31 GMT
< Connection: Keep-Alive
< Content-Length: 23
<
* Connection #0 to host www.victoriassecret.com left intact
<h1>File not found</h1>%
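A quick way to test this theory from Ruby (the question's own stack) is to send the same request with a browser-like User-Agent via stdlib net/http; the agent string here is an illustrative placeholder:

```ruby
require 'net/http'

# Build a GET whose User-Agent mimics a browser; sites that answer
# 910/920 to default library/wget agents will often serve this normally.
def browser_like_get(url, agent = 'Mozilla/5.0 (X11; Linux x86_64)')
  req = Net::HTTP::Get.new(URI(url))
  req['User-Agent'] = agent
  req
end

# Live usage (network required):
#   uri = URI('https://www.victoriassecret.com/')
#   res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
#     http.request(browser_like_get(uri.to_s))
#   end
#   puts res.code
```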

webpage download using cURL utility - proxy cycle issue

I am trying to access google.com from work using cURL for Windows 32-bit (the SSL-enabled build). I am connecting via my company's proxy server, but I am getting a "400 Cycle Detected" error. Could someone please let me know why I am getting this error? The command and error message are as follows (proxy IP changed to XXXX):
Command:
%curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -v --proxy-ntlm XXX.XXX.XXX.XXX:8080 -U name:password -I http://www.google.com
Output:
Enter proxy password for user 'name':
* Rebuilt URL to: XXX.XXX.XXX.XXX:8080/
* About to connect() to XXX.XXX.XXX.XXX port 8080 (#0)
* Trying XXX.XXX.XXX.XXX...
* Adding handle: conn: 0xcb0520
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0xcb0520) send_pipe: 1, recv_pipe: 0
* Connected to XXX.XXX.XXX.XXX (XXX.XXX.XXX.XXX) port 8080 (#0)
> HEAD / HTTP/1.1
> User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre
> Host: XXX.XXX.XXX.XXX:8080
> Accept: */*
>
< HTTP/1.1 400 Cycle Detected
HTTP/1.1 400 Cycle Detected
< Date: Mon, 25 Nov 2013 11:56:06 GMT
Date: Mon, 25 Nov 2013 11:56:06 GMT
< Via: 1.1 localhost.localdomain
Via: 1.1 localhost.localdomain
< Cache-Control: no-store
Cache-Control: no-store
< Content-Type: text/html
Content-Type: text/html
< Content-Language: en
Content-Language: en
< Content-Length: 288
Content-Length: 288
<
* Connection #0 to host XXX.XXX.XXX.XXX left intact
* Rebuilt URL to: http://www.google.com/
* Adding handle: conn: 0xcb12f8
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 1 (0xcb12f8) send_pipe: 1, recv_pipe: 0
* About to connect() to www.google.com port 80 (#1)
* Trying 173.194.115.50...
* Connection refused
* Trying 173.194.115.51...
* Connection refused
* Trying 173.194.115.49...
* Connection refused
* Trying 173.194.115.48...
* Connection refused
* Trying 173.194.115.52...
* Connection refused
* Failed connect to www.google.com:80; Connection refused
* Closing connection 1
curl: (7) Failed connect to www.google.com:80; Connection refused
For what it's worth, I am able to connect to google.com via browser using the said proxy address. And I am sure that I am giving the password(for the proxy) correctly.
You should set the proxy via the --proxy (or -x) parameter, not via --proxy-ntlm; --proxy-ntlm is a flag that takes no argument, which is why curl treated your proxy address as the URL to fetch ("Rebuilt URL to: XXX.XXX.XXX.XXX:8080/"). Try this, please:
curl -A "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b7pre) Gecko/20100925 Firefox/4.0b7pre" -L --proxy http://xxx.xxx.xxx.xxx:8080 --proxy-ntlm -U name:password http://www.google.com
If you end up in a redirect cycle, you can try without the -L parameter or set the --max-redirs parameter.
cURL manpage
I believe you are being knocked back due to authentication. Your work proxy likely requires authentication before it will allow you to access websites through it.
If your work uses Active Directory SSO (Single Sign On), try the following with your domain username and password:
curl --ntlm --user username:password http://www.google.com
If not, try the following for basic auth:
curl --user username:password http://www.google.com
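For reference, curl --user just adds a Basic Authorization header, and curl -U/--proxy-user adds the analogous Proxy-Authorization header. The same headers built with Ruby's stdlib (placeholder credentials):

```ruby
require 'net/http'

# curl --user sends "Authorization: Basic base64(user:pass)";
# a proxy that wants credentials expects Proxy-Authorization instead.
req = Net::HTTP::Get.new(URI('http://www.google.com/'))
req.basic_auth('username', 'password')        # Authorization header
req.proxy_basic_auth('username', 'password')  # Proxy-Authorization header
```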

Changing HTTP status message using Sinatra

I'm writing a simple Sinatra app, and when a user posts a request with specific data, I want to return error 453 (a custom status code) with the message CLIENT_ERROR, or something similar.
The problem is that, looking through the Sinatra documentation and doing some testing, I couldn't find a way to set the response status message, only the response status code.
So, if I set the Sinatra response:
get '/' do
response.status = 453
end
I get the error code right:
curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
> GET / HTTP/1.1
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.3.1 codename Triple Espresso
<
* Connection #0 to host localhost left intact
* Closing connection #0
But what I want to have is:
< HTTP/1.1 453 CLIENT_ERROR
The same way I have
< HTTP/1.1 200 OK
When everything goes according to the plan.
Is there any way to do this using Sinatra/Rack?
The status message is generated by the server you are using. In Thin, the messages are in Thin::HTTP_STATUS_CODES and the response line is generated in Thin::Response; in WEBrick, they are in WEBrick::HTTPStatus::StatusMessage and the response is generated in WEBrick::HTTPResponse.
If you know what server you are using, you could add your error to the appropriate hash.
With Thin:
require 'thin'
Thin::HTTP_STATUS_CODES[453] = "Client Error"
and the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-XSS-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Connection: keep-alive
< Server: thin 1.4.1 codename Chromeo
<
* Connection #0 to host localhost left intact
* Closing connection #0
and with WEBrick:
require 'webrick'
WEBrick::HTTPStatus::StatusMessage[453] = "Client Error"
which gives the output:
$ curl -v localhost:4567
* About to connect() to localhost port 4567 (#0)
* Trying 127.0.0.1... connected
* Connected to localhost (127.0.0.1) port 4567 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.19.7 (universal-apple-darwin10.0) libcurl/7.19.7 OpenSSL/0.9.8r zlib/1.2.3
> Host: localhost:4567
> Accept: */*
>
localhost - - [13/Aug/2012:01:41:48 BST] "GET / HTTP/1.1" 453 0
- -> /
< HTTP/1.1 453 Client Error
< X-Frame-Options: sameorigin
< X-Xss-Protection: 1; mode=block
< Content-Type: text/html;charset=utf-8
< Content-Length: 0
< Server: WEBrick/1.3.1 (Ruby/1.9.3/2012-04-20)
< Date: Mon, 13 Aug 2012 00:41:48 GMT
< Connection: Keep-Alive
<
* Connection #0 to host localhost left intact
* Closing connection #0
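The mechanism is the same in both servers: the Rack response carries only the integer status, and the server looks the reason phrase up in a code-to-message table when writing the status line. A toy sketch of that lookup (the names here are illustrative, not Thin's or WEBrick's actual internals):

```ruby
# Illustrative code=>message table, mirroring Thin::HTTP_STATUS_CODES /
# WEBrick::HTTPStatus::StatusMessage.
STATUS_MESSAGES = { 200 => 'OK', 404 => 'Not Found' }

# How a server builds the status line from the table; codes missing
# from the table get a bare "HTTP/1.1 <code>" line, as seen above.
def status_line(code, table = STATUS_MESSAGES)
  "HTTP/1.1 #{code} #{table[code]}".rstrip
end

# Registering the custom code supplies the phrase:
STATUS_MESSAGES[453] = 'CLIENT_ERROR'
```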
I would recommend not using custom HTTP status codes, though. If you think you have something of general use, consider writing an Internet Draft and going through the IETF specification process.
