Empty server response with cntlm proxy and basic auth params in url for yum repo - proxy

I am using cntlm proxy on a CentOS 7 server behind a corporate proxy which requires authentication.
Here is my cntlm.conf file :
Username user
Domain dom
Auth NTLMv2
PassNTLMv2 **********
Proxy corporateproxy:8080
NoProxy localhost, 127.0.0.*, 10.*, 192.168.*, 172.*, *.local
Listen 0.0.0.0:3128
Everything works OK, except for a yum repo that needs basic auth:
[datastax-cassandra]
name=datastax-cassandra
humanname=DataStax Repo for DataStax Enterprise
baseurl=http://auser@mail.com:s6pZ4cjORRAqDhG@rpm.datastax.com/enterprise
gpgcheck=0
enabled=1
When running
repoquery --plugins --queryformat '%{NAME}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-%{REPOID}' --pkgnarrow=available selinux-policy-devel policycoreutils-python
I get :
Could not match packages: failure: repodata/repomd.xml from datastax-cassandra: [Errno 256] No more mirrors to try.
http://auser@email.com:s6pZ4cjORRAqDhG@rpm.datastax.com/enterprise/repodata/repomd.xml: [Errno 14] curl#52 - "Empty reply from server"
For any other mirror server that does not need basic auth, everything is OK.
Any ideas (cntlm configuration, yum repo configuration, ...)?

There is a patch for cntlm with support for Basic HTTP Auth; see
cntlm-0.35.1 modified to support Basic HTTP Auth with HTTPAUTH parameter
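As a quick check of whether cntlm itself is dropping the credentials, you can request the repo metadata directly through the local proxy with curl, passing the basic auth via -u instead of embedding it in the URL (a sketch, using the obfuscated credentials from the question):
curl -v -x http://127.0.0.1:3128 -u 'auser@mail.com:s6pZ4cjORRAqDhG' http://rpm.datastax.com/enterprise/repodata/repomd.xml -o /dev/null
If this also comes back with an empty reply, cntlm is the component mangling the authenticated request (which is where the patch above comes in); if it succeeds, the problem is more likely in the yum repo configuration.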

Related

Can't connect to internet via SOCKS5 proxy server

I have configured a SOCKS5 proxy server in AWS with Dante and it runs fine.
When I try the following command in CMD
curl -L -x socks5://user:password@23.29.xx.xx:1313 http://www.google.com/
it works.
But when I configure it in my LAN proxy settings, I can't access the internet via any browser.
The proxy works for curl, but for browsers it doesn't.
Please help.
The reason for this is that I have enabled user authentication on the server. Unlike HTTP proxy servers, there is no sign-in popup for SOCKS5. This was fixed after I added the username and password.
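For reference, the server-side setting that causes this is the authentication method in Dante's configuration; a rough sketch of the relevant danted.conf line (assuming a recent Dante version, adjust to your setup):
socksmethod: username
Because browsers generally cannot prompt for SOCKS5 credentials, the options are to keep username authentication and only use clients that can embed user:password (such as curl), or to relax it to socksmethod: none if the proxy is restricted by other means (e.g. firewall rules).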

How would yum (on a CentOS host) work with a proxy that requires an SSL cert

I am trying to set up a proxy in /etc/yum.conf with HTTPS and an SSL cert.
Normally I would have proxy=http://x.x.x.x:80, provided that is the proxy address, and since my proxy does not require a username and password, that would work. But now I have a requirement to set up /etc/yum.conf with
proxy=https://x.x.x.x:443
and the CentOS host running yum can only talk to the internet via a proxy which accepts SSL cert based authentication.
So how would I install the SSL cert on the CentOS host for yum to work with a proxy on port 443 that requires an SSL cert?
It looks like you should be able to use the following config directives taken from the yum.conf manual page.
sslclientcert
Path to the SSL client certificate yum should use to connect to repos/remote sites. Defaults to none. Note that if you are using curl compiled against NSS (default in Fedora/RHEL), curl treats sslclientcert values with the same basename as identical. This version of yum will check that this isn't true and output an error when the repositories "foo" and "bar" violate this, like so:
sslclientcert basename shared between foo and bar
sslclientkey
Path to the SSL client key yum should use to connect to repos/remote sites. Defaults to none.
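Putting those directives together with the proxy setting, a minimal /etc/yum.conf sketch might look like the following; the certificate paths are placeholders, and whether your proxy accepts the client certificate this way depends on how it terminates TLS:
[main]
proxy=https://x.x.x.x:443
sslclientcert=/etc/pki/tls/certs/yum-client.crt
sslclientkey=/etc/pki/tls/private/yum-client.key
sslcacert=/etc/pki/tls/certs/proxy-ca.crt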

How to set a proxy on bosh-cli

I'm trying to upload a BOSH release into the director. I use a VirtualBox environment and I'm behind a corporate proxy.
Even though I've tried to set the proxy with
export https_proxy=http://myproxy:3128
or with
export BOSH_ALL_PROXY=http://myproxy:3128
I never manage to download anything.
Does anyone know how to do this?
MBP-de-Olivier:bosh-deployment olivier$ bosh -e vbox upload-release https://bosh.io/d/github.com/cloudfoundry/cf-release?v=283
Using environment '192.168.50.6' as client 'admin'
Task 13
Task 13 | 15:28:45 | Downloading remote release: Downloading remote release (00:00:05)
L Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Task 13 | 15:28:50 | Error: Failed to open TCP connection to bosh.io:443 (Address family not supported by protocol - socket(2) for "bosh.io" port 443)
Have you tried downloading the release locally and uploading it to your BOSH director from the local host?
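For example, a sketch of that workaround using the proxy address from the question:
curl -x http://myproxy:3128 -L -o cf-release-283.tgz 'https://bosh.io/d/github.com/cloudfoundry/cf-release?v=283'
bosh -e vbox upload-release ./cf-release-283.tgz
bosh upload-release accepts a local path, so only the curl step needs to go through the proxy.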
I think you should add the following lines:
export http_proxy=http://yourproxy:3128
export https_proxy=http://yourproxy:3128
Are you sure that 3128 is the correct port? This seems as if you are using cntlm (or another similar local proxy). If it is a local proxy: Is the service running? Can the service connect to the corporate proxy?
My guess is that the BOSH director does not know it must use a proxy. I'm under the impression you tried to configure the proxy at the bosh-cli level, but the download is performed by the director itself.
You could try to re-deploy the director with your proxy configuration. You can use a proxy ops file in order to do so.
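A sketch of what that re-deploy could look like, assuming a proxy ops file that sets http_proxy/https_proxy on the director (the ops-file name and variable names below are placeholders, not a specific file):
bosh create-env bosh.yml [your existing flags and ops files] \
  -o proxy-ops.yml \
  -v http_proxy=http://myproxy:3128 \
  -v https_proxy=http://myproxy:3128 \
  -v no_proxy=localhost,127.0.0.1,192.168.50.6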

Why can't I log in to the Harbor server from a client in the plain HTTP case?

Installed Harbor on a host, using plain HTTP.
The IP is 192.168.33.10.
I can log in to it from the Harbor server itself:
sudo docker login 192.168.33.10
And can access it from browser:
http://192.168.33.10
But I can't log in from another client (a Mac with Docker installed). The error message is:
docker login 192.168.33.10
Username: user1
Password: (my_password)
Error response from daemon: Get https://192.168.33.10/v2/: dial tcp 192.168.33.10:443: getsockopt: connection refused
The Harbor documentation has this notice:
https://github.com/vmware/harbor/blob/master/docs/installation_guide.md
IMPORTANT: The default installation of Harbor uses HTTP - as such, you will need to add the option --insecure-registry to your client's Docker daemon and restart the Docker service.
Both the Harbor host and the client host have /etc/docker/daemon.json set:
{ "insecure-registries":["192.168.33.10"] }
and Docker has been restarted. However, it does not work.
Without setting up Harbor with HTTPS for now, is there a way to access it from the client correctly?
Solution
It's unnecessary to set /etc/docker/daemon.json on the client. Docker for Mac has another way: add the insecure registry in the Docker preferences (Daemon settings), then Apply and Restart.
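As a quick sanity check from the client, you can hit the registry API over plain HTTP before involving the Docker daemon at all (a sketch using the credentials from the question):
curl -u user1:my_password http://192.168.33.10/v2/_catalog
If this returns a JSON list of repositories, the Harbor side is fine and the remaining problem is the Docker daemon still defaulting to HTTPS, i.e. the insecure-registry setting not being picked up.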

Apt-get: basic auth over HTTPS not working when using the server hostname

I have a local web server which hosts all my Debian packages. From another machine I am trying to do apt-get update/upgrade to fetch the package index and upgrade the machine over HTTPS, but with only basic authorization, as my web server is configured to do only basic auth and I do not want to change that to certificate-based auth.
apt-get update with HTTPS basic auth works fine (i.e. the client is able to skip the cert-based verification) when I use the IP address of the web server, but as soon as I try to use the hostname of the web server it doesn't work; I keep getting the error "gnutls_handshake() failed: A TLS warning alert has been received."
Config for IP scenario which works with basic auth without certs
APT Config under apt.conf.d with IP:
Debug::Acquire::https "true";
Acquire::https::10.2.20.1 {
Verify-Host "false";
Verify-Peer "false";
};
source.list.d with IP:
deb [arch=amd64] https://username:password@10.2.20.1:443/foo bar test
Debugs when it works
0% [Working]* About to connect() to 10.2.20.1 port 443 (#0)
* Trying 10.2.20.1... * connected
* found 164 certificates in /etc/ssl/certs/ca-certificates.crt
* server certificate verification SKIPPED
* Server auth using Basic with user 'username'
> GET /foo/dists/bar/Release.gpg HTTP/1.1
Authorization: Basic
Config for hostname scenario doesn't work with basic auth without certs
APT Config under apt.conf.d with hostname:
Debug::Acquire::https "true";
Acquire::https::my-foo-test.com {
Verify-Host "false";
Verify-Peer "false";
};
source.list.d with hostname:
deb [arch=amd64] https://username:password@my-foo-test.com:443/foo bar test
Debug with TLS warning when hostname is used
root@my:~# apt-get update
0% [Working]* About to connect() to my-foo-test.com port 443 (#0)
* Trying 10.2.20.1... * connected
* found 164 certificates in /etc/ssl/certs/ca-certificates.crt
* gnutls_handshake() failed: A TLS warning alert has been received.
* Closing connection #0
Ign https://my-foo-test.com repo Release.gpg
I have resolved the hostname to the IP locally (via the /etc/hosts file) on the machine where I am running apt-get update.
Entry from /etc/hosts file
10.2.20.1 my-foo-test.com
I even tried the command below, but it didn't work; putting the same options into apt.conf.d/ didn't work either:
apt-get update -o Debug::Acquire::https=true -o Acquire::https::Verify-Host=false -o Acquire::https::Verify-Peer=false -o Dir::Etc::SourceList="/etc/apt/sources.list.d/mysource.list" update
Thanks for the help!
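One way to narrow this down: GnuTLS (used by apt's https method here) aborts on TLS warning alerts that other clients ignore, and with name-based setups the alert is often the server responding to the SNI hostname with unrecognized_name. You can reproduce the handshake outside apt (a sketch, assuming gnutls-bin and openssl are installed):
gnutls-cli -p 443 my-foo-test.com
openssl s_client -connect 10.2.20.1:443 -servername my-foo-test.com
If the warning alert only appears when the servername is sent, fixing the web server's virtual host / certificate so that it recognises my-foo-test.com usually lets the hostname-based source work without the Verify-Host/Verify-Peer overrides.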
