Apt-get: Basic auth over HTTPS works with the server IP but not the hostname - apt

I have a local web server which hosts all my Debian packages. From another machine I am trying to run apt-get update/upgrade over HTTPS to fetch the package index and upgrade the machine, using only Basic authorization: my web server is configured to do only Basic auth, and I do not want to change that to certificate-based auth.
apt-get update with HTTPS Basic auth works fine (i.e. the client is able to skip the certificate-based verification) when I use the IP address of the web server, but as soon as I use the hostname of the web server it stops working and I keep getting the error "gnutls_handshake() failed: A TLS warning alert has been received."
Config for the IP scenario, which works with Basic auth without certs
APT Config under apt.conf.d with IP:
Debug::Acquire::https "true";
Acquire::https::10.2.20.1 {
Verify-Host "false";
Verify-Peer "false";
};
sources.list.d with IP:
deb [arch=amd64] https://username:password@10.2.20.1:443/foo bar test
Debug output when it works
0% [Working]* About to connect() to 10.2.20.1 port 443 (#0)
* Trying 10.2.20.1... * connected
* found 164 certificates in /etc/ssl/certs/ca-certificates.crt
* server certificate verification SKIPPED
* Server auth using Basic with user 'username'
> GET /foo/dists/bar/Release.gpg HTTP/1.1
Authorization: Basic
Config for the hostname scenario, which doesn't work with Basic auth without certs
APT Config under apt.conf.d with hostname:
Debug::Acquire::https "true";
Acquire::https::my-foo-test.com {
Verify-Host "false";
Verify-Peer "false";
};
sources.list.d with hostname:
deb [arch=amd64] https://username:password@my-foo-test.com:443/foo bar test
Debug with TLS warning when hostname is used
root@my:~# apt-get update
0% [Working]* About to connect() to my-foo-test.com port 443 (#0)
* Trying 10.2.20.1... * connected
* found 164 certificates in /etc/ssl/certs/ca-certificates.crt
* gnutls_handshake() failed: A TLS warning alert has been received.
* Closing connection #0
Ign https://my-foo-test.com repo Release.gpg
I have mapped the hostname to the IP locally in the /etc/hosts file on the machine where I am running apt-get update.
Entry from /etc/hosts file
10.2.20.1 my-foo-test.com
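One common cause of GnuTLS reporting "A TLS warning alert has been received" is an unrecognized_name alert from the server: connecting by hostname makes the client send that name via SNI, while connecting by IP sends no SNI at all, which would explain why only the hostname case fails. The handshake can be reproduced outside apt with something like the following (a diagnostic sketch, assuming gnutls-cli from the gnutls-bin package is available):
# Connect by hostname, so SNI is sent; skip certificate checks:
gnutls-cli --insecure -p 443 my-foo-test.com </dev/null
# Compare with a connection by IP (no SNI):
gnutls-cli --insecure -p 443 10.2.20.1 </dev/null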
I even tried the following way, but it didn't work; putting the same options into apt.conf.d/ didn't work either
apt-get -o Debug::Acquire::https=true -o Acquire::https::Verify-Host=false -o Acquire::https::Verify-Peer=false -o Dir::Etc::SourceList="/etc/apt/sources.list.d/mysource.list" update
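Note that the Verify-Host/Verify-Peer options in this command are not scoped to a host, unlike the working apt.conf.d stanza above; the host-scoped form can also be given on the command line (a sketch, untested against this setup):
apt-get -o Debug::Acquire::https=true \
  -o Acquire::https::my-foo-test.com::Verify-Host=false \
  -o Acquire::https::my-foo-test.com::Verify-Peer=false \
  update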
Thanks for the help!

Related

How would yum (on a CentOS host) work with a proxy that requires an SSL cert

I am trying to set up a proxy in /etc/yum.conf with HTTPS and an SSL cert.
Normally I would have proxy=http://x.x.x.x:80, provided that is the proxy address, and since my proxy does not require a username and password, that would work. But now I have a requirement to set up /etc/yum.conf with
proxy=https://x.x.x.x:443
and the CentOS host running yum can only talk to the internet via a proxy which accepts SSL-cert-based authentication.
So how would I install the SSL cert on the CentOS host for yum to work with the proxy host on port 443, one that requires an SSL cert?
It looks like you should be able to use the following config directives, taken from the yum.conf manual page.
sslclientcert
Path to the SSL client certificate yum should use to connect to repos/remote sites. Defaults to none. Note that if you are using curl compiled against NSS (default in Fedora/RHEL), curl treats sslclientcert values with the same basename as identical. This version of yum will check that this isn't true and output an error when the repositories "foo" and "bar" violate this, like so:
sslclientcert basename shared between foo and bar
sslclientkey
Path to the SSL client key yum should use to connect to repos/remote sites. Defaults to none.
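Putting those directives together with the proxy setting, a minimal /etc/yum.conf sketch might look like the following (the certificate and key paths are hypothetical placeholders; whether curl presents the client certificate to the proxy itself, rather than only to repo hosts, depends on the curl version in use):
[main]
# HTTPS proxy; no proxy_username/proxy_password, since it authenticates by cert
proxy=https://x.x.x.x:443
# Client certificate and key presented by yum's curl backend
sslclientcert=/etc/pki/tls/certs/client.crt
sslclientkey=/etc/pki/tls/private/client.key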

CentOS 7 Firewalld refuses all incoming connections to my web server

I have a CentOS 7 VM built using Vagrant with a private IP address of 192.168.56.255.
I am running my Spring Boot application on that VM on port 8443. It supports HTTPS. My issue is that when I try to send HTTPS requests to the 192.168.56.255 web server via a curl command, I get
curl: (7) Couldn't connect to server
I have read many tutorials that explain how to configure the firewall in CentOS 7, including one provided by DigitalOcean, but I still get the same issue.
When I type
sudo firewall-cmd --list-all-zones
I got
public
target: default
icmp-block-inversion: no
interfaces:
sources:
services: ssh dhcpv6-client https http mysql
ports: 8443/tcp 3306/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:
As you can see, I enabled everything I need and more, but still nothing. I even shut down the firewall entirely, but the connection is still refused from my host.
When I made the changes I did reload my firewall
sudo firewall-cmd --reload
So that is not the problem
The problem was not with Firewalld but with the IP address pre-configured using Vagrant.
The last octet should not be 255, as in 192.168.56.255, because on a /24 network that indicates the broadcast address. So I solved it by changing the address to 192.168.56.10.
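In Vagrantfile terms, the fix is a one-line change (a sketch; the rest of the Vagrantfile stays as generated):
# Use an ordinary host address, not the .255 broadcast address of the /24
config.vm.network "private_network", ip: "192.168.56.10"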

Why can't I log in to the Harbor server from a client in the plain HTTP case?

Installed Harbor on a host, using the plain HTTP protocol.
The IP is 192.168.33.10.
I can log in to it from the Harbor server itself:
sudo docker login 192.168.33.10
And I can access it from a browser:
http://192.168.33.10
But I can't log in to it from another client (a Mac with Docker installed). The error message is:
docker login 192.168.33.10
Username: user1
Password: (my_password)
Error response from daemon: Get https://192.168.33.10/v2/: dial tcp 192.168.33.10:443: getsockopt: connection refused
The Harbor documentation has this notice:
https://github.com/vmware/harbor/blob/master/docs/installation_guide.md
IMPORTANT: The default installation of Harbor uses HTTP - as such, you will need to add the option --insecure-registry to your client's Docker daemon and restart the Docker service.
Both the Harbor host and the client host have this in /etc/docker/daemon.json:
{ "insecure-registries":["192.168.33.10"] }
and Docker was restarted. However, it does not work.
If I don't set up Harbor under HTTPS for now, is there a way to access it from the client correctly?
Solution
It's unnecessary to set /etc/docker/daemon.json on the client. Docker for Mac has another way: add 192.168.33.10 to the insecure registries list in the Docker for Mac preferences (Daemon tab), then click Apply & Restart.
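Whichever way the setting is applied, a quick check that the Docker daemon actually picked it up (works on both a Linux client and Docker for Mac):
# The registry should be listed under "Insecure Registries" in the daemon info
docker info | grep -A 2 'Insecure Registries'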

Empty server response with cntlm proxy and basic auth params in url for yum repo

I am using a cntlm proxy on a CentOS 7 server behind a corporate proxy which needs authentication.
Here is my cntlm.conf file:
Username user
Domain dom
Auth NTLMv2
PassNTLMv2 **********
Proxy corporateproxy:8080
NoProxy localhost, 127.0.0.*, 10.*, 192.168.*, 172.*, *.local
Listen 0.0.0.0:3128
Everything works OK, except for a yum repo which needs Basic auth:
[datastax-cassandra]
name=datastax-cassandra
humanname=DataStax Repo for DataStax Enterprise
baseurl=http://auser@mail.com:s6pZ4cjORRAqDhG@rpm.datastax.com/enterprise
gpgcheck=0
enabled=1
When running
repoquery --plugins --queryformat '%{NAME}_|-%{VERSION}_|-%{RELEASE}_|-%{ARCH}_|-%{REPOID}' --pkgnarrow=available selinux-policy-devel policycoreutils-python
I get:
Could not match packages: failure: repodata/repomd.xml from datastax-cassandra: [Errno 256] No more mirrors to try.
http://auser@email.com:s6pZ4cjORRAqDhG@rpm.datastax.com/enterprise/repodata/repomd.xml: [Errno 14] curl#52 - "Empty reply from server"
For any other mirror server which does not need Basic auth, everything is OK.
Any ideas (cntlm configuration, yum repo configuration, ...)?
There is a patch for cntlm with support for Basic HTTP Auth; see
cntlm-0.35.1 modified to support Basic HTTP Auth with HTTPAUTH parameter
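Separately from the patch, note that the username in the baseurl is an email address, so the URL contains two @ signs; per RFC 3986 the @ inside the userinfo part must be percent-encoded as %40, otherwise curl can split the URL at the wrong place. A sketch of the repo file with the same placeholder credentials:
[datastax-cassandra]
name=datastax-cassandra
baseurl=http://auser%40mail.com:s6pZ4cjORRAqDhG@rpm.datastax.com/enterprise
gpgcheck=0
enabled=1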

Curl bash to same server with https

I have to make a request to a URL on the same server from a cron task. In the past I did this with the following bash script:
#!/bin/sh
curl "http://www.mydomain.es/myfolder/importTwitter" >> /var/log/mydomain/import_twitter.log
I migrated the website to HTTPS, and now the curl command fails with the following error:
* About to connect() to www.mydomain.es port 443 (#0)
* Trying xxx.xxx.xxx.xxx...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
I have tried adding the following parameters to the curl command, and I get the same error:
--cacert -> specify ssl ca root certificate
--cert -> specify ssl pem certificate
--location -> using the http url and force to follow redirects
--insecure -> allows insecure curl connections
Finally, I also tried making the request from another host and it works fine, but I need to do the request from the same server.
The server runs Debian (kernel 3.2.65-1+deb7u2, x86_64).
Curl version:
curl 7.26.0 (x86_64-pc-linux-gnu) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
* Connection refused
You've got your problem right there. Long before anything with crypto can start, your server simply does not allow a connection from your host.
Make sure your server is configured correctly and that no firewall is blocking loopback connections to port 443.
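A couple of quick checks on the server itself (a diagnostic sketch; adjust to your firewall tooling):
# Is anything actually listening on port 443?
sudo netstat -tlnp | grep ':443'
# Is an iptables rule rejecting local traffic to 443?
sudo iptables -L -n | grep -E '443|REJECT'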
