The Windows 10 OpenVPN client cannot connect to the server

When trying to connect, an error appears: "I'm trying to parse 'path/openvpn.ovpn' as an --option parameter but I don't see a leading '--'". How do I fix this?
I tried downgrading to an older version (I have 2.6 installed) and it didn't help.
My client config:
client
dev tun
proto udp
remote ip port
resolv-retry infinite
nobind
remote-cert-tls server
tls-version-min 1.2
verify-x509-name crypt_j619a9d0-68d3-42x0-8d76-82e6bbe654bf name
cipher AES-256-CBC
auth SHA256
auth-nocache
verb 3
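One thing worth checking, purely as a guess based on how OpenVPN parses its command line: a bare argument is only accepted as a config file when it is the sole argument, so an unquoted Windows path containing spaces gets split into several bare words and produces exactly this message. Launching from a Command Prompt with the path quoted rules that out (the path below is just a placeholder):
openvpn --config "C:\Users\<you>\OpenVPN\config\openvpn.ovpn"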

Related

How would yum (on a CentOS host) work with a proxy that requires an SSL cert?

I am trying to set up a proxy in /etc/yum.conf with HTTPS and an SSL cert.
Normally, I would have proxy=http://x.x.x.x:80, provided that is the proxy address, and since my proxy does not require a username and password, that would work. But now I have a requirement to set up /etc/yum.conf with
proxy=https://x.x.x.x:443
and the CentOS host running yum can only talk to the internet via a proxy that accepts SSL-cert-based authentication.
So how would I install the SSL cert on the CentOS host so that yum works with a proxy on port 443 that requires an SSL cert?
It looks like you should be able to use the following config directives taken from the yum.conf manual page.
sslclientcert
Path to the SSL client certificate yum should use to connect to repos/remote sites. Defaults to none. Note that if you are using curl compiled against NSS (the default in Fedora/RHEL), curl treats sslclientcert values with the same basename as identical. This version of yum will check that this isn't true and output an error when the repositories "foo" and "bar" violate this, like so:
sslclientcert basename shared between foo and bar
sslclientkey
Path to the SSL client key yum should use to connect to repos/remote sites. Defaults to none.
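A minimal sketch of how /etc/yum.conf could combine those directives with the HTTPS proxy line from the question; the certificate and key paths are placeholders, sslcacert is another yum.conf directive (for trusting the CA that signed the proxy's certificate), and whether libcurl presents the client certificate to the proxy itself or only to the remote repos may depend on the curl build, so treat this as a starting point rather than a guaranteed recipe:
[main]
proxy=https://x.x.x.x:443
# client certificate and key to present (placeholder paths)
sslclientcert=/etc/pki/tls/certs/yum-client.crt
sslclientkey=/etc/pki/tls/private/yum-client.key
# CA that signed the proxy's certificate, if it is not already in the system trust store
sslcacert=/etc/pki/tls/certs/proxy-ca.crt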

How do I pass a blank proxy username/password using curl?

I’m using the bash shell on OS X El Capitan. How do I pass a blank username/password for a proxy server using curl? I tried this
localhost:tmp davea$ curl http://www.google.com --proxy localhost:9050 --proxy-user "":""
514 Authentication required.
I’m running a tor daemon on my machine using this command
tor --CookieAuthentication 0 --HashedControlPassword "" --ControlPort 9050 --SocksPort 50001
and I’m able to connect through Telnet without entering a password like so
localhost:tmp davea$ telnet localhost 9050
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
AUTHENTICATE
250 OK
so I know that my password, at least, is correct.
Note that when using curl from the command line, the --proxy option is specifically for use with an HTTP proxy, which Tor is not; Tor exposes a SOCKS proxy.
To get around this, use a SOCKS proxy as Nehal suggested in the comments (you'll probably want --socks5-hostname instead, so that DNS resolution is also performed over Tor; otherwise you leak DNS requests locally).
So your call would look like:
curl http://www.google.com -L --socks5-hostname localhost:50001
Side note: the control port (9050 in your command) is only used for sending controller commands to Tor. It is not used to proxy requests at all.
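As a usage note, reasonably recent curl builds (7.21.7 and later) also accept a SOCKS scheme directly on --proxy, where the trailing "h" in socks5h means hostnames are resolved through the proxy, so the following should be equivalent to the command above:
curl http://www.google.com -L --proxy socks5h://localhost:50001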

Curl from bash to the same server over https

I have to make a request to a URL on the same server from a cron task. In the past I did this with the following bash script:
#!/bin/sh
curl "http://www.mydomain.es/myfolder/importTwitter" >> /var/log/mydomain/import_twitter.log
I migrated the website to HTTPS and the curl command fails, returning the following error:
* About to connect() to www.mydomain.es port 443 (#0)
* Trying xxx.xxx.xxx.xxx...
* Connection refused
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
I have tried adding the following parameters to the curl command, and I get the same error:
--cacert -> specify ssl ca root certificate
--cert -> specify ssl pem certificate
--location -> use the http url and force curl to follow redirects
--insecure -> allows insecure curl connections
Finally, I have also tried making the request from another host and it works fine, but I need to make the request from the same server.
The server runs Debian (kernel 3.2.65-1+deb7u2, x86_64).
Curl version:
curl 7.26.0 (x86_64-pc-linux-gnu) libcurl/7.26.0 OpenSSL/1.0.1e zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3
Protocols: dict file ftp ftps gopher http https imap imaps ldap pop3 pop3s rtmp rtsp scp sftp smtp smtps telnet tftp
Features: Debug GSS-Negotiate IDN IPv6 Largefile NTLM NTLM_WB SSL libz TLS-SRP
* Connection refused
You've got your problem right there. Long before anything with crypto can start, your server simply does not allow a connection from your host.
Make sure your server is configured correctly and that no firewall is blocking loopback connections to port 443.
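A few quick checks along those lines, sketched with the placeholder hostname from the question (netstat ships with Debian 7's net-tools):
# is anything actually listening on port 443 on this machine?
netstat -tlnp | grep ':443'
# what does the hostname resolve to from this host? a stale /etc/hosts entry could point it at an address nothing listens on
getent hosts www.mydomain.es
# talk to the local web server directly, ignoring certificate name mismatches
curl -vk https://127.0.0.1/ -H 'Host: www.mydomain.es'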

cURL - Unknown SSL protocol error - OS X 10.9

I am trying to use cURL and get the following error on every https request I make. The error is always the same. HTTP requests work flawlessly. The verbose output is quite useless.
bash:$ curl https://google.com -vv
* Adding handle: conn: 0x7fe09b803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fe09b803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to google.com port 443 (#0)
* Trying 74.125.226.129...
* Connected to google.com (74.125.226.129) port 443 (#0)
* Unknown SSL protocol error in connection to google.com:-9805
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to google.com:-9805
bash:$ curl https://google.com -V
curl 7.30.0 (x86_64-apple-darwin13.0) libcurl/7.30.0 SecureTransport zlib/1.2.5
Protocols: dict file ftp ftps gopher http https imap imaps ldap ldaps pop3 pop3s rtsp smtp smtps telnet tftp
Features: AsynchDNS GSS-Negotiate IPv6 Largefile NTLM NTLM_WB SSL libz
bash:$ openssl s_client -connect google.com:443 < /dev/null
CONNECTED(00000003)
depth=2 /C=US/O=GeoTrust Inc./CN=GeoTrust Global CA
verify error:num=20:unable to get local issuer certificate
verify return:0
24255:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:/SourceCache/OpenSSL098/OpenSSL098-50/src/ssl/s23_lib.c:182:
The results are the same on two different networks, so it does not appear to be network-specific. Attempting to connect using openssl s_client fails similarly so it is not library-dependent either (curl on the Mac uses SecureTransport). The debug output of s_client shows that the SSL handshake proceeds normally to the point where the client sends ChangeCipherSpec and the Finished messages but does not receive ChangeCipherSpec back from the server.
I have tried running these commands on a Debian VM on my Mac, and everything there runs correctly. In addition, using curl to connect to a local OpenSSL server (openssl s_server with a self-signed certificate) also works correctly.
I have looked through other answers on this forum and other places on the internet, but haven't found an answer. Most people's issues involve particular servers and the configuration of SSL on these servers. Mine however is problematic anytime HTTPS is used (with any website).
It was suggested that the issue might be in the certificate store. But if I understand it correctly, if the issue were with the certificate store, it would cause certificates to be rejected in all apps. However, all my browsers (Chrome, Safari, Firefox) negotiate SSL with no problems. There is nothing suspicious in the environment variables for GUI applications or the shell.
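For reference, these are the sorts of checks I mean, shown only as a sketch:
# system-wide proxy settings configured in OS X (Network preferences)
scutil --proxy
# proxy-related environment variables in the current shell
env | grep -i proxy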
Can someone please suggest what I should be looking into to solve the problem? Can it be that something is not properly configured? What should I be looking for?

Openssl TLS extension support configuration (Server Name Indication)

I want to configure an OpenSSL client and server to support TLS extensions, specifically Server Name Indication (SNI).
I have built the latest OpenSSL 1.0.0e on Ubuntu Linux without giving any additional config parameters.
./config
make
make install
Not sure if I need to give any additional config parameters while building for this version.
Now I have set up a server and am connecting to it with an OpenSSL client, using the standard command-line tools provided by OpenSSL, viz. s_client and s_server.
My question is: how do I specify the host name to be sent as an extension in s_client? Does OpenSSL provide a way to specify the server name via some command-line parameter?
Thanks!
This has been lying dormant for some time. Since I figured this out long ago, it seems logical to write up the answer and bring closure to this.
The command-line option -servername is available to specify SNI.
openssl s_client -connect myweb.address.com:443 -servername myweb.address.com
The above command will start a TLS client with the given server name present in the SNI extension of the ClientHello.
For using s_server you can use the command:
openssl s_server -accept 443 -cert normal_cert.pem -key normal_key.ky -servername xyz.com -cert2 sni_cert.pem -key2 sni_key.ky
Here, whenever the client connects without the servername extension, the server replies with normal_cert; if the servername extension is present in the ClientHello, the server replies with sni_cert.
For using s_client with SNI you can use the command:
openssl s_client -servername xyz.com -connect ip:port
The relevant command-line options are:
-starttls prot: use the STARTTLS command before starting TLS for those protocols that support it, where 'prot' defines which one to assume. Currently only "smtp", "pop3", "imap", "ftp" and "xmpp" are supported.
-servername host: set the TLS extension servername (SNI) in the ClientHello.
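To double-check which certificate the server actually returned for a given name, one option (a sketch reusing the placeholder names above) is to pipe the s_client output through openssl x509, which prints the subject and issuer of the leaf certificate:
openssl s_client -connect ip:port -servername xyz.com </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer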
