Using the AWS SDK during a chef run errors but running it outside of chef works - ruby

I have a helper library that uses the AWS SDK to pull information so it can return a list of names, like so:
def get_load_balancer_names
  self.elb_client.describe_load_balancers[:load_balancer_descriptions].map { |elb| elb[:load_balancer_name] }
end
When this code runs during the chef run, I get this error:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
But when I run the code outside of the chef run it works (I get a list of ELB names like I expect).
I use an IAM role for authentication.
I did find this, which added a (potential) fix so you can do:
AWS.config(:ssl_ca_path => '/...')
but this isn't really an option, as I would rather deal with the problem itself (unless there is no other way, of course).
I suspect the AWS SDK is picking up Chef's SSL certificate bundle during the chef run, and that this is the cause.
Why is it erroring like this and how do I fix it?

As far as I know, this is related to Mozilla removing 1024-bit root CA certificates from its trust store in late 2014. Technically this was the right move, but it unfortunately broke many legacy certificate chains.
The issue is described in the "RSA-1024 removed" section at http://curl.haxx.se/docs/caextract.html.
And in ChefDK at https://github.com/chef/chef-dk/issues/199#issuecomment-60643682
Recent ChefDK and chef-client releases from Omnibus (https://downloads.chef.io/) ship a root trust that still includes the old RSA-1024 root certificates.
I suggest you update the chef-client.
If you have not used the Omnibus installer from chef.io, you need to manually update the root certificates of your distro/OpenSSL.
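One quick way to confirm the bad-bundle theory is to compare the trust store Chef's embedded Ruby uses against your system Ruby's. A minimal sketch (the chef-run comparison is my suggestion; the constants themselves are standard Ruby OpenSSL):

```ruby
require 'openssl'

# Print the CA bundle locations this Ruby's OpenSSL was compiled against.
# Running this once from your shell and once inside a chef run (e.g. via a
# ruby_block) shows whether Chef's embedded Ruby points at a different,
# possibly outdated, trust store.
puts OpenSSL::X509::DEFAULT_CERT_FILE
puts OpenSSL::X509::DEFAULT_CERT_DIR
```

Both locations can also be overridden at runtime with the SSL_CERT_FILE and SSL_CERT_DIR environment variables, which is a less invasive experiment than patching AWS.config.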

Configuring Vagrant CA Certificates

I am experiencing problems executing Vagrant commands behind a corporate proxy server with self-signed CA certificates. I have configured the HTTP_PROXY, HTTPS_PROXY, and HTTP_NO_PROXY environment variables.
I have a Java key store containing all of the corporate certificates. I have used the -exportcert option of the keytool command with numerous options. I have utilized the openssl command also with numerous options and placed the resulting files in multiple locations within the embedded Ruby directories within the Vagrant installation without any success.
I have read a lot of sites containing information about configuring Ruby and curl, but have not had any success in getting Vagrant commands to work. All of the posts I have located focus on Ruby and curl options that I do not understand how to apply to Vagrant, which embeds Ruby as a component.
Please provide instructions on how to correctly export certificates from the Java key store and optionally convert them and place the resulting files so that Vagrant will successfully be able to communicate through the corporate proxy to the internet.
Vagrant 1.9.5 on Windows 7
Vagrant installation directory C:\Apps\Vagrant\
C:\WorkArea> vagrant plugin install vagrant-proxyconf
ERROR: SSL verification error at depth 3: self signed certificate in certificate chain (19)
ERROR: Root certificate is not trusted (/C=US/O=xxx xxx/OU=xxx xxx Certification Authority/CN=xxx xxx Root Certification Authority 01 G2)
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed (https://api.rubygems.org/specs.4.8.gz)
C:\WorkArea> vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Box 'puppetlabs/ubuntu-16.04-64-puppet' could not be found. Attempting to find and install...
default: Box Provider: virtualbox
default: Box Version: >= 0
The box 'puppetlabs/ubuntu-16.04-64-puppet' could not be found or
could not be accessed in the remote catalog. If this is a private
box on HashiCorp's Atlas, please verify you're logged in via
`vagrant login`. Also, please double-check the name. The expanded
URL and error message are shown below:
URL: ["https://atlas.hashicorp.com/puppetlabs/ubuntu-16.04-64-puppet"]
Error: SSL certificate problem: self signed certificate in certificate chain
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
You don't explain what steps you have taken to try to fix the issue, but it would appear that you are not placing your root certificates in the correct location.
Go to the directory where you installed Vagrant, find the file embedded\cacert.pem, and then append the contents of your corporate certificates to the file. Save it and retry the operation. If you properly exported your root CA certificates then they should be read by Vagrant and allow the connection.
If you are still unable to make it work by combining the files, try running vagrant with SSL_CERT_FILE=/path/to/your/certs.pem in the environment. This will allow you to validate that you have properly exported your corporate certificates.
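To sanity-check that the keytool/openssl export actually produced valid PEM certificates before appending them to embedded\cacert.pem, you can list the subjects in the exported bundle with a few lines of Ruby (the helper name and the commented path are placeholders of mine, not anything Vagrant requires):

```ruby
require 'openssl'

# Split a PEM bundle into individual certificates and return their subjects.
# Useful for confirming a corporate root CA export worked before appending
# it to Vagrant's embedded\cacert.pem.
def pem_subjects(pem_text)
  pem_text.scan(/-----BEGIN CERTIFICATE-----.*?-----END CERTIFICATE-----/m)
          .map { |blob| OpenSSL::X509::Certificate.new(blob).subject.to_s }
end

# Example (hypothetical path):
# puts pem_subjects(File.read('C:/Apps/Vagrant/embedded/cacert.pem'))
```

If your corporate root's subject does not show up in the output, the export step failed, not the Vagrant configuration.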

Cert already in hash table exception

I am using ChefDK with Chef 12, and I have done the basic setup and uploaded many cookbooks. Currently I am using remote_directory in my default.rb.
What I have observed is that whenever there are too many files or too deep a hierarchy in the directory, the upload fails with the exception below:
ERROR: SSL Validation failure connecting to host: xyz.com - SSL_write: cert already in hash table
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
Original Exception: OpenSSL::SSL::SSLError: SSL_write: cert already in hash table
As mentioned earlier, connecting to the server isn't a problem; it happens only when there are too many files or the hierarchy is deeper.
Can you please suggest what I can do? I have tried searching online for a solution without success.
I have checked the question here, but it doesn't solve my problem.
(For those not working with Chef: Chef uses an embedded Ruby and OpenSSL.)
Some updates following tensibai's suggestion:
The exceptions have changed since adding the --concurrency 1 option.
Initially I received:
INFO: HTTP Request Returned 403 Forbidden:ERROR: Failed to upload filepath\file (7a81e65b51f0d514ec645da49de6417d) to example.com:443/bookshelf/… 3088476d373416dfbaf187590b5d5687210a75&Expires=1435139052&Signature=SP/70MZP4C2UdUd9%2B5Ct1jEV1EQ%3D : 403 "Forbidden" <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message>
Then yesterday it changed to:
INFO: HTTP Request Returned 413 Request Entity Too Large: error
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
Should I decrease the number of files, or is there any other option?
knife --version reports Chef: 12.3.0
Should I decrease the number of files, or is there any other option?
Usually the files inside a cookbook are not meant to be too large or too numerous; if you have a lot of files to distribute, it's a sign you should change the way you distribute those files.
One option could be to make a tarball, but this makes it harder to manage deleted files.
Another option, if you're on an internal chef-server, is to follow the advice here and change the client_max_body_size 2M; value for nginx, but I can't guarantee it will work.
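For reference, on an Omnibus-installed Chef Server the nginx limit is normally raised through the server config file rather than by editing nginx.conf directly. A hedged sketch (verify the attribute name against your Chef Server version's documentation):

```ruby
# /etc/opt/opscode/chef-server.rb -- sketch, not verified on every version.
# Raise nginx's request-size cap, then run `chef-server-ctl reconfigure`.
nginx['client_max_body_size'] = '250m'
```

Note that the later 413 error ("JSON must be no more than 1000000 bytes") appears to come from the API layer rather than nginx, so splitting the cookbook into smaller pieces may still be necessary.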
I had the same error. I ran chef-server-ctl reconfigure on the Chef server, then tried uploading the cookbook again, and everything started working fine.

Docker on Mac behind proxy that changes ssl certificate

My eventual workaround for the issue below was to convince our IT guys not to man-in-the-middle the dockerhub registry. I was not able to get anything else to work, alas.
I am running into a problem with my initial attempt to get Docker running on my Mac at work, which is running 10.8.5. It appears that my company's certificate-rewriting proxy seems to be getting in the way of fetching images:
orflongpmacx8:docker pohl_longsine$ docker run hello-world
Unable to find image 'hello-world:latest' locally
Pulling repository hello-world
FATA[0001] Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "bcauth")
(Indeed, when I log onto the guest wireless – which does not have the meddlesome proxy – I can get past this step. However, I need to figure out how to make this work through the proxy since using the guest wireless is untenable as a long-term solution.)
My issue, on the surface, appears to be very much like the one answered in this question. However, the accepted answer in that question does not work for me, since the root_unix.go file they discuss does not get invoked on a Mac. (From browsing around, I would guess that root_cgo_darwin.go and/or root_darwin.go would be involved instead.)
That doesn't really tell me how, operationally, I need to do the equivalent work of installing some sort of trusted certificate. I managed to get my hands on a *.cer file that I believe to be the one that I need, but I'm at a loss as to what to do with it.
I'm hoping that someone can point me in the right direction.
Edit: I thought that maybe I needed to to something akin to what this page suggests, to add the certificate. Alas, my attempt at following those instructions failed in the following way:
orflongpmacx8:docker pohl_longsine$ sudo security add-trusted-cert -d -r trustRoot -k "/Library/Keychains/System.keychain" "~/Desktop/Certs/redacted.cer"
Password:
Error reading file ~/Desktop/Certs/redacted.cer
Edit 2: I may have come one step closer to solving this. I should have known better to use a path with a tilde inside quotation marks. If I use an absolute path instead, I can successfully run the above command to add certs.
Alas, this did not alleviate the ultimate symptom:
FATA[0001] Get https://index.docker.io/v1/repositories/library/hello-world/images: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "bcauth")
According to the boot2docker README
Insecure Registry
As of Docker version 1.3.1, if your registry doesn't support HTTPS, you must add it as an insecure registry.
$ boot2docker init
$ boot2docker up
$ boot2docker ssh
$ echo 'EXTRA_ARGS="--insecure-registry <YOUR INSECURE HOST>"' | sudo tee -a /var/lib/boot2docker/profile
$ sudo /etc/init.d/docker restart
then you should be able to do a docker push/pull.
The source of http://golang.org/src/crypto/x509/root_darwin.go shows that the command:
cmd := exec.Command("/usr/bin/security", "find-certificate", "-a", "-p", "/System/Library/Keychains/SystemRootCertificates.keychain")
is used to find the certificate.
Try adding the .cer file into the OSX certificate key-chain.
If you use docker-machine, edit $USER/.docker/machine/machines/default/config.json:
"EngineOptions": {
    "InsecureRegistry": [
        "XXX.XXX.virtual"
    ]
}

Cannot connect SOAP client (savon) to SOAP web services over HTTPS

Before attempting to solve this, I had no clue how certs or SSL worked, so please bear with my n00b-ness.
I'm currently using the Savon gem (v. 0.9.9) to try and connect to a SOAP-based web-service over HTTPS. However, I'm having a difficult time making successful calls.
As I understand the SSL/TLS protocol, the client sends the initial 'client hello' message to the server, and the server responds with a 'server hello' that includes the server's digital certificate. The client then checks that certificate's chain against the local Certificate Authority bundle to see if the certificate can be trusted. That being said, here's what I've tried.
Update RVM CA certs: At first, I was getting the same error described in this SO thread, and I learned that Ruby checks the CA certs. I also found these instructions on updating the CA certs that RVM uses. So I ran the following in iTerm:
rvm osx-ssl-certs status all
and I got the following output:
Certificates for /Users/user-name/.rvm/usr/ssl/cert.pem: Up to date.
However, this still didn't allow me to successfully make SOAP calls over HTTPs.
Check if the remote server's SSL cert is valid: I learned about the openssl CLI tool from here, and so I figured perhaps the issue isn't me. Perhaps the issue is with the certificate itself. So I ran the following command in iTerm:
openssl s_client -connect [HOST]:[PORT] -showcerts
In addition to the certificate itself, I got the following in the output:
Verify return code: 18 (self signed certificate)
As I understand it, since this cert is self-signed, then unless it itself was a trusted CA, then of course it could never be verified. So the issue isn't with the certificate, the problem is with my local CA bundle.
Update local CA bundle: As I understand it, cert.pem is a list of trusted CA certs. I actually found two such files on my local machine:
/Users/user-name/.rvm/usr/ssl/cert.pem
and
/System/Library/OpenSSL/cert.pem
I wasn't sure which one I should update, so I ended up copying one of those files into my app's directory, pasting the certificate into the new local cert.pem, and trying again. Unfortunately I now get the following:
OpenSSL::SSL::SSLError:
hostname does not match the server certificate
At this point, I'm not really sure what to do since as far as I can tell, the certificate should now be treated as a trusted certificate. Here's my code at the moment:
$SOAP_CORE = Savon::Client.new do |wsdl, http|
  http.auth.ssl.ca_cert_file = path_to_local_cert.pm
  http.auth.ssl.verify_mode = :peer
  wsdl.document = path_to_remote_wsdl_over_https
end
As I understand it, since this cert is self-signed, then unless it itself was a trusted CA, then of course it could never be verified. So the issue isn't with the certificate, the problem is with my local CA bundle.
I'm confused how you come to this conclusion. A self-signed certificate isn't going to verify, so the issue is with the certificate. Updating your CA bundle won't help unless the self-signer ends up in there, which seems silly.
Try turning off verification.
http.auth.ssl.verify_mode = :none
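If turning verification off is unacceptable, a middle ground is to keep :peer verification but trust the self-signed certificate explicitly. A sketch using plain Net::HTTP (the host and file names are placeholders; Savon 0.9.x exposes the equivalent ca_cert_file/verify_mode knobs shown in the question):

```ruby
require 'net/http'
require 'openssl'

# Keep peer verification on, but trust the server's self-signed certificate
# explicitly. 'server_cert.pem' stands in for the certificate you saved
# from `openssl s_client -showcerts`; the host is hypothetical.
http = Net::HTTP.new('soap.example.com', 443)
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER
http.ca_file = 'server_cert.pem'
```

Note that the later "hostname does not match the server certificate" error is a separate check: chain verification now succeeds, but the CN/SAN in the certificate does not match the host in the WSDL URL, which only a reissued certificate can fix.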

Get certificate fingerprint of HTTPS server on Windows 7?

Recently Mercurial has added certificate validation when connecting to HTTPS servers. I'm trying to clone the wiki repository for a Google Code project at https://wiki.droidweight.googlecode.com/hg/, but the certificate is for *.googlecode.com.
Google Code's certificate does not cover multiple subdomains like *.*.googlecode.com.
I'm getting the error:
% hg clone --verbose https://wiki.droidweight.googlecode.com/hg/ -- C:\workspace\wiki
abort: wiki.droidweight.googlecode.com certificate error: certificate is for *.googlecode.com, googlecode.com, *.codespot.com, *.googlesource.com, googlesource.com (use --insecure to connect insecurely)
I need to get the certificate fingerprint. This SO answer says how to do it on *nix.
How would one get the fingerprint on Windows 7 (Home Premium)?
References:
Open issue on Google Code's support site.
Mercurial CA Certificates FAQ.
Which version of Mercurial are you using? 1.8.2 prints the fingerprint when you clone, as per the documentation.
EDIT: After some testing, I realised that Mercurial prints the certificate when you connect insecurely (I don't have web.cacerts configured, so cloning always succeeded, though with a warning). So if you pass --insecure to your hg clone, you'll get a clone and a fingerprint.
Alternatively, install GnuWin32! It makes the Windows command line a fun place to be :) (I have no affiliation with GnuWin32; just hugely appreciative.)
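Since this thread is already Ruby-adjacent, another option that avoids installing GnuWin32 is a short Ruby script: perform a TLS handshake, grab the peer certificate, and format its SHA-1 digest the way Mercurial's [hostfingerprints] section expects. The helper names are my own; the commented host is the one from the question:

```ruby
require 'openssl'
require 'socket'

# Colon-separated SHA-1 fingerprint of an X.509 certificate, the format
# Mercurial's [hostfingerprints] configuration section expects.
def sha1_fingerprint(cert)
  OpenSSL::Digest.new('SHA1').hexdigest(cert.to_der).scan(/../).join(':')
end

# Handshake with the server and fingerprint the certificate it presents.
# No chain verification is done here; we only want the certificate bytes.
def server_fingerprint(host, port = 443)
  tcp = TCPSocket.new(host, port)
  ssl = OpenSSL::SSL::SSLSocket.new(tcp)
  ssl.hostname = host # send SNI so the right certificate comes back
  ssl.connect
  fp = sha1_fingerprint(ssl.peer_cert)
  ssl.close
  tcp.close
  fp
end

# puts server_fingerprint('wiki.droidweight.googlecode.com')
```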
