Cert already in hash table exception - Ruby

I am using ChefDK (Chef 12), have done the basic setup, and have uploaded many cookbooks. Currently I am using remote_directory in my default.rb.
What I have observed is that whenever there are too many files or too deep a hierarchy in the directory, the upload fails with the exception below:
ERROR: SSL Validation failure connecting to host: xyz.com - SSL_write: cert already in hash table
ERROR: Could not establish a secure connection to the server.
Use `knife ssl check` to troubleshoot your SSL configuration.
If your Chef Server uses a self-signed certificate, you can use
`knife ssl fetch` to make knife trust the server's certificates.
Original Exception: OpenSSL::SSL::SSLError: SSL_write: cert already in hash table
As mentioned earlier, connecting to the server is not a problem; the failure happens only when there are many files or the hierarchy is deep.
Can you please suggest what I can do? I have searched online but failed to find a solution.
I have checked the question here, but it does not solve my problem.
For people not working with Chef: Chef ships with an embedded Ruby and OpenSSL.
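For reference, the OpenSSL that knife actually uses is the embedded one; it can be printed with something like this (assuming ChefDK's chef exec wrapper is on the PATH):

# prints the OpenSSL version linked into the embedded Ruby
chef exec ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'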
Some updates based on tensibai's suggestion:
The exceptions have changed since adding the --concurrency 1 option.
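For reference, the re-run was throttled to a single request at a time with something along these lines (my_cookbook is a placeholder name; knife upload is run from the chef-repo root):

# my_cookbook is a placeholder; --concurrency caps simultaneous requests
knife upload cookbooks/my_cookbook --concurrency 1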
Initially I received:
INFO: HTTP Request Returned 403 Forbidden: ERROR: Failed to upload filepath\file (7a81e65b51f0d514ec645da49de6417d) to example.com:443/bookshelf/… 3088476d373416dfbaf187590b5d5687210a75&Expires=1435139052&Signature=SP/70MZP4C2UdUd9%2B5Ct1jEV1EQ%3D : 403 "Forbidden" <?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message>
Then yesterday it changed to:
INFO: HTTP Request Returned 413 Request Entity Too Large: error
ERROR: Request Entity Too Large
Response: JSON must be no more than 1000000 bytes.
Should I decrease the number of files, or is there another option?
knife --version reports Chef: 12.3.0.

Should I decrease the number of files or is there any other option?
Usually the files inside a cookbook are not meant to be too large or too numerous; if you have a lot of files to distribute, it's a sign you should change the way you distribute those files.
One option could be to make a tarball, but that makes it harder to manage deleted files.
Another option, if you're on an internal Chef server, is to follow the advice here and change the client_max_body_size 2M; value for nginx, but I can't guarantee it will work.
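For what it's worth, on an Omnibus-installed Chef server that nginx value is normally driven from /etc/opscode/chef-server.rb, so the change would be a sketch like this (the nginx['client_max_body_size'] attribute name is my assumption of the standard tunable; verify it against your server version, and pick a size that fits your cookbooks):

# /etc/opscode/chef-server.rb -- assumed Omnibus layout
nginx['client_max_body_size'] = '10m'

Then apply it with:

sudo chef-server-ctl reconfigure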

I had the same error. I ran chef-server-ctl reconfigure on the Chef server, then tried uploading the cookbook again, and everything started working fine.

Turnserver showing WebSocket open error: WebSocket error after trying to join a room

My app has a 1:1 video calling feature, and for that I set up my own turnserver. The turnserver was running perfectly until yesterday. When I try to call from the app, the turnserver rejects the connection. I tried the turnserver URL from a browser, and it shows something like this:
Immediately, I checked the collidermain log, which shows:
root@<machine_name>:~# <timestamp> Starting collider: tls = true, port = 8443, room-server=https://<mydomain>.com
<timestamp> http: TLS handshake error from 182.160.105.186:43243: remote error: tls: unknown certificate
This kept showing up every time my app tried to connect to the turnserver.
Thinking the SSL certificates had something to do with it, I replaced the SSL certificates and tried re-installing collidermain and restarting google-cloud-sdk, turnserver, and collidermain. Still no luck.
I found two similar questions on Stack Overflow.
WebSocket open error: WebSocket error. This is not a Chrome bug, because I have two other turnservers running perfectly fine as I write this, and server health is pretty good.
Websocket open error, websocket register error. This did not work either; I re-installed collider. No luck.
My question is: what is the root cause of this error, and how do I fix it?
System specs:
OS: Ubuntu 20.04
AppRTC code running with Google-cloud-sdk, version: 330.0.0
Turnserver version: 4.4.3
Signalling server: collidermain
Certificates issued with Let's Encrypt certbot
It's solved! Here's what happened.
I had copied the certificates issued by certbot into another directory, /cert/, and in turnserver.conf I pointed the certificate path at the /cert/ directory. That worked fine for a while. Certbot certificates are valid for three months and are renewed automatically when they are about to expire. So certbot renewed the certificates and put them into /etc/letsencrypt/live/:domain_name/, while turnserver.conf was still pointing at the outdated certificates in /cert/. That's why, whenever I tried to join a room, the turnserver used the outdated certificates and threw the TLS handshake error.
So I just changed the certificate path from /cert/ to /etc/letsencrypt/live/:domain_name/ in turnserver.conf, and it's back online! Yay!!
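For reference, the relevant lines in turnserver.conf end up looking something like this (cert and pkey are coturn's option names; :domain_name stands in for the real domain):

# assumes certbot's default live/ layout
cert=/etc/letsencrypt/live/:domain_name/fullchain.pem
pkey=/etc/letsencrypt/live/:domain_name/privkey.pem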

Set up a WSO2 Enterprise Integrator VFS connection towards a Windows SFTP server

Running WSO2 Enterprise Integrator 6.5.0 on RHEL 7. We are in the process of building flows that read files from an SFTP server, but setting up the SFTP connection towards a Windows SFTP server fails. We can access this Windows SFTP server correctly with Windows clients like FileZilla/WinSCP.
With netstat we see a connection being built towards the Windows SFTP server, but the flow isn't moving: no files are being read. When stopping the server, the error shown below is printed in wso2carbon.log.
When setting up the connection towards a Linux SFTP server (a plain RHEL 7 box with sshd) we don't face any issues. We have the matching private key placed under .ssh/id_rsa in the home directory of the user running WSO2 EI.
Searching for the error message (see the snippet below) suggested we could resolve it by adding the transport.vfs.AvoidPermissionCheck=true parameter to the VFS URL, but unfortunately this doesn't solve our issue.
This is the VFS URL we are using:
sftp://SFTPUSER@SERVER.ACMECORP.ORG/inputdir?transport.vfs.AvoidPermissionCheck=true;vfs.passive=true
Is this a configuration that should work, and are we missing a configuration option? Or is this a bug in the WSO2 software?
These URLs mention the issue we are facing:
VFS2 Error cannot delete file and could not get the groups id of the current user (error code: -1)
https://issues.apache.org/jira/browse/VFS-617
https://github.com/wso2/product-ei/issues/3725
[2019-12-06 13:48:59,724] [-1] [] [vfs-Worker-2] ERROR {org.apache.synapse.transport.vfs.VFSTransportListener} - Error checking for existence and readability : sftp://SFTPUSER@SERVER.ACMECORP.ORG/inputdir?transport.vfs.AvoidPermissionCheck=true;vfs.passive=true
org.apache.commons.vfs2.FileSystemException: Could not determine if file "sftp://SFTPUSER@SERVER.ACMECORP.ORG/inputdir?transport.vfs.AvoidPermissionCheck=true;vfs.passive=true" is readable.
at org.apache.commons.vfs2.provider.AbstractFileObject.isReadable(AbstractFileObject.java:1494)
at org.apache.synapse.transport.vfs.VFSTransportListener.scanFileOrDirectory(VFSTransportListener.java:295)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:188)
at org.apache.synapse.transport.vfs.VFSTransportListener.poll(VFSTransportListener.java:134)
at org.apache.axis2.transport.base.AbstractPollingTransportListener$1$1.run(AbstractPollingTransportListener.java:67)
at org.apache.axis2.transport.base.threads.NativeWorkerPool$1.run(NativeWorkerPool.java:172)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.jcraft.jsch.JSchException: Could not get the groups id of the current user (error code: -1)
at org.apache.commons.vfs2.provider.sftp.SftpFileSystem.getGroupsIds(SftpFileSystem.java:219)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.getPermissions(SftpFileObject.java:250)
at org.apache.commons.vfs2.provider.sftp.SftpFileObject.doIsReadable(SftpFileObject.java:264)
at org.apache.commons.vfs2.provider.AbstractFileObject.isReadable(AbstractFileObject.java:1492)
... 8 more
UPDATE
Using the same URL, but setting up the WSO2 flow to write a file to the SFTP server instead, works.
Got this resolved with support from WSO2.
The correct VFS URL to use is:
sftp://SFTPUSER@SERVER.ACMECORP.ORG/inputdir?transport.vfs.AvoidPermissionCheck=true&vfs.passive=true
So: an '&' separator instead of a ';'.
The WSO2 documentation is just very fuzzy about the correct syntax to use; they give different examples across their documentation:
https://docs.wso2.com/display/EI650/VFS+Transport
https://docs.wso2.com/display/EI650/File+Inbound+Protocol
https://docs.wso2.com/display/EI650/Configuring+File+Inbound+Protocol+for+FTP%2C+SFTP+and+FILE+Connections

Using the AWS SDK during a Chef run errors, but running it outside of Chef works

I have a helper library that uses the AWS SDK to pull information so it can return a list of names, like so:
def get_load_balancer_names
  # Map each ELB description returned by the API to its name.
  self.elb_client.describe_load_balancers[:load_balancer_descriptions].map { |elb| elb[:load_balancer_name] }
end
When this code runs during the Chef run, I get this error:
SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed
But when I run the code outside of the Chef run, it works (I get a list of ELB names as I expect).
I use an IAM role for authentication.
I did find this, which added a (potential) fix, so you can do:
AWS.config(:ssl_ca_path => '/...')
but this isn't really an option, as I would rather deal with the problem itself (unless there is no other way, of course).
I think it might be that the AWS SDK is using Chef's SSL certificates during the Chef run, and that this is the cause.
Why is it erroring like this and how do I fix it?
As far as I know, this is related to Mozilla removing 1024-bit root CA certificates in late 2014. Technically this is good, but unfortunately it broke many legacy certificate chains.
The issue is described in the "RSA-1024 removed" section at http://curl.haxx.se/docs/caextract.html,
and for ChefDK at https://github.com/chef/chef-dk/issues/199#issuecomment-60643682.
Recent ChefDK and Chef Client releases from Omnibus (https://downloads.chef.io/) include a root trust with the old RSA-1024 root certificates.
I suggest you update the Chef client.
If you have not used the Omnibus installer from chef.io, you need to manually update the root certificates of your distro/OpenSSL.
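On a Debian/Ubuntu-style system, for example, refreshing the distro trust store is a sketch along these lines (package and command names vary by distro):

# upgrade the CA bundle and regenerate the system trust store
sudo apt-get update
sudo apt-get install --only-upgrade ca-certificates openssl
sudo update-ca-certificates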

Apple APNS 'Permission denied' issue

I am facing an issue while sending a request to the server for push notifications. While trying to connect to the Apple server, we get the following response:
ApnsPHP[15748]: INFO: Trying ssl://gateway.sandbox.push.apple.com:2195... Tue, 15 Jan 2013 08:20:28 +0100 ApnsPHP[15748]: ERROR: Unable to connect to 'ssl://gateway.sandbox.push.apple.com:2195': Permission denied (13)
We checked the server settings and the server is not blocking any outgoing requests. We created the p12 certificate as per Apple's guidelines, and we convert it to .pem by executing the following shell command on the server:
openssl pkcs12 -in HSPushNopassword.p12 -out HSPushNopassword.pem -nodes -clcerts
And the same code and certificate work fine on another server.
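For what it's worth, reachability of the sandbox gateway with this certificate can be tested from the shell with something like:

# uses the .pem produced above; a failure here points at the host/network, not the code
openssl s_client -connect gateway.sandbox.push.apple.com:2195 -cert HSPushNopassword.pem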
Please let me know why we are getting this error. Thanks!
There is a better solution than disabling SELinux completely. The problem is that on most SELinux systems (such as Red Hat, which I am using) httpd is not allowed to create network connections.
You can use this command to enable it:
setsebool -P httpd_can_network_connect=1
-P makes the setting permanent.
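The current value of the boolean can be checked with:

getsebool httpd_can_network_connect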
I hope it helps, even though this thread already has an accepted answer.
I sorted it out. Yes, the problem was the server; it was likely introduced when they had root access and reloaded things. In a nutshell, there is a very restrictive security system called SELinux. I disabled it, set the config so it stays off after reboot, and push notifications are working fine now.

git trouble via https: routines:SSL23_GET_SERVER_HELLO

I set up my own git server on a CentOS distribution.
I can contact the server via the git protocol from home, but when I try to access it via HTTPS at the office I get:
Cloning into /Users/vito/Documents/... error:
error:14077458:SSL routines:SSL23_GET_SERVER_HELLO:reason(1112) while
accessing https://gitolite@myserverxyz.com/vitorepo.git/info/refs
fatal: HTTP request failed
Where is the problem? On my server or on my office Mac?
I got the exact same response from curl when trying to connect to an Ubuntu instance running OpenSSL 1.0.0e. I successfully resolved the problem by adding the --sslv3 (-3) flag to the curl command.
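For illustration, with the URL from the question (--sslv3 exists in curl builds of that era; newer curl releases have dropped SSLv3 support entirely):

curl --sslv3 https://gitolite@myserverxyz.com/vitorepo.git/info/refs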
It seems to be a compatibility problem between an older OpenSSL version (0.9.8) acting as the client and a recent OpenSSL version (1.0.0) acting as the server, combined with some specific options used by curl on the client side and Apache on the server side.
It's probably due to a recent security fix in OpenSSL (probably the one against protocol-downgrade attacks).
Try upgrading the OpenSSL library version on the client side to 1.0.0.
See:
https://sourceforge.net/tracker/?func=detail&atid=100976&aid=3395520&group_id=976
In case anyone has this issue with XMLRPC:
Daniel's answer (forcing SSL version 3) solved the issue for me. Just specify XMLRPC_SSLVERSION_SSLv3 in the clientXmlTransport_curl options (C++).
The problem began when we upgraded our server to OpenSSL version 1.0.1-4ubuntu5.5 while the clients were still running 0.9.8o-5ubuntu1.7.
I believe this is a host-name matching issue on the server. Error 1112 is SSL_R_TLSV1_UNRECOGNIZED_NAME, and it comes from an SNI name mismatch (info on SNI). I was having the same issue in curl.
For me, the workaround was to make sure the name I used on the client matched one of the ServerName or ServerAlias configurations on the server. Of course, those directives are for an Apache server; I don't know what you need to do for a git server. But I suspect the server names you're using from home and from work are different, and the home name is the canonical name the git server is using (and therefore the one for which SNI is working).
The 'real' fix will probably require a client change in git to allow a way to ignore the name-mismatch warning (the way your browser already does).
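A quick way to check for an SNI mismatch is to compare the certificate the server returns with and without a server name (myserverxyz.com stands in for the real host):

# -servername sends SNI; drop it to see the default virtual host's certificate
openssl s_client -connect myserverxyz.com:443 -servername myserverxyz.com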
Not sure if I had exactly the same problem, but the error message was the same. It only seemed to happen on the Ubuntu box I set up a git server on; for some reason the CentOS box with a git server on it was fine.
I only just solved it after three or four days. It turned out to be because git's underlying curl library has a broken keep-alive implementation (I ended up dumping HTTP traffic and verifying the behaviour by hand).
In a nutshell, curl (at least the version used inside every git implementation I could find, including command-line git and Eclipse's EGit) doesn't seem to correctly interpret the Connection response header, or more precisely the absence of it.
To fix the problem you need to configure the SSL virtual host inside the Apache that is serving your git repository with an extra directive specifically for git. Add this line just before the </VirtualHost>:
BrowserMatch "git" nokeepalive ssl-unclean-shutdown
You unfortunately can't tell Apache to just downgrade to HTTP/1.0 (which would be cleaner), because curl can't handle that, but you can tell it to force a Connection: close on every request, which curl does know how to handle.
In a misleading coincidence, if you test curl directly without this change it will seem to work, because it makes a single request and then exits. Only by getting curl to execute two requests over the same keep-alive connection over SSL does the problem become apparent.
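One way to trigger that second request is to pass the same URL twice in a single invocation, since curl reuses the connection for multiple URLs (the URL is the question's):

# two requests over one keep-alive connection; -v shows the handshake and headers
curl -v https://gitolite@myserverxyz.com/vitorepo.git/info/refs https://gitolite@myserverxyz.com/vitorepo.git/info/refs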
I had the same error. The root cause seems to be an incompatibility between the client and server OpenSSL versions.
I upgraded my server with apt-get upgrade openssl and upgraded my Windows git installation.
The combination of the Windows git client (git version 1.9.4.msysgit.0, which contains OpenSSL 0.9.8e 23 Feb 2007) and a server running OpenSSL 1.0.1c 10 May 2012 seems to work fine together.
