gitlab-shell ssl cert issues - ruby

I've now tried for several hours to set up GitLab, and especially gitlab-shell. After struggling with the documentation I found a sample config that fit my needs, but I get an API 500 error:
Running /home/git/gitlab-shell/bin/check
Check GitLab API access: FAILED. code: 500
gitlab-shell self-check failed
Try fixing it:
Make sure GitLab is running;
Check the gitlab-shell configuration file:
sudo -u git -H editor /home/git/gitlab-shell/config.yml
Please fix the error above and rerun the checks.
To explain my current setup:
# /home/git/gitlab-shell/config.yml
user: git
gitlab_url: https://[myfqdn]/
http_settings:
  ca_file: "/etc/gitlab-ssl/git-mydomain-chain.pem"
  ca_path: "/etc/gitlab-ssl"
  self_signed_cert: false
repos_path: "/home/git/repositories/"
auth_file: "/home/git/.ssh/authorized_keys"
redis:
  bin: "/usr/bin/redis-cli"
  namespace: resque:gitlab
  host: localhost
  port: 6379
log_level: INFO
audit_usernames: false
In the /etc/gitlab-ssl directory are two files:
* my private key, git-mydomain-key.pem
* the combined public certificate and CA certificate (the chain), git-mydomain-chain.pem
In addition, I added the CA certificate to the ca-certificates store (it's a CAcert-signed one).
Can anyone help me and tell me what went wrong?

This error has nothing to do with GitLab. It is purely a YAML parser error (Psych, in your case).
Line 5, column 3 is:
ca_path:
⇑ HERE
That said, you have a strange unterminated string right above it:
⇓⇓⇓
ca_file: "/etc/gitlab-ssl/git-mydomain-chain.pem #This file contains my public key and the ca key
Remove everything after the hash (inclusive) and close the string's quotes.
Hope it helps.
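To confirm the fix, a quick sketch (assuming a source install with gitlab-shell under /home/git/gitlab-shell, as in the question): parse the file with the same YAML library gitlab-shell uses, then re-run the self-check.

# Parse config.yml with Psych; it either prints the parsed hash
# or raises with the exact line/column of the next syntax error.
sudo -u git -H ruby -ryaml -e 'p YAML.load_file("/home/git/gitlab-shell/config.yml")'

# Once the file parses cleanly, re-run the gitlab-shell self-check.
sudo -u git -H /home/git/gitlab-shell/bin/check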

Related

Wkhtmltopdf, binaries not found

I have a website that was developed in 2012 in Symfony 2.4 by Yann, a developer who built about 70% of the site; I have been developing on it myself since about 2013-2014.
Our site was originally hosted by Yann and everything worked well, until he stopped paying for his server, which caused us many site crashes.
So in 2016 he was asked to migrate to OVH, and we took a Plesk web hosting plan for that purpose. Since the migration we have lost one feature: PDF generation.
And there has been no way to make it work again since.
Here's the error we get:
The exit status code '127' says something went wrong:
stderr: "sh: /vendor/wkhtmltopdf/bin: No such file or directory
"
stdout: ""
command: /vendor/wkhtmltopdf/bin --lowquality '/tmp/knp_snappy5b55e3aa348db3.85109382.html' '/tmp/knp_snappy5b55e3aa349b42.36987656.pdf'.
Here's my config.yml:
knp_snappy:
    pdf:
        enabled: true
        binary: /vendor/wkhtmltopdf/bin
        options: []
    image:
        enabled: true
        binary: /vendor/wkhtmltoimage/bin
        options: []
And I do have a folder named /vendor/wkhtmltopdf/bin, as this screenshot shows:
(screenshot: My OVH architecture)
So I don't know what to do...
According to the Snappy docs, the config.yml needs to point to the actual binary, not just to the bin folder. Please add the binary names in the config.yml and it should all be OK.
I changed it to add the binary names, like this:
knp_snappy:
    pdf:
        enabled: true
        binary: /vendor/wkhtmltopdf/bin/wkhtmltopdf
        options: []
    image:
        enabled: true
        binary: /vendor/wkhtmltoimage/bin/wkhtmltoimage
        options: []
And I got the same error:
The exit status code '127' says something went wrong:
stderr: "sh: /vendor/wkhtmltopdf/bin/wkhtmltopdf: No such file or directory
"
stdout: ""
command: /vendor/wkhtmltopdf/bin/wkhtmltopdf --lowquality '/tmp/knp_snappy5b5ef7bd7b0378.87600401.html' '/tmp/knp_snappy5b5ef7bd7b1028.75774701.pdf'.
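Exit status 127 with "No such file or directory" means the shell could not find anything executable at that exact path. Since /vendor/wkhtmltopdf/bin/wkhtmltopdf is an absolute filesystem path, it is worth checking over SSH whether the binary really sits at the filesystem root or only inside the project directory. A rough sketch (the Plesk vhost path below is a placeholder, adjust it to your own layout):

# Hypothetical project root on a Plesk vhost; replace with your real path.
PROJECT=/var/www/vhosts/example.com/httpdocs

# Is there really a binary at the configured absolute path?
ls -l /vendor/wkhtmltopdf/bin/

# Or does it only exist inside the project?
ls -l "$PROJECT/vendor/wkhtmltopdf/bin/"
"$PROJECT/vendor/wkhtmltopdf/bin/wkhtmltopdf" --version

If the binary only exists inside the project, the knp_snappy binary option needs that full path (in Symfony 2.x it can for example be built from %kernel.root_dir%, such as %kernel.root_dir%/../vendor/wkhtmltopdf/bin/wkhtmltopdf) rather than a path starting at /.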

ProxyAuthentication fails while updating conda

I keep getting an error while updating conda:
CondaHTTPError: HTTP 000 CONNECTION FAILED for url https://repo.anaconda.com/pkgs/r/noarch/repodata.json.bz2
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
ProxyError(MaxRetryError("HTTPSConnectionPool(host='repo.anaconda.com', port=443):
Max retries exceeded with url:
/pkgs/r/noarch/repodata.json.bz2 (
Caused by ProxyError(
'Cannot connect to proxy.',
OSError('Tunnel connection failed: 407 Proxy Authentication Required',)))",
),
)
A reportable application error has occurred. Conda has prepared the above report.
Upload did not complete.
I tried this in the .condarc file
proxy_servers:
    http: http://proxy.corp.local:8080
    https: https://proxy.corp.local:8080
and
proxy_servers:
    http: http://user:pass@corp.com:8080
    https: https://user:pass@corp.com:8080
ssl_verify: True
(and also with ssl_verify: False)
Regards
Bjorn
I also got the same error message, and it was solved after a lot of googling.
The steps for solving the problem are given below.
Create a .condarc file using the conda config command in the command prompt, or use a text editor to create the file. I used conda config to create the .condarc file.
Then add the following lines to the .condarc file located in your user home directory:
proxy_servers:
    http: http://username:password@<proxy IP or URL>:<port>
    https: http://username:password@<proxy IP or URL>:<port>
ssl_verify: True
The same file can be copied into your Anaconda home directory (C:\ProgramData\Anaconda3 in my case).
Then run the conda update conda command from your prompt.
You should then get a result like the one below:
C:\Users\USER.DESKTOP-BQVL8L4> conda update conda
Solving environment: done
Then rerun your installation command. This solution worked well for me. Please try it and let me know.
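If the .condarc approach still fails behind an authenticating proxy, another option is to set the standard proxy environment variables for the session before running conda, since conda's HTTP layer honours them. A sketch with placeholder credentials (Windows command prompt, matching the setup above):

REM Replace username, password and proxy host with your own values.
REM Special characters in the password usually need to be URL-encoded.
set HTTP_PROXY=http://username:password@proxy.corp.local:8080
set HTTPS_PROXY=http://username:password@proxy.corp.local:8080

conda update conda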
Copy the files libcrypto-1_1-x64.dll and libssl-1_1-x64.dll from the directory ./Anaconda3/Library/bin/ to ./Anaconda3/DLLs/.
This method solved the problem I had.
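For reference, on a default all-users install (the C:\ProgramData\Anaconda3 path mentioned above; adjust if Anaconda lives somewhere else) that copy is simply:

REM Run from an elevated command prompt if Anaconda is installed for all users.
copy "C:\ProgramData\Anaconda3\Library\bin\libcrypto-1_1-x64.dll" "C:\ProgramData\Anaconda3\DLLs\"
copy "C:\ProgramData\Anaconda3\Library\bin\libssl-1_1-x64.dll" "C:\ProgramData\Anaconda3\DLLs\"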

KeystoneJS with letsencrypt - certificate files required

I am following this tutorial Let’s Encrypt KeystoneJS! in an attempt to get letsencrypt working on my KeystoneJS project.
However, when I start the server I am getting the error:
SSL Not Started: Invalid SSL Configuration (certificate files required)
I've generated the standalone certificate with certbot into the directory /home/example/letsencrypt, resulting in:
- accounts
- csr
- keys
- letsencrypt.log
- renewal
- renewal-hooks
I've also tried defining the configDir in my keystone init:
keystone.init({
  ...
  letsencrypt: (process.env.NODE_ENV === 'production') && {
    email: 'admin@myapp.com',
    domains: ['www.myapp.com', 'myapp.com'],
    register: true,
    tos: true,
    configDir: '/home/example/letsencrypt'
  },
  ...
})
I've also tried configDir: '/home/example/letsencrypt/keys', but I always get the same error. I'm wondering what I'm missing?
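One thing worth checking: with certbot's default layout, issued certificates end up under live/<domain>/ inside the config directory, and there is no live folder in the listing above. A quick way to see what certbot has actually issued for this non-default config dir (a sketch):

# List the certificates certbot knows about in this config directory.
sudo certbot certificates --config-dir /home/example/letsencrypt

# If a certificate was issued, the usable files are here:
sudo ls -l /home/example/letsencrypt/live/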
OK, so the issue was that NODE_ENV wasn't correctly set to production. Setting it properly in my .env solved this issue (but unfortunately raised another one, with an invalid certificate being generated).
https://github.com/keystonejs/keystone/wiki/Deployment-Checklist
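In other words, the letsencrypt block above is only truthy when NODE_ENV is production. A minimal sketch of setting it (the keystone.js entry file name is just an assumption; use your app's actual start script):

# Export it for the current shell before starting the app...
export NODE_ENV=production
node keystone.js

# ...or persist it in the project's .env file (which the answer above relies on):
echo "NODE_ENV=production" >> .env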

Laravel Guzzle : curl error 77 error setting certificate verify locations

OS: Ubuntu 16.04
PHP : 7.2
CURL : curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.32 librtmp/2.3
Guzzle: 6.3
My project currently uses some packages that depend on Guzzle, e.g. AWS, Mailgun... However, it often throws this error:
error: cURL error 77: error setting certificate verify locations:
CAfile: /etc/ssl/certs
CApath: /etc/ssl/certs (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
Below is part of my php.ini
[curl]
; A default value for the CURLOPT_CAINFO option. This is required to be an
; absolute path.
curl.cainfo='/etc/ssl/certs/ca-certificates.crt'
[openssl]
; The location of a Certificate Authority (CA) file on the local filesystem
; to use when verifying the identity of SSL/TLS peers. Most users should
; not specify a value for this directive as PHP will attempt to use the
; OS-managed cert stores in its absence. If specified, this value may still
; be overridden on a per-stream basis via the "cafile" SSL stream context
; option.
openssl.cafile='/etc/ssl/certs/ca-certificates.crt'
; If openssl.cafile is not specified or if the CA file is not found, the
; directory pointed to by openssl.capath is searched for a suitable
; certificate. This value must be a correctly hashed certificate directory.
; Most users should not specify a value for this directive as PHP will
; attempt to use the OS-managed cert stores in its absence. If specified,
; this value may still be overridden on a per-stream basis via the "capath"
; SSL stream context option.
openssl.capath='/etc/ssl/certs/'
None of this works, even though retrieving the values via ini_get() shows they are OK and fully recognized. For now, I have made a workaround by modifying vendor/guzzlehttp/guzzle/src/Client.php and adjusting the default config to 'verify' => '/etc/ssl/certs/ca-certificates.crt'; then everything's OK (which I believe is not a good option).
Retrieved via ini_get():
array(8) {
["default_cert_file"]=> string(21) "/usr/lib/ssl/cert.pem"
["default_cert_file_env"]=> string(13) "SSL_CERT_FILE"
["default_cert_dir"]=> string(18) "/usr/lib/ssl/certs"
["default_cert_dir_env"]=> string(12) "SSL_CERT_DIR"
["default_private_dir"]=> string(20) "/usr/lib/ssl/private"
["default_default_cert_area"]=> string(12) "/usr/lib/ssl"
["ini_cafile"]=> string(34) "/etc/ssl/certs/ca-certificates.crt"
["ini_capath"]=> string(15) "/etc/ssl/certs/"
}
openssl.cafile: /etc/ssl/certs/ca-certificates.crt
curl.cainfo: /etc/ssl/certs/ca-certificates.crt
Note: I've tried setting up ~/.curlrc together with export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt, but none of this works.
Does anyone have any solution or any clue to solve this issue?
This relates to the 'SSL certificate problem: unable to get local issuer certificate' error. Rather obviously, this applies to the system sending the cURL request (and not to the server receiving the request).
Download the latest cacert.pem from https://curl.haxx.se/ca/cacert.pem
Add the following line to php.ini (if this is shared hosting and you don't have access to php.ini, you can add it to .user.ini in public_html):
curl.cainfo="/path/to/downloaded/cacert.pem"
Make sure you enclose the path within double quotation marks!
Grant your web server user, e.g. nginx or www-data, permission to read the file:
sudo chown www-data /etc/ssl/certs/cacert.pem
As a last step, restart PHP-FPM and nginx or Apache.
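To verify the bundle and the permissions without going through Guzzle, a quick sketch (the target URL and the php7.2-fpm/nginx unit names are assumptions, adjust to your stack):

# Can the web server user read the bundle and use it against a TLS host?
sudo -u www-data curl -I --cacert /etc/ssl/certs/cacert.pem https://example.com/

# Does the running PHP actually see the new settings?
php -i | grep -E 'curl.cainfo|openssl.cafile'

# Restart PHP-FPM and the web server so the php.ini change takes effect.
sudo systemctl restart php7.2-fpm nginx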

rhc setup Doctype error

I have:
- Windows 10
- ruby 1.9.3p551 (2014-11-13) [i386-mingw32]
- git version 2.6.4.windows.1
When I type in rhc setup and then try to log in, I get this:
use the server for OpenShift Online: openshift.redhat.com.
Enter the server hostname: |openshift.redhat.com|  <- here I typed my login
You can add more servers later using 'rhc server'.
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
An unexpected error occurred: invalid character at "<!DOCTYPE "
I need a little help here :)
At the step "Enter the server hostname: |openshift.redhat.com|", just hit Enter and don't type anything there. If you type something else, it will try to connect to that and it won't work.
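If you prefer being explicit over relying on the default, rhc also accepts the server as an option, so nothing has to be typed at that prompt, e.g.:

# Point rhc at OpenShift Online explicitly instead of answering the hostname prompt.
rhc setup --server openshift.redhat.com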
