SSL error: wrong version number. gsql -p portNum -d databaseName -r -U userName -h xx.xx.xx.xx - open-gauss

I try to connect remotely with a certificate. The database is up and running, but I get the message gsql: SSL error: wrong version number when running:
gsql -p portNum -d databaseName -r -U userName -h xx.xx.xx.xx

The "wrong version number" error usually means the client tried to start SSL but the server did not answer with SSL on that port. Please check whether the following GUC parameters are correctly configured in postgresql.conf:
ssl: Enables SSL connections.
ssl_cert_file: Location of the SSL server certificate file.
ssl_key_file: Location of the SSL server private key file.
ssl_ca_file: Location of the SSL certificate authority file.
If only file names (not absolute paths) are set for the preceding parameters, the files must be stored in the data directory, and their permissions must be set to 600.
After the preceding parameters are set, restart the database.
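A minimal sketch of that server-side setup, assuming the certificate files already exist in the data directory (the file names and the $PGDATA path are placeholders, not from the question):

```shell
# Hypothetical file names; adjust to your installation.
# In $PGDATA/postgresql.conf:
#   ssl = on
#   ssl_cert_file = 'server.crt'
#   ssl_key_file  = 'server.key'
#   ssl_ca_file   = 'cacert.pem'
chmod 600 "$PGDATA/server.crt" "$PGDATA/server.key" "$PGDATA/cacert.pem"
# Restart to apply (gs_ctl is the openGauss server control utility)
gs_ctl restart -D "$PGDATA"
</imports>
```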

Related

JasperStarter DB connection issues

I am trying to run the following command:
/opt/jasperstarter/bin/jasperstarter pr --db-url jdbc:mysql://apps:@192.168.0.232:3306/zinc?useSSL=false \
-f pdf \
-p Y5Ni%234MAC5nosAyEv6B7dEQE%21iMoC%40 \
-o /vagrant/ASB5ff7844ac8ae7 /vagrant/project/templates/pdf/jasper/erp-invoice-workorders.jasper \
-P ID_ORGANIZATION=632 ID_INVOICE=92214 JASPER_DIR='/vagrant/project/templates/pdf/jasper'
It's not erroring - it's producing a PDF, but the values are all NULL, so it's clearly not connecting to the database.
Here are the original switches (sans the --db-url) which produce an error:
-t mysql \
-u apps \
-H 192.168.0.232 \
--db-port 3306 \
-n zinc \
The error is pretty obvious:
WARN: Establishing SSL connection without server's identity
verification is not recommended. According to MySQL 5.5.45+, 5.6.26+
and 5.7.6+ requirements SSL connection must be established by default
if explicit option isn't set. For compliance with existing
applications not using SSL the verifyServerCertificate property is set
to 'false'. You need either to explicitly disable SSL by setting
useSSL=false, or set useSSL=true and provide truststore for server
certificate verification. Unable to connect to database: Access denied
for user 'apps'@'192.168.1.241' (using password: YES)
I'm at a loss as to what I am doing wrong. JasperStarter doesn't have a "disable SSL" switch, so the recommendation is to specify that option in the DB URL (?useSSL=false), which I've done, but nothing changes.
I've tried placing the passwords, etc inside the DB URI in all various combos with the same results.
I'm not sure if this is a JasperStarter issue or something trivial in the connection string I'm missing, any thoughts or suggestions?
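For what it's worth, MySQL Connector/J also accepts the credentials as query parameters, which avoids the user:password@host authority syntax entirely; a hedged sketch (the password value is a placeholder, and the single quotes matter because ? and & are shell metacharacters):

```shell
# Hypothetical: credentials moved into the query string of the JDBC URL
/opt/jasperstarter/bin/jasperstarter pr \
  --db-url 'jdbc:mysql://192.168.0.232:3306/zinc?user=apps&password=PASSWORD&useSSL=false' \
  -f pdf \
  -o /vagrant/ASB5ff7844ac8ae7 /vagrant/project/templates/pdf/jasper/erp-invoice-workorders.jasper
```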
EDIT | I am curious whether I have to install JDBC drivers now that I am using --db-url instead of explicit command switches. Internally, wouldn't the binary use the driver anyway? How has this worked for all these years, but now that I am using --db-url it's required?
The commands I have in the vagrant file to install jasperstarter:
# JasperStarter start
sed -i "s/stretch main/stretch main non-free contrib/g" /etc/apt/sources.list && apt-get update && apt-get -y install msttcorefonts
cd /tmp
cp /vagrant/.java/jasperstarter-3.0.0.zip jasperstarter-3.0.0.zip
unzip jasperstarter-3.0.0.zip
mv jasperstarter /opt;
cd /opt/jasperstarter/bin
chmod 777 *
ln -s /usr/share/java/mysql.jar /opt/jasperstarter/jdbc/mysql.jar
apt-get install -y default-jre
# JasperStarter end

Problems with AWS credentials when renewing Let's Encrypt certificates with certbot

I have 4 servers with Let's Encrypt HTTPS certificates which should renew with certbot. They were created with user ubuntu with the flags --dns-route53, --dns-digitalocean and --dns-digitalocean-credentials respectively. When I installed certbot, a file /etc/cron.d/certbot was created:
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew
But I think it runs as root, while the credentials for these flags belong to user ubuntu. And I checked: they are not automatically renewed (except one certificate that was created without these flags). How do I configure these certificates to renew automatically?
To renew them manually, I can run sudo certbot renew from user ubuntu, and then type the password.
I think there is a problem with the test -x /usr/bin/certbot -a \! -d /run/systemd/system command, so I want to add this line to root crontab:
27 2-20/12 * * * perl -e 'sleep int(rand(1800))' && certbot -q renew
I found out that running just "certbot -q renew" as root works, but I need to move the settings from user ubuntu to root. Also, since I use S3, I have a problem with AWS S3 credentials versus Route 53 credentials: the server picks up one set when it should use the other. How do I configure the S3 credentials to be used for S3 and the Route 53 credentials for Route 53?
I'm looking at changing AWS_CONFIG_FILE, as described on this page:
https://certbot-dns-route53.readthedocs.io/en/stable/
But I want the change to apply only to certbot, not to S3.
I tried with the following script (with export added, since without it the variable is not visible to the certbot child process at all):
#!/bin/bash
export AWS_CONFIG_FILE=/root/.aws/config
certbot -q renew
But it doesn't work if the file /root/.aws/credentials is present; only if I delete this file can I run certbot renew successfully. But I need this file to back up my files to S3.
/root/.aws/credentials contains my credentials to S3, and /root/.aws/config contains my credentials to Route 53.
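One way to keep the two credential sets apart (a sketch, relying on the standard AWS SDK environment variables, which take precedence over the default file locations) is to point certbot at a dedicated shared-credentials file containing only the Route 53 keys, so /root/.aws/credentials is never consulted during renewal:

```shell
#!/bin/bash
# Hypothetical path: a credentials file holding only the Route 53 access keys.
# AWS_SHARED_CREDENTIALS_FILE overrides the default ~/.aws/credentials lookup,
# so the S3 keys stay untouched for your backup jobs.
export AWS_SHARED_CREDENTIALS_FILE=/root/.aws/route53-credentials
certbot -q renew
```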

Publish to FTPS using Jenkins

My provider currently only provides FTPS as a means of uploading files to the server.
Now I want to publish files from Jenkins to that server. I can access the server using an FTP client that supports FTPS, but neither of the FTP-Publisher plugins seems to be able to publish using FTPS.
The only reference for FTPS and Jenkins that I found was this open bug.
I know that SSH would be a good option, but since my hosting provider does not support this I wonder how I can efficiently upload files to my server through jenkins.
My Jenkins server runs on OS X.
Update: According to my own answer below I tried CURL but got a generic error:
curl -v -T index.html ftps://myusername:mypassword@myserver.com:21/www/
* Adding handle: conn: 0x7fa9d500cc00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* Conn 0 (0x7fa9d500cc00) send_pipe: 1, recv_pipe: 0
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0     0     0      0      0 --:--:-- --:--:-- --:--:--     0
* About to connect() to myserver.com port 21 (#0)
*   Trying xx.xx.xx.xx...
* Connected to myserver.com (xx.xx.xx.xx) port 21 (#0)
* Unknown SSL protocol error in connection to myserver.com:-9800
* Closing connection 0
curl: (35) Unknown SSL protocol error in connection to myserver.com:-9800
There are currently no Jenkins plugins that will handle FTPS (FTP over SSL). Instead the cURL program is capable of uploading with FTPS.
First check that cURL is installed on the Jenkins host.
On a linux environment try the command:
which curl
Now ensure that cURL is in the path for the Jenkins user account. Alternatively fully qualify the path to cURL.
Now using a post build step, task, or with the promoted builds plugin add a shell script that contains the following:
FILEPATH="$WORKSPACE/path/to/some/file"
REMOTEPATH=/new/path/for/file
curl -T "$FILEPATH" -u username:password "ftps://myserver.com$REMOTEPATH"
Correct $FILEPATH and $REMOTEPATH to reflect the environment.
Example:
FILEPATH=$WORKSPACE/index.html
REMOTEPATH=/www/index.html
If a self signed certificate is in use on the remote host then cURL needs to skip verification. This is done with the -k parameter.
curl -T "$FILEPATH" -u username:password -k "ftps://myserver.com$REMOTEPATH"
One way of uploading might be to do this via cURL. It's not the best option, since I would rather use a Jenkins plugin, but at least it lets me do it for the time being.
From the cURL docs:
UPLOADING
FTP / FTPS / SFTP / SCP
Upload all data on stdin to a specified server:
curl -T - ftp://ftp.upload.com/myfile
Upload data from a specified file, login with user and password:
curl -T uploadfile -u user:passwd ftp://ftp.upload.com/myfile
Upload a local file to the remote site, and use the local file name at the remote site too:
curl -T uploadfile -u user:passwd ftp://ftp.upload.com/
Upload a local file to get appended to the remote file:
curl -T localfile -a ftp://ftp.upload.com/remotefile
Note that using FTPS:// as prefix is the "implicit" way as described in the
standards while the recommended "explicit" way is done by using FTP:// and
the --ftp-ssl option.
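Given the SSL protocol error against port 21 above, explicit FTPS may be what the server expects; a sketch based on the quoted docs (host and credentials are placeholders):

```shell
# Explicit FTPS: connect as plain FTP on port 21, then upgrade via AUTH TLS.
# --ftp-ssl is the historical flag name; newer curl releases spell it --ssl-reqd.
curl -v -T index.html --ftp-ssl -u myusername:mypassword ftp://myserver.com/www/
```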

ldappasswordmodify doesn't accept the -w - option

I use the OpenDS package (a great piece of LDAP software), and I've got a tiny problem with an option of the ldappasswordmodify command:
:~# ldappasswordmodify --version
OpenDS Directory Server 2.2.0
Build 20091123144827Z
--
Name Build number Revision number
Extension: snmp-mib2605 2.2.0 6181
~# ldappasswordmodify -h localhost -D "cn=Directory Manager" -w - -a "dn:uid=user,ou=People,dc=acme,dc=org"
An error occurred while attempting to connect to the Directory Server: The simple bind attempt failed
:~# ldappasswordmodify -h localhost -D "cn=Directory Manager" -w xxxxxxx -a "dn:uid=user,ou=People,dc=acme,dc=org"
The LDAP password modify operation was successful
Generated Password: F8F2R1W6V
I did some research and found this on the Oracle site:
http://docs.oracle.com/cd/E19623-01/820-6171/ldappasswordmodify.html
-w, --bindPassword bindPassword
Use the bind password when authenticating to the directory server. This option can be used for simple authentication as well as password-based SASL mechanisms. This option must not be used in conjunction with --bindPasswordFile. To prompt for the password, type -w -.
What did I do wrong?
Thanks for your help.
I found that this line works:
:~# read -s A ; ldappasswordmodify -h localhost -D "cn=Directory Manager" -w $A -a "dn:uid=user,ou=People,dc=acme,dc=org"
The LDAP password modify operation was successful
Generated Password: F8F2R1W6V
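Note that -w $A exposes the password in the process list. Since the documentation quoted above also mentions --bindPasswordFile, a variant using that option may be safer; a sketch (the password file path is hypothetical):

```shell
# Hypothetical: keep the bind password in a mode-600 file
# instead of passing it on the command line.
read -s A
printf '%s' "$A" > /root/.opends-pw && chmod 600 /root/.opends-pw
ldappasswordmodify -h localhost -D "cn=Directory Manager" \
  --bindPasswordFile /root/.opends-pw \
  -a "dn:uid=user,ou=People,dc=acme,dc=org"
```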
Thanks anyway

Using ldapsearch with a server over ssl but no password

Our organization requires SSL for access to our ldap server. When I set up our LDAP server in Mac OS X's Contacts application, I am able to search just fine for people in our organization. However, using the command line app ldapsearch doesn't seem to work.
The problem is that our organization, while using SSL, does not require a username or a password. I can't seem to get ldapsearch to not require a password.
Here's the command I'm using:
ldapsearch -H ldaps://ldap.example.com -b "" -s base "objectclass=*"
SASL/DIGEST-MD5 authentication started
Please enter your password:
Here is the contents of my /etc/openldap/ldap.conf
HOST ldap.example.com
PORT 636
TLS_REQCERT never
Here are the settings that work just fine in Mac OS X's Contacts application and don't require a username or password: [screenshot omitted]
What's the correct ldapsearch concoction to use for this server?
You need the -x option, which selects simple authentication instead of SASL; with no -D or -w it becomes an anonymous bind, so there is no password prompt. Try something like:
ldapsearch -x -H ldaps://ldap.example.com -b "ou=people,dc=examplelabs,dc=com" -s sub "objectclass=inetorgperson"
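Applied to the original root-DSE query from the question, the anonymous form would be:

```shell
# -x = simple bind; omitting -D/-w makes it an anonymous bind over LDAPS
ldapsearch -x -H ldaps://ldap.example.com -b "" -s base "objectclass=*"
```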
