Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost Minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which isn't configured anyway, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both minio gateway (nas) and minio server, with the same result.
The Portworx container is running within Rancher.
Any thoughts appreciated.
Resolved via instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html
i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login.
After this, credentials create succeeded, as did the subsequent cloudsnap backup. The test used the --s3-disable-ssl switch.
Note - kvdb stores secrets in plain text, so it's obviously not suitable for production.
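For reference, the full sequence looked roughly like the sketch below. The volume name is a placeholder, and the exact cloudsnap backup syntax may vary between Portworx versions, so check ./pxctl cloudsnap --help:

# Authenticate pxctl against the kvdb-backed secrets store
./pxctl secrets kvdb login

# Create S3 credentials pointing at the local Minio endpoint, with SSL disabled
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl

# Run a cloudsnap backup of a volume (replace myvol with a real volume name)
./pxctl cloudsnap backup myvol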
Related
I am trying to find the correct endpoint to use to connect to a Minio bucket. I am running Minio on a minikube cluster, and I am using Argo Workflows to launch pods. When I give the address I use to log in to Minio (http://127.0.0.1:29941/), I get:
Error (exit code 1): failed to create new S3 client: Endpoint url cannot have fully qualified paths.
Or when I use minio:9000 as the endpoint, I get:
Error (exit code 1): failed to put file: Get "http://minio:9000/my-bucket/?location=": dial tcp: lookup minio on 10.96.0.10:53: server misbehaving
It turned out to be the name of the Kubernetes service with its port. In my case, for Argo Workflows, it was:
argo-artifacts:9000
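If you are not sure what the service is called or which port it exposes, listing the services in the cluster is a quick way to find out (the grep patterns here are just examples):

# List services across namespaces and look for the one fronting Minio / the Argo artifact store
kubectl get svc --all-namespaces | grep -i -E 'minio|artifact'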
I have a postgres RDS instance which my Node.js web application running on an EC2 instance is not able to connect to. The error in my EC2 node logs is: error: password authentication failed for user "ubuntu"
I can confirm that I have the right username, password, database name, etc., because it works correctly in the development build on my machine. I copied all the .env parameters exactly onto my EC2 machine for the production build. When attempting to connect to RDS from my production application web page, it fails. I have restarted my Node.js server multiple times and have rebooted the whole EC2 machine. I have confirmed that the env variables are there with printenv.
What would you recommend trying to fix this issue?
EDIT for more details: My Node.js setup should be correct because my Node.js server calls some external APIs that do not require my postgres database, and those calls work properly.
EDIT2: This is strange because my username for RDS is postgres, while my username for EC2 is ubuntu. I wonder if somehow there's a clash between env variables. I checked printenv but didn't find any, though.
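One way to check this (a rough sketch; the variable-name patterns are just guesses, so adjust them to whatever your .env uses) is to inspect the environment of the running Node process itself rather than the login shell, since a process started by a service manager may not see the same variables:

# Dump the environment of the first running node process and filter for database-related variables
cat /proc/$(pgrep -f node | head -n1)/environ | tr '\0' '\n' | grep -i -E 'PG|DB|DATABASE'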
EDIT3: See comments for my workaround.
I would suggest testing the database credentials by connecting directly to the RDS database with the psql client from the EC2 instance.
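A minimal sketch, assuming the same values your app uses (replace the host and database name placeholders with your actual RDS endpoint and database):

# Connect directly from the EC2 instance to the RDS database; psql will prompt for the password
psql -h your-rds-endpoint.us-east-1.rds.amazonaws.com -U postgres -d your_database_name

If this fails with the same password authentication error, the problem is the credentials or the database user itself; if it succeeds, the problem is in how the application reads its configuration.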
I am trying to run the HashiCorp Vault server as a Windows service on a Windows 10 system.
The Vault server UI is showing a blank screen.
Please refer to my configuration details below.
config.hcl
ui = true

backend "consul" {
  address = "127.0.0.1:8500"
  path    = "vault/"
}

listener "tcp" {
  address       = "127.0.0.1:8200"
  tls_disable   = 1
  tls_cert_file = "c:/vault/config/certificate.crt"
  tls_key_file  = "c:/vault/config/privkey.key"
}
By default the Vault server runs at this local URL (http://localhost:8200/ui/). When I navigate to port 8200, a blank UI screen is displayed.
Console log of Vault UI
But at the same time, the HashiCorp Vault server UI loads fine if we run Vault as a container-based application.
Windows service command I used to run the Vault service:
sc.exe create VaultAgent binPath= "C:\vault\vault.exe server -config=C:\vault\config\config.hcl" displayName= "Vault Agent" start= auto
Note: vault.exe was downloaded from the Vault Windows AMD64 build at this URL.
I am able to receive a response from the Vault server. Please refer to the image.
vault server backend response
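For example, a direct request to the standard /v1/sys/health endpoint shows whether the API itself is reachable (adjust the scheme and address to match your listener configuration):

# Query Vault's health endpoint; a JSON body in the response confirms the server is answering
curl -i http://127.0.0.1:8200/v1/sys/health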
Note: The Consul service is up and running. Please refer to the image.
consul server up and running
How can I bring the Vault server UI up? Am I missing something?
Note: Below are the Vault server UI console logs
unseal:1 Refused to execute script from 'https://localhost:8200/ui/assets/vendor-dd308e6ebdb070a5a829a0c0d6e74f61.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
unseal:1 Refused to execute script from 'https://localhost:8200/ui/assets/vault-8a8f62829e5ad33487e21f63af47c80d.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
unseal:1 Refused to execute script from 'https://localhost:8200/ui/sw-registration-1b862bc1e33e4a8a41781d56c3469209.js' because its MIME type ('text/plain') is not executable, and strict MIME type checking is enabled.
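One way to confirm what Content-Type the server is actually sending for the UI assets, using the asset path from the console log above (-k skips certificate verification if the listener is serving a self-signed certificate):

# Fetch only the response headers for one of the UI assets and check the Content-Type
curl -kI https://localhost:8200/ui/assets/vendor-dd308e6ebdb070a5a829a0c0d6e74f61.js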
The latest version of HashiCorp Vault has a bug for this, and the issue is still open. Please refer to the URL:
GitHub issue link
So you can go back to an older version (1.8.8) of Vault and try to run it as a Windows service; the UI should then be up and running.
Note: v1.8.8 has its own feature set and does not have all the features of the latest Vault version.
Download Vault 1.8.8 for Windows
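A rough sketch of the rollback, assuming the service was created as VaultAgent with the sc.exe command above and the 1.8.8 binary has been downloaded to a local folder (paths are placeholders):

REM Stop the running service, swap in the 1.8.8 binary, then start the service again
sc.exe stop VaultAgent
copy /Y C:\Downloads\vault_1.8.8\vault.exe C:\vault\vault.exe
sc.exe start VaultAgent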
I'm trying to create a jdbc-connection-pool using Payara from the console, using ./asadmin in Payara_Server/bin/.
It is running on Linux, and the credentials for the database are user=jc and password=hola123 (dummy values). These credentials definitely work; I tried them in MariaDB.
I create the connection pool using ./asadmin on Payara; it looks like this:
./asadmin create-jdbc-connection-pool --datasourceclassname org.mariadb.jdbc.MariaDbDataSource \
    --restype javax.sql.DataSource --property user=jc:password=hola123:DatabaseName=cinev2:ServerName=localhost:port=3306 cinePool
Now, when I try:
./asadmin ping-connection-pool
I get an error like this:
remote failure: Ping Connection Pool failed for cinePool.
Connection could not be allocated because:
Access denied for user 'jc'@'localhost' to database 'cinev2' Please check the server.log for more details.
Command ping-connection-pool failed.
What could be the causes of this issue other than the credentials? I have checked that the credentials are right, and they are, so I have no clue what the issue is.
Since it works when connecting to the DB locally, it probably really is an access issue.
Please check if you did all the steps outlined here: Access denied for user 'root'@'localhost' (using password: YES) after new installation on Ubuntu
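For example, if the jc user simply has no privileges on the cinev2 database when connecting from localhost, a grant along these lines would address that exact error (run it as a privileged MariaDB user, and scope the privileges down to what the application really needs):

# Grant the application user access to the cinev2 database, then reload the privilege tables
mysql -u root -p -e "GRANT ALL PRIVILEGES ON cinev2.* TO 'jc'@'localhost'; FLUSH PRIVILEGES;"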
Whenever I try to access the AWS instance using ssh, I get the following error:
Connection blocked because server only allows public key authentication. Please contact your network administrator.
Connection to ec2-54-214-97-39.us-west-2.compute.amazonaws.com closed by remote host.
Connection to ec2-54-214-97-39.us-west-2.compute.amazonaws.com closed.
I am accessing it via an ssh-enabled command prompt:
chmod 400 virtue.pem
ssh -i "file.pem" ubuntu#ec2-publicIp.us-west-2.compute.amazonaws.com
I am unable to access the AWS instance virtual machine.
The error is like the one mentioned here:
https://laracasts.com/discuss/channels/servers/ssh-key-no-longer-working
You need to confirm that file.pem is the correct key to access the instance, and use chmod 400 to set permissions on the .pem file on your computer. You can view the logs in the AWS console to check whether there is any message about ssh access.
You can also launch another instance with another .pem, or detach the root volume and attach it to another instance to validate the config files.
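A couple of checks along these lines can help narrow it down (the instance ID below is a placeholder):

# Run ssh with verbose output to see which key is offered and why the server rejects it
ssh -vvv -i "file.pem" ubuntu@ec2-54-214-97-39.us-west-2.compute.amazonaws.com

# Pull the instance's console output from AWS to look for sshd or cloud-init errors
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text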
This may be a problem caused by a man-in-the-middle attack.
Change your network to a private one and retry!