I have a MinIO server running on Debian under systemd, proxied with NGINX, and secured with Let's Encrypt. The docs suggest the service is comparable to Amazon S3, but I can't figure out how to actually use it.
Version: 2019-03-27T22:35:21Z
Release-Tag: RELEASE.2019-03-27T22-35-21Z
Commit-ID: 6df05e489dc789cf26e82810cf5cfeefb1d90761
It looks like in order to create a bucket or use the MinIO CLI mc, there needs to be a registered target along with an accessKey and secretKey. I can't find that information anywhere on the server, and it's not clear to me how to create a new target.
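For reference, registering a target with mc looks roughly like this (the alias, URL, and keys below are placeholders, not values I actually have):
$ mc config host add myminio https://minio.example.com ACCESS_KEY SECRET_KEY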
Here is the /etc/default/minio file:
MINIO_VOLUMES="/usr/local/share/minio"
MINIO_OPTS="-C /etc/minio --address :9000"
There are no files in /etc/minio.
It's running and set up, but how can I start actually using the minio server?
Edit: Config JSON
I tried creating a new config file and entering a new accessKey and secretKey in the credential field. I was not able to sign in to the MinIO Browser app using those keys.
Edit: Key Files
I tried entering a new access key and secret key into the files /etc/minio/access_key and /etc/minio/secret_key and adding the following lines to the /etc/default/minio environment file:
MINIO_ACCESS_KEY_FILE="/etc/minio/access_key"
MINIO_SECRET_KEY_FILE="/etc/minio/secret_key"
I restarted the service (systemctl restart minio), but I still can't log in to the MinIO Browser app.
The only method that worked was providing MINIO_ACCESS_KEY and MINIO_SECRET_KEY in the /etc/default/minio environment file. Every other method failed.
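For reference, the /etc/default/minio that finally worked looked like this (the key values are placeholders):
MINIO_VOLUMES="/usr/local/share/minio"
MINIO_OPTS="-C /etc/minio --address :9000"
MINIO_ACCESS_KEY="ACCESS_KEY_PLACEHOLDER"
MINIO_SECRET_KEY="SECRET_KEY_PLACEHOLDER"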
I used the following to generate a secret key that resembles the AWS-style keys in the examples. From the CLI help text, though, it looks like any access key and secret key would work.
SecureRandom.urlsafe_base64(30)
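A roughly equivalent shell one-liner, assuming openssl is available:
$ openssl rand -base64 30 | tr '+/' '-_'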
Related
I have a Golang service with an exposed API through which we try to upload a CSV to a GCP bucket. On my local host, I set the environment variable GOOGLE_APPLICATION_CREDENTIALS
and point it to the file path of the service account JSON. But when deploying to an actual GCP instance, I'm getting the error below while trying to access this API. Ideally, the service should talk to the GCP metadata server, fetch the credentials, and then store them in a JSON file. So there are 2 problems here:
The service is not querying the metadata server to get the credentials.
If the file is present (I created it manually), the service is not able to access it due to permission issues.
Any help would be appreciated.
Error while initializing storage Client:dialing: google: error getting credentials using well-known file (/root/.config/gcloud/application_default_credentials.json): open /root/.config/gcloud/application_default_credentials.json: permission denied
Finally, after long debugging and searching over the web, I found out that there's already an open issue for the Go storage client: https://github.com/golang/oauth2/issues/337. I had to make a few changes in the code using this method: https://pkg.go.dev/golang.org/x/oauth2/google#ComputeTokenSource, which basically fetches the token explicitly from the metadata server before calling the subsequent cloud APIs.
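As a quick sanity check that the metadata server is reachable and serving a token for the instance's default service account, you can run this from the instance:
$ curl -H "Metadata-Flavor: Google" "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"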
I'm a newbie to AWS and Ruby. I'm trying to get the access key and the secret key using aws-sdk-ruby, because for security reasons we do not have access to keys in the AWS Console. I tried get_federation_token and 'assume role with SAML' to get the keys, but I was not successful. I tried both ways because we use saml2aws and aws-federator to log in to AWS via the CLI. I want to replicate that and get keys I can use to connect successfully.
Have you tried setting up env vars in a .env file and then fetching them via ENV.fetch('variable_name'), e.g. ENV.fetch('S3_BUCKET_NAME')? This works consistently for me.
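A minimal sketch, assuming the dotenv gem loads the file at startup; the variable names are illustrative:
# .env
S3_BUCKET_NAME=my-bucket
AWS_ACCESS_KEY_ID=anaccesskey
AWS_SECRET_ACCESS_KEY=asecretkey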
I want to write a cloud-init script which initializes the REX-Ray Docker plugin (a service that uses AWS credentials in its configuration).
I have considered the following methods. However, these methods have some disadvantages.
Hard-code the access key/secret key in the cloud-init script.
Problem: this is not secure.
Create an IAM role, then read the access key and secret key from the instance metadata.
Problem: the access key expires after a certain period,
so I would need to restart the REX-Ray daemon process, which makes the service temporarily unavailable.
Please tell me which is the better way to provide the access key/secret key, or another way if one exists.
Thanks in advance.
The Docker plugin should get the credentials automatically; you don't have to do anything. Do not set any environment variables for AWS credentials.
The AWS CLI / AWS SDKs will get the credentials automatically from the metadata server.
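You can confirm from the instance that role credentials are being served by querying the instance metadata endpoint (the trailing role name below is a placeholder):
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/my-role-name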
You can use the following methods of authentication:
Environment variables
Export both the access key and the secret key as environment variables, as follows:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared credentials file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux and OS X, or %USERPROFILE%\.aws\credentials on Windows. If Terraform fails to detect credentials inline or in the environment, it will check this location.
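A minimal credentials file looks like this (reusing the placeholder keys from above):
[default]
aws_access_key_id = anaccesskey
aws_secret_access_key = asecretkey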
You can optionally specify a different location in the configuration by providing the shared_credentials_file attribute, as follows:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
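Alternatively, you can select the profile through the environment instead of the configuration; this is a sketch, assuming the provider honors AWS_PROFILE the same way the AWS SDKs do:
$ export AWS_PROFILE=customprofile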
https://www.terraform.io/docs/providers/aws/
I created an Ubuntu server on Amazon AWS.
Then I registered for Forge, and am now trying to configure it.
I selected source control to be Bitbucket.
I selected Amazon in the Server Provider section, but now I am not sure what to put in the key and secret fields.
I found the answer to this question.
We need to create an IAM user and opt for an API access key and secret.
Also remember to give this user at least full EC2 admin access (e.g. the AmazonEC2FullAccess managed policy) before initiating the process to create and provision the server via Forge.
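If you prefer the CLI to the console, the steps look roughly like this (the user name forge is a placeholder):
$ aws iam create-user --user-name forge
$ aws iam attach-user-policy --user-name forge --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
$ aws iam create-access-key --user-name forge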
I am not able to log in to an AWS EC2 Bitnami instance.
I created a new key pair for the EC2 instance and converted it into a .ppk with PuTTYgen.
I tried to log in with different user names like bitnami, ec2-user, ubuntu, and root, but without any success. I have read many blogs, the Amazon documentation, and the Bitnami documentation and applied the processes described there, but still no success.
I created a new security group and allowed access for SSH, HTTP, and HTTPS on their default ports.
Server Details.
Instance type : m1.small
Description : https://bitnami.com
Status : available
Platform : Ubuntu
Image Size : 10GB
Visibility : Public
bitnami-magento-1.9.0.1-0-linux-ubuntu-12.04.4-x86_64-ebs
Whenever I try to log in with SSH, I get the error message:
Disconnected: No supported authentication methods available (server sent: publickey)
Help is very much appreciated.
Thanks
It looks like there is an issue with your public key file. My guess is that it's the PuTTY quirk which requires an extra newline character at the end of the key file.
When creating the public key, open it in PuTTYgen and copy and paste the key (this formats it on one line, followed by a newline) into your authorized_keys file, then try to log in again.
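As a cross-check, try the same key pair with OpenSSH from another machine (the key file name and host below are placeholders); if this works, the problem is on the PuTTY/.ppk side:
$ chmod 400 bitnami-key.pem
$ ssh -i bitnami-key.pem bitnami@ec2-xx-xx-xx-xx.compute-1.amazonaws.com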
For more information, read a similar question.