How do I upload a file to Azurite from terminal? - bash

I'm using Azurite and want to create a container and upload a blob, etc., from the bash terminal.
I've tried using the Azure CLI like this:
az storage container create --account-name devstoreaccount1 --account-key Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== --name mycontainer
But of course it doesn't work and complains of an authentication failure. By the way, the correct account key and name are used in that example.
I believe it's not possible to talk to Azurite using the Azure CLI.
All I want to do is create a container and upload a file to it from the terminal.
Does anybody know if this is possible? Or will I have to use a Java client (for example) to do the job?
Thanks

According to my test, when we use only the account key and account name with the Azure CLI to create a blob container, the CLI uses the HTTPS protocol to connect to Azurite. But by default, Azurite only supports the HTTP protocol. For more details, please refer to the Azurite documentation.
So I suggest you use a connection string to connect to Azurite with the Azure CLI; the connection string tells the Azure CLI to use the HTTP protocol.
For example:
Create container
az storage container create -n test --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"
Upload file
az storage blob upload -f D:\test.csv -c test -n test.csv --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"
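If you'd rather not repeat the connection string on every command, the az storage commands also read it from the AZURE_STORAGE_CONNECTION_STRING environment variable. A minimal sketch with the same default Azurite account (the container and file names are just examples):
# Export the Azurite connection string once for the current shell session
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"

# The commands then no longer need --connection-string
az storage container create -n test
az storage blob upload -f ./test.csv -c test -n test.csv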

Related

How can I download my ADB Wallet using automation scripts?

I've created OCI CLI and Terraform scripts to provision an autonomous database.
Is there a way to also download the wallet with Terraform / the OCI CLI?
Or is this only possible via the service console (web)?
After installing and configuring the OCI CLI by following the doc, you can use the "oci db autonomous-database generate-wallet" command to download the wallet.
oci db autonomous-database generate-wallet --autonomous-database-id ocid1.autonomousdwdatabase.oc1.phx.abyhqljt7jdqblkyzwhwufqjcbfrvs3behzr4eusjkcjc5xjtftbv --file wallet.zip --password Welcome1!
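For automation, a small wrapper around that command can download and unpack the wallet in one step. A minimal sketch, assuming the database OCID and wallet password are supplied by the caller (the variable names here are just examples, not OCI CLI parameters):
#!/usr/bin/env bash
set -euo pipefail

# ADB_OCID and WALLET_PASSWORD are assumed to be exported by the calling script
oci db autonomous-database generate-wallet \
    --autonomous-database-id "$ADB_OCID" \
    --file wallet.zip \
    --password "$WALLET_PASSWORD"

# Unpack the wallet so client tools can point TNS_ADMIN at the directory
mkdir -p wallet
unzip -o wallet.zip -d wallet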

Downloading existing key-pair (pem) file for ECS Instance Alibaba

I am working on a client's project and they have Magento installed on their ECS instance. In order to SSH into it, I need the .pem file that was generated when the key pair was set up. However, I am not able to get the .pem file from their end, and I am instead looking for a way to download the existing one. Is it even possible? Or do I have to create a new key pair?
I wrote an article about Alibaba SSH key pairs. If the key pair has been lost, you can replace it as long as you have Alibaba Cloud credentials (AccessKey and AccessKeySecret). My article goes into specific details:
Alibaba Cloud SSH & ECS KeyPairs
The following commands require that the Alibaba Cloud CLI (aliyuncli) is installed and set up. I would back up (snapshot) the system before making the following changes.
This command will create a new key pair called "NewKeyPair":
aliyuncli ecs CreateKeyPair --RegionId us-west-1 --KeyPairName NewKeyPair
This command will replace the current key pair with NewKeyPair (Windows syntax):
aliyuncli ecs AttachKeyPair --InstanceIds "[\"i-abcdeftvgllm854abcde\"]" --KeyPairName NewKeyPair
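A rough bash sketch of the whole flow, assuming aliyuncli returns JSON and that the private key is returned in a PrivateKeyBody field (both assumptions worth checking against the ECS CreateKeyPair docs); jq is used only for illustration:
# Create a new key pair and keep the raw API response (JSON output assumed)
aliyuncli ecs CreateKeyPair --RegionId us-west-1 --KeyPairName NewKeyPair > newkeypair.json

# Extract the private key; the PrivateKeyBody field name is an assumption - verify it
jq -r '.PrivateKeyBody' newkeypair.json > NewKeyPair.pem
chmod 600 NewKeyPair.pem

# Attach the new key pair to the instance (instance ID is the placeholder from above)
aliyuncli ecs AttachKeyPair --InstanceIds '["i-abcdeftvgllm854abcde"]' --KeyPairName NewKeyPair

# SSH in with the new private key once the change has taken effect
ssh -i NewKeyPair.pem root@<instance-public-ip>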
No, you can't download the existing key. In order to connect to the server via SSH, you need the private key that was generated when the key pair was created. You can ask your clients for the key.

How to run portworx backup to minio server

Trying to configure Portworx volume backups (pxctl cloudsnap) to a localhost Minio server (emulating S3).
The first step is to create cloud credentials using pxctl credentials create,
e.g.
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000
This results in:
Error configuring cloud provider.Make sure the credentials are correct: RequestError: send request failed caused by: Get https://10.0.0.1:9000/: EOF
Disabling SSL (which is not configured, as this is just a localhost test) gives me:
./pxctl credentials create --provider s3 --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl
Which returns:
Not authenticated with the secrets endpoint
I've tried this with both Minio gateway (NAS) and Minio server - same result.
The Portworx container is running within Rancher.
Any thoughts appreciated
Resolved via instructions at https://docs.portworx.com/secrets/portworx-with-kvdb.html
i.e. set the secret type to kvdb in /etc/pwx/config.json:
"secret": {
"cluster_secret_key": "",
"secret_type": "kvdb"
},
Then log in using ./pxctl secrets kvdb login
After this, credentials create succeeded, as did the subsequent cloudsnap backup. The test used the --s3-disable-ssl switch.
Note - kvdb stores secrets in plain text, so it's obviously not suitable for production.
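For reference, a condensed sketch of the resolved sequence, reusing the placeholder endpoint and keys from the question (a restart of the Portworx container after editing the config may be needed):
# 1. Set "secret_type": "kvdb" in /etc/pwx/config.json (see the snippet above)

# 2. Authenticate pxctl against the kvdb secrets endpoint
./pxctl secrets kvdb login

# 3. Create the S3 credentials pointing at the local Minio endpoint, with SSL disabled
./pxctl credentials create --provider s3 \
    --s3-access-key mybadaccesskey --s3-secret-key mybadsecretkey \
    --s3-region local --s3-endpoint 10.0.0.1:9000 --s3-disable-ssl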

Hdfs to s3 Distcp - Access Keys

For copying a file from HDFS to an S3 bucket I used the command:
hadoop distcp -Dfs.s3a.access.key=ACCESS_KEY_HERE \
-Dfs.s3a.secret.key=SECRET_KEY_HERE /path/in/hdfs s3a://BUCKET_NAME
But the access key and secret key are visible here, which is not secure.
Is there any method to provide the credentials from a file?
I don't want to edit the config file, which is one of the methods I came across.
I also faced the same situation, and got temporary credentials from the instance metadata. (In case you're using an IAM user's credentials, please note that the temporary credentials mentioned here come from an IAM role, which is attached to the EC2 server, not a person; refer to http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html)
I found that only specifying the credentials in the hadoop distcp command will not work.
You also have to specify the config fs.s3a.aws.credentials.provider. (Refer to http://hortonworks.github.io/hdp-aws/s3-security/index.html#using-temporary-session-credentials)
The final command will look like the one below:
hadoop distcp -Dfs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider" -Dfs.s3a.access.key="{AccessKeyId}" -Dfs.s3a.secret.key="{SecretAccessKey}" -Dfs.s3a.session.token="{SessionToken}" s3a://bucket/prefix/file /path/on/hdfs
Recent (2.8+) versions let you hide your credentials in a jceks file; there's some documentation on the Hadoop S3A page. That way: no need to put any secrets on the command line at all; you just share them across the cluster and then, in the distcp command, set hadoop.security.credential.provider.path to the path, like jceks://hdfs@nn1.example.com:9001/user/backup/s3.jceks
Fan: if you are running in EC2, the IAM role credentials should be automatically picked up from the default chain of credential providers: after looking for the config options & env vars, it tries a GET of the EC2 HTTP endpoint which serves up the session credentials. If that's not happening, make sure that com.amazonaws.auth.InstanceProfileCredentialsProvider is on the list of credential providers. It's a bit slower than the others (and can get throttled), so it's best to put it near the end.
Amazon allows you to generate temporary credentials that you can retrieve from http://169.254.169.254/latest/meta-data/iam/security-credentials/
From the documentation:
An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
The following command retrieves the security credentials for an IAM role named s3access.
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
The following is example output.
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2012-04-27T22:39:16Z"
}
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials — the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token. For more information, see Using Temporary Security Credentials to Request Access to AWS Resources in the IAM User Guide.
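Tying the quoted metadata endpoint back to the distcp command above, a hedged sketch that pulls the role's session credentials with curl and jq and passes them to distcp via TemporaryAWSCredentialsProvider (the role name, bucket, and paths are placeholders):
ROLE=s3access   # name of the IAM role attached to the instance (placeholder)

# Fetch the temporary credentials from the instance metadata service
CREDS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE)
ACCESS_KEY=$(echo "$CREDS" | jq -r '.AccessKeyId')
SECRET_KEY=$(echo "$CREDS" | jq -r '.SecretAccessKey')
TOKEN=$(echo "$CREDS" | jq -r '.Token')

# Run distcp with the session credentials instead of hard-coded keys
hadoop distcp \
    -Dfs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider" \
    -Dfs.s3a.access.key="$ACCESS_KEY" \
    -Dfs.s3a.secret.key="$SECRET_KEY" \
    -Dfs.s3a.session.token="$TOKEN" \
    /path/in/hdfs s3a://BUCKET_NAME/prefix/
Note that the keys still end up in the launched process's arguments, so the jceks approach described in the other answers is preferable when that matters.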
If you do not want to use the access and secret key (or show them in your scripts), and your EC2 instance has access to S3, then you can use the instance credentials:
hadoop distcp \
-Dfs.s3a.aws.credentials.provider="com.amazonaws.auth.InstanceProfileCredentialsProvider" \
/hdfs_folder/myfolder \
s3a://bucket/myfolder
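Before relying on that, you can sanity-check that an instance profile is actually attached by listing the role name from the metadata endpoint quoted above:
# Should print the name of the IAM role attached to this instance;
# an empty response or a 404 means no instance profile is attached
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/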
Not sure if it is because of a version difference, but to use "secrets from credential providers", passing the keys with -Dfs.s3a.* flags would not work for me; I had to use the -D hadoop.security.credential.provider.path option as shown in the Hadoop 3.1.3 "Using_secrets_from_credential_providers" docs.
First I saved my AWS S3 credentials in a Java Cryptography Extension KeyStore (JCEKS) file.
hadoop credential create fs.s3a.access.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_ACCESS_KEY>
hadoop credential create fs.s3a.secret.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_SECRET_KEY>
Then the following distcp command format worked for me.
hadoop distcp \
-D hadoop.security.credential.provider.path=jceks://hdfs/user/$USER/s3.jceks \
/hdfs_folder/myfolder \
s3a://bucket/myfolder
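If it helps, you can confirm what was stored before running distcp; hadoop credential list should show both entries:
# List the credential entries stored in the JCEKS keystore
hadoop credential list -provider jceks://hdfs/user/$USER/s3.jceks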

storage blob upload command failed after setting AZURE_STORAGE_CONNECTION_STRING

I'm trying to upload a file to my Azure storage. I did:
$ set AZURE_STORAGE_CONNECTION_STRING=DefaultEndpointsProtocol=https;AccountName=**;AccountKey=**
but when I did:
$ azure storage blob upload PATHFILE mycontainer data/book_270.pdf
then I got the following error:
info: Executing command storage blob upload
error: Please set the storage account parameters or one of the following two environment variables to use the storage command.
AZURE_STORAGE_CONNECTION_STRING
AZURE_STORAGE_ACCOUNT and AZURE_STORAGE_ACCESS_KEY
info: Error information has been recorded to /Users/uqtngu83/.azure/azure.err
error: storage blob upload command failed
But I already set AZURE_STORAGE_CONNECTION_STRING! Please help
As suggested in the comments, you are supposed to run the following in your Mac terminal (replace DefaultBlaBlaBla with your Azure Storage connection string):
export AZURE_STORAGE_CONNECTION_STRING="DefaultBlaBlaBla"
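Note that set is Windows cmd syntax; in bash on macOS you need export, and the string should be quoted because it contains semicolons. A sketch using the (redacted) values from the question:
# Quote the connection string - unquoted semicolons would otherwise split the command
export AZURE_STORAGE_CONNECTION_STRING="DefaultEndpointsProtocol=https;AccountName=**;AccountKey=**"

# The upload command from the question should now pick up the environment variable
azure storage blob upload PATHFILE mycontainer data/book_270.pdf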
