Downloading existing key-pair (pem) file for ECS Instance Alibaba - alibaba-cloud

I am working on a client's project and they have Magento installed on their ECS instance. In order to SSH into it I need the .pem file that was generated when the key pair was set up. However, I am not able to get the .pem file from their end, so I am looking for a way to download the existing one instead. Is that even possible, or do I have to create a new key pair?

I wrote an article about Alibaba SSH Keypairs. If the keypair has been lost, you can replace it if you have Alibaba Cloud credentials (AccessKey and AccessKeySecret). This link to my article goes into specific details.
Alibaba Cloud SSH & ECS KeyPairs
The following commands require that the Alibaba Cloud CLI (aliyuncli) is installed and configured. I would back up (snapshot) the system before making the following changes.
This command creates a new key pair called "NewKeyPair":
aliyuncli ecs CreateKeyPair --RegionId us-west-1 --KeyPairName NewKeyPair
This command attaches NewKeyPair to the instance, replacing the current key pair (the quoting below is Windows syntax):
aliyuncli ecs AttachKeyPair --InstanceIds "[\"i-abcdeftvgllm854abcde\"]" --KeyPairName NewKeyPair
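Note that CreateKeyPair returns the private key (PrivateKeyBody) in its response, and I believe the instance has to be restarted before an attached key pair takes effect. As a rough sketch, assuming you saved that private key to NewKeyPair.pem and log in as root:
aliyuncli ecs RebootInstance --InstanceId i-abcdeftvgllm854abcde
chmod 400 NewKeyPair.pem
ssh -i NewKeyPair.pem root@<instance-public-ip>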

No, you can't download an existing key. In order to connect to the server via SSH, you need the private key that was generated when the key pair was created. You can ask your client for the key.

Related

How do I upload a file to Azurite from terminal?

I'm using Azurite and wish to create a container/upload a blob etc from the bash terminal!
I've tried using the Azure CLI like this:
az storage container create --account-name devstoreaccount1 --account-key Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw== --name mycontainer
But of course it doesn't work and complains of an authentication failure! By the way, the correct account key and name are used in that example.
I believe it's not possible to talk to Azurite using the Azure CLI.
All I want to do is create a container and upload a file to it from the terminal.
Does anybody know if this is possible? Or will I have to use a Java client (for example) to do the job?
Thanks
According to my test, when we use the account key and account name with the Azure CLI to create a blob container, the CLI connects to Azurite over HTTPS. But by default, Azurite only supports the HTTP protocol. For more details, please refer to here.
So I suggest you use a connection string to connect to Azurite with the Azure CLI; the connection string tells the Azure CLI to use HTTP.
For example
Create container
az storage container create -n test --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"
Upload file
az storage blob upload -f D:\test.csv -c test -n test.csv --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"
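If you want to verify the upload from the terminal as well, listing the blobs in the container (with the same connection string) should show the file:
az storage blob list -c test --output table --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://127.0.0.1:10000/devstoreaccount1;QueueEndpoint=http://127.0.0.1:10001/devstoreaccount1;"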

Unable to Clone CloudGoat from RhinoSecurityLabs

I am trying to install the Rhino Security Labs CloudGoat on my AWS Ubuntu 18.04 LTS Free-tier EC2 instance. I followed the directions for setting up an admin user and configuring the AWS CLI and also set up terraform v0.12 per the directions in the linked sites and the directions on GitHub. I also configured my instance's security group to allow All traffic.
However, when I run the git clone command I get a "Permission denied" error. See below for the full output:
sudo git clone git@github.com:RhinoSecurityLabs/cloudgoat.git ./CloudGoat
Cloning into './CloudGoat'...
The authenticity of host 'github.com (<ipv4>)' can't be established.
RSA key fingerprint is SHA256:<RSA key fingerprint>.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'github.com,<ipv4>' (RSA) to the list of known hosts.
git@github.com: Permission denied (publickey).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Do I need to associate an SSH key on GitHub with my account, and if so, how do I do that? I'm not sure what else to try at this point. Thanks.
This is because you don't have SSH keys on the EC2 instance that can authenticate your requests to GitHub. I encountered the same error when I was installing CloudGoat on my personal machine (not EC2), and it worked once I set up my SSH keys (generated them and added the public key to my GitHub profile).
You will probably need to do the same with EC2: generate a key pair on the instance and add the public key to your GitHub profile.
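A minimal sketch of that setup on the instance (the file names are the ssh-keygen defaults, and the email is just a label):
ssh-keygen -t ed25519 -C "you@example.com"
cat ~/.ssh/id_ed25519.pub
# paste the printed public key into GitHub -> Settings -> SSH and GPG keys -> New SSH key
ssh -T git@github.com
The last command should greet you by username instead of returning "Permission denied", after which the git clone should work.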

Replace Key pairs without rebooting or detaching/attaching the volumes in AWS ec2 instance

I want my existing ec2 instance to replace the keys and create New keys.
I don't want to reboot and detach/attach the volumes of ec2 instance. Is there any way to change the key pairs without creating the image/snapshot of the server?
Using keypairs with SSH is a function of the operating system. It has nothing specific to do with Amazon EC2.
When you use SSH to connect to an instance, you provide a private key. The Linux operating system on the target computer/instance will then look in the designated user's home directory, in the ~/.ssh/authorized_keys file. If a matching public key is found in that file, the login is permitted.
For example, if you are connecting with:
ssh -i key.pem ec2-user@IP-ADDRESS
then it will look in:
/home/ec2-user/.ssh/authorized_keys
Therefore, if you wish to add/replace keys for a user, simply edit the contents of the .ssh/authorized_keys file in the home directory.
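As a rough sketch (the file names are just examples, and the existing key.pem is still used for this one connection):
# generate a new key pair locally
ssh-keygen -t rsa -b 4096 -f newkey
# append the new public key to the instance's authorized_keys
cat newkey.pub | ssh -i key.pem ec2-user@IP-ADDRESS 'cat >> ~/.ssh/authorized_keys'
# confirm the new key works before removing the old entry from authorized_keys
ssh -i newkey ec2-user@IP-ADDRESS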
See:
How to replace public SSH keys on your AWS EC2 instance | VentureBeat
Add New User Accounts with SSH Access to an Amazon EC2 Linux Instance

Hdfs to s3 Distcp - Access Keys

For copying a file from HDFS to an S3 bucket I used the command
hadoop distcp -Dfs.s3a.access.key=ACCESS_KEY_HERE \
-Dfs.s3a.secret.key=SECRET_KEY_HERE /path/in/hdfs s3a://BUCKET_NAME
But the access key and secret key are visible here, which is not secure.
Is there any method to provide the credentials from a file?
I don't want to edit the config file, which is one of the methods I came across.
I also faced the same situation, and got temporary credentials from the instance metadata. (In case you're using an IAM user's credentials, please note that the temporary credentials mentioned here belong to an IAM role, which is attached to the EC2 server rather than to a person; refer to http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html.)
I found that only specifying the credentials in the hadoop distcp command will not work.
You also have to specify the config fs.s3a.aws.credentials.provider. (Refer to http://hortonworks.github.io/hdp-aws/s3-security/index.html#using-temporary-session-credentials.)
The final command will look like the one below:
hadoop distcp -Dfs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider" -Dfs.s3a.access.key="{AccessKeyId}" -Dfs.s3a.secret.key="{SecretAccessKey}" -Dfs.s3a.session.token="{SessionToken}" s3a://bucket/prefix/file /path/on/hdfs
Recent (2.8+) versions let you hide your credentials in a jceks file; there's some documentation on the Hadoop s3 page there. That way: no need to put any secrets on the command line at all; you just share them across the cluster and then, in the distcp command, set hadoop.security.credential.provider.path to the path, like jceks://hdfs@nn1.example.com:9001/user/backup/s3.jceks
Fan: if you are running in EC2, the IAM role credentials should be automatically picked up from the default chain of credential providers: after looking for the config options & env vars, it tries a GET of the EC2 http endpoint which serves up the session credentials. If that's not happening, make sure that com.amazonaws.auth.InstanceProfileCredentialsProvider is on the list of credential providers. It's a bit slower than the others (and can get throttled), so best to put near the end.
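If you do end up listing the providers explicitly, the property takes a comma-separated list of classes and can be passed like any other -D option. A sketch of what that might look like (provider class names as documented for S3A, with the instance profile provider last):
hadoop distcp \
-Dfs.s3a.aws.credentials.provider="org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider,com.amazonaws.auth.EnvironmentVariableCredentialsProvider,com.amazonaws.auth.InstanceProfileCredentialsProvider" \
/hdfs_folder/myfolder \
s3a://bucket/myfolder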
Amazon allows you to generate temporary credentials that you can retrieve from http://169.254.169.254/latest/meta-data/iam/security-credentials/
You can read there:
An application on the instance retrieves the security credentials provided by the role from the instance metadata item iam/security-credentials/role-name. The application is granted the permissions for the actions and resources that you've defined for the role through the security credentials associated with the role. These security credentials are temporary and we rotate them automatically. We make new credentials available at least five minutes prior to the expiration of the old credentials.
The following command retrieves the security credentials for an IAM role named s3access.
$ curl http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access
The following is example output.
{
  "Code" : "Success",
  "LastUpdated" : "2012-04-26T16:39:16Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "AKIAIOSFODNN7EXAMPLE",
  "SecretAccessKey" : "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
  "Token" : "token",
  "Expiration" : "2012-04-27T22:39:16Z"
}
For applications, AWS CLI, and Tools for Windows PowerShell commands that run on the instance, you do not have to explicitly get the temporary security credentials — the AWS SDKs, AWS CLI, and Tools for Windows PowerShell automatically get the credentials from the EC2 instance metadata service and use them. To make a call outside of the instance using temporary security credentials (for example, to test IAM policies), you must provide the access key, secret key, and the session token. For more information, see Using Temporary Security Credentials to Request Access to AWS Resources in the IAM User Guide.
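For example, to use those retrieved values outside the instance, they can be exported as the standard AWS environment variables (the values here are just the placeholders from the sample output above):
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
export AWS_SESSION_TOKEN="token"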
If you do not want to use the access and secret keys (or show them in your scripts) and your EC2 instance has access to S3, then you can use the instance credentials:
hadoop distcp \
-Dfs.s3a.aws.credentials.provider="com.amazonaws.auth.InstanceProfileCredentialsProvider" \
/hdfs_folder/myfolder \
s3a://bucket/myfolder
Not sure if it is because of a version difference, but to use "secrets from credential providers" the inline -Dfs.s3a.* options would not work for me; I had to use the -D flag with the credential provider path, as shown in the Hadoop 3.1.3 "Using_secrets_from_credential_providers" docs.
First I saved my AWS S3 credentials in a Java Cryptography Extension KeyStore (JCEKS) file.
hadoop credential create fs.s3a.access.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_ACCESS_KEY>
hadoop credential create fs.s3a.secret.key \
-provider jceks://hdfs/user/$USER/s3.jceks \
-value <my_AWS_SECRET_KEY>
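To confirm that both entries were stored in the keystore, something like the following should list them:
hadoop credential list -provider jceks://hdfs/user/$USER/s3.jceks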
Then the following distcp command format worked for me.
hadoop distcp \
-D hadoop.security.credential.provider.path=jceks://hdfs/user/$USER/s3.jceks \
/hdfs_folder/myfolder \
s3a://bucket/myfolder

upload directories from local computer to ec2 server

I was wondering how to set up FileZilla, or how to upload files to my EC2 server. Every time I try to set up FileZilla it says:
Error: Disconnected: No supported authentication methods available (server sent: publickey)
Error: Could not connect to server
And I have to go to my Downloads folder and log in with ssh -i key.pem user@ipaddress every time I want access, since my Mac won't automatically SSH from anywhere because I can't import the key into my keychain.
According to the FileZilla Docs, it should be possible:
FileZilla supports the standard SSH agents. If your SSH agent is running, the SSH_AUTH_SOCK environment variable should be set.
Here is documentation on how to set up ssh-agent.
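As a quick sketch of that agent setup (the key path assumes the .pem from the question is still in your Downloads folder):
eval "$(ssh-agent -s)"
chmod 400 ~/Downloads/key.pem
ssh-add ~/Downloads/key.pem
With the key loaded in the agent, FileZilla (in SFTP mode) and plain ssh should both be able to authenticate without passing -i each time.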
However I personally use Cyberduck as an SFTP client. When creating a new connection there, you can simply check "Use public key authorization" and give the path to your key file. Should be easier to set up.
You can use sshfs to mount the EC2 instance's directory onto a local folder.
So you have to do the following steps:
1. Install sshfs on your Mac.
2. Put your Mac's id_rsa.pub key inside authorized_keys in the .ssh/ folder on the EC2 instance. This will allow you to mount the EC2 directory to a local folder, and it will also allow you to SSH to the EC2 instance without using key.pem.
3. Mount the EC2 instance using the following command:
sshfs ubuntu@ec2-xx-xx-xx-xxx.compute-1.amazonaws.com: /<your new folder location>
4. Don't forget to give your folders write permissions, so that you can edit them remotely.
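Once it is mounted, uploading a directory is just a local copy (~/my-local-dir stands in for whatever you want to upload; the mount point is the folder from step 3):
cp -r ~/my-local-dir /<your new folder location>/
# unmount when you're done
umount /<your new folder location>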
Hope it helps.
