aws with eclipse cannot find keypair - amazon-ec2

I just installed the AWS Eclipse Toolkit and am having problems with its key pairs. When I go to Eclipse -> Window -> Preferences -> AWS Toolkit -> Key Pairs, I find no icons or names for my key pair .pem file.

I first downloaded all of the Eclipse AWS software modules with no issue (I did not include the Android module, as instructed). I downloaded the credentials and key pair files for my EC2 instance and placed the credentials file in the .aws directory. I then filled out the AWS Toolkit credentials window, pasting my access key ID and secret access key into the default profile details. I think Eclipse is seeing it, because after I restarted Eclipse it created another credentials file with the ID and secret key reformatted. I placed my keypair.pem in the .ec2 directory.

Like I said earlier, when I go to the Preferences -> Key Pairs window, there is nothing in the name field and I cannot associate my private keys with my Amazon EC2 key pairs. Any help would be welcome. Best regards.

Ok, I finally figured it out.
I am new to AWS and did not understand that I first had to set up an IAM user and group and attach the necessary policies. I found a pretty good video that goes over user/group settings.
Once I set up the policies, Eclipse had immediate access to the AWS cloud resources. When I opened Eclipse's Window -> Preferences -> Key Pairs, my AWS key pairs were displayed. I clicked on the one I had set up for the account and everything worked fine.

Related

Keep My Env Parameters Secure While Deploying To AWS

I have a project in Laravel 8 with some secret env parameters, and I do not want to ship them to GitHub with my application. I will deploy my application with GitHub Actions to AWS Elastic Beanstalk. How do I keep all the secrets secure and get them onto the EC2 instance once the application is deployed?
There are multiple ways to do that; whichever you choose, you should not push your .env file to GitHub with your application.
You can use Beanstalk's own environment properties page. However, if you do that, any other developer with access to your AWS account can see all the env parameters; it is a simple key-value page:
Beanstalk Panel -> (Select Your Environment) -> Configuration -> Software
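For reference, a rough sketch of setting those same environment properties through the API instead of the console, assuming boto3 is available; the environment name and values below are hypothetical examples (and remember they remain visible to anyone with console access):

```python
# Sketch: set Beanstalk environment properties with boto3 -- the API
# equivalent of Configuration -> Software in the console.
# The environment name and values are hypothetical placeholders.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

eb.update_environment(
    EnvironmentName="my-laravel-env",  # hypothetical environment
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:application:environment",
            "OptionName": "APP_ENV",
            "Value": "production",
        },
    ],
)
```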
Under Systems Manager there is a service called Parameter Store (this is my preferred way).
Here you can securely add as many parameters as you like. You can add plain String parameters as well as SecureString parameters (for things like passwords or API keys) and StringLists, but String and SecureString are the types I use most.
You can split all your parameters by path, like "/APP_NAME/DB_NAME", etc.
You should fetch all the parameters from Parameter Store on your EC2 instance and put them in a newly created .env file.
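As a minimal sketch of that last step, assuming boto3 and a hypothetical "/myapp/" parameter path, the instance can page through the parameters and write them to a fresh .env file:

```python
# Sketch: read all parameters under a path from SSM Parameter Store and
# write them to a new .env file. Assumes boto3 and an instance role that
# may call ssm:GetParametersByPath; "/myapp/" is a hypothetical path.
import boto3

ssm = boto3.client("ssm", region_name="eu-west-1")

def write_env(path="/myapp/", env_file=".env"):
    paginator = ssm.get_paginator("get_parameters_by_path")
    with open(env_file, "w") as f:
        for page in paginator.paginate(Path=path, WithDecryption=True):
            for param in page["Parameters"]:
                key = param["Name"].rsplit("/", 1)[-1]  # "/myapp/DB_NAME" -> "DB_NAME"
                f.write(f"{key}={param['Value']}\n")

write_env()
```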
GitHub Actions also has GitHub Secrets: you can put all your secret parameters on your repository's secrets page, read them in your workflow, inject them into your application, and ship from GitHub to AWS directly.
You can find this page under Settings -> Secrets in your repository.

Google Cloud Storage - Handle rotating keys from outside the environment

I need help with handling rotating keys on Google Cloud Storage that is managed by one Google account but accessed by an app running in another Google Cloud account. I tried searching for solutions but couldn't find an answer.
With IAM you can grant permissions at the project level and, for some resources, at the resource level.
That is the case for your KMS keys: you can grant permissions on the key ring, or directly at the key level.
Choose what works best for your use case, and grant the external project's service account the correct permission (Decrypter to read the files in GCS, Encrypter to write files).
Note: a key rotation is just a new version of the key, so permissions granted on the key keep applying after rotation.
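As an illustration, a sketch of granting the external project's service account the Decrypter role on a single key, assuming the google-cloud-kms client library; every project, ring, key, and account name below is a hypothetical placeholder:

```python
# Sketch: grant a service account from another project the CryptoKey
# Decrypter role on one key. Assumes google-cloud-kms; all names are
# hypothetical placeholders.
from google.cloud import kms

client = kms.KeyManagementServiceClient()
key_name = client.crypto_key_path(
    "host-project", "us-east1", "my-key-ring", "my-key"
)

# Fetch the key's IAM policy, append a binding, and write it back.
policy = client.get_iam_policy(request={"resource": key_name})
policy.bindings.add(
    role="roles/cloudkms.cryptoKeyDecrypter",
    members=["serviceAccount:app@external-project.iam.gserviceaccount.com"],
)
client.set_iam_policy(request={"resource": key_name, "policy": policy})
```

Because a rotation only adds a new version of the same key, a binding like this keeps working after the key rotates.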

Is there a method preventing dynamic ec2 key pairs from being written to tfstate file?

We are starting to use Terraform to build our AWS EC2 infrastructure but would like to do this as securely as possible.
Ideally, what we would like to do is dynamically create a key pair for each Windows EC2 instance and store the private key in Vault. This is possible, but I cannot think of a way of implementing it without the private key being written to the tfstate file. Yes, I know I can store the tfstate file in an encrypted S3 bucket, but this does not seem like an optimally secure solution.
I am happy to write custom code if need be to have the key pair generated via another mechanism and the name passed as a variable to Terraform, but I don't want to if there are more robust and tested methods out there. I was hoping we could use Vault to do this, but from my research it does not look possible.
Has anyone got any experience of doing something similar? Failing that, any suggestions?
The most secure option is to use an arbitrary key pair whose private key you destroy, plus user_data that joins the instances to an AWS Managed Microsoft AD domain controller. After that you can use conventional AD users and groups to control access to the instances (but not group policy in any depth, regrettably). You'll need a domain member server to administer AD at that level of detail.
If you really need to be able to use local admin on these Windows EC2 instances, then you'll need to create the key pair for decrypting the password once, manually, and then share it securely with other admins through a secret or password manager such as Vault or 1Password.
I don't see any security advantage to creating a key pair per instance, just considerable added complexity. If you're concerned about exposure, change the administrator passwords after obtaining them and store those in your secret or password manager.
I still advise going with AD if you are using Windows. Windows with AD enables world-leading unified endpoint management, and Microsoft has held the lead there for a very long time.
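If you do decide to generate key pairs outside Terraform anyway, as the question contemplates, one possible sketch (assuming boto3, hvac, and a Vault KV v2 mount; the Vault address and all names are hypothetical) is to create the key pair first, store the private key in Vault, and hand Terraform only the key pair name:

```python
# Sketch: create an EC2 key pair out-of-band, store the private key in
# Vault, and pass only the key pair *name* to Terraform, so the private
# key never reaches the tfstate file. Assumes boto3 and hvac; the Vault
# address, secret path, and key pair name are hypothetical.
import boto3
import hvac

ec2 = boto3.client("ec2", region_name="us-east-1")
vault = hvac.Client(url="https://vault.example.com:8200")  # token via VAULT_TOKEN

def provision_key_pair(name):
    resp = ec2.create_key_pair(KeyName=name)
    vault.secrets.kv.v2.create_or_update_secret(
        path=f"ec2-keys/{name}",
        secret={"private_key": resp["KeyMaterial"]},
    )
    return name  # pass to Terraform, e.g. -var "key_name=..."

provision_key_pair("windows-ec2-01")
```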

trying to use Amazon S3 on Ghost running on Heroku to store all the images instead of storing them locally

I've been trying to set up the Ghost storage adapter S3 on my Ghost 1.7 installation, but I must be missing something along the way. I created a bucket with policies that allow access to a previously created IAM user with AmazonS3FullAccess permissions; so far so good. I've added the lines to config.production.json with the access key and secret key from IAM as the README says, but it's not working properly. I've attached a screenshot of the Heroku logs.
Well, I couldn't find a fix for version 1.7, but after updating Ghost to 1.21.1 it works correctly.

How paranoid should I be about my Azure application binary files being stolen?

I need to migrate a huge application to Windows Azure. The application depends on a third-party library that requires an activation key stored in a special binary file on the role instance filesystem.
Obviously that key has to be either included in the role package or stored somewhere the role can fetch it. The activation key will not be bound to the machine (since I have no idea where exactly a role will run in the cloud), so anyone could use it to make a copy of that library work.
Since Azure roles run somewhere not under our control, I feel somewhat paranoid about the key being stolen and made widely available.
How do I evaluate how likely it is that a binary file included in an Azure role will be stolen? How do I mitigate such risks?
When asking questions like this you need to differentiate your attackers:
A2) Rogue internet hacker
A3) A developer in your organization
A4) A person in your organization who does deployments
Your role binaries are not accessible to A2; however, they are very accessible to A3 and A4.
As mentioned in another answer, you can store your secrets in the Role Configuration File.
However, these secrets are still accessible to A4 (anyone with access to the portal).
To defend against A4 you want to encrypt the secrets in the role configuration file with a key that isn't present in the role binaries or the role configuration. Then, in your role binaries, you decrypt the encrypted role setting with the decryption key.
The key used for encrypting the secrets is a certificate, which the Azure SDK will install for you. You can find a post about setting up certificates for Azure here.
In summary, for secrets where you need to defend against people in your organization who do deployments or have access to your configuration files, you do the following:
Have a trusted party do the following:
Generate a Certificate/Private Key
Encrypt secrets with the Certificate, and store the encrypted settings in your configuration files
Upload the Certificate/Private Key to your service.
Then modify your service to:
Have the service model install the Certificate/Private Key
Have your application load the private key for decrypting secrets
Have your application load the encrypted settings from role configuration, and decrypt them with the private key.
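To make the encrypt/decrypt halves concrete, here is a rough sketch using the Python cryptography package; the file paths are hypothetical, and a real role would load the private key from the installed certificate store rather than a local PEM file:

```python
# Sketch: encrypt a setting with a certificate's public key (done once
# by the trusted party) and decrypt it at runtime with the private key.
# Assumes the "cryptography" package; paths are hypothetical, and an
# Azure role would normally read the private key from its installed
# certificate rather than a local PEM file.
import base64
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

def encrypt_setting(cert_pem_path, secret):
    # Trusted party: the output string goes into the configuration file.
    cert = x509.load_pem_x509_certificate(open(cert_pem_path, "rb").read())
    return base64.b64encode(cert.public_key().encrypt(secret.encode(), OAEP)).decode()

def decrypt_setting(key_pem_path, encrypted_setting):
    # Role at runtime: recover the plain-text secret.
    key = serialization.load_pem_private_key(
        open(key_pem_path, "rb").read(), password=None
    )
    return key.decrypt(base64.b64decode(encrypted_setting), OAEP).decode()
```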
As far as security is concerned, unless your team is extremely capable in this area, the default, always-upgraded Azure OS is probably much more secure than any self-configured host. At least, this is how we (my company, Lokad) assess the situation two years after migrating in full to Windows Azure, compared to our previous self-hosted situation.
If you have credentials such as licence keys, the most logical place to put them is the Role configuration file, not the Role binaries. You can regenerate the special file within the VM as needed, and this way you don't spread your credentials all over the place internally as you archive your binary versions.
