How to grant EC2 access to SQS - Laravel

The docs are very confusing to me. I have read through the SQS access docs. But what really throws me is this page: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-sqs.html
You can provide your credential profile like in the preceding example,
specify your access keys directly (via key and secret), or you can
choose to omit any credential information if you are using AWS
Identity and Access Management (IAM) roles for EC2 instances or
credentials sourced from the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables.
1) Regarding the part I have bolded (omitting credential information when using IAM roles for EC2 instances), how is that possible? I cannot find any steps for granting EC2 instances access to SQS using IAM roles. This is very confusing.
2) Where would the aforementioned environment variables be placed? And where would you get the key and secret from?
Can someone help clarify?

There are several ways that applications can discover AWS credentials. Any software using the AWS SDK automatically looks in these locations. This includes the AWS Command-Line Interface (CLI), which is a Python app that uses the AWS SDK.
Your bold words refer to #3, below:
1. Environment Variables
The SDK will look for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. This is a great way to provide credentials because there is no danger of accidentally committing a credentials file to GitHub or other repositories. On Windows, use the System control panel to set the variables. On Mac/Linux, just export the variables from the shell.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the environment variables.
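For example, here is a minimal sketch (Python/boto3 shown for brevity; the PHP SDK resolves credentials the same way) assuming the two variables above are already exported. Note that no keys appear in the code itself:

import boto3

# AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are read from the
# environment automatically; nothing is passed explicitly here.
sqs = boto3.client("sqs", region_name="us-east-1")
print(sqs.list_queues())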
2. Local Credentials File
The SDK will look in local configuration files, such as:
~/.aws/credentials
C:\users\awsuser\.aws\credentials
These files are great for storing user-specific credentials and can actually store multiple profiles, each with their own credentials. This is useful for switching between different environments such as Dev and Test.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the configuration file.
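For example, a credentials file with two profiles might look like this (the profile names and keys below are placeholders taken from the AWS documentation examples):

[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[dev]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY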
3. IAM Roles on an Amazon EC2 instance
An IAM role can be associated with an Amazon EC2 instance at launch time. Temporary credentials are then automatically made available via the instance metadata service at the URL:
http://instance-data/latest/meta-data/iam/security-credentials/<role-name>/
(the instance-data hostname is an alias, available inside EC2, for the instance metadata service; the well-known address 169.254.169.254 also works)
This will return meta-data that contains AWS credentials, for example:
{
  "Code" : "Success",
  "LastUpdated" : "2015-08-27T05:09:23Z",
  "Type" : "AWS-HMAC",
  "AccessKeyId" : "ASIAI5OXLTT3D5NCV5MS",
  "SecretAccessKey" : "sGoHyFaVLIsjm4WszUXJfyS1TVN6bAIWIrcFrRlt",
  "Token" : "AQoDYXdzED4a4AP79/SbIPdV5N8k....lZwERog07b6rgU=",
  "Expiration" : "2015-08-27T11:11:50Z"
}
These credentials inherit the permissions of the IAM role that was assigned when the instance was launched. They automatically rotate every 6 hours (note the Expiration in this example, approximately 6 hours after the LastUpdated time).
Applications that use the AWS SDK will automatically look at this URL to retrieve security credentials. Of course, they will only be available when running on an Amazon EC2 instance.
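To see this in action, here is a minimal sketch that fetches the credentials by hand (this assumes the older IMDSv1 endpoint is enabled; IMDSv2 additionally requires a session token header):

import urllib.request

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# The bare URL lists the name of the role attached to the instance.
role = urllib.request.urlopen(BASE).read().decode().strip()

# Appending the role name returns the JSON credentials shown above.
print(urllib.request.urlopen(BASE + role).read().decode())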
Credentials Provider Chain
Each particular AWS SDK (e.g. Java, .NET, PHP) may look for credentials in different locations. For further details, refer to the appropriate documentation, e.g.:
Providing AWS Credentials in the AWS SDK for Java
Providing AWS Credentials in the AWS SDK for .NET
Providing AWS Credentials in the AWS SDK for PHP

Related

How can I tell boto3 lambda invoke to use a specified iam role and not the one in my aws config file?

I am trying to invoke an AWS Lambda from an Azure Function using boto3. I have everything working using my two personal accounts: I created an AWS configuration file using my personal account details. Now I have moved both the Azure Function and the AWS Lambda to my work dev environments. My work does not want me to use AWS credentials ("access_key_id" and "secret_access_key"). Is there a way around this? A way to tell boto3 not to use the credentials in the AWS config file, and instead use a specified role?
import base64
import json

import boto3

client = boto3.client('lambda')
response = client.invoke(
    FunctionName='arn:aws:lambda:us-##-#:##############:function:azure-to-s3',
    InvocationType='Event',
    Payload=json.dumps({
        'file_name': filename,  # filename and rawfile come from the Azure Function's input
        'file_bytes': base64.b85encode(rawfile).decode('utf-8'),
    }),
)
Assuming you have access to your work's AWS account via the console, you can give any account permission to invoke a Lambda.
Via the console:
Browse to the specified Lambda
Configuration -> Permissions
Click 'Add permission' under the 'Resource-based policy' section
The equivalent boto3-function would be add_permission().
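A minimal sketch of that call (the function name, statement ID, and account ID below are placeholders for your own values):

import boto3

lam = boto3.client("lambda", region_name="us-west-2")

# Allow another AWS account (the one whose credentials the Azure
# Function will use) to invoke this function.
lam.add_permission(
    FunctionName="azure-to-s3",
    StatementId="allow-cross-account-invoke",
    Action="lambda:InvokeFunction",
    Principal="111111111111",
)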
After that, you can invoke the Lambda from the Azure account as you would normally, but using your own credentials.
Credentials can be set in one of three ways:
setting the credentials in ~/.aws/credentials, if you're on a VM
setting the credentials as environment variables:
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
setting the credentials as parameters when using boto3:
boto3.client("lambda", region_name="us-west-2", aws_access_key_id="ak", aws_secret_access_key="sk")

Google Cloud Logging Authentication / permissions

I am using the Golang library cloud.google.com/go/logging and want to send runtime logs.
Already have a GOOGLE_APPLICATION_CREDENTIALS .json file - and am using google storage and firebase - so I know the credentials are working.
With logging, I get an error "Error 403: The caller does not have permission, forbidden"
The account in the application credentials is a service account and I have been looking at the IAM permissions. There is not an obvious permission for logging (there are other Stackdriver permissions, for debug, trace etc, but these don't seem to work).
So assuming I am in the right place so far - what permissions does the service account need in order to send logging data to stackdriver logging?
If we look at the API for writing entries to a log we find that the IAM permission logging.logEntries.create is required.
A more detailed article can be found at Access control guide.
This describes a variety of roles including:
roles/logging.logWriter
According to the official documentation:
Using Stackdriver Logging library for Go requires the Cloud IAM Logs
Writer role on Google Cloud. Most Google Cloud environments provide
this role by default.
1. App Engine grants the Logs Writer role by default.
2. On Google Kubernetes Engine, you must add the logging.write access scope when creating the cluster.
3. When using Compute Engine VM instances, add the cloud-platform access scope to each instance.
4. To use the Stackdriver Logging library for Go outside of Google Cloud, including running the library on your own workstation, on your data center's computers, or on the VM instances of another cloud provider, you must supply your Google Cloud project ID and appropriate service account credentials directly to the Stackdriver Logging library for Go.
You can create and obtain service account credentials manually. When specifying the Role field, use the Logs Writer role. For more information on Cloud Identity and Access Management roles, go to Access control guide.
Setting Up Stackdriver Logging for Go
gcloud iam service-accounts list
gcloud projects add-iam-policy-binding my-project-123 \
  --member serviceAccount:my-sa-123@my-project-123.iam.gserviceaccount.com \
  --role roles/logging.logWriter

How to pass password securely in EC2 bootstrap script?

I have an application which I am trying to install via a bootstrap script for an EC2 instance. This application needs a password to be provided during the installation. What is the most secure way to provide the password during the bootstrap process?
The recommended method is:
Store the password in the AWS Secrets Manager
Assign an IAM Role to the EC2 instance
Grant permissions to the Role to access the secret in the Secrets Manager
Add code to the startup script to retrieve the secret from the Secrets Manager
The code in the startup script will automatically use the permissions assigned to the role associated with the EC2 instance.
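For example, the retrieval step might look like this in Python (the secret name and region are placeholders; the same lookup is available to a shell script via aws secretsmanager get-secret-value):

import boto3

# The instance's IAM role must allow secretsmanager:GetSecretValue on
# this secret; no access keys are needed in the script itself.
sm = boto3.client("secretsmanager", region_name="us-east-1")
resp = sm.get_secret_value(SecretId="prod/app/install-password")
password = resp["SecretString"]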

How to refer AWS access key(secret key) in cloud-init without hard coding

I want to write a cloud-init script which initializes the REX-Ray Docker plugin (a service which uses AWS credentials in its configuration).
I have considered the following methods. However, these methods have some disadvantages.
1. Hard-code the access key/secret key in the cloud-init script.
Problem: this is not secure.
2. Create an IAM role, then read the access key and secret key from the instance metadata.
Problem: the access key expires after a certain period, so I would need to restart the REX-Ray daemon process, which makes the service temporarily unavailable.
Please tell me which is the better way to refer to the access key/secret key, or another way if one exists.
Thanks in advance.
The Docker plugin should get the credentials automatically. You don't have to do anything, and you should not set any environment variables for AWS credentials.
The AWS CLI / AWS SDK will get the credentials automatically from the metadata server.
You can use the following methods of authentication:
Environment variables
Export both the access and secret keys in the environment as follows:
$ export AWS_ACCESS_KEY_ID="anaccesskey"
$ export AWS_SECRET_ACCESS_KEY="asecretkey"
Shared Credential file
You can use an AWS credentials file to specify your credentials. The default location is $HOME/.aws/credentials on Linux and OS X, or %USERPROFILE%\.aws\credentials for Windows users. If Terraform fails to detect credentials inline or in the environment, it will check this location.
You can optionally specify a different location in the configuration by providing the shared_credentials_file attribute, as follows:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "/Users/tf_user/.aws/creds"
profile = "customprofile"
}
https://www.terraform.io/docs/providers/aws/

Forge configuration What to Put in Server Providers=>Amazon=>Key/Secret

I created an Ubuntu server on Amazon AWS.
Then I registered for Forge, and now trying to configure it.
I selected source control to be Bitbucket.
I selected Amazon in the Server Provider section, but now I am not sure what to put in Key and Secret.
I found the answer to this question.
We need to create an IAM user and opt for an API access key and secret.
Also remember to give this user at least full EC2 admin access (the AmazonEC2FullAccess managed policy) before initiating the process to create and provision the server via Forge.
