I created an AMI on EC2 and shared it with another AWS account, but I can't access the AMI from that other account. Any help will be appreciated.
Here is what I did so far:
Created an instance using Ubuntu 14.04
Logged into the instance and installed all the tools needed
Created a new AMI based on the instance
Shared the AMI with another AWS account
Logged into the other AWS account, but I could not find the AMI in the AMIs list
Any idea where I can find the AMI?
Thanks a lot.
I am presuming that you have shared the AMI per this document: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sharingamis-explicit.html
Once shared, when you log in to the other AWS account, make sure you have selected Private images in the AMIs filter.
Only the Private images option will list the AMIs that have been shared from another account.
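You can also confirm the share from the other account with an SDK; a minimal boto3 sketch (the region is an assumption, and must be the region where the AMI was shared):
import boto3

# From the account the AMI was shared WITH: list images that other accounts
# have granted us launch permission for ("self" = images we can execute).
ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
images = ec2.describe_images(ExecutableUsers=["self"])["Images"]
for image in images:
    print(image["ImageId"], image.get("Name"))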
I just had the same issue recently. I know this question is old, but it's the first one that comes up on Google.
The docs for sharing an encrypted AMI:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/key-policy-requirements-EBS-encryption.html#policy-example-cmk-cross-account-access
I was using an Auto Scaling group, so I made use of the default service-linked role (arn:aws:iam::(account_id):role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling).
If sharing from (ACCOUNT 1) -> (ACCOUNT 2):
In (ACCOUNT 1), where the KMS key used to encrypt the AMI lives, add the following statements to the key policy:
{
    "Sid": "Allow access for Key Administrators",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::(ACCOUNT 1 ID):root"
    },
    "Action": "kms:*",
    "Resource": "*"
},
{
    "Sid": "Allow use of the key",
    "Effect": "Allow",
    "Principal": {
        "AWS": [
            "arn:aws:iam::(ACCOUNT 2 ID):role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
            "arn:aws:iam::(ACCOUNT 1 ID):role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling",
            "arn:aws:iam::(ACCOUNT 2 ID):root"
        ]
    },
    "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:DescribeKey",
        "kms:ReEncrypt*",
        "kms:GenerateDataKey*",
        "kms:CreateGrant",
        "kms:ListGrants",
        "kms:RevokeGrant"
    ],
    "Resource": "*"
}
Now in the console, find your AMI, right-click it, and share it with (ACCOUNT 2 ID). You should now see the AMI listed as a "Private" AMI in (ACCOUNT 2).
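The console share step can also be scripted; a minimal boto3 sketch run from (ACCOUNT 1), with placeholder image ID, region, and account ID:
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")  # region of the AMI
# Grant (ACCOUNT 2) launch permission on the AMI (IDs are placeholders).
ec2.modify_image_attribute(
    ImageId="ami-0123456789abcdef0",
    LaunchPermission={"Add": [{"UserId": "222222222222"}]},
)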
If you try to launch the AMI in (ACCOUNT 2) at this point, the instance will immediately stop and throw a ClientError. You first have to run the next step (via the AWS CLI):
aws kms create-grant --region (REGION WHERE KMS KEY LIVES) --key-id arn:aws:kms:us-west-2:(ACCOUNT 1 ID):key/(ACCOUNT 1 KMS KEY ID) --grantee-principal arn:aws:iam::(ACCOUNT 2 ID):role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling --operations "Encrypt" "Decrypt" "ReEncryptFrom" "ReEncryptTo" "GenerateDataKey" "GenerateDataKeyWithoutPlaintext" "DescribeKey" "CreateGrant"
Now it should all work.
When you are sharing an AMI cross-account, an encrypted AMI will not launch unless you have set up your key policy to allow the other AWS account.
After that, you can easily see the AMI in the AWS console by filtering on the Private images section.
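If you prefer to script the key-policy change described above, here is a hedged boto3 sketch (the key ARN, region, and account IDs are placeholders):
import json
import boto3

kms = boto3.client("kms", region_name="us-west-2")  # region of the KMS key
key_id = "arn:aws:kms:us-west-2:111111111111:key/EXAMPLE-KEY-ID"  # placeholder

# Append a cross-account statement to the existing key policy so the target
# account can use the key that encrypted the AMI's snapshots.
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].append({
    "Sid": "Allow use of the key by the target account",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::222222222222:root"},
    "Action": ["kms:Decrypt", "kms:DescribeKey", "kms:CreateGrant"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))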
Related
I have a GCP project with an Anthos cluster deployed in it.
As an admin of the Anthos cluster but not an Owner of the parent project, I only have read rights on Kubernetes and cannot create any resources. I'm getting:
Error from server (Forbidden)
I've given myself the "Kubernetes Engine Admin", "Kubernetes Engine Cluster Admin", and "Anthos Multi-cloud Admin" roles, but no success. It seems like the "Owner" role is mandatory.
Also, my user is attached to ClusterRole/cluster-admin through ClusterRoleBinding/gke-multicloud-cluster-admin, but apparently I still need the IAM Owner role.
Is this by Anthos design, or am I missing something?
This was solved by giving myself these roles:
roles/gkehub.viewer
roles/gkehub.gatewayEditor
Now, I can create Kubernetes resources even if I am not an Owner of the GCP project.
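For completeness, a hedged sketch of granting those two roles programmatically with the Cloud Resource Manager API (the project ID and member email are placeholders; granting them via the console or gcloud works just as well):
from googleapiclient import discovery

project_id = "my-project"          # placeholder
member = "user:me@example.com"     # placeholder

# Uses Application Default Credentials; appends the two fleet/gateway roles
# that allowed Kubernetes access without the project Owner role.
crm = discovery.build("cloudresourcemanager", "v1")
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
for role in ("roles/gkehub.viewer", "roles/gkehub.gatewayEditor"):
    policy.setdefault("bindings", []).append({"role": role, "members": [member]})
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()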
I have a Python 3.8 application deployed on a Kubernetes cluster on Azure that has to access a blob storage container in an account in a different resource group. I'm using a managed identity to authenticate and query the container:
from azure.identity import ManagedIdentityCredential
from azure.storage.blob import BlobServiceClient, ContainerClient

# task_config, container, and self come from the surrounding class context.
creds = ManagedIdentityCredential()
url_template = task_config["ACCOUNT_ADDRESS_TEMPLATE"]
account_name = task_config["BLOB_STORAGE_ACCOUNT"]
account_url = url_template.replace("*", account_name)
blob_service_client = BlobServiceClient(account_url=account_url, credential=creds)
if container not in [c.name for c in blob_service_client.list_containers()]:
    raise BlobStorageContainerDoesNotExistError(
        f"Container {container} does not exist"
    )
self.client: ContainerClient = blob_service_client.get_container_client(
    container=container
)
I have verified that the managed identity has been assigned the Storage Blob Data Contributor role in the storage account, and also at the level of the resource group. I have verified that the token generated when instantiating the ManagedIdentityCredential() object references the right managed identity, and I have whitelisted the outbound IP (and every other possible IP just in case) of my python application. Nevertheless, I keep getting this error when attempting to list the containers in the account:
HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: Operation returned an invalid status 'This request is not authorized to perform this operation.'
Could anyone point me in the right direction?
Specs:
azure-identity = "1.5"
azure-storage-blob = "12.8.1"
python = "3.8"
platform: Linux Docker containers running on a Kubernetes cluster deployed on Azure.
I have tested this in my environment.
It seems your storage account is configured to allow access from selected networks only.
Please make sure to allow access from your AKS VMSS virtual network.
Then you can use the below Python script to list the blob containers in the storage account:
from azure.storage.blob import BlobServiceClient
from azure.identity import ManagedIdentityCredential

creds = ManagedIdentityCredential()
blob_service_client = BlobServiceClient(account_url="https://StorageAccountName.blob.core.windows.net/", credential=creds)
containers = blob_service_client.list_containers()
for container in containers:
    print(container.name)
I am using CodeDeploy to deploy my code to a server. Three days back it was working fine, but suddenly it fails to assume the role.
error: {
    "Code" : "AssumeRoleUnauthorizedAccess",
    "Message" : "EC2 cannot assume the role Ec2Codedeploy",
    "LastUpdated" : "2017-07-10T06:49:59Z"
}
My trust relationship is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "codedeploy.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
There is also a contradiction between these documentation pages:
http://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html
http://docs.aws.amazon.com/IAM/latest/UserGuide/troubleshoot_iam-ec2.html#troubleshoot_iam-ec2_errors-info-doc
No. 1 says the service should be "codedeploy.amazonaws.com".
No. 2 says the service should be "ec2.amazonaws.com".
The issue persists after a reboot as well.
Kindly help me with this issue.
It appears that you have a role designed for use by AWS CodeDeploy, but you have assigned it to an Amazon EC2 instance. This is indicated by the error message: EC2 cannot assume the role Ec2Codedeploy
From Create a Service Role for AWS CodeDeploy:
The service role you create for AWS CodeDeploy must be granted the permissions to access the instances to which you will deploy applications. These permissions enable AWS CodeDeploy to read the tags applied to the instances or the Auto Scaling group names associated with the instances.
The permissions you add to the service role specify the operations AWS CodeDeploy can perform when it accesses your Amazon EC2 instances and Auto Scaling groups. To add these permissions, attach an AWS-supplied policy, AWSCodeDeployRole, to the service role.
This is separate from the role that you would assign to your Amazon EC2 instances, which generates credentials that can be used by applications on the instances.
These should be two separate roles with different assigned permissions.
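To make the distinction concrete, here is a minimal boto3 sketch (role names are hypothetical): the role attached to the EC2 instances must trust ec2.amazonaws.com, while the separate CodeDeploy service role trusts codedeploy.amazonaws.com.
import json
import boto3

iam = boto3.client("iam")

def trust_policy(service):
    # Trust policy allowing the given AWS service to assume the role.
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": service},
            "Action": "sts:AssumeRole",
        }],
    })

# Role attached to the EC2 instances (via an instance profile): trusts EC2.
iam.create_role(RoleName="Ec2CodeDeployInstanceRole",
                AssumeRolePolicyDocument=trust_policy("ec2.amazonaws.com"))
# Separate service role used by CodeDeploy: trusts the CodeDeploy service.
iam.create_role(RoleName="CodeDeployServiceRole",
                AssumeRolePolicyDocument=trust_policy("codedeploy.amazonaws.com"))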
The docs are very confusing to me. I have read through the SQS access docs. But what really throws me is this page: http://docs.aws.amazon.com/aws-sdk-php/v2/guide/service-sqs.html
You can provide your credential profile like in the preceding example,
specify your access keys directly (via key and secret), or you can
choose to omit any credential information if you are using AWS
Identity and Access Management (IAM) roles for EC2 instances or
credentials sourced from the AWS_ACCESS_KEY_ID and
AWS_SECRET_ACCESS_KEY environment variables.
1) Regarding the part I have bolded (omitting credentials when using IAM roles for EC2 instances), how is that possible? I cannot find any steps for granting EC2 instances access to SQS using IAM roles. This is very confusing.
2) Where would the aforementioned environment variables be placed? And where would you get the key and secret from?
Can someone help clarify?
There are several ways that applications can discover AWS credentials. Any software using the AWS SDK automatically looks in these locations. This includes the AWS Command-Line Interface (CLI), which is a python app that uses the AWS SDK.
Your bold words refer to #3, below:
1. Environment Variables
The SDK will look for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables. This is a great way to provide credentials because there is no danger of accidentally including a credentials file in github or other repositories. In Windows, use the System control panel to set the variables. In Mac/Linux, just EXPORT the variables from the shell.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the environment variables.
2. Local Credentials File
The SDK will look in local configuration files, such as:
~/.aws/credentials
C:\users\awsuser\.aws\credentials
These files are great for storing user-specific credentials and can actually store multiple profiles, each with their own credentials. This is useful for switching between different environments such as Dev and Test.
The credentials are provided when IAM users are created. It would be your responsibility to put those credentials into the configuration file.
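For example, with the Python SDK a specific profile from that file can be selected explicitly (the profile name here is an assumption):
import boto3

# Pick credentials from the [dev] profile in ~/.aws/credentials rather than
# the default profile; useful when switching between Dev and Test accounts.
session = boto3.Session(profile_name="dev")  # profile name is an assumption
sqs = session.client("sqs")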
3. IAM Roles on an Amazon EC2 instance
An IAM role can be associated with an Amazon EC2 instance at launch time. Temporary credentials will then automatically be provided by the instance metadata service at the URL:
http://instance-data/latest/meta-data/iam/security-credentials/<role-name>/
This will return metadata that contains AWS credentials, for example:
{
    "Code" : "Success",
    "LastUpdated" : "2015-08-27T05:09:23Z",
    "Type" : "AWS-HMAC",
    "AccessKeyId" : "ASIAI5OXLTT3D5NCV5MS",
    "SecretAccessKey" : "sGoHyFaVLIsjm4WszUXJfyS1TVN6bAIWIrcFrRlt",
    "Token" : "AQoDYXdzED4a4AP79/SbIPdV5N8k....lZwERog07b6rgU=",
    "Expiration" : "2015-08-27T11:11:50Z"
}
These credentials inherit the permissions of the IAM role that was assigned when the instance was launched. They automatically rotate every 6 hours (note the Expiration in this example, approximately 6 hours after the LastUpdated time).
Applications that use the AWS SDK will automatically look at this URL to retrieve security credentials. Of course, they will only be available when running on an Amazon EC2 instance.
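Concretely, this means code running on such an instance needs no keys at all; a minimal Python (boto3) sketch:
import boto3

# No access key or secret is passed anywhere: on an EC2 instance with an IAM
# role, the SDK fetches temporary credentials from the metadata service itself.
sqs = boto3.client("sqs", region_name="us-east-1")  # region is an assumption
response = sqs.list_queues()
print(response.get("QueueUrls", []))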
Credentials Provider Chain
Each particular AWS SDK (eg Java, .Net, PHP) may look for credentials in different locations. For further details, refer to the appropriate documentation, eg:
Providing AWS Credentials in the AWS SDK for Java
Providing AWS Credentials in the AWS SDK for .Net
Providing AWS Credentials in the AWS SDK for PHP
I am trying to create an EC2 instance via Ansible using IAM roles, but while launching the new instance I get this error:
failed: [localhost] => (item= IAMRole-1) => {"failed": true, "item": " IAMRole-1"}
msg: Instance creation failed => UnauthorizedOperation: You are not authorized to perform
this operation. Encoded authorization failure message: Ckcjt2GD81D5dlF6XakTSDypnwrgeQb0k
ouRMKh3Ol1jue553EZ7OXPt6fk1Q1-4HM-tLNPCkiX7ZgJWXYGSjHg2xP1A9LR7KBiXYeCtFKEQIC
W9cot3KAKPVcNXkHLrhREMfiT5KYEtrsA2A-xFCdvqwM2hNTNf7Y6VGe0Z48EDIyO5p5DxdNFsaSChUcb
iRUhSyRXIGWr_ZKkGM9GoyoVWCBk3Ni2Td7zkZ1EfAIeRJobiOnYXKE6Q
whereas the IAM role has full EC2 access, with the following policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "ec2:*",
            "Effect": "Allow",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "elasticloadbalancing:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "cloudwatch:*",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": "autoscaling:*",
            "Resource": "*"
        }
    ]
}
Any suggestions, please?
The problem here is not with the IAM Role for Amazon EC2 itself, rather that you (i.e. the AWS credentials you are using yourself) seem to lack the iam:PassRole permission that is required to 'pass' that role to a requested EC2 instance on start, see section Permissions Required for Using Roles with Amazon EC2 within Granting Applications that Run on Amazon EC2 Instances Access to AWS Resources for details:
To launch an instance with a role, the developer must have permission
to launch Amazon EC2 instances and permission to pass IAM roles.
The following sample policy allows users to use the AWS Management
Console to launch an instance with a role. The policy allows a user to
pass any role and to perform all Amazon EC2 actions by specifying an
asterisk (*). The ListInstanceProfiles action allows users to view all
the roles that are available on the AWS account.
Example Policy that grants a user permission to launch an instance
with any role by using the Amazon EC2 console
{
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "iam:PassRole",
            "iam:ListInstanceProfiles",
            "ec2:*"
        ],
        "Resource": "*"
    }]
}
The reason for requiring this indirection via the PassRole permission is the ability to restrict which role a user can pass to an Amazon EC2 instance when the user is launching the instance:
This helps prevent the user from running applications that
have more permissions than the user has been granted—that is, from
being able to obtain elevated privileges. For example, imagine that
user Alice has permissions only to launch Amazon EC2 instances and to
work with Amazon S3 buckets, but the role she passes to an Amazon EC2
instance has permissions to work with IAM and DynamoDB. In that case,
Alice might be able to launch the instance, log into it, get temporary
security credentials, and then perform IAM or DynamoDB actions that
she's not authorized for.
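As an illustration, here is a hedged boto3 sketch of a tighter policy that only permits passing one specific role, so the launching user cannot escalate privileges (the user name and account ID are placeholders; the role name is taken from the question):
import json
import boto3

iam = boto3.client("iam")

# Only iam:PassRole on one specific role is allowed, so the user cannot pass
# a more privileged role to the instances they launch (IDs are placeholders).
restricted_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
        {"Effect": "Allow",
         "Action": "iam:PassRole",
         "Resource": "arn:aws:iam::111111111111:role/IAMRole-1"},
    ],
}
iam.put_user_policy(UserName="alice",  # placeholder user
                    PolicyName="RestrictedPassRole",
                    PolicyDocument=json.dumps(restricted_policy))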
You might want to read my answer to How to specify an IAM role for an Amazon EC2 instance being launched via the AWS CLI? for a more elaborate explanation, which also links to Mike Pope's nice article about Granting Permission to Launch EC2 Instances with IAM Roles (PassRole Permission), which explains the subject matter from an AWS point of view.
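As an aside, the encoded authorization failure message from the original error can be decoded to confirm exactly which action was denied; a hedged boto3 sketch (the caller needs the sts:DecodeAuthorizationMessage permission):
import boto3

sts = boto3.client("sts")
# Paste the full encoded message from the UnauthorizedOperation error here.
encoded = "Ckcjt2GD81D5dlF6XakTSDypnwrgeQb0k..."  # truncated placeholder
decoded = sts.decode_authorization_message(EncodedMessage=encoded)
print(decoded["DecodedMessage"])  # JSON that names the denied action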