AWS CodeDeploy deployment failed with GitHub - amazon-ec2

I have created a pipeline for code deployment with GitHub, but it is failing at the DownloadBundle step with an Access Denied error.
I have created a role with AmazonEC2FullAccess and AWSCodeDeployRole attached for the deployment's IAM role, and I also created a role for EC2 with AmazonEC2FullAccess.
I have attached a couple of screenshots of the CodeDeploy deployment group settings.
I have also placed appspec.yml in the root directory of my repo:
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/html/
    overwrite: true
file_exists_behavior: OVERWRITE
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies.sh
      timeout: 300
      runas: root
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 300
      runas: root
Note: I am using Auto Scaling.

For this your EC2 instance has to be able to access S3 as well: check that the EC2 instance profile has permission to read the S3 bucket that holds the deployment bundle. If that bucket is encrypted with KMS, the EC2 instance needs KMS permissions as well.
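As a rough sketch of what the instance-profile policy can look like in CloudFormation YAML (the role name, bucket name, and KMS key ARN below are placeholders, not values from the question):
Ec2ArtifactReadPolicy:
  Type: AWS::IAM::Policy
  Properties:
    PolicyName: codedeploy-artifact-read
    Roles:
      - my-ec2-instance-role                   # assumed name of the EC2 instance role
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        # Read the revision bundle that CodeDeploy fetches during DownloadBundle
        - Effect: Allow
          Action:
            - s3:GetObject
            - s3:GetObjectVersion
            - s3:ListBucket
          Resource:
            - arn:aws:s3:::my-artifact-bucket
            - arn:aws:s3:::my-artifact-bucket/*
        # Only needed if the bucket is encrypted with a customer-managed KMS key
        - Effect: Allow
          Action:
            - kms:Decrypt
          Resource:
            - arn:aws:kms:us-east-1:111111111111:key/11111111-2222-3333-4444-555555555555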

Deploy application to AWS Windows server that has authentication

I have set up an Angular application CodePipeline in AWS. The pipeline works until the build is generated and the artifacts are uploaded to S3, but the deployment stage fails every time.
I am sure there is a configuration issue with my appspec.yml, but I am not able to correct it.
My appspec.yml:
version: 0.0
os: windows
files:
  - source: /
    destination: C:\sandboxBuildData\project\project-ng\dist\project\dist
    overwrite: true
file_exists_behavior: OVERWRITE
hooks:
  ApplicationStop:
    - location: application_stop.sh
      timeout: 300
      runas: administrator
  ApplicationStart:
    - location: application_start.sh
      timeout: 300
      runas: administrator
I don't know if it is correct, because I see that runas is not required on Windows Server. Also, the Windows server has a user with a password to access it.
Do I need to install anything on the Windows server for the deployment group, just like on Linux?
How do I stop the existing command and run a new one?
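For reference, on Windows Server deployments the runas key is not supported and hooks normally point at .bat or .ps1 scripts, so a working appspec.yml would presumably look closer to the sketch below (the script names are assumptions; the destination matches the appspec above):
version: 0.0
os: windows
files:
  - source: /
    destination: C:\sandboxBuildData\project\project-ng\dist\project\dist
file_exists_behavior: OVERWRITE
hooks:
  ApplicationStop:
    - location: scripts\application_stop.ps1   # assumed PowerShell script shipped in the bundle
      timeout: 300
  ApplicationStart:
    - location: scripts\application_start.ps1  # assumed PowerShell script shipped in the bundle
      timeout: 300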

How to install large dependencies on AWS EFS via serverless framework

I understand that we can install dependencies on EFS from an EC2 instance and then set the mount path and PYTHONPATH in AWS Lambda so that the Lambda can see the dependencies folder.
But is there a way to eliminate the EC2 instance from this approach and instead install those dependencies via the Serverless Framework?
My scenario is to make a TensorFlow 2 dependency (which is >500 MB) available to an AWS Lambda.
Any leads would be helpful and appreciated.
Yes, you can. I am not sure if you have already set up EFS with Serverless, but assuming that has been done, you can explicitly tell your Serverless Lambda project which VPC to connect to and which EFS IAM policies to use.
I don't have the details of your EFS setup, but in my project this looks something like this:
name: aws
profile: abcd
runtime: python3.8
region: us-west-1
vpc:
  securityGroupIds:
    - sg-065647b2292ad63a2
  subnetIds:
    - subnet-02ad3xxxxxxxxxxxx
    - subnet-02ca2xxxxxxxxxxxx
    - subnet-01a14xxxxxxxxxxxx
# Allow RW access to EFS services
iamManagedPolicies:
  - "arn:aws:iam::aws:policy/AmazonElasticFileSystemClientReadWriteAccess"
Under the functions section, just make sure that you define your environment variables to point to your libs/code:
functions:
  myfunc:
    runtime: python3.8
    handler: myhandler
    environment:
      PYTHONPATH: /mnt/efs/lib/python3.8/site-packages
      LD_LIBRARY_PATH: /mnt/efs/lib/python3.8/site-packages
Finally in resources:
resources:
  extensions:
    MyfuncLambdaFunction:
      Properties:
        FileSystemConfigs:
          - Arn: arn:aws:elasticfilesystem:us-west-1:123456789012:access-point/fsap-0012abcde1234ab12
            LocalMountPath: /mnt/efs
FYI, for TensorFlow you can bring the package down to around 60 MB using a combination of lambci/docker-lambda and careful TensorFlow packaging, but in the long run you will be better off with EFS anyway; it just seemed worth mentioning.

Serverless config credentials not working when serverless.yml file present

We're trying to deploy our lambda using serverless on BitBucket pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in docker containers, and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and then we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}
custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}
plugins:
  - serverless-offline
  - serverless-stage-manager
...
TL;DR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about profile default not existing. This is only an issue when using the SSM Parameter Store in the serverless file.
The profile attribute in your serverless.yml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, serverless will complain. I can think of two possible solutions to this:
1. Remove profile from your serverless.yml completely and use environment variables only.
2. Leave profile: default in your serverless.yml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
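If you go with #1, the AWS SDK that serverless uses picks credentials up from the standard environment variables, so the pipeline step can drop serverless config credentials entirely. A rough sketch of the Bitbucket Pipelines step, assuming the repository variables are named as in the question:
script:
  - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
  - export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
  # no serverless config credentials step needed; the variables are read directly
  - serverless deploy --stage staging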

How to access private AWS resources in AWS SAM LOCAL when start-api testing

I've been working with AWS SAM Local to create and test a lambda / API Gateway stack before shipping it to production. I have recently run into a brick wall when trying to access private resources (RDS) while testing locally (sam local start-api --profile [profile]). I'm able to connect to some of these private resources if I do some SSH tunneling, but I was wondering whether I can test locally without tunneling, using the VPC configuration.
Below is an example SAM template:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Example Stack
Globals:
  Function:
    Timeout: 3
Resources:
  ExampleFunction:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.example
      Runtime: nodejs8.10
      CodeUri: .
      Description: 'Just an example'
      MemorySize: 128
      Role: 'arn:aws:iam::[arn-role]'
      VpcConfig:
        SecurityGroupIds:
          - sg-[12345]
        SubnetIds:
          - subnet-[12345]
          - subnet-[23456]
          - subnet-[34567]
      Events:
        Api1:
          Type: Api
          Properties:
            Path: /example
            Method: GET
After reading through a lot of documentation and searching Stack Overflow for anything that would help, I ended up joining the #samdev Slack channel and asking for help. I was given some guidance and a great guide on setting up OpenVPN on an EC2 instance.
The setup was super easy (completed in under 30 minutes) and the EC2 instance uses a pre-baked AMI image. Make sure you assign the new EC2 instance to the appropriate VPC containing the resources you need access to.
Here is a link to the OpenVPN guide: https://openvpn.net/index.php/access-server/on-amazon-cloud.html
You can request an invite to the #samdev slack channel here: https://awssamopensource.splashthat.com/

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be configured with a couple of extra scopes. These can't be added to existing GCE instances, but you can create a new one with the following Cloud SDK command:
gcloud compute instances create hello-datastore --project --zone --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (most services are disabled by default). I suppose that's what's causing the issue.
Strangely, I can use CloudSQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes cluster you can specify permissions (access scopes) for the GCE nodes that will be created.
If you, for example, enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (the default setting), like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes, and then migrate all deployments to the new node pool (rather tedious); a rough command is sketched below.
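For that node-pool route, the command would look roughly like this (the cluster and pool names are made up; gke-default and datastore are standard scope aliases):
gcloud container node-pools create datastore-pool --cluster my-cluster --scopes gke-default,datastore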
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
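For completeness, the foo-service-account secret referenced above would be created from the downloaded JSON key with something like the following (the local file name key.json is assumed):
kubectl create secret generic foo-service-account --from-file=credentials.json=key.json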
After struggling for some hours, I was also able to connect to the Datastore. Here are my results, most of it from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around, I figured out that the scratch Docker image is missing certain things that the Google Cloud library requires (most importantly the CA certificates). Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to figure out where it's running (maybe from context.Background()?) and automatically uses the default service account that Google creates for you when you create your cluster on GKE.
