I have my app stored on GitHub. To deploy it to Amazon, I use their EB deploy command which takes my git repository and sends it up. It then runs the container commands to load my data.
container_commands:
  01_migrate:
    command: "django-admin.py migrate"
    leader_only: true
  02_collectstatic:
    command: "source /opt/python/run/venv/bin/activate && python manage.py collectstatic --noinput"
The problem is that I don't want the fixtures in my git repository; the repository is shared with other users and should not contain this data. How can I get my AWS environment to load the fixtures some other way?
You can use the old-school way: scp the fixture to the EC2 instance.
You can go to the EC2 console to find the actual EC2 instance associated with your EB environment (I assume you only have one instance). Write down its public IP, then connect to the instance as you would to a normal EC2 instance.
For example
scp -i [YOUR_AWS_KEY] [MY_FIXTURE_FILE] ec2-user@[INSTANCE_IP]:[PATH_ON_SERVER]
Note that the username has to be ec2-user (the default user on the Amazon Linux AMIs that Elastic Beanstalk uses).
I do not recommend this as a way to deploy the project, though, because you have to execute the follow-up commands manually. It is, however, useful for pulling a fixture down from a live server.
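For example, after copying the fixture up, you could load it manually over SSH. A minimal sketch, assuming the standard Elastic Beanstalk Python paths; the app directory /opt/python/current/app and the fixture name are assumptions:

ssh -i [YOUR_AWS_KEY] ec2-user@[INSTANCE_IP]
# on the instance: activate the EB virtualenv (same path as in the collectstatic command above)
source /opt/python/run/venv/bin/activate
cd /opt/python/current/app
python manage.py loaddata my_fixture.json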
To avoid tracking fixtures in git, I just use a simple workaround: create a local branch for EB deployment and track the fixtures, along with other environment-specific credentials, on that branch. Such EB branches should never be pushed to the remote repository.
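The workflow is roughly the following (branch and file names are only examples):

git checkout -b eb-deploy            # local-only branch, never pushed
git add fixtures/                    # fixtures and any environment-specific credentials
git commit -m "EB-only: fixtures and credentials"
eb deploy                            # or whatever EB push command you already use
# keep the remote clean: never run `git push origin eb-deploy`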
I'm trying to run and test an AWS Lambda function written in Go locally using the SAM CLI. I have two problems:
The Lambda does not work locally if I use .zip files. When I deploy the code to AWS, it works without an issue, but if I try to run locally with .zip files, I get the following error:
A required privilege is not held by the client: 'handler' -> 'C:\Users\user\AppData\Local\Temp\tmpbvrpc0a9\bootstrap'
If I don't use .zip, it works locally, but I still want to deploy as .zip, and it is not feasible to change template.yml every time I want to test locally.
If I try to access AWS resources, I need to set the following environment variables:
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN
However, if I declare these variables in template.yml and use sam local start-api --env-vars to fill them with the credentials, the local environment works and can access AWS resources, but deploying the code to real AWS fails because these variable names are reserved. I also tried using different names for the variables, but then the local environment does not work. Omitting them from template.yml and relying only on the local env-vars file does not work either: environment variables must be present in template.yml, and --env-vars can only fill existing variables with values, not create new ones.
How can I make local env work but still be able to deploy to AWS?
For accessing AWS resources you need to look at IAM permissions rather than programmatic access keys; check this document out for CloudFormation.
To be clear, virtually nothing deployed on AWS needs those keys; it's all about applying permissions to whatever you deploy (Lambda, EC2, etc.). Those keys are only really needed for the AWS CLI and some local environments such as Serverless and SAM.
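One hedged way to keep the reserved variable names out of template.yml entirely is to rely on your local AWS profile: the SAM CLI passes the selected profile's credentials into the local Lambda container, while the deployed function just uses its IAM role. The profile name below is an assumption, and older SAM CLI versions without --profile can instead export the variables in the shell before running sam local:

# store credentials locally once (written to ~/.aws/credentials, never to template.yml)
aws configure --profile sam-local
# run locally; the profile's credentials are passed into the container
sam local start-api --profile sam-local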
The Serverless Framework now supports Go; if you're new, I'd say give that a try while you get up to speed with IAM and CloudFormation.
My website (a MERN app) is live on AWS, running on an EC2 instance, and I'm using a GitLab repository. I developed some minor modules for my project and pushed them to GitLab (master branch). I then opened an SSH session to the EC2 instance using ssh -i .... ubuntu@xxxx... and connected successfully. On the instance I went into my project folder with cd /var/www/my_Projectrepo/ and pulled from the GitLab master branch (git pull origin master); in the terminal I can see the files I added in my last commit. But when I visit the live site (https://www.myproject.com) I can't see my latest changes. I also restarted Nginx and pm2, but that didn't help. Please help me out with this.
Thanks in advance
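Hard to say without seeing the Nginx/pm2 setup, but the usual culprit is that the React frontend is served as a static build that has to be regenerated after the pull. A rough sketch of a typical redeploy sequence; the client folder name and pm2 process names are assumptions:

cd /var/www/my_Projectrepo/
git pull origin master
cd client                 # assumed frontend folder
npm install
npm run build             # regenerate the static build that Nginx serves
cd ..
npm install               # backend dependencies, if they changed
pm2 restart all
sudo systemctl reload nginx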
I have an AWS EC2 instance running a Maven project on Tomcat 7. I am using Jenkins for CI, so whenever a new push happens to GitHub, Jenkins starts a build and, after the build completes, uploads the WAR file to S3.
Where I am stuck is that I cannot find a way to deploy the WAR file to the EC2 instance.
I have tried AWS CodeDeploy, but at one point it showed me that it supports only tar, tar.gz, and zip. Is there any way to deploy the WAR file from S3 to the EC2 instance?
Thank you.
You can use AWS CodeDeploy, which can manage deployments from an S3 bucket and automate deploying your files/scripts to EC2 instances.
From the Overview of a Deployment
Here's how it works:
First, you create deployable content – such as web pages, executable files, setup scripts, and so on – on your local development machine or similar environment, and then you add an application specification file (AppSpec file). The AppSpec file is unique to AWS CodeDeploy; it defines the deployment actions you want AWS CodeDeploy to execute. You bundle your deployable content and the AppSpec file into an archive file, and then upload it to an Amazon S3 bucket or a GitHub repository. This archive file is called an application revision (or simply a revision).

Next, you provide AWS CodeDeploy with information about your deployment, such as which Amazon S3 bucket or GitHub repository to pull the revision from and which set of instances to deploy its contents to. AWS CodeDeploy calls a set of instances a deployment group. A deployment group contains individually tagged instances, Amazon EC2 instances in Auto Scaling groups, or both.

Each time you successfully upload a new application revision that you want to deploy to the deployment group, that bundle is set as the target revision for the deployment group. In other words, the application revision that is currently targeted for deployment is the target revision. This is also the revision that will be pulled for automatic deployments.

Next, the AWS CodeDeploy agent on each instance polls AWS CodeDeploy to determine what and when to pull the revision from the specified Amazon S3 bucket or GitHub repository.

Finally, the AWS CodeDeploy agent on each instance pulls the target revision from the specified Amazon S3 bucket or GitHub repository and, using the instructions in the AppSpec file, deploys the contents to the instance.

AWS CodeDeploy keeps a record of your deployments so that you can get information such as deployment status, deployment configuration parameters, instance health, and so on.
The good part is that CodeDeploy has no additional cost; you only pay for the resources (EC2, S3) that are used in your pipeline.
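As for the agent the overview mentions, it has to be installed on each target instance. A rough sketch for Amazon Linux; the bucket region in the URL and the package manager are assumptions, so check the CodeDeploy docs for your OS:

sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status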
Assuming you have already created an S3 bucket:
Step 1: Create an IAM user/role that has access to the S3 bucket where you are placing the WAR file.
Step 2: Write a custom script that downloads the WAR file from S3 to your EC2 instance.
You can also use the AWS CLI to download contents from S3 onto the instance.
Create a startup.sh file and add these contents:
# download the WAR from S3 (requires the S3 permissions from Step 1)
aws s3 cp s3://com.yoursitename/warFile/sample.war /tmp
# deploy it as the ROOT webapp and restart Tomcat
sudo mv /tmp/sample.war /var/lib/tomcat/webapps/ROOT.war
sudo service tomcat restart
I'm working with multiple instances (10 and more), and I want to configure them without logging in to each of them. Currently I'm looking at Puppet, and it seems to be what I need. I've tried it with two instances and it works, but I installed Puppet manually on both instances, and I also manually sent the certificate from the agent via puppet agent. Is there any way to install Puppet and send the certificate for each node automatically, without accessing them?
You can use scripts within UserData to autoconfigure your instance (see Running Commands on Your Linux Instance at Launch) by installing Puppet, configuring it, and running it. Keep in mind that UserData is normally limited to 16 KB and that the data in there is stored base64-encoded.
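A minimal sketch of such a user-data script, assuming a yum-based AMI and a Puppet master reachable at puppet.example.com (signing the certificates automatically would still require autosigning to be configured on the master):

#!/bin/bash
# runs once at first boot via EC2 user data
yum install -y puppet                                  # or apt-get install -y puppet on Debian/Ubuntu
echo "server = puppet.example.com" >> /etc/puppet/puppet.conf
puppet agent --test --waitforcert 60                   # requests a certificate and waits for it to be signed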
You can also build your own AMI with configuration scripts that run on boot, and then use that to download configuration from a central server, or read it out of userdata (e.g. curl http://169.254.169.254/latest/user-data | bash -s).
For example, this is something we had in our CloudFormation template that installed a configuration service on our hosts:
"UserData": { "Fn::Base64" : { "Fn::Join" : [ "\n", [
"#!/bin/sh",
"curl -k -u username:password -f -s -o /etc/init.d/ec2 https://scriptserver.example.com/scripts/ec2",
"chmod a+x /etc/init.d/ec2",
"/etc/init.d/ec2 start"] ] } }
Ideally the 'scriptserver' is in the same VPC since the username and password aren't terribly secure (they're stored unencrypted on the machine, the script server, and in the Cloudformation and EC2 services).
The advantage of bootstrapping everything with userdata instead of building an AMI is flexibility: you can update your bootstrap scripts, generate new instances, and you're done. The disadvantages are speed, since you'll have to wait for everything to install and configure each time an instance launches (beware CloudFormation timeouts), and stability, since if your script installs packages from a public repository (e.g. apt-get install mysql), the packages can be updated at any time, potentially introducing untested software into your environment. The workaround for the latter is to install software from locations you control.
I have a very interesting problem. The following is my current workflow for deployment to Amazon EC2 in Classic mode:
The deploy host is inside my company's network.
The deploy target is an EC2 machine in AWS.
Custom Ruby gems live in the company's git account (hence they cannot be installed from outside my company's network).
To overcome the problem mentioned in point #3, I have used reverse tunnelling between the deploy host and the deploy target.
I am using Capistrano for deployment.
The problem arose when we decided to move from EC2-Classic to Amazon VPC, with the deploy target having only a private IP address. Here is the workflow I thought of for deploying code to the VPC instances:
Create a deploy host in the Amazon VPC and attach a public DNS name to it so that I can access it from my main deploy host (which is inside my company's network).
Deploy the code by running the deployment scripts from the AWS deploy host.
The problem is that I cannot find a way to install the gems that are hosted in my company's git account. Can you help me with this?
Prior to deployment, you can set up git mirrors of your production repositories by pushing to bare git repositories on your AWS deploy host.
That AWS deploy host also has access to your VPC, so you can run the deployment from there.
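A rough sketch of the mirror setup (hostnames and paths are placeholders):

# on the AWS deploy host: create a bare mirror of the private gem repository
git init --bare /srv/git/my-private-gem.git
# from a machine inside the company network that can reach both sides:
git clone git@git.mycompany.internal:gems/my-private-gem.git
cd my-private-gem
git remote add aws-mirror ubuntu@aws-deploy-host:/srv/git/my-private-gem.git
git push aws-mirror --all
# point the Gemfile's git sources at the mirror when deploying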
Hope it helps.
Download the gems first and then copy them to the EC2 instance in the VPC using scp:
scp -r -i key vendor/cache ubuntu@ip-address:/ruby-app
Then run gem install gem-name from that folder; it will install the gem from the local .gem file matching the name.
To gather the gems, run bundle package; this downloads all the gems into the vendor/cache folder. Then move those files to the EC2 instance as above.
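Putting those steps together (the key file, IP, and paths are placeholders, and depending on your Bundler version you may need --all or the cache_all setting to cache git-sourced gems):

# on a machine that can reach the company's gem sources:
bundle package --all               # caches every gem, including git-sourced ones, in vendor/cache
scp -r -i key vendor/cache ubuntu@ip-address:/ruby-app/vendor/
# on the VPC instance: install strictly from the local cache, no network access needed
cd /ruby-app
bundle install --local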