I'm trying to skip a directory with YAML in Google Cloud. My app.yaml is:
runtime: php55
api_version: 1
threadsafe: true
skip_files:
- assets/
handlers:
- url: /.*
  script: index.php
But it doesn't work. Do you know what is wrong in my code?
This is the correct syntax; it worked on my project with the same PHP version, API version, and threadsafe setting.
Did you deploy and wait a few minutes for it to take effect? It usually takes about 3 minutes after the deployment completes for the changes to be in effect.
You can check which versions of your application are running with gcloud app versions list.
Also, if you're using the local development server (dev_appserver.py), the skip_files directive is ignored and those files will be served normally.
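One more note: skip_files entries are interpreted as regular expressions matched against paths relative to your application root, so you can also anchor the pattern explicitly (a hedged variant, assuming assets/ sits at the root):

skip_files:
- ^assets/.*$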
I'm using GitLab CI for deployment.
I'm running into a problem when the review branch is deleted.
stop_review:
  variables:
    GIT_STRATEGY: none
  stage: cleanup
  script:
    - echo "$AWS_REGION"
    - echo "Stopping review branch"
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - echo "$CI_COMMIT_REF_NAME"
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  only:
    - branches
  except:
    - master
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
The error is: "This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file."
I have tried different GIT_STRATEGY values; can someone point me in the right direction?
In order to run serverless remove, you'll need to have the serverless.yml file available, which means the actual repository will need to be cloned (or that file needs to get to GitLab in some way).
It's required to have a serverless.yml configuration file available when you run serverless remove because the Serverless Framework allows users to provision infrastructure using not only the framework's YML configuration but also additional resources (like CloudFormation in AWS) which may or may not live outside of the specified app or stage CF Stack entirely.
In fact, you can also provision infrastructure into other providers as well (AWS, GCP, Azure, OpenWhisk, or actually any combination of these).
So it's not sufficient to simply identify the stage name when running sls remove, you'll need the full serverless.yml template.
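One way around this in GitLab CI is to stop skipping the checkout in the stop job, so serverless.yml is present (a sketch, assuming the runner can still fetch the ref):

stop_review:
  variables:
    GIT_STRATEGY: fetch   # fetch the repo instead of skipping checkout
  stage: cleanup
  script:
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual

If the checkout fails because the branch is already gone, another option is to pass serverless.yml forward from the deploy job, for example as a job artifact.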
I have a monorepo GitHub project containing multiple applications that I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one project, I don't want to redeploy the others. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things but decided that the only viable approach is to perform this deploy step manually within my own 03_build.sh script, because there's no way to build this dynamically within CodeBuild's Deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so that if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildconf.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic, and I'm trying to pass variables between scripts: install -> prebuild -> build.
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up storing the version & previous version within the package.json):
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  cd ./api/ && npm install && cd ..
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage, 02_prebuild.sh, I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  cd ./api/ && npm run tests && cd ..
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
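As a rough sketch, my 03_build.sh branches on those flags; something like this (the deploy commands here are placeholders, not my real ones):

#!/bin/bash
# STAGE 3 (simplified sketch)
if [[ $BUILD_API == "true" ]]; then
  cd ./api && npm run build && cd ..
  eb deploy my-api-env          # hypothetical Elastic Beanstalk deploy
fi
if [[ $BUILD_REACTAPP == "true" ]]; then
  cd ./reactapp && npm run build && cd ..
  aws s3 sync ./reactapp/dist "s3://$AWS_BUCKET" --delete   # hypothetical S3 sync
fi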
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export env vars. I'm wondering whether anyone knows how to update environment variables from within the build process itself, or whether anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
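(Though now I wonder whether it's really permissions: a variable exported inside a script run with sh exists only in that child process and dies with it. A two-line demonstration:

sh -c 'export TEST_API=true'
echo "${TEST_API:-unset}"   # prints "unset" in the parent shell

If that's the actual cause, sourcing the scripts, or writing state to a file as in my edit below, would sidestep it.)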
EDIT:
So an approach that I've managed to get working is instead of using Env variables, I'm creating new files with specific names using fs then reading the contents of the file to make logical decisions. I can access these files from each of the bash scripts so it works pretty elegantly with some automatic cleanup.
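Roughly, the idea looks like this (simplified, with a hypothetical marker path):

# 01_install.sh — record the decision on disk instead of in an env var
echo "true" > /tmp/build_api

# 02_prebuild.sh — read it back later, in a separate shell
if [[ $(cat /tmp/build_api 2>/dev/null) == "true" ]]; then
  echo "🤖 Run tests on API project..."
fi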
I won't edit the original question as it's still an issue and I'd like to know how/if other people solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified by the commit, so the Lambda function can process it to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
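Alternatively, if you'd rather keep the decision inside the build itself, here's a hedged sketch of commit-diff detection (it assumes the clone is deep enough for HEAD~1 to exist, which shallow clones may not be):

# decide what to deploy based on the paths touched by the last commit
CHANGED=$(git diff --name-only HEAD~1 HEAD)
if echo "$CHANGED" | grep -q '^API/'; then
  echo "API changed: build and deploy to Elastic Beanstalk"
fi
if echo "$CHANGED" | grep -q '^REACTAPP/'; then
  echo "REACTAPP changed: build and sync to S3"
fi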
I am trying to deploy Laravel on AWS using CodeDeploy; I have attached a sample yml file as well.
As the BeforeInstall hook will configure PHP, MySQL, and the other configuration needed to run the Laravel application, I need to know: will that hook trigger on every deployment? I don't want to install PHP and MySQL each time; it should run only the first time, and all subsequent deployments should not install the configurations again.
version: 0.0
os: linux
files:
  - source: /*
    destination: /var/www/html/my/directory
hooks:
  BeforeInstall:
    - location: scripts/install_dependencies
      timeout: 300
      runas: root
    - location: scripts/start_server
      timeout: 300
      runas: root
For the first-time install of PHP and MySQL you can write one shell script, and for subsequent deployments you can write another; each deployment then calls a different shell script file.
You can refer to this template, which has a yaml file and a scripts folder containing the sh files:
https://github.com/enzyme-ops/laravel-code-deploy-template
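A minimal sketch of the first-run guard idea (the marker path and package manager are assumptions, not taken from the template):

#!/bin/bash
# install_dependencies — provision PHP/MySQL only on the very first deployment
MARKER=/var/www/.provisioned
if [ ! -f "$MARKER" ]; then
  # assumption: Amazon Linux, hence yum; adjust for your AMI
  sudo yum install -y php mysql-server
  sudo touch "$MARKER"
else
  echo "Already provisioned; skipping PHP/MySQL install"
fi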
I am deploying a Python rest_framework application to App Engine, but my app.yaml fails with the error message (gcloud.app.deploy) Error Response: [13] Failed to create manifest file.
I have tried modifying the app.yaml file. When I declared the Python version as python27, the deployment succeeded; however, other errors then arose because the virtualenv on my local machine is set to python37.
runtime: python37
entrypoint: gunicorn -b :8080 workshop.wsgi
instance_class: F2
beta_settings:
  cloud_sql_instances: neverland:europe-west3:neverlandsql2
env_variables:
  SECRET_KEY: "*****************************************"
  DJANGO_SETTINGS_MODULE: "workshop.settings.settings"
  DEBUG: "True"
handlers:
- url: /static
  static_dir: static/
- url: /.*
  secure: always
  redirect_http_response_code: 301
  script: auto
I expect app.yaml to function even if the deployment environment is python37.
After more research and contact with Google, it seems my project structure could be improved; furthermore, I was informed by Google that the python37 runtime mentioned in my app.yaml is experimental at the moment and any other Python version can be used.
What is the value of the CLOUDSDK_PYTHON variable at the moment of the attempted deployment? Support for version 3.7 is experimental at present; see the answer below:
This error is related to the Python version used by the SDK: currently gcloud requires Python version 2.7.x, and there is experimental support for 3.4 and up. You can check this by running the gcloud topic startup command on the CLI.
Experimental support is what it says, so in this case you cannot deploy your app with CLOUDSDK_PYTHON set to python37. Things should progress towards full support; meanwhile, we should exercise patience.
This situation should not prevent you from using whatever Python version you need for your project and your app itself.
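As a quick sanity check before deploying (a sketch; the python2.7 path is an assumption about your machine):

# see which interpreter the SDK is picking up
echo "$CLOUDSDK_PYTHON"
gcloud topic startup   # help topic describing how gcloud chooses its interpreter

# pin the SDK to a supported interpreter, then deploy
export CLOUDSDK_PYTHON=python2.7
gcloud app deploy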
I have a Node.js application which is automatically deployed to Amazon Web Services through Codeship using the CodeDeploy AWS deployment system.
During the deployment process, my appspec.yml stops the currently running web application; once the deployment is complete, I want the web application to be started up again.
os: linux
files:
  - source: /
    destination: /var/www/app2
hooks:
  AfterInstall:
    - location: bash_scripts/stop_forever.sh
      runas: ec2-user
  ApplicationStart:
    - location: bash_scripts/start_forever.sh
      runas: ec2-user
However, I've not yet been able to get either of these scripts called successfully from the appspec.yml file during a deployment.
The current error I'm seeing in the AWS deployment agent log is:

Error Code: ScriptMissing
Script Name: /var/scripts/stop_forever.sh
Message: Script does not exist at specified location: /var/scripts/stop_forever.sh
Log Tail: LifecycleEvent - ApplicationStop
This seems to refer to an older version of the appspec.yml file which was attempting to run these scripts in a different location. Even though I've changed the contents of the appspec.yml file in the deployed package, this error message remains the same on each deploy.
In addition to the appspec.yml file listed above, I've also tried making the following changes:
Not listing a runas parameter for each hook
Referencing a script inside the deployed directory
Referencing a script outside the deployed directory
Having a version parameter initially set to 0.0
Unfortunately there is very little online in terms of appspec.yml troubleshooting, other than the AWS documentation.
What very obvious thing am I doing wrong?
The ApplicationStop hook is being called from the previously installed deployment before trying to run the current deployment's appspec.yml file.
In order to prevent this from happening you'll have to remove any previously installed deployment from the server.
1. Stop the CodeDeploy agent: sudo service codedeploy-agent stop
2. Clear all deployments under /opt/codedeploy-agent/deployment-root
3. Restart the CodeDeploy agent: sudo service codedeploy-agent start
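In shell form (the path comes from the step above; the wildcard is an assumption, so double-check it against your agent's deployment root before running):

sudo service codedeploy-agent stop
# removes all locally cached deployments so the stale ApplicationStop hook can't run
sudo rm -rf /opt/codedeploy-agent/deployment-root/*
sudo service codedeploy-agent start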
There is another way documented in the AWS developer forums, which I think is preferable.
Use the --ignore-application-stop-failures option with the CLI tool while doing the deployment; it worked perfectly for me.
Example taken from the forum:
aws deploy create-deployment --application-name APPLICATION --deployment-group-name GROUP --ignore-application-stop-failures --s3-location bundleType=tar,bucket=BUCKET,key=KEY --description "Ignore ApplicationStop failures due to broken script"
https://forums.aws.amazon.com/thread.jspa?threadID=166904