Run PHP script after deployment on AWS Elastic Beanstalk - magento

I've created a PHP script that generates a local.xml file for Magento with the required database settings and credentials. I need to run this after the application is deployed; however I cannot seem to figure out a way to do so. My understanding is that I need to create a .config file inside of a .ebextensions directory. Anyone have a solution?

Technically Josh is not correct. According to the documentation (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html#customize-containers-format-commands), for the commands section: "The commands are processed in alphabetical order by name, and they run before the application and web server are set up and the application version file is extracted."
The closest I am aware of is the container_commands section, for which: "The commands in container_commands are processed in alphabetical order by name. They run after the application and web server have been set up and the application version file has been extracted, but before the application version is deployed."
I don't know of a way to truly run a script post-deployment (which is why I was here looking for that answer).
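For comparison, a minimal container_commands sketch (the command name and script path here are illustrative placeholders, not from the question):

container_commands:
  01_generate_local_xml:
    command: "php scripts/generate_local_xml.php"
    leader_only: true

Per the docs quoted above, this runs after the version file is extracted but still before the version is deployed.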

Elastic Beanstalk will look for files under the /opt/elasticbeanstalk/hooks/appdeploy/post directory and run them after deployment.
So you can make use of this and do:
commands:
  create_post_dir:
    command: "mkdir /opt/elasticbeanstalk/hooks/appdeploy/post"
    ignoreErrors: true
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/job_after_deploy.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      cd /var/app/current
      # run your php script here

Yup, .ebextensions are what you are looking for. To see how to bundle the source, take a look at the sample applications. There is a PHP one you can look at as well.
For more info on .ebextensions, take a look at this page.
Here's an example of a custom command. This could go in a file called sample.config within the .ebextensions directory:
commands:
  success_command:
    command: echo "this will be run after launching"
Be careful if you copy and paste YAML; double-check the formatting. You can also use JSON, which follows a similar structure.
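For example, the sample.config above could be expressed in JSON roughly like this (a sketch of the equivalent document):

{
  "commands": {
    "success_command": {
      "command": "echo \"this will be run after launching\""
    }
  }
}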


Azure DevOps ThirdParty Tools for build / Deployment

pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY
environment variables:
  $OCCS_CODE_LOCATION: path to the location of all OCCS code
  $OCCS_ADMIN_URL: URL for the administration interface on the target Commerce Cloud instance
  $OCCS_APPLICATION_KEY: application key to use to log into the target Commerce Cloud administration interface
I want to use an Azure DevOps repository for CI/CD.
In the above code block, you can see I have specified the dcu and Python code in two tasks.
dcu is a third-party Node.js tool from Oracle that is needed to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, I want to invoke Python (or Node.js) to call a REST API that publishes the changes.
So where do I place those files, and how do I invoke them?
Update:
I set up a self-hosted agent pool and am able to access the system. I started executing basic bash code, but ran into two issues:
1) Git extracts files from the repository to _work/1/s; I'm not sure how that path is decided. How can I change that location?
2) I ran 'pwd' and am in the correct path, but the 'dcu' command fails. I tried npm and a few other commands and they fail too, yet things like mkdir and rmdir create and remove folders correctly in the desired path. When I run the 'dcu' command manually from a terminal on the system, it works fine as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1. Create an Azure Git repo to include the dcu zip file and your .py files. You can follow the steps in this thread to create an Azure Git repo and push local files to it.
2. Create an Azure build pipeline. Please check here to create a YAML pipeline; here is a good tutorial to get you started. To create a classic UI pipeline, choose "Use the classic editor" in the pipeline setup wizard, and choose "Empty job" to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3. Click "+" and search for the Extract files task to unzip the DCU zip file. Click the ellipsis (...) on the Destination folder field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4. Click "+" to add a PowerShell task, and run the script below to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), click the Variables tab to define them:
cd $(agent.builddirectory)   # the folder where the unzipped dcu files reside
npm install -g
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5. Add a Use Python version task to define a Python version for executing your .py file.
6. Add a Python script task to run your .py file. Click the ellipsis (...) on the Script path field to locate your publishDCUAuthoredChanges.py file (this .py file and the dcu zip file were pushed to the Azure Git repo in step 1 above).
You should now be able to run the script from the question above in your Azure DevOps pipeline.
Update:
_work/1/s is the default working folder for the agent; you cannot change it. Though there are ways to change the location where the source code is cloned from Git, the tasks' working directory still defaults to this folder.
However, you can change the working directory inside the tasks (see the sketch after this list), and there are predefined variables you can use to refer to locations on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
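For instance, a sketch of a YAML task that sets its working directory explicitly (the dcu invocation is abbreviated as a placeholder):

steps:
  - powershell: .\dcu.cmd --putAll $(OCCS_CODE_LOCATION)
    workingDirectory: $(Agent.BuildDirectory)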
The .sh scripts in the _temp folder are generated automatically by the agent and contain the scripts from the bash task.
For the above "dcu: command not found" error, you can try adding the dcu command's path to the system variables' Path in your local machine's environment variables. (A path set only in user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine.)
Or you can use the physical path to the dcu command in the bash task. For example, say dcu.cmd is at c:\dcu\dcu.cmd on the local machine; then in the bash task, use the script below to run the dcu command:
c:/dcu/dcu.cmd --putAll ...

Cannot run `source` in AWS Codebuild

I am using AWS CodeBuild along with Terraform for automated deployment of a Lambda-based service. I have a very simple buildspec.yml that accomplishes the following:
Get dependencies
Run Tests
Get AWS credentials and save to file (detailed below)
Source the creds file
Run Terraform
The step "source the creds file" is where I am having my difficulty. I have a simply bash one-liner that grabs the AWS container creds off of curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and then saves them to a file in the following format:
export AWS_ACCESS_KEY_ID=SOMEACCESSKEY
export AWS_SECRET_ACCESS_KEY=MYSECRETKEY
export AWS_SESSION_TOKEN=MYSESSIONTOKEN
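For context, a sketch of such a one-liner (assuming jq is available; the field names follow the ECS container credentials endpoint, and the output path is a placeholder):

curl -s "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI" | jq -r '
  "export AWS_ACCESS_KEY_ID=\(.AccessKeyId)",
  "export AWS_SECRET_ACCESS_KEY=\(.SecretAccessKey)",
  "export AWS_SESSION_TOKEN=\(.Token)"' > /path/to/creds_file.txt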
Of course, the obvious step is to simply source this file so that these variables are added to my environment for Terraform to use. However, when I run source /path/to/creds_file.txt, CodeBuild returns:
[Container] 2017/06/28 18:28:26 Running command source /path/to/creds_file.txt
/codebuild/output/tmp/script.sh: 4: /codebuild/output/tmp/script.sh: source: not found
I have tried to install source through apt, but then I get an error saying that source cannot be found (yes, I've run apt update etc.). I am using a standard Ubuntu image with the Python 2.7 environment for CodeBuild. What can I do to either get Terraform working credentials or source this credentials file in CodeBuild?
Thanks!
Try using `.` instead of `source`; `source` is not POSIX compliant (see ss64.com/bash/source.html).
CodeBuild now supports bash as your default shell. You just need to specify it in your buildspec.yml.
env:
  shell: bash
Reference: https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax
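Putting it together, a hedged buildspec.yml sketch (the creds-file path and the terraform step are placeholders):

version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - source /path/to/creds_file.txt
      - terraform apply -auto-approve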
The AWS CodeBuild images ship with a POSIX compliant shell. You can see what's inside the images here: https://github.com/aws/aws-codebuild-docker-images.
If you're using specific shell features (such as source), it is best to wrap your commands in a script file with a shebang specifying the shell you'd like the commands to execute with, and then execute this script from buildspec.yml.
build-script.sh
#!/bin/bash
<commands>
...
buildspec.yml (snippet)
build:
  commands:
    - path/to/script/build-script.sh
I had a similar issue. I solved it by calling the script directly via /bin/bash <script>.sh
I don't have enough reputation to comment, so here is an extension of Jeffrey's answer, which is spot on.
If your filename starts with a dot (.), the following will fail:
. .filename
You will need to qualify the filename with a directory name, like:
. ./.filename

how to create Heroku procfile for windows?

I'm a newbie trying to make a Django app, but unfortunately my OS is Windows.
The Heroku docs are written for Linux, so I can't find sufficient information for app development on Windows 7.
First, how can I make a Procfile using the Windows cmd shell?
Is there any command-language translation doc (Linux -> Windows)?
Regarding creating a text file in the cmd shell:
echo web: run this thing >Procfile
This will create a Procfile with web: run this thing inside (obviously).
You can also use any text editor; Notepad will fit perfectly.
And one thing that wasn't obvious to me, and may therefore be helpful to someone else:
"Procfile should be a text file" is a bit misleading. Do NOT save the Procfile as Procfile.txt, or it will not be recognised. Just leave it plain and simple: Procfile, without any file extension.
When you're using Windows for development and your Procfile contains something system dependent (for example $JAVA_OPTS), then besides the Procfile with Linux syntax (with, for example, $JAVA_OPTS) for Heroku, you need a Procfile.windows with Windows syntax (where you write, for example, %JAVA_OPTS%), and you point to it when working with Heroku locally: heroku local web -f Procfile.windows
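For illustration, a hypothetical pair for a Java app (the jar name is a placeholder):

Procfile (Linux syntax, used by Heroku):
web: java $JAVA_OPTS -jar target/app.jar

Procfile.windows (Windows syntax, used by heroku local web -f Procfile.windows):
web: java %JAVA_OPTS% -jar target/app.jar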
A Procfile should be a text file, called Procfile, sitting in the root directory of your app.
It's the same for Windows or Linux or OS X.
It should specify the command Heroku should use to start your app, so it's not really about Linux or Windows.
So to answer your question: use a text editor. Any text editor.
Just create the file with the name Procfile. If your editor is intelligent enough (as mine is), it will recognise files like Procfile and show a Heroku icon.
gunicorn doesn't work on Windows, so you'll want a Procfile.windows that locally hosts your app in a way that doesn't require gunicorn (i.e. the way you would normally run it):
web: ~what you would normally use to start your app~
web: gunicorn app_name.wsgi
Write your own application name instead of app_name, and save the file without any extension, just Procfile.
A file named Procfile is required in the root of your Heroku project. The following is a basic example of the content for a Django project:
web: gunicorn your_app_name.wsgi --log-file -
Here are the full docs from Heroku.

How to setup Pydevd remote debugging with Heroku

According to this answer, I am required to copy the pycharm-debug.egg file to my server. How do I accomplish this with a Heroku app so that I can remotely debug it using PyCharm?
Heroku doesn't expose the file system it uses for running web dynos to users, which means you can't copy the file to the server via SSH.
So you can do this in one of two ways:
1. The best way is to add this egg file to your requirements, so that during deployment it gets installed into the environment and is therefore automatically added to the Python path. But this would require the package to be available on a pip index.
2. Or, commit this file into your code base, so that when you deploy, the file reaches the server.
Also, in the settings file of your project (if using Django), add this file to the Python path:
import sys
sys.path.append('relative/path/to/file')
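Once the egg is importable, a minimal sketch of starting the remote debugger (the host and port here are placeholders for wherever your PyCharm debug server is listening):

import pydevd
# Connect back to the PyCharm remote debug server; suspend=False keeps the app running.
pydevd.settrace('your-debug-host.example.com', port=5678,
                stdoutToServer=True, stderrToServer=True, suspend=False)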

What's the simplest way to run a cron job on a standalone Ruby gem?

I have a gem that packages one .rb file containing my class and associated methods as well as a corresponding .bin file.
Locally, I can run everything just fine like so:
command_to_bin input_file output_file
I don't want to run this manually every day so I'm considering using cron on a server, but I'm a little unsure how to proceed.
Do I throw everything into a directory (.gem file, input file, output file) and just point the above cron command at the directory?
I've looked at this and sort of understand what's going on. I guess what confuses me the most is that when I look at all the web hosting providers, they mention domains and applications, but I just want to know how to have the standalone script run by itself, without it being built into a web application or associated with a domain.
Check out the whenever gem. It's a wonderful gem that abstracts away all the nastiness of cron. Just include the command as you have written it above and it should be fine.
You don't need to install Rails. After you run wheneverize . in the project directory and set the schedule in your schedule.rb file, you need to run whenever --update-crontab to write everything to the system crontab. Otherwise your cron jobs never get converted to Unix cron.
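For illustration, a minimal config/schedule.rb sketch for the whenever gem (the schedule time is arbitrary; the command is the one from the question):

# config/schedule.rb -- read by `whenever --update-crontab`
every 1.day, at: '4:30 am' do
  # Run the gem's executable with its input and output files.
  command "command_to_bin input_file output_file"
end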
