Getting started with chef, and running composer install on deploy - laravel

We're looking to deploy a few Laravel 4 based PHP apps on Amazon with OpsWorks; this requires a few things:
Grab code from git
Download composer.phar from getcomposer.org
Run php composer.phar install
Change permissions on a few specific folders
I'm completely fresh when it comes to Chef, so I'm initially looking for a place to get to grips with the basics of Chef, and then for how to achieve the tasks above. Would appreciate any pointers.

I'm no Chef guru (I usually use Puppet) but try the following:
Grab code from git
For a basic checkout you could simply run the relevant git command from a recipe (the examples below show the general pattern); a small sketch using Chef's git resource follows the deploy_revision example below.
If you want something more sophisticated see http://docs.opscode.com/resource_deploy.html
deploy_revision "/path/to/application" do
  repo 'ssh://name-of-git-repo/repos/repo.git'
  migrate false
  purge_before_symlink %w{one two folder/three}
  create_dirs_before_symlink []
  symlinks(
    "one" => "one",
    "two" => "two",
    "three" => "folder/three"
  )
  before_restart do
    # some Ruby code
  end
  notifies :restart, "service[foo]"
  notifies :restart, "service[bar]"
end
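If all you need is a plain checkout, a minimal sketch using Chef's built-in git resource would also do (the path, repository URL, and branch here are placeholders, not taken from the question):
git "/path/to/application" do
  repository "ssh://name-of-git-repo/repos/repo.git"
  revision "master"
  action :sync
end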
Download composer.phar from getcomposer.org
I would execute a wget.
I lifted some code from here: http://cookingclouds.com/2012/06/23/chef-simple-cookbook-example/
It's basically just doing a wget in a specific folder, extracting the contents of the tar & updating some permissions on the new files. It only does this if the folder doesn't already exist.
# Run a bash shell - download and extract composer
bash "install_composer" do
  user "root"
  cwd "/folder/to/extract/to"
  code <<-EOH
    wget http://getcomposer.org/composer.tar.gz
    tar -xzf composer.tar.gz
    chown -R user:group /folder/to/extract/to
  EOH
  not_if "test -d /folder/to/extract/to"
end
Run php composer.phar install
http://docs.opscode.com/resource_execute.html
execute "composer install" do
command "php composer.phar install && touch /var/log/.php_composer_installed"
creates "/var/log/.php_composer_installed"
action :run
end
This will only run once; if you remove the "creates" guard it will run on every Chef run instead.
Change permissions on a few specific folders
http://docs.opscode.com/resource.html
directory "/tmp/folder" do
owner "root"
group "root"
mode 0755
action :create
end
If the directory already exists, nothing will happen. If the directory was changed in any way, the resource is marked as updated.
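For the Laravel 4 apps from the question specifically, the writable folders are typically app/storage and its subdirectories, so a hedged sketch along these lines may be closer to what you need (the project path and web server user are assumptions):
%w{cache logs meta sessions views}.each do |sub|
  directory "/path/to/application/app/storage/#{sub}" do
    owner "www-data"   # assumed web server user
    group "www-data"
    mode 0775
    recursive true     # also creates app/storage itself if it is missing
    action :create
  end
end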
Finally
I find the search handy; browsing the Chef site can be hopeless (too much stuff to dig through). http://docs.opscode.com/search.html

I would go pretty much with Drew Khoury's answer, with one change. To download composer.phar, you can use the remote_file resource instead of doing a wget in a bash script.

As composer.phar is already an executable, you could simply put it in a dir on your $PATH:
remote_file '/usr/bin/composer' do
  source 'http://getcomposer.org/composer.phar'
  mode '0755'
  action :create_if_missing
end
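With composer on the PATH like this, the install step from the first answer could then be reduced to something along these lines (the project path is an assumption):
execute "composer install" do
  cwd "/path/to/application"             # assumed directory containing composer.json
  command "composer install --no-dev"
  creates "/path/to/application/vendor"  # skip once dependencies are already installed
  action :run
end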

You can configure your git post-receive hook to do something like this:
#!/bin/sh
GIT_WORK_TREE=/path/to/your/site git checkout -f
cd /path/to/your/site
curl -sS https://getcomposer.org/installer | php
php composer.phar install
# do your stuff here
And make sure to give executable permissions to the post-receive file.

Or you can just commit your vendor directory. We have a couple of projects running on Laravel 4 and OpsWorks.

I'm in the same position with Laravel and OpsWorks. I was looking for a solution but then...
What happens if someone injects a security flaw into one of the sub modules? Does that mean we trust X number of external code-bases to be 100% secure all the time?
Either there is some fundamental flaw in my understanding or running composer at all on a production server is about the most serious WTF there is.
I've now made sure that the code in my project repo will be deployed 100% as is. No downloading external modules on the production server ever.

Related

Set executable permission on script installed with Homebrew

I wrote my first tap, so I'm still not sure how it works. I wrote this small formula:
class Konversation < Formula
  desc "Konversation is a tool to generate rich and diversified responses to the user of a voice application."
  homepage "https://github.com/rewe-digital-incubator/Konversation/"
  url "https://github.com/rewe-digital-incubator/Konversation/releases/download/1.0.0/konversation-cli.jar"
  sha256 "6123d126278faae2419f5de00411a1b67ae57e0cf2265a5d484ed6f9786baaca"

  def install
    prefix.install "#{buildpath}/konversation-cli.jar"
    File.write("#{buildpath}/konversation", "java -jar #{prefix}/konversation-cli.jar $#")
    bin.install "#{buildpath}/konversation"
    system "chmod", "+x", "#{bin}/konversation"
  end
end
However I cannot run my tool since the "konversation" executable has no x permission. I tried to fix that with a system chmod, however I see that my x flag is removed after the installation by brew as some kind of cleanup:
==> Cleaning
Fixing /home/linuxbrew/.linuxbrew/opt/konversation/bin/konversation permissions from 777 to 444
How can I set the file permissions correctly?
Please note that I don't want to host the shell script itself somewhere, since I see no advantage in packaging the shell script and the jar file in another zip file for distribution.
If you want to try it yourself try this command:
brew install rekire/packages/konversation
Shell scripts need to have a shebang line, otherwise the postinstall cleaner will set its permissions as though it were not an executable. In this specific case, I suggest:
use bin.write_jar_script instead -- this will set up the correct environment for JAR scripts
install .jars to libexec instead of prefix -- to avoid polluting the prefix with unnecessary files.
Example formula from Homebrew/homebrew-core
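As a rough sketch of how the question's formula might look with both suggestions applied (the libexec and write_jar_script lines are the changes; the metadata is copied from the question, and a Java dependency such as openjdk may still be needed):
class Konversation < Formula
  desc "Konversation is a tool to generate rich and diversified responses to the user of a voice application."
  homepage "https://github.com/rewe-digital-incubator/Konversation/"
  url "https://github.com/rewe-digital-incubator/Konversation/releases/download/1.0.0/konversation-cli.jar"
  sha256 "6123d126278faae2419f5de00411a1b67ae57e0cf2265a5d484ed6f9786baaca"

  def install
    # keep the jar out of the prefix root
    libexec.install "konversation-cli.jar"
    # writes bin/konversation with a proper shebang, so the cleanup step keeps it executable
    bin.write_jar_script libexec/"konversation-cli.jar", "konversation"
  end
end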

Can I run composer update in root folder and update all subfolder composer.json files?

I have a WordPress app that has multiple plugins which are all built with OOP principles using Composer to manage autoloading in each plugin.
Now I'm wondering is it possible to just run
composer install --no-dev
in the project root, and somehow trigger running all the composer installs in the plugins so that the classmap is updated.
This is important when I want to deploy by pulling from a repository and performing the build using some kind of continuous integration.
Or do I need to manually specify in my deploy/build script to perform installation separately for each plugin?
I think that WordPress still has no real Composer integration, which looks like a shame to everyone (including me) not involved in WordPress development, but the project might have valid reasons, or it is simply no easy task (probably both).
That being said: You cannot run Composer in a directory level "one up" and mass-update the subdirectories. But it should only be a task of a simple shell script to iterate over all found directories, and if a composer.json is found inside, to update (or install) the dependencies.
My suggestion (including a random number of bugs) would be:
#!/bin/bash
for dir in {*,subdirA/*,subdirB/*}
do
    if [ -d "$dir" ]
    then
        pushd "$dir" > /dev/null
        if [ -f composer.json ]
        then
            composer install
        fi
        popd > /dev/null
    fi
done

composer self-update on Openshift not giving permission?

When I run composer self-update on OpenShift I get the error below. I searched for a while but couldn't properly understand solutions like this one - How can I composer update on OpenShift?
[Composer\Downloader\FilesystemException]
Filesystem exception:
Composer update failed: the "/var/lib/openshift/.cartridge_repository/redhat-php/0.0.28/usr/bin/composer.phar" file could not be written
The answer in the question you linked does a pretty good job of explaining it if you follow the links, but I'm happy to try and explain it further for you.
Openshift supports action hooks which are scripts that are triggered to run at the appropriate git phase that you link them to.
To use the solution they suggest, you need to:
First, create a directory called .openshift/action_hooks inside the root directory of your project (e.g. mkdir -p .openshift/action_hooks); placing it in the root means it maps to myproject/.openshift/action_hooks.
Second, create a bash script called post_deploy inside the action_hooks directory that contains the following:
#!/bin/bash
export MY_PHPCOMPOSER="$OPENSHIFT_DATA_DIR/composer.phar"
# if composer does not exist yet, download it into the data dir
if [ ! -f "$MY_PHPCOMPOSER" ]; then
    cd "$OPENSHIFT_DATA_DIR"
    echo "Downloading composer..."
    php -r "readfile('https://getcomposer.org/installer');" | php
fi
php "$MY_PHPCOMPOSER" -n -q self-update
cd "$OPENSHIFT_REPO_DIR"
# install the project dependencies
php -dmemory_limit=1G "$MY_PHPCOMPOSER" install
You should now have a script that maps like this in your project; myproject/.openshift/action_hooks/post_deploy
Now every time you push to your repo in openshift it will execute that script and effectively run composer install.
If you have any trouble then be sure to check out the comments on that answer for a local permissions change you may need to make.
If you get stuck along the way then please comment or ask a new question and we can help you work through it.

PHPUnit - How to add vendor/bin into path?

I installed PHPUnit with Composer. Every time I run it, I have to call vendor/bin/phpunit. How can I put vendor/bin on the path, so that next time I only need to call phpunit to run it?
You could add the current directory into your path.
For Linux/Mac, add the following to your .bash_profile; Windows is similar - alter the line below and add it to your PATH.
# include the current `vendor/bin` folder (Notice the `.` - This means current directory)
PATH="./vendor/bin:$PATH"
Remember to restart your terminal or re-source your .bash_profile.
Now you should be able to run phpunit; the shell will look for it within ./vendor/bin and, if it exists there, execute that copy.
If you are running on Homestead (or some other Linux/Ubuntu system):
alias p='vendor/bin/phpunit'
Then you can just type p and it will run your tests
If you are using Homestead - you can add this alias to your aliases file so it is always there.
Another easy solution, from the composer documentation, is to set your bin-dir setting to ./. This will install the binary in your root directory.
"config": {
"bin-dir": "./"
}
Then you can just run ./phpunit. I typically set bin-dir to bin, then type bin/phpunit. It's short enough for me.
If you already have phpunit installed, you will need to delete the vendor/phpunit directory and rerun composer install before composer will move the binary.

Gemfile git branch for Beanstalk unable to bundle install

In my Gemfile I have
gem 'slim', :git => 'git://github.com/brennancheung/slim.git', :branch => 'angularjs_support'
which is a branch of the slim gem required for me to run AngularJS correctly with my views. I've pushed my code to my beanstalk application but am unable to bundle install according to the logs shown below...
sh: git: command not found
Git error: command `git clone 'git://github.com/brennancheung/slim.git'
"/usr/share/ruby/1.9/gems/1.9.1/cache/bundler/git/slim-700ed452e752ccb6baf9de9d0a46fbded8bb2da5"
--bare --no-hardlinks` in directory /var/app/ondeck has failed.
I'm new to Beanstalk and have no idea how to fix this. Any help on how to get bundle to install successfully would be greatly appreciated. Thanks.
Since git is not installed by default on the EC2 instance, you will have to find a workaround solution:
a. Install git on instance with configuration file and command.
It is the most obvious way to solve the problem, although it may not be the most efficient.
b. Clone slim repository into your project, so it will be deployed together.
Seems that slim is not being actively developed lately, so having the copy in your project might not be a bad idea. It protects you from github.com being down, yet you will have extra files to carry around.
c. Use configuration file and commands to pull the data from github.com directly with http.
Too many files to work with, and still a dependency on a third-party service.
d. Use a combination of above. Clone slim repository and copy files to S3. Use configuration and commands to copy files from S3 to your instance.
It seems like the most elegant and efficient way to solve the problem.
It might look something like:
$ cat .ebextensions/myapp.config
commands:
  10-copy-slim-from-s3:
    command: "aws s3 cp s3://mybucket/slim slim --recursive"
