Git hook post-receive not running (Ruby script)

I'm trying to get this script working: https://github.com/zmilojko/git-trello/
In .git/hooks/post-receive (with proper values of course):
#!/usr/bin/env ruby
require 'git-trello'
GitHook.new(
  :api_key             => 'API_KEY',
  :oauth_token         => 'OAUTH_TOKEN',
  :board_id            => 'TRELLO_BOARD_ID',
  :list_id_in_progress => 'LIST_ID_IN_PROGRESS',
  :list_id_done        => 'LIST_ID_IN_DONE',
  :commit_url_prefix   => 'https://github.com/zmilojko/git-trello/commits/'
).post_receive
The file is executable. If I run it from bash ($ .git/hooks/post-receive), it seems to work mostly OK (apart from the fact that it doesn't receive git's input on stdin).
When doing git push, the script is not run, and no error whatsoever is thrown. Also, the remote URL is of the form git@github.com:...
I'm using rbenv, although I don't see how that could be a problem, could it? If it were, at least an error should be shown, like the ruby command not being found or something.

post-receive is a server-side hook. I assume you are expecting this to run on your local machine when you push from it to GitHub. It doesn't work like that.
Here is a link to all of the server- and client-side hooks for Git:
http://git-scm.com/book/ch7-3.html#Server-Side-Hooks
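If you want something to run on your machine when you push, the closest client-side equivalent is a pre-push hook. A minimal sketch (purely illustrative, not part of git-trello); git feeds pre-push one line per ref being pushed on stdin:
#!/usr/bin/env ruby
# .git/hooks/pre-push runs locally on `git push`, before the push happens;
# git passes "<local ref> <local sha> <remote ref> <remote sha>" lines on stdin
STDIN.each_line do |line|
  local_ref, local_sha, remote_ref, _remote_sha = line.split
  puts "pushing #{local_ref} (#{local_sha[0, 8]}) -> #{remote_ref}"
end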


Puppet - unable to execute ONLY ONCE ordered chain of Exec commands after notification

TLDR:
I can't configure an ordered chain of Puppet "Exec" commands to run ONLY ONCE.
Details:
I want to use Vagrant and Puppet modules to set up a VM with Redmine installed and some sample data loaded into it.
I'm using https://forge.puppetlabs.com/johanek/redmine and it works great - Redmine is installed and it works.
My goal:
Now I want to load sample data into Redmine using REST API:
Create 1 test project
Import 2 issues into this project
I want to run 2 simple Execs, one after another and ONLY ONCE, but I can't achieve this, hence the question.
My current effort:
I've tried subscribing to one of the last steps of the Redmine installation
subscribe => [Exec['rails_migrations']]
and then importing the data, but the first step "create-project1" always notifies the second step "import-issues", so it creates duplicated data.
And if I run vagrant provision a few times, "import-issues" creates duplicates of the issues.
Here is my code:
exec {'create-project1':
  subscribe => [Exec['rails_migrations']],
  path      => ['/usr/bin', '/usr/sbin', '/bin'],
  creates   => "$redmine_install_dir/.data_loaded",
  command   => "curl WHICH_CREATES_PROJECT && touch $redmine_install_dir/.data_loaded",
  notify    => [Exec['import-issues']],
} ->
exec {'import-issues':
  path        => ['/usr/bin', '/usr/sbin', '/bin'],
  command     => "curl WHICH_IMPORTS_ISSUES",
  refreshonly => true,
}
Question:
How to configure those Exec commands to run in chain and ONLY ONCE?
I'm also thinking about extending this chain to 5 commands in the near future, so keep that in mind.
You were almost there with 'ONLY ONCE': Puppet has an onlyif parameter that you can include in your exec block to test whether a file already exists or not.
You could then do something like
exec {'create-project1':
  subscribe => [Exec['rails_migrations']],
  path      => ['/usr/bin', '/usr/sbin', '/bin'],
  onlyif    => "test ! -f $redmine_install_dir/.data_loaded",
  command   => "curl WHICH_CREATES_PROJECT && touch $redmine_install_dir/.data_loaded",
  notify    => [Exec['import-issues']],
}
which tests for the existence of $redmine_install_dir/.data_loaded. You should be able to play around with that a bit to achieve what you want.
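Since you mention growing the chain to 5 commands, here is a hedged sketch of one way to keep a longer chain idempotent: give each step its own marker file via creates (the marker file names here are made up), so re-running vagrant provision skips any step that already completed.
exec { 'create-project1':
  subscribe => [Exec['rails_migrations']],
  path      => ['/usr/bin', '/usr/sbin', '/bin'],
  command   => "curl WHICH_CREATES_PROJECT && touch $redmine_install_dir/.project1_created",
  creates   => "$redmine_install_dir/.project1_created",
} ->
exec { 'import-issues':
  path    => ['/usr/bin', '/usr/sbin', '/bin'],
  command => "curl WHICH_IMPORTS_ISSUES && touch $redmine_install_dir/.issues_imported",
  creates => "$redmine_install_dir/.issues_imported",
}
# ...further steps follow the same pattern, chained with ->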

Run command after gem install from gem root folder

I'm deploying a Sinatra app as a gem. I have a command that starts the app as a service.
We are using chef to manage our deployments.
How can I run the command to start the app service but only after it's fully installed (including run-time dependencies)?
I've searched for a way to run a post-install script, but I haven't found anything useful or concrete that doesn't involve a complicated extconf.rb workaround.
I would prefer not to use an execute resource if I can help it.
EDIT: I tried what was suggested, but it breaks things in a way that causes Berkshelf not to work in our pipeline.
Here's the code I'm using:
execute "run-service:post_install" do
cwd (f = File.expand_path(__FILE__).split('/')).shift(f.length - 3).join('\\')
timeout 5
command "bundle && rake service:post_install"
# action :nothing
# subscribes :run, "gem_package[gem_name]" , :delayed
end
It doesn't matter whether I uncomment the last two lines or not; it just breaks things, but if I take the whole thing out, it stops breaking. Obviously I'm doing something wrong, but I'm not sure what.
EDIT:
It's the command itself that breaks it; even when I change the command to ls and the action to :run, it breaks.
EDIT: After changing the command path around a bit, I managed to get it to spit out a usable error: it was trying to run the command from the Chef cookbooks path, so I've (hopefully) forced it to use the correct path.
Why do you not want to use an execute resource? That is exactly what it is for, running commands from Chef. Chef obeys the order of the resources, so if you have a gem_package followed by an execute they will run in that order.
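A minimal sketch of that ordering (the gem and command names are hypothetical):
# Chef converges resources in declaration order, so the execute below
# only runs after the gem and its dependencies are installed
gem_package 'my_sinatra_app' do
  action :install
end

execute 'start app service' do
  command 'my_sinatra_app_ctl start' # hypothetical start command
end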
So, in the end I decided to try using the service resource, because it allows you to set start and stop commands.
The code I used is:
service service_name do
  init_command "#{%x(gem env gemdir).strip.gsub('/', '\\')}\\gems\\gem_name-#{installing_version}"
  start_command "rake service:start"
  stop_command "rake service:stop"
  reload_command "rake service:reload"
  restart_command "rake service:restart"
  supports start: true, restart: true, reload: true
  action [:enable, :start]
end
I'm still having problems but this is of a different sort.

Chef Solo - Role Data in Ruby DSL or JSON?

I am playing with Roles with Chef Solo (11.4.4 and 11.6.0) and am a bit confused.
For Chef Solo runs, should roles be written in Ruby or JSON?
As per the official docs: About Roles, Roles can be stored as domain-specific Ruby (DSL) files or JSON data.
NOTE: chef-client uses Ruby for Roles, when these files are uploaded to Chef Server, they are converted to JSON. Whenever chef-repo is refreshed, the contents of all domain-specific Ruby files are re-compiled to JSON and re-uploaded to the server.
My question is, if the requirement is to run Chef in solo mode without a server and roles are needed, should the roles be written in Ruby or JSON (we don't have a server to convert Ruby to JSON)?
My guess is the latter. Does anyone know the correct answer?
BTW: I've seen people mixing Ruby and JSON in role files...
What is the Ruby DSL equivalent for rbenv.rb below?
Example: run the rbenv + ruby-build cookbooks to install rbenv on Ubuntu.
rbenv.json
{
  "run_list": ["role[rbenv]"]
}
roles/rbenv.rb
name "rbenv"
description "rbenv + ruby-build"
run_list(
"recipe[rbenv]",
"recipe[ruby_build]"
)
override_attributes(
:rbenv => {
:git_repository => "https://github.com/sstephenson/rbenv.git"
},
:ruby_build => {
:git_repository => "https://github.com/sstephenson/ruby-build.git"
}
)
The Chef Solo run chef-solo -c solo.rb -j rbenv.json -l debug works as expected. This is to achieve cloning via HTTPS, because that is easier behind the firewall.
However, I also tried a Ruby DSL version of the role, rbenv.rb, like the one below:
name "rbenv"
description "rbenv + ruby-build"
run_list "recipe[rbenv]", "recipe[ruby_build]"
# default_attributes ":rbenv" => {":install_prefix" => "/opt"}
override_attributes ":rbenv" => {":git_repository" => "https://github.com/sstephenson/rbenv.git"}, ":ruby_build" => {":git_repository" => "https://github.com/sstephenson/ruby-build.git"}
It didn't seem to work: it still used the default attributes (cloning via the git:// URL instead of HTTPS).
I am new to Ruby, so most likely I made some mistakes in the DSL code. Please help ;-)
* git[/opt/rbenv] action sync
[2013-09-03T03:44:53+00:00] INFO: Processing git[/opt/rbenv] action sync (rbenv::default line 91)
[2013-09-03T03:44:53+00:00] DEBUG: git[/opt/rbenv] finding current git revision
[2013-09-03T03:44:53+00:00] DEBUG: git[/opt/rbenv] resolving remote reference
================================================================================
Error executing action `sync` on resource 'git[/opt/rbenv]'
================================================================================
Mixlib::ShellOut::ShellCommandFailed
------------------------------------
Expected process to exit with [0], but received '128'
---- Begin output of git ls-remote "git://github.com/sstephenson/rbenv.git" master* ----
STDOUT:
STDERR: fatal: unable to connect to github.com:
github.com[0: 192.30.252.128]: errno=Connection timed out
---- End output of git ls-remote "git://github.com/sstephenson/rbenv.git" master* ----
Ran git ls-remote "git://github.com/sstephenson/rbenv.git" master* returned 128
I prefer to use the JSON format wherever possible, for one simple reason: it's easy to parse and validate with a script. Here are three things that you can do if all your Chef data is in JSON format:
Easily perform a syntax check in a git pre-commit hook, something that's much harder to do when the file is in the Ruby DSL format.
Validate the keys and values in a data bag entry. This can be useful to check that you are not going to deploy invalid or nonsensical data bag entries to production.
Compare (with a little extra work - key ordering in a dictionary needs to be taken into account) the value of an object on a server with what's in git. The --format json argument is useful here.
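For the first point, a minimal sketch of the kind of check a pre-commit hook could run (the roles/ path is illustrative):
#!/usr/bin/env ruby
# abort the commit if any role file fails to parse as JSON
require 'json'

Dir.glob('roles/*.json').each do |path|
  begin
    JSON.parse(File.read(path))
  rescue JSON::ParserError => e
    abort "#{path}: invalid JSON (#{e.message})"
  end
end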

Access current git commit number from within Heroku app

I know the slug compiler removes the .git directory when creating a heroku slug, but is there any way to configure Heroku so that I can access the currently running git commit number from within my scripts?
I'd like to be able to have a small link on my sinatra app (run within Heroku) which says "running version e72fb274a0" (or something similar). How can I retrieve this, or force the slug compiler to add it to an environment variable?
PROGRESS:
I reckon the best way to do this is to make a custom buildpack which writes the git commit version number to the heroku slug before the .git directory is deleted.
I've tried to do this (see my fork of the ruby buildpack) but the line I've added – line 23 – doesn't seem to be doing the job. Heroku sees & uses the new buildpack, but doesn't seem to write the file to the slug.
Anyone have any idea why my custom buildpack isn't working as expected?
Thanks,
JP
A couple of options...
SOURCE_VERSION environment variable (build-time)
Since 1st April 2015, there's a SOURCE_VERSION environment variable available to builds running on Heroku. For git-pushed builds, this is the git commit SHA-1 of the source being built:
https://devcenter.heroku.com/changelog-items/630
(thanks to @srtech for pointing that out!)
An example of me using that variable in a build: if you look at the HTML served by the deployed app, you'll see the commit id coming through in an HTML comment near the very bottom: https://gu-who.herokuapp.com/
/etc/heroku/dyno metadata file (run-time)
Heroku have beta functionality to write out a /etc/heroku/dyno metadata file onto your running dyno. If you email support you can probably get added to the beta. Here's a place where Heroku themselves are using it:
https://github.com/heroku/fix/blob/6c8ab7a/lib/heroku_dyno_metadata.rb
The contents look like this:
{
  "dyno": {
    "physical_id": "161bfad9-9e83-40b7-b385-78305db2f168",
    "size": 1,
    "name": "run.7145"
  },
  "app": {
    "id": null
  },
  "release": {
    "id": 50,
    "commit": "2c3a0b24069af49b3de35b8e8c26765c1dba9ff0",
    "description": null
  }
}
...so release.commit is the field you're after. I used to use this method until the SOURCE_VERSION variable became available.
In 2018 this is what you want:
https://devcenter.heroku.com/articles/dyno-metadata
heroku labs:enable runtime-dyno-metadata -a <app name>
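Once the lab is enabled, the commit is exposed to the running dyno as environment variables. A minimal Sinatra sketch, assuming the HEROKU_SLUG_COMMIT variable the feature sets:
require 'sinatra'

get '/version' do
  # HEROKU_SLUG_COMMIT holds the SHA of the deployed commit
  "running version #{ENV['HEROKU_SLUG_COMMIT'].to_s[0, 10]}"
end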
You can run a script before deploying that stores this information (maybe in a YAML file).
In Ruby, backticks run a shell command and return its output, e.g. a = `ls` (note that the character is a backtick, not an apostrophe).
The a variable will hold the result of that bash command, so you can do
git = `git log`
and then find the information you want and store it, so you will be able to retrieve it later.
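A minimal sketch of that idea (the file path is made up):
# before deploying: capture the current commit and write it where the app can read it
require 'yaml'

sha = `git rev-parse HEAD`.strip
File.write('config/revision.yml', { 'commit' => sha }.to_yaml)

# later, at runtime:
revision = YAML.load_file('config/revision.yml')['commit']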
Did this help?

How to add a user to Mongo from Rails (console or rake)?

I'd like to do the equivalent of:
mongo --port 27xxx
use admin
db.addUser("venkman", "StayPuft!1")
But I'd like to do it as part of my rake db:seed. It seems like something you'd want to do with a fresh database. I thought that some variant of this:
Mongoid.database.eval('use admin; db.addUser("venkman", "StayPuft!1")' )
would do the trick, but I'm not having much luck with it:
Mongo::OperationFailure: Database command '$eval' failed: (errmsg:
'compile failed: JS Error: SyntaxError: missing ; before statement nofile_a:0'; ok: '0.0').
from /home/user/.rvm/gems/ruby-1.9.2-p290/gems/mongo-1.5.2/lib/mongo/db.rb:520:in `command'
from /home/user/.rvm/gems/ruby-1.9.2-p290/gems/mongo-1.5.2/lib/mongo/db.rb:407:in `eval'
I can get a function like this to work:
irb(main):023:0> Mongoid.database.eval(' function() { return 3+3; }' )
=> 6.0
So it seems that database.eval is not the right thing to use for scripting db admin tasks. I'm reasonably sure this issue isn't related to Mongoid, judging from the stack trace. Is there something I can use to script a user creation as part of my rake db:seed?
The eval fails because the JavaScript code is executed in the context of the database server, not the client shell. I am not totally sure, but I don't think you can do what you want with Mongoid alone.
However, if you use the Ruby driver, the code will look like this:
# set host and port with appropriate values
admindb = Mongo::Connection.new(host, port)['admin']
admindb.add_user("venkman", "StayPuft!1")
Having said that, I would not recommend putting hard-coded passwords in your code where everybody can see them...
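To address that last caveat, a hedged sketch of a db/seeds.rb that reads the credentials from the environment instead (the variable names are made up):
# db/seeds.rb, using the same mongo 1.x driver API as above
require 'mongo'

admindb = Mongo::Connection.new(ENV.fetch('MONGO_HOST', 'localhost'),
                                Integer(ENV.fetch('MONGO_PORT', '27017')))['admin']
admindb.add_user(ENV.fetch('MONGO_ADMIN_USER'), ENV.fetch('MONGO_ADMIN_PASSWORD'))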
